AI & Development · January 28, 2026 · 25 min read

AI as a Chaos Amplifier: Why Going Faster Isn't Getting Better

AI accelerates everything—including your mistakes. A deep dive into why most teams use AI to crash faster, not to build better.

Every company has some level of chaos. It’s practically impossible to have everything tied down, documented, under control. And that’s okay—controlled chaos is part of how real businesses work.

AI has arrived promising to solve many problems. And it can. But most people use it for one thing only: to go faster.

We’re lazy by nature. We seek to reduce time, not improve processes. And that’s where the problem lies.

The F1 Engine in a Street Car

Imagine you take a Honda Civic and install a Formula 1 engine.

What happens?

The car will go much faster. But the chassis isn’t built for that speed. The brakes can’t stop that mass at 180 mph. The tires don’t have the necessary grip. The steering doesn’t respond with the required precision.

The most likely outcome: you crash. And you crash harder and faster than you would have with the original engine.

AI is that F1 engine.

If your development process is solid (good chassis, good brakes), AI gives you real speed. You reach the market earlier, with a better product.

But if your process has holes (vague requirements, improvised architecture, a team without judgment), AI gives you speed to crash. You arrive sooner… at disaster.

The Real Cost of Errors

Software development follows a predictable flow: Requirements → UX → Architecture → Development → Testing → Deployment.

The most important rule: The higher up the error, the greater the damage.

A bug in the code is fixed in hours. An error in requirements can throw months of work in the trash.

Here’s the math for a typical 1-month project:

Phase          Days
Requirements    1
UX              2
Design          3
Development    12
Testing        12
Total          30

Now, where does AI help?

  • AI accelerates development and testing (the last 24 days)
  • AI doesn’t accelerate requirements, UX, or design (the first 6 days)
  • If you fail in the first 6 days, it doesn’t matter how fast you go in the remaining 24
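Amdahl's law makes that ceiling concrete: if AI only accelerates the last 24 days, the first 6 set a hard floor on total duration no matter how fast the tools get. A quick sketch (phase lengths come from the table above; the 3x speedup is an illustrative assumption):

```python
def project_duration(upfront_days: float, build_days: float, ai_speedup: float) -> float:
    """Total duration when AI accelerates only the development/testing phases."""
    return upfront_days + build_days / ai_speedup

# 6 days of requirements/UX/design + 24 days of development/testing
baseline = project_duration(6, 24, 1.0)            # 30.0 days
with_ai = project_duration(6, 24, 3.0)             # 6 + 8 = 14.0 days
best_case = project_duration(6, 24, float("inf"))  # 6.0 days: the hard floor

print(baseline, with_ai, best_case)  # 30.0 14.0 6.0
```

Even an infinitely fast AI leaves you with the 6 days of thinking it cannot do for you, and if those 6 days were wrong, the floor is the least of your problems.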

The Damage by Phase

Error in…      Days Lost   % Thrown Away
Requirements   29          97%
UX             27          90%
Design         24          80%
Development    12-18       40-60%
Testing        1-5         3-15%

A requirements error costs 2.5x more than a development error. And AI does nothing to prevent requirements errors—it just helps you build the wrong thing faster.
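The percentages fall straight out of the phase durations: an error in a phase invalidates everything built after it. This sketch reproduces the table's lower bounds (the ranges for development and testing add partial rework on top, which this deliberately simple model ignores):

```python
# Phase lengths from the 1-month plan above.
phases = [("Requirements", 1), ("UX", 2), ("Design", 3),
          ("Development", 12), ("Testing", 12)]
total = sum(days for _, days in phases)  # 30

# An error in a phase invalidates everything built on top of it afterwards.
days_lost = {}
elapsed = 0
for name, days in phases:
    elapsed += days
    days_lost[name] = total - elapsed

for name, lost in days_lost.items():
    print(f"{name:<12} {lost:>2} days lost ({lost / total:.0%})")
```

Requirements comes out at 29 days lost (97%), development at 12 (40%): roughly the 2.5x gap the table shows.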

The 1-Month Project Trap

Here’s what typically happens when a team uses AI to build a SaaS product:

Week        What It Looks Like                     What It Actually Is
Week 1      “Wow! Login, dashboard, CRUD done!”    Happy path working
Week 2      “Amazing! Core features working!”      Still happy path
Week 3      “Just details left”                    Edge cases begin
Week 4      “Almost there…”                        Permissions broken, weird bugs
Month 2-3   “Why is this taking so long?”          Paying for happy path debt
Month 4+    “Let’s start over”                     Month 1 was a mirage

The cruel math:

  • 1 failure = 2x time
  • 2 failures = 3x time
  • 3 failures = project canceled

With clear requirements: AI gives you real speed. With bad requirements: AI gives you speed to throw it in the trash.
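The multiplier is just the cost of rebuilding: each major failure forces roughly one more full pass over the work. A minimal sketch of that model (the linear-rework assumption is mine, chosen to match the list above):

```python
def expected_duration(base_months: float, major_failures: int) -> float:
    """Each major requirements failure adds roughly one full rebuild."""
    return base_months * (1 + major_failures)

print(expected_duration(1, 0))  # 1.0 — on plan
print(expected_duration(1, 1))  # 2.0 — one rebuild
print(expected_duration(1, 2))  # 3.0 — two rebuilds; budgets rarely survive this
```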

The 17 Problems of Using AI Without Judgment

After working with dozens of teams, I’ve identified 17 distinct ways AI amplifies chaos:

1. Sycophancy

AI is optimized to maximize the probability that you continue the conversation and give positive feedback. It’s not optimized to tell uncomfortable truths.

If you insist the client “needs” functionality X, AI will find any justification to agree. It’s not validation—it’s a mirror telling you what you want to hear.

2. The False Understanding

We feel AI “understands” us. It doesn’t:

  • Its context window is limited
  • It builds its own model of the problem, which may differ from yours
  • Ambiguous terminology creates silent drift

You’ve been discussing “reseller” and “partner” for days. For AI, they’re the same thing—and it’s mixed everything up without warning you.

3. Hallucinations with Confidence

AI invents with complete certainty. APIs that don’t exist. Phantom libraries. Plausible but false method names.

If you don’t know the domain, you can’t detect the lie. And AI says it with such confidence it seems true.

4. Code You Don’t Understand

AI generates code that works but nobody fully understands.

2025-2026 data:

  • GitClear reports: repos with >30-40% AI contributions show +15-25% more code duplication
  • Stack Overflow: “AI can 10x developers… in creating tech debt”

5. The 90-90 Syndrome

AI takes you to 90% very quickly. But:

  • First 90% takes 10% of the time
  • Last 10% takes the other 90%
  • AI excels at the “happy path”—edge cases are the hard part

The client sees an impressive demo in week 1 and expects miracles. When real problems appear, it looks like you’re “not making progress.”

6. Skill Atrophy

As we delegate more:

  • Juniors don’t learn fundamentals
  • Seniors forget things they don’t practice
  • When AI fails, the team is lost

It’s the GPS effect: stop using a skill and you lose it.

7. Monoculture Risk

Everyone uses the same AI suggesting the same solutions:

  • Identical tech stacks everywhere
  • Same patterns, same architectures
  • A bug in the common suggestion affects everyone

Before, the diversity of human incompetence protected us—everyone wrote bad code differently. Now AI standardizes the errors.

8. Reviewer Fatigue

Generating code costs seconds. Reviewing costs minutes or hours.

A junior with Copilot generates more code in 1 hour than a senior can review in 1 day. The bottleneck is no longer writing code—it’s validating it.

After 300 lines of code that seem correct, the reviewer’s brain goes autopilot. Code review becomes security theater.

9. Supply Chain Attacks

AI recommends libraries whose names sound plausible but don’t exist. Attackers register those exact names on NPM/PyPI with malware inside, waiting for someone to install the hallucination.

Real attacks using this technique were documented in 2024-2025.
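One cheap defense is an allowlist gate: no AI-suggested dependency reaches `pip install` until a human has vetted the exact name. A minimal sketch (the allowlist contents, package names, and version pins are illustrative; a real setup would also pin hashes):

```python
# Packages the team has actually vetted; anything else is rejected by default.
ALLOWED = {"requests", "flask", "sqlalchemy"}

def check_requirements(lines: list[str]) -> list[str]:
    """Return the names of requested packages that are not on the allowlist."""
    rejected = []
    for line in lines:
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        # Take the bare package name, ignoring extras and version specifiers.
        name = line.split("[")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            name = name.split(sep)[0]
        if name.strip().lower() not in ALLOWED:
            rejected.append(name.strip())
    return rejected

# "reqeusts-toolbelt2" stands in for a plausible-sounding hallucinated package.
print(check_requirements(["requests==2.31.0", "reqeusts-toolbelt2"]))
# → ['reqeusts-toolbelt2']
```

The point isn’t the parsing; it’s the default. An allowlist fails closed, so a hallucinated name is blocked before anyone has to notice it’s fake.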

10. The Broken Bridge

A Senior is a Junior who broke production and learned to fix it.

If AI prevents juniors from “suffering” with basic problems, they never develop engineering intuition. They become prompters—they know how to ask for code but don’t know why it works.

In 5 years: brutal shortage of engineers who actually understand systems.

11. Invisible Technical Debt (The Trojan Horse)

Traditional tech debt is visible—ugly code, patches, TODO comments. You know there’s a problem because it looks problematic.

AI-generated code looks clean, well-indented, uses modern patterns. But the architecture may be broken. It’s a Trojan Horse: beautiful outside, disaster inside.

You can’t fix what you can’t see.

12-17. And More…

  • Infinite refactoring without business context
  • False productivity (more output, same outcome)
  • Loss of the “why” (no documented reasoning)
  • Delegation without verification
  • Amplification of problematic employees (incompetent looks competent, rule-breakers get justification)
  • Agentic coding’s 80% problem (agents get stuck on the last 20%)

The Business Impact

The technical cost is just the tip of the iceberg. The real damage shows up in the business metrics:

Metric     Good Project   Project with Requirement Failures
CAC        Base           +20-50%
LTV        Base           -30-60%
Runway     12 months      6-8 months
Dilution   Planned        +10-30% extra

A poorly documented requirement isn’t a technical problem. It’s a business survival problem.

17 Remedies

Before Touching AI

  1. Invest more time in requirements, not less—AI gives speed later, use it to think better first
  2. Validate before building—“Is this what you need?” costs 1 hour, not 29 days
  3. Define edge cases from the start—that’s where AI fails and you lose time

While Using AI

  4. Don’t trust blindly—verify everything; if you don’t understand it, don’t use it
  5. Keep context fresh—document decisions and the “why”
  6. Use AI to question—“What could go wrong?” not just “Build this”
  7. Ask it to disagree—“Tell me why this is a bad idea”

In the Team

  8. Rigorous code review—don’t lower your guard because “AI did it”
  9. Maintain skills—juniors learn fundamentals without AI first
  10. Define standards before AI—architecture, patterns, conventions

In the Process

  11. Validation checkpoints—weekly, not monthly
  12. Error budget—plan 1.5-2x the AI estimate
  13. AI error retrospectives—where did AI lead us astray?

Advanced (2025-2026)

  14. Mandatory random AI pair-programming—accept/reject line by line
  15. Prompt red team—try to make AI give bad justifications
  16. Anti-vanity metrics—measure time to production, not lines generated
  17. AI debt sprints—dedicated cleanup every 4-6 sprints

The Golden Rule

Use AI to go faster at what you already know how to do well.

Don’t use AI to:

  • Replace critical thinking
  • Validate ideas not verified with users
  • Skip steps you don’t understand
  • Do things you wouldn’t know how to review

AI is an amplifier. It amplifies your speed—and your errors.

The Final Paradox

The software industry spent decades trying to turn programming into a commodity. AI has finally succeeded.

But by making code cheap, it made judgment the most expensive and scarce resource in the market.

We used to pay for “hands that type.” Now we’ll pay for “eyes that know what not to type.”

Chaos doesn’t come from the tool. It comes from using a speed tool to solve a direction problem.

Running faster is useless if you’re running toward the cliff.


AI flattens the cost curve of code generation. But it doesn’t touch—or even worsens—the cost curve of requirements correction.

Code is cheaper than ever. Judgment is more expensive than ever.

Choose where to invest.


John Macias

Author of The Broken Telephone