FROM THE BOOK
30 Common Software Development Errors
The most frequent mistakes across Product, Development, and AI adoption—and how AI can help mitigate each one.
After years building software products and leading engineering teams, I've identified patterns that appear again and again. These aren't obscure edge cases—they're the everyday mistakes that cost companies millions in wasted effort, frustrated customers, and burned-out teams.
The Broken Telephone explores each of these errors in depth, with real stories, data, and practical solutions. Below is a preview of what you'll learn.
Product & Management Errors
Mistakes in prioritization, requirements, and team structure
1. Building the Wrong Thing
Customer asks for "Excel export" but actually needs real-time tax summaries. The request passes through Sales, Product, and finally Engineering—each translation losing context. By the time code is written, the original problem is forgotten.
How AI Helps:
- Analyze customer conversations to find the real problem behind requests
- Engineers can query: "What do customers actually complain about?"
- Synthesize patterns across hundreds of support tickets automatically
2. Subjective Prioritization
Decisions based on gut feelings, political pressure, or whoever shouts loudest. "No customer complained" becomes justification for ignoring technical debt—until small issues compound into a catastrophic outage.
How AI Helps:
- Connect AI to production data, error logs, and usage analytics
- Prioritize objectively based on actual impact, not opinions
- Shift from reactive to predictive: "fix what will break next week"
3. Wrong Slicing
Features sliced too large become multi-month projects with unclear progress. Sliced too small, they deliver no independent value. Both approaches destroy team momentum and make planning impossible.
How AI Helps:
- Analyze feature scope and suggest optimal breakdown
- Identify dependencies between slices that humans miss
- Estimate effort by comparing to historical data
4. Feature-ism (Allergy to Refactoring)
Only new features get prioritized. Technical debt, refactoring, and infrastructure improvements are perpetually pushed to "next sprint." System quality degrades until even simple changes become dangerous.
How AI Helps:
- Quantify technical debt impact on velocity and bug rates
- Identify "hot spots" in code that cause most issues
- Make refactoring faster and safer, reducing perceived cost
5. Deadlines Without Requirements
Committing to delivery dates before understanding scope. "We need this by Q2" is announced before anyone knows what "this" actually means. Teams scramble to hit arbitrary dates with incomplete information.
How AI Helps:
- Rapidly generate requirement drafts from rough descriptions
- Identify ambiguities and missing requirements early
- Estimate based on actual scope, not wishful thinking
6. Output vs Outcome Thinking
Measuring success by features shipped, not problems solved. Velocity is high, but customer satisfaction is flat. Teams celebrate launches that users ignore—or actively dislike.
How AI Helps:
- Correlate feature releases with actual usage and satisfaction metrics
- Identify features with low adoption for review
- Connect customer feedback to specific features automatically
7. Ignoring Qualitative Feedback
Over-reliance on quantitative metrics. NPS scores and surveys capture numbers but miss nuance. Teams don't understand why customers feel frustrated—just that they are.
How AI Helps:
- Process customer interviews, support calls, and reviews at scale
- Extract themes and sentiments from unstructured feedback
- Answer: "Why are customers frustrated with feature X?"
8. Shielding Engineers from Customers
Developers never hear customer pain directly. They see filtered Jira tickets, not frustrated humans. They build for specifications, not people—and the disconnect shows in every feature.
How AI Helps:
- Summarize and surface relevant customer feedback directly in the IDE
- Enable engineers to query customer data without waiting for reports
- Bridge the gap when direct access isn't possible
9. Scope Creep Through "Small" Additions
"Can we just add..." multiplied by ten stakeholders. Each addition seems small in isolation. Together, they double the project scope and timeline without anyone noticing until it's too late.
How AI Helps:
- Estimate the true cost of each "small" addition
- Track cumulative scope changes and alert when thresholds are exceeded
- Propose which additions to defer based on value/effort ratio
10. Inexperienced Product Owners
Companies hire junior POs to save costs. They lack domain knowledge, technical understanding, and customer empathy. Decisions are made without context, and the team pays the price.
How AI Helps:
- Synthesize customer feedback, market research, and competitor analysis
- Answer technical questions, reducing the knowledge gap
- Validate assumptions against data before committing resources
Developer Errors
Technical mistakes in coding, architecture, and technology choices
1. The 90% Done Illusion
"Almost finished" for months. The last 10% takes 90% of the time. Edge cases, error handling, testing, and integration reveal hidden complexity that nobody anticipated.
How AI Helps:
- Identify edge cases early by analyzing requirements and code
- Generate comprehensive test cases that expose complexity upfront
- Catch integration issues faster through automated code review
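One way to surface the hidden 10% early is to write the edge-case tests before declaring a feature done. A minimal Python sketch of that habit (the `parse_price` function and its cases are illustrative, not from the book):

```python
def parse_price(text):
    """Parse a user-entered price like '$1,234.50' into cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price")
    value = float(cleaned)
    if value < 0:
        raise ValueError("negative price")
    return round(value * 100)

# Edge cases the happy-path demo never exercises:
assert parse_price("$1,234.50") == 123450
assert parse_price("0") == 0
assert parse_price("  $5  ") == 500
for bad in ["", "$", "-3"]:
    try:
        parse_price(bad)
        assert False, f"expected ValueError for {bad!r}"
    except ValueError:
        pass
```

The demo-able version of this function is a few minutes of work; the empty-string, whitespace, and negative-value cases are where the remaining "10%" actually lives.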
2. Over-Engineering
Building enterprise-grade solutions for simple problems. Abstract factories, dependency injection frameworks, microservices architecture—for a CRUD app with 100 users that may never scale.
How AI Helps:
- Suggest the simplest solution when given business context
- Review designs and flag unnecessary complexity
- Prompt: "This needs to work for 100 users, not 1 million"
3. Technology Hype Cycles
Overestimating benefits and underestimating costs of new technologies. Blog posts show the happy path. Six months into migration, edge cases and ecosystem gaps reveal the true cost.
How AI Helps:
- Ask about real-world limitations and edge cases before committing
- Analyze your codebase and predict migration challenges
- Prototype quickly with new tech before full commitment
4. Poor Code Quality
Code that works but is unmaintainable. Missing tests, unclear naming, copy-pasted logic, no documentation. Technical debt accumulates until simple changes take days and carry high risk.
How AI Helps:
- Generate documentation and tests for existing code
- Catch quality issues in real-time during code review
- Refactor while preserving behavior (30-40% faster)
5. Custom Framework Syndrome
"Our use case is special." Teams build internal frameworks instead of using established ones. Years later, they maintain now-obsolete tools while the ecosystem has moved on without them.
How AI Helps:
- Know standard solutions: "Is there a library that does X?"
- Help migrate from custom to standard solutions
- Reduce the "learning curve" excuse for building custom
6. Siloed Knowledge (Bus Factor = 1)
Critical knowledge lives in one person's head. When they're unavailable, the team is paralyzed. Documentation is outdated or nonexistent. Every question becomes "ask Sarah."
How AI Helps:
- Generate documentation from code and commit history
- Answer codebase questions, reducing dependency on individuals
- Explain legacy code to new team members (50-70% faster onboarding)
7. Premature Optimization
Optimizing code before knowing if it's a bottleneck. Hours spent on micro-optimizations that have zero user impact. Meanwhile, the actual bottleneck sits elsewhere, ignored.
How AI Helps:
- Analyze performance data to identify real bottlenecks
- Focus optimization efforts where they actually matter
- Quickly test whether an optimization actually helps
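Measuring before optimizing costs a few lines with the standard library. A sketch using Python's built-in `cProfile` (the `slow_total` function is a deliberately naive stand-in for real application code):

```python
import cProfile
import io
import pstats

def slow_total(n):
    # Deliberately naive: repeated string concatenation.
    s = ""
    for i in range(n):
        s += str(i)
    return len(s)

def profile(func, *args):
    """Profile one call and return its result plus a report of the top hot spots."""
    pr = cProfile.Profile()
    pr.enable()
    result = func(*args)
    pr.disable()
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()

result, report = profile(slow_total, 10_000)
# The report names the actual bottleneck instead of the one you guessed.
```

Ten minutes with a profile report usually contradicts the intuition that triggered the micro-optimization in the first place.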
8. Security as Afterthought
SQL injection, XSS, authentication flaws discovered in production—or worse, by attackers. Fixing security debt after launch is 10x more expensive than building it in from the start.
How AI Helps:
- Identify security vulnerabilities during code review
- Suggest secure patterns as code is written
- Scan existing code for OWASP Top 10 vulnerabilities
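The classic example of building security in rather than bolting it on is the parameterized query. A self-contained sketch with Python's `sqlite3` (table and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # VULNERABLE: user input is spliced directly into the SQL string.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # SAFE: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
# The injection leaks every row through the string-built query...
assert find_user_unsafe(payload) == [("admin",)]
# ...while the parameterized query matches nothing.
assert find_user_safe(payload) == []
```

The two functions differ by one line, which is why this flaw is so cheap to prevent at write time and so expensive to hunt down after launch.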
9. Fear of Better Technology
"We can't switch, it would take forever." Fear of the unknown keeps teams on outdated stacks. The daily friction cost exceeds the migration cost, but nobody calculates it.
How AI Helps:
- Estimate actual migration effort more accurately
- Handle much of the migration work automatically (70% faster)
- Make learning new technology faster and less intimidating
10. Underestimating Migration Costs
Learning curves, ecosystem gaps, debugging unfamiliar code, missing documentation. The "simple migration" becomes a multi-quarter nightmare that nobody budgeted for.
How AI Helps:
- Reduce learning curves by explaining unfamiliar code and patterns
- Fill documentation gaps by analyzing source code
- Assist with the actual migration work
AI-Specific Errors
New mistakes that emerge when adopting AI tools
1. Shipping Code You Don't Understand
AI generates working code. Developer ships it without understanding how it works. At 3 AM when it breaks—and it will break—nobody can debug it because nobody knows what it does.
How to Avoid:
- Use AI to explain AI-generated code before committing
- Rule: "If you can't explain it, you can't ship it"
- Ask AI to add comments explaining the logic
2. AI-Generated Over-Engineering
AI optimizes for correctness, not simplicity. It generates enterprise-grade patterns for simple problems—performance considerations for code that runs once a day.
How to Avoid:
- Explicitly prompt for simplicity: "Make this simple and readable"
- Include constraints: "This is a small internal tool"
- Senior review specifically for unnecessary complexity
3. Trusting Without Verification
AI confidently generates incorrect code. Tests pass but edge cases fail. The code looks professional and clean but has subtle bugs that only appear in production.
How to Avoid:
- Always verify AI output against requirements
- Generate comprehensive tests (including edge cases) and run them
- Cross-check critical logic with alternative AI queries
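One cheap cross-check for critical logic: test the AI's clever implementation against a slow but obviously correct reference over many random inputs. A sketch of that differential-testing habit (both functions are illustrative; the "fast" one is Kadane's maximum-subarray algorithm, typical of the polished code AI produces):

```python
import random

def fast_max_subarray(xs):
    """Kadane's algorithm: the kind of clean code an AI hands you."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def reference_max_subarray(xs):
    """Obviously correct O(n^2) reference used only for verification."""
    return max(
        sum(xs[i:j])
        for i in range(len(xs))
        for j in range(i + 1, len(xs) + 1)
    )

# Hammer the clever version with random inputs and compare.
random.seed(0)
for _ in range(200):
    xs = [random.randint(-10, 10) for _ in range(random.randint(1, 12))]
    assert fast_max_subarray(xs) == reference_max_subarray(xs)
```

A subtle bug that survives hand-picked tests rarely survives two hundred random comparisons against a brute-force oracle.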
4. Prompt Laziness
Vague prompts produce vague results. "Build an auth system" without specifying requirements. Teams iterate through many bad outputs instead of investing in one good prompt.
How to Avoid:
- Invest time in clear, specific prompts with context
- Include constraints, examples, and edge cases in prompts
- Save and refine prompts that work well
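The gap between a lazy prompt and an invested one is easy to see side by side. The specific requirements below are illustrative, not a recommended auth design:

```text
Lazy prompt:
  Build an auth system.

Invested prompt:
  Build session-based authentication for a Flask app with roughly
  500 internal users.
  Constraints: passwords hashed with bcrypt; sessions expire after
  30 minutes of inactivity; lock the account after 5 failed logins.
  Edge cases to handle: concurrent logins from two devices, and a
  password reset while a session is still active.
```

The first prompt forces the AI to guess every requirement; the second turns those guesses into explicit, reviewable decisions.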
5. Losing the Learning Opportunity
Junior developers use AI as a crutch instead of learning. They can generate code but can't debug or modify it. The skill gap widens as they outsource understanding.
How to Avoid:
- Use AI to teach, not just generate: "Explain why this works"
- Code along with AI, understanding each step
- Deliberately practice without AI to build foundational skills
6. Context Window Blindness
AI only sees what you show it. It suggests changes that break code in other files, or solutions that conflict with existing patterns elsewhere in the codebase.
How to Avoid:
- Provide relevant context: related files, patterns, constraints
- Use AI tools that understand the full codebase
- Review AI suggestions for system-wide impact
7. Hallucinated APIs and Libraries
AI invents functions that don't exist, references outdated library versions, or suggests deprecated approaches. The code looks correct but won't compile.
How to Avoid:
- Verify all API calls and imports against actual documentation
- Use AI tools with up-to-date training data
- Test generated code immediately; don't batch it
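A quick smoke check catches a hallucinated API before it reaches a commit: confirm the module imports and the attribute actually exists. A minimal sketch (the helper name and the invented `dump_pretty` example are illustrative):

```python
import importlib

def api_exists(module_name, attr):
    """Return True only if module_name.attr really exists and is callable."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return callable(getattr(mod, attr, None))

# A real stdlib function passes the check...
assert api_exists("json", "dumps")
# ...while a plausible-sounding invention does not.
assert not api_exists("json", "dump_pretty")
```

Running generated code immediately does the same job one level up: a hallucinated import fails in seconds, not in a batch review days later.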
8. Copy-Paste Multiplication
AI makes it easy to duplicate code. Instead of abstracting, developers generate variations. The codebase bloats with near-identical code that drifts over time.
How to Avoid:
- Ask AI to identify duplication and suggest abstractions
- Prompt: "Is there a pattern here we should abstract?"
- Regular AI-assisted code reviews for DRY violations
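The drift starts with two near-identical functions an AI generated in seconds. A sketch of the pattern and the abstraction that replaces it (report names and fields are illustrative):

```python
# Near-duplicates an AI will happily keep multiplying:
def monthly_report_csv(rows):
    lines = ["date,amount"]
    lines += [f"{r['date']},{r['amount']}" for r in rows]
    return "\n".join(lines)

def refund_report_csv(rows):
    lines = ["date,refund"]
    lines += [f"{r['date']},{r['refund']}" for r in rows]
    return "\n".join(lines)

# One parameterized function replaces every variation:
def report_csv(rows, columns):
    lines = [",".join(columns)]
    lines += [",".join(str(r[c]) for c in columns) for r in rows]
    return "\n".join(lines)

rows = [{"date": "2024-01-01", "amount": 10}]
assert report_csv(rows, ["date", "amount"]) == monthly_report_csv(rows)
```

Each copy costs nothing to generate, which is exactly why the abstraction question has to be asked explicitly rather than waiting for the pain to force it.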
9. Security Blind Spots
AI generates code with subtle security flaws—hardcoded credentials, SQL injection vulnerabilities, improper input validation. It looks clean but is dangerous.
How to Avoid:
- Explicitly ask AI to review for security vulnerabilities
- Use specialized security scanning tools on AI-generated code
- Never trust AI with auth logic without expert review
10. Treating AI as Infallible
Not questioning suggestions. Assuming AI "knows best." Missing obvious errors because "the AI said so." Abdicating judgment to a tool that has none.
How to Avoid:
- Maintain healthy skepticism: AI is a tool, not an authority
- Know where AI excels (boilerplate) and struggles (novel problems)
- Final judgment always remains with the human developer
Go Deeper in the Book
Each error is explored with real stories, data, and practical solutions. Learn how leading companies are fixing the broken telephone and building software that actually solves customer problems.