A Love Letter to AI Skeptics (From Someone Who Built the Thing You Don’t Trust)

Valentine’s Day seems like a weird time to write about AI coding tools. But if we’re being honest, every conversation about AI and software development is a weird kind of love triangle between developers who love coding, tools that promise to make it easier, and a whole lot of trust issues in between.

I built one of these tools. I led the product team that created one of the AI coding assistants you might be skeptical about. And I want you to know I get it.

When developers tell me they don’t trust AI tools, I don’t hear Luddites resisting progress. I hear people who’ve been burned by tools that promised magic and delivered bugs. I hear engineers who care about craft. I hear the same instinct that makes you suspicious of any abstraction that hides too much.

You’re not wrong to be skeptical. Some of your concerns are real and aren’t going away. But some of them are problems we can solve if we’re honest about what they are.

So consider this a Valentine’s Day card to the skeptics. Not the kind that tries to win you over with promises and chocolates, but the kind that says “I see you, I hear you, and here’s what’s actually true.”

The Concerns That Are Real and Persistent

Hallucinations Aren’t Going Away

Let’s start with the big one. AI coding tools hallucinate. They generate code that looks right but does the wrong thing. They confidently suggest APIs that don’t exist. They create subtle bugs that pass code review because they look plausible.
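Here’s a hypothetical illustration of what that looks like (the URL is a placeholder). The commented-out call is exactly the kind of thing a model will suggest with total confidence: it reads plausibly, but no such function exists in the requests library.

```python
import requests

# Plausible-looking but hallucinated: requests has no fetch_json function.
# It reads fine in review if you don't know the library cold.
# data = requests.fetch_json("https://api.example.com/users")

# What the requests library actually provides:
response = requests.get("https://api.example.com/users", timeout=10)
response.raise_for_status()  # surface HTTP errors instead of parsing garbage
data = response.json()
```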

This isn’t a temporary problem we’ll fix with better models. It’s fundamental to how these systems work. They’re predicting what code should look like based on patterns, not reasoning about what code should do based on logic.

I watched this play out while building Amazon CodeWhisperer. We could reduce the hallucination rate. We could improve accuracy on common patterns. But we couldn’t eliminate the core issue. The model doesn’t understand your codebase the way you do.

Does this mean AI tools are useless? No. But it means they’re junior developers who hallucinate with confidence, not senior engineers who know when they don’t know. You wouldn’t trust a junior dev to work unsupervised on critical paths. Same rule applies here.

Technical Debt at Machine Speed

AI tools let you create technical debt faster than ever before.

You can generate a REST API in 30 seconds. You can scaffold an entire service layer before lunch. The constraint used to be typing speed and mental load. Now the constraint is review capacity and architectural judgment.

I’ve seen this firsthand. Teams using AI tools ship features faster, but they also ship more code that needs to be refactored later. Technical debt increases as code reuse decreases. The debt compounds at machine speed because the tools don’t have opinions about simplicity, maintainability, or whether you should even be writing that code in the first place.

Here’s the thing. This is real, and it’s not going away. AI tools optimize for generating code, not for not generating code. They’ll happily create new classes and functions that already exist elsewhere in your codebase. They’ll happily build a five-class solution when one would do, because their training data is full of five-class solutions.
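A toy sketch of that pattern, with names I made up for illustration: the layered answer models often reach for, next to the one the codebase actually needed.

```python
# The kind of layered answer models often produce, because enterprise
# patterns dominate their training data:
class GreetingFormatter:
    def format(self, name: str) -> str:
        return f"Hello, {name}!"

class Greeter:
    def __init__(self, formatter: GreetingFormatter):
        self.formatter = formatter

    def greet(self, name: str) -> str:
        return self.formatter.format(name)

class GreeterFactory:
    def create(self) -> Greeter:
        return Greeter(GreetingFormatter())

# What the problem actually called for:
def greet(name: str) -> str:
    return f"Hello, {name}!"
```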

The speed advantage is real. The debt risk is also real. Both can be true.

Loss of Craft Is a Legitimate Fear

What happens when developers stop learning the fundamentals because AI tools abstract them away?

I think about this a lot. Those of us who have been coding a long time learned languages, runtimes, and frameworks. We learned new languages that improved on older ones. We happily moved to higher levels of abstraction with frameworks that made our lives easier…but we understood what they were doing underneath. That foundation still matters when I’m building products today.

If you’re learning to code right now with Copilot or Claude, are you building the same foundation? Or are you learning to prompt AI systems without understanding what the generated code actually does?

This isn’t hypothetical. I’ve talked to bootcamp grads who can ship features quickly but can’t explain fundamentals. The AI tools work until they don’t, and then the developer is stuck.

I don’t have an easy answer for this one. The industry is running an experiment in real time. Can you build great software by learning to architect and review code without learning to write it from scratch first? I’ve spoken about this before. I believe it’s analogous to using a scientific calculator in school…that only happens after you learn the fundamentals of mathematics.

Maybe you can. Maybe the craft evolves and the fundamentals change. But I understand why experienced developers are worried. You learned a certain way, and it worked. Now we’re implying that junior engineers can skip that path because the AI handles it.

I’m worried too.

The Concerns We Can Actually Solve

Not all the skepticism is about unsolvable problems. Some of it is about tooling, process, and how we use these systems.

Architecture Review Scales Better Than Code Review

One pattern I’ve seen work is treating AI-generated code like you’d treat any third-party library or contractor work. You don’t review every line. You review the architecture, the interfaces, and the test coverage.

If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions — often attributed to Albert Einstein

When I’m using AI tools, I spend 80% of my time collaborating on detailed specifications and 20% reviewing the output. The spec defines what success looks like. The review confirms the AI understood the spec.

This isn’t new. It’s how you’d work with any developer you don’t fully trust yet. Clear requirements, defined acceptance criteria, verification that the implementation matches the spec.

The difference is the “developer” executes in 30 seconds instead of two weeks, so the feedback loop is faster. Bad architectural decisions surface immediately instead of three sprints later.
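One concrete way to run that loop is to write the acceptance criteria as executable tests before delegating the implementation. A minimal sketch, assuming a hypothetical slugify function is the thing being handed to the AI (the module name is made up):

```python
# Acceptance criteria written first, as pytest tests. The AI gets the
# spec; these tests verify the output matches the intent.
import pytest
from slugs import slugify  # hypothetical module the AI is asked to produce

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_repeated_separators():
    assert slugify("a  --  b") == "a-b"

def test_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```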

Can you catch everything in review? No. But you can catch the things that matter—security issues, performance problems, violations of your architectural principles. The rest is technical debt you can refactor later, just like you would with human-written code.

Guardrails and Testing Beat Post-Hoc Fixes

We’re using AI tools with none of the safety systems we’d require for any other high-velocity development approach.

You wouldn’t let a junior dev commit directly to main without tests. You wouldn’t skip code coverage requirements because someone shipped fast. But teams use AI tools with no guardrails and then wonder why they’re creating problems.

The fix isn’t complicated. Required test coverage. Automated security scanning. Architecture decision records. The same practices that make human development safe make AI-assisted development safe.

I’ve been working on side projects using AI coding tools with strict requirements. Every generated component needs tests, every API needs input validation, every database query needs to be reviewed for injection vulnerabilities. It’s slower than just accepting whatever the AI generates, but it’s still faster than writing everything by hand.
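Those last two requirements are mechanical enough to show. A minimal sketch using Python’s standard library (the table and fields are made up for illustration): validate input at the boundary, and never interpolate it into SQL.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Input validation at the boundary: reject anything that isn't a
    # plausible username before it reaches the database layer.
    if not username or len(username) > 64 or not username.isalnum():
        raise ValueError("invalid username")

    # Parameterized query: the ? placeholder keeps user input out of the
    # SQL text entirely, which is what closes the injection hole.
    # Never build this with an f-string, even if the AI suggests one.
    cur = conn.execute(
        "SELECT id, username FROM users WHERE username = ?",
        (username,),
    )
    return cur.fetchone()
```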

The mistake is thinking AI tools mean you can skip the engineering discipline. They don’t. They mean you need more discipline because the velocity is higher.

We Can Measure What Matters…We Just Aren’t

Teams are measuring the wrong things.

Everyone’s tracking “code generated per day” or “time saved writing boilerplate.” These are vanity metrics. They tell you the AI is fast. They don’t tell you if it’s useful.

What actually matters? Are you shipping better products faster? Is your bug rate going up or down? Are developers spending more time on high-leverage work? Is technical debt growing faster than your ability to pay it down?

After building and working with these tools for years now, I can tell you measuring AI tool effectiveness is hard. The inputs are easy to count (lines of code, suggestions accepted). The outcomes are slow to materialize (product quality, developer satisfaction, long-term maintainability).

But we can measure outcomes if we try. Track cycle time from feature idea to production. Track bug rates and P0 incidents. Survey your developers about whether they’re spending time on work that matters or just reviewing AI output.
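None of this needs a platform to start. Here’s a sketch of the arithmetic, with made-up records standing in for whatever your issue tracker exports:

```python
from datetime import datetime
from statistics import median

# Made-up export: one record per shipped feature, using dates your
# tracker already knows about.
features = [
    {"idea": "2025-01-06", "shipped": "2025-01-21", "p0_bugs": 0},
    {"idea": "2025-01-08", "shipped": "2025-02-03", "p0_bugs": 2},
    {"idea": "2025-01-15", "shipped": "2025-01-29", "p0_bugs": 1},
]

def days(start: str, end: str) -> int:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

cycle_times = [days(f["idea"], f["shipped"]) for f in features]
print(f"median cycle time: {median(cycle_times)} days")
print(f"P0 bugs per feature: {sum(f['p0_bugs'] for f in features) / len(features):.2f}")

# Compare these numbers before and after adopting a tool. That's the
# measurement, not lines of code generated.
```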

Most teams aren’t doing this. They’re using AI tools because everyone else is, not because they’ve proven the tools make them better.

This is solvable. It requires treating AI tools like any other engineering investment. Define success criteria, measure actual outcomes, kill the tools if they’re not working.

What This Means for the Next Few Years

Here’s my honest take after years of using these tools in my own work and watching customers use them.

AI tools are not going to replace developers. They’re going to change what being a developer means.

The developers who thrive will be the ones who get good at specification, architecture, and review. The ones who can take a complex problem, break it into clear requirements, delegate the implementation to AI agents, and verify the results match the intent.

This is a real skill. It’s not “easier” than coding. It’s different. You need to think more clearly about what you want before you start, because the AI will build exactly what you asked for, not what you meant.

The developers who struggle will be the ones who try to use AI tools as a faster keyboard. You can’t just prompt your way to good software any more than you could Stack Overflow your way to good software. The judgment still matters. The architectural thinking still matters.

And yes, some developers will get left behind. Not because they can’t learn the new tools, but because they won’t. They’ll dig in on the old way because it feels safer.

I get it. I learned to code in BASIC on a TRS-80 when I was 11. In the 2000s, I spent years getting great at C#. Every new abstraction felt like a threat to skills I’d worked hard to build.

But here’s what I learned over 25 years of building developer tools. The abstraction layers keep rising, and developers adapt. Each time a new layer of abstraction arrived, people worried about loss of craft. Each time, the craft evolved.

AI tools are another step in that evolution. A big step. Maybe the biggest since high-level languages. But it’s still the same pattern. We’re trading implementation details for higher-level thinking.

The Honest Ending

I think AI tools for software development are powerful. I also think a lot of the skepticism is justified.

Hallucinations are real. Technical debt velocity is real. Loss of craft is a legitimate concern. If you’re worried about these things, you’re paying attention.

But the problems we can solve—guardrails, measurement, architecture review—are worth solving. The industry is going to use these tools whether individual developers want to or not. The question is whether we use them well or use them poorly.

I’d rather have skeptical developers building the practices and tooling that make AI-assisted development safe than have believers shipping code without questioning whether it’s good.

So if you’re skeptical, stay skeptical. Push back on the hype. Demand real measurements. Insist on engineering discipline. Ask hard questions about what we’re trading away for speed.

But also try the tools. Not to replace your workflow, but to see what they’re actually good at. You might find they’re useful for the boring parts while you focus on the interesting ones. You might find they’re overhyped and not worth the risk. Either way, you’ll know from experience instead of assumptions.

That’s the relationship I want with AI skeptics. Not “trust me, this is great” but “here’s what’s real, here’s what we can fix, and let’s figure out the rest together.”

Happy Valentine’s Day. Let’s build something good.

