
I Was Wrong About AI (Again)

I spent months dismissing generative AI as overhyped. Then I realized I was doing the same thing I did with React: being wrong because I didn’t want to admit the hype might be real.

When React got popular, I dismissed it. Not because I had good reasons; I just didn’t believe the hype. I did the same thing with AI tools at first. Watching myself repeat this pattern was uncomfortable enough that I decided to actually engage with these tools instead of having opinions from the sidelines.

What Actually Changed My Mind

I signed up for JetBrains AI mostly to prove I was trying. Used it occasionally for chat, found it mildly helpful. Then Claude Sonnet integration showed up and something clicked. It could scaffold a UI that actually worked, fix bugs I’d been staring at for 20 minutes, understand vague descriptions and build something functional.

I’m now paying for additional credits and considering a full Claude subscription. That’s not something I planned.

It’s genuinely useful for specific things. Rewriting my unfiltered Slack messages into something appropriate. Catching stupid bugs I’ve been blind to. Generating the kind of scaffolding code that’s tedious but necessary: HTML forms with proper accessibility, basic React structures, that sort of thing. And navigating obscure syntax for tools like Splunk queries where the documentation is… not helpful.
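
To make that concrete, here’s a rough sketch of the kind of accessible form scaffold I mean. The component and field names are made up for illustration; the point is the tedious-but-necessary wiring (labels tied to inputs, errors announced to screen readers) that an LLM will happily churn out.

```tsx
// Illustrative only: names like SignupForm are invented for this sketch.
import { useState, type FormEvent } from "react";

export function SignupForm() {
  const [email, setEmail] = useState("");
  const [error, setError] = useState<string | null>(null);

  function handleSubmit(event: FormEvent<HTMLFormElement>) {
    event.preventDefault();
    if (!email.includes("@")) {
      setError("Enter a valid email address.");
      return;
    }
    setError(null);
    // Real submission logic would go here.
  }

  return (
    <form onSubmit={handleSubmit} noValidate>
      {/* The label is tied to the input via htmlFor/id, not just placed nearby. */}
      <label htmlFor="email">Email address</label>
      <input
        id="email"
        name="email"
        type="email"
        autoComplete="email"
        value={email}
        onChange={(e) => setEmail(e.target.value)}
        aria-invalid={error !== null}
        aria-describedby={error ? "email-error" : undefined}
      />
      {/* role="alert" makes screen readers announce the error when it appears. */}
      {error && (
        <p id="email-error" role="alert">
          {error}
        </p>
      )}
      <button type="submit">Subscribe</button>
    </form>
  );
}
```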

I’ve even tried using it as a devil’s advocate against my own documentation and strategy documents. Mixed results there, but interesting.

I think this is genuinely transformative, but not in the way most people are claiming. It won’t replace engineers the way calculators replaced human computers. It changes what we spend time on. Less time fighting with syntax, more time on architecture. Less time on boilerplate, more time on the parts that actually matter.

If you’re cutting engineering staff to replace them with AI, start with the CEO because you clearly run a company where the work doesn’t matter, just the optics.

A tool that loves to gaslight you

Every LLM will confidently lie to your face. Then apologize and lie differently. It’s fascinating: when I’m working in domains I know well, I catch it immediately. What scares me is how easy it would be to miss in unfamiliar territory.

My experience with AI in enterprise software projects (so far)

Most AI mandates in enterprise feel like ‘just add AI and see what happens.’ I get it: adoption drives optimization somewhere. But AI doesn’t add inherent value just by existing. An AI-powered calculator is a Rube Goldberg machine as a service. Applied thoughtfully, it can drive real improvements. Mandated blindly, it just makes things worse faster.

Three of the main challenges I’ve encountered with AI on our enterprise software projects:

  1. Context management – Our monorepo floods the context window of every major LLM immediately. We have to optimize and tailor our code just to make these tools work. And the lack of standards means we’re littering our codebase with .claude.md files: development tooling leaking into production code (there’s a sketch of one after this list). It irks me.
  2. Documentation – Our documentation platforms can’t mark stale or outdated content. LLMs will confidently cite documentation from 2019 alongside current stuff, and they can’t tell the difference any better than our engineers can.
  3. Cultural issues – Use of AI tools will amplify any existing issues on your development team. If your team doesn’t write tests, AI will just help them not write tests faster. If engineers don’t have a sense of ownership over their code, AI will only accelerate the decline in quality. At a bare minimum, I’d expect engineers to be able to explain and, more importantly, debug their code regardless of how it was generated, whether by an LLM or by good old-fashioned gray matter.
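
For the curious, here’s a rough sketch of the kind of .claude.md file I’m talking about. The contents and commands are illustrative, not from our actual repo; every team structures these differently, which is exactly the standards problem.

```markdown
# Project notes for the AI assistant

## Build and test
- Build: `npm run build`
- Test: `npm test` (add tests for anything you change)

## Conventions
- TypeScript strict mode; avoid `any`
- React function components only

## Context hints
- Ignore `legacy/`; it is frozen and scheduled for deletion
- Shared API types live in `src/types/`; do not redefine them inline
```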

What I’m still figuring out

  • How to prompt effectively. I fall into the trap of having high-level chats with the AI, going back and forth to refine the ask as a discussion, which I’m sure the vendors enjoy since I burn through credits quickly. I need to get used to specification/plan-driven development, where you articulate your plan first and then iterate on that.
  • How to keep up. Changes are happening so quickly: new models, new AI platforms, new benchmarks. It’s a lot to keep up with.
  • Building my own agents. I haven’t found a use case yet, and I’m not convinced I need one just to feel current.

Closing: It’s good, but it’s not that good

This is a useful tool. Not a revolution, not a replacement for thinking, but genuinely useful for specific things. The Luddite position doesn’t make sense here. This is happening whether you engage with it or not.

No one cares if your code is organic. What matters is that it works and you understand it.