I spent months dismissing generative AI as overhyped. Then I realized I was doing the same thing I did with React: being wrong because I didn't want to admit the hype might be real.
When React got popular, I dismissed it. Not because I had good reasons; I just didn't believe the hype. I did the same thing with AI tools at first. Watching myself repeat this pattern was uncomfortable enough that I decided to actually engage with these tools instead of having opinions from the sidelines.
What Actually Changed My Mind
I signed up for JetBrains AI mostly to prove I was trying. Used it occasionally for chat, found it mildly helpful. Then Claude Sonnet integration showed up and something clicked. It could scaffold a UI that actually worked, fix bugs I'd been staring at for 20 minutes, understand vague descriptions and build something functional.
I'm now paying for additional credits and considering a full Claude subscription. That's not something I planned.
It's genuinely useful for specific things. Rewriting my unfiltered Slack messages into something appropriate. Catching stupid bugs I've been blind to. Generating the kind of scaffolding code that's tedious but necessary: HTML forms with proper accessibility, basic React structures, that sort of thing. And navigating obscure syntax for tools like Splunk queries where the documentation is… not helpful.
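For example, here's roughly the shape of scaffolding I'm happy to delegate: a minimal accessible signup form in React/TypeScript. This is a sketch in my own words, not tool output, and the component and field names are made up for illustration.

```tsx
import React, { useState } from "react";

// Minimal accessible email-signup form: explicit label/input pairing,
// a hint wired up via aria-describedby, and native validation.
export function SignupForm({ onSubmit }: { onSubmit: (email: string) => void }) {
  const [email, setEmail] = useState("");

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault(); // keep the SPA from reloading the page
        onSubmit(email);
      }}
    >
      <label htmlFor="signup-email">Email address</label>
      <input
        id="signup-email"
        type="email"
        required
        autoComplete="email"
        aria-describedby="signup-email-hint"
        value={email}
        onChange={(e) => setEmail(e.target.value)}
      />
      <p id="signup-email-hint">We only use this for the newsletter.</p>
      <button type="submit">Subscribe</button>
    </form>
  );
}
```

Tedious to type, easy to get subtly wrong (the label/id wiring especially), and exactly the kind of thing I no longer write by hand.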
I've even tried using it as a devil's advocate against my own documentation and strategy documents. Mixed results there, but interesting.
I think this is genuinely transformative, but not in the way most people are claiming. It won't replace engineers the way calculators replaced human computers. It changes what we spend time on. Less time fighting with syntax, more time on architecture. Less time on boilerplate, more time on the parts that actually matter.
If you're cutting engineering staff to replace them with AI, start with the CEO, because you clearly run a company where the work doesn't matter, just the optics.
A tool that loves to gaslight you
Every LLM will confidently lie to your face. Then apologize and lie differently. It's fascinating when I'm working in domains I know well; I catch it immediately. What scares me is how easy it would be to miss in unfamiliar territory.
My experience with AI in enterprise software projects (so far)
Most AI mandates in enterprise feel like "just add AI and see what happens." I get it: adoption drives optimization somewhere. But AI doesn't add inherent value just by existing. An AI-powered calculator is a Rube Goldberg machine as a service. Applied thoughtfully, it can drive real improvements. Mandated blindly, it just makes things worse faster.
Three of the main challenges I've encountered with AI on our enterprise software projects:
- Context management: Our monorepo floods the context window of every major LLM immediately. We have to optimize and tailor our code just to make these tools work. And the lack of standards means we're littering our codebase with .claude.md files: development tooling leaking into production code. It irks me. (There's a sketch of what I mean right after this list.)
- Documentation: Our documentation platforms can't mark stale or outdated content. LLMs will confidently cite documentation from 2019 alongside current material, and they can't tell the difference any better than our engineers can.
- Cultural issues: Use of AI tools will amplify any existing issues on your development team. If your team doesn't write tests, AI will just help them not write tests faster. If engineers don't have a sense of ownership over their code, AI will only accelerate the decline in quality. At a bare minimum, I'd expect engineers to be able to explain and, more importantly, debug their code regardless of how it was generated, whether by an LLM or by good old-fashioned gray matter.
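To make the first point concrete, here's a hypothetical example of the kind of file we end up committing next to production code. The path, commands, and rules below are invented for illustration, not copied from our repo:

```markdown
<!-- packages/billing/.claude.md (hypothetical) -->
# Notes for AI assistants working in the billing package

- Run `pnpm test --filter billing` before proposing any change.
- Never edit files under src/__generated__/; they are build artifacts.
- Money amounts are integer cents everywhere; do not introduce floats.
```

Useful instructions, but they live in the source tree purely to steer a development tool.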
What I'm still figuring out
- How to prompt effectively: I fall into the trap of having high-level chats with the AI, then going back and forth refining the ask as a discussion, which I'm sure the vendors enjoy as I burn through credits quickly. I need to get used to specification/plan-driven development, where you articulate your plan first and then iterate on that.
- How to keep up: Changes are happening so quickly. New models, new AI platforms, new benchmarks; it's a lot to keep up with.
- Building my own agents: I haven't found a use case yet, and I'm not convinced I need one just to feel current.
Closing: It's good, but it's not that good
This is a useful tool. Not a revolution, not a replacement for thinking, but genuinely useful for specific things. The Luddite position doesn't make sense here. This is happening whether you engage with it or not.
No one cares if your code is organic. What matters is that it works and you understand it.