I’ve been talking to friends about changes in the market driven by AI. Do we still need developers? Has anyone noticed real changes yet? I don’t have radical answers. As with other shifts, I think this one won’t move in a straight line — some areas will be hit fast, some slowly, some maybe not at all. I get that people like clean narratives about the whole world changing (or not changing, if you’re a conservative). But the reality is usually that one part changes brutally, another stays the same, and a third pushes back in the opposite direction. What the proportions will be, I have no idea.
Let me start with what I’m seeing in myself. I have almost ten projects. I used to have lots of ideas — now I can turn each one into at least a prototype in no time. I’m an architect and I don’t even look at the source code. Every developer I talk to who actually uses these tools is blown away. And I mostly talk to creative people who, instead of fearing for their jobs, are marveling at what they can suddenly do. So what happened? What’s going on? And what will it mean? I’ll start by saying I have no good macro prediction. But let’s break it down…
The revolution in tooling
If you tried programming with AI even six months ago, it looked quite different from today. One change is obviously the models themselves, which are improving at a brutal pace, but the other change is how we use those models — how they’re harnessed (the tools built around them are literally called a “harness”).
There’s Claude Code, OpenCode, Codex, Kimi-CLI, GitHub Copilot… Each of these tools can use different models. The harness provides the interaction environment — not just where you write prompts, but how the models are actually used: which parts of the codebase get sent to the model, how changes are applied, and so on. It’s essentially packaged prompt engineering plus a pile of surrounding tools. Individual harnesses can also run command-line tools, which gives them the capabilities of any geek in a Linux terminal.
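Concretely, a harness is just a loop around the model: pick relevant context, package it into a prompt, let the model propose a change, apply it, verify. Here’s a minimal toy sketch in Python — the model call is a hard-coded stub, and every name in it is illustrative, not any real tool’s API:

```python
# Toy "harness" loop. A real harness would call an LLM API where
# stub_model() is; all names here are illustrative inventions.

def select_context(codebase, task):
    """Pick files relevant to the task. Real harnesses use search
    tools, repo maps, or embeddings; here: naive keyword matching."""
    return {path: src for path, src in codebase.items()
            if any(word in src for word in task.lower().split())}

def build_prompt(context, task):
    """Package the selected code plus the instruction into one prompt."""
    files = "\n\n".join(f"--- {p} ---\n{s}" for p, s in context.items())
    return f"{files}\n\nTask: {task}\nReply with the full updated file."

def stub_model(prompt):
    """Stand-in for the model: returns a (path, new_source) proposal."""
    return ("greet.py", 'def greet(name):\n    return f"Hello, {name}!"\n')

def run_harness(codebase, task, check):
    """One agent iteration: context -> prompt -> model -> apply -> verify.
    A real harness would loop until check() passes or it gives up."""
    prompt = build_prompt(select_context(codebase, task), task)
    path, new_src = stub_model(prompt)
    codebase[path] = new_src      # apply the proposed change
    return check(codebase)        # e.g. run the test suite

codebase = {"greet.py": 'def greet(name):\n    return name\n'}

def check(cb):
    ns = {}
    exec(cb["greet.py"], ns)      # "run the tests" on the edited file
    return ns["greet"]("Ada") == "Hello, Ada!"

print(run_harness(codebase, "make greet return a greeting", check))  # True
```

The point of the sketch is that none of the intelligence lives in the loop itself — it lives in the model plus the tools the loop can call (tests, compilers, the shell), which is exactly why harness quality and model quality improve independently.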
And it’s the combination of these two changes that creates the step change. If your last experience was chatting with ChatGPT in a little window, trying to get it to make your old website responsive — and it broke things three times or wrote “…change the rest similarly” — then you haven’t experienced what’s happening today.
I’ve successfully built applications in Python, Rust (which I don’t know, by the way), Ruby on Rails, and C. Most of them are relatively complex apps and I saved a ton of time on each, but more importantly — if I couldn’t build them with AI, I wouldn’t have started them at all. Learning Rust or Android development vs. “I want an app that does this” is a completely different equation. And even in languages I know well, I’d still need to find libraries, figure out how to use them, and type out the characters. The classic advice “things take time, be patient” doesn’t apply anymore. I do need patience because the AI doesn’t generate content instantly, but while I wait, I open another terminal and work on the next project. I can handle about four at once — beyond that my head starts to feel the pressure.
What’s happening in practice
I’m already seeing the first cases around me where these changes have materialized. One person with AI tools (say Claude Code, at $20–200 a month) can replace and has replaced an entire team of developers and sysadmins. Not because they’re a genius, but because they know what they want. People who have a vision and understand the product suddenly don’t need someone to code it for them. Technical English is enough.
And yes, it leads to layoffs. Entrenched developers with their comfort zone and their own little pace are simply an unnecessary luxury. And I say this as a developer with a comfort zone.
On the other side, CVs are appearing on the market from developers who were fired because they refused to use AI. It’s like a coachman refusing to get into a car because he has a relationship with horses and understands them. One such developer explained that he writes beautiful code and understands it. The answer was simple: nobody looks at your beautiful code except the compiler, and the compiler doesn’t care if it’s beautiful. You need things that work, not art that takes ten times longer.
I’m a fan of ugly code
For years I’ve seen some programmers act superior. “Ugh, you work in PHP.” Overengineering, slow coding, modularity… And yet so often it’s simply about getting it done so the product meets requirements. This often leads to something my good friend calls “the reinstall illusion.” My computer feels slow, so I reinstall the OS to clean out years of accumulated mess. Sure, it’ll run better for a while. But I’ll also kill weeks reinstalling and configuring all my programs. A clean operating system is nice, but you lose everything you taught your computer to do the way you want it.
I see this constantly in IT departments. A hyped-up professional software architect or developer walks in, looks at the ugly non-modular code in a single file, and declares that this program simply cannot work because it’s a hideous blob. They dust off the latest hipster framework and start rewriting. They burn months on reprogramming (a nice example from the open-source world: Firefox in Rust). And they lose all the accumulated knowledge embedded in the old code. Why is there this ugly hack where product code 7 makes the whole thing behave differently? That’s unsystematic and ugly. And then they discover there was a good reason — this product genuinely needs to behave differently. I’m not saying this kind of cleanup doesn’t remove plenty of nonsense, but does anyone ask whether the rewrite is actually worth it? Whether we’re throwing precious resources out the window just because someone is a purist who wants beautiful code?
Plot twist: The same thing will happen to the new code in three years. Someone new will come along and say it’s terrible, unmaintainable, and needs to be rewritten from scratch. And round and round it goes.
I get that human programmers don’t want to dig through ugly century-old code in a single file full of hacks and hotfixes. But human programmers don’t have to do that anymore. When you need to change the behavior of one field, you tell the AI tool that you need to change the behavior of one field. The AI model reads it and does it. No complaining.
Where the biggest savings are
Everything below the software architect level. You need to know what you want and what it should look like. If you know that, it flies. If you don’t, AI won’t help you. The people without technical skills whom I’ve seen sit down with it haven’t been able to pull it off yet.
The productivity is absurd. I know people who ship a software project per week (roughly the same for me). People who have their own software studio and are asking around for ideas because they’ve already built everything they could think of. In the short term there’s massive arbitrage, because doing things this way is radically cheaper.
The creativity problem
The key issue: Most developers have a craft but not creativity. They need to be told what to do, and they do it. Now that the person who knows what needs to be done no longer needs people to execute, the question remains: what will these “unnecessary” people do?
For a while they’ll clean up technical debt at work. But long-term, I don’t know.
The open question is whether there are enough creative entrepreneurs who can unlock that new productive capacity. And whether people can handle such a fast-moving environment.
Here’s what I mean: For people who have ideas and execution skills beyond coding, this is a brutal boost. For people who spent eight hours a day typing what someone else told them to — it’s also a boost, but if they now need four hours a week instead, can they use the freed-up time productively and do something else? If yes, they’ll help their project enormously. But if they can’t, nine out of ten will be let go.
Not just IT
This doesn’t end with coding. Tools already exist for office work and for lawyers. The difference between a lawyer who spends three days drafting a contract and one who prompts it in an hour — confident it references the right and current laws — will be enormous. For a while they’ll still bill the three-day price, but that won’t last. At the end of the day, a lawyer will be a commonly accessible service. Or you’ll do it yourself.
I’m not saying lawyers and developers are useless — quite the opposite, they have skills because they know what they want. But they can do far more, which means far fewer of them will be needed. And at the same time we’ll use their services more — I expect a wave of great new software. And maybe better contracts, better representation in court (you still have to show up for that), and so on.
The cloud analogy in banking
This reminds me of the story of banks and the cloud. Banks spent decades building their own data centers. It was insanely expensive, but they felt there was no other way. Then someone came along, moved everything to the cloud, negotiated contract terms that satisfied the regulator, and suddenly the sacred rule wasn’t sacred.
AI is the same. Managers pretend they can’t send sensitive data to an American corporation. But local inference is improving, AI companies have solutions for Europe, and most of those requirements are bullshit anyway. One company will start doing it, their costs will drop, and then everyone will follow.
AI arbitrage
For a few months, maybe years, AI arbitrage will work. People who need problems solved don’t know about the productivity explosion and will keep paying slightly lower but still “pre-AI” prices for a while. I believe some people — developers, for instance — will take on three or four full-time remote jobs and handle all of them. They’ll live off one salary and buy Bitcoin with the other two or three. Then, when the burnout hits, they’ll take a three-year vacation, and Bitcoin will have mooned by then. I expect them on beaches.
OK, this prediction probably won’t fully come true, but the opportunity for AI arbitrage is massive. I’m curious whether and how people will use it.
What it all means
I can’t make a good macro prediction. I have no idea how the market will shake out. The upside and the downside are both brutal. A lot of money will flow to American corporations, though open source models (like Kimi K2.5) are starting to be genuinely good.
I hope the result will be better and more accessible services for everyone.
But I don’t think “business as usual, it’s just hype” is a realistic scenario. The opportunities for AI arbitrage and entrepreneurship are massive.
Hold on to your hats.