> But by weeks 7 or 8, one team hit a wall. They could no longer make even simple changes without breaking something unexpected. When I met with them, the team initially blamed technical debt: messy code, poor architecture, hurried implementations. But as we dug deeper, the real problem emerged: no one on the team could explain why certain design decisions had been made or how different parts of the system were supposed to work together. The code might have been messy, but the bigger issue was that the theory of the system, their shared understanding, had fragmented or disappeared entirely. They had accumulated cognitive debt faster than technical debt, and it paralyzed them.
This was essentially my experience vibe coding a web app. I got great results initially and made it quite far quickly, but over time velocity slowed dramatically due to exactly this cognitive debt. I took my time, did a ground-up rewrite manually, and made much faster progress toward a much more stable app.
You could argue LLMs let me learn enough about the product I was trying to build that the second rewrite was faster and better informed, and that’s probably true to some degree, but it also was quite a few weeks down the drain.
I'm not a coder, I'm a medical doctor. I see some interesting parallels between the way medical students sort themselves into specialties by cognitive style and this new rift in programming with LLMs.
Some people like the deep work, some like managing a steady rain of chaos. There's no one right answer. But I'll tell you that my classmates who are happy as nephrologists are very different to the ones that are happy as transplant surgeons.
The cognitive debt framing really resonates. I've noticed two very different failure modes when working with AI coding tools:
1. The "black box" problem — you accept generated code without fully understanding it, and later can't debug or extend it because you never built the mental model.
2. The "context fragmentation" problem — you're constantly switching between reviewing AI output, correcting course, and holding your own design intent in your head. It's like pair programming with someone who has perfect syntax recall but zero memory of what you discussed 5 minutes ago.
The teams that seem to avoid the wall are the ones who treat AI as a drafting tool rather than a decision-making tool. You still need to be the architect. The AI just types faster than you do.
> a third of them were instantly converted to being very pro-LLM. That suggests that practical experience
I wasn't aware one could get 'practical experience' "instantly." I would assume that their instant change of heart owes more to other factors. Perhaps concern over the source of their next paycheck? You have admitted you just "forced" them to do this. Isn't the question then, why didn't they do it before? Shouldn't you answer that before you prognosticate?
> that junior developers will still be needed, if nothing else because they are open-minded about LLMs
You're broadcasting, to me, that you understand all of the above perfectly, yet instead of acknowledging it, you're planning on taking advantage of it.
> I think the equivalent of cruft is ignorance
Exceedingly ironic.
> Will two-pizza teams shrink to one-pizza teams
The language you use to describe work, workers, and overcoming challenges is too depressing to continue. You have become everything we hated about this profession.
If you haven't experienced a post-November-2025 coding agent before, and someone coaches you through how to one-shot prompt it into solving a difficult problem in your own codebase that you are deeply familiar with, I can see how you might be an almost instant convert.
(Based on your comment history I'm guessing you haven't experienced this yourself yet.)
This rings true for me. Up until the end of 2025 I had my doubts. I haven't fully adopted AI, but I am using it for several side projects where I normally would not have made much progress. The output w/Claude Code is solid.
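So far it's all been endless unfounded FOMO hype by people who have something to sell or podcasts to be on. I am so tired of it.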
Ask around and see if you can find anyone you know who's experienced the November 2025 effect. Claude Code / Codex with GPT-5.1+ or Opus 4.5+ really did make a material difference - they flipped the script from "can write code that often works" to "can write code that almost always works".
I know you'll dismiss that as the same old crap you've heard before, but it's pretty widely observed now.
Part of me feels like LLMs will struggle to architect code properly, no matter how good they get.
Software engineering is different from programming. Other kinds of engineers often ridiculed software engineers as "not real engineers" because mainstream engineers never had to build arbitrarily complex software systems from scratch. They have never experienced the cascading issues which often happen when trying to make changes to complex software systems. Their brief exposure to programming during their university days gave them a glimpse into programming but not software engineering. They think they understand it but they don't.
Other engineers think that they're the only ones wrestling with the laws of nature.
They're wrong. Software engineering involves wrestling with entropy itself. In some ways, it's an even purer form of engineering. Software engineering struggles against the most fundamental forces and requires reasoning skills of the highest order.
I think software engineers will be among the last of the white collar professions to be automated besides the ones which have legal protections like lawyers, judges, politicians, accountants, pilots... where a human is required to provide a layer of accountability. Though I think lawyers will be reduced to being "official human stamping machines" before software engineers are reduced to mere Product Owners.
> Though I think lawyers will be reduced to being "official human stamping machines" before software engineers are reduced to mere Product Owners
GeLLMan Amnesia – AI can fully automate every profession except the ones I’m deeply familiar with.
I’m a software engineer who wears the product owner hat a lot these days, there’s no way AI will automate this any time soon. Too much peopling and accountability.
Don't be so sure about that. These days I'm already finding it 100x easier and more informative to have complicated, charged discussions (e.g. immigration) with Gemini than with actual people. It's night and day. Accountability might be solvable too; maybe escrow, and pay me if you waste my time. Or Amazon-like reviews.
> These days I'm already finding it 100x easier/informative to have complicated charged discussions (eg immigration) with Gemini than with actual people.
It’s scary how quickly people start to mistake LLMs appeasing them for actual conversation.
Discussing something with an LLM isn’t equivalent to having a conversation with a person. It’s just a text generator trained to show you what you want to see.
Don't assume everyone is like you. I'm an early adopter and I know how to scaffold it. I generally love the bleeding edge in all things, and I'm increasingly sure it's an actual talent to be able to quickly adapt to unfamiliar things (this includes not making assumptions).
I'm not sure. Most of it is not even in the logs; it's followed up elsewhere.
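YMMV. Ask the bot for supporting evidence, and follow up on google/wikipedia.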
You can try something like this on Gemini 3 Pro:
> Break down aspects of the economy by amenability to state control high/medium/low, based on what we see in successful economies. Include a rationale and supporting evidence/counterexamples. Present it in 3 tables.
It should give you dozens of things you can look up. It might mention successful Singapore and Vienna-style public housing. There are some nice videos on that on YouTube.
Online discussions are usually at the level of "[Flagged] Communism bad".
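If you'd rather poke at it from code than the chat UI, here's a minimal sketch assuming the google-genai Python SDK; the model ID below is a placeholder, so substitute whatever Gemini 3 Pro is actually called in the API:

```python
# Minimal sketch, assuming the google-genai SDK (pip install google-genai)
# and a GEMINI_API_KEY environment variable. The model ID is a placeholder.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

prompt = (
    "Break down aspects of the economy by amenability to state control "
    "high/medium/low, based on what we see in successful economies. "
    "Include a rationale and supporting evidence/counterexamples. "
    "Present it in 3 tables."
)

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder: use the current Gemini 3 Pro ID
    contents=prompt,
)
print(response.text)
```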
Not that guy, but for me it's something like TensorFlow/PyTorch: a domain-specific language for a scientific application, with a Python API and a Rust core for very fast, safe calculations. It has all kinds of bells and whistles you'd want, like automatic differentiation, lazy evaluation, provenance, serialization, etc. Occasionally it dips down to raw pointer work too. It's easy to test, so AI excels at this type of thing.
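To give a flavor of what I mean, here's a toy sketch (pure Python, illustrative only, not the actual project): a lazy expression graph with reverse-mode automatic differentiation. The real thing would keep a Python API like this and push the numeric core down into Rust, as described above.

```python
# Toy sketch of a lazy expression graph with reverse-mode autodiff.
# Illustrative only; a real library would do this in a compiled core.

class Node:
    def __init__(self, op, parents=(), value=None):
        self.op = op            # "const", "add", or "mul"
        self.parents = parents  # upstream Nodes
        self.value = value      # only set for constants/leaves
        self.grad = 0.0

    def __add__(self, other):
        return Node("add", (self, _wrap(other)))

    def __mul__(self, other):
        return Node("mul", (self, _wrap(other)))

    def forward(self):
        # Lazy evaluation: nothing is computed until forward() is called.
        if self.op == "const":
            return self.value
        a, b = (p.forward() for p in self.parents)
        return a + b if self.op == "add" else a * b

    def backward(self, seed=1.0):
        # Reverse-mode autodiff over the same graph (recomputes forward
        # values for simplicity; a real library would cache them).
        self.grad += seed
        if self.op == "add":
            for p in self.parents:
                p.backward(seed)
        elif self.op == "mul":
            a, b = self.parents
            a.backward(seed * b.forward())
            b.backward(seed * a.forward())

def _wrap(x):
    return x if isinstance(x, Node) else Node("const", value=float(x))

x = Node("const", value=3.0)
y = x * x + 2.0          # builds a graph; nothing is evaluated yet
print(y.forward())       # 11.0
y.backward()
print(x.grad)            # dy/dx = 2*x = 6.0
```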
Beautifully expressed… you missed doctors in your list of white collar professions, but I’m sure surgeons and pilots will outlive all of us from an AI resilience standpoint.
It's kind of terrifying to think that all professions are going to have to shift away from value creation to pure politics to survive.
I have a feeling that big tech companies will be legally forced to pay royalties to software engineers. Once software engineers stop applying their reasoning skills to solving real problems and start vengefully focusing it on politics, we're going to corrupt the whole system in our favor. We have enough collective knowledge to frame such corruption as moral in the context of an already corrupt system.
Either software engineers will create regulatory moats for themselves or there will be a broader political movement like communism. I've met many people working deep in the critical systems which underpin our society who are full-blown communists.
I've been doing this for more than 25 years.