TBH, it all feels like a huge gamble at this point. Neither skills, education, institutional ties, nor current employment can guarantee a stable foundation for life.
This hits harder depending on how much money, social capital, or debt you accumulated before this volatility began. If you’ve paid off your debts, bought a house, and stabilized your family life, you’re gambling with how comfortable the coming years will be. If you’re a fresh grad with student debt, no house, and no social network, you’re more or less gambling with your life.
Amen. It's hard to live with hope right now at all. Programmer or otherwise we're constantly told we're all going to be replaced and the economy is a mess (US). Definitely a depressing time to be alive.
> Amen. It's hard to live with hope right now at all. Programmer or otherwise we're constantly told we're all going to be replaced and the economy is a mess (US). Definitely a depressing time to be alive.
I don't doubt this; the question, however, is whether AI will do this in our lifetime. Industrialization led to prosperity in the long term, but initially it led primarily to the proletarianization of the people. Are you willing to accept a devaluation of your skills and a decline in your prosperity so that in 50 to 100 years there is a chance that AI will lead to a better future?
I think we can assume it will create disruption, but by definition disruption is both positive and negative for different individuals and dimensions, and it is small solace if society improves while your life languishes or declines; this is just what's happened to a generation of young males in the US, and it is having huge repercussions. I think you're right to suggest the goal is to avoid letting the uncertainty make you depressed, but that does not automatically make it so for everyone.
Is that AI generated by any chance? Seems like an AI crystal ball that you're looking into.
It's fine to have that opinion, but please frame it as an opinion, or else give me the lotto numbers for next week if you can predict the future that accurately.
I felt a lot safer when I was a young grad than now that I have kids to support and I can't just up and move to wherever the best job opportunity is or live off lentils to save money or whatever.
Yeah, kids change the landscape a lot. On the other hand, if you don't have any personal ties, it's easier to grab opportunities, but you are unlikely to build any kind of social network when chasing jobs all over the country/world.
Either way, there is little to no path toward the "family + place to live + stable job" model.
There must be "dozens of us" with this fear right now. I'm kinda surprised there isn't a rapidly growing place for us to discuss this... (a YouTube channel, an X account, a Discord server...)
As for me, I've just been broke and unemployed since 2010.
Looking back, I was essentially railroaded into a dead-end career by society, and any attempt to get into software engineering over the years has been blocked by the simple fact I don't possess the right piece of paper. Nor am I willing to spend a shit ton of cash and more importantly time "going back to school" to learn what I already know pretty damn well, given that I've been programming since I was 8 and it was the strength of my computer skills that got me into the other career in the first place.
My actual accomplishments in the world of computing, sitting here day to day in front of my personal computer, are the stuff of legends. Things like building my own Linux distro completely from scratch, now consisting of over 1,500 packages, cross compiling to multiple platforms. How many people with a Master's degree or Ph.D. ever did this as part of their degree program? Yet I'm looked upon as unemployable detritus compared to some kid fresh out of college with a newly minted BullShit degree.
Over the years the problem has gotten worse and worse. First it was just being filtered out by HR. Now it's all the other craziness also, like going to 7+ interviews only to be told you're rejected, and so on, and that's after putting in hundreds of applications to get one callback. How the hell am I supposed to go through that? I Fucking Refuse.
Thank God I'm OK with living out of the Piggly Wiggly dumpster.
Protip: living out of the Piggly Wiggly dumpster is actually a pretty good alternative to absolute slavery to government/corporations. Your very entitled and spoiled American wife probably won't agree however. Thank God I don't have one of those.
I'm confused as to why someone who freely admits they have been broke & unemployed for 15 years feels they are qualified to provide "advice", make critical judgement calls about others and brag about their awesomeness.
>> My actual accomplishments in the world of computing ... are the stuff of legends
Protip: When you consistently present yourself as somebody with a massively inflated ego who will be a constant pain to interact with, nobody's going to hire you, skills or not.
I created my first Linux from scratch when I was a freshman in college in a third-world country (not India). Fast forward a few years, and I now write Linux kernel code for a living. Not sure what you did wrong, bud, to end up miserable like this.
True. This is one of the best arguments for not having kids. I could never imagine putting myself in that uncertain situation. Much better to reduce those risks, and focus on yourself.
Having kids is a personal choice. The stress of having to support them is real and it might mean, at times, you sacrifice more than you would have without kids.
It's been entirely worth it for me and I cannot imagine my life without kids. But it's a deeply personal choice and I am not buying or selling the idea. I would just say nobody is ever ready and the fears around having them probably are more irrational than rational. But not wanting them because of how it might change your own life is a completely valid reason to not have kids.
It's been a cause of mild background anxiety for me for the past 3 years. One part is financial and the other is a potential loss of a comfortable and relatively high status job that I can get even with below average social and physical skills.
I need about 4.5 years until basic financial independence; I wonder how it feels to be at that point.
I'm older, aware, decently resourced and really trying NOT to play, but it is still hard to accomplish. I'm married with 3 kids, and even though I sit out much of the nonsense, your friends, family and community will keep pulling you back in. It's hard to do "not playing" without "not participating", and I don't think anybody should do that.
I say this without hyperbole: we are (IMHO) on the verge of total systemic collapse.
We've had 50+ years of deteriorating worker conditions and a massive concentration of wealth to like 10,000 people. The 1980s crushed the labor movement, to all of our detriment.
The GFC destroyed the career prospects of many millennials, who discovered their entry-level positions no longer existed, so we created a generation that's loaded with student debt, working as baristas.
A lot of people on HN ignored this because the 2010s were good for tech people but many of us didn't realize this post-GFC wave would eventually come for us. And that's what's happening now.
So on top of the millennials we now have Gen Z, who have correctly realized they'll never have security, never buy a house, and never retire. They'll live paycheck to paycheck, barely surviving until they die. Why? All so Jeff Bezos can have $205 billion instead of $200 billion.
I'm reminded of the quote "only nine meals separate mankind from anarchy".
I believe we've passed the point where we can solve this problem with electoral politics. Western democracies are being overtaken by fascists because of increasing desperation and the total destruction of any kind of leftism since WW2. At this point, it ends violently and sooner than many think.
I do wonder what will come next, it seems very unlikely that modern states can effectively be toppled and replaced by revolutions but maybe the nature of revolutions will change themselves. After all, it's not like the neoliberal paradigm was always so, it was systematically planned by elites in the 1970s and we're now proudly living in the society they envisioned (elites have wealth while everyone else struggles). The neoliberal establishment was definitely a revolution that impacted and destroyed many lives but it wasn't treated as such.
I guess the next turning of the wheel will be similar too.
And yet the current administration, like every other administration since the mid 90s, still sets labor immigration policy on the testimony of the tech industry that there is still a critical shortage of tech labor, so the doors must remain open for the 30th year of the temporary program that's only going to be in place until the tech companies have time to train domestic talent. If you have a problem with this, you're a racist Nazi who should be excluded from society. Left, right, up, down, they all agree on this, as does the vast majority of posters here. Their defense for this is that little down arrow, since they have no other legitimate defense for the thirty-something-th year of the temporary program to give them time to train the talent they claim doesn't exist in the United States.
> The flip scenario: AI unlocks massive demand for developers across every industry, not just tech. Healthcare, agriculture, manufacturing, and finance all start embedding software and automation. Rather than replacing developers, AI becomes a force multiplier that spreads development work into domains that never employed coders. We’d see more entry-level roles, just different ones: “AI-native” developers who quickly build automations and integrations for specific niches.
This is what I expect to happen, but why would these entry-level roles be "developers". I think it's more likely that they will be roles that already exist in those industries, where the responsibilities include (or at least benefit from) effective use of AI tools.
I think the upshot is that more people should probably be learning how to work in some specific domain while learning how to effectively use AI to automate tasks in that domain. (But I also thought this is how the previous iteration of "learn to code" should be directed, so maybe I just have a hammer and everything looks like a nail.)
I think dedicated "pure tech" software, where the domain is software rather than some other thing, is more likely to be concentrated in companies building the infrastructure that is still needed to make this all work. That is, the models themselves and all the surrounding tools, and all the services and databases etc. that are used to orchestrate everything.
An AI-enabled developer is still a full-time job that requires SWE expertise. I think the quoted portion is correct, but it will be a gradual change as CTO/CIOs realize the arbitrage opportunity in replacing most of their crappy SaaS subscriptions with high-velocity in-house solutions. The savvy ones, at least.
My experience hasn't been that LLMs automate coding, just that they speed it up. It's like I know what I want the solution to be and I'll describe it to the LLM, usually for specific code blocks at a time, and then build it up block by block. When I read Hacker News, people talk like it's doing much more than that. It doesn't feel like an automation tool to me at all. It just helps me do what I was gonna do anyway, but without having to look up library function calls and language-specific syntax.
Yeah I also sense this disconnect between the reality and hype.
In part, I think what people are responding to is the trajectory of the tools. I would agree that they seem to be on an asymptote toward being able to do a lot more things on their own, with a lot less direction. But I also feel like the improvements in that direction are incremental at this point, and it's hard to predict when or if there will be a step change.
But yeah, I'm really not sure I buy this whole thing about orchestrating a symphony of agents or whatever. That isn't what my usage of AI is like, and I'm struggling to see how it would become like that.
But what I am starting to see, is "non-programmers" beginning to realize that they can use these tools to do things for their own work and interests, which they would have previously hired a programmer to do for them, or more likely, just decided it wasn't worth the effort. And I think that's a genuine step change that will have a big effect on our industry.
I think this is ultimately a very good thing! This is how computers should work, that anybody can use them to automate stuff they want to do. It is not a given that "automating tasks" is something that must be its own distinct (and high paying) career. But like any disruption, it is very reasonable to feel concerned and uncertain about the future when you're right in the thick of it.
> My experience hasn't been that LLMs automate coding, just that they speed it up.
This is how basically everyone I know actually uses LLMs.
The whole story about vibecoding and LLMs replacing engineers has become a huge distraction from the really useful discussions to be had. It’s almost impossible to discuss LLMs on HN because everyone is busy attacking the vibecoding strawman all the time.
As a professional programmer, I think both are useful in different scenarios.
You're maintaining a large, professional codebase? You definitely shouldn't be vibe coding. The fact that some people are is a genuine problem. You want a simple app that you and your friends will use for a few weeks and throw away? Sure, you can probably vibe code something in 2 hours instead of paying for a SaaS. Both have their place.
I’m seeing vibe coding redefine what the product manager is doing. Specifically, adding solution execution to its existing strategy and decision making responsibilities. The PM puts solutions in front of a customer and sees what sticks, then hands over the concept to engineering to bake into the larger code base. The primary change here is no longer relying on interviews and research to make product decisions that engineering spends months building only to have flop when it hits market. The PM is being required to build and test dozens of solutions before anything makes its way to engineering resources. How engineering builds the overall solution is still under their control but the fit is validated before it hits their desk.
I think the problem starts with the name. I've been coding with LLMs for the past few months, but most of it is far from "vibed": I am constantly reviewing the output and guiding it in the right direction. It's more like a turbocharged code editor than a "junior developer", imo.
> The whole story about vibecoding and LLMs replacing engineers has become a huge distraction
Because the first thing that comes from an individual speed-up is not engineers making more money but there being fewer engineers. How much fewer is the question. Would they be satisfied with 10%, 50%, or maybe 99%?
Anecdotal at best, but I have directly heard CTOs - and hear noise beyond my immediate bubble - talk about 10x improvements with a straight face. Seems ridiculous to me, and even if the coding gets 10x easier, the act of defining & solving problems doesn't. #nosilverbullet
Generally the demand for software engineers has increased as their productivity has increased, looking back over the past few decades. There seems to be effectively infinite demand for software from consumers and enterprises so the cheaper it gets the more they buy.
If we doubled agricultural productivity globally we'd need to have fewer farmers because there's no way we can all eat twice as much food. But we can absolutely consume twice as much CSS, try to play call of duty on our smart fridge or use a new SaaS to pay our taxes.
Oh but we can absolutely let all that food go to waste! In many places unbelievable amounts of food go to waste.
Actually, most software either is garbage or goes to waste at some point too. Maybe that's too negative. Maybe one could call it rot or becoming obsolete or obscure.
I have been around for “the past few decades”. Then you saw the rapid growth of the internet, mobile and BigTech. Just from the law of large numbers, BigTech isn’t going to grow exponentially like it did post 2010.
It’s copium to think that with the combination of AI and oversupply of “good enough” developers, that it won’t be harder for developers to get jobs. We are seeing it now.
It wasn’t this bad after the dot com bust. Then if you were just an ordinary enterprise developer working “in the enterprise” in a 2nd tier city (raises hand), jobs were plentiful.
> Generally the demand for software engineers has increased as their productivity has increased, looking back over the past few decades. There seems to be effectively infinite demand for software from consumers and enterprises so the cheaper it gets the more they buy.
I see this fallacy all the time but I don't know if there is a name for it.
I mean, we used to make fun of MBAs for saying the same thing, but now we should be more receptive to the "Line Always Goes Up" argument?
As counter-anecdata, I have family members who are growing businesses from scratch, and they constantly talk to me about problems they want to solve with software. Administrative problems, product problems, market research problems, you name it. I'm sure they have other problems they don't talk to me about where they're not looking for software solutions, but the list of places they want software to automate things is never-ending.
The consumer internet is mostly propped up by white-collar people buying stuff online and clicking on ads. Once the cutting starts, the whole internet economy just becomes a money-swapping machine between 7 VC groups.
The demand for paid software is decreasing because these AI companies are saying "Oh, don't buy that SaaS product, because you can build it yourself now."
SaaS is not just software though, it’s operationalized software and data management. The value has increasingly been in the latter well before AI. How many open source packages have killed their SaaS competitors (or wrappers)?
As much as I appreciate the difference between literal infinity and consumers' demand for software, there's just so much bad software out there waiting to be improved that I can't see us hitting saturation soon.
This reasoning is flawed in my opinion, because at the end of the day, the software still has to be paid for (for the people who want/need to make a living out of it), and customers' wallets are finite.
Our attention is also a finite resource (24h a day max). We have already seen how this drove the enshittification of large swathes of software like social media, where grabbing attention for a few seconds more drives the main innovation...
The demand for software has increased. The demand for software engineers has increased proportionally, because we were the only source of software. This correlation might no longer hold.
Depending on how the future shapes up, we may have gone from artisans to middlemen, at which point we're only in the business of added value and a lot of coding is over.
Not the Google kind of coding, but the "I need a website for my restaurant" kind, or the "I need to aggregate data from these Excel files in a certain way" kind. Anything where you'd accept cheap and disposable. Perhaps even the traditional startup, if POCs are vibecoded and engineers are only introduced later.
Those are huge businesses, even if they are not present in the HN bubble.
> "I need a website for my restaurant" kind, or the "I need to aggregate data from these excel files in a certain way" kind
I am afraid those kinds of jobs were already gone by 2015. No-code website makers have been available since then, and if you can't do it yourself you can just pay someone on Fiverr and get it done for $5-50 at this point; it's so efficient even AI won't be more cost-effective than that. If you have $10k saved you can hire a competitive agency to maintain and build your website. This business has been completely taken over by low-cost Fiverr automators and by agencies for high-budget projects. Agencies have become so good now that they manage websites from Adidas to Lando Norris to your average mom & pop store.
I do software engineering at a research-oriented institution, and there are some projects I can now prototype without writing a line of code. The productivity benefits are massive.
Prototypes are always meant to be thrown away, though; someone's going to have to redo it to comply with coding standards, scaling requirements, and existing patterns in the code base.
If the prototype can be just dropped in and clear a PR and comply with all the standards, you're just doing software engineering for less money!
> It’s almost impossible to discuss LLMs on HN because everyone is busy attacking the vibecoding strawman all the time.
What’s “the vibecoding strawman”? There are plenty of people on HN (and elsewhere) repeatedly saying they use LLMs by asking them to “produce full apps in hours instead of weeks” and confirming they don’t read the code.
Just because everyone you personally know does it one way, it doesn’t mean everyone else does it like that.
I'd assume the straw-man isn't that vibe-coding (vbc) doesn't exist, but that all/most ai-dev is vbc, or that it's ok to derail any discussion on ai-assisted dev with complaints applicable only/mainly to vbc.
Neither of those would be a strawman, though. One would be a faulty generalization and the other is airing a grievance (could maybe be a bad faith argument?).
Though I get that these days people tend to use “strawman” for anything they see as a bad argument, so you could be right in your assessment. Would be nice to have clarification on what they mean.
Hmm, if the purpose of either is so an "easier" target can be made, I think it could still qualify as a straw-man; I think an accusation of straw-manning is in part an accusation of another's intent (or bad faith - not engaging with the argument).
> Hmm, if the purpose of either is so an "easier" target can be made, I think it could still qualify as a straw-man
Good point.
> I think an accusation of straw-manning is in part a accusation of another's intent (or bad faith - not engaging with the argument).
There I partially disagree. Straw-manning is not engaging with the argument but it can be done accidentally. As in, one may genuinely misunderstand the nuance in an argument and respond to a straw man by mistake. Bad faith does require bad intent.
Half strawman -- a mudman, perhaps. Because we're seeing proper experts with credentials jump on the 'shit, AI can do all of this for me' realization blog post train.
Well, I have a lot of respect for antirez (Redis), and at the time of my writing this comment he had a front page blog post in which we find:
"Writing code is no longer needed for the most part."
It was a great post and I don't disagree with him. But it's an example of why it isn't necessarily a strawman anymore, because it is being claimed/realized by more than just vibecoders and hobbyists.
> Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters -- and that's not saying much -- than I do about python. It started out as my typical "google and do the monkey-see-monkey-do" kind of programming, but then I cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualizer.
Dunno why the author thinks an AI-enhanced junior can match the "output" of a whole team, unless he means in generating lines of code, which is to say tech debt.
Being able to put a lot of words on screen is not the accomplishment in programming. It usually means you've gone completely out of your depth.
My experience (with minfx.ai) has been that it is very important to build a system which imposes lots of constraints on the code. The more constrained you can make it, the better. Rust helps a lot in this. Thanks to this, for the first time in my career, I feel like the bigger the system gets, /the easier/ it is to develop, because AI can discover and reuse common components, while a human would struggle to search for these and figure out how to use them in a large codebase. Very counter-intuitive!
If by block by block you mean you stop using an IDE and spend most of your time looking at diffs, sure. Because in a well structured project, that's all you need to do now: maintain a quality bar and ensure Claude doesn't drop the ball.
I notice a huge difference between working on large systems with lots of microservices and building small apps or tools for myself. The large-system work is what you describe, but for small apps or tools I resonate with the automate-coding crowd.
I've built a few things end to end where I can verify the tool or app does what I want and I haven't seen a single line of the code the LLM wrote. It was a creepy feeling the first time it happened but it's not a workflow I can really use in a lot of my day to day work.
I'm somewhere in between myself. Before LLMs, I used to block a few sites that distracted me by adding entries to the /etc/hosts file on my work machine, mapping them to 127.0.0.1. I also made the file immutable so that it would take a few steps for me to unblock the sites.
The next step was to write a cron job that would reapply the chattr +i and rewrite the file every 5 minutes. Sort of an enforcer. I used Claude (web) to write this and cut/pasted it, just because I didn't want to bother with bash syntax that I've learned and forgotten several times.
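For the curious, here's a minimal sketch of what such an enforcer could look like. This is my reconstruction in Python rather than bash, and the site list, path, and cron schedule are assumptions, not the actual script:

    #!/usr/bin/env python3
    # Hypothetical reconstruction of the /etc/hosts "enforcer" described above.
    # Assumes it runs as root from cron, e.g.: */5 * * * * /usr/local/bin/enforcer.py
    import subprocess

    HOSTS = "/etc/hosts"
    BLOCKED = ["example-distraction.com", "another-timesink.net"]  # made-up site list

    def enforce():
        # Lift the immutable flag so the file can be rewritten
        subprocess.run(["chattr", "-i", HOSTS], check=True)
        with open(HOSTS, "r+") as f:
            lines = f.read().splitlines()
            for site in BLOCKED:
                entry = f"127.0.0.1 {site}"
                if entry not in lines:
                    lines.append(entry)
            f.seek(0)
            f.write("\n".join(lines) + "\n")
            f.truncate()
        # Reapply chattr +i so unblocking takes deliberate extra steps
        subprocess.run(["chattr", "+i", HOSTS], check=True)

    if __name__ == "__main__":
        enforce()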
I then wanted something stronger and looked at publicly available things like pluckeye but they didn't really work the way I wanted. So I tried to write a quick version using Claude (web) and started running it (October 2025). It solved my problem for me.
I wanted a program to use aider on, and I started with this. Every time I needed a feature (e.g. temporary unblocks, preventing tampering and uninstalling, blocking in the browser, violation tracking, etc.), I wrote out what I wanted and had the agent do it. Over the months, it grew to around 4k lines (a single file).
Around December, I moved from aider to Claude Code and continued doing this. The big task I gave it was to refactor the code into smaller files so that I could manage context better. It did this well and added tests too (late December 2025).
I added a helper script to update URLs to block from various sources. Vibe-coded too. Worked fine.
Then I found it hogging memory because of some crude mistakes I vibe-coded early on, and fixed that. Cost me around $2 to do so (Jan 2026).
Then I added support for locking the screen when I crossed a violation threshold. This required some Xlib code to be written. I'm sure I could have written it, but it wasn't really worth it: I knew what to do, and doing it by hand wouldn't have taught me anything except the innards of a few libraries. I added that.
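If you're wondering what that lock step amounts to, here's a hedged sketch of the threshold logic. The author went through raw Xlib; shelling out to xdg-screensaver is the simpler route I'd reach for, and the threshold number is made up:

    import subprocess

    VIOLATION_THRESHOLD = 3  # assumption; the real threshold is whatever you configure
    violations = 0

    def record_violation():
        global violations
        violations += 1
        if violations >= VIOLATION_THRESHOLD:
            # xdg-screensaver ships with xdg-utils on most Linux desktops;
            # `loginctl lock-session` is an alternative on systemd setups.
            subprocess.run(["xdg-screensaver", "lock"], check=False)
            violations = 0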
So, in short, this is something that's 98% AI-coded, but it genuinely solves a problem for me and has helped me change my behaviour in front of a computer. My research turned up no companies that offer this as a service for Linux. I know what to do but don't have the time to write and debug it. With AI, my problem was solved and I have something that is quite valuable to me.
So, while I agree with you that it isn't an "automation tool", the speed and depth which it brings to the environment has opened up possibilities that didn't previously exist. That's the real value and the window through which I'm exploring the whole thing.
It seems alright, but I wonder if it crashes the economy for the vast majority of internet businesses. I personally run some tool websites, like ones to convert images and cut videos, and the traffic for now seems stable, but my tools don't target devs. Most likely you didn't actually need it, but who am I to judge; I just find myself doing random projects because it "takes less time".
I don't think that works. The fact that it can produce different output for the same input, usage of tools etc. don't really fit into the analogy or mental model.
What has worked for me is treating it like an enthusiastic intern with his foot always on the accelerator pedal. I need to steer and manage the brakes; otherwise it'll code itself off a cliff and take my software with it. The most workable model is a pair programmer. For trivial changes and repeatedly "trying stuff out", you don't need to babysit. For larger pieces, it's good to make each change small and review what it's trying.
I feel like some of the frontier models are approaching a run-of-the-mill engineer who does dumb stuff frequently. That said, with appropriate harnessing, it's more like go-karts on a track; you can't keep them out of the wall, but you can reset them and get them back on a path (when needed). Not every kart ends up in the wall, but all of them want to go fast, so the better defined the track is, the more likely the karts will find a finish line. Certainly more likely than if you just stuck them in a field with no finish line and said "go!".
>We would need all the intellect in the world to get the interface narrow enough to be usable, and, in view of the history of mankind, it may not be overly pessimistic to guess that to do the job well enough would require again a few thousand years.
The reason it is better than search is that with search you have to narrow down to one specific part of what you are trying to do. For example, if you need a unique-id-generating function as part of what you're building, you first search for that; then, if you need to make sure whatever gets output is a responsive 3 columns, you might search for that; and then you write code to glue the pieces together into what you need. With AI you can ask for all of this together, get something about as good as the searched-for results would have been, and then do your glue code and the fixes you would normally have done.
If a bit of functionality would have taken four searches, it trims the time requirement down by roughly three of those searches.
It does however remove the benefit of having done the search which might be that you see the various results, and find that a secondary result is better. You no longer get that benefit. Tradeoffs.
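To make the example concrete, here's a toy version of that glue in Python: the two pieces you'd once have searched for separately (a unique-id helper and a responsive 3-column layout), stitched together. Everything here is hypothetical and only illustrates the shape of the workflow:

    import uuid

    def unique_id() -> str:
        return uuid.uuid4().hex  # searched-for piece #1: unique id generation

    CSS = """
    .grid { display: grid; grid-template-columns: repeat(3, 1fr); gap: 1rem; }
    @media (max-width: 600px) { .grid { grid-template-columns: 1fr; } }
    """  # searched-for piece #2: responsive 3 columns

    def render(items):
        # the glue code: combine both pieces into one HTML snippet
        cells = "\n".join(f'<div id="{unique_id()}">{item}</div>' for item in items)
        return f'<style>{CSS}</style>\n<div class="grid">\n{cells}\n</div>'

    print(render(["alpha", "beta", "gamma"]))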
All I know is that firing half my employees and never hiring entry level people again nets me a bonus next quarter.
Not really sure why this article is talking about what happens 2 years from now since that’s 8 times longer than anything anyone with money or power cares about.
Hmmm, I know this isn't true, because if management only thought quarterly, no one would ever hire anyone. Hiring someone takes 6+ months to pay off as they get up to productivity.
I can't tell if we're doing like a sarcastic joking thing where we're making fun of management, or if you really believe this. If we're joking around, then haha. If you really believe this to be true, then you have a warped view of reality.
The street cred doesn't come from managing more resources, the street cred comes from delivering more.
He’s keeping some around so he can fire half again next quarter for another bonus. That’s the sort of forward-thinking strategic direction that made him the boss man.
I’m doing both. For production code that I care about, I’m reading every line the LLM writes, correcting it a lot, chatting with an observer LLM who’s checking the work the first LLM and I are writing. It’s speeding stuff up, it also reduces the friction on starting on things. Definitely a time saver.
Then I have some non-trivial side projects where I don’t really care about the code quality, and I’m just letting it run. If I dare look at the code, there’s a bunch of repetition. It rarely gets stuff right the first time, but that’s fine, because it’ll correct it when I tell it it doesn’t work right. Probably full of security holes, code is nasty, but it doesn’t matter for the use-cases I want. I have produced pieces of software here that are actively making my life better, and it’s been mostly unsupervised.
> The bottom line: Junior developer hiring could collapse as AI automates entry-level tasks
If AI automates entry-level tasks as they exist today, that just means "entry-level" means something different now. It doesn't mean entry-level ceases to exist; entry-level as we know it does, but not entry-level in general.
>> The skillset is shifting from implementing algorithms to knowing how to ask the AI the right questions and verify its output.
The question is, how much faster is verification-only versus writing the code by hand? You gain a lot of understanding when you write the code yourself, and understanding is a prerequisite for verification. The idea seems to be that a quick review is all that should be needed: "LGTM". That's fine as long as you understand the tradeoffs you are making.
With today's AI you either trade speed for correctness or you have to accept a more modest (and highly project specific) productivity boost.
And there's a ton of human incentive here to take shortcuts in the review part. The process almost pushes you to drop your guard: you spend less physical time observing the code as it's written, you get huge chunks of code dropped on you, iterations change too much to keep a mental model, and there's FOMO involved about the speed gain you're supposed to get... We're going to see worse review quality just as a matter of the UX and friction of the tool.
Yes! It depends on the company, of course, but I think plenty of people are going to fall for the perverse incentives while reviewing AI output for tech debt.
The perverse incentives being that tech debt is non-obvious & therefore really easy to avoid responsibility for.
Meanwhile, velocity is highly obvious & usually tied directly to personal & team performance metrics.
The only way I see to resolve this is strict enforcement of a comprehensive QA process during both the planning & iteration of an AI-assisted development cycle.
But when even people working at Anthropic are talking about running multiple agents in parallel, I get the idea that CTO's are not taking this seriously.
IMO, the OP has bad AI-assisted takes on almost every single "critical question", which makes me doubt whether he has breadth of experience in the craft. For example:
> Narrow specialists risk finding their niche automated or obsolete
Exactly the opposite. Those with expertise will oversee the tool. Those without expertise will take orders from it.
> Universities may struggle to keep up with an industry that changes every few months
Those who know the theory of the craft will oversee the machine. Those who don't will take orders from it. Universities will continue to teach the theory of the discipline.
>The flip scenario: AI unlocks massive demand for developers across every industry, not just tech. Healthcare, agriculture, manufacturing, and finance all start embedding software and automation.
I find this one hard to believe. Software is already massively present in all these industries and has already replaced jobs. The last step is complete automation (ie drone tractors that can load up at a hub, go to the field and spray all by themselves) but the bottleneck for this isn't "we need more code", it's real-world issues that I don't see AI help solving (political, notably)
I tend to agree with your assessment. The increase in demand cannot possibly equal the loss from AI.
Given projections of AI abilities over time, AI necessarily creates downward pressure on new job creation. AI is for reducing and/or eliminating jobs (by way of increasing efficiency).
AI isn't creating "new" things; it's reducing the time needed to do what was already being done. Unlike the automobile revolution, new job categories aren't being created with AI.
Germans were so quick to revert back to paper after COVID that it felt like one of the only reasons they came out of lockdown eventually was to get paper back.
The Gewerkschaft (trade union) tactics to resist AI are what I'm really interested in seeing.
Agree, people were already worried about the excessive focus on software over physical technology well before LLMs significantly reduced the barrier to entry
From my experience, AI is only good with mainstream coding tasks: JavaScript, Node, React, CRUD. Whatever it has seen in overabundance, it is good with. Even with TypeScript it is less strong than with JavaScript. It is just a clever token generator without intelligence. Often it resembles intelligence, but so can a human by learning quotes of famous people.
It is a new and exciting tool, but it is immediately limited with medium-complexity tasks. Also, we will see a lot more code with tricky bugs coming out of AI assistants, and all of that needs to be maintained. If software development gets cheaper per line of code, then there will be more demand. And someone has to clean up the mess created by people who have no clue whatsoever about SWE.
Once upon a time, people developed software with punch cards. Even without AI, a developer today is orders of magnitude more proficient than that.
The only thing I hope I am not going to see in my lifetime is real artificial intelligence.
Funny that he mentions people not pivoting away from COBOL. My neighbors work for a bank, programming in COBOL every day. When I moved in and met them 14 years ago, I wondered how much longer they would be able to keep that up.
To be fair, that COBOL program has been working for probably 30 years (maybe even longer); that's unusually reliable and long-lived for a software project.
The only real contender in this regard is the Win32 API, and that actually did get used in the enterprise for a long time too, before the major shift to cloud and Linux in the mid-2010s.
Ultimately the proof is in the real-world use, even if it's ugly to look at... I'd say, even as someone who is a big fan of Linux, if I were given a 30-year-old obscure software stack that did nothing but work, I would be very hesitant to touch it too!
> Senior developers: Fewer juniors means more grunt work landing on your plate
I'm not sure I agree with that. Right now as a senior my task involves reviewing code from juniors; replace juniors with AI and it means reviewing code from AI.
More or less the same thing.
And the mistakes AI makes don't carry the same code smells that junior mistakes do. There are markers in human code that signal how well the author understood the problem; AI code more often looks correct at a glance even when it's horribly problematic.
Currently my job as a junior is to review vibe code that was "written" by seniors. It's just such bullshit, and they make mistakes I wouldn't even dare to make in my first year of school.
All well-documented knowledge fields will be gone if software goes. Then the undocumented ones will become documented, and they too will go. The best advice to junior devs is get a hands on job before robotic articulating sausages are perfected and humans become irrelevant blobs of watery meat.
I think the four-minute-mile moment for robotics, its GPT-3 or GPT-4 moment, will be when we see a robotic hand with the dexterity of a 6-year-old. Once that happens, it will quickly be over.
> junior developer employment drops by about 9-10% within six quarters, while senior employment barely budges. Big tech hired 50% fewer fresh graduates over the past three years.
This study showing a 9-10% drop is odd[1], and I'm not sure about their identification criteria.
> We identify GenAI adoption by detecting job postings that explicitly seek workers to implement or integrate GenAI technologies into firm workflows.
Based on that MIT study it seems like 90+% of these projects fail. So we could easily be seeing an effect where firms posting these GenAI roles are burning money on the projects in a way that displaces investment in headcount.
The point about "BigTech" hiring 50% fewer grads is almost orthogonal. All of these companies are shifting hiring towards things where new grads are unlikely to add value, building data centers and frontier work.
Moreover, the TCJA of 2017 caused software developers to stop counting toward R&D tax write-offs (I'm oversimplifying) starting in 2022. This surely has more of an effect than whatever "GenAI integrator role" postings correlate with.
AI became very popular suddenly. This is something that wasn't in anyone's budget. I believe cost savings from hiring freezes and layoffs are to pay for AI projects and infrastructure.
Sometimes I wonder if I made the wrong choice with software development. Even after getting to a senior role, according to this article, you're still expected to get more education and work on side projects outside of work. Am I supposed to want to code all the time? When can I pursue hobbies, a social life, etc.
Given how quickly models, tools and frameworks rise and fall, betting your career on a single technology stack is risky.
This was something I dealt with a lot when JS frameworks became the newest shiny thing and suddenly the entire industry shifted in a few years from being a front-end developer to being a full stack developer.
This happened to a lot of my friends who went all in on Angular. Then everybody switched to React.
The issue then became "What should I learn?", because my company (a large Fortune 200 company) was all in on Angular and wasn't looking for React developers, but I knew companies were moving away from Angular. So do I work to get better and more indispensable with Angular, and risk not knowing React? Or do I learn the shiny new framework, betting that at some point my company will adopt it or that I will be laid off and need to know it?
It feels like half my life as a dev was spent being a degenerate gambler, always trying to hedge my bets in one way or another, constantly thinking about where everything was going. It was the same thing with dozens of other tools as well. It just became so exhausting trying to figure out where to put your effort into to make sure you always knew enough to get that next job.
To put it very directly - if you are OK with being good but not exceptional at your job, this is totally fine. If you want to be exceptional you will probably need to put in the extra work. Not everyone is OK with this tradeoff and it's totally fine to "just" be good and care more about having outside hobbies and a social life and etc.
I had a period of time where I really wanted to be exceptional. I spent many hours studying and working on side projects but it just never really clicked. I think I'm decent at what I do for work but more complicated topics (graphics programming, low level memory management, etc.) just seem to not stick, no matter how many hours I put into studying. Sometimes it feels like I'm forcing this career but after this many years it's hard to give it up. I do still enjoy it but I don't think I'll ever really get it.
Your story sounds similar to mine. There are some parts of programming at which I know I will never excel. I also don't have time in my life to spend lots of hours outside of work developing my skills. I think it's important to realize that the median software engineer is probably not doing these things either. Maybe the top 10% are? Something like that would be my guess. It's okay to not be in the top 10%!
> Am I supposed to want to code all the time? When can I pursue hobbies, a social life, etc.
I feel you. It's a societal question you're posing. Your employer (most employers) deals in dollars. A business is evaluated by its ability to generate revenue; that is the purpose of a business and the fiduciary duty of the CEO in charge.
Since you're getting into a senior role, learn the mantra: "it depends" :D
The usual trade-off of a well paid software development job is lack of job security and always learning - the skill set is always changing in contrast with other jobs.
My suggestion: stop chasing trends and start listening to mature software developers to get a better perspective on what's best to invest in.
And why is the mantra always true?
You can find a stable job (slow-moving company) doing basic software development, and just learn something new every 4 years and then change companies.
Or never change companies and be the default expert, because everyone else is changing jobs; get job security, work fewer hours, and have time within your job to uplift your skills.
Or keep chasing the latest high-paid jobs/trends by sacrificing your off time.
What's the best option for you? Only you know; it depends on your own goals.
>I made the wrong choice with software development.
If you didn't like working with computers, then you (and another gazillion people who chose it for the $$$) probably made the wrong choice.
But totally depends on what you wanted to get out of it. If you wanted to make $$$ and you are making it, what is the problem? That is assuming you have fun outside of work.
But if you wanted to be the best at what you do, then you gotta love what you are doing. Maybe there are people who have superhuman discipline, but for normal people, loving what they do goes a long way toward that end.
> If you didn't like working with computers, then you probably made the wrong choice.
This doesn't match what I have seen in other industries. Many auto mechanics I know drive old Buicks or Fords with the 4.6L V8 because the cars are reliable and the last thing they want to do on a day off is work on their own car. I know a few people in other trades, like plumbers, electricians, and chefs, and the pattern holds pretty well for them as well.
You can enjoy working with computers and also enjoy not working in your personal time.
Exactly this. I love writing code and solving problems. In my 20s and very early 30s I worked a lot of long hours and tried my best to always be learning new things and upskilling but it's never ending. It's hard sometimes to not look back and think about the hours I spent on code instead of building stronger friendships and relationships.
> If you didn't like working with <insert anything>, then you ...
This type of argument can hold for any profession and yet we aren't seeing this pattern much in other white-collar professions. Professors, doctors, economists, mechanical engineers, ... it seems like pretty much everybody made the wrong choice then?
I think this is the wrong way to look at it. The OP says that he invested a lot of time into becoming proficient in something that today appears to be very close to partial extinction.
I think the question is legit, and he's likely not the only person asking it.
My take on the question: it comes down to the ability to adapt and learn new skills. Some will succeed, some will fail, but staying in a status-quo position will more likely lead to failure than to success.
Your first point hits the nail on the head. We are expected to have side projects and to keep up with new things (outside of work) but most other jobs don't have that. I would be okay with my work sending me off for additional training, on company time, but I don't want it to consume the time I have left after work.
I don't know why, but our profession is different from the others in this respect, and people often like to think that this is the norm: if you're not doing it, you're not worthwhile. I think it has to do with the psychology of the people who are generally attracted to this profession, but also with the companies that implemented those mental hacks as a means to attract the people who are 100% in. Leetcode-style interviews, where you virtually have to spend months preparing yourself even as a senior, are one example; but I also remember the age, not too long ago, when your resume wouldn't even get a look if you didn't have several open-source repositories/contributions to show. That's still partly true today.
There are plenty of such examples, but both of these imply that you're ready to devote a lot of your extra time, before or after the job, just so you can show you're relevant in the eyes of the decision makers. This normally means you're single, with no kids, no family, no hobbies but programming, etc. That works when you're in your 20s, and only up to a certain point, unless you become the weirdo in your 30s and 40s without any of these.
However, in an age of uncertainty, it may become the new normal to devote extra effort just to remain not competitive but a mere candidate for the job. Some will find the incentive for this extra pain, some will not, but I don't think it will be easy. Perhaps in 5 years we will only have "AI applied" engineers developing or specializing their own models for given domains. Writing code as we know it today is, I think, already a thing of the past.
Neither teachers nor nurses only work 40 hours and no overtime. :')
Something that requires social/interpersonal skills, though, will definitely be the thing that winds up being AI-immune. Humans are social creatures, so I assume there will always be some need for it.
Not my experience at all. The very notable engineers I know didn't do their most notable work because of engineering or coding skills. Instead it was finding interesting problems and making a start or thinking a bit differently about something and doing something about it and being approachable and available all along that made a difference.
If all they did was code all the time, write code for fun and interacted mostly with other similar people, they probably wouldn't be the first choice for these projects.
The ones who ace their careers are, for the most part, people who are fun, driven, or psychos; all social traits that make you good in the political game.
Spending lots of time with other socially awkward types talking about hard math problems or whatever will get you nowhere outside of some SF fantasy startup movie.
I'd say it's especially important for the more nerdy (myself included) to be more outgoing and do other stuff like sales, presentations, design/marketing, and workshops; that will make you exceptional, because you then have the "whole package" and understand the process and other people.
> And the most successful people I know basically did exactly that.
Well that depends heavily on how you define successful.
Successful in life? I would tend to disagree, unless you believe that career is the only thing that counts.
But even when career is concerned: the most successful people I know went on from being developer to some high end management role. The skills that brought them there definitely did not come from hanging out with other engineers talking about engineering things.
I did not do side projects. I really enjoyed most of my 20s as a single person. I was a part time fitness instructor, I dated, hung out with friends, did some traveling.
The other developers at my job also had plenty of outside hobbies.
> Your social life should be hanging out with other engineers talking about engineering things.
Fuck. That.
I worked at a FAANG; successful people weren't the people who did engineering, they were the people who did politics.
The most successful people were the ones that joined at the same time as the current VP.
Your hobbies need to be fun, to you, not to support your career. If it's just there to support your career, it's unpaid career development, not a hobby. Should people not code in their free time? That's not for me to decide. If they enjoy it, and it's not hurting anyone, then be my guest.
Engineers are generally useless at understanding what's going on in the real world, and they are also quite bad at communicating.
I love your last point. I asked this question because I used to be the person that would spend 4+ hours after work every day trying to keep up with new tech and working on side projects. But now, I've gotten into art and it's really changed my perspective on things like this. I've spent many hours doing, as you call it, unpaid career development instead of pursuing hobbies, building up my friendships, and in general just having fun. It feels like I've taken life so seriously and I don't have much to show for it.
I never worked "for fun". My job for 30 years has just been a means to support my addiction to food and shelter. I don't hate my job, especially my last 3 since 2020 when I started working remotely. But it is just something I do.
The most useful thing juniors can do now is use AI to rapidly get up to speed with the new skill floor. Learn like crazy. Self-learning is empowered by AI.
AI has a lot of potential as a personal, always on teaching assistant.
It's also a 'skip intro' button for the friction that comes with learning.
You're getting a bug? Just ask the thing rather than spending time figuring it out. You don't know how to start a project from scratch? Ask for scaffolding. Your first boss asks for a ticket? Better not to screw up; hand it to the machine just to be safe.
If those temptations are avoided you can progress, but I'm not sure that lots of people will succeed. Furthermore, will people be afforded that space to be slow, when their colleagues are going at 5x?
Modern life offers little hope. We're all using uber eats to avoid the friction of cooking, tinder to avoid the awkwardness of a club, and so on. Frictionless tends to win.
Something is very odd about the tone of this article. Is it mostly AI-written? There are a lot of references and info, but I feel far more disconnected from it.
For the record, I was genuinely trying to read it properly, but it becomes unbearable by mid-article.
Yes, lots of AI style/writing in this article. I wouldn't necessarily discredit an article just based on stylization if the content was worth engaging with ... but like you mentioned, when the AI is given too much creative control it goes off the rails by the middle and turns into what the kids call "AI slop".
It resembles an article, it has the right ingredients (words), but they aren't combined and cooked into any kind of recognizable food.
Thanks a lot for taking the time to confirm. Not hating on AI slop or anything, but I do genuinely feel that if he/she/they had invested the time to write it themselves, people would consume and enjoy it more.
It's hard to put my finger on it, but it lacks soul, the "it" factor, or whatever you want to call it. It feels empty in a way.
I mean, this is not the first AI-assisted article I'm reading, but usually it's to a negligible degree. Maybe it's just me. :)
Understandable. I usually only recognise AI assistance because someone in the comment section points it out, but the off-putting tone of this one was blatantly obvious. This is by far the most AI-influenced article I have read yet.
I have been telling people that, titles aside, senior developers were the people not afraid to write original code. I don’t see LLMs changing this. I only envision people wishing LLMs would change this.
I almost think what a lot of people are coming to grips with is how much code is unoriginal. The ones who've adjusted the fastest were humble to begin with. I don't want to claim the title, but I can certainly claim the imposter syndrome! If anything, LLMs validated something I always suspected: the amount of truly unique, success-relevant code in a given project is often very small. More often than not, it's not grouped together either; most of the time it's tailored to a given piece of functionality. For example, a perfectly accurate Haversine distance is slower than an optimized one with tradeoffs. LLMs have not yet become adept at identifying the need for those tradeoffs in context well or consistently, so you end up with generic code that works, but not great. Can the LLM adjust if you explicitly instruct it to? Sure, sometimes! Sometimes it gets caught in a thought loop too. Other times you have to roll up your sleeves and do the work like you said, which often still involves traditional research or thinking.
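To make the Haversine point concrete, here's a hedged sketch of the kind of tradeoff meant (constants and names are mine, not the commenter's): the exact spherical formula next to a cheaper equirectangular approximation that's fine for short distances but degrades near the poles. Knowing which one a given project actually needs is exactly the context a generic LLM answer tends to miss.

    import math

    R_KM = 6371.0  # mean Earth radius in km

    def haversine_km(lat1, lon1, lat2, lon2):
        # Exact great-circle distance on a spherical Earth (several trig calls).
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * R_KM * math.asin(math.sqrt(a))

    def equirectangular_km(lat1, lon1, lat2, lon2):
        # Cheaper approximation: good enough for short distances, wrong near the poles.
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return R_KM * math.hypot(x, y)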
1) Senior developers are more likely to know how to approach a variety of tasks, including complex ones, in ways that work, and are more likely to (maybe almost subconsciously) stick to these proven design patterns rather than reinvent the wheel in some novel way. Even if the task itself is somewhat novel, they will break it down in familiar ways into familiar subtasks/patterns. For sure, if a task does require some thinking outside the box, or a novel approach, then the senior developer might have better intuition on what to consider.
The major caveat to this is that I'm an old-school developer, who started professionally in the early 80's, a time when you basically had to invent everything from scratch, so certainly there is no mental block to having to do so, and I'm aware there is at least a generation of developers that grew up with stack overflow and have much more of a mindset of building stuff using cut and paste, and less having to sit down and write much complex/novel code themselves.
2) I think the real distinction of senior vs junior programmers, that will carry over into the AI era, is that senior developers have had enough experience, at increasing levels of complexity, that they know how to architect and work on large complex projects where a more junior developer will flounder. In the AI coding world, at least for time being, until something closer to AGI is achieved (could be 10-20 years away), you still need to be able to plan and architect the project if you want to achieve a result where the outcome isn't just some random "I let the AI choose everything" experiment.
I completely agree with your second point. For your first point my experience tells me the people least afraid to write original code are the people least oppositional to reinventing wheels.
The distinguishing behavior is not the quantity of effort involved but the total cost, after accounting for dependency management, maintenance time, and execution time. The people who reinvent wheels do so because they want to learn, and because they want the same effort to buy them less work in the future.
> in the early 80s, a time when you basically had to invent everything from scratch, so there is certainly no mental block against doing so. I'm also aware there is at least a generation of developers who grew up with Stack Overflow and have much more of a mindset of building things by cut and paste, and less of sitting down and writing complex/novel code themselves.
I think this is really underappreciated and was a big driver of how a lot of people felt about LLMs. I found it even more notable on a site named Hacker News. There is an older generation for whom computing was new, the 80s through the 90s probably being the prime of that era (for people still in the industry). There was constantly a new platform, language, technology, or concept to learn. Nobody knew any best practices; nobody knew how anything "should work". Nobody knew what anything was capable of. It was all trying things and figuring them out, far more trailblazing and exploring new territory, the birth of the internet being one of the last examples from that era.
The past 10-15 years of software development have been the opposite. Just about everything was evolutionary, rarely revolutionary: optimizing for scale, improving libraries, or porting successful ideas from one domain to another. A lot of shuffling deck chairs on things that were fundamentally the same. Just about every new "advance" in front-end technology was like this; something hailed as groundbreaking really took little exploration, mostly optimization within a known solution space. There was almost always a clear path. Someone always had an answer on Stack Overflow; you were never "on your own". A generation-plus grew up in that environment, and it felt normal to them.
LLMs came along and completely broke that. People who remembered when tech was new, full of potential, and unmapped loved it: here is a new alien technology, and I get to figure out what makes it tick, how it works, how to use it. On the flip side, people who were used to there being a happy path, or a manual to tell you when you were doing it wrong, got really frustrated at there being no direction, feeling perpetually lost when it didn't work the way they wanted.
I found it especially ironic, being on Hacker News, how few people seemed to have a hacker mindset when it came to LLMs. So much was "I tried something, it didn't work, so I gave up", or "I just kept telling it to work and it didn't, so I gave up". Explore; pretend you're in a sci-fi movie. Does it work better on Wednesdays? Does it work better if you stand on your head? Does it work differently if you speak pig Latin? Think sideways. What behavior can you find that makes you go "hmm, that's interesting..."?
Now I think there has been a shift very recently, with people getting more comfortable with the tech, but I was still surprised at how little of a hacker mindset I saw on Hacker News when it came to LLMs.
LLMs have reset the playing field from well manicured lawn, to an unexplored wilderness. Figure out the new territory.
To me, the "hacker" distinction is not about novelty, but understanding.
Bashing kludgy things together until they work was always part of the job, but that wasn't the motivational payoff. Even if the result was crappy, knowing why it was crappy and how it could've been better was key.
LLMs promise an unremitting drudgery of the "mess around until it works" part, facing problems that don't really have a cause (except in a stochastic sense) and which can't be reliably fixed and prevented going forward.
The social/managerial stuff that may emerge around "good enough" and velocity is a whole 'nother layer.
No, the negative feelings about LLMs are not because they are new territory, it’s because they lack the predictability and determinism that draw many people to computers. Case in point, you can’t really cleverly “hack” LLMs. It’s more a roll of the dice that you try to affect using hit-or-miss incantations.
>the negative feelings about LLMs are not because they are new territory, it’s because they lack the predictability and determinism that draw many people to computers
Louder for those turned deaf by LLM hype. Vibe coders want to turn a field of applied math into dice casting.
>I found it especially ironic being on hacker news how few people seemed to have a hacker mindset when it came to LLMs
You keep using the word "LLMs" as if Opus 4.x came out in 2022. The first iterations of transformers were awful. GPT-2 was more of a toy, and GPT-3 was an eyebrow-raising chatbot. It has taken years of innovation to reach the point of usable output without constant hallucinations. So don't fault devs for the flaws of early LLMs.
I recently started as a developer advocate - I have similar opinions to the author - junior devs have a hard time getting hired and flipping to something like devrel makes a lot of sense.
I've been saying for a decade that one of the fundamental issues with SWE in the average company is that management does not seem to understand that SWE is a management-level job, not assembly-line work. It requires reorganizing, streamlining, and handling stakeholders, in code and data, and these directly affect people in much the same way as any other management role. There are just fewer issues with HR and more with CDNs or CVEs.
> A CEO of a low-code platform articulated this vision: in an “agentic” development environment, engineers become “composers,”
I see we'll be twisting words around to keep avoiding the comparison.
You are not looking in the right places. GitHub repo counts have been high since 2020 because there are companies and individuals running fork scripts, so AI can't move those numbers.
But on Product Hunt, the number of projects tells the story: first week of Jan, 5000+; entire Jan 2018, roughly 4000.
This is such a stupid argument. A very significant amount of code never makes it into the public sphere. None of the code I've written professionally in the last 26 years is publicly accessible, and if someone uses a product I've written they likely don't care if it was written with the aid of an LLM or not.
Not to mention agent capabilities at the end of last year were vastly different to those at the start of the year.
Even if a portion of software is not released to the general public, you'd still expect an increase in the amount of software released to the general public.
Even if LLMs became better during the year, you'd still expect an increase in releases.
Please don’t get my hopes up. Adaptable people like me will outcompete hard in the post-engineering world. Alas, I don’t believe it’s coming. The tech just doesn’t seem to have what it takes to do the job.
> And the jobs which will remain will be impossible to get.
Exactly my thoughts lately ... Even by yesterday's standards it was already very difficult to land a job, and by tomorrow's standards it looks as if only the very best of the best, and those in positions of decision-making power, will keep their jobs.
One of the better analyses of this question, I think.
On the optimistic side, I suspect software may end up infused into more niches, but I'm not sure it follows that this helps the job market. Put differently, demand for software and demand for SWEs might decouple somewhat for much of that additional software demand.
I'm mostly convinced at this point that the jobs market will only be affected temporarily.
This is really just another form of automation, speeding things up. We can now make more customized software more quickly and cheaply. The market is already realizing that fact, and demand for more performant, bespoke software at lower costs/prices is increasing.
Those who are good at understanding the primary areas of concern in software design generally, and who can communicate well, will continue to be very much in demand.
The most important question is who will get paid the most? I don't think the future of software engineering will be attractive if all you do is more work for same or even less pay. A second danger is too much reliance on AI tools will centralise knowledge and THAT is the scariest thing. Software systems will need to perform for a long time, having juniors on board and people who understand software architecture will be massively important. Or will all software crash when this generation retires?
The people who don't lose their jobs won't be in a great spot either. There won't be any guarantee that they'll never lose their jobs; they will continue to live on a wobbly and uncertain foundation, and they'll get fired for the first "no" they say to management. If software engineering falls, all the related industries will fall too, creating a domino effect that none of the execs can imagine right now.
I really do wonder what sort of economic change is coming, because companies will hypothetically need to hire fewer people to sustain the same output as today. Actually they can do that basically today, so it's not even hypothetical anymore; it just needs some time to take off.
The question IMO is: who will create the demand on the other side for all these goods if so many people are left without jobs? UBI? Redistribution of wealth through taxes? I'm not so convinced about that ...
> The question IMO is, who will be creating the demand on the other side for all of these goods produced if so many people will be left without the jobs?
There is no reason why people will be left without jobs. Ultimately, a "job" is simply a superstructure for satisfying people's needs. As long as people have needs and the ability to satisfy them, there will be jobs in the market. AI changes nothing in those respects.
I think it very much does. Those exact needs have so far been fulfilled by N people's jobs; tomorrow the same needs will be fulfilled by N-M people's jobs. For your hypothesis to work, human (or rather, market) needs have to scale such that the M people left redundant are needed to cover the new gap, and I am not so sure about the "scaling" part. Not to mention that people's skills also need to scale so that they can deliver the value behind that market growth. The skills we had until yesterday are slowly becoming a thing of the past, so I wonder what kind of skills people will need to get those "new" jobs. I would genuinely like to hear an opinion, because I am not convinced the market will self-adjust so that the economy stays the same.
> there wont be a guarantee that they will never lose their jobs, they will continue to live on the wobbly and uncertain foundation
The people who lose their jobs prove this was always the case. No job comes with a guarantee, even ones that say or imply they do. Folks who believe their job is guaranteed to be there tomorrow are deceiving themselves.
Love the article. I struggled with my new identity and had to write https://edtw.in/high-agency-engineering/ for myself, but I also came to the realisation that the industry is shifting, especially for junior engineers.
Curious how the Specialist vs Generalist theme plays out: who is going to feel it more *first* as AI gets better over time?
A humble way for devs to look at this is that in the new LLM era, we are all juniors now.
A new entrant with a good attitude, curiosity and interest in learning the traditional "meta" of coding (version control, specs, testing etc) and a cutting-edge, first-rate grasp of using LLMs to assist their craft (as recommended in the article) will likely be more useful in a couple of years than a "senior" dragging their heels or dismissing LLMs as hype.
We aren't in coding Kansas anymore, junior and senior will not be so easily mapped to legacy development roles.
Sorry, but no. Software engineering is too high-dimensional for there to be a rulebook the way there is for building a bridge. You need to develop taste, much like high-level Go players do. This is even more critical as LLMs start to spit out code at an ever higher rate, letting entropy accumulate much faster and letting unskilled people paint themselves into corners.
I think of it a bit like e-bike speed limits. Previously, to go above 25mph on two-wheeled transport you needed either a lot of time training on a bicycle, which gave you the skills, or a motorcycle licence, which required you to pass a test. Now people can jump straight on a Surron and hare off at 40mph with no handling skills and no licence. Of course this leads to more accidents.
Not to say LLMs can't solve this eventually, RL approaches look very strong and maybe some kind of self-play can be introduced like AlphaZero. But we aren't there yet, that's for sure.
I don't think that conflicts with what I said but perhaps counters with something I didn't; your ebike analogy implies a recklessness that the junior with the attributes I mentioned will be averse to. Conversely the senior with the full grasp of LLMs and the "taste" and judgement will naturally be ahead.
But the comparison I made was between the junior with a good attitude and expert grasp on LLMs, and the stick-in-the-mud/disinterested "senior". Those are where the senior and junior roles will be more ambiguous in demarcation as time moves forward.
My question: those people who were building crappy, brittle software, full of bugs and other suboptimal behavior, which were the main reasons the evolution of that software slowed down, will they now begin writing better software because of AI? Answering yes implies that the main cause of those problems was that those developers didn't have enough time to spend analyzing them or building protective harnesses. I would strongly argue that was not the case; the main cause is of an intellectual and personal nature: the inability to build abstractions, to follow up on root causes (and thus acquire the necessary knowledge), or to avoid being distracted by some new toy. In 2-5 years I expect the industry to go into panic mode, as there will be a shortage of people who can maintain the drivel now being created en masse. The future is bright for those with the brains; they just need to wait this out.
1) The AI code maintenance question: who will maintain the AI-generated code?
2) The true cost of AI: once the VC/PE money runs out and companies charge full cost, what happens to vibe coding?
Perhaps thinking about AI generated code in terms of machine code generated by a compiler helps. Who maintains the compiled program? Nobody. If you want to make changes to it, you recompile the source.
In a similar fashion, AI generated code will be fed to another AI round and regenerated or refactored. What this also means is that in most cases nobody will care about producing code with high quality. Why bother, if the AI can refactor ("recompile") it in a few minutes?
I think this post is a great example of a different point made in this thread: people confuse vibe-coding with LLM-assisted coding all the time (no shade at you, OP). There is an implied bias that all LLM code is bad, unmaintainable, incomprehensible. That's not necessarily the case.
1) Either you, the person owning the code, or you + LLMs, or just the LLMs in the future. All of these can work, and they work better with a bit of prep.
The latest models are very good at following instructions. So instead of "write a service that does X", you can use the tools to ask for specifics (i.e. write a modular service that uses concept A and concept B to do Y; it should use the x/y/z tech stack, this ruleset, these conventions; before testing, run these linters and these formatters; fix every env error before testing; etc.). A sketch of such a spec-style request follows after point 2 below.
That's the main difference between vibe-coding and LLM-assisted coding. You get to decide what you ask for, and you get to set the acceptance criteria. The key point that non-practitioners always miss is that once a capability becomes available in these models, you can layer it on top of previous capabilities and get a better end result. Higher instruction adherence -> better specs -> longer context -> better results -> better testing -> better overall loop.
2) You are confusing the fact that some labs subsidise inference (in exchange for data, usage metrics, etc.) with the true cost of inference for a given model size. You can already get a good indication of today's cost for any given model size: third-party inference shops exist, and they are not subsidising costs (they have no reason to). You can do the math and figure out an average cost per token for a given capability. And those open models are out, they're not going to change, and you can get the same capability tomorrow or in 10 years (likely at lower cost, since hardware improves, the inference stack improves, etc.).
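For illustration, the kind of spec-style request meant in point 1 above might look like this; every name, file, and number below is a made-up example, not a quote from any tool or project:

    Write a modular rate-limiting service in Python 3.12.
    - Use a token-bucket algorithm, one bucket per API key.
    - Follow the ruleset in CONVENTIONS.md; no global mutable state.
    - Run the linter and formatter before running tests; fix any env errors first.
    - Acceptance: everything under tests/ passes, and the bench script reports
      under 1 ms of overhead per request.

The point is not the specific wording but that the acceptance criteria are yours, stated up front, rather than whatever the model decides "done" means.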
One thing that fucks with juniors is the expectation of paying for AI model subscriptions. If you need to know how the AI tools work, you have to learn them with your own money.
Not everyone can afford it, and so we are at the point of turning a field that was so proud of needing only a computer and internet access to teach yourself into a subscription service.
You can get by pretty well with the ~$20/month plans for either Claude or Gemini. You don't need to be doing the $200/month ones just to get a sense of how they work.
I mean it's pretty simple:
management will take bad quality (because they don't understand the field) over hiring and paying more employees, any day.
Software engineer positions will shrink and become unrecognizable: one person expected to do the work of multiple departments to stay employed.
People may leave the field or won't bother learning it.
When the critical mass is reached, AI will be paywalled and rug pulled.
Then the field evens itself out again over a long, expensive period of time for every company that fell for it, lowering the expectations back to reality.
This is truly the problem: you either get fired or you get to work 10x more to survive. The only question is how many of us will be in the first group and how many in the second; it's a lose-lose situation.
Exactly. Some jobs moved from database, backend, frontend, and devops to "fullstack", which means four jobs with the pay of one. People do that job, but with only 8-10h in a day the quality is as expected. I think overall people will try to move out of the field, no matter how much of a force multiplier AI might be. It's simply a worse trade to carry so much responsibility and burden when you can work in IT, or outside of IT, in a less cognitively demanding field with set hours and expectations for the same pay (in the EU; very hyperbolic statement tbh). Especially when the profit you bring in dwarfs the compensation, with all the frustration that comes from knowing that while being kept down the corporate ladder.
Yep, it never fails. Here's another prediction for "the next two years of software engineering": AI vendors will start using their senior devs' personal domains to publish their advertising pieces, to mitigate scrutiny when such things are posted to social media.
Ahhhh, this is like that guy who works at Claude Code and runs 100 agents at the same time to replace 100 juniors. Everyone is convinced he will be the last software engineer on earth.
Maybe a harsh criticism. The article seemed all over the place, maybe because the subject is also all over the place. I agree with everything; it just felt like the same story we've been in for a while.
Wasn't the main takeaway basically "study everything even more than you were, talk to and network with everybody even more than you were, and hold on. Work more, more, more"?
This article suggests it is specialists who are "at risk", but as much more of a generalist I was thinking the opposite and starting to regret not specialising more.
My value so far in my career has been my very broad knowledge of basically the entirety of computer science, IT, engineering, science, mathematics, and beyond. Basically, I read a lot, at least 10x more than most people, it seems. I was starting to wonder how relevant that still is, given that LLMs have read everything.
But maybe I'm wrong about what my skill actually is. Everyone has had LLMs for years now and yet I still seem better at finding info, contextualising it and assimilating it than a lot of people. I'm now using LLMs too but so far I haven't seen anyone use an LLM to become like me.
So I remain slightly confused about what exactly it is about me and people like me that makes us valuable.
LLMs have read EVERYTHING, yes. That includes a lot of suboptimal solutions, repeated mantras about past best practices that are no longer relevant, thousands of blog posts about how to draw an owl by drawing two circles and leaving the rest as an exercise for the reader, etc.
The value of a good engineer is their judgment in the current context, something LLMs cannot do well.
Second point, something mentioned occasionally but not discussed seriously enough: the Dead Internet Theory is becoming a reality.
The supply of good, professionally written training material is by now exhausted, and LLMs will start to feed on their own slop.
See how little LLMs' core competency increased in the last year, even with the big expansion of their parameters.
Babysitting LLM output will be the big thing in the next two years.
I mean, there is no strategy that saves you from it 100%. The layoffs are kind of random, based on teams they don't see any vision for, or engineers who don't perform. Generalising is better, imo.
Is there a Jeopardy for guessing prompts? Give an executive summary of GenAI trends where GenAI is destiny and everything reacts to it. Touch on all "problems". Don't be divisive by making hard proclamations. Summarize in a safe way by appealing to the trope of the enthusiastic programmer who dutifully adapts to the world around them in order to stay "up to date"; the passive drone who accepts whatever environment they are placed in and never tries to change it. Then add insult to injury by paradoxically concluding that the only safe future is the one you (the individual) "actively engineer".
I’m not saying that this was prompted. I’m just summarizing it in my own way.
Change is a constant for software engineers. It always has been. If your job is doing stuff that should be automated, either you are automating it or you are not a very good software engineer.
A few key fallacies at play here.
- Assuming a closed world: we'll do the same amount of work but with fewer people. This has never been true. As soon as you meaningfully drop the price of a unit of software (pick your favorite), demand goes up and we need more of them. It also opens the door to building software that previously would have been too expensive. That's why the number of software engineers has consistently increased over the years, despite a lot of stuff getting a lot easier over time.
- Assuming the type of work always stays the same. This too has never been true. Stuff changes over time. New tools, new frameworks, new types of software, new jobs to do. And the old ones fade away. Being a software engineer is a life of learning. Very few of us get to do the same things for decades on end.
- Assuming people know what to ask for. AIs do as you ask, which isn't necessarily what you want. The quality of what you get correlates very much with your ability to ask for it. The notion that you get a coherent bit of software in response to poorly articulated, incoherent prompts is about as realistic as getting a customer to produce coherent requirements. That never happened either. Converting customer wishes into maintainable/valuable software is still a bit of a dark art.
The bottom line: many companies don't have a lot of in house software development capacity or competence. AI doesn't really help these companies to fix that in exactly the same way that Visual Basic didn't magically turn them into software driven companies either. They'll use third party companies to get the software they need because they lack the in house competence to even ask for the right things.
Lowering the cost just means they'll raise the ambition level and ask for more/better software. The type of companies that will deliver that will be staffed with people working with AI tools to build this stuff for them. You might call these people software engineers. Demand for senior SEs will go through the roof because they deliver the best AI generated software because they know what good software looks like and what to ask for. That creates a lot of room for enterprising juniors to skill up and join the club because, as ever, there simply aren't enough seniors around. And thanks to AI, skilling up is easier than ever.
The distinction between junior and senior was always fairly shallow. I know people that were in their twenties that got labeled as senior barely out of college. Maybe on their second or third job. It was always a bit of a vanity title that because of the high demand for any kind of SEs got awarded early. AI changes nothing here. It just creates more opportunities for people to use tools to work themselves up to senior level quicker. And of course there are lots of examples of smart young people that managed to code pretty significant things and create successful startups. If you are ambitious, now is a good time to be alive.
> Junior developers: Make yourself AI-proficient and versatile. Demonstrate that one junior plus AI can match a small team’s output. Use AI coding agents (Cursor/Antigravity/Claude Code/Gemini CLI) to build bigger features, but understand and be able to explain most if not every line. Focus on skills AI can’t easily replace: communication, problem decomposition, domain knowledge. Look at adjacent roles (QA, DevRel, data analytics) as entry points. Build a portfolio, especially projects integrating AI APIs. Consider apprenticeships, internships, contracting, or open source. Don’t be “just another new grad who needs training”; be an immediately useful engineer who learns quickly.
If I were starting out today, this is basically the only advice I would listen to. There will indeed be a vacuum in the next few years because of the drastic drop in junior hiring today.
And you think juniors aren't doing this? At this point everyone in the market does more vibe coding than those outside it. The market is saturated mostly because execs are cutting jobs, not because juniors aren't good.
What? That's written the way "men write women": without putting themselves in the shoes of a junior who has no context and almost no opportunities.
The outlook on CS credentials is wrong. You'll never be worse off than someone without those credentials, all other things being equal. Buried in this text is an assumption that the relatively studious people who get degrees are going to fall behind the non-degreed, because the ones who didn't go to school will out-study them. What will really happen, generally, is that the non-degreed will continue not to study, and they will lean on AI to avoid studying even the few things they might otherwise have needed to learn to squeak by in industry.
The fundamentals of CS don't change and are more valuable to learn for the long term. Vibe coders think they can bypass everything because they can ask a machine to write them a todo list.
I think you're right but it's more like the theory and other thinking skills are harder to pick up on your own than particular technologies. You definitely still ought to learn both theory and particular tech skills, as they are not interchangeable. A person who only knows pure CS is difficult to employ as an engineer because programming entails particular technological skills.
The author has a bizarre idea of what a computer science degree is about. Why would it teach cloud computing or dev ops? The idea is you learn those on your own.
I am not sure about devops, but cloud computing likely has a lot of science behind it when done properly. Cloud platforms are no less complex to reason about than code, and I mean understanding and designing them, not deploying code to them.
Despite what completely uninformed people may think, the field "computer science" is not about software development. It's a branch of mathematics. If you want an education in software development, those are offered by trade schools.
What I want is for universities to offer a degree in Software Engineering. That's a different field from Computer Science.
You say that belongs in a trade school? I might agree, if you think trade schools and not universities should teach electrical engineering, mechanical engineering, and chemical engineering.
But if chemical engineering belongs at a university, so does software engineering.
Saying this as a software engineer with a degree in electrical engineering: software "engineering" is definitely not the same as other engineering disciplines and definitely belongs in a trade school.
Right, because the guy sitting next to me designing a PCB for the next Raspberry Pi clone is so much more of an engineer than the guy designing a distributed computing algorithm? That suggests you've only dealt with the trivial parts of SE. There are very complex areas in both disciplines, and for every trivial thing I can find in SE I can find one in EE. Let's not pretend one is science fiction when it's not.
My university had Electrical Engineering, Computer Engineering, Software Engineering and Computer Science degrees (in addition to all the other standard ones).
Cloud computing is not some new fundamental area of computer science. It’s just virtual CPUs with networks and storage. My CS degree from 1987 is still working just fine in the cloud, because we learned about CPUs, virtualization, networks, and storage. They’re all a lot bigger and faster, with different APIs, but so what?
Devops isn’t even a thing, it’s just a philosophy for doing ops. Ops is mostly state management, observability, and designing resilient systems, and we learned about those too in 1987. Admittedly there has been a lot of progress in distributed systems theory since then, but a CS degree is still where you’ll find it.
School is typically the only time in your life that you’ll have the luxury of focusing on learning the fundamentals full time. After that, it’s a lot slower and has to be fit into the gaps.
There has to be a balance of practical skills and theory in a useful degree, and most CS curricula are built that way. It should not be all about random hot tech because that always changes. You can easily learn tech from tutorials, because the tech is simple compared to theory. Theory is also important to be able to judge the merits of different technology and software designs.
A CS degree is there to teach you concepts and fundamentals that are the foundation of everything computing related. It doesn't generally chase after the latest fads.
Sure, but we need to update our definitions of concepts/fundamentals. A lot of this stuff has its own established theory and has been a core primitive for software engineering for many years.
For example, the primitives of cloud computing are largely explained by papers published by Amazon, Google, and others in the '00s (Dynamo, Bigtable, etc.). If you want to explore massively parallel computation or container orchestration, it is natural to do that on a public cloud, although of course many of the platform-specific details are incidental.
Part of the story here is that the scale of computing has expanded enormously. The DB class I took in grad school was missing lots of interesting puzzle pieces around replication, consistency, storage formats, etc. There was a heavy focus on relational algebra and normal forms, which is just... far from a complete treatment of the necessary topics.
We need to extend our curricula beyond the theory required to execute binaries on individual desktops.
I just don't see the distinction. Looking at it from the other direction: most CS degrees will have you spend a lot of time looking at assembly language, computer architecture, and *nix tools. But none of these are mathematical inevitabilities - they're just a core part of the foundations of software engineering.
However, in the decades since these curricula were established, it's clear that the foundation has expanded. Understanding how containerization works, how k8s and friends work, etc. is just as important today.
I do agree that the scale has expanded a lot. But this is true of any other field. Does that mean you need to learn everything? At some point it becomes unfeasible.
Take doctors, for example: you learn a bit of everything, but if you want to specialise, you choose one field.
> This hits harder depending on how much money, social capital, or debt you accumulated before this volatility began. If you’ve paid off your debts, bought a house, and stabilized your family life, you’re gambling with how comfortable the coming years will be. If you’re a fresh grad with student debt, no house, and no social network, you’re more or less gambling with your life.
Unless you're a plumber.
> It's fine to have that opinion, but please frame as an opinion or else give me the lotto numbers for next week if you can predict the future that accurately.
Why are you certain of this?
> Either way, there is very little to no path toward "family + place to live + stable job" model.
Where I am I’m alone. Don’t underestimate the value of community.
> Looking back, I was essentially railroaded into a dead end career by society, and any attempt to get into software engineering over the years has been blocked by the simple fact I don't possess the right piece of paper. Nor am I willing to spend a shit ton of cash and more importantly time "going back to school" to learn what I already know pretty damn well already, given that I've been programming since I was 8 and it was the strength of my computer skills that got me into the other career in the first place.
> My actual accomplishments in the world of computing, sitting here day to day in front of my personal computer, are the stuff of legends. Things like building my own Linux distro completely from scratch, now consisting of over 1,500 packages, cross compiling to multiple platforms. How many people with a Master's degree or Ph.D. ever did this as part of their degree program? Yet I'm looked upon as unemployable detritus compared to some kid fresh out of college with a newly minted BullShit degree.
Over the years the problem has gotten worse and worse. First it was just being filtered out by HR. Now it's all the other craziness also, like going to 7+ interviews only to be told you're rejected, and so on, and that's after putting in hundreds of applications to get one callback. How the hell am I supposed to go through that? I Fucking Refuse.
Thank God I'm OK with living out of the Piggly Wiggly dumpster.
Protip: living out of the Piggly Wiggly dumpster is actually a pretty good alternative to absolute slavery to government/corporations. Your very entitled and spoiled American wife probably won't agree however. Thank God I don't have one of those.
>> My actual accomplishments in the world of computing ... are the stuff of legends
We agree on the legends part
Enjoy your commute.
It's been entirely worth it for me and I cannot imagine my life without kids. But it's a deeply personal choice and I am not buying or selling the idea. I would just say nobody is ever ready and the fears around having them probably are more irrational than rational. But not wanting them because of how it might change your own life is a completely valid reason to not have kids.
I need about 4.5 years until basic financial independence; I wonder how it feels to be at that point.
Will people still buy and sell houses?
Will house prices go down because no one can afford them?
Will house prices go up because so few will sell their assets?
I would like to buy a small farm today without debt and cheap energy (upfront investment in solar and storage) but I need a few years more.
Can the world really change that fast? I don't know, but the progress in AI is fast, very fast.
At this point I’ve realized I need to cast all other ambitions aside and work on getting some out of the way land that I own.
We've had 50+ years of deteriorating worker conditions and a massive concentration of wealth in something like 10,000 people. The 1980s crushed the labor movement, to all of our detriment.
The GFC destroyed the career prospects of many millennials, who discovered their entry-level positions no longer existed, so we created a generation loaded with student debt, working as baristas.
A lot of people on HN ignored this because the 2010s were good for tech people, but many of us didn't realize this post-GFC wave would eventually come for us. That's what's happening now.
So on top of the millennials we now have Gen Z, who have correctly realized they'll never have security, never buy a house, and never retire. They'll live paycheck to paycheck, barely surviving until they die. Why? All so Jeff Bezos can have $205 billion instead of $200 billion.
I'm reminded of the quote "only nine meals separates mankind from anarchy".
I believe we've passed the point where we can solve this problem with electoral politics. Western democracies are being overtaken by fascists because of increasing desperation and the total destruction of any kind of leftism since WW2. At this point, it ends violently and sooner than many think.
I guess the next turning of the wheel will be similar too.
Or maybe we all just have poor imaginations.
This is what I expect to happen, but why would these entry-level roles be "developers"? I think it's more likely that they will be roles that already exist in those industries, where the responsibilities include (or at least benefit from) effective use of AI tools.
I think the upshot is that more people should probably be learning how to work in some specific domain while learning how to effectively use AI to automate tasks in that domain. (But I also thought this is how the previous iteration of "learn to code" should be directed, so maybe I just have a hammer and everything looks like a nail.)
I think dedicated "pure tech" software where the domain is software rather than some other thing, is more likely to be concentrated in companies building all the infrastructure that is still being built to make this all work. That is, the models themselves and all the surrounding tools, and all the services and databases etc. that are used to orchestrate everything.
In part, I think what people are responding to is the trajectory of the tools. I would agree that they seem to be on an asymptote toward being able to do a lot more things on their own, with a lot less direction. But I also feel like the improvements in that direction are incremental at this point, and it's hard to predict when or if there will be a step change.
But yeah, I'm really not sure I buy this whole thing about orchestrating a symphony of agents or whatever. That isn't what my usage of AI is like, and I'm struggling to see how it would become like that.
But what I am starting to see, is "non-programmers" beginning to realize that they can use these tools to do things for their own work and interests, which they would have previously hired a programmer to do for them, or more likely, just decided it wasn't worth the effort. And I think that's a genuine step change that will have a big effect on our industry.
I think this is ultimately a very good thing! This is how computers should work, that anybody can use them to automate stuff they want to do. It is not a given that "automating tasks" is something that must be its own distinct (and high paying) career. But like any disruption, it is very reasonable to feel concerned and uncertain about the future when you're right in the thick of it.
This is how basically everyone I know actually uses LLMs.
The whole story about vibecoding and LLMs replacing engineers has become a huge distraction from the really useful discussions to be had. It’s almost impossible to discuss LLMs on HN because everyone is busy attacking the vibecoding strawman all the time.
You're maintaining a large, professional codebase? You definitely shouldn't be vibe coding. The fact that some people are is a genuine problem. You want a simple app that you and your friends will use for a few weeks and throw away? Sure, you can probably vibe code something in 2 hours instead of paying for a SaaS. Both have their place.
Because the first thing that comes from individual speed-up is not engineers making more money but there being fewer engineers. How many fewer is the question: would they be satisfied with 10%, 50%, or maybe 99%?
If we doubled agricultural productivity globally we'd need fewer farmers, because there's no way we can all eat twice as much food. But we can absolutely consume twice as much CSS, try to play Call of Duty on our smart fridge, or use a new SaaS to pay our taxes.
Actually, most software either is garbage or goes to waste at some point too. Maybe that's too negative. Maybe one could call it rot or becoming obsolete or obscure.
It’s copium to think that, with the combination of AI and an oversupply of “good enough” developers, it won’t be harder for developers to get jobs. We are seeing it now.
It wasn’t this bad after the dot com bust. Then if you were just an ordinary enterprise developer working “in the enterprise” in a 2nd tier city (raises hand), jobs were plentiful.
I see this fallacy all the time but I don't know if there is a name for it.
I mean, we used to make fun of MBAs for saying the same thing, but now we should be more receptive to the "line always goes up" argument?
I was referring specifically to this point, which, IMHO, is a fallacy:
>>> There seems to be effectively infinite demand for software from consumers and enterprises so the cheaper it gets the more they buy.
There is no way to use the word "infinite" in this context, even if qualified, that is representative of reality.
The demand for paid software is decreasing because these AI companies are saying "oh, don't buy that SaaS product, because you can build it yourself now".
Our attention is also a finite resource (24h a day max). We already see how this has driven the enshittification of large swathes of software, like social media, where grabbing attention for a few more seconds drives the main innovation...
Depending on how the future shapes up, we may have gone from artisans to middlemen, at which point we're only in the business of added value and a lot of coding is over.
Not the Google kind of coding, but the "I need a website for my restaurant" kind, or the "I need to aggregate data from these Excel files in a certain way" kind. Anything where you'd accept cheap and disposable. Perhaps even the traditional startup, if POCs are vibecoded and engineers are only introduced later.
Those are huge businesses, even if they are not present in the HN bubble.
I am afraid that kind of job was already gone by 2015. No-code website builders have been available since then, and if you can't do it yourself you can pay someone on Fiverr and get it done for $5-50 at this point; it's so efficient that even AI won't be more cost-effective. If you have $10k saved you can hire a competitive agency to build and maintain your website. This business has been completely taken over by low-cost Fiverr automators, with agencies for high-budget projects. Agencies have become so good that they manage websites for everyone from Adidas to Lando Norris to your average mom & pop store.
If the prototype can just be dropped in, clear a PR, and comply with all the standards, you're just doing software engineering for less money!
What’s “the vibecoding strawman”? There are plenty of people on HN (and elsewhere) repeatedly saying they use LLMs by asking them to “produce full apps in hours instead of weeks” and confirming they don’t read the code.
Just because everyone you personally know does it one way, it doesn’t mean everyone else does it like that.
https://en.wikipedia.org/wiki/Faulty_generalization
Though I get that these days people tend to use “strawman” for anything they see as a bad argument, so you could be right in your assessment. Would be nice to have clarification on what they mean.
Good point.
> I think an accusation of straw-manning is in part a accusation of another's intent (or bad faith - not engaging with the argument).
There I partially disagree. Straw-manning is not engaging with the argument but it can be done accidentally. As in, one may genuinely misunderstand the nuance in an argument and respond to a straw man by mistake. Bad faith does require bad intent.
"Writing code is no longer needed for the most part."
It was a great post and I don't disagree with him. But it's an example of why it isn't necessarily a strawman anymore, because it is being claimed/realized by more than just vibecoders and hobbyists.
> Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters -- and that's not saying much -- than I do about python. It started out as my typical "google and do the monkey-see-monkey-do" kind of programming, but then I cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualizer.
* the README was clearly not written by an LLM nor aided
* he still uses GPLv2 (not 3) as the license for his works
Dunno why the author thinks an AI-enhanced junior can match the "output" of a whole team, unless he means generating lines of code, which is to say tech debt.
Being able to put a lot of words on screen is not the accomplishment in programming. It usually means you've gone completely out of your depth.
I've built a few things end to end where I can verify the tool or app does what I want and I haven't seen a single line of the code the LLM wrote. It was a creepy feeling the first time it happened but it's not a workflow I can really use in a lot of my day to day work.
The next step was for me to write a cron job that would reapply the chattr +i and rewrite the file once every 5 minutes, a sort of enforcer. I used Claude (web) to write this and cut/pasted it, just because I didn't want to bother with bash syntax that I have learned and forgotten several times.
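For flavour, here is a minimal sketch of that kind of enforcer, assuming a cron-driven Python script; the paths are placeholders, since the original script isn't shown:

    #!/usr/bin/env python3
    # Re-assert the immutable flag and restore known-good content.
    # Intended to run from cron, e.g.: */5 * * * * /usr/local/bin/enforce.py
    # TARGET and GOLDEN are placeholder paths, not from the original post.
    import shutil
    import subprocess

    TARGET = "/etc/hosts"           # the file being protected (placeholder)
    GOLDEN = "/root/hosts.golden"   # known-good copy to restore (placeholder)

    subprocess.run(["chattr", "-i", TARGET], check=False)  # unlock so we can write
    shutil.copyfile(GOLDEN, TARGET)                        # restore the contents
    subprocess.run(["chattr", "+i", TARGET], check=True)   # lock it again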
I then wanted something stronger and looked at publicly available things like pluckeye but they didn't really work the way I wanted. So I tried to write a quick version using Claude (web) and started running it (October 2025). It solved my problem for me.
I wanted a program to use aider on, and I started with this. Every time I needed a feature (e.g. temporary unblocks, tamper and uninstall prevention, in-browser blocking, violation tracking, etc.), I wrote out what I wanted and had the agent do it. Over the months, it grew to around 4k lines (single file).
Around December, I moved from aider to Claude Code and continued doing this. The big task I gave it was to refactor the code into smaller files so that I could manage context better. It did this well and added tests too (late December 2025).
I added a helper script to update URLs to block from various sources. Vibe-coded too. Worked fine.
Then I found it hogging memory because of some crude mistakes I had vibe-coded early on, and fixed that. Cost me around $2 to do so. (Jan 2026).
Then I added support for locking the screen when I crossed a violation threshold. This required some Xlib code. I'm sure I could have written it, but it wasn't really worth it: I knew what to do, and doing it by hand wouldn't have taught me anything except the innards of a few libraries. So I added that too.
So, in short, this is something that's 98% AI-coded, but it genuinely solves a problem for me and has helped me change my behaviour in front of a computer. My research turned up no companies offering this as a service for Linux. I knew what to do but didn't have the time to write and debug it. With AI, my problem was solved, and I have something that is quite valuable to me.
So, while I agree with you that it isn't an "automation tool", the speed and depth which it brings to the environment has opened up possibilities that didn't previously exist. That's the real value and the window through which I'm exploring the whole thing.
What has worked for me is treating it like an enthusiastic intern with his foot always on the accelerator pedal. I need to steer and manage the brakes otherwise, it'll code itself off a cliff and take my software with it. The most workable thing is a pair programmer. For trivial changes and repeatedly "trying stuff out", you don't need to babysit. For larger pieces, it's good to make each change small and review what it's trying.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
It seems it only took until about 2023 or so
It trims the time a bit of functionality takes: where you might have done four searches for examples, you now save roughly the time of three of those searches.
It does, however, remove the benefit of having done the searches, which is that you see the various results and may find that a secondary result is better. You no longer get that benefit. Tradeoffs.
Not really sure why this article is talking about what happens 2 years from now since that’s 8 times longer than anything anyone with money or power cares about.
The street cred doesn't come from managing more resources, the street cred comes from delivering more.
Then I have some non-trivial side projects where I don’t really care about the code quality, and I’m just letting it run. If I dare look at the code, there’s a bunch of repetition. It rarely gets stuff right the first time, but that’s fine, because it’ll correct it when I tell it it doesn’t work right. Probably full of security holes, code is nasty, but it doesn’t matter for the use-cases I want. I have produced pieces of software here that are actively making my life better, and it’s been mostly unsupervised.
If AI automates away today's entry-level tasks, that just means "entry-level" now means something different. It doesn't mean entry-level ceases to exist: entry-level as we know it, yes, but not entry-level in general.
I used to work on teams which were 50% entry level. Then just one. Then all senior teams became the norm.
This all happened after I became senior but before AI came along.
The question is, how much faster is verification-only versus writing the code by hand? You gain a lot of understanding when you write the code yourself, and understanding is a prerequisite for verification. The idea seems to be that a quick "LGTM" review is all that should be needed. That's fine, as long as you understand the tradeoffs you are making.
With today's AI you either trade speed for correctness or you have to accept a more modest (and highly project specific) productivity boost.
The perverse incentives being that tech debt is non-obvious & therefore really easy to avoid responsibility for.
Meanwhile, velocity is highly obvious & usually tied directly to personal & team performance metrics.
The only way I see to resolve this is strict enforcement of a comprehensive QA process during both the planning & iteration of an AI-assisted development cycle.
But when even people working at Anthropic are talking about running multiple agents in parallel, I get the idea that CTOs are not taking this seriously.
> Narrow specialists risk finding their niche automated or obsolete
Exactly the opposite. Those with expertise will oversee the tool. Those without expertise will take orders from it.
> Universities may struggle to keep up with an industry that changes every few months
Those who know the theory of the craft will oversee the machine. Those who don't will take orders from it. Universities will continue to teach the theory of the discipline.
I find this one hard to believe. Software is already massively present in all these industries and has already replaced jobs. The last step is complete automation (ie drone tractors that can load up at a hub, go to the field and spray all by themselves) but the bottleneck for this isn't "we need more code", it's real-world issues that I don't see AI help solving (political, notably)
Given projections of AI abilities over time, AI necessarily creates downward pressure on new job creation. AI is for reducing and/or eliminating jobs (by way of increasing efficiency).
AI isn't creating "new" things; it's reducing the time needed to do what was already being done. Unlike the automobile revolution, new job categories aren't being created by AI.
We are going to need to de-risk our software dependencies, and Germany is going to need to use computers.
Germany is going to be crazy, I think.
The Gewerkschaft tactics to resist AI is what I’m really interested in seeing.
It is a new and exciting tool, but it hits its limits quickly on moderately complex tasks. We will also see a lot more code with tricky bugs coming out of AI assistants, and all of it needs to be maintained. If software development gets cheaper per line of code, then there will be more demand. And someone has to clean up the mess created by people who have no clue whatsoever about SWE.
Once upon a time people developed software with punch cards. Even without AI, a developer today is orders of magnitude more proficient than that.
The only thing I hope I am not going to see in my lifetime is real artificial intelligence.
They're still doing it.
The only real contender in this regard is the Win32 API, and actually that did get used in enterprise for a long time too, before the major shift to cloud and Linux in the mid-2010s.
Ultimately the proof is in the real-world use, even if it's ugly to look at... I'd say, even as someone who is a big fan of Linux, that if I were given a 30-year-old obscure software stack that did nothing but work, I would be very hesitant to touch it too!
I'm not sure I agree with that. Right now as a senior my task involves reviewing code from juniors; replace juniors with AI and it means reviewing code from AI. More or less the same thing.
Worse. The AI doesn't share any responsibility.
This study showing a 9-10% drop is odd [1], and I'm not sure about their identification criteria.
> We identify GenAI adoption by detecting job postings that explicitly seek workers to implement or integrate GenAI technologies into firm workflows.
Based on that MIT study it seems like 90+% of these projects fail. So we could easily be seeing an effect where firms posting these GenAI roles are burning money on the projects in a way that displaces investment in headcount.
The point about "BigTech" hiring 50% fewer grads is almost orthogonal. All of these companies are shifting hiring towards things where new grads are unlikely to add value, building data centers and frontier work.
Moreover, the TCJA of 2017 meant software developer salaries stopped counting as immediate R&D tax write-offs (I'm oversimplifying) starting in 2022. This surely has more of an effect than whatever the "GenAI integrator role" postings correlate with.
[1] https://download.ssrn.com/2025/11/6/5425555.pdf
Given how quickly models, tools and frameworks rise and fall, betting your career on a single technology stack is risky.
This was something I dealt with a lot when JS frameworks became the newest shiny thing and suddenly the entire industry shifted in a few years from being a front-end developer to being a full stack developer.
This happened to a lot of my friends who went all in on Angular. Then everybody switched to React.
The issue then became, "What should I learn?" because my company (a large Fortune 200 company) was all in on Angular and wasn't looking for React developers, but I knew companies were moving away from Angular. So do I work to get better and more indispensable with Angular, and risk not knowing React? Or do I learn the shiny new framework, betting that at some point my company will adopt it, or that I will be laid off and need to know it?
It feels like half my life as a dev was spent being a degenerate gambler, always trying to hedge my bets in one way or another, constantly thinking about where everything was going. It was the same with dozens of other tools as well. It just became so exhausting trying to figure out where to put your effort to make sure you always knew enough to get that next job.
I feel you. It's a societal question you're posing. Your employer (most employers) deals in dollars. A business is evaluated by its ability to generate revenue; that is the purpose of a business and the fiduciary duty of the CEOs in charge.
The usual trade-off of a well-paid software development job is lack of job security and always learning: the skill set is always changing, in contrast with other jobs.
My suggestion: stop chasing trends and start listening to mature software developers to get a better perspective on what's best to invest in.
And why is that mantra always true?
You can find a stable job (a slow-moving company) doing basic software development, just learn something new every 4 years, and then change companies.
Or never change companies and be the default expert, because everyone else is changing jobs; get job security, work fewer hours, and have time within your job to uplift your skills.
Or keep chasing the latest high-paid jobs/trends by sacrificing your off time.
What's the best option for you? Only you know; it depends on your own goals.
If you didn't like working with computers, then you (and another gazillion people who choose it for the $$$) probably made the wrong choice.
But totally depends on what you wanted to get out of it. If you wanted to make $$$ and you are making it, what is the problem? That is assuming you have fun outside of work.
But if you wanted to be the best at what you do, then you've got to love what you are doing. Maybe there are people who have superhuman discipline, but for normal people, loving what they do goes a long way towards that end.
This doesn't match what I have seen in other industries. Many auto mechanics I know drive old Buicks or Fords with the 4.6L V8 because the cars are reliable and the last thing they want to do on a day off is work on their own car. I know a few people in other trades, like plumbers, electricians, and chefs, and the pattern holds pretty well for them as well.
You can enjoy working with computers and also enjoy not working in your personal time.
This type of argument can hold for any profession and yet we aren't seeing this pattern much in other white-collar professions. Professors, doctors, economists, mechanical engineers, ... it seems like pretty much everybody made the wrong choice then?
I think this is the wrong way to look at it. OP says that he invested a lot of time into becoming proficient in something that today appears to be very close to at least partial extinction.
I think the question is legit, and he's likely not the only person asking himself this question.
My take is that the answer is the ability to adapt and learn new skills. Some will succeed and some will fail, but staying in the status quo will more likely lead to failure than success.
There are plenty of such examples, but both of these imply that you're ready to devote a lot of your extra time, before or after the job, just so you can show you're relevant in the eyes of the decision makers. This normally means that you're single, with no kids, no family, no hobbies other than programming, etc. It works when you're in your 20s, and only up to a certain point, unless you become the weirdo in his 30s or 40s without any of these.
However, in an age of uncertainty, it may become the new normal to devote extra effort just to remain not competitive but a mere candidate for a job. Some will find the incentive for this extra pain, some will not, but I don't think it will be easy. Perhaps in 5 years' time we will only have "AI-applied" engineers developing or specializing their own models for given domains. Writing code as we do today, I think, is already a thing of the past.
I wonder what the best decision would have been. What job is AI immune and has a stable 40 hour week, no overtime, with decent pay. Teacher? Nursing?
Whatever winds up being AI-immune will definitely be something that requires social/interpersonal skills. Humans are social creatures, so I assume there will always be some need for that.
> Am I supposed to want to code all the time?
Yes.
> When can I pursue hobbies,
Your hobby should be coding fun apps for yourself
> a social life, etc.
Your social life should be hanging out with other engineers talking about engineering things.
And the most successful people I know basically did exactly that.
I'm not saying y'all should be doing that now, I'm just saying, that is in fact how it used to be.
If all they did was code all the time, write code for fun, and interact mostly with other similar people, they probably wouldn't be the first choice for these projects.
The ones who ace their careers are, for the most part, people who are fun, driven, or psychos: all social traits that make you good at the political game.
Spending lots of time with other socially awkward types talking about hard math problems or whatever will get you nowhere outside of some SF fantasy startup movie.
I'd say it's especially important for the more nerdy (myself included) to be more outgoing and do other stuff like sales, presentations, design/marketing, and workshops. That will make you exceptional, because then you've got the "whole package" and understand the process and other people.
Well that depends heavily on how you define successful. Successful in life? I would tend to disagree, unless you believe that career is the only thing that counts. But even when career is concerned: the most successful people I know went on from being developer to some high end management role. The skills that brought them there definitely did not come from hanging out with other engineers talking about engineering things.
I did not do side projects. I really enjoyed most of my 20s as a single person. I was a part time fitness instructor, I dated, hung out with friends, did some traveling.
The other developers at my job also had plenty of outside hobbies.
Fuck. That.
I worked at a FAANG; the successful people weren't the ones who did engineering, they were the ones who did politics.
The most successful people were the ones that joined at the same time as the current VP.
Your hobbies need to be fun to you, not support your career. If it's just there to support your career, it's unpaid career development, not a hobby. Should people not code in their free time? That's not for me to decide. If they enjoy it and it's not hurting anyone, then be my guest.
Engineers are generally useless at understanding what's going on in the real world; they're also quite bad at communicating.
do. fun. things.
I'm sorry for you as well.
I'm more concerned that the highlight of someone's life is being in front of a computer all day.
In my opinion we always needed to be versatile to stand any chance of being comfortable in these insanely fast-changing times.
Engineers > developers > coders.
It's also a 'skip intro' button for the friction that comes with learning.
You're getting a bug? Just ask the thing rather than spending time figuring it out. You don't know how to start a project from scratch? Ask for scaffolding. Your first boss asks for a ticket? Better not to screw up; hand it to the machine just to be safe.
If those temptations are avoided you can progress, but I'm not sure that lots of people will succeed. Furthermore, will people be afforded that space to be slow, when their colleagues are going at 5x?
Modern life offers little hope. We're all using uber eats to avoid the friction of cooking, tinder to avoid the awkwardness of a club, and so on. Frictionless tends to win.
It takes extra discipline and willpower to force yourself to do the painful thing if there is a less painful way to do it.
For the record, I was genuinely trying to read it properly. But it became unbearable by mid-article.
It resembles an article, it has the right ingredients (words), but they aren't combined and cooked into any kind of recognizable food.
It's hard to put my finger on it, but it lacks soul, the it factor, or whatever you want to call it. It feels empty in a way.
I mean, this is not the first AI-assisted article I'm reading. But usually it's to a negligible level. Maybe it's just me. :)
Intro... Problem... (The bottom line... What to do about it...), looped over and over. And then, finally...
I want to read it, but I can't get myself to.
1) Senior developers are more likely to know how to approach a variety of tasks, including complex ones, in ways that work, and are more likely to (maybe almost subconsciously) stick to these proven design patterns rather than reinvent the wheel in some novel way. Even if the task itself is somewhat novel, they will break it down in familiar ways into familiar subtasks/patterns. For sure, if a task does require some thinking outside the box, or a novel approach, then the senior developer might have better intuition about what to consider.
The major caveat to this is that I'm an old-school developer who started professionally in the early '80s, a time when you basically had to invent everything from scratch, so there is certainly no mental block to having to do so. And I'm aware there is at least a generation of developers that grew up with Stack Overflow and have much more of a mindset of building stuff by cut and paste, and less of having to sit down and write complex/novel code themselves.
2) I think the real distinction between senior and junior programmers, one that will carry over into the AI era, is that senior developers have had enough experience, at increasing levels of complexity, that they know how to architect and work on large complex projects where a more junior developer will flounder. In the AI coding world, at least for the time being, until something closer to AGI is achieved (could be 10-20 years away), you still need to be able to plan and architect the project if you want a result that isn't just some random "I let the AI choose everything" experiment.
The distinguishing behavior is not the quantity of effort involved but the total cost after accounting for dependency management, maintenance time, and execution time. The people who reinvent wheels do so because they want to learn, and because they want to do less work for the same result in the future.
I think this is really underappreciated and was big in driving how a lot of people felt about LLMs. I found it even more notable on a site named Hacker News. There is an older generation for whom computing was new. 80s through 90s probably being the prime of that era (for people still in the industry). There constantly was a new platform, language, technology, concept to learn. And nobody knew any best practices, nobody knew how anything "should work". Nobody knew what anything was capable of. It was all trying things, figuring them out. It was way more trailblazing / exploring new territory. The birth of the internet being one of the last examples of this from that era.
The past 10-15 years of software development have been the opposite. Just about everything was evolutionary, rarely revolutionary. Optimizing things for scale, improving libraries, or porting successful ideas from one domain to another. A lot of shifting around deck chairs on things that were fundamentally the same. Just about every new "advance" in front-end technology was this. Something hailed as ground breaking really took little exploration, mostly solution space optimization. There was almost always a clear path. Someone always had an answer on stack overflow - you never were "on your own". A generation+ grew up in that environment and it felt normal to them.
LLMs came about and completely broke that. People who remembered when tech was new, full of potential, and nobody knew how to use it loved that: here is a new alien technology, and I get to figure out what makes it tick, how it works, how to use it. On the flip side, people who were used to there being a happy path, or a manual to tell you when you were doing it wrong, got really frustrated at there being no direction, feeling perpetually lost when it didn't work the way they wanted.
I found it especially ironic, being on Hacker News, how few people seemed to have a hacker mindset when it came to LLMs. So much was "I tried something, it didn't work, so I gave up", or "I just kept telling it to work and it didn't, so I gave up". Explore; pretend you're in a sci-fi movie. Does it work better on Wednesdays? Does it work better if you stand on your head? Does it work differently if you speak pig latin? Think sideways. What behavior can you find that makes you go "hmm, that's interesting..."?
Now I think there has been a shift very recently of people getting more comfortable with the tech, but I was still surprised at how little of a hacker mindset I saw on Hacker News when it came to LLMs.
LLMs have reset the playing field from well manicured lawn, to an unexplored wilderness. Figure out the new territory.
Bashing kludgy things together until they work was always part of the job, but that wasn't the motivational payoff. Even if the result was crappy, knowing why it was crappy and how it could've been better was key.
LLMs promise an unremitting drudgery of the "mess around until it works" part, facing problems that don't really have a cause (except in a stochastic sense) and which can't be reliably fixed and prevented going forward.
The social/managerial stuff that may emerge around "good enough" and velocity is a whole 'nother layer.
Louder for those turned deaf by LLM hype. Vibe coders want to turn a field of applied math into dice casting.
You keep using the word "LLMs" as if Opus 4.x came out in 2022. The first iterations of transformers were awful. GPT-2 was more of a toy and GPT-3 was an eyebrow-raising chatbot. It has taken years of innovations to reach the point of usable stuff without constant hallucinations. So don't fault devs for the flaws of early LLMs.
> A CEO of a low-code platform articulated this vision: in an “agentic” development environment, engineers become “composers,”
I see we'll be twisting words around to keep avoiding the comparison.
Talk is cheap, let's see the money :D
Last year was, as it seems, just a normal year in terms of global software output.
But on Product Hunt, the number of projects says otherwise: 5,000+ in the first week of this January vs. roughly 4,000 in the entire January of 2018.
Has the output of existing companies/products increased substantially?
Have more products proven successful and started companies?
Hard to say, but maybe a little.
Would be impossible to tell.
Not to mention agent capabilities at the end of last year were vastly different to those at the start of the year.
Even if LLMs became better during the year, you'd still expect an increase in releases.
Exactly my thoughts lately... Even by yesterday's standards it was already very difficult to land a job, and by tomorrow's standards it appears as if only the very best of the best, and those in positions of decision-making power, will be able to keep their jobs.
On the optimistic side, I suspect it might turn out to be true that software gets infused into more niches, but I'm not sure it follows that this helps on the job-market side. Put differently, demand for software and demand for SWEs might decouple somewhat for much of that additional software demand.
This is really just another form of automation, speeding things up. We can now make more customized software more quickly and cheaply. The market is already realizing that fact, and demand for more performant, bespoke software at lower costs/prices is increasing.
Those who are good at understanding the primary areas of concern in software design generally, and who can communicate well, will continue to be very much in demand.
The question, IMO, is who will be creating the demand on the other side for all of these goods if so many people are left without jobs. UBI? Redistribution of wealth through taxes? I'm not so convinced about that...
There is no reason why people will be left without jobs. Ultimately, a "job" is simply a superstructure for satisfying people's needs. As long as people have needs and the ability to satisfy them, there will be jobs in the market. AI changes nothing in those aspects.
The people who lose their jobs prove this was always the case. No job comes with a guarantee, even ones that say or imply they do. Folks who believe their job is guaranteed to be there tomorrow are deceiving themselves.
Curious about how the Specialist vs Generalist theme plays out, who is going to feel it more *first* when AI gets better over time?
A humble way for devs to look at this is that in the new LLM era, we are all juniors now.
A new entrant with a good attitude, curiosity and interest in learning the traditional "meta" of coding (version control, specs, testing etc) and a cutting-edge, first-rate grasp of using LLMs to assist their craft (as recommended in the article) will likely be more useful in a couple of years than a "senior" dragging their heels or dismissing LLMs as hype.
We aren't in coding Kansas anymore, junior and senior will not be so easily mapped to legacy development roles.
I think of it a bit like e-bike speed limits. Previously, to go above 25 mph on two-wheeled transport you needed a lot of time training on a bicycle, which gave you the skills, or you needed your motorcycle licence, which required you to pass a test. Now people can jump straight on a Surron and hare off at 40 mph with no handling skills and no licence. Of course this leads to more accidents.
Not to say LLMs can't solve this eventually, RL approaches look very strong and maybe some kind of self-play can be introduced like AlphaZero. But we aren't there yet, that's for sure.
But the comparison I made was between the junior with a good attitude and an expert grasp of LLMs, and the stick-in-the-mud/disinterested "senior". That is where the senior and junior roles will become more ambiguous in demarcation as time moves forward.
1) The AI code maintenance question: who will maintain the AI-generated code? 2) The true cost of AI: once the VC/PE money runs out and companies charge the full cost, what will happen to vibe coding?
In a similar fashion, AI-generated code will be fed into another AI round and regenerated or refactored. What this also means is that in most cases nobody will care about producing high-quality code. Why bother, if the AI can refactor ("recompile") it in a few minutes?
1) Either you (the person owning the code), or you + LLMs, or just the LLMs in the future. All of these can work, and they can work better with a bit of prep work.
The latest models are very good at following instructions. So instead of "write a service that does X", you can use the tools to ask for specifics (e.g. write a modular service that uses concept A and concept B to do Y; it should use the x/y/z tech stack; it should follow this ruleset and these conventions; before testing, run these linters and formatters; fix every env error before testing; etc.).
That's the main difference between vibe coding and LLM-assisted coding. You get to decide what you ask for, and you get to set the acceptance criteria. The key point that non-practitioners always miss is that once a capability becomes available in these models, you can layer it on top of previous capabilities and get a better end result. Higher instruction adherence -> better specs -> longer context -> better results -> better testing -> better overall loop.
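To make that concrete, here's a minimal sketch of what such a spec-style request might look like (the service, file names, and commands are purely hypothetical):

    Write a modular invoicing service in TypeScript + Express.
    - Follow the conventions documented in CONVENTIONS.md.
    - Every module gets unit tests.
    - Before running tests, run the linter and formatter
      (npm run lint, npm run format) and fix any env errors first.
    - Acceptance criteria: all tests pass, zero lint warnings,
      and the service starts cleanly with npm start.

The exact wording doesn't matter; what matters is that you, not the model, define the constraints and the definition of done.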
2) You are confusing the fact that some labs subsidise inference costs (in exchange for access to data, usage metrics, etc.) with the true cost of inference for a given model size. You can already get a good indication of what that cost is today: third-party inference shops exist, and they are not subsidising costs (they have no reason to). You can also do the math yourself and figure out an average cost per token for a given capability. And those open models are out; they're not going to change, so you can get the same capability tomorrow or in 10 years (and likely at lower cost, since hardware improves, the inference stack improves, etc.).
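For intuition, here's a toy back-of-the-envelope calculation in Python. Every number below is an assumption picked for illustration, not a measured price:

    # Rough self-hosted inference cost per token.
    # All inputs below are assumptions, not benchmarks.
    gpu_hour_usd = 2.00        # assumed hourly rental for one GPU
    tokens_per_second = 1500   # assumed batched throughput, mid-size model

    tokens_per_hour = tokens_per_second * 3600
    usd_per_million = gpu_hour_usd / tokens_per_hour * 1_000_000
    print(f"~${usd_per_million:.2f} per million tokens")  # ~$0.37 with these inputs

Swap in whatever rental price and throughput you actually observe; the point is that the cost floor is knowable, and it keeps dropping as hardware and inference stacks improve.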
Not everyone can afford it, and then we're at the point of turning a field that was so proud of requiring just a computer and internet access to teach yourself into a subscription service.
And yes, that plan can get you started, but when I tested it, I managed to get one task done before having to wait 4 hours.
Where is all the new and improved software output we’d expect to see?
Ah, there it is.
Wasn't the main takeaway generally "study everything even more than you were, talk/network with everybody even more than you were, and hold on; work more, more, more"?
My value so far in my career has been my very broad knowledge of basically the entirety of computer science, IT, engineering, science, mathematics, and beyond. Basically, I read a lot, at least 10x more than most people, it seems. I was starting to wonder how relevant that now is, given that LLMs have read everything.
But maybe I'm wrong about what my skill actually is. Everyone has had LLMs for years now and yet I still seem better at finding info, contextualising it and assimilating it than a lot of people. I'm now using LLMs too but so far I haven't seen anyone use an LLM to become like me.
So I remain slightly confused about what exactly it is about me and people like me that makes us valuable.
The value of a good engineer is their current-context judgment, something that LLMs cannot do well.
Second point, something that is mentioned occasionally but not discussed seriously enough: the Dead Internet Theory is becoming a reality. The supply of good, professionally written training material is by now exhausted, and LLMs will start to feed on their own slop. See how little LLMs' core competency increased in the last year, even with the big expansion of their parameter counts.
Babysitting LLM output will be the big thing in the next two years.
I’m not saying that this was prompted. I’m just summarizing it in my own way.
A few key fallacies at play here.
- Assuming a closed world: we'll do the same amount of work but with fewer people. This has never been true. As soon as you meaningfully drop the price of a unit of software (pick your favorite), demand goes up and we'll need more of them. It also opens the door to building software that previously would have been too expensive. That's why the number of software engineers has consistently increased over the years, despite a lot of stuff getting much easier over time.
- Assuming the type of work always stays the same. This too has never been true. Stuff changes over time. New tools, new frameworks, new types of software, new jobs to do. And the old ones fade away. Being a software engineer is a life of learning. Very few of us get to do the same things for decades on end.
- Assuming people know what to ask for. AIs do as you ask, which isn't necessarily what you want. The quality of what you get correlates very much with your ability to ask for it. The notion that you get a coherent piece of software in response to poorly articulated, incoherent prompts is about as realistic as getting a customer to produce coherent requirements. That never happened either. Converting customer wishes into maintainable/valuable software is still a bit of a dark art.
The bottom line: many companies don't have a lot of in house software development capacity or competence. AI doesn't really help these companies to fix that in exactly the same way that Visual Basic didn't magically turn them into software driven companies either. They'll use third party companies to get the software they need because they lack the in house competence to even ask for the right things.
Lowering the cost just means they'll raise the ambition level and ask for more/better software. The type of companies that will deliver that will be staffed with people working with AI tools to build this stuff for them. You might call these people software engineers. Demand for senior SEs will go through the roof because they deliver the best AI generated software because they know what good software looks like and what to ask for. That creates a lot of room for enterprising juniors to skill up and join the club because, as ever, there simply aren't enough seniors around. And thanks to AI, skilling up is easier than ever.
The distinction between junior and senior was always fairly shallow. I know people in their twenties who got labeled senior barely out of college, maybe on their second or third job. It was always a bit of a vanity title that, because of the high demand for any kind of SE, got awarded early. AI changes nothing here; it just creates more opportunities for people to use tools to work themselves up to senior level quicker. And of course there are lots of examples of smart young people who managed to code pretty significant things and create successful startups. If you are ambitious, now is a good time to be alive.
If I were starting out today, this is basically the only advice I would listen to. There will indeed be a vacuum in the next few years because of the drastic drop in junior hiring today.
You say that belongs in a trade school? I might agree, if you think trade schools and not universities should teach electrical engineering, mechanical engineering, and chemical engineering.
But if chemical engineering belongs at a university, so does software engineering.
The bigger problem when I was there was undergrads (me very much included) not understanding the difference at all when signing up.
Glad I did CS, since SE looked like it consisted mostly of group projects writing 40 pages of UML charts before implementing a CRUD app.
DevOps isn't even a thing; it's just a philosophy for doing ops. Ops is mostly state management, observability, and designing resilient systems, and we learned about those in 1987 too. Admittedly there has been a lot of progress in distributed systems theory since then, but a CS degree is still where you'll find it.
School is typically the only time in your life that you’ll have the luxury of focusing on learning the fundamentals full time. After that, it’s a lot slower and has to be fit into the gaps.
For example, the primitives of cloud computing are largely explained by papers published by Amazon, Google, and others in the 2000s (Dynamo, Bigtable, etc.). If you want to explore massively parallel computation or container orchestration, it is natural to do that on a public cloud, although of course many of the platform-specific details are incidental.
Part of the story here is that the scale of computing has expanded enormously. The DB class I took in grad school was missing lots of interesting puzzle pieces around replication, consistency, storage formats, etc. There was a heavy focus on relational algebra and normal forms, which is just... far from a complete treatment of the necessary topics.
We need to extend our curricula beyond the theory required to execute binaries on individual desktops.
However, in the decades since these curricula were established, it's clear that the foundation has expanded. Understanding how containerization works, how k8s and friends work, etc., is just as important today.
See doctors, for example: you learn a bit of everything, but then if you want to specialise, you choose one field.