> Nobody at this point disagrees we’re going to achieve AGI this century.
Nobody. Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.
> 100% of today’s SWE tasks are done by the models.
Thank God, maybe I can go lie in the sun then instead of having to solve everyone's problems with ancient tech that I wonder why humanity is even still using.
Oh, no? I'm still untying corporate Gordian knots?
> There is no reason why a developer at a large enterprise should not be adopting Claude Code as quickly as an individual developer or developer at a startup.
> Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.
This captures my chief irk over these sorts of "interviews" and AI boosterism quite nicely.
Assume they're being 100% honest that they genuinely believe nobody disagrees with their statement. That leaves one of two possible outcomes:
1) They have not ingested data from beyond their narrow echo chamber that could challenge their perceptions, revealing an irresponsible, nay, negligent amount of ignorance for people in positions of authority or power
OR
2) They do not see their opponents as people.
Like, that's it. They're either ignorant or they view their opposition as subhuman. There is no gray area here, and it's why I get riled up when they're allowed to speak unchallenged at length like this. Genuinely good ideas don't need this much defense, and genuinely useful technologies don't need to be forced down throats.
option 3: reject the premise that they're being 100% honest
this third option seems like the most reasonable one here? the way you worded this makes it seem like there are only these two options to reach your absurd conclusion
...did you just skip the first part where I literally preface my argument with this line?
> Assume they're being 100% honest that they genuinely believe nobody disagrees with their statement.
That's the core assumption. It's meant to give them the complete benefit of the doubt, and to show that doing so means either their argument is ignorant or their perspective is that opponents aren't people.
Obviously they're being dishonest little shits, but calling that out point-blank is hostile and results in blind dismissal of the toxicity of their position. Asking someone to complete the thought experiment ("They're behaving honestly, therefore...") is the entire exercise.
> My company tried this, then quickly stopped: $$$
How much were devs spending for it to become a sticking point?
I'm asking because I thought it'd be extremely expensive when it rolled out at the company I work for. We have dashboards tracking expenses averaged per dev in each org layer; the most expensive usage is about US$350/month/dev, and the average hovers around US$30-50.
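To make the scale concrete, here is a rough back-of-the-envelope comparison against a developer's loaded cost; the per-dev spend figures come from the dashboard numbers above, while the headcount, the $40 midpoint, and the loaded cost are purely hypothetical:

    # Rough comparison of per-dev AI spend vs. a developer's loaded cost.
    # The $30-50 average (midpoint $40) and the $350 worst case come from the
    # parent comment; headcount and loaded cost are hypothetical illustration values.
    devs = 500
    avg_ai_spend = 40          # US$/dev/month, midpoint of the reported $30-50
    max_ai_spend = 350         # US$/dev/month, heaviest reported user
    loaded_dev_cost = 15_000   # US$/dev/month, hypothetical fully loaded cost

    monthly_bill = devs * avg_ai_spend
    print(f"Org-wide AI bill: ${monthly_bill:,}/month (${monthly_bill * 12:,}/year)")
    print(f"Average user: {avg_ai_spend / loaded_dev_cost:.2%} of loaded dev cost")
    print(f"Heaviest user: {max_ai_spend / loaded_dev_cost:.2%} of loaded dev cost")

Even the heaviest user lands in the low single digits as a percentage of what the developer themselves costs.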
Nobody among people remotely worth listening to. There are always people deeply wrong about things, but "more than 70 years away" is at this point a pretty insane position unless you have a great reason, like expecting Taiwan to get bombed tomorrow, slowing down progress.
Probabilities have increased, but it's still not a certainty. It may turn out that stumbling across LLMs as a mimicry of human intelligence was a fluke and the confluence of remaining discoveries and advancements required to produce real AGI won't fall into place for many, many years to come, especially if some major event (catastrophic world war, systematic environmental collapse, etc) occurs and brings the engine of technological progress to a crawl for 3-5 decades.
I think the only people that don't think we're going to see AGI within the next 70 years are people that believe consciousness involves "magic". That is, some sort of mystical or quantum component that is, by definition, out of our reach.
The rest of us believe that the human brain is pretty much just a meat computer that differs from lower life forms mostly quantitatively. If that's the case, then there really isn't much reason to believe we can't do exactly what nature did and just keep scaling shit up until it's smart.
I don't think there's "magic" exactly, but I do believe that there's a high chance that the missing elements will be found in places that are non-intuitive and beyond the scope of current research focus.
The reason is that this has generally been how major discoveries have worked. Science and technology as a whole advances more rapidly when both R&D funding is higher across the board and funding profiles are less spiky and more even. Diminishing returns accumulate pretty quickly with intense focus.
Sufficiently advanced science is no different than magic. Religion could be directionally correct, if off on the specifics.
I think there’s a good bit of hubris in assuming we even have the capacity to understand everything. Not to say we can’t achieve AGI, but we’re listening to a salesman tell us what the future holds.
While I fully agree with your sentiment, it's striking that Dario said "this century". Assuming he lives to 80, he likely won't even be alive for about 50% of his prediction window. It's such a remarkably meaningless comment.
He was hawking the doubling of human lifespan to some boomers a few months ago. The current AI is just religion in new clothes, mainly for people who see themselves as too smart to believe in God and heaven, so they believe in the AI instead and project everything onto it.
> We pay humans upwards of $50 trillion in wages because they’re useful, even though in principle it would be much easier to integrate AIs into the economy than it is to hire humans
Microsoft and OpenAI had to define it in their agreement, and settled on “AI systems that can generate at least $100 billion in profits”. Which tells you where those folks are coming from.
'By powerful AI [he dislikes the baggage of AGI, but means the same], I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.
The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.
Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter”.'
It's basically just God for them; they project the solutions to their fears onto it. E.g. fear of death: in religion we have heaven, etc.; in AI they believe it will multiply their lifespan with some magic.
He does not specify whether the tasks are done correctly. Merge your change request full of AI slop and close the task in Jira as done. Voila! Velocity increased to the moon! 6000 or 7000 open issues - who cares?
Does anyone know who Dwarkesh's patron is that boosted him in the podcast world? He isn't otherwise highly distinguished and admittedly does his show prep with AI, which sometimes shows in his questions. I feel like there are a very large number of tech podcasts, but there's some marketing effect around this guy that I just don't understand.
Same thing was true of his interview with Tony Blair. It was such a night and day difference between the two. Tony's skill, knowledge and polish saved the interview and made it enjoyable despite the interviewer.
Yeah I also don't understand how he is able to get such high profile guests. His interview with Jeff Dean and Noam Shazeer last year[1] is so hilariously bad. Jeff and Noam kept trying to give really insightful answers on how they see AI development shaping in coming years and he was just steering the conversation to shallow and silly tabloid gossip (why don't you "just" let AI improve the next version in a loop so we can quickly have singularity, Jeff Dean AI running in a DC, evil Jeff Dean AI escaping containment and on and on). It was just embarrassing. The interview would have been so much better with just Jeff and Noam without him.
> I also don't understand how he is able to get such high profile guests
His reach. He's an Indian Lex Fridman (and I mean that derogatorily [edit: what I mean is I dislike Lex Fridman's lack of substance and oversimplification of extremely complex subjects, and how Dwarkesh uses a similar tack]), but as such has significant reach.
Now that Indian consumers have become a major bloc on most Western social media platforms, more Dwarkeshes will enter Western discourse.
Furthermore, unlike Chinese, Indians overwhelmingly use Western platforms, and Indian policymakers have begun using this consumer power to push western companies to collocate and shift the center of gravity to Indian offices.
I highly doubt that. "The algorithm" will surely adjust the recommendations per geography. I don't think most Westerners are getting T-Series recommendations in their feeds either.
> He's an Indian Lex Fridman (and I mean that derogatorily)
I might be reading this wrong, but sounds kinda racist to me?
He knew people, caught a wave, and was roommates with Dylan Patel of SemiAnalysis. They networked, got to meet the right people, developed a web of contacts and sources, and the rest is history. Treat your friends well, and it often comes back multiplied.
The marketing effect was them catching the wave at the right time, and they're just surfing the hell out of it.
It's one of the most popular "inside baseball" blogs in AI. Dylan Patel covers the people, tech, hardware, business analytics, and has amazing insight and access to people. "Blog" isn't quite right, but if you subscribe, you get a ton of useful analysis and reporting and writing.
In my opinion, he asks the right questions and lets the guests speak, which is something that can't be said about the rest of tech podcasts.
For example, at some point I grew very tired of the superficiality of the questions that Lex Fridman asks his very technical guests. He seems more interested in turning the conversation into a philosophy freshman's essay about technology than in talking about technology itself.
Hearing the Dwarkesh podcast was a breath of fresh air in that regard.
Isn't it just the usual feedback loop that happens with popular podcasters? They have connections and get a few highly popular guests on. As long as their demeanor is agreeable and they keep the conversation interesting other high profile guests will agree to be on and thus they've created a successful show.
For deep dives into AI stuff, Google DeepMind's podcast with Hannah Fry is very good (but obviously limited to goog stuff). I also like Lex for his tech / AI podcasts. Much better interviewer IMO; Dwarkesh talks way too much and injects too many of his own "insights" for my taste. I'm listening to a podcast to hear what the guests have to say, not the host.
For more light-weight "news-ish" type of podcast that I listen to while walking/driving/riding the train, in no particular order: AI & I (up to date trends, relevant guests), The AI Daily Brief (formerly The AI Breakdown - this is more to keep in touch with what's released in the past month) and any other random stuff that yt pops up for me from listening to these 4 regularly.
There was a small network of AI-intellectualism (and rationality) that grew highly relevant when AI took off post chatgpt. It feels adjacent to Tyler Cowen's network + tpot + hn/lesswrong. (I can't remember if Tyler specifically gave him a fast grant, but his first few interviews were GMU-centric.)
I personally liked that he stayed away from navel-gazing in politics when the blogosphere/podcasts went pretty heavy into that.
It did very well on Twitter with a large number of high-follower-count tech people, and soon-to-be high-follower-count ones (basically AI employees). He had followed the zeitgeist's general wisdom well (bat signal, work in public, you-can-just-do-things, move-to-the-arena, You-Are-the-Average-of-the-Five-People-You-Spend-the-Most-Time-With, high-horsepower). And he's just executed very well. Other people have interviewed similar people and generally gotten lower-signal content. This Moxie Marlinspike interview is great though - https://www.youtube.com/watch?v=cPRi7mAGp7I .
The thing which distinguished him was getting good guests, before the hype hit. And he generally asks good questions and then shuts up while his guests talk.
It seems that AI people have moved on from Lex Fridman to Dwarkesh. A couple of years ago the YouTube algorithm spammed Fridman in response to basically anything, now it is Dwarkesh. Maybe they need a new face periodically.
Exactly, it's the Lex Fridman gambit: a reputation for asking safe questions to powerful people tends to snowball because "safe, popular interview platform" is something they are all looking to self-promote on.
If you want to see the mask slip, watch Lex's interview with Zelensky.
> who Dwarkesh’s patron is that boosted him in podcast world
The Indian consumer market.
Unlike the Chinese, Indians use Western social media platforms, so Indian tastes and trends are becoming increasingly common on the internet.
This is also why you see entirely different trends on TikTok (banned in India, allowed elsewhere), Western Social Media (banned in China, allowed elsewhere), and Chinese social media (only used by Chinese and the diaspora).
What Ben Thompson predicted with his "Four Internets" theory 6 years ago has started playing out [0].
Over the next decade, more Indian media like Dwarkesh will leak into Western social media.
You've said this a couple times in this thread now. Do you have any evidence that most of his audience is in India, to make that claim that his ethnicity matters?
Similar wonderings occurred to me at that point in the vid where he struggled to understand Amodei's explanation of the economics, which was pretty straightforward. Unless he was just being deliberately arsey.
I never knew about him until a few months ago when he started appearing in my YouTube recommendations, and naturally I thought the same thing because a 'nobody' like him (not in a derogatory sense) started doing interviews with the top AI bros. And the interviews are terribly boring because they feel like a cheap PR campaign. You could sit Lex Fridman instead of Dwarkesh Patel and it would feel exactly the same.
Same with that "MIT" interviewer who wasn't even at MIT.
And that girl Altoff ...
Literal nobodies suddenly interviewing Elon Musk, etc... within weeks.
Things rarely go "viral" on their own these days, everything is controlled, even who gets the stage, how the message is delivered, etc... as you have noticed.
With regards to who's behind, well, we might never know. However, as arcane as it might sound, gradient descent can take you close to the answer, or at least point you towards it.
I like this recent meme of Christof from Truman Show saying things like "now tell them that there's aliens" or crap like that.
Whatever you do please DO NOT look up these links on the Internet Archive.
Not just that but I would also suggest to stop using the Internet Archive in general, as it is obviously not a reliable source of truth like Wikipedia or many news outlets with specialized people that spend a non-trivial amount of their time carefully checking all of this information.
A lot of people believe that Fridman is not affiliated with MIT even though the university says he is. <https://lex.mit.edu/> It's a recurring thing in the Talk page for the Wikipedia article.
Nah, that's just reddit. At this point it's safer to take anything that's popular on reddit as either outright wrong or so heavily out of context that it's not relevant.
Oh, sure, I learned a long time ago that Reddit is a very reliable anti-indicator. But given that HN isn't nearly as bad (but there are moments), it's still strange that people would just repeat something about someone else that they could disprove for themselves in 30 seconds.
The concept of the "end of the exponential" sounds like a tech version of Fukuyama's much mocked "End of History". Amodei seems to think we’ll solve all the "useful" problems and then hit a ceiling of utility.
But if you’ve read David Deutsch’s The Beginning of Infinity, Amodei’s view looks like a mistake. Knowledge creation is unbounded. Solving diseases/coding shouldn't result in a plateau, but rather unlock totally new, "better" problems we can't even conceive of yet.
I find myself coding a lot with Claude Code... but then it's very hard to quantify the productivity boost.
The first 80% seems magical, the last 20% is painful. I basically have to get the mental model of the codebase into my head no matter what.
This is my experience, which is why I stopped altogether.
I think I'm better off developing a broad knowledge of design patterns and learning the codebases I work with in intricate, painstaking detail as opposed to trying to "go fast" with LLMs.
I have the issue that I run into some bug that it just cannot fix. Bear in mind I am developing an online game. And then I have to get into the weeds myself, which feels like such a gargantuan effort after having used the LLM that I just want to close the IDE and go do something else. Yes, I have used Opus 4.6 and Codex 5.3 and they just cannot solve some issues no matter how I twist it. Might be the language and the fact that it is a game with a custom engine and not a React app.
I talked with my coworker today and asked which model he uses. He said Opus 4.6, but also that he doesn't use AI stuff much anymore since he felt it keeps him from learning and building the mental model, which I tend to agree with a bit.
One of my friends and I started building a PaaS for a niche tech stack, believing that we could use Claude for all sorts of code generation activities. We thought, if Anthropic and OpenAI are claiming that most of the code is written by LLMs in new product launches, we could start using it too.
Unsurprisingly, we were able to build a demo platform within a few days. But when we started building the actual platform, we realized that the code generated by Claude is hard to extend, and a lot of replanning and reworking needs to be done every time you try to add a major feature.
This brought our confidence level down. We still want to believe that Claude will help in generating code. But I no longer believe that Claude will be able to write complex software on its own.
Now we are treating Claude as a junior person on the team and giving it well-defined, specific tasks to complete.
Referring to a curve with a derivative everywhere equal to its value as something that has an end gives the game away: pure fanciful nominalization with no grounding in any kind of concrete modelling of any constraints.
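To spell out the math being gestured at: a quantity whose derivative everywhere equals its value is the pure exponential, which has no built-in end; an end only appears once some constraint is modelled explicitly, for example a logistic capacity term.

    \frac{dy}{dt} = y \quad\Rightarrow\quad y(t) = y_0 e^{t} \quad \text{(no end, ever)}
    \frac{dy}{dt} = y\Bigl(1 - \frac{y}{K}\Bigr) \quad\Rightarrow\quad y(t) \to K \ \text{as } t \to \infty \quad \text{(levels off at the capacity } K\text{)}

Which end shows up, and when, depends entirely on which constraint you put into the model.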
IMHO this is really silly: we already know that IQ is useful as a metric in the 0 to about 130 range. For any value above that, the delta fails to provide predictive power on real-world metrics. Just this simple fact makes the verbiage here moot. Also, let's consider the wattage involved...
Is "the end of the exponential" an established expression? There's no singularity in an exponential so the expression doesn't make sense to me. To me, it sounds like "the end of the exponential part", meaning it's a sigmoid, but that's obviously not what he means.
This is written with the idea that the exponential part keeps going forever.
It never does. The progress curve always looks sigmoidal.
- The beginning looks like a hockey stick, and people get excited. The assumption is that the growth party will never stop.
- You start to hit something that inherently limits the exponential growth and growth starts to be linear. It still kinda looks exponential and the people that want the party to keep growing will keep the hype up.
- Eventually you saturate something and the curve turns over. At this point it’s obvious to all but the most dedicated party-goers.
I don’t know where we are on the LLM curve, but I would guess we’re in the linear part. Which might keep going for a while. Or maybe it turns over this year. No one knows. But the party won’t go on forever; it never does.
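A quick numerical sketch of that three-phase story, using purely illustrative logistic parameters (nothing here is fitted to any real AI benchmark):

    import math

    # Logistic curve y(t) = K / (1 + exp(-r (t - t0))): looks exponential early,
    # roughly linear in the middle, and saturates at the capacity K.
    # The parameters below are purely illustrative, not fitted to any AI metric.
    K, r, t0 = 100.0, 0.5, 20.0

    def logistic(t: float) -> float:
        return K / (1.0 + math.exp(-r * (t - t0)))

    for t in range(0, 41, 5):
        y = logistic(t)
        growth = logistic(t + 1) - y  # one-step growth, to show the phases
        print(f"t={t:2d}  y={y:7.2f}  growth/step={growth:6.2f}")

The per-step growth roughly multiplies in the early rows (the hockey stick), is at its largest near t0 (the stretch that still kinda looks exponential but is really the near-linear part), and then collapses toward zero as the curve saturates.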
I think Cal Newport’s piece [0] is far more realistic:
> But for now, I want to emphasize a broader point: I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities.
I think it's coupled differential equations where each growth factor amplifies the others. I posted about it in 2024 - https://b.h4x.zip/ce/ - and sent it around a bit, but everyone thought I was nuts. Look at that post from 2025, think about what was happening IRL under the graph's line, then go look at where METR is today. I'm not trying to brag, I don't work for Anthropic, but I do think I'm probably right.
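A minimal toy of what "coupled growth factors that amplify each other" can look like; to be clear, this is an illustrative sketch, not the model from the linked post, and every coefficient is arbitrary:

    # Toy model of coupled, mutually amplifying growth: compute C and model
    # capability M each grow faster as the other grows. A plain exponential E
    # frozen at the initial growth rate is tracked for comparison.
    dt = 0.01
    C = M = E = 1.0
    base, coupling = 0.1, 0.1

    for step in range(1, 601):                        # integrate to t = 6
        dC = (base + coupling * M) * C * dt           # capability attracts compute
        dM = (base + coupling * C) * M * dt           # compute buys capability
        dE = (base + coupling * 1.0) * E * dt         # uncoupled baseline
        C, M, E = C + dC, M + dM, E + dE
        if step % 100 == 0:
            print(f"t={step * dt:3.1f}  coupled={C:7.2f}  plain exponential={E:6.2f}")

The coupled pair pulls away from the plain exponential because each variable keeps raising the other's effective growth rate; that is the faster-than-exponential dynamic the comment is describing.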
I only take it partially seriously; I view it as a serious presentation that is misinformed. What I find unique is that people have become so interested in "the exponential" that it's almost become an axiom, or even a near-religious belief in AI. It is a subtle admission that while current AI capabilities are impressive, additional years of exponential growth are required for AI to reach the fantastic claims some people are making.
> Given consistent trends of exponential performance improvements over many years and across many industries, it would be extremely surprising if these improvements suddenly stopped.
This is the part I find very strange. Let's table the problems with METR [1], just noting that benchmarking AI is extremely hard and METR's methodology is not gospel just because METR's "sole purpose is to study AI capabilities". (That is not a good way to evaluate research!)
Taking whatever idealized metric you want, at some point it has to level off. That's almost trivially true: everyone should agree that unrestricted exponential growth forever is impossible, if only for the eventual heat death of the universe. That makes the question when, and not if. When do external forces dominate whatever positive feedback loops were causing the original growth? In AI, those positive feedback loops include increased funding, increased research attention and human capital, increased focus on AI-friendly hardware, and many others, including perhaps some small element of AI itself assisting the research process that could become more relevant in the future.
These positive feedback loops have happened many times, and they often do experience quite sharp level-offs as some external factor kicks in. Commercial aircraft speeds experienced a very sharp increase until they leveled off. Many companies grow very rapidly at first and then level off. Pandemics grow exponentially at first before revealing their logistic behavior. Scientific progress often follows a similar trajectory: a promising field emerges, significant increased attention brings a bevy of discoveries, and as the low-hanging fruit is picked the cost of additional breakthroughs surges and whatever fundamental limitations the approach has reveal themselves.
It's not "extremely surprising" that COVID did not infect a trillion people, even though there are some extremely sharp exponentials you can find looking at the first spread in new areas. It isn't extremely surprising that I don't book flights at Mach 3, or that Moore's Law was not an ironclad law of the universe.
Does that mean the entire field will stop making any sort of progress? Of course not. But any analysis that fundamentally boils down to taking a (deeply flawed) graph and drawing a line through it and simplifying the whole field of AI research to "line go up" is not going to give you well-founded predictions for the future.
A much more fruitful line of analysis, in my view, is to focus on the actual conditions and build a reasonable model of AI progress that includes current data while building in estimations of sigmoidal behavior. Does training scaling continue forever? Probably not, given the problems with e.g., GPT-4.5 and the limited amount of quality non-synthetic training data. It's reasonable to expect synthetic training data to work better over time, and it's also reasonable to expect the next generation of hardware to also enable an additional couple orders of magnitude. Beyond that, especially if the money runs out, it seems like scaling will hit a pretty hard wall barring exceptional progress. Is inference hardware going to get better enough that drastically increased token outputs and parallelism won't matter? Probably not, but you can definitely forecast continued hardware improvements to some degree. What might a new architectural paradigm be for AI, and would that have significant improvements over current methodology? To what degree is existing AI deployment increasing the amount of useful data for AI training? What parts of the AI improvement cycle rely on real-world tasks that might fundamentally limit progress?
That's what the discussion should be, not reposting METR for the millionth time and saying "line go up" the way people do about Bitcoin.
"everyone should agree that unrestricted exponential growth forever is impossible, if only for the eventual heat death of the universe." - why is this a good/useful framing?
All models are wrong; some are useful. Cognizance of that is even more critical for a model like exponential growth that often leads to extremely poor predictions quickly if uncritically extrapolated.
I think "are the failures of a simple linear regression on the METR graph relevant" is a much better framing than "does seeing a line if you squint extrapolate forever." As I said, I'd much rather frame the discussion around the actual material conditions of AI progress, but if you are going to be drawing lines I'd at least want to start by acknowledging that no such model will be perfect.
No matter how fast and accurately your AI apps can spit out code (or PowerPoints, or excel spreadsheets, or business plans, etc) you will still need humans to understand how stuff works. If it’s truly business critical software, you can’t get around the fact that humans need to deeply understand how and why it works, in case something goes wrong and they need to explain to the CEO what happened.
Even in a world where the software is 100% written by AI in 1 millisecond by a country of geniuses in a data center, humans still need to have their hands firmly on the wheel if they don't want to risk their business's well-being. That means taking the time to understand what the AI put together. That will be the bottleneck regardless of how fast and smart AI is. Because unless the CEO wants to be held accountable for what the AI builds and deploys, humans will need to be there to take responsibility for its output.
Regulation will not stop this. It's time to build and deploy weapons if you want your species to survive. See earlier discussion here: https://news.ycombinator.com/item?id=46964545
LLMs alone aren't the way to AGI. Perhaps something involving a merge of diffusion or other models that are based on more sensory elements, like images, time, and motion, but LLMs alone aren't going to get us there.
The end of the exponential means the start of other models.
I have said that Amodei is by far worse than Sam Altman. Altman wants money, but this guy wants the money AND to be your dad, censoring the shit out of the model and wagging his finger at you about what you can and cannot say. And lobbying for legislation to block competition. Also the constant "muh China" whining while these guys stole all the books in the world.
Every time I read something from Dario, it seems like he is grifting normies and other midwits with his "OHHH MY GOD CLAUDE WAS WILLING TO KILL SOMEONE! MY GOD IT WANTS TO BREAK OUT!" Then they have all their Claude constitution bullshit and other nonsense to fool idiots. Yeah bro, the model with static weights is truly going to take over.
He knows what he is doing; it's all marketing, and they have put a shit ton of money into it, if you have been following the media for the last few months.
Btw, it wasn't many months ago that this guy was hawking the doubling of human lifespan to a group of boomer investors. Oh yeah, I wonder why he decided to bring it up there? Maybe because the audience is old and desperate, and scammers play on those weaknesses.
Truly one of the more obnoxious people in the AI space, and frankly, by extension, Anthropic is scammy too. I'd rather pay Altman than give these guys a penny, and that says a lot.
Amodei isn't a grifter; the difference is that he really believes powerful AI is imminent.
If you truly believe powerful AI is imminent, then it makes perfect sense to be worried about alignment failures. If a powerless 5 year old human mewls they're going to kill someone, we don't go ballistic because we know they have many years to grow up. But if a powerless 5 year old alien says they're going to kill someone, and in one year they'll be a powerful demigod, then it's quite logical to be extremely concerned about the currently harmless thoughts, because soon they could be quite harmful.
I myself don't think powerful AI is 1-2 years away, but I do take Amodei and others as genuine, and I think what they're saying does make logical sense if you believe powerful AI is imminent.
AI marketing is dystopian. They describe a world where most people are suddenly starving and homeless, and just when you start to think "hey, this sounds like the conditions to create something like a French Revolution, but where the Bastille is a data center" they pivot to BUY MY PRODUCT SO YOU DON'T GET LEFT BEHIND.
It’s advertising straight through the amygdala.
I have no idea if they actually believe this. But it’s repulsive behavior.
Uh...there's constant talk from people being disturbed by it. One of the Democratic candidates in 2020 had his platform based around this, and I can assure you that it's not gotten less attention since ChatGPT came out.
Oh good, hopefully it'll follow the exponential rise of an animal population and collapse in on itself because it can no longer be sustained! Isn't that how things go in exponential systems with resource constraints? We can only hope that will be the best outcome. That would be wonderful.
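For what it's worth, the boom-and-bust pattern being alluded to is easy to sketch: a population that grows while drawing down a slowly renewing resource overshoots and then crashes (all parameters below are arbitrary illustration values):

    # Toy overshoot-and-collapse model: population P grows while the resource R
    # holds up, then crashes once R is drawn down faster than it renews.
    dt = 0.1
    P, R = 1.0, 100.0
    for step in range(1, 601):
        growth = 0.2 * P * (R / 100.0)     # growth throttled by remaining resource
        death = 0.05 * P                   # baseline death rate
        consumption = 0.1 * P              # resource consumed by the population
        renewal = 0.2                      # slow, constant renewal
        P += (growth - death) * dt
        R = max(0.0, R + (renewal - consumption) * dt)
        if step % 100 == 0:
            print(f"t={step * dt:4.0f}  population={P:7.2f}  resource={R:7.2f}")

The run rises roughly exponentially while the resource holds, peaks once the resource is drawn down far enough that growth no longer outpaces deaths, and then declines; that is overshoot-and-collapse rather than the gentler plateau of a plain logistic.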
Nobody. Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.
> 100% of today’s SWE tasks are done by the models.
Thank God, maybe I can go lie in the sun then instead of having to solve everyone's problems with ancient tech that I wonder why humanity is even still using.
Oh, no? I'm still untying corporate Gordian knots?
> There is no reason why a developer at a large enterprise should not be adopting Claude Code as quickly as an individual developer or developer at a startup.
My company tried this, then quickly stopped: $$$
This captures my chief irk over these sorts of "interviews" and AI boosterism quite nicely.
Assume they're being 100% honest that they genuinely believe nobody disagrees with their statement. That leaves one of two possible outcomes:
1) They have not ingested data from beyond their narrow echo chamber that could challenge their perceptions, revealing an irresponsible, nay, negligent amount of ignorance for people in positions of authority or power
OR
2) They do not see their opponents as people.
Like, that's it. They're either ignorant or they view their opposition as subhuman. There is no gray area here, and it's why I get riled up when they're allowed to speak unchallenged at length like this. Genuinely good ideas don't need this much defense, and genuinely useful technologies don't need to be forced down throats.
this third option seems like the most reasonable one here? the way you worded this makes it seem like there are only these two options to reach your absurd conclusion
> like thats it
> There is no gray area here
re-examine your assumptions
> Assume they're being 100% honest that they genuinely believe nobody disagrees with their statement.
That's the core assumption. It's meant to give them the complete benefit of the doubt, and to show that doing so means either their argument is ignorant or their perspective is that opponents aren't people.
Obviously they're being dishonest little shits, but calling that out point-blank is hostile and results in blind dismissal of the toxicity of their position. Asking someone to complete the thought experiment ("They're behaving honestly, therefore...") is the entire exercise.
You hit the nail on their head.
They go out of their way to call you an "AI bot" if you say something that contradicts their delusional world view.
How much were devs spending for it to become a sticking point?
I'm asking because I thought it'd be extremely expensive when it rolled out at the company I work for. We have dashboards tracking expenses averaged per dev in each org layer; the most expensive usage is about US$350/month/dev, and the average hovers around US$30-50.
It's much cheaper than I expected.
The rest of us believe that the human brain is pretty much just a meat computer that differs from lower life forms mostly quantitatively. If that's the case, then there really isn't much reason to believe we can't do exactly what nature did and just keep scaling shit up until it's smart.
The reason is that this has generally been how major discoveries have worked. Science and technology as a whole advances more rapidly when both R&D funding is higher across the board and funding profiles are less spiky and more even. Diminishing returns accumulate pretty quickly with intense focus.
I think there’s a good bit of hubris in assuming we even have the capacity to understand everything. Not to say we can’t achieve AGI, but we’re listening to a salesman tell us what the future holds.
'By powerful AI [he dislikes the baggage of AGI, but means the same], I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.
The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.
Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter”.'
https://darioamodei.com/essay/machines-of-loving-grace
It's a constantly shifting goalpost. Really it's just a big lie that says AI will do whatever you can imagine it would.
Nah, that would be ASI, artificial super intelligence.
Meanwhile, Claude Code is implemented using a React-like framework and has 6000 open issues, many of which are utterly trivial to fix.
Can I ask what happened with your Claude Code rollout?
The funny thing is his questions to her were terrible. But she rescued it anyway.
But I will say I think he has improved markedly as an interviewer.
[1] https://www.youtube.com/watch?v=v0gjI__RyCY
His reach. He's an Indian Lex Fridman (and I mean that derogatorily [edit: what I mean is I dislike Lex Fridman's lack of substance and oversimplification of extremely complex subjects, and how Dwarkesh uses a similar tack]), but as such has significant reach.
Now that Indian consumers have become a major bloc on most Western social media platforms, more Dwarkeshes will enter Western discourse.
Furthermore, unlike Chinese, Indians overwhelmingly use Western platforms, and Indian policymakers have begun using this consumer power to push western companies to collocate and shift the center of gravity to Indian offices.
> He's an Indian Lex Fridman (and I mean that derogatorily)
I might be reading this wrong, but sounds kinda racist to me?
The marketing effect was them catching the wave at the right time, and they're just surfing the hell out of it.
Oh yeah, sure, that explains his sudden rise to fame.
Joke aside, literally who?
https://semianalysis.com/about/
Maybe they practiced interviews as roommates
For example, at some point I grew very tired of the superficiality of the questions that Lex Fridman asks his very technical guests. He seems more interested in turning the conversation into a philosophy freshman's essay about technology than in talking about technology itself.
Hearing the Dwarkesh podcast was a breath of fresh air in that regard.
For deep dives into AI stuff, Google DeepMind's podcast with Hannah Fry is very good (but obviously limited to goog stuff). I also like Lex for his tech / AI podcasts. Much better interviewer IMO; Dwarkesh talks way too much and injects too many of his own "insights" for my taste. I'm listening to a podcast to hear what the guests have to say, not the host.
For more light-weight "news-ish" type of podcast that I listen to while walking/driving/riding the train, in no particular order: AI & I (up to date trends, relevant guests), The AI Daily Brief (formerly The AI Breakdown - this is more to keep in touch with what's released in the past month) and any other random stuff that yt pops up for me from listening to these 4 regularly.
I personally liked that he stayed away from navel-gazing in politics when the blogosphere/podcasts went pretty heavy into that.
It did very well on Twitter with a large number of high-follower-count tech people, and soon-to-be high-follower-count ones (basically AI employees). He had followed the zeitgeist's general wisdom well (bat signal, work in public, you-can-just-do-things, move-to-the-arena, You-Are-the-Average-of-the-Five-People-You-Spend-the-Most-Time-With, high-horsepower). And he's just executed very well. Other people have interviewed similar people and generally gotten lower-signal content. This Moxie Marlinspike interview is great though - https://www.youtube.com/watch?v=cPRi7mAGp7I .
The IPO hype is in full swing.
If you want to see the mask slip, watch Lex's interview with Zelensky.
The Indian consumer market.
Unlike the Chinese, Indians use Western social media platforms, so Indian tastes and trends are becoming increasingly common on the internet.
This is also why you see entirely different trends on TikTok (banned in India, allowed elsewhere), Western Social Media (banned in China, allowed elsewhere), and Chinese social media (only used by Chinese and the diaspora).
What Ben Thompson predicted with his "Four Internets" theory 6 years ago has started playing out [0].
Over the next decade, more Indian media like Dwarkesh will leak into Western social media.
[0] - https://stratechery.com/2020/india-jio-and-the-four-internet...
You've said this a couple times in this thread now. Do you have any evidence that most of his audience is in India, to make that claim that his ethnicity matters?
And that girl Altoff ...
Literal nobodies suddenly interviewing Elon Musk, etc... within weeks.
Things rarely go "viral" on their own these days, everything is controlled, even who gets the stage, how the message is delivered, etc... as you have noticed.
With regards to who's behind, well, we might never know. However, as arcane as it might sound, gradient descent can take you close to the answer, or at least point you towards it.
I like this recent meme of Christof from Truman Show saying things like "now tell them that there's aliens" or crap like that.
Lex Fridman is a research scientist at MIT. <https://web.mit.edu/directory/?id=lexfridman&d=mit.edu>
I doubt there are any notable research contributions from him. His actual PhD is from Drexel, not MIT.
https://lids.mit.edu/people/research-staff
Not just that but I would also suggest to stop using the Internet Archive in general, as it is obviously not a reliable source of truth like Wikipedia or many news outlets with specialized people that spend a non-trivial amount of their time carefully checking all of this information.
Very normal stuff.
Nah, that's just reddit. At this point it's safer to take anything that's popular on reddit as either outright wrong or so heavily out of context that it's not relevant.
But if you’ve read David Deutsch’s The Beginning of Infinity, Amodei’s view looks like a mistake. Knowledge creation is unbounded. Solving diseases/coding shouldn't result in a plateau, but rather unlock totally new, "better" problems we can't even conceive of yet.
It's the Beginning of Infinity, no end in sight!
I think I'm better off developing a broad knowledge of design patterns and learning the codebases I work with in intricate, painstaking detail as opposed to trying to "go fast" with LLMs.
I talked with my coworker today and asked which model he uses. He said Opus 4.6, but also that he doesn't use AI stuff much anymore since he felt it keeps him from learning and building the mental model, which I tend to agree with a bit.
Unsurprisingly, we were able to build a demo platform within a few days. But when we started building the actual platform, we realized that the code generated by Claude is hard to extend, and a lot of replanning and reworking needs to be done every time you try to add a major feature.
This brought our confidence level down. We still want to believe that Claude will help in generating code. But I no longer believe that Claude will be able to write complex software on its own.
Now we are treating Claude as a junior person on the team and giving it well-defined, specific tasks to complete.
IMHO this is really silly: we already know that IQ is useful as a metric in the 0 to about 130 range. For any value above that, the delta fails to provide predictive power on real-world metrics. Just this simple fact makes the verbiage here moot. Also, let's consider the wattage involved...
https://www.julian.ac/blog/2025/09/27/failing-to-understand-...
It never does. The progress curve always looks sigmoidal.
- The beginning looks like a hockey stick, and people get excited. The assumption is that the growth party will never stop.
- You start to hit something that inherently limits the exponential growth and growth starts to be linear. It still kinda looks exponential and the people that want the party to keep growing will keep the hype up.
- Eventually you saturate something and the curve turns over. At this point it’s obvious to all but the most dedicated party-goers.
I don’t know where we are on the LLM curve, but I would guess we’re in the linear part. Which might keep going for a while. Or maybe it turns over this year. No one knows. But the party won’t go on forever; it never does.
I think Cal Newport’s piece [0] is far more realistic:
> But for now, I want to emphasize a broader point: I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities.
[0] Discussed here: https://news.ycombinator.com/item?id=46505735
All glory to the exponential!
This is the part I find very strange. Let's table the problems with METR [1], just noting that benchmarking AI is extremely hard and METR's methodology is not gospel just because METR's "sole purpose is to study AI capabilities". (That is not a good way to evaluate research!)
Taking whatever idealized metric you want, at some point it has to level off. That's almost trivially true: everyone should agree that unrestricted exponential growth forever is impossible, if only for the eventual heat death of the universe. That makes the question when, and not if. When do external forces dominate whatever positive feedback loops were causing the original growth? In AI, those positive feedback loops include increased funding, increased research attention and human capital, increased focus on AI-friendly hardware, and many others, including perhaps some small element of AI itself assisting the research process that could become more relevant in the future.
These positive feedback loops have happened many times, and they often do experience quite sharp level-offs as some external factor kicks in. Commercial aircraft speeds experienced a very sharp increase until they leveled off. Many companies grow very rapidly at first and then level off. Pandemics grow exponentially at first before revealing their logistic behavior. Scientific progress often follows a similar trajectory: a promising field emerges, significant increased attention brings a bevy of discoveries, and as the low-hanging fruit is picked the cost of additional breakthroughs surges and whatever fundamental limitations the approach has reveal themselves.
It's not "extremely surprising" that COVID did not infect a trillion people, even though there are some extremely sharp exponentials you can find looking at the first spread in new areas. It isn't extremely surprising that I don't book flights at Mach 3, or that Moore's Law was not an ironclad law of the universe.
Does that mean the entire field will stop making any sort of progress? Of course not. But any analysis that fundamentally boils down to taking a (deeply flawed) graph and drawing a line through it and simplifying the whole field of AI research to "line go up" is not going to give you well-founded predictions for the future.
A much more fruitful line of analysis, in my view, is to focus on the actual conditions and build a reasonable model of AI progress that includes current data while building in estimations of sigmoidal behavior. Does training scaling continue forever? Probably not, given the problems with e.g., GPT-4.5 and the limited amount of quality non-synthetic training data. It's reasonable to expect synthetic training data to work better over time, and it's also reasonable to expect the next generation of hardware to also enable an additional couple orders of magnitude. Beyond that, especially if the money runs out, it seems like scaling will hit a pretty hard wall barring exceptional progress. Is inference hardware going to get better enough that drastically increased token outputs and parallelism won't matter? Probably not, but you can definitely forecast continued hardware improvements to some degree. What might a new architectural paradigm be for AI, and would that have significant improvements over current methodology? To what degree is existing AI deployment increasing the amount of useful data for AI training? What parts of the AI improvement cycle rely on real-world tasks that might fundamentally limit progress?
That's what the discussion should be, not reposting METR for the millionth time and saying "line go up" the way people do about Bitcoin.
[1] https://www.transformernews.ai/p/against-the-metr-graph-codi...
I think "are the failures of a simple linear regression on the METR graph relevant" is a much better framing than "does seeing a line if you squint extrapolate forever." As I said, I'd much rather frame the discussion around the actual material conditions of AI progress, but if you are going to be drawing lines I'd at least want to start by acknowledging that no such model will be perfect.
Even in a world where the software is 100% written by AI in 1 millisecond by a country of geniuses in a data center, humans still need to have their hands firmly on the wheel if they don't want to risk their business's well-being. That means taking the time to understand what the AI put together. That will be the bottleneck regardless of how fast and smart AI is. Because unless the CEO wants to be held accountable for what the AI builds and deploys, humans will need to be there to take responsibility for its output.
Quoting the Anthropic safety guy who just exited, making a bizarre and financially detrimental move: "the world is in peril" (https://www.forbes.com/sites/conormurray/2026/02/09/anthropi...)
There are people in the AI industry who are urgently warning you. Myself and my colleagues, for example: https://www.theregister.com/2026/01/11/industry_insiders_see...
Regulation will not stop this. It's time to build and deploy weapons if you want your species to survive. See earlier discussion here: https://news.ycombinator.com/item?id=46964545
The end of the exponential means the start of other models.
> 100% of today’s SWE tasks are done by the models.
Maybe that’s why the software is so shitty nowadays.
Citation needed please.
Also the same as saying that "unlimited energy from nuclear fusion is 20 years away".
Yet news and opinions from that world somehow seep through into my reality...
Every time I read something from Dario, it seems like he is grifting normies and other midwits with his "OHHH MY GOD CLAUDE WAS WILLING TO KILL SOMEONE! MY GOD IT WANTS TO BREAK OUT!" Then they have all their Claude constitution bullshit and other nonsense to fool idiots. Yeah bro, the model with static weights is truly going to take over.
He knows what he is doing; it's all marketing, and they have put a shit ton of money into it, if you have been following the media for the last few months.
Btw, it wasn't many months ago that this guy was hawking the doubling of human lifespan to a group of boomer investors. Oh yeah, I wonder why he decided to bring it up there? Maybe because the audience is old and desperate, and scammers play on those weaknesses.
Truly one of the more obnoxious people in the AI space, and frankly, by extension, Anthropic is scammy too. I'd rather pay Altman than give these guys a penny, and that says a lot.
If you truly believe powerful AI is imminent, then it makes perfect sense to be worried about alignment failures. If a powerless 5 year old human mewls they're going to kill someone, we don't go ballistic because we know they have many years to grow up. But if a powerless 5 year old alien says they're going to kill someone, and in one year they'll be a powerful demigod, then it's quite logical to be extremely concerned about the currently harmless thoughts, because soon they could be quite harmful.
I myself don't think powerful AI is 1-2 years away, but I do take Amodei and others as genuine, and I think what they're saying does make logical sense if you believe powerful AI is imminent.
AI marketing is dystopian. They describe a world where most people are suddenly starving and homeless, and just when you start to think "hey, this sounds like the conditions to create something like a French Revolution, but where the Bastille is a data center" they pivot to BUY MY PRODUCT SO YOU DON'T GET LEFT BEHIND.
It’s advertising straight through the amygdala.
I have no idea if they actually believe this. But it’s repulsive behavior.
Oh good, hopefully it'll follow the exponential rise of an animal population and collapse in on itself because it can no longer be sustained! Isn't that how things go in exponential systems with resource constraints? We can only hope that will be the best outcome. That would be wonderful.