I've been saying this for maybe nine months, and my consulting work keeps proving it.
Go is an excellent language for LLM code generation. There is a large, stable training corpus, one way to write it, one build system, one formatter, static typing, and CSP-style concurrency without C++'s footguns.
The language hasn't had a breaking version in over a decade, and there's minimal framework churn. When I advise teams adopting agentic coding workflows at my consultancy [0], Go delivers consistent results through Claude and Codex far more often than what I see with clients using TypeScript and/or Python.
When LLMs have to navigate Python and TypeScript there is a massive combinatorial space of frameworks, typing approaches, and utility libraries.
Too much optionality in the training distribution. The output is high entropy and doesn't converge. Python only dominated early AI coding because ML researchers write Python and trained on Python first. It was path dependence, not merit.
The thing nobody wants to say is that the reason serious programmers historically hated Go is exactly why LLMs are great at it: there's a ceiling on abstraction.
Go has many, many failings (e.g. it took over a decade to get generics). But LLMs don't care about expressiveness, they care about predictability. Go 1.26 just shipped a completely rewritten go fix, built on the analysis framework, that does AST-level refactoring automatically. That's huge for agentic coding because it keeps codebases modern without needing the latest language features in training data or wasting tokens looking up new signatures.
I spent four years building production public key infrastructure in Golang before LLMs [1]. After working with coding agents like everyone else and domain-switching for clients, I've become more of a Go advocate, because the language finally delivers on its promise. Engineers have a harder time complaining about the verbose, boilerplate-heavy syntax when an LLM writes it correctly every single time.
Java has decade(s) of cruft and breaking changes which LLMs were trained on, so it's hard to compare. Plus, Go's compilation speed and fast test runs provide quick iteration for LLMs.
There is a decently long list of breaking changes now. Removing JavaEE modules from the JDK, and restricting sun.misc.Unsafe, are the ones people usually run into.
I mostly write Go code (and have barely had to write any code myself in the past months), but today I had to do some work in a Java project and Claude Code was a terrible experience.
It really felt like using the AI tooling of a year or two ago. It wasn't understanding my prompts, going on tangents, not following the existing style and idioms. Maybe Claude was hungover or doesn't like Mondays, but the contrast with Go was surprising.
One example is that I wanted to add an extra prometheus metric to keep track of an edge case in some for loop. All it had to do was define a counter and increment it. For some reason it would define the counter on the line before incrementing it, instead of defining it next to the other counters outside of the for loop. Technically not wrong (defining a counter is idempotent), but who does that? Especially when the other counters are defined elsewhere in the same function?
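For reference, the placement I expected is trivial. A minimal sketch, with a plain int standing in for the Prometheus counter (the names and the "edge case" condition here are made up for illustration):

```go
package main

import "fmt"

// countEdgeCases declares its counter once, alongside the function's
// other state, and only increments it inside the loop -- the placement
// the comment above describes as idiomatic.
func countEdgeCases(items []int) int {
	edgeCases := 0 // declared next to the other counters, outside the loop
	for _, v := range items {
		if v < 0 { // hypothetical "edge case" condition
			edgeCases++
		}
	}
	return edgeCases
}

func main() {
	fmt.Println(countEdgeCases([]int{1, -2, 3, -4})) // prints 2
}
```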
Anyway, n=1 but I feel it has an easier time with Go.
Well, there was a Claude outage today, maybe related :D
My n=1 is that it is pretty good with Java, on par with other popular languages like Python and JS, in line with these 3 probably being a good chunk if not the majority of training data.
Not really. It has a pretty bare-bones OOP model (single inheritance, interfaces), primitives and objects, generics, and that's pretty much it.
Newer features fit very nicely and didn't increase the language surface (records are just a normal class with some methods auto-generated, while sealed types are just a restriction on who can subtype an interface -- and yet these give full ADT support for the language that improves readability and type safety).
Exactly, the propping up of Go seems unfounded. Java in its newest iterations is even more compelling as a target, and people, especially young people, overlook it because of its stigma as enterprise cruft.
Sandboxing is a completely orthogonal issue and WASM is probably not a good direct target for LLMs.
Of course, writing a language that compiles to Wasm is certainly one way, but you would also have to sandbox all the other tools used during development (e.g. agents can just call grep/find/etc.).
> I spent four years building production public key infrastructure in Golang before LLMs
Do you think you might perhaps have a bias in the same way that my 9+ years of Typescript usage and advocacy would cause me to have a bias or a material interest?
There is nothing non-trivial you can make that involves the web that is better with Go than Typescript. I look at your personal page and I see that you're already struggling to manage state and css and navigation, or that those things aren't interesting to you.
This tells me you have limited web experience, just as I have limited experience making build scripts at Google and you would probably find my server-side concurrency fairly crude.
Still, that you lump Python and Typescript together as "equally frustrating for LLMs" tells me you are not speaking from direct experience. The lumping together of Typescript and Python feels really, empirically wrong to me as someone with a foot in both those worlds.
> When LLMs have to navigate Python and TypeScript there is a massive combinatorial space of frameworks, typing approaches, and utility libraries.
I'm right there with you on Python! Lumping static and dynamic languages together is not correct here. Most Python code comes from a fragmented ecosystem that took 10+ years to migrate from 2 to 3, often there is no indication in the corpus of even what major version it is, and typing caught on very slowly. That's going to be a major problem for a long time, whereas no recent LLM has ever confused .js for .ts or suddenly started writing Node v12 and Angular into a Node 22 and Vue project.
I'm happy to throw down the gauntlet if you ever want to have a friendly go vs typescript vibe-code off that spans a reasonably sophisticated full-stack project over three or four hours of live coding.
If you feel like I'm being mean and attacking you for wanting proof that Typescript is not at parity with, or superior to, Go in terms of LLM legibility, I would still really like you to consider how you can best demonstrate your virtuosity and value judgments.
LLMs are great with Typescript. But the fact remains that there are many different browsers and several runtimes (Node, Deno, Bun), each of which may have slightly different rules.
It is easy to get started in. Some of the major warts it has, at least the ones that annoy me, revolve around deployment and management. Python packaging has been "fixed" at least 6 times.
> But LLMs don't care about expressiveness, they care about predictability.
I think this is true, but it misses a very key point. Go does an impressively bad job of letting you design APIs that are difficult to misuse, so LLMs will misuse them, and you will also need unit tests that walk through the code just to validate it used the libraries correctly. This isn't always possible (or is awkward/cumbersome) for certain scenarios like database queries.
All of the reasons people argue Go is good for LLMs are more true for Rust. You and the LLM can design libraries to be difficult to misuse, and then get instant feedback from the compiler to the LLM about what it did wrong, and often with suggestions about how it should fix them! This also makes RL deriving from compiler feedback more effective.
This allows the LLMs to reason more abstractly at larger scales, since the abstractions are less leaky (unlike in Go). The ceiling on abstraction screws you here, since troubleshooting requires more deep diving. It's the same reason Go projects become difficult for humans at large scales, too.
Go is not difficult to maintain at large scale. Take Kubernetes, for example: it's "trivial" to understand and modify even though it's in the millions of lines of code.
Take async for example. You have to choose some third-party async runtime which may or may not work with other runtimes, libraries, platforms, etc.
With Go, async code written in Go 1.0 compiles and runs the same in Go 1.26, and there is no fragmentation or necessity to reach for third party components.
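To make that concrete, here's a sketch of fan-out/fan-in using nothing but the standard toolchain. Goroutines and channels have worked like this since Go 1.0; no runtime crate or executor choice is involved:

```go
package main

import "fmt"

// squareAll fans work out to goroutines and collects results over a
// buffered channel -- concurrency with no third-party components.
func squareAll(nums []int) int {
	results := make(chan int, len(nums))
	for _, n := range nums {
		go func(n int) { results <- n * n }(n)
	}
	sum := 0
	for range nums {
		sum += <-results
	}
	return sum
}

func main() {
	fmt.Println(squareAll([]int{1, 2, 3})) // 1 + 4 + 9 = 14
}
```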
Rust is harder for the bot to get "wrong" in the sense of running-but-does-the-wrong-thing, but it's far less stable than Go and LLMs frequently output Rust that straight up doesn't compile.
LLMs outputting code that doesn't compile is the failure mode you want. Outputting wrong code that compiles is far worse.
Setting aside the problems of wrong-but-compiling code, wrong and non-compiling code is also much easier to deal with. For training an LLM, you have an objective fitness function to detect compilation errors.
For using an LLM, you can embed the LLM itself in a larger system that checks its output and either re-rolls on errors or invokes something to fix them.
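That wrapper can be a simple bounded loop. A hedged sketch in Go, where `generate` and `compileCheck` are hypothetical stand-ins for the model call and the compiler invocation, not real APIs:

```go
package main

import "fmt"

// fixLoop asks the model for code, tries to compile it, feeds any
// compiler errors back into the prompt, and retries up to maxTries
// times. generate and compileCheck are injected stand-ins.
func fixLoop(
	prompt string,
	maxTries int,
	generate func(string) string,
	compileCheck func(string) (ok bool, errs string),
) (string, bool) {
	input := prompt
	for i := 0; i < maxTries; i++ {
		code := generate(input)
		if ok, errs := compileCheck(code); ok {
			return code, true
		} else {
			// Re-roll with the compiler errors appended as context.
			input = prompt + "\ncompiler errors:\n" + errs
		}
	}
	return "", false
}

func main() {
	// Fake "model": emits broken code first, fixed code once the
	// prompt contains compiler feedback.
	gen := func(in string) string {
		if len(in) > len("add two ints") {
			return "func add(a, b int) int { return a + b }"
		}
		return "func add(a, b int) int { return a + }"
	}
	check := func(code string) (bool, string) {
		if code == "func add(a, b int) int { return a + b }" {
			return true, ""
		}
		return false, "syntax error: unexpected }"
	}
	code, ok := fixLoop("add two ints", 3, gen, check)
	fmt.Println(ok, code)
}
```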
I think the more you can shift to compile time the better when it comes to agents. Go is therefore 'ok', but the type system isn't as useful as other options.
I would say Rust is quite good for just letting something churn through compiler errors until it works, and then you're unlikely to get runtime errors.
I haven't tried Haskell, but I assume that's even better.
I think Rust is great for agents, for a reason that is rarely mentioned: unit tests are in the same file. This means that agents just "know" they should update the tests along with the source.
With other languages, whether it's TypeScript/Go/Python, even if you explicitly ask agents to write/run tests, after a while agents just forget to do that, unless they cause build failures. You have to constantly remind them to do that as the session goes. Never happens with Rust in my experience.
Fwiw i used to do this (and with lints) - it was the only way to make Claude consistent in the early days when i first started using it (~August 2025).
For many months now though, Claude is nearly consistent with both calling test and check/clippy. Perhaps this is due to my global memory file, not sure to be honest.
What i do know is that i never use those hooks; i have them disabled atm. Why? Because the benefit is almost nonexistent as i mentioned, and the cost is at times quite high. It means i cannot work on a project piecemeal, aka "only focus on this file, it will not compile and that's okay", and instead forces claude to make complete edits which may be harder to review. Worst of all, i have seen it get into a loop and be unable to exit. E.g. a test fails and claude says "that failure is not due to my changes" or w/e, and it just does that forever, on loop. Burns 100% of the daily tokens pretty quick if unmonitored.
Fwiw i've not looked to see if there's an alternate way to write hooks. It might be worth having the hook only suggest, rather than forcing claude. Alternatively, maybe i could spawn a subagent to review if stopping claude makes sense.. hmm.
Haskell is great, for what it's worth, but as with any language you have to rein in the AI's excessive verbosity. It will stack abstractions to the moon even for simple projects, and Haskell's strengths for humans in this regard are weaknesses for AI - different weaknesses than other languages, but still, TANSTAAFL.
I am trying out building a toy language hosted on Haskell and it's been a nice combo - the toy language uses dependent typing for even more strictness, but simple regular syntax which is nicer for LLMs to use, and under the hood if you get into the interpreter you can use the full richness of Haskell with less safety guardrails of dependent typing. A bit like safe/unsafe Rust.
> Haskell is great, for what it's worth, but as with any language you have to rein in the AI's excessive verbosity. It will stack abstractions to the moon even for simple projects, and Haskell's strengths for humans in this regard are weaknesses for AI - different weaknesses than other languages, but still, TANSTAAFL
I haven't had this problem with Opus 4.5+ and Haskell. In fact, I get the opposite problem and often wish it was more capable of using abstractions.
I guess it might be something with the subject matter and how I'm prompting. I prefer somewhat more imperative haskell though so that's probably a taste thing.
+1 to Rust - if we're offloading the coding to the clankers, might as well front-load more complexity cost to offload operational cost. Sure, it isn't a particularly ergonomic or simple language but we're not the ones who have to use it.
I've been cruising on rust too, not just because it works great for LLMs but also the great interop:
- I can build SPAs with typescript and offload expensive operations to a rust implementation that targets wasm
- I can build a multi-platform bundled app with Tauri that uses TS for the frontend, rust for the main parts of the backend, and it can load a python sidecar for anything I need python for (ML stuff mainly)
- Haven't dived too much into games but bevy seems promising for making performant games without the overhead of using one of the big engines (first-class ECS is a big plus too)
It ended up solving the problem of wanting to use the best parts of all of these different languages without being stuck with the worst parts.
> I think the more you can shift to compile time the better when it comes to agents
not borne out by evidence. rust is bottom-mid tier on autocoderbenchmark. typescript is marginally better than js
shifting to compile time is not necessarily great, because the llm has to vibe its way through code in situ. if you have to have a compiler check your code it's already too late, and the llm does not have your codebase in its weights; a fetch to read the types of your functions is context expensive since it's nonlocal.
i mean, as a first order approximation, context (the key resource that seems to affect quality) doesn't depend on real compilation speed; presumably the agent is suspended and not burning context while waiting for compilation
I was asking on Mastodon whether people have tried leveraging very concise, high-level languages like Haskell or Prolog with 2025 LLMs. I'm really, really curious.
Jane Street had a cool video about how you can address lack of training data in a programming language using llm patching. Video is called "Arjun Guha: How Language Models Model Programming Languages & How Programmers Model Language Models"
The big takeaway is that you can "patch" LLMs and steer them to correct answers in less-trained programming languages, allowing for superior performance. Might work here. Not a clue how to implement it, but stuff like llm-to-doc and the like makes me hopeful.
I've just spent 2 weeks vibe-coding a pretty complex Python+Next.js app. I've forced Codex into TDD, so everything(!) has to be tested.
So far, it is really really stable and type errors haven't been a thing yet.
Not wanting to disagree, I am sure with Rust, it would be even more stable.
What will you use for dependent types, Idris 2? Lean? None are as popular as Rust especially counting the number of production level packages available.
This is quite sad to see someone react to a comment they disagree with by assuming that different opinion is paid for. I'd love it if you dug into my comment history and found even a shred of evidence that I'm being paid to talk positively about my programming language of choice.
All comments are paid for in some way, even if only in "warm fuzzies". If that is sad, why are you choosing to be sad? But outlandish comments usually require greater payment to justify someone putting in the effort. If you're not being paid well, what's the motivation to post things you know don't make any sense to try and sell a brand?
No, unless you mean the problem of over-engineering? In which case, yes, that is a realistic concern. In the real world, tests are quite often more than good enough. And since they are good enough they end up covering all the same cases a half-assed type system is able to assert anyway by virtue of the remaining logic needing to be tested, so the type system doesn't become all that important in the first place.
A half-assed type system is helpful for people writing code by hand. Then you get things like the squiggly lines in your editor and automated refactoring tools, which are quite beneficial for productivity. However, when an LLM is writing the code, none of that matters. It doesn't care one bit whether the failure report comes from the compiler or the test suite. It is all the same to it.
Have also wondered how Haskell would be. From my limited understanding it’s one of the few languages whose compiler enforces functional purity. I’ve always liked that idea in theory but never tried the language
You can write in it like in imperative languages. I did that when I first encountered it a long time ago, back when I didn't know how to write code in a functional way, or why I should. It's like how you can write in an object-oriented way in plain C. It's possible, and it's a good thought experiment, but it's not recommended. So purity is definitely not "enforced" in a strict sense.
There's no special keyword, just a "generic" type `IO<T>` defined in standard library which has a similar "tainting" property like `async` function coloring.
Any side effect has to be performed inside `IO<T>` type, which means impure functions need to be marked as `IO<T>` return. And any function that tries to "execute" `IO<T>` side effect has to mark itself as returning `IO<T>` as well.
You basically compose a description of the side effects and pass this value representing those to the main handler which is special in that it can execute the side effects.
For the rest of the codebase this is simply an ordinary value you can pass on/store etc.
I think the intersection of FP and current AI is quite interesting. Purity provides a really tightly scoped context, so it almost seems like you could have one 'architect' model design the call graph/type skeleton at a high level (function signatures, tests, perf requirements, etc.) then have implementers fill them out in parallel.
> Lifetimes are a global property and LLMs are not particularly good at reasoning about them compared to local ones.
Huh? Lifetime analysis is a local analysis, same as any other kind of type checking. The semantics may have global implications, but exposing them locally is the whole point of having dedicated syntax for it.
> Lifetime analysis is a local analysis, same as any other kind of type checking
That's what the compiler is doing.
The developer (or LLM) is supposed to do the global reasoning so that what they end up writing down makes semantic sense.
Sure, throwing a bunch of variants at it and see what sticks is certainly an approach, but "lifetimes check out" only proves that the resulting code will be memory safe, not that it actually makes sense.
I built an agent with Go for the exact reasons laid out in the article, but did consider Rust. I would prefer it to be Rust actually. But the #1 reason I chose Go is token efficiency. My intuitive sense was that the LLM would have to spend a lot of time reasoning about lifetimes, interpreting and fixing compiler warnings, etc.
I've built tools with both Go and Rust as LLM experiments, and it is a real advantage for Go that the test/compile cycle is much faster.
I've been successful with each, I think there's positives and negatives to both, just wanted to mention that particular one that stands out as making it relatively more pleasant to work with.
"LLM would have to spend a lot of time reasoning about lifetimes"
Let's set aside the fact that Go is a garbage collected language while Rust is not for now...
Do you prefer to let LLM reason about lifetimes, or debugging subtle errors yourself at runtime, like what happens with C++?
People who are familiar with the C++ safety discussion understand that lifetimes are like types -- they are part of the code and are just as important as the real logic. You cannot be ambiguous about lifetimes yet be crystal clear about the program's intended behavior.
For many (most) types of objects, lifetime can be a runtime property just fine. Take a list: in Rust/C/C++ you have to make an explicit decision about how long it should be "alive", whereas a managed language's assumption that an object lives as long as it is reachable is completely correct, and it has the benefit of fluidly adapting to future code changes, lessening maintenance costs.
Of course there are types where this is not true (file handles, connections, etc.), and managed languages usually don't have features as good as C++/Rust's (RAII) for dealing with these.
It's not a waste of time though. Those warnings and clippy lints are there to improve the quality of the code and to find bugs.
As a human I can just decide to write quality code (or not!), but LLMs don't understand when they're being lazy or stupid and so need to have that knowledge imposed on them by an external reviewer. Static analysis is cheap, and more importantly it's automatic. The alternative is to spend more time doing code review, but that's a bottleneck.
I've never actually seen it get a compiler issue arising from lifetimes, so it seems to one-shot that stuff just fine. Although my work is typically middle of the road, non-HFT trading applications, not super low-level.
Most LLMs sucked at Rust at the beginning because there's much less Rust code available on the broad internet.
I suspect the providers started training specifically on it because it appeared proportionally much more in actual LLM usage (obviously much less than more mainstream languages like Python or JavaScript, but I wouldn't be surprised if there were more LLM queries about Rust than about C, for demographic reasons).
Nowadays even small Qwens are decent at it in one-shot prompts, or at least much better than GPT-4 was.
That matches actual Rust use, actually. I've worked with Rust since 2017 on multiple projects and the number of times I've used lifetime annotations has been very limited.
It's actually rare to have to borrow something and keep the borrow in another object (which is where lifetimes come in); most of the time (95% at least, I'd say) you borrow something and then drop the borrow, or move the thing.
Why is this a meaningful distinction to you? What does "reason" mean here? Can we construct a test that cleanly splits what humans do from what LLMs do?
Haskell works pretty well with agents, particularly when the agent is LSP-capable and you set up haskell-language-server. Even less capable models do well with this combo. Without LSP works fine but the fast feedback loop after each edit really accelerates agents while the intent is still fresh in context
I've been using LLMs (Opus) heavily for writing Haskell, both at work and on personal projects, and it's shockingly effective.
I wouldn't use it for the galaxy brain libraries or explorations I like to do for my blog but for production Haskell Opus 4.5+ is really good. No other models have been effective for me.
I am guessing there is a balance between a language that has a lot of soundness checks (like Rust) and a language that has a ton of example code to train on (like Python). How much more valuable each aspect is I am not sure.
- Rust code generates absolutely perfectly in Claude Code.
- Rust code will run without GC. You get that for free.
- Rust code has a low defect rate per LOC, at least measured by humans. Google gave a talk on this. The sum types + match and destructure make error handling ergonomic and more or less required by idiomatic code, which the LLM will generate.
I'd certainly pick Rust or Go over Python or TypeScript. I've had LLMs emit buggy dynamic code with type and parameter mismatches, but almost never statically typed code that fails to compile.
In this benchmark, models correctly solve Rust problems on the first pass only 61% of the time, a far cry from other languages such as C# (88%) or Elixir (a "buggy dynamic language"), where they perform best (97%).
I wonder why that is, it’s quite surprising. Obviously details of their benchmark design matter, but this study doesn’t support your claims.
The downside is that even simple Rust projects typically use hundreds of dependencies, and this is even worse with LLMs, who don’t understand the concept of “less is more”.
Of my friend group the two people I think of as standout in terms of getting useful velocity out of AI workflows in non-trivial domains (as opposed to SaaS plumbing or framework slop) primarily use Haskell with massive contexts and tight integration with the dev env to ground the model.
I have let Gemini, Claude Code and Codex hallucinate the language they wanted to for a few days. I prompted for "design the language you'd like to program in" and kept prompting "go ahead". Just rescued it from a couple too deep rabbit holes or asked it for some particular examples to stress it a bit.
It's a weird-ass Forth-like but with a strong type system, contracts, native testing, fuzz testing, and a constraint solver for integer math backed by Z3. The interpreter is implemented in Elixir.
In about 150 commits, everything it has done has always worked without runtime errors, both the Elixir interpreter and the examples in the hallucinated language, some of them non-trivial for a week-old language (a JSON parser, a DB-backed TODO web app).
It's a deranged experiment, but on the other hand it seems to confirm that "compile"-time analysis plus extensive testing facilities help LLM agents a lot, even for a weird language that they have to write purely from in-context reference.
Don't click if you value your sanity; the only human-generated thing there is the About blurb:
Interesting project, but I believe the base assumption is already slightly wrong. Why do we assume that LLMs know what kind of language would benefit them? This information is not knowable without doing proper research, and even if there is some research like that, it would have to be a part of the training data. Otherwise it's just hallucination.
I agree, it's mostly a silly whim taken too far. Too much time on my hands.
In particular the whole stack based thing looks questionable.
In fact the very first answer by Gemini proposed an APL-like encoding of the primitives for token saving, but when I started the implementation Claude Code pushed back on that, saying it would need to keep some sane semantics around the keywords to be able to understand the programs.
The very strict verification story seems more plausible, tracks with the rest of the comments here.
What has surprised me is that the language works at all; adding todo items to a web app written in a week-old language felt a bit eerie.
Wow that is wild, that is exactly along the lines of my fantasy language. It'd be so easy to go into the deep end building tooling and improving a language like this.
This is actually quite impressive, especially as AI vibe-coded slop. How easy is the language to learn for novice coders, compared to other FORTH lookalikes?
There's a lot of language for such a little time, but if you have programmed any Forth it should be easy to pick up, have a look at some of the top level examples.
I have written about 3 Forth implementations by hand throughout the years for fun, but I have never been able to really program in it, because the stack wrangling confuses me enormously.
So for me anything vaguely complex is unreadable, but apparently not for the LLMs, which I find surprising. When I have interrogated them, they say they like the lack of syntax more than the stack ops hamper them, but that might just be a hallucinated impression.
When they write Cairn I sometimes see stack related error messages scroll by, but they always correct them quickly before they stop.
- Strongly typed, including GADTs and various flavors of polymorphism, but not as inscrutable as Haskell
- (Mostly) pure functions, but multiple imperative/OO escape hatches
- The base language is surprisingly simple
- Very fast to build/test (the bytecode target, at least)
- Can target WASM/JS
- All code in a file is always evaluated in order, which means it has to be defined in order. Circular dependencies between functions or types have to be explicitly called out, or build fails.
I should add, it's also very fun to work with as a human! Finding refactors with pure code that's this readable is a real joy.
Strong agree. OCaml's compiler is sofa king good at catching and preventing real bugs the agents accidentally introduce here and there. It's the same as with humans, except the agents don't complain about syntax or multicore; they just power through and produce high-quality output.
How's the multicore and async story these days? I remember that was one of the big draws of F# originally: it had all (or most of) the type safety features of OCaml but all the multicore of dotnet. (Plus it created async before even C# had it.) Has OCaml caught up?
OCaml has full multicore support with algebraic effects now. The effect system makes things like async very nice as there's no function "coloring" problem: https://discuss.ocaml.org/t/ocaml-5-0-0-is-out/10974
But I don't believe the effects are tracked in the type system yet; that's on its way.
The type system for effects is an ongoing research effort. For now you get unhandled effect exceptions at runtime.
With Multicore OCaml we gained thread sanitizer support and a reasonable memory model. Combined they give you tools for reasoning about data races and finding them. https://ocaml.org/manual/5.3/tsan.html
Strongly agree, plus OCaml has an expressive type system that lets you build abstractions that just aren’t possible with Go. The original article gives poor reasons for choosing Go.
what would you prefer? i liked rust a lot as i found the compiler feedback loop pretty great, but the language was much more verbose and i found the simplicity of Go to be great, and the typing system is good enough for almost everything.
I have a feeling F# would work great, but unfortunately we don't use it at work so I can't experiment with the fancy expensive models. Only problem might be amount of training data.
- features a bit more actual data than “intuitions” compared to OP
- interesting to think about in an agent context specifically is runtime introspection afforded by the BEAM (which, out of how it developed, has always been very important in that world) - the blog post has a few notes on that as well
Yeah, Go is probably the best general purpose language at the moment.
Rust is great, but there's no need to manage memory manually if you don't need to.
So for general mainstream languages, that leaves ... Python. Sure, it's ok but Go has strong typing from the start, not bolted on with warts.
(I realized how incredibly subjective this comment turned out to be after I had written it. Apologies if I omitted or slighted your fave. This is pretty much how I see it).
For me Go is like the 80% language. I like TypeScript as well, but Go is just such a reliable workhorse I'd say? it's not "sexy" but it's just satisfying how it's just these simple building blocks that you can build extremely complex software with
Go has govulncheck[0] for static analysis of vulnerabilities in both code and binaries. The govulncheck tool has first-class support in the Go ecosystem. No other language has this level of integration with a static analyzer and at best will only analyze for known vulnerable modules (PLEASE CORRECT ME IF I'M WRONG).
Not understanding the difference between this and something like cargo audit[0]. I suppose it has something to do with "static analysis of vulnerabilities" but I don't see any of that from a quick google search of govulncheck.
govulncheck analyzes symbol usage and only warns if your code reaches the affected symbol(s).
I’m not sure about cargo audit specifically, but most other security advisories are package scoped and will warn if your code transitively references the package, regardless of which symbols your code uses.
It sounds like you think govulncheck can analyze your code and detect vulnerabilities that you wrote in your code. That's not what it does. It analyzes the libraries you use and determines whether you are using them in a vulnerable way. For a free tool, govulncheck is somewhat nicer than average in its class because it does call-flow analysis and won't claim you're vulnerable just because you used a module; you have to actually have a call that could reach the vulnerable code. But "somewhat nicer than average" is as far as I would take it. Many languages have similar tools, and when you say "static analyzer" this isn't what I have in mind. For that I'd cite golangci-lint, which is a collection of community-built analysis tools, and it's nice to be able to pick them all up in one fell swoop, but they're nothing like Coverity or any real static analysis tool.
You're correct about govulncheck's integration; it significantly enhances Go's maintainability for large projects. Other languages often depend on external tools that lack the same level of usability and depth as Go's offerings.
I've read these arguments and they make perfect sense, but having tried different projects rewritten in Go vs Python (with Claude & Cursor), Python was just significantly faster, smaller, and easier for Claude to understand. It was done faster and made fewer mistakes. I don't mean faster as execution time: the code for its Python projects was almost an order of magnitude smaller, so it was done by the time its Go counterpart was halfway. Maybe it's gotten better, or I need some kind of "how to Go" skill for Claude... but it just didn't work out of the box _for me_ as well as Python did. I tried a couple of projects rewritten in different languages: Go, Kotlin, Python, JavaScript. I settled on Python. (My own background is in Kotlin, Java and C++.)
Great discussion! As someone who works with AI coding agents daily, my take is that the "best" language really depends on what the agent is building. Go's simplicity and predictability are huge for general-purpose agents, but I've found TypeScript shines for agents that live in the web ecosystem - interacting with APIs, browser automation, etc. The ecosystem alignment matters a lot. Python will always have a stranglehold on data/ML workloads simply because that's where the libraries are. The key insight might be: pick the language that matches your agent's domain, not just what the LLM generates best.
I think Go isn't bad choice. It is widely popular, so I'd assume there's plenty of it in training sets and has stable APIs, so even "outdated code" would work. There's also rich ecosystem of static analyzers to keep generated code in check.
On the other hand I think Rust is better by some margin. The type system is obviously a big gain, but Rust is very fast-moving. When APIs change, LLMs can't follow, and it takes many tries to get it right, so it kind of levels out. Code might compile, but only on some god-forsaken crate version everybody (but the LLM) forgot about.
From personal experience, Haskell benefits the most. Not only does it use the type system more heavily than Rust, but its APIs move at a snail's pace, so it doesn't suffer from Rust's outdated-API problem: code that compiles will work just fine.
Also, I think Haskell code in training sets is guaranteed to be safe because of the language extension system.
How are the generated Haskell programs? I imagine much shorter than Go and easier to eyeball for correctness, but can’t say as I’m not fluent in it. LLM-generated procedural Python scripts are very readable in my experience.
Is Go the best programming language for AI agents? I don't think so.
But what makes Go useful is the fact that it compiles to an actual executable you can easily ship anywhere - and that is actually really good considering that the language itself is super easy to learn.
I've recently started building some AI agent tools with it, and so far the experience has been great.
Shameless plug - I sort of alluded to this in a post I wrote about Dark Factories generally, and about Rust being better than Go for building software (not just agents) with AI - but I think something generally important is feedback loops. While not all feedback loops are created equal and some will be superior, my argument is that a holistic approach of including diverse, valuable feedback loops matters more.
For me it is an open question whether coding-training-data "purity" matters. Python beats Go on volume, but within that is a ton of API changes, language changes, etc. Is that free regularization, or does it poison the dataset? As the author points out, Go code is close to canonical: basically all published Go code looks the same, and the library APIs are frozen in time to some degree.
I actually spent some time trying to get to the bottom of what a logical extension of this would be: an entirely made-up language spec for an idealized language the model has never seen, and therefore has no bad examples of. Go is likely the closest, for the many reasons people call it boring.
Yeah, I don't care for go but I expect it to win here. Its performance is good enough for most use cases, it has a huge ecosystem of libraries, lots of training data, and deploys as a binary so users don't need to install anything else.
I expect rust to gain some market share since it's safe and fast, with a better type system, but complex enough that many developers would struggle by themselves. But IME AI also struggles with the manual memory management currently in large projects and can end up hacking things that "work" but end up even slower than GC. So I think the ecosystem will grow, but even once AI masters it, the time and tokens required for planning, building, testing will always exceed that of a GC language, so I don't see it ever usurping go, at least not in the next decade.
I wish the winner would be OCaml, as it's got the type safety of rust (or better), and the development speed of Go. But for whatever reason it never became that mainstream, and the lack of libraries and training data will probably relegate it to the dustbin. Basically, training data and libraries >>> operational characteristics >>> language semantics in the AI world.
I have a hard time imagining any other language maintaining a solid advantage over those two. There's less need for a managed runtime, definitely no need for an interpreted language, so I imagine Java and Python will slowly start to be replaced. Also I have to imagine C/C++ will be horrible for AI for obvious reasons. Of course JS will still be required for web, Swift for iOS, etc., but for mainstream development I think it's going to be Rust and Go.
> But for whatever reason it never became that mainstream
Syntax. Syntax is the reason. It's too foreign to be picked up quickly by the mass of developers that already know a C style language. I would also argue that it's not only foreign, it's too clunky.
The syntax is ridiculously simple, and I can't in good conscience allow OCaml to be called clunky in a thread about a language that solved error handling with record-like interfaces and multiple return values.
This is an opinion piece without any benchmarks, some valid points there but all anecdotal. Hard to take it seriously, feels like cargo culting into a preference.
Edit: cool article. I have myself speculated that we will get a new language made for/by LLMs that will be torture to write by hand/IDE but easy to read/follow/navigate/check for a human, and super easy for LLMs to develop and maintain.
Language models need redundancy (as informing structure). Not surprising, since they're trained on human language. It's hard to train a model on a language with a high entropy. I haven't tried it, but I think LLMs would perform quite badly on languages such as APL, where structure and meaning are closely intertwined.
Static compiling is a minus not a plus. Dynamic languages like Clojure allow agents to REPL and prod with the code live, and follow Verified Spec-Driven development a whole lot better. Lisp-like languages allow agents to create the exact data structure they need for every problem.
I wonder if this is why there's been a huge uptick in the visibility of Go-related content. I've seen more posts about Go in the last few days than I had in the last year.
Right now, I'd say the best language for AI is the one that you can review the fastest and rarely changes. Go is fairly readable imo and never changes so it is probably a good contender. But, I can't see any reason for anyone to learn it if they don't feel like it. Same goes for other "language X is good for AI" type posts.
Clojure is awesome for LLMs (if you shim in an automatic paren balancer).
But that's because it's tight, token efficient, and above all local. Pure functions don't require much context to reason about effectively.
However, you do miss the benefit of types, which are also good for LLMs.
The "ideal" LLM language would have the immutability and functional nature of Clojure combined with a solid type system.
Haskell or OCaml immediately come to mind, but I'm not sure how much the relative lack of training data hurts... curious if anyone has any experiences there.
Clojure is definitely dense. I’m wondering, though, about the languages’ representation in the training data.
Stack overflow tags:
17,775 Clojure
74,501 Go
I’m not finding a way to get any useful information from GitHub, e.g. count of de-duplicated lines of code per language. There might be something in their annual “Octoverse” report but I haven’t drilled into it yet: https://github.blog/news-insights/octoverse/octoverse-a-new-...
I was prototyping to this end the other day - what would it be like for a coding agent to have access to a language that can be:
- structurally edited, ensuring syntactic validity at all times
- annotated with metadata, so that agents can annotate the code as they go and refer back to accreted knowledge (something Clojure can do structurally using nodepaths or annotations directly in code)
- put into any environment you might like, e.g. using ClojureScript
I haven't proven to myself this is more useful/results in better code than just writing code "the normal way" with an agent, but it sure seems interesting.
Every agent I've seen in Go has been so straightforward. Take exe.dev's Shelley. Great example of clean code and very effective tooling. Worth a try if you haven't used it.
Hi, author here, thanks! I have used TypeScript before across various projects, but I haven't considered building CLI tooling in that before, I guess due to my prejudice against the whole JS ecosystem. I plan to give it another try in the next weeks.
As long as python runs all the models, the best language for agents is likely Python as it allows e.g. auto-fine-tuning of (local) LLMs for self-improving agents without the need to change the programming language. Use Pydantic if you care about type/runtime errors.
I had a lot of success when having agents write D code. The results for me have been better than with C# or C++. I hadn't considered Go. Does anybody have some experience about how D fares vs. Go?
Intuitively I expect this. Go is a language designed by Rob Pike to keep legions of high IQ Google engineers constrained down a simple path. There's generally one way to do it in Go.
As a human programmer with creative and aesthetic urges as well as being lazy and having an ego, I love expressive languages that let me describe what I want in a parsimonious fashion. ie As few lines of code as possible and no boilerplate.
With the advances in agent coding none of these concerns matter any more.
What matters most is whether you can easily look at the code and understand the intent clearly. That the agent doesn't get distracted by formatting. That the code is relatively memory-safe, type-safe, avoids null issues, and cannot ignore errors.
I dislike Go but I am a lot more likely to use it in this new world.
The most striking thing about Go codebases is that, for the most part (there are exceptions), they all look the same. You can choose a random repository on GitHub and be hard-pressed to not think that you wrote it yourself. Which also means that LLMs are likely to produce code that looks like you wrote it yourself. I do think that is one thing Go has going for it today.
But for how long will it matter? I do wonder if programming languages as we know them today will lose relevance as all this evolves.
Go error handling is so bad that it ruins the language for me. But it might accidentally be an advantage here, because LLMs notoriously don't know how to handle exceptions properly. They'll do stuff like catch-log-ignore deep in the stack.
This is a great article, thank you for sharing. The four languages I've homed in on with respect to AI agents are Rust, Python, C, and Go. Python has a foothold in the tooling for creating AI based on training large language models, with frameworks including PyTorch and TensorFlow. As long as Python is the language used to create AI, it will also be a great language for AI to code in.
The most important downside of Python is that it doesn't compile to a native binary that the OS can recognize and it's much slower. However, it's a great "glue" for different binaries or languages like Rust and Go.
Rust is the increasingly popular language for AI agents to choose from, often integrated into Python code. The trend is on the side of Rust here. I don't want to mention all the great points from the original poster. One technical point that wasn't mentioned, from my experience, is that the install size is too large for embedded systems. As the article mentioned, the build times are also longer than Go and this is an even worse bottleneck on embedded systems. I prefer Go over Rust in my research and development but I yield to other developers on the team professionally.
What about C/C++? At the moment, I've had great success implementing C++ code through agentic AI. However, there is a dearth of frameworks for things like web development. Because CPython is implemented in C, and integrating C modules into Python is relatively straightforward, I find myself following the NumPy approach, where C is the backbone of performance-critical features.
Personally, I still actively use code I wrote more than 10 years ago that's battle-tested, peer-reviewed, and production-ready. The above comments are about the current state, but what about the future? Another point that wasn't mentioned is Go's software license. It's BSD-3 with a patent grant, which is more permissive than Rust's MIT + Apache 2.0 licensing. This matters for the future viability of software, because given enough time and all other things equal, more permissively licensed software tends to win out in adoption.
The rabbit hole goes deeper. I think we will sacrifice Rust as the "good-enough" programming language to spoil the ecosystem with Agentic AI before its redemption arc. Only time will tell, but Python's inability to compile to a native binary makes it a bad choice for malware developers. You can fill in the blank here. Perhaps the stage has already been set, and it looks like Rust will be the opening act now that the lights are on.
- I agree that go's syntax and concepts are simpler (esp when you write libraries, some rust code can get gnarly and take a lot of brain cycles to parse everything)
- > idiomatic way of writing code and simpler to understand for humans - eh, to some extent. I personally hate go's boilerplate of "if err != nil" but that's mainly my problem.
- compiles faster, no question about it
- more go code out there allowing models to generate better code in Go than Rust - eh, here I somewhat disagree. The quality of the code matters as well. That's why a lot of early python code was so bad. There just is so much bad python out there. I would say that code quality and correctness matters as well, and I'd bet there's more "production ready" (heh) rust code out there than go code.
- (go) it is an opinionated language - so is rust, in a lot of ways. There are a lot of things that make writing really bad rust code pretty hard. And you get lots of protections for foot meets gun type of situations. AFAIK in go you can still write locking code using channels. I don't think you can do that in rust.
- something I didn't see mentioned is error messages. I think rust errors are some of the best in the industry, and they are sooo useful to LLMs (I've noticed this ever since coding with gpt4 era models!)
I guess we'll have to wait and see. There will be a lot of code written by agents going forward, we'll be spoiled for choice.
I independently came to this conclusion myself a few months ago. I don't particularly enjoy working with Go. I find it to be cumbersome and tedious to write by hand. I find the syntax to be just different enough from C++ or C# to be irritating. Don't get me started on the package versioning system.
But it does have the benefit of having a very strong "blessed way of doing things", so agents go off the rails less, and if claude is writing the code and endless "if err != nil" then the syntax bothers me less.
Go's fast compile times (feedback) are good for dumb models. Smarter ones are more likely to get it right and can therefore use languages with richer type systems.
I thought about this for a while and came to a conclusion that while "code is free", tokens are not. If tokens were free and instant, it would generate machine code directly. Therefore, it needs abstractions like a compiled or interpreted language in order to address the token bottleneck.
My experience is that AI agents are not that good with Go. Not sure why but I think it is down to the low code quality of many major open source projects in Go.
You had me at compile-time bug catching, strong typing, and static typing.
With Go, it will increasingly become the case that one has to write the design doc carefully, with constraints; for semi-technical/coder folks that makes a lot of sense.
With Python, make-believe is easy (I've seen it multiple times myself). But don't you think a coding agent/LLM would have to be quite malicious to slip make-believe logic into a compiled language, compared with interpreted ones?
Strange article. Why is Go the best language for agents instead of, say, Python? Here are the points the author seems to make:
---
# Author likes go
Ok, cool story bro...
# Go is compiled
Nice, but Python also has syntax and type checking -- I don't typically have any more luck generating more strictly typed code with agents.
# Go is simple
Sure. Python for a long time had a reputation as "pseudocode that runs", so the arguments about go being easy to read might be bias on the part of the author (see point 1).
Is that a big deal if you don't need to build binaries at all?
# Agents know Go
Agents seem to know python as well...
---
Author seems to fall short of supporting the claim that Go is better than any other language by any margin, mostly relying on the biases they have that Go is a superior language in general than, say, Python. There are arguments to be made about compiled versus interpreted, for example, but if you don't accept that Go is the best language of them all for every purpose, the argument falls flat.
I would say Go is better than Python for two reasons:
1) Go runs faster, so if you're not optimizing for dev time (and if you're vibe coding, you're not) then it's a clear winner there
2) Python's barrier to entry is incredibly low, so intuitively there's likely a ton of really terrible python code in the training corpus for these tools
[0]: https://sancho.studio
[1]: https://github.com/zoom/zoom-e2e-whitepaper
It's an even more popular language with even more training data and also has a better type system so more validation on LLM output, etc.
It really felt like using AI tooling of a year or two ago. It wasn't understanding my prompts, was going on tangents, and wasn't following the existing style and idioms. Maybe Claude was hungover or doesn't like Mondays, but the contrast with Go was surprising.
One example: I wanted to add an extra Prometheus metric to keep track of an edge case in some for loop. All it had to do was define a counter and increment it. For some reason it would define the counter on the line before incrementing it, instead of defining it next to the other counters outside of the for loop. Technically not wrong (defining a counter is idempotent), but who does that? Especially when the other counters are defined elsewhere in the same function?
Anyway, n=1 but I feel it has an easier time with Go.
My n=1 is that it is pretty good with Java, on par with other popular languages like Python and JS, in line with these 3 probably being a good chunk if not the majority of training data.
Newer features fit very nicely and didn't increase the language surface (records are just a normal class with some methods auto-generated, while sealed types are just a restriction on who can subtype an interface -- and yet these give full ADT support for the language that improves readability and type safety).
I personally think neither Go nor Java would be good for "agents". Better to have them sandboxed in WASM.
Of course, writing in a language that compiles to Wasm is certainly a way, but you would also have to sandbox all the other tools used during development (e.g. agents can just call grep/find/etc.).
Do you think you might perhaps have a bias in the same way that my 9+ years of Typescript usage and advocacy would cause me to have a bias or a material interest?
There is nothing non-trivial you can make that involves the web that is better with Go than Typescript. I look at your personal page and I see that you're already struggling to manage state and css and navigation, or that those things aren't interesting to you.
This tells me you have limited web experience, just as I have limited experience making build scripts at Google and you would probably find my server-side concurrency fairly crude.
Still, that you lump Python and TypeScript together as "equally frustrating for LLMs" tells me you are not speaking from direct experience. The lumping together of TypeScript and Python feels really, empirically wrong to me as someone with a foot in both of those worlds.
> When LLMs have to navigate Python and TypeScript there is a massive combinatorial space of frameworks, typing approaches, and utility libraries.
I'm right there with you on Python! But lumping static and dynamic languages together is not correct here. Most Python code comes from a fragmented ecosystem that took 10+ years to migrate from 2 to 3, often with no indication in the corpus of even which major version it is, and typing caught on very slowly. That's going to be a major problem for a long time, whereas no recent LLM has ever confused .js for .ts or suddenly started writing Node v12 and Angular into a Node 22 and Vue project.
I'm happy to throw down the gauntlet if you ever want to have a friendly go vs typescript vibe-code off that spans a reasonably sophisticated full-stack project over three or four hours of live coding.
If you feel like I'm a mean person and attacking you for wanting proof that Typescript is not at parity or superior to Go in terms of LLM legibility, I still would really like you to consider how you can demonstrate your virtuosity and value judgements best.
Python doesn't need path dependence to prove its merit. There's a reason it is one of the major programming languages and was number one for a while.
I think this is true, but it misses a very key point. Go does an impressively bad job of designing APIs that are difficult to misuse, so LLMs will misuse them, and you'll also need to write unit tests to walk through the code just to validate that it used the libraries correctly. This isn't always possible (or is awkward/cumbersome) for certain scenarios, like database queries.
All of the reasons people argue Go is good for LLMs are more true for Rust. You and the LLM can design libraries to be difficult to misuse, and then get instant feedback from the compiler to the LLM about what it did wrong, and often with suggestions about how it should fix them! This also makes RL deriving from compiler feedback more effective.
This allows the LLMs to reason more abstractly at larger scales, since the abstractions are less leaky (unlike in Go). The ceiling on abstraction screws you here, since troubleshooting requires more deep diving. It's the same reason Go projects become difficult for humans at large scales, too.
With Go, async code written in Go 1.0 compiles and runs the same in Go 1.26, and there is no fragmentation or necessity to reach for third party components.
Setting aside the problem of wrong-but-compiling code, wrong and non-compiling code is also much easier to deal with. For training an LLM, you have an objective fitness function for detecting compilation errors.
For using an LLM, you can embed the LLM itself in a larger system that checks its output and either re-rolls on errors or invokes something to fix them.
I would say Rust is quite good for just letting something churn through compiler errors until it works, and then you're unlikely to get runtime errors.
I haven't tried Haskell, but I assume that's even better.
With other languages, whether it's TypeScript/Go/Python, even if you explicitly ask agents to write/run tests, after a while agents just forget to do that, unless they cause build failures. You have to constantly remind them to do that as the session goes. Never happens with Rust in my experience.
For many months now though, Claude is nearly consistent with both calling test and check/clippy. Perhaps this is due to my global memory file, not sure to be honest.
What I do know is that I never use those hooks; I have them disabled atm. Why? Because the benefit is almost nonexistent, as I mentioned, and the cost is at times quite high. It means I cannot work on a project piecemeal, aka "only focus on this file, it will not compile and that's okay", and it instead forces Claude to make complete edits, which may be harder to review. Worst of all, I have seen it get into a loop and be unable to exit. E.g. a test fails and Claude says "that failure is not due to my changes" or whatever, and it just does that... forever, on loop. It burns 100% of the daily tokens pretty quickly if unmonitored.
Fwiw i've not looked to see if there's an alternate way to write hooks. It might be worth having the hook only suggest, rather than forcing claude. Alternatively, maybe i could spawn a subagent to review if stopping claude makes sense.. hmm.
I am trying out building a toy language hosted on Haskell and it's been a nice combo - the toy language uses dependent typing for even more strictness, but simple regular syntax which is nicer for LLMs to use, and under the hood if you get into the interpreter you can use the full richness of Haskell with less safety guardrails of dependent typing. A bit like safe/unsafe Rust.
I haven't had this problem with Opus 4.5+ and Haskell. In fact, I get the opposite problem and often wish it was more capable of using abstractions.
- I can build SPAs with typescript and offload expensive operations to a rust implementation that targets wasm
- I can build a multi-platform bundled app with Tauri that uses TS for the frontend, rust for the main parts of the backend, and it can load a python sidecar for anything I need python for (ML stuff mainly)
- Haven't dived too much into games but bevy seems promising for making performant games without the overhead of using one of the big engines (first-class ECS is a big plus too)
It ended up solving the problem of wanting to use the best parts of all of these different languages without being stuck with the worst parts.
Not borne out by evidence. Rust is bottom-mid tier on autocoderbenchmark. TypeScript is marginally better than JS.
Shifting to compile time is not necessarily great, because the LLM has to vibe its way through code in situ. If you have to have a compiler check your code, it's already too late, and the LLM does not have your codebase in its weights; a fetch to read the types of your functions is context-expensive since it's nonlocal.
If you're running good agentic AI it can read the compile errors just like a human and work to fix them until the build goes through.
The big takeaway is that you can "patch" LLMs and steer them to correct answers in less-trained programming languages, allowing for superior performance. Might work here. Not a clue how to implement it, but stuff like llm-to-doc makes me hopeful.
- Rust: nearly universally compiles and runs without fault.
- Python,JS: very often will run for some time and then crash
The reason I think is type safety and the richness of the compiler errors and warnings. Rust is absolutely king here.
Not that I want to disagree; I am sure that with Rust it would be even more stable.
Does one get paid well to post these advertisements for Rust?
I hope there aren't many of your type on here.
Not to mention it has some of the slowest compilation among recent languages, if not the slowest (maybe after Kotlin).
Everything is a trade-off.
A half-assed type system is helpful for people writing code by hand. Then you get things like the squiggly lines in your editor and automated refactoring tools, which are quite beneficial for productivity. However, when an LLM is writing the code, none of that matters. It doesn't care one bit whether the failure report comes from the compiler or the test suite. It's all the same to it.
Any side effect has to be performed inside the `IO<T>` type, which means impure functions need to be marked as returning `IO<T>`. And any function that tries to "execute" an `IO<T>` side effect has to mark itself as returning `IO<T>` as well.
You basically compose a description of the side effects and pass this value representing those to the main handler which is special in that it can execute the side effects.
For the rest of the codebase this is simply an ordinary value you can pass on/store etc.
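Loosely, the same "effects as values" idea can be sketched in Go (a hypothetical analogy, not Haskell's actual `IO` machinery): pure code composes descriptions of side effects as ordinary values, and only a top-level handler actually executes them.

```go
package main

import "fmt"

// Effect is a description of a side effect, not its execution.
type Effect interface{ Describe() string }

// Print describes writing a message, without performing any I/O.
type Print struct{ Msg string }

func (p Print) Describe() string { return "print: " + p.Msg }

// greet is pure: it returns effect descriptions as plain values
// that can be passed around, stored, or inspected.
func greet(name string) []Effect {
	return []Effect{Print{Msg: "hello, " + name}}
}

// run is the special top-level handler that executes effects.
func run(effects []Effect) {
	for _, e := range effects {
		switch eff := e.(type) {
		case Print:
			fmt.Println(eff.Msg)
		}
	}
}

func main() {
	effs := greet("world") // no I/O has happened yet
	run(effs)              // side effects execute here, at the edge
}
```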
Lifetimes are a global property and LLMs are not particularly good at reasoning about them compared to local ones.
Most applications don't need low level memory control, so this complexity is better pushed to runtime.
There are lots of managed languages with good/even stronger type systems than Rust, paired with a good modern GC.
Huh? Lifetime analysis is a local analysis, same as any other kind of type checking. The semantics may have global implications, but exposing them locally is the whole point of having dedicated syntax for it.
That's what the compiler is doing.
The developer (or LLM) is supposed to do the global reasoning so that what they end up writing down makes semantic sense.
Sure, throwing a bunch of variants at it and seeing what sticks is certainly an approach, but "lifetimes check out" only proves that the resulting code will be memory safe, not that it actually makes sense.
I've been successful with each; I think there are positives and negatives to both. I just wanted to mention that particular one, which stands out as making it relatively more pleasant to work with.
Let's set aside the fact that Go is a garbage collected language while Rust is not for now...
Do you prefer to let the LLM reason about lifetimes, or to debug subtle errors yourself at runtime, as happens with C++?
People who are familiar with the C++ safety discussion understand that lifetimes are like types -- they are part of the code and are just as important as the real logic. You cannot be ambiguous about lifetimes yet be crystal clear about the program's intended behavior.
Of course there are types for which this is not true (file handles, connections, etc.), and managed languages usually don't have features as good for dealing with these as C++/Rust do (RAII).
As a human I can just decide to write quality code (or not!), but LLMs don't understand when they're being lazy or stupid and so need to have that knowledge imposed on them by an external reviewer. Static analysis is cheap, and more importantly it's automatic. The alternative is to spend more time doing code review, but that's a bottleneck.
I suspect the providers started training specifically on it because it appeared proportionally much more often in actual LLM usage (obviously much less than more mainstream languages like Python or JavaScript, but I wouldn't be surprised if there were more LLM queries about Rust than about C, for demographic reasons).
Nowadays even small Qwens are decent at it in one-shot prompts, or at least much better than GPT-4 was.
It's actually rare to have to borrow something and keep the borrow in another object (which is where lifetime annotations come in); most of the time (95% at least, I'd say) you borrow something and then drop the borrow, or move the thing.
I wouldn't use it for the galaxy brain libraries or explorations I like to do for my blog but for production Haskell Opus 4.5+ is really good. No other models have been effective for me.
- Rust code generates absolutely perfectly in Claude Code.
- Rust code will run without GC. You get that for free.
- Rust code has a low defect rate per LOC, at least measured by humans. Google gave a talk on this. The sum types + match and destructure make error handling ergonomic and more or less required by idiomatic code, which the LLM will generate.
I'd certainly pick Rust or Go over Python or TypeScript. I've had LLMs emit buggy dynamic code with type and parameter mismatches, but almost never statically typed code that fails to compile.
In this benchmark, models correctly solve Rust problems 61% of the time on the first pass, a far cry from other languages such as C# (88%) or Elixir (a "buggy dynamic language") where they perform best (97%).
I wonder why that is, it’s quite surprising. Obviously details of their benchmark design matter, but this study doesn’t support your claims.
It's a weird-ass Forth-like but with a strong type system, contracts, native testing, fuzz testing, and a constraint solver for integer math backed by Z3. The interpreter is implemented in Elixir.
In about 150 commits, everything it has done has always worked without runtime errors, both the Elixir interpreter and the examples in the hallucinated language, some of them non-trivial for a week old language (json parser, DB backed TODO web app).
It's a deranged experiment, but on the other hand it seems to confirm that "compile"-time analysis plus extensive testing facilities do help LLM agents a lot, even for a weird language that they have to write purely from in-context reference.
Don't click if you value your sanity, the only human-generated thing there is the About blurb:
https://github.com/cairnlang/Cairn
In particular the whole stack based thing looks questionable.
In fact the very first answer by Gemini proposed an APL-like encoding of the primitives for token saving, but when I started the implementation Claude Code pushed back on that, saying it would need to keep some sane semantics around the keywords to be able to understand the programs.
The very strict verification story seems more plausible, tracks with the rest of the comments here.
What has surprised me is that the language works at all; adding todo items to a web app written in a week-old language felt a bit eerie.
I have programmed about 3 Forth implementations by hand throughout the years for fun, but I have never been able to really program in it, because the stack wrangling confuses me enormously.
So for me anything vaguely complex is unreadable, but apparently not for the LLMs, which I find surprising. When I have interrogated them, they say they like the lack of syntax more than the stack ops hamper them, but that might just be a hallucinated impression.
When they write Cairn I sometimes see stack related error messages scroll by, but they always correct them quickly before they stop.
- Strongly typed, including GADTs and various flavors of polymorphism, but not as inscrutable as Haskell
- (Mostly) pure functions, but multiple imperative/OO escape hatches
- The base language is surprisingly simple
- Very fast to build/test (the bytecode target, at least)
- Can target WASM/JS
- All code in a file is always evaluated in order, which means it has to be defined in order. Circular dependencies between functions or types have to be explicitly called out, or the build fails.
I should add, it's also very fun to work with as a human! Finding refactors with pure code that's this readable is a real joy.
But I don't believe the effects are tracked in the type system yet; that's on its way, though.
With Multicore OCaml we gained thread sanitizer support and a reasonable memory model. Combined they give you tools for reasoning about data races and finding them. https://ocaml.org/manual/5.3/tsan.html
Well, if it's a choice between these four, then sure. I'm not sure that really suffices to qualify Go as "the" best language for agents, though.
“Why Elixir is the best language for AI” https://news.ycombinator.com/item?id=46900241
- for comparison of the arguments made
- features a bit more actual data than the "intuitions" of the OP
- interesting to think about in an agent context specifically is runtime introspection afforded by the BEAM (which, out of how it developed, has always been very important in that world) - the blog post has a few notes on that as well
Rust is great, but there's no need to manage memory manually if you don't need to.
So for general mainstream languages, that leaves ... Python. Sure, it's OK, but Go has strong typing from the start, not bolted on with warts.
(I realized how incredibly subjective this comment turned out to be after I had written it. Apologies if I omitted or slighted your fave. This is pretty much how I see it).
[0] https://go.dev/doc/tutorial/govulncheck
[0]https://crates.io/crates/cargo-audit
I’m not sure about cargo audit specifically, but most other security advisories are package scoped and will warn if your code transitively references the package, regardless of which symbols your code uses.
On the other hand, I think Rust is better by some margin. The type system is obviously a big gain, but Rust is very fast-moving. When an API changes, LLMs can't follow, and it takes many tries to get it right, so it kind of levels out. Code might compile, but only on some god-forsaken crate version everybody (except the LLM) forgot about.
From personal experience, Haskell benefits the most. Not only does it make more use of its type system than Rust, but its APIs move at a snail's pace, which means it doesn't suffer from the outdated-API problem Rust has, and code that compiles will work just fine. I also think Haskell code in training sets is guaranteed to be safe because of the language extension system.
But what makes Go useful is the fact that it compiles to an actual executable you can easily ship anywhere - and that is actually really good considering that the language itself is super easy to learn.
I've recently started building some AI agent tools with it and so far the experience has been great:
https://github.com/pantalk/pantalk https://github.com/mcpshim/mcpshim
https://bernste.in/writings/the-unreasonable-effectiveness-o...
I actually spent some time trying to get to the bottom of what a logical extension of this would be. An entirely made up language spec for an idealized language it never saw ever, and therefore had no bad examples of it. Go is likely the closest for the many reasons people call it boring.
I expect Rust to gain some market share since it's safe and fast, with a better type system, but complex enough that many developers would struggle on their own. But IME, AI also currently struggles with the manual memory management in large projects and can end up hacking things that "work" but end up even slower than GC. So I think the ecosystem will grow, but even once AI masters it, the time and tokens required for planning, building, and testing will always exceed those of a GC language, so I don't see it ever usurping Go, at least not in the next decade.
I wish the winner were OCaml, as it's got the type safety of Rust (or better) and the development speed of Go. But for whatever reason it never became that mainstream, and the lack of libraries and training data will probably relegate it to the dustbin. Basically, training data and libraries >>> operational characteristics >>> language semantics in the AI world.
I have a hard time imagining any other language maintaining a solid advantage over those two. There's less need for a managed runtime, definitely no need for an interpreted language, so I imagine Java and Python will slowly start to be replaced. Also I have to imagine C/C++ will be horrible for AI for obvious reasons. Of course JS will still be required for web, Swift for iOS, etc., but for mainstream development I think it's going to be Rust and Go.
Syntax. Syntax is the reason. It's too foreign to be picked up quickly by the mass of developers that already know a C style language. I would also argue that it's not only foreign, it's too clunky.
I've started what I'm calling an agent first framework written in Go.
It's just too easy to get great outputs with Go and Codex.
https://github.com/swetjen/virtuous
The key is blending human observability with agent ergonomics.
I've no idea myself, I just thought it was interesting for comparison.
https://news.ycombinator.com/item?id=47222705
Edit: cool article. I have myself speculated that we will get a new language made for/by LLMs that will be torture to write by hand/IDE but easy to read/follow/navigate/check for a human, and super easy for LLMs to develop and maintain.
But that's because it's tight, token efficient, and above all local. Pure functions don't require much context to reason about effectively.
However, you do miss the benefit of types, which are also good for LLMs.
The "ideal" LLM language would have the immutability and functional nature of Clojure combined with a solid type system.
Haskell or OCaml immediately come to mind, but I'm not sure how much the relative lack of training data hurts... curious if anyone has any experiences there.
Stack overflow tags:
I’m not finding a way to get any useful information from GitHub, e.g. a count of de-duplicated lines of code per language. There might be something in their annual “Octoverse” report but I haven’t drilled into it yet: https://github.blog/news-insights/octoverse/octoverse-a-new-...
- structurally edited, ensuring syntactic validity at all times
- annotated with metadata, so that agents can annotate the code as they go and refer back to accreted knowledge (something Clojure can do structurally using nodepaths or annotations directly in code)
- put into any environment you might like, e.g. using ClojureScript
I haven't proven to myself this is more useful/results in better code than just writing code "the normal way" with an agent, but it sure seems interesting.
Maybe this is a good incentive to improve error handling in Go.
Golang just gets bogged down in irrelevant details way too easily for this.
Though, I have found both to be better at C# than Swift, for example.
I really love this callout. Not always an easy sell upstream, but a big factor in happy + productive teams.
On the other hand, if there are good conventions, that's also a benefit: for example, Ruby on Rails.
As a human programmer with creative and aesthetic urges, as well as being lazy and having an ego, I love expressive languages that let me describe what I want in a parsimonious fashion, i.e. as few lines of code as possible and no boilerplate.
With the advances in agent coding none of these concerns matter any more.
What matters most is whether one can easily look at the code and understand the intent clearly. That the agent doesn't get distracted by formatting. That the code is relatively memory safe, type safe, avoids null issues, and cannot ignore errors.
I dislike Go but I am a lot more likely to use it in this new world.
But for how long will it matter? I do wonder if programming languages as we know them today will lose relevance as all this evolves.
The most important downside of Python is that it doesn't compile to a native binary that the OS can recognize and it's much slower. However, it's a great "glue" for different binaries or languages like Rust and Go.
Rust is the increasingly popular language for AI agents to choose from, often integrated into Python code. The trend is on the side of Rust here. I don't want to mention all the great points from the original poster. One technical point that wasn't mentioned, from my experience, is that the install size is too large for embedded systems. As the article mentioned, the build times are also longer than Go and this is an even worse bottleneck on embedded systems. I prefer Go over Rust in my research and development but I yield to other developers on the team professionally.
What about C/C++? At the moment, I've had great success implementing C++ code through agentic AI. However, there is a dearth of frameworks for things like web development. Because CPython is implemented in C, and integrating C modules into Python is relatively straightforward, I find myself taking the NumPy approach, where C is the backbone of performance-critical features.
Personally, I still actively utilize code I've written more than 10 years ago that's battle tested, peer reviewed, and production ready. The above comments are for the current state, but what about the future? Another point that wasn't mentioned was the software license from Go. It's BSD3 with a patent grant which is more permissive than Rust's MIT + Apache 2.0 licenses. This is very important to understand the future viability of software because given enough time and all other things the same, more permissive software will win out in adoption.
The rabbit hole goes deeper. I think we will sacrifice Rust as the "good-enough" programming language to spoil the ecosystem with Agentic AI before its redemption arc. Only time will tell, but Python's inability to compile to a native binary makes it a bad choice for malware developers. You can fill in the blank here. Perhaps the stage has already been set, and it looks like Rust will be the opening act now that the lights are on.
- I agree that go's syntax and concepts are simpler (esp when you write libraries, some rust code can get gnarly and take a lot of brain cycles to parse everything)
- > idiomatic way of writing code and simpler to understand for humans - eh, to some extent. I personally hate Go's boilerplate of "if err != nil", but that's mainly my problem.
- compiles faster, no question about it
- more go code out there allowing models to generate better code in Go than Rust - eh, here I somewhat disagree. The quality of the code matters as well. That's why a lot of early python code was so bad. There just is so much bad python out there. I would say that code quality and correctness matters as well, and I'd bet there's more "production ready" (heh) rust code out there than go code.
- (go) it is an opinionated language - so is rust, in a lot of ways. There are a lot of things that make writing really bad rust code pretty hard. And you get lots of protections for foot meets gun type of situations. AFAIK in go you can still write locking code using channels. I don't think you can do that in rust.
- something I didn't see mentioned is error messages. I think rust errors are some of the best in the industry, and they are sooo useful to LLMs (I've noticed this ever since coding with gpt4 era models!)
I guess we'll have to wait and see. There will be a lot of code written by agents going forward, we'll be spoiled for choice.
But it does have the benefit of a very strong "blessed way of doing things", so agents go off the rails less, and if Claude is writing the code and the endless "if err != nil", then the syntax bothers me less.
Code is free, sure, but it's not guaranteed to be correct, and review time is not free.
... write the code yourself?
I think many many people just skip the "review" step in this process, and assume they're saving time. It's not going to end well.
Reduce entropy, increase probability of the correct outcome.
LLMs are surfing higher dimensional vector spaces, reduce the vector space, get better results.
With Go, it will increasingly be the case that one has to write the design doc carefully, with constraints; for semi-technical/coder folks that makes a lot of sense.
With Python, make-believe is easy (I've seen it multiple times myself), but don't you think a coding agent/LLM has to be quite malicious to put make-believe logic into a compiled language, compared with an interpreted one?
---
# Author likes go
Ok, cool story bro...
# Go is compiled
Nice, but Python also has syntax and type checking -- I don't typically have any more luck generating more strictly typed code with agents.
# Go is simple
Sure. Python for a long time had a reputation as "pseudocode that runs", so the arguments about go being easy to read might be bias on the part of the author (see point 1).
# Go is opinionated
Sure. Python also has standards for formatting code, running tests (https://docs.python.org/3/library/unittest.html), and has no need for building binaries.
# Building cross-platform Go binaries is trivial
Is that a big deal if you don't need to build binaries at all?
# Agents know Go
Agents seem to know python as well...
---
Author seems to fall short of supporting the claim that Go is better than any other language by any margin, mostly relying on the biases they have that Go is a superior language in general than, say, Python. There are arguments to be made about compiled versus interpreted, for example, but if you don't accept that Go is the best language of them all for every purpose, the argument falls flat.
1) Go runs faster, so if you're not optimizing for dev time (and if you're vibe coding, you're not) then it's a clear winner there
2) Python's barrier to entry is incredibly low, so intuitively there's likely a ton of really terrible python code in the training corpus for these tools