Rochus 9 minutes ago
Interesting. It's essentially the same idea as in this article: https://substack.com/home/post/p-184486153. In both scenarios, the human is relieved of the burden of writing complex formal syntax (whether Event-B or Lean 4). The human specifies intent and constraints in natural language, while the LLM handles the work of formalization and satisfying the proof engine.

But Lean 4 is significantly more rigid, granular, and foundational than Event-B, and they handle concepts like undefined areas and contradictions very differently. While both are "formal methods," they were built by different communities for different purposes: Lean is a pure mathematician's tool, while Event-B is a systems engineer's tool. Event-B is much more flexible, allowing an engineer (or the LLM) to sketch the vague, undefined contours of a system and gradually tighten the logical constraints through refinement.

LLMs are inherently statistical interpolators. They operate beautifully in an Open World (where missing information is just "unknown" and can be guessed or left vague) and they use Non-Monotonic Reasoning (where new information can invalidate previous conclusions). Lean 4 operates strictly on the Closed World Assumption (CWA) and is brutally Monotonic.

This is why using Lean to model things humans care about (business logic, user interfaces, physical environments, dynamic regulations) quickly hits a dead end. The physical world is full of exceptions, missing data, and contradictions. Lean 4 is essentially a return to the rigid, brittle approach of the 1980s expert systems.

Event-B (or similar methods) provides the logical guardrails, but critically, it tolerates under-specification. It doesn't force the LLM to solve the Frame Problem or explicitly define the whole universe. It just checks the specific boundaries the human cares about.
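To make the closed-world point concrete, a minimal Lean 4 illustration (mine, not from either article): Lean insists that every case of a definition be handled, so you cannot leave part of the domain vague the way Event-B refinement lets you.

    -- Lean demands totality: delete the second arm below and the
    -- definition no longer compiles, so "unknown" inputs cannot
    -- simply be left unspecified.
    def classify : Nat → String
      | 0     => "zero"
      | _ + 1 => "positive"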

Gehinnn 28 minutes ago
I just completed the formal verification of my bachelor thesis on real-time cellular automata in Lean 4, with heavy use of AI.

Over the past year, I went from fully manual mode (occasionally asking ChatGPT some Lean questions) to fully automatic mode, where I barely do Lean proofs myself now (and just point the AI to the original .tex files, in German). It is hard to believe how much the models and agentic harnesses have improved over the last year.

I cannot describe how much fun it is to do refactorings with AI on a verified Lean project!

Also, it's so easy now to have visualizations and typeset documents generated by AI, from dependency visualizations of proofs using the Lean reflection API to visual execution traces of cellular automata.
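For a flavor of the reflection side, a hypothetical sketch (not the actual thesis tooling): walk a constant's proof term and print the constants it mentions, which is exactly the edge data for a dependency graph.

    import Lean
    open Lean

    -- Hypothetical sketch: list the constants a proof term uses,
    -- i.e. the direct dependency edges of a theorem.
    def printDeps (env : Environment) (n : Name) : IO Unit := do
      match env.find? n >>= (·.value?) with
      | some e =>
        for c in e.getUsedConstants do
          IO.println c
      | none => IO.println s!"no proof term found for {n}"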

xvilka 1 hour ago
Lean is a great idea, especially the 4th version, a huge level up from the 3rd one, but its core is still deficient[1] in some particular scenarios (see an interesting discussion[2] in the Rocq (formerly Coq) issue tracker). Not sure whether that might hinder automation with AI.

[1] https://artagnon.com/logic/leancoq

[2] https://github.com/rocq-prover/rocq/issues/10871

kig 27 minutes ago
If you want to mess with this at home, I've been vibe coding https://github.com/kig/formalanswer to plug theorem provers into an LLM call loop. It's pretty early dev but it does have a logic rap battle mode.

throwaway2027 2 hours ago
I think I saw Terence Tao use a formal proof language, but I don't remember if it was Lean. I'm not familiar with it, but I do agree that moving to provable languages could improve AI. Isn't the basis, though, just having some immutable, rigorous set of tests, which could basically be replicated in "regular" programming languages?
iNic 1 hour ago
You can think of theorem provers as really crazy type checkers. It's not just a handful of tests that have to run, but more like a program that has to compile.
seanhunter 1 hour ago
Yes exactly. There is this thing called the “Curry-Howard Isomorphism” which (as I understand it) says that propositions in formal logic are isomorphic to types. So the “calculus of constructions” is a typed lambda calculus based on this that makes it possible for you to state some proposition as a type and if you can instantiate that type then what you have done is isomorphic to proving the proposition. Most proof assistants (and certainly Lean) are based on this.

So although Lean 4 is a programming language that people can use to write "normal" programs, when you use it as a proof assistant this is what you are doing: stating propositions and then using a combination of a (very extensive) library of previous results, your own ingenuity with the builtins of the language, and (in my experience anyway) a bunch of brute force to instantiate the type, thus proving the proposition.
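A tiny concrete sketch of the correspondence: the proposition is the type, and the term that inhabits it is the proof.

    -- The proposition `p ∧ q → q ∧ p` is a type; the lambda below
    -- inhabits it, which by Curry-Howard is exactly a proof of it.
    theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
      fun h => ⟨h.right, h.left⟩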

gaogao 1 hour ago
Yes, though often the easiest way to replicate it in regular programming languages is to translate that language to Lean or another ITP (interactive theorem prover), though auto-active verification like Verus is used for Rust pretty successfully.

Python and C, though, have enough nasal demons and undefined behavior that it's a huge pain to verify things about them, since some random other thread can drive by and modify your memory out from under you.
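As a toy illustration of the translate-then-verify route (my sketch, not a mechanical C translation): re-model the function in Lean and prove the property about the model.

    -- A C-style abs() re-modelled in Lean; the property is proved
    -- about the Lean model rather than the original C source.
    -- (Lean's Int is unbounded, so C's INT_MIN overflow UB simply
    -- doesn't exist in the model.)
    def absVal (x : Int) : Int := if x < 0 then -x else x

    theorem absVal_nonneg (x : Int) : 0 ≤ absVal x := by
      unfold absVal
      split <;> omega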

seanhunter 2 hours ago
It was Lean 4. In fact he has made Lean 4 versions of all of the proofs in his Analysis I textbook available here:

https://github.com/teorth/analysis

He has also blogged about how he uses Lean for his research.

Edit to add: Looking at that repo, one thing I like (but others may find infuriating, idk) is that where in the text he leaves certain proofs as exercises for the reader, in the repo he turns those into "sorry"s, so you can fork the repo and have a go at proving those things in Lean yourself.

If you have some proposition which you need to use as the basis of further work but haven't yet completed a formal proof of, in Lean you can just state the proposition with the proof being "sorry". Lean will then proceed as though that proposition had been proved, except that it will give you a warning saying that you have a sorry. For something to count as proved in Lean, you have to have done it without any "sorry"s. https://lean-lang.org/doc/reference/latest/Tactic-Proofs/Tac...
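A minimal sketch of that workflow (theorem names are illustrative):

    -- `step_one` is admitted with `sorry`: Lean emits a warning but
    -- still lets `step_two` build on it as though it were proved.
    theorem step_one (n : Nat) : n + 0 = n := by
      sorry

    theorem step_two (n : Nat) : (n + 0) + 0 = n := by
      rw [step_one, step_one]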

tokenless 23 minutes ago
> Large language models (LLMs) have astounded the world with their capabilities, yet they remain plagued by unpredictability and hallucinations – confidently outputting incorrect information. In high-stakes domains like finance, medicine or autonomous systems, such unreliability is unacceptable.

This misses a point that software engineers intimately know, especially ones using AI tools:

* Proofs are one QA tool

* Unit tests, integration tests and browser automation are other tools.

* Your code can have bugs, which show up as failures in the tools above, BUT...

* You may have got the requirements wrong!
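To make the first two bullets concrete, a toy sketch in Lean (since that's the language under discussion):

    def double (n : Nat) : Nat := n + n

    -- A unit test: checks exactly one input.
    example : double 3 = 6 := rfl

    -- A proof: covers every input. But both are worthless if
    -- `double` was the wrong requirement in the first place.
    theorem double_eq (n : Nat) : double n = 2 * n := by
      unfold double; omega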

Working with Claude Code, you can have productive loops getting it to assist you in writing tests, finding bugs you hadn't spotted, and generally hardening your code.

It takes taste, and dev experience definitely helps (as of Jan '26).

So I think pitching proofs as the fix for hallucinations is barking up the wrong tree a bit.

The solution to hallucinations is careful shaping of the agent environment around the project to ensure quality.

Proofs may be part of the QA toolkit for AI-coded projects, but probably only rarely.

zmgsabst 1 hour ago
The real value is in mixed mode:

- Lean supports calling out to external tools from a tactic, allowing you to use LLMs or other AI as judges (i.e., they return a judgment about a claim)

- Lean can combine these judgments from external systems according to formal theories (i.e., normal proof mechanics)

- an LLM engaged in higher-order reasoning can decompose its thinking into such logical steps over fuzzy blocks

- this can be done recursively, e.g., having a step that replaces LLM judgments with further logical formulations of fuzzy judgments from the LLM
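A hypothetical sketch of the first bullet (`llm_judge` and `llmJudged` are made-up names; a real version would do IO against a model instead of the placeholder):

    import Lean
    open Lean Elab Tactic

    -- Made-up axiom recording that a claim was accepted on an
    -- external judge's say-so; uses show up in `#print axioms`.
    axiom llmJudged (p : Prop) : p

    elab "llm_judge" : tactic => do
      let goal ← getMainGoal
      -- a real implementation would call out to an LLM here
      logInfo m!"asking external judge about: {← goal.getType}"
      evalTactic (← `(tactic| exact llmJudged _))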

Something, something, sheaves.

nudpiedo 2 hours ago
I like the idea behind such theorem provers a lot; however, I always have issues with them producing code compatible with other languages.

This happened to me with Idris and many others: I took some time to learn the basics and wrote some examples, and then the FFI was a joke or the code generators for JavaScript were absolutely useless.

So no way of leveraging an existing ecosystem.

seanhunter 1 hour ago

nudpiedo 1 hour ago
Literally the first line of the link:

“The current interface was designed for internal use in Lean and should be considered unstable. It will be refined and extended in the future.“

My point is that in order to use these theorem provers you really gotta be sure you need them; otherwise, interaction with an external ecosystem might be a dependency/compilation nightmare, or a bridge over TCP just to use libraries.

AxiomLab 1 hour ago
Theorem provers enforce an absolute epistemic rigor that stochastic AI generation completely lacks.

When you ground a system in formalized logic rather than probabilistic weights, you don't just get better results; you get mathematical guarantees. Relying on continuous approximation is fine for chat, but building truly robust systems requires discrete, deterministic proofs. This is exactly the paradigm shift needed.

youoy 1 hour ago
This site is getting invaded by AI bots... how long before it's just AI speaking with AI, and just people reading the conversations thinking that it's actual people?

5o1ecist 1 hour ago
There's no need for you to worry about it, MY FELLOW HUMAN [1], because you will never know.

[1] https://old.reddit.com/r/totallynotrobots

PS: Of course that's not true. An ID system for humans will inevitably become mandatory and, naturally, politicians will soon enough create a reason to use it for enforcing a planet-wide totalitarian government watched over by Big Brother.

Conspiracy-theory nonsense? Maybe! I'll invite some billionaires to pizza and ask them what they think.