My main skepticism here is whether the theorem statements have been faithfully captured in the formalization. Verifying that the formal statement really matches the mathematical one seems like a manual, human process, and quite hard to repeat reliably across proofs.
At least for the main accomplishment, the Tietze extension theorem, the natural-language statement and the Megalodon theorem were quite easy to match up, and the dependency chain was fairly easy to check as well. (The fact that the proof is 10k lines is irrelevant; this is just about the theorem statement itself.)
This project did not deal with some cutting-edge, esoteric or puzzle-type maths for which the formalization can be tricky - just well-trodden topology.
Maybe this is what you meant, but the dangerous part is ensuring that the final claim being proved is correct -- the actual proof of that claim is, by design, something that can be quickly and mechanically verified by applying a series of simple rules to a set of accepted axioms. Even a human could (somewhat laboriously) do this verification.
I have no experience with this, but I'd expect the danger to arise from implicit assumptions that are obvious to a mathematician in the relevant subfield but not necessarily to an LLM. E.g., whether we are dealing with real-valued or integer-valued variables can make a big difference in many problems, but might be explicitly stated only once, in the very first chapter (or even implied only by the book's title).
There are also many types of "overloaded" mathematical notation -- usually a superscripted number means "raised to the power", but if that number is -1, it might instead mean "the inverse of this function".
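Formalization forces these ambiguities into the open, because the type of every variable must be declared up front. A minimal Lean 4 sketch (assuming Mathlib for the real numbers and the lemma name `mul_inv_cancel₀`, which may differ between versions):

```lean
import Mathlib.Data.Real.Basic

-- Over ℝ, "every nonzero x satisfies x * x⁻¹ = 1" is a theorem:
example : ∀ x : ℝ, x ≠ 0 → x * x⁻¹ = 1 :=
  fun x hx => mul_inv_cancel₀ hx

-- The "same" informal claim over ℤ does not even type-check:
-- ℤ has no inverse operation, so `x⁻¹` is meaningless there and the
-- statement must be reformulated (e.g., x ∣ 1 ↔ x = 1 ∨ x = -1).
```

So the implicit real-vs-integer assumption, and the overloaded meaning of the superscript, both have to be resolved before the statement can even be written down.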
This is correct, with the proviso that the meaning of the claim/statement depends on the definitions and checking these can be a bit tedious. But still feasible, and better by orders of magnitude than "checking" the proofs. Abuses of notation and just plain sloppiness are a big issue though if you wish to auto-translate an informal development into correct definitions and statements for formalization.
This is so cool. I think I had read that some mathematicians were trying to formalize all(?) known math in Lean, expecting it to be a long-term project. And I was recently wondering how long it would be before they turned LLMs loose on the problem.
Seems like plenty of people are already on the path. So cool.
Are there any lessons or tricks here that we can apply to software engineering? What kinds of assertions can we express more easily with Lean etc. than with unit tests, fuzz tests, property tests, using our existing testing frameworks?
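One difference worth noting: a unit or property test checks finitely many inputs, while a proof assistant can discharge the universally quantified statement once and for all. A minimal sketch in Lean 4 (using the standard-library lemma `List.reverse_reverse`; exact names may vary by version):

```lean
-- "unit test" flavor: one concrete case, checked by computation
example : [1, 2, 3].reverse = [3, 2, 1] := rfl

-- the proved property: holds for every list, of any type and length,
-- with no sampling or generator involved
example (α : Type) (l : List α) : l.reverse.reverse = l :=
  List.reverse_reverse l
```

Property-based testing frameworks approximate the second form by random sampling; the proof removes the sampling gap, at the cost of writing the proof.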
What's most impressive about it is that it uses a system, Megalodon, which is pretty obscure. There isn't much documentation or many examples, but it can still formalize an advanced undergraduate textbook.