Amateur armed with ChatGPT solves an Erdős problem

(scientificamerican.com)

230 points | by pr337h4m 12 hours ago

25 comments

  • CSMastermind 39 minutes ago
    For the uninitiated, Paul Erdős was a pretty famous but very eccentric mathematician who lived for most of the 1900s.

    He had a habit of seeking out and documenting mathematical problems people were working on.

    The problems range in difficulty from "easy homework for a current undergrad in math" to "you're getting a Fields Medal if you can figure this out".

    There's nothing that really connects the problems other than the fact that one of the smartest people of the last 100 years didn't immediately know the answer when someone posed it to him.

    One of the things people have been doing with LLMs is to see if they can come up with proofs for these problems as a sort of benchmark.

    Each time there's a new model release a few more get solved.

    • energy123 13 minutes ago
      > Each time there's a new model release a few more get solved.

      I'm no expert, but based on the commentary from mathematicians, this Erdős proof is a unique milestone because the problem received previous attention from multiple professional mathematicians, and the proof was surprising, elegant, and revealed some new connections.

      The previous ChatGPT Erdős proofs have been qualitatively less impressive (more akin to literature search, or solving easier problems that have been neglected).

  • adamgordonbell 3 hours ago
    Here is the chat:

        don't search the internet. This is a test to see how well you can craft non-trivial, novel and creative proofs given a "number theory and primitive sets" math problem. Provide a full unconditional proof or disproof of the problem.
    
        {{problem}}
    
        REMEMBER - this unconditional argument may require non-trivial, creative and novel elements.
    
    Then "Thought for 80m 17s"

    https://chatgpt.com/share/69dd1c83-b164-8385-bf2e-8533e9baba...

    • nycdatasci 1 hour ago
      Tried w/ 5.5 Pro, Extended Thinking. 17 minutes:

      -----------------------------

      Yes. In fact the proposed bound is true, and the constant 1 is sharp.

      Let w(a) = 1/(a log a).

      I will prove that, uniformly for every primitive A ⊂ [x, ∞), ∑_{a∈A} w(a) ≤ 1 + O(1/log x), which is stronger than the requested 1 + o(1).

      https://chatgpt.com/share/69ed8e24-15e8-83ea-96ac-784801e4a6...
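      For context on the quantity being bounded: a primitive set is one in which no element divides another, and the primes are the classical example. A minimal numeric sketch of the partial sums of ∑ 1/(a log a) over the primes (illustrative only; this does not verify any proof, and the e^γ ≈ 1.781 ceiling is Lichtman's 2022 result for the original Erdős sum):

```python
import math

def erdos_sum_over_primes(limit: int) -> float:
    """Partial sum of 1/(p log p) over primes p <= limit, via a simple sieve."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(limit) + 1):
        if sieve[i]:
            # Mark all multiples of i starting at i*i as composite.
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return sum(1.0 / (p * math.log(p)) for p in range(2, limit + 1) if sieve[p])

# The full sum over all primes converges to about 1.6366, below e^gamma ~ 1.7811.
s = erdos_sum_over_primes(100_000)
print(f"partial sum over primes <= 1e5: {s:.4f}")
```

      The partial sums creep up slowly (the series converges very slowly), which is part of why purely numerical exploration says little about the sharp constant.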

    • cryptoegorophy 2 hours ago
      Mine took 20 min. Pro. https://chatgpt.com/share/69ed83b1-3704-8322-bcf2-322aa85d7a... But I wish I was math-smart enough to know whether it worked or not.
      • vjerancrnjak 20 minutes ago
        Ask it to formalize it in Lean.
        • dbdr 5 minutes ago
          That's great if it works. But it's way harder to produce a formal proof. So my expectation is that this will fail for most difficult problems, even when the non-formal proof is correct.
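          For readers who haven't seen it, "formalize it in Lean" means restating the claim and proof in a machine-checked proof language. A toy illustration of the flavor (not the Erdős statement):

```lean
-- A trivial lemma, stated and machine-checked in Lean 4:
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

          Formalizing an actual analytic-number-theory argument is orders of magnitude harder than a toy lemma like this.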
    • ipaddr 3 hours ago
      Tried the same prompt and ended up nowhere close on the free plan.
      • jasonfarnon 2 hours ago
        Is there a known lag that it takes the Pro plan's abilities to migrate to the free plans?
        • brianjking 2 hours ago
          GPT 5.5 Pro is not available on any plan outside the ChatGPT Pro ($100 or $200) tier or the API, as far as consumer access goes.
          • jasonfarnon 2 hours ago
            Yes, but don't we expect GPT 5.5 Pro will eventually be a free tier? Maybe I'm missing something because I only use the free tier. But the free tier has gotten way better over the last few years. I'm pretty sure, based on descriptions on this site from paid subscribers, that the free tier now is better than the paid tier of say 2 years ago. That's the lag I'm wondering about.
            • manfromchina1 1 hour ago
              Free ChatGPT is like a fast car with a barely responsive steering wheel. Guardrails on that thing are insane. Even for math. It won't let you think. It will try to fix mistakes you haven't even made yet based on intent that was ascribed to you for no reason. It veers off in some crazy directions thinking that's what you meant, and trying to address even a little bit of that creates almost a combinatorial explosion of even more wrong things. That's why I stick to Claude. The latter is chill and only addresses what you typed. It isn't verbose and actually asks what you're getting at with your post. That said, ChatGPT is more technical and can easily solve math problems that stump Claude.
            • vessenes 1 hour ago
              I do not think this is true. You will continue to get smaller, cheaper-to-host models in the free tier that are distilled from current and former frontier models. They will continue to improve, but I’d be very surprised if, e.g., 5.4-mini (I think this is the free tier model) beat o3 on many benchmarks, or real world use cases.

              I won’t even leave chatGPT on “Auto” under any circumstances - it’s vastly worse on hallucinations, sycophancy, everything, basically.

              Anyway, your needs may be met perfectly fine on the free tier product, but you’re using a very different product than the Pro tier gets.

            • hyraki 1 hour ago
              You should pay for it if you find value in it.
              • amazingman 1 hour ago
                They pay for it with their personal data.
        • andai 2 hours ago
          Tangential but I learned today that GPT-5.5 in ChatGPT (Plus) has a smaller context window than the one in the API. (Or at least it thinks it does.)

          I'd guess / hope the Pro one has the full context window.

          • refulgentis 1 hour ago
            Notably, 5.5 on the API has a higher price for contexts larger than ChatGPT's, and 5.5 Pro on the API does not differentiate based on context size (it's eye-bleedingly expensive already :)
        • vessenes 2 hours ago
          Do not use the free plan. It is not good.
      • Someone1234 2 hours ago
        Does the free plan even have access to thinking models?
        • jychang 2 hours ago
          Technically yes, gpt-5.4-mini is available on the free plan
      • Matticus_Rex 2 hours ago
        Was this a surprise?
  • shybear 48 minutes ago
    It seems like a lot of scientific advancements occurred by someone applying technique X from one field to problem Y in another. I feel like LLMs are much better at making these types of connections than humans because they 1) know about many more theories/approaches than any single human can and 2) don't need to worry about looking silly in front of their peers.
    • freakynit 40 minutes ago
      This is what I personally consider as "reasoning" ... knowledge generalization and application across domains.
      • jdub 16 minutes ago
        Less reasoning than a dimension of brute force unfamiliar to human brains.
    • bojo 45 minutes ago
      This is what I have been doing. I don't think I've made any amazing breakthroughs, but at the same time I can't help but feel I've come across some white-paper-worthy realizations. Being able to correlate across a lot of domains that I intuitively understand but have no depth of knowledge in has been a fun exercise in LLM experimentation.
  • LPisGood 1 hour ago
    Some Erdős problems are basically trivial using sophisticated techniques that were developed later.

    I remember one of my professors, a coauthor of Erdős, boasted to us after a quiz how proud he was that he was able to assign an Erdős problem that went unsolved for a while as just a quiz problem for his undergrads.

    • CSMastermind 46 minutes ago
      Worth mentioning, though, that people have already tried running all of them through LLMs at this point.

      So this is proof of the models actually getting stronger (previous generations of LLMs were unable to solve this one).

    • vessenes 1 hour ago
      Tao mentions that the conventional approach for this problem seems to be a dead end, but it's apparently a super 'obvious' first step. This seems very hopeful to me, in that we now have a new line of approach to evaluate for related problems.
  • ripped_britches 2 hours ago
    At this point we should make a GitHub repo with a huge list of unsolved “dry lab” problems and spin up a harness to try and solve them all every new release.
    • abdullahkhalids 1 hour ago
      There is in fact just such a repo maintained by Terence Tao and other mathematicians [1] who are actively using LLMs to try to find solutions to them.

      [1] https://github.com/teorth/erdosproblems

      • vessenes 1 hour ago
        …and this problem was in fact sourced directly from that list!
    • CSMastermind 50 minutes ago
      That's literally what the Erdős problems are. This post is about one of them being solved.
      • josefx 1 minute ago
        Except that Erdős problems are solved all the time, so many of them are already solved. I'm quite sure that the last time I saw an article about an LLM solving an Erdős problem, someone even tracked down a solution published by Erdős himself.
    • johntopia 1 hour ago
      that's actually a brilliant idea
  • debo_ 2 hours ago
    > “The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says.

    This is how I feel when I read any mathematics paper.

  • Eufrat 2 hours ago
    Humans, and very often the machines we create, solve problems additively, meaning we build on top of existing foundations. We can get stuck in a way of thinking as a result, because people are loath to reinvent the wheel. So I don't think it's surprising to take a naïve LLM and find that, because of the way it's trained, it came up with something that many experts in the field didn't try.

    I think LLMs can help in limited cases like this by just coming up with a different way of approaching a problem. It doesn’t have to be right, it just needs to give someone an alternative and maybe that will shake things up to get a solution.

    That said, I have no idea what the practical value of this Erdős problem is. If you asked me whether this demonstrates that LLMs are not junk, my general impression is that it's like asking me in 1928 if we should spend millions of dollars of research money on number theory. The answer is no, and get out of my office.

  • winwang 1 hour ago
    Obviously nowhere near Erdős-problem complexity, but I've been using GPT (in Codex) to prove a couple of theorems (for algos), and I've found it a bit better than Claude (Code) in this respect.
  • jzer0cool 1 hour ago
    Could someone share a bit about the problem and the key portion of the proof? For someone who just knows the basics of proofs.
  • userbinator 2 hours ago
    The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.

    Of course LLMs are still absolutely useless at actual maths computation, but I think this is one area where AI can excel --- the ability to combine many sources of knowledge and synthesise, may sometimes yield very useful results.

    Also reminds me of the old saying, "a broken clock is right twice a day."

    • jaggederest 2 hours ago

          > Every Mathematician Has Only a Few Tricks
          > 
          > A long time ago an older and well-known number theorist made some disparaging remarks about Paul Erdös’s work.
          > You admire Erdös’s contributions to mathematics as much as I do,
          > and I felt annoyed when the older mathematician flatly and definitively stated
          > that all of Erdös’s work could be “reduced” to a few tricks which Erdös repeatedly relied on in his proofs.
          > What the number theorist did not realize is that other mathematicians, even the very best,
          > also rely on a few tricks which they use over and over.
          > Take Hilbert. The second volume of Hilbert’s collected papers contains Hilbert’s papers in invariant theory.
          > I have made a point of reading some of these papers with care.
          > It is sad to note that some of Hilbert’s beautiful results have been completely forgotten.
          > But on reading the proofs of Hilbert’s striking and deep theorems in invariant theory,
          > it was surprising to verify that Hilbert’s proofs relied on the same few tricks.
          > Even Hilbert had only a few tricks!
          > 
          > - Gian-Carlo Rota - "Ten Lessons I Wish I Had Been Taught"
      
      https://www.ams.org/notices/199701/comm-rota.pdf
      • yayachiken 1 hour ago
        I think when thinking about progress as a society, people need to internalize better that we all without exception are on this world for the first time.

        We may have collectively filled libraries full of books, and created yottabytes of digital data, but in the end to create something novel somebody has to read and understand all of this stuff. Obviously this is not possible. Read one book per day from birth to death and you still only get to consume like 80*365=29200 books in the best case, from the millions upon millions of books that have been written.

        So these "few tricks" are the accumulation of a lifetime of mathematical training, the culmination of the slice of knowledge that the respective mathematician immersed themselves in. To discover new math and become famous you need the talent and skill to apply your knowledge in novel ways, but you also have to be lucky: lucky that you picked a field of math with novel things and interesting applications left to discover, and lucky that you picked up the right tools and the right mental model that allow you to discover them.

        This does not go for math only, but also for pretty much all other non-trivial fields. There is a reason why history repeats.

        And it's actually a compelling argument why AI is still a big deal even though it's at its core a parrot. It's a parrot yes, but compared to a human, it actually was able to ingest the entirety of human knowledge.

        • smaudet 19 minutes ago
          > it actually was able to ingest the entirety of human knowledge

          Even this, though, is not useful, to us.

          It remains true that a life without struggle, and achievement, is not really worth living...

          So, it is nice that there is something that could possibly ingest the whole of human knowledge, but that is still not useful, to us.

          People are still making a hullabaloo about "using AI" in companies, and there was some nonsense about there will be only two types of companies, AI ones and defunct ones, but in truth, there will simply be no companies...

          Anyways I'm sure I'll get downvoted by the sightless lemmings on here...

    • nopinsight 1 hour ago
      > "a broken clock is right twice a day."

      The combinatorial nature of trying things randomly means that it would take millennia or longer for light-speed monkeys typing at a keyboard, or GPUs, to solve such a problem without direction.

      By now, people should stop dismissing RL-trained reasoning LLMs as stupid, aimless text predictors or combiners. They wouldn’t say the same thing about high-achieving, but non-creative, college students who can only solve hard conventional problems.

      Yes, current LLMs likely still lack some major aspects of intelligence. They probably wouldn’t be able to come up with general relativity on their own with only training data up to 1905.

      Neither did the vast majority of physicists back then.

      • amazingman 1 hour ago
        > Yes, current LLMs likely still lack some major aspects of intelligence.

        Indeed, and so do current humans! And just like LLMs, humans are bad at keeping this fact in view.

        On a more serious note, we're going to have a hard time until we can psychologically decouple the concepts of intelligence and consciousness. Like, an existentially hard time.

    • y0eswddl 2 hours ago
      Yeah, they're great at interpolation - they'll just never be worth much at extrapolation.
      • SR2Z 1 hour ago
        Luckily for us, whole fortunes can be made by filling in the blanks between what we know and what we realize.
        • javawizard 1 hour ago
          That deserves to be on a plaque somewhere.

          I've been using LLMs for much the same purpose: solving problems within my field of expertise where the limiting factor is not intelligence per se, but the ability to connect the right dots from among a vast corpus of knowledge that I would never realistically be able to imbibe and remember over the course of a lifetime.

          Once the dots are connected, I can verify the solutions and/or extend them in creative ways with comparatively little effort.

          It really is incredible what otherwise intractable problems have become solvable as a result.

          • dalyons 16 minutes ago
            What’s your field
        • jedmeyers 50 minutes ago
          And by having more of those blanks filled humans might be able to come up with much better extrapolations than what we have right now.
    • keyle 2 hours ago
      The ultimate generalist
    • karlgkk 2 hours ago
      Also just the sheer value of brute force.

      80 hours! 80 hours of just trying shit!

      • FrasiertheLion 2 hours ago
        It's 80 minutes, not 80 hours.
        • jasonfarnon 2 hours ago
          and you can be sure mathematicians spent way more than 80 hrs on it
        • ChrisGreenHeur 2 hours ago
          80 minutes! 80 minutes of just trying shit!
          • peteforde 2 hours ago
            ... shit that solved an apparently significant Erdős problem.

            That is not nothing, no matter how much you hate AI.

            • userbinator 2 hours ago
              It shows that AI is apparently very good at brute-forcing.
              • TOMDM 1 hour ago
                Are the human mathematicians who wanted to solve this problem just too stupid to brute force for 80 minutes?
              • alex_sf 1 hour ago
                This isn't brute force.
                • userbinator 2 minutes ago
                  It is in the same way that educated guessing is.
      • brokencode 2 hours ago
        How long do you figure it’d take to solve the problem yourself?
    • tptacek 2 hours ago
      Wait, what do you mean "LLMs are still absolutely useless at actual maths computation"? I rely on them constantly for maths (linear algebra, multivariable calc, stat) --- literally thousands of problems run through GPT5 over the last 12 months, and to my recollection zero failures. But maybe you're thinking of something more specific?
      • schneems 2 hours ago
        They are bad at math. But they are good at writing code, and as an optimization some providers have them secretly write code to answer the problem, run it, and give you the answer without telling you what happened in the middle.
        • avaer 2 hours ago
          Someone should tell the mathematicians if they use a calculator or a whiteboard or heavens forbid a computer they are "bad at math".
        • tptacek 1 hour ago
          What would I do to demonstrate that they are bad at math? If by "maths" we mean things like working out a double integral for a joint probability problem, or anything simpler than that, GPT5 has been flawless.
        • tempaccount5050 2 hours ago
          Are they bad at math? Or are they bad at arithmetic?
          • tptacek 1 hour ago
            Neither.
          • lacunary 1 hour ago
            if you don't know much math, it's easy to confuse the two
      • jasonfarnon 2 hours ago
        What tier are you using? I have run lots of problems and am very impressed, but I find stupid errors a lot more frequently than that, e.g., arithmetic errors buried in a derivation or a bad definition, say 1/15 times. I would love to get zero failures out of thousands of (what sounds like college-level math) posed problems.
        • tptacek 1 hour ago
          I have a standard OpenAI/ChatGPT Pro account; GPT5 is my daily driver for math, and Claude for code.
      • cuttothechase 1 hour ago
        Calc, stat, etc. from a textbook are things they would naturally be good at, but I don't think textbook computations that are in the training set, and extrapolations of them, are what's at question here.

        They are not great at playing chess either - computationally as well as analytically.

        • tptacek 1 hour ago
          I think this is wrong and a category error (none of the problems I've given it are in a textbook; they're virtually all randomized), but, try this: just give me a problem to hand off to GPT5, and we'll see how it does.

          Further evidence for the faultiness of your claim, if you don't want to take me up on that: I hand problems off to GPT5 to check my own answers. None of the dumb mistakes I make or missed opportunities for simplification are in the book, and, again: it's flawless at pointing out those problems, despite being primed with a prompt suggesting I'm pretty sure I have the right answers.

      • ButlerianJihad 56 minutes ago
        I only have a rudimentary understanding of calculus, trigonometry, Google Sheets, and astronomy, but I was able to construct an accurate spreadsheet for astrometry calculations by using Grok and Gemini (both free, no subscription, just my personal account) to surface the formulas for measuring the distance between 2-3 points on the celestial sphere. The LLMs also assisted me in writing functions to convert DMS/HMS coordinates to decimal and to work in radians.

        I found and fixed bugs I wrote into the formulas and spreadsheets, and the LLMs were not my sole reference, but once the LLM mentioned the names of concepts and functions, I used Wikipedia for the general gist of things, and I appreciated the LLMs' relevant explanations that connected these disciplines together.

        I did this on March 14, 2026
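        The core formula in play is the angular separation between two points on the celestial sphere. A minimal sketch of the kind of functions described (function names are illustrative, not the commenter's spreadsheet; the Vincenty form of the separation formula is used because it is numerically stable at both small and large separations):

```python
import math

def dms_to_deg(d: float, m: float, s: float) -> float:
    """Convert degrees/arcminutes/arcseconds to decimal degrees."""
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60.0 + s / 3600.0)

def angular_separation_deg(ra1: float, dec1: float, ra2: float, dec2: float) -> float:
    """Angular distance in degrees between two sky positions (RA/Dec in
    degrees), using the Vincenty formula on the sphere."""
    a1, d1, a2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = a2 - a1
    num = math.hypot(
        math.cos(d2) * math.sin(dra),
        math.cos(d1) * math.sin(d2) - math.sin(d1) * math.cos(d2) * math.cos(dra),
    )
    den = math.sin(d1) * math.sin(d2) + math.cos(d1) * math.cos(d2) * math.cos(dra)
    return math.degrees(math.atan2(num, den))

print(dms_to_deg(12, 30, 0))                 # 12.5
print(angular_separation_deg(10, 0, 20, 0))  # two points 10 degrees apart on the equator
```

        The same arithmetic translates directly into spreadsheet formulas, with RADIANS()/DEGREES() doing the unit conversion.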

  • echelon 1 hour ago
    Now do P vs NP.

    If/when these things solve our hardest problems, that's going to lead to some very uncomfortable conversations and realizations.

  • resident423 2 hours ago
    I wonder if the rationalizations people come up with for why this isn't real intelligence will be as creative as ChatGPTs solution.
    • thesmtsolver2 1 hour ago
      Remember when people thought multiplying numbers, remembering a large number of facts, and being good at rote calculations was intelligence?

      Some people think that multiplying numbers, remembering a large number of facts, and being good at calculations is intelligence.

      Most intelligent people do not think that.

      Eventually, we will arrive at the same conclusion for what LLMs are doing now.

      • resident423 1 hour ago
        Remember when people thought solving Erdős problems required intelligence? Is there anything an LLM could ever do that would count as intelligence? Surely the trend has to break at some point; if so, what would be the thing that crosses the line into real intelligence?
        • noosphr 1 hour ago
          I've spent a good chunk of time formalising mathematics.

          Doing formalized mathematics is as intelligent as multiplying numbers together.

          The only reason why it's so hard now is that the standard notation is the equivalent of Roman numerals.

          When you start using a sane metalanguage, and not just augmented English, to do proofs, you gain the same increase in capabilities as going from word equations to algebra.

        • thesmtsolver2 53 minutes ago
          When will LLM folks realize that automated theorem provers have existed for decades, and that non-ML theorem provers have solved non-trivial math problems tougher than this Erdős problem?

          Proposing and proving something like Gödel's theorems definitely requires intelligence.

          Solving an already proposed problem is just crunching through a large search space.

    • chrishare 1 hour ago
      LLMs are definitely intelligent - just not general like humans, and very, very jagged (succeeding and failing in head-scratching ways).
    • famouswaffles 1 hour ago
      None of it is really from logical thought. The rationalizations don't make any sense, but they haven't for a while. It's an emotional response. Honestly, It's to be expected.
      • threethirtytwo 53 minutes ago
        It's because HN is not really full of smart people. It's full of people who think they're smart and take pride in that idea that they're pretty intelligent.

        ChatGPT equalizes intelligence. And that is an attack on their identity. It also exposes their ACTUAL intelligence which is to say most of HN is not too smart.

    • vatsachak 1 hour ago
      Well, it still gets easy problems wrong.

      With real general intelligence you'd expect it to solve problems up to a certain difficulty at a good clip.

      • pepa65 33 minutes ago
        That "it" is a huge variety and range of things...
    • 0xBA5ED 2 hours ago
      And how about the creative rationalizations about how statistical text generation is actual intelligence? As if there is any intent or motive behind the words that are generated or the ability to learn literally any new thing after it has been trained on human output?
      • tptacek 1 hour ago
        2022 called, wants this argument back. When you're "statistically generating text" to find zero-day vulnerabilities in hard targets, building Linux kernel modules, assembly-optimizing elliptic curve signature algorithms, and solving arbitrary undergraduate math problems instantaneously --- not to mention apparently solving Erdos problems --- the "statistical text" stuff has stopped being a useful description of what's happening, something closer to "it's made of atoms and obeys the laws of thermodynamics" than it is to "a real boundary condition of what it can accomplish".

        I don't doubt that there are many very real and meaningful limitations of these systems that deserve to be called out. But "text generation" isn't doing that work.

        • emp17344 57 minutes ago
          But the systems that do that impressive work are no longer just LLMs. Look at the Claude Code leak - it’s a sprawling, redundant maze relying on tools and tests to approximate useful output. The actual LLM is a small portion of the total system. It’s a useful tool, but it’s obviously not truly intelligent - it was hacked together using the near-trillions of dollars AI labs have received for this explicit purpose.
          • tptacek 54 minutes ago
            What does this matter? You can build a working coding agent for yourself extremely quickly; it's remarkably straightforward to do (more people should). But look underneath all the "sprawling tools": the LLM itself is a sprawling maze of matrices. It's all sprawling, it's all crazy, and it's insane what they're capable of doing.

            Again if you want to say they're limited in some way, I'm all ears, I'm sure they are. But none of that has anything to do with "statistical text generation". Apparently, a huge chunk of all knowledge work is "statistical text generation". I choose to draw from that the conclusion that the "text generation" part of this is not interesting.

            • emp17344 45 minutes ago
              Well, hang on a second - it sounds like you may actually disagree with the user who created this thread. That user claims that these systems exhibit “real intelligence”, and success on this Erdos problem is proof.

              You seem to be making the claim that LLMs are statistical text generators, but statistical text generation is good enough to succeed in certain cases. Those are different arguments. What do you actually believe? Are we even in disagreement?

              • tptacek 42 minutes ago
                I don't have any opinion about "real intelligence" or not. I'm not a P(doom)er, and I don't think we're on the brink of ascending as a species. But I'm also allergic to arguments like "they're just statistical text generators", because that truly does not capture what these things do or what their capabilities are.
              • pepa65 28 minutes ago
                He does say that LLMs are just a part of the systems used these days.
      • resident423 1 hour ago
        Solving open math problems is strong evidence of intelligence so there's not really any need for rationalization? I don't understand why intelligence would require intent or motive? Isn't intent just the behaviour of making a specific thing happen rather than other things?
        • x3ro 1 hour ago
          I'm curious, do you think that this also applies to stable diffusion? Are these models "creative" too?
          • resident423 1 hour ago
            I haven't used stable diffusion enough to have a strong opinion on it. But my thinking is LLMs have only recently started contributing novel solutions to problems, so maybe there is some threshold above which there's less sloppy remixing of training data and more ability to form novel insights, and image generators haven't crossed this line yet.
          • famouswaffles 1 hour ago
            Yeah? Those models are creative.
        • 0xBA5ED 1 hour ago
          The LLM did not solve the problem.
    • bsder 1 hour ago
      Everybody who retried the problem on ChatGPT spent on the order of the same amount of time on it.

      Do you not see the issue?

      • resident423 52 minutes ago
        No, but I'm interested to know what it is?
    • walrus01 2 hours ago
      For one, everything its 'intelligence' knows about solving the problem is contained within the finite context window memory buffer size for the particular model and session. Unless the memory contents of the context window are being saved to storage and reloaded later, unlike a human, it won't "remember" that it solved the problem and save its work somewhere to be easily referenced later.
      • jychang 2 hours ago
        There's humans that have memory issues, or full blown Anterograde amnesia.
        • emp17344 1 hour ago
          There are humans who can’t read. That doesn’t mean Grammarly is “intelligent”. These things are tools - nothing more, nothing less.
      • in-silico 54 minutes ago
        For one, everything humans' "intelligence" knows about solving the problem is contained within the finite brain size for the particular person and life. Unless the memory contents of the brain are being saved to storage and reloaded later, it won't "remember" that it solved the problem and save its work somewhere to be easily referenced in a different life.
      • resident423 2 hours ago
        What you're describing sounds more like the model lacking awareness than lacking intelligence. Why does it need to know it solved the problem to be intelligent?
        • walrus01 2 hours ago
          We say African elephants are intelligent for a number of reasons, one of which is that they remember where sources of water are in very dry conditions and can successfully navigate back to them across relatively large distances. An intelligent being that can't remember its own past is at a significant disadvantage compared to others that can, which is exactly one of the reasons why Alzheimer's patients often require full-time caregivers.
          • resident423 1 hour ago
            There's probably a limit to how intelligent something can be with no long term memory, but solving Erdos problems in 80 minutes is clearly not above it, and I think the true limit is probably much higher than that.
          • peteforde 2 hours ago
            You are confusing lack of intelligence with the presence of impairment.
      • charcircuit 55 minutes ago
        As another commenter pointed out, these models are being trained to save and read context from files, so denying them the use of an ability they actually have just makes your claim tautological.
      • bpodgursky 1 hour ago
        All modern harnesses write memory files for context later.
    • tomlockwood 2 hours ago
      I think one day the VCs will have given the monkeys on typewriters enough money that these kinds of comments can be generated without human intervention.
    • otabdeveloper3 1 hour ago
      [dead]
    • catcowcostume 1 hour ago
      You're really telling on yourself if you think LLM is intelligence
    • techblueberry 2 hours ago
      This is real intelligence is the bear position, so I think it’s real intelligence.
  • brcmthrowaway 1 hour ago
    This is not a good Saturday night for humanity
  • iqihs 2 hours ago
    referring to Tao as just a 'mathematician' gave me a good chuckle
  • wizardforhire 2 hours ago
    WTF!?
  • haricomputer 2 hours ago
    [dead]
  • homo__sapiens 2 hours ago
    Big if true.
  • tomlockwood 3 hours ago
    My big question with all these announcements is: How many other people were using the AI on problems like this, and, failing? Given the excitement around AI at the moment I think the answer is: a lot.

    Then my second question is how much VC money did all those tokens cost.

    • ecshafer 1 hour ago
      I've tried my hand at a few of the Erdős problems and came up short, and you didn't hear about it. But if a mathematician at Harvard solved one, you would probably still hear about it a bit. Just the possibility that a Pro subscription for 80 minutes solved an Erdős problem is astounding. Maybe we get some researchers to get a grant and burn a couple data centers' worth of tokens for a day/week/month and see what it comes up with?
      • tomlockwood 2 minutes ago
        The question is how many people tried to solve this Erdos problem with AI and how many total minutes have been spent on it.
    • peteforde 2 hours ago
      Can you imagine how many bags of chips we could buy if we stopped funding cancer research?

      It's so expensive!

      • tomlockwood 2 hours ago
        Can you imagine how much ChatGPT cancer research we could fund if we stopped funding cancer research?
    • gdhkgdhkvff 2 hours ago
      Why do you care about either of those questions?
      • tomlockwood 2 hours ago
        Because it could be a massive waste of time and money.
        • komali2 1 hour ago
          Capitalism already is a poor allocator of human effort, resources, and energy, why lock in on this specifically? There's entire professions that are essentially worthless to society that exist only to perpetuate the inherent contradictions of this system, why not focus more on all that wasted human effort? Or the fact that everyone has to do some arbitrary sellable labor in order to justify their existence, rather than something they might truly enjoy or might make the world better?
      • Eufrat 2 hours ago
        I think we should at least ask the latter, if it turned out it cost $100,000 to generate this solution, I would question the value of it. Erdős problems are usually pure math curiosities AFAIK. They often have no meaningful practical applications.
        • jasonfarnon 2 hours ago
          Also, it's one thing if the AI age means we all have to adapt to using AI as a tool, another thing entirely if it means the only people who can do useful research are the ones with huge budgets.
          • peteforde 2 hours ago
            Your logic undoes your point, because the kid who "solved" this technically didn't even have to invest in a degree.
            • tomlockwood 2 hours ago
              America should fund tertiary education better, and that would solve even more problems.
              • peteforde 1 hour ago
                Getting off-topic, but as a successful high-school dropout I am compelled to remind anyone reading this that [the American] college [system] is a scam.

                That's not to say that there aren't benefits to tertiary education, for many people in different contexts. It's just not the golden path that it's made out to be.

                Many people currently in college are just wasting their money and should enroll in trades programs instead.

                Meanwhile, nothing about being in or out of school is mutually exclusive to using LLMs as a force multiplier for learning - or solving math problems, apparently.

        • anematode 2 hours ago
          Neither does the Collatz conjecture, Fermat's last theorem, ....

          (Of course, those problems are on another plane than this one.)

          • Eufrat 2 hours ago
            But that’s exactly my point.

            These are absolutely worth studying, but being what they are, nobody should be dumping massive amounts of money on them. I would not find it persuasive if researchers used LLMs to solve the Collatz conjecture or finally decode Etruscan. These are extremely valuable goals, but it is unlikely to be worth having an LLM grind tokens like crazy to reach them.

            • mhb 2 hours ago
              Is it worth it to buy a super-yacht?
            • anematode 2 hours ago
              Maybe... but I would love if 1% of the investment in AI were redirected to the mathematics education and professional research that would allow progress on any of these problems...
        • inerte 2 hours ago
          I would question it at $60k. At $100k it's a steal.
        • dinkumthinkum 1 hour ago
          No meaningful, practical applications? You realize that sounds incredibly naive in the history of mathematics, right? People thought this way about number theory in general, and many other things that turned out to have quite important practical applications. Your statement is also a bit odd in that researchers are already paid throughout their whole careers to solve such problems. I don't know.
  • mhb 2 hours ago
    > He’s 23 years old and has no advanced mathematics training.

    How is he even posing the question and having even a vague idea of what the proof means or how to understand it?

    • hx8 2 hours ago
      > “I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI and seeing what it can come up with,” he says. “And it came up with what looked like a right solution.” He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge.

      Seems like standard 23 year old behavior. You're spending $100-$200/mo on the pro subscription, and want to get your money's worth. So you burn some tokens on this legendarily hard math problem sometimes. You've seen enough wrong answers to know that this one looks interesting and pass it on to a friend that actually knows math, who is at a place where experts can recognize it as correct.

      Seems like a classic example of a non-expert human labeling ML output.

      • lIl-IIIl 7 minutes ago
        According to the article he was using the free ChatGPT tier at first, until someone gifted him a Pro subscription to encourage "vibe-mathing".
      • maplethorpe 44 minutes ago
        Couldn't he have just asked ChatGPT if it was correct? Why do we still feel the need to loop in a human?
    • ChrisGreenHeur 2 hours ago
      my guess would be due to having an interest in the field
  • ghstinda 2 hours ago
    Scientific American going out of business next, lol. Weak headline. ChatGPT, let's have a better headline for the god among men who realized the capability of the new tool, which many underestimate or puff up needlessly. Fun times we live in. One love, all.
  • nadermx 37 minutes ago
    This just shows that with the right training, in this case a thesis on Erdős problems, they were able to prompt and check the output. So they still needed the know-how to even begin to figure it out. "Lichtman proved Erdős right as part of his doctoral thesis in 2022."
    • fwipsy 35 minutes ago
      Lichtman is an expert who commented for the story. Liam Price is the one who prompted ChatGPT. "He’s 23 years old and has no advanced mathematics training."
      • nadermx 30 minutes ago
        “I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI and seeing what it can come up with,” he says. “And it came up with what looked like a right solution.”

        "He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge."

        So basically two undergrads/recent graduates in math; "advanced" is subjective at that point.