The Rise and Fall of GOFAI

(billwadge.com)

50 points | by herodotus 165 days ago

15 comments

  • lispm 164 days ago
    Fails to mention any "GOFAI" stuff beyond Eliza, which was not even GOFAI and also not written in Lisp.

    Many "GOFAI" domains were explored, but the author does not seem to know any of them: planning/scheduling, natural language processing, diagnosis, software/hardware verification, ...

    There are also examples of machine translation applications, though the author does not mention any. An early one was Metal. https://dl.acm.org/doi/pdf/10.1145/114669.114673

    A report about Metal from the early 1990s: https://aclanthology.org/www.mt-archive.info/90/EW-1990-META... Bonus: one of the rare applications deployed on a Lisp Machine, here as a translation server and for grammar development.

  • noelwelsh 164 days ago
    This is a very odd post. It claims, for example, that Good Old-Fashioned AI (GOFAI) was over before the digital computer was invented. It doesn't correspond to either my understanding of what is meant by the term GOFAI or the history of the field. It ignores all current work (e.g. proof assistants, Lean), the role of computational power, and the interplay between GOFAI techniques and other techniques (e.g. Monte Carlo tree search).
    • qsort 164 days ago
      I'm honestly wondering if it's some sort of parody?

      The part about games is also very weird. The machine that beat Kasparov was Deep Blue, not Big Blue (a nickname for IBM itself), and it was almost entirely brute force search IIRC.

      Modern chess engines do incorporate statistical methods in the form of NNUE nets, but they would be completely worthless without traditional tree search straight out of Russell-Norvig.

      Exactly the same goes for Go, Shogi, etc.

      I'm also confused about why the whole issue of undecidability is relevant. Surely this is a limitation for anything running on a computer, including statistical approaches, so why would it favor one approach over the other?

      • mtlmtlmtlmtl 164 days ago
        Deep Blue (and basically no chess engine since the mid-'50s) wasn't really doing brute-force search. It used alpha-beta conspiracy search, a variant of alpha-beta that was a novel idea at the time. And even alpha-beta itself is only brute force under the worst possible move ordering, which, thanks to iterative deepening, is basically never the case. It gives the same result as a truly brute-force minimax search, but by looking at roughly the square root of the number of positions naïve minimax would (under optimal move ordering).
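
        Very roughly, the pruning looks like this (a toy negamax sketch in Python for illustration, not Deep Blue's actual search; `evaluate`, `legal_moves`, `play` and the `score_guess` used for move ordering are all assumed stand-ins for a real engine's parts):

          # Toy negamax with alpha-beta pruning. evaluate() scores a position
          # from the side to move's point of view (hand-crafted or NNUE-style).
          def alphabeta(pos, depth, alpha, beta, evaluate, legal_moves, play, score_guess):
              if depth == 0:
                  return evaluate(pos)
              best = float("-inf")
              # Trying promising moves first is what pushes the node count
              # toward the square root of full minimax.
              for m in sorted(legal_moves(pos), key=score_guess, reverse=True):
                  score = -alphabeta(play(pos, m), depth - 1, -beta, -alpha,
                                     evaluate, legal_moves, play, score_guess)
                  best = max(best, score)
                  alpha = max(alpha, score)
                  if alpha >= beta:   # cutoff: the opponent won't allow this line
                      break
              return best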

        As for Russell-Norvig, it has maybe 2 or 3 pages on alpha-beta, and nothing on the almost 70 years of developments that have actually made it fast, able to use multiple threads, etc.

        • bunderbunder 164 days ago
          And, for what it's worth, I started college a couple of years after Deep Blue beat Kasparov, and at the time this family of search algorithms was still being taught (and consumed a significant amount of classroom time) in the Artificial Intelligence class, not the algorithms class.
  • m463 164 days ago
    I lived through the rise and fall of "AI" of this type. "Prolog will be the future", etc.

    I think the idea is that we were supposed to encode "cats have 4 legs" and every other problem space, and the machines would magically be smart.

    To me this seemed sort of naive, like "Give your spouse a MySQL server, and they will finally be organized, forever".

  • ducktective 164 days ago
    From what I understood, the author suggests that GOFAI, the symbolic approach to the problem of AI, has fundamental theoretical problems: Gödel's incompleteness theorems and the undecidability problem. But:

    1- The ML approach is also inherently based on mathematical foundations like linear algebra and statistical methods. Any fundamental limitations of GOFAI therefore also apply here.

    2- Do we even know how ML/DL methods really work? There is a Geoffrey Hinton interview about this, and he seems to suggest that NN methods are actually a black box. So in this regard, NN-based methods are even less "confidence-inspiring" than symbolic reasoning.

    • zmgsabst 164 days ago
      I’m relatively sure that NNs are learning an implicit type theory anyway. But:

      1. we don’t know which one; and,

      2. you can only do fuzzy reasoning with it, not synthesize and compute results.

      I think the next breakthrough will be using statistical models to build effective type theories for the data we show them, then extracting those theories so we can "reason" explicitly.

  • bunderbunder 164 days ago
    This post feels weirdly shallow?

    For example, a statement like "totally beyond the reach of purely symbolic computation" needs to be followed by a lot more clarification, given that all of these algorithms that have supposedly superseded symbolic computation are themselves implemented using systems that are equivalent to symbolic computation. And that means the author stops short of getting to the really interesting insights. No, really, why has symbolic AI fallen out of favor?

    As someone who started their career doing symbolic NLP and has since moved on to machine learning methods, I'd argue that it's a practicality problem. Symbolic methods require humans to figure out the rules and then program them into the system. And as the system gets bigger and more complex, it becomes increasingly difficult to make any change without creating a sort of butterfly effect that destabilizes some other portion of the system.

    I have a hunch that one of the most (popularly) underappreciated aspects of deep neural networks is that they've come up with some good mechanisms for mitigating that problem. If you squint and turn your head, that's what regularization and dropout are really about - trying to minimize the number of truly critical dependencies among different parts of the model, so that it can continue to grow without becoming unstable.
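
    If it helps, a toy illustration of the dropout half of that hunch (plain NumPy, the standard "inverted dropout" formulation, nothing specific to any real model):

      import numpy as np

      def dropout(h, p=0.5, training=True):
          # Randomly silence each activation with probability p during training.
          # Because any unit can vanish on any step, the network is discouraged
          # from building a single "critical" unit that everything depends on.
          if not training:
              return h
          mask = (np.random.rand(*h.shape) >= p).astype(h.dtype)
          return h * mask / (1.0 - p)  # rescale so expected activations match test time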

    By contrast, GOFAI practitioners tend to actively create such critical dependencies, because doing so limits the number of items we need to keep in our working memory when we're writing the code. Unfortunately, and rather ironically, that seems to work out about as well as greedy search ever does.

    • Nevermark 163 days ago
      The overriding reason symbolic systems found language challenging is that language in our brains isn't a symbolic system. It has symbolic characteristics, but it is still an analog system inductively learned by neurons.

      So it is not surprising that models which use approximate neural-style information processing are better able to model something that is naturally implemented with that category of computation.

      And the reverse is true. Neural models, like humans, don't naturally perform well on many-step exact calculations, much less complex exact reasoning.

      (Unless models are reduced to modules that are equivalent to primitive symbolic operations. But that is essentially symbolic design on top of a substrate whose details have been intentionally abstracted away, with the extra costs of using an arbitrary non-optimal substrate.)

  • pjbk 164 days ago
    Unless the author is attempting to redefine GOFAI, ML and NNs were very much techniques of "old" AI. The perceptron was invented in the late 1950s and there was already plenty of prior art.

    Large-scale models prove computers are very good interpolators. They also make it evident that they lack common sense.

    Every time there is a new release of ChatGPT, Copilot, Bard/Gemini, etc., I challenge them with an instance of a simple non-computable problem like the PCP (Post Correspondence Problem). Every time they fail to nail down a correct solution. And worse, they try to convince me they can solve the problem and that their answer is correct, showing that a computer is still behind pulling the strings.
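
    For reference, a PCP instance is a finite set of dominoes (top, bottom), and the question is whether some sequence of dominoes makes the concatenated tops spell the same string as the concatenated bottoms. A toy brute-force checker in Python, purely illustrative; since the general problem is undecidable, any cutoff like max_len is a cheat:

      from itertools import product

      def solve_pcp(dominoes, max_len=8):
          # Try every index sequence up to max_len and return the first match.
          for n in range(1, max_len + 1):
              for seq in product(range(len(dominoes)), repeat=n):
                  top = "".join(dominoes[i][0] for i in seq)
                  bottom = "".join(dominoes[i][1] for i in seq)
                  if top == bottom:
                      return seq
          return None

      # Classic solvable instance: indices (2, 1, 2, 0) give
      # "bba"+"ab"+"bba"+"a" == "bb"+"aa"+"bb"+"baa" == "bbaabbbaa".
      print(solve_pcp([("a", "baa"), ("ab", "aa"), ("bba", "bb")]))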

    • zokier 164 days ago
      > Unless the author is attempting to redefine GOFAI, ML and NN were very much techniques of "old" AI.

      What definition of GOFAI are you using here? Wikipedia states:

      > In the philosophy of artificial intelligence, GOFAI ("Good old fashioned artificial intelligence") is classical symbolic AI, as opposed to other approaches, such as neural networks, situated robotics, narrow symbolic AI or neuro-symbolic AI. The term was coined by philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea.

      > Thus, Haugeland's GOFAI does not include "good old fashioned" techniques such as cybernetics, perceptrons, dynamic programming or control theory or modern techniques such as neural networks or support vector machines.

      https://en.wikipedia.org/wiki/GOFAI

    • Vecr 164 days ago
      Humans can't do that either. If you read Scott Aaronson, it's pretty clear that humans who "acknowledge" various non-computable "truths" are not actually doing what they represent themselves as doing.

      https://www.scottaaronson.com/writings/finite.html

      • pjbk 164 days ago
        Interesting. I really enjoyed the books by Penrose and Cohen.

        A human certainly cannot solve a non-computable problem by "playing" computer. However, whether the insight that comes from trying to solve (or invent, in Post's case) something like the PCP, and then the ability to intuit and then prove that there cannot be a general solution, can be achieved by a non-human is still an open question. Penrose has his quantum-processes-inside-microtubules explanation, which seems a very ace-up-the-sleeve answer. I don't know if that would be too different from a massively parallel machine with true stochastic non-determinism.

        In any case, comparing the brain to a computer and calculating its maximum number of states in terms of bits does not seem fair to me. There are certainly hard physical limits, but they pertain to the question of consciousness only tangentially.

    • bunderbunder 164 days ago
      GOFAI is defined in terms of the types of methods used, not in terms of the year in which the method was invented. GOFAI and ML have always existed concurrently, and always will.
  • bitwize 164 days ago
    GOFAI is still here, solving problems from chip layout to route planning for video game characters. It just isn't called AI anymore because it isn't sexy. At one point it was; it could attract funds from military sponsors who wanted the missile to know where it is the way today's ML algorithms attract adtech sponsors who want to optimize their eyeball-monetizing strategy. But today it's like Java: boring tech solving boring problems because it's a known quantity.
  • aedon 164 days ago
    When I was younger I joined an "AGI" lab once, only to find out that I had joined a GOFAI lab. That was rough. The CEO didn't believe in statistical learning because your system can only be as good as your data, and believed a DL system wasn't actually "thinking" when you interacted with it.
    • bunderbunder 164 days ago
      "Thinking" is one of the most annoying words in AI.

      It has no clear, stable definition, and I think that that might actually be its entire point. If we ever settled on a concrete, operationalizable definition, that would bring a quick end to the arguments about how many angels can dance on the head of a pin.

  • DeathArrow 164 days ago
    What if we mix GOFAI with machine learning? Wouldn't the result surpass the capacity of either?
    • tsimionescu 164 days ago
      That's exactly how the AlphaX systems (AlphaGo, AlphaStar, etc.) are built. They use machine learning but also layers of Monte Carlo tree search and other GOFAI "tricks".
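
      Schematically, the search layer is classic UCT-style Monte Carlo tree search; the sketch below (Python; the `game` object with legal_moves/play/is_terminal/winner/to_move is an assumed toy API) uses random rollouts for evaluation, which is roughly where an AlphaGo-style system plugs in its learned policy/value network instead:

        import math, random

        class Node:
            def __init__(self, state, parent=None, player_just_moved=None, move=None):
                self.state, self.parent, self.move = state, parent, move
                self.player_just_moved = player_just_moved
                self.children, self.visits, self.wins = [], 0, 0.0

        def random_rollout(game, state):
            # Classic MCTS leaf evaluation; neural policy/value nets replace this.
            while not game.is_terminal(state):
                state = game.play(state, random.choice(game.legal_moves(state)))
            return game.winner(state)

        def uct_search(game, root_state, evaluate=random_rollout, iters=1000, c=1.4):
            root = Node(root_state)
            for _ in range(iters):
                node = root
                # 1. Selection: descend through expanded nodes by UCT score.
                while node.children:
                    node = max(node.children, key=lambda ch:
                               float("inf") if ch.visits == 0 else
                               ch.wins / ch.visits
                               + c * math.sqrt(math.log(node.visits) / ch.visits))
                # 2. Expansion: grow the tree at a non-terminal leaf.
                if not game.is_terminal(node.state):
                    mover = game.to_move(node.state)
                    node.children = [Node(game.play(node.state, m), node, mover, m)
                                     for m in game.legal_moves(node.state)]
                    node = random.choice(node.children)
                # 3. Evaluation: who wins from here? (rollout or learned value)
                winner = evaluate(game, node.state)
                # 4. Backup: credit each node from its own player's perspective.
                while node is not None:
                    node.visits += 1
                    node.wins += 1 if winner == node.player_just_moved else 0
                    node = node.parent
            return max(root.children, key=lambda ch: ch.visits).move  # most-visited move
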
      • riku_iki 164 days ago
        > Monte-Carlo tree search and other GOFAI "tricks"

        Are Monte Carlo methods and the like really GOFAI? My impression is that GOFAI is mostly rule-based systems like Prolog, and other algorithms are not necessarily in this category.

        • tsimionescu 163 days ago
          As far as I know, most game-related algorithms are products of GOFAI. Things like minimax, A*, and yes, Monte Carlo tree search come from this same pool.
    • f1shy 164 days ago
      Yes. It is called neurosymbolic programming.
  • tromp 164 days ago
    > the programming language LISP, based closely on the λ calculus.

    I would say LISP is at best loosely based on the λ calculus.

    David Turner discusses LISP in Section 2 of [1]. The subsection "Some Myths about LISP" debunks the link between LISP and the lambda calculus. McCarthy added LAMBDA to LISP without an understanding of the lambda calculus, and as a result got scoping wrong (dynamic). Eventually, LISP did adopt the correct lexical scoping.

    [1] https://www.cs.kent.ac.uk/people/staff/dat/tfp12/tfp12.pdf

  • auggierose 164 days ago
    Well, GOFAI ≠ logic, although GOFAI of course uses logic as a tool. Logic itself has been quite the success story, maybe one of the biggest of applied mathematics.
  • sterlind 164 days ago
    GOFAI is vital. Where are the automated theorem provers? The planning algorithms? The expert systems, the optimizers, the constraint solvers? All those problem domains still exist, but they've been totally forsaken by new AI. Prolog is still state of the art for expert systems. Planning hasn't changed since the early '00s. Automated provers still don't scale.

    Somehow, we've hit the point where AIs can write sonnets and play jazz, but proving simple theorems is still science fiction.

    I just want my computer to solve logic puzzles without waiting for the heat death of the universe as I scale them up. I'm sure language models would make for fantastic heuristics; it just seems like nobody cares and the whole field is just rotting away.

    • jhbadger 164 days ago
      >Somehow, we've hit the point where AIs can write sonnets and play jazz, but proving simple theorems is still science fiction.

      It's interesting that this seems to be a return to conventional ideas of what is difficult. I remember that people used to comfort themselves with statements like "yes, computers are very good at things like chess and mathematics, but they will never be able to compose music or write a poem" implying that those things were the real hard things and that math and chess were comparatively easy.

      • empath-nirvana 164 days ago
        People have this weird idea that if an AI uses an external tool like a search engine or Python to do something, it is somehow cheating. But try doing advanced math as a _human_ just off the top of your head: no pencil and paper, no calculator, no looking up how to do things.

        Math and logic are not _natural_ things that humans can do. They are things we have to be trained to do, and most humans throughout history did not know how to do them.

        People need to stop thinking of an AI system as only a neural network. In reality any usable AI system is going to be a collection of specialized components, some of which might be GOFAI, along with more general components (like generative AI).

    • kragen 164 days ago
      all these things depend crucially on search, which gets faster according to some large exponent on the quality of your search heuristic. probably neural networks will provide much better heuristics and therefore speed for such search-based approaches
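
      a concrete version of that point: plain A* where the heuristic is just a function argument, so a hand-written estimate or a learned model can be swapped in (a sketch only; `neighbors` and `heuristic` are assumed callbacks, and optimality is only guaranteed for admissible heuristics)

        import heapq
        from itertools import count

        def astar(start, is_goal, neighbors, heuristic):
            # neighbors(s) yields (next_state, step_cost); heuristic(s) guesses
            # the remaining cost. a better heuristic means far fewer expansions.
            tie = count()  # tie-breaker so the heap never compares states
            frontier = [(heuristic(start), next(tie), 0, start, [start])]
            best_cost = {start: 0}
            while frontier:
                _, _, cost, state, path = heapq.heappop(frontier)
                if is_goal(state):
                    return path, cost
                for nxt, step in neighbors(state):
                    new_cost = cost + step
                    if new_cost < best_cost.get(nxt, float("inf")):
                        best_cost[nxt] = new_cost
                        heapq.heappush(frontier, (new_cost + heuristic(nxt),
                                                  next(tie), new_cost, nxt, path + [nxt]))
            return None, float("inf")
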
    • empath-nirvana 164 days ago
      You can think of most compilers as GOFAI.
    • ducktective 164 days ago
      >it just seems like nobody cares

      This is the natural consequence of the bulk of money in tech going to webdev, from engagement attraction to big data for ads.

  • ForHackernews 164 days ago
    Arguably old-fashioned AI led to greater insight into thinking and reasoning. Black-box LLMs are (so far) stochastic parrots. Maybe what we've learned is that most "intelligent reasoning" by people, isn't.
    • mardifoufs 164 days ago
      In what way did old-fashioned AI lead to greater insight into thinking and reasoning? Sure, that was the goal, but it completely failed at doing so, as witnessed by the lackluster results. And usually, the more it tried to use any supposed insight or "model" of reasoning, the harder it failed to produce results.

      People love to repeat that "stochastic parrots" argument, and, I mean, sure, I get why. But what does it say about GOFAI when even basic "black boxes" outperform almost everything GOFAI can do? And often with much less overfitting. Not just in language models, but also vision, classification, anomaly detection, etc. There's a reason why current techniques are, well, current.

      I guess I just don't see how it led to any actual insights if it hasn't been able to reproduce any part of it. Now maybe real GOFAI just hasn't been tried enough, or we did it wrong, and we just need to try going back to classical techniques and theories (ie. trying to replicate human thinking)... but people are free to do that! and I'm sure most researchers would be delighted if someone came up with a way to make it work and outperform the current "black box" driven approach. It's just that it never happens, but it "sounds" good and "feels" better for some to think it will, eventually.

      • KineticLensman 164 days ago
        > In what way did old fashioned AI lead to greater insight in thinking and reasoning? Sure that was the goal, but it completely failed at doing so

        I was involved in an AI project funded by the EU Esprit programme in the early 90s, developing an Expert System Builder. Our goal was definitely not to gain insights into thinking and reasoning, but to help commercialise an academic technology by providing tools that allowed domain experts to build reasoning systems that could be sold.

        It went about as well as you can expect given the limitations of the expert-system approach, although two of the companies involved did manage to produce novel, vaguely useful in-house tools that were used in demos to senior customers to show that the companies were forward looking.

        • mardifoufs 164 days ago
          Ah sorry, I guess I was generalizing from what I was taught and what my teachers (at Université de Montréal) were doing back then :). Weren't reasoning/expert systems usually based on trying to model the human thought process? At least early on? I might be totally wrong.
          • KineticLensman 163 days ago
            > Weren't reasoning/expert systems usually based on trying to model the human thought process?

            Yes, in the sense of using rules / heuristics in the way that human experts were believed to. One classic architecture involved a blackboard of facts. Rules were triggered when the facts matched their preconditions and could update the blackboard with new facts, and so on. The rules looked like a mass of if-then statements, but the order in which they fired was driven by the contents of the knowledge base and the behaviour of the inference engine.

            In my experience, once you reached a certain number of rules / level of complexity, it became harder and harder to add new rules, and the lack of traditional programmatic approaches to structuring and control compromised the purely 'knowledge based' approach. As a traditional programmer myself (in Lisp), I increasingly encountered situations where I just wanted to call a proper function.

            There were also more theoretical issues, such as non-monotonic reasoning, where you discover that a previously asserted fact was misleading / incorrect and you need to retract subsequent assertions, etc. The comedy example here is where you have knowledge that Tweety is a bird and use rules to design an aviary for him; you then discover that Tweety is a penguin, so a completely different habitat is required. There were also comedy examples where people used a medical expert system to diagnose their car's problems and it would determine that the rust was a bad case of measles.
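
            The core loop itself was tiny. A toy Python rendering of the blackboard idea (purely illustrative, not any particular shell), using Tweety:

              # Toy forward-chaining "blackboard": rules fire when their preconditions
              # match the facts and may add new facts. There is no retraction, which
              # is exactly what the penguin discovery breaks.
              facts = {("bird", "tweety")}

              rules = [
                  (lambda f: ("bird", "tweety") in f,    ("can_fly", "tweety")),
                  (lambda f: ("can_fly", "tweety") in f, ("needs", "aviary")),
              ]

              changed = True
              while changed:                  # keep firing until nothing new is asserted
                  changed = False
                  for precondition, conclusion in rules:
                      if precondition(facts) and conclusion not in facts:
                          facts.add(conclusion)
                          changed = True

              print(facts)  # later learning ("penguin", "tweety") can't cleanly retract "needs aviary"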

            I think it did lead to improved understanding of mathematical logic-based systems, but didn't feed back into an understanding of human cognition.

      • ForHackernews 164 days ago
        Symbolic manipulation from early AI work has been used to great effect in computer algebra systems like Maple or Mathematica. It's a measure of the 'defining down' of AI that none of that stuff counts in people's minds any more.
      • riku_iki 164 days ago
        > when even basic "black boxes" outperform almost everything GOFAI can do?

        Wondering what exactly you include in "everything"? Maybe you can provide a specific example?

        • mardifoufs 163 days ago
          I was referring to the stuff I mention later in my comment: Anomaly detection, image classification/segmentation, upscaling data, even interpolation, complex forecasting, text generation, text to speech, speech recognition, etc.

          I'm curious about where gofai is still outperforming modern techniques in complex tasks. As in, genuinely curious, because I want to be wrong on this!

          • riku_iki 163 days ago
            > I was referring to the stuff I mention later in my comment: Anomaly detection, image classification/segmentation, upscaling data, even interpolation, complex forecasting, text generation, text to speech, speech recognition, etc.

            I think most/all of this was not a target of GOFAI.

            > I'm curious about where gofai is still outperforming modern techniques in complex tasks. As in, genuinely curious, because I want to be wrong on this!

            Theorem proving and equation solving, for example. NNs still suck at deep symbolic math and reasoning.

            • mardifoufs 163 days ago
              True, I didn't consider those as being AI tasks but that's just proving your point!

              Though I think that a lot of those (certainly image classification, face recognition, TTS, etc) were tasks that were very important in the field back in the 1970/80/90s. A lot of resources were spent on AI specifically for those purposes.

              • riku_iki 163 days ago
                > Though I think that a lot of those (certainly image classification, face recognition, TTS, etc) were tasks that were very important in the field back in the 1970/80/90s. A lot of resources were spent on AI specifically for those purposes.

                Maybe, but it is hard for me to see how you came to such a conclusion. The discussion is about the specific term GOFAI. I would look at the following as sources of truth:

                - the Wikipedia definition, which explicitly states that GOFAI is symbolic AI (rule-based reasoning, like Prolog, Cog, Cyc).

                - the table of contents of the 2nd edition of the Russell & Norvig book: https://aima.cs.berkeley.edu/2nd-ed/contents.html which has very little about what you described and mostly focuses on search, discrete algorithms, and reasoning.

    • adamnemecek 164 days ago
      > Arguably old-fashioned AI led to greater insight into thinking and reasoning

      did it?

      • barberpole 164 days ago
        Sure it did: the work around SOAR and John Anderson, production systems that aimed to model cognitive load, case-based reasoning, etc.
  • captaincaveman 164 days ago
    The post seems to be conflating GOFAI with cybernetics; they are distinct, with cybernetics being far more aligned with the sub-symbolic approach (McCulloch & Pitts, for example).