4 comments

  • dexwiz 394 days ago
    This article made sense in 2017, when most AI was just a statistical model. But 6 years later we are living in a much different world.
    • nwah1 394 days ago
      Not that much different. 9/10 uses of the term are still complete bullshit. The other 1/10 are still mostly bullshit as well.
      • BoorishBears 394 days ago
        If you don't understand the potential of a technology, it's certainly easier to dismiss it as bullshit than to learn.

        Crypto/"Web3" was stupid because of the oracle problem: Your magical NFT can't actually enforce its license without leveraging the slightly centralized concept of the federal court system.

        AI, as in LLMs and Latent Diffusion, does not have that kind of obvious limitation on generating actual value. There will clearly be specific limitations, but even within the confines of what has been built so far there is an insane amount of value and potential.

        Crypto detractors who are lumping AI into the "bullshit" bucket really come across as a broken clock that was right once, and I say that as someone who has been a crypto detractor for a long time.

        • marginalia_nu 394 days ago
          Eh, I think a lot of the reasons to be skeptical of crypto still apply to "AI".

          AI may have actual real-world uses (unlike crypto), but it's also the subject of a huge hype campaign (like crypto), largely consisting of the same people hyping crypto, with similar strategies of emphasizing FOMO (like crypto) and making wild and fantastical claims about unheard-of growth and the end of the existing world order through technology, in a way that borders on eschatology (like crypto).

          • idopmstuff 394 days ago
            Except that you can't make money on hype with AI the same way that you could with crypto. If you could generate FOMO about a coin, people would buy that coin and you'd make money. Generating FOMO about AI just leads people to go try ChatGPT. Unless you've built some domain-specific product on top that's meaningfully better than ChatGPT or Bing at whatever it's supposed to do, you can't actually profit (and if you have built that, then good for you, you've done something useful).

            That is all to say that there is definitely a lot of AI hype, but it seems to be generated much more by legitimate excitement than by financial incentives. And given how much progress we've seen with AI serving legitimately useful functions, the hype seems much more warranted than the crypto hype was (some of it, at least).

            • add-sub-mul-div 394 days ago
              > Except that you can't make money on hype with AI the same way that you could with crypto.

              People seem to think so. Have you seen all the shovelware getting posted here lately?

            • marginalia_nu 393 days ago
              I think these people have been useful idiots for crypto, and crypto grifters have fanned their flames and encouraged this cult of technology, but at this point they're an independent entity that will gladly latch onto any narrative that resonates with them in the vacuum left behind by crypto. They were never making money off this. If anything, they were the ones going broke in bait-and-switches.
          • BoorishBears 394 days ago
            > AI may have actual real world uses (unlike crypto)

            You started with a sentence that captures the heart of why handwavy comments like "most mentions of AI are bullshit" are silly, even after someone has narrowed the discussion down to recent developments.

            The vast majority of recent mentions of AI refer to LLMs and Latent Diffusion, which, again, have already proven their real-world usefulness in incredible ways.

            • marginalia_nu 393 days ago
              Right, I think even including recent developments, AI is incredibly over-hyped, largely because the technology is somewhat new and difficult to reason about, and its limitations are thus more than a bit nebulous.

              Several technology bubbles have formed before under these exact circumstances. But this is like a whole new level of gullibility. Most of the reasoning seems to go along the lines of:

              1. LLMs have made impressive progress

              2. basically https://xkcd.com/605/

              3. SOCIETY-SHATTERING IMPACT!!!

              Especially noteworthy is that this is a point we've been at before with artificial intelligence in particular. This isn't even the second time, but at least the third wave of AI hype.

              Don't get me wrong, I can totally see why OpenAI is pitching a future where basically all of society operates as a SaaS client to their API; what I don't get is why someone who isn't an OpenAI shareholder would guzzle that particular Kool-Aid.

        • esjeon 394 days ago
          "Potential" is just a belief, and wielding it proves nor does nothing.
        • nwah1 394 days ago
          I didn't say machine learning is useless.

          I said most products claiming to have AI secret sauce have no secret sauce.

          Big difference.

          • BoorishBears 394 days ago
            Saying:

            > I said most products claiming to have AI secret sauce have no secret sauce.

            Means you're implying

            > Most products claiming AI do have AI

            Which is exactly the opposite of the article, and agrees with the comment you replied to, all while calling things bullshit...

            When the article was written, they straight up were not using AI; now they are, and mostly in novel ways, by simple virtue of building on a novel tech.

    • esjeon 394 days ago
      Welp, it's still a statistical model, and nothing is wrong with that. The only thing that's changed is that we now have a large-scale statistical method for extracting patterns out of a large corpus of highly complex data in a short enough time span. This progress was mainly backed by engineering efforts like cloud infrastructure, big data methodologies, and GPGPU. The theory itself has only gotten a small bump since the last AI winter (in terms of depth), and, combined with the engineering advancements, it has opened up a whole lot of exploration opportunities (in terms of breadth).
      • og_kalu 394 days ago
        It's not a statistical model, unless the term has lost all meaning. The goalpost-shifting rampant in discussions of this field is something to behold.
      • scrubs 394 days ago
        You stole my thunder.
    • nvr219 394 days ago
      It's still that, just with faster computers.
    • version_five 394 days ago
      It's been funny watching the "it's just a statistical model" crowd, who were always quick to chime in (when that's not what 2016 deep learning was), now doing a 180-degree turn to "how do we know the brain isn't just an LLM" or whatever silly confused-by-a-mirror opinion they hold now, even though, despite obvious advances in scale, there's no fundamental difference in the models. The only takeaway for me is that people love to passionately misunderstand anything that gets called AI.
      • cmarschner 394 days ago
        For my part (PhD in computational linguistics; first classical "AI", then machine learning, then deep learning since 2014; not working on NLP for the past 5 years) - I thought I would know a thing or two about what's possible, and I am still in absolute awe at what these models spit out.

        People might not be aware that just a few years ago formulating a grammatically correct sentence was a hard problem, let alone all the other things that once had names (anaphora resolution, semantics, pragmatics etc. etc.) and conference tracks and workshops dedicated to them. All that swept away by data and scalable transformer models. It is such a f*cking miracle.

        I would be curious what the current state of understanding of these models is. I haven't done any literature review on this, but just reverse engineering whatever structure the model has ended up with could be enlightening (perhaps).

    • pornel 394 days ago
      We may seamlessly transition from "this isn't real A.I., only a bit of simple fuzzy logic" to "this is needlessly using a billion-weight model to simulate a bit of simple fuzzy logic".
  • 360macky 394 days ago
    Interesting. I see it. This will be the counterpart of "Human-designed" or "Made by humans".
  • joseph8th 394 days ago
    It's a marketing term rendered meaningless by the marketers. Even those who know better keep calling LLMs "AI".
    • flangola7 394 days ago
      What are LLMs if not AI? A machine that can look at a comic image and explain why it is funny, then play and win a chess game, then explain in Arabic how to solve a trigonometry problem, and then finally write a persuasive speech arguing for a comedy movie about the above three skills to be greenlit, is an intelligent entity.
    • esjeon 394 days ago
      Yeah, "AI" has been a marketing term for almost a decade. People who know better these days tend to avoid the term "AI" in professional contexts and use "model" instead. It's like "autopilot" from Tesla in the sense that the name itself makes people cast unfounded over-expectation (= overhype). You can see tons of people consider GPT as an AGI.
  • ralusek 394 days ago
    We run all of our bits through "Hot Dog/Not Hot Dog" AI. It turned out to be an identity function, because it thinks the 1s are hot dogs and the 0s aren't. All of our integration tests are still passing...things are just a lot slower around here, and multiple people swear that the servers now smell like a baseball game.
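    (A minimal Python sketch of why this pipeline reduces to the identity function; all names here are hypothetical illustrations, not a real API:)

    ```python
    # Hypothetical bitwise "Hot Dog/Not Hot Dog" classifier: it labels
    # 1s as hot dogs and 0s as not hot dogs, so classifying and then
    # decoding every bit returns the input unchanged -- the identity
    # function, just slower (and smelling faintly of a baseball game).

    def classify(bit: int) -> str:
        """Label a single bit."""
        return "hot dog" if bit == 1 else "not hot dog"

    def decode(label: str) -> int:
        """Turn a label back into a bit."""
        return 1 if label == "hot dog" else 0

    def run_all_bits(bits: list[int]) -> list[int]:
        """Round-trip every bit through the 'AI'."""
        return [decode(classify(b)) for b in bits]

    assert run_all_bits([1, 0, 1, 1, 0]) == [1, 0, 1, 1, 0]  # identity: tests still pass
    ```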