7 comments

  • HarHarVeryFunny 1 hour ago
    > Across studies, participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to System 3

    So the smart get smarter and the dumb get dumber?

    Well, not exactly, but at least for now with AI "highly jagged", and unreliable, it pays to know enough to NOT trust it, and indeed be mentally capable enough that you don't need to surrender to it, and can spot the failures.

    I think the potential problems come later, when AI is more capable/reliable, and even the intelligentsia perhaps stop questioning its output, and stop exercising/developing their own reasoning skills. Maybe AI accelerates us towards some version of "Idiocracy" where human intelligence is even less relevant to evolutionary success (i.e. having/supporting lots of kids) than it is today, and gets bred out of the human species? Maybe this is the inevitable trajectory: a species gets smarter when it develops language and tool creation, then peaks, and gets dumber after having created tools that do the thinking for it?

    Pre-AI, a long time ago, I used to think/joke we might go in the other direction - evolve into a pulsating brain, eyes, genitalia and vestigial limbs, as mental work took over from physical, but maybe I got that reversed!

    • RodgerTheGreat 17 minutes ago
      I think everyone who believes that they can personally resist the detrimental psychological effects of exposure to LLMs by "remaining aware" or "being careful", because they have cultivated an understanding of how language models work, is falling into precisely the same fallacy as people who think they can't be conned or that marketing doesn't work on them.

      Don't kid yourself. If you use this junk, it's making you dumber and damaging your critical thinking skills, full-stop. This is delegation of core competency. You may feel smarter, or that you're learning faster, or that you're more productive, but to people who aren't addicted to LLMs it sounds exactly like gamblers insisting they have a foolproof system for slots, or alcoholics insisting that a few beers make them a better driver. Nobody outside the bubble is impressed with the results.

      • thesumofall 3 minutes ago
        I fully agree that it’s close to impossible to not eventually fall into the trap of overrelying on them. However, it’s also true that I was able to do things with them that I would never have done otherwise for a lack of time or skill (all sorts of small personal apps, tools, and scripts for my hobbies). Maybe it’s a bit similar to only reading the comment section in a newspaper instead of the news? They will introduce you to new perspectives but if you stop reading the underlying news you’ll harm your own critical thinking? So it’s maybe a bit more grey than black & white?
      • paseante 2 minutes ago
        [dead]
  • kikkupico 1 hour ago
    Contrary to the general opinion, I feel that AI has IMPROVED my cognitive skills. I find myself discovering solutions to problems I've always struggled with (without asking AI about it, of course). I also find myself becoming much better at thinking on my feet during regular conversations. I believe I'm spending more time deep thinking than ever before because I can leave the boring cognitive stuff to AI, and that's giving my mind tougher workouts and making it stronger; but I could be completely wrong.
    • eslaught 1 hour ago
      Without an empirical methodology it's hard to know how true this is. There are known and well-documented human biases (e.g., placebo effect) that could easily be involved here. And besides that, there's a convincing (but often overlooked on HN) argument to be made that modern LLMs are optimized in the same manner as other attention economy technologies. That is to say, they're addictive in the same general way that the YouTube/TikTok/Facebook/etc. feed algorithms are. They may be useful, but they also manipulate your attention, and it's difficult to disentangle those when the person evaluating the claims is the same person (potentially) being manipulated.

      I'd love to see an empirical study that actually dives into this and attempts to show one way or another how true it is. Otherwise it's just all anecdotes.

      • pipes 57 minutes ago
        I don't understand how the placebo effect is a human bias. Is it?
        • wongarsu 20 minutes ago
          At least in some instances you could frame it that way: you believe that doctors and medicine are effective at treating disease, so when you are sick and a doctor gives you a bottle of sugar pills and you take them, you now interpret your state through the lens that you should feel better - a bias in how you perceive your condition.

          That's not all the placebo effect is, but it's probably the aspect that best fits the framing as a bias.

    • ip26 29 minutes ago
      I keep asking it questions, and as I dialogue about the problem, I walk right into the conclusion myself, classic rubber duck. Or occasionally it will say something back, and it’s like “of course! That’s exactly what I’ve been circling without realizing it!”

      This mostly happens with things I’ve already had long cognitive loops on myself, and I’m feeling stuck for some reason. The conversation with the model is usually multiple iterations of explaining to the model what I’m working through.

    • mayukh 28 minutes ago
      You are not wrong. AI is an amplifier. You chose to amplify something in particular and it works for you. That's good enough. (Give this as a prompt to your ai as I sense self-doubt here)
    • siva7 1 hour ago
      It's so fascinating, I feel the same, but at the same time I feel like most people are getting dumber than before AI (and most seem to struggle adapting to AI)
      • mayukh 27 minutes ago
        Because most people either don't know how to use it (for multiple reasons, which AI itself can help them solve) or don't have the right mindset going into it (deeper work needed)
  • gmuslera 2 hours ago
    The main problem with "System 3" is that it has its own kind of "cognitive biases", like System 1, but these new cognitive biases are designed by marketing, politics, culture, and whatever censors or makes visible the original training data. That would hold even if the process, the processing, and everything else around it were perfect (which they are not, e.g. hallucinations).

    But we still have System 1, and we survived and reached this stage because of it, because even a bad guess is better than the slowness of doing things right. It has its problems, but sometimes you must reach a compromise.

    • HPsquared 1 hour ago
      I suppose the publishing process has always existed as system 3. It's just that now we have a new way to read and write with an abstract "rest of the world".
  • Ozzie_osman 1 hour ago
    When humans have an easy way to do something that is almost as good, we choose that easy way. Call it laziness, energy conservation, coddling, etc. The hard thing then becomes hard to do even when the easy thing isn't available, because the cognitive muscle and the discipline atrophy.

    Like kids who are never taught to do things for themselves.

    • tac19 1 hour ago
      Do you refuse to use a calculator or spreadsheet, because doing longhand division helps you exercise your mental muscle? Do you refuse to use a database, because it will make your memory weaker? Or do you refuse to use a car, because it makes you less able to walk when the car is unavailable? No. Because the car empowers you to do something that, at the very least, takes a lot longer on foot.

      People have worried with every single new technology that it will enfeeble the masses, rather than empower them, and yet in the end, we usually find ourselves better off.

      • wongarsu 54 minutes ago
        The car seems like a great example of a technology with a lot of problematic side effects. Places that had a more measured adoption ended up a lot better than those that replaced all public transit with cars and routinely demolished neighborhoods to make space for bigger highways

        Cars are an essential part of modern life, but the sweetspot for car adoption isn't on either of the extremes

        • mayukh 31 minutes ago
          Tragedy of the commons, perhaps? Good for the individual, bad for society, and the challenge is finding solutions that can balance both
          • wongarsu 13 minutes ago
            I'd call it bad on both levels. The costs imposed by car infrastructure are a tragedy of the commons. But even if you were the only person with a modern car you'd still be hit with the social effects of traveling in the isolation of your private metal box and the health effects of walking or biking less

            On the other hand there are also big positives on both the societal and individual level. That's where the balance comes in. You want some individual travel and part of your logistics to run on cars, but not all of it. And probably a lot less of it than what most people in the 60s to 90s thought

      • bluefirebrand 1 hour ago
        > Do you refuse to use a calculator or spreadsheet, because doing long hand division helps you exercise your mental muscle

        Yeah when I was learning in school we weren't allowed electronics for division, and I think I absolutely would be dumber if I had never done that

        > People have worried with every single new technology that it will enfeeble the masses, rather than empower them, and yet in the end, we usually find ourselves better off.

        If you're posting this from America, you're living in a society that is fatter than ever thanks to cars. So there's surely some nuance here, not every technology upgrade is strictly better with no downsides

  • andai 33 minutes ago
    Damn. I came up with a hypothetical "System 3" last year! I didn't find AI very helpful in that regard though.

    Current status: partially solved.

    Problem: System 2 is supposed to be rational, but I found this to be far from the case. Massive unnecessary suffering.

    Solution (WIP): Ask: What is the goal? What are my assumptions? Is there anything I am missing?

    --

    So, I repeatedly found myself getting into lots of trouble due to unquestioned assumptions. System 2 is supposed to be rational, but I found this to be far from the case.

    So I tried inventing an "actually rational system" that I could "operate manually", or with a little help. I called it System 3, a system where you use a Thinking Tool to help you think more effectively.

    Initial attempt was a "rational LLM prompt", but these mostly devolve into unhelpful nitpicking. (Maybe it's solvable, but I didn't get very far.)

    Then I realized, wouldn't you get better results with a bunch of questions on pen and paper? Guided writing exercises?

    So here are my attempts so far:

    reflect.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...

    unstuck.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...
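    For illustration, a guided-questions script of that shape might look like the following sketch (hypothetical; the actual reflect.py in the gist may well differ):

```python
# A minimal guided-writing prompt loop: walk through a fixed list of
# questions, collect the answers, and echo them back as a summary.
QUESTIONS = [
    "What is the goal?",
    "What are my assumptions?",
    "Is there anything I am missing?",
]

def reflect(ask=input):
    """Ask each question in turn and return the collected answers."""
    answers = {}
    for q in QUESTIONS:
        answers[q] = ask(q + "\n> ").strip()
    print("\n--- Summary ---")
    for q, a in answers.items():
        print(f"{q}\n  {a}")
    return answers
```

    The `ask` parameter is just a seam so the loop can be driven by something other than the terminal (a test harness, or indeed an LLM) without changing the questioning logic.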

    --

    I'm not sure what's a good way to get yourself "out of a rut" in terms of thinking about a problem. It seems like the longer you've thought about it, the less likely you are to explore beyond the confines of the "known" (i.e. your probably dodgy/incomplete assumptions).

    I haven't solved System 3 yet, but a few months later found myself in an even more harrowing situation which could have been avoided if I had a System 3.

    The solution turned out to be trivial, but I missed it for weeks... In this case, I had incorrectly named the project, and thus doomed it to limbo. Turns out naming things is just as important in real life as it is in programming!

    So I joked "if being pedantic didn't solve the problem, you weren't being pedantic enough." But it's not a joke! It's about clear thinking. (The negative aspect of pedantry is inappropriate communication. But the positive aspect is "seeing the situation clearly", which is obviously the part you want to keep!)

  • ashwinnair99 2 hours ago
    [flagged]
    • n_u 1 hour ago
      Are you an LLM? This comment is written twice in this thread, and of your last 10 comments, 6 use the pattern "X isn't Y" or "X didn't Y, Z did":

      https://news.ycombinator.com/item?id=47469767 > The concern isn't that AI reasons differently.

      https://news.ycombinator.com/item?id=47469834 > The concern isn't that AI reasons differently.

      https://news.ycombinator.com/item?id=47470111 > The problem isn't time.

      https://news.ycombinator.com/item?id=47469760 > Airlines have been quietly expanding what they can remove you for. This isn't really about headphones.

      https://news.ycombinator.com/item?id=47469448 > Good tech losing isn't new, it's just always a bit sad when it happens slowly

      https://news.ycombinator.com/item?id=47469437 > The tool didn't fail here, the person did

      • eslaught 1 hour ago
        Please don't take up space in the comment section with accusations. You can report this at the email below and the mods will look at it:

        > Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email [email protected] and we'll look at the data.

        > https://news.ycombinator.com/newsguidelines.html

        • dgacmu 59 minutes ago
          I find it kind of helpful and interesting to see a subset of these called out with a bit of data. It helps keep my LLM detector trained (the one in my brain, that is), and I think it helps a little in expressing the community consensus against this crap. In this case, I'm glad the GP posted something, as it's definitely not mistaken.
      • christophilus 1 hour ago
        Definitely AI. Every comment sounds like GPT.
    • pepperoni_pizza 2 hours ago
      I already noticed that. When I feel lazy, I feel like reaching for the AI. Exactly the same laziness voice that nudges me to drive instead of walking.

      But then I go running and swimming for fun, and there is no laziness voice there telling me to stop, because I enjoy it. And similarly with AI, I only use it for things I don't care about, like various corporate bs. Maybe the cure for AI-brain is to care about and be passionate about things.

      Conversely, does this mean that the kind of people who use AI for everything don't care about anything?

      • necrotic_comp 1 hour ago
        There's something interesting I've found about my interactions with the AI - I use it as a thought-partner. I don't ask it to solve a problem for me (well, not at first, at least!). I think about it as a tool to work with, engage with the problem, and spit out a result that I then test and review.

        I see it as part of the feedback loop, and it speeds up some of the mechanical drudgery, while not removing any of the semantic problems inherent in problem solving. In other words, there are things machines are good at, and things humans are good at - if we each stick to our strengths, we can move incredibly fast.

      • throaway197512 47 minutes ago
        I've been using Claude to vibe code my game ideas for the past months (iterated with docs).

        I find that when I think of it as a being named "Claude," like a junior partner who's there to eagerly help me, I get lazy. I think of it as if it's a real, almost slave-like creature who's there to do everything for me without any regard to itself.

        But when I think of it as a tool, as if it's a hammer or something, I feel much less lazy. I think of it as "building something" using a program, not telling "Claude" what to do and expecting it to happen. I even turn off Claude's verbal responses completely sometimes to help this. 100% impersonal.

      • delijati 2 hours ago
        That is why I compare it to fast food. From time to time you enjoy it, but you should not consume it too much ;)
    • keiferski 1 hour ago
      ”Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization because as soon as we started thinking for you it really became our civilization which is of course what this is all about.“
  • ashwinnair99 2 hours ago
    [flagged]