6 comments

  • 42lux 10 hours ago
    I am bipolar and I help run a group. We have already lost some people to chatbots that fueled either a manic or a depressive episode.
    • sherdil2022 7 hours ago
      Lost as in ‘not meeting anymore since they are using chatbots instead’ or ‘took their lives’?
      • 42lux 7 hours ago
        Both, but it's mostly not the therapy chatbots or normal ChatGPT, though those are bad enough. It's these dumbass AI girlfriend/boyfriend bots that run on uncensored small models. They get unhinged really fast.
  • m3047 5 hours ago
    It was put forward in 1960s science fiction (maybe? Robert Anton Wilson? and, for parallel purposes, Philip K Dick's percept / concept feedback cycle) that people in power necessarily become functionally psychotic, because the people who self-select to be around them as a self-preserving / self-promoting opportunity (sycophants) cannot help but filter shared observations through their own biases. Having casually looked for phenomena which support or disprove this hypothesis over the intervening years, I find this profoundly unsurprising.

    If you choose to believe, as Jaron Lanier does, that LLMs are a mashup (or, as I would characterize it, a funhouse mirror) of the human condition as represented by the Internet, then this sort of implicit bias is already present in most social media. It is further distilled by the practice of hiring third-world residents to tag training sets and provide the "reinforcement learning": people who are effectively, if not actually, in the thrall of their employers and can't help but reflect their own sycophancy.

    As someone who is therefore historically familiar with this process in a wider systemic sense, I need (hope for?) something in articles like this that diagnoses or mitigates the underlying process.

  • joules77 9 hours ago
    It's a bit like talking about the quality of pastoral care you get at Church. You can get a wide spectrum of results.

    Worth pointing out that such systems have survived a long, long time because access to them is free, irrespective of the quality.

  • adamgordonbell 8 hours ago
    The study's coauthor actually seems positive about their potential:

    'LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be.'

    And they also mention a previous paper that found high levels of engagement from patients.

    So, they have potential but currently give dangerous advice. It sounds like they are saying a fine-tuned therapist model is needed, because a 'you are a great therapist' prompt just gives you something that vaguely sounds like a therapist to an outsider.

    Sounds like an opportunity honestly.

    Would people value a properly trained therapist model enough to pay for it over an existing ChatGPT subscription?

  • qgin 8 hours ago
    Benchmarking LLMs on this is an important thing to do. There is a huge potential positive effect in psychotherapy being always available to every human rather than just to wealthy people once a week. But to get there we need to know the rate of adverse events compared to human therapists (which isn't zero either).
  • giingyui 9 hours ago
    Therapists that work at an institution that makes millions off training therapists say that free therapy is a bad thing.

    Being less snarky: there is a monumental conflict of interest here that makes the study worthless.

    • seanhunter 9 hours ago
      Here is the paper. https://arxiv.org/abs/2504.18412

      Literally none of the authors are therapists. They are all researchers.

      The conflict of interest is entirely made up by you.

      • giingyui 8 hours ago
        How exactly can they determine that it’s bad to use AI therapy bots if they are not therapists?
        • seanhunter 8 hours ago
          There is a psychiatrist on the author team and they did a mapping review and evaluated AI therapy using existing guidelines about what constitutes good therapy (as discussed in their paper which I linked). In other words, they did research.

          It’s impossible to think that you are discussing this in good faith at this point.

        • _vertigo 8 hours ago
          So your take is that if they are therapists, it’s a conflict of interest, and if they aren’t therapists, they’re not qualified to make the assessment?
          • giingyui 8 hours ago
            That is correct. I don’t think this study can be made in a reliable way.
            • colinmorelli 8 hours ago
              This is an interesting take. From this perspective, it's essentially impossible to ever gauge the efficacy of AI at doing anything, because the people who know how to measure the quality of a thing are also the people who would be displaced by showing the AI can do that thing. In fact, you could probably argue that every study ever is worthless, because studies are generally performed by people who know the subject matter, and it's basically impossible to be unbiased on a topic if you're also highly knowledgeable about said topic.

              In reality, what matters is the methodology of the study. If the study's methodology is sound, and its results can be reproduced by others, then it is generally considered to be a good study. That's the whole reason we publish methodologies and results: so others can critique and verify. If you think this study is bad, explain why. The whole document is there for you to review.

              • m3047 5 hours ago
                I think you are correct, and incorrect. However: set and setting. Another of Lanier's observations, which he relates to LLMs, is the Boeing "smart" stall preventer that crashed two 737 MAXes (not Dreamliners, as I originally wrote).

                Who can argue with a stall preventer, right? What one can argue with, and what has since been exposed, is the observation that information about the operation of the stall preventer, training on it, and even the ability to effectively control it all depended on how much the airline was willing to pay for this necessary feature.

                So in reality, what matters is studying the methodology of set and setting, not how the pieces of the crashed aircraft ended up where they did.

                • colinmorelli 5 hours ago
                  I'm not exactly sure how this relates to my comment above. An analysis of an airline crash and a study are not the same thing.

                  As it relates to study design, controlling for set and setting is part of the methodology. For example, most drug studies are double-blinded so that neither patients nor clinicians know whether the patient is getting the drug, to reduce or eliminate any placebo effect (i.e. to control for the "set"/mental state of those involved in the study).

                  There are certainly some cases in which it's effectively impossible to control for these factors (e.g. psychedelics). That's not what's really being discussed here, though.

                  An airline crash is an n of 1 incident, and not the same as a designed study.

              • m3047 5 hours ago
                > it's essentially impossible to ever gauge the efficacy of AI in doing anything...

                ... compared to humans? Yes. This is a philosophical conundrum which you tie yourself up in if you choose to postulate the artificial intelligence as equivalent to, rather than a simulacrum of, human intelligence. We fly (planes): are we "smarter" than birds? We breathe underwater: are we "smarter" than fish? And so on.

                How do you discern that the "other" has an internal representation and dialogue? Oh. Because a human programmed it to be so. But how do you know that another human has internal representation and dialogue? I do (I have conscious control over the verbal dialogue but that's another matter), so I choose to believe that others (humans) do (not the verbal part so much unfortunately). I could extend that to machines, but why? I need a better reason than "because". I'd rather extend the courtesy to a bird or a fish first.

                This is an epistemological / religious question: a matter of faith. There are many things which we can't really know / rigorously define against objective criteria.

                • colinmorelli 4 hours ago
                  This, similar to your other comment, is unrelated to my comment.

                  This is about determining whether AI can be an equivalent or better therapist (defined as: achieving equal or better clinical outcomes) than a human. That is a question that can be studied and answered.

                  Whether artificial intelligence accurately models human intelligence, or whether an airplane is "smarter" than a bird, are entirely separate questions that can perhaps serve to explain _why/how_ the AI can (or can't) achieve better results than the thing we're comparing against, but not whether it does or does not. Those questions are perhaps unanswerable based on today's knowledge. But they're not prerequisites.

            • _vertigo 8 hours ago
              Well, that’s helpful to know, so that other people can know to ignore what you write on this.