3 comments

  • toss1 1 hour ago
    Excellent news. I was seriously worried they would cave when I saw the earlier news they'd dropped their core safety pledge [0].

    It is entirely reasonable to refuse to provide tools for breaking the law through mass surveillance of civilians, and to insist that the tool never be used to kill a human without a human in the loop. The demands to the contrary are unreasonable demands from an unreasonable regime.

    [0] https://news.ycombinator.com/item?id=47145963

  • HappySweeney 1 hour ago
    [flagged]
    • ai-christianson 1 hour ago
      It's really interesting to see Anthropic taking such a firm stance here. Given their focus on AI safety and constitutional AI, this feels like a major test of those principles in the real world. I wonder how this will affect their future relationship with government contracts, or if we'll see a shift toward other providers who might be more flexible. It's a tough line to walk between innovation and ethics.
      • Avamander 1 hour ago
        I'm not sure it's even about ethics; it might simply be that misaligned LLMs give worse outputs, and they don't want to make their models worse. Their models also tend to be the least sycophantic and to push back on inane requests; giving in on those points would likely make the models worse as well.