Ask HN: What 'AI feature' created negative ROI in production?

Not demos, real usage. What broke first: data quality, evals, cost/latency, user trust, or support load?

5 points | by kajolshah_bt 5 hours ago

2 comments

  • rtbruhan00 4 hours ago
    We implemented an AI-powered customer support triage system that initially looked promising in testing. In production, it actually increased our support costs by ~30% because:

    - The AI confidently misrouted 15-20% of tickets, which forced human review of ALL AI decisions.
    - Customers lost trust after a few bad experiences and started explicitly requesting human agents.
    - Support agents spent more time correcting AI mistakes than the system saved.

    The breaking point was data quality - our training data was too clean compared to real customer queries. We ended up rolling back to rule-based routing with AI as an optional suggestion tool instead.
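    A minimal sketch of that fallback pattern, assuming a hypothetical route_ticket helper, a keyword RULES table, and a (label, confidence) classifier interface (names are illustrative, not our actual code): the rules make the routing decision, and the model only attaches a non-binding suggestion when it clears a confidence threshold.

      # Sketch: deterministic rules decide the route; the model only adds
      # a non-binding suggestion when it is confident enough.
      from dataclasses import dataclass
      from typing import Optional

      RULES = {
          "refund": "billing",
          "invoice": "billing",
          "password": "account_security",
          "bug": "engineering_escalation",
      }

      @dataclass
      class Routing:
          queue: str                           # where the ticket actually goes (rules decide)
          ai_suggestion: Optional[str] = None  # shown to agents, never auto-applied

      def rule_route(text: str) -> str:
          lowered = text.lower()
          for keyword, queue in RULES.items():
              if keyword in lowered:
                  return queue
          return "general_support"  # safe default when no rule matches

      def route_ticket(text: str, classifier=None, min_confidence: float = 0.85) -> Routing:
          routing = Routing(queue=rule_route(text))
          if classifier is not None:
              label, confidence = classifier(text)  # hypothetical (label, score) interface
              # Low-confidence guesses are dropped; high-confidence ones stay advisory.
              if confidence >= min_confidence and label != routing.queue:
                  routing.ai_suggestion = label
          return routing

    The point of the design is that a wrong model output now costs an ignored hint instead of a misrouted ticket, so nobody has to review every decision.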

    • kajolshah_bt 3 hours ago
      This is such a classic failure mode: even a 15–20% confident misroute is brutal because it forces “review everything,” kills trust, and increases repeats/reopens.

      When you rolled back, did you keep AI as suggestions only + rules-based routing? And what metric exposed it fastest for you: recontact rate, handle time, or escalation to humans?