Ask HN: What 'AI feature' created negative ROI in production?
1 point
1 hour ago
| 1 comment
Not demos, real usage. What broke first: data quality, evals, cost/latency, user trust, or support load?
rtbruhan00
1 hour ago
We implemented an AI-powered customer support triage system that initially looked promising in testing. In production, it actually increased our support costs by ~30% because:

- The AI would confidently misroute 15-20% of tickets, requiring human review of ALL AI decisions.

- Customers lost trust after a few bad experiences and started explicitly requesting human agents.

- Support agents spent more time correcting AI mistakes than they saved.

The breaking point was data quality - our training data was too clean compared to real customer queries. We ended up rolling back to rule-based routing with AI as an optional suggestion tool instead.
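Roughly the shape it ended up taking, as a simplified sketch (names and patterns here are made up, not our production code):

    import re

    # Deterministic keyword rules are the source of truth for routing.
    RULES = [
        (re.compile(r"\b(refund|chargeback)\b", re.I), "billing"),
        (re.compile(r"\b(password|login|2fa)\b", re.I), "account"),
        (re.compile(r"\b(crash|error|bug)\b", re.I), "technical"),
    ]

    def route_ticket(text, ai_suggest=None):
        # Rules decide the queue; fall back to a general queue if nothing matches.
        queue = "general"
        for pattern, target in RULES:
            if pattern.search(text):
                queue = target
                break

        # The model's guess (if a suggester is passed in) rides along as metadata only;
        # agents can see it, but it never overrides the rule-based decision.
        suggestion = ai_suggest(text) if ai_suggest else None
        return {"queue": queue, "ai_suggestion": suggestion}

    # route_ticket("Need a refund for last month") -> {"queue": "billing", "ai_suggestion": None}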

kajolshah_bt
37 minutes ago
This is such a classic failure mode: even confidently misrouting 15–20% of tickets is brutal, because it forces "review everything," kills trust, and increases repeats/reopens.

When you rolled back, did you keep AI as suggestions only + rules-based routing? And what metric exposed it fastest for you: recontact rate, handle time, or escalation to humans?
