Discussion about this post

Fernando Duarte

Clay, this is the part I think support leaders need to operationalize next.

If AI handles the clean, repeatable work, the human queue becomes the judgment queue.

That changes how we should measure success.

Deflection alone is too clean a number, and that is exactly what makes it dangerous: give people a tidy dashboard metric and they forget the messy reality behind it.

I’d want to pair it with the following (sketched in code after this list):

• delayed escalation rate

• repeat contact after AI resolution

• reopened tickets after AI handling

• handoff quality score

• agent readiness on AI-failed scenarios
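For concreteness, here is a minimal sketch of that paired scorecard in pandas. Every column name below (channel, escalated_at, reopened, handoff_quality, and so on) is an assumption invented for illustration; none of it comes from the post or any particular helpdesk tool.

```python
import pandas as pd

def support_scorecard(tickets: pd.DataFrame) -> dict:
    """Pair deflection with the downstream metrics it tends to hide.

    Assumes one row per ticket with hypothetical columns:
      ticket_id, customer_id, channel ("ai" or "human"),
      created_at / resolved_at (timestamps),
      escalated_at (timestamp or NaT: a human touch after AI handling),
      reopened (bool), handoff_quality (1-5 QA score, NaN if no handoff).
    """
    ai = tickets[tickets["channel"] == "ai"]

    # Deflection: share of volume the AI closed. Clean, and incomplete.
    deflection = len(ai) / len(tickets)

    # Delayed escalation: "AI-resolved" tickets that reached a human anyway.
    delayed_escalation = ai["escalated_at"].notna().mean()

    # Repeat contact: same customer back within 7 days of an AI resolution,
    # i.e. work that was moved later rather than removed.
    merged = ai.merge(tickets, on="customer_id", suffixes=("_ai", "_next"))
    gap = merged["created_at_next"] - merged["resolved_at_ai"]
    repeats = merged[(gap > pd.Timedelta(0)) & (gap <= pd.Timedelta("7D"))]
    repeat_contact = repeats["ticket_id_ai"].nunique() / max(len(ai), 1)

    # Reopens and handoff quality round out the picture. Agent readiness on
    # AI-failed scenarios would come from QA drills, not from this table.
    return {
        "deflection": deflection,
        "delayed_escalation": delayed_escalation,
        "repeat_contact_7d": repeat_contact,
        "reopen_rate": ai["reopened"].mean(),
        "avg_handoff_quality": tickets["handoff_quality"].mean(),
    }
```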

The real question is not “how much did AI resolve?”

It is: “Did AI reduce work, or did it move harder work later?”

That distinction matters, because if agents only ever see the angry, weird, risky 20%, then training, QA, and workforce planning all need to be redesigned around that reality.
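A toy back-of-the-envelope check makes the distinction visible; every number below is invented. A headline deflection rate of 80% can coexist with human workload that barely moved, because the remaining queue is slower per ticket and some "resolved" contacts bounce back.

```python
# All numbers hypothetical, for illustration only.
minutes_before = 100 * 8                    # humans handle all 100 contacts, ~8 min each

hard_queue    = 20 * 25                     # the risky 20% that still reaches agents, ~25 min each
bounce_backs  = 12 * 15                     # escalations/repeats hiding behind "AI-resolved"
minutes_after = hard_queue + bounce_backs   # 500 + 180 = 680

print(minutes_before, minutes_after)        # 800 vs 680: "80% deflection", ~15% less human work
```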

This piece nails the risk. The next layer is building the operating model around it.
