
Measure ATR's impact

See the ATR ROI dashboard in action on Port's demo environment.

This page answers the questions that matter after ATR is running: how many tickets did AI handle this sprint, where does delegation stall, and are the routing rules improving over time?

Without measurement, you are flying blind. Teams add more tickets to ATR, the rules drift, and nobody knows whether the workflow is delivering value or routing everything to humans. The dashboard closes that loop.

What it tracks

The dashboard reads from work item entities in Port. Every routing decision, stage transition, and engineer response writes back to the entity as structured data, so the dashboard is always current.

Key metrics:

Metric | What it shows
Agent vs human split | How many tickets were delegated to AI vs routed to a human this period.
Stage breakdown | Where work items currently sit across Draft, Plan, Develop, Deploy, Completed.
Delegation trend | Agent-routed volume over time. Are you delegating more or less than last sprint?
Stall points | Which stages have the highest dwell time. Where does work stop moving?
Routing decision breakdown | How many tickets passed vs failed each scorecard criterion.

How it works

Each work item entity stores the routing decision result, the engineer's response, and stage transition timestamps as properties. The dashboard aggregates these across all work items using Port's visualization layer.
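The aggregation itself is simple once the properties are in place. A minimal sketch in Python, assuming illustrative property names like `routing_decision` and `stage` (the actual schema of your work item blueprint may differ):

```python
from collections import Counter

# Hypothetical work item entities, shaped like the dashboard might read them.
# Property names here are assumptions, not Port's actual API fields.
work_items = [
    {"id": "WI-101", "routing_decision": "agent", "stage": "Develop"},
    {"id": "WI-102", "routing_decision": "human", "stage": "Plan"},
    {"id": "WI-103", "routing_decision": "agent", "stage": "Completed"},
]

# Agent vs human split: count tickets by who they were routed to
split = Counter(item["routing_decision"] for item in work_items)

# Stage breakdown: count where work items currently sit
stages = Counter(item["stage"] for item in work_items)

print(dict(split))   # agent vs human counts for the period
print(dict(stages))  # work items per stage
```

Because every decision is a property on the entity, these rollups need no separate event log: re-reading the entities always reflects the current state.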

When an engineer clicks Delegate to Claude Code in Slack, that response writes back to the work item entity. When the PR merges, the stage updates to Deploy. When the deployment confirms, it moves to Completed. The dashboard reflects each transition in real time.
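The write-back flow above can be sketched as a small event-to-stage mapping. Event and stage names here are assumptions for illustration; your workflow may emit different identifiers:

```python
# Illustrative mapping from workflow events to stage transitions,
# following the flow described above. Names are hypothetical.
EVENT_TO_STAGE = {
    "slack_delegate_clicked": "Develop",
    "pr_merged": "Deploy",
    "deployment_confirmed": "Completed",
}

def apply_event(entity: dict, event: str, timestamp: str) -> dict:
    """Write a stage transition back to the work item entity,
    recording when the new stage was entered."""
    stage = EVENT_TO_STAGE[event]
    entity["stage"] = stage
    entity.setdefault("transitions", {})[stage] = timestamp
    return entity

item = {"id": "WI-101", "stage": "Plan"}
apply_event(item, "pr_merged", "2024-06-01T12:00:00+00:00")
```

Storing the timestamp alongside each transition is what later makes dwell-time (stall point) analysis possible.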

Reading the dashboard

Agent vs human split is the headline metric. If AI is handling 30% of tickets and your target is 60%, the gap tells you something: either the routing rules are too strict, ticket quality is too low, or the services in scope are not a good fit for delegation.

Stall points tell you where to focus. If 40% of work items are stuck in Plan, the workflow is producing routing decisions but engineers are not acting on them. If items pile up in Develop, the coding agent may be producing PRs that fail review repeatedly.
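Dwell time per stage falls out of the transition timestamps. A sketch, assuming each entity records the moment it entered each stage (field names are illustrative):

```python
from datetime import datetime

def dwell_hours(transitions: dict, stage_order: list) -> dict:
    """Hours spent in each stage, given ISO-8601 entry timestamps per stage.
    A stage's dwell time is the gap until the next recorded stage entry."""
    parsed = {s: datetime.fromisoformat(t) for s, t in transitions.items()}
    out = {}
    for earlier, later in zip(stage_order, stage_order[1:]):
        if earlier in parsed and later in parsed:
            out[earlier] = (parsed[later] - parsed[earlier]).total_seconds() / 3600
    return out

transitions = {
    "Plan": "2024-06-01T09:00:00+00:00",
    "Develop": "2024-06-01T15:00:00+00:00",
    "Deploy": "2024-06-02T09:00:00+00:00",
}
order = ["Draft", "Plan", "Develop", "Deploy", "Completed"]
hours = dwell_hours(transitions, order)  # Plan: 6.0h, Develop: 18.0h
```

Averaging these values across all work items surfaces the stage with the highest dwell time, which is exactly what the stall points panel shows.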

Routing decision breakdown shows which criteria are blocking the most tickets. If blast radius is failing 80% of the time, your catalog's blast radius data may be stale or the threshold may be calibrated too conservatively.

Improving the rules over time

The dashboard is the feedback loop that makes ATR better each sprint. After each cycle:

  1. Check the routing decision breakdown. If one criterion is blocking most tickets, investigate whether the data behind it is accurate.
  2. Check stall points. If work items pile up at a specific stage, the gate criteria for that transition may need adjustment.
  3. Track the delegation trend. If the agent-routed volume is flat or declining, widen the eligible scope gradually.
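Step 1 of that loop can be automated: tally which scorecard criterion fails most often across the period. A sketch with hypothetical criterion names (your scorecard will define its own):

```python
from collections import Counter

# Hypothetical per-ticket scorecard results; criterion names are illustrative.
decisions = [
    {"blast_radius": "pass", "test_coverage": "pass"},
    {"blast_radius": "fail", "test_coverage": "pass"},
    {"blast_radius": "fail", "test_coverage": "fail"},
]

failures = Counter()
for decision in decisions:
    for criterion, result in decision.items():
        if result == "fail":
            failures[criterion] += 1

# The criterion blocking the most tickets is the first place to investigate
worst = failures.most_common(1)[0]  # ('blast_radius', 2)
```

If one criterion dominates the failure count, check whether its underlying catalog data is stale before loosening the threshold.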

Next steps