Handle escalations
Three escalation queues sit on your home and on the org overview. Each one points at a different kind of stalled or divergent work that the platform surfaces so you can act on it before it becomes a pattern. Working them takes minutes per cycle, not hours.
The three queues
- Pending evidence > 7 days. A trainer hasn’t yet rendered a verdict on AI-extracted evidence. After seven days the row counts as escalated. See the tray, and evidence and verdicts.
- Stalled validation sign-offs. A scenario draft has been with a second field expert for too long. Peer validation has stalled.
- Disagreement spikes. A particular trainer or scenario is producing more trainer-AI disagreements than the org’s baseline.
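The three queues reduce to simple predicates over escalation rows. A minimal sketch of the seven-day evidence rule, assuming a hypothetical row shape — none of these field names come from Pondara’s actual data model:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical row shape; field names are illustrative, not Pondara's API.
@dataclass
class EscalationRow:
    kind: str        # "evidence", "validation", or "disagreement"
    actor: str       # the trainer or field expert involved
    target: str      # the scenario, evidence row, or validation draft
    opened_on: date  # when the underlying work began waiting

# The seven-day rule described above.
PENDING_EVIDENCE_THRESHOLD = timedelta(days=7)

def is_escalated_evidence(row: EscalationRow, today: date) -> bool:
    """Evidence counts as escalated once a verdict has been pending > 7 days."""
    return row.kind == "evidence" and (today - row.opened_on) > PENDING_EVIDENCE_THRESHOLD

row = EscalationRow("evidence", "trainer-a", "evidence-123", date(2024, 5, 1))
print(is_escalated_evidence(row, date(2024, 5, 9)))  # 8 days pending -> True
```

Note the strict inequality: a row that has waited exactly seven days is not yet escalated, matching “after seven days” above.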
Step 1 — Read the queues
From the org overview, the right rail shows the count for each queue. Click into a count to filter the list. Each row carries the actor involved, the target (the scenario, the evidence row, the validation draft), and how long it has been outstanding.
Step 2 — Work the pending-evidence queue
The seven-day pending-evidence queue is almost always a trainer workload or vacation issue, not a Pondara problem. Contact the trainer; if they are unavailable, reassign the rows to another trainer in the same cohort. Verdicts stay attached to the trainer who rendered them, so the audit trail records the change cleanly.
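Reassignment moves only the pending rows; verdicts already rendered stay with their original trainer. A hedged sketch of that shape, under assumed names (nothing here is Pondara’s real API), which hands the rows to the least-loaded trainer in the same cohort:

```python
# Hypothetical reassignment sketch; names and structures are assumptions,
# not Pondara's real API. Already-rendered verdicts are never moved.
def reassign_pending(pending_rows, unavailable_trainer, cohort_load):
    """Move pending evidence rows to the least-loaded trainer in the cohort.

    pending_rows: list of (row_id, assigned_trainer) tuples
    cohort_load:  {trainer: current scenario count} for the same cohort
    """
    candidates = {t: n for t, n in cohort_load.items() if t != unavailable_trainer}
    replacement = min(candidates, key=candidates.get)  # least-loaded colleague
    return [
        (row_id, replacement if trainer == unavailable_trainer else trainer)
        for row_id, trainer in pending_rows
    ]

rows = [("ev-1", "ana"), ("ev-2", "ana"), ("ev-3", "ben")]
load = {"ana": 9, "ben": 4, "cai": 2}
print(reassign_pending(rows, "ana", load))
# -> [('ev-1', 'cai'), ('ev-2', 'cai'), ('ev-3', 'ben')]
```

Picking the least-loaded colleague is one reasonable policy, not the platform’s; the point is that only unverdicted rows change hands, so the audit trail stays clean.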
Step 3 — Work the stalled-validation queue
Stalled validations are the same shape: contact the second field expert. The scenario draft is waiting on their sign-off; if they’re unavailable, the org admin can assign a different field expert in the same field as the second reviewer. The single-expert exception applies only when no second expert exists at all — see subscribe a field.
Step 4 — Work the disagreement-spike queue
Disagreement spikes split into two cases:
- Spike on a scenario. A particular scenario is producing a pattern of trainer-AI disagreements. The scenario probably needs a fork: the situation, the actor behaviour, or the success criteria no longer match the field’s reality. Talk to the authoring field expert. See forking.
- Spike on a trainer. A particular trainer is producing more disagreements than the org’s baseline. This is usually a coaching conversation, not a calibration problem — disagreement is a feature of the platform, not a defect. Look at the trainer’s recent verdicts before assuming anything.
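Both spike cases are a comparison against the org baseline. A sketch under an assumed “rate exceeds baseline by a fixed multiplier” rule — the multiplier and all names are illustrative, not Pondara’s actual heuristic:

```python
# Assumed threshold; Pondara's real spike heuristic may differ.
SPIKE_MULTIPLIER = 2.0

def disagreement_spikes(counts, org_disagreements, org_verdicts):
    """Flag trainers (or scenarios) whose disagreement rate exceeds the baseline.

    counts: {name: (disagreements, total_verdicts)}
    """
    baseline = org_disagreements / org_verdicts
    return [
        name
        for name, (dis, total) in counts.items()
        if total > 0 and dis / total > SPIKE_MULTIPLIER * baseline
    ]

per_trainer = {"ana": (12, 40), "ben": (3, 50), "cai": (1, 30)}
# Org baseline is 40/400 = 10%; ana's 30% rate clears the 20% bar.
print(disagreement_spikes(per_trainer, org_disagreements=40, org_verdicts=400))
# -> ['ana']
```

The same function works for scenarios by keying `counts` on scenario IDs instead of trainers; either way, being flagged is a prompt to look at the recent verdicts, not proof of a problem.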
When something goes wrong
- The same escalation keeps reappearing. Open the underlying scenario or learner detail page from the row — the surface escalation is a symptom; the fix is in the work itself.
- The count looks wrong. The queues update lazily; refresh once. If the count still doesn’t match what you can see in the list, reach out to your customer success contact.
- Escalating work crosses cohorts. Reassignment may land on a trainer who’s already at capacity. The org overview’s Cohorts card shows current scenario load per trainer.