Render a verdict
Every closed engagement produces AI signals — small candidate updates the AI proposes for specific sub-competencies. None of those signals move the learner’s record on their own. They sit in the Bandeja, waiting on your verdict. This guide is the canonical reference for that workflow: the three actions, the undo window, the disagreement signal, and what each verdict does to the radar.
Before you begin
- You have an evidence row open in the verdict drawer (clicked from the Bandeja, the engagement view, or the learner detail page).
- You can see the AI signal at the top, the cited dialogue snippet in the middle, and the right-rail context with the learner’s current level on this dimension and the scenario’s weight.
The three verdict actions
Accept
The AI got it right. One click stamps your verdict with the AI’s proposed level. After the undo window closes, the delta is applied to the learner’s competency vector. The row becomes final.
Use this when the cited dialogue clearly evidences the level the AI proposes and you have no reason to push it up or down.
Adjust
The AI was close but not quite right. You set a different level than the AI proposed and optionally write a short note. Same undo window, then the adjusted delta is applied.
Use this when the direction is right but the magnitude is off, or when you observed the same turn evidencing a different sub-competency than the one the AI flagged.
Reject
The AI was wrong. No delta is applied; the dialogue stays in the learner’s history but the AI’s interpretation is dismissed. The row becomes final with no impact on the radar.
Use this when the cited snippet does not actually evidence the sub-competency the AI flagged, or when the level the AI proposes is unsupportable.
What you see
The verdict drawer is laid out in three zones:
- The AI signal. What the AI proposes and why, with a confidence band (Low / Medium / High). Confidence is information about the model’s certainty, not a recommendation about which button to click.
- The cited dialogue snippet. The turn or short window the AI pulled the signal from. Click “open full dialogue” to read the whole engagement in context — see review a dialogue.
- The right-rail context. The dimension itself, the learner’s current level on it, and the scenario’s weight on this dimension. A high-weight scenario contributes more to the dimension than a low-weight one; the radar moves accordingly.
The undo window
After you click Accept, Adjust, or Reject, you have a few seconds to hit Undo before the verdict is applied to the radar. The toast shows a countdown. If you take it back, the row returns to pending and you can render the verdict again.
After the window closes the verdict is final. There is no UI path to “un-final” a verdict — that would corrupt the audit trail. If you spot a mistake later, render a new verdict on a fresh signal that corrects it; the radar is path-dependent, not reset-and-replay.
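The lifecycle above — pending, then stamped with a short undo countdown, then irreversibly final — can be sketched as a small state machine. This is an illustrative sketch, not Pondara's implementation; the class, method names, and the five-second window are all assumptions (the guide only says "a few seconds").

```python
import time

UNDO_WINDOW_SECONDS = 5  # assumed value; the real window is unspecified ("a few seconds")

class Verdict:
    """Hypothetical model of a verdict row: pending -> stamped -> final."""

    def __init__(self, action, level=None):
        self.action = action      # "accept" | "adjust" | "reject"
        self.level = level        # the level being stamped (None for reject)
        self.state = "pending"
        self.stamped_at = None

    def stamp(self):
        # Clicking Accept / Adjust / Reject starts the undo countdown.
        self.state = "stamped"
        self.stamped_at = time.monotonic()

    def undo(self):
        # Undo succeeds only while the countdown is still running;
        # the row returns to pending and can be judged again.
        if self.state == "stamped" and time.monotonic() - self.stamped_at < UNDO_WINDOW_SECONDS:
            self.state = "pending"
            self.stamped_at = None
            return True
        return False

    def finalize(self):
        # The apply-pass runs only after the window closes. Finality is
        # one-way: there is no transition out of "final".
        if self.state == "stamped" and time.monotonic() - self.stamped_at >= UNDO_WINDOW_SECONDS:
            self.state = "final"
        return self.state == "final"
```

The one-way `finalize` transition mirrors the audit-trail guarantee: a mistake discovered later is corrected by a new verdict on a fresh signal, never by mutating a final row.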
Verdict impact
Your verdict is the only writer of the learner’s competency vector. When a stamped verdict’s undo window closes, an apply-pass moves the radar on the dimensions the scenario exercised, weighted by the scenario’s weight matrix. A scenario with weight 0.8 on Decision-Making and 0.2 on Communication moves Decision-Making four times as much as Communication for the same accepted level.
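The weighting arithmetic can be made concrete with a short sketch. This is a minimal illustration under stated assumptions — the function name, the dictionary shapes, and the simple additive update are all hypothetical, not the product's actual formula; only the proportionality (0.8 vs 0.2 moves one dimension four times as much as the other) comes from the text above.

```python
def apply_verdict(vector, weights, delta):
    """Scale an accepted level delta by each dimension's scenario weight.

    vector:  {dimension: current level}   (hypothetical shape)
    weights: {dimension: scenario weight} (hypothetical shape)
    delta:   the accepted level change
    """
    return {
        dim: vector.get(dim, 0.0) + weights.get(dim, 0.0) * delta
        for dim in set(vector) | set(weights)
    }

vector = {"Decision-Making": 2.0, "Communication": 2.0}
weights = {"Decision-Making": 0.8, "Communication": 0.2}
updated = apply_verdict(vector, weights, delta=1.0)
# Decision-Making moves by 0.8 and Communication by 0.2 -- a 4x difference
# for the same accepted level, as described above.
```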
The AI cannot move the radar. The learner cannot. The org admin cannot. Even Pondara staff cannot. This is a database-layer guarantee — the column that records when the apply-pass ran is not writable by any non-trainer path.
If you ever see “permission denied” while saving a verdict, that is the rail holding. Report it; it should never reach a real user flow.
See the eight sub-competencies for how weights work and evidence and verdicts for the full data shape.
AI signal
The AI signal is labeled “AI”, not “verdict”, on every surface a learner sees. That distinction is intentional. Pre-verdict, the learner sees the AI’s reading of their dialogue but is told plainly that their trainer will review. Post-verdict, the learner sees both your decision and (if you rejected or adjusted substantially) the fact that you disagreed with the AI.
The confidence band reflects how strongly the model believes the dialogue evidences the sub-competency:
- Low — a weak hint. Common when a learner alludes to something in passing, or the dialogue is short. Reject is often correct.
- Medium — a clear hint with some supporting detail. The default band; usually right in direction, sometimes off in magnitude. Adjust is common.
- High — strong textual support. The model is quoting concrete behaviour. Accept is the typical disposition, but you remain the judge.
Disagreement
A row is flagged as a disagreement when your final verdict diverges from the AI signal by more than a set threshold — either in level magnitude, or in direction (for example, the AI proposed a positive delta and your adjustment lands at or below the learner's current level, or you reject the signal outright).
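A sketch of that flagging rule, under loud assumptions: the threshold value, the treatment of rejections as automatic disagreements, and the function shape are all hypothetical illustrations, not the system's actual rule.

```python
LEVEL_THRESHOLD = 1  # assumed: flag when levels differ by more than one step

def is_disagreement(ai_level, verdict_action, verdict_level=None):
    """Hypothetical check: does this verdict diverge enough from the AI signal?"""
    if verdict_action == "reject":
        # The AI's interpretation was dismissed outright (assumption:
        # counted as a directional disagreement).
        return True
    if verdict_action == "adjust":
        # Magnitude disagreement: the adjusted level is far from the proposal.
        return abs(verdict_level - ai_level) > LEVEL_THRESHOLD
    # Accept stamps the AI's own proposed level, so no divergence.
    return False
```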
Disagreement is information, not judgement. It feeds three loops:
- Per AI — repeated disagreement on a sub-competency in your field surfaces to Pondara so the model can be re-grounded.
- Per scenario — repeated disagreement across learners on the same scenario flags it for the field expert to review and possibly fork.
- Per trainer — visible to you on your own profile as a self-awareness aid (am I systematically more lenient on Reflection than the model?). Never punitive.
See disagree with the AI for the full pattern.
When something goes wrong
- The cited snippet doesn’t capture the nuance you remember. Open the full dialogue from the drawer; the snippet is a window, not the whole record. See review a dialogue.
- You disagree but can’t articulate why. Adjust with a note that says so. The system tracks the trend; one ambiguous adjustment is noise, repetition becomes signal.
- The save fails with a generic server error. Most failures are transient. Refresh and try once more. If it persists, capture the request id from the browser console and open a ticket.