
Disagree with the AI

Disagreement is normal and useful. The AI is a starting point, not an oracle. When your professional judgement diverges from the AI’s reading, you override it — and that override carries information the system uses to improve.

Before you begin

  • You have an evidence row open in the verdict drawer.
  • You have read the AI signal, the cited dialogue snippet, and (if needed) the full dialogue. See review a dialogue.

Two ways to disagree

You override the AI by Adjusting (you trim or raise the proposed level) or Rejecting (you drop the signal entirely). Both are covered in render a verdict; this guide is about what happens after you do, and when a single disagreement turns into a pattern worth acting on.

A disagreement is logged when your final verdict diverges from the AI signal by more than a threshold, either in magnitude or in direction. The row is tagged disagreement and feeds back into the calibration loops below.
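The logging rule can be sketched roughly as follows. This is an illustrative sketch only: the numeric level scale, the threshold value, and the function name are assumptions made for clarity, not the platform's actual implementation, and direction-based divergence is reduced here to a simple rejection flag.

```python
THRESHOLD = 1  # hypothetical: the minimum level gap that counts as a disagreement

def is_disagreement(ai_level: int, final_level: int, rejected: bool) -> bool:
    """Illustrative version of the rule above: a verdict is logged as a
    disagreement if the signal was rejected outright, or if the final
    level differs from the AI's proposal by more than the threshold."""
    if rejected:
        return True  # dropping the signal entirely is always a divergence
    return abs(final_level - ai_level) > THRESHOLD

# A small tweak within the threshold is not a disagreement:
print(is_disagreement(ai_level=3, final_level=4, rejected=False))  # False
# A larger adjustment is:
print(is_disagreement(ai_level=3, final_level=5, rejected=False))  # True
```

This also explains the troubleshooting note later in this guide: adjustments that stay under the threshold never appear in your disagreement counter.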

What disagreement feeds

Per AI, per field

Repeated disagreement on the same sub-competency across many trainers in your field surfaces to Pondara. The model is re-grounded on field-specific evidence so its proposals improve over time. You do not have to do anything for this loop to work — your verdicts are the signal.

Per scenario

Repeated disagreement on the same scenario, across different learners, flags the scenario for the field expert. The situation may be ambiguous, the actor brief may be off, or the dimensions the scenario claims to exercise may not match what learners actually demonstrate. The scenario may need a fork — a new variant that preserves the original while addressing the issue. See scenario lifecycle.

Per learner

Repeated disagreement on the same learner, across different scenarios and dimensions, is rarely a calibration issue. Usually it is a coaching conversation — the learner is doing something consistently that the AI reads one way and you read another. Open the learner detail page and look at the pattern across their recent engagements.

Per trainer

Your own disagreement rate is visible to you on your profile. It is a self-awareness aid, not a performance metric — am I systematically more lenient on Reflection? More strict on Domain Knowledge? — and nothing punitive happens. If the rate is unusually high and sustained across many sub-competencies, the org admin sees an escalation and may want a conversation. That is rare.

When something goes wrong

  • You disagree, but no flag fires. That is fine. A single disagreement is noise. The flags above only fire on repetition.
  • The disagreement counter on your profile feels wrong. Check the threshold — adjustments under the threshold are not counted as disagreements. Small tweaks of the AI’s level are not divergence.
  • You see a per-scenario flag on a scenario you authored or promoted. Open the scenario, read the recent verdicts, and decide whether to fork. Do not edit a Live scenario substantively in place.