Run the realism preview
The realism preview is an automated AI-driven check that reads your draft and flags the obvious gaps before your peers do. It is not a quality bar and it does not validate your scenario. Think of it as a spellchecker for scenarios — useful for catching things you forgot, not for deciding whether the scenario is good.
What it catches
- Thin actor briefs (the AI would not have enough to play the role).
- Vague triggers that don’t pin down what the learner walks into.
- Weights that do not match the situation (everything weighted equally is almost always wrong).
- Success criteria that are too narrow to ground a fair trainer verdict.
- Phase sequences that contradict the situation.
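To make those checks concrete, here is a minimal sketch of the draft pieces the preview reads. The structure and field names are hypothetical, invented for illustration; they are not the tool's actual schema:

```python
# Hypothetical scenario draft, invented for illustration.
# The real tool's fields and format may differ.
scenario = {
    # Trigger: concrete numbers, times, and stakes, not abstractions.
    "trigger": "03:40, bed 12: SpO2 alarm at 84%, charge nurse off the ward",
    # Actor briefs: what they want, what they withhold, how they speak.
    "actors": {
        "charge_nurse": (
            "Wants to avoid waking the registrar; will not volunteer the "
            "missed earlier obs; clipped and task-focused under pressure."
        ),
    },
    # Phases: a sequence that fits the situation.
    "phases": ["recognise", "escalate", "hand over"],
    # Weights: skewed toward what the learner is actually judged on.
    # A uniform spread across every dimension would draw a flag.
    "weights": {"escalation": 0.5, "communication": 0.3, "prioritisation": 0.2},
    # Success criteria: clear enough to ground a verdict, broad enough
    # to credit competent work that takes a different path.
    "success": "Escalates within the phase, states the SpO2 reading and "
               "trend, and hands over in a structured format.",
}
```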
What it does NOT do
It does not sign off your scenario. It does not check field accuracy — no AI knows whether a charge nurse on your ward would actually behave that way. Peer validation is the real gate.
How to run it
Open your draft. Click Run realism preview. The preview takes a few seconds. Results come back per section (a sketch of the output shape follows the list):
- Trigger — concrete enough? specific stakes?
- Actors — does each brief carry enough for the AI to play them?
- Phases — does the sequence fit the situation?
- Weights — do the weighted dimensions match the work the learner will actually do?
- Success — clear enough to ground a trainer verdict?
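The per-section results are easiest to think of as one list of advisory findings per part of the draft. A hypothetical sketch of that shape; the field names are illustrative, not the tool's documented output:

```python
# Hypothetical preview output, invented for illustration.
# Every finding is advisory: nothing here blocks submission.
preview_result = {
    "trigger": [],  # no findings: concrete detail, specific stakes
    "actors": [
        {"actor": "charge_nurse", "flag": "Actor brief is thin"},
    ],
    "phases": [],   # sequence fits the situation
    "weights": [
        {"flag": "Weights are uniform"},
    ],
    "success": [],  # clear enough to ground a trainer verdict
}
```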
Each flag is advisory. You can address it, or dismiss it and submit for peer validation anyway; peers see the same preview output alongside your draft.
Common flags and what they usually mean
- “Actor brief is thin.” Add a sentence on what the actor wants, what they will not volunteer, and how they speak under pressure.
- “Trigger lacks concrete detail.” Replace abstractions with specifics — numbers, times, names of equipment, the actual phrase the customer used.
- “Weights are uniform.” A scenario almost never exercises eight dimensions equally. Pick the two or three the learner will actually be judged on and weight those heavily; a sketch of the redistribution follows this list.
- “Success criteria are too narrow.” If your criteria read like a checklist of three exact phrases, broaden them — leave room for the trainer to recognise competent work that takes a different path.
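For the weights flag in particular, the fix is usually a redistribution rather than a rewrite. A hedged sketch, with hypothetical dimension names:

```python
# Uniform weights: the pattern the preview flags. Nothing tells the
# trainer which dimensions this scenario actually exercises.
uniform = {dim: 1 / 8 for dim in (
    "communication", "prioritisation", "escalation", "documentation",
    "empathy", "safety", "teamwork", "technique",
)}

# Focused weights: two or three dimensions carry the verdict, the rest
# stay as background. The total still sums to 1.
focused = {
    "escalation": 0.5,
    "communication": 0.3,
    "prioritisation": 0.2,
}
assert abs(sum(focused.values()) - 1.0) < 1e-9
```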
When something goes wrong
- The preview times out. Click Retry. The check runs against the AI provider; transient network issues happen.
- The preview is wrong about your scenario. Override it. Peer validation is the real gate, and your peers will tell you if the preview was right after all.
- The preview keeps catching the same thing and you don’t agree. Read it once more in your peers’ voice. If you still don’t agree, submit anyway and let validation surface the disagreement.