
Scenario lifecycle

A scenario does not appear in front of a learner the moment it is written. It travels through a small status machine, with two gates a field expert has to pass before anyone practises against it.

```mermaid
stateDiagram-v2
    [*] --> Draft
    Draft --> PendingValidation: submit
    PendingValidation --> Validated: peer sign-off
    Validated --> Live: assign to learners
    Live --> Retired: retire
    Live --> Draft: substantive edit (fork)
```

  • Draft. You are still writing it. Only you and your co-authors see it.
  • Pending validation. You submitted it. Other experts in your field can read it and sign off.
  • Validated. A peer (or two — see below) has signed off. Trainers can now assign it.
  • Live. A trainer has assigned it. Learners can run it.
  • Retired. Out of rotation. In-flight engagements continue; new ones cannot start.
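The allowed transitions can be sketched as a small lookup table. This is an illustrative sketch only — the state and event names mirror the diagram above, not the platform's actual API:

```python
# Hypothetical sketch of the scenario status machine. State and event
# names follow the diagram; none of this is the platform's real schema.
TRANSITIONS = {
    ("Draft", "submit"): "PendingValidation",
    ("PendingValidation", "peer_sign_off"): "Validated",
    ("Validated", "assign"): "Live",
    ("Live", "retire"): "Retired",
    ("Live", "substantive_edit"): "Draft",  # forks into a new draft
}

def advance(status: str, event: str) -> str:
    """Return the next status, or raise if the transition is not allowed."""
    try:
        return TRANSITIONS[(status, event)]
    except KeyError:
        raise ValueError(f"{event!r} is not allowed from {status!r}")
```

Anything not in the table — assigning a Retired scenario, retiring a Draft — is simply rejected, which is the point of a small, closed status machine.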

Peer validation

Two field experts in the same field must sign off before a scenario becomes Validated. If your field has only one expert in the platform, one sign-off is allowed and the scenario carries a “Preview — not peer-reviewed” badge until a second expert eventually signs.

The reason is simple: nobody but a working practitioner can say “this could happen here, and this is what it looks like”. The AI cannot bypass this gate. AI-suggested scenarios always start as drafts and go through the same peer step.
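The gate logic reduces to a count of sign-offs against the number of experts available in the field. A minimal sketch, with made-up names for the states:

```python
# Illustrative sketch of the peer-validation gate. Two sign-offs validate
# outright; a lone expert in the field yields a preview validation that
# keeps its badge until a second expert eventually signs.
def validation_state(sign_offs: int, experts_in_field: int) -> str:
    if sign_offs >= 2:
        return "validated"
    if sign_offs == 1 and experts_in_field == 1:
        # Carries the "Preview — not peer-reviewed" badge.
        return "validated-preview"
    return "pending"
```

Note that an AI-suggested scenario enters this function exactly like a human-written one: with zero sign-offs.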

Realism preview

Before you submit a draft for peer validation, you can run a realism preview. The AI reads your scenario the way a peer reviewer would and flags obvious gaps: a missing actor brief, a vague trigger, weights that contradict the success criteria, a setting that names a real organisation it should not.

The preview is a sanity check, not a quality bar. It does not approve your scenario. It does help you catch the things peers would otherwise flag in the first round, so the peer step focuses on what only a practitioner can judge.

Fork

Substantive edits to a Live scenario fork the scenario into a new draft. Cosmetic edits do not.

Once a scenario is Live, what happens when you want to change it depends on what you change.

  • Cosmetic edits — fixing a typo, polishing a sentence in the brief, clarifying a label — stay in place. They are logged so the audit trail is honest, but the same scenario keeps running.
  • Substantive edits — changing the actors, the phase sequence, the competency vector, the weights, or the success criteria — fork the scenario into a new draft. The new draft goes through validation again. In-flight engagements stay on the version they started.

The rule borrows from how Wikipedia handles stable revisions: small changes flow, big changes branch. Nobody who is in the middle of an engagement gets the rug pulled.
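The cosmetic/substantive split is effectively a fixed list of fields whose change forces a fork. A sketch, with illustrative field names:

```python
# Sketch of the fork rule: edits touching any of these fields are
# substantive and fork a Live scenario into a new draft; anything else
# is cosmetic and stays in place (but is still logged for the audit
# trail). Field names are assumptions, not the real schema.
SUBSTANTIVE_FIELDS = {
    "actors",
    "phase_sequence",
    "competency_vector",
    "weights",
    "success_criteria",
}

def classify_edit(changed_fields: set) -> str:
    """Return "fork" for substantive edits, "in_place" for cosmetic ones."""
    return "fork" if changed_fields & SUBSTANTIVE_FIELDS else "in_place"
```

A typo fix to the brief changes no listed field, so it flows in place; touching the weights, even alongside a typo fix, branches.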

Actors

Most scenarios involve someone other than the learner — a customer, a patient, a charge nurse, a workshop foreman. Each of these is a scenario actor with a role name and a role brief.

The role brief is the load-bearing part. It tells the AI who this person is, what they want, what they will and will not say, what state they are in. During the hybrid dialogue the AI plays the actor in role; it does not narrate from outside the scene. A thin brief produces a thin actor. A specific brief — “the patient is 78, recently widowed, came in for a check-up but is hiding chest pain” — produces an interlocutor a learner can actually engage with.

You write actors. The AI plays them. The trainer watches.
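The shape of an actor is deliberately small: a role name and the load-bearing brief. A hypothetical sketch of that shape — the dataclass and field names are assumptions, not the platform's real schema:

```python
# Illustrative shape of a scenario actor: a role name plus the role
# brief the AI plays from. Names here are assumptions.
from dataclasses import dataclass

@dataclass
class ScenarioActor:
    role_name: str
    role_brief: str  # who they are, what they want, what they will and won't say

# The specific brief from the example above.
patient = ScenarioActor(
    role_name="patient",
    role_brief=(
        "78, recently widowed, came in for a check-up "
        "but is hiding chest pain"
    ),
)
```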

Where to look next