On the Decision Surface

Decision Scaffold #3: Continuity Risk Signals

March Subscriber Appreciation

Dr. Dana Moreno
Mar 27, 2026

This Decision Scaffold is titled Continuity Risk Signals and serves as this month’s subscriber appreciation post.

This is the point in the series where the work changes character.

The earlier scaffolds focus on noticing: distinguishing kinds of risk, recognizing when assistance becomes reliance, and slowing reaction long enough to see what is actually happening. This scaffold picks up where those observations scale beyond individuals and teams and become institutional concerns.


What this scaffold is for

Many AI systems begin as optional tools. Over time, some become continuity-sensitive: embedded in workflows, routines, or professional identity.

When those systems change, degrade, or are withdrawn, disruption can create foreseeable harm, even when there is no malfunction, misuse, or intent.

This scaffold is designed to help governance teams answer one question:

Do foreseeable continuity disruptions now require duty-of-care planning?

It does this without assuming agency, consciousness, or moral status in the system. It relies instead on observable signals: how people use the system, how embedded it has become, what happens when access is disrupted, and what kinds of harm would plausibly follow.


How it works

The scaffold guides you through three phases:

  1. Evidence and scope review
    You establish what is known about the system, its use, prior disruptions, and what remains uncertain. This step is deliberate. Governance failures often begin by responding to pressure rather than evidence.

  2. Continuity risk signals
    You review observable indicators across four categories:

    • user embedding

    • reliance impact

    • foreseeability

    • harm potential

    There is no scoring. The goal is convergence, not thresholds.

  3. Preliminary foreseeability assessment
    You document whether continuity risk appears low, emerging, or high—and whether duty-of-care planning has become relevant. The scaffold provides space for rationale, dissent, and reassessment planning.

The output is a documented governance judgment that can be reviewed, shared, and revisited.
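One way to see how the three phases fit together is to sketch the documented judgment as a simple record. This is an illustration only, not part of the scaffold itself; every name here (`ContinuityAssessment`, the category strings, the field names) is a hypothetical choice, and note that signals are recorded as observations, not scores, consistent with the scaffold's no-scoring rule.

```python
from dataclasses import dataclass, field

# Illustrative labels for the four signal categories in phase 2.
SIGNAL_CATEGORIES = ("user_embedding", "reliance_impact",
                     "foreseeability", "harm_potential")
RISK_LEVELS = ("low", "emerging", "high")

@dataclass
class ContinuityAssessment:
    """One documented governance judgment: evidence, signals, conclusion."""
    evidence_summary: str                        # phase 1: what is known / uncertain
    signals: dict = field(default_factory=dict)  # phase 2: notes per category, no scores
    risk_level: str = "low"                      # phase 3: low / emerging / high
    duty_of_care_relevant: bool = False
    rationale: str = ""
    dissent: str = ""                            # space for recorded disagreement
    reassessment_due: str = ""                   # e.g. a date for revisiting

    def note_signal(self, category: str, observation: str) -> None:
        """Record an observable indicator under one of the four categories."""
        if category not in SIGNAL_CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.signals.setdefault(category, []).append(observation)

    def conclude(self, risk_level: str, duty_of_care_relevant: bool,
                 rationale: str) -> None:
        """Phase 3: document the preliminary foreseeability assessment."""
        if risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {risk_level}")
        self.risk_level = risk_level
        self.duty_of_care_relevant = duty_of_care_relevant
        self.rationale = rationale

# Hypothetical usage
a = ContinuityAssessment(
    evidence_summary="Assistant embedded in intake workflow; one prior outage.")
a.note_signal("user_embedding", "Daily use by all intake staff for 18 months")
a.conclude("emerging", True,
           "Disruption would foreseeably delay routine triage work")
```

The point of a structure like this is that the output can be reviewed, shared, and revisited later, including the dissent and reassessment fields the scaffold asks you to leave room for.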


What this scaffold does not do

This scaffold does not prescribe actions.
It does not define legal obligations.
It does not tell you what policy to adopt or how to mitigate risk.

That is intentional.

Its purpose is to identify the moment when continuity risk becomes governance-relevant, so that planning can be proportional rather than reactive.


Who this is for

This scaffold is designed for:

  • policy and governance teams

  • platform owners and institutional deployers

  • leadership groups responsible for AI-mediated systems

  • anyone tasked with anticipating harm before disruption occurs

It is board-safe, reusable, and suitable for audit or review contexts.


Final Comments & Download

This scaffold marks a shift from personal or team-level noticing to institutional responsibility. It formalizes foreseeability without moralizing and creates a defensible surface for governance deliberation.

If you’ve been reading On the Decision Surface, you’ll recognize the underlying commitments here: restraint, proportionality, and attention to how people actually adapt to systems over time.

This is where that thinking becomes governance infrastructure.
