
Official Framework Guide

SHIFT Agile Framework

Scalable Hybrid Iterative Framework for Teams · Version 2.0 · Ricardo J. Minas, 2025

Introduction

Most Agile frameworks were designed for a world that no longer exists: co-located teams, predictable delivery, and a manageable human-to-output ratio. The frameworks that followed tried to scale that original model rather than rethink it.

SHIFT was built from a different starting point. It begins with the conditions modern teams actually face, and it is designed to augment whatever you are already running, not to replace it.

The four failures SHIFT addresses

The Coordination Failure

When organisations have more than one team, most frameworks break down. Scrum of Scrums is a band-aid. SAFe adds so much ceremony that teams spend more time coordinating than delivering. Most organisations end up with an informal coordination layer of side conversations and management escalations that do not scale or leave a record.

The Innovation Failure

Delivery pressure always wins over exploration. "We'll do the innovation sprint next cycle" never happens. The people closest to the problems are also the people best positioned to solve them. But they never get the protected space to try. SHIFT makes exploration structurally mandatory, not aspirational.

The Alignment Failure

Teams set OKRs in January and discover in March that the sprint work had no measurable connection to them. The gap between strategy and execution is real, persistent, and rarely discussed honestly. Most frameworks do not have a mechanism for mid-sprint alignment checking. SHIFT does.

The AI Transition Failure

Teams adopt AI tools within frameworks designed before AI existed. Story points were never designed for a production function where one well-specified task produces ten times the output of a vague one. Velocity charts are misleading when AI capability improves between sprints. The definition of done needs to include explicit review steps that did not exist before AI-generated output became common. SHIFT was built knowing that AI-assisted work is the default, not the exception.

SHIFT stands for Scalable Hybrid Iterative Framework for Teams. The word "hybrid" is intentional: SHIFT does not demand you discard Scrum or abandon SAFe. It layers on top, replaces what is broken, and fills what is missing.

Part I: Foundation

The Four Layers of SHIFT

SHIFT operates across four distinct layers. Each layer has a different cadence, different participants, and a different decision scope. The failure mode of most frameworks is that everything lives at one altitude: execution. Strategy bleeds into standup, governance becomes ceremony, and learning is an afterthought.

Note that Learning is Layer 1, not Layer 4. This is deliberate. In most frameworks, learning is squeezed at the end of a cycle. In SHIFT, it is foundational. Everything else builds on it.

Layer 1: Learning (continuous + sprint end): retrospection, AI feedback, IIS, pattern library.
Layer 2: Delivery (2-week sprint): execution, DOI checks, throughput, sprint ceremonies.
Layer 3: Governance (bi-weekly): risk, dependencies, resource decisions, compliance.
Layer 4: Alignment (quarterly): strategic objectives, portfolio decisions, OKR integration.

| Layer | Cadence | Purpose | Key artefacts |
| --- | --- | --- | --- |
| 1 · Learning | Continuous + sprint end | Capture and route learning back into delivery | AI-Augmented Retro, Learning Cards, Pattern Library, SEMI recalibration |
| 2 · Delivery | 2-week sprint | Execute committed work, maintain DOI alignment, produce software | Sprint backlog, DOI Map, Throughput data, Spec docs |
| 3 · Governance | Bi-weekly, 75 min | Cross-node decisions: Unblock, Approve, Defer | LGP pre-read, Decision log, Dependency map |
| 4 · Alignment | Quarterly (12-week cycle) | Connect delivery to strategy, set Cycle Objectives | Cycle Objective Set, DOI Health Scores, IIS themes |

Layer integration handshakes

| Handshake | From | To | Mechanism |
| --- | --- | --- | --- |
| Sprint objective | Alignment | Delivery | DOI map, sprint goal statement |
| Blocker escalation | Delivery | Governance | 4-hour SLA escalation protocol |
| Governance signal | Governance | Alignment | LGP decision log summary |
| Learning injection | Learning | Delivery | Pattern library, SEMI recalibration |

Core Principles

People before process

Every structural choice in SHIFT is evaluated against whether it helps or burdens the people doing the work. Governance, tooling, and ceremony exist to serve teams.

Hybrid by design

Distributed and co-located contributors are treated as equal participants. Asynchronous-first communication is the default. Synchronous time is reserved for decisions that genuinely require it.

Iterative and adaptive

Short cycles with structured reflection. SHIFT teams do not commit to large plans. They commit to learning loops that progressively sharpen direction.

Lean governance

Oversight and accountability without bureaucracy. Lean Governance Pods keep decisions moving at the pace the work demands, with clear ownership and minimal coordination overhead.

Innovation is part of delivery

Experimentation is not deferred to a future quarter. Innovation-Integrated Sprints build dedicated capacity for exploratory work inside the delivery rhythm so that learning does not compete with shipping.

Continuous alignment

Strategy does not live in a quarterly deck. Dynamic Objectives Integration keeps team-level work connected to organisational goals throughout every sprint, not just at planning.


Part II: Structure

Adaptive Collaboration Nodes (ACNs)

An Adaptive Collaboration Node is the primary delivery unit in SHIFT. The word "node" is deliberate: it implies connectivity, not isolation. An ACN has clear internal structure, a defined scope of ownership, and explicit interfaces to other nodes.

An ACN owns a capability domain, not a feature list. It is responsible for the full vertical slice of work within that domain: from specification through testing through deployment. It owns outcomes, not outputs.

Node Composition Matrix

| Size | Anchor | Delivery Lead | Contributors | AI Agent Roles | Notes |
| --- | --- | --- | --- | --- | --- |
| 3 people | 1 | 0 (Anchor doubles) | 2 | 0-1 | Minimum viable. IIS suspended. |
| 5 people | 1 | 1 | 3 | 1 | Standard node. Full SHIFT operation. |
| 7 people | 1 | 1 | 4-5 | 1-2 | Preferred size. Full IIS + two AI agents. |
| 9 people | 1 | 1 | 6-7 | 2-3 | Maximum. Consider splitting. |
AI Agent Roles are not headcount. They are defined functional roles assigned to AI tooling — for example "Spec Reviewer Agent" or "Throughput Analyst Agent". Each has a defined input, defined output, and a human owner responsible for its outputs.

Formation

An ACN forms when a capability domain requires more than two cycles of sustained delivery. Formation has four steps:

  1. Domain scoping: define the capability domain in one sentence. If it takes more than one sentence, the domain is too broad.
  2. Anchor assignment: identify an Anchor with delivery credibility in the domain, not just seniority.
  3. Composition drafting: the Anchor proposes a composition using the Node Composition Matrix. The LGP ratifies.
  4. Interface definition: the new node and adjacent nodes produce a one-page interface document covering what they consume and produce for each other, and how they escalate cross-node issues.

Dissolution

An ACN dissolves when its capability domain is complete, when throughput signals indicate sustained delivery failure across two consecutive Red DOI cycles, or when team size drops below three without approved backfill. Dissolution requires a 2-sprint wind-down. The node's Learning Cards remain in the shared pattern library permanently.

ACN anti-patterns

| Anti-pattern | Symptom | Fix |
| --- | --- | --- |
| The Siloed Node | All cross-node issues resolved informally; no LGP escalations | Mandatory interface document review at cycle boundary |
| The Permanent Node | Backlog is entirely maintenance; no new capability work; throughput declining | Dissolution review when two consecutive sprints contain only maintenance items |
| The Hero Node | DOI green but high individual fatigue; Learning Cards authored by the same person every sprint | Forced contribution rotation; AI task offload review |
| The Phantom AI Node | High throughput but rising defect rate and spec compliance failures | AI Responsibility Map audit; mandatory human review for all AI-led outputs |

Lean Governance Pods (LGPs)

A Lean Governance Pod is SHIFT's answer to the governance tax problem. In most scaled frameworks, governance consumes 20 to 40 percent of senior contributor time. LGPs reduce governance overhead to under 10 percent of any individual's time while maintaining decision quality and traceability.

An LGP is not a steering committee or an approval board. It is a decision-making forum with a defined scope, a time budget, and an explicit anti-bureaucracy mandate.

Composition and cadence

Hard rule: LGP never exceeds 7 participants. If more than 7 people are needed to make a decision, the decision scope is too broad. Break it into smaller decisions.
| Session type | Duration | Trigger |
| --- | --- | --- |
| Bi-weekly | 75 min max | Aligned to sprint boundaries, fixed cadence |
| Emergency LGP | 30 min max | Called by any Anchor or Governance Steward; single agenda item only |
| Async LGP | 24-hour window | Clearly bounded, low-risk decisions; any participant can request sync |

Fixed bi-weekly agenda

| Slot | Duration | Purpose |
| --- | --- | --- |
| Pre-read acknowledgement | 5 min | Confirm all participants have read the pre-read |
| Node Health Review | 15 min | RAG status across all nodes; flag Reds |
| Decisions: Unblock | 20 min | Cross-node or external blockers requiring action |
| Decisions: Approve | 15 min | Items requiring formal LGP ratification |
| Decisions: Defer | 10 min | Items not ready: assign owner and due date |
| DOI Alignment check | 5 min | Confirm sprint objectives still align to cycle goals |
| Parking lot | 5 min | Items not in pre-read: logged only, not discussed today |
Non-negotiable rule: If an item is not in the pre-read, it is not discussed today. This single rule prevents the LGP from becoming a fire-fighting forum.

Decision types

SHIFT defines exactly three decision types for LGP. Any item that does not fit one of these three types does not belong in the LGP.

| Type | Definition | Examples |
| --- | --- | --- |
| Unblock | A delivery blocker requiring cross-node coordination, external engagement, or resource reallocation. The LGP resolves it or assigns an owner with a deadline. | External API dependency blocking Node A; resourcing conflict between nodes |
| Approve | A decision that has been prepared, pre-read, and requires formal ratification. Proposals must be max two pages in the pre-read. | New node formation; IIS theme ratification; cycle objective adjustment; external dependency commitment |
| Defer | Item not ready for decision. LGP assigns an owner, a due date, and specifies the exact information missing. Not 'we'll discuss later.' | Proposal missing compliance sign-off; budget data not yet available |
Three-strikes rule: Any item that appears in the pre-read three consecutive times without resolution is automatically escalated to the Alignment layer. The Governance Steward does not need LGP agreement to trigger this escalation.

Roles

SHIFT has four core roles. In small teams, they collapse. In large teams, they remain distinct. Roles are defined by accountability, not job title.

SHIFT Anchor

The single accountable person for an ACN's capability domain outcomes, not outputs. Owns DOI connection, prioritisation decisions within the node, IIS theme selection, and the AI Responsibility Map. Represents the node in LGP. Should spend 30 to 40 percent of their time on direct delivery work: writing specs, reviewing outputs, pairing on complex items. An Anchor who attends only meetings is disconnected from delivery reality.

Delivery Lead

The operational heart of the ACN. Owns the sprint plan, SEMI compliance, Sprint DOI Map, the mid-sprint DOI check, the 4-hour blocker escalation SLA, the Node Health Card, retrospective facilitation, and Monte Carlo forecasting at sprint end. This is an operational role with delivery skin in the game, not a coaching role.

Node Contributors

The practitioners doing the delivery work. Expected to write or contribute to specs that meet the SEMI threshold, own the human review step for all AI-led outputs they are responsible for, and contribute to IIS themes with genuine engagement. Every contributor defines their personal AI workflow in the AI Responsibility Map.

Governance Steward

Responsible for the health of the governance system, not for making governance decisions. Compiles and distributes the LGP pre-read 24 hours before each session. Facilitates the bi-weekly LGP. Maintains the decision log (public, searchable, permanent). Runs the three-strikes escalation. Owns the Maturity Model self-assessment process.

Role flexibility for small teams

| Team size | Role combinations | Notes |
| --- | --- | --- |
| 3 people | Anchor + Governance Steward; Delivery Lead + primary Contributor | IIS suspended. LGP replaced by a weekly 30-min external alignment meeting. |
| 5 people | Anchor dedicated; Delivery Lead dedicated; Steward function shared between them | Full SHIFT operation viable. |
| 7+ people | All four roles held by distinct people | Anchor and Delivery Lead should not be combined above five people. |

Part III: Rhythm

The SHIFT Lifecycle

SHIFT runs in 2-week sprints grouped into 12-week cycles (six sprints per cycle). The cycle is the primary strategic alignment unit. The sprint is the primary delivery unit.

Sprint structure (10 working days)

| Day | Ceremony / Activity |
| --- | --- |
| Day 1 | Sprint Kickoff, Spec Review Session |
| Days 1-10 | Daily AI-First Standup (15 min, async-first) |
| Days 6-7 | Mid-Sprint DOI Check-in (30 min) |
| Day 10 | Sprint Review, including IIS Review (60 min total) |
| Day 10 | AI-Augmented Retrospective (60 min) |

Cycle structure (6 sprints, 12 weeks)

| Sprint | Type | Notes |
| --- | --- | --- |
| Sprint 1 | Delivery | Cycle Objectives set, DOI map initialised |
| Sprint 2 | Delivery | LGP bi-weekly cadence active |
| Sprint 3 | Delivery + IIS | First IIS of cycle (15% capacity ring-fenced) |
| Sprint 4 | Delivery | Monte Carlo forecast updated |
| Sprint 5 | Delivery | Mid-cycle DOI calibration |
| Sprint 6 | Delivery + IIS + Cycle Review | IIS Review, Cycle Portfolio Review (LGP), Maturity Model self-assessment |
Total ceremony overhead: approximately 6.7 hours per person per sprint (8.4% of available capacity). Scrum: 12 to 16 hours. SAFe: 20+ hours at scale.

Innovation-Integrated Sprints (IIS)

IIS is SHIFT's mechanism for sustaining exploratory work within a delivery-focused framework, without the failure modes of 20 percent time, hackathons, or separate innovation teams.

Why IIS works where 20% time does not

| Problem | 20% Time | IIS |
| --- | --- | --- |
| Time protection | Informal; first to be cut under delivery pressure | Formally allocated; requires LGP approval to reduce below 10% |
| Output format | No defined format; ideas die in isolation | Learning Cards: structured, searchable, permanent |
| Promotion pathway | None | Learning Card → Pilot → LGP Approval → Cycle Portfolio Investment |
| Theme selection | Personal interest, disconnected from strategy | DOI-connected at cycle kickoff |

Capacity allocation

| Allocation | % of sprint | Condition |
| --- | --- | --- |
| Standard | 15% | Default for all nodes above 5 people |
| Minimum | 10% | Below this, IIS is performative and should be formally suspended |
| Maximum | 25% | Designated innovation sprint; requires LGP approval; cannot occur in consecutive sprints |
IIS capacity is subtracted from total node capacity before sprint planning. It is not a "leftover" activity. It is planned first.

Learning Card format

A Learning Card is the required output of every IIS sprint. It is not a demo, a slide deck, or a Confluence page. It is a structured capture of what was learned.

Theme question explored: The question the IIS sprint attempted to answer.
What we did: Two to four sentences on the specific experiment conducted. Not the idea, the actual work.
What we learned: The most important finding, stated as a declarative sentence. If the team cannot state one clear learning, the theme was not testable enough.
What we did not learn: Explicitly captures the questions that remain open. Prevents false confidence.
Signal: Positive (evidence of value), Neutral (inconclusive), or Negative (evidence against pursuing this direction).
Recommendation: Promote to Pilot, Continue Exploring, Archive, or Share Externally.
SEMI score for next step: If promoting, the SEMI score of the proposed next step, to give the promotion decision a cost signal.
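Where a node tracks Learning Cards in tooling rather than documents, the format above maps naturally onto a small record type. A minimal sketch in Python; the field, class, and enum names are illustrative, not part of the SHIFT specification.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Signal(Enum):
    POSITIVE = "positive"    # evidence of value
    NEUTRAL = "neutral"      # inconclusive
    NEGATIVE = "negative"    # evidence against pursuing this direction

class Recommendation(Enum):
    PROMOTE_TO_PILOT = "promote to pilot"
    CONTINUE_EXPLORING = "continue exploring"
    ARCHIVE = "archive"
    SHARE_EXTERNALLY = "share externally"

@dataclass
class LearningCard:
    theme_question: str            # the question the IIS sprint attempted to answer
    what_we_did: str               # the specific experiment, not the idea
    what_we_learned: str           # one declarative sentence
    what_we_did_not_learn: str     # open questions, to prevent false confidence
    signal: Signal
    recommendation: Recommendation
    next_step_semi: Optional[int] = None   # cost signal, required when promoting

    def __post_init__(self) -> None:
        # A promotion decision without a SEMI score on the next step has no cost signal.
        if self.recommendation is Recommendation.PROMOTE_TO_PILOT and self.next_step_semi is None:
            raise ValueError("Promote to Pilot requires a SEMI score for the next step")
```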

The promotion funnel

  1. IIS Sprint: Learning Card created, Signal: Positive.
  2. IIS Review: Day 10, 20 minutes.
  3. Decision point: Promote to Pilot? Yes → continue. No → Archive or Continue Exploring.
  4. Pilot Sprint: 1 sprint, defined success criteria, standard delivery capacity.
  5. Pilot Review: Anchor + LGP decision.
  6. Decision point: Investment? Yes → continue. No → Archive with full data.
  7. LGP Approval: Cycle Portfolio Investment; enters next cycle's DOI map.

Dynamic Objectives Integration (DOI)

DOI keeps sprint delivery connected to strategic objectives in real time. The core problem it solves: misalignment between strategic plan and delivery reality is usually only discovered at quarter end. DOI introduces a continuous signal system that surfaces misalignment during the sprint, when something can still be done about it.

DOI is not OKRs. It is the operational layer that connects OKRs (or any strategic objective system) to sprint delivery.

Green / Amber / Red tagging

Every sprint backlog item is tagged at the Spec Review Session:

| Status | Definition | Required action |
| --- | --- | --- |
| Green | Item directly serves a Cycle Objective. Throughput supports completion. Monte Carlo confidence above 70%. | Monitor. Update at mid-sprint check. |
| Amber | Item at risk but recoverable. Throughput below baseline for 2+ days, a blocker actively being worked, or confidence between 40-70%. | Named recovery action with an owner. Must move to Green within 4 calendar days. |
| Red | Item will not complete without intervention. Blocker open 48+ hours, confidence below 30%, or context has changed. | Declare within 4 hours. Resolution meeting within 24 hours. Named action, named owner, deadline. |
The Amber Trap rule: Any Amber item that has not moved to Green within four calendar days automatically converts to Red. The Delivery Lead cannot override this. Only the Governance Steward can, with documented justification. This single rule prevents teams from using Amber as an indefinite holding state.
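The rule is mechanical, which makes it a good candidate for automation in whatever tracker the node uses. A minimal sketch, assuming each item records the date it was tagged Amber; the field and function names are illustrative.

```python
from datetime import date, timedelta

AMBER_WINDOW = timedelta(days=4)  # calendar days, per the Amber Trap rule

def apply_amber_trap(items: list[dict], today: date) -> list[dict]:
    """Auto-convert Amber items that have exhausted their 4-day window to Red.

    Each item is a dict with a 'status' and an 'amber_since' date. Only the
    Governance Steward may override the conversion, with documented
    justification, so no override path is modelled here.
    """
    converted = []
    for item in items:
        if item["status"] == "amber" and today - item["amber_since"] > AMBER_WINDOW:
            item["status"] = "red"  # triggers the Red Item Protocol
            converted.append(item)
    return converted

# Example: an item tagged Amber five days ago converts automatically.
aged = apply_amber_trap(
    [{"id": "A-1", "status": "amber", "amber_since": date(2025, 3, 3)}],
    today=date(2025, 3, 8),
)
```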

DOI Health Score

The DOI Health Score is calculated as: (Green Contributions / Total Contributions) × 100

| Score | Status | LGP action |
| --- | --- | --- |
| 80-100 | Strong | Monitor |
| 60-79 | Moderate | Review at next LGP |
| 40-59 | At Risk | Emergency LGP if Red items present |
| Below 40 | Critical | Escalate to Alignment layer |

The trend across sprints is more important than any single score. A score of 75 trending upward is healthier than a score of 80 trending downward.
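For teams scripting the score against a board export, the formula and the LGP action bands reduce to a few lines. A sketch in Python; the tag encoding is an assumption, not a SHIFT requirement.

```python
def doi_health_score(tags: list[str]) -> float:
    """DOI Health Score = (Green Contributions / Total Contributions) x 100."""
    if not tags:
        raise ValueError("no Sprint Contributions tagged")
    return 100 * sum(t == "green" for t in tags) / len(tags)

def lgp_action(score: float) -> str:
    if score >= 80:
        return "Strong: monitor"
    if score >= 60:
        return "Moderate: review at next LGP"
    if score >= 40:
        return "At Risk: emergency LGP if Red items present"
    return "Critical: escalate to Alignment layer"

# Mid-sprint example from a 12-item sprint: 8 Green, 3 Amber, 1 Red.
tags = ["green"] * 8 + ["amber"] * 3 + ["red"]
score = doi_health_score(tags)           # 66.7 -> Moderate
print(round(score), lgp_action(score))
```

Tracking the score per sprint in a list makes the trend comparison trivial, which matters more than the single reading.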


Part IV: Ceremonies

Ceremonies

Every SHIFT ceremony has a defined output. If the output is not produced, the ceremony has failed regardless of whether it was held on time.

| Ceremony | Duration | When | Owner |
| --- | --- | --- | --- |
| Spec Review Session | 45 min | Day 1 | Delivery Lead |
| AI-First Daily Standup | 15 min | Daily (async-first) | All contributors |
| Mid-Sprint DOI Check-in | 30 min | Days 6-7 | Delivery Lead + Anchor |
| Sprint Review (incl. IIS Review) | 60 min | Day 10 | Anchor |
| AI-Augmented Retrospective | 60 min | Day 10 | Delivery Lead |
| Cycle Portfolio Review | 90 min | Sprint 6 end | Governance Steward |

1. Spec Review Session

The sprint backlog is reviewed ordered by SEMI score, highest first. For each Amber item (SEMI 7-8), the team identifies the highest-risk dimension and agrees a mitigation action completable by day 3. Red items are removed and returned to the Anchor.

Output: every sprint item has a confirmed SEMI score. Amber items have a documented mitigation. Red items are out of the sprint with specific gaps identified.

2. AI-First Daily Standup

Async-first. Three-part format per contributor (90 seconds each):

  1. Progress signal: "X is 60% complete against criteria Y" or "X is done." Not "I worked on X."
  2. AI workflow note: "The LLM reviewer flagged three spec gaps; two resolved, one open." Or "No AI tooling involvement today."
  3. Blocker flag: Any blocker. Classified after standup: internal, cross-node, or external.

3. Mid-Sprint DOI Check-in

Fixed agenda: Sprint DOI Map RAG review (10 min), Amber review and automatic Red conversion if recovery is off-track (10 min), capacity check (5 min), named actions with owners (5 min). This is not a sprint review or a planning session. Redirect if those activities start to bleed in.

4. Sprint Review

The Anchor opens with one sentence: was the sprint goal achieved? Each contributor demonstrates completed work against acceptance criteria. No slide decks. Show the working software, then show the criterion it meets. Stakeholder feedback is categorised immediately: confirmed acceptance, action required, or information. The final 20 minutes are the IIS Review.

5. AI-Augmented Retrospective

Pre-ceremony (mandatory): the AI retrospective agent synthesises throughput trend (last 4 sprints), SEMI distribution, DOI Health Score trend, IIS Learning Card themes, and recurring retrospective patterns. Shared one hour before the session. This eliminates 20 to 30 minutes of context-setting.

Three tracks, 20 minutes each:

Track 1: Delivery System

Opening question: 'What is the single biggest friction point in how we deliver work?' Systemic delivery issues identified. Named actions with owners and sprint deadlines enter the next sprint backlog directly, not a parking lot.

Track 2: Collaboration and AI

Opening question: 'Where did AI tooling help us, where did it slow us down, and where did our human collaboration patterns break down?' AI workflow incidents, prompt updates, agent reconfiguration, and collaboration norms.

Track 3: Learning and Growth

Opening question: 'What did we learn this sprint that we should not lose, and what capability are we missing that would make the biggest difference?' Pattern library updates, IIS theme candidates, team health signals.

6. Cycle Portfolio Review

90 minutes. Anchors, Governance Steward, product or strategy leadership. Governance Steward presents the Cycle DOI Summary. Each Anchor presents Cycle Objective outcomes (5 minutes each, signal and learning, no blame). IIS Portfolio reviewed (themes, Learning Cards, ROI of 15% capacity). Next cycle Objectives and IIS themes set. ACN formation or dissolution decisions made.


Part V: AI-First Teams

Spec-Driven Development

AI-first teams operate with a fundamentally different production function. When AI tools can produce a working implementation from a clear specification in hours, the constraint shifts from coding capacity to specification quality. Vague requirements produce unreliable output regardless of the AI tools involved.

Spec-Driven Development (SDD) treats the specification as the primary engineering artefact. Before any implementation begins, the team produces a complete spec that is reviewed, challenged, and signed off.

A complete spec contains

Problem statement: What user or system need does this address?
Acceptance criteria: What must be true for this to be done? Each criterion must be independently testable.
Edge cases: What inputs or states must the system handle outside the happy path?
Integration constraints: What existing systems, schemas, or APIs does this touch? What must not change?
AI guidance notes: Where is AI output expected to be used, and what review is required?
Definition of done: The specific conditions under which this item can move to Done in the SEMI model.
SDD changes what sprint planning is for. Planning is no longer primarily about task decomposition and effort estimation. It is about spec quality review. A sprint is ready to begin when every item has a spec that any contributor, human or AI, could implement without interpretation.

Team Sizing for AI-First Work

AI-first teams challenge the conventional Agile sizing heuristic of five to nine people. When AI tools multiply individual output, small teams become viable for work that previously required larger groups.

| Configuration | Size | When to use | Not suitable for |
| --- | --- | --- | --- |
| AI-First Core Team | 1-4 people | Well-scoped delivery, clear specs, high AI leverage, understood domain | Discovery work, cross-functional stakeholder alignment, novel domains |
| Standard ACN | 4-8 people | Default for most delivery workstreams. Full SHIFT operation. | N/A: this is the target size |
| Coalition ACN | 8-15 people | Complex programmes; multiple workstreams requiring coherence | Single-domain work, first adoption cycles |
At 8 to 15 people, the Anchor and Governance Steward become full-time, and the LGP cadence increases. Split into sub-nodes before reaching 15 people.

The SEMI Model

The SEMI model is SHIFT's estimation and sprint-readiness system. It replaces story points. Each work item receives four scores on a 1 to 3 scale before it can enter a sprint. The composite score determines sprint entry eligibility, not calendar duration.

S · Specification Quality: How clear, complete, and testable is the specification? 1 = an engineer can start without clarifying questions; 3 = acceptance criteria are missing or untestable.

E · Effort Uncertainty: Has the team done this before? 1 = known work with clear precedent; 3 = novel work where the approach is unclear.

M · Multi-System Impact: How many external systems, teams, or dependencies does this touch? 1 = fully contained work; 3 = work touching multiple systems with compliance implications.

I · Implementation Confidence: How confident is the team in the chosen approach? 1 = a proven pattern the team has used before; 3 = a spike may be needed before implementation begins.

S: Specification Quality (1-3)

| Score | Label | Definition |
| --- | --- | --- |
| 1 | Clear | Acceptance criteria are written, unambiguous, and testable. Edge cases are documented. A contributor can start without clarifying questions. |
| 2 | Partial | Acceptance criteria exist but have gaps. A contributor can start but will need one or two clarifications. Some edge cases undocumented. |
| 3 | Unclear | Acceptance criteria are missing, vague, or untestable. A contributor cannot start without a significant clarification session. |

E: Effort Uncertainty (1-3)

| Score | Label | Definition |
| --- | --- | --- |
| 1 | Known | The team has done this before. Similar work completed in the last three cycles. The approach is clear. |
| 2 | Similar | Similar to past work but with meaningful differences. Some unknowns. Team has a hypothesis but has not validated it. |
| 3 | Novel | The team has not done this before. The approach is uncertain. Multiple viable paths may exist. |

M: Multi-System Impact (1-3)

| Score | Label | Definition |
| --- | --- | --- |
| 1 | Contained | No cross-node dependencies. No external services beyond stable integrations. No schema changes. No security implications. |
| 2 | Adjacent | One cross-node dependency or one external service integration. Schema changes within node ownership. Minor security review may be required. |
| 3 | Wide | Multiple cross-node dependencies. External integrations with uncertain behaviour. Schema changes affecting other nodes. Security or compliance review required. |

I: Implementation Confidence (1-3)

| Score | Label | Definition |
| --- | --- | --- |
| 1 | Confident | The implementation approach has been used before in similar contexts. The team is aligned. No significant technical risk. |
| 2 | Tentative | The team has a preferred approach but has not validated it. At least one alternative. Some technical risk. |
| 3 | Uncertain | No clear implementation approach. A spike may be needed before implementation begins. High technical risk. |

Sprint entry rules

SEMI Score = S + E + M + I  ·  Minimum: 4  ·  Maximum: 12

| SEMI total | Band | Sprint entry rule |
| --- | --- | --- |
| 4-6 | 🟢 Green: Sprint Ready | Enter the sprint. No additional preparation required. |
| 7-8 | 🟡 Amber: Conditional | Enter only with a documented mitigation for the highest-scoring dimension, agreed by Delivery Lead and Anchor. Action must be completable by day 3. |
| 9-10 | 🔴 Red: Spec Required | Cannot enter the sprint. Return to Anchor for specification improvement. Re-score before next sprint planning. |
| 11-12 | ⚫ Black: Decompose | Item is too large or complex. Decompose into child items. Re-score all child items before sprint planning. |

AI-specific scoring modifiers

| Condition | Dimension | Modifier |
| --- | --- | --- |
| AI output is non-deterministic and acceptance criteria do not account for output variance | S | +1 |
| AI model is externally hosted and rate-limited | M | +1 |
| AI model requires prompt engineering not yet documented | I | +1 |
| AI output is the primary user-facing output (higher evaluation complexity) | E | +1 |
| AI agent has cross-system tool access | M | +1 per additional tool beyond 2 |

Modifiers are additive but capped: no single dimension exceeds 3.
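The per-dimension cap interacts with stacked modifiers in ways that are easy to get wrong by hand. A sketch of the scoring arithmetic under the rules above; the modifier encoding and function name are illustrative.

```python
def semi_score(base: dict[str, int], modifiers: list[tuple[str, int]]) -> tuple[int, str]:
    """Apply AI-specific modifiers to base S/E/M/I scores (each 1-3).

    Modifiers are additive per dimension but capped: no dimension exceeds 3.
    Returns the composite score (4-12) and its sprint-entry band.
    """
    dims = dict(base)                # e.g. {"S": 1, "E": 2, "M": 1, "I": 1}
    for dim, bump in modifiers:      # e.g. ("M", 1) for an extra tool beyond 2
        dims[dim] = min(3, dims[dim] + bump)
    total = sum(dims.values())
    if total <= 6:
        band = "Green: sprint ready"
    elif total <= 8:
        band = "Amber: conditional, mitigation required"
    elif total <= 10:
        band = "Red: spec required"
    else:
        band = "Black: decompose"
    return total, band

# A new LLM item: clear spec, but non-deterministic output and a rate-limited model.
print(semi_score({"S": 1, "E": 2, "M": 1, "I": 1}, [("S", 1), ("M", 1)]))  # (7, Amber)
```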

SEMI scoring reference table

| Work type | S | E | M | I | Total | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Bug fix, known root cause | 1 | 1 | 1 | 1 | 4 | Sprint ready |
| Bug fix, unknown root cause | 2 | 3 | 1 | 2 | 8 | Conditional: timebox investigation |
| New UI component (standard) | 1 | 1-2 | 1 | 1 | 4-5 | Sprint ready |
| New API endpoint (standard) | 1-2 | 1-2 | 1-2 | 1 | 4-7 | Usually sprint ready |
| New API endpoint (external auth) | 2 | 2 | 3 | 2 | 9 | Spec required: clarify auth integration |
| LLM integration (new) | 3 | 3 | 2 | 3 | 11 | Decompose: separate spike from integration |
| LLM integration (established pattern) | 1 | 2 | 2 | 1 | 6 | Sprint ready after pattern documented |
| Data migration (small) | 1 | 2 | 2 | 1 | 6 | Sprint ready with rollback plan |
| Data migration (large, cross-system) | 2 | 3 | 3 | 2 | 10 | Red: full spec and rollback required |
| Infrastructure change (proven) | 1 | 1 | 1 | 1 | 4 | Sprint ready |
| Security or compliance feature | 2-3 | 2 | 3 | 2 | 9-10 | Spec required, compliance review mandatory |

SEMI pattern analysis

| Pattern | Systemic signal | Fix |
| --- | --- | --- |
| Consistent S=3 | Specs written too late or by people disconnected from implementation | Introduce Spec Review earlier; pair Anchor with contributor on spec writing |
| Consistent E=3 on one work type | Team treats familiar work as novel; not building pattern familiarity | Document an implementation pattern for this type; E should decrease to 1 or 2 after |
| Consistent M=3 | Node domain boundaries too wide; cross-system work without interface agreements | Tighten capability domain; establish formal interface documents with adjacent nodes |
| Consistent I=3 | Team lacks confidence in implementation approaches; capability gap | Targeted IIS themes on technical capability building; pair contributors on complex items |

Forecasting: Throughput and Monte Carlo

Velocity-based forecasting collapses in AI-first teams. A team's effective throughput can double between sprints as prompting skills improve or new tooling is adopted. Effort weighting becomes noise. SHIFT uses throughput: counting the number of work items completed per sprint, regardless of estimated size.

Throughput vs. velocity

| Dimension | Velocity (story points) | Throughput (items) |
| --- | --- | --- |
| Unit consistency | Weak: points vary by estimator and over time | Strong: item = item |
| AI work compatibility | Poor: effort variance not captured | Moderate: calibratable with SEMI bands |
| Gaming risk | High: point inflation is common | Low: items are countable |
| Stakeholder clarity | Low: stakeholders do not understand points | High: 'X items done' is legible to everyone |
Definition of a completed item: meets all acceptance criteria, has passed its required review step (including human review for AI-assisted outputs), and is deployed to at least a staging environment or integration-tested. Items "in review" do not count. AI outputs without human review do not count. Soft-done items do not count.
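Encoded as a predicate, the definition leaves no room for soft-done items. A sketch with illustrative field names; map them onto whatever your tracker actually records.

```python
def counts_toward_throughput(item: dict) -> bool:
    """SHIFT's completed-item definition as a predicate (field names illustrative).

    'In review' items, soft-done items, and unreviewed AI output all return False.
    """
    reviewed = item["review_passed"] and (
        not item["ai_assisted"] or item["human_review_passed"]
    )
    deployed = item["on_staging"] or item["integration_tested"]
    return item["meets_acceptance_criteria"] and reviewed and deployed
```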

Monte Carlo probability bands

| Band | Probability | Use for |
| --- | --- | --- |
| P50 | 50% | Internal planning only. Do not share externally. |
| P70 | 70% | Sprint goal-setting and internal commitment. |
| P85 | 85% | Stakeholder commitments. |
| P95 | 95% | Contractual or external commitments. |
Stakeholder communication template: "Based on our last 10 sprints, we have 85% confidence this will be complete by 14 March. If our throughput holds to recent patterns, we expect to finish by 10 March." This is more honest and more useful than a single predicted date.

Reference class for early baseline

Monte Carlo requires a minimum of 8 sprints of internal data. Before that, use industry-baseline throughput distributions, blended with actual data from sprint 3 onwards (50/50 blend). Always flag to stakeholders when reference class data is in use.

| Team size | Green items/sprint | Amber items/sprint |
| --- | --- | --- |
| 3 people | 5-8 | 2-4 |
| 5 people | 8-13 | 3-6 |
| 7 people | 12-18 | 5-9 |
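The simulation itself needs nothing beyond the standard library. A minimal sketch of a "how many sprints until N items are done" Monte Carlo run over historical throughput; the sample data is a placeholder, and with fewer than 8 sprints of history you would blend in reference-class values first, per the rule above.

```python
import random

def sprints_to_finish(history: list[int], remaining: int, runs: int = 10_000) -> dict[str, int]:
    """Sample sprint throughput from history until `remaining` items are done.

    Returns the sprint count at each SHIFT probability band. `history` is the
    rolling list of items completed per sprint.
    """
    if not history or max(history) <= 0:
        raise ValueError("need at least one sprint of positive throughput")
    results = sorted(_one_run(history, remaining) for _ in range(runs))
    band = lambda p: results[int(p * runs) - 1]
    return {"P50": band(0.50), "P70": band(0.70), "P85": band(0.85), "P95": band(0.95)}

def _one_run(history: list[int], remaining: int) -> int:
    sprints = 0
    while remaining > 0:
        remaining -= random.choice(history)  # resample one historical sprint
        sprints += 1
    return sprints

# Ten sprints of throughput data, 24 items remaining in the work thread:
print(sprints_to_finish([9, 12, 10, 8, 11, 13, 9, 10, 12, 11], remaining=24))
```

Multiply the P85 sprint count by your sprint length to produce the stakeholder commitment date.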

AI-First Practices

Prompt Library

A node-maintained library of effective prompts for common task types: spec writing, code generation, test generation, review, documentation. Referenced before starting AI-assisted work. Updated whenever a prompt produces significantly better or worse results than expected. The Prompt Library is a first-class team artefact, not an individual's notes.

AI Review Protocol

A structured checklist for reviewing AI-generated output: Does the output match the spec acceptance criteria? Does it handle the documented edge cases? Does it behave correctly in integration? Are there security implications? Has it been tested against the definition of done? No item transitions to Done without this review being documented as completed.

Pair-with-AI

AI is a collaborator, not an autonomous agent. The contributor owns the decision on all AI output. Pair-with-AI means the contributor actively shapes the AI's work: writing the spec, directing the prompts, reviewing the outputs, and deciding what to accept, modify, or reject. Not: run the AI and accept the output.

AI Responsibility Map

A one-page living document maintained by each ACN. For each contributor, it defines which tasks are AI-assisted (human does the thinking, AI assists execution), which are AI-led with human review (AI produces the draft, human evaluates), and which are human-only. Updated at cycle boundaries. Referenced in the Track 2 retrospective every sprint.

AI-First Mindset Shifts

| From | To | Why it matters |
| --- | --- | --- |
| Estimation accuracy | Specification quality | A good spec is worth more than an accurate estimate. Let throughput data handle forecasting. |
| Velocity | Throughput | Velocity measures effort-weighted output. In AI-first teams, effort weighting becomes noise. Count items. Use Monte Carlo. |
| Individual heroics | System quality | AI tools make spec quality, tooling, and review process the binding constraint, not the individual. |
| Done | Verified | AI-generated output needs rigorous verification. Done means: matches spec, passes edge cases, verified in integration. |
| Synchronous planning | Asynchronous alignment | If the spec is clear, most planning questions resolve asynchronously. Reserve sync time for decisions that need dialogue. |
| AI as tool | AI as collaborator | A tool is used. A collaborator is directed, reviewed, and held to a standard. Contributors own the results of AI-assisted work. |
| Prompt first | Spec then prompt | The quality of a prompt is bounded by the quality of the spec behind it. Invest in the spec first. |

Part VI: Integration

Integrating with Other Frameworks

SHIFT is designed to be adopted in layers, not as a wholesale replacement. Existing frameworks contain genuine value. These integration maps show precisely what to keep, what to replace, and what to add.

SHIFT + Scrum

| Scrum element | SHIFT treatment | SHIFT equivalent |
| --- | --- | --- |
| Sprint cadence | Keep | 2-week sprint |
| Sprint Goal | Keep + enhance with DOI | Sprint Contribution in DOI map |
| Product Owner | Replace with expanded accountability | Anchor |
| Scrum Master | Replace with operational role | Delivery Lead |
| Development Team | Keep + AI Responsibility Map | Node Contributors |
| Backlog Refinement | Replace | Spec Review Session (SEMI-driven) |
| Sprint Planning | Merge into Spec Review | Spec Review + sprint goal confirmation |
| Retrospective | Replace with 3-track format | AI-Augmented Retrospective |
| Story points / velocity | Replace | SEMI scoring + throughput + Monte Carlo |
| Scrum of Scrums | Replace | LGP (bi-weekly, max 75 min) |
| Innovation capacity | Add | IIS (15% of sprint capacity) |
| Strategy alignment | Add | DOI model |

SHIFT + Kanban

Kanban and SHIFT share flow-based thinking. SHIFT adds time-boxing and strategic alignment without disrupting Kanban flow.

| Element | SHIFT treatment |
| --- | --- |
| WIP limits | Keep. SHIFT endorses WIP limits at node level. |
| Flow metrics (cycle time, throughput) | Keep. Throughput data feeds directly into Monte Carlo. |
| Visualisation discipline | Keep. The Kanban board becomes the Node Health Card's delivery view. |
| Sprint time-boxing | Add. The sprint is a planning and review cadence, not a flow constraint. Work in flight at sprint end counts toward the next sprint's throughput if it completes there. |
| DOI alignment | Add. DOI tags visible on all cards. Green/Amber/Red indicators. |
| Innovation capacity | Add. Dedicated Innovation swim lane with its own WIP limit. |
| SEMI scoring | Add. Applied before items enter the WIP queue. |

SHIFT + SAFe (approximately 40% ceremony reduction)

| SAFe ceremony | Duration | SHIFT replacement | SHIFT duration | Reduction |
| --- | --- | --- | --- | --- |
| Iteration Planning | 4 hours | Sprint Planning + Spec Review | 90 + 45 min | 67% |
| Daily Scrum | 15 min | AI-First Standup | 15 min | 0% (deeper) |
| Iteration Review | 60 min | Sprint Review | 60 min | 0% (deeper) |
| Iteration Retrospective | 60 min | AI-Augmented Retro | 60 min | 0% (deeper) |
| Backlog Refinement | 2 hours | Spec Review (SEMI-driven) | 45 min | 63% |
| PO Sync | 30 min weekly | DOI async update | 15 min | 50% |
| Scrum of Scrums | 30-60 min weekly | LGP (bi-weekly) | 75 min bi-weekly | 38% |
| ART Sync | 60 min bi-weekly | Merged into LGP | Absorbed | 100% |
Net reduction per person per sprint: 3.5 to 5 hours. Over a 12-week PI (6 sprints), this saves 21 to 30 person-hours per contributor.

SHIFT + LeSS

LeSS and SHIFT share a foundational philosophy: scaling should be achieved by descaling. SHIFT adds back what LeSS deliberately removes, but lightly.

| What SHIFT adds to LeSS | Why |
| --- | --- |
| ACN outcome ownership | LeSS feature teams own outputs. ACNs own capability domain outcomes. |
| LGPs for lightweight coordination | LeSS deliberately removes coordination roles; LGPs provide structure without rebuilding what LeSS removed. |
| DOI model | Distributes strategy alignment to sprint level, complementing the overall Product Owner connection. |
| IIS | LeSS has no structured innovation capacity provision. IIS adds it. |
| SEMI + Monte Carlo | LeSS relies on story points and velocity. SEMI + Monte Carlo are more accurate for AI-first teams. |
SHIFT + LeSS works particularly well for organisations transitioning from SAFe who want the lean philosophy of LeSS but still need some coordination structure during the transition period. SHIFT provides that bridge.

SHIFT + OKRs

DOI is the operational bridge between OKRs and sprint delivery.

| OKR level | SHIFT equivalent | Cadence |
| --- | --- | --- |
| Company OKR | Alignment layer input | Quarterly |
| Team OKR | Cycle Objective (in DOI map) | 12-week cycle |
| Key Result | DOI Sprint Contribution outcome signal | Sprint |
| Initiative / Output | Sprint Contribution (delivery work) | Sprint |

A declining DOI Health Score mid-cycle is a leading indicator that the team may not hit its Key Results. This surfaces the problem six weeks before the quarterly review rather than at it. The Cycle Portfolio Review replaces the quarterly OKR retrospective rather than running alongside it.


Part VII: Adoption

SHIFT Maturity Model

The SHIFT Maturity Model is a navigation tool, not a certification programme. Teams use it to understand where they are, what to focus on next, and what good looks like at each level. Run at cycle end, facilitated by the Governance Steward, 30 minutes.

Level 1: Established Foundation

Self-assessment questions

  1. Does every sprint end with a Sprint Review where working software is demonstrated against acceptance criteria?
  2. Does every sprint item have a SEMI score before entering the sprint?
  3. Are you tracking the number of items completed per sprint (throughput)?
  4. Can every team member describe the node's capability domain in one sentence?
  5. Does every Node Contributor own the review step for their AI-assisted outputs?

Advancement to next level: 4 of 5 yes, consistently across at least 3 sprints.

Level 2: Aligned Delivery

Self-assessment questions

  1. Can you point to the Cycle Objective that each current sprint item contributes to?
  2. Have you had at least one Red DOI item this cycle and resolved it?
  3. Is IIS happening every cycle, and do Learning Cards exist as output?
  4. Is the LGP pre-read being distributed 24 hours before every session?
  5. Is throughput data from the last 6+ sprints available and in use for planning?

Advancement to next level: 4 of 5 yes, consistently across at least one full cycle (12 weeks).

Level 3: Adaptive Intelligence

Self-assessment questions

  1. Are you communicating forecast confidence using P70/P85 probability bands to stakeholders?
  2. Has at least one IIS Learning Card been promoted through the full funnel to a cycle investment?
  3. Does the team consult the pattern library before scoring SEMI dimensions?
  4. Is the AI retrospective synthesis pulling data from at least four previous sprints?
  5. Are retrospective actions being completed at a rate above 70%?

Advancement to next level: 4 of 5 yes, consistently across two consecutive cycles.

Level 4: System Contribution

Self-assessment questions

  1. Have you shared pattern library content with at least two other teams in the last cycle?
  2. Are AI agent roles from your node being used or adapted by other nodes?
  3. Is your SEMI calibration accurate enough that you rarely have surprise scope overruns?
  4. Can a new team member contribute meaningfully within 5 days, using documented materials?
  5. Has at least one IIS output from your team influenced a product roadmap or cycle portfolio decision?

Getting Started

SHIFT is adopted in four phases over 12 sprints. Do not try to run everything at once.

| Phase | Sprints | Key introductions | Maturity target |
| --- | --- | --- | --- |
| Phase 0: Ground Zero | Week -1 (before Sprint 1) | Roles, domain definition, AI Responsibility Map v1, SEMI-score top 20 backlog items, Sprint 1 planning | Pre-Level 1 |
| Phase 1: Foundation | Sprints 1-3 | AI-first standup, throughput tracking, Sprint Review, 3-track retrospective, Spec Review (Sprint 2), IIS at 10% (Sprint 3) | Level 1 |
| Phase 2: Alignment | Sprints 4-6 | Cycle Objectives, DOI map, mid-sprint DOI check, LGP, Red Item Protocol, IIS at 15%, Cycle Portfolio Review | Level 2 |
| Phase 3: Intelligence | Sprints 7-9 | Monte Carlo with reference class (Sprint 7), Monte Carlo with internal data (Sprint 8), pattern library, IIS pilot evaluation | Level 3 |
| Phase 4: Full operation | Sprints 10-12+ | Full SEMI calibration, cross-team pattern library, Maturity Level 4 self-assessment, reference team for other adopters | Level 3-4 |
Start with structure before ceremonies. Identify ACNs, assign Anchors, and establish the first LGP before running any SHIFT ceremonies. Teams that add ceremonies before establishing governance spend the first cycle fixing coordination problems rather than delivering.

Appendix

Vocabulary

| Term | Definition |
| --- | --- |
| ACN | Adaptive Collaboration Node: the primary delivery unit in SHIFT; owns a capability domain |
| LGP | Lean Governance Pod: bi-weekly cross-node decision forum; maximum 7 participants, 75 minutes |
| IIS | Innovation-Integrated Sprint: ring-fenced exploration capacity (15% standard) within every sprint cycle |
| DOI | Dynamic Objectives Integration: mechanism connecting sprint work to cycle objectives via RAG tagging |
| SEMI | Specification Quality, Effort Uncertainty, Multi-system Impact, Implementation Confidence: the four-dimension item readiness model |
| Anchor | Accountable owner of an ACN's capability domain and outcomes |
| Delivery Lead | Operational owner of an ACN's sprint delivery system |
| Governance Steward | Owner of LGP function and governance system health |
| Node Health Card | One-page ACN status artefact updated every sprint |
| Learning Card | Structured IIS output: what was explored, what was learned, what is recommended |
| Cycle | 12-week delivery and alignment period (six sprints) |
| P70 / P85 | Monte Carlo probability bands: 70% and 85% confidence delivery forecasts |
| DOI Health Score | Aggregate cycle-level alignment score: (Green Contributions / Total) × 100 |
| Red Item Protocol | Structured escalation process for Red DOI items: 4-hour declaration SLA, 24-hour resolution meeting |
| AI Responsibility Map | Node-level document defining which tasks are AI-assisted, AI-led with review, or human-only |
| Spec-Driven Development | Practice treating the specification as the primary engineering artefact before any implementation begins |
| Pattern Library | Shared repository of implementation patterns, SEMI calibration insights, and IIS learnings |
| Throughput | Number of work items completed per sprint; the primary SHIFT delivery metric (replaces velocity) |

Frequently Asked Questions

Practical answers to the questions that come up most often when teams begin implementing SHIFT.

Starting SHIFT

We want to start implementing SHIFT today. What are the three most important things to do first?
  1. Define your ACN clearly. Write the capability domain in a single sentence. If it takes two sentences, the scope is too broad. Everything else depends on this agreement.
  2. Score your backlog with SEMI. Take the top 15 to 20 items and score them using the four dimensions. Do not aim for precision in the first round — aim for surfacing disagreement. When two people give the same item very different scores, that conversation needs to happen before work begins.
  3. Start tracking throughput from sprint one. Count the number of items marked Done at the end of each sprint. Not story points, not hours — items completed and meeting acceptance criteria. Eight sprints of throughput data will transform your planning conversations.

Everything else — IIS, DOI, LGP, Monte Carlo — layers on top of these three. Teams that skip the foundation and go straight to the advanced mechanisms fail consistently.

We are already running Scrum. What changes in sprint 1 if we adopt SHIFT?
  1. Replace standup format. Move to three signals: a progress signal against a measurable criterion, an AI workflow note (even if it is 'no AI involvement today'), and a blocker flag. Blockers are addressed after standup, not during it.
  2. Add SEMI scoring to refinement. Before any item enters the sprint, it gets a SEMI score. Items scoring 9 or above do not enter the sprint. Items scoring 7 to 8 enter only with a named mitigation on the highest-risk dimension.
  3. Replace your retrospective with the 3-track structure. Track 1: delivery system. Track 2: collaboration and AI. Track 3: learning and growth. Do not combine tracks or skip one.

IIS, DOI, LGP, and Monte Carlo are introduced in subsequent sprints per the phased adoption plan in Chapter 21. Do not add them in sprint 1.

What is the minimum viable SHIFT for a 3-person team?
  1. Roles: one person combines Anchor and Governance Steward. One person combines Delivery Lead and primary Contributor. One person is a contributor.
  2. Ceremonies: AI-first standup, Spec Review at sprint start, Sprint Review, 3-track Retrospective. Drop IIS (suspend below 4 people), drop LGP (replace with a weekly 30-minute alignment meeting with one external stakeholder), drop the formal Cycle Portfolio Review.
  3. Metrics: throughput and SEMI scoring. Skip Monte Carlo until sprint 8.
  4. DOI: one or two cycle objectives with sprint contributions tagged Green, Amber, or Red. Review at the mid-sprint check.
How long before we see results?
  1. Sprints 1-2: process friction increases temporarily. Teams name spec gaps they previously ignored, standup takes slightly longer, and SEMI scores cause disagreements. This is the framework surfacing problems that already existed invisibly.
  2. Sprints 3-4: SEMI calibration stabilises. Mid-sprint surprises decrease noticeably. Planning conversations become shorter and more honest.
  3. Sprints 6-8: throughput data becomes reliable enough to use for planning. The first Cycle Portfolio Review is when most teams feel the framework operating at altitude.
  4. Cycle 2 onwards: IIS begins producing Learning Cards worth promoting. Monte Carlo forecasts carry credibility. DOI health scores give the team and leadership a shared, objective view of alignment.

The failure mode to avoid: abandoning SHIFT in sprint 2 or 3 because it feels like overhead. Teams that stay with it past sprint 4 uniformly report that the overhead inverts into a net reduction in wasted coordination time.

Our organisation will not let us drop story points. Can we run SHIFT alongside them?
  1. Yes, with a clear separation of purpose. Use story points exclusively for external reporting or contractual commitments where they are required. Use SEMI and throughput for internal planning, sprint readiness, and forecasting. Do not mix them in the same conversation.
  2. The practical approach: score SEMI, track throughput, run Monte Carlo from throughput data. If someone external asks for a story point count, derive it from throughput after the sprint using your historical average points per item.

Warn your team explicitly: SEMI scores and story points answer different questions. SEMI answers 'is this item ready and how complex is it?' Story points attempt to answer 'how much effort will this take?' Conflating them is the most common estimation failure in Agile teams.

Metrics and Tracking

What are the core SHIFT metrics and what exactly do I track for each one?
  1. Throughput (items per sprint): count of work items moved to Done per sprint that meet all acceptance criteria, passed their review step, and are deployed to at least staging. Record as a single number per sprint. Build a rolling 10-sprint dataset.
  2. SEMI Score (4-12 per item): score every item at sprint entry. Log the four individual dimension scores, not just the composite. After each sprint, review which dimension scores were wrong. This calibration is how SEMI improves over time.
  3. DOI Health Score (0-100): track at sprint start, mid-sprint, and sprint end. Formula: (Green Sprint Contributions ÷ Total Sprint Contributions) × 100. Record the score and the trend — trend matters more than any single score.
  4. IIS Capacity (%): track at sprint planning. Record whether IIS ran, what theme was explored, how many Learning Cards were produced, and whether any were promoted.
  5. Monte Carlo P70/P85: from sprint 8 onwards, for any active work thread with more than three items remaining. Record the P70 date (internal planning) and P85 date (stakeholder commitment).
How do we calculate the DOI Health Score in practice, sprint by sprint?
  1. At sprint planning, tag every sprint backlog item against a Cycle Objective as Green, Amber, or Red. Most will be Green at planning.
  2. Update tags at the mid-sprint check (Day 6-7) and at sprint end.
  3. Formula: DOI Health Score = (count of Green Sprint Contributions ÷ total Sprint Contributions) × 100.
  4. Example: 12 Sprint Contributions. At mid-sprint: 8 Green, 3 Amber, 1 Red. Score = (8 ÷ 12) × 100 = 67 — Moderate. The 1 Red item triggers the Red Item Protocol immediately.

The common mistake: teams tag items Green at planning and never update them. A DOI Health Score of 100 at sprint end that was not updated mid-sprint is a fiction. The value of DOI is the mid-sprint signal.

How do we run Monte Carlo in practice? Do we need special software?
  1. Manual approach (early sprints): record throughput per sprint in a spreadsheet. Use a random number generator to sample throughput values from your historical data. Repeat 500 times. The percentage of simulations completing within each timeframe gives you probability bands.
  2. Spreadsheet approach: use RANDBETWEEN or random sample formulas, set up 1,000 simulation rows. P50, P70, P85, and P95 emerge from the distribution. Free templates for this are easy to adapt.
  3. ActionableAgile Analytics (actionableagile.com): the recommended tool for SHIFT teams. Connects directly to Jira and Azure DevOps, produces Monte Carlo 'How Many' and 'When' simulations out of the box.
  4. Nave (nave.app): similar capability, strong Jira integration.
  5. Azure DevOps Analytics: built-in throughput and cycle time reports. Monte Carlo requires a third-party extension (ActionableAgile for ADO) or a connected spreadsheet.

Minimum viable setup: a shared spreadsheet with one row per sprint and one column for items completed. Five minutes to maintain per sprint, provides everything needed for manual P70/P85 calculations.

What leading indicators should we track, not just lagging ones?
  1. SEMI score distribution at sprint entry: a sprint where 40% of items entered Amber or Red is a leading indicator of mid-sprint blockers and incomplete work at sprint end. Check this at Spec Review, before delivery begins.
  2. Amber item age in the DOI map: an Amber item on Day 3 that has not moved to Green by Day 5 will almost certainly become Red. Track Amber item age daily.
  3. Throughput trend (last 3 sprints): a declining trend across three consecutive sprints is a leading indicator of a systemic delivery problem. A single low-throughput sprint is noise. Three consecutive is a signal.
  4. IIS Learning Card signals: a cycle where all IIS Learning Cards are Neutral or Negative is a leading indicator that themes are poorly chosen or IIS capacity is too low.
  5. Spec Review duration: if it consistently runs over 45 minutes, items are arriving at sprint planning underspecified. Spec Review duration is a leading indicator of sprint predictability.

Tooling: Jira, Azure DevOps, and Others

We use Jira. How do we configure it to support SHIFT without a major overhaul?
  1. SEMI Score fields: add a custom Number field named 'SEMI Score' (range 1-12) and four sub-fields S, E, M, I (range 1-3 each). Display them on the issue create and edit screens. This is the single most impactful change — it forces SEMI scoring at issue creation, not as a separate process.
  2. DOI Status field: add a custom Single-select field with options Green, Amber, Red, and Not Tagged. Display it on the sprint board card face. Create a Jira dashboard gadget filtered by sprint showing counts of each status.
  3. IIS work: create a dedicated Epic named 'IIS [Cycle Number]' or use a label 'IIS' on all innovation sprint issues. This separates IIS throughput from delivery throughput in your Monte Carlo baseline.
  4. Throughput tracking: use the Jira Velocity Chart in Issue Count mode, not Story Points. Export this data to a spreadsheet every sprint end and maintain a rolling 10-sprint throughput dataset.
  5. ActionableAgile for Jira: install from the Atlassian Marketplace. Adds Monte Carlo simulator, throughput run chart, cycle time scatterplot, and WIP ageing directly inside Jira. Recommended from sprint 8 onwards.
  6. Node Health Card: maintain as a Confluence page linked from the Jira project. Update the four fields (throughput trend, DOI Health Score, IIS status, open dependencies) at sprint end.
We use Azure DevOps. How do we configure it for SHIFT?
  1. SEMI Score fields: in Process customisation (Organisation Settings > Boards > Process), add custom integer fields to your work item type — 'SEMI Score' plus 'S-Score', 'E-Score', 'M-Score', 'I-Score'. Add them to the work item form under a 'SHIFT' section.
  2. DOI Status field: add a custom picklist field named 'DOI Status' with values Green, Amber, Red, Not Tagged. Create a query in Azure Boards grouping by DOI Status filtered to the current sprint. Pin this as a dashboard widget.
  3. IIS work items: use a tag 'IIS' or a dedicated Area Path/Feature named 'IIS'. Use this to produce a separate throughput chart for IIS items, keeping it out of your delivery baseline.
  4. Throughput tracking: navigate to Boards > Analytics > Throughput. Switch granularity to Sprint, set window to last 10 sprints. Export to Excel for your Monte Carlo spreadsheet.
  5. Monte Carlo in ADO: use ActionableAgile for Azure DevOps, Nave, or export throughput data to an Excel Monte Carlo template.
  6. LGP pre-read and Node Health Card: use Azure DevOps Wiki. Create a wiki page per ACN with a standard template, linked from the team dashboard. The Governance Steward updates the decision log as a separate wiki page after each LGP session.
We use Linear. Does SHIFT work with it?
  1. SEMI scoring: add four custom properties to your issue type — S, E, M, I as number fields (1-3), plus a computed SEMI Score. Linear supports custom properties natively. Add these to your issue template so they are prompted at creation.
  2. DOI Status: add a custom select property (Green, Amber, Red). Use Linear's filter and grouping to view DOI distribution across active cycle issues.
  3. Throughput tracking: Linear's built-in cycle analytics show issue completion counts per cycle. Export to a spreadsheet for Monte Carlo inputs.
  4. IIS work: use a dedicated Label ('IIS') or a separate Linear project for IIS work within each cycle.
  5. Monte Carlo: Linear has no native Monte Carlo tool. Export throughput data and use ActionableAgile (which supports CSV import) or a spreadsheet template.

Linear is well-suited to teams of 3 to 7 people. For teams above 7 to 8 people or multi-node coordination, Jira or ADO is the better choice.

What about Notion, Trello, or spreadsheet-only setups?
  1. Minimum viable toolset: a shared spreadsheet with two tabs — (1) Sprint throughput log, one row per sprint; (2) SEMI backlog, listing each item with S, E, M, I scores and composite.
  2. A shared document (Google Docs, Notion, or Confluence) per sprint for the Sprint DOI Map, tagged Green/Amber/Red, updated mid-sprint.
  3. A shared document per ACN for the Node Health Card (updated each sprint end) and per IIS sprint for Learning Cards.
  4. Notion: good for documentation artefacts (Node Health Cards, Learning Cards, Pattern Library, AI Responsibility Map, LGP pre-reads and decision log). Weak for sprint tracking and throughput analytics. Use Notion for documentation alongside a dedicated board tool for sprint management.
  5. Trello: usable for 3 to 5 person teams. Add SEMI Score as a custom field on cards, DOI Status as a label, IIS as a separate list. Not recommended above 5 people.

The tool matters far less than the discipline of updating it. A team that maintains a clean shared spreadsheet will get more value from SHIFT than a team that configures a full Jira setup and never updates the DOI Status field.

Team and Roles

We do not have a dedicated Governance Steward. Can the Delivery Lead cover both roles?
  1. In teams of 5 or fewer people, yes, with caution. Two conditions must be met: (1) the LGP pre-read is compiled and distributed 24 hours in advance without exception, and (2) the decision log is maintained and visible to all ACNs.
  2. If either condition is not being met consistently, the Governance Steward function is being neglected and must be given to someone else.
  3. The risk: the Delivery Lead's primary loyalty is to sprint delivery. When the two roles conflict, governance gets deprioritised. The most common failure mode is the LGP pre-read arriving the morning of the session — participants have no context and the session becomes a status report.

In teams above 5 people, do not combine these roles. The authority concentration creates a single point of failure for both operations and governance.

Can we rotate the Anchor role between team members?
  1. Rotating the Delivery Lead is encouraged. Rotating the Anchor is not recommended for ACNs in active delivery.
  2. The Anchor role requires deep familiarity with the capability domain's history, decisions made, trade-offs accepted, and strategic direction. This context cannot be effectively transferred in a single handover session.
  3. The appropriate version of role development: a future Anchor candidate shadows the current Anchor for a full cycle, participates in LGP sessions, and takes on specific Anchor accountabilities before formally rotating in. This is a development pathway, not a rotation schedule.
What do we do when the Anchor and Delivery Lead persistently disagree on priorities?
  1. This is a governance issue, not a personality issue. Resolve it at the LGP, not informally.
  2. Persistent disagreement usually means either the capability domain is not clearly enough defined (both roles are working from different mental models of what the ACN owns), or sprint capacity is being managed without shared understanding.
  3. Resolution path: bring the specific disagreement to the LGP as an Unblock item. The Governance Steward facilitates a decision that documents the explicit scope and capacity boundary.

Once the boundary is written down and ratified, the disagreement usually resolves because both parties are now working from the same explicit constraint rather than implicit assumptions.

Integration and Adoption

Our company runs SAFe at the programme level. Can one team adopt SHIFT without the whole organisation changing?
  1. Yes, and this is one of the most common adoption patterns. The Anchor participates in SAFe's PO Sync and ART events as the team's representative. PI objectives map directly to SHIFT Cycle Objectives. The DOI map is maintained internally.
  2. The team uses SEMI scoring internally and derives story point counts from historical throughput averages for external PI Planning reporting. These roles are kept cleanly separate.
  3. The LGP operates entirely internally. SAFe's ART Sync serves as the external governance input; the LGP handles internal cross-node decisions.
  4. After two or three cycles of demonstrably better throughput predictability, the evidence base for expanding SHIFT to other teams is significantly stronger.

Risk: SAFe PI Planning items arrive without SEMI scores. Introduce a Spec Review session immediately after PI Planning to score incoming PI objectives before they enter the sprint backlog.

We already have OKRs. Where does the DOI map overlap with what we already track?
  1. DOI is not a replacement for OKRs. It is the connection between your OKR system and sprint work.
  2. Mapping: Company OKRs → Alignment layer input. Team OKR (or the Key Result your team owns) → Cycle Objective in the DOI map. Sprint Contributions → the specific deliverables each sprint is committing to in service of that Key Result. DOI Health Score → mid-cycle, sprint-by-sprint view of whether delivery is contributing to the Key Result as planned.
  3. What SHIFT adds that OKRs alone do not: a continuous, sprint-level signal of whether execution matches intent, surfaced during the sprint when it can still be corrected — not at quarter end when it cannot.
How do we handle stakeholders who want predictable release dates but do not understand Monte Carlo?
  1. Most stakeholders do not need to understand Monte Carlo. They need a confident, honest answer to 'when will it be done?'
  2. Communication formula: 'Based on our delivery data from the last N sprints, we have 85% confidence this will be complete by [date]. Our most likely completion, based on current throughput, is [earlier date].'
  3. This gives two numbers: a planning date (P85, for roadmap commitments) and an optimistic date (P50 or P70, the team's working target).
  4. For stakeholders who push back on probabilistic language: 'It is the same logic as a weather forecast. We are saying that based on our actual delivery patterns, 85 times out of 100 it would be done by then. We will give you a heads-up the moment we see signals that put that date at risk.'
What is the most common reason SHIFT implementations fail in the first cycle?
  1. Failure mode 1: introducing too much at once. Teams attempt to run IIS, DOI, LGP, Monte Carlo, SEMI, AI-first standup, and 3-track retrospective in sprint 1. The cognitive overhead is too high. Sprint 1 becomes a meta-conversation about SHIFT instead of a delivery sprint. Follow the phased adoption plan strictly.
  2. Failure mode 2: SEMI scoring without honest disagreement. Teams go through the motions of scoring but socially align on scores rather than surfacing real disagreement. When two people give the same item S=1 and S=3, explore why — do not average. The value of SEMI is in the forced clarification. If SEMI scores are not causing any disagreement, the scoring is being done dishonestly.
  3. Failure mode 3: LGP pre-read not enforced. The pre-read is not ready, the session goes ahead anyway as a verbal status update, and the pattern sets in. Once LGP becomes a status report forum, it is very difficult to recover without an explicit reset. The Governance Steward must enforce the pre-read rule from the very first session. If it is not ready, postpone the session — do not hold it without it.