Official Framework Guide
SHIFT Agile Framework
Scalable Hybrid Iterative Framework for Teams · Version 2.0 · Ricardo J. Minas, 2025
Introduction
Most Agile frameworks were designed for a world that no longer exists: co-located teams, predictable delivery, and a manageable human-to-output ratio. The frameworks that followed tried to scale that original model rather than rethink it.
SHIFT was built from a different starting point. It begins with the conditions modern teams actually face, and it is designed to augment whatever you are already running, not to replace it.
The four failures SHIFT addresses
The Coordination Failure
When organisations have more than one team, most frameworks break down. Scrum of Scrums is a band-aid. SAFe adds so much ceremony that teams spend more time coordinating than delivering. Most organisations end up with an informal coordination layer of side conversations and management escalations that neither scales nor leaves a record.
The Innovation Failure
Delivery pressure always wins over exploration. "We'll do the innovation sprint next cycle" never happens. The people closest to the problems are also the people best positioned to solve them. But they never get the protected space to try. SHIFT makes exploration structurally mandatory, not aspirational.
The Alignment Failure
Teams set OKRs in January and discover in March that the sprint work had no measurable connection to them. The gap between strategy and execution is real, persistent, and rarely discussed honestly. Most frameworks do not have a mechanism for mid-sprint alignment checking. SHIFT does.
The AI Transition Failure
Teams adopt AI tools within frameworks designed before AI existed. Story points were never designed for a production function where one well-specified task produces ten times the output of a vague one. Velocity charts are misleading when AI capability improves between sprints. The definition of done needs to include explicit review steps that did not exist before AI-generated output became common. SHIFT was built knowing that AI-assisted work is the default, not the exception.
Part I: Foundation
The Four Layers of SHIFT
SHIFT operates across four distinct layers. Each layer has a different cadence, different participants, and a different decision scope. The failure mode of most frameworks is that everything lives at one altitude: execution. Strategy bleeds into standup, governance becomes ceremony, and learning is an afterthought.
Note that Learning is Layer 1, not Layer 4. This is deliberate. In most frameworks, learning is squeezed at the end of a cycle. In SHIFT, it is foundational. Everything else builds on it.
- Layer 1 · Learning: retrospection, AI feedback, IIS, pattern library
- Layer 2 · Delivery: execution, DOI checks, throughput, sprint ceremonies
- Layer 3 · Governance: risk, dependencies, resource decisions, compliance
- Layer 4 · Alignment: strategic objectives, portfolio decisions, OKR integration
| Layer | Cadence | Purpose | Key artefacts |
|---|---|---|---|
| 1 · Learning | Continuous + sprint end | Capture and route learning back into delivery | AI-Augmented Retro, Learning Cards, Pattern Library, SEMI recalibration |
| 2 · Delivery | 2-week sprint | Execute committed work, maintain DOI alignment, produce software | Sprint backlog, DOI Map, Throughput data, Spec docs |
| 3 · Governance | Bi-weekly, 75 min | Cross-node decisions: Unblock, Approve, Defer | LGP pre-read, Decision log, Dependency map |
| 4 · Alignment | Quarterly (12-week cycle) | Connect delivery to strategy, set Cycle Objectives | Cycle Objective Set, DOI Health Scores, IIS themes |
Layer integration handshakes
| Handshake | From | To | Mechanism |
|---|---|---|---|
| Sprint objective | Alignment | Delivery | DOI map, sprint goal statement |
| Blocker escalation | Delivery | Governance | 4-hour SLA escalation protocol |
| Governance signal | Governance | Alignment | LGP decision log summary |
| Learning injection | Learning | Delivery | Pattern library, SEMI recalibration |
Core Principles
People before process
Every structural choice in SHIFT is evaluated against whether it helps or burdens the people doing the work. Governance, tooling, and ceremony exist to serve teams.
Hybrid by design
Distributed and co-located contributors are treated as equal participants. Asynchronous-first communication is the default. Synchronous time is reserved for decisions that genuinely require it.
Iterative and adaptive
Short cycles with structured reflection. SHIFT teams do not commit to large plans. They commit to learning loops that progressively sharpen direction.
Lean governance
Oversight and accountability without bureaucracy. Lean Governance Pods keep decisions moving at the pace the work demands, with clear ownership and minimal coordination overhead.
Innovation is part of delivery
Experimentation is not deferred to a future quarter. Innovation-Integrated Sprints build dedicated capacity for exploratory work inside the delivery rhythm so that learning does not compete with shipping.
Continuous alignment
Strategy does not live in a quarterly deck. Dynamic Objectives Integration keeps team-level work connected to organisational goals throughout every sprint, not just at planning.
Part II: Structure
Adaptive Collaboration Nodes (ACNs)
An Adaptive Collaboration Node is the primary delivery unit in SHIFT. The word "node" is deliberate: it implies connectivity, not isolation. An ACN has clear internal structure, a defined scope of ownership, and explicit interfaces to other nodes.
An ACN owns a capability domain, not a feature list. It is responsible for the full vertical slice of work within that domain, from specification through testing to deployment. It owns outcomes, not outputs.
Node Composition Matrix
| Size | Anchor | Delivery Lead | Contributors | AI Agent Roles | Notes |
|---|---|---|---|---|---|
| 3 people | 1 | 0 (Anchor doubles) | 2 | 0-1 | Minimum viable. IIS suspended. |
| 5 people | 1 | 1 | 3 | 1 | Standard node. Full SHIFT operation. |
| 7 people | 1 | 1 | 4-5 | 1-2 | Preferred size. Full IIS + two AI agents. |
| 9 people | 1 | 1 | 6-7 | 2-3 | Maximum. Consider splitting. |
Formation
An ACN forms when a capability domain requires more than two cycles of sustained delivery. Formation has four steps:
- Domain scoping: define the capability domain in one sentence. If it takes more than one sentence, the domain is too broad.
- Anchor assignment: identify an Anchor with delivery credibility in the domain, not just seniority.
- Composition drafting: the Anchor proposes a composition using the Node Composition Matrix. The LGP ratifies.
- Interface definition: the new node and adjacent nodes produce a one-page interface document covering what they consume and produce for each other, and how they escalate cross-node issues.
Dissolution
An ACN dissolves when its capability domain is complete, when throughput signals indicate sustained delivery failure across two consecutive Red DOI cycles, or when team size drops below three without approved backfill. Dissolution requires a 2-sprint wind-down. The node's Learning Cards remain in the shared pattern library permanently.
ACN anti-patterns
| Anti-pattern | Symptom | Fix |
|---|---|---|
| The Siloed Node | All cross-node issues resolved informally; no LGP escalations | Mandatory interface document review at cycle boundary |
| The Permanent Node | Backlog is entirely maintenance; no new capability work; throughput declining | Dissolution review when two consecutive sprints contain only maintenance items |
| The Hero Node | DOI green but high individual fatigue; Learning Cards authored by the same person every sprint | Forced contribution rotation; AI task offload review |
| The Phantom AI Node | High throughput but rising defect rate and spec compliance failures | AI Responsibility Map audit; mandatory human review for all AI-led outputs |
Lean Governance Pods (LGPs)
A Lean Governance Pod is SHIFT's answer to the governance tax problem. In most scaled frameworks, governance consumes 20 to 40 percent of senior contributor time. LGPs reduce governance overhead to under 10 percent of any individual's time while maintaining decision quality and traceability.
An LGP is not a steering committee or an approval board. It is a decision-making forum with a defined scope, a time budget, and an explicit anti-bureaucracy mandate.
Composition and cadence
| Session type | Duration | Trigger |
|---|---|---|
| Bi-weekly | 75 min max | Aligned to sprint boundaries, fixed cadence |
| Emergency LGP | 30 min max | Called by any Anchor or Governance Steward; single agenda item only |
| Async LGP | 24-hour window | Clearly bounded, low-risk decisions; any participant can request sync |
Fixed bi-weekly agenda
| Slot | Duration | Purpose |
|---|---|---|
| Pre-read acknowledgement | 5 min | Confirm all participants have read the pre-read |
| Node Health Review | 15 min | RAG status across all nodes; flag Reds |
| Decisions: Unblock | 20 min | Cross-node or external blockers requiring action |
| Decisions: Approve | 15 min | Items requiring formal LGP ratification |
| Decisions: Defer | 10 min | Items not ready: assign owner and due date |
| DOI Alignment check | 5 min | Confirm sprint objectives still align to cycle goals |
| Parking lot | 5 min | Items not in pre-read: logged only, not discussed today |
Decision types
SHIFT defines exactly three decision types for the LGP. Any item that does not fit one of these three types does not belong in the LGP.
| Type | Definition | Examples |
|---|---|---|
| Unblock | A delivery blocker requiring cross-node coordination, external engagement, or resource reallocation. The LGP resolves it or assigns an owner with a deadline. | External API dependency blocking Node A; resourcing conflict between nodes |
| Approve | A decision that has been prepared, pre-read, and requires formal ratification. Proposals must be max two pages in the pre-read. | New node formation; IIS theme ratification; cycle objective adjustment; external dependency commitment |
| Defer | Item not ready for decision. LGP assigns an owner, a due date, and specifies the exact information missing. Not 'we'll discuss later.' | Proposal missing compliance sign-off; budget data not yet available |
Roles
SHIFT has four core roles. In small teams they are combined; in large teams they remain distinct. Roles are defined by accountability, not job title.
SHIFT Anchor
The single accountable person for an ACN's capability domain outcomes, not outputs. Owns DOI connection, prioritisation decisions within the node, IIS theme selection, and the AI Responsibility Map. Represents the node in LGP. Should spend 30 to 40 percent of their time on direct delivery work: writing specs, reviewing outputs, pairing on complex items. An Anchor who attends only meetings is disconnected from delivery reality.
Delivery Lead
The operational heart of the ACN. Owns the sprint plan, SEMI compliance, Sprint DOI Map, the mid-sprint DOI check, the 4-hour blocker escalation SLA, the Node Health Card, retrospective facilitation, and Monte Carlo forecasting at sprint end. This is an operational role with delivery skin in the game, not a coaching role.
Node Contributors
The practitioners doing the delivery work. Expected to write or contribute to specs that meet the SEMI threshold, own the human review step for all AI-led outputs they are responsible for, and contribute to IIS themes with genuine engagement. Every contributor defines their personal AI workflow in the AI Responsibility Map.
Governance Steward
Responsible for the health of the governance system, not for making governance decisions. Compiles and distributes the LGP pre-read 24 hours before each session. Facilitates the bi-weekly LGP. Maintains the decision log (public, searchable, permanent). Runs the three-strikes escalation. Owns the Maturity Model self-assessment process.
Role flexibility for small teams
| Team size | Role combinations | Notes |
|---|---|---|
| 3 people | Anchor + Governance Steward; Delivery Lead + primary Contributor | IIS suspended. LGP replaced by a weekly 30-min external alignment meeting. |
| 5 people | Anchor dedicated; Delivery Lead dedicated; Steward function shared between them | Full SHIFT operation viable. |
| 7+ people | All four roles held by distinct people | Anchor and Delivery Lead should not be combined above five people. |
Part III: Rhythm
The SHIFT Lifecycle
SHIFT runs in 2-week sprints grouped into 12-week cycles (six sprints per cycle). The cycle is the primary strategic alignment unit. The sprint is the primary delivery unit.
Sprint structure (10 working days)
| Day | Ceremony / Activity |
|---|---|
| Day 1 | Sprint Kickoff, Spec Review Session |
| Days 1-10 | Daily AI-First Standup (15 min, async-first) |
| Day 6-7 | Mid-Sprint DOI Check-in (30 min) |
| Day 10 | Sprint Review, including IIS Review (60 min total) |
| Day 10 | AI-Augmented Retrospective (60 min) |
Cycle structure (6 sprints, 12 weeks)
| Sprint | Type | Notes |
|---|---|---|
| Sprint 1 | Delivery | Cycle Objectives set, DOI map initialised |
| Sprint 2 | Delivery | LGP bi-weekly cadence active |
| Sprint 3 | Delivery + IIS | First IIS of cycle (15% capacity ring-fenced) |
| Sprint 4 | Delivery | Monte Carlo forecast updated |
| Sprint 5 | Delivery | Mid-cycle DOI calibration |
| Sprint 6 | Delivery + IIS + Cycle Review | IIS Review, Cycle Portfolio Review (LGP), Maturity Model self-assessment |
Innovation-Integrated Sprints (IIS)
IIS is SHIFT's mechanism for sustaining exploratory work within a delivery-focused framework, without the failure modes of 20 percent time, hackathons, or separate innovation teams.
Why IIS works where 20% time does not
| Problem | 20% Time | IIS |
|---|---|---|
| Time protection | Informal; first to be cut under delivery pressure | Formally allocated; requires LGP approval to reduce below 10% |
| Output format | No defined format; ideas die in isolation | Learning Cards: structured, searchable, permanent |
| Promotion pathway | None | Learning Card → Pilot → LGP Approval → Cycle Portfolio Investment |
| Theme selection | Personal interest, disconnected from strategy | DOI-connected at cycle kickoff |
Capacity allocation
| Allocation | % of sprint | Condition |
|---|---|---|
| Standard | 15% | Default for all nodes above 5 people |
| Minimum | 10% | Below this, IIS is performative and should be formally suspended |
| Maximum | 25% | Designated innovation sprint; requires LGP approval; cannot occur in consecutive sprints |
Learning Card format
A Learning Card is the required output of every IIS sprint. It is not a demo, a slide deck, or a Confluence page. It is a structured capture of what was learned.
The promotion funnel
1. IIS Sprint: Learning Card created · Signal: Positive
2. IIS Review: Day 10 · 20 minutes
3. Decision point: Promote to Pilot?
4. Pilot Sprint: 1 sprint · defined success criteria · standard delivery capacity
5. Pilot Review: Anchor + LGP decision
6. Decision point: Investment?
7. LGP Approval: Cycle Portfolio Investment · enters next cycle's DOI map
Dynamic Objectives Integration (DOI)
DOI keeps sprint delivery connected to strategic objectives in real time. The core problem it solves: misalignment between strategic plan and delivery reality is usually only discovered at quarter end. DOI introduces a continuous signal system that surfaces misalignment during the sprint, when something can still be done about it.
DOI is not OKRs. It is the operational layer that connects OKRs (or any strategic objective system) to sprint delivery.
Green / Amber / Red tagging
Every sprint backlog item is tagged at the Spec Review Session:
| Status | Definition | Required action |
|---|---|---|
| Green | Item directly serves a Cycle Objective. Throughput supports completion. Monte Carlo confidence above 70%. | Monitor. Update at mid-sprint check. |
| Amber | Item at risk but recoverable. Throughput below baseline for 2+ days, a blocker actively being worked, or confidence between 40% and 70%. | Named recovery action with an owner. Must move to Green within 4 calendar days. |
| Red | Item will not complete without intervention. Blocker open 48+ hours, confidence below 40%, or context has changed. | Declare within 4 hours. Resolution meeting within 24 hours. Named action, named owner, deadline. |
DOI Health Score
The DOI Health Score is calculated as: (Green Contributions / Total Contributions) × 100
| Score | Status | LGP action |
|---|---|---|
| 80-100 | Strong | Monitor |
| 60-79 | Moderate | Review at next LGP |
| 40-59 | At Risk | Emergency LGP if Red items present |
| Below 40 | Critical | Escalate to Alignment layer |
The trend across sprints is more important than any single score. A score of 75 trending upward is healthier than a score of 80 trending downward.
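The score, the LGP action bands, and the trend check can be sketched as a small helper. A minimal sketch; the function names are illustrative, not part of SHIFT's artefact set:

```python
def doi_health_score(statuses):
    """DOI Health Score = (Green contributions / total contributions) x 100."""
    if not statuses:
        raise ValueError("no contributions to score")
    green = sum(1 for s in statuses if s == "green")
    return 100 * green / len(statuses)

def lgp_action(score):
    """Map a score to the LGP action band."""
    if score >= 80:
        return "Monitor"
    if score >= 60:
        return "Review at next LGP"
    if score >= 40:
        return "Emergency LGP if Red items present"
    return "Escalate to Alignment layer"

def trend(scores):
    """Trend matters more than any single score: compare the latest
    sprint's score against the mean of the preceding sprints."""
    if len(scores) < 2:
        raise ValueError("trend needs at least two sprint scores")
    *prior, latest = scores
    baseline = sum(prior) / len(prior)
    if latest > baseline:
        return "improving"
    return "declining" if latest < baseline else "flat"
```

For example, a sprint with 8 Green, 1 Amber, and 1 Red contribution scores 80 ("Monitor"), but a sequence like 85, 82, 80 would still flag a declining trend.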
Part IV: Ceremonies
Ceremonies
Every SHIFT ceremony has a defined output. If the output is not produced, the ceremony has failed regardless of whether it was held on time.
| Ceremony | Duration | When | Owner |
|---|---|---|---|
| Spec Review Session | 45 min | Day 1 | Delivery Lead |
| AI-First Daily Standup | 15 min | Daily (async-first) | All contributors |
| Mid-Sprint DOI Check-in | 30 min | Day 6-7 | Delivery Lead + Anchor |
| Sprint Review (incl. IIS Review) | 60 min | Day 10 | Anchor |
| AI-Augmented Retrospective | 60 min | Day 10 | Delivery Lead |
| Cycle Portfolio Review | 90 min | Sprint 6 end | Governance Steward |
1. Spec Review Session
The sprint backlog is reviewed in order of SEMI score, highest first. For each Amber item (SEMI 7-8), the team identifies the highest-risk dimension and agrees a mitigation action completable by day 3. Red items are removed and returned to the Anchor.
Output: every sprint item has a confirmed SEMI score. Amber items have a documented mitigation. Red items are out of the sprint with specific gaps identified.
2. AI-First Daily Standup
Async-first. Three-part format per contributor (90 seconds each):
- Progress signal: "X is 60% complete against criteria Y" or "X is done." Not "I worked on X."
- AI workflow note: "The LLM reviewer flagged three spec gaps; two resolved, one open." Or "No AI tooling involvement today."
- Blocker flag: Any blocker. Classified after standup: internal, cross-node, or external.
3. Mid-Sprint DOI Check-in
Fixed agenda: Sprint DOI Map RAG review (10 min), Amber review and automatic Red conversion if recovery is off-track (10 min), capacity check (5 min), named actions with owners (5 min). This is not a sprint review or a planning session. Redirect if those activities start to bleed in.
4. Sprint Review
The Anchor opens with one sentence: was the sprint goal achieved? Each contributor demonstrates completed work against acceptance criteria. No slide decks. Show the working software, then show the criterion it meets. Stakeholder feedback is categorised immediately: confirmed acceptance, action required, or information. The final 20 minutes are the IIS Review.
5. AI-Augmented Retrospective
Pre-ceremony (mandatory): the AI retrospective agent synthesises throughput trend (last 4 sprints), SEMI distribution, DOI Health Score trend, IIS Learning Card themes, and recurring retrospective patterns. Shared one hour before the session. This eliminates 20 to 30 minutes of context-setting.
Three tracks, 20 minutes each:
Track 1: Delivery System
Opening question: 'What is the single biggest friction point in how we deliver work?' Systemic delivery issues identified. Named actions with owners and sprint deadlines enter the next sprint backlog directly, not a parking lot.
Track 2: Collaboration and AI
Opening question: 'Where did AI tooling help us, where did it slow us down, and where did our human collaboration patterns break down?' AI workflow incidents, prompt updates, agent reconfiguration, and collaboration norms.
Track 3: Learning and Growth
Opening question: 'What did we learn this sprint that we should not lose, and what capability are we missing that would make the biggest difference?' Pattern library updates, IIS theme candidates, team health signals.
6. Cycle Portfolio Review
90 minutes. Anchors, Governance Steward, product or strategy leadership. Governance Steward presents the Cycle DOI Summary. Each Anchor presents Cycle Objective outcomes (5 minutes each, signal and learning, no blame). IIS Portfolio reviewed (themes, Learning Cards, ROI of 15% capacity). Next cycle Objectives and IIS themes set. ACN formation or dissolution decisions made.
Part V: AI-First Teams
Spec-Driven Development
AI-first teams operate with a fundamentally different production function. When AI tools can produce a working implementation from a clear specification in hours, the constraint shifts from coding capacity to specification quality. Vague requirements produce unreliable output regardless of the AI tools involved.
Spec-Driven Development (SDD) treats the specification as the primary engineering artefact. Before any implementation begins, the team produces a complete spec that is reviewed, challenged, and signed off.
A complete spec contains
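This guide does not enumerate the spec's contents at this point; the sketch below is one plausible shape, assembled from the SEMI "S" dimension (acceptance criteria, edge cases) and the AI Review Protocol (integration behaviour, security, definition of done). Every field name here is an illustrative assumption, not a SHIFT-prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    # Illustrative fields only -- inferred from the SEMI 'S' dimension
    # and the AI Review Protocol; SHIFT does not prescribe this schema.
    title: str
    acceptance_criteria: list[str]        # unambiguous and testable (SEMI S=1)
    edge_cases: list[str]                 # documented edge cases
    integration_notes: str = ""           # expected behaviour in integration
    security_implications: str = ""       # non-empty triggers a review
    definition_of_done: list[str] = field(default_factory=list)

    def is_sprint_ready(self) -> bool:
        """A contributor should be able to start without clarifying
        questions: criteria and edge cases must both be present."""
        return bool(self.acceptance_criteria) and bool(self.edge_cases)
```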
Team Sizing for AI-First Work
AI-first teams challenge the conventional Agile sizing heuristic of five to nine people. When AI tools multiply individual output, small teams become viable for work that previously required larger groups.
| Configuration | Size | When to use | Not suitable for |
|---|---|---|---|
| AI-First Core Team | 1-4 people | Well-scoped delivery, clear specs, high AI leverage, understood domain | Discovery work, cross-functional stakeholder alignment, novel domains |
| Standard ACN | 4-8 people | Default for most delivery workstreams. Full SHIFT operation. | N/A, this is the target size |
| Coalition ACN | 8-15 people | Complex programmes; multiple workstreams requiring coherence | Single-domain work, first adoption cycles |
The SEMI Model
The SEMI model is SHIFT's estimation and sprint-readiness system. It replaces story points. Each work item receives four scores on a 1 to 3 scale before it can enter a sprint. The composite score determines sprint entry eligibility, not calendar duration.
Specification Quality
How clear, complete, and testable is the specification?
Effort Uncertainty
Has the team done this before?
Multi-system Impact
How many external systems, teams, or dependencies does this touch?
Implementation Confidence
How confident is the team in the chosen approach?
S: Specification Quality (1-3)
| Score | Label | Definition |
|---|---|---|
| 1 | Clear | Acceptance criteria are written, unambiguous, and testable. Edge cases are documented. A contributor can start without clarifying questions. |
| 2 | Partial | Acceptance criteria exist but have gaps. A contributor can start but will need one or two clarifications. Some edge cases undocumented. |
| 3 | Unclear | Acceptance criteria are missing, vague, or untestable. A contributor cannot start without a significant clarification session. |
E: Effort Uncertainty (1-3)
| Score | Label | Definition |
|---|---|---|
| 1 | Known | The team has done this before. Similar work completed in the last three cycles. The approach is clear. |
| 2 | Similar | Similar to past work but with meaningful differences. Some unknowns. Team has a hypothesis but has not validated it. |
| 3 | Novel | The team has not done this before. The approach is uncertain. Multiple viable paths may exist. |
M: Multi-System Impact (1-3)
| Score | Label | Definition |
|---|---|---|
| 1 | Contained | No cross-node dependencies. No external services beyond stable integrations. No schema changes. No security implications. |
| 2 | Adjacent | One cross-node dependency or one external service integration. Schema changes within node ownership. Minor security review may be required. |
| 3 | Wide | Multiple cross-node dependencies. External integrations with uncertain behaviour. Schema changes affecting other nodes. Security or compliance review required. |
I: Implementation Confidence (1-3)
| Score | Label | Definition |
|---|---|---|
| 1 | Confident | The implementation approach has been used before in similar contexts. The team is aligned. No significant technical risk. |
| 2 | Tentative | The team has a preferred approach but has not validated it. At least one alternative. Some technical risk. |
| 3 | Uncertain | No clear implementation approach. A spike may be needed before implementation begins. High technical risk. |
Sprint entry rules
SEMI Score = S + E + M + I · Minimum: 4 · Maximum: 12
| SEMI total | Band | Sprint entry rule |
|---|---|---|
| 4-6 | 🟢 Green: Sprint Ready | Enter the sprint. No additional preparation required. |
| 7-8 | 🟡 Amber: Conditional | Enter only with a documented mitigation for the highest-scoring dimension, agreed by Delivery Lead and Anchor. Action must be completable by day 3. |
| 9-10 | 🔴 Red: Spec Required | Cannot enter the sprint. Return to Anchor for specification improvement. Re-score before next sprint planning. |
| 11-12 | ⚫ Black: Decompose | Item is too large or complex. Decompose into child items. Re-score all child items before sprint planning. |
AI-specific scoring modifiers
| Condition | Dimension | Modifier |
|---|---|---|
| AI output is non-deterministic and acceptance criteria do not account for output variance | S | +1 |
| AI model is externally hosted and rate-limited | M | +1 |
| AI model requires prompt engineering not yet documented | I | +1 |
| AI output is the primary user-facing output (higher evaluation complexity) | E | +1 |
| AI agent has cross-system tool access | M | +1 per additional tool beyond 2 |
Modifiers are additive but capped: no single dimension exceeds 3.
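The scoring rules above can be expressed directly: sum the four dimensions, apply AI modifiers with the per-dimension cap of 3, and map the total to a sprint entry band. A minimal sketch, assuming the team supplies the dimension scores (function names illustrative):

```python
def semi_total(s, e, m, i, ai_modifiers=()):
    """Combine the four SEMI dimension scores (each 1-3) with any
    AI-specific modifiers. Modifiers are (dimension, delta) pairs,
    e.g. ("M", 1); each dimension is capped at 3 after modifiers."""
    scores = {"S": s, "E": e, "M": m, "I": i}
    for dim, delta in ai_modifiers:
        scores[dim] = min(3, scores[dim] + delta)
    return sum(scores.values())

def sprint_entry_band(total):
    """Map a SEMI total (4-12) to the sprint entry rule."""
    if total <= 6:
        return "Green: Sprint Ready"
    if total <= 8:
        return "Amber: Conditional"
    if total <= 10:
        return "Red: Spec Required"
    return "Black: Decompose"
```

A bug fix with a known root cause scores (1, 1, 1, 1) = 4, Green; a new LLM integration at (3, 3, 2, 3) = 11 lands in Black and must be decomposed.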
SEMI scoring reference table
| Work type | S | E | M | I | Total | Notes |
|---|---|---|---|---|---|---|
| Bug fix, known root cause | 1 | 1 | 1 | 1 | 4 | Sprint ready |
| Bug fix, unknown root cause | 2 | 3 | 1 | 2 | 8 | Conditional: timebox investigation |
| New UI component (standard) | 1 | 1-2 | 1 | 1 | 4-5 | Sprint ready |
| New API endpoint (standard) | 1-2 | 1-2 | 1-2 | 1 | 4-7 | Usually sprint ready |
| New API endpoint (external auth) | 2 | 2 | 3 | 2 | 9 | Spec required: clarify auth integration |
| LLM integration (new) | 3 | 3 | 2 | 3 | 11 | Decompose: separate spike from integration |
| LLM integration (established pattern) | 1 | 2 | 2 | 1 | 6 | Sprint ready after pattern documented |
| Data migration (small) | 1 | 2 | 2 | 1 | 6 | Sprint ready with rollback plan |
| Data migration (large, cross-system) | 2 | 3 | 3 | 2 | 10 | Red: full spec and rollback required |
| Infrastructure change (proven) | 1 | 1 | 1 | 1 | 4 | Sprint ready |
| Security or compliance feature | 2-3 | 2 | 3 | 2 | 9-10 | Spec required, compliance review mandatory |
SEMI pattern analysis
| Pattern | Systemic signal | Fix |
|---|---|---|
| Consistent S=3 | Specs written too late or by people disconnected from implementation | Introduce Spec Review earlier; pair Anchor with contributor on spec writing |
| Consistent E=3 on one work type | Team treats familiar work as novel; not building pattern familiarity | Document an implementation pattern for this type; E should decrease to 1 or 2 after |
| Consistent M=3 | Node domain boundaries too wide; cross-system work without interface agreements | Tighten capability domain; establish formal interface documents with adjacent nodes |
| Consistent I=3 | Team lacks confidence in implementation approaches; capability gap | Targeted IIS themes on technical capability building; pair contributors on complex items |
Forecasting: Throughput and Monte Carlo
Velocity-based forecasting collapses in AI-first teams. A team's effective throughput can double between sprints as prompting skills improve or new tooling is adopted. Effort weighting becomes noise. SHIFT uses throughput: counting the number of work items completed per sprint, regardless of estimated size.
Throughput vs. velocity
| Dimension | Velocity (story points) | Throughput (items) |
|---|---|---|
| Unit consistency | Weak: points vary by estimator and over time | Strong: item = item |
| AI work compatibility | Poor: effort variance not captured | Moderate: calibratable with SEMI bands |
| Gaming risk | High: point inflation is common | Low: items are countable |
| Stakeholder clarity | Low: stakeholders do not understand points | High: 'X items done' is legible to everyone |
Monte Carlo probability bands
| Band | Probability | Use for |
|---|---|---|
| P50 | 50% | Internal planning only. Do not share externally. |
| P70 | 70% | Sprint goal-setting and internal commitment. |
| P85 | 85% | Stakeholder commitments. |
| P95 | 95% | Contractual or external commitments. |
Reference class for early baseline
Monte Carlo requires a minimum of 8 sprints of internal data. Before that, use industry-baseline throughput distributions, blended with actual data from sprint 3 onwards (50/50 blend). Always flag to stakeholders when reference class data is in use.
| Team size | Green items/sprint | Amber items/sprint |
|---|---|---|
| 3 people | 5-8 | 2-4 |
| 5 people | 8-13 | 3-6 |
| 7 people | 12-18 | 5-9 |
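The forecasting approach can be sketched as a bootstrap simulation over per-sprint throughput history, with the pre-baseline 50/50 reference-class blend implemented as a simple sample mix. A sketch under those assumptions; the function name and blend mechanics are illustrative:

```python
import random

def monte_carlo_forecast(throughput_history, sprints_ahead=1,
                         reference_class=None, trials=10_000, seed=42):
    """Bootstrap forecast of items completed over the next sprint(s).

    Resamples historical per-sprint throughput with replacement.
    With fewer than 8 sprints of internal data and a reference class
    supplied, blends the two sources 50/50."""
    rng = random.Random(seed)
    pool = list(throughput_history)
    if reference_class and len(throughput_history) < 8:
        # 50/50 blend: add one reference-class draw per internal sample
        pool = pool + rng.choices(reference_class, k=len(pool))
    totals = sorted(
        sum(rng.choice(pool) for _ in range(sprints_ahead))
        for _ in range(trials))
    def at_least(p):
        # "P85" means an 85% chance of completing AT LEAST this many
        # items, so read from the low end of the sorted totals.
        return totals[int((1 - p) * trials)]
    return {f"P{round(p * 100)}": at_least(p) for p in (0.50, 0.70, 0.85, 0.95)}
```

Note that higher confidence means a lower forecast: P95 (contractual commitments) is always at or below P50 (internal planning), which is why P50 should never be shared externally.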
AI-First Practices
Prompt Library
A node-maintained library of effective prompts for common task types: spec writing, code generation, test generation, review, documentation. Referenced before starting AI-assisted work. Updated whenever a prompt produces significantly better or worse results than expected. The Prompt Library is a first-class team artefact, not an individual's notes.
AI Review Protocol
A structured checklist for reviewing AI-generated output: Does the output match the spec acceptance criteria? Does it handle the documented edge cases? Does it behave correctly in integration? Are there security implications? Has it been tested against the definition of done? No item transitions to Done without this review being documented as completed.
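The protocol's gate, that no item transitions to Done without a documented review, can be expressed as a simple check. The checklist field names below paraphrase the questions above and are illustrative, not SHIFT-prescribed:

```python
from dataclasses import dataclass

@dataclass
class AIOutputReview:
    # Field names are illustrative paraphrases of the AI Review Protocol.
    matches_acceptance_criteria: bool
    handles_edge_cases: bool
    verified_in_integration: bool
    security_reviewed: bool
    meets_definition_of_done: bool
    reviewer: str = ""  # a named reviewer makes the review "documented"

def can_transition_to_done(review: AIOutputReview) -> bool:
    """An item moves to Done only if every checklist item passes and
    the review is documented (carries a named reviewer)."""
    checks = (review.matches_acceptance_criteria,
              review.handles_edge_cases,
              review.verified_in_integration,
              review.security_reviewed,
              review.meets_definition_of_done)
    return all(checks) and bool(review.reviewer)
```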
Pair-with-AI
AI is a collaborator, not an autonomous agent. The contributor owns the decision on all AI output. Pair-with-AI means the contributor actively shapes the AI's work: writing the spec, directing the prompts, reviewing the outputs, and deciding what to accept, modify, or reject. Not: run the AI and accept the output.
AI Responsibility Map
A one-page living document maintained by each ACN. For each contributor, it defines which tasks are AI-assisted (human does the thinking, AI assists execution), which are AI-led with human review (AI produces the draft, human evaluates), and which are human-only. Updated at cycle boundaries. Referenced in the Track 2 retrospective every sprint.
AI-First Mindset Shifts
| From | To | Why it matters |
|---|---|---|
| Estimation accuracy | Specification quality | A good spec is worth more than an accurate estimate. Let throughput data handle forecasting. |
| Velocity | Throughput | Velocity measures effort-weighted output. In AI-first teams, effort weighting becomes noise. Count items. Use Monte Carlo. |
| Individual heroics | System quality | AI tools make spec quality, tooling, and review process the binding constraint, not the individual. |
| Done | Verified | AI-generated output needs rigorous verification. Done means: matches spec, passes edge cases, verified in integration. |
| Synchronous planning | Asynchronous alignment | If the spec is clear, most planning questions resolve asynchronously. Reserve sync time for decisions that need dialogue. |
| AI as tool | AI as collaborator | A tool is used. A collaborator is directed, reviewed, and held to a standard. Contributors own the results of AI-assisted work. |
| Prompt first | Spec then prompt | The quality of a prompt is bounded by the quality of the spec behind it. Invest in the spec first. |
Part VI: Integration
Integrating with Other Frameworks
SHIFT is designed to be adopted in layers, not as a wholesale replacement. Existing frameworks contain genuine value. These integration maps show precisely what to keep, what to replace, and what to add.
SHIFT + Scrum
| Scrum element | SHIFT treatment | SHIFT equivalent |
|---|---|---|
| Sprint cadence | Keep | 2-week sprint |
| Sprint Goal | Keep + enhance with DOI | Sprint Contribution in DOI map |
| Product Owner | Replace with expanded accountability | Anchor |
| Scrum Master | Replace with operational role | Delivery Lead |
| Development Team | Keep + AI Responsibility Map | Node Contributors |
| Backlog Refinement | Replace | Spec Review Session (SEMI-driven) |
| Sprint Planning | Merge into Spec Review | Spec Review + sprint goal confirmation |
| Retrospective | Replace with 3-track format | AI-Augmented Retrospective |
| Story points / velocity | Replace | SEMI scoring + throughput + Monte Carlo |
| Scrum of Scrums | Replace | LGP (bi-weekly, max 75 min) |
| Innovation capacity | Add | IIS (15% of sprint capacity) |
| Strategy alignment | Add | DOI model |
SHIFT + Kanban
Kanban and SHIFT share flow-based thinking. SHIFT adds time-boxing and strategic alignment without disrupting Kanban flow.
| Kanban element | SHIFT treatment |
|---|---|
| WIP limits | Keep. SHIFT endorses WIP limits at node level. |
| Flow metrics (cycle time, throughput) | Keep. Throughput data feeds directly into Monte Carlo. |
| Visualisation discipline | Keep. The Kanban board becomes the Node Health Card's delivery view. |
| Sprint time-boxing | Add. The sprint is a planning and review cadence, not a flow constraint. Work in flight at sprint end counts toward the next sprint's throughput if it completes there. |
| DOI alignment | Add. DOI tags visible on all cards. Green/Amber/Red indicators. |
| Innovation capacity | Add. Dedicated Innovation swim lane with its own WIP limit. |
| SEMI scoring | Add. Applied before items enter the WIP queue. |
SHIFT + SAFe (approximately 40% ceremony reduction)
| SAFe ceremony | Duration | SHIFT replacement | SHIFT duration | Reduction |
|---|---|---|---|---|
| Iteration Planning | 4 hours | Sprint Planning + Spec Review | 90 + 45 min | 44% |
| Daily Scrum | 15 min | AI-First Standup | 15 min | 0% (deeper) |
| Iteration Review | 60 min | Sprint Review | 60 min | 0% (deeper) |
| Iteration Retrospective | 60 min | AI-Augmented Retro | 60 min | 0% (deeper) |
| Backlog Refinement | 2 hours | Spec Review (SEMI-driven) | 45 min | 63% |
| PO Sync | 30 min weekly | DOI async update | 15 min | 50% |
| Scrum of Scrums | 30-60 min weekly | LGP (bi-weekly) | 75 min bi-weekly | 38% |
| ART Sync | 60 min bi-weekly | Merged into LGP | Absorbed | 100% |
SHIFT + LeSS
LeSS and SHIFT share a foundational philosophy: scaling should be achieved by descaling. SHIFT adds back what LeSS deliberately removes, but lightly.
| What SHIFT adds to LeSS | Why |
|---|---|
| ACN outcome ownership | LeSS feature teams own outputs. ACNs own capability domain outcomes. |
| LGPs for lightweight coordination | LeSS deliberately removes coordination roles; LGPs provide structure without rebuilding what LeSS removed. |
| DOI model | Distributes strategy alignment to sprint level, complementing the overall Product Owner connection. |
| IIS | LeSS has no structured innovation capacity provision. IIS adds it. |
| SEMI + Monte Carlo | LeSS relies on story points and velocity. SEMI + Monte Carlo are more accurate for AI-first teams. |
SHIFT + OKRs
DOI is the operational bridge between OKRs and sprint delivery.
| OKR level | SHIFT equivalent | Cadence |
|---|---|---|
| Company OKR | Alignment layer input | Quarterly |
| Team OKR | Cycle Objective (in DOI map) | 12-week cycle |
| Key Result | DOI Sprint Contribution outcome signal | Sprint |
| Initiative / Output | Sprint Contribution (delivery work) | Sprint |
A declining DOI Health Score mid-cycle is a leading indicator that the team may not hit its Key Results. This surfaces the problem six weeks before the quarterly review rather than at it. The Cycle Portfolio Review replaces the quarterly OKR retrospective rather than running alongside it.
Part VII: Adoption
SHIFT Maturity Model
The SHIFT Maturity Model is a navigation tool, not a certification programme. Teams use it to understand where they are, what to focus on next, and what good looks like at each level. Run at cycle end, facilitated by the Governance Steward, 30 minutes.
Level 1: Established Foundation
Self-assessment questions
- Does every sprint end with a Sprint Review where working software is demonstrated against acceptance criteria?
- Does every sprint item have a SEMI score before entering the sprint?
- Are you tracking the number of items completed per sprint (throughput)?
- Can every team member describe the node's capability domain in one sentence?
- Does every Node Contributor own the review step for their AI-assisted outputs?
Advancement to next level: 4 of 5 yes, consistently across at least 3 sprints.
Level 2: Aligned Delivery
Self-assessment questions
- Can you point to the Cycle Objective that each current sprint item contributes to?
- Have you had at least one Red DOI item this cycle and resolved it?
- Is IIS happening every cycle, and do Learning Cards exist as output?
- Is the LGP pre-read being distributed 24 hours before every session?
- Is throughput data from the last 6+ sprints available and in use for planning?
Advancement to next level: 4 of 5 yes, consistently across at least one full cycle (12 weeks).
Level 3: Adaptive Intelligence
Self-assessment questions
- Are you communicating forecast confidence using P70/P85 probability bands to stakeholders?
- Has at least one IIS Learning Card been promoted through the full funnel to a cycle investment?
- Does the team consult the pattern library before scoring SEMI dimensions?
- Is the AI retrospective synthesis pulling data from at least four previous sprints?
- Are retrospective actions being completed at a rate above 70%?
Advancement to next level: 4 of 5 yes, consistently across two consecutive cycles.
Level 4: System Contribution
Self-assessment questions
- Have you shared pattern library content with at least two other teams in the last cycle?
- Are AI agent roles from your node being used or adapted by other nodes?
- Is your SEMI calibration accurate enough that you rarely have surprise scope overruns?
- Can a new team member contribute meaningfully within 5 days, using documented materials?
- Has at least one IIS output from your team influenced a product roadmap or cycle portfolio decision?
Getting Started
SHIFT is adopted in a preparation phase plus four delivery phases over 12 sprints. Do not try to run everything at once.
| Phase | Sprints | Key introductions | Maturity target |
|---|---|---|---|
| Phase 0: Ground Zero | Week -1 (before Sprint 1) | Roles, domain definition, AI Responsibility Map v1, SEMI-score top 20 backlog items, Sprint 1 planning | Pre-Level 1 |
| Phase 1: Foundation | Sprints 1-3 | AI-first standup, throughput tracking, Sprint Review, 3-track retrospective, Spec Review (Sprint 2), IIS at 10% (Sprint 3) | Level 1 |
| Phase 2: Alignment | Sprints 4-6 | Cycle Objectives, DOI map, mid-sprint DOI check, LGP, Red Item Protocol, IIS at 15%, Cycle Portfolio Review | Level 2 |
| Phase 3: Intelligence | Sprints 7-9 | Monte Carlo with reference class (Sprint 7), Monte Carlo with internal data (Sprint 8), pattern library, IIS pilot evaluation | Level 3 |
| Phase 4: Full operation | Sprints 10-12+ | Full SEMI calibration, cross-team pattern library, Maturity Level 4 self-assessment, reference team for other adopters | Level 3-4 |
Appendix
Vocabulary
| Term | Definition |
|---|---|
| ACN | Adaptive Collaboration Node: the primary delivery unit in SHIFT; owns a capability domain |
| LGP | Lean Governance Pod: bi-weekly cross-node decision forum; maximum 7 participants, 75 minutes |
| IIS | Innovation-Integrated Sprint: ring-fenced exploration capacity (15% standard) within every sprint cycle |
| DOI | Dynamic Objectives Integration: mechanism connecting sprint work to cycle objectives via RAG tagging |
| SEMI | Specification Quality, Effort Uncertainty, Multi-system Impact, Implementation Confidence: the four-dimension item readiness model |
| Anchor | Accountable owner of an ACN's capability domain and outcomes |
| Delivery Lead | Operational owner of an ACN's sprint delivery system |
| Governance Steward | Owner of LGP function and governance system health |
| Node Health Card | One-page ACN status artefact updated every sprint |
| Learning Card | Structured IIS output: what was explored, what was learned, what is recommended |
| Cycle | 12-week delivery and alignment period (six sprints) |
| P70 / P85 | Monte Carlo probability bands: 70% and 85% confidence delivery forecasts |
| DOI Health Score | Aggregate cycle-level alignment score: (Green Contributions / Total) × 100 |
| Red Item Protocol | Structured escalation process for Red DOI items: 4-hour declaration SLA, 24-hour resolution meeting |
| AI Responsibility Map | Node-level document defining which tasks are AI-assisted, AI-led with review, or human-only |
| Spec-Driven Development | Practice treating the specification as the primary engineering artefact before any implementation begins |
| Pattern Library | Shared repository of implementation patterns, SEMI calibration insights, and IIS learnings |
| Throughput | Number of work items completed per sprint; the primary SHIFT delivery metric (replaces velocity) |
FAQ
Frequently Asked Questions
Practical answers to the questions that come up most often when teams begin implementing SHIFT.
Starting SHIFT
We want to start implementing SHIFT today. What are the three most important things to do first?
- Define your ACN clearly. Write the capability domain in a single sentence. If it takes two sentences, the scope is too broad. Everything else depends on this agreement.
- Score your backlog with SEMI. Take the top 15 to 20 items and score them using the four dimensions. Do not aim for precision in the first round — aim for surfacing disagreement. When two people give the same item very different scores, that conversation needs to happen before work begins.
- Start tracking throughput from sprint one. Count the number of items marked Done at the end of each sprint. Not story points, not hours — items completed and meeting acceptance criteria. Eight sprints of throughput data will transform your planning conversations.
Everything else — IIS, DOI, LGP, Monte Carlo — layers on top of these three. Teams that skip the foundation and go straight to the advanced mechanisms fail consistently.
We are already running Scrum. What changes in sprint 1 if we adopt SHIFT?
- Replace standup format. Move to three signals: a progress signal against a measurable criterion, an AI workflow note (even if it is 'no AI involvement today'), and a blocker flag. Blockers are addressed after standup, not during it.
- Add SEMI scoring to refinement. Before any item enters the sprint, it gets a SEMI score. Items scoring 9 or above do not enter the sprint. Items scoring 7 to 8 enter only with a named mitigation on the highest-risk dimension.
- Replace your retrospective with the 3-track structure. Track 1: delivery system. Track 2: collaboration and AI. Track 3: learning and growth. Do not combine tracks or skip one.
IIS, DOI, LGP, and Monte Carlo are introduced in subsequent sprints per the phased adoption plan in Chapter 21. Do not add them in sprint 1.
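The SEMI entry rules above (a composite of four 1-3 dimension scores, so 4-12; 9 or above blocked; 7-8 admitted only with a named mitigation) can be sketched as a small triage function. This is an illustrative sketch, not official SHIFT tooling: the class and function names are ours, and it assumes higher dimension scores mean higher risk, consistent with the 9+ exclusion rule.

```python
from dataclasses import dataclass

@dataclass
class SemiScore:
    """One item's SEMI dimensions, each 1 (low risk) to 3 (high risk)."""
    spec_quality: int
    effort_uncertainty: int
    multi_system_impact: int
    implementation_confidence: int

    @property
    def composite(self) -> int:
        # Composite ranges 4-12: the sum of four 1-3 dimension scores.
        return (self.spec_quality + self.effort_uncertainty
                + self.multi_system_impact + self.implementation_confidence)

def sprint_entry_decision(score: SemiScore, has_mitigation: bool = False) -> str:
    """Apply the SHIFT entry rules: 9+ blocked; 7-8 needs a named mitigation."""
    c = score.composite
    if c >= 9:
        return "blocked"  # does not enter the sprint
    if c >= 7:
        return "enter-with-mitigation" if has_mitigation else "needs-mitigation"
    return "enter"

# A weak spec but otherwise low risk: composite 6 enters freely.
print(sprint_entry_decision(SemiScore(3, 1, 1, 1)))  # enter
```

The point of encoding the rule is not automation for its own sake: it makes the 7-8 band visible at refinement, where the mitigation conversation has to happen.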
What is the minimum viable SHIFT for a 3-person team?
- Roles: one person combines Anchor and Governance Steward. One person combines Delivery Lead and primary Contributor. One person is a Node Contributor.
- Ceremonies: AI-first standup, Spec Review at sprint start, Sprint Review, 3-track Retrospective. Drop IIS (suspend below 4 people), drop LGP (replace with a weekly 30-minute alignment meeting with one external stakeholder), drop the formal Cycle Portfolio Review.
- Metrics: throughput and SEMI scoring. Skip Monte Carlo until sprint 8.
- DOI: one or two cycle objectives with sprint contributions tagged Green, Amber, or Red. Review at the mid-sprint check.
How long before we see results?
- Sprints 1-2: process friction increases temporarily. Teams name spec gaps they previously ignored, standup takes slightly longer, and SEMI scores cause disagreements. This is the framework surfacing problems that already existed invisibly.
- Sprints 3-4: SEMI calibration stabilises. Mid-sprint surprises decrease noticeably. Planning conversations become shorter and more honest.
- Sprints 6-8: throughput data becomes reliable enough to use for planning. The first Cycle Portfolio Review is when most teams feel the framework operating at altitude.
- Cycle 2 onwards: IIS begins producing Learning Cards worth promoting. Monte Carlo forecasts carry credibility. DOI health scores give the team and leadership a shared, objective view of alignment.
The failure mode to avoid: abandoning SHIFT in sprint 2 or 3 because it feels like overhead. Teams that stay with it past sprint 4 uniformly report that the overhead inverts into a net reduction in wasted coordination time.
Our organisation will not let us drop story points. Can we run SHIFT alongside them?
- Yes, with a clear separation of purpose. Use story points exclusively for external reporting or contractual commitments where they are required. Use SEMI and throughput for internal planning, sprint readiness, and forecasting. Do not mix them in the same conversation.
- The practical approach: score SEMI, track throughput, run Monte Carlo from throughput data. If someone external asks for a story point count, derive it from throughput after the sprint using your historical average points per item.
Warn your team explicitly: SEMI scores and story points answer different questions. SEMI answers 'is this item ready and how complex is it?' Story points attempt to answer 'how much effort will this take?' Conflating them is the most common estimation failure in Agile teams.
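Deriving a story point count from throughput, as described above, is a single multiplication. A minimal sketch — the function name and the 3.2 points-per-item figure are hypothetical; the average must come from your own past sprints that still carried point estimates:

```python
def derived_story_points(items_completed: int, historical_points_per_item: float) -> int:
    """Back-derive a story point count for external reporting only.

    historical_points_per_item: average points per completed item, taken
    from past sprints that still carried point estimates (team-specific).
    """
    return round(items_completed * historical_points_per_item)

# e.g. 11 items completed this sprint, historical average 3.2 points per item
print(derived_story_points(11, 3.2))  # 35
```

Run this after the sprint, never before: the derivation exists to satisfy an external reporting format, not to plan work.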
Metrics and Tracking
What are the core SHIFT metrics and what exactly do I track for each one?
- Throughput (items per sprint): count of work items moved to Done per sprint that meet all acceptance criteria, passed their review step, and are deployed to at least staging. Record as a single number per sprint. Build a rolling 10-sprint dataset.
- SEMI Score (4-12 per item): score every item at sprint entry. Log the four individual dimension scores, not just the composite. After each sprint, review which dimension scores were wrong. This calibration is how SEMI improves over time.
- DOI Health Score (0-100): track at sprint start, mid-sprint, and sprint end. Formula: (Green Sprint Contributions ÷ Total Sprint Contributions) × 100. Record the score and the trend — trend matters more than any single score.
- IIS Capacity (%): track at sprint planning. Record whether IIS ran, what theme was explored, how many Learning Cards were produced, and whether any were promoted.
- Monte Carlo P70/P85: from sprint 8 onwards, for any active work thread with more than three items remaining. Record the P70 date (internal planning) and P85 date (stakeholder commitment).
How do we calculate the DOI Health Score in practice, sprint by sprint?
- At sprint planning, tag every sprint backlog item against a Cycle Objective as Green, Amber, or Red. Most will be Green at planning.
- Update tags at the mid-sprint check (Day 6-7) and at sprint end.
- Formula: DOI Health Score = (count of Green Sprint Contributions ÷ total Sprint Contributions) × 100.
- Example: 12 Sprint Contributions. At mid-sprint: 8 Green, 3 Amber, 1 Red. Score = (8 ÷ 12) × 100 = 67 — Moderate. The 1 Red item triggers the Red Item Protocol immediately.
The common mistake: teams tag items Green at planning and never update them. A DOI Health Score of 100 at sprint end that was not updated mid-sprint is a fiction. The value of DOI is the mid-sprint signal.
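The formula and the worked example above translate directly into code. A minimal sketch — the function name is ours, not part of SHIFT:

```python
from collections import Counter

def doi_health_score(tags: list[str]) -> int:
    """DOI Health Score = (Green contributions / total contributions) x 100."""
    if not tags:
        raise ValueError("no Sprint Contributions tagged")
    counts = Counter(tags)
    return round(counts["Green"] / len(tags) * 100)

# Mid-sprint example from the text: 8 Green, 3 Amber, 1 Red out of 12
tags = ["Green"] * 8 + ["Amber"] * 3 + ["Red"]
print(doi_health_score(tags))  # 67 -- Moderate; the Red item triggers the protocol
```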
How do we run Monte Carlo in practice? Do we need special software?
- Manual approach (early sprints): record throughput per sprint in a spreadsheet. Use a random number generator to sample throughput values from your historical data. Repeat 500 times. The percentage of simulations completing within each timeframe gives you probability bands.
- Spreadsheet approach: use RANDBETWEEN or random sample formulas, set up 1,000 simulation rows. P50, P70, P85, and P95 emerge from the distribution. Free templates for this are easy to adapt.
- ActionableAgile Analytics (actionableagile.com): the recommended tool for SHIFT teams. Connects directly to Jira and Azure DevOps, produces Monte Carlo 'How Many' and 'When' simulations out of the box.
- Nave (nave.app): similar capability, strong Jira integration.
- Azure DevOps Analytics: built-in throughput and cycle time reports. Monte Carlo requires a third-party extension (ActionableAgile for ADO) or a connected spreadsheet.
Minimum viable setup: a shared spreadsheet with one row per sprint and one column for items completed. Five minutes to maintain per sprint, provides everything needed for manual P70/P85 calculations.
What leading indicators should we track, not just lagging ones?
- SEMI score distribution at sprint entry: a sprint where 40% of items entered with SEMI scores of 7 or above is a leading indicator of mid-sprint blockers and incomplete work at sprint end. Check this at Spec Review, before delivery begins.
- Amber item age in the DOI map: an Amber item on Day 3 that has not moved to Green by Day 5 will almost certainly become Red. Track Amber item age daily.
- Throughput trend (last 3 sprints): a declining trend across three consecutive sprints is a leading indicator of a systemic delivery problem. A single low-throughput sprint is noise. Three consecutive is a signal.
- IIS Learning Card signals: a cycle where all IIS Learning Cards are Neutral or Negative is a leading indicator that themes are poorly chosen or IIS capacity is too low.
- Spec Review duration: if it consistently runs over 45 minutes, items are arriving at sprint planning underspecified. Spec Review duration is a leading indicator of sprint predictability.
Tooling: Jira, Azure DevOps, and Others
We use Jira. How do we configure it to support SHIFT without a major overhaul?
- SEMI Score fields: add a custom Number field named 'SEMI Score' (range 4-12) and four sub-fields S, E, M, I (range 1-3 each). Display them on the issue create and edit screens. This is the single most impactful change — it forces SEMI scoring at issue creation, not as a separate process.
- DOI Status field: add a custom Single-select field with options Green, Amber, Red, and Not Tagged. Display it on the sprint board card face. Create a Jira dashboard gadget filtered by sprint showing counts of each status.
- IIS work: create a dedicated Epic named 'IIS [Cycle Number]' or use a label 'IIS' on all innovation sprint issues. This separates IIS throughput from delivery throughput in your Monte Carlo baseline.
- Throughput tracking: use the Jira Velocity Chart in Issue Count mode, not Story Points. Export this data to a spreadsheet every sprint end and maintain a rolling 10-sprint throughput dataset.
- ActionableAgile for Jira: install from the Atlassian Marketplace. Adds Monte Carlo simulator, throughput run chart, cycle time scatterplot, and WIP ageing directly inside Jira. Recommended from sprint 8 onwards.
- Node Health Card: maintain as a Confluence page linked from the Jira project. Update the four fields (throughput trend, DOI Health Score, IIS status, open dependencies) at sprint end.
We use Azure DevOps. How do we configure it for SHIFT?
- SEMI Score fields: in Process customisation (Organisation Settings > Boards > Process), add custom integer fields to your work item type — 'SEMI Score' plus 'S-Score', 'E-Score', 'M-Score', 'I-Score'. Add them to the work item form under a 'SHIFT' section.
- DOI Status field: add a custom picklist field named 'DOI Status' with values Green, Amber, Red, Not Tagged. Create a query in Azure Boards grouping by DOI Status filtered to the current sprint. Pin this as a dashboard widget.
- IIS work items: use a tag 'IIS' or a dedicated Area Path/Feature named 'IIS'. Use this to produce a separate throughput chart for IIS items, keeping it out of your delivery baseline.
- Throughput tracking: navigate to Boards > Analytics > Throughput. Switch granularity to Sprint, set window to last 10 sprints. Export to Excel for your Monte Carlo spreadsheet.
- Monte Carlo in ADO: use ActionableAgile for Azure DevOps, Nave, or export throughput data to an Excel Monte Carlo template.
- LGP pre-read and Node Health Card: use Azure DevOps Wiki. Create a wiki page per ACN with a standard template, linked from the team dashboard. The Governance Steward updates the decision log as a separate wiki page after each LGP session.
We use Linear. Does SHIFT work with it?
- SEMI scoring: add four custom properties to your issue type — S, E, M, I as number fields (1-3), plus a computed SEMI Score. Linear supports custom properties natively. Add these to your issue template so they are prompted at creation.
- DOI Status: add a custom select property (Green, Amber, Red). Use Linear's filter and grouping to view DOI distribution across active cycle issues.
- Throughput tracking: Linear's built-in cycle analytics show issue completion counts per cycle. Export to a spreadsheet for Monte Carlo inputs.
- IIS work: use a dedicated Label ('IIS') or a separate Linear project for IIS work within each cycle.
- Monte Carlo: Linear has no native Monte Carlo tool. Export throughput data and use ActionableAgile (which supports CSV import) or a spreadsheet template.
Linear is well suited to teams of 3 to 7 people. Above that size, or for multi-node coordination, Jira or Azure DevOps is the better choice.
What about Notion, Trello, or spreadsheet-only setups?
- Minimum viable toolset: a shared spreadsheet with two tabs — (1) Sprint throughput log, one row per sprint; (2) SEMI backlog, listing each item with S, E, M, I scores and composite.
- A shared document (Google Docs, Notion, or Confluence) per sprint for the Sprint DOI Map, tagged Green/Amber/Red, updated mid-sprint.
- A shared document per ACN for the Node Health Card (updated each sprint end) and per IIS sprint for Learning Cards.
- Notion: good for documentation artefacts (Node Health Cards, Learning Cards, Pattern Library, AI Responsibility Map, LGP pre-reads and decision log). Weak for sprint tracking and throughput analytics. Use Notion for documentation alongside a dedicated board tool for sprint management.
- Trello: usable for 3 to 5 person teams. Add SEMI Score as a custom field on cards, DOI Status as a label, IIS as a separate list. Not recommended above 5 people.
The tool matters far less than the discipline of updating it. A team that maintains a clean shared spreadsheet will get more value from SHIFT than a team that configures a full Jira setup and never updates the DOI Status field.
Team and Roles
We do not have a dedicated Governance Steward. Can the Delivery Lead cover both roles?
- In teams of 5 or fewer people, yes, with caution. Two conditions must be met: (1) the LGP pre-read is compiled and distributed 24 hours in advance without exception, and (2) the decision log is maintained and visible to all ACNs.
- If either condition is not being met consistently, the Governance Steward function is being neglected and must be given to someone else.
- The risk: the Delivery Lead's primary loyalty is to sprint delivery. When the two roles conflict, governance gets deprioritised. The most common failure mode is the LGP pre-read arriving the morning of the session — participants have no context and the session becomes a status report.
In teams above 5 people, do not combine these roles. The authority concentration creates a single point of failure for both operations and governance.
Can we rotate the Anchor role between team members?
- Rotating the Delivery Lead is encouraged. Rotating the Anchor is not recommended for ACNs in active delivery.
- The Anchor role requires deep familiarity with the capability domain's history, decisions made, trade-offs accepted, and strategic direction. This context cannot be effectively transferred in a single handover session.
- The appropriate version of role development: a future Anchor candidate shadows the current Anchor for a full cycle, participates in LGP sessions, and takes on specific Anchor accountabilities before formally rotating in. This is a development pathway, not a rotation schedule.
What do we do when the Anchor and Delivery Lead persistently disagree on priorities?
- This is a governance issue, not a personality issue. Resolve it at the LGP, not informally.
- Persistent disagreement usually means either the capability domain is not clearly enough defined (both roles are working from different mental models of what the ACN owns), or sprint capacity is being managed without shared understanding.
- Resolution path: bring the specific disagreement to the LGP as an Unblock item. The Governance Steward facilitates a decision that documents the explicit scope and capacity boundary.
Once the boundary is written down and ratified, the disagreement usually resolves because both parties are now working from the same explicit constraint rather than implicit assumptions.
Integration and Adoption
Our company runs SAFe at the programme level. Can one team adopt SHIFT without the whole organisation changing?
- Yes, and this is one of the most common adoption patterns. The Anchor participates in SAFe's PO Sync and ART events as the team's representative. PI objectives map directly to SHIFT Cycle Objectives. The DOI map is maintained internally.
- The team uses SEMI scoring internally and derives story point counts from historical throughput averages for external PI Planning reporting. These roles are kept cleanly separate.
- The LGP operates entirely internally. SAFe's ART Sync serves as the external governance input; the LGP handles internal cross-node decisions.
- After two or three cycles of demonstrably better throughput predictability, the evidence base for expanding SHIFT to other teams is significantly stronger.
Risk: SAFe PI Planning items arrive without SEMI scores. Introduce a Spec Review session immediately after PI Planning to score incoming PI objectives before they enter the sprint backlog.
We already have OKRs. Where does the DOI map overlap with what we already track?
- DOI is not a replacement for OKRs. It is the connection between your OKR system and sprint work.
- Mapping: Company OKRs → Alignment layer input. Team OKR (or the Key Result your team owns) → Cycle Objective in the DOI map. Sprint Contributions → the specific deliverables each sprint is committing to in service of that Key Result. DOI Health Score → mid-cycle, sprint-by-sprint view of whether delivery is contributing to the Key Result as planned.
- What SHIFT adds that OKRs alone do not: a continuous, sprint-level signal of whether execution matches intent, surfaced during the sprint when it can still be corrected — not at quarter end when it cannot.
How do we handle stakeholders who want predictable release dates but do not understand Monte Carlo?
- Most stakeholders do not need to understand Monte Carlo. They need a confident, honest answer to 'when will it be done?'
- Communication formula: 'Based on our delivery data from the last N sprints, we have 85% confidence this will be complete by [date]. Our most likely completion, based on current throughput, is [earlier date].'
- This gives two numbers: a planning date (P85, for roadmap commitments) and an optimistic date (P50 or P70, the team's working target).
- For stakeholders who push back on probabilistic language: 'It is the same logic as a weather forecast. We are saying that based on our actual delivery patterns, 85 times out of 100 it would be done by then. We will give you a heads-up the moment we see signals that put that date at risk.'
What is the most common reason SHIFT implementations fail in the first cycle?
- Failure mode 1: introducing too much at once. Teams attempt to run IIS, DOI, LGP, Monte Carlo, SEMI, AI-first standup, and 3-track retrospective in sprint 1. The cognitive overhead is too high. Sprint 1 becomes a meta-conversation about SHIFT instead of a delivery sprint. Follow the phased adoption plan strictly.
- Failure mode 2: SEMI scoring without honest disagreement. Teams go through the motions of scoring but socially align on scores rather than surfacing real disagreement. When two people give the same item S=1 and S=3, explore why — do not average. The value of SEMI is in the forced clarification. If SEMI scores are not causing any disagreement, the scoring is being done dishonestly.
- Failure mode 3: LGP pre-read not enforced. The pre-read is not ready, the session goes ahead anyway as a verbal status update, and the pattern sets in. Once LGP becomes a status report forum, it is very difficult to recover without an explicit reset. The Governance Steward must enforce the pre-read rule from the very first session. If it is not ready, postpone the session — do not hold it without it.
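The dimension-level disagreement check described in failure mode 2 can be turned into a simple pre-planning report: collect each scorer's dimension scores and flag the gaps. An illustrative sketch — the function name and the threshold of 2 are ours, chosen to match the S=1 vs S=3 example above:

```python
def semi_disagreements(scores_a: dict[str, int], scores_b: dict[str, int],
                       threshold: int = 2) -> list[str]:
    """Return the SEMI dimensions where two scorers differ by threshold
    or more -- these need a conversation before planning, not an average."""
    return [dim for dim in scores_a
            if abs(scores_a[dim] - scores_b.get(dim, scores_a[dim])) >= threshold]

# Two contributors score the same item; S differs by 2 -> discuss, do not average
alice = {"S": 1, "E": 2, "M": 1, "I": 2}
bob   = {"S": 3, "E": 2, "M": 2, "I": 2}
print(semi_disagreements(alice, bob))  # ['S']
```

Surfacing the gap mechanically removes the social pressure to converge: the tool names the disagreement so the team only has to explain it.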
SHIFT Agile Framework v2.0 · ricardominas.com/shift-agile