Method
SHIFT Agile Framework
Scalable, Hybrid, Iterative Framework for Teams.
SHIFT was built from the conditions modern teams actually face: distributed and hybrid work, governance that slows delivery rather than protecting it, delivery pressure that crowds out any intent to innovate, and the wide gap between quarterly OKRs and day-to-day sprint decisions. It is a framework for all four of those problems at once, and it is designed to augment whatever you are already running, not to replace it.
SHIFT stands for Scalable, Hybrid, Iterative Framework for Teams: a system designed to flex across team size, geography, working mode, and AI capability without losing the discipline that makes iteration meaningful.
The four failures SHIFT addresses
The Coordination Failure. When organisations have more than one team, most frameworks break down. Scrum of Scrums is a band-aid. SAFe adds so much ceremony that teams spend more time coordinating than delivering. Most organisations end up with an informal coordination layer of side conversations and management escalations that do not scale.
The Innovation Failure. Delivery pressure always wins over exploration. The people closest to the problems, the delivery team, are also the people best positioned to solve them. But they never get protected space to try. SHIFT makes exploration structurally mandatory, not aspirational.
The Alignment Failure. Teams set OKRs in January and discover in March that the sprint work had no measurable connection to them. Most frameworks do not have a mechanism for mid-sprint alignment checking. SHIFT does.
The AI Transition Failure. Teams adopt AI tools within frameworks designed before AI existed. Story points were never designed for a production function where one well-specified task produces ten times the output of a vague one. SHIFT was built knowing that AI-assisted work is the default, not the exception.
The four layers of SHIFT
Layer 1: Learning (continuous)
The foundational layer. Captures, synthesises, and routes learning back into the delivery system through three channels: operational learning, product learning, and capability learning. Key artefacts: AI-Augmented Retrospective output, IIS Learning Cards, pattern library updates, SEMI recalibration. Learning is Layer 1, not Layer 4, because everything else builds on it.
Layer 2: Delivery (sprint, 2-week)
Where execution happens. Every item entering a sprint must have a passing SEMI score. Blockers are classified within four hours. Throughput is tracked daily. The Delivery Lead maintains the Sprint DOI Map and runs the sprint ceremonies.
Layer 3: Governance (bi-weekly)
Cross-node risk management, dependency resolution, compliance decisions, and resource reallocation. Lean Governance Pods meet for a maximum of 75 minutes. They produce exactly three types of decisions: Unblock, Approve, or Defer. No status reports. No decisions outside these three types.
Layer 4: Alignment (quarterly, 12-week cycle)
Connects team delivery to organisational strategy. Sets Cycle Objectives using the DOI model. Reviews portfolio health at cycle end. Produces one output per cycle: the Cycle Objective Set with RAG status and DOI Health Scores. Mid-cycle calibration at week 6.
Core principles
People before process
Every structural choice in SHIFT is evaluated against whether it helps or burdens the people doing the work. Governance, tooling, and ceremony exist to serve teams, not the other way around.
Hybrid by design
Distributed and co-located contributors are treated as equal participants. Asynchronous-first communication is the default. Synchronous time is reserved for decisions and dialogue that genuinely require it.
Iterative and adaptive
Short cycles with structured reflection. SHIFT teams do not commit to large plans. They commit to learning loops that progressively sharpen direction.
Lean governance
Oversight and accountability without bureaucracy. Lean Governance Pods keep decisions moving at the pace the work demands, with clear ownership and minimal coordination overhead.
Innovation is part of delivery
Experimentation is not deferred to a future quarter. Innovation-Integrated Sprints build dedicated capacity for exploratory work inside the delivery rhythm so that learning does not compete with shipping.
Continuous alignment
Strategy does not live in a quarterly deck. Dynamic Objectives Integration keeps team-level work connected to organisational goals throughout every sprint, not just at planning.
Adaptive Collaboration Nodes (ACNs)
An Adaptive Collaboration Node is the primary delivery unit in SHIFT. The word 'node' is deliberate: it implies connectivity, not isolation. An ACN has clear internal structure, a defined scope of ownership, and explicit interfaces to other nodes.
An ACN owns a capability domain, not a feature list. It is responsible for the full vertical slice of work within that domain, from specification through testing to deployment. It owns outcomes, not outputs. An ACN forms around a capability domain when that domain requires more than two cycles of sustained delivery.
Node size runs from a minimum of 3 to a maximum of 9 people. The optimal size for full SHIFT operation is 5 to 7. Every ACN maintains an AI Responsibility Map: a one-page document defining which tasks are AI-assisted, which are AI-led with human review, and which are human-only. Every ACN publishes a Node Health Card after each sprint: throughput trend, DOI alignment status, IIS status, and open cross-node dependencies, readable in 90 seconds.
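As an illustration, the AI Responsibility Map is small enough to keep as a structured record. The sketch below is a hypothetical Python shape, not a prescribed schema; the task types and mode labels are assumptions for the example only.

```python
from enum import Enum

class OwnershipMode(Enum):
    AI_ASSISTED = "AI-assisted"          # human drafts, AI accelerates
    AI_LED = "AI-led with human review"  # AI drafts, a named human signs off
    HUMAN_ONLY = "human-only"            # no AI involvement permitted

# Hypothetical task types for illustration; each node defines its own.
ai_responsibility_map = {
    "unit test generation":    OwnershipMode.AI_LED,
    "production code changes": OwnershipMode.AI_ASSISTED,
    "incident communications": OwnershipMode.HUMAN_ONLY,
    "spec drafting":           OwnershipMode.AI_ASSISTED,
}

def review_required(task_type: str) -> bool:
    """AI-led work always carries an explicit human review step."""
    return ai_responsibility_map[task_type] is OwnershipMode.AI_LED
```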
ACNs dissolve when their capability domain is complete, when throughput signals indicate sustained delivery failure, or when team size drops below three without approved backfill. Dissolution takes two sprints. The dissolving node's Learning Cards remain in the shared pattern library permanently.
Lean Governance Pods (LGPs)
A Lean Governance Pod is SHIFT's answer to the governance tax problem. In most scaled frameworks, governance consumes 20 to 40 percent of senior contributor time with minimal delivery impact. LGPs reduce governance overhead to under 10 percent of any individual's time while maintaining decision quality and traceability.
An LGP is not a steering committee or an approval board. It is a decision-making forum with a defined scope, a time budget, and an explicit anti-bureaucracy mandate. Maximum 7 participants. 75 minutes per bi-weekly session. Three decision types only: Unblock (resolve or assign a delivery blocker), Approve (ratify prepared proposals), or Defer (assign an owner, a due date, and specify the exact information missing). Any item not fitting one of these three types does not belong in the LGP.
The pre-read is distributed 24 hours before every session. If an item is not in the pre-read, it is not discussed today. This single rule prevents the LGP from becoming a fire-fighting forum. Any item that appears in three consecutive pre-reads without resolution is automatically escalated to the Alignment layer.
Roles
SHIFT Anchor
The single accountable person for an ACN's capability domain outcomes, not outputs. Owns DOI connection, prioritisation decisions, IIS theme selection, and the node's AI Responsibility Map. Represents the node in LGP. Should spend 30 to 40 percent of their time on direct delivery work: an Anchor who attends only meetings is disconnected from delivery reality.
Delivery Lead
The operational heart of the ACN. Owns the sprint plan, SEMI compliance, Sprint DOI Map, the mid-sprint DOI check, the 4-hour blocker escalation SLA, the Node Health Card, retrospective facilitation, and Monte Carlo forecasting at sprint end. This is an operational role with delivery skin in the game, not a coaching role.
Node Contributors
The practitioners doing the delivery work. Expected to contribute to or write specs that meet the SEMI threshold before work enters the sprint, to own the human review step for all AI-led outputs they are responsible for, and to contribute to IIS themes with genuine engagement. Every contributor defines their personal AI workflow in the AI Responsibility Map.
Governance Steward
Responsible for the health of the governance system, not for making governance decisions. Compiles and distributes the LGP pre-read. Facilitates the bi-weekly LGP. Maintains the decision log (public, searchable, permanent). Tracks deferred items. Runs the three-strikes escalation. Owns the Maturity Model self-assessment process.
The SHIFT lifecycle
SHIFT runs in 2-week sprints grouped into 12-week cycles (six sprints per cycle). The cycle is the primary strategic alignment unit. The sprint is the primary delivery unit.
Within each sprint: Day 1 is the Spec Review Session. Days 1 through 10 carry the daily AI-First Standup (async-first, 15 minutes). Day 6 or 7 is the Mid-Sprint DOI Check-in. Day 10 is the Sprint Review (which includes the IIS Review when applicable), followed by the AI-Augmented Retrospective.
Within each cycle: Sprints 1 and 2 are standard delivery. Sprint 3 introduces IIS (15% capacity ring-fenced). Sprints 4 and 5 are standard delivery. Sprint 6 combines delivery, IIS, and the Cycle Portfolio Review. At the cycle end, the LGP re-validates objectives, reviews ACN composition, ratifies IIS themes for the next cycle, and runs the Maturity Model self-assessment.
Total ceremony overhead per person per sprint is approximately 6.7 hours, around 8% of available sprint capacity. This compares with Scrum at 12 to 16 hours and SAFe at 20 or more hours per person per sprint at scale.
Innovation-Integrated Sprints (IIS)
IIS is SHIFT's mechanism for sustaining exploratory work within a delivery-focused framework. The standard IIS allocation is 15% of sprint capacity per node. This is ring-fenced before sprint planning, not distributed from what is left over. Reducing the allocation below 10% requires LGP approval.
IIS differs from 20% time, hackathons, and innovation sprints in four ways: the time is structurally protected (not informal), outputs follow a defined format (Learning Cards), a promotion funnel connects good ideas to real investment (not just to a parking lot), and themes are selected based on DOI connection to cycle objectives (not personal preference).
An IIS theme must have three elements: a question (what are we trying to learn?), a connection (how does this relate to a Cycle Objective or known limitation?), and a testability statement (how will we know in 1.5 days whether this is worth pursuing?). The output is a Learning Card: what was done, what was learned, what was not learned, a signal (Positive, Neutral, or Negative), and a recommendation (Promote to Pilot, Continue Exploring, Archive, or Share Externally).
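A Learning Card is compact enough to capture as a structured record. The sketch below is an assumed Python shape covering the fields named above; the field names are illustrative, not a prescribed format.

```python
from dataclasses import dataclass
from typing import Literal

Signal = Literal["Positive", "Neutral", "Negative"]
Recommendation = Literal["Promote to Pilot", "Continue Exploring", "Archive", "Share Externally"]

@dataclass
class LearningCard:
    theme_question: str        # what were we trying to learn?
    objective_connection: str  # link to a Cycle Objective or known limitation
    what_was_done: str
    what_was_learned: str
    what_was_not_learned: str
    signal: Signal
    recommendation: Recommendation
```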
Promoted items move through a funnel: Learning Card, Pilot Sprint (one sprint, defined success criteria, standard delivery capacity), LGP Approval, then Cycle Portfolio Investment. A pilot that misses its success criteria is archived, not extended. Extensions require a new LGP approval.
Dynamic Objectives Integration (DOI)
Dynamic Objectives Integration keeps sprint delivery connected to strategic objectives in real time, not retrospectively. Every sprint backlog item is tagged Green (directly serves a Cycle Objective), Amber (indirectly supports one), or Red (no objective connection) during the Spec Review Session.
Red items are challenged at sprint planning: delete, defer, or explicitly reclassify with a documented connection. Items kept as Red (for example, compliance or technical debt work) are acknowledged and tracked. The DOI Health Score is calculated as the percentage of Green contributions out of total Sprint Contributions. A score of 80 or above is strong. Below 40 triggers escalation to the Alignment layer.
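A minimal sketch of the DOI Health Score calculation and the thresholds described above; the tag values and list shape are assumptions for the example.

```python
def doi_health_score(contributions: list[str]) -> float:
    """Percentage of sprint contributions tagged Green.

    `contributions` holds one RAG tag per sprint contribution,
    e.g. ["Green", "Amber", "Green", "Red"].
    """
    if not contributions:
        return 0.0
    green = sum(1 for tag in contributions if tag == "Green")
    return 100.0 * green / len(contributions)

score = doi_health_score(["Green", "Green", "Amber", "Green", "Red"])  # 60.0
if score >= 80:
    band = "strong"
elif score < 40:
    band = "escalate to Alignment layer"
else:
    band = "monitor"
```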
The Amber Trap is the most common DOI failure: teams keep items Amber indefinitely to avoid the visibility of Red. SHIFT enforces an automatic conversion: any Amber item that has not moved to Green within four calendar days becomes Red. The Delivery Lead cannot override this rule. Only the Governance Steward can, with documented justification.
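A minimal sketch of the automatic Amber-to-Red conversion, assuming a simple item record holding its RAG tag and the date it turned Amber; the record shape is hypothetical.

```python
from datetime import date, timedelta

AMBER_LIMIT = timedelta(days=4)  # calendar days, per the Amber Trap rule

def apply_amber_conversion(item: dict, today: date, steward_waiver: bool = False) -> dict:
    """Convert an Amber item to Red once it has sat Amber for four calendar days.

    `item` is a hypothetical shape: {"tag": "Amber", "amber_since": date(...)}.
    Only the Governance Steward may waive the conversion, with documented justification.
    """
    if item["tag"] == "Amber" and not steward_waiver:
        if today - item["amber_since"] >= AMBER_LIMIT:
            item["tag"] = "Red"
    return item
```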
When an item goes Red, the protocol is: declare it within four hours (notification to the Anchor, Governance Steward, and adjacent node Delivery Leads), give an initial response within four hours (resolve within the node or proceed to a resolution meeting), and hold the resolution meeting within 24 hours with a named action, owner, and deadline. All Red items from a cycle are reviewed at the Cycle Portfolio Review for pattern analysis.
AI-first teams and Spec-Driven Development
AI-first teams operate with a fundamentally different production function. When AI tools can produce a working implementation from a clear specification in hours, the constraint shifts from coding capacity to specification quality. Vague requirements produce unreliable output regardless of the AI tools involved.
Spec-Driven Development treats the specification as the primary engineering artefact. A complete spec contains: a problem statement, acceptance criteria (each independently testable), edge cases, integration constraints, AI guidance notes (where AI output is expected and what review is required), and a definition of done. This spec is reviewed, challenged, and signed off before any implementation begins.
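A hedged sketch of a spec as a structured record, using the fields listed above. The Python shape and the readiness check are illustrative assumptions, not part of the framework definition.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    problem_statement: str
    acceptance_criteria: list[str]  # each independently testable
    edge_cases: list[str]
    integration_constraints: list[str]
    ai_guidance_notes: str          # where AI output is expected and what review is required
    definition_of_done: str
    signed_off: bool = False        # reviewed and challenged before implementation begins

def ready_for_implementation(spec: Spec) -> bool:
    """Illustrative gate: a spec enters a sprint only once signed off with testable criteria."""
    return spec.signed_off and bool(spec.acceptance_criteria)
```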
SDD changes what sprint planning is for. Planning is no longer primarily about task decomposition and effort estimation. It is about spec quality review. A sprint is ready to begin when every item has a spec that any contributor, human or AI, could implement without interpretation. SDD changes retrospectives too: the first question is not 'why did delivery take longer than expected?' but 'where did the spec fail us?'
Team sizing for AI-first work
AI-First Core Team (1-4 people)
Appropriate for well-scoped delivery work where specifications are clear and the domain is understood. These teams move fast, have minimal coordination overhead, and sustain high throughput with tight feedback loops. Not suitable for discovery work, cross-functional stakeholder alignment, or novel domains where specifications require significant exploration.
Standard ACN (4-8 people)
The default SHIFT configuration for most delivery workstreams. Supports the full range of roles, meaningful Spec Review sessions, and sufficient perspective diversity for DOI alignment to surface useful tensions. Full SHIFT operation is viable at this size. Five to seven people is the preferred range.
Coalition ACN (8-15 people)
Used for complex programmes where multiple workstreams need to maintain coherence. At this size the Anchor and Governance Steward become full-time, and the LGP cadence increases. Split into sub-nodes before reaching fifteen people. Coalition ACNs should coordinate through an LGP, not through informal channels.
The SEMI model
The SEMI model is SHIFT's estimation and sprint-readiness system. It replaces story points. Each work item receives four scores on a 1 to 3 scale before it can enter a sprint. The composite score (minimum 4, maximum 12) determines sprint entry eligibility, not calendar duration.
S is Specification Quality: how clear, complete, and testable is the specification? Score 1 if an engineer can start without clarifying questions. Score 3 if acceptance criteria are missing or untestable. E is Effort Uncertainty: has the team done this before? Score 1 for known work with precedent. Score 3 for novel work where the approach is unclear. M is Multi-system Impact: how many external systems, teams, or dependencies does this touch? Score 1 for fully contained work. Score 3 for work touching multiple systems with compliance implications. I is Implementation Confidence: how confident is the team in the chosen approach? Score 1 for a proven pattern. Score 3 when a spike may be needed before implementation begins.
Sprint entry rules: SEMI 4 to 6 (Green band) enters the sprint immediately. SEMI 7 to 8 (Amber band) enters only with a documented mitigation for the highest-scoring dimension, agreed by the Delivery Lead and Anchor. SEMI 9 to 10 (Red band) is returned to the Anchor for specification improvement before it can be re-scored. SEMI 11 to 12 (Black band) must be decomposed into child items.
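A minimal sketch of the SEMI composite and its sprint-entry bands as described above; the band messages are paraphrased for brevity.

```python
def semi_band(s: int, e: int, m: int, i: int) -> str:
    """Map a SEMI composite (each dimension scored 1-3) to its sprint-entry band."""
    for dim in (s, e, m, i):
        if dim not in (1, 2, 3):
            raise ValueError("each SEMI dimension is scored 1, 2 or 3")
    composite = s + e + m + i  # minimum 4, maximum 12
    if composite <= 6:
        return "Green: enters the sprint immediately"
    if composite <= 8:
        return "Amber: needs a documented mitigation for the highest-scoring dimension"
    if composite <= 10:
        return "Red: returned to the Anchor for specification improvement"
    return "Black: must be decomposed into child items"

semi_band(1, 2, 1, 2)  # "Green: enters the sprint immediately"
```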
SEMI scores tracked across sprints reveal systemic patterns: consistent S=3 signals specs are written too late. Consistent E=3 on one work type signals the team is not building pattern familiarity. Consistent M=3 signals the node's capability domain is too wide. Consistent I=3 signals a capability gap that an IIS theme should target.
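Those systemic patterns can be surfaced mechanically. The sketch below assumes items retain their per-dimension scores across sprints; the 50% threshold is an illustrative choice, not part of SHIFT.

```python
from collections import Counter

def recurring_threes(scored_items: list[dict], threshold: float = 0.5) -> list[str]:
    """Flag SEMI dimensions that score 3 on at least `threshold` of recent items.

    `scored_items` is a hypothetical shape: [{"S": 3, "E": 1, "M": 2, "I": 1}, ...],
    typically the items from the last few sprints.
    """
    if not scored_items:
        return []
    counts = Counter()
    for item in scored_items:
        for dim in ("S", "E", "M", "I"):
            if item[dim] == 3:
                counts[dim] += 1
    signals = {
        "S": "specs are being written too late",
        "E": "the team is not building pattern familiarity",
        "M": "the node's capability domain may be too wide",
        "I": "a capability gap worth targeting with an IIS theme",
    }
    return [signals[d] for d, n in counts.items() if n / len(scored_items) >= threshold]
```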
Forecasting: throughput and Monte Carlo
Velocity-based forecasting collapses in AI-first teams. A team's effective output can double between sprints as prompting skills improve or new tooling is adopted. Effort-weighting becomes noise. SHIFT uses throughput: counting the number of work items completed per sprint, regardless of estimated size. An item is complete when it meets all acceptance criteria, has passed its required review step (including human review for AI-assisted outputs), and is deployed or integration-tested.
For release date forecasting, SHIFT uses Monte Carlo simulation. The team's historical throughput distribution is sampled across thousands of simulated completions to produce a probability distribution of outcomes. The output is communicated in probability bands: P70 for sprint goal-setting and internal commitment, P85 for stakeholder commitments, P95 for contractual commitments. 'Based on our last 10 sprints, we have 85% confidence this will be complete by 14 March' is more honest and more useful than a single predicted date.
Monte Carlo requires a minimum of 8 sprints of internal throughput data before it replaces reference class benchmarks. Before that, teams use industry-baseline throughput distributions, blended with actual data from sprint 3 onwards, and clearly flagged to stakeholders as external estimates rather than internal forecasts.
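A minimal sketch of the throughput-sampling Monte Carlo approach described above, using only Python's standard library. The throughput history and backlog size in the usage line are illustrative numbers, and a production model would also handle blending with reference class data.

```python
import random

def monte_carlo_forecast(throughput_history: list[int], items_remaining: int,
                         simulations: int = 10_000, seed: int = 1) -> dict[str, int]:
    """Forecast the number of sprints needed to finish `items_remaining` items.

    Each simulation repeatedly samples a sprint's completed-item count from the
    team's own throughput history until the backlog reaches zero; the resulting
    distribution of sprint counts is read at SHIFT's probability bands.
    """
    history = [t for t in throughput_history if t > 0]  # guard against zero-throughput sprints
    rng = random.Random(seed)
    outcomes = []
    for _ in range(simulations):
        remaining, sprints = items_remaining, 0
        while remaining > 0:
            remaining -= rng.choice(history)
            sprints += 1
        outcomes.append(sprints)
    outcomes.sort()
    def at_band(p: float) -> int:
        return outcomes[min(int(p * simulations), simulations - 1)]
    return {"P70": at_band(0.70), "P85": at_band(0.85), "P95": at_band(0.95)}

# Illustrative numbers only: 10 sprints of history, 42 items left in the release scope.
monte_carlo_forecast([6, 8, 7, 9, 5, 8, 7, 10, 6, 8], items_remaining=42)
```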
Ceremonies
Spec Review Session
Day 1. 45 minutes. The sprint backlog is reviewed ordered by SEMI score, highest first. For each Amber item (7-8), the team identifies the highest-risk dimension and agrees a mitigation action completable by day 3. Red items are removed and returned to the Anchor. Output: a sprint backlog where every item has a confirmed SEMI score and every Amber item has a documented mitigation.
AI-First Daily Standup
15 minutes. Async-first. Three-part format per contributor: progress signal (where is the work against its acceptance criteria?), AI workflow note (where is AI output meeting or missing expectations?), and blocker flag. Blockers are classified after standup: internal, cross-node, or external. Classification triggers the appropriate escalation.
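A hedged sketch of an async standup entry as a structured record; the field names are assumptions drawn from the three-part format above, not a prescribed template.

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class StandupEntry:
    contributor: str
    progress_signal: str   # where the work stands against its acceptance criteria
    ai_workflow_note: str  # where AI output is meeting or missing expectations
    blocker: Optional[str] = None
    # classified after standup; the classification determines which escalation path fires
    blocker_class: Optional[Literal["internal", "cross-node", "external"]] = None
```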
Mid-Sprint DOI Check-in
Day 6 or 7. 30 minutes. Delivery Lead and Anchor review the Sprint DOI Map RAG status. Amber items checked: is the recovery action on track? If not, convert to Red. Capacity check: can the node complete its committed contributions at current throughput? Named actions, named owners, dated.
Sprint Review
Day 10. 60 minutes including IIS Review. Anchor opens with a one-sentence summary: was the sprint goal achieved? Each contributor demonstrates completed work against acceptance criteria. No slide decks. Stakeholder feedback is categorised immediately. The final 20 minutes are the IIS Review: Learning Cards presented, promote/continue/archive decided in the room.
AI-Augmented Retrospective
Day 10. 60 minutes across three tracks. Pre-ceremony: the AI retrospective agent synthesises throughput trend, SEMI distribution, DOI Health Score trend, and recurring themes from the last four retrospectives. Track 1 (Delivery System): what is the single biggest friction point in how we deliver? Track 2 (Collaboration and AI): where did AI tooling help or slow us down? Track 3 (Learning and Growth): what did we learn that we should not lose? Each track produces named actions with owners and sprint deadlines.
Cycle Portfolio Review
End of Sprint 6. 90 minutes. Governance Steward presents the Cycle DOI Summary. Each Anchor presents their Cycle Objective outcomes (5 minutes each, no blame). IIS Portfolio reviewed. Next cycle's Objectives and IIS themes set. ACN formation or dissolution decisions made. Maturity Model self-assessment run.
Integrating with other frameworks
SHIFT + Scrum
ACN replaces the Scrum team. Anchor replaces the Product Owner (broader outcome accountability). Delivery Lead replaces the Scrum Master (operational, not coaching). Spec Review replaces Backlog Refinement. SEMI replaces story points. Monte Carlo replaces velocity. IIS is added every cycle. DOI is added to sprint planning. LGP replaces Scrum of Scrums. Migration over eight sprints: introduce one layer at a time.
SHIFT + Kanban
SHIFT adds sprint time-boxing as a planning and review cadence without disrupting Kanban flow. DOI adds strategic alignment to flow items. IIS is a dedicated innovation swim lane with its own WIP limit. SEMI is applied before items enter the WIP queue. Kanban cycle time data feeds directly into SEMI E-dimension calibration, creating a virtuous loop.
SHIFT + SAFe
SHIFT reduces SAFe ceremony overhead by approximately 40% while preserving alignment outcomes. Iteration Planning (4 hours) becomes Sprint Planning and Spec Review (2.25 hours). ART Sync is absorbed into the LGP. The IP Sprint becomes IIS, integrated rather than separate. Inspect and Adapt is replaced by the Cycle Portfolio Review. Expected saving: 21 to 30 hours per contributor per 12-week PI.
SHIFT + LeSS
LeSS feature teams become ACNs with explicit outcome ownership. LGPs provide lightweight coordination where LeSS deliberately removes coordination roles, bridging the transition for organisations not yet ready for full LeSS. DOI distributes strategy alignment to sprint level, complementing the overall Product Owner. IIS provides structured innovation capacity that LeSS does not define.
SHIFT + OKRs
DOI is the operational bridge between OKRs and sprint delivery. Key Results become DOI Sprint Contribution outcome signals. The Cycle Objective maps to the Team OKR. The DOI Health Score is a leading indicator for Key Result progress, surfacing misalignment six weeks before the quarterly review rather than at it. The Cycle Portfolio Review replaces the quarterly OKR retrospective.
AI-first mindset shifts
From estimation to specification quality
Time spent debating story point estimates is time not spent clarifying what needs to be built. A good spec is worth more than an accurate estimate. Invest in specification quality; let throughput data handle forecasting.
From velocity to throughput
Velocity measures effort-weighted output. In teams where AI can produce ten times the output on a well-specified item versus a vague one, effort weighting becomes noise. Count items completed. Track the distribution. Use Monte Carlo.
From heroics to system quality
AI tools make the quality of the specification, the tooling, and the review process the binding constraint, not the individual. Team-level system quality replaces individual brilliance as the primary performance lever.
From done to verified
AI-generated output needs rigorous verification. The definition of done must include explicit review steps: does the output match the spec, pass the edge cases, and behave correctly in integration? Done means verified.
From synchronous planning to asynchronous alignment
If the spec is clear, most planning questions resolve themselves before the meeting. AI-first teams do their best alignment work asynchronously, leaving synchronous time for spec ambiguity resolution, DOI escalations, and retrospective inquiry that needs human nuance.
From AI as tool to AI as collaborator
A tool is used. A collaborator is directed, reviewed, and held to a standard. Node Contributors direct AI collaborators, review their outputs, and own the results. The AI Responsibility Map makes this accountability explicit for every task type in the node.
SHIFT Maturity Model
The SHIFT Maturity Model is a navigation tool, not a certification programme. Teams use it to understand where they are, what to focus on next, and what good looks like at each level. It is run at cycle end, facilitated by the Governance Steward, and takes 30 minutes.
Level 1, Established Foundation: ceremonies are running, SEMI scoring is applied before sprint entry, throughput is being tracked, the ACN structure is defined, and every contributor owns the review step for their AI-assisted outputs.
Level 2, Aligned Delivery: the DOI map is live and maintained, at least one Red item has been declared and resolved in the current cycle, IIS is running with Learning Cards as output, the LGP pre-read is distributed 24 hours before every session, and throughput data from the last six or more sprints is available.
Level 3, Adaptive Intelligence: Monte Carlo forecasting from internal data is in use and communicated to stakeholders in probability bands, at least one IIS Learning Card has been promoted through the full funnel to a cycle investment, the pattern library has at least ten entries and is actively consulted, and retrospective actions are completed at a rate above 70%.
Level 4, System Contribution: pattern library content is being shared with other teams, AI agent roles from the node are being used or adapted by other nodes, SEMI calibration is accurate enough to rarely produce surprise scope overruns, and a new team member can contribute meaningfully within five days using documented materials.
Getting started
SHIFT is adopted in four phases across the first nine sprints. Phase 0 (one week before Sprint 1): map existing teams to ACN candidates, assign roles, write the capability domain in one sentence, draft the AI Responsibility Map, and SEMI-score the top 20 backlog items. Sprint 1 planning selects only items scoring 6 or below.
Phase 1 (Sprints 1-3): introduce the AI-first standup and throughput tracking immediately. Add the Spec Review ceremony in Sprint 2. Introduce IIS at 10% capacity and the 3-track retrospective in Sprint 3. Do not yet introduce DOI, LGP, or Monte Carlo. Milestone: Maturity Level 1.
Phase 2 (Sprints 4-6): introduce Cycle Objectives and the DOI map in Sprint 4. Stand up the LGP and run the Red Item Protocol for the first time in Sprint 5. Run the first Cycle Portfolio Review and calculate the first DOI Health Score in Sprint 6. Milestone: Maturity Level 2.
Phase 3 (Sprints 7-9): introduce Monte Carlo forecasting using reference class data and communicate the first P85 forecast to a stakeholder. Retire reference class data at Sprint 8 (8 sprints of internal data available). Evaluate Learning Cards for pilot promotion. By Sprint 9, full SHIFT operation is the target. Milestone: Maturity Level 3.
How Ricardo works with SHIFT
Ricardo uses SHIFT in transformation engagements where organisations are navigating the move from traditional Agile to hybrid, distributed, or AI-first working. The framework provides enough structure to align stakeholders and enough flexibility to adapt to context.
Engagements typically begin with an ACN design workshop, a Lean Governance Pod setup session, and a two-cycle pilot with a willing team before broader rollout. The downloadable guide provides the full framework documentation, ceremony facilitation notes, SEMI scoring reference tables, and the Monte Carlo forecasting model Ricardo uses with clients.