AI & Product Ownership
The product owner in an AI-first team: new responsibilities, new tools, and a different relationship with speed
I have been spending a lot of time with product owners lately, and the conversation has shifted. A year ago, the questions were about whether AI would automate their role away. Today the questions are more specific and more urgent: how do I write a spec that an AI agent can actually use? How do I run discovery when the development team is shipping code before I finish the first draft? How do I maintain product coherence when three AI agents are building simultaneously in different parts of the codebase? The Product Owner role is not disappearing. It is becoming more consequential, faster, and significantly more demanding in a specific direction: the quality of the specification. This article is a practical account of what is changing, what it requires, and what tools are worth your attention.
What actually changed
The fundamental shift is this: in an AI-first development team, the specification has replaced the conversation as the primary interface between the product owner and the development process. In a traditional team, a PO could write a rough user story and rely on the developer to fill the gaps through questions, standups, and informal alignment. The developer's judgment and experience were a buffer between an imprecise requirement and a working product.
AI coding agents do not have that buffer. Claude Code, Cursor, Windsurf, and GitHub Copilot Workspace are extraordinarily capable at implementing what they are given. They are not capable of inferring what you meant but did not say. A vague requirement produces a technically correct but functionally wrong implementation with remarkable speed and confidence. The correction spiral that follows (re-prompting, reworking, explaining what you actually wanted) costs more time than a well-written spec would have.
According to O'Reilly's 2025 analysis of agentic development, 'writing clear specs becomes as important as writing code, since the AI will faithfully implement whatever instructions it is given, and only those instructions.' This is the sentence every product owner needs to absorb. The spec is now the product. Everything downstream of it (the code, the tests, the architecture) is a function of the quality of what you wrote before the first prompt was run.
AI agents do not fill gaps with judgment. They fill gaps with plausible-sounding code. The product owner's ability to write a precise specification is now a direct determinant of product quality.
Three ways the PO role is shifting
The change is not uniform. It is playing out across three distinct dimensions, each requiring a different adaptation.
From backlog owner to spec architect
User stories used to be conversation starters
In AI-first teams, they are executable instructions
Vague acceptance criteria produce confidently wrong code
The PO who writes precise specs unblocks the whole team
The PO who writes vague specs creates a rework loop
New skill: structured, testable requirement writing
From PRD writer to prototype maker
Figma Make and Google Stitch let POs prototype in hours
A working prototype replaces 10 pages of written requirements
60% of Figma files are now created by non-designers
Prototypes as specs eliminate most interpretation gaps
Stakeholder alignment happens around something tangible
New skill: rapid visual prototyping without a designer
From sprint planner to keeper of strategic coherence
AI agents move faster than sprint planning cadences
Teams ship features before the next backlog refinement
The PO's role shifts toward vision coherence
Keeping 'why we are building this' visible across the team
Representing the user in a team that can now outrun them
New skill: maintaining coherence at speed
These shifts are not happening in sequence. Many product owners are experiencing all three simultaneously, which is what makes the transition genuinely hard. The tooling, the process, and the required mindset are all changing at once, and most organisations are not providing the support or the time to develop the new skills deliberately.
What AI coding tools actually need from you
Claude Code, Cursor, Windsurf, and GitHub Copilot Workspace are the tools most development teams are working with in 2026. Understanding how they operate is essential context for writing specifications that work with them rather than against them.
These tools operate on context. The more precise and complete the context, the better the output. They read your spec, your codebase conventions, and your stated constraints, and then they implement. What they cannot do is ask a clarifying question in the way a developer would. Some agents support a planning mode, where they propose an approach before generating code, but even this is only as good as the instructions that preceded it.
The practical implication for product owners is that the old approach to writing requirements, which assumed a developer would interpret, question, and fill in the blanks, does not transfer. A specification written for an AI agent needs to be complete before implementation starts, not completed through iteration.
The story that worked for human developers
As a user, I want to filter search results
So that I can find relevant items more quickly
Acceptance criteria: Filters work correctly
A developer reads this and asks questions
Gaps are filled through conversation and judgment
Works in a team with strong shared context
The spec that works for AI agents
Filter panel visible on results page, desktop and mobile
Filter by: category (multi-select), date range, status (single-select)
Selecting a filter updates results without page reload
Active filters shown as removable chips above results
Clear all button removes all active filters
URL reflects active filters (shareable/bookmarkable state)
Zero results state shows 'No results for these filters' with reset option
The second spec is not longer because the task is more complex. It is longer because the thinking has been done upfront. The edge cases have been considered. The agent has what it needs to build correctly on the first pass. Addy Osmani's 2025 analysis of spec quality found that well-structured specifications achieve 95% or higher first-pass accuracy, compared to significantly lower rates for vague requirements.
A useful test before running any agent: could a competent junior developer implement this specification without asking a single question? If the answer is no, the specification is not ready for an AI agent either.
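The difference is easiest to see when a criterion is precise enough to be executable. As a purely illustrative sketch (the function names and filter keys below are hypothetical, not from any specific codebase), here is the 'URL reflects active filters' criterion expressed as testable behaviour:

```python
from urllib.parse import urlencode, parse_qs

def filters_to_query(filters: dict) -> str:
    """Serialise active filter state into a shareable query string.
    Multi-select values (e.g. category) are joined with commas;
    empty selections are omitted entirely."""
    params = {}
    for key, value in filters.items():
        if isinstance(value, (list, tuple)):
            if value:
                params[key] = ",".join(value)
        elif value:
            params[key] = value
    return urlencode(params)

def query_to_filters(query: str) -> dict:
    """Restore filter state from a bookmarked URL's query string.
    Assumes 'category' is the only multi-select filter."""
    filters = {}
    for key, values in parse_qs(query).items():
        filters[key] = values[0].split(",") if key == "category" else values[0]
    return filters
```

A round trip through these two functions is exactly the kind of check an agent can generate a test for, because the spec said what 'shareable/bookmarkable state' means rather than leaving it to interpretation.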
The AI-augmented product lifecycle
The traditional discover, define, design, develop, and deliver cycle has not been replaced. It has been compressed and partially automated. Understanding where AI helps, where it accelerates, and where it creates new risks is essential for any PO trying to manage a team that can now ship code faster than the organisation can absorb it.
Discover
- AI synthesises research and interview transcripts
- Discovery compressed by up to 75% (Miro, 2025)
- AI surfaces patterns humans miss in large data sets
- Human interpretation of what matters is still required
Define
- Notion AI and Linear AI draft PRDs automatically
- Generates acceptance criteria and flags contradictions
- Identifies missing requirements before dev starts
- PO shifts from drafting to editing and precision
Design
- Figma Make and Google Stitch generate prototypes from text
- Clickable, data-connected prototype in hours
- 60% of Figma files now created by non-designers
- Prototypes replace lengthy specification documents
Develop
- Claude Code, Cursor, Windsurf implement specs as code
- 50%+ increases in code output reported by teams
- Quality proportional to specification clarity
- The spec is now the product — write it precisely
Deliver
- AI monitors, generates release notes, surfaces anomalies
- Agentic engineering compressing cycles from weeks to hours
- New risk: shipping faster than users can adapt
- PO is the check on whether speed serves the user
The most significant practical implication of this compression is that the feedback loop has shortened dramatically. An idea explored on Monday can be prototyped by Tuesday, specified by Wednesday, built by Thursday, and in front of users by Friday. This is not hypothetical. Teams using AI-first workflows are reporting exactly this cadence for well-scoped features.
The risk is equally real. Traditional agility programmes stalled because organisations could not absorb change fast enough. AI-augmented development can generate change ten times faster than those programmes could. The PO is the primary check on whether the speed is serving the user or simply accelerating the production of the wrong thing.
AI can generate working code ten times faster than before. The question is whether the product owner has thought ten times more carefully about what to build.
AI tools built for product owners
The tooling available to product owners in 2026 has expanded significantly. The six tools below represent the most impactful additions to the PO toolkit, each addressing a different part of the product lifecycle.
Figma Make
Generate interactive prototypes from natural language
Connect to real data sources and design systems
60% of Figma files now created by non-designers
Prototypes replace PRDs as the primary alignment artefact
April 2026: Make kits connect to real component libraries
Google Stitch
Free AI-native UI prototyping from Google Labs
Multi-screen flows from a single text prompt
Exports to React, Vue, Flutter, SwiftUI, and more
Auto-generates next screens based on user interaction logic
350 generations per month on free tier (April 2026)
Notion AI
Drafts full PRDs from a single sentence
Summarises user interviews and research transcripts
Flags dependencies and contradictions across initiatives
September 2025: AI Agents that execute, not just suggest
Connected roadmaps with context-aware scheduling
Linear AI
AI triage: auto-assigns issues, labels, and teams
Agents authored 25% of new issues in Q4 2025
Semantic search across tickets, feedback, and support
Daily and weekly AI summaries in inbox or audio
Installed in 75% of Linear enterprise workspaces
Loom AI
Auto-generates titles, summaries, and action items
Removes pauses and filler words automatically
Transcripts and captions in 50+ languages
Datasite cut 4,000 meetings in five months using Loom
Ideal for bug reports, demos, and stakeholder reviews
Jira AI (Atlassian Intelligence)
Natural language to JQL query conversion
AI summarises comment threads and surfaces decisions
Rovo AI agent creates work items from any tool
Sentiment analysis on customer tickets
Triage automation: context and request type suggestions
A note on tool overload: ProductPlan's 2025 State of Product Management report found that 48% of product teams say they need fewer tools, not more. The value of these tools comes from integrating two or three of them deeply, not from adopting all six. A PO using Figma Make for prototyping, Notion AI for specifications, and Linear AI for tracking will outperform one who has accounts on every platform but uses none of them with depth.
Technical vs. non-technical product owners: what actually differs
The question of whether technical background matters for product ownership has been debated for years. AI-first development has made the answer more nuanced, not simpler.
What changes, and what new risks appear
Can review AI-generated code and catch architectural drift
Can write more precise technical constraints in specs
Can participate in agent configuration and tooling decisions
Risk: over-investing in implementation rather than strategy
Risk: mistaking speed for quality in AI-generated outputs
Advantage: credibility and depth in the most technical conversations
What changes, and what new opportunities appear
Figma Make and Stitch remove the design dependency
Notion AI reduces the writing overhead of specification
AI can explain technical outputs in plain language
Risk: accepting AI-generated specs without interrogating them
Risk: losing influence when the team moves faster than the PO
Advantage: user focus and strategic clarity uncontaminated by implementation detail
The consensus from multiple 2025 analyses, including ProductBoard, Tealhq, and ProductSchool, is that AI literacy is now table stakes for both types of PO. This does not mean coding ability. It means understanding how AI models process specifications, what makes a prompt effective, how agents fail, and what the limits of current tools are. A non-technical PO who develops this literacy can lead an AI-first team effectively. A technical PO who does not is still ahead, but the gap is closing.
The differentiator in both cases is the same skill: the ability to write precisely, to think clearly about what the user actually needs before a line of code is written, and to hold the 'why' visible when the team is moving fast. That skill is not technical or non-technical. It is a discipline of thought.
AI literacy is now table stakes. But the real differentiator remains the same as it always was: the ability to think clearly about what the user needs before anyone starts building.
What the numbers say
The data from 2025 and early 2026 is consistent across multiple research programmes: AI is dramatically accelerating development but creating new quality and coherence risks that product leadership must actively manage.
Speed and its consequences (DORA, 2026)
Epics completed per developer: +66%
Time in PR review: +441%
Incidents per pull request: +243%
Teams are shipping far more, far faster
But review burden and incident rates have risen sharply
Speed without governance creates downstream fragility
How POs are already using AI
96% of product managers use AI frequently
AI can automate up to 80% of routine PM tasks
48% say they need fewer tools, not more
The POs using AI most effectively go deep on a few tools
Rather than adopting every available tool
Depth of use matters more than breadth
What AI does to MVP timelines
Traditional MVP: 5 to 9 months, $100k to $250k
AI-augmented MVP: 6 to 12 weeks, $30k to $80k
Discovery research compressed by up to 75%
Source: Akraya agentic engineering analysis, 2026
MVPs fail when AI is treated as a shortcut
They succeed when AI is engineered as a capability
The DORA 2026 figures deserve particular attention. A 243% increase in incidents per pull request is not a sign that AI coding tools are poor. It is a sign that organisations are shipping faster than their quality and review processes were designed to handle. The product owner is not responsible for code review, but they are responsible for the pace of delivery. These numbers are a direct argument for POs who govern the flow of work, not just the content of it.
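Governing the flow of work can be made concrete with a toy check. This is a sketch only: the baseline and tolerance values are invented for illustration, and any real threshold should come from your own team's history.

```python
def review_delivery_pace(prs_merged: int, incidents: int,
                         baseline_rate: float, tolerance: float = 0.25) -> str:
    """Compare the current incident-per-PR rate against an agreed baseline.
    Intended as a flow-governance signal for the PO,
    not a substitute for code review."""
    if prs_merged == 0:
        return "no PRs merged: nothing to assess"
    current_rate = incidents / prs_merged
    if current_rate > baseline_rate * (1 + tolerance):
        return "slow the intake: incident rate has outrun the baseline"
    return "pace is holding: incident rate within tolerance"
```

The point of the sketch is the shape of the decision, not the numbers: the PO agrees a baseline with engineering, reviews it on a cadence, and slows the intake of new work when the rate drifts past it.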
What the best product owners are doing differently
Across the teams I have worked with where AI-first development is going well, the product owners share a set of practices that are distinct from what worked in traditional Agile environments. These are not theoretical recommendations. They are observed behaviours from teams that are managing the compression well.
What effective POs do in AI-first environments
Write the prototype before the spec: use Figma Make or Stitch to create a clickable artefact that the specification then describes precisely
Define done before starting: acceptance criteria written and reviewed before any agent session opens
Attend planning mode: for significant features, review the agent's proposed approach before code generation begins
Set a coherence review cadence: a weekly slot to read across what has been built and check that it still forms a coherent product
Own the 'why' explicitly: write a one-paragraph user and business context for every significant feature, visible to the whole team
Build AI literacy actively: understand enough about how the tools your team uses work to know when outputs are drifting from intent
Slow the pace when the quality metrics move against you: the DORA data is a signal, not a badge of honour
The most common failure mode I see is the product owner who treats AI-first development as permission to write less, define less, and be less present in the detail. The opposite is required. The PO who steps back because 'the AI handles it now' is the PO whose team ships the wrong product very quickly.
Warning signs your specs are not keeping pace
These signals appear regularly in teams where AI adoption has outrun the quality of product leadership. None of them are catastrophic in isolation. Together they describe a pattern that, if unaddressed, produces a codebase that is technically impressive and functionally incoherent.
Spec quality health check
Developers are asking for clarification after agent sessions complete, not before they start
Features are being built that are technically correct but wrong in ways that are hard to explain
The product feels faster but less coherent than six months ago
Acceptance criteria are being written after the feature is built to match what was shipped
No one in the team can articulate what the product does in two sentences
Stakeholders are surprised by what is being demonstrated in sprint reviews
The team is shipping more but you are less confident it is the right more
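For teams that want to revisit this health check on a cadence, the checklist reduces to a trivially simple score. The thresholds below are invented for illustration, not a validated instrument:

```python
WARNING_SIGNS = [
    "clarification requested after agent sessions complete, not before",
    "features technically correct but functionally wrong",
    "product faster but less coherent than six months ago",
    "acceptance criteria written after the build to match what shipped",
    "no one can articulate the product in two sentences",
    "stakeholders surprised in sprint reviews",
    "shipping more, with less confidence it is the right more",
]

def spec_health(flags: list[bool]) -> str:
    """Map the number of warning signs currently true to a rough reading.
    flags[i] corresponds to WARNING_SIGNS[i]."""
    score = sum(flags)
    if score == 0:
        return "healthy: specs are keeping pace with delivery"
    if score <= 2:
        return f"{score} warning sign(s): watch the pattern"
    return f"{score} warning signs: spec quality is lagging the team's speed"
```

Run it honestly in a weekly coherence review and track the trend rather than the absolute number; the pattern over a quarter tells you more than any single reading.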