Skip to main content

AI transformation

AI transformation: what it actually takes, and what nobody prepares you for

Most organisations I work with are somewhere in the middle of what they are calling an AI transformation. A few pilots are running. Someone bought a platform licence. The innovation team gave a presentation with a roadmap. And underneath all of it, there is a layer of unaddressed anxiety, about data, about security, about jobs, about what this actually means for the people doing the work. This article is my attempt to address that layer directly.

April 202613 min read

What AI transformation actually looks like, versus what organisations think it looks like

The first thing I tell every leadership team at the start of an AI transformation conversation is this: what you are describing is not a project. It is a shift in how work gets done, and it will touch every function, every role, and every assumption you have about what your organisation is for.

Most organisations approach AI transformation in three recognisable waves. The first is automation: using AI to do repetitive, rule-based tasks faster and at lower cost. Summarising documents. Routing tickets. Generating first drafts of standard communications. This wave delivers visible ROI quickly and is relatively low-risk. Most organisations are here, or think they are.

The second wave is augmentation: AI working alongside knowledge workers to expand their capacity. A lawyer reviewing contracts with AI assistance. A product manager synthesising user research at a scale previously impossible. A developer implementing from specifications. This wave is where the real productivity story lives, and where the organisational dynamics start to get complicated.

The third wave is transformation: AI changing what the organisation actually offers, not just how it delivers it. New business models. Products that are themselves AI-native. Entirely different relationships with customers. Very few organisations are genuinely here yet, though many claim to be.

The mistake I see repeatedly is organisations jumping to wave three language, calling everything a transformation, while still doing wave one work. This creates a credibility gap between the story leaders are telling and the reality employees are experiencing. That gap is where trust erodes.

The gap between the transformation story leaders are telling and the reality employees are experiencing is where trust erodes.

Where to actually start: use cases, governance, and broad literacy

The organisations that navigate AI transformation well do three things early that most organisations treat as afterthoughts: they start from use cases, not tools; they build governance before they need it; and they invest in AI literacy well beyond the technical team.

Starting from use cases means asking: what are the ten most painful, repetitive, or high-effort tasks in this organisation, and which of them are good candidates for AI assistance? This grounds the transformation in real work rather than vendor roadmaps. It also generates early wins that are meaningful to the people whose work is being improved, rather than impressive-looking demos that have no connection to the daily reality of most employees.

Building governance before you need it means establishing, early, who is accountable for AI decisions, what data can be used and how, what the process is for evaluating and retiring AI tools, and what happens when an AI system produces an outcome that harms someone. The EU AI Act, which entered into force in 2024 and is being phased in through 2026 and beyond, makes a number of these questions legally significant for organisations operating in or selling to European markets. The NIST AI Risk Management Framework provides a practical structure for organisations that want to approach this rigorously regardless of jurisdiction.

Investing in AI literacy broadly means not treating AI capability as a specialism owned by the technology team. The employees who will ultimately determine whether AI tools are used well or badly are the domain experts in finance, HR, legal, customer service, and operations. If they do not understand what these tools can and cannot do, what risks they carry, and how to critically evaluate their outputs, no amount of technical excellence in the platform layer will produce good outcomes.

Data safety and the LLM risk most organisations are quietly ignoring

The data safety conversation is the one I have most often and most urgently with organisations that are beginning to use large language models at scale. And the honest observation is that most organisations are underestimating the risk significantly.

The core issue is deceptively simple: when an employee pastes a customer record, a performance review, a contract, or internal financial data into a public LLM interface, that data has left the organisation. The terms of service of most public AI platforms do not guarantee that submitted data will not be used for model training, stored in ways subject to different jurisdictions, or accessible to the platform provider's staff. For organisations subject to GDPR, HIPAA, financial services regulation, or contractual confidentiality obligations, this is not a theoretical risk.

The OWASP Top 10 for Large Language Model Applications, first published in 2023 and updated in 2025, identifies prompt injection as the most critical vulnerability class in LLM deployments. Prompt injection is an attack where malicious content in the data a model processes causes it to behave in unintended ways, exfiltrating information, bypassing controls, or acting against the interests of the deploying organisation. For organisations building internal tools on top of LLM APIs, this is a live threat, not a future concern.

Practically, organisations serious about data safety should be doing three things. First, establishing a clear taxonomy of data sensitivity and a corresponding policy about which data classes may be processed by which AI tools. Not all data is equal, and not all AI tools carry the same risk. Second, evaluating deployment architectures that keep sensitive data within organisational control: self-hosted models, private cloud deployments with data residency guarantees, or retrieval-augmented generation systems that avoid sending raw sensitive data to external APIs. Third, training employees to recognise what constitutes sensitive data and to apply the same judgment to AI tools that they would to any external service provider.

When an employee pastes a customer record or internal financial data into a public LLM, that data has left the organisation. Most organisations are underestimating this risk significantly.

Sandboxing AI agents: when AI can act, not just advise

Assistive AI, a model that reads, summarises, and drafts, carries one risk profile. Agentic AI, a model that takes actions in the world, sends emails, executes code, calls APIs, modifies files, carries a fundamentally different one. The shift from the first to the second is happening faster than most organisations' governance frameworks are prepared for.

The principle I apply when advising on agentic AI deployments is the same principle that has governed good security practice for decades: least privilege. An AI agent should have access only to the systems, data, and capabilities it needs to complete its specific task, and no more. An agent that summarises customer support tickets does not need write access to the CRM. An agent that generates draft contracts does not need to send email. The temptation to give agents broad access in the name of capability and convenience is exactly the temptation that creates catastrophic failure modes.

Sandboxing, running AI agents in isolated environments where their actions are contained and reversible before any consequence reaches production systems, is not optional for serious deployments. It is the equivalent of testing in a staging environment before deploying to production: an obvious practice that nonetheless gets skipped under time pressure until something goes wrong.

Human-in-the-loop design matters enormously for consequential decisions. Any agentic workflow where the AI's action is difficult or impossible to reverse, sending a communication to a customer, modifying a financial record, initiating a procurement action, should require explicit human approval. The cost of that approval is a few seconds of human attention. The cost of removing it can be reputational, regulatory, or financial damage that far exceeds any efficiency gained.

Google's Secure AI Framework (SAIF) and Anthropic's published guidance on responsible agentic deployment both provide practical starting points for organisations building internal policies. The specifics matter less than the commitment to having the policy before you deploy, not after.

An AI agent should have access only to what it needs for its specific task, and no more. Least privilege is not a constraint on capability. It is the condition for trust.

The fear of job loss is real, and it deserves a real answer

I want to say something that I think a lot of AI transformation consultants avoid saying, because it is uncomfortable and does not fit neatly into an optimistic change narrative: some roles will be significantly diminished by AI. Some will be eliminated. The fear that employees are feeling is not irrational. It is, in many cases, accurate.

The World Economic Forum's Future of Jobs Report 2025 estimated that AI and automation will displace 85 million roles globally by 2030, while creating 97 million new ones. The net number is positive. But that is cold comfort if you are a mid-career professional in a role that sits in the 85 million column, and the 97 million new roles require skills you do not currently have and may not be able to acquire quickly.

McKinsey's research found that approximately 60% of occupations have at least 30% of their tasks potentially automatable with current AI technology. That does not mean 60% of jobs disappear. It means 60% of jobs change, and the people in them either adapt or struggle. The distinction matters enormously for how organisations communicate and support their people through this transition.

What I consistently see in organisations that handle this well is a commitment to honesty over reassurance. Telling employees 'AI will not affect your job' when it clearly will, is not kindness. It is a short-term avoidance of a difficult conversation that destroys trust when reality inevitably arrives. The honest conversation is harder: 'This technology will change how this work is done. Here is what we know about how. Here is how we are investing in helping you adapt. Here is the timeline we are working to. And here are the questions we do not yet have answers to.'

Telling employees that AI will not affect their jobs, when it clearly will, is not kindness. It is a short-term avoidance that destroys trust when reality arrives.

How to handle the human side of AI transformation

The organisations I have seen navigate the human side of AI transformation well share a set of practices that are less about the technology and more about the leadership.

They involve people in the transformation rather than doing it to them. This sounds obvious. It is rarely practised. Involving people means asking frontline employees which tasks they find most frustrating and repetitive, and making those tasks the first candidates for AI assistance. It means piloting AI tools with the people who will use them, gathering honest feedback, and being willing to drop tools that do not actually help. It means treating employees as the experts on their own work, which they are, rather than as the recipients of a technology decision made elsewhere.

They communicate the 'why' repeatedly and specifically. Not 'we are doing this to stay competitive' (which is true but meaningless), but 'we are introducing AI into the contract review process because it currently takes our legal team three days to turn around standard NDAs, and that is slowing down sales. Our goal is to get that to same-day. Here is how we are doing it, and here is how it changes what the legal team spends time on.' Specificity is the antidote to rumour.

They invest in reskilling with genuine commitment. This means budget, time, and management support for learning, not a self-directed e-learning library that employees are expected to complete in their own time. The organisations doing this well are identifying the skills that will matter most in an AI-augmented version of each role, and actively building pathways to those skills for the people currently in those roles.

And they create forums for people to express fear and uncertainty without those expressions being treated as resistance. Psychological safety, the ability to speak up without fear of professional consequence, is never more important than during a transformation that people are frightened by. Leaders who treat anxiety as a communication problem to be managed, rather than a signal to be listened to, consistently make the transition harder for everyone.

The roles AI enriches rather than replaces

The framing of AI as a job-destroyer misses the more interesting and more accurate story: AI is a profound amplifier of human judgment, creativity, and relational capability. The roles it enriches most are precisely those where those qualities are most central.

Coaches, facilitators, and organisational development professionals are seeing their practices expanded rather than replaced. AI can surface patterns in team data, generate draft retrospective formats, and analyse engagement survey results faster than any human. But the work of building trust, holding space for difficult conversations, and helping a team understand itself, that work is irreducibly human, and AI tools make it easier to prepare for and follow up on, not to substitute.

Analysts and researchers at every level are being transformed by AI's ability to synthesise large volumes of information rapidly. A data analyst who used to spend 60% of their time cleaning and preparing data can now spend that time interpreting it. A market researcher who used to read fifty reports to find the five relevant insights can now direct an AI to do the reading and spend their time on the judgment. The role does not disappear; the ratio of high-value to low-value work shifts dramatically.

Leaders and managers who develop AI fluency alongside emotional intelligence are becoming significantly more effective. AI can prepare briefings, model scenarios, draft communications, and flag anomalies in operational data. The leader who can direct these capabilities while remaining genuinely present with their people, rather than retreating into dashboard management, has capabilities that simply were not available five years ago.

Creative professionals, designers, writers, brand strategists, are finding that AI handles first-draft generation and variation at a scale that frees them for the work that actually requires taste and judgment. The designer who used to spend two days generating initial concepts can now spend two hours directing an AI through twenty directions and two days refining the three that are genuinely interesting. The output is not better because AI replaced the designer. It is better because the designer's judgment was applied to a wider creative surface.

Customer-facing roles that involve genuine problem-solving and empathy are among the most robust in an AI-augmented world. An AI can handle tier-one queries, route correctly, and surface relevant information. The human agent who handles what the AI escalates is dealing with the genuinely complex, genuinely emotional, and genuinely ambiguous situations that require real understanding. That role, freed from the volume of routine queries, becomes more satisfying and more impactful, not less.

AI is a profound amplifier of human judgment, creativity, and relational capability. The roles it enriches most are precisely those where those qualities are most central.

The transformation that lasts

I have worked through enough technology transitions to know that the organisations which come out stronger are not the ones that moved fastest. They are the ones that kept their people with them while they moved.

AI transformation is genuinely different from previous technology transitions in its pace, its breadth, and its direct impact on knowledge work. The scale of change is real and the speed is accelerating. But the leadership principles that make transformation human and sustainable have not changed: clarity, honesty, involvement, and investment in people's capacity to adapt.

The organisations that will look back on this period with pride are the ones that decided early that AI was not going to be done to their people, but built with them. That required uncomfortable conversations, genuine commitment to reskilling, governance that sometimes slowed deployment, and leaders who were willing to say 'I do not know' more often than felt comfortable.

That is the transformation that lasts. Not the fastest deployment of the most tools, but the deepest integration of capability with culture.