AI Adoption Assessment

How ready, active, and responsible is your team or organisation with AI?

This assessment goes beyond whether AI tools are being used. It examines five dimensions of genuine adoption: readiness and mindset, active application in workflows, critical evaluation of AI outputs, responsible and safe use, and the continuous evolution of your practice over time.

Two versions are available: one for teams and one for organisations. Answer 15 questions on a 1–5 scale and get an instant radar chart with targeted guidance.
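
To make the scoring concrete, the sketch below shows one way the 15 answers could be turned into a radar chart: each dimension scores as the mean of its three 1–5 responses. This is a minimal illustration, not the assessment's actual implementation; the `responses` values are hypothetical examples, and the matplotlib styling is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

# The five dimensions of the assessment, three 1-5 answers each.
# The answer values below are hypothetical examples.
responses = {
    "Readiness":   [4, 3, 4],
    "Application": [3, 2, 3],
    "Insight":     [2, 3, 2],
    "Safety":      [4, 4, 3],
    "Evolution":   [2, 2, 3],
}

# Each dimension's score is the mean of its three answers.
labels = list(responses)
scores = [np.mean(v) for v in responses.values()]

# Spread the dimensions evenly around the circle, then close the
# polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]
scores += scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, scores, linewidth=2)
ax.fill(angles, scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)  # scores sit on the same 1-5 scale as the answers
ax.set_title("AI Adoption Assessment")
plt.show()
```

Averaging keeps every dimension on the same 1–5 scale as the individual answers, so strengths and gaps read directly off the chart.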

Readiness

Assess the team's openness, awareness, and motivation to engage with AI tools, before adoption begins in earnest.

  1. How open are team members to experimenting with AI tools in their daily work?

     1 = Actively resistant or indifferent · 5 = Enthusiastic and proactively exploring

  2. How well does the team understand what AI tools can and cannot realistically do?

     1 = Significant misconceptions or no awareness · 5 = Clear, grounded understanding of capabilities and limits

  3. When a new AI tool is introduced, how does the team typically respond?

     1 = Avoids it or waits for others to go first · 5 = Tests it quickly and shares what they learn

Application

Evaluate how actively and effectively the team integrates AI tools into real day-to-day workflows: not just experiments, but sustained use.

  1. How regularly do team members use AI tools as part of their standard working process?

     1 = Rarely or never, mostly ignored after an initial try · 5 = Consistently embedded in daily workflows

  2. How well does the team apply prompt engineering techniques to get useful, reliable outputs?

     1 = No awareness of prompting, uses tools as a black box · 5 = Deliberately iterates on prompts and structures them for context and specificity

  3. When AI tools are used, how often do they meaningfully reduce time or effort on real tasks?

     1 = Rarely, often creates more work than it saves · 5 = Consistently saves time on tasks that matter

Insight

Measure how critically the team evaluates AI outputs, knowing when to trust, when to challenge, and when to discard what a model produces.

  1. How reliably does the team verify AI-generated outputs before using or sharing them?

     1 = Outputs are rarely checked, usually taken at face value · 5 = Systematic verification is a consistent habit

  2. How well does the team recognise when an AI output is confidently wrong (plausible-sounding but inaccurate)?

     1 = Hallucinations and errors often go unnoticed · 5 = Team spots errors quickly and knows what to look for

  3. When AI outputs conflict with team expertise or context, how does the team respond?

     1 = Defers to the AI output or doesn't notice the conflict · 5 = Applies domain knowledge to challenge and correct the output

Safety

Assess how responsibly the team uses AI tools, covering data privacy, bias awareness, appropriate-use boundaries, and harm avoidance.

  1. How aware are team members of what data should not be entered into AI tools?

     1 = Little to no awareness, data is entered without thought · 5 = Clear shared understanding of what is and isn't acceptable to share

  2. Does the team consider potential bias in AI outputs before applying them to decisions or communications?

     1 = Bias is not considered · 5 = Bias is actively discussed and mitigated before using outputs

  3. Are there shared team norms about which tasks AI tools should not be used for, regardless of convenience?

     1 = No boundaries exist, anything goes · 5 = Clear, agreed limits on appropriate use

Evolution

Evaluate how deliberately the team improves its AI practices over time, measuring impact, learning from what fails, and building on what works.

  1. Does the team regularly review and improve how it uses AI tools, rather than continuing on autopilot?

     1 = No review happens, use is static or declining · 5 = Structured and frequent reflection on AI use leads to visible improvement

  2. How well does the team measure the actual impact of AI tool use on quality, speed, or outcomes?

     1 = No measurement, impact is assumed rather than known · 5 = Clear metrics are tracked and used to guide decisions about AI use

  3. When a new AI capability or tool becomes available, how effectively does the team evaluate and integrate it?

     1 = Adoption is random or reactive, no structured evaluation · 5 = New tools are evaluated deliberately against real needs and integrated when they add value