
AI & Developer Maturity

F.O.R.G.E. Assessment

Find your AI Development Level and understand what it means for how you work.

The F.O.R.G.E. assessment measures five dimensions of AI developer maturity: Fluency, Orchestration, Reliability, Growth, and Evolution. Your combined score maps to one of five AI Development Levels, from AI-Assisted (Level 1) through AI-Autonomous (Level 5).

15 questions, 1–5 scale, individual assessment. Results include a radar chart, personalised guidance, a curated AI tool directory, and a complete learning resource map. No login required.

1. AI-Assisted
2. AI-Augmented
3. AI-Orchestrated
4. AI-Directed
5. AI-Autonomous

This assessment is based on the 5 Levels of AI Dark Factory framework introduced in the article "The human cost of AI in software teams". Reading it first provides valuable context for your results.

Fluency

Assess how naturally and effectively you use AI coding tools, and how accurately you understand what good prompting actually looks like.

  1. Scenario

    You are starting a 3-day ticket: build a user profile page with form validation, API integration, and three distinct error states.

    Which of the following best describes your approach with AI tools?

  2. Scenario

    You ask an AI assistant to refactor a service class to use dependency injection. It returns code that still instantiates its dependencies directly inside the constructor.

    What is your most effective next step?

  3. Scenario

    A developer sends this prompt to an AI coding assistant: "Can you help me fix the bug? The login isn't working properly."

    What are the two most critical things missing from this prompt?

  4. Scenario

    A team is adding a new POST /users endpoint to their Express.js API and wants useful, accurate code review feedback from an AI assistant.

    Which of the following prompts would produce the most actionable review?

Orchestration

Measure how well you manage complex, parallel, or multi-step AI workflows, including your ability to write specifications precise enough for reliable AI execution.

  1. Scenario

    Your sprint ticket is to add a complete billing module: subscription management, invoice generation, and payment processing. These touch the user service, database layer, and a third-party payment API.

    How do you approach this with AI tools?

  2. Scenario

    You have three AI sessions open: one generating tests for a feature you finished this morning, one midway through building a new API endpoint, and one investigating a production bug. A response arrives in the test session while you are mid-prompt in the API session.

    What do you do?

  3. Scenario

    A developer needs to delegate the implementation of a JWT authentication module to an AI agent for a Node.js/Express API.

    Which of the following specifications would most reliably produce a production-ready result?

  4. During a complex AI-assisted task, the agent produces something fundamentally wrong at step 4 of 7. What do you do?

Reliability

Evaluate how rigorously you validate, test, and take genuine ownership of AI-generated code, and whether your review practice builds real comprehension or only surface confidence.

  1. You receive a 200-line AI-generated feature implementation. All tests pass. What do you do before merging?

  2. Scenario

    A developer says: "I always review AI-generated code before merging." Last sprint they merged a 300-line feature. During a post-incident review, they cannot explain how the error handling and retry logic work.

    What does this most likely indicate?

  3. Scenario

    An AI assistant generates a complete password reset service: token generation, expiry, rate limiting, and email dispatch. All tests pass at 97% coverage. You are ready to merge.

    What do you do with the test suite?

  4. AI-generated code passes all tests and looks clean, but something about the approach feels off. You cannot immediately articulate why. What do you do?

Growth

Assess whether your foundational technical skills are growing alongside AI adoption, or whether AI has become a crutch that masks a widening comprehension gap.

  1. Scenario

    Your company's AI coding tools go down unexpectedly for two full working days mid-sprint. You have two tasks: debug a memory leak in a production service, and implement a new caching layer for the data pipeline.

    What actually happens?

  2. Scenario

    A junior developer joins a team and uses AI tools for all their coding from day one. Six months later they ship features that pass code review. When asked to debug a production incident in a module they built, they struggle to form a hypothesis about the cause.

    What is the primary risk this situation illustrates?

  3. AI generates a solution using a library or design pattern you have not seen before. What do you do?

  4. Scenario

    Six months ago you built a critical data ingestion pipeline with heavy AI assistance. Tonight there is a production incident: the pipeline is silently dropping records, AI tools are unavailable, and you need to fix it urgently.

    What happens?

Evolution

Measure how actively and systematically you track, evaluate, and adapt to the rapidly changing AI development landscape, and whether your approach to change is principled or reactive.

  1. Scenario

    A major AI coding tool releases a significant update: a new agentic mode where the AI can autonomously browse your codebase, run tests, and make multi-file edits. A colleague asks if you have formed a view on it. It has been three months since the announcement.

    What is true for you?

  2. Scenario

    Your team adopted an AI-assisted code review practice twelve months ago. A colleague asks: "Did we ever actually evaluate whether it improved review quality, or did we just assume it did?"

    Which of the following most accurately describes what happened?

  3. Scenario

    A senior developer is thinking about their career strategy for the next three years in a landscape where AI development tooling is changing faster than any previous technology wave in software.

    Which approach is most resilient over that horizon?

  4. Scenario

    The AI model your team relies on for coding assistance is deprecated. The replacement behaves meaningfully differently: it reasons more strongly, produces a more variable output format, and is less tolerant of vague prompts. It needs more structured input to produce reliable results.

    How does the transition go?