
AI Readiness Audit: 12 Questions to Ask Before Any AI Rollout

A 12-question audit to use before any AI implementation — covering data, workflow, success metrics, ownership, and change-management capacity. Surfaces whether your organization is actually ready to roll out AI, or whether you need to fix something else first.

TL;DR

Most AI projects fail because of organizational gaps, not technology. This 12-question audit covers data quality, workflow definition, success metrics, operational ownership, and change-management capacity. Score honestly — if you cannot answer 9 of the 12 affirmatively for a specific use case, you are not ready and the project will fail. Fix the gaps first.

Further reading: Andrew Ng's framework for transforming organizations with AI covers strategy, sequencing, and the readiness gaps that kill most projects.

How to use this audit

Pick one specific AI use case you are considering — not "AI in general." Examples: AI for invoice processing, AI for customer support triage, AI for sales call analysis. Score the 12 questions below honestly for that single use case. The audit is not transferable — readiness for one use case does not mean readiness for another.

Each question scores yes or no. There is no middle ground; "kind of" is no.

The 12 questions

Section 1: Workflow definition (3 questions)

  1. Can we describe the workflow as a clear sequence of steps?
    If the workflow is "everyone does it slightly differently," AI cannot automate it. Fix the process first.
  2. Do we know what the team currently does at each step?
    Including the unofficial steps. AI inserted into a workflow nobody can describe will produce output nobody can validate.
  3. Is there a single person accountable for the workflow today?
    Not a department. A named owner. AI rollouts without a single accountable owner fail.

Section 2: Data quality (3 questions)

  4. Is the input data digital and accessible?
    Not "the source of truth is a spreadsheet someone emails on Tuesdays." Digital, queryable, in a system you control.
  5. Is the data structured enough to be reliable?
    Even unstructured data (PDFs, emails) is acceptable if the format is consistent. Inconsistent format is what kills AI quality.
  6. Is the data volume large enough to validate AI performance?
    For most use cases, you need at least 100 historical examples to evaluate AI accuracy. With fewer, you cannot tell whether it is working.

Section 3: Success metric (2 questions)

  7. Is there a numerical success metric the AI is supposed to move?
    Time per task, error rate, throughput, cost per unit, response time. "Productivity" is not a metric.
  8. Do we know the current baseline for that metric?
    If you do not know what it is now, you will not know whether AI moved it. Measure the baseline before kickoff.

Section 4: Ownership and change management (4 questions)

  9. Is there an operational (not IT) owner whose performance review depends on adoption?
    IT-led AI projects fail more often than operations-led ones. The team that lives with the workflow has to own the rollout.
  10. Has the affected team been told about the project before procurement?
    Surprises produce resistance. Pre-procurement conversations produce buy-in.
  11. Is there budget for change management — workshops, training, iteration?
    Typically 30–40% of the total project budget. Skipping it is the #1 reason adoption fails.
  12. Is there a feedback mechanism for the team to flag AI errors?
    AI quality improves with feedback. Without a structured feedback channel, errors compound silently.

"The 12 questions are not a gating exercise to delay projects. They are a diagnostic to find out which gap to fix first. Most failed AI projects had the gaps before kickoff and nobody named them."
Vineet Parekh, Co-Founder, Pure Billion Technologies

Scoring

What your score means
Score          Status           What to do
11–12 / 12     Ready to start   Run a pilot. Move fast. Most teams here under-execute by waiting for permission.
9–10 / 12      Mostly ready     Start, but explicitly fix the 2–3 gaps in parallel. Do not pretend they are not there.
7–8 / 12       Not ready yet    Pause AI procurement. Spend 4–8 weeks fixing data, workflow definition, or ownership gaps. Re-score.
Below 7 / 12   Stop             AI is the wrong project. The gaps are operational, not technical. Fix the foundation.
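For teams that track the audit in a script or spreadsheet, the thresholds above reduce to a simple lookup. A minimal sketch in Python (the function name `audit_status` and the exact wording of each status are ours, not part of the audit itself):

```python
def audit_status(score: int) -> str:
    """Map a 12-question audit score to the status from the scoring table."""
    if not 0 <= score <= 12:
        raise ValueError("score must be between 0 and 12")
    if score >= 11:
        # Ready: run a pilot, move fast
        return "Ready to start"
    if score >= 9:
        # Start, but fix the remaining gaps in parallel
        return "Mostly ready"
    if score >= 7:
        # Pause procurement, fix gaps, re-score in 4-8 weeks
        return "Not ready yet"
    # The gaps are operational, not technical
    return "Stop"
```

One count of honest "yes" answers in, one remediation track out; the point is that the decision rule is mechanical once the scoring is honest.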
60–80% of enterprise AI initiatives fail to reach production, per BCG, Gartner, and McKinsey 2024–2025 surveys. The majority fail on the readiness dimensions captured in these 12 questions, not on technical execution.

What to do with a failing score

A failing score is more valuable than a passing one — it tells you exactly which gap blocks the project. The remediation tracks:

  • Workflow gaps: Run a workflow-mapping exercise (1–2 days with the team). Document the SOP. Designate an owner. Re-score.
  • Data gaps: Digitize the source data, structure it, build the data pipeline before the AI pipeline. Often a 4–8 week prerequisite project.
  • Metric gaps: Define the metric, instrument the baseline. Cannot be skipped — a metric defined after rollout is a fiction.
  • Ownership gaps: Find the operational owner before procurement. If you cannot, the project does not have an internal sponsor and will fail regardless of tool.

Common reasons businesses fail this audit

  1. Tool was picked first. Now the project is "implement Tool X," not "solve problem Y." The audit fails because the questions were never asked.
  2. IT owns the project. The operational team is downstream and uninvested. Adoption will not happen.
  3. The use case is too broad. "AI for customer service" is not a use case; "AI for triaging inbound support tickets and drafting first-response replies" is.
  4. Data is treated as someone else's problem. "We will figure out the data once the AI tool is configured." This is backwards.

Why this matters

The audit takes an hour to score. The information it produces is the difference between a 6-month rollout that works and an 18-month one that gets abandoned. We use this exact audit at the start of every AI engagement we run. Run it on yourself first.

Once you have a score, see the AI adoption playbook for the next steps, or build vs buy for the tool-selection framework.

Run this audit with us

If you'd rather have an outside read on your readiness, we'll run this audit with your operational and IT leads in a 2-hour session and produce a written remediation plan. Faster than internal scoring, especially when politics gets in the way.

Frequently asked questions

What is an AI readiness audit?
A structured evaluation of whether an organization has the operational, data, and change-management foundations to implement a specific AI use case successfully. Not "do we have AI talent" — that question is too abstract. The right form is "are we ready to do this specific thing with AI?"

Related reading

  • The AI Adoption Playbook for Indian Mid-Market Businesses (2026)

    A practical playbook for Indian mid-market and SMB leaders — how to assess AI readiness, choose the right tools, integrate AI into existing operations, and avoid the most common adoption failures. Built from real implementation work, not vendor brochures.

  • AI Build vs Buy: When to Subscribe to Tools vs Custom-Integrate

    A practical framework for deciding when off-the-shelf AI subscriptions are sufficient and when custom integration into your operational systems is the right call. Built from real implementation experience, with cost ranges and decision rules for Indian mid-market businesses.

  • Why Your Team Rejected Every AI Tool You Tried (and What to Do)

    AI tools fail in organizations the same way ERPs fail — bought as procurement, ignored as operations. The team is not resistant; they are rationally avoiding tools that do not fit their workflow. Here is the diagnosis and the actual fix.

Vineet Parekh
Co-Founder, Pure Billion Technologies

Vineet leads custom ERP and ecommerce engagements at Pure Billion Technologies. 7+ years building bespoke operational software for Indian manufacturers, distributors, and global D2C brands.

Last updated: 04 May 2026