20 February 2026

What an AI readiness assessment actually looks like

If someone offers you an AI readiness assessment that's just a questionnaire, walk away. Here's what a real one involves.

I've seen some absolute rubbish passed off as "AI readiness assessments." A 20-question survey. A maturity model with five levels and a traffic light scoring system. A consultant who talks to your CTO for an hour and sends you a PDF.

None of that tells you anything useful.

A real assessment should leave you with a clear picture of where AI will actually help in your specific business, what's blocking you, and what to build first. Not theory. Not a maturity score. Actual next steps.

Here's what I do when I run an AI readiness assessment for a client. Every engagement is slightly different, but the shape is consistent.

Week 1: talking to the people who do the work

Not the execs. Not the board. The people who actually touch the processes every day.

I spend the first 3-4 days in 30-minute conversations with 8-12 people across the business. Engineers, ops people, customer support, product managers. Anyone who spends their day doing repetitive work that involves information.

I'm listening for three things. What takes too long. What's error-prone. What makes people say "I hate doing this" when nobody important is listening.

Most of the best AI use cases don't come from strategy sessions. They come from someone saying "I spend every Tuesday afternoon copying data from this system into that spreadsheet, and I want to scream."

On the last assessment I ran, for a 40-person e-commerce company in Manchester, the single biggest opportunity came from a warehouse manager who mentioned, almost in passing, that they spend 3 hours every day reconciling inventory between two systems that don't talk to each other. Nobody in leadership had flagged it. It wasn't on any roadmap.

Week 1-2: mapping the data

This is the bit most assessments skip, and it's the bit that matters most.

AI is only as good as the data it can access. If your data is trapped in spreadsheets, locked behind manual exports, or scattered across 15 different tools with no API, then your AI options are limited regardless of how good your use cases are.

I map out where the key data lives. What format it's in. How accessible it is. Whether there are APIs. Whether the data is clean enough to be useful or whether it's a mess of inconsistent entries and missing fields.

This isn't glamorous work. It's looking at database schemas, checking API documentation, asking "can I get this data out programmatically?" about 50 times. But it's the difference between an assessment that produces realistic recommendations and one that produces fantasy.
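When the documentation doesn't answer that question, a ten-line script usually does. Here's a minimal sketch of the kind of probe I mean, in Python with the requests library. The endpoint, resource, and field names are all hypothetical stand-ins for whatever system you're actually assessing.

```python
# Hypothetical probe: can we pull this data out programmatically,
# and is it complete enough to be useful? All names are stand-ins.
import requests

BASE_URL = "https://erp.example.internal/api/v1"  # hypothetical system

def probe_export(resource: str, sample_size: int = 50) -> None:
    resp = requests.get(
        f"{BASE_URL}/{resource}",
        params={"limit": sample_size},
        timeout=10,
    )
    resp.raise_for_status()
    records = resp.json()  # assumes the API returns a JSON list

    # Rough completeness check: how many records are missing key fields?
    required = {"sku", "quantity", "updated_at"}  # hypothetical fields
    incomplete = [r for r in records if not required.issubset(r)]
    print(f"{resource}: {len(records)} sampled, {len(incomplete)} incomplete")

probe_export("inventory")
```

A 404, an auth wall with no service accounts, or a sample full of missing fields each tells you something a vendor deck never will.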

For that Manchester company, three of their top five use cases depended on data from a legacy system that had no API and couldn't export cleanly. That's not a reason to abandon those use cases. But it changes the timeline and cost significantly. An assessment that ignores data readiness will promise you the moon and then leave you stuck when you try to build.

Week 2: scoring and prioritising

By now I've got a list of maybe 15-25 potential AI applications. Some are obvious. Some are speculative. Some are probably rubbish but worth noting anyway.

I score each one on four dimensions:

Impact. How much time, money, or pain does this save? Not theoretical. Actual hours, actual pounds, actual frustration.

Feasibility. Can we actually build this with current AI capabilities? Is the data accessible? Is the process well-defined enough?

Risk. What happens if the AI gets it wrong? If the answer is "someone gets a slightly messy report," that's low risk. If the answer is "we ship the wrong product to a customer," that's different.

Speed to value. How quickly can we get a working version? I bias heavily toward things that can show results in 2-4 weeks. Quick wins change organisational attitudes. A 6-month project, even a good one, doesn't build the momentum you need.

The output is a prioritised list. Usually the top 3-5 items become the recommendation. Not 15. Not 10. Three to five. Because nobody can execute on 15 things at once, and trying to is how AI initiatives die.
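To make the mechanics concrete, here's a minimal sketch of that scoring as code. The weights and the example use cases are illustrative assumptions, not a fixed formula; in practice I adjust the weighting for each client.

```python
# Illustrative scoring model. Dimensions are rated 1-5; the weights
# below are assumptions, tuned per client rather than fixed.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    impact: int       # hours, pounds, or pain saved
    feasibility: int  # data accessible, process well-defined
    risk: int         # 5 = low consequence if the AI gets it wrong
    speed: int        # 5 = working version in 2-4 weeks

    @property
    def score(self) -> float:
        # Weighted sum, biased toward quick wins as argued above.
        return (0.30 * self.impact + 0.25 * self.feasibility
                + 0.20 * self.risk + 0.25 * self.speed)

candidates = [
    UseCase("Inventory reconciliation", impact=5, feasibility=4, risk=4, speed=5),
    UseCase("Support ticket triage", impact=3, feasibility=3, risk=2, speed=4),
    UseCase("Demand forecasting", impact=4, feasibility=2, risk=3, speed=1),
]

# The top 3-5 ranked candidates become the recommendation.
for uc in sorted(candidates, key=lambda u: u.score, reverse=True)[:5]:
    print(f"{uc.score:.2f}  {uc.name}")
```

The point of writing it down like this isn't precision. It's forcing every candidate through the same four questions so the loudest stakeholder doesn't win by default.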

Week 3: the deliverable

I hand over a document. It's not a 60-page report. It's usually 8-12 pages. It covers:

What I found. The top opportunities ranked by priority. For each one: what it does, what data it needs, estimated build time, estimated ongoing cost, expected value, and the risks. Then a recommended sequence: what to build first, second, third.
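For illustration, a single recommendation entry might look like this. The use case and numbers are invented, but the fields are the ones listed above.

```python
# One recommendation entry, shaped as a plain dict. Every value here
# is invented for illustration; the fields match the list above.
recommendation = {
    "priority": 1,
    "name": "Inventory reconciliation agent",
    "what_it_does": "Matches stock levels across both warehouse systems daily",
    "data_needed": ["System A nightly export", "System B REST API"],
    "estimated_build_time": "3 weeks",
    "estimated_ongoing_cost": "~£150/month in hosting and API usage",
    "expected_value": "~15 hours/week of manual reconciliation removed",
    "risks": ["Mismatched SKUs need a human review queue"],
}
```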

There's also a section on blockers. Things that need fixing before AI can work properly. Sometimes it's data quality. Sometimes it's missing APIs. Sometimes it's organisational (the team that owns the data doesn't talk to the team that needs the AI). I flag these honestly because ignoring them just means the build phase goes badly.

What most people get wrong

The biggest misconception is that an AI readiness assessment tells you whether you're "ready for AI." That framing is wrong. Everyone's ready for some form of AI. The question isn't if, it's where and how much.

A good assessment doesn't give you a score. It gives you a plan.

The second thing people get wrong is treating the assessment as the end point. I had a client last year who ran an assessment with a different consultancy, got a nice PDF, put it in a drawer, and did nothing for 4 months. Then they called me. The assessment they'd had was fine on paper, but none of the recommendations came with enough detail to actually execute on. It was all "consider implementing AI for X" without any specifics about how.

That's why I include build estimates, data requirements, and suggested approaches in every recommendation. If you can't hand it to an engineer and have them start building within a week, the assessment hasn't done its job.

If any of this sounds like what your team needs, have a look at how we run our AI readiness assessment. Happy to walk through what it'd look like for your situation.

If this is relevant to something you're working on, book a 30-minute call. No pitch. Just a conversation.

I'll tell you honestly whether I can help. If I can't, I'll say so and point you somewhere useful.

Written by Mark Darling, founder of BUILD+SHIP.