Most engineering teams are stuck in the same place right now. They know AI matters. They've been told it matters. Someone on the exec team has read an article and started asking questions. But nobody can point at a specific thing and say "that's where AI goes."
I've worked through this with a lot of teams now. The pattern is almost always the same.
Start with the boring stuff
Everyone wants to jump to the sexy use case. The AI-powered feature. The chatbot. The recommendation engine. Ignore all of that for now.
Look at the work your team hates doing.
I'm serious. The weekly reporting nobody wants to own. The manual QA on repetitive screens. The ticket triage that eats two hours every morning. The copy-paste between systems. That's where AI fits first, because that's where you'll actually get buy-in from the people who have to use it.
At Gymshark, the first thing that would've been worth automating wasn't anything customer-facing. It was the internal stuff. Release notes, test summaries, pulling data from Jira into a format someone could actually read. Boring. Unglamorous. But it was eating real hours from people who should've been building product.
Map the repetition
Here's what I do with every team that goes through our AI readiness assessment. I ask them to track, for one week, every task that involves:
- Taking information from one system and putting it in another
- Summarising something that already exists
- Following a checklist they could write down
That's it. Three categories. Not a massive audit. Not a six-week discovery phase. Just a week of paying attention.
Last month I did this with a 15-person engineering team at a fintech startup in London. They came back with 23 tasks across those three categories. Twenty-three. And they'd thought they had "maybe two or three" things that could use AI.
The gap between what people think is automatable and what actually is, once you look, is massive.
Don't start with the hardest one
This is where teams go wrong. They look at the list, pick the most impactful item, and try to build that first. Makes sense on paper. Terrible in practice.
Pick the easiest one. The one where failure doesn't matter. The one where the data is clean and the process is simple and nobody's going to lose their mind if the output is wrong for a week.
You're not trying to prove ROI yet. You're trying to prove the concept works. You're trying to get one person on the team to say "oh, that's actually useful." That's the domino that tips everything else.
I built an agent for a client that did nothing except format their standup notes into a consistent structure and post them to Slack. Took maybe 2 hours to build. Saved maybe 10 minutes a day. Barely worth measuring on a spreadsheet.
But it changed the conversation. Suddenly the team was asking "what else can we do with this?" instead of "is AI really worth the effort?"
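For a sense of scale, that agent was on the order of the sketch below: take whatever free-form notes people type, bucket them into a fixed Yesterday / Today / Blockers shape, and build a Slack message payload. The section keywords and payload shape here are assumptions for illustration, not the client's actual setup, and the actual HTTP post to a Slack incoming webhook is omitted:

```python
# Minimal sketch: normalise free-form standup notes into a consistent
# structure and build a Slack-style message payload. Illustrative only.
import json

SECTIONS = ("yesterday", "today", "blockers")

def parse_notes(raw: str) -> dict:
    """Bucket each line under the most recent section keyword seen."""
    out = {s: [] for s in SECTIONS}
    current = "yesterday"  # default bucket if notes start without a header
    for line in raw.splitlines():
        line = line.strip("-* \t")
        if not line:
            continue
        key = line.rstrip(":").lower()
        if key in SECTIONS:
            current = key
        else:
            out[current].append(line)
    return out

def to_slack_payload(name: str, notes: dict) -> str:
    """Render the buckets as a single Slack message body (JSON string)."""
    lines = [f"*Standup: {name}*"]
    for section in SECTIONS:
        lines.append(f"*{section.title()}*")
        lines.extend(f"• {item}" for item in notes[section] or ["(none)"])
    return json.dumps({"text": "\n".join(lines)})

raw = """yesterday:
- shipped the search fix
today:
- reviewing PRs
blockers:
"""
payload = to_slack_payload("Sam", parse_notes(raw))
```

That really is the whole trick: no model in the loop at all for the formatting, just a consistent structure posted somewhere visible. The point of the anecdote stands either way; it's the smallness that matters.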
The framework nobody wants to hear
There's a simple way to evaluate whether a task is a good AI candidate. I use four questions:
- Is the input mostly text or structured data?
- Is the expected output predictable enough that you'd recognise a wrong answer?
- Does it happen more than once a week?
- Would a smart intern be able to do it with a one-page brief?
If yes to all four, it's a good candidate. A no on question 2 in particular means leave it alone for now: AI tasks where you can't tell whether the output is wrong are the ones that blow up in your face three months later.
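The four questions can be written down as a simple screen. A minimal sketch, with invented task names; the field names just map one-to-one onto the questions above:

```python
# Sketch of the four-question screen. Each Task field answers one
# question from the checklist; the example tasks are made up.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    text_or_structured_input: bool   # Q1: input mostly text or structured data?
    output_recognisably_wrong: bool  # Q2: would you spot a wrong answer?
    weekly_or_more: bool             # Q3: happens more than once a week?
    intern_with_one_pager: bool      # Q4: a smart intern could do it with a brief?

def good_ai_candidate(t: Task) -> bool:
    # Q2 is the hard gate: if nobody can tell when the output is wrong,
    # the task is out regardless of the other answers.
    if not t.output_recognisably_wrong:
        return False
    return (t.text_or_structured_input
            and t.weekly_or_more
            and t.intern_with_one_pager)

triage = Task("morning ticket triage", True, True, True, True)
memo = Task("quarterly strategy memo", True, False, False, False)
print(good_ai_candidate(triage))  # → True
print(good_ai_candidate(memo))    # → False
```

Treating question 2 as an early exit rather than one of four equal checks is the design choice worth copying: it encodes "leave it alone" directly.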
Where AI doesn't fit (yet)
I'll save you some pain. These are the places I see teams waste time:
Anything requiring real-time accuracy with no room for error. If a wrong answer causes a production incident or a compliance issue, AI isn't your first move. It might be your third, once you've built confidence and guardrails. But not first.
Anything deeply creative that has no reference point. AI is great at "do this like that." It's poor at "invent something nobody's seen before." If the task requires genuine originality, keep humans on it.
Anything political. If the reason a process exists is because someone important wants it that way, no amount of AI is going to fix the underlying problem. Sort the politics out first.
The actual starting point
If you've read this far and you're thinking "fine, but where do I literally begin," here's what I'd do tomorrow morning.
Open a shared doc. Write three columns: Task, Frequency, Pain Level (1-5). Send it to your team. Ask them to add anything they do repeatedly that they'd happily never do again. Give them a week. Don't filter or judge the entries.
Then pick the one with the highest pain score and the lowest complexity. Build something in a day. Ship it. See what happens.
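The "highest pain, lowest complexity" pick from that doc is just a sort. A sketch with made-up tasks, adding a 1-5 complexity score alongside the pain score from the doc; scoring by pain per unit of complexity is my assumption here, and any rule that rewards high pain and low complexity would do:

```python
# Sketch: pick the first thing to build from the shared doc.
# Complexity (1-5) is an extra score you assign when choosing;
# the tasks below are invented examples.
tasks = [
    {"task": "weekly status report", "frequency": "weekly", "pain": 4, "complexity": 2},
    {"task": "ticket triage",        "frequency": "daily",  "pain": 5, "complexity": 4},
    {"task": "standup notes",        "frequency": "daily",  "pain": 3, "complexity": 1},
]

# Pain per unit of complexity: favours real pain you can fix in a day
# over the most painful thing overall.
first_build = max(tasks, key=lambda t: t["pain"] / t["complexity"])
print(first_build["task"])  # → standup notes
```

Note that the most painful task (triage, at 5) loses to the easy win, which is exactly the point of the earlier section: the first build is a concept-prover, not an ROI-maximiser.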
That's it. Not a strategy deck. Not a vendor evaluation. Not a committee.
If you want someone to run through this with your team properly, that's basically what our AI readiness assessment is. But honestly, the doc exercise alone will show you more than most companies learn in months of "AI strategy."