I have lost count of the client meetings where someone declares that we must be an AI business by year end. The room stiffens. A slide appears with a galaxy of logos. There is talk of copilots and centres of excellence. I ask a simple question. What are we trying to achieve that we cannot achieve today? The silence that follows tells me everything.
Hype is cheap. Readiness is hard. The difference sits in the unglamorous work leaders put off because it is slow or political or both. The truth is that most organisations are not short of AI. They are short of clarity, clean process, sensible data flows, and the muscle to change how work actually gets done.
The landscape we are walking into
AI is moving fast, yet the foundations inside many firms have not moved for a decade. You will find fragile workflows that were designed around individual heroics rather than repeatability. You will find data scattered across systems that cannot talk to each other without a translator. You will find controls that were built for a different risk profile. You will find leaders who want results without the discomfort of changing incentives or governance.
That is why AI programmes stall. Not because the models are weak. Because the organisation is not ready to absorb them. Tooling is the easy part. Adoption is the hard part. The prize is still very real. Better cycle times. Safer processes. Happier customers because backstage chaos finally calms down. The risk is equally clear. Money spent on experiments that never touch the line of service. Security gaps that widen under pressure. A workforce that tunes out because every month brings a new shiny thing.
It is tempting to chase headlines. Resist it. The work that matters is quieter. It looks like replacing brittle spreadsheets with stable services. It looks like writing down how a process really runs rather than how it should run. It looks like pruning back reports nobody reads so signal can breathe again. It looks like fixing permissions before automation magnifies the mess. It looks like protecting people from the blast radius of rushed change. The noisy world rewards theatrics. The boardroom rewards outcomes.
A practical way in: the Four Tests of AI Readiness
Treat AI readiness as a set of tests you must pass before you scale. Do not skip them. Do not outsource your judgement.
1. Clarity Test: can you state the job to be done in one line?
Pick a real operational outcome. Shorten claims settlement from 12 days to 3. Cut onboarding time by half without adding risk. Reduce exception handling in billing by 60 percent. If you cannot write the target in plain English, you will end up optimising slides rather than operations.
2. Process Test: is the work stable enough to automate?
Map the actual path work takes. Name the handoffs. Name the points where humans improvise because the system does not fit reality. Decide what to standardise, what to eliminate, and what to keep human by design. If the process is a moving target, AI becomes a moving mess.
3. Data Test: do the inputs exist in a usable form?
List the data your use case needs. Where it lives. Who owns it. How often it changes. How clean it is. Decide the minimum viable improvements that unlock value now. Beware of data perfectionism dressed up as prudence. You do not need a cathedral to boil a kettle. You do need clean water.
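If it helps to make this concrete, the data test can be a checklist you actually run rather than a workshop output. Here is a minimal sketch in Python; every field name and the 0.8 threshold are illustrative assumptions, not a standard.

```python
# A minimal sketch of the data test as a runnable checklist.
# Field names and the 0.8 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DataInput:
    name: str            # what the use case needs
    system: str          # where it lives
    owner: str           # who answers for its quality
    update_cadence: str  # how often it changes
    completeness: float  # rough share of usable records, 0.0 to 1.0

inputs = [
    DataInput("claim_events", "core-claims", "claims-ops", "real time", 0.97),
    DataInput("policy_reference", "legacy-crm", "unknown", "weekly", 0.60),
]

# Minimum viable bar: a named owner and inputs mostly fit for purpose.
blockers = [i for i in inputs if i.owner == "unknown" or i.completeness < 0.8]
for item in blockers:
    print(f"Blocker: {item.name} in {item.system} "
          f"(owner: {item.owner}, completeness: {item.completeness:.0%})")
```

The point is not the code. It is that every input has a name, an owner, and a bar it must clear before the pilot starts.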
4. Control Test: are security and compliance set up to say yes safely?
Your policies should tell teams what is allowed, what is not, and how to get to yes. Keep audit trails. Define how models are selected, monitored, and retired. Decide how confidential information is handled. Align on what good looks like before the first pilot starts. Controls that exist only on paper will slow you down later when the scrutiny arrives.
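For teams that want the control test in something firmer than a policy document, here is a minimal sketch, assuming a simple lifecycle of proposed, approved, live, and retired. The names and transitions are illustrative, not a real governance framework.

```python
# A sketch of lifecycle controls with an append-only audit trail.
# Statuses, transitions, and names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

ALLOWED_TRANSITIONS = {
    "proposed": {"approved", "rejected"},
    "approved": {"live"},
    "live": {"retired"},
}

@dataclass
class ModelRecord:
    name: str
    owner: str
    status: str = "proposed"
    audit_log: list = field(default_factory=list)

    def transition(self, new_status: str, actor: str, reason: str) -> None:
        # Refuse any lifecycle move the policy does not allow.
        if new_status not in ALLOWED_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"{self.status} -> {new_status} is not permitted")
        # Every change is appended to the trail, never overwritten.
        self.audit_log.append(
            f"{datetime.now(timezone.utc).isoformat()} {actor}: "
            f"{self.status} -> {new_status} ({reason})"
        )
        self.status = new_status

record = ModelRecord(name="claims-triage-v1", owner="ops-data")
record.transition("approved", actor="risk-committee", reason="pilot scope agreed")
record.transition("live", actor="delivery-lead", reason="acceptance tests passed")
print(*record.audit_log, sep="\n")
```

Controls that run as code are ready for scrutiny on the day it arrives, not the month after.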
If you fail any one of these tests, delay the pilot or narrow the scope. The fastest route to scale is to move at a speed you can sustain.
Three patterns I keep seeing
Pattern one: AI wrapped around broken process
A customer service team adds a smart triage layer on top of a queue that already buckles four times a week. Response times improve for a fortnight then decay because the upstream issues are unchanged. Lesson. Fix the roots before you feed the tree.
Pattern two: data ambition without data hygiene
A fraud team wants real time detection. The event data has inconsistent keys and the reference data updates weekly. The model looks impressive in a notebook then falls apart in production. Lesson. Align cadence and quality with the promises you intend to make to the business.
Pattern three: pilots that never touch live work
An innovation unit runs ten proofs of concept in isolation. Everyone applauds. Nothing ships. Lesson. Treat pilots as the first sprint of real delivery. Put them on the same rails as production from day one.
What to do first if you are serious
Choose one critical journey
Pick the line of service that annoys your customers the most or ties up the most headcount. You want a problem that matters enough to earn political air cover.
Run a two-week readiness sprint:
- Map the real process with the people who run it. No theatre.
- Trace the data that fuels it. Note gaps, owners, update cycles.
- Cost the current waste. Hours. Rework. Complaints. Risk events.
- Co-design a slim target state. Decide what stays human. Decide what becomes automated.
- Write the acceptance tests for value, safety, and adoption. Short, specific, measurable.
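Those acceptance tests can live as executable checks rather than slideware. A minimal sketch, with every metric name and threshold invented for illustration around a claims settlement pilot:

```python
# Acceptance tests as executable checks. Every metric name and
# threshold below is invented for illustration.
pilot_metrics = {
    "median_settlement_days": 4.5,   # value: down from 12 at baseline
    "incidents_per_month": 0,        # safety: no new risk events
    "cases_via_new_path_pct": 72,    # adoption: share of live work on the new path
}

def test_value():
    assert pilot_metrics["median_settlement_days"] <= 5

def test_safety():
    assert pilot_metrics["incidents_per_month"] == 0

def test_adoption():
    assert pilot_metrics["cases_via_new_path_pct"] >= 60

for test in (test_value, test_safety, test_adoption):
    test()
print("All acceptance tests pass.")
```

If a test cannot be written this plainly, the target is not yet clear enough to fund.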
Build a minimum viable backbone
Put the basics in place before any fancy models. Stable interfaces. Idempotent jobs. Clear logging. A permission model that mirrors the org. A kill switch that works. This does not win awards. It prevents incidents and allows you to sleep.
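Two of those basics, idempotent jobs and a working kill switch, fit in a few lines. A minimal sketch, assuming an in-memory flag and dedupe set where production would use a config service and a durable store:

```python
# A sketch of an idempotent job with a kill switch. The flag and the
# dedupe set are stand-ins for a config service and a durable store.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("backbone")

KILL_SWITCH_ON = False  # in production, a flag you can flip without a deploy
_processed_ids = set()  # stand-in for a durable record of handled work

def handle_case(case_id: str) -> None:
    if KILL_SWITCH_ON:
        log.warning("kill switch active, skipping %s", case_id)
        return
    if case_id in _processed_ids:
        # Idempotency: re-running the batch must not double-process work.
        log.info("already handled %s, nothing to do", case_id)
        return
    log.info("processing %s", case_id)
    _processed_ids.add(case_id)

# Running the same case twice changes nothing the second time.
for case in ["C-101", "C-102", "C-101"]:
    handle_case(case)
```

The design choice is that safety lives in the job itself, not in the discipline of whoever runs it.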
Pilot in daylight
Pilot with real volume on real cases with real teams. Start small. Publish the metrics every week. Celebrate what works. Expose what breaks. Invite audit to the show-and-tell. Confidence is a control.
Hardwire adoption
Change the workflow, not only the tool. Train people with their own data. Update job aids. Adjust incentives so the new path is the easy path. Retire the old path. Adoption is a design choice.
How leaders help or hinder
Leaders set tempo by what they tolerate. If you accept vanity metrics, you will get vanity projects. If you celebrate shipped value, you will get shipped value. The moves that help are simple.
- Ask for one-page problem statements with a clear outcome.
- Fund readiness work explicitly so delivery teams do not have to hide it.
- Decide who owns data quality for each domain. Name the person. Give them authority.
- Create a standing route to yes with risk and compliance. Bring them in early.
- Remove obstacles fast. Drag decisions out of committee and into the room.
The moves that hurt are familiar. Declaring big numbers before anything runs. Forcing a platform decision before you know your real needs. Treating AI as a brand strategy rather than an operational strategy. Confusing activity with progress.
A simple maturity ladder
You do not need a thousand-point checklist. Use a ladder you can explain.
Level 0: noise. Lots of talk. No change to work. Experiments float outside the line of service.
Level 1: tidy. Key processes documented. Basic data plumbing in place. Security patterns defined. Teams have a way to launch a safe pilot without begging for favours.
Level 2: repeatable. First use cases live in production with monitoring. People use them because the workflow changed. Metrics show value that compounds rather than spikes.
Level 3: scalable. A shared backbone exists. New use cases follow the same rails. Data contracts are honoured. Audit trails are clean. The cost to add the next use case drops.
Level 4: adaptive. The operating model expects change. Teams improve processes monthly. Risk and compliance shape solutions rather than only policing them. Value shows up in cycle time, quality, safety, and cost, not just in slides.
Move one level at a time. Do not pretend to be at Level 3 if you have not earned Level 1. Reality is kinder than fantasy when auditors arrive.
Common objections, answered plainly
We do not have perfect data
You never will. You need data that is fit for the outcome you want. Start with what matters most to that outcome. Improve the rest in sequence.
Our legacy stack will slow us down
Only if you let it become an excuse. Wrap what you must. Replace what you can. Standardise interfaces so you can swap parts without open-heart surgery.
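Wrapping looks like this in practice: a stable interface in front of the legacy system so callers never touch it directly. A minimal Python sketch; the legacy client and its field names are hypothetical stand-ins.

```python
# A sketch of a stable interface wrapped around a legacy system.
# LegacyCrmAdapter and its field names are hypothetical stand-ins.
from typing import Protocol

class CustomerLookup(Protocol):
    def find_by_email(self, email: str) -> dict: ...

class LegacyCrmAdapter:
    """Wraps the old system behind the stable interface."""
    def find_by_email(self, email: str) -> dict:
        # Imagine a call into the legacy stack here; we normalise its shape.
        raw = {"EMAIL_ADDR": email, "CUST_NM": "A. Example"}
        return {"email": raw["EMAIL_ADDR"], "name": raw["CUST_NM"]}

def greet(lookup: CustomerLookup, email: str) -> str:
    # Callers depend on the interface, so the adapter can be swapped later.
    return f"Hello, {lookup.find_by_email(email)['name']}"

print(greet(LegacyCrmAdapter(), "a.example@test.com"))
```

When the legacy system is finally replaced, only the adapter changes. Nothing downstream needs surgery twice.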
People will resist
People resist being done to. Involve them early. Show the value in their day. Remove low-value tasks. Keep the judgement work human. Most teams warm up when they see their pain shrink.
Security will say no
Give security a better job. Ask them to help design the safest path to yes. Write down the rules of the road together. Prove you can be trusted with small steps before you ask for bigger ones.
Where this is heading
Over the next two years the gap will widen between firms that treat AI as theatre and firms that treat it as operations. The winners will not be the loudest. They will be the ones that made unglamorous decisions early. Standard data domains. Boring interfaces. Clean logs. Clear ownership. Measurable outcomes that matter to customers and regulators.
The shape of work will keep shifting. Some tasks will move to machines. New human work will appear around judgement, exception handling, relationship care, and the stewardship of systems that now run at machine pace. Leaders who prepare their people for that mix will sleep better than those trying to automate their way out of complexity.
A closing thought
AI is not a parade to watch. It is a set of choices about how your organisation works. Choose clarity over theatre. Choose groundwork over slogans. Choose one real problem and earn the right to solve the next one. Readiness is not a status. It is a habit. The firms that build that habit now will look strangely calm while everyone else keeps chasing the next announcement.