Most businesses that hire an AI consultant get it wrong the first time. Not because they picked someone incompetent, but because they didn't know what to look for before the contract was signed. Getting this decision right matters more than most people realise.
If you're seriously considering hiring an AI consultant in 2026, this is the guide I wish more of our clients had read before they came to us. Some of them had already spent £20,000 with someone else and had nothing useful to show for it.
Here's how to choose an AI consultant who'll actually deliver something, and how to spot the ones who won't.
What an AI consultant actually does (and what they shouldn't)
An AI consultant analyses your operations, identifies where automation creates real value, designs the right system for your situation, and builds or oversees the build. That's the job in plain terms.
Before you start evaluating anyone, it's worth being clear on what you're actually buying. A good AI consultant does those specific things. What they shouldn't do is show up with a pre-packaged solution looking for a problem to attach it to. That's a vendor. There's a meaningful difference.
The difference between a vendor and a consultant
| AI Consultant | AI Vendor |
|---|---|
| Starts with your problem | Starts with their product |
| Recommends the right tool for your situation, even if it's simpler | Recommends their tool regardless of fit |
| May tell you you're not ready for automation yet | Has an incentive to sell you something now |
| Runs a discovery phase before proposing anything | Often skips diagnosis to get to the pitch |
| Accountable to your outcomes | Accountable to the sale |
Vendors aren't necessarily bad. But if you hire a vendor thinking you're getting a consultant, you'll end up with a tool that was never quite right for your business. We see this pattern constantly. A client comes to us after spending six months fighting a Make or Zapier setup that was never going to handle what they actually needed. The vendor sold them the tool. Nobody diagnosed the problem properly first.
The question to ask yourself is: does this person benefit from selling me a specific solution? If yes, treat their recommendations with appropriate scepticism.
Red flags to spot in the first call
You can usually tell in the first 20 minutes whether you're talking to a consultant or a tool-pusher. Here's what to watch for:
They pitch before they ask questions. If someone is telling you what you need before they've understood your operations, that's not consulting. That's selling.
They name-drop tools immediately. "We use OpenAI, we use Zapier, we use HubSpot" is a vendor tell. A consultant talks about your problem first, tools second.
They can't explain what they've built before. A vague answer to "can you walk me through a recent project?" is a serious warning sign. Real experience produces real stories.
They guarantee outcomes without understanding your data. Anyone promising "50% cost reduction" in the first conversation hasn't done the work to know that yet.
They skip past the audit. If they're ready to start building without a proper discovery phase, that's a problem. More on this below.
None of these on their own means walk away. But two or three together should give you pause.
The 5 questions to ask before signing anything
You don't need a 40-point checklist. You need five good questions asked in the right order. These will tell you most of what you need to know about whether someone is worth hiring.
Do they diagnose before prescribing?
This is the most important question. Ask them directly: "What does your discovery process look like before you recommend anything?"
A good answer involves structured analysis of your current processes, understanding your team's actual workflows, identifying where time and money are being lost, and only then forming a view on what to build. At AMPL, we run a structured audit before every engagement. Not because it's a nice-to-have. Because without it we'd be guessing. And guessing with someone else's operations is how you end up building the wrong thing. You can read more about how our engagement process works if you want a clearer picture of what this looks like in practice.
A weak answer is anything that skips this step. "We can start building next week" sounds efficient. It usually isn't.
Can they show you a real build?
Ask to see something they've actually built. Not a demo environment, not a slide deck. Something that's live, solving a real problem for a real client. They don't need to show you proprietary client data. But they should be able to walk you through how a system works, what problem it solves, and what it took to get there.
If they can't do this, or get vague, that's a red flag. The best consultants have war stories. They remember the builds that were complicated, the edge cases that nearly broke the system, the client who changed requirements halfway through. If everything sounds smooth and simple, they probably haven't built much.
The other three questions worth asking:
What happens when something breaks? Systems fail. What's their support model post-launch?
Who actually builds the work? Are you hiring a consultant who then outsources the build? That's not automatically bad, but you should know.
What does success look like in 90 days? If they can't answer this in specific, measurable terms, the engagement won't have clear accountability.
What a good engagement looks like from day one
If you're evaluating an AI consultant, it helps to know what the process should look like once you've said yes. This gives you a reference point. If what they're describing sounds different, ask why.
Audit first, then build
A proper engagement starts with a discovery and audit phase. This isn't just a kick-off call. It's structured analysis, mapping your processes, identifying the highest-value automation opportunities, understanding your existing tools and data, and working out the potential ROI in concrete terms.
This phase should produce a document. Something you could take to another supplier if you wanted to. A good audit tells you: here are the three processes we'd automate first, here's why, here's what it would cost, here's what you'd get back. That's a real deliverable, not just a sales step.
At AMPL, we make the audit fee refundable against the first build. That's because we think it should be valuable as a standalone thing, not just a way to get you to commit to a larger project. If after the audit you decide not to proceed, you still leave with a clear picture of where AI could help your business.
What deliverables should you expect?
Beyond the audit, here's what a serious engagement should produce:
A clear specification for what's being built, agreed before build starts
Regular progress updates, not "we're working on it" but actual visibility into what's done and what's next
A working system, documented so your team can use it and maintain it
Training or handover so the system doesn't become a black box
An agreed support arrangement post-launch
If any of these are vague or missing from what's being proposed, ask about them specifically. Vagueness here tends to show up later as scope creep, delays, or systems that nobody knows how to manage. Our guide to what an AI audit actually covers goes into more detail on this if it's useful.
How much should AI consulting cost in 2026?
To be honest, this question is harder to answer than people expect, because the range is genuinely wide, and for legitimate reasons.
At the lower end, you'll find freelancers and small agencies building basic automations using Zapier or Make. Work in this category typically runs £500 to £3,000 for a project. It's fine for simple, well-defined workflows. It tends to hit its ceiling fast for anything complex.
Mid-range custom builds, the kind that handle multi-step logic, integrate with proprietary systems, or need to handle edge cases reliably, typically run £5,000 to £20,000 depending on complexity. This is where most established businesses end up if they've done the audit properly and know what they're buying.
Larger or more complex engagements, multi-system builds, ongoing development, retainer arrangements, sit above that. There's no ceiling really. It depends on scope.
The number that matters most isn't the fee. It's the ratio of fee to value. If a build automates a process that currently costs your team 30 hours a week at £25/hour, that's £39,000 a year. A £10,000 build that's paid back in three months is a very different conversation to a £10,000 build on a process that saves four hours a week.
A good consultant will help you work this out before you commit. If they're not doing that, they're not really consulting. They're quoting.
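The payback maths above is simple enough to sanity-check yourself. Here's a minimal sketch of that calculation in Python; the hours, rate, and build fee are the illustrative figures from this article, not benchmarks for your business:

```python
# Illustrative payback calculation for an automation build.
# All input figures are example numbers, not quotes or benchmarks.

def annual_saving(hours_per_week: float, hourly_rate: float) -> float:
    """Yearly cost of the manual process being automated."""
    return hours_per_week * hourly_rate * 52

def payback_months(build_fee: float, yearly_saving: float) -> float:
    """Months until the build fee is recovered from the saving."""
    return build_fee / (yearly_saving / 12)

saving = annual_saving(hours_per_week=30, hourly_rate=25)
print(f"Annual saving: £{saving:,.0f}")                          # £39,000
print(f"Payback on a £10,000 build: "
      f"{payback_months(10_000, saving):.1f} months")            # 3.1 months
```

Swap in your own numbers: a process saving four hours a week at the same rate returns about £5,200 a year, which makes the same £10,000 build a roughly two-year payback instead of three months.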
FAQ
What does an AI consultant actually do day to day?
An AI consultant analyses your business operations, identifies where automation would create the most value, designs and builds custom AI systems, and supports those systems after launch. In practice, this involves a lot of process mapping, technical scoping, and close work with your team to understand how things actually run, not just how they're supposed to run on paper.
Is AI consulting worth it for small businesses?
It depends on the volume of manual work in your operations. The businesses that get the most out of AI consulting, regardless of size, are the ones with repetitive, high-volume processes eating significant staff time. If your team of eight spends 40% of their week on manual admin, the maths usually works. If your processes are already fairly streamlined, the ROI case is weaker. A good consultant will tell you honestly which category you're in.
How long does an AI consulting engagement typically take?
A structured audit phase typically takes one to two weeks. A build, depending on complexity, runs four to twelve weeks. Simple automations can be live in two or three weeks. Multi-system builds with custom integrations take longer. The honest answer is: it depends on scope, and anyone who quotes a timeline without understanding your systems is guessing.
What's the difference between an AI consultant and an AI agency?
Mostly structure and scale. An individual consultant is typically more hands-on but has limited capacity. An agency has more resource and often more specialised skills, but you need to verify who's actually doing the work. The more important distinction is whether they're diagnosing your problem or selling you a product. That applies to both.
Should I ask for references before hiring an AI consultant?
Yes, always. Ask for two or three clients they've worked with on projects similar to yours, and actually call them. Ask what the process was like, whether deliverables arrived on time, and whether the system still works well. Most consultants doing good work are happy to provide references. Hesitation here is a signal.
The short version
Five questions before you hire anyone: do they diagnose before prescribing, can they show you something real they've built, does their process start with an audit, are deliverables clearly defined, and does the fee make sense against the value you'd get back? If yes to all five, you're probably talking to the right person.
If you want to see what this looks like in practice, we start every engagement with a structured audit, specific to your operations, with a clear output. If after that you decide not to proceed, you've still got a detailed picture of where AI could help your business. That's worth something on its own.
If that sounds like what you're looking for, book a free audit at amplconsulting.ai.