Most AI engagements that go badly don't go badly during the build. They go badly during the brief. The scope was vague, the founder couldn't articulate what "good" looked like, and the consultant — usually under pressure to close the deal — quoted a fixed price against a moving target. Six weeks later, both parties feel cheated.
This post is the brief you should send before the first call. Filling it out takes about 30 minutes. It does three things at once: forces you to articulate what you actually want, surfaces consultants who can't engage with specifics, and shortens the discovery call from an hour of "tell me about your business" to a 20-minute working session on the actual problem.
It works whether you're talking to us, to another agency, or to a freelancer.
Before you write anything
Two things to do before the brief, both worth more than the brief itself.
Pick a single bottleneck. Not three. Not "everything inbound." One workflow that, if it ran better, would visibly change the week. "Inbound lead qualification takes too long" is a bottleneck. "Our operations are inefficient" is not. The brief should be about one thing. Once that one thing ships, the next bottleneck reveals itself, and you brief that one separately.
Decide whether you're buying automation or AI. This isn't always obvious from the symptom. The filter is in Automation vs. AI: which one your business actually needs first. The short version: structured input + defined rules = automation; unstructured input + interpretive judgment = AI. Most projects are 80/20 one direction. Knowing which one before the call means the consultant is not redirecting your scope mid-conversation.
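The structured-input / defined-rules filter can be sketched as a two-question decision rule. This is an illustrative sketch, not a formal rubric — the question names and the "mixed" bucket are assumptions, not something from the linked post.

```python
def classify_workflow(structured_input: bool, rules_writable: bool) -> str:
    """Rough automation-vs-AI filter.

    structured_input: does the trigger arrive in a predictable shape
                      (form fields, CRM records) rather than free text?
    rules_writable:   could you write the handling logic as explicit
                      if/then rules, with no human judgment needed?
    """
    if structured_input and rules_writable:
        return "automation"      # defined rules over defined data
    if not structured_input and not rules_writable:
        return "ai"              # interpretive judgment over free text
    return "mixed"               # most real projects: 80/20 one direction


print(classify_workflow(True, True))    # -> automation
print(classify_workflow(False, False))  # -> ai
```

Running your bottleneck through these two questions before the call is usually enough to know which 80/20 direction you're in.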
If you can't pick the bottleneck, the brief isn't ready yet. Talk to your team for a week and watch where the friction repeats. Come back to the brief after.
The brief — eight sections
Send these eight answers as a doc, an email, or a form before the call. Short answers are fine. Vague answers are not.
1. The bottleneck, in one paragraph
Describe the workflow that's not working. What triggers it, who currently handles it, where it ends up, and what specifically about it is painful. Concrete numbers if you have them: "we get ~40 inbound a week, two of us spend ~6 hours combined routing and replying."
What this surfaces: whether the problem is shaped enough to solve. If you can't write the paragraph, the consultant can't quote it.
2. What "good" looks like
In one to three bullets, describe the outcome you'd accept. Not the system — the outcome. "Inbound replies go out within 30 minutes during business hours, and I personally only see leads that are either complex or high-intent." Or "weekly P&L summary lands in my inbox every Monday by 8 AM with anomalies flagged."
What this surfaces: whether you've thought about success or just about the problem. Consultants who can't tell what would satisfy you will optimize for the wrong thing.
3. The current stack
List the tools currently involved in the workflow. CRM, email, calendar, CMS, accounting, communication, anything custom. Note which ones are non-negotiable ("we're staying on HubSpot") and which are flexible.
What this surfaces: integration constraints. Most engagements get their first surprise here — a "simple" automation gets complicated because two of the tools have closed APIs or OAuth weirdness. Better to know on day one.
4. The data you have, and don't have
What records exist for this workflow today? Email threads in Gmail? CRM entries? Spreadsheets? Voice transcripts? Nothing? Be honest about gaps. "We don't actually track outcomes" is fine — it's just a different starting line than "we have 18 months of categorized intake in a Postgres database."
What this surfaces: whether the project includes a data hygiene phase. AI on dirty data hallucinates. AI on missing data invents.
5. Who needs to approve, and who needs to be trained
Names and roles. Who signs the contract. Who has to actually use the result. Who can veto a tool choice (often a CTO or COO who's not on the kickoff call). Who's responsible for the workflow today and might feel threatened by automating it.
What this surfaces: change-management cost. The system that ships and gets adopted is the one whose users were involved in the brief. The one that ships and gets quietly disabled is the one that surprised the team.
6. Hard constraints
Compliance, data residency, security, budget caps, deadline drivers, contractual lock-ins, brand voice rules, anything that cannot be negotiated away. "We can't put customer data in any tool we don't have a DPA with." "We need this live before our Q3 push." "Our voice is editorial, not casual."
What this surfaces: scope boundaries. Better to learn on the brief that a consultant can't work inside your compliance posture than to learn it three weeks in.
7. What you've already tried
The Zaps that broke. The agency that quoted six figures and ghosted. The internal hire who built half a thing and left. Be specific about why each previous attempt didn't land — was it scope, talent, integration, change-management, or budget?
What this surfaces: pattern recognition. A consultant who hears "we tried three Zapier-based automations and they all broke when we hit volume" should immediately know what they're walking into. If they don't, that's a signal.
8. The shape of the engagement you want
Fixed-price project? Time-and-materials build? Retainer with monthly scope? Pure advisory? Embedded for a quarter? Every consultant is better at one of these than the others, and most will pretend to be flexible until you sign. Naming your preferred shape upfront filters fast.
What this surfaces: pricing fit. Some shops only do retainers. Some only do projects. The shape mismatch is the most common reason a great-looking discovery call ends in a quote nobody will sign.
What to watch for in the response
The brief is also a stress test. A few patterns to read.
Good signs:
- They reply with more questions, not fewer. Specifically about the outcome (section 2) and the data (section 4).
- They name the pattern they think fits — automation, AI-augmented automation, RAG, agentic loop — and explain why.
- They push back on at least one thing in the brief. The constraints are too tight, the scope is too broad, the success metric is fuzzy. A consultant who agrees with everything is selling, not advising.
- They tell you what's not in scope and why. Boundary-drawing on day one means fewer surprises on day 30.
Red flags:
- A quote in the first reply, before any of the brief has been discussed. Either they didn't read it, or the quote is the same number for everyone.
- The word "agentic" used four times without ever describing a tool surface, a goal, or a stop condition. (Definition here.)
- Refusal to name specific tools or models they'd use. "We pick the right tools for the job" is fine on a marketing page; on a brief response, you want to see "we'd put this on Make + Anthropic Claude with Postgres for state."
- A 6-month phase one. AI projects that don't show working software in 4-6 weeks usually don't show working software at all.
- Heavy "we'll figure that out together" energy with no anchor. Discovery is fine. Discovery without milestones is billable drift.
What a good consultant should ask you back
If the consultant is good, the response will include questions you didn't think to answer in the brief. The ones we ask, in roughly this order:
- "What's the dollar value of an hour of your time, and an hour of the team handling this workflow today?" (Anchors the ROI math.)
- "If this system worked perfectly, what would you spend the freed-up hours on?" (Filters projects where the answer is "nothing" — those projects don't get adopted.)
- "Who reviews the system's output for the first 30 days?" (No agentic system goes live unsupervised. Anyone who says otherwise has not run one in production.)
- "What's the rollback plan if the system makes a wrong call?" (If you don't have one, the project plan needs to include building one.)
- "Are you optimizing for cost, speed, or quality, and which one would you sacrifice last?" (You can't have all three. Knowing which is least sacrificable shapes every architecture decision.)
A consultant who asks these is doing diligence. A consultant who skips them is selling a template.
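The first question — the dollar value of the hours — reduces to back-of-envelope arithmetic. A minimal sketch, reusing the illustrative ~6 combined hours/week from section 1; the blended rate and recovery fraction are assumptions you'd swap for your own numbers.

```python
# Anchor the ROI math: what does the workflow cost today,
# and what value would a working system plausibly recover?
hours_per_week = 6      # team time on the workflow today (section 1 example)
blended_rate = 75       # assumed $/hour for the people doing it
weeks_per_year = 50

annual_cost = hours_per_week * blended_rate * weeks_per_year
print(f"current workflow cost: ${annual_cost:,}/year")   # -> $22,500/year

# If the system recovers ~80% of those hours, that recovered value is
# the ceiling any fixed-price quote should be sanity-checked against.
recoverable = annual_cost * 0.80
print(f"recoverable value: ${recoverable:,.0f}/year")    # -> $18,000/year
```

If the quote lands well above the recoverable value, the scope is wrong, the numbers are wrong, or the project is really about something other than the hours.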
What this all adds up to
A 30-minute brief is the cheapest insurance you can buy on an AI engagement. It will save you weeks of rescoping mid-build, it filters out consultants who won't survive contact with your reality, and it forces you — the founder — to know your own business one layer deeper than you did this morning.
If you'd rather walk through it with someone instead of filling out the doc cold, that's the shape of our strategy call. We'll work the eight sections live, point at the bottleneck that actually matches the symptoms you're describing, and tell you whether what you need is automation, AI, both in sequence, or neither yet. Bring the names, the numbers, and the constraints. We'll bring the architecture.