
AI Maturity Model for Marketing Teams: The 6 Stages You Can't Skip
Nicole Leffer has trained 100+ B2B companies on AI adoption, from Series A startups to Fortune 50s. Her verdict: Most teams are chit-chatting with ChatGPT and calling it a strategy. Here's the maturity model that shows where you actually are.
Quick Summary: Most teams are still chatting with AI. The real advantage comes from building repeatable workflows, clean inputs, and controlled automations—before jumping to agents.
The AI adoption gap Nicole Leffer described at Atlanta's CMO Huddles Strategy Lab isn't subtle.
On one end: Organizations where half the team is still on free ChatGPT accounts, having back-and-forth conversations with the AI like it's a search engine.
On the other end: Teams deploying fully autonomous agents that go out, strategize, research, execute, and report back — while humans do something else.
Both sets of companies will tell you they're "using AI in marketing." One of them is correct.
Leffer started using generative AI tools in 2021 (before ChatGPT existed) and has since trained over 100 B2B companies on AI adoption, from Series A startups to Fortune 50 enterprises. She's seen what works, what fails, and what gets skipped. Her conclusion: There's a progression that cannot be shortcut, and the teams that try to jump straight to agents without building the foundation underneath are heading toward expensive, embarrassing disasters.
"You can't go from zero to 10,000 in an hour. I have not seen anybody successfully skip those steps." — Nicole Leffer, AI trainer and consultant
The Six Stages of AI Maturity (And Where Most Teams Are Stuck)
Stage 1: Chat.
Everyone starts here. Back-and-forth conversations with the AI, asking it questions, refining outputs through dialogue. This is a great starting point, and genuinely instructive. You learn how the models think, what they're good at, where they fail.
But a back-and-forth chat is not a scalable marketing workflow. You can't hand the conversation to a colleague, you can't repeat it reliably, and every output starts from scratch.
Stage 2: Systematizing.
Here's where most teams get stuck—and where the first major mindset shift happens. Instead of chatting to change the output, you learn to edit the prompt. You put your instruction in, see where the result misses what you wanted, and go back to refine the prompt itself rather than having a conversation about it.
The difference sounds small, but the impact is enormous: you now have a prompt you can template, share, and repeat. This is the foundation of everything that follows.
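The shift from chatting to systematizing can be sketched in a few lines of Python. This is an illustrative template, not material from Leffer's training; the instruction text and placeholder fields are invented for the example.

```python
# A reusable prompt template: you edit the instruction, not the conversation.
# The template text and fields here are illustrative, not from Leffer's training.
PROMPT_TEMPLATE = """You are a B2B content editor.
Rewrite the draft below for {audience} in a {tone} tone.
Keep it under {word_limit} words and preserve all product names.

Draft:
{draft}"""

def build_prompt(draft: str, audience: str, tone: str, word_limit: int) -> str:
    """Fill the shared template so any teammate gets the same instruction."""
    return PROMPT_TEMPLATE.format(
        draft=draft, audience=audience, tone=tone, word_limit=word_limit
    )

prompt = build_prompt(
    "Our Q3 launch announcement...", audience="CFOs", tone="formal", word_limit=150
)
```

Because the instruction lives in a template rather than a chat history, a teammate can run the same prompt tomorrow and get comparable output—the property that makes Stage 2 repeatable.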
Stage 3: Features and functionality.
Most people using ChatGPT, Claude, or Gemini daily have never explored more than 20% of what the tool can do. Deep research. Image generation. Agent mode. Connectors to external tools. Projects and memory. Custom GPTs.
In practice, understanding what's available—and what each feature is actually for—is what separates people who get consistently strong results from those who keep getting mediocre ones.
Stage 4: GPTs, Gems, and projects.
Stage 4 is the build stage. A custom GPT is not a trained model; it's a pre-prompted interface. You've saved your instructions, your context, and your reference files on the back end. The result is a tool your whole team can use without anyone needing to know how to prompt it from scratch.
One CMO shared how she built a Claude project for analyst relations, loading past briefings, analyst content, and her own notes to generate prep docs, improve decks, and scale the team's workflow across multiple analysts.
Another CMO built a competitive intelligence GPT that delivers a weekly synthesis of market news, earnings updates, and competitor movements. These are not advanced AI deployments. They are well-built prompt templates, and they save hours every week.
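Under the hood, a custom GPT or project is roughly this: a saved system instruction plus reference documents, bundled with every user request. A minimal sketch, with invented file names and instruction text; the role-tagged message format follows the common chat-API convention rather than any one vendor's SDK.

```python
# A custom GPT is a pre-prompted interface: saved instructions and context
# are attached to every request. All names and text here are invented examples.
SYSTEM_INSTRUCTIONS = (
    "You are our competitive-intelligence assistant. "
    "Summarize market news against the positioning in the reference docs."
)
REFERENCE_DOCS = {
    "positioning.md": "We lead on integration depth, not price.",
    "competitors.md": "Key competitors: Acme Corp, Globex, Initech.",
}

def build_messages(user_request: str) -> list[dict]:
    """Assemble the role-tagged payload a chat API would receive."""
    context = "\n\n".join(
        f"--- {name} ---\n{text}" for name, text in REFERENCE_DOCS.items()
    )
    return [
        {"role": "system", "content": f"{SYSTEM_INSTRUCTIONS}\n\n{context}"},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Summarize this week's competitor news.")
```

The end user only ever types the last message; everything else is pre-loaded, which is why Leffer calls these well-built prompt templates rather than advanced AI deployments.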
Stage 5: Automations.
Now the workflow runs without a human trigger. Something happens (an MQL hits a threshold, a competitor files an earnings report, a new piece of content goes live) and the AI process kicks off automatically. Tools like Zapier or Make.com connect the trigger to the AI to the output. The human reviews at the end.
Leffer is explicit about this: For most marketing teams, automation is the right ceiling. You have full visibility into what the AI is doing. You haven't introduced autonomous decision-making into a live customer-facing process. You're getting compounding efficiency gains without compounding risk.
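The trigger-to-AI-to-review pattern looks roughly like this. A simplified Python sketch: the event shape, threshold, and draft function are invented stand-ins for whatever a Zapier or Make.com scenario would wire together, and the AI step is stubbed where a real model call would go.

```python
# Trigger -> AI -> human review queue. Names and thresholds are invented;
# the AI step is stubbed where a real model API call would go.
MQL_SCORE_THRESHOLD = 80
review_queue: list[dict] = []

def draft_outreach(lead: dict) -> str:
    """Stub for the AI step; a real workflow would call a model here."""
    return f"Draft follow-up email for {lead['name']} at {lead['company']}."

def on_lead_scored(lead: dict) -> None:
    """Fires automatically when a lead's score changes (the trigger)."""
    if lead["score"] >= MQL_SCORE_THRESHOLD:
        # The AI runs without a human trigger, but a human reviews the output
        # before anything reaches a customer.
        review_queue.append({"lead": lead["name"], "draft": draft_outreach(lead)})

on_lead_scored({"name": "Dana", "company": "Acme Corp", "score": 91})
on_lead_scored({"name": "Lee", "company": "Globex", "score": 42})
```

Note the design choice that keeps this at Stage 5 rather than Stage 6: the automation decides *when* to run, but a human still approves *what* goes out.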
Stage 6: Agentic AI.
Agentic AI is the frontier—and the one most misunderstood. True agentic AI means you give the system a goal, not a task. It figures out how to accomplish it, takes actions, makes decisions, and operates with minimal human oversight.
One CMO in the room had already deployed agents at this level across their team. Most haven't. And Leffer argues most shouldn't, yet, because the failure modes are genuinely consequential and invisible to teams that haven't built literacy through the earlier stages.
⚠ Microsoft Copilot calls its custom prompts "agents." They are not agents. A Copilot agent is a GPT with a different name. This terminology confusion is causing CMOs to think they're further along the maturity curve than they are—and to be blindsided when real agentic AI behaves in ways they didn't anticipate.
The Biggest Light Bulb Moment Leffer Sees in Training
Across all the organizations Leffer has trained, one moment creates more acceleration than any other: When people stop chatting to refine output and start editing the prompt instead.
The chat-to-refine approach (ask, get output, say "make it shorter," "make it more formal," "try again"), while natural, ultimately kills scalability. You can never hand that conversation to a colleague. You can never run it again. Every output is unrepeatable.
The moment people understand that a mismatched result is the prompt's fault—not the AI's—and start editing the instruction rather than negotiating with the output, something clicks. Now they're building something where the work compounds, and the tool is genuinely saving time rather than creating a new kind of back-and-forth busywork.
"When people start editing instead of chatting to change the output, they get better results. It also makes your results scalable instantaneously." — Nicole Leffer
What CMOs Actually Need to Do Right Now
Leffer is direct about priorities for CMO-level leaders.
First: Get your team baseline training.
Not generic AI awareness content, but discipline-specific training: show a product marketer how to build a GPT for their workflow, and show a demand gen manager how to build an automation for their lead process. No surprise here: Generic training produces generic adoption.
Second: Identify your stars early.
The person who will become your team's AI expert is probably not who you'd expect. Once people have baseline training and permission to experiment, watch who runs with it. Shift their role to reflect it. That internal champion will do more for your team's adoption than any external trainer.
Third: Build your AI knowledge base before you start deploying agents.
The documents your AI will reference (brand guidelines, personas, competitive positioning, product narratives) need to be clean, current, and organized. In reality, the messiest part of every enterprise AI deployment isn't the technology. It's the documentation. Fix that first.
Fourth: Set incentives.
Ultimately, if you want behavior to change, measure it and reward it. This doesn't require a compensation overhaul. Start with recognition, with protected time for experimentation, with a public showcase of wins. Make AI fluency visible as a career asset, not just a corporate directive.
Want more? Join the CMO Huddles community to access Strategy Labs, Expert Huddles, and peer conversations with senior B2B marketing leaders.
Nicole Leffer is an AI trainer and consultant who has worked with 100+ B2B companies on AI adoption, from Series A startups to Fortune 50 enterprises. She was an early generative AI adopter in 2021, pre-ChatGPT.
This article draws on insights from Atlanta's CMO Huddles Strategy Lab on AI Initiatives. CMO participant examples are shared anonymously per Chatham House rules.
Frequently Asked Questions
What is AI maturity for marketing teams?
AI maturity is the progression from basic chat-based AI use to repeatable prompts, custom GPTs or projects, automations, and eventually agentic workflows.
Why do teams get stuck on the maturity curve?
Many teams jump into advanced tools before building basic AI literacy, prompt discipline, and clean documentation.
Are Microsoft Copilot "agents" true AI agents?
No. Nicole Leffer noted that Copilot "agents" are closer to custom GPTs or pre-prompted tools, not fully autonomous agentic AI.
What should CMOs do first?
CMOs should train their teams, clean up internal knowledge bases, document workflows, and test human-reviewed automations first.
What is the biggest mindset shift in AI adoption?
Stop refining outputs through endless chat. Edit the original prompt until the workflow becomes repeatable, shareable, and scalable.