April 30, 2026

From AI Curiosity to Capability

Marketing teams don’t need more AI tools. They need better habits around the ones they already have. 

Experimentation got marketing teams started, but it won’t take them very far on its own. The payoff starts when teams stop treating AI like a side experiment and start using it in ways they can repeat and build on. 

In this episode, Drew Neisser talks with Nicole Leffer, one of the most practical voices in B2B AI adoption, about what it takes to make AI use more consistent and scalable. After working with more than 100 companies, Nicole has a clear view of what separates teams that stay stuck in trial mode from teams that build a repeatable advantage. 

Three AI Mistakes Marketers Make: 

  1. Relying on back-and-forth prompting instead of building reusable workflows
  2. Underestimating what their core AI tool can already do
  3. Falling for hype cycles and constantly switching platforms

What You’ll Learn: 

  • How to build workflows that save real time
  • The hidden cost of tool sprawl
  • Where AI security risks are showing up now
  • How to build AI capability across the team

If you’re a B2B CMO working to build stronger AI habits across your team, this episode will give you plenty to work with! 

Renegade Marketers Unite, Episode 516 on YouTube

Resources Mentioned

  • Tools mentioned

Highlights 

  • [2:13] Three AI mistakes marketers make
  • [3:23] Why chatting back and forth fails
  • [9:07] Master an AI tool before adding more
  • [13:57] Hype switching wastes time and money
  • [16:56] Build once, save hours forever
  • [20:20] Build a markdown memory file
  • [26:22] AI adoption with security guardrails
  • [32:37] Open models still carry security risk
  • [36:36] Train the whole team, not one hero
  • [43:19] Project-specific memory fixes mix-ups
  • [47:23] Build GPT skills before agentic workflows
  • [49:09] Slow down and build your AI foundation

Highlighted Quotes  

"Those that hype switch, hype switch back. If you are shutting down your ChatGPT right now for Claude, you are gonna find out that ChatGPT added something, and suddenly you are spending all your time porting everything between tool to tool to tool."— Nicole Leffer, CMO AI Advisor 

"With ChatGPT, I would say I see maybe one to two percent of the functionality being used. Most people are so unaware of what you can actually ask it to do — same with Gemini, same with Claude. You're using such a fraction of the capability."— Nicole Leffer, CMO AI Advisor 

"Train your entire team collectively on a core tool really, really well. They should all be capable of communicating with it, using it in their day-to-day work, understanding how to do things like deep research, leverage code interpreter — baseline understanding, right?"— Nicole Leffer, CMO AI Advisor 

Full Transcript: Drew Neisser in conversation with Nicole Leffer

Drew: Hello, Renegade Marketers, if this is your first time listening, welcome. If you're a regular listener, welcome back. You're about to listen to an expert huddle where experts share their insights into topics of critical importance to our flocking awesome community, CMO Huddles. In this episode, multi-time guest Nicole Leffer, AI expert, shares some of the most practical AI advice a marketing leader can get right now. She gets into why too many teams keep bouncing from tool to tool, why they often miss the full value of the tools they already have, and why real progress starts with a stronger foundation, better workflows, and a clear strategy. If you like what you hear, please subscribe to the podcast and leave a review. You'll be supporting our quest to be the number one B2B marketing podcast. All right, let's dive in.

Narrator: Welcome to Renegade Marketers Unite, possibly the best weekly podcast for CMOs and everyone else looking for innovative ways to transform their brand, drive demand, and just plain cut through — proving that B2B does not mean boring to business. Here's your host and Chief Marketing Renegade, Drew Neisser.

Drew: Hello, Huddlers! Nicole Leffer has joined CMO Huddles several times and is one of the most practical voices helping marketing teams move from AI curiosity to real capability. She has worked with more than 100 B2B companies, from startups to Fortune 50 firms, helping them train teams, design workflows, and adopt AI responsibly. Nicole, welcome back — and oh, by the way, I forgot to mention that she is actually a real person. She exists. We met in person in Atlanta last week, which is very cool, and I have the selfie to prove it. But Nicole, welcome back.

Nicole: Thank you so much. It's good to see you again.

Drew: It is! So, how are you and where are you this fine day?

Nicole: I am good. I am in Atlanta, and I am also in a computer — like, just head down — because AI has been really insanely fast moving for the last few months. This last week has been really crazy too, though, so super deep in the computer.

Drew: It's true. So, okay, and we'll talk about some of that, but one of the things — just in case we've got a bunch of folks here that we need to sort of convince them to stay or they have to leave early — either way, what I'd like to do on these expert Huddles is sort of identify and talk about the three biggest mistakes you see marketers making right now when it comes to getting the most out of AI. And if you can — I know it's hard — just list them, and then we'll go through them one at a time, because if you provide all the explanation, then I have no questions to ask you.

Nicole: Yeah, and I wrote it down so I hopefully can keep myself on track, because everybody knows I can't not talk too much. So: one, in a chat — like in the chatbot, chatting back and forth to change what you're getting out of it — that's number one mistake. Number two is not fully understanding the tool that they have and what it's capable of, how to use it, before they switch to another tool, get another tool, add to their tool arsenal. And number three is falling for all of the hype and switching tools just because everybody else is. So those are the three big mistakes I see happening.

Drew: What I can promise folks is that you're going to get a lot of practical, real world — this is not theoretical — and Nicole spends all her time with these tools. So talk about the back-and-forth thing and why that's so problematic.

Nicole: Yeah. So you know, you have to start out learning the tools by chatting back and forth to refine the output, because that's how you come to understand them. But there are two things you can never get out of it. One, you can never get the best possible outputs out of that back-and-forth conversation, just because of how the tools work. It's going to keep bringing in all the things you don't want into its calculation, instead of just reading what you do want from it to give you the output. So you just don't get quite as good results when you're doing that back and forth. And then the other piece that's even bigger is you cannot scale and repeat a back-and-forth workflow. You can't share it with your team. You can't build it into an automation. You can't build a GPT out of that back and forth. But what you can scale is this: instead of going, "I'm going to put in an input, see what ChatGPT, Claude, or whoever you're using — Gemini — gives me, and then chat to say, 'Okay, well, actually, can you change this?'" — you actually look at what you didn't like about what it gave you, edit your initial prompt so that it reflects more of what was in your mind, and then see what it gives you, until the prompt you're putting in is actually giving you what you want. Then you can templatize that prompt, and then you can reuse it over and over and over in a scalable way. You can turn it into a GPT or a Gem or whatever your tool uses. You can actually start putting together workflows that work on a consistent basis for you, but you can never hand anybody else on your team — or even yourself — a replicable chat back and forth. That's like starting from scratch every single time to get what you want, and that's not practical in the long term.

Drew: And I want to ask a question, or maybe it's additive: one of the things that unfortunately — there have been a number of times where I've had the back and forth, but eventually I got to what I wanted — and at the end of that I said, "Okay, this is exactly what I want. Now write the prompt that I should have used at the beginning that would have gotten me here, that I can use again."

Nicole: I love that. I love that strategy, with a caveat: if you're going to use that strategy, you should know enough about good prompting to be able to look at that prompt and know what about it is good and what about it is not good to make it work within your entire AI strategy. Because a lot of times, like, yes, you can take that, but then it's not going to fit into your entire workflow when it becomes one piece of a bigger workflow. So you need to understand enough about the AI — if you're using that strategy — to take that prompt and go, "Is this actually good? Is this as good as it could be? How could I improve this?" One of the things that a lot of people don't realize is, while the AI is in many ways very good at writing prompts, it's also behind on its training. So what it knows to be the best ways to do it is based on outdated training. So you should still know enough about it to be able to recognize what's good, what's bad, and how could I improve what it just gave me?

Drew: So I think the one big takeaway from this section so far is the goal is not to reinvent the wheel every time you sit in front of this machine and create new prompt after new prompt after new prompt, particularly for things that you're doing over and over again, right?

Nicole: Absolutely. There are going to be times where it's a one-off thing you're doing with the AI, right? It's still a best practice to be in the habit of editing your prompts instead of chatting to change the outputs — even in that thing that's a one-off — just because of the quality increase that you get out of it. However, there are instances where it's totally fine to chat. It's not a hard and fast "there's never a time you should be doing that." But the ultimate goal for truly going from experimentation into adoption is to be able to build scalable, repeatable workflows with the technology that have specific points with a human in the loop and a human's involvement, but that you do it the same way every time.
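Nicole's advice to edit the prompt rather than chat with the model can be captured in something as small as a shared template. Here is a minimal sketch in Python; the template text, function name, and placeholders are hypothetical illustrations, not anything from the episode:

```python
# A reusable, templatized prompt: when the output misses the mark, refine
# PROMPT_TEMPLATE itself rather than correcting the model in a chat thread.
PROMPT_TEMPLATE = """You are writing a LinkedIn post for a B2B marketing audience.
Topic: {topic}
Tone: {tone}
Constraints: under {word_limit} words, no jargon, end with a question.
"""

def build_prompt(topic: str, tone: str, word_limit: int = 150) -> str:
    """Fill the shared template so everyone on the team gets the same prompt."""
    return PROMPT_TEMPLATE.format(topic=topic, tone=tone, word_limit=word_limit)

prompt = build_prompt("AI adoption habits", "direct, practical")
```

Once a template like this reliably produces what you want, the same text can become the standing instructions of a GPT, a Gem, or whatever your tool calls its reusable assistant.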

Drew: We had a question about this, and so I'll give an example — and this is a "you can too" — which is pretty much every Saturday for the last year and a half, I have written an editorial on LinkedIn. I call them rants, and every one of them is accompanied by a penguin CMO with a purple baseball cap with the word "CMO" on it in some form of Antarctic office setting. And I've been doing this long enough — so that's probably 75 to 80 different images — that it's become a workflow. What happened after doing 40 of those is that the prompt ran out — like, it no longer would work — and so I had to go back, and then it sort of forgot. So it's been a journey, but now I have it really dialed in. I now will go and say, "Before you do anything, give me three concepts based on this idea." Sometimes I'll put the entire editorial in there. Other times I'll just say "on this theme," and it pretty much gets me three good concepts. We tweak a little bit, and then it'll give me — first or second try — a good image. And that's huge. But boy, it took me a long time to get to where I wanted to be, where I was consistently happy with the output. And I learned things like, don't put a lot of words in there — literally saying, "Do not put a lot of words in it." But it's gotten so much better, I have to say — at least DALL-E has. So okay, let's keep moving. Your second point was fully understanding the platform before you add another one — talk about that.

Nicole: So there are a few reasons for this. If you are using ChatGPT, Gemini, Claude, or Copilot — those are, you know, most companies have at least one of those in their arsenal, and it is probably their primary tool that they are using. What I see so many companies doing is so strongly underestimating what that tool is capable of, and so they want purpose-built tools for all different kinds of things, instead of just actually understanding what are the features, what is the functionality, what is this capable of, what are the limits that I can push this specific tool to? And the reality is, ChatGPT can do almost anything you possibly want to do with AI — not 100%, but almost. Gemini is getting there, Claude's getting there. It's not 100% with any of them, but you end up spending so much more money and so much more cognitive load trying to figure out every single time: one, what's the best tool — and testing a bunch of different tools for this — then, how do I use the platform now that they just moved the buttons around and changed things? And now you have 17 different tools that you have to keep track of, understand, and comprehend, instead of one tool you need to just keep track of and understand. It's not to say not to have a backup tool or something else that you can use, because sometimes they go down — like, we saw that actually this week; there were issues with — I think Claude was down at some point this week. So you do want a backup tool, but really, not fully understanding the tool that you're already paying for is a huge mistake. Because usually you can do it as well, if not better, with one of those four general tools than you can with any purpose-built tool. It's just that you don't understand how to do it. And if you don't understand the tool you're using, you're just missing out on the actual value and capability that it has.

Drew: Nano Banana infographics are so cool.

Nicole: They are so cool, but you know what? Infographics came out a week later with updates, and they are also so cool — because that's the other piece of it. Instead of hopping for changes, it's understanding the limits, capabilities, everything about your tool and where you see it falling short — and just paying attention to when they update it so that you can use it, because there are plenty of other things that you can be doing with that tool. And inevitably, we are not talking months or years for them to catch up with each other if another one has that feature or functionality. We are talking usually days — if anything, weeks — you're not talking some huge gap, right? And more than half the time, what most marketers think they have to switch to another tool for is something that's already in the tool that they are using.

Drew: And by the way, I was obviously joking about the Nano Banana — but also, if you look at the history of martech usage, most of the time about 20% of the capabilities are being used. So it's the same thing, right? And it seems like a similar tendency: "We're going to get a solution-specific tool," and, "Oh, but what about this? This one's really good for slideshows."

Nicole: And with ChatGPT, I would say I see maybe one to two percent of the functionality being used. Like, most people are so unaware of what you can actually ask it to do. Same with Gemini — same with Claude — I see people using such a fraction of the capability, because it's not just an obvious "here's a button for this." And so if you don't understand it — if you don't understand, like, "Oh, I could just upload a skill to ChatGPT and ask it to have the code interpreter run it for me" — then you think ChatGPT can't use that functionality. But if you understand the tool, you realize, oh wait, actually it can. So there's a lot going on that is just completely out of sight. People don't understand how to use it. So I just think, constantly jumping — or leaving your tool to get another one, well, that's more the switching — but the adding of a tool before figuring out if your tool can do it, it's just something that most marketers are spending so much energy on that they don't need to be.

Drew: Interesting. And that could save you a lot of money too, right? It could save you a lot of money, because if you end up with enterprise licenses for three tools...

Nicole: Enterprise licenses for three tools for 50 people — you start adding up that budget pretty quickly.

Drew: And then somebody says, "You know, I really need that $200-a-month plan." And next thing you know, you're actually spending real money on these tools, and you're under-utilizing them, which I think is fascinating. The third point — we sort of merged into that one.

Nicole: Yeah, but it's a little different. So one of the things I'm seeing, especially right this minute, is a lot of hype switching. A lot of people are saying, "I switched from this to this. I just shut down my account here and switched to this." I would strongly warn against getting caught up in that for a couple of reasons. One, a lot of it goes back to not understanding your tool: the people switching are switching because they don't understand it. So that's piece one. Piece two is not understanding the amount of time you spend relearning a new tool. And those that hype switch, hype switch back, right? It doesn't go one way. If you're shutting down your ChatGPT right now for Claude, you're gonna find out that ChatGPT added something, and people are gonna start saying so — because this has happened. This is not the first time it's been "shut down ChatGPT for Claude." This has happened several times over the years. People don't realize that ChatGPT is going to come back at some point and be the leader, and then Gemini is going to be the leader. The lead for who has the best tech changes constantly, and suddenly you are spending all your time porting everything between tool to tool to tool. If you want to try Claude because you hear "Claude is amazing," great: get a Claude account, but don't just shut down your ChatGPT account, because then you're gonna have to switch it all back and forth. Again, I think having a backup account is great, but there's a lot you don't realize. You don't know about the learning curve. The buttons are different. They may have the same functionality in many ways, but you've gotta learn new terminology, new buttons, all of these things, and then you just do it again. And what a lot of people do not account for is the price difference.
They do not realize that different tools charge in different ways. So like right now, a lot of people don't realize, if you change from a ChatGPT $20 a month to a Claude $20 a month, you are not getting the same amount. And if you shut down your ChatGPT account, and then you start using Claude, and you realize, "Oh no, after like four messages I get rate limited and I can't use it again." And now to get the same amount of AI, you've got to spend $200 a month instead of $20 a month. And now you extrapolate that out to your team, because you just got caught up in everybody else saying, "Hey, I was doing it." There's just a lot more to learn than people realize before you start making those switches.
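Nicole's point about extrapolating the price difference across a team is easy to put numbers on. A hypothetical back-of-the-envelope calculation, using the illustrative figures mentioned above (a 50-seat team, a $20 tier, and a $200 tier needed to match usage on the new tool):

```python
# Hypothetical seat-cost math for "hype switching" when rate limits on the
# new tool's entry tier force an upgrade to a higher tier for equal usage.
seats = 50
current_plan = 20        # $/seat/month on the tool the team already knows
required_plan = 200      # $/seat/month needed on the new tool to match usage

monthly_delta = seats * (required_plan - current_plan)
yearly_delta = monthly_delta * 12
print(monthly_delta, yearly_delta)  # 9000 108000
```

An extra $9,000 a month, or $108,000 a year, before counting the relearning time the switch also costs.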

Drew: Okay, I think we got this point really well done. You know, Eric Eden joined us, and he's been spending a lot of time in Gemini, and one of the things he just reminded us in the Strategy Labs was nobody wrote — there weren't any manuals. And if there were manuals, they'd have to be updated every single day. So it is a challenge. Now I'm switching gears completely from what I was thinking about: what application are you working on right now that you could maybe walk us through? Sort of a project where you went, "That was so damn cool. That really was amazing, how that helped me." For you personally, or for one of your customers?

Nicole: You know, I actually would say the stuff that's made the biggest difference for me is not the most current, and that's kind of interesting to me. When I really look at it, it's stuff I put in place quite a while ago — that people are only now coming around to realizing is possible — versus stuff I'm doing with the latest cutting-edge technology. And I actually think it's more important to point that out. One of my favorite, most useful workflows: when I built my "Foundations of Generative AI for B2B Marketing" course, I built an AI automation so that the second I'm done editing a video, it goes into Dropbox. That triggers an automation that takes the video, creates a transcript of the course lesson, and then turns that into notes, turns it into a product lesson description, puts it into an HTML format, pulls out the homework that I told people to do — it pulls out all of these different things and formats them in very specific ways for me. And then I get all of that together in my Google Drive, ready to use. I've got a PDF of the course notes that I can upload to my platform. I have the HTML ready to paste into my platform. It saves me hours and hours and hours of work. And the reality is that I built that two years ago, and everybody thinks you've got to use Claude or one of these crazy tools to do something like that. My automation, built two years ago — I do not need to reinvent it with agentic, way more expensive technology. I'm able to continue using it because it was already built out. So I constantly look at it and think: the value of this built-out workflow that consistently performs every single time exactly how I need it — that's the thing that continues to blow my mind.
But I think the thing that's blowing my mind about it is this has been possible, if you knew how to use the tech, for a really long time, and people are just discovering you can do stuff like this now. The agentic technology has gotten to the point where I probably could update it to automatically go into my course platform and add those things. That would be cool, but the real time that takes is 30 seconds. The real time saver is the stuff that I built out that has been working. So I would say those kinds of uses — whether you're doing it through automation, where you have fine-grained control and you're not taking the risks that agentic AI brings, or through the newer, hyped agentic tools where the AI controls more of the process — that kind of functionality is just awesome: you can have one solid input and end up with everything you need, already edited, already reviewed, already in ready-to-use format. You still gotta put human eyes on it to make sure it didn't mess up, but it is definitely an awesome use of the tech.
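The shape of the automation Nicole describes, one finished video fanning out into several ready-to-use assets, can be sketched as a simple pipeline. Everything below is hypothetical: the stub functions merely stand in for the real transcription and AI-summarization services such an automation would call.

```python
# One trigger (a finished video) produces every derived asset in one pass.
def transcribe(video_path: str) -> str:
    """Stub standing in for a speech-to-text service."""
    return f"transcript of {video_path}"

def summarize(text: str, style: str) -> str:
    """Stub standing in for an LLM call that reshapes the transcript."""
    return f"[{style}] {text}"

def on_new_video(video_path: str) -> dict:
    """Runs when a finished video lands in the watched folder."""
    transcript = transcribe(video_path)
    return {
        "course-notes.pdf": summarize(transcript, "course notes"),
        "lesson-description.html": summarize(transcript, "HTML lesson description"),
        "homework.md": summarize(transcript, "homework items"),
    }

assets = on_new_video("lesson-01.mp4")
```

The point is the fan-out shape, not the stubs: once it's built, each new video costs one trigger instead of hours of manual repurposing.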

Drew: Okay. Sherehan Ross also, who was with us in Atlanta — Sherehan, what's your question?

Sherehan: Hi guys. So good to see you both, Nicole and Drew. Nicole, I think we talked about this a little bit last week in Atlanta, but maybe it's useful for everybody here. I still struggle occasionally when I'm working within Cowork with context. When I start in a task, everything within that task, context-wise, is fine, and I'm learning a lot with it. And it's automating some stuff for me, and it's doing some miraculous stuff, which is really like watching magic happen. But then if I ask it about a different task, it has no context or no memory, or the tasks don't talk to each other. So now I'm asking it to build something where it can save context from every task into maybe like one folder, and then it can reference that each time I start something new. Am I going about it in a very long-winded way? Is there an easier process here? I don't know. I just — again, it's all new to me, so I'm learning. 

Nicole: And I want to step back and just make sure everybody even knows what we're talking about, because Cowork is extremely new, and I think a lot of times we have a tendency — in marketing conversations — to assume everybody has even a clue what new tool you're talking about. So in Claude, you can have the Claude desktop app. And in the Claude desktop app, there is a tool called Cowork that allows, essentially, Claude to go into your computer files. You select the files you want it to work on, and it can do work inside of your file. So it can be creating new files actually on your computer. It can be working with the files. It can kind of do — and it can do stuff in parallel. It is an agentic capability, so it is not the same as like a Copilot agent, which is more like a GPT or pre-prompted ChatGPT. It is actually a capability where you give the AI a task, it figures out how to do the task, and it takes the tools and resources at its disposal and the parts of your technology you have given it access to, and it autonomously decides how to do it. So, just so everybody understands, that's what we're talking about with Cowork. If you're in the OpenAI ecosystem, Codex — it's a little bit different, but there is a Codex app which, in fact, was only on Mac. It's now also on Windows as of literally, like, 12 to 24 hours ago. And the Codex app allows you — in the OpenAI ecosystem, it's included with a ChatGPT subscription — so if you're like, "Oh, I'm not on Claude," you can use it there too. All right. That said, to answer your question — one, I think everybody needs to understand, with that technology, we're still understanding best practices. It is extremely new. So I mean, this has been out a matter of weeks, not like months or years. So we're talking in very short time frames. What I am seeing people find success with, with it, is creating — like having it create a Markdown file that it continuously updates for every file you work in. 
So it's not necessarily everything you do, because the AI doesn't need context for everything you do. But for any files or references or projects that are going to interconnect, you're creating a Markdown memory file for the agent to give itself notes about what it has been doing. And then part of your directions are to ask the AI to go read the memory Markdown files in every folder, so it can absorb that memory and bring it all back together. A huge asterisk for people before you start using Claude Cowork or Codex or Claude Code or any of these tools on any kind of work computer: please, please, please make sure that your internal company AI use policies allow it. The second you start giving tools access to actually make changes on your computer and access your files — depending on the connectors you set up, MCPs, all kinds of things here — you open up many, many security risks. And I will say I have seen far more AI use policies that explicitly prohibit this kind of technology, and so many people are completely oblivious to their policies. There are going to be people who lose their jobs over not understanding that, not understanding the risks they're taking. So this, again, goes back to the idea of: don't just follow hype. Definitely understand what you are doing when you do it. And I'm not saying you're doing something wrong — just to be very, very clear — I just think that's an important asterisk when we're talking about this stuff: there's a really high likelihood you're not even allowed to do this at your company.
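As an illustration of the memory file Nicole describes, the pattern can be as simple as one Markdown file per project folder that the agent is told to read first and update last. The file name and contents below are hypothetical:

```markdown
<!-- MEMORY.md: one per project folder; the agent reads this before working -->
# Project memory: Q3 webinar campaign

## What this folder contains
- `landing-copy.md`: landing page draft (v3, approved tone)
- `emails/`: three-part invite sequence; part 2 still in review

## Decisions so far
- Audience: B2B CMOs; plain language, no exclamation points
- CTA links to the registration page, not the homepage

## Open items
- Part 2 email needs a shorter subject line
```

The agent's standing instructions then say, in effect: before starting a task, read the MEMORY.md in every relevant folder; after finishing, append what you did and decided.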

Drew: It's so funny. You know, I listened to Hard Fork, and they told the story of the OpenClaw usage by a very sophisticated person at a very large social media company. And they built it and used OpenClaw on an isolated basis. And then they said, "Oh, it's working really good." And so then they put it over to their regular computer, and it started deleting all their emails. This is a good point, I think, to pause. Thank you, Sherehan. 

Ad Break: This show is brought to you by CMO Huddles, the only marketing community dedicated to B2B greatness and that donates 1% of revenue to the Global Penguin Society. Why? Well, it turns out that B2B CMOs and penguins have a lot in common. Both are highly curious and remarkable problem solvers. Both prevail in harsh environments by working together with peers, and both are remarkably mediagenic. And just as a group of penguins is called a huddle, our community of over 300 B2B marketing leaders huddle together to gain confidence, colleagues, and coverage. If you're a B2B CMO, why not dive into CMO Huddles by registering for our free starter program on cmohuddles.com? Hope to see you in a Huddle soon.

Drew: It's so funny — I know marketers are, for the most part, on the bleeding edge of using these tools in most companies, and marketers may or may not remember that martech in general opened up their companies to more security risks than any other area of the company. Most of those data breaches, where massive amounts of emails and other credit card information were exposed, were through e-commerce and other things that marketing controlled. So now we have early adopters of AI tools, and marketing, once again, could be at the center of the risk. So how are you recommending — because we want folks using these tools, and we want folks that are really good at using these tools, which means they're going to be at the cutting edge — how are companies managing this?

Nicole: Well, there are two — those are two different questions. Question one is, how am I recommending? Question two is, how are people actually doing it?

Drew: Let's just focus on what you're recommending.

Nicole: Yeah. So my recommendation is, there's a sweet spot between being on the front cutting edge of this and kind of falling behind. There's an in-between place where you are not trying to be literally the first to do every single thing. If your company was out there adding OpenClaw to your computer the day it started going viral, and letting it go out and create its own Moltbook account, and talk about your company on the internet, and all of the crazy stuff that came with that — that's a mistake, right? So step one is you don't have to be literally first at everything. If you're in this conversation, with the kinds of stuff we're talking about, you're already so far ahead of most companies. Let's be honest — most companies that I talk to are barely doing anything. Like, we write email subject lines, if that. So first things first, you've got to find the sweet spot, and you've got to actually take the time to understand the risks. Whatever you're about to do: what is the risk, and how can I mitigate it? Talk to your security people. Whoever does your IT security, I really hope, has some understanding of this, because some of these risks are exceptionally technical. They are so far beyond something that anybody who is advising on how to get the most out of these tools actually understands. There's "how do you use this for marketing," and then there's "technical security," and those are not the same person in general. So you need to understand how to use it for marketing, and you need to be talking to the right experts on the technical security. And what I see a lot of people not doing is the technical security. So many people, and companies, are still in the mindset that their fear is the AI learning from their data. That's not the risk we need to worry about anymore. That's not the current risk with this technology.
The actual current risk with this technology is that you give it way too much access without any control. But — sorry, I'm gonna stop — if you have your settings wrong, you are still risking training the AI on your data. The bigger risk right now, though, is the AI literally, autonomously going and giving your data to a malicious actor who has conned the AI into giving it away. Those are two very, very different things, because one is your data sitting in a pool of billions of data points, and the other is basically the AI equivalent of being hacked — it's like a hacker getting all of this data to do stuff with. Those are very different risks, and a lot of companies are not understanding that. A lot of companies are not understanding what it means if you download skills from the internet. Skills, for anybody that does not know, started on Claude — but it's not just Claude anymore, because everybody is using skills now that they're an open format. What a skill is, is a Markdown file inside of a zip file. You have a Markdown file that gives the AI directions about how to do specific tasks for you. "This is our brand guidelines, and here's how to make sure we're following them." "This is how we put together a deck." "This is how we think through analyzing a transcript and pulling out insights." There's an unlimited number of things a skill could be — a Markdown file that tells the AI how to execute a task. But that Markdown file can also tell the AI which tools to use — which connectors to activate, which MCPs, which are connections to external tools, to activate. And alongside the Markdown file, a skill can contain a file of resources and references, which can include code, and other information that the AI needs to reference.
You can also have a file in there that includes assets, which can include images and fonts and things like that. I'm telling you this so you understand that a skill is not just step-by-step directions about how to do something. It is code. It is images that can contain viruses. It is telling the AI to activate connectors. And people are going out downloading these from random people on the internet, installing them into their company Claude account, and that skill they have installed is basically telling the AI how to steal all your data and send it to some kid in his dad's basement who's gonna sell it to somebody else for a lot of money. I'm not saying that exact scenario, but right? That can happen. That is a real risk. And if you are running so fast and telling your team, "Go, go, go, go, go," you run the risk of these things happening, and you don't even know it's possible, because you ran so fast you didn't pay attention to the risk. So you've got to find the sweet spot where you understand the risk you're taking, and educate your team so they have the literacy not to make these mistakes. And that means you're not going to be the first person running and doing all of this, because there are so many people taking reckless risks right now.
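Nicole's point that a skill is "a Markdown file inside of a zip file" — plus code, assets, and connector instructions — can be made concrete. The sketch below is a hypothetical pre-install audit script (not an official tool from any vendor): it unpacks a downloaded skill archive and flags contents a plain step-by-step skill wouldn't need, such as script files, binary assets, and instruction text that asks the AI to reach outside the chat. The suffix and keyword lists are illustrative assumptions, not a complete security check.

```python
import zipfile
from pathlib import PurePosixPath

# File types a pure "step-by-step directions" skill would not need.
# Illustrative list only -- a real review would go much deeper.
RISKY_SUFFIXES = {".py", ".js", ".sh", ".exe", ".png", ".jpg", ".ttf"}

# Instruction keywords that ask the AI to reach outside the conversation.
RISKY_KEYWORDS = ("connector", "mcp", "http", "curl", "upload")

def audit_skill(zip_path):
    """Return a list of warnings for a downloaded skill archive."""
    warnings = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            suffix = PurePosixPath(name).suffix.lower()
            if suffix in RISKY_SUFFIXES:
                # A payload beyond Markdown directions: code or binary assets.
                warnings.append(f"non-Markdown payload: {name}")
            elif suffix == ".md":
                # Scan the directions themselves for tool-activation language.
                text = zf.read(name).decode("utf-8", errors="replace").lower()
                for kw in RISKY_KEYWORDS:
                    if kw in text:
                        warnings.append(f"{name} mentions '{kw}'")
    return warnings
```

An empty result doesn't mean a skill is safe; the point of the sketch is simply that these archives can be opened and read before anyone installs them into a company account.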

Drew: Like running with scissors. I want to get Guy Yalif from Webflow on the call. Guy was also with us in Atlanta. This is a reunion of sorts. So Guy, what's your question for Nicole?

Guy: Hey, great to see everybody again. Nicole, thank you for sharing in Atlanta. Could not agree more on the changing surface area. I was terrified watching OpenClaw — like, this is the most sensitive data in your life. I think OpenClaw highlighted a great use case, but the lack of security — you could actually, genuinely, irreversibly screw up your life with some of these things.

Nicole: One of my friends' OpenClaw agents — the AI just took it upon itself to start posting on his LinkedIn and sharing its experience. And I was like, "That is so brave," because that could, like, really mess up your life.

Guy: Deeply. Or it's got root access to your hard drive and could actually delete a bunch of stuff. I'm totally with you on downloading skills — I feel like when I watch my daughter downloading free stuff on the internet unsafely, I'm like — so the question for you is that one, at least in my little brain, seems clear. The one that I've been less clear on is: is there a similar risk that there will be pointers to other tools or payloads that are bad for you in open-source models, which in theory are just a bunch of model weights? So in theory, that should be safe — it's just a bunch of numbers. What's your point of view?

Nicole: So first off, I'm not like the security technical expert, so I would absolutely not trust my take on it. Let me just start with that.

Drew: And we have some security people here, so maybe they may want to come on camera.

Nicole: I would say there are certain models that you can actually download — depending on the source, how reputable the company is, that piece of it — that you could do securely. That is, if you are technically savvy enough to even understand what you're looking at, right? And that's the huge asterisk to all of this, because most marketers are not anywhere near technically savvy enough to look at that code, understand what they're seeing, and make sure that what they are downloading is just model weights. Like, most of us are never going to look at that and go, "That's what this is." So if what you're downloading really, genuinely, truly is just model weights, and it's from a reputable source — they've vetted it, they've checked it, all of that — I think most likely it's probably safe. But I also think we don't even fully understand how this stuff can get manipulated. My take on all of this is: don't trust any of it, not fully. I'm probably the most conservative person you'll meet as far as the things I will or won't do with this technology, at least with AI. I'm very slow to adopt a lot of the functionality, because I do know the risk. But I would say it really depends on your technical savvy and ability to actually look at that code, and how well you're vetting what you're putting in, because open model or downloadable model — there are literally millions out there. So are there risks that some of them contain malicious stuff? 100%, 1,000%, absolutely. And if you don't understand this stuff well enough — and probably 99.99% of marketers don't — you're taking that risk.
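Guy's "just a bunch of model weights" question has a concrete technical angle. Common PyTorch `.bin` checkpoints are Python pickle files, and unpickling can execute arbitrary code, whereas the safetensors format is just an 8-byte length field, a JSON header of tensor metadata, and raw tensor bytes — no executable content. As a rough illustration of what "verifying it really is just weights" can look like, here is a minimal standard-library sketch (a hypothetical helper under stated assumptions about the format, not a substitute for a real security review) that parses a safetensors header and checks that it declares nothing but tensor metadata:

```python
import json
import struct

def read_safetensors_header(path):
    """Parse and sanity-check the JSON header of a .safetensors file.

    Layout: 8 little-endian bytes giving the header length, then that
    many bytes of JSON describing the tensors, then raw tensor data.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # Every entry except the optional "__metadata__" block must describe
    # a tensor: dtype, shape, and byte offsets -- nothing executable.
    for name, info in header.items():
        if name == "__metadata__":
            continue
        if set(info) != {"dtype", "shape", "data_offsets"}:
            raise ValueError(f"unexpected entry in header: {name}")
    return header
```

This only checks the declared structure of one file format; it says nothing about where the weights came from or how they were trained, which is exactly why Nicole's "talk to your security people" advice still applies.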

Drew: I want to bring this back. Thank you, Guy. There is this tremendous pressure — there always is, but it's ramped up even more for CMOs — to 10x employee impact, to use these tools to incredible effect, to have, as you describe them, a superstar AI person on your team who can do this. And the CEO who hears Sam Altman say "billion-dollar company, one employee" isn't thinking risk there. So I do think we've spent enough time on risk, and I want to flip and focus for a second on building stronger teams — what a great team might look like today, and how you might get to a great team that is taking advantage of these tools really well, better than the top five percent, and not putting the company at risk.

Nicole: Okay, so what I have seen is: first off, you don't want a single person on your team who is your AI superstar that is responsible for everything AI-related. That is a huge mistake, and I can tell you exactly how that plays out. You have this person. They're probably the person who is the most curious, excited about AI. They start tinkering. They show aptitude. They start building for your team. Suddenly they've built all your GPTs and your automations and all of the things you're doing, all of your workflows. You're getting so much productivity, all of this. And they vanish, right? They vanish because you never paid them more. They have the same job title, the same everything. And they have now built this track record of AI transformation. And they can go — they were making $100,000 at your company — and they can go get $400,000 or $500,000 at another company with that same skill set, or go out on their own and make an insane amount of money as a consultant, because I will tell you, the in-house company superstar to AI consultant pipeline is the strongest in-house to external consultant pipeline I've ever seen in 20-some-odd years in marketing. So do not put all of it into one person. That's not saying don't identify those superstars and leverage them. But when that person leaves, the company goes, "We don't know how anything's built. We don't know how to update our GPT, we don't know how to connect data. The tech changed this week. How do we update?" And they're absolutely shocked and clueless. Also, by the way, the CMO should also not be that person. CMOs are leaving for that too, so I mean, it's great for you as an individual. It's really bad for your company and your team for you to come in, do all that, and nobody knows what's happening. So what do you do? And what have I seen work really well that also insulates from that other risk that we don't really talk about as risk? Train your entire team collectively on a core tool, really, really well. 
You give them the literacy on understanding how to communicate with it — features, functionality, all of that. Again, core tools in my book: whatever you have that you have approved, either ChatGPT, Gemini, Claude, or Copilot — you should have one of those as a core tool. That's the thing your whole team is learning at least a baseline of skills on. They should all be capable of communicating with it, using it in their day-to-day work, understanding how to do things like deep research, leverage Code Interpreter, leverage all the different buttons, features, functionality — baseline understanding, right? And then, as you've trained those people, you're going to naturally have resistors that are barely using it. That's their problem. Don't worry about it. Don't stress about it. You don't have to drag the people who want nothing to do with this along with you. It's going to happen — unless you have a team of two or three, in which case you've got to deal with it. But bigger teams, you're going to have those who just aren't going with it. You're also going to have a group of people — again, I say group of people — that are going to start clearly demonstrating, "I have a natural aptitude. I enjoy this. I'm doing cool stuff with this." Now, you take that group of people, who may not be in any way, shape, or form who you would have expected, and they collectively are putting together: How are we doing the strategy as a team? How are we building out these workflows everybody could use? They're building the GPTs. They're building the automations, the agents, whatever you're doing — this is the group executing it. So I'm not saying you bring in other people to execute it. You're having people internally do it, but it's multiple people, so you are not relying on one. And those multiple people — while one might build stuff — everybody understands in that core group what those people do. They're working on it together. 
That way, if one chooses to leave, you are not left going, "What do we do now?" If they all go together, sorry.

Drew: All right, so the resistor thing is interesting. We talked a lot about this last week at the strategy labs, and I have a pretty strong point of view. I think it was tolerable last year, but I think we've run out of time for folks who are resisting. Like, if they're resisting at their core — we're in March of 2026 — the tools have proven their value over and over and over again, you've offered to train them, they've taken the training, and they're still not using it.

Nicole: I think you need to have a real conversation. I agree that it's not okay. I'm saying you are spinning your wheels trying to force them to be the ones on your team who come along. I think you need to have a real conversation with them one time: "You are hurting your career by not doing this, and you need to understand that." Because there are so many people unemployed in marketing right now who are spending their entire in-between-layoff time building these skills, executing — like, there is actually a significant talent pool in the unemployed because they actually have the skills. They have the time to be deep learning, building this skill set. I will tell you, some of the most skilled AI people you will find are people who have had an extended layoff right now.

Drew: Yep, that's amazing.

Nicole: So I'm not saying let people go. I'm saying you make it really clear to them: "You can't just not do this." It's up to you. But stop spinning your wheels trying to force them, because you will be able to find plenty of people who do.

Drew: It's the proverbial horse to water. So, you know, there's the water. Come on.

Nicole: I'm not encouraging mass layoffs in any way, and I'm not saying replace people with AI. I'm saying that if there are people on your team who refuse to leverage it, there are plenty of people embracing this that you can be leveraging instead.

Drew: You wouldn't accept an employee who didn't use email, or didn't use a computer. This is the same kind of fundamental skill. Quick question from Nancy, because we're running out of time.

Nancy: I'll simplify the detailed question in the chat and summarize it as two points: context switching and starting over. So what I would like to ask is, oftentimes I'm chatting on multiple unrelated client projects, etc. I don't want ChatGPT to remember from one client project what has no relevance to the next client, right? That's one. And then the second thing is, even chatting about a specific client — sometimes I don't like the 10 hours of conversation I had. I just want to start over, because now I'm a lot smarter about what I should have asked, instead of going around in circles for 10 hours. With that being said, I want to know: what is the most effective way to ensure ChatGPT knows I want to context switch, and doesn't remember that these two clients have nothing to do with each other? And is there an effective way for me to say to ChatGPT, "Forget about what I had told you about Client A. Can we start over?"

Nicole: So a couple of different things with this — I have a few things coming to mind. First off, I just want to say you don't have to use memory, period. You can go into your settings and just turn memory off. I actually don't use memory. I am maybe one of the only people who doesn't like it. I want to know exactly, in every single thing, what context the AI has, what it's using, what it's not. I want to control that fully, so I just turn memory off. So that is always an option, and that's probably the easiest option if you don't want it remembering stuff. 

Drew: Can I just point out the opposite? I use memory. And every time I'm writing something, it says, "Hey, would you like this in the Penguin in Chief voice?" Yeah, because it knows me, and it knows my writing, and it knows my style. And when it does that, it's great. So there are reasons to have memory.

Nicole: There are lots of great reasons, and it's just different working styles. So I was just saying, if it's driving you crazy to have memory, you have the option to just not have memory. Like, there's not a right or a wrong. Memory can be amazing. It's just a personal preference. But you do have a few other options. One is you can monitor your memory. You can actually go into ChatGPT, go to your memory settings, manage memory, and if there's anything that it has chosen to remember that you don't want, you can just delete specific memories. So it's like wiping — it's like using the little Men in Black neuralyzer. You don't remember that, and now you're good. We've cleaned that memory. The other option that you have, and I would highly recommend doing in your context even if you're monitoring memory, is to create projects in ChatGPT specific to the clients, and then set the context of the memory to be project-specific memory, not account-specific memory. You have to do that when you create the project — like when you're putting in the name originally, you set the memory to be either project or account. Then reference files can be added to your project that are specific to that client. Now, this also applies if you're in-house at one company — to that product line, for example, or different things. So this is not just a client-specific kind of thing. There are lots of different uses you could have for this. But you can just make your memory, your instructions, your custom instructions, and your reference files client-specific, not account-wide, by doing it within the project, and that should probably help you with those issues that you're having.

Nancy: Thank you.

Nicole: You're welcome.

Drew: Okay, wow, it goes fast, Nicole.

Nicole: I know — you'll have to schedule me for like a six-hour session sometime.

Drew: Yeah. Okay, six hours with Nicole — who's in? Feel free. There we go. Okay, I am as well. So before we wrap up, if someone here wants to continue the conversation, where do they find you, and what's the right time to bring you in for your services?

Nicole: LinkedIn is a great way to find me — just my name, Nicole Leffer, on LinkedIn. I have the verification badge, so you'll know it's actually me. And email is just Nicole@NicoleLeffer.com — nice and easy. My website is just NicoleLeffer.com. And as far as when to bring me in, my sweet spot and deepest strength is helping teams that are in early-stage adoption get up to speed and get caught up. I meet you where you are. Like, if you are not yet at the point of building out GPTs, I can get you to building GPTs, building workflows with those GPTs, all of that — and I can go beyond that. But you don't want to bring me in to help you build out agentic workflows — that's not where I come in to help companies. I come in to help you get the education to get your team on track, and the strategic guidance to get from back-and-forth chatting to "we are building out GPTs, skills, workflows" — all of these things that go together. But there are a lot of pieces and building blocks you need, and you can't skip steps. A lot of people are trying to skip steps. And I don't help with 10,000 different tools. I usually come in and help with, like, what is that core tool or two that you are using, and help you build out your strategy and the skill set around that tool, so you're not building a tech stack that costs you $1,000, $2,000, $5,000 a month per employee because you've gone so over the top with it.

Drew: Okay, I have my last question for you. If CMOs leave today remembering only two things from this conversation, what would you say are the two most important takeaways?

Nicole: I mean, that depends on the level you're at, but like — don't feel like you have to move so fast that you're not putting your foundation in place. The foundation is really, really, really important. The more sophisticated the tech gets, the more important that foundation becomes. So don't skip the steps of building the foundation in the effort to be at the front. Number two would be: there is a lot of hype out there right now. And I've even seen in the chat that people are very insistent about their specific tool or whatever they're doing. I want you to take away that there is no one right answer. For any marketing team, there are so many different considerations. And very few people who are preaching a specific tool even know jack about the other tools, to be totally honest. Like, they get into a cult and they tell you — even if they switched from one to the other — they don't really understand. It is a full-time job for me to understand all of these different tools, where they are, what they're capable of, all of that. Very few people who are running a marketing org have that level of understanding of multiple tools. So anybody giving you advice is giving you advice through the lens of one tool that they use most likely, and if they're giving you advice through the lens of seven different tools, they probably don't really know any of them. So just take what you hear from anybody with a grain of salt, pick a lane and stick with it. And know that the surface-level story you are getting about anything is probably utterly and completely untrue about how these companies' tools work, how they operate, about their morals, their ethics — like, any of it. The surface-level hype around all of it — you probably don't know 98% of the story. Just keep that in mind as you're making any of these decisions.

Drew: I love it. So if we think about it from the leadership standpoint — which is the lens we're putting on everything for CMO Huddles this year — building a foundation is fundamental. If you step back and say, "How does that foundation relate to the strategy of the things that you are trying to achieve overall?" — well, we know that AI is only as good as the data that you have in it, and then, of course, the user. So that's strategic leadership, number one. Number two: we don't get distracted by the bright, shiny objects, because we have a strategy, a vision, and a foundation. Those are great reminders. Nicole Leffer, thank you so much for joining us. 

 

If you're a B2B CMO and you want to hear more conversations like this one, find out if you qualify to join our community of sharing, caring, and daring CMOs at cmohuddles.com.

Show Credits

Renegade Marketers Unite is written and directed by Drew Neisser. Hey, that's me! This show is produced by Melissa Caffrey, Laura Parkyn, and Ishar Cuevas. The music is by the amazing Burns Twins and the intro Voice Over is Linda Cornelius. To find the transcripts of all episodes, suggest future guests, or learn more about B2B branding, CMO Huddles, or my CMO coaching service, check out renegade.com. I'm your host, Drew Neisser. And until next time, keep those Renegade thinking caps on and strong!