May 26, 2025

The GenAI Reset: How to Simplify, Strategize, and Scale

Is your GenAI stack bloated, scattered, or underused? You're not alone.  

In this Huddles Quick Take, GenAI advisor Nicole Leffer delivers a fast-paced, practical deep dive into what marketers are still getting wrong with GenAI—and how to fix it. You’ll also learn why Deep Research may be the most powerful feature you're not using. 

Plus, Nicole shares how ChatGPT helped her build a full keynote deck (including visuals!) and how to stay on top of hallucination risks in strategic work. 

What You’ll Learn 

  • 3 common GenAI mistakes marketers make—plus one just for CMOs 
  • Why most teams only need one core tool (and which Nicole ranks highest) 
  • How to use Deep Research to analyze competitors, build strategy, and repurpose content 
  • Why great prompting starts with generous context and a clear goal 

Want more?
Catch the rest of the conversation on the CMO Huddles Hub YouTube channel or click here: https://www.youtube.com/watch?v=XbdXnseUK-U    

Renegade Marketers Unite, Episode 454 on YouTube 

Resources Mentioned 

  • Nicole Leffer's website: nicoleleffer.com 
  • Nicole's course: Foundations of Generative AI for B2B Marketing 
  • CMO Huddles: cmohuddles.com 

Highlights

  • [2:35] 3 (+1) common AI mistakes 
  • [4:07] Stack simplicity: ChatGPT > 25 tools 
  • [9:55] Context (how you prompt) is everything 
  • [12:55] Don’t under- or overestimate what AI can do 
  • [20:28] Why team-wide GenAI access matters 
  • [23:03] Deep research = full competitive intelligence  
  • [27:46] Yes, Deep Research does hallucinate 
  • [34:10] How Nicole used Deep Research to build a new presentation

Highlighted Quotes  

“Context is everything. The smarter the models get, the more important context becomes.” —Nicole Leffer

Full Transcript: Drew Neisser in conversation with Nicole Leffer

Drew: Hello, Renegade Marketers! If this is your first time listening, welcome, and if you're a regular listener, welcome back. Before I present today's episode, I am beyond thrilled to announce that our second in-person CMO Super Huddle is happening November 6th and 7th, 2025. In Palo Alto last year, we brought together 101 marketing leaders for a day of sharing, caring, and daring each other to greatness, and we're doing it again! Same venue, same energy, same ambition to challenge convention, with an added half-day strategy lab exclusively for marketing leaders. We're also excited to have TrustRadius and Boomerang as founding sponsors for this event. Early Bird tickets are now available at cmohuddles.com. You can even see a video there of what we did last year. Grab yours before they're gone. I promise you we will sell out, and it's going to be flocking awesomer!

Welcome to CMO Huddles Quick Takes, our Tuesday Spotlight series where we share key insights that you can put to work right away. In this one, Gen AI strategist Nicole Leffer returns to share the top mistakes marketers are making with AI, from using too many tools to skipping essential context, and how to fix them. Plus, stick around for her favorite underused feature, deep research, possibly the most powerful marketing tool of 2025. You heard it here first. Okay, let's dive in.

Narrator: Welcome to Renegade Marketers Unite, possibly the best weekly podcast for CMOs and everyone else looking for innovative ways to transform their brand, drive demand, and just plain cut through, proving that B2B does not mean boring to business. Here's your host and Chief Marketing Renegade Drew Neisser.

Drew: Hello Huddles, today I'm super excited to introduce Nicole Leffer, who has quickly become one of the most sought-after Gen AI experts for marketing leaders. And by the way, this is, like, I think the third visit, maybe.

Nicole: We talk all the time, so even I can't keep track.

Drew: I mean, and the earliest one was in 2023, but what I love about Nicole's approach, and why we keep coming back to her, is that she focuses on practical applications that deliver sort of immediate results. So Nicole, welcome back. Thank you so much. I did want you to know this is your third visit. When you hit five, we will get you an SNL smoking jacket, right? But it'll be, what are we doing next week, exactly? So there it is. So we're going to do this, all right. So on these types of programs, I don't think we're going to need to convince anybody to stay, but just in case our audience has to leave early or needs a reason to stay, let's focus on the three biggest mistakes you see marketers making when it comes to generative AI. Just give me the top line, and we'll go through them one at a time. Okay?

Nicole: I'm making three for marketers in general, and one just for marketing leaders. So you get a bonus. Okay? Number one mistake everybody, or many people, are making is trying to use too many different tools. Okay, like, too big of an AI tech stack, right? Number two is not providing your tools with anywhere near enough context for it to actually do as good a job as you want it to do. Yes, love that. Okay. Number three is misestimating, for lack of a better term, what your model or what your AI is capable of. I see it on two extremes, like underestimating it and overestimating it, but like one or the other, dramatically misestimating what's possible. And then my bonus number four is specifically for marketing leaders, and like leadership at companies, and that is not providing a paid ChatGPT or Claude or Gemini secure company account for your team to specifically be using, and having them pay for or just use whatever they feel like on their own, yeah.

Drew: All right. Well, this is exciting. All right, so let's go through these one at a time. Too many tools is great, and you and I have talked about it. I think the world has changed since probably a year ago when we said ChatGPT was enough for…

Nicole: The vast majority of people. ChatGPT is more than enough. Okay, all right, marketers: ChatGPT or Gemini is going to do pretty much anything you want to do now. I mean, if you want to learn how to build automations and, like, actual video from scratch, then you might need to supplement it with something else. But for most marketing use cases, most individuals, one of those two tools will do the vast majority of things you want to do. And so what people are going out and doing, and this kind of goes back to issue number three, is dramatically underestimating what their tools are actually capable of. They go buy a bunch of different purpose-built tools, and then they never learn how to use any of them, because if you have 25 different AI tools, you're never going to learn any of them, and they change every week anyway. You just don't actually get much of anything out of any of them. Versus you have one core tool, you learn it inside out. You genuinely learn it inside out. You know where to go for anything you're doing. And then that one tool is just going to continue adding features and gain some of that extra functionality. By the time you understand what it is, there's always going to be, like, 10 new things an hour, I believe, right? And so it's going to keep adding functionality. But then beyond that, like, yeah, if you need to do AI-generated songs, you could go get a tool that could do that, or AI-generated video, although with ChatGPT you can even do some of that, like with Sora. So there's very little that you can't do with those tools. If you do automation and stuff, then yes, you're going to need another tool.

Drew: Okay, so for most of the applications that folks are doing, I get it, that's interesting. It's funny. I would love it if those that are here with us live would list how many tools your team is using; if you want to name them, go for it. But I found, and this was a year ago, I was just so unhappy with the writing, and everybody kept saying that Claude is better at writing. And I started to learn Claude and found that it was so much better, I couldn't help it. So then…

Nicole: It flipped back, and now ChatGPT, I would say, is a better writer than Claude, so it flips back and forth. And that's the thing, I would put that as, like, a whole other mistake people are making: thinking that a result they had last week means anything this week, right? Like people keep changing back and forth, or they just make this assumption because a year ago Claude was better, but that means literally nothing about where it is head-to-head today. So it's kind of an interesting thing that I'm seeing, of people floating around not understanding the pace at which it improves.

Drew: Yep, all right, you got me. I'm going back to ChatGPT for writing, or at least testing it for writing. You mentioned Gemini as a good alternative. Where does Copilot fit in this? Is it inferior?

Nicole: I wouldn't say it's inferior.

Drew: I've heard that, by the way. I've heard that. You know, folks that have it, and that's the only thing they…

Nicole: Have, that's all you have, that's all you have, right? I wouldn't put it in a different class than ChatGPT and Gemini. And I mean, ChatGPT and Gemini are different. And I would, personally, if I was going to rank it this hour, with the disclaimer that I have not tested all of the new Copilot updates that came out within, like, the last few days, so this could be completely obsolete. That's the whole thing of don't judge it by something in the past. But I would rank ChatGPT as king still, number one, probably for the foreseeable future. Most likely it will be: Gemini is number two, Claude is number three, Copilot is number four. That's how I would personally prioritize a core tool choice. I would say, probably model-wise, with Copilot you're getting the same stuff you get in ChatGPT. So it's very similar, and you're getting just as good a quality as you're getting in ChatGPT in many, many ways. So it's not like it's a bad tool. It's great, brilliant models. Functionally, it's more similar to Gemini because you can use it in lots of different places. There's just little things that I think they need to catch up on, but on other things they're a little bit ahead. So, if you are a Microsoft company and you're using Copilot, I wouldn't be like, oh my gosh, we have a problem. It's just, if you were choosing across just one of these, I would personally probably go with ChatGPT.

Drew: It's funny. I use Perplexity as an answer engine, and I find it really helpful for certain kinds of queries, and I just like it. It's sort of my Wikipedia, and I kind of trust it. Honestly, I have not tried the same searches on ChatGPT. I'm just very happy with it when I'm looking for a…

Nicole: Fact, and I think Perplexity is a good, like, additional tool, but again, I don't think it's necessary…

Drew: No. And I use the free version.

Nicole: At this point. Like, I mean, I use Perplexity occasionally because, like, I have a free account through LinkedIn Premium, and I like Perplexity. Like, I don't have anything negative to say about Perplexity, but I think, you know, there's a difference: if you were, like, literally doing AI all the time as your job, like I do, I need to use a lot more tools because I need to understand a lot more tools. But for the average person, they don't need a full-time job to keep up with one of these tools at this point.

Drew: That's part of it now. So we have the… we're focused on one tool, making sure that everybody is using it and learning from each other. And that's another advantage of having one tool, because there's the shared knowledge of, whatever. Not providing enough context is a really interesting and important thing, and that gets into the fine art of prompting and so forth. And I have seen personally how much of a difference that makes.

Nicole: Context is everything, and the smarter the models get, the more important context becomes, because they can work with so much, like, detailed information and parse out the details and understand the nuances. So when you start, like, working with a deep research, or something like, you know, 2.5 Pro in Gemini, or o1 or o3 in ChatGPT, the more context you give it, by far, the better it does. Some of the older generations of models, it's not necessarily going to be able to handle as much nuance in the context, but these frontier models are amazing.

Drew: Just to be clear, what we mean by context is: who are you, who are you writing about, what is this for, everything that, you know, will make it smarter about the output you're looking for.

Nicole: It's what you would provide a human, right? The information that a human would need to have to do a good job, the AI needs to do a good job too. And a lot of people are just thinking the AI is going to somehow magically have an idea, or they'll be like, “my target is CMOs.” No, you need to be like, “my target is B2B SaaS CMOs at companies of this size at this stage, and this is their pains, their gains, their weaknesses, their secret fears.” Like, you've got to give it all of that information, your full personas, and then ask it to do it. And so people are saying they think they're giving context because they're saying “write this for a CMO” or “write this for an IT leader,” but they're not providing all the things you would give to one of your humans who is doing that type of work. So that's what I mean when I'm saying context, like the full amount of…

Drew: And is it possible, when you're doing your thing, could you ask before it does it? Could you say, “Am I giving you enough context?” I mean, or, “Are there some other things that you would like to know before we go further?”

Nicole: Yeah, a lot of people absolutely love that strategy. And if you are a newer prompter and you don't have a feel for it yet, that is a great strategy, with one caveat that I have. After you ask it, like in the prompt, if you say, “Is there anything else I should be providing for you?”, then when it gives you that list of whatever else you should provide, use the edit prompt button and edit your prompt to add that context. Or, if you have to, start a new chat and give it the prompt with that context, for, like, attachments or whatever, and don't go back and forth through the chat, because that can start causing a whole bunch of other stuff, which is a little in the weeds. But trust me, you will get better results if you edit your prompt and take out that question, rather than, like, chatting to give it that extra information.
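
To make “generous context” concrete, here is a minimal, hypothetical sketch contrasting a thin prompt with the kind of context-rich prompt Nicole describes. The persona details, product, and objective below are invented purely for illustration; in practice you would paste the richer version into ChatGPT, or edit it into your original prompt as she suggests.

```python
# Illustrative sketch only: the audience, pains, product, and objective below
# are hypothetical, made up to show how much more working material a
# context-rich prompt gives the model than a thin one.

THIN_PROMPT = "Write a LinkedIn post for a CMO about our product."

CONTEXT_RICH_PROMPT = """
You are helping a B2B SaaS marketing team.

Audience: CMOs at B2B SaaS companies, 200-1,000 employees, Series B-D.
Their pains: proving pipeline contribution, shrinking budgets, AI overwhelm.
Their goals: defensible attribution, a differentiated brand position.
Voice: direct, practical, no hype.

Task: Write a LinkedIn post introducing our new competitive-intelligence
workflow. Objective: drive demo sign-ups from this audience.

Before drafting, list anything else you need from me to do this well.
""".strip()

if __name__ == "__main__":
    # Compare how much working material each version actually carries.
    print(f"Thin prompt: {len(THIN_PROMPT.split())} words")
    print(f"Context-rich prompt: {len(CONTEXT_RICH_PROMPT.split())} words")
```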

Drew: Okay, so the third point was misestimating. I'm hoping that's a word, misestimating, either up or down, underestimating or overestimating what these tools can do. I recently had an experience with that, where I wanted to, through a series of prompts, actually have it create an application that I could, you know, put on my iPhone. That was a little bit of overestimating, yeah. And the good news is, you know, I knew that, but I thought, let's try anyway. But underestimating is another thing altogether. And so where do you see people underestimating?

Nicole: I think most people are dramatically underestimating rather than overestimating. I see it on both ends, but underestimating, people are still thinking it's for writing content. It's gonna regurgitate the internet, like it's just, you know, it can write headlines, it could write a social post and, like, that's basically the extent. Maybe it can make a cheesy picture or whatever. They have no idea that there are other people out here doing, like, really deep, strategy-based research where you're going, like, let me give the AI a list of all my competitors, have it go out and do competitive intelligence research for me, take all that competitive intelligence, do head-to-head comparisons, and write me the messaging that I could use, and turn that into sales enablement. And that's an hour to get from, like, zero to 100 on a finished kind of project, and most people have absolutely no idea they can be doing that kind of thing, right? ChatGPT and Gemini are at the point that they are capable of really intelligent strategic work, probably far more intelligent than the vast majority of the people you hire, to be honest. Like, yeah, that's just where you still need the humans, but the level of intelligence on the majority of what it does is far beyond the vast majority of humans, and that is just something that most people are completely blind to. So they're looking at use cases that are significantly less sophisticated, right? 

Drew: “Give me a blog post,” right? That's when… 

Nicole: That's what I see from marketers a lot of the time, still, when they could be going, “Give me a world-beating strategy that will, you know, dramatically increase our sales to this target.” And you can get, like, really interesting stuff if you're creative in what you want to ask it for, and if you understand the tools and which features to use and which models to be recommending. You know, and I don't want people getting caught up on which model, but if you do understand that, you can get such insane things out of it. 

Drew: And now it helps. I mean, obviously this is all about the data that you provide as well. So the more data that you can provide… 

Nicole: Context more than data. It's context, okay? And we go back to that, right? Like, you can't build a strategy without all the context around building that strategy. 

Drew: Okay, yeah. So on the overestimating front, can you give an example of something that happened to you where you were thinking, “I think it could do this,” and it couldn't quite get you there? 

Nicole: I've had some. I tried to make, like, an app-y, like a software app-y kind of thing too. Now, I think if you are a coder… I don't speak code, and I don't know what I don't know. I think if I knew the right questions to ask, I actually probably am not overestimating what is possible. But because I don't know the things to ask, I'm like, I don't know how to get it to do the things, because I've seen examples where I'm like, I believe the people creating this stuff. It's just that I don't know what to prompt it to do. 

Drew: Among your many accolades, we cannot call you a vibe coder yet. 

Nicole: Not yet, not a vibe coder yet. 

Drew: But don't you aspire to be one? 

Nicole: I don't know, one day. If you've never heard this term, it's like prompting your coding. I mean, I do do some prompting of coding and use it, but to get to the point of “build it,” like, I can make little tools to put on a website, like little calculators to put on a website. Maybe that's what counts as coding, right? Because it's writing code to make slider calculators that do whatever. But I don't know. I mean, there's people building, like, full-blown apps, just from text, going back and forth with the AI. I'm not there yet. Maybe one day. I don't know if I'll find time. 

Yeah, so other stuff that I feel like I've overestimated… I have a pretty good feel, so I don't feel like I make that mistake a lot myself. But I see people, they come to me for consulting, and they're like, “Okay, how do I get ChatGPT to go into my HubSpot, change all of this, go research somebody's LinkedIn profile, pull all of this different stuff, add it to my Salesforce, write a custom thing, and I don't want to do anything,” right? Like they are thinking that it can go out, take all of this autonomous action, do all these things, it's completely safe, it's not going to cause any harm or any risk. They just think we're about a year or two ahead of where we really are. That's more it. 

And a lot of the agent hype, I think, is driving that, because agents are real, agents being AI that acts autonomously and can make decisions and do this. We are at the point that they exist, but we're not at the point where they are trustworthy and safe to deploy without insane amounts of supervision that almost no companies would have the risk tolerance for. And so I think that that's where people think that that kind of tech is a lot more evolved than it really is. 

Drew: Yes, though, the word “agentic” or “agentic,” I'm not sure which, was all over HumanX at the conference, and people felt that was here. Yet even some of the simplest tasks, like “have an email come into my desk every day with this information,” it doesn't work. And that's a simple one. So anyway, all right… 

Nicole: It's also just risky to deploy too, because you don't know what you don't know about how it's going to behave. And so, like, I used ChatGPT. If you have the Pro account, it has an agent called Operator that can go out and do stuff autonomously. So I play with this stuff to just see, and I had it place an Instacart order, because it can go navigate the internet and do all this stuff. I was like, in my mind, I was telling it, “Send it to this apartment address,” and I was just trying to point out which one to select, because there are a few addresses in my Instacart account. 

I didn't realize the AI went in and changed my default Instacart address until the next time I ordered, and the people were like, “It doesn't have your apartment number. How the heck do I get in? There's no delivery direction.” And I went, “Operator changed my address.” I didn't even notice when it did that, and it had a real-world consequence. Now, did it really matter? No, it was solvable. But multiply that at scale to a business, and that's a real risk. If you're giving these tools access to your HubSpot or your Salesforce or any of this kind of stuff, letting it autonomously make adjustments and changes, what if it does something you don't want it to do and you didn't know because you didn't mean to prompt it that way? I would be very careful with deploying that tech, but a lot of people are thinking that's already okay, that we have AI robots running around doing everything for us. And we're not there. 

Drew: All right, we're not ready quite for agents, but they're right around the corner, and so we'll keep you posted on that. Now, you did mention that you want to make sure that the CMOs in the audience here and listening later in the podcast are making sure that they have sort of group licenses in a safe place. Anything you want to add to that? 

Nicole: I bring this up because I work with a lot of CMOs, and one of the most common things I hear when we have initial conversations, I say, “What type of accounts do you have for your team? What are you using?” “Oh, my team just has their own ChatGPT accounts. Some use free, some use Plus, and then they expense it. But they're just kind of like, they do whatever.” 

The thing with that situation is that their employees, I can almost guarantee you, know nothing about turning data training off, so you've got this concern that they're then going out and potentially putting company data into an AI that is training and learning from the data. Versus if you get a Teams account, like ChatGPT Teams, or the Claude equivalent, or Google Gemini for Workspace—Gemini is included, so you should just have it if you're a Google shop—if you have those team-based accounts, training on that data cannot even be turned on, because they're made for businesses. 

And so there's a security factor to that that you're not accounting for. But there's also something nobody is talking about: if you have your team doing work in their personal AI accounts, who owns that? If they're out there and it's their own account and it's not the company account, who owns their work? Who owns every bit of that chat history? If they leave the company, they're taking all of that with them. And I just think a lot of companies are not really thinking about that. There are much bigger implications. I don't even know the answer to that question of who owns it, but I don't think, for $25 a month per person, it's really worth all the associated risks. 

And if you don't give them a paid, company-sanctioned account, they're going to go use their own free stuff. They are, approved or not. You can say, “We don't allow AI,” and not give them anything to use, and “We don't do that,” but they are using it. I promise you. I cannot tell you how many people have told me, “My company does not allow AI, so I'm using it on my phone. I'm using it on my own computer. It's faster to have the AI do it and then type it into my Google Doc myself while I look at it.” Everybody's using it, so you've got to give them the tools to do it. 

Drew: Okay, all right. So I think we've answered that and wrapped up that section, and I think we got people some pretty good insights. Now I want to dive into deep research, because I know a lot of folks haven't tried it. You and I talked about a project that you did recently. And so let's first establish what is different about deep research versus standard ChatGPT, and then give the example of the one that you did. If you want to do a show and tell, you can do that too. 

Nicole: Okay, deep research. When we're talking about deep research, there are a few tools that have it, and what they all have in common is you put in your prompt with all your context and everything, and then in some of the tools it asks you some follow-up questions. ChatGPT will ask you follow-up clarification questions. Gemini will have you approve its plan. Perplexity just goes to work. So it depends on what you're using, but what they do is, after they have that information, they go out and they are searching the internet—not just a few pages, they're going to tons of different pages—to solve your problem, and then they come back and write a report. 

It could take anywhere from eight minutes to, I've seen ChatGPT deep researches take up to an hour. It's been a while, so I don't know if they put a limitation so it can't spend an hour anymore, but earlier today one took 25 minutes for me. So they regularly take longer than eight minutes; I think the average is like 12 to 15 minutes from what I have seen. But it's going out and doing this and writing you a report. And these reports can be—I’ve seen them up to 50 pages long—usually 20–30 pages unless you ask for it to be significantly shorter. So many uses for this. 

They are also, especially the ChatGPT one, using a far more intelligent model to do this. So it's not actually whatever model you have selected that is doing the research. It's a model called o3 that is doing the research, which is far smarter than anything else in ChatGPT, and it's not the same as o3-mini and o3-mini-high. Also, they need marketers to come help them rebrand their model names. But it's a very, very, very smart model. So it's capable of strategic analysis and insight at a PhD level. It's wildly powerful. The Gemini one is also good. I think Gemini goes to more websites but maybe not as deep. ChatGPT goes to fewer websites but deeper. So it's just a little bit different depending on which one. 

But you can also, in the ChatGPT one—I think you can probably do this in Gemini too, I just haven't experimented with it—you can give it files or documents of your own stuff and have it leverage that in your research too, which nobody recognizes. This is not glorified Google Search. It can be glorified Google Search, but it can also be, when I was saying you want to do competitive intelligence and you go out, you're like, “This is all of our company information. This is who a competitor is. I want you to go out, I want you to do deep research into their entire go-to-market strategy, messaging, positioning, all of this. Here's our messaging and positioning. I want you to tell us what are their strengths, what are their weaknesses, full SWOT analysis. Give me the competitive messaging.” You can get that complicated so that the report it brings back is full competitive intelligence and all of this competitive differentiation messaging, because you've asked it not just to do research but also to build strategy out of that research. 

Again, you can give it lots of your own files and have it build strategy out of your own files. So there are lots of different ways you can do this, or some hybrid combination. 

Drew: So let me give, I'm going to give a quick… So I did an experiment this morning. One of the things that we do at CMO Huddles is that we connect our huddlers on a one-on-one basis, based on somebody having an issue and wanting to talk to someone who's resolved it. And we've had lots and lots of conversations, we have CMO Huddle Studios, we have a lot of different information that is out there that may look like expertise. So I basically worked through deep research, and it was amazing. 

So I said, these are some of the possible things that people would want to match on. So first, help us identify all of the areas of expertise that someone might ask a CMO about, might want to talk about. Then with the list of the community, it went out and created a spreadsheet with all of the folks’ strengths, and I was watching it go. And then it would go to one person and then, “Oh, now I'm going to Renegade Marketing and I'm looking at the podcast interview that he did,” and you could see this in real time as it was doing that. Now, we could have done that, and a lot of that was in my head and I'll still have to check it, but it was just a relatively complicated thing. Now, the next step would be, “Okay, cool, we've got this as an Excel spreadsheet. Do you think you can create an app that our community could search and do it self-serve on it?” 

Nicole: And a lot of people are using deep research to do that code planning and the coding and everything. So if you are into code and creating apps, I haven't really tried it with that, but a lot of people are. 

One thing I did want to do, I want to answer a question in the chat so I don't forget, and then I'll tell you… Deep research does have the hallucination issue. Every single large language model out there has the issue of hallucination, so they do make things up out of thin air. My experience is deep research in ChatGPT does it far less than anything else I've used, but it still is going to fill in some gaps if it finds them. It happens. You still, if you're going to make any kind of decisions based off of it, you still want to double-check, right? You still want to fact-check. 

If you're giving it your product information, you want to double-check that it's telling the truth about your product in what it outputs and all of that kind of stuff, especially if you're giving it strategic goals and it's trying to accomplish the plan. If it's trying to optimize for conversions or different things, sometimes it'll invent things like, “Yes, if we did have a promotion that was planned that was 85% off, we would have a very high conversion rate if we're giving people $750,000 off their contract,” but that doesn't mean we are, right? So you've got to just be aware of double-checking whatever you're actually putting public-facing, whatever you're making big decisions on. But it can seriously speed up that entire process. 

Drew: By the way, just for everybody, you know, for a while deep research was $200 a month. Now it's included in your $20. And so… 

Nicole: And it's gold. It is, like, it's gold. You get, you know, 10 in a Plus or a Teams or an Enterprise account, 10 per billing cycle in your account. So if you have Teams for your whole team, everybody gets 10. So be judicious about how you all use it, but you only get 10 per month. Now, if you have a Pro account, you get 120 a month, which is why I've done a ton with it, because I have, like, way too many ChatGPT… Don't ask me why I have four paid ChatGPT accounts of all different levels. 

I had a talk earlier this week that I was working on the deck for last week, and it was a company that wanted me to come in and give a little bit of a different talk than my typical talk that I give. But I knew that, between a bunch of different decks from the last two and a half, three years, I had the slides somewhere in those decks. But I had no idea where they were, and I am so crazy busy, because everybody wants to learn AI right now. So I was like, how am I going to find all of this? 

So I went and loaded seven slide decks where I thought, “These ones, it would make sense to have this type of content in them somewhere,” hundreds of slides in all. Uploaded those to deep research. I had handwritten notes for what we were doing. I had my contract for what the talk was agreed to be. I just took a photo, got the text out of the photo of my notes, took out her name and stuff, put it up there. So I gave it all the context—what the topic was supposed to be, all my decks, length of the talk and everything—and said, “Please read all of my decks. I need you to plan my talk to accomplish the goals. Tell me which deck and what slide number I should be pulling from. And if the slide does not exist as a standalone slide for each one, then I want you to write me the prompt for the GPT-4o image generator to make me the slide, but use my own ideas and language.” 

And it came back, and it ended up being a nine-slide deck, because it was only a 20-minute talk, a nine-slide deck that it planned out for me. Where the slides did exist, it decided to tell me, “Well, you have the slides, but you could do far better if you combine this slide here from this one and this slide here from that one. So here's your new bullet points. Here's the prompt for you to put in the GPT-4o image generator to make it with all the imagery and everything.” So then I just copied the prompts and opened seven new tabs. Two slides I got to reuse. It told me I was not allowed to reuse any others and that I could do better, telling me it believes in my ability to do a better job on my deck than where I was starting from. 

So I opened seven new tabs, pasted in the prompts it wrote, and then I looked at the content after the images were made. It wrote the prompts. I told it, “Make the prompts horizontal slides, this style, blah, blah, blah.” It wrote those prompts, made them. I looked at them, and I was like, “These are my words.” And it really was taking—this deck was from 2023, and this deck was from 2023, and this one was from last week—but it was taking the ideas, it was able to comprehend them, synthesizing these ideas to meet the goals, all my own ideas, and yet somehow a completely original deck that the AI created. 

I went through, did it, actually did the talk a couple days ago. I showed the CMO ahead of time, and they were like, “Oh my God, how did you make this so perfect for what we wanted when you said you were reusing your old content?” And I was like, “Yeah, ChatGPT is magical.” It was so cool. It was so cool. The only caveat with doing something like that is my old decks were all very text-y. There were images in them, but the information was not in images; the information was in text, so it was able to read all of that. If you did that with an image-heavy, everything-is-embedded-images deck, it's not going to be able to read it. 

So I think I'm going to start putting alt text in all my new deck slides now that ChatGPT can make really good-looking images with text, and put the alt text in so that you could still do this, even if… 

Drew: Right, so it's in the notes section of your… 

Nicole: In the notes, or just, I'm pretty sure you can put—I haven't done it yet—but I'm pretty sure you could do alt text of what the slide says. Right? I'm not sure yet about turning it into a PDF or whatever, so I haven't figured that part out yet, but it's really cool what it did, and the slides look so much better than my own slides. So much better. 

Drew: Yeah, and that's a note. I've had a couple of folks send me decks lately on various IP they've done, and I was kind of, “Boy, these are not very effective pieces of communication.” And I feel like for any deck that you're about to do, if you've already got it ready to go, why not upload it and just say, “Hey, can you make these better, clearer, more…?” and use it that way, as long as you give the context of who the audience is and what you're trying to communicate, because how it looks does matter. 

Nicole: It does, it does. So with deep research, I want to give everybody another tip to get the most out of deep research if you're going to play with it, because those 10 are gold. So this will help you. Context—we already talked about context. Deep research needs lots of context about what you want, especially because you can't go back and forth other than it asking clarifying questions. And in that one, you just answer the clarifying questions in the chat; you don't edit your prompt. 

The other thing is, tell it what on earth you're going to do with it, right? So with this report it gives you, tell it what you're going to do with it and why you are asking. What is this for? What is the point? What is your objective? What goals are you trying to meet? That will help it create something that is actually what you're looking for. The model is smart enough to understand that that piece of context really does make a difference to what it should be giving you, so with that model, definitely give it that information. 

Drew: Well, that's a wrap on this Huddles Quick Take. To hear the full episode, including Nicole's take on earning LLM visibility, wrapper tools, and the future of GenAI search, head over to the CMO Huddles Hub on YouTube, or look for the link in the show notes. To connect with Nicole, visit nicoleleffer.com—that's L-E-F-F-E-R dot com. Check out her online course, Foundations of Generative AI for B2B Marketing, and follow her on LinkedIn. Just tell her where you heard her name: CMO Huddles or Drew Neisser. She also offers team training and hands-on consulting for B2B marketing orgs looking to level up their AI maturity, and frankly, who isn't?

Show Credits

Renegade Marketers Unite is written and directed by Drew Neisser. Hey, that's me! This show is produced by Melissa Caffrey, Laura Parkyn, and Ishar Cuevas. The music is by the amazing Burns Twins and the intro Voice Over is Linda Cornelius. To find the transcripts of all episodes, suggest future guests, or learn more about B2B branding, CMO Huddles, or my CMO coaching service, check out renegade.com. I'm your host, Drew Neisser. And until next time, keep those Renegade thinking caps on and strong!