July 21, 2025

The Synthetic Research Advantage

Synthetic research is rewriting the rules—and Jon Lombardo is here to explain how. In this Huddles Quick Take, the Evidenza co-founder breaks down the three biggest misconceptions marketers have about AI-generated research, from assumptions about accuracy to the myth of average insights. 

You’ll hear how synthetic research can help you test messaging, uncover clear audience insights, and even rewrite jargon so sales and buyers actually understand it—all without waiting months or spending a fortune. 

What You’ll Learn: 

  • 3 misconceptions marketers have about synthetic research 
  • Why AI can surface insights that are better than those from traditional methods 
  • How synthetic research unlocks deeper audience understanding at scale 
  • When to use synthetic over human research (and how they work together) 

For the rest of the conversation with Jon—including B2B use cases, segmentation examples, and a deeper dive into category entry points—visit our YouTube channel (CMO Huddles Hub) or click here: [https://www.youtube.com/watch?v=k4S_Ib-SmZY]. 

Get more insights like these by joining our free Starter program at cmohuddles.com. 

Renegade Marketers Unite, Episode 466 on YouTube

Highlights

  • [1:59] Meet Jon Lombardo 
  • [3:23] 3 misconceptions about synthetic research 
  • [8:10] Misconception #1: “AI can’t tell me what a human is thinking” 
  • [10:09] Misconception #2: “AI stands for ‘Average Intelligence’” 
  • [16:22] Misconception #3: “Synthetic research is risky”  
  • [17:31] A real-world B2B synthetic study  

Highlighted Quotes  

You are defining your customer set for us, then we build their digital twin. —Jon Lombardo

You're not getting an average intelligence from that model. You're effectively getting a PhD to go and grade answers. Then those answers are used to train the model. It’s PhD level intelligence at scale. —Jon Lombardo 

People are still asking questions the old way. We don't have those constraints anymore. We can ask more interesting questions, more of them, and we can go back and interrogate them even further. —Jon Lombardo

Full Transcript: Drew Neisser in conversation with Jon Lombardo

   

Drew: Hello, Renegade Marketers! If this is your first time listening, welcome, and if you're a regular listener, welcome back. Before I present today's episode, I am beyond thrilled to announce that our second in-person CMO Super Huddle is happening November 6th and 7th, 2025. In Palo Alto last year, we brought together 101 marketing leaders for a day of sharing, caring, and daring each other to greatness, and we're doing it again! Same venue, same energy, same ambition to challenge convention, with an added half-day strategy lab exclusively for marketing leaders. We're also excited to have TrustRadius and Boomerang as founding sponsors for this event. Early Bird tickets are now available at cmohuddles.com. You can even see a video there of what we did last year. Grab yours before they're gone. I promise you we will sell out, and it's going to be flocking awesomer!

Welcome to CMO Huddles Quick Takes, our Tuesday spotlight series where we share key insights that you can use right away. In this one, we're diving into synthetic research with Jon Lombardo, co-founder of Evidenza and former head of research at LinkedIn B2B Institute. Jon lays out the three biggest misconceptions holding marketers back from adopting synthetic research and why this faster, cheaper, and more flexible alternative to traditional research is worth your time. Let's get into it.

Narrator: Welcome to Renegade Marketers Unite, possibly the best weekly podcast for CMOs and everyone else looking for innovative ways to transform their brand, drive demand, and just plain cut through, proving that B2B does not mean boring to business. Here's your host and Chief Marketing Renegade, Drew Neisser.

Drew: Hello Huddlers! I'm really thrilled to have you all here today for what promises to be, I think, a fascinating conversation about the future, or certainly the potential future, of market research. Today, we're diving into the world of synthetic research, which some of you may know and some of you may not, but you're all going to be going "mind-blowing," and we're going to talk about it with Jon Lombardo, co-founder of Evidenza. Jon's company is making waves by delivering market research for major brands ten times faster and at half the cost of traditional methods while maintaining equal accuracy or efficacy. Yeah, as I said, it's kind of mind-blowing. Before founding Evidenza, Jon became known to many of you via the incredible work of the LinkedIn B2B Institute. So we know we need insights. We know that they can be the lifeblood of really effective B2B marketing. JD, we just talked about that and the need for an insight to create a great sales pitch. But traditional methods are often slow, expensive, and cumbersome. So Jon is here to help us understand how synthetic research is changing that equation. With that, Jon, welcome. How are you, and where are you joining us from this fine day?

Jon: I'm doing well. I'm delighted to be here. I'm in Brooklyn, in the WeWork in downtown Brooklyn, on Montague Street, if any of you know it.

Drew: All right, let's get to it. We're going to be talking about B2B marketing, and I want to make sure, just in case our audience has to leave early or we want to convince others to stay: what are the three misconceptions that marketers may have about synthetic research? And one of them, maybe, is simply that they don't know it exists. But give us three thoughts, one at a time, and then you and I can discuss them thoroughly.

Jon: Yeah, I guess the number one misconception would be that AI cannot reliably transmit to you what a human thinks, or communicate to you what a human thinks, right? People have a hard time believing that AI can tell them what a wealth manager thinks or a product manager thinks. And in fact, there's an increasing amount of research showing that AI is very good at impersonating people and telling you exactly what they think. You know, we sometimes joke that we're working not with an n of a certain number, but an n of infinity, because you're getting all of the product managers in a specific category to give you an opinion, rather than just one of them. So this is probably the biggest misconception. It's probably the biggest skepticism we have to overcome every day: "There's just no way that AI can reliably communicate to me what my customer thinks. It has to be done with humans. It can't be done with AI." So that's probably number one, I would say.

Drew: Okay, well, I'll come back to that because I have some questions. But go ahead and go to number two.

Jon: I would say number two is that, you know, if AI can tell me something, it can only tell me the average insight; it can't tell me anything innovative, anything unique, anything insightful. And I would say that's entirely false. There are two different ways to think about it. One is just, I think, a misunderstanding of what innovation actually is. Innovation is generally just making incremental progress every day. It is not some lightning bolt that comes out of the sky and strikes you with a great idea. So just the fact that you can learn faster than other people using synthetic research is a huge advantage of AI. Speed alone allows you to learn more rapidly than other companies, other competitors, and if you learn more rapidly than your competitors, then you will end up being more innovative than they are. So people think it's average, but actually, the process alone is extremely innovative. That's on the, let's call it, market research side of things. But even on the creative side of things, there's an idea called temperature in AI, and you can turn the temperature way up or way down. If you turn it way down, you get very little variability in your responses; you'll get a very consistent answer. If you turn the temperature way up, you'll get a lot of variance in your answers. So for a creative campaign or a creative idea, you probably turn the temperature way up, and it'll give you some really wild ideas. You'll probably throw away most of them, but you'll probably get one really wild idea that's very interesting, very memorable, likely to cut through and be remembered. So on the insight side, just the process of running faster, more iterative research is very innovative. But on the creative side, it can also come up with very creative ideas, and increasingly it is doing that kind of work.
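
To make the temperature idea concrete, here's a minimal sketch using the OpenAI Python client. The model name, prompt, and temperature values are illustrative assumptions, not Evidenza's actual setup.

```python
# Minimal sketch: the same prompt at low vs. high temperature.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Suggest a campaign concept for a B2B data-management product."

# Low temperature: low variance, consistent answers (survey-style questions).
consistent = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    temperature=0.1,
    messages=[{"role": "user", "content": prompt}],
)

# High temperature: high variance, "wild" answers (creative ideation).
wild = client.chat.completions.create(
    model="gpt-4o",
    temperature=1.5,
    messages=[{"role": "user", "content": prompt}],
)

print(consistent.choices[0].message.content)
print(wild.choices[0].message.content)
```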

Drew: Okay. And number three, because I have questions about that too, but let's see. Let's get to the third.

Jon: And I think, in all of this, people think that synthetic research is risky. "I'm not talking to humans. It's going to be very risky for me. Maybe it's a mistake. Maybe I'll lose my job if I try this novel idea." But I would say, again, doubling down on the core idea, I believe it's much less risky. If you do research today, I think you have two primary challenges. One is what we call survey remorse. You write the survey the wrong way, you field the survey, you get the answers back. Only after that point do you realize, "Oh, shit, I wrote it the wrong way. I wish I had that to do over. I just burned a bunch of money getting something that is not useful to me." That's the survey remorse side of it, the client writing the survey; it's why people labor so much over surveys. We can rerun surveys. So if you write it and you get the wrong answer back, we can run it for you again. So we remove survey remorse on, I would call it, the client side. And on the respondent side, there's a well-known idea called survey fatigue. Survey fatigue is why people only ask 15 questions, and it's often "choose from these options," very simple Likert scale type stuff, because you just have to get people through the survey fast. But in fact, robots are infinitely patient, and so they will answer question 30 with as much fidelity and as much integrity as they answered question one. So it is not riskier to do this kind of research. In fact, because you can go back and rerun the survey, you get these much better responses, so much so that people have said to us, "These responses are actually too good," which is also something you've heard about synthetic diamonds recently: the complaint now is that synthetic diamonds are too big and too good.

Drew: Got it. Awesome. All right, well, let's go back to AI reliability. Lots of folks have been doing research for a long time, and they've established some understanding that there's a certain level of accuracy. What kind of proof have you developed that shows this is, in fact, reliable, that this is really, truly as good as what humans would have responded with?

Jon: Yeah, well, I would say first and foremost, the more actual market research I've seen, especially in B2B, the more riddled with errors I realize it is. Many people we work with have a lot more confidence in our results than in their own results, but their results are human results, so they've trusted them because that's the only option they had. The more you see of what we do versus what you do with humans, the more you realize you get much better, much more insightful responses from AI than you do from humans. But we do a lot of comparisons between AI and humans, and we generally see correlations of 0.6, 0.7, 0.8, which is extremely high. In the world of hard science, you have very high correlations; that's the whole thing, right? The things have to work. In the world of soft science, which is what we're all in, if you get a 0.7 or 0.8 correlation, that's effectively a gold standard, and we relatively routinely get that across a survey. In terms of just replicating human response, most research doesn't have that level of accuracy. So we do a lot of testing like that, where people give us a survey; we don't get the answers, we just get the survey. We recreate their sample, we recreate their survey, we run it, and we compare afterwards, a kind of classic double-blind study, and then we look at the correlations. And we see very high correlations. Then, when you have the ability to go back and ask additional questions, as we often prefer, you can refine the results even further. So it's not a one-off result; it's an iterative process, both in gathering insights and in improving the correlation. We've just done a ton of that. We've probably done more of those in B2B, frankly, than anybody at this point. We could easily, and have thought about, and will at some point partner with an academic just to write up what we've learned, because I think it's such a fascinating area. And we arguably have more B2B data on both the human and the synthetic side, in this way, than anybody else right now.
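
As a rough illustration of the comparison Jon describes, here's a minimal sketch that correlates human survey results with a synthetic rerun of the same questions; the numbers are made up for illustration.

```python
# Sketch: correlate human vs. synthetic per-question results.
from scipy.stats import pearsonr

# Share of respondents choosing each option, per question (hypothetical data).
human = [0.42, 0.31, 0.27, 0.55, 0.18, 0.64]
synthetic = [0.45, 0.28, 0.30, 0.51, 0.22, 0.60]

r, p = pearsonr(human, synthetic)
print(f"correlation r = {r:.2f} (p = {p:.3f})")  # ~0.7+ is the range Jon cites as a gold standard
```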

Drew: I love it. Yeah, and that's certainly an easy way to test: these are the results that came back with humans, these are the results that came back with us, what does it look like? And you've done that, so that's a good answer. Now we get to the second point: a big area was average insights. I think a lot of us, when we started using, say, ChatGPT to write or ask questions, sort of had AI standing for "average intelligence." So you're saying it's not average insight, and you talked about, from a temperature standpoint, how you make sure of that. I want to dive a little deeper, and maybe this is a moment where you can explain: what are we talking about when we're talking about creating synthetic research?

Jon: Yeah, I mean, what we're really doing is creating digital twins of your customers. So you are defining your customer set for us. You're saying, "I'm LinkedIn. I operate in the category of advertising solutions. My primary buyers come from technology, financial services, manufacturing, healthcare, and education. They are primarily in the marketing function, but there will be some financial decision makers involved, so we'll include the financial function. They have decision-making authority, they have some number of years of experience, and they're probably from companies of, let's say, 50-plus employees on up." You would define your audience in the exact same way you would with traditional market research. We will then build digital twins. So I will build a synthetic version of Drew or a synthetic version of Jon, and it will take on all those characteristics and essentially learn everything about that kind of person: the kinds of decisions they make, the kind of authority they have, what their industry is, who their customers are, how they think about ROI versus clicks. And it will give all of that back to you and rank order it for you. So in the same way you run a survey, you'll ask a number of questions. It can be "here's a list of options, choose from it," or open-ended: "give me how you would think about this," or even more interesting stuff, like "how would you explain this to your board or your CEO?" That's all you're really doing with synthetic research: you are creating the sample of people you care about, and then you're asking them the questions you want, and then you are essentially teasing out insights. Better questions get better answers, and more average questions get more average answers. So there's an art to it all, of course, but that's generally what the process is. Let me just stop there and see if you have a question. Then I have another comment on average versus, let's call it, excellent intelligence.
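
For a sense of the mechanics, here's a hedged sketch of the digital-twin pattern Jon describes, with the audience definition encoded as a persona prompt. The prompt wording, model name, and questions are assumptions for illustration, not Evidenza's actual system.

```python
# Sketch: a persona ("digital twin") answering survey questions in character.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a marketing decision maker at a technology company with 50+ employees, "
    "evaluating advertising solutions. You have budget authority, years of experience, "
    "and you care about ROI, not clicks. Answer every survey question in character."
)

questions = [
    "Rank these messaging pillars by how compelling they are to you: A, B, C.",
    "How would you explain the value of this product to your board?",
]

for q in questions:
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": q},
        ],
    )
    print(resp.choices[0].message.content)
```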

Drew: Yeah. I mean, I have so many questions, but I think we're still in the realm of average insight versus exceptional. So let's keep going with your additional insight on average insight.

Jon: Yeah. I mean, I would just say, as a start, I'm actually curious on this call: how many people have ever used the o1 pro model from OpenAI? I see at least one hand if I keep looking, and a couple of thumbs up, so let's assume it ain't that many people. What the o1 pro model is basically doing is taking the core model that OpenAI built and then training it using answers from PhDs. So you're not getting an average intelligence from that model. You're effectively getting a PhD in the hard sciences or the soft sciences to go and grade answers. Then those answers are used to train the model, so you're getting PhD-level intelligence at scale. So when you get a response about marketing, you can ask it to give you a response from Byron Sharp or Philip Kotler, whoever you care about as your thinker, and you'll get that kind of response. Effectively, you have a PhD in your pocket. This has been going on for a long time, actually; they use a lot of very standardized tests to show the improvement that the models make. For a long time now, the models have been able to ace the LSAT or the MCAT. And I think we can all agree that the LSAT and the MCAT are a lot more difficult than most of the marketing decisions being made. So if it can do those things well, it can do marketing quite well. In fact, really well.

Drew: Yeah. I mean, this is sort of subtle in the sense that what we're counting on with this research is that we've got a target audience, that there are multiple people on the buying committee, and that you can create a synthetic twin of every single person on the buying committee, with the wrinkles that come with each industry and the verticals and all those things. So just as you would with traditional research, where you say these are the people we want to talk to, we want to talk to heads of, you know, security at these companies, for example, we're relying on the fact that the synthetic twins will have the same foibles as humans and will answer, hopefully, with a degree of honesty and transparency.

Jon: But they can do another thing, Drew. I mean, people in B2B talk a lot about an ideal customer profile, right? But in fact, if you go read the verbatims in lots of market research, it doesn't really sound like the ideal customer profile. To me, it sounds very much like the average customer profile. And I think in aggregate, you can learn a lot from that. An average response means that lots of people believe that thing, which means there's lots of commercial opportunity. So people use "average" in a way that I wouldn't use it. Average in that sense, across a big sample for certain questions, means the biggest commercial opportunity. That is not average; that is important to understand. It's a stable insight you can build a business around, to channel Bezos language. So our quant is what we just talked about: the average, which is actually the big commercial opportunity. The qual we also give you is your actual ideal customer profile. You can effectively get the smartest customer on demand to answer your questions about your product or your pricing or your placement or your promotion, your four Ps. So AI has the flexibility to, in some sense, be as dumb or as smart as you want, to be as average or as intelligent as you want. It's knowing where to apply the average or the intelligence; that's where it matters. But human respondents don't offer you that flexibility, or, frankly, the intelligence that we probably want as marketers.

Drew: So okay, the last thing that we talked about is that this is actually a less risky form of research in the sense and we've been there any number of us who've done research studies, and it's one of the reasons why you create a version of your research before, you know, then you test it of the reasons why you create a version of your research before, you know, then you test it with a few people before you go out to thousands, if you're if that's where you're going. But it does remove that risk, because, as you said, you can, sort of, you can re-run it, and also, sort of, given the fact that it costs less, it takes some of the risk off the table as well, right?

Jon: Yeah, it does. Some people have said that, arguably, we should charge more because it's less risky, which I think is a very fair point of view. We don't do that. But there's a view that if it's so risky to do it one way, and we remove that risk, then arguably it is more valuable for that reason.

Drew: Well, you know, you also have that faster, better, cheaper potential, which is sort of the golden triangle of promises out there. So I think it would really help people if you had a specific example of some research that you have done that you can talk about, in general terms or, ideally, for a B2B customer.

Jon: Yeah, I can absolutely do that for you. So we do lots of research. Sometimes it's research around "we have three messaging pillars, and we're trying to understand which one to put on our landing page or have our sales team talk about." I think the key idea there is that what you often find with messaging is that it's just a reflection of the most senior person in the room. So when we ran this for a big data management company, they were like, "This is so interesting. You ran the three pillars." And one thing that we do is identify which messaging is least understood and who it is least understood by. Everybody's talking about AI and AI agents. Nobody in a non-technical role has any idea what that stuff means. None. Even many of the people in technical roles don't have any understanding of what that means. So if you take that data back to the organization, they get a better understanding of their customer: hey, your customer doesn't understand what that is. The AI, it turns out, can also take things that are confusing to people and write a clearer version for the entire buying committee. So you can say, identify all the confusing language in these messaging pillars and rewrite it in a way that is clear to everybody on the buying committee, technical and non-technical, right? Like, one we had was "bidirectional communication with" (I can't remember the platform), and the AI rewrote it as "seamless integration with Microsoft 365 tools." So it turns out it's very good at jargon busting, at removing confusion and rewriting things in a way that's clear to everybody, which was not really possible before. You wouldn't have asked those kinds of questions or had that kind of data as quickly as you needed it. So in the B2B world, we do that a lot. We test value propositions, we do message testing, we test creative, and it can be either concepts or actual ads. And then another thing you can do is just ask questions. You can pivot your research almost in real time. The thing we did in the B2C world, which I think has a lot of application in the B2B world too, was around ready-to-drink products. So you go in and you buy, kind of like, a pre-mixed Jack and Coke, let's call it something like that. The company, which you all would know, a very well-known CPG company, realized in the course of the research that they didn't actually understand what they meant by convenience. Did convenience mean it was in the store next door? Did it mean an aluminum bottle versus a glass bottle? Did it mean a screw top versus a bottle-opener top? Did it mean a variety pack versus a single flavor? These are questions you wouldn't have been able to ask the old way. So they ran the research, realized that they didn't understand convenience, and were able to ask a whole series of questions about convenience that gave them a much deeper understanding of something you would think they would understand, but they didn't, right? And of course, that's true in B2B too. You may want to understand what integration really means when you talk about integrating your product with other products, or who is actually on the buying committee, or how you would frame it to your CEO or your board. There are all these questions that are really wonderful to ask, but you probably haven't asked them, so you're unsure whether you should ask them.
So we don't have the limitation of 15 questions that have to be on a Likert scale; we can actually ask questions that are more interesting for storytelling and for teasing out insights, and then it's easier for you to tease out the stories and tell them internally. When TV basically became a thing, they just took what worked in radio and did it on TV, right? Everyone knows that story. When a new medium emerges, you just do the old thing; you don't do the new thing yet. The same thing is going on right now with synthetic research. People are still asking questions the old way. We don't have those constraints anymore. So we can ask much more interesting questions, many more questions, and we can go back and interrogate the answers even further. We're reimagining right now what is possible. And I think that is just incredibly exciting and insightful.
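
Here's a minimal sketch of the jargon-busting use Jon mentions: asking a model to flag confusing language and rewrite it for the whole buying committee. The prompt and the example pillar are assumptions, not Evidenza's workflow.

```python
# Sketch: identify jargon in a messaging pillar and rewrite it plainly.
from openai import OpenAI

client = OpenAI()

pillar = "Bidirectional communication with the platform via our agentic AI layer."

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Identify the confusing language in this messaging pillar, then rewrite it "
            "so it is clear to both technical and non-technical buyers:\n\n" + pillar
        ),
    }],
)
print(resp.choices[0].message.content)
```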

Drew: It's funny because over the years, I've had so many conversations with CMOs where they say, "Can I be dumb for a moment and ask this question: what does this mean?" And that was kind of the role of the CMO, in many ways, to help get this into language that people could actually understand, not just the technical folks. And by the way, I see "agentic" everywhere, and I still have no clue what that means. I think what happens in companies is that you just get used to it: hey, we're all using it, so everybody understands it, right? So that's a pretty profound thing. And another thing that's interesting, and always helpful with research, is that it's not the CMO or the marketing people saying, "Hey, nobody understands this." The research is telling us that this is a problem. Now, this gets back to: what research, and why should I trust synthetic research? And I do think that you are coming from a place of proving effectiveness on a broad scale. Look, there are lots of folks I know who have bought into it, because you guys have been growing like crazy, but there is a leap of faith here. Oh wait, this is suddenly different. Is it really going to be as accurate as the old research, if the old research was even accurate? Which is a funny thing in itself. So what this really means for B2B CMOs is that if they were to bring this in as a tool, they would really need to, I think, do what you've done: run some research that they've done already and see what the results are with the synthetic.

Jon: Yeah. But I would again push on a point I made earlier. I think people have a fundamental misunderstanding of what accuracy is, because the big question is, "Can it be accurate?" The real question is, "Can you work with us?" It may be accurate out of the gate, but we have multiple customers where we have worked over subsequent iterations to improve the accuracy, and now they have a calibrated understanding of their customer that allows them to move faster than their competitors. So the idea that it is a one-shot effort is not right. It's a multi-shot effort. It's about collaborating, calibrating, and then you have an advantage. And that's what everybody should seek to do. There's an idea called evals in AI, which you see a lot of. It's actually what I talked about earlier: I have AI take the LSAT or the MCAT, and I say, wow, it's doing better than all doctors, all lawyers; or I have it take a scientific exam and, oh my God, it's better than PhD candidates, right? What they're doing is basically showing that it's getting more and more accurate, better and better. We've started to do that now too. You can bring us a survey, and we will run it against a bunch of different models at a bunch of different temperatures and say, "We ran it 20 different ways; this is the best calibration for your audience." And now you can have confidence: we know what didn't work, we know what did work, and we're going to go with what did work and make decisions on that going forward. Again, I think this is going to be a continuous process that customers will be in, but the idea is to get to accuracy through refinement. That's how I would encourage people to think about it.
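
The calibration loop Jon describes, running the same survey across several models and temperatures and keeping the configuration that best matches a human reference, might look something like this sketch. run_survey() is a hypothetical stand-in and all data are made up.

```python
# Sketch: grid-search models x temperatures against a human reference survey.
import random
from itertools import product
from scipy.stats import pearsonr

human_reference = [0.42, 0.31, 0.27, 0.55, 0.18, 0.64]  # made-up human results

def run_survey(model: str, temperature: float) -> list[float]:
    """Hypothetical stand-in for fielding the survey synthetically."""
    rng = random.Random(f"{model}@{temperature}")  # deterministic dummy output
    return [x + rng.uniform(-0.1, 0.1) * temperature for x in human_reference]

best = None
for model, temp in product(["model-a", "model-b"], [0.2, 0.7, 1.2]):
    r, _ = pearsonr(human_reference, run_survey(model, temp))
    if best is None or r > best[0]:
        best = (r, model, temp)

print(f"best calibration: model={best[1]}, temperature={best[2]}, r={best[0]:.2f}")
```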

Drew: Right. So this isn't someone just saying to you, "Hey, give me research on this target," and you plug it into your little machine and give them answers. It sounds like there's a process that clients go through when they work with you to find the insights they're looking for.

Jon: That's why I think we've been successful: when we fail, we can still correct the failure. In the old model, if you fail, you fail. It's over. It's done. In our model, if we fail, like projects that don't go well the first time, we work with the client to make them better, to refine them, and then they end up in a great place. So we've had multiple customers who, through refinement, have become completely convinced that what we do is better than the old way of doing it, and then become subscription customers. It seemed like it would have failed, and then they became an ongoing customer. So, you know, the new way of doing things isn't the old way of doing things.

Drew: That's a wrap on this Huddles Quick Take. To hear the full episode, including where CMOs can start, Jon's take on situational awareness, and why brand fame doesn't always unlock growth, head over to the CMO Huddles Hub on YouTube, or check the link in the show notes. And if you're ready to rethink your research strategy, be sure to follow Jon on LinkedIn or visit evidenza.com to learn more. Jon will also be at the CMO Super Huddle, November 6th and 7th in Palo Alto. Get your early bird tickets while they last at cmohuddles.com/super-huddle. Peace out!

Show Credits

Renegade Marketers Unite is written and directed by Drew Neisser. Hey, that's me! This show is produced by Melissa Caffrey, Laura Parkyn, and Ishar Cuevas. The music is by the amazing Burns Twins, and the intro voice-over is by Linda Cornelius. To find the transcripts of all episodes, suggest future guests, or learn more about B2B branding, CMO Huddles, or my CMO coaching service, check out renegade.com. I'm your host, Drew Neisser. And until next time, keep those Renegade thinking caps on and strong!