Speaking & Authority

What Makes an AI Keynote Speaker Worth Booking

Fernando Angulo
Senior Market Research Manager, Semrush
11 Min Read
Apr 26, 2026

This is a guide for the people who book AI keynote speakers, written by one of the speakers they book. Conference producers, programming directors, and event curators are now buying from a category that has scaled faster than its quality control. The result is a procurement problem most organizers are solving with name recognition and budget — neither of which correlates with whether the talk lands.


Quick Answer:

Most AI keynote speakers in 2026 are content repackagers. They quote research they did not produce, cite trends they did not discover, and present frameworks they did not build. The minority who do original research behave differently on stage: they describe data they have personally seen, change their mind in front of the audience, and answer questions outside their slide deck. For conference producers, the difference shows up in audience retention, post-event NPS, and whether attendees mention the talk three months later.

I have been on the speaking circuit long enough to watch the AI category absorb thousands of new entrants since late 2023. Most are well-intentioned. Many are competent presenters. Very few are doing original research. That gap — between presentation skill and source material — is the single most useful filter a producer can apply, and almost nobody applies it during the booking process.

What follows is a seven-criteria field guide. The framework happens to reflect how I work, but I have tried to write it as if you were evaluating someone other than me. If you read it carefully, it should make competitor speakers look better when they meet the criteria and worse when they don't — including, on any given day, me.

Digital Marketing Europe 2026 — keynote on how generative AI is reshaping consumer behavior and strategy.

AI Keynote Speakers Are Now a Crowded Market — and Most Are Repackagers

Demand for AI keynote speakers exploded in 2023 and has not let up. Every industry conference now wants an AI track, every corporate offsite wants an AI keynote, every association wants an AI futurist on the main stage. Supply caught up faster than quality.

By my rough count from working the circuit, the majority of people now positioning themselves as AI keynote speakers are not AI researchers. They are SEO consultants who pivoted, marketing agency founders who added "AI" to their bio, LinkedIn influencers who cleared a million followers on AI commentary, and operators who sold a small AI tool and now lecture about the field. None of these biographies are disqualifying. But they share one structural feature: the source material is downstream. They are reading the same five reports — OpenAI publications, McKinsey AI surveys, a16z infographics, Stanford's AI Index, the occasional Anthropic post — and rephrasing them on stage.

This is not a complaint. It is a procurement problem. If 80% of available speakers are pulling from the same upstream sources, the average AI keynote in 2026 sounds remarkably similar regardless of who is delivering it. Audiences notice. The post-event NPS data I have seen from larger conference series shows AI talks scoring lower than they did in 2024, even though the production budgets are higher. The market is saturated with the same talk.

Criterion 1: Original Research vs Repackaged Findings

The first and most useful filter is whether the speaker presents data they personally analyzed or quotes findings from third-party reports they have read.

The test is simple. During a discovery call, ask: "What is a finding you have personally produced that contradicts the consensus in your field?" Repackagers cannot answer this question because they have not produced findings — they have only consumed them. A researcher will have three or four contrarian observations ready, because contradiction is what makes original research interesting.

A second variant: "Walk me through the methodology behind your most-cited stat." If they got the number from someone else's report, they will redirect to the source. If they produced it, they will describe sample sizes, time windows, and the limits of the data. That texture cannot be faked in a 90-second answer.

Criterion 2: Will They Take Questions Outside Their Slide Deck?

Researchers can take questions outside their deck. Performers cannot. This is the most reliable signal you will get in 30 minutes.

Ask: "What is a question your slides don't answer?" A performer will deflect — "great question, the deck is just an entry point." A researcher will name three uncomfortable questions, explain why they are hard, and tell you which one they're currently sitting with. Then ask: "What would change your mind on this thesis?" Performers freeze, because their thesis is rented. Researchers light up, because they own the thesis and have already considered the conditions under which it would break.

"The best AI keynote speakers are not the ones with the longest LinkedIn bio. They are the ones who can answer a question their slides don't cover."

For your audience, this matters more than anything else in the booking decision. The Q&A portion is where talks become memorable or forgettable. A speaker who can only run their slide path is a speaker your attendees will not remember in 90 days.

Criterion 3: Specificity vs Buzzword Density

Watch a five-minute clip of a recent talk — ideally the most recent — and run a counting exercise. On one side, count specifics: named companies, dated events, concrete numbers, named people, named products, geographic specifics. On the other, count generic buzzword sentences: "AI is transforming…", "the future of work…", "in the age of intelligence…", "we are at an inflection point…".

Fewer than three specifics per minute is a warning sign. Above six is unusually strong. Repackagers default to generality because they don't have the proprietary specifics to lean on; their talk has to sound profound at the abstraction layer. Researchers do the opposite — they reach for the concrete because that's what they actually know, and they trust the audience to extract the principle from the example.
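If you want the tally to be repeatable across clips rather than a one-off impression, the count can be scripted against a transcript. Below is a minimal sketch in Python, assuming you have a plain-text transcript of the clip; the buzzword list is hand-built from the examples above and the "specifics" patterns are crude regex proxies, so treat the output as a starting point for a human tally, not a verdict.

```python
import re

# Hand-built buzzword phrases (illustrative, not a validated lexicon).
BUZZWORD_PHRASES = [
    r"AI is transforming",
    r"the future of work",
    r"in the age of intelligence",
    r"inflection point",
]

# Crude regex proxies for "specifics": dated events, dollar figures,
# percentages, and capitalized two-word names. A human count is more reliable.
SPECIFIC_PATTERNS = [
    r"\b(?:19|20)\d{2}\b",            # years, e.g. "2026"
    r"\$\d[\d,.]*",                   # dollar figures, e.g. "$50,000"
    r"\b\d+(?:\.\d+)?%",              # percentages, e.g. "14%"
    r"\b[A-Z][a-z]+ [A-Z][a-z]+\b",   # rough stand-in for named entities
]

def clip_density(transcript: str, minutes: float) -> dict:
    """Return specifics per minute and buzzword hits for one clip."""
    specifics = sum(len(re.findall(p, transcript)) for p in SPECIFIC_PATTERNS)
    buzzwords = sum(len(re.findall(p, transcript, re.IGNORECASE))
                    for p in BUZZWORD_PHRASES)
    rate = specifics / minutes
    # Thresholds from the criterion: under 3/min is a warning, above 6 is strong.
    verdict = "strong" if rate > 6 else "warning" if rate < 3 else "average"
    return {"specifics_per_min": round(rate, 1),
            "buzzword_hits": buzzwords,
            "verdict": verdict}

# Example with a made-up one-line transcript:
print(clip_density("In May 2025, Semrush measured a 14% drop in clicks.", 5.0))
```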

If you are evaluating speakers and want to compare a researcher-led talk against the repackaged version, I list my signature talks at /invite with annotated specifics from each one.

Criterion 4: Track Record of Updated Thinking

AI moves fast. A speaker whose talk has been substantively the same for 18 months either isn't researching or isn't updating. Both are problems.

Look at their last three or four talks — YouTube clips, conference recap pages, slide PDFs if they share them. Compare frameworks. Are the examples updated? Did the thesis evolve? Did anything they said in 2024 turn out to be wrong, and do they now say so? A researcher's talk in April 2026 should not have the same case studies as their April 2025 talk. The case studies should be newer, the model versions should be current, and a meaningful share of the framework should reflect things that were not yet true a year ago.

The strongest signal here is when a speaker explicitly retires a previous claim on stage. "I said this last year. The data has moved. Here is what I now think." That is a speaker who is updating in public, which means they are still doing the work.

Criterion 5: Audience-First vs Speaker-First Talks

A great AI keynote speaker does not give the same talk to a CMO summit and a developer conference. The principles may be shared; the specifics, examples, and emphasis should not.

During the discovery call, ask: "How would this talk change for our specific audience?" Mediocre speakers give the same answer they give every conference: "I'll customize the opening." Strong speakers will ask you about the audience composition, what other talks are on the agenda, what the audience just heard at last year's event, and what decision the audience is supposed to make in the 90 days after they leave. They are scoping for fit because they actually customize.

A useful tell: ask what they would not include for your audience that they normally include. A speaker-first presenter will struggle with the question because they think of their talk as fixed. An audience-first presenter will name two or three sections they'd cut for your group, and explain why.

Criterion 6: They Speak Without Slides if Necessary

If the AV breaks, the speaker who actually knows the material recovers. The repackager freezes, because the slides are the talk — not a visual aid for the talk.

This happened to me at Digital Marketing Europe in 2026 when projection failed eleven minutes into the keynote. I delivered the remaining 45 minutes slide-free, walking the audience through the same arguments using the room and a marker. The talk was rougher; the audience engagement was higher. I am not telling that story as a brag — I'm telling it because it is the cleanest stress test of whether a speaker has internalized their material or is reading their deck out loud at a more confident pace.

Digital Marketing Europe 2026 — improvising with a laptop after projection failed mid-keynote.

You can ask in the discovery call: "If our AV failed, what would change about the next 45 minutes?" A researcher will describe a different version of the talk that still works. A performer will tell you it would be a disaster. Both answers are useful.

Criterion 7: They Decline Wrong-Fit Invitations

Counterintuitive, but the speakers worth booking are also the ones who decline events where they would not add value. If a speaker accepts every invitation regardless of audience fit, the quality of their talk reflects that — because the speaker is no longer choosing their stages, the stages are choosing them, and the talk gets averaged.

Ask, casually, what they have turned down recently and why. A serious speaker will be specific: "I declined a fintech summit last quarter because the audience was treasury risk, and my work is on consumer search behavior — the fit was wrong, and I'd have wasted their time." That answer tells you they have a model of where they belong and where they don't. A speaker who has never declined anything will struggle to give you an honest answer.

How Conference Producers Actually Filter (and What's Missing)

Most procurement processes I see use three filters: budget, availability, and name recognition. None of those correlate with talk quality. They correlate with operational ease and audience marketing — which matter, but are not the same as a talk that lands.

A better filter is cheap and fast: a five-minute clip plus three discovery-call questions. The clip tells you specificity-to-buzzword density. The three questions tell you the rest:

  1. "What is a finding you have personally produced that contradicts the consensus?" — tests Criterion 1 (original research).
  2. "What would change your mind on this thesis?" — tests Criterion 2 (off-deck thinking) and Criterion 4 (updated thinking).
  3. "How would this talk change for our specific audience?" — tests Criterion 5 (audience-first) and indirectly tests Criterion 7 (fit awareness).

Add a 30-second look at the speaker's last four talks for Criterion 4 (updated thinking) and a casual question about declined events for Criterion 7. The whole evaluation takes under an hour and replaces months of guesswork; a minimal way to keep score is sketched below. If you want to see how I structure my own engagements against this framework, the talks I take on are listed at /invite.
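If you are running this filter across a long shortlist, it helps to record the scores the same way every time. Here is a minimal sketch, assuming a 0 to 2 score per criterion; the field names map to the seven sections above, and the 10-out-of-14 pass bar is my own judgment call, not a validated threshold.

```python
from dataclasses import dataclass, fields

@dataclass
class SpeakerScorecard:
    """Score each criterion 0 (fails), 1 (mixed), 2 (clear pass)."""
    original_research: int   # Criterion 1: has produced a contrarian finding
    off_deck_qna: int        # Criterion 2: names questions the slides miss
    specificity: int         # Criterion 3: specifics-per-minute in a recent clip
    updated_thinking: int    # Criterion 4: thesis evolved across recent talks
    audience_first: int      # Criterion 5: scopes the talk to your audience
    slide_independent: int   # Criterion 6: has a real answer for an AV failure
    declines_bad_fit: int    # Criterion 7: can name a recent wrong-fit decline

    def total(self) -> int:
        return sum(getattr(self, f.name) for f in fields(self))

# Hypothetical candidate scored from one discovery call plus a five-minute clip.
candidate = SpeakerScorecard(2, 2, 1, 1, 2, 1, 0)
print(f"{candidate.total()}/14")  # I treat anything under 10/14 as repackager risk
```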

The Open Question

None of this guarantees a great keynote. Talks are made of a thousand small choices — pacing, room reading, the joke that does or doesn't land — that no procurement framework can predict. What the seven criteria above can do is filter out the most common failure mode: booking a speaker whose source material is downstream of someone else's research, whose talk is fixed regardless of audience, and whose slides are the only thing standing between them and silence.

If you operate in Spanish-speaking markets, the filter gets stricter still. Very few AI speakers can deliver a keynote, take Q&A, and run a panel natively in Spanish — the structural shortage I describe in the Spanish-language AI search blind spot applies to the speaker pool too. Plan accordingly.

The producers I most respect treat speaker selection the way good editors treat contributors: a discipline of fit, not a transaction of fame. The seven criteria above are simply that discipline, written down.

Frequently Asked Questions

How do you evaluate an AI keynote speaker before booking?

Start by watching a five-minute clip of a recent talk and counting concrete specifics — named companies, dated events, original numbers — against generic AI buzzwords. Then ask three discovery-call questions: what is a finding you have personally produced that contradicts the consensus, what would change your mind on your current thesis, and how would this talk change for our specific audience. Repackagers freeze on these questions; researchers light up.

What makes a great AI keynote speaker?

A great AI keynote speaker presents original research rather than rephrased reports, can take questions outside their slide deck, uses high specificity-to-buzzword ratios, updates their frameworks as the field evolves, adapts the talk to the audience in front of them, can deliver without slides if AV breaks, and declines events where they would not add real value. The common thread is that they treat the keynote as a thinking instrument, not a performance.

How much does an AI keynote speaker cost?

AI keynote speaker fees typically range from $5,000 to $50,000 USD depending on the speaker's profile, audience size, geography, and travel requirements. Tier-one celebrity AI speakers can exceed six figures. A smaller subset accept aligned events without a speaking fee for strategic, brand, or audience reasons — for example, speakers attached to a sponsoring vendor whose visibility is the compensation. Always clarify what the fee covers: prep time, custom content, travel, recording rights, and post-event follow-up.

What is the difference between an AI expert speaker and an AI futurist?

An AI expert speaker grounds the talk in current data, working systems, and observable behavior — what is true now, with evidence. An AI futurist is paid to extrapolate further out, often with less verifiable claims and more narrative latitude. Both can be valuable. The mistake conference producers make is booking a futurist when the audience needs an operator-level view, or vice versa. Match the format to the decision the audience has to make in the next 90 days.

Are there AI keynote speakers who can present in Spanish?

The pool of AI keynote speakers who can deliver fluently in Spanish — not just translate slides, but think and take Q&A natively — is much smaller than in English. Most Spanish-language AI events default to translated English speakers, which loses nuance. A handful of bilingual operators speak both languages at keynote level; Fernando Angulo is one example, presenting regularly across Spain and Latin America. The shortage is structural and worth understanding before booking, as covered in the Spanish-language AI search blind spot.

How far in advance should you book an AI keynote speaker?

For high-demand AI speakers, six to nine months is a reasonable lead time for flagship events and three to four months for mid-sized ones. The exception is researcher-speakers who travel heavily — their calendars can fill twelve months out for the largest summits. If your event date is firm and the audience expectation is high, treat the speaker booking as one of the first programming decisions, not one of the last.


Evaluating speakers for an upcoming event?

I speak at 50+ events per year across 35+ countries on AI search, GEO, and the future of digital visibility — without speaker fees, under a Semrush exclusivity arrangement.

See Signature Talks · Connect on LinkedIn

Fernando Angulo

Senior Market Research Manager, Semrush

Fernando Angulo is Senior Market Research Manager at Semrush and a global keynote speaker on AI search, GEO, and digital market trends. He presents at 50+ conferences annually across 35+ countries in English, Spanish, and Russian.

Source: Semrush Research · Fernando Angulo analysis. Views are the author's own and do not represent Semrush.
