Slide 1

Slide 1 text

How the Future of Search Works | Michael King, iPullRank | Speakerdeck.com/ipullrank | @iPullRank

Slide 2

Slide 2 text

Download this deck: https://speakerdeck.com/ipullrank

Slide 3

Slide 3 text

Salutations! I’m Mike King (@iPullRank)

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

Our Understanding of How Google Works is Out of Date

Slide 9

Slide 9 text

In SEO we still think Google is Here

Slide 10

Slide 10 text

Google Shifted from Lexical to Semantic a Decade Ago

Slide 11

Slide 11 text

Google Has Been More Like This Since Hummingbird

Slide 12

Slide 12 text

Semantic Search is Fueled by High-Density Embeddings …just like large language models.

Slide 13

Slide 13 text

Since the Introduction of BERT, Google Has Looked More Like This

Slide 14

Slide 14 text

Under the SGE Model, Google is Structured Like This

Slide 15

Slide 15 text

This is a huge problem because SEO software still operates on the lexical model.

Slide 16

Slide 16 text

The threat of Google’s Search Generative Experience (SGE)

Slide 17

Slide 17 text

At I/O Google Announced a Dramatic Change to Search
The experimental “Search Generative Experience” brings generative AI to the SERPs and significantly changes Google’s UX.

Slide 18

Slide 18 text

Queries are Longer and the Featured Snippet is Bigger
1. The query is more natural language and no longer Orwellian Newspeak. It can be much longer than the 32 words it has historically been limited to.
2. The Featured Snippet has become the “AI snapshot,” which takes 3 results and builds a summary.
3. Users can also ask follow-up questions in conversational mode.

Slide 19

Slide 19 text

Sundar is All In.
In his recent press run, Sundar keeps saying that Google will be doubling down on SGE. So it’s going to be a thing moving forward.

Slide 20

Slide 20 text

The Search Demand Curve Will Shift
With the change in the level of natural language query that Google can support, we’re going to see a lot fewer head terms and a lot more long-tail terms.

Slide 21

Slide 21 text

The CTR Model Will Change
With the search results being pushed down by the AI snapshot experience, what is considered #1 will change. We should also expect that any organic result will be clicked less and that CTR for standard organic positions will drop dramatically. However, this will likely yield query displacement.

Slide 22

Slide 22 text

Rank Tracking Will Be More Complex
As an industry, we’ll need to decide what is considered the #1 result. Based on this screenshot, positions 1–3 are now the citations for the AI snapshot and #4 is below it. However, the AI snapshot loads on the client side, so rank tracking tools will need to change their approach.
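Because the snapshot renders client-side, a plain HTTP fetch never sees it. Here is a minimal sketch of the headless-browser approach a tracker might take instead; the CSS selector is a hypothetical placeholder, since Google's real snapshot markup is undocumented and changes often:

from urllib.parse import quote_plus
from playwright.sync_api import sync_playwright

def fetch_serp_with_snapshot(query: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(f"https://www.google.com/search?q={quote_plus(query)}")
        # Placeholder selector -- discover the real one by inspection.
        page.wait_for_selector("div[data-ai-snapshot]", timeout=30_000)
        html = page.content()
        browser.close()
        return html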

Slide 23

Slide 23 text

Context Windows Will Yield More Personalized Results
SGE maintains the context window of the previous search in the journey as the user goes through predefined follow-up questions. This will need to drive the composition of pages to ensure they remain in the consideration set for subsequent results.

Slide 24

Slide 24 text

The State of Google’s Search Generative Experience (SGE)

Slide 25

Slide 25 text

Ask to Trigger.

Slide 26

Slide 26 text

Auto-Trigger.

Slide 27

Slide 27 text

HELLO EMPTY REAL ESTATE!

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

Now we’re seeing auto-triggered results with a “Show more” button

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

No content

Slide 32

Slide 32 text

I scraped 91k keywords to find out what’s going on…

Slide 33

Slide 33 text

It’s an “experiment” so we don’t know much, but here’s what we can infer.

Slide 34

Slide 34 text

No content

Slide 35

Slide 35 text

Autoloading AI Snapshots initially took 11–30 seconds to load.

Slide 36

Slide 36 text

Average AI Snapshot Load Time: 6.08 seconds

Slide 37

Slide 37 text

We Know that Featured Snippets Take 35.1% of Clicks

Slide 38

Slide 38 text

Snapshot Type Distribution

Slide 39

Slide 39 text

39.66% of Results Had AI Snapshots

Slide 40

Slide 40 text

AI Snapshot Speeds Vary by Category

Slide 41

Slide 41 text

Positions 1, 2 & 9 are Most Often Present in the AI Snapshot

Slide 42

Slide 42 text

The AI Snapshots Most Often Use Six of the Top Ten Results

Slide 43

Slide 43 text

Using All that Information We’re Modeling the Threat of SGE

Slide 44

Slide 44 text

Get your threat report: https://ipullrank.com/sge-report

Slide 45

Slide 45 text

What is Retrieval Augmented Generation (RAG)?

Slide 46

Slide 46 text

This is Called “Retrieval Augmented Generation”
Neeva (RIP), Bing, and now Google’s Search Generative Experience all pull documents based on search queries and feed them to a language model to generate a response.
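As an illustration of that loop, here is a minimal sketch, assuming the 2023-era openai SDK and a hypothetical search_top_docs() helper standing in for whatever SERP API and scraper you use:

import openai

def search_top_docs(query):
    # Hypothetical stand-in: fetch the text of the top-ranked documents
    # for the query from a SERP API plus a scraper.
    return ["Document one text...", "Document two text..."]

def rag_answer(query):
    docs = search_top_docs(query)
    context = "\n\n".join(docs)
    # Ground the generation in the retrieved documents.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided documents."},
            {"role": "user", "content": f"Documents:\n{context}\n\nQuery: {query}"},
        ],
    )
    return response.choices[0].message.content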

Slide 47

Slide 47 text

Google’s Initial Version of This is Called Retrieval-Augmented Language Model Pre-Training (REALM), from 2020
REALM identifies full documents, finds the most relevant passages in each, and returns the single most relevant one for information extraction.

Slide 48

Slide 48 text

DeepMind Followed Up with the Retrieval-Enhanced Transformer (RETRO)
DeepMind’s RETRO is a language model that combines a large text database with a transformer architecture to improve performance and reduce the number of parameters required. RETRO achieves performance comparable to state-of-the-art language models such as GPT-3 and Jurassic-1 while using 25x fewer parameters.

Slide 49

Slide 49 text

Google’s Later Innovation: Retrofit Attribution using Research and Revision (RARR)
RARR does not generate text from scratch. Instead, it retrieves a set of candidate passages from a corpus and then reranks them to select the best passage for the given task.

Slide 50

Slide 50 text

SGE is Built from REALM/RETRO/RARR + PaLM 2 and MUM
MUM is the Multitask Unified Model that Google announced in 2021 as a way to do retrieval-augmented generation. PaLM 2 is their latest (released) state-of-the-art large language model. The functionality from REALM, RETRO, and RARR is also rolled into this.

Slide 51

Slide 51 text

If You Want More Technical Detail, Check Out the REALM Paper: https://arxiv.org/pdf/2002.08909.pdf

Slide 52

Slide 52 text

Retrieval-Enhanced Transformer (RETRO) Paper: https://arxiv.org/pdf/2112.04426.pdf

Slide 53

Slide 53 text

RARR Paper: https://arxiv.org/pdf/2210.08726.pdf

Slide 54

Slide 54 text

Sounds cool, but how does it work?

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

It’s so easy that I built a proof of concept

Slide 57

Slide 57 text

No content

Slide 58

Slide 58 text

No content

Slide 59

Slide 59 text

No content

Slide 60

Slide 60 text

No content

Slide 61

Slide 61 text

AvesAPI (rankings data) + Llama Index (vector index & operations) + ChatGPT (clearly you know what this does) = Raggle

Slide 62

Slide 62 text

It’s pretty simple

# Import paths vary by llama_index version.
from llama_index import VectorStoreIndex
from llama_index.query_engine import CitationQueryEngine

# Make an index from your documents
index = VectorStoreIndex.from_documents(documents)

# Set up your index for citations
query_engine = CitationQueryEngine.from_args(
    index,
    # indicate how many document chunks it should return
    similarity_top_k=5,
    # here we can control how granular citation sources are; the default is 512
    citation_chunk_size=155,
)

response = query_engine.query("Answer the following query in 150 words: " + query)
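Where do the documents come from? In the Raggle setup, one plausible loader (a sketch, not necessarily what the live site does) is LlamaIndex’s SimpleWebPageReader pointed at the URLs returned by the rankings API:

# Import location varies across llama_index versions.
from llama_index import SimpleWebPageReader

# Hypothetical URLs; in Raggle these would be the top results from AvesAPI.
urls = ["https://example.com/result-1", "https://example.com/result-2"]
documents = SimpleWebPageReader(html_to_text=True).load_data(urls)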

Slide 63

Slide 63 text

Limitations of my POC
● It doesn’t do follow-up questions
● It’s not responsive
● It only does the informational snippet

Slide 64

Slide 64 text

You can play with it at raggle.net

Slide 65

Slide 65 text

Optimizing for SGE?

Slide 66

Slide 66 text

Search Works Based on the Vector Space Model
Let’s go back to the vector space model again. This model is a lot stronger in the neural network environment because Google can capture more meaning in the vector representations.

Slide 67

Slide 67 text

No content

Slide 68

Slide 68 text

Dense Retrieval
Remember “passage ranking”? It’s built on the concept of dense retrieval, wherein more embeddings represent more of the query and the document to uncover deeper meaning.
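To make dense retrieval concrete, here is a small illustrative sketch (not Google’s actual system) that embeds a query and candidate passages with the sentence-transformers library and ranks the passages by cosine similarity; the model name and passages are just examples:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "how does google pick citations for the ai snapshot"
passages = [
    "The AI snapshot cites a handful of pages that support its summary.",
    "Hummingbird moved Google from lexical matching toward semantic search.",
    "Dense retrieval embeds queries and passages in the same vector space.",
]

# Embed the query and each passage into the same vector space,
# then rank passages by cosine similarity to the query.
query_vec = model.encode(query, convert_to_tensor=True)
passage_vecs = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(query_vec, passage_vecs)[0].tolist()
for passage, score in sorted(zip(passages, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {passage}")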

Slide 69

Slide 69 text

Dense Retrieval is Scoring down to the Sentence Level

Slide 70

Slide 70 text

It’s all about the chunks. So use Llama Index to determine your chunks and improve their similarity to the query.
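As a hedged sketch of that workflow (class names and import paths differ across llama_index versions, and page_text is a placeholder for your own copy), you can chunk a page the same way the POC chunks citations, then score each chunk against the target query, e.g. with the cosine-similarity snippet above:

# Newer releases: from llama_index.core.node_parser import SentenceSplitter
from llama_index.node_parser import SentenceSplitter
from llama_index import Document

page_text = "..."  # the page copy you want to audit

# Mirror the citation_chunk_size used in the POC above.
splitter = SentenceSplitter(chunk_size=155)
nodes = splitter.get_nodes_from_documents([Document(text=page_text)])
chunks = [node.get_content() for node in nodes]
# Score each chunk against your target query and rewrite the weakest passages.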

Slide 71

Slide 71 text

No content

Slide 72

Slide 72 text

The Fraggles Show What SGE Used for the AI Snapshot

Slide 73

Slide 73 text

Fraggles Relevance: relevance of the chunks against the keyword, and relevance against the AI Snapshot (charts).

Slide 74

Slide 74 text

What is Mitigation for SGE?
1. Manage expectations on the impact
2. Understand the keywords under threat
3. Re-prioritize your focus to keywords that are not under threat
4. Optimize the passages for the keywords you want to save

Slide 75

Slide 75 text

The Content Opportunity of RAG

Slide 76

Slide 76 text

There’s a Lot of Synergy Between KGs and LLMs
There are three models gaining popularity:
1. KG-enhanced LLMs: the language model uses a KG during pre-training and inference
2. LLM-augmented KGs: LLMs do reasoning and completion on KG data
3. Synergized LLMs + KGs: a multilayer system using both at the same time
Source: Unifying Large Language Models and Knowledge Graphs: A Roadmap (https://arxiv.org/pdf/2306.08302.pdf)

Slide 77

Slide 77 text

Organizations are doing RAG with Knowledge Graphs
● Anyone can feed their data into an LLM as a fine-tuning measure to improve the output.
● People are currently using their knowledge graphs to support this.
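As a minimal sketch of the knowledge-graph flavor of this (the triples and their serialization here are purely illustrative), the graph’s facts get serialized into the context the model is asked to write from:

# Illustrative only: ground generation in your own knowledge-graph facts.
triples = [
    ("iPullRank", "founded_by", "Mike King"),
    ("SGE", "announced_at", "Google I/O 2023"),
]
facts = "\n".join(f"{s} {p} {o}" for s, p, o in triples)
prompt = f"Using only these facts:\n{facts}\n\nWrite a short, accurate summary."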

Slide 78

Slide 78 text

The code is not much different

import advertools as adv  # advertools parses XML sitemaps into DataFrames

sitemap_url = "[SITEMAP URL]"
sitemap = adv.sitemap_to_df(sitemap_url)
urls_to_crawl = sitemap['loc'].tolist()
...
# Make an index from your documents
index = VectorStoreIndex.from_documents(documents)

# Set up your index for citations
query_engine = CitationQueryEngine.from_args(
    index,
    # indicate how many document chunks it should return
    similarity_top_k=5,
    # here we can control how granular citation sources are; the default is 512
    citation_chunk_size=155,
)

response = query_engine.query("YOUR PROMPT HERE")

Slide 79

Slide 79 text

No content

Slide 80

Slide 80 text

Custom Indexes for RAG Are Coming Soon to AIPRM

Slide 81

Slide 81 text

ChatGPT Responses without RAG vs. with RAG
RAG yields content that is more likely to be factually correct. Combined with AIPRM’s prompts, you’re able to better counteract the bland content that is flooding the web.

Slide 82

Slide 82 text

Fact Verification
● Google has historically said they do not do verification of facts.
● However, LLM + KG integrations make this a possibility, and Google needs to combat the wealth of content being produced with LLMs. So it’s likely they will use this functionality.
Source: Fact Checking in Knowledge Graphs by Logical Consistency
Source: FactKG: Fact Verification via Reasoning on Knowledge Graphs

Slide 83

Slide 83 text

Roll the Credits

Slide 84

Slide 84 text

Get your threat report: https://ipullrank.com/sge-report

Slide 85

Slide 85 text

Thank You | Q&A
Mike King, Chief Content Goblin @iPullRank | Award Winning, #GirlDad
[email protected]
Get Your SGE Threat Report: https://ipullrank.com/sge-report
Play with Raggle: https://www.raggle.net
Download the Slides: https://speakerdeck.com/ipullrank