
Mastering SEO in a Generative AI World: Turning Google's Secrets into ROI

Michael King
November 14, 2024

Transcript

  1. 1

  2. 4

  3. 7

  4. 8

  5. 10 40% of People Leaving ChatGPT Go to Google

    My assumption is that many of these people are fact-checking. That is a bad behavior to establish for nearly half of your users.
  6. 11 Yes, TikTok is a (Nascent) Search Engine 41%

    of TikTok users perform searches, but the search volume around a series of broad and meaningful queries is not there to make it more than a small supplement to Google Search.
  7. 12 20% of People Going to TikTok Come from

    Google 22.5% of People Leaving TikTok Go to Google
  8. 18 Google is still the main event, but we

    are going back into a world where we need to optimize for multiple search engines across a series of channels.
  9. 21 Search Engines Work based on the Vector Space

    Model Documents and queries are plotted in multidimensional vector space. The closer a document vector is to a query vector, the more relevant it is.
  10. 22 TF-IDF Vectors The vectors in the vector space

    model were built from TF-IDF. These were simplistic, based on the Bag-of-Words model, and did not do much to encapsulate meaning.
  11. 23 Relevance is a Function of Cosine Similarity When

    we talk about relevance, it is determined by how similar the document and query vectors are. This is a quantitative measure, not the qualitative idea of relevance we typically have in mind.
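The two slides above can be sketched in a few lines of code. This is a minimal pure-Python illustration of TF-IDF vectors and cosine similarity, not Google's implementation: the toy documents, the whitespace tokenization, and the smoothed IDF formula are all assumptions made for the example.

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    # Naive whitespace tokenization; real pipelines normalize and stem.
    tokenized = [t.lower().split() for t in texts]
    vocab = sorted({w for toks in tokenized for w in toks})
    n = len(tokenized)
    # Document frequency per term.
    df = {w: sum(1 for toks in tokenized if w in toks) for w in vocab}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        # tf * idf; the +1 keeps weights positive for terms in every text.
        vectors.append([tf[w] / len(toks) * math.log(n / df[w] + 1) for w in vocab])
    return vectors

def cosine(a, b):
    # Relevance as the angle between two vectors in the shared space.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Plot the query and two documents in the same vector space.
query, doc_a, doc_b = tfidf_vectors([
    "google seo",
    "seo content strategy for google search",
    "chocolate chip cookie recipe",
])
```

The document sharing terms with the query scores higher, which is the whole Bag-of-Words notion of relevance the deck is describing.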
  12. 26 Google Shifted from Lexical to Semantic a Decade Ago The lexical

    model counts the presence and distribution of words, whereas the semantic model captures meaning. This was the huge quantum leap behind Google’s Hummingbird update, and most SEO software has been behind for over a decade.
  13. 27 Word2Vec Gave Us Embeddings Word2Vec was an innovation led

    by Tomas Mikolov and Jeff Dean that yielded an improvement in natural language understanding by using neural networks to compute word vectors. These were better at capturing meaning. Many follow-on innovations like Sentence2Vec and Doc2Vec emerged.
  14. 29 This Allows for Mathematical Operations Comparisons of content

    and keywords become linear algebraic operations.
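As a sketch of what "linear algebraic operations" means here, the classic word-analogy demo can be reproduced with made-up vectors. The four 3-dimensional "embeddings" below are invented for illustration; real models use hundreds of learned dimensions.

```python
import math

# Toy 3-dimensional "embeddings" -- values are made up for illustration;
# real embedding models learn hundreds of dimensions from data.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# The classic analogy: king - man + woman lands nearest to queen.
target = add(sub(vectors["king"], vectors["man"]), vectors["woman"])
nearest = max((w for w in vectors if w != "king"),
              key=lambda w: cosine(target, vectors[w]))
```

Comparing a keyword to a landing page works the same way: embed both, then rank by cosine similarity.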
  15. 33 This is a huge problem because most SEO

    software still operates on the lexical model.
  16. 35 8 Google Employees Are Responsible for Generative AI

    https://www.wired.com/story/eight-google-employees-invented-modern-ai-transformers-paper/
  17. 36 The Transformer The transformer is a deep learning

    model used in natural language processing (NLP) that relies on self-attention mechanisms to process sequences of data simultaneously, improving efficiency and understanding in tasks like translation and text generation. Its architecture enables it to capture complex relationships within the text, making it a foundational model for many state-of-the-art NLP applications.
  18. 42 Embeddings are one of the most fascinating things

    we can leverage as SEOs to catch up to what Google is doing.
  19. 43 MixedBread’s Open Source Embeddings are Highly Performant Last week

    @dejanseo shared his research on how MixedBread’s embedding models perform better than anything else for his SEO use cases. He also talked about lowering the dimensionality and converting them to binary representations to save space.
  20. 44 Site Embeddings Are Used to Measure How On Topic

    a Page is Google is specifically vectorizing pages and sites and comparing the page embeddings to the site embeddings to see how off-topic the page is. Learn more about embeddings: https://ipullrank.com/content-relevance
  21. 45 You Can Compute This Too I’m using the

    embeddings with cosine similarity and clustering to examine, in two ways, how pages relate or don’t relate to the site average of the embeddings. Notice how my recent posts on AI-related topics for SEO all have high PageSiteSimilarity, whereas my post about MozCon from 2011 does not.
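The PageSiteSimilarity idea above can be sketched as: average the page embeddings into a site vector, then score each page against that average. The URLs and vectors below are invented; in practice the embeddings would come from a model such as the ones the deck mentions.

```python
import math

def centroid(vectors):
    # Site embedding as the mean of its page embeddings.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Made-up page embeddings for hypothetical URLs; real ones come from an
# embedding model run over the page content.
pages = {
    "/ai-seo-guide":       [0.9, 0.8, 0.1],
    "/embeddings-for-seo": [0.8, 0.9, 0.2],
    "/mozcon-2011-recap":  [0.1, 0.2, 0.9],
}

site = centroid(list(pages.values()))
# Low scores flag pages that are off-topic relative to the site.
page_site_similarity = {url: cosine(vec, site) for url, vec in pages.items()}
```

Pages with low similarity to the site centroid are the outlier candidates the deck suggests pruning to improve site focus.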
  22. 46 Check out the Colab This uses Requests, Trafilatura, Transformers,

    PyTorch, scikit-learn, and Sentence Transformers to compute SiteScore and a dataframe of cosine similarities and cluster-based scores for all URLs crawled. https://colab.research.google.com/drive/19PJiFmv8oyjhB-jwzEK9TPlbfK-xB573 You can remove the outliers to improve your site focus score. Add this to your content audits.
  23. 47 47 When I Run it on my whole Sitemap,

    My Site is Not Very Focused
  24. 48 Add this data to your content audits to

    make data-driven decisions about what to cut.
  25. 49 Content needs to be more focused We’ve learned definitively

    that Google uses vector embeddings to determine how far off a given page is from the rest of what you talk about. This indicates that it will be challenging to go far into upper-funnel content successfully without a structured expansion or without authors who have demonstrated expertise in that subject area. Encourage your authors to cultivate expertise in what they publish across the web, and treat their bylines like the gold standard that they are.
  26. 50 Build Topic Clusters Well-defined topic clusters can position

    your website and brand as an authority in your space and strengthen your entities in the eyes of Google. A site that focuses on a series of topics that are relevant to each other is going to benefit in rankings. Here are a few tools that can help you design and build your topic clusters systematically. Thruuu https://thruuu.com/keyword-clustering-tool Keyword Insights https://www.keywordinsights.ai/features/keyword-clustering/
  27. 51 Let Screaming Frog Do the Heavy Lifting Generate embeddings

    while you crawl using Screaming Frog SEO Spider. Take the file to Colab and do the following things: Keyword - Landing Page Relevance Scoring, Keyword Mapping, Link Building Target Identification, Redirect Mapping, Internal Link Mapping. https://ipullrank.com/vector-embeddings-is-all-you-need You can also work with your language model to combine crawl data with SERP data and do things like information gain calculations.
  28. 52 Leveraging generative AI is a combination of content

    strategy, your unique creative angles, and deep understanding of the technical nuances of a channel.
  29. 53 This is our opportunity to get up from

    the kids’ table.
  30. 55 Google’s Algorithms Inner Workings Have Been Put on

    Full Display Lately Through a combination of what’s come out of Google’s DOJ antitrust trial and the Google API documentation leak, we have a much clearer picture of how Google actually functions.
  31. 56 I was the First to Publish on the

    Google Leak, but it was a Team Effort
  32. 57 We Now Have a Much Stronger Understanding of

    the Architecture https://searchengineland.com/how-google-search-ranking-works-445141
  33. 59 The Primary Takeaway is the Value of User Behavior

    in Organic Search Google’s Navboost system keeps track of user click behavior and uses that to inform what should rank in various contexts.
  34. 65 I remain adamant that both Google and the

    SEO community owe @randfish an apology.
  35. 68 User Click Data is What Makes Google More

    Powerful Than Any Other Search Engine The court opinion in the DoJ Antitrust trial, Google’s leaked documents, and Google’s own internal documentation all support the fact that click behavior is what makes Google perform the way that it does.
  36. 75 Modern SEO Needs UX Baked-in Google has expectations of

    performance for every position in the SERP. The user behavior signals collected reinforce what should rank and demote what doesn’t perform just like a social media channel. The best way to scale this is by generating highly-relevant content with a strong user experience.
  37. 77 NavBoost Performance Starts at the SERP Itself Continually testing

    your metadata is a must. Check out the SearchPilot team’s case studies for ideas: https://www.searchpilot.com/resources/case-studies/tag/meta-data
  38. 78 Design Content for the Human Condition Design your

    content so it is easier to consume and it will yield better performance metrics. https://moz.com/blog/10-super-easy-seo-copywriting-tips-for-link-building
  39. 79 Google is Using Passage Indexing to Try to

    Drop the User Into the Right Spot
  40. 80 Use Logical Chunking To Get Users to the

    Information Faster https://www.nngroup.com/articles/in-page-links-content-navigation/
  41. 84 Build Pages that Are Easy to Parse Create

    semantically relevant content Build a table of contents Drop anchor links throughout the page to help Google understand where the user is meant to go.
  42. 85 Stop Leading With This I came for a recipe.

    Not your Grandma’s life story!
  43. 86 Less is More, More or Less It’s time to

    cut out the content madness
  44. 87 You Don’t Need Link Volume, You Need Link

    Quality Indexing Tier Impacts Link Value A metric called sourceType shows a loose relationship between where a page is indexed and how valuable it is. For quick background, Google’s index is stratified into tiers where the most important, regularly updated, and accessed content is stored in flash memory. Less important content is stored on solid state drives, and irregularly updated content is stored on standard hard drives. The higher the tier, the more valuable the link. Pages that are considered “fresh” are also considered high quality. Suffice it to say, you want your links to come from pages that are either fresh or otherwise featured in the top tier. Get links from pages that live in the higher tiers by modeling a composite score based on the data that is available.
  45. 89 Google Stores Your Content Like the Wayback Machine

    and Uses the Change History Google’s file system is capable of storing versions of pages over time, similar to the Wayback Machine. My understanding is that Google keeps what it has indexed forever. This is one of the reasons you can’t simply redirect a page to an irrelevant target and expect the link equity to flow. The docs reinforce this idea, implying that they keep all the changes they’ve ever seen for a page. You’re not going to get away with things by simply changing your pages once.
  46. 90 Indexing is Also Harder It’s not being talked about

    as much, but indexing has gotten a lot harder since the Helpful Content Update. You’ll see a lot more pages in the “Discovered - currently not indexed” and “Crawled - currently not indexed” statuses than you did previously, because the bar is higher for what Google deems worth capturing from the web.
  47. 91 Google Wants to Crawl Even Less Gary Illyes

    has indicated that he wants to have Google crawl less. Search quality certainly cannot suffer, so crawling has to get increasingly intelligent.
  48. 92 I Believe This is a Function of Information

    Gain Conceptually, as it relates to search engines, Information Gain is the measure of how much unique information a given document adds to the ranking set of documents. In other words, what are you talking about that your competitors are not?
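A crude way to operationalize the question above is to surface the terms your page covers that the ranking set does not. This sketch uses simple vocabulary difference over whitespace tokens, which is a deliberate simplification of the information gain concept; the sample texts are invented.

```python
from collections import Counter

def information_gain_terms(your_doc, competitor_docs, top_n=5):
    """Return the terms you cover that the ranking set of competitors
    does not, most frequent first (a rough proxy for information gain)."""
    competitor_vocab = {w for doc in competitor_docs for w in doc.lower().split()}
    novel = Counter(w for w in your_doc.lower().split()
                    if w not in competitor_vocab)
    return [w for w, _ in novel.most_common(top_n)]

# Invented example texts standing in for your page and the ranking set.
novel_terms = information_gain_terms(
    "seo embeddings cosine similarity benchmarks",
    ["seo basics and keywords", "the seo keywords guide"],
)
```

A real pipeline would compare SERP-scraped competitor content, use entities rather than raw tokens, and weight by topical relevance, but the question it answers is the same: what do you add to the ranking set?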
  49. 94 There are Gold Standard Documents There is no

    indication of what this means, but the description mentions “human-labeled documents” versus “automatically labeled annotations.” I wonder if this is a function of quality ratings, but Google says quality ratings don’t impact rankings. So, we may never know. 🤔
  50. 95 Measure Your Content Against the Quality Rater Guidelines Elias

    Dabbas created a Python script and tool that uses the Helpful Content recommendations to show a proof-of-concept way to analyze your articles. We’d use the Search Quality Rater Guidelines, which serve as the Golden Document standard. Code: https://blog.adver.tools/posts/llm-content-evaluation/ Tools: https://adver.tools/llm-content-evaluation/
  51. 96 In conclusion: “More content” is no longer inherently

    the most effective approach because there’s no guarantee of traffic from Google. Google’s sophistication won’t allow it.
  52. 98 I’m Leaving Y’all with Three Actions Today 1.

    How to Prune Your Content 2. How to Appear in LLMs 3. How to Use LLMs to Generate Valuable Content
  53. 101 Aleyda Has a Process Aleyda’s workflow is a

    great place to work through whether your content should be pruned or not. https://www.aleydasolis.com/en/crawling-mondays/how-to-prune-your-website-content-in-an-seo-process-crawlingmondays-16th-episode/
  54. 102 We like to automate getting to

    a Keep / Revise / Kill (Review) decision.
  55. 103 Content Decay The web is a

    rapidly changing organism. Google always wants the most relevant content, with the best user experience and the most authority. Unless you stay on top of these measures, you will see traffic fall off over time. Measuring this content decay is as simple as comparing page performance period over period in analytics or GSC. But just knowing content has decayed is not enough to be strategic.
  56. 104 It’s not enough to know that

    the page has lost traffic.
  57. 108 Interpreting the Content Potential Rating 80

    - 100: High Priority for Optimization 60 - 79: Moderate Priority for Optimization 40 - 59: Selective Optimization 20 - 39: Low Priority for Optimization 0 - 19: Minimal Benefit from Optimization If you want quick and dirty, you can prune everything below a 40 that is not driving significant traffic.
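The rating bands and the quick-and-dirty pruning rule above translate directly into code. The traffic threshold defining "significant traffic" is an assumption here; the deck does not specify one.

```python
def cpr_priority(score):
    """Map a Content Potential Rating (0-100) to the deck's priority bands."""
    if score >= 80:
        return "High Priority for Optimization"
    if score >= 60:
        return "Moderate Priority for Optimization"
    if score >= 40:
        return "Selective Optimization"
    if score >= 20:
        return "Low Priority for Optimization"
    return "Minimal Benefit from Optimization"

def should_prune(score, monthly_clicks, traffic_floor=100):
    # Quick-and-dirty rule from the slide: prune everything below 40
    # that is not driving significant traffic. The traffic_floor value
    # is an assumed threshold, not from the deck.
    return score < 40 and monthly_clicks < traffic_floor
```

Running every audited URL through these two functions gives you the prioritized cut list the deck describes.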
  58. 109 Combining CPR with pages that lost

    traffic helps you understand if it’s worth it to optimize.
  59. 110 Step 1: Pull the Rankings Data

    from Semrush Organic Research > Positions > Export
  60. 111 Step 2: Pull the Decaying Content

    from GSC Google Search Console is a great source to spot Content Decay by comparing the last three months year over year. Filter for those pages where the Click Difference is negative (smaller than 0) then export.
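The filter in Step 2 amounts to keeping rows with a negative click difference. A minimal sketch, assuming the export has been loaded into dictionaries; the column names here are illustrative, not GSC's exact export headers.

```python
# Rows mimicking a GSC pages export compared year over year.
# Field names are assumptions for illustration, not GSC's actual headers.
rows = [
    {"page": "/ai-seo-guide", "clicks_last_3mo": 900, "clicks_prior_year": 700},
    {"page": "/old-tactics",  "clicks_last_3mo": 120, "clicks_prior_year": 480},
]

def decaying_pages(rows):
    """Keep pages whose year-over-year click difference is negative."""
    return [r["page"] for r in rows
            if r["clicks_last_3mo"] - r["clicks_prior_year"] < 0]

decayed = decaying_pages(rows)
```

The resulting list of decaying URLs is what you drop into the pruning spreadsheet in Step 3.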
  61. 112 Step 3: Drop them in the

    Spreadsheet and Press the Magic Button
  62. 113 The Output is a List of URLs Prioritized by

    Action Each URL is marked as Keep, Revise, Kill, or Review based on the keyword opportunities available and the effort required to capitalize on them. Sorting the URLs marked as “Revise” by Aggregated SV and CPR will give you the best opportunities first.
  63. 114 Get your copy of the Content

    Pruning Workbook: https://ipullrank.com/cpr-sheet
  64. 115 How to Kill Content Content may be valuable

    for channels outside of Organic Search. So, killing it is about changing Google’s experience of your website to improve its relevance and reinforce its topical clusters. The best approach is to noindex the pages themselves, nofollow the links pointing to them, and submit an XML sitemap of all the pages that have changed. This will yield the quickest recrawling and reconsideration of the content.
  65. 116 How to Revise Content Review content

    across the topic cluster Use co-occurring keywords and entities in your content Add unique perspectives that can’t be found on other ranking pages Answer common questions Answer the People Also Ask Questions Restructure your content using headings relevant to the above Add relevant Structured markup Expand on previous explanations Add authorship Update the dates Make sure the needs of your audiences are accounted for Add to an XML sitemap of only updated pages
  66. 117 How to Review Content The sheet marks content

    that has a low content potential rating and a minimum of 500 in monthly search volume as “Review” because they may be long tail opportunities that are valuable to the business. You should take a look at the content you have for that landing page and determine if you think the effort is worthwhile.
  67. 119 Combining a Search Engine with a

    Language Model is called “Retrieval Augmented Generation” Neeva (RIP), Bing, and now Google’s Search Generative Experience all pull documents based on search queries and feed them to a language model to generate a response. This concept was developed by the Facebook AI Research (FAIR) team.
  68. 120 Google’s Initial Version of this is

    called Retrieval-Augmented Language Model Pre-Training (REALM) from 2021 REALM identifies full documents, finds the most relevant passages in each, and returns the single most relevant one for information extraction.
  69. 121 DeepMind followed up with Retrieval-Enhanced Transformer

    (RETRO) RETRO is a language model that combines a large text database with a transformer architecture to improve performance and reduce the number of parameters required. RETRO is able to achieve performance comparable to state-of-the-art language models such as GPT-3 and Jurassic-1 while using 25x fewer parameters.
  70. 122 Google’s Later Innovation Retrofit Attribution using Research and

    Revision (RARR) RARR does not generate text from scratch. Instead, it retrieves a set of candidate passages from a corpus and then reranks them to select the best passage for the given task.
  71. 123 AIO is built from REALM/RETRO/RARR +

    PaLM 2 and MUM MUM is the Multitask Unified Model that Google announced in 2021 as a way to do retrieval augmented generation. PaLM 2 is their latest (released) state-of-the-art large language model. The functionality from REALM, RETRO, and RARR is also rolled into this.
  72. 125 Documents are Broken into Chunks and

    the Most Relevant Chunks are Fed to the Language Model to Generate a Response
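The chunk-and-retrieve step above can be sketched simply: split a document into fixed-size word chunks, then rank the chunks against the query. This toy version scores by word overlap rather than embedding similarity, which is what production RAG systems actually use; the sample chunks are invented.

```python
def chunk(text, size=40):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def top_chunks(query, chunks, k=2):
    """Rank chunks by word overlap with the query (a stand-in for the
    embedding similarity a real RAG system would use) and return the
    top-k that would be fed to the language model."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

# Invented chunks standing in for retrieved passages.
chunks = [
    "google uses navboost click data",
    "recipe for chocolate cookies",
    "click data informs rankings",
]
selected = top_chunks("how does google use click data", chunks, k=2)
```

Only the selected chunks reach the model's context window, which is why the deck later stresses making your chunks relevant: irrelevant passages never get retrieved.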
  73. 127 The Three Laws of Generative AI

    content 1. Generative AI is not the end-all-be-all solution. It is not the replacement for a content strategy or your content team. 2. Generative AI for content creation should be a force multiplier to be utilized to improve workflow and augment strategy. 3. You should consider generative AI content for awareness efforts, but continue to leverage subject matter experts for lower funnel content.
  74. 130 Think back to 7 Minutes Ago

    - Retrieval Augmented Generation
  75. 131 It’s Not Difficult to Build with

    LlamaIndex sitemap_url = "[SITEMAP URL]" sitemap = adv.sitemap_to_df(sitemap_url) urls_to_crawl = sitemap['loc'].tolist() ... # Make an index from your documents index = VectorStoreIndex.from_documents(documents) # Set up your index for citations query_engine = CitationQueryEngine.from_args( index, # indicate how many document chunks it should return similarity_top_k=5, # control how granular citation sources are; the default is 512 citation_chunk_size=155, ) response = query_engine.query("YOUR PROMPT HERE")
  76. 133 Generative AI Productivity Use Cases RAG opens up

    a series of generative AI use cases that work well for your situation: Briefing & Business Cases, Content Analysis, First-pass Brand Review, First-pass Legal Review, Content First Draft, Keyword Insertion, Structured Data Generation, Link Identification & Insertion, Generating Voiceovers, Generating Images, Generating Videos, Writing Code
  77. 134 @BritneyMuller’s Guide to Using Colab Britney talked about

    how easy it is to use Colab with Python. Now it’s even easier using LLMs. https://github.com/BritneyMuller/colab-notebooks?tab=readme-ov-file
  78. 135 Just describe what you want You can tell

    your language model what you want the code to do and it will handle the rest. If it doesn’t work, just describe what went wrong or paste the error and it will fix it for you. In this example my prompt is: {write python code for colab that takes a csv file of keywords and using bertopic with the chatgpt to compute the natural language topics for each row.}
  79. 152 Blocking LLMs is a Mistake. Appearing

    in these places will be recognized as brand awareness opportunities very soon.
  80. Embrace Structured Data There are three models gaining popularity: 1.

    KG-enhanced LLMs - Language Model uses KG during pre-training and inference 2. LLM-augmented KGs - LLMs do reasoning and completion on KG data 3. Synergized LLMs + KGs - Multilayer system using both at the same time https://arxiv.org/pdf/2306.08302.pdf Source: Unifying Large Language Models and Knowledge Graphs: A Roadmap
  81. 154 What is Mitigation for AIO? 1.

    Manage expectations on the impact 2. Understand the keywords under threat 3. Re-prioritize your focus to keywords that are not under threat 4. Optimize the passages for the keywords you want to save
  82. 156 We Can Also Show You Per

    Keyword How You Show Up
  83. 161 Fraggles Relevance Relevance of the chunks

    against the keyword and against the AI Snapshot.
  84. 166 The GEO team shared their ChatGPT prompts The

    GEO team also shared the ChatGPT prompts that help them improve their visibility. You can augment them and put them to work right away. https://github.com/GEO-optim/GEO/blob/main/src/geo_functions.py
  85. Check out @GarrettSussman’s post on how to optimize for AI

    Overviews: https://ipullrank.com/optimize-content-for-sge
  86. 169 What you should know and do

    to win: Google is still the primary show in town. Relevance is a quantitative measure. GenAI works on the same math as search engines. Focus on making your chunks more relevant to rank in GenAI search. Improve UX to drive more long clicks. Focus on content your audience wants; prune what they don’t. Use RAG to generate content with AI.
  87. Contact me if you want to get better results from

    your SEO: [email protected] Thank You | Q&A Download the Slides: https://speakerdeck.com/ipullrank Mike King, Chief Executive Officer @iPullRank