prompts, hallucination rates reach 38.3%.
Zero-shot: What is the capital of Switzerland?
Few-shot: Is it Berlin, Bern, or Rome?
Chain of Thought (CoT): Think step-by-step.
Instruction: Respond with a concise and factually correct answer.
Vague: I heard it's Geneva?
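To make the comparison concrete, here is a minimal sketch in Python that collects the five prompt styles from the slide. The `PROMPT_STYLES` dictionary and the `query_llm` placeholder are illustrative assumptions, not part of the cited study; measuring hallucination rates would additionally require a labeled benchmark and many runs per style.

```python
# The five prompt styles compared above, using the slide's own examples.
PROMPT_STYLES = {
    "zero_shot": "What is the capital of Switzerland?",
    "few_shot": "Is it Berlin, Bern, or Rome?",
    "chain_of_thought": "Think step-by-step.",
    "instruction": "Respond with a concise and factually correct answer.",
    "vague": "I heard it's Geneva?",
}

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call.
    Wire up your provider's client here."""
    raise NotImplementedError

if __name__ == "__main__":
    # Print each style; in a real experiment you would call query_llm()
    # repeatedly per style and score the answers for factual errors.
    for style, prompt in PROMPT_STYLES.items():
        print(f"--- {style} ---\n{prompt}\n")
```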
cited study by Hookline& claims that 82.1% of Americans can spot AI. But the survey asked people to rate their own skills without an actual test. Peer-reviewed studies have shown that humans cannot reliably tell AI text from human text, and that text quality is independent of origin (see CISPA 2024, Fiedler/Döpke 2025). Exception: people who frequently work with AI.
according to this study: this is just one screenshot of the very lengthy instructions on how to recognize "slop", with concrete examples. Human annotators received the same instructions.
and then cite the sources that are the closest match to the response.
Source: Lazarina Stoy, "How AI Search Platforms Leverage Entity Recognition and Why It Matters"
[Diagram: multiple queries fan out to many retrieved sources; the generated response carries citations pointing back to the closest-matching sources.]
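The "closest match" step in that diagram can be sketched as a similarity ranking over embeddings. The function names and the hashed bag-of-words `embed` below are toy assumptions to keep the example self-contained; real platforms use learned embedding models whose details are not public.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words.
    Only here so the sketch runs end to end."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def cite_closest_sources(response: str, sources: list[str], k: int = 3) -> list[str]:
    """Rank candidate sources by similarity to the generated response
    and attach the top-k as citations, mirroring the diagram above."""
    resp_vec = embed(response)
    ranked = sorted(sources, key=lambda s: cosine(resp_vec, embed(s)), reverse=True)
    return ranked[:k]

# Example: the response is cited against the closest-matching sources.
sources = [
    "Bern is the federal city of Switzerland.",
    "Geneva hosts many international organizations.",
    "Rome is the capital of Italy.",
]
print(cite_closest_sources("The capital of Switzerland is Bern.", sources, k=2))
```

Note that ranking sources against the finished response (rather than the query) is exactly why citations can look plausible while the underlying claim was never verified against them.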
citations in AI Overviews.
Methodology: AI wrote drafts based on extensive content briefings, edited by a human, published on an established blog.
Source: https://seranking.com/blog/ai-content-experiment/
Test: AI writes blog post based on human briefing
gained initial visibility but then suddenly dropped out of search and never recovered.
Methodology: Fresh websites with content fully written by AI, no guidance and no edits.
Source: https://seranking.com/blog/ai-content-experiment/
Test: Content fully written by AI
for 68% more keywords than AI content.
Methodology: A human writer and AI were each tasked with writing a blog post optimized for the same keyword, with no other guidance.
Source: https://kaizen.co.uk/free-resources/can-you-scale-llm-traffic-through-ai-ed-coles
Test: Human writer and AI write for the same keyword