Slide 43
Why LLMs Need KGs
1 As has happened many times over the past 80 years of AI, the initial
fervor around the current wave of “AI” (transformers/LLMs) is driven by
compelling demos, while constraints and limitations typically do not become
apparent until attempts at production deployment
2 Frustrations with standalone LLMs include the limited context window (hence
RAG) and the poor O(n²) scaling of attention with sequence length, the lack of
introspectability and justifiability, the inherent propensity toward
hallucination, and the implicit, static nature of the knowledge encoded during
pre-training
3 Many of these problems can be mitigated or solved by a hybrid approach in
which the LLM accesses and manipulates a symbolic knowledge base where facts
are captured and represented in explicit form (a minimal sketch follows below)
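A minimal sketch of the hybrid pattern in point 3, under stated assumptions: rdflib is used here purely as an illustrative triple store, the Munich facts are made-up sample data, and call_llm() is a hypothetical stand-in for whatever chat-completion client is available. The idea is only to show explicit facts being retrieved from a symbolic KB and handed to the LLM for a grounded answer; it is not the author's implementation.

```python
# Hybrid LLM + KG sketch: facts live in an explicit symbolic store (rdflib),
# and the LLM is asked to answer using only the retrieved triples.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")

# Tiny knowledge base with facts in explicit, inspectable form (sample data).
kg = Graph()
kg.add((EX.Munich, RDF.type, EX.City))
kg.add((EX.Munich, EX.population, Literal(1510378)))
kg.add((EX.Munich, EX.locatedIn, EX.Germany))


def retrieve_facts(subject_name: str) -> list[str]:
    """Pull the explicit triples about a subject so the LLM can ground its answer."""
    query = """
        SELECT ?p ?o WHERE {
            ?s ?p ?o .
            FILTER(STRENDS(STR(?s), "%s"))
        }
    """ % subject_name
    return [f"{p} {o}" for p, o in kg.query(query)]


def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; replace with your provider's chat API."""
    raise NotImplementedError


def answer(question: str, subject_name: str) -> str:
    """Compose a prompt that restricts the LLM to the retrieved KB facts."""
    facts = "\n".join(retrieve_facts(subject_name))
    prompt = (
        "Answer using ONLY the facts below, and cite the fact you used.\n"
        f"Facts:\n{facts}\n\nQuestion: {question}"
    )
    return call_llm(prompt)


# Example use (once call_llm is wired to a real model):
# answer("How many people live in Munich?", "Munich")
```

Because the facts are held outside the model, they can be updated without retraining, and the answer can be traced back to the specific triples that justified it, addressing the staleness and justifiability frustrations listed in point 2.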