
ASIMOV Protocol on NEAR DevHub Live #39 (February 2025)

Join Arto Bendiken on this week's NEAR DevHub Live for a discussion about the ASIMOV Protocol, the newest AI project launching on NEAR. ASIMOV enriches user-owned AI with high-quality structured, linked, and verified knowledge so as to realize the promise of the neurosymbolic synthesis: the integration of large language models with knowledge graphs to create AI systems that are practical, trustworthy, explainable, and interoperable.

ASIMOV Protocol

February 20, 2025

Transcript

  1. A Brief History of Two Tribes of AI: (1) The Connectionists (Machine Learners) and (2) The Symbolists (Knowledge Engineers)
  2. “The [Cyc] project [was] the most notorious failure in the history of AI.” – Pedro Domingos, The Master Algorithm (2015)
  3. Electronic brains (1940s) → Artificial neural networks or ANNs (1960s/1980s) → Connectionism (1940s/1980s) → Neural computation (1980s) → Deep learning (2006/2012+) → Transformers (2017+) → Large language models or LLMs (2018/2020+) → “AI” (2022+)
  4. Mechanical reasoning (1940s) → Symbolic processing/AI (1956/1960s) → Knowledge representation or KR (1970s) → Knowledge-based systems (1980s) → Semantic networks (1990s) → Semantic Web/tech (2001/2000s) → Linked data (2006/2010s) → Knowledge graphs or KGs (2012+)
  5. Why LLMs Need KGs
     1. As has happened so many times over the past 80 years of AI, the initial fervor around current “AI” (transformers/LLMs) is driven by compelling demos, but its constraints and limitations typically do not become apparent until production deployment is attempted
     2. Standalone LLM frustrations include the limited context window (hence RAG) and its poor scaling characteristics (O(n²)), the lack of introspectability and justifiability, the inherent propensity towards hallucinations, and the implicit, static nature of the knowledge encoded during pre-training
     3. Many of these problems can be mitigated or solved by a hybrid approach in which the LLM accesses and manipulates a symbolic knowledge base where facts are captured and represented in explicit form, as the sketch below illustrates
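     (A minimal sketch of the hybrid pattern from point 3, assuming Python with the rdflib package; the file name facts.ttl, the catch-all SPARQL query, and the ask_llm() stub are illustrative placeholders, not ASIMOV Protocol APIs.)

        # Retrieve explicit facts from an RDF knowledge graph and use them to
        # ground an LLM prompt, instead of relying on whatever the model
        # implicitly memorized during pre-training.
        from rdflib import Graph

        kg = Graph()
        kg.parse("facts.ttl", format="turtle")  # hypothetical local knowledge base

        # Pull a handful of triples with SPARQL; a real system would issue a
        # query shaped by the user's question.
        rows = kg.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 20")
        facts = "\n".join(f"{s} {p} {o}" for s, p, o in rows)

        # Compose a grounded prompt: the model is asked to answer only from the
        # retrieved facts, keeping the answer introspectable and justifiable.
        prompt = (
            "Answer using only the facts below and cite the fact you used.\n\n"
            f"Facts:\n{facts}\n\n"
            "Question: ...\n"
        )

        # ask_llm() stands in for whichever model API is actually used.
        # answer = ask_llm(prompt)
        print(prompt)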
  6. Why KGs Need LLMs
     1. LLMs internally encode much of the commonsense tacit knowledge that in previous decades proved the most formidable challenge to symbolic approaches to knowledge representation and AI
     2. Until recently, any toddler ultimately knew more about the nature and causal structure of our physical and social worlds than any computer
     3. The ambition of the four-decade Cyc project, which sought to capture and encode all commonsense knowledge, was amply matched by the interminability of the endeavor
  7. Why KGs Need LLMs
     4. LLMs’ “knowledge soup”, a large reservoir of loosely organized encyclopedic knowledge [Sowa, 1990], is quantitatively derived from statistical distributions in the pre-training dataset and, after training, is implicitly encoded in the billions of weights that constitute the model
     5. This “soup” is neither particularly introspectable nor even consistent, but it is nonetheless practically useful
     6. It largely sidesteps the need to explicitly represent and encode truly basic tenets of the world (cf. the Cyc project), such as that the noon sky is blue, that the arrow of gravity points downwards, and that humans tend to live in families and build housing to seek shelter from the elements
  8. The Synthesis
     Neural networks in the form of LLMs provide the missing ingredient for a hybrid that realizes long-standing visions for symbolic AI beyond narrow expert-system use cases. Conversely, the explicit knowledge representation in KGs complements LLMs and enables them to transcend their limitations and ultimately be safely deployed in real-world use cases beyond technology demos. While not yet widely known in industry, the neuro-symbolic approach has already established itself in academia as the most promising state-of-the-art pathway towards building practical, trustworthy, explainable, and interoperable AI systems.