Slide 1

#BuildOnTruth

Slide 2

The Neurosymbolic Synthesis

Slide 3

A Brief History of Two Tribes of AI
1. The Connectionists (Machine Learners)
2. The Symbolists (Knowledge Engineers)

Slide 6

A Brief History of Two Tribes of AI

Slide 8

Doug Lenat (1950-2023)

Slide 15

“The [Cyc] project [was] the most notorious failure in the history of AI.”
– Pedro Domingos, The Master Algorithm (2015)

Slide 19

Yann LeCun (1960-)

Slide 37

Lessons in Marketing

Slide 38

Electronic brains (1940s)
Artificial neural networks or ANNs (1960s/1980s)
Connectionism (1940s/1980s)
Neural computation (1980s)
Deep learning (2006/2012+)
Transformers (2017+)
Large language models or LLMs (2018/2020+)
“AI” (2022+)

Slide 39

Mechanical reasoning (1940s)
Symbolic processing/AI (1956/1960s)
Knowledge representation or KR (1970s)
Knowledge-based systems (1980s)
Semantic networks (1990s)
Semantic Web/tech (2001/2000s)
Linked data (2006/2010s)
Knowledge graphs or KGs (2012+)

Slide 40

LLMs × KGs =

Slide 42

The Neurosymbolic Synthesis

Slide 43

Why LLMs Need KGs
1. As has happened so many times over the past 80 years of AI, the initial fervor around current “AI” (transformers/LLMs) is driven by compelling demos, while constraints and limitations typically do not become apparent until attempts at production deployment.
2. Standalone LLM frustrations include the limited context window (hence RAG) and its poor scaling characteristics (O(n²) in sequence length), the lack of introspectability and justifiability, the inherent propensity towards hallucinations, and the implicit, static nature of the knowledge encoded during pre-training.
3. Many of these problems can be mitigated or solved by a hybrid approach in which the LLM accesses and manipulates a symbolic knowledge base where facts are captured and represented in explicit form, as sketched below.
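A minimal sketch of the hybrid pattern in point 3, assuming the rdflib Python library for the knowledge base; llm_to_sparql() and the http://example.org/ vocabulary are hypothetical stand-ins for a real LLM call and a real ontology:

# Hybrid pattern from point 3: the LLM's job is reduced to translating a
# natural-language question into a symbolic query; the answer itself comes
# from explicit, verifiable facts in the KG (rdflib assumed).

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")  # illustrative vocabulary

# Explicit, inspectable facts: the symbolic knowledge base.
kg = Graph()
kg.add((EX.Cyc, EX.startedIn, Literal(1984)))
kg.add((EX.Cyc, EX.foundedBy, EX.DougLenat))

def llm_to_sparql(question: str) -> str:
    """Hypothetical LLM step: translate a question into SPARQL.
    A real implementation would prompt an LLM with the KG's schema;
    a canned query is returned here to keep the sketch runnable."""
    return """
        SELECT ?founder WHERE {
            <http://example.org/Cyc> <http://example.org/foundedBy> ?founder .
        }
    """

for row in kg.query(llm_to_sparql("Who founded the Cyc project?")):
    # Every answer is grounded in an explicit triple, so it can be
    # justified and audited, unlike a free-form LLM completion.
    print(row.founder)

The design point is the division of labor: the LLM handles fuzzy natural language, while the KG remains the single source of truth that can be inspected and corrected.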

Slide 44

Why KGs Need LLMs
1. LLMs internally encode much of the commonsense, tacit knowledge that in previous decades proved the most formidable challenge to symbolic approaches to knowledge representation and AI.
2. Until recently, any toddler ultimately knew more about the nature and causal structure of our physical and social worlds than any computer.
3. The ambition of the four-decade Cyc project, which sought to capture and encode all commonsense knowledge, was amply matched by the interminability of the endeavor.

Slide 45

Why KGs Need LLMs
4. LLMs’ “knowledge soup” (a large reservoir of loosely organized encyclopedic knowledge [Sowa, 1990]) is derived quantitatively from statistical distributions in the pre-training dataset and, after training, is implicitly encoded in the billions of weights that constitute the model.
5. This “soup” is neither particularly introspectable nor consistent, but it is nonetheless practically useful.
6. It largely sidesteps the need to explicitly represent and encode truly basic tenets of the world (cf. the Cyc project), such as that the noon sky is blue, that the arrow of gravity points downwards, and that humans tend to live in families and build housing to seek shelter from the elements; a sketch of what such explicit encoding looks like follows below.
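To make point 6 concrete, here is a sketch of what explicitly encoding even “obvious” tenets looks like as KG triples (rdflib assumed, as above); the vocabulary (EX.SkyAtNoon, EX.hasColor, etc.) is invented for illustration, and Cyc's actual CycL axioms are far richer than this:

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")  # illustrative vocabulary

commonsense = Graph()
commonsense.add((EX.SkyAtNoon, EX.hasColor, EX.Blue))
commonsense.add((EX.Gravity, EX.pointsTowards, EX.Ground))
commonsense.add((EX.Human, EX.tendsToLiveIn, EX.Family))
commonsense.add((EX.Human, EX.buildsForShelter, EX.Housing))

# Each fact must be hand-modeled like this, multiplied by millions:
# exactly the interminable effort the Cyc project ran into, and which
# LLMs' implicit "knowledge soup" sidesteps.
print(len(commonsense))  # 4 explicit triples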

Slide 46

The Synthesis
Neural networks in the form of LLMs provide the missing ingredient for a hybrid that realizes long-standing visions for symbolic AI beyond narrow expert-system use cases. Conversely, the explicit knowledge representation in KGs complements LLMs and enables them to transcend their limitations and ultimately be deployed safely in real-world use cases beyond technology demos. While not yet widely known in industry, the neurosymbolic approach has already established itself in academia as the most promising state-of-the-art pathway towards building practical, trustworthy, explainable, and interoperable AI systems.

Slide 47

Why Isn’t Everyone Doing This?

Slide 51

#BuildOnTruth
The Protocol for the Connectivity of Verified and Structured Knowledge in a World of Misinformation