Slide 1

An integrated model of concept learning and word-concept mapping
Molly Lewis and Michael C. Frank
Stanford University
The 35th Annual Cognitive Science Society Meeting, 1 August 2013

Slide 2

Two problems of word learning ("apple"):
– The Mapping Problem: which object does the word pick out?
– The Generalization Problem: which other objects does the word extend to?

Slide 3

Solving the word learning problems

The Mapping Problem
– Cross-situational statistics (Pinker, 1984; Smith & Yu, 2008; Yu & Smith, 2007)
– Disambiguation (Markman & Wachtel, 1988; Clark, 1987)
– Social cues (Baldwin, 1991; Baldwin, 1993)

The Generalization Problem
– Shape bias (Smith, Jones, Landau, Gershkoff-Stowe, & Samuelson, 2002)
– Taxonomic bias (Markman, 1990)
– Apart from word learning, well studied in adults (e.g., Laurence & Margolis, 1999; Rosch & Mervis, 1975; Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976; Medin & Ortony, 1989; see Sloutsky, 2001, for developmental work)

Slide 4

Two problems: theoretically distinct, but intimately related.
Goal: explore within a single formal framework how these two problems might be solved jointly.

Slide 5

Part I: Experimental tests of generalization in word learning

Slide 6

Method
• To model the Generalization Problem, draw on Boolean concept learning: concepts are defined by a set of features that take a range of values (Shepard, Hovland, & Jenkins, 1961).
• To model the Mapping Problem, focus on the role of cross-situational statistics: adopt a model that considers a speaker's intention to identify the referent in ambiguous contexts (Frank, Goodman, & Tenenbaum, 2009).
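
As a concrete illustration of this representation (a minimal Python sketch; the function and variable names are hypothetical, not from the paper), an object can be coded as a tuple of feature values and a candidate concept as a partial assignment of required values to features; an object falls under the concept if it matches every feature the concept constrains.

    # Minimal sketch of the Boolean-concept representation (hypothetical names).
    def in_concept(obj, concept):
        """True if the object matches every feature value the concept constrains.
        obj:     tuple of feature values, e.g. (1, 2, 2)
        concept: dict {feature index: required value}, e.g. {0: 1, 2: 2}
        """
        return all(obj[i] == v for i, v in concept.items())

    print(in_concept((1, 2, 2), {0: 1}))        # True: feature 0 has value 1
    print(in_concept((1, 2, 2), {0: 1, 2: 1}))  # False: feature 2 is 2, not 1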

Slide 7

Hierarchical Bayesian Model (exact inference by explicit enumeration)
[Model diagram: feature-vector objects such as [1 2 2] and [1 2 1] linked to words such as "wug" and "dax"]
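
To make "exact inference by explicit enumeration" concrete, here is a rough Python sketch for a single word: enumerate every partial assignment of values to the three binary features as a candidate concept, score each candidate against the objects heard with that word, and normalize. The uniform prior and size-principle likelihood are simplifying assumptions for illustration; this stands in for, and does not reproduce, the full hierarchical model with multiple words and speaker intentions.

    from itertools import product

    FEATURE_VALUES = (1, 2)   # each of the 3 features takes the value 1 or 2
    N_FEATURES = 3

    def all_objects():
        return list(product(FEATURE_VALUES, repeat=N_FEATURES))

    def all_concepts():
        """Every partial assignment: each feature is unconstrained or fixed to 1 or 2."""
        for assignment in product((None, 1, 2), repeat=N_FEATURES):
            yield {i: v for i, v in enumerate(assignment) if v is not None}

    def in_concept(obj, concept):
        return all(obj[i] == v for i, v in concept.items())

    def likelihood(obj, concept):
        """P(object | concept): uniform over the concept's extension (size principle)."""
        extension = [o for o in all_objects() if in_concept(o, concept)]
        return 1.0 / len(extension) if in_concept(obj, concept) else 0.0

    def posterior(training_objects):
        """Posterior over concepts for one word, under a uniform prior over concepts."""
        scores = {}
        for concept in all_concepts():
            p = 1.0
            for obj in training_objects:
                p *= likelihood(obj, concept)
            scores[frozenset(concept.items())] = p
        total = sum(scores.values())
        return {c: p / total for c, p in scores.items()}

    def p_generalize(test_obj, post):
        """Probability that a test object also falls under the word's concept."""
        return sum(p for c, p in post.items() if in_concept(test_obj, dict(c)))

    # Experiment 1 training data: [1 2 2] and [1 1 1] both labelled with the same word.
    post = posterior([(1, 2, 2), (1, 1, 1)])
    print(round(p_generalize((1, 2, 1), post), 2))  # 1.0: matches the shared feature
    print(round(p_generalize((2, 2, 2), post), 2))  # 0.2: differs on the shared feature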

Slide 8

Experiments
• Learner's goal: map a word to a set of features that define the relevant concept.
• Gave subjects ambiguous evidence about the mappings between words and objects
  – Experiment 1: one situation
  – Experiment 2: two situations
• Measured subjects' generalization patterns to other objects given the training data
• Adults recruited from Amazon Mechanical Turk

Slide 9

Experiment 1: Task
Training: "These objects could be called dax gren nes": [1 2 2], [1 1 1]
Test: "Bet on whether these objects could be called dax gren nes": [1 1 1], [1 1 2], [1 2 1], [2 1 1], [1 2 2], [2 1 2], [2 2 1], [2 2 2]
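
The eight test items are simply the full enumeration of the three-feature, two-value stimulus space, which can be generated in one line (Python sketch):

    from itertools import product

    # All 8 objects in the 3-feature, 2-value stimulus space used as test items.
    test_items = list(product((1, 2), repeat=3))
    print(test_items)   # (1, 1, 1), (1, 1, 2), ..., (2, 2, 2)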

Slide 10

Experiment 1: Results — graded generalizations (N = 156)
[Figure: mean bet on each test item type (0–100), by training condition (ONEwithONE, ONEwithBOTH, TWOwithONE); example training data: items sharing 2 features, [1 1 1] and [1 1 2]]

Slide 11

Experiment 1: Model Fits
[Scatter plots: model predictions vs. mean human bet. Cross-Situational Concept Model: r = 0.877; Feature Distance Model: r = 0.822]
Feature Distance Model: count the number of features on which a test object differs from a training exemplar, and sum across all training exemplars. Example: [1 2 2] and [2 2 2] differ on one feature, so FD = 1.
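
The feature distance baseline can be written down directly (a minimal Python sketch; how the summed distance is mapped onto a 0–100 bet is a separate monotone transformation not shown here):

    def feature_distance(test_obj, training_exemplars):
        """Sum, over all training exemplars, the number of features on which
        the test object differs from that exemplar (lower = more similar)."""
        return sum(sum(t != e for t, e in zip(test_obj, ex))
                   for ex in training_exemplars)

    # Example from the slide: [1 2 2] vs. exemplar [2 2 2] -> FD = 1
    print(feature_distance((1, 2, 2), [(2, 2, 2)]))                # 1

    # Against the Experiment 1 training set [1 2 2] and [1 1 1]:
    print(feature_distance((2, 2, 2), [(1, 2, 2), (1, 1, 1)]))     # 1 + 3 = 4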

Slide 12

Experiment 2: Task
Situation 1: "Suppose you saw these two objects and heard dax bren nes": [2 2 1], [1 1 1]
Situation 2: "Now suppose you saw these two new objects and heard dax bren nes": [2 2 1], [1 1 1]
* Manipulated the number of features shared within and across situations. Condition shown: one shared feature within each situation.

Slide 13

Experiment 2: Task
Situation 1: "Suppose you saw these two objects and heard dax bren nes": [1 1 1], [1 1 1]
Situation 2: "Now suppose you saw these two new objects and heard dax bren nes": [1 2 2], [1 2 2]
* Manipulated the number of features shared within and across situations. Condition shown: one shared feature across situations.
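
To make the within/across-situation manipulation concrete, a small helper (Python sketch, hypothetical names) counts shared features between two objects; applying it within one situation or across the two situations recovers the counts being manipulated:

    def shared_features(a, b):
        """Number of features on which two objects take the same value."""
        return sum(x == y for x, y in zip(a, b))

    # Condition on this slide: one feature shared across situations.
    situation_1 = [(1, 1, 1), (1, 1, 1)]
    situation_2 = [(1, 2, 2), (1, 2, 2)]
    print(shared_features(situation_1[0], situation_2[0]))   # 1 (only the first feature)

    # Condition on the previous slide: one feature shared within each situation.
    situation_a = [(2, 2, 1), (1, 1, 1)]
    print(shared_features(*situation_a))                     # 1 (only the last feature)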

Slide 14

Experiment 2: Results — graded generalizations (N = 266)
[Figure: mean bet on each test item type (0–100), by training condition; panel titles include "Confounded within situations: 2 features"]

Slide 15

Experiment 2: Model Fits
[Scatter plots: model predictions vs. mean human bet. Cross-Situational Concept Model: r = 0.95; Feature Distance Model: r = 0.891]

Slide 16

Part II: Theoretical simulations of Generalization and Mapping Problems

Slide 17

Generalization and Mapping: simulation I
[Figure: training data and the resulting posterior distribution over lexicons]

Slide 18

Generalization and Mapping: simulation II
[Figure: training data and the resulting posterior distribution over lexicons]

Slide 19

Conclusion
• Our model performed competitively with a simple feature distance model.
• However, our model also has the machinery to handle more complex worlds in which multiple words are present.
• It provides a fruitful theoretical tool for future work exploring how children might solve the Mapping and Generalization Problems together.

Slide 20

Thank you
Members of the Language and Cognition Lab
Research Assistant: Mia Kirkendoll
Members of the Markman Lab