Slide 1

Slide 1 text

Text Classification: Naive Bayes [DAT640] Information Retrieval and Text Mining, Krisztián Balog, University of Stavanger, August 24, 2021, CC BY 4.0

Slide 2

Slide 2 text

Naive Bayes
• Example of a generative classifier
• Estimating the probability of document x belonging to class y:
  P(y|x) = P(x|y) P(y) / P(x)
• P(x|y) is the class-conditional probability
• P(y) is the prior probability
• P(x) is the evidence (note: it is the same for all classes)
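The rule above can be worked through on a tiny two-class example. All numbers below are made-up toy values; note that the evidence P(x) only rescales the scores, so the ranking of classes is the same with or without it.

```python
# Bayes' rule for one document x: P(y|x) = P(x|y) P(y) / P(x),
# where P(x) = sum over y of P(x|y) P(y).
# All probability values are hypothetical toy numbers.

likelihood = {"spam": 0.02, "ham": 0.005}  # P(x|y)
prior = {"spam": 0.3, "ham": 0.7}          # P(y)

evidence = sum(likelihood[y] * prior[y] for y in prior)  # P(x) = 0.0095
posterior = {y: likelihood[y] * prior[y] / evidence for y in prior}
# posterior sums to 1; "spam" wins because 0.02 * 0.3 > 0.005 * 0.7
```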

Slide 3

Slide 3 text

Naive Bayes classifier
• Estimating the class-conditional probability P(x|y)
  ◦ x is a vector of term frequencies {x1, . . . , xn}
  P(x|y) = P(x1, . . . , xn|y)
• “Naive” assumption: features (terms) are independent:
  P(x|y) = ∏_{i=1}^{n} P(xi|y)
• Putting our choices together, the probability that x belongs to class y is estimated using:
  P(y|x) ∝ P(y) ∏_{i=1}^{n} P(xi|y)
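The scoring rule P(y|x) ∝ P(y) ∏ P(xi|y) can be sketched in a few lines. The probability tables below are hypothetical toy numbers, not estimates from real data.

```python
# Minimal sketch of Naive Bayes scoring: prior times a product of
# per-term probabilities, one factor per term occurrence in the document.

def naive_bayes_score(terms, prior, term_probs):
    """Unnormalized posterior P(y) * prod_i P(x_i|y) for one class."""
    score = prior
    for t in terms:
        score *= term_probs.get(t, 0.0)
    return score

# Toy model with two classes (all numbers are made up).
priors = {"spam": 0.3, "ham": 0.7}
probs = {
    "spam": {"free": 0.05, "meeting": 0.01},
    "ham": {"free": 0.01, "meeting": 0.05},
}

doc = ["free", "free", "meeting"]  # term occurrences in the document
scores = {y: naive_bayes_score(doc, priors[y], probs[y]) for y in priors}
```

The scores are unnormalized, but since P(x) is the same for all classes, the argmax over classes is unaffected.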

Slide 4

Slide 4 text

Estimating prior class probabilities
• P(y) is the probability of each class label
• It is essential when class labels are imbalanced
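A common way to estimate P(y) is the fraction of training documents carrying each label; a minimal sketch, assuming the training labels are given as a list:

```python
from collections import Counter

def estimate_priors(labels):
    """P(y): fraction of training documents with each class label."""
    counts = Counter(labels)
    total = len(labels)
    return {y: c / total for y, c in counts.items()}

labels = ["spam", "ham", "ham", "ham"]  # imbalanced toy data
priors = estimate_priors(labels)
# {"spam": 0.25, "ham": 0.75} -- the prior pulls predictions toward "ham"
```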

Slide 5

Slide 5 text

Estimating feature distribution
• How to estimate P(xi|y)?
• Maximum likelihood estimation: count the number of times a term occurs in a class divided by its total number of occurrences
  P(xi|y) = ci,y / ci
  ◦ ci,y is the number of times term xi appears in class y
  ◦ ci is the total number of times term xi appears in the collection
• But what happens if ci,y is zero?!
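The maximum likelihood estimator above can be sketched as follows; the toy corpus is made up, and the example deliberately produces a zero probability to illustrate the problem raised in the last bullet.

```python
from collections import Counter, defaultdict

def mle_term_probs(docs):
    """docs: list of (terms, label) pairs.
    Returns P(x_i|y) = c_{i,y} / c_i, where c_{i,y} is the number of times
    term x_i appears in class y and c_i is its total count in the collection."""
    class_counts = defaultdict(Counter)  # c_{i,y}
    total_counts = Counter()             # c_i
    for terms, label in docs:
        class_counts[label].update(terms)
        total_counts.update(terms)
    return {
        y: {t: class_counts[y][t] / total_counts[t] for t in total_counts}
        for y in class_counts
    }

docs = [(["free", "free"], "spam"), (["free", "meeting"], "ham")]
p = mle_term_probs(docs)
# P("free"|spam) = 2/3, but P("meeting"|spam) = 0 -- a single unseen term
# would zero out the whole product for class "spam"
```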

Slide 6

Slide 6 text

Smoothing
• Ensure that P(xi|y) is never zero
• Simplest solution: Laplace (“add one”) smoothing¹
  P(xi|y) = (ci,y + 1) / (ci + m)
  ◦ m is the number of classes

¹ More advanced smoothing methods will follow later for Language Modeling
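A minimal sketch of this smoothing formula, reusing the counts notation from the previous slide (c_{i,y} per class, c_i collection-wide, m = number of classes as on the slide); the toy counts are made up:

```python
def laplace_probs(class_counts, total_counts, num_classes):
    """P(x_i|y) = (c_{i,y} + 1) / (c_i + m), with m = number of classes."""
    return {
        y: {t: (counts.get(t, 0) + 1) / (total_counts[t] + num_classes)
            for t in total_counts}
        for y, counts in class_counts.items()
    }

# Toy counts: "meeting" never occurs in class "spam".
class_counts = {"spam": {"free": 2, "meeting": 0},
                "ham": {"free": 1, "meeting": 1}}
total_counts = {"free": 3, "meeting": 1}

p = laplace_probs(class_counts, total_counts, num_classes=2)
# P("meeting"|spam) = (0 + 1) / (1 + 2) = 1/3 -- no longer zero
```

Note that with this estimator the smoothed probabilities of a term still sum to 1 across classes, which is why m here is the number of classes rather than the vocabulary size.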

Slide 7

Slide 7 text

Practical considerations
• In practice, probabilities are small, and multiplying them may result in numerical underflow
• Instead, we perform the computations in the log domain:
  log P(y|x) ∝ log P(y) + Σ_{i=1}^{n} log P(xi|y)
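The underflow problem and the log-domain fix can be demonstrated directly; the probabilities below are hypothetical toy values.

```python
import math

def log_score(terms, prior, term_probs):
    """log P(y) + sum_i log P(x_i|y): sums of logs replace products of probs."""
    return math.log(prior) + sum(math.log(term_probs[t]) for t in terms)

prior = 0.5
probs = {"free": 0.05}
doc = ["free"] * 500  # a long document of repeated toy terms

# Naive product underflows to exactly 0.0 in double precision,
# while the log-domain score stays a perfectly ordinary finite number.
raw_product = prior * 0.05 ** 500
s = log_score(doc, prior, probs)
```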

Slide 8

Slide 8 text

Reading
• Text Data Management and Analysis (Zhai & Massung)
  ◦ Chapter 15: Section 15.5.2