NLTK Intro for PUGS
Victor Neo
March 27, 2012
Slides for the NLTK talk given in March 2012 for the Python User Group SG Meetup.
Transcript
Natural Language Toolkit @victorneo
Natural Language Processing
"the process of a computer extracting meaningful information from natural
language input and/or producing natural language output"
Getting started with NLTK
NLTK: open source Python modules, linguistic data and documentation for research and development in natural language processing and text analytics, with distributions for Windows, Mac OS X and Linux.
installation
# you might need numpy
pip install nltk

# enter Python shell
import nltk
nltk.download()
packages
# For Part-of-Speech tagging
maxent_treebank_pos_tagger
# Get a list of stopwords
stopwords
# Brown corpus to play around with
brown
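These data packages can also be fetched without the interactive downloader; a minimal sketch, assuming the package names above are still available in the NLTK data index:

import nltk

# download the data packages listed above without the interactive GUI
for pkg in ("maxent_treebank_pos_tagger", "stopwords", "brown"):
    nltk.download(pkg)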
Preparing data / corpus
tokens
NLTK works on tokens: for example, "Hello World!" will be tokenized to ['Hello', 'World', '!'].
The built-in tokenizer covers most use cases:
nltk.word_tokenize("Hello World!")
text processing
HTML text:
raw = nltk.clean_html(html_text)
tokens = nltk.word_tokenize(raw)
text = nltk.Text(tokens)
Use BeautifulSoup to preprocess the HTML text and discard unnecessary data.
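A minimal end-to-end sketch of that pipeline; note that newer NLTK releases dropped clean_html, so this version strips the markup with BeautifulSoup instead, and html_text is just a placeholder for whatever page you fetched:

import nltk
from bs4 import BeautifulSoup

html_text = "<html><body><p>Hello World!</p></body></html>"   # placeholder input

raw = BeautifulSoup(html_text, "html.parser").get_text()      # discard the HTML tags
tokens = nltk.word_tokenize(raw)                               # ['Hello', 'World', '!']
text = nltk.Text(tokens)   # wraps tokens for concordance(), collocations(), etc.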
Part-of-speech tagging
pos tagging
text = "Run away!"
tokens = nltk.word_tokenize(text)
nltk.pos_tag(tokens)
[('Run', 'NNP'), ('away', 'RB'), ('!', '.')]
pos tagging
[('Run', 'NNP'), ('away', 'RB'), ('!', '.')]
NNP: Proper Noun, Singular
RB: Adverb
http://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
pos tagging "The sailor dogs the barmaid." [('The', 'DT'), ('sailor',
'NN'), ('dogs', 'NNS'), ('the', 'DT'), ('barmaid', 'NN'), ('.', '.')]
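A runnable version of the two tagging examples above, assuming the punkt tokenizer and the default POS tagger data have been downloaded:

import nltk

for text in ("Run away!", "The sailor dogs the barmaid."):
    tokens = nltk.word_tokenize(text)
    print(nltk.pos_tag(tokens))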
Sentiment Analysis Code: http://bit.ly/GLu2Q9
Differentiate between "happy" and "sad" tweets. Teach the classifier the
"features" of happy & sad tweets and test how good it is.
Happy: "Looking through old pics and realizing everything happens for
a reason. So happy with where I am right now" Sad: "So sad I have 8 AM class tomorrow"
Process data (tweets) Extract Features Train classifier Test classifier accuracy
Tokenize tweets extract_features Naive Bayes Classifier
Process data (tweets) Extract Features Train classifier Test classifier accuracy
Tokenize tweets extract_features Naive Bayes Classifier
happy.txt, sad.txt } training data
happy_test.txt, sad_test.txt } testing data
Tweets obtained from the Twitter Search API
Process data (tweets) Extract Features Train classifier Test classifier accuracy
Tokenize tweets extract_features Naive Bayes Classifier
features
Happy tweets usually contain words like "am happy", "great day", etc.
Sad tweets usually contain words like "not happy", "am sad", etc.
output of extract_features():
{'contains(not)': False, 'contains(view)': False, 'contains(best)': False,
 'contains(excited)': False, 'contains(morning)': False, 'contains(about)': False,
 'contains(horrible)': True, 'contains(like)': False, ... }
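The talk does not show extract_features itself (it lives in the linked code), but a typical NLTK-style version looks roughly like this; word_features here is a hypothetical vocabulary list collected from the training tweets:

word_features = ["not", "view", "best", "excited", "morning", "about", "horrible", "like"]   # hypothetical vocabulary

def extract_features(tweet_words):
    # mark which vocabulary words appear in this tweet
    words = set(tweet_words)
    return {"contains(%s)" % word: (word in words) for word in word_features}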
Process data (tweets) Extract Features Train classifier Test classifier accuracy
Tokenize tweets extract_features Naive Bayes Classifier
training the classifier
training_set = nltk.classify.util.apply_features(extract_features, tweets)
classifier = NaiveBayesClassifier.train(training_set)
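Pieced together, the training step looks roughly like this, using the extract_features sketch above; tweets is assumed to be a list of (tokenized tweet, label) pairs built from happy.txt and sad.txt:

import nltk
from nltk.classify import NaiveBayesClassifier

tweets = [   # assumed shape of the training data
    (["am", "happy", "great", "day"], "happy"),
    (["so", "sad", "not", "happy"], "sad"),
]

training_set = nltk.classify.util.apply_features(extract_features, tweets)
classifier = NaiveBayesClassifier.train(training_set)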
Process data (tweets) Extract Features Train classifier Test classifier accuracy
Tokenize tweets extract_features Naive Bayes Classifier
testing the classifier
def classify_tweet(tweet):
    return classifier.classify(extract_features(tweet))
$ python classification.py
Total accuracy: 90.00% (18/20)
18 tweets were classified correctly.
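A sketch of how that accuracy figure can be computed with NLTK's built-in helper, assuming test_tweets holds the (tokenized tweet, label) pairs from happy_test.txt and sad_test.txt:

test_tweets = [   # assumed shape of the held-out test data
    (["so", "happy", "with", "where", "i", "am"], "happy"),
    (["so", "sad", "i", "have", "class", "tomorrow"], "sad"),
]

test_set = nltk.classify.util.apply_features(extract_features, test_tweets)
print("Total accuracy: %.2f%%" % (nltk.classify.util.accuracy(classifier, test_set) * 100))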
Where to go from here.
http://www.nltk.org/book
https://class.coursera.org/nlp/auth/welcome
http://www.slideshare.net/shanbady/nltk-boston-text-analytics
[('Thank', 'NNP'), ('you', 'PRP'), ('.', '.')] @victorneo