The 4th Workshop on Noisy User-generated Text, Nov 1, 2018, Brussels, Belgium (at EMNLP 2018)
Method    Accuracy (%)  Precision (%)  Recall (%)
Dict          51.2          52.2          53.3
XGB           84.8          87.4          79.5
RForest       89.3          86.1          92.3
LogReg        92.4          91.1          93.1
LSTM          95.1          94.6          95.0
AVG           93.3          93.1          92.9
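The accuracy, precision, and recall columns above can be computed from binary predictions as in this minimal sketch (pure Python; the function name and the convention that 1 marks the positive class are illustrative, not from the slides):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels (assuming 1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```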
■ LSTM with pre-trained word vectors
■ Experiments
• Data
• Collected 20,000 distinct steps from cookpad
• Manually annotated by humans
• The positive/negative ratio is balanced
• Only text information is used
• No position, timestamp, or other metadata
• Methods
• Dict: Manually selected clue words
• Non-neural models (Input: TF-IDF vector)
• LogReg: LogisticRegression
• RForest: RandomForest
• XGB: XGBoost
• Neural models (Input: pre-trained skip-gram vectors)
• LSTM: LSTM-based classifier
• AVG: Average of word vectors
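A sketch of the non-neural baseline setup (TF-IDF features into a linear classifier, as in LogReg); the toy steps and labels here are illustrative stand-ins for the annotated cookpad data, not from the slides:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for annotated steps (1 = real cooking step, 0 = fake step).
steps = [
    "mix all the seasonings together",
    "bake for fifteen minutes in the oven",
    "thanks for watching my recipe",
    "finished! please rate this recipe",
]
labels = [1, 1, 0, 0]

# TF-IDF vectors feed logistic regression, mirroring the LogReg baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(steps, labels)
pred = model.predict(["mix the seasonings"])
```

XGBoost and random forest baselines would swap only the final estimator in this pipeline.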
• Task: distinguish fake steps from the steps actually used for cooking
[Figure: Word Embedding → LSTM → Dropout → LeakyReLU → Sigmoid]
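A minimal numpy sketch of the forward pass through this architecture (dimensions, names, and the random weights are illustrative; the real model is trained, and its hyperparameters are not given on the slide):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, emb_dim, hidden = 1000, 100, 64

# Embedding lookup table: token id -> 100-d word vector.
E = rng.normal(size=(vocab, emb_dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gates stacked as [input, forget, output, candidate]."""
    z = W @ x + U @ h + b
    i = sigmoid(z[:hidden])
    f = sigmoid(z[hidden:2 * hidden])
    o = sigmoid(z[2 * hidden:3 * hidden])
    g = np.tanh(z[3 * hidden:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

W = rng.normal(size=(4 * hidden, emb_dim)) * 0.1
U = rng.normal(size=(4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
w_out = rng.normal(size=hidden) * 0.1  # output layer weights

def classify(token_ids, drop=0.0):
    h, c = np.zeros(hidden), np.zeros(hidden)
    for t in token_ids:                 # Word Embedding -> LSTM
        h, c = lstm_step(E[t], h, c, W, U, b)
    h = h * (1.0 - drop)                # Dropout (inference-time scaling)
    a = np.where(h > 0, h, 0.01 * h)    # LeakyReLU
    return sigmoid(w_out @ a)           # Sigmoid -> probability of a real step

p = classify([3, 17, 42])
```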
Model Architecture
• Preprocessing
• Split sentences into words
• Input: “調味料を全て入れて混ぜ合わせます”
(EN: “Put in all the seasonings and mix them together”)
• Output: “調味料”, “を”, “全て”, “入れ”, “て”, “混ぜ合わ”, “せ”, “ます”
• No normalization is applied
• e.g. “入れ” is not normalized to “入れる”
• Word Embedding
• Skip-Gram Word2vec
• Pre-trained on our whole recipe-step dataset (over 20 million steps)
• Embedding matrix of shape (218408, 100): a 218,408-word vocabulary, 100 dimensions per vector
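Once pre-trained, the skip-gram embeddings act as a lookup table from word to one row of the (218408, 100) matrix; a minimal sketch of that lookup (the tiny vocabulary, matrix, and skip-OOV policy here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"調味料": 0, "を": 1, "入れ": 2}  # word -> row index
emb = rng.normal(size=(len(vocab), 100))   # stand-in for the (218408, 100) matrix

def vectors(tokens):
    """Map tokens to their pre-trained vectors, skipping out-of-vocabulary words."""
    return np.stack([emb[vocab[t]] for t in tokens if t in vocab])

seq = vectors(["調味料", "を", "入れ", "UNKNOWN"])  # OOV token is dropped
```

The resulting sequence of vectors is what the LSTM consumes; the AVG baseline instead takes `seq.mean(axis=0)`.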
• Results and Discussion
• Over 95% accuracy with the neural models
• Non-neural models are easy to deploy but insufficient for our service
• The LSTM-based model misclassifies:
• Steps that include special words such as “finished”, “completed”, etc.
• Steps that are advice containing cooking vocabulary
• Future Work
• Achieve a higher score
• Use not only textual features
• Deploy and apply in our services
• False Positive
• JP: “コーヒーに入れて・・・おいしいミルクコーヒーの出来上がり”
• EN: “Put in Coffee. You will get sweet delicious milk coffee. That’s all.”
• False Negative
• JP: “＊オーブンの場合、230℃に予熱したオーブンで15分くらい焼く。”
• EN: “Bake for about 15 minutes in an oven preheated to 230℃.”
Examples of results