Slide 1

Slide 1 text

How to apply NLP models to advertising

Slide 2

Slide 2 text

Goal • How to apply NLP models to advertising • Why BERT

Slide 3

Slide 3 text

Agenda • NLP models • Transformer • BERT • Sequence analysis vs human analysis • Advertising based on human analysis

Slide 4

Slide 4 text

Who am I? • Used collaborative filtering for recommendation • Predicted correct/incorrect answers for TOEIC questions • Learned the “Transformer” from NLP experts • seq-to-seq • LSTM • encoder/decoder • RNN • query/key/value

Slide 5

Slide 5 text

Recurrent Neural Network • Because of their internal memory, RNNs can remember important things about the input they received, which allows them to be very precise in predicting what’s coming next. This is why they’re the preferred algorithm for sequential data like time series, speech, text, financial data, audio, video, weather, and much more.
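To make the “internal memory” concrete, here is a minimal sketch of a recurrent layer, assuming PyTorch (the library, layer sizes, and variable names are illustrative, not from the slides): the hidden state produced at each step is the memory that gets fed into the next step.

```python
import torch
import torch.nn as nn

# A single-layer RNN: at each time step it mixes the current input with
# the hidden state carried over from the previous step (its "memory").
rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)

x = torch.randn(1, 10, 16)    # one sequence of 10 time steps
h0 = torch.zeros(1, 1, 32)    # initial hidden state (empty memory)

outputs, h_last = rnn(x, h0)  # outputs: hidden state at every step
print(outputs.shape)          # torch.Size([1, 10, 32])
print(h_last.shape)           # torch.Size([1, 1, 32]) - memory after the last step
```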

Slide 6

Slide 6 text

Recurrent Neural Network

Slide 7

Slide 7 text

Recurrent Neural Network

Slide 8

Slide 8 text

Recurrent Neural Network

Slide 9

Slide 9 text

sequence-to-sequence 나는 학교에 간다 I go to school seq2seq

Slide 10

Slide 10 text

sequence-to-sequence 나는 학교에 간다 I go to school seq2seq Encoder Decoder Context

Slide 11

Slide 11 text

sequence-to-sequence (seq2seq): the encoder RNN reads “I go to school” token by token (hidden states h1–h4), its final hidden state becomes the context vector, and the decoder RNN generates “나는 학교에 간다” from that context vector.
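As a rough sketch of the encoder/decoder/context idea on this slide, the following assumes PyTorch; the class name, vocabulary sizes, and dimensions are illustrative. The encoder’s final hidden state is used as the context vector that initializes the decoder.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Encoder reads the source sentence; its final hidden state is the
    context vector that initializes the decoder."""
    def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.RNN(emb, hidden, batch_first=True)
        self.decoder = nn.RNN(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, context = self.encoder(self.src_emb(src_ids))  # context vector
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), context)
        return self.out(dec_out)                          # logits per target token

model = Seq2Seq(src_vocab=1000, tgt_vocab=1000)
logits = model(torch.randint(0, 1000, (1, 4)),   # e.g. "I go to school"
               torch.randint(0, 1000, (1, 4)))   # e.g. "나는 학교에 간다"
print(logits.shape)  # torch.Size([1, 4, 1000])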

Slide 12

Slide 12 text

Vanishing Gradients Problem
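A tiny numeric illustration of why gradients vanish in plain RNNs: backpropagation through time multiplies one factor per time step, and when those factors are below 1 the learning signal reaching the earliest steps decays toward zero (the 0.9 factor below is made up for illustration).

```python
# Repeatedly multiplying per-step gradient factors smaller than 1
# drives the signal from early time steps toward zero.
grad = 1.0
per_step_factor = 0.9  # illustrative |derivative| < 1 through tanh/weights

for step in range(100):
    grad *= per_step_factor

print(grad)  # ~2.7e-5: almost no learning signal reaches the first steps
```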

Slide 13

Slide 13 text

Long short-term memory

Slide 14

Slide 14 text

sequence-to-sequence (LSTM): the same encoder–decoder structure, but with LSTM cells; the encoder LSTM reads “I go to school” (hidden states h1–h4), passes the context vector to the decoder LSTM, which generates “나는 학교에 간다”.
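The structural change from the RNN version is small: an LSTM carries a cell state alongside the hidden state, so the context passed from encoder to decoder becomes a (hidden, cell) pair. A minimal PyTorch sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

encoder = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
decoder = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)

src = torch.randn(1, 4, 64)        # embedded source sentence
tgt = torch.randn(1, 4, 64)        # embedded target sentence

_, (h, c) = encoder(src)           # context = hidden state AND cell state
dec_out, _ = decoder(tgt, (h, c))  # decoder starts from the encoder's memory
print(dec_out.shape)               # torch.Size([1, 4, 128])
```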

Slide 15

Slide 15 text

Long sentence problem

Slide 16

Slide 16 text

No content

Slide 17

Slide 17 text

State of the Art

Slide 18

Slide 18 text

Transformer

Slide 19

Slide 19 text

Transformer(Attention is all you need) • Abstract
 “The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. …”
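Since the Transformer is “based solely on attention mechanisms”, a minimal sketch of scaled dot-product attention (the query/key/value computation mentioned on slide 4) may help; this assumes NumPy and illustrative shapes, not the full multi-head architecture.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V               # weighted sum of the values

Q = np.random.randn(4, 8)  # 4 query positions, dimension 8
K = np.random.randn(4, 8)
V = np.random.randn(4, 8)
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```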

Slide 20

Slide 20 text

Transformer Architecture

Slide 21

Slide 21 text

Attention

Slide 22

Slide 22 text

Top-down / Bottom-up

Slide 23

Slide 23 text

Top-down attention

Slide 24

Slide 24 text

Attention map

Slide 25

Slide 25 text

Transformer Architecture I go to school 나는 학교에 간다

Slide 26

Slide 26 text

Transformer Architecture 비가 온다 it is raining

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding • Abstract
 “… Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. …”
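A hedged sketch of using a pretrained BERT encoder via the Hugging Face transformers library (the library and model name are assumptions, not cited on the slide): every token comes back with a representation conditioned on both its left and right context.

```python
# Assumes `pip install transformers torch`; the model name is illustrative.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("I go to school", return_tensors="pt")
outputs = model(**inputs)

# One embedding per token, each conditioned on the whole sentence (bidirectional).
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 6, 768])
```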

Slide 29

Slide 29 text

BERT input representation

Slide 30

Slide 30 text

Fine-tuning tasks
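One common fine-tuning setup adds a small classification head on top of the pretrained encoder. The sketch below assumes the Hugging Face transformers sequence-classification wrapper, and the 2-label ad-click task is purely hypothetical.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical 2-way task, e.g. "will this user click the ad?" (illustrative labels).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer("sports shoes sale this weekend", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Until the head is fine-tuned on labeled data, these probabilities are meaningless.
print(torch.softmax(logits, dim=-1))
```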

Slide 31

Slide 31 text

Q&A about NLP models

Slide 32

Slide 32 text

Sequence vs Human

Slide 33

Slide 33 text

Describing a human by metadata

Slide 34

Slide 34 text

Mutable vs Immutable

Slide 35

Slide 35 text

Handling immutable objects

Slide 36

Slide 36 text

Behavior

Slide 37

Slide 37 text

Intention

Slide 38

Slide 38 text

Use case

Slide 39

Slide 39 text

References • ViT • CLIP • BERT • GPT-3 • Transformer • RNN • Sequence to Sequence • https://arxiv.org/pdf/1409.0473.pdf • https://arxiv.org/pdf/1905.10949.pdf • https://www.youtube.com/watch?v=I1wJ_-kEvNQ

Slide 40

Slide 40 text

Q&A