[Sample conversation: the user asks "I want a pizza" and "Any other options?", but the menu-driven bot only repeats "Please select the option from the menu".]
* The scenario has nothing to do with the actual products
"The chatbot market is growing at an annual rate of 35%, expecting $3.5 billion in 2021." (Technavio)
"By 2020, 55 percent of large companies will use more than one chatbot. In 2021, more than 50 percent of companies will spend more on chatbots than on mobile apps."
Chatbots are expected to help save more than $2 billion this year and $8 billion annually by 2022. (Juniper Research)
"Flexible, Quick and Smart Chatbot": Chatbot Builder with the LINE Messenger Platform
Easy to BUILD: pre-built industry templates; fewer scenarios, better performance
Easy to SERVE: channels leveraging the LINE platform; legacy DB interface for enterprise; format adjustment for each messenger
Easy to EXPAND: supports 6 or more languages; compatible with speakers; no limit on the number of intents or scenarios
Connect Chatbot and Callbot within one framework. Moreover, the framework plugs into the wider customer service flow:
[Diagram: AI-based Customer Service Process (example of AI-ARS). Text and voice input from phone calls, the mobile app, CLOVA and partner channels reaches a front-line general AI CS; queries it cannot respond to are escalated to detailed or internal CS (contact CS), while customer support tools search the CS manual, sort customer info and search sample cases, backed by legacy DBs, chat logs and social data.]
We provide various functions with conversational components (a minimal resolution sketch follows below):
1. ${ActionMethod}: call an external API to provide answers.
   Answer: "Today's weather in Bangkok is ${weather}"
   Example: ${Weather} = http://weather.com?location=#{Bangkok}
2. ${Form}: provide multiple-choice or short-answer type prompts.
3. ${Task}: handle complicated customer orders, with slot filling to complete such actions for each customer.
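As an illustration only, the sketch below (plain Python, with invented helper names such as fetch_weather and resolve_answer) shows how an ${ActionMethod}-style answer template could be resolved: slot values come from the conversation, and placeholders backed by an API are filled with the API result.

```python
# Minimal sketch (not the actual CLOVA implementation) of resolving an
# ${ActionMethod} answer template: slot values are collected from the user,
# an external API is queried, and the result is substituted into the text.
import re

def fetch_weather(location: str) -> str:
    # Stand-in for a real HTTP call such as
    # http://weather.com?location=#{Bangkok}; hard-coded for the sketch.
    return {"Bangkok": "sunny, 33 degrees"}.get(location, "unknown")

ACTION_METHODS = {"weather": fetch_weather}  # hypothetical registry

def resolve_answer(template: str, slots: dict) -> str:
    """Replace ${name} placeholders with API results or slot values."""
    def substitute(match: re.Match) -> str:
        name = match.group(1).lower()
        if name in ACTION_METHODS:
            return ACTION_METHODS[name](slots.get("location", ""))
        return str(slots.get(name, match.group(0)))
    return re.sub(r"\$\{(\w+)\}", substitute, template)

print(resolve_answer("Today's weather in ${location} is ${weather}",
                     {"location": "Bangkok"}))
# -> Today's weather in Bangkok is sunny, 33 degrees
```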
We provide not only plain text messages but also message balloons native to each messenger platform, with basic components such as Text, Button, Image, Flex Message, Carousel, Quick Reply, etc.
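For illustration, the sketch below renders one generic quick-reply component into simplified, platform-flavoured payloads; the dictionary shapes are abbreviations of the real LINE and Facebook Messenger message formats, not exact schemas.

```python
# Illustrative sketch only: a generic "quick reply" component rendered into
# simplified, platform-flavoured payloads ("format adjustment for messengers").
def render_quick_reply(text: str, choices: list, platform: str) -> dict:
    if platform == "line":
        return {
            "type": "text",
            "text": text,
            "quickReply": {
                "items": [
                    {"type": "action",
                     "action": {"type": "message", "label": c, "text": c}}
                    for c in choices
                ]
            },
        }
    if platform == "facebook":
        return {
            "text": text,
            "quick_replies": [
                {"content_type": "text", "title": c, "payload": c}
                for c in choices
            ],
        }
    # Plain-text fallback for channels without rich components.
    return {"text": f"{text} ({' / '.join(choices)})"}

print(render_quick_reply("Which size?", ["S", "M", "L"], "line"))
```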
Train → Tune → Deploy
1st stage: user QA dataset preprocessing [Hadoop]
2nd stage: prepare for training on GPU [TensorFlow or various ML frameworks]
3rd stage: model tuning and deployment [Server]
• Use our platform to insert dialogue scenarios
• Various data generation tools such as Action-Method, Form, State and Answer Tagging can enhance each dialogue based on user preference
• Preprocess the scenario data
• Extract language features
• Request/manage all model builds from the c3dl GPU
• Rama: supports multilingual tokenizers, Part-of-Speech tagging, EOMI tagging (for Korean only) and Named Entity tagging
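Purely as an illustration of the preprocessing output, the hand-written example below shows the kind of structure (tokens, POS tags, named entities) such a pipeline could emit; it is not produced by the actual Rama tokenizer.

```python
# Hand-written example of a preprocessed query; tags and entities are
# illustrative, not output of the real tokenizer.
query = "Book a table at Bangkok Kitchen tomorrow"

preprocessed = {
    "tokens": ["Book", "a", "table", "at", "Bangkok", "Kitchen", "tomorrow"],
    "pos": ["VERB", "DET", "NOUN", "ADP", "PROPN", "PROPN", "NOUN"],
    "named_entities": [("Bangkok Kitchen", "RESTAURANT"),
                       ("tomorrow", "DATE")],
    # EOMI tagging (Korean only) additionally splits verb endings (eomi)
    # from the stem of Korean predicates.
}
print(preprocessed["named_entities"])
```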
• Vector representation / N-hot representation
• Memorize important information: stacked LSTM
• Study the sentence forward and backward: bi-directional
• Review the sentence again: residual / highway network
• Feedback based on answers: attention
• Find the answer based on various opinions: model ensemble
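A hedged TensorFlow/Keras sketch, not the production model, combining the ingredients above: stacked LSTMs, bidirectional reading, a residual connection and dot-product attention. All sizes (vocabulary, embedding width, number of classes) are illustrative.

```python
# Sketch of a sentence encoder with stacked bi-directional LSTMs,
# a residual connection and dot-product attention (illustrative sizes).
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, EMB, UNITS, CLASSES = 10_000, 128, 64, 50   # assumed sizes

tokens = layers.Input(shape=(None,), dtype="int32")
x = layers.Embedding(VOCAB, EMB)(tokens)

# Stacked, bidirectional LSTMs: read the sentence forwards and backwards.
h1 = layers.Bidirectional(layers.LSTM(UNITS, return_sequences=True))(x)
h2 = layers.Bidirectional(layers.LSTM(UNITS, return_sequences=True))(h1)

# Residual connection: "review the sentence again" by adding h1 back in.
h = layers.Add()([h1, h2])

# Dot-product attention: weight time steps by relevance before pooling.
context = layers.Attention()([h, h])
pooled = layers.GlobalAveragePooling1D()(context)

intent = layers.Dense(CLASSES, activation="softmax")(pooled)
model = tf.keras.Model(tokens, intent)
model.summary()
```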
[Figure: semantic matching architecture (Huang+ 13). Word sequences w1, w2, ..., wT are converted into feature sequences f1, f2, ..., fT and projected by multi-layer perceptrons (MLPs) f(.) and g(.) into a 128-dimensional semantic space as X and Y; relevance sim(X, Y) is measured by cosine similarity, treating the text as a bag of words.]
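A minimal numpy sketch of the same idea: query and answer bags of words are projected by small MLPs f(.) and g(.) into a shared 128-dimensional semantic space and compared with cosine similarity. The weights are random here, only to show the data flow; a real model learns them from paired data.

```python
# Toy DSSM-style semantic matching: sim(X, Y) = cosine(f(query), g(answer)).
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN, SEM = 1_000, 300, 128

def mlp(dims):
    """Random weight matrices for a toy multi-layer perceptron."""
    return [rng.normal(scale=0.1, size=(d_in, d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def project(bow, weights):
    h = bow
    for w in weights:
        h = np.tanh(h @ w)
    return h

f_weights = mlp([VOCAB, HIDDEN, SEM])   # query-side network f(.)
g_weights = mlp([VOCAB, HIDDEN, SEM])   # answer-side network g(.)

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

query_bow = rng.integers(0, 2, VOCAB).astype(float)    # bag-of-words input
answer_bow = rng.integers(0, 2, VOCAB).astype(float)

print("sim(X, Y) =", cosine(project(query_bow, f_weights),
                            project(answer_bow, g_weights)))
```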
[Diagram: train & prediction flow. Train queries and test queries are tokenized and averaged into feature vectors using embeddings pre-trained on Wikipedia, chat corpora and Twitter; cosine similarity against the trained query vectors, followed by a threshold, selects the answer.]
• Compute the similarity between trained queries and the input query to provide the pre-paired answer
• Improve response coverage, since global embeddings can cover what local embeddings cannot
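A toy version of this retrieval step is sketched below: word embeddings are averaged into a query vector, compared against trained queries with cosine similarity, and an answer is returned only when the best match clears a threshold. The tiny embedding table stands in for real fastText/GloVe vectors.

```python
# Retrieval by averaged word embeddings + cosine similarity + threshold.
import numpy as np

EMB = {                       # placeholder for pretrained global vectors
    "refund":  np.array([0.9, 0.1, 0.0]),
    "money":   np.array([0.8, 0.2, 0.1]),
    "back":    np.array([0.7, 0.1, 0.2]),
    "hours":   np.array([0.0, 0.9, 0.1]),
    "opening": np.array([0.1, 0.8, 0.2]),
}

def embed(query: str) -> np.ndarray:
    vectors = [EMB[w] for w in query.lower().split() if w in EMB]
    return np.mean(vectors, axis=0) if vectors else np.zeros(3)

def cosine(x, y):
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return float(x @ y / denom) if denom else 0.0

TRAINED = {"how do i get a refund": "Refunds take 3-5 business days.",
           "what are your opening hours": "We are open 9am-6pm."}

def answer(query: str, threshold: float = 0.85) -> str:
    scored = [(cosine(embed(query), embed(q)), a) for q, a in TRAINED.items()]
    best_score, best_answer = max(scored)
    return best_answer if best_score >= threshold else "CONTACT CS"

print(answer("I want my money back"))   # matched via embedding similarity
```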
• Use global embedding vectors such as fastText
• Apply various pre-trained embedding vectors such as ELMo, BERT, etc.
Classification layer (Baloo & Kaa):
• When training, fine-tune only the top layer
• When serving, deploy only the middle part of the network for inference
• Applying CNN, feed-forward NN and Transformer variants is in progress
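A hedged Keras sketch of this transfer-learning setup (not the actual Baloo/Kaa code): a frozen encoder standing in for pretrained BERT/ELMo-style features, a small trainable classification head, and a separate inference-only model exposing just the encoder for serving.

```python
# Fine-tune only the top layer; deploy only the middle of the network.
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, EMB, CLASSES = 32, 128, 50          # illustrative sizes

features = layers.Input(shape=(SEQ_LEN, EMB), name="pretrained_features")

# "Middle part of the network": kept frozen while fine-tuning.
encoder = tf.keras.Sequential([
    layers.Conv1D(128, 3, padding="same", activation="relu"),
    layers.GlobalMaxPooling1D(),
], name="encoder")
encoder.trainable = False
enc_out = encoder(features)

# Classification head: the only part updated during fine-tuning.
logits = layers.Dense(CLASSES, activation="softmax", name="head")(enc_out)

training_model = tf.keras.Model(features, logits)
training_model.compile(optimizer="adam",
                       loss="sparse_categorical_crossentropy")

# For serving, deploy only the encoder as an inference network.
inference_model = tf.keras.Model(features, enc_out)
print(inference_model.output_shape)          # (None, 128)
```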
• Provide the appropriate ensemble answer to each specific platform
• Serve multiple messenger platforms such as LINE, Facebook Messenger, etc.
• Use Akka Sharding to provide non-stop service by communicating with each server
• N-hot embedding vectors from the given data corpus
• Global embedding vectors from GloVe, fastText, TAPI, etc.
• Locate all words/sentences properly within the given vector space
• N elements in N dimensions represent the characteristics of each sentence
• The closer two embedding vectors are, the more semantically similar they are
• Automation of the end-to-end modeling process by applying machine learning algorithms
• An AI-based solution to the ever-growing challenge of applying machine learning algorithms
• Data preprocessing, algorithm selection, hyperparameter optimization, etc.
Data corpus preprocessing → feature extraction → feature selection → feature construction → parameter optimization → model validation
• Iterate and train multiple times to optimize and find a local minimum for each data corpus
[Architecture overview: query pre-processing (POS, EOMI, Named Entity tagging), feature extraction (N-hot features, global vector representation), embedding / sequence / semantic models with clustering and model tuning, a model ensemble, and scenario candidate answers]
• Check that each query is located in the proper region of the vector space
• Use cross-validation to split train and validation sets: randomly sample from the dataset with a ratio of 8:2, and run several times to keep the model less biased
• Or, remove 1-2 queries from each group for evaluation: for each group, choose the queries farthest from the center by computing cosine similarity
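Both validation strategies are sketched below with plain numpy: (a) a random 8:2 split that can be rerun several times, and (b) holding out, from each intent group, the queries farthest from the group centroid by cosine similarity. The vectors are random stand-ins for query embeddings.

```python
# Two ways to build a validation set: random 8:2 split, or per-group
# hold-out of the queries least similar to the group centroid.
import numpy as np

rng = np.random.default_rng(42)

def random_split(items, ratio=0.8):
    """(a) Shuffle and split train/valid 8:2; rerun to reduce bias."""
    order = rng.permutation(len(items))
    cut = int(len(items) * ratio)
    return [items[i] for i in order[:cut]], [items[i] for i in order[cut:]]

def farthest_from_center(group_vectors, n_eval=2):
    """(b) Hold out the n_eval queries least similar to the centroid."""
    center = group_vectors.mean(axis=0)
    sims = group_vectors @ center / (
        np.linalg.norm(group_vectors, axis=1) * np.linalg.norm(center))
    return np.argsort(sims)[:n_eval]          # indices of farthest queries

queries = list(range(10))                      # stand-in query ids
train, valid = random_split(queries)
print(len(train), "train /", len(valid), "valid")

group = rng.normal(size=(6, 16))               # one intent group, 16-dim
print("hold out for evaluation:", farthest_from_center(group))
```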
Clustering → train s2s model → MSE auto-evaluation
• Create a validation set after every clustering pass
• Run the clustering process semi-indefinitely until it reaches a local minimum
• Choose the best "k" with the best MSE score
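A rough sketch of the "choose the best k" loop using scikit-learn's KMeans; within-cluster squared error (inertia) per query stands in for the MSE-based auto-evaluation, whereas the real pipeline scores a trained s2s model.

```python
# Increase k until the per-query squared error stops improving much,
# then keep the previous k (a crude elbow / local-minimum criterion).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 32))        # stand-in query embeddings

def choose_k(X, k_max=15, tol=0.05):
    prev_err = None
    for k in range(2, k_max + 1):
        model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        err = model.inertia_ / len(X)          # mean squared error per query
        if prev_err is not None and (prev_err - err) / prev_err < tol:
            return k - 1                       # improvement flattened out
        prev_err = err
    return k_max

print("chosen k:", choose_k(embeddings))
```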
• Are the clusters well organized by the models? Can it be done more efficiently? More accurately?
• Coordinate search, random search, genetic algorithms
• Bayesian hyperparameter optimization
• Use cosine similarity, MSE or hit ratio as the validation score
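The simplest of these strategies, random search, is sketched below: sample hyperparameter settings, score each with a validation metric (here a made-up scoring function), and keep the best. Coordinate search and Bayesian optimization refine the same loop by choosing the next trial more cleverly instead of sampling blindly.

```python
# Random search over an illustrative hyperparameter space.
import random

SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "hidden_units": [64, 128, 256],
    "dropout": [0.1, 0.3, 0.5],
}

def validation_score(params):
    """Stand-in for training a model and measuring cosine similarity,
    MSE or hit ratio on the validation set."""
    return (1.0 - params["dropout"] * 0.2
            + params["hidden_units"] / 1000
            - abs(params["learning_rate"] - 1e-3))

def random_search(n_trials=20, seed=7):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        trial = {name: rng.choice(values) for name, values in SPACE.items()}
        score = validation_score(trial)
        if score > best_score:
            best_params, best_score = trial, score
    return best_params, best_score

print(random_search())
```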
• For every query, there exists a model that answers better than the others
• For a given query, the Response Selector provides the final answer that maximizes a linear combination of each model's prediction intensity (e.g. cosine similarity), weighted by each model's accuracy as the selection strength, the so-called ensemble weights
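A small sketch of this selection rule with invented numbers: each model proposes an answer with a confidence (e.g. cosine similarity), and the Response Selector picks the candidate whose confidence, weighted by that model's validation accuracy (the ensemble weight), is highest.

```python
# Response Selector: score = ensemble weight (model accuracy) x confidence.
ENSEMBLE_WEIGHT = {"semantic": 0.91, "sequence": 0.88, "embedding": 0.84}

def select_response(candidates):
    """candidates: {model_name: (answer, confidence in [0, 1])}."""
    def score(item):
        model, (_, confidence) = item
        return ENSEMBLE_WEIGHT[model] * confidence
    best_model, (answer, _) = max(candidates.items(), key=score)
    return best_model, answer

candidates = {
    "semantic":  ("Refunds take 3-5 business days.", 0.82),
    "sequence":  ("Please contact our support team.", 0.74),
    "embedding": ("Refunds take 3-5 business days.", 0.91),
}
print(select_response(candidates))
```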
WHY NAVER? WHY CLOVA?
A chatbot that truly understands what people say, not just their intents and keywords. It is difficult to cover all combinations of human expression just by adding simple questions and answers; the CLOVA chatbot not only understands incoming queries but also chooses where to get the best answer.
✓ Anyone can build and deploy a chatbot
✓ Built-in dashboard to control different domains
✓ Visualize and re-train the chatbot through chat history and statistics
Tested under the same conditions for each chatbot builder
✓ Used a Japanese dataset translated into Korean and English
✓ Randomly selected test queries from the sample data
  - Korean: train 1,665, test 159
  - English: train 512, test 152
[Accuracy comparison chart across builders: Korean 95.6% / 91.8% / 88.7% / 72.3%; English 91.4% / 91.4% / 90.1% / 88.2%]
"Utilizing the volume of data collected from its search engine, CLOVA has strong Korean language processing capabilities, and through its auto-tune features it is able to learn new languages with a smaller amount of data."
Conclusion: Why Clova Chatbot Builder?
• Provides a high-quality conversation system with minimum labor cost → B2B templates & a strong NLP engine
• Supports multilingual conversation → minimal language barrier
• The NLP engine is continuously updated with users' feedback
Having successfully launched the Clova Chatbot Builder framework, it is to be positioned as an "AI Messenger Platform".