Slide 1

No content

Slide 2

TensorFlow Hub: Frontier for Reusable ML Modules
Chris Barsolai, Organizer, Nairobi AI (@chrisbarso)
GDG DevFest Nairobi 2018

Slide 3

No content

Slide 4


Slide 5

No content

Slide 6

No content

Slide 7

https://toolbox.google.com/datasetsearch

Slide 8

TensorFlow Hub: Frontier for Reusable ML Modules
Chris Barsolai (@chrisbarso)

Slide 9

Make machines intelligent. Improve people’s lives.

Slide 10

U.S. NAE Grand Engineering Challenges for the 21st Century (www.engineeringchallenges.org/challenges.aspx)
● Make solar energy affordable
● Provide energy from fusion
● Develop carbon sequestration methods
● Manage the nitrogen cycle
● Provide access to clean water
● Restore & improve urban infrastructure
● Advance health informatics
● Engineer better medicines
● Reverse-engineer the brain
● Prevent nuclear terror
● Secure cyberspace
● Enhance virtual reality
● Advance personalized learning
● Engineer the tools for scientific discovery

Slide 11

Deep Learning for Image-Based Cassava Disease Detection

Slide 12

Using TensorFlow to keep farmers happy and cows healthy
Source: https://www.blog.google/technology/ai/using-tensorflow-keep-farmers-happy-and-cows-healthy/

Slide 13

AutoML: Automated machine learning (“learning to learn”)

Slide 14

Current: Solution = ML expertise + data + computation

Slide 15

Current: Solution = ML expertise + data + computation
Can we turn this into: Solution = data + 100X computation?

Slide 16

Neural Architecture Search to find a model
● Controller: proposes ML models
● Train & evaluate models 20K times
● Iterate to find the most accurate model
(A toy sketch of this search loop follows below.)
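To make the loop above concrete, here is a toy sketch. It is not Google's actual Neural Architecture Search (which trains a recurrent controller with reinforcement learning); it uses random search as a stand-in, and SEARCH_SPACE, propose, and evaluate are hypothetical names introduced only for illustration.

import random

# Toy stand-in for the search loop: propose an architecture,
# train & evaluate it, keep the best one, repeat many times.
SEARCH_SPACE = {"layers": [2, 4, 8], "filters": [32, 64, 128], "kernel": [3, 5]}

def propose():
    # A real controller is a learned model; here we simply sample at random.
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(architecture):
    # Hypothetical stand-in for "train the candidate model and measure
    # validation accuracy"; returns a random score purely for illustration.
    return random.random()

best_architecture, best_score = None, float("-inf")
for _ in range(20000):  # "train & evaluate models 20K times"
    candidate = propose()
    score = evaluate(candidate)
    if score > best_score:
        best_architecture, best_score = candidate, score

The real system replaces the random proposals and the dummy score with a learned controller and full model training, which is why it needs so much computation.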

Slide 17

No content

Slide 18

AutoML outperforms handcrafted models (accuracy, precision@1, vs. computational cost; Inception-ResNet-v2 shown for comparison)
Learning Transferable Architectures for Scalable Image Recognition, Zoph et al. 2017, https://arxiv.org/abs/1707.07012

Slide 19

AutoML outperforms handcrafted models (accuracy, precision@1, vs. computational cost)
Inception-ResNet-v2: years of effort by top ML researchers in the world
Learning Transferable Architectures for Scalable Image Recognition, Zoph et al. 2017, https://arxiv.org/abs/1707.07012

Slide 20

AutoML outperforms handcrafted models (accuracy, precision@1, vs. computational cost)
Learning Transferable Architectures for Scalable Image Recognition, Zoph et al. 2017, https://arxiv.org/abs/1707.07012

Slide 21

cloud.google.com/automl

Slide 22

TensorFlow Hub: The Next Frontier in AI Collaboration
Chris Barsolai (@chrisbarso)

Slide 23

“Doing ML in production is hard.” -Everyone who has ever tried

Slide 24

Because, in addition to the actual ML code...

Slide 25

...you have to worry about so much more: Configuration, Data Collection, Data Verification, Feature Extraction, Process Management Tools, Analysis Tools, Machine Resource Management, Serving Infrastructure, and Monitoring, all surrounding the ML Code.
Source: Sculley et al., Hidden Technical Debt in Machine Learning Systems

Slide 26

Typical ML Pipeline
● During training: batch processing of data
● During serving: "online" processing of requests

Slide 27

Story time: Software vs. ML
● 2005: no source control
● 2010: source control & C.I. ... but not for ML
● 2018: great tools and practices ... some still to be defined

Slide 28

Repositories: Shared Code

Slide 29

TensorFlow Hub: Shared ML

Slide 30

Ingredients of ML: Data, Algorithm, Expertise, Compute

Slide 31

TensorFlow Hub: Shared ML (models packaged as modules)

Slide 32

A repository of pre-trained model components (modules), packaged for one-line reuse
The easiest way to share and use best approaches for your task
tensorflow.org/hub

Slide 33

Modules contain pretrained weights and graphs

Slide 34

Modules are composable, reusable, retrainable
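A minimal sketch of what "composable, reusable, retrainable" means in practice, assuming TF 1.x and the tensorflow_hub package; the placeholders and the dot-product similarity are illustrative additions, not from the slides. The same module instance is applied to two inputs, and its outputs are composed into further ops.

import tensorflow as tf
import tensorflow_hub as hub

# One pretrained module, applied twice and composed into a new graph.
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/1")

questions = tf.placeholder(tf.string, shape=[None])
answers = tf.placeholder(tf.string, shape=[None])

q_vectors = embed(questions)   # reuse the same pretrained weights...
a_vectors = embed(answers)     # ...on a second input
similarity = tf.reduce_sum(q_vectors * a_vectors, axis=1)  # compose downstream ops

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(similarity, feed_dict={
        questions: ["Where is jollof rice?"],
        answers: ["The food court is downstairs."]}))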

Slide 35

Image Retraining

Slide 36

Model structure

Slide 37

No content

Slide 38

# Download and use NASNet feature vector module.
module = hub.Module(
    "https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/1")
features = module(my_images)
logits = tf.layers.dense(features, NUM_CLASSES)
probabilities = tf.nn.softmax(logits)

62,000+ GPU hours!
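For context, a minimal sketch of how this graph might be run end to end under TF 1.x. The names my_images and NUM_CLASSES, the 331x331 input size, and the dummy input are assumptions added for illustration, not part of the slide.

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 5  # hypothetical number of target classes
my_images = tf.placeholder(tf.float32, [None, 331, 331, 3])  # NASNet-large input size

module = hub.Module(
    "https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/1")
features = module(my_images)
logits = tf.layers.dense(features, NUM_CLASSES)
probabilities = tf.nn.softmax(logits)

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    probs = sess.run(probabilities,
                     feed_dict={my_images: np.zeros([1, 331, 331, 3], np.float32)})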

Slide 39

# Download and use NASNet feature vector module.
module = hub.Module(
    "https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/1",
    trainable=True, tags={"train"})
features = module(my_images)
logits = tf.layers.dense(features, NUM_CLASSES)
probabilities = tf.nn.softmax(logits)

Slide 40

Available image modules
● NASNet-A mobile, large
● PNASNet-5 large
● MobileNet V2 (all sizes)
● Inception V1, V2, V3
● Inception-ResNet V2
● ResNet V1 & V2 50/101/152
● MobileNet V1 (all sizes)

Slide 41

Text Classification

Slide 42

Sentence embeddings “The quick brown fox”

Slide 43

Sentence embeddings “DevFest Nairobi is super awesome so far”

Slide 44

Available text modules (a minimal usage sketch follows the list)
● Neural network language model (en, jp, de, es)
● Word2vec trained on Wikipedia
● ELMo (Embeddings from Language Models)
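A minimal usage sketch for one of the listed text modules, assuming TF 1.x, the tensorflow_hub package, and the google/nnlm-en-dim128 module; the exact module handle and the 128-dimensional output are properties of that particular module and should be checked on tfhub.dev.

import tensorflow as tf
import tensorflow_hub as hub

# Each sentence is mapped to a fixed-size embedding vector.
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim128/1")
embeddings = embed(["The quick brown fox",
                    "DevFest Nairobi is super awesome so far"])

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # (2, 128) with this module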

Slide 45

From researchers to you

Slide 46

Universal Sentence Encoder

embed = hub.Module(
    "https://tfhub.dev/google/universal-sentence-encoder/1")
messages = [
    "The food was great, and USIU rocks",
    "I didn't understand a thing today",
    ...
    "Where is jollof rice?"
]
embeddings = embed(messages)
print(session.run(embeddings))

Check out the semantic similarity colab at https://alpha.tfhub.dev/google/universal-sentence-encoder/2

Slide 47

# Use pre-trained universal sentence encoder to build text vector column.
review = hub.text_embedding_column(
    "review", "https://tfhub.dev/google/universal-sentence-encoder/1")

features = {
    "review": np.array(["an arugula masterpiece", "inedible shoe leather", ...])
}
labels = np.array([[1], [0], ...])
input_fn = tf.estimator.inputs.numpy_input_fn(features, labels, shuffle=True)

estimator = tf.estimator.DNNClassifier(hidden_units, [review])
estimator.train(input_fn, max_steps=100)

Slide 48

# Use pre-trained universal sentence encoder to build text vector column.
review = hub.text_embedding_column(
    "review", "https://tfhub.dev/google/universal-sentence-encoder/1",
    trainable=True)

features = {
    "review": np.array(["an arugula masterpiece", "inedible shoe leather", ...])
}
labels = np.array([[1], [0], ...])
input_fn = tf.estimator.inputs.numpy_input_fn(features, labels, shuffle=True)

estimator = tf.estimator.DNNClassifier(hidden_units, [review])
estimator.train(input_fn, max_steps=100)

Slide 49

# Use pre-trained universal sentence encoder to build text vector column.
review = hub.text_embedding_column(
    "review", "https://tfhub.dev/google/universal-sentence-encoder/1")

features = {
    "review": np.array(["an arugula masterpiece", "inedible shoe leather", ...])
}
labels = np.array([[1], [0], ...])
input_fn = tf.estimator.inputs.numpy_input_fn(features, labels, shuffle=True)

estimator = tf.estimator.DNNClassifier(hidden_units, [review])
estimator.train(input_fn, max_steps=100)

Slide 50

No content

Slide 51

More modules
● Progressive GAN (see the sketch below)
● Google Landmarks Deep Local Features (DELF)
● Audio, video, and more coming...
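As an illustration of the Progressive GAN module mentioned in the list, a hedged sketch assuming TF 1.x and the google/progan-128 module; the 512-dimensional latent input and the 128x128x3 output are properties of that specific module and may differ for other GAN modules.

import tensorflow as tf
import tensorflow_hub as hub

# Sample random latent vectors and decode them into images.
progan = hub.Module("https://tfhub.dev/google/progan-128/1")
latents = tf.random_normal([4, 512])  # 4 latent vectors (assumed latent size: 512)
images = progan(latents)              # expected shape: [4, 128, 128, 3], values in [0, 1]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    generated = sess.run(images)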

Slide 52

New Web Experience

Slide 53

Try it! tensorflow.org/hub
Build notebooks. Share modules. #tfhub

Slide 54

Blog: blog.tensorflow.org

Slide 55

No content

Slide 56

One URL to get involved: tensorflow.org/community

Slide 57

No content

Slide 58

To join: https://meetup.com/NairobiAI

Slide 59

No content

Slide 60

No content