At the Google Developer Group (GDG) Seattle I/O Show & Tell, I shared what I learned at Google I/O 2017: AI/machine learning, TensorFlow, and Android.
Google I/O is a 3-day conference for Google developers, held May 17-19, 2017.
• Took place at the Shoreline Amphitheatre next to the Googleplex in Mountain View
• An incredible learning opportunity!
  ◦ 150 tech talk sessions
  ◦ 100 office hours
  ◦ 85 codelabs
  ◦ 19 sandboxes
Google Lens: a set of initiatives that use computer vision.
• Visual search that can then take actions on what it sees
• Integrates with Google Home & Google Photos
Link to Keynote on Google Lens
The TPU (Tensor Processing Unit) was announced at I/O last year and is 15x faster than a GPU.
• Limitation: it can only be used for inference (running the ML model)
• Cloud TPU:
  ◦ Much faster than a TPU
  ◦ Can be chained together to form a pod
  ◦ Can be used for ML training as well
A short video clip here shows the AI/ML booth at I/O.
Google Home:
◦ Voice calls
◦ Bluetooth support
◦ New partners added: Spotify, HBO Now, etc.
◦ Visual responses (connecting to phones, tablets & TVs)
• Supports payment transactions
• Supports smart devices
TensorFlow is the most popular ML open source library.
• 17,500+ commits since Nov 2015
• 475+ non-Google contributors (as of v1.0)
• Significant external commits
Source: I/O '17 talk TensorFlow Frontier
Add the TensorFlow Android Inference library and Java API to build.gradle:

    dependencies {
        compile 'org.tensorflow:tensorflow-android:1.2.0-preview'
    }

Link to talk: Android meets TensorFlow
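Once the dependency is added, on-device inference is driven through the library's TensorFlowInferenceInterface class. Here is a minimal sketch of calling it from Kotlin; the model file name ("model.pb"), tensor names ("input", "output"), and the shapes/sizes are hypothetical placeholders you would replace with your own graph's values:

    import android.content.res.AssetManager
    import org.tensorflow.contrib.android.TensorFlowInferenceInterface

    // Hypothetical example: classify an image with a frozen graph bundled in assets.
    class ImageClassifier(assets: AssetManager) {
        // "model.pb" is a placeholder for a frozen TensorFlow graph in the APK's assets.
        private val inference =
            TensorFlowInferenceInterface(assets, "file:///android_asset/model.pb")

        fun classify(pixels: FloatArray): FloatArray {
            // Feed a 1x224x224x3 image tensor (pixels.size == 1*224*224*3)
            // into the graph's input node.
            inference.feed("input", pixels, 1L, 224L, 224L, 3L)
            // Run the graph up to the requested output node.
            inference.run(arrayOf("output"))
            // Copy the output probabilities into a result buffer.
            val result = FloatArray(1001)
            inference.fetch("output", result)
            return result
        }
    }

The feed/run/fetch cycle mirrors a TensorFlow session: you supply named input tensors, execute the graph, then read back named outputs.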
Kotlin is now an official language for Android!
• Open source under Apache 2
• Drop-in replacement, bi-directionally interoperable with Java
• Bundled in Android Studio 3.0
• Concise and expressive
• Null safety
• Many other nice features...
The official guide to Kotlin for Android developers is here. (short video)
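As a small, self-contained illustration of the conciseness and null safety called out above (the User type and greet function are my own example, not from the talk):

    data class User(val name: String, val email: String?)

    // Safe calls (?.) and the elvis operator (?:) make null handling
    // explicit at compile time, with no risk of a NullPointerException.
    fun greet(user: User?): String = "Hello, ${user?.name ?: "guest"}!"

    fun main() {
        println(greet(User("Ada", null)))  // Hello, Ada!
        println(greet(null))               // Hello, guest!
    }

The data class replaces the usual Java boilerplate of fields, constructor, getters, equals, hashCode, and toString with a single line.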
Talks from I/O on architecture components (see the sketch below):
1. Intro
2. Solving the lifecycle problems
3. Persistence and offline
The official guide to architecture components is here.
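As a minimal sketch of the lifecycle pattern those talks cover, assuming the original android.arch.lifecycle packages shipped at I/O '17 (CounterViewModel is a hypothetical example of my own):

    import android.arch.lifecycle.MutableLiveData
    import android.arch.lifecycle.ViewModel

    // A ViewModel survives configuration changes such as rotation,
    // so UI state outlives any single Activity instance.
    class CounterViewModel : ViewModel() {
        // LiveData is lifecycle-aware: observers are only notified
        // while their Activity/Fragment is in an active state.
        val count = MutableLiveData<Int>().apply { value = 0 }

        fun increment() {
            count.value = (count.value ?: 0) + 1
        }
    }

An Activity would obtain the instance with ViewModelProviders.of(this).get(CounterViewModel::class.java) and observe count, so the counter survives rotation without any onSaveInstanceState plumbing.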
Android Studio 3.0: you can install the Canary version alongside the stable version.
There are 3 major features:
1. App performance profiling tools (CPU, memory & network)
2. Support for Kotlin
3. Improved build speed for apps with large project sizes