Slide 1

Slide 1 text

Assistant Recap (Session 3) Yoichiro Tanaka, I/O Extended 2018 Tokyo@GDG

Slide 2

Slide 2 text

Yoichiro Tanaka Software Engineer / IT Architect Google Developers Expert (Assistant, Web) twitter.com/yoichiro google.com/+YoichiroTanaka

Slide 3

Slide 3 text

Assistant I/O 2018 I believe Google I/O 2018 could well have been renamed "Assistant I/O 2018"!

Slide 4

Slide 4 text

Google Duplex demo video.

Slide 5

Slide 5 text

Continued Conversation means users no longer need to say "Ok Google" before each request.

Slide 6

Slide 6 text

Multiple Actions allows users to ask the Google Assistant for several things in a single sentence.

Slide 7

Slide 7 text

With the Pretty Please feature, when a child includes "please" in a request, the Google Assistant replies with something like "Thanks for saying please".

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

Three Smart Display models will be released in July 2018.

Slide 10

Slide 10 text

The Google Assistant is coming to navigation in Google Maps.

Slide 11

Slide 11 text

The Google Assistant is built into many devices, and many more devices are connected through it.

Slide 12

Slide 12 text

Users use different devices in different situations. Naturally, the content, services and apps they want also differ by situation.

Slide 13

Slide 13 text

We must support both voice-activated and visual devices.

Slide 14

Slide 14 text

The Google Assistant is deeply integrated and connected with Android and the Web.

Slide 15

Slide 15 text

Many features were announced at this I/O to achieve that integration: App Actions, Slices, App Links, AMP Support, Built-in Intents, Routine Suggestions, Table card, Theme customization, Notifications.

Slide 16

Slide 16 text

First, let's look at each feature that integrates the Google Assistant with Android and the Web: App Actions, Slices, App Links, AMP Support, Built-in Intents, Routine Suggestions, Table card, Theme customization, Notifications.

Slide 17

Slide 17 text

Android apps are suggested on the All Apps page based on routines, context, and so on.

Slide 18

Slide 18 text

When users select some text, related apps are suggested. When the user taps a suggestion,

Slide 19

Slide 19 text

The related app is launched via a deep link. That is, content related to the selected text is shown.

Slide 20

Slide 20 text

The Google Assistant also suggests related apps for each piece of content.

Slide 21

Slide 21 text

The AI recognizes the content, then lists the actions the user might want to take. Finally, related apps are suggested.

Slide 22

Slide 22 text

Until now, users picked an app first and then consumed content inside it. From now on, this flow is inverted: the content comes first and leads to the app.

Slide 23

Slide 23 text

Google provides many Built-in Intents, and Google's AI decides which intents apply to a given piece of content.

Slide 24

Slide 24 text

Each intent can have one or more parameters.

Slide 25

Slide 25 text

This is an "Actions.xml" file defining the actions the app can handle. Actions on Google recognizes these definitions.

Slide 26

Slide 26 text

When a user says a phrase like the one above,

Slide 27

Slide 27 text

The Google Assistant shows the search result and suggests apps that can handle the content.

Slide 28

Slide 28 text

When the user taps the suggestion, the app is launched via a deep link based on the URL.

Slide 29

Slide 29 text

Of course, Conversational Actions can handle the Built-in Intents.

Slide 30

Slide 30 text

In the Dialogflow Console, you add the "Play game" built-in intent to the events of one of your intents.

Slide 31

Slide 31 text

Then, if the user says "I want to play a game", the Google Assistant suggests game apps that have the "Play game" intent.
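For the fulfillment side, here is a minimal sketch (not from the slides) using the actions-on-google Node.js library; it assumes a Dialogflow intent named "Play game" with the built-in intent attached as an event, and the prompt and suggestion chips are illustrative.

const { dialogflow, Suggestions } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

// Matched when the Assistant routes the user here via the "Play game"
// built-in intent (for example, "I want to play a game").
app.intent('Play game', (conv) => {
  conv.ask('Welcome! Which game shall we play?');
  conv.ask(new Suggestions('Quiz', 'Trivia'));
});

// Expose the Dialogflow app as an HTTPS fulfillment endpoint.
exports.fulfillment = functions.https.onRequest(app);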

Slide 32

Slide 32 text

Developers can provide Slices UI for their content on the Google Search results page and, later this year, in the Google Assistant.

Slide 33

Slide 33 text

Users can invoke an action directly by clicking a URL provided by Action Links.

Slide 34

Slide 34 text

Each Action Link can be generated in the Actions on Google Console.

Slide 35

Slide 35 text

If you have web content, you can provide and optimize it for the Google Assistant without creating custom Actions.

Slide 36

Slide 36 text

Content Providers can provide "Content Actions" for Google Assistant.

Slide 37

Slide 37 text

For Content Actions, content providers need to use AMP and structured data markup as the content formats.

Slide 38

Slide 38 text

For example, developers can create content using AMP and embed structured data markup in it dynamically.

Slide 39

Slide 39 text

The structured data markup is written according to a Schema.org definition.
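As an illustrative sketch (the values and URL are placeholders, not from the slides), a recipe marked up following the Schema.org Recipe definition could be built in Node.js and serialized into the AMP page as a JSON-LD script tag:

// Hypothetical example: property values and the image URL are placeholders.
const recipeMarkup = {
  '@context': 'https://schema.org',
  '@type': 'Recipe',
  name: 'Simple Pancakes',
  image: 'https://example.com/pancakes.jpg',
  recipeIngredient: ['Flour', 'Milk', 'Egg'],
  recipeInstructions: 'Mix the ingredients and cook on a hot pan.'
};

// Embed the markup dynamically when rendering the AMP document.
const jsonLdTag =
  '<script type="application/ld+json">' + JSON.stringify(recipeMarkup) + '</script>';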

Slide 40

Slide 40 text

The Google Assistant renders the content as a rich component.

Slide 41

Slide 41 text

Once your content supports AMP and structured data markup, you will receive an email explaining how to claim your directory page.

Slide 42

Slide 42 text

The preceding slides covered how to integrate Android and the Web with the Google Assistant: App Actions, Slices, App Links, AMP Support, Built-in Intents, Routine Suggestions, Table card, Theme customization, Notifications.

Slide 43

Slide 43 text

Related sessions
Getting started with App Actions
Android Slices: build interactive results for Google Search
Integrating your Android apps with the Google Assistant
Integrating your content with the Google Assistant using AMP and markup
An introduction to developing Actions for the Google Assistant
How to build a user base for your Actions

Slide 44

Slide 44 text

Next, let's look at the new features of the Google Assistant itself: App Actions, Slices, App Links, AMP Support, Built-in Intents, Routine Suggestions, Table card, Theme customization, Notifications.

Slide 45

Slide 45 text

Users tend to ask the Google Assistant for multiple tasks at once, depending on the situation.

Slide 46

Slide 46 text

Default Routines are sets of actions for specific events. One routine can have one or more actions.

Slide 47

Slide 47 text

Users can not only use the default routines, but also create and use Custom Routines.

Slide 48

Slide 48 text

conv.ask(new Suggestions("Add to routine"));
"Routine Suggestions" is a new feature that suggests adding your Action to one of the user's routines.
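A fuller sketch of the flow (not from the slides): assuming the RegisterUpdate helper in actions-on-google with the 'ROUTINES' frequency, and with illustrative intent names, the registration might look like this.

const { dialogflow, Suggestions, RegisterUpdate } = require('actions-on-google');

const app = dialogflow();

// Offer the "Add to routine" chip shown on the slide.
app.intent('Offer Routine', (conv) => {
  conv.ask('Would you like to add this to one of your routines?');
  conv.ask(new Suggestions('Add to routine'));
});

// If the user accepts, ask the Assistant to add this Action to a routine.
// 'Order Coffee' is a hypothetical intent of this Action.
app.intent('Add To Routine', (conv) => {
  conv.ask(new RegisterUpdate({ intent: 'Order Coffee', frequency: 'ROUTINES' }));
});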

Slide 49

Slide 49 text

For example, users can add the "Order Coffee to Starbucks" action to the "Commuting to work" routine.

Slide 50

Slide 50 text

Visual assistive Another keyword is "Visual assistive".

Slide 51

Slide 51 text

Being visually assistive will become a more important concern than ever before, for example when supporting Smart Displays.

Slide 52

Slide 52 text

A new rich response named the "Table card" shows users tabular information (for example, 3x3) in a more visual and clearer way.

Slide 53

Slide 53 text

conv.ask("Simple response");
conv.ask(new Table({
  dividers: true,
  columns: ["Header1", "Header2", "Header3"],
  rows: [
    ["Row1 Item1", "Row1 Item2", …],
    ["Row2 Item1", "Row2 Item2", …],
    ["Row3 Item1", "Row3 Item2", …]
  ]
}));
Since actions-on-google-nodejs version 2.1.0, the Table card has been supported.

Slide 54

Slide 54 text

"Theme customization" allows you to customize the design of rich responses. When specifying the background image,

Slide 55

Slide 55 text

The background image is reflected in each rich response, such as the basic card shown above.

Slide 56

Slide 56 text

For monetization, Google Sign-In, the Transactions API and other features are already available.
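As a hedged sketch of the Google Sign-In piece (not from the slides; intent names and prompts are illustrative, and it assumes the SignIn helper in actions-on-google), the flow might look like this.

const { dialogflow, SignIn } = require('actions-on-google');

const app = dialogflow();

// Ask the user to link their Google account.
app.intent('Ask To Sign In', (conv) => {
  conv.ask(new SignIn('To save your order history'));
});

// Handled by a Dialogflow intent with the actions_intent_SIGN_IN event attached.
app.intent('Sign In Result', (conv, params, signin) => {
  if (signin.status === 'OK') {
    conv.ask('Thanks for signing in. What would you like to order?');
  } else {
    conv.ask('No problem, you can continue as a guest.');
  }
});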

Slide 57

Slide 57 text

Finally, developers can release their Actions to a limited set of users with the Alpha/Beta release feature.

Slide 58

Slide 58 text

Related sessions
Design Actions for the Google Assistant: beyond smart speakers, to phones and smart displays
10 tips for building better Actions
Best practices for testing your Actions
Personalize Actions for the Google Assistant
Add transactional capabilities to your Actions

Slide 59

Slide 59 text

Voice User Interface Design & Usecase
Building games for the Google Assistant on smart displays
Creating a persona: what does your product sound like?
Build engaging conversations for the Google Assistant using Dialogflow
Smart Home, Automotive and others
Integrate your smart home device with the Google Assistant
What's new in automotive
What's new with the Google Assistant SDK for devices

Slide 60

Slide 60 text

Codelabs
Build Actions for the Google Assistant (Level 1): creating a new project and an intent, using an entity, and handling them with fulfillment code in the inline editor.
Build Actions for the Google Assistant (Level 2): creating your own fulfillment code and a custom entity, adding a deep link and a follow-up intent, and using helpers and SSML.
Smart Home Washer: creating a new project and a washer device running locally, and adding modes, toggles, request sync and report state.

Slide 61

Slide 61 text

We have only just started building the Google Assistant ecosystem. We need your cooperation.

Slide 62

Slide 62 text

Assistant Developer Community Japan
Google Groups: http://bit.ly/assistant-dev-japan
Slack: http://bit.ly/assistant-dev-slack

Slide 63

Slide 63 text

Any questions?