Thanks to the latest advancements in Machine Learning, we're now able to interact with machines through natural language. The age of voice assistants is here, with Siri, Alexa, and others. But, as an iOS developer, what can I do in my existing app to add conversational features?
When we think about building voice-forward features, we tend to think of existing voice assistants such as Alexa and Siri. But what about the fully capable computers we carry with us all the time, our smartphones? Some moments in our day-to-day lives are very well suited to voice interactions: while driving or cooking, for example. Let's not forget that voice interactions are also extremely accessible, not only physically (for people with dexterity or motor impairments) but also cognitively (I think we all have a loved one who really struggles with technology, and people in some emerging countries have very limited access to computers and are not at ease with it).
In this talk, I'll explain which integrations are possible on iOS:
- 1st-party solutions such as the Natural Language framework and Siri Shortcuts (a minimal sketch follows this list)
- 3rd-party solutions such as Porcupine, Snips, Dialogflow, Amazon Lex, RASA and many others
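As a taste of the 1st-party side, here is a minimal sketch, using Apple's Natural Language framework, that spots a person's name in the transcript of a spoken command. The transcript string and the entity check are illustrative assumptions, not code from any shipping app:

```swift
import NaturalLanguage

// Hypothetical transcript of a spoken command; the sentence is just an example.
let transcript = "Remind me to call Alice when I get to the office"

// NLTagger can label tokens with named-entity types (people, places, organizations).
let tagger = NLTagger(tagSchemes: [.nameType])
tagger.string = transcript

let options: NLTagger.Options = [.omitPunctuation, .omitWhitespace]
tagger.enumerateTags(in: transcript.startIndex..<transcript.endIndex,
                     unit: .word,
                     scheme: .nameType,
                     options: options) { tag, range in
    if tag == .personalName {
        print("Person mentioned: \(transcript[range])") // e.g. "Alice"
    }
    return true // keep enumerating the remaining tokens
}
```

The same tagger, pointed at the `.lexicalClass` scheme, also handles part-of-speech tagging, all on-device and with no extra dependencies.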
In summary, this talk will help you think about why and how to bring conversational features to your app.