Ever wondered about the technology behind Google Photos? Or wanted to build an app that performs complex image analysis, like detecting objects, faces, emotions, and landmarks? The new Google Cloud Vision API (currently in alpha) exposes the machine learning models that power Google Photos and Google Image Search. Developers can now access these features with a single REST API call. We’ll learn how to make a request to the Vision API, and then we’ll see it classify images, extract text, and even identify landmarks like Harry Potter World. We’ll end the talk by live-coding an iOS app that implements image detection with the Vision API.
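To give a flavor of what "a single REST API call" looks like, here is a minimal Python sketch that assembles the JSON body for a label-detection request. The `images:annotate` endpoint and the `LABEL_DETECTION` feature type come from the public Vision API; `API_KEY` and the image bytes are placeholders you would supply yourself, and no network call is made here.

```python
import base64
import json

# Placeholder endpoint; substitute your own API key when actually calling it.
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=API_KEY"

def build_annotate_request(image_bytes: bytes, max_results: int = 5) -> str:
    """Build the JSON body for a Vision API LABEL_DETECTION request.

    The image is sent inline as base64, and `features` lists which
    annotations we want back (labels, in this case).
    """
    body = {
        "requests": [
            {
                "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
            }
        ]
    }
    return json.dumps(body)

# Build a request body for some stand-in bytes (a real app would read a photo).
payload = build_annotate_request(b"\x89PNG-placeholder", max_results=3)
print(payload)
```

From there, POSTing `payload` to the endpoint (e.g. with `urllib.request` in Python, or `URLSession` in the iOS app) returns the label annotations as JSON.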