WayFinder - Indoor Navigation with ARCore

Ryan Hodgman
October 27, 2018

In a world where Google Maps has made strangers asking for directions on the street an oddity rather than the norm, somehow we still turn to passersby for help finding our way indoors. Our team tackled this pain point in an experiment with image recognition and ARCore, and picked up plenty of augmented reality lessons along the way. In this talk we share our experience building a navigation solution atop Google’s AR framework.

Presentation by:
Zhenya Li - https://www.linkedin.com/in/zhenya-li-95280b104/
Ryan Hodgman - https://www.linkedin.com/in/ryanhodgman/

Transcript

  1. WayFinder
    Indoor Navigation with ARCore
    Ryan Hodgman Zhenya Li

  2. Augmented
    Reality

  3.

  4.

  5. Google AR
    Experiments
    https://experiments.withgoogle.com/collection/ar

  6.

  7.

  8. Is this you?

  9. Outdoor wayfinding is accurate, ubiquitous, and taken for granted.

  10. Indoor wayfinding is unreliable, confusing, and difficult to deploy.

  11. Can we improve the
    indoor experience using
    augmented reality?

  12.

  13.

  14. Lost?
    Let’s get started

  15. How did we design it?
    What was challenging?
    How did we build it?
    What did we build?

  16. How did we design it?
    What was challenging?
    How did we build it?
    What did we build?

  17.

  18. Stable release (1.0) - 24th Feb 2018

  19.

  20.

  21.

  22. Our approach
    Marker-Based Localisation

  23.

  24.

  25.

  26. [Diagram: recorded origin, recorded anchor, new anchor, physical content and virtual content, with Δ offsets linking them.]

  27. [Diagram: the same layout, with the Δ offsets reapplied from the newly detected anchor to reposition the virtual content.]

  28. Recording Flow
    [Diagram: a camera capture of the physical environment, combined with AR positioning & image capture, produces an anchor image that is saved to the Realtime Database & File Storage.]
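
The recording flow persists each anchor image together with the pose it was captured at, presumably via Firebase's Realtime Database and file storage. The deck doesn't show the schema, so the paths and value shapes below are assumptions; a minimal Kotlin sketch of that persistence step:

    import android.graphics.Bitmap
    import com.google.firebase.database.FirebaseDatabase
    import com.google.firebase.storage.FirebaseStorage
    import java.io.ByteArrayOutputStream

    // Sketch only: persist one anchor image plus the pose it was recorded at.
    fun uploadAnchor(anchorId: String, anchorImage: Bitmap, translation: FloatArray, rotation: FloatArray) {
        // File storage: the captured camera frame that becomes the image target
        val bytes = ByteArrayOutputStream()
            .also { anchorImage.compress(Bitmap.CompressFormat.JPEG, 90, it) }
            .toByteArray()
        FirebaseStorage.getInstance().reference
            .child("anchors/$anchorId.jpg")
            .putBytes(bytes)

        // Realtime Database: the pose the anchor was recorded at
        FirebaseDatabase.getInstance()
            .getReference("anchors/$anchorId")
            .setValue(mapOf("translation" to translation.toList(), "rotation" to rotation.toList()))
    }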

  29. Detection Flow
    [Diagram: the anchor image / image target is loaded from the Realtime Database & File Storage; image detection yields a detected pose, which the localisation algorithm uses to relocalise the AR world within the AR scene.]
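
The deck doesn't spell out the localisation algorithm, but the Δ offsets in the diagrams on slides 26 and 27 map naturally onto ARCore's Pose algebra: express the recorded content relative to the recorded anchor, then apply that same offset to the freshly detected anchor. A minimal sketch (function and parameter names are illustrative, not from the deck):

    import com.google.ar.core.Pose

    // Sketch only: reposition recorded virtual content once its anchor image
    // is re-detected in a new session.
    fun relocaliseContent(
        recordedAnchorPose: Pose,   // anchor pose saved at recording time
        detectedAnchorPose: Pose,   // pose of the same image target in the current session
        recordedContentPose: Pose   // virtual content pose saved at recording time
    ): Pose {
        // Δ: the content expressed relative to the recorded anchor
        val contentRelativeToAnchor = recordedAnchorPose.inverse().compose(recordedContentPose)
        // Apply the same Δ to the newly detected anchor to get the content's
        // pose in the current session's coordinate space
        return detectedAnchorPose.compose(contentRelativeToAnchor)
    }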

  30. How did we design it?
    What was challenging?
    How did we build it?
    What did we build?

  31. How did we design it?
    What was challenging?
    How did we build it?
    What did we build?

  32.

  33. ARCore opens up a brand new
    user interface

  34. ...but in order to achieve
    this...

  35. // Did we find any trackables this frame?
    (0 until state.numTrackableResults).forEach { trackableIndex ->
        val trackable = state.getTrackableResult(trackableIndex)
        TargetRenderLoader.loadedRenders[trackable.trackable.name]?.let { render ->
            onTargetDetectedListener(trackable.trackable.name)
            // Deal with the model view and projection matrices
            val modelViewMatrix = Tool.convertPose2GLMatrix(trackable.pose).data
            val modelViewProjection = FloatArray(16)
            Matrix.translateM(modelViewMatrix, 0, render.translationX, render.translationY, render.translationZ)
            Matrix.scaleM(modelViewMatrix, 0, render.scale, render.scale, render.scale)
            Matrix.multiplyMM(modelViewProjection, 0, projectionMatrix, 0, modelViewMatrix, 0)
            // Activate the shader program and bind the vertex/normal/tex coords
            GLES20.glUseProgram(shaderProgramID)
            GLES20.glDisable(GLES20.GL_CULL_FACE)
            GLES20.glVertexAttribPointer(vertexHandle, 3, GLES20.GL_FLOAT, false, 0, render.model.vertices)
            GLES20.glEnableVertexAttribArray(vertexHandle)
            if (render.model.texCoords != null) {
                GLES20.glVertexAttribPointer(textureCoordHandle, 2, GLES20.GL_FLOAT, false, 0, render.model.texCoords)
                GLES20.glEnableVertexAttribArray(textureCoordHandle)
                GLES20.glActiveTexture(GLES20.GL_TEXTURE0)
                // ... bind textures, draw elements, clean up ...
            }
        }
    }

  36. We are not graphics devs!

  37. “I have no idea what I’m doing”

  38. Release supporting ARCore 1.0
    (ViroCore 1.4) - 27th February 2018

  39. ...now we can do this
    with some friendly APIs!

  40. val posterObject = Object3D()
    posterObject.loadModel(viroContext, modelUri, Object3D.Type.OBJ,
        object : AsyncObject3DListener {
            override fun onObject3DLoaded(obj: Object3D, type: Object3D.Type) {
                val texture = BitmapFactory
                    .decodeStream(context.assets.open(textureFileUri))
                val material = Material().apply {
                    diffuseTexture = Texture(texture, Texture.Format.RGBA8, true, true)
                }
                posterObject.geometry.materials = listOf(material)
            }

            override fun onObject3DFailed(error: String) {
                // Handle the failed model load
            }
        })
    anchorNode.addChildNode(posterObject)

  41. But wait, why are we using a 3rd party
    framework?

  42. Sceneform
    Core ML
    ARKit
    SceneKit
    ML Kit
    ARCore

  43. Sceneform
    Initial release (1.0) - 9th May 2018

  44. Sceneform makes this
    even easier!

  45. ModelRenderable.builder()
        .setSource(context, Uri.parse(modelUri))
        .build()
        .thenAccept { posterModel ->
            val posterNode = Node()
            posterNode.setRenderable(posterModel)
            anchorNode.addChild(posterNode)
        }

  46. Sceneform Asset Definition (.sfa) → Sceneform Binary (.sfb)

  47.

  48. How did we design it?
    What was challenging?
    How did we build it?
    What did we build?

  49. How did we design it?
    What was challenging?
    How did we build it?
    What did we build?

  50.

  51. https://plugins.jetbrains.com/plugin/9717-adb-wifi-connect

  52. ‘AR’chitecture is complex

  53.

  54. What makes a good image target?

  55. Good image targets are:
    ● Recognisable
    Don’t. Do.

  56. Good image targets are:
    ● Recognisable
    ● Unique
    Do. Don’t.

  57. Good image targets are:
    ● Recognisable
    ● Unique
    ● Stable
    Don’t. Do.
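
The deck doesn't name the detection library on these slides, but the same recognisable / unique / stable criteria apply when registering targets with ARCore's own Augmented Images API (available since ARCore 1.2). A minimal sketch, offered as an illustration rather than the app's actual implementation:

    import android.graphics.Bitmap
    import com.google.ar.core.AugmentedImageDatabase
    import com.google.ar.core.Config
    import com.google.ar.core.Session

    // Sketch only: build an image-target database and enable it on the session.
    fun enableImageTargets(session: Session, anchorImages: Map<String, Bitmap>) {
        val database = AugmentedImageDatabase(session)
        anchorImages.forEach { (name, bitmap) ->
            // Low-contrast or repetitive images can be rejected here
            // with an ImageInsufficientQualityException
            database.addImage(name, bitmap)
        }
        session.configure(Config(session).apply { augmentedImageDatabase = database })
    }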

  58. Images are expensive!
    [Chart: storage used by anchor images over time, climbing from 0 B towards 1 GB.]

  59. Anchor image usability

  60. How did we design it?
    What was challenging?
    How did we build it?
    What did we build?

  61. How did we design it?
    What was challenging?
    How did we build it?
    What did we build?

  62.

  63. Our AR design mindset

  64. Prefer 3D over 2D
    We don’t learn if we don’t try

  65. Validate often
    Familiarity with AR breeds a loss of perspective

  66. Keep it real
    Grant users the illusion of physical presence

  67. Let’s break down some interactions

  68. How do we...
    ...create a stable environment?

  69. How do we create a stable environment?
    Environmental awareness

  70. Environmental awareness

  71. Environmental awareness

  72. Environmental awareness

  73. Environmental awareness
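
Much of the stability work comes down to not showing or placing content until tracking is solid. A minimal gating check against ARCore's tracking state, sketched here as an illustration rather than code from the app:

    import com.google.ar.core.Frame
    import com.google.ar.core.Plane
    import com.google.ar.core.Session
    import com.google.ar.core.TrackingState

    // Sketch only: treat the environment as stable once the camera is tracking
    // and at least one detected plane is tracked as well.
    fun isEnvironmentStable(session: Session, frame: Frame): Boolean {
        if (frame.camera.trackingState != TrackingState.TRACKING) return false
        return session.getAllTrackables(Plane::class.java)
            .any { it.trackingState == TrackingState.TRACKING }
    }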

  74. How do we...
    ...create a stable environment?
    ...indicate success?

  75. How do we indicate success?
    Image detection

  76. Image detection

  77. How do we...
    ...create a stable environment?
    ...indicate success?
    ...highlight offscreen content?

  78. How do we highlight offscreen content?
    Expanding the viewport

  79. Expanding the viewport

  80. Expanding the viewport
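
The slides only show the resulting edge indicators, but one way to drive them is to project the target's world position into screen space with Sceneform's camera and check whether it lands inside the viewport. A sketch with assumed names:

    import com.google.ar.sceneform.Camera
    import com.google.ar.sceneform.math.Vector3

    // Sketch only: returns null when the target is already visible, otherwise
    // the screen-space point an edge indicator should aim towards.
    fun offscreenIndicatorTarget(
        camera: Camera,
        worldPosition: Vector3,
        viewWidth: Int,
        viewHeight: Int
    ): Vector3? {
        val screenPoint = camera.worldToScreenPoint(worldPosition)
        val onScreen = screenPoint.x in 0f..viewWidth.toFloat() &&
            screenPoint.y in 0f..viewHeight.toFloat()
        return if (onScreen) null else screenPoint
    }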

  81. How do we...
    ...create a stable environment?
    ...indicate success?
    ...highlight offscreen content?
    ...maintain immersion?

  82. How do we maintain immersion?
    Use of screen space

  83. Use of screen space

  84. Use of screen space

  85. How do we...
    ...create a stable environment?
    ...indicate success?
    ...highlight offscreen content?
    ...maintain immersion?
    ...prevent fatigue?

  86. How do we prevent fatigue?
    Path display

  87. Path display

  88. How do we...
    ...create a stable environment?
    ...indicate success?
    ...highlight offscreen content?
    ...maintain immersion?
    ...prevent fatigue?
    ...guide content placement?

  89. How do we guide content placement?
    Placing objects

  90. Placing objects

  91. Placing objects
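
The placement slides show the UI rather than code; with Sceneform's UX fragment, a tap-to-place interaction can be wired up roughly as below. The renderable is assumed to be loaded as on slide 45, and the names are illustrative:

    import com.google.ar.sceneform.AnchorNode
    import com.google.ar.sceneform.Node
    import com.google.ar.sceneform.rendering.ModelRenderable
    import com.google.ar.sceneform.ux.ArFragment

    // Sketch only: place a model wherever the user taps a detected plane.
    fun enableTapToPlace(arFragment: ArFragment, posterModel: ModelRenderable) {
        arFragment.setOnTapArPlaneListener { hitResult, _, _ ->
            // Anchor the content to the tapped point on the plane
            val anchorNode = AnchorNode(hitResult.createAnchor()).apply {
                setParent(arFragment.arSceneView.scene)
            }
            Node().apply {
                renderable = posterModel
                setParent(anchorNode)
            }
        }
    }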

  92. How do we...
    ...create a stable environment?
    ...indicate success?
    ...highlight offscreen content?
    ...maintain immersion?
    ...prevent fatigue?
    ...guide content placement?
    ...reposition content?

  93. How do we reposition content?
    Dragging objects

  94. Dragging objects

  95. Dragging objects

  96. How do we...
    ...create a stable environment?
    ...indicate success?
    ...highlight offscreen content?
    ...maintain immersion?
    ...prevent fatigue?
    ...guide content placement?
    ...reposition content?
    ...switch modes of interaction?

  97. How do we switch modes of interaction?
    Selecting objects

  98. Selecting objects
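
Selection and dragging don't have to be hand-rolled either: Sceneform UX's TransformableNode routes select and drag gestures through the fragment's transformation system. A sketch, assuming an anchor node created as in the placement example above:

    import com.google.ar.sceneform.AnchorNode
    import com.google.ar.sceneform.rendering.ModelRenderable
    import com.google.ar.sceneform.ux.ArFragment
    import com.google.ar.sceneform.ux.TransformableNode

    // Sketch only: a node the user can select and then drag along detected planes.
    fun addDraggableModel(arFragment: ArFragment, anchorNode: AnchorNode, posterModel: ModelRenderable) {
        TransformableNode(arFragment.transformationSystem).apply {
            renderable = posterModel
            setParent(anchorNode)
            // Make it the currently selected node so gestures apply to it
            select()
        }
    }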

  99. How do we...
    ...create a stable environment?
    ...indicate success?
    ...highlight offscreen content?
    ...maintain immersion?
    ...prevent fatigue?
    ...guide content placement?
    ...reposition content?
    ...convey interaction modes?

  100. AR design best practices are still undefined!
    Here be dragons

  101. How did we design it?
    What was challenging?
    How did we build it?
    What did we build?

  102. How did we design it?
    What was challenging?
    How did we build it?
    What did we build?

  103. Takeaways

  104. Designing for 3D is fundamentally
    different to 2D design

  105.

  106.

  107. AR is the freedom to make
    something new

  108. We made it!
    Ryan Hodgman
    @Glyphalcon
    Zhenya Li
    @GargoyleLizy

  109. Come see us at our booth for a
    demo of the WayFinder app!

  110. Illustration
    Nate Cooper
    Jack Walsh
    Talk Feedback
    Flora Wang
    Ankitha Sheeba
    Konrad Biernacki
    Amir Abdi
    With enormous thanks to...
    App Development & Design
    Alex Chiviliov
    Alex Lai
    Yasitha Chinthaka
    Amir Abdi
    Julius Canute
    Matthew Falzon
    Sucharitha Alli
    Ibramsha Sirajudeen
    Bindiya D’Souza
    Alex Waluyo
    Aurenia Permadi
    Michelle Ritoli
    Wiliane Chua
    Miffra Lee
    Joshua Kenzie
    Tony Archibald
    Jack Mauleekoonphairoj
    Yasmeen Vorha
