Slide 1

Slide 1 text

WebXR Beyond WebGL
HTML5 Conference 2018 | 2018-11-25

Slide 2

Slide 2 text

Akihiro Oyamada (@yomotsu)
Frontend Engineer at PixelGrid, Inc.

Slide 3

Slide 3 text

https://www.youtube.com/watch?v=ttDLGV5IJCM

Slide 4

Slide 4 text

This will be possible on
 every mobile phone

Slide 5

Slide 5 text

1. WebXR Device API

Slide 6

Slide 6 text

WebXR Device API
• The API for both AR and VR
  (the WebVR API will be replaced by the WebXR Device API)
• Close relationship with Khronos’ OpenXR

Slide 7

Slide 7 text

As of Nov. 2018:
• Currently only available in Chrome Canary with flags
• Only works over https or on localhost

Slide 8

Slide 8 text

Connect your Android Chrome to localhost
 with DevTools on desktop

Slide 9

Slide 9 text

FYI: an Origin Trial token is required, except on localhost
https://github.com/GoogleChrome/OriginTrials/blob/gh-pages/developer-guide.md
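The token is typically delivered with an origin-trial meta tag (or an Origin-Trial HTTP header). A minimal sketch of injecting it at runtime; TOKEN is a placeholder for the token issued for your origin:

const meta = document.createElement( 'meta' );
meta.httpEquiv = 'origin-trial';
meta.content = 'TOKEN';
document.head.appendChild( meta );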

Slide 10

Slide 10 text

No content

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

Consists of 2 sources

Slide 13

Slide 13 text

WebGL scene (under your control) + XR imagery (camera input) → WebGL framebuffer

Slide 14

Slide 14 text

WebGL framebuffer

Slide 15

Slide 15 text

How does it work?

Slide 16

Slide 16 text

How does it work?

const device = await navigator.xr.requestDevice();
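Since the API is experimental, it may be worth feature-detecting before calling it. A minimal sketch, assuming the Nov. 2018 draft API described in these slides:

async function initXR() {

  if ( ! navigator.xr ) {

    console.log( 'WebXR Device API is not available' );
    return;

  }

  const device = await navigator.xr.requestDevice();
  // ready to request an XRSession (see the next slides)

}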

Slide 18

Slide 18 text

XRSession

device.requestSession( {
  immersive: false,
  outputContext: xrContext,
  environmentIntegration: true,
} );

Slide 19

Slide 19 text

XRSession

const glCanvas = document.createElement( 'canvas' );
const gl = glCanvas.getContext( 'webgl', { xrCompatible: true } );
xrSession.baseLayer = new XRWebGLLayer( xrSession, gl );

The WebGL context is to be used as a source for XR imagery.

Slide 20

Slide 20 text

The framebuffer can be filled with XR imagery:

gl.bindFramebuffer( gl.FRAMEBUFFER, xrSession.baseLayer.framebuffer );

Slide 21

Slide 21 text

Draw your 3D scene on it

Slide 22

Slide 22 text

No content

Slide 23

Slide 23 text

frameOfReference: the real-world coordinate system

Slide 24

Slide 24 text

XRFrame (delta) relative to the frameOfReference

Slide 25

Slide 25 text

xrFrame.views[ n ].projectionMatrix

const pose = xrFrame.getDevicePose( frameOfRef );
pose.getViewMatrix( view );

frameOfReference + XRFrame (delta) → position and orientation

Slide 26

Slide 26 text

Demo

Slide 27

Slide 27 text

Dive into the code!

Slide 28

Slide 28 text

const width = window.innerWidth;
const height = window.innerHeight;

navigator.xr.requestDevice().then( ( device ) => {

  const outputCanvas = document.getElementById( 'xrCanvas' );
  outputCanvas.width = width;
  outputCanvas.height = height;
  const xrContext = outputCanvas.getContext( 'xrpresent' );

  // the session request must be made in a user action, such as a click
  window.addEventListener( 'click', onEnterAR );

  async function onEnterAR() {

    const xrSession = await device.requestSession( {
      outputContext: xrContext,
      environmentIntegration: true,
    } );

    const renderer = new THREE.WebGLRenderer();
    renderer.autoClear = false;
    renderer.setSize( width, height );

Slide 31

Slide 31 text

window.addEventListener( 'click', onEnterAR );

async function onEnterAR() {

  const xrSession = await device.requestSession( {
    outputContext: xrContext,
    environmentIntegration: true,
  } );

  const renderer = new THREE.WebGLRenderer();
  renderer.autoClear = false;
  renderer.setSize( width, height );

  // bind the gl context to the XR session
  const gl = renderer.getContext();
  gl.setCompatibleXRDevice( xrSession.device );
  xrSession.baseLayer = new XRWebGLLayer( xrSession, gl );

  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera();
  camera.matrixAutoUpdate = false;

  const box = new THREE.Mesh(
    new THREE.BoxBufferGeometry( .2, .2, .2 ),
    new THREE.MeshNormalMaterial()

Slide 34

Slide 34 text

xrSession.baseLayer = new XRWebGLLayer( xrSession, gl );

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera();
camera.matrixAutoUpdate = false;

const box = new THREE.Mesh(
  new THREE.BoxBufferGeometry( .2, .2, .2 ),
  new THREE.MeshNormalMaterial()
);
scene.add( box );

const frameOfRef = await xrSession.requestFrameOfReference( 'eye-level' );
xrSession.requestAnimationFrame( onDrawFrame );

function onDrawFrame( timestamp, xrFrame ) {

  const session = xrFrame.session; // xrSession === xrFrame.session
  const pose = xrFrame.getDevicePose( frameOfRef );
  session.requestAnimationFrame( onDrawFrame );
  gl.bindFramebuffer( gl.FRAMEBUFFER, session.baseLayer.framebuffer );

Slide 35

Slide 35 text

scene.add( box );

const frameOfRef = await xrSession.requestFrameOfReference( 'eye-level' );
xrSession.requestAnimationFrame( onDrawFrame );

function onDrawFrame( timestamp, xrFrame ) {

  const session = xrFrame.session; // xrSession === xrFrame.session
  const pose = xrFrame.getDevicePose( frameOfRef );
  session.requestAnimationFrame( onDrawFrame );
  gl.bindFramebuffer( gl.FRAMEBUFFER, session.baseLayer.framebuffer );

  if ( ! pose ) return;

  // if the session is for both right and left eyes, the length of views will be 2.
  // if not, the length is 1.
  xrFrame.views.forEach( ( view ) => {

    const viewport = session.baseLayer.getViewport( view );
    renderer.setSize( viewport.width, viewport.height );

Slide 37

Slide 37 text

function onDrawFrame( timestamp, xrFrame ) {

  const session = xrFrame.session; // xrSession === xrFrame.session
  const pose = xrFrame.getDevicePose( frameOfRef );
  session.requestAnimationFrame( onDrawFrame );
  gl.bindFramebuffer( gl.FRAMEBUFFER, session.baseLayer.framebuffer );

  if ( ! pose ) return;

  // if the session is for both right and left eyes, the length of views will be 2.
  // if not, the length is 1.
  xrFrame.views.forEach( ( view ) => {

    const viewport = session.baseLayer.getViewport( view );
    renderer.setSize( viewport.width, viewport.height );

    camera.projectionMatrix.fromArray( view.projectionMatrix );
    const viewMatrix = new THREE.Matrix4().fromArray( pose.getViewMatrix( view ) );
    camera.matrix.getInverse( viewMatrix );
    camera.updateMatrixWorld( true );

    renderer.clearDepth();
    renderer.render( scene, camera );

Slide 39

Slide 39 text

if ( ! pose ) return;

// if the session is for both right and left eyes, the length of views will be 2.
// if not, the length is 1.
xrFrame.views.forEach( ( view ) => {

  const viewport = session.baseLayer.getViewport( view );
  renderer.setSize( viewport.width, viewport.height );

  camera.projectionMatrix.fromArray( view.projectionMatrix );
  const viewMatrix = new THREE.Matrix4().fromArray( pose.getViewMatrix( view ) );
  camera.matrix.getInverse( viewMatrix );
  camera.updateMatrixWorld( true );

  renderer.clearDepth();
  renderer.render( scene, camera );

} );

} // end of onDrawFrame

} // end of onEnterAR

} ); // end of requestDevice().then()

Slide 40

Slide 40 text

Hit test

Slide 41

Slide 41 text

Find the intersection point of a ray with a real-world surface
Source: https://codelabs.developers.google.com/codelabs/ar-with-webxr/index.html#4

Slide 42

Slide 42 text

Demo

Slide 43

Slide 43 text

The code

Slide 44

Slide 44 text

The same as the previous one…

const width = window.innerWidth;
const height = window.innerHeight;
const startButton = document.getElementById( 'startButton' );

navigator.xr.requestDevice().then( ( device ) => {

  const outputCanvas = document.getElementById( 'xrCanvas' );
  outputCanvas.width = width;
  outputCanvas.height = height;
  const xrContext = outputCanvas.getContext( 'xrpresent' );

  // the session request must be made in a user action, such as a click
  startButton.addEventListener( 'click', onEnterAR );

  async function onEnterAR() {

    startButton.style.display = 'none';

    const xrSession = await device.requestSession( {

Slide 45

Slide 45 text

Add a click action

    renderer.render( scene, camera );

  } );

}

window.addEventListener( 'click', onClick );

// use Raycaster to make the ray origin and direction
const raycaster = new THREE.Raycaster();

// onClick must be async, since the hit test will be awaited
async function onClick() {

  const x = 0;
  const y = 0;
  raycaster.setFromCamera( { x, y }, camera );
  const origin = new Float32Array( raycaster.ray.origin.toArray() );
  const direction = new Float32Array( raycaster.ray.direction.toArray() );
  const hits = await xrSession.requestHitTest( origin, direction, frameOfRef );

  if ( hits.length ) {

Slide 47

Slide 47 text

window.addEventListener( 'click', onClick );

// use Raycaster to make the ray origin and direction
const raycaster = new THREE.Raycaster();

// onClick must be async, since the hit test will be awaited
async function onClick() {

  const x = 0;
  const y = 0;
  raycaster.setFromCamera( { x, y }, camera );
  const origin = new Float32Array( raycaster.ray.origin.toArray() );
  const direction = new Float32Array( raycaster.ray.direction.toArray() );
  const hits = await xrSession.requestHitTest( origin, direction, frameOfRef );

  if ( hits.length ) {

    const hit = hits[ 0 ];
    const hitMatrix = new THREE.Matrix4().fromArray( hit.hitMatrix );
    const box = new THREE.Mesh(

Slide 48

Slide 48 text

const raycaster = new THREE.Raycaster();

// onClick must be async, since the hit test will be awaited
async function onClick() {

  const x = 0;
  const y = 0;
  raycaster.setFromCamera( { x, y }, camera );
  const origin = new Float32Array( raycaster.ray.origin.toArray() );
  const direction = new Float32Array( raycaster.ray.direction.toArray() );
  const hits = await xrSession.requestHitTest( origin, direction, frameOfRef );

  if ( hits.length ) {

    const hit = hits[ 0 ];
    const hitMatrix = new THREE.Matrix4().fromArray( hit.hitMatrix );
    const box = new THREE.Mesh(
      new THREE.BoxBufferGeometry( .2, .2, .2 ),
      new THREE.MeshNormalMaterial()
    );

Slide 50

Slide 50 text

raycaster.setFromCamera( { x, y }, camera );
const origin = new Float32Array( raycaster.ray.origin.toArray() );
const direction = new Float32Array( raycaster.ray.direction.toArray() );
const hits = await xrSession.requestHitTest( origin, direction, frameOfRef );

if ( hits.length ) {

  const hit = hits[ 0 ];
  const hitMatrix = new THREE.Matrix4().fromArray( hit.hitMatrix );
  const box = new THREE.Mesh(
    new THREE.BoxBufferGeometry( .2, .2, .2 ),
    new THREE.MeshNormalMaterial()
  );
  box.position.setFromMatrixPosition( hitMatrix );
  scene.add( box );

}

} // end of onClick

} // end of onEnterAR

Slide 52

Slide 52 text

Demo

Slide 53

Slide 53 text

How about iOS?

Slide 54

Slide 54 text

2. AR Quick Look

Slide 55

Slide 55 text

AR Quick Look
• Shipped with iOS 12
• Works exclusively in Safari
  (doesn’t work even in iOS Chrome)
• Special HTML syntax
• Apple’s proprietary technology

Slide 56

Slide 56 text

Slide 57

Slide 57 text

Link to the usdz file with rel="ar". Must contain one <img> or <picture> child.
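A minimal sketch of building that link with the DOM; the usdz path and preview image are placeholders:

const anchor = document.createElement( 'a' );
anchor.rel = 'ar';
anchor.href = './models/my-model.usdz';

const preview = document.createElement( 'img' );
preview.src = './models/my-model-preview.jpg';
anchor.appendChild( preview ); // the <img> child is required

document.body.appendChild( anchor );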

Slide 58

Slide 58 text

Demo

Slide 59

Slide 59 text

USDZ

Slide 60

Slide 60 text

USDZ
• Stands for “Universal Scene Description”, archived with Zip
• 3D model format for AR Quick Look
• Created by Apple and Pixar

Slide 61

Slide 61 text

How to prepare USDZ
• With “usdz_converter”, an Xcode command-line tool
  (macOS only)
• Vectary (web service)
  (cannot configure details such as size)

Slide 62

Slide 62 text

Limitations
• Up to 10M polygons
• Up to 10 seconds of animation
• Up to 2048×2048 texture size

Slide 63

Slide 63 text

USDZ Converter (Terminal.app)

$ xcrun usdz_converter ./my-model.obj my-model.usdz \
    -color_map albedo.jpg \
    -metallic_map metallic.jpg \
    -roughness_map roughness.jpg \
    -normal_map normal.jpg \
    -ao_map ao.jpg \
    -emissive_map emissive.jpg

Slide 65

Slide 65 text

Terminal.app

$ xcrun usdz_converter ./my-model.obj my-model.usdz \
    -color_map albedo.jpg \
    -metallic_map metallic.jpg \
    -roughness_map roughness.jpg \
    -normal_map normal.jpg \
    -ao_map ao.jpg \
    -emissive_map emissive.jpg

File input: ./my-model.obj · Output name: my-model.usdz · Option name: -color_map · Option value: albedo.jpg

Slide 66

Slide 66 text

https://www.vectary.com/ Vectary

Slide 67

Slide 67 text

https://www.vectary.com/

Slide 68

Slide 68 text

https://www.vectary.com/ USDZ export

Slide 69

Slide 69 text

• Apple’s proprietary technology
  (hopefully a temporary solution until the WebXR Device API lands)
• Some limitations in USDZ
• Just pops up and shows a model in AR
  (cannot be used for games and other interactive content)

Slide 70

Slide 70 text

3D Model format
 for Web Apps?

Slide 71

Slide 71 text

3. glTF

Slide 72

Slide 72 text

What is glTF
• Stands for GL Transmission Format
• Open standard 3D model format
• The “JPEG of 3D”
• Maintained by Khronos

Slide 73

Slide 73 text

What is glTF
• JSON as the container format with binary payloads,
  or a single packed binary file called glb
• Animation supported
• Extensible, just like the WebGL spec
  (like Adobe Fireworks PNG)

Slide 74

Slide 74 text

No content

Slide 75

Slide 75 text

glTF of the present
• Supported by many 3D modeling tools
• Several WebGL libraries support glTF loading
  (such as three.js, Babylon.js, Cesium)
• Microsoft Paint 3D, Office and others use glb as their 3D model format
• Adobe Animate has a glTF exporter
• Facebook’s 3D posts use glb
• VRM: a glTF-based extended format for humanoid avatars
  (for Virtual YouTubers, VRChat and others)

Slide 76

Slide 76 text

https://www.youtube.com/watch?v=H2XoeQmkchw

Slide 77

Slide 77 text

https://www.youtube.com/watch?v=zzDM42PdqZk

Slide 78

Slide 78 text

Load glTF in three.js

Slide 79

Slide 79 text

const width = window.innerWidth;
const height = window.innerHeight;

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 45, width / height, 0.001, 100 );
camera.position.set( 0, 0, 0.5 );

const renderer = new THREE.WebGLRenderer();
renderer.setSize( width, height );
renderer.gammaInput = true;
renderer.gammaOutput = true;
document.body.appendChild( renderer.domElement );

scene.add( new THREE.HemisphereLight( 0xffffff, 0x332222 ) );

Slide 80

Slide 80 text

document.body.appendChild( renderer.domElement );

scene.add( new THREE.HemisphereLight( 0xffffff, 0x332222 ) );

const loader = new THREE.GLTFLoader();
loader.load( './models/barger/barger.gltf', function ( gltf ) {

  scene.add( gltf.scene );

} );

( function anim () {

  requestAnimationFrame( anim );

Slide 81

Slide 81 text

  scene.add( gltf.scene );

} );

( function anim () {

  requestAnimationFrame( anim );
  renderer.render( scene, camera );

} )();

Slide 82

Slide 82 text

No content

Slide 83

Slide 83 text

Demo

Slide 84

Slide 84 text

glTF Animation
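A minimal sketch of playing glTF animation clips in three.js, reusing the scene, camera and renderer from the previous slides; the model path is a placeholder:

const clock = new THREE.Clock();
let mixer;

const loader = new THREE.GLTFLoader();
loader.load( './models/animated-model.gltf', function ( gltf ) {

  scene.add( gltf.scene );

  // play every animation clip bundled in the glTF
  mixer = new THREE.AnimationMixer( gltf.scene );
  gltf.animations.forEach( ( clip ) => mixer.clipAction( clip ).play() );

} );

( function anim () {

  requestAnimationFrame( anim );
  if ( mixer ) mixer.update( clock.getDelta() ); // advance the animation by the elapsed time
  renderer.render( scene, camera );

} )();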

Slide 85

Slide 85 text

Demo

Slide 86

Slide 86 text

glTF is very popular
 as the standard

Slide 87

Slide 87 text

Security issues

Slide 88

Slide 88 text

• Gaze tracking
  What a user types on a virtual keyboard can be inferred from gaze direction in a VR environment
• Trusted environment
  Low FPS and pose-tracking errors cause motion sickness
• Fingerprinting
  A user’s room shape, or even face shape, can be identified from depth data

Slide 89

Slide 89 text

Conclusion

Slide 90

Slide 90 text

XR is coming to the Web

Slide 91

Slide 91 text

WebXR Device API
• A Web API (in development)
• For both VR and AR
• Just works in web browsers:
  no add-ons or installation required

Slide 92

Slide 92 text

AR Quick Look
• AR for iOS
• Uses USDZ
• Basic AR features

Slide 93

Slide 93 text

glTF
• The standard
• 3D model format in JSON or binary
• Can be seen in many places
• Loaders are available in JavaScript

Slide 94

Slide 94 text

The Web will be connected
 to the real world

Source: https://www.netflix.com/jp/title/80182418
© Shirow Masamune · Production I.G / Kodansha / Ghost in the Shell Production Committee

Slide 95

Slide 95 text

gl.finish(); @yomotsu