
Ben Sandofsky: Building Periscope Sketches

Presented at AltConf 2016

Realm

June 13, 2016

Transcript

  1. Building Periscope Sketches Ben Sandofsky @sandofsky

  2. None
  3. None
  4. New Languages • New Platforms • New Processes • New Frameworks

  5. iPhone Android Product OpenGL Platform ✅ Abstract Design

  6. None
  7. Today we take a feature from Pitch to Production, with a few dead ends and bugs along the way. You’ll walk away with a little more insight into graphics on iOS, and leveraging the GPU.
  8. “We think we want to draw on video.”

  9. Act 1: The Prototype

  10. What’s the minimum you need to answer the questions? It needs viewers. It needs interactive drawing. “Should drawings be baked into the video?” “Do we even want the feature?”
  11. Given the Architecture…
     • Video Stream (320x568)
     • JSON stream
     • NTP embedded for syncing
     • Arbitrary payloads are accepted on staging
  12. Prototype Dev Time: 3 Days

  13. None
  14. The Hacky Design
     • New gesture = new stroke object
     • Each stroke gets its own CAShapeLayer
     • Upload a snapshot once a second
     • Animate strokeStart and strokeEnd
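
A minimal sketch of the prototype’s CAShapeLayer approach described in slide 14; the path, view, and constants here are illustrative, not the actual Periscope code:

     // One CAShapeLayer per stroke; "drawing" is animating strokeEnd from 0 to 1.
     CAShapeLayer *strokeLayer = [CAShapeLayer layer];
     strokeLayer.path = strokePath.CGPath;              // UIBezierPath built from the gesture's touches
     strokeLayer.strokeColor = [UIColor redColor].CGColor;
     strokeLayer.fillColor = nil;
     strokeLayer.lineWidth = 4.0;
     [drawingView.layer addSublayer:strokeLayer];

     // Animate the stroke appearing by moving strokeEnd from 0 to 1.
     CABasicAnimation *draw = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
     draw.fromValue = @0.0;
     draw.toValue = @1.0;
     draw.duration = 0.25;
     [strokeLayer addAnimation:draw forKey:@"draw"];
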
  15. Should we build it? Yes, let’s build it

  16. Should drawings be baked into the video?

  17. Thoughts on Drawings…
     • They’re essential to the message. What happens when you save to the camera roll?
     • Do we need renderers for every single platform?
     • How do we version the drawing schema?
  18. Should drawings be baked into the video? Bake it into the video.
  19. Lingering doubt: Drawing snapshot stuff.

  20. Act 2: Real Code

  21. The Video Stack
     • Powered by GPUImage
     • Filters convert iPhone, GoPro and DJI sources: 2-plane and 3-plane YUV to RGB
     • Scaling to 320x568
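
For context, a generic GPUImage pipeline looks roughly like this; a sketch only, with an arbitrary filter standing in for Periscope’s actual YUV-conversion and scaling chain:

     // Camera source -> filter -> on-screen preview, all processed on the GPU.
     GPUImageVideoCamera *camera =
         [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                             cameraPosition:AVCaptureDevicePositionBack];
     camera.outputImageOrientation = UIInterfaceOrientationPortrait;

     GPUImageGammaFilter *filter = [[GPUImageGammaFilter alloc] init]; // any GPUImageFilter subclass
     GPUImageView *previewView = (GPUImageView *)self.view;

     [camera addTarget:filter];
     [filter addTarget:previewView];
     [camera startCameraCapture];
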
  22. Video Processor Network Stack

  23. Touch Interpreter Sketch Engine Video Processor Network Stack

  24. Day 1: The Red Dot

  25. OpenGL in 10 Minutes

  26. Why People Struggle with OpenGL
     • It’s a wacky state machine based on “binding”
     • Multithreading is a battle
     • Legacy support, e.g. client side vertex arrays
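
The “binding” point in one snippet: GL calls don’t take the object they operate on; they act on whatever is currently bound to a target. A small illustration (width, height, and pixels are assumed to exist elsewhere):

     // glTexImage2D doesn't take a texture handle; it uploads into whatever
     // texture is currently bound to GL_TEXTURE_2D.
     GLuint texture;
     glGenTextures(1, &texture);
     glBindTexture(GL_TEXTURE_2D, texture);                        // mutate global state
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels);              // affects the bound texture
     // If other code re-binds GL_TEXTURE_2D in between, these calls silently
     // target the wrong texture, a classic source of bugs.
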
  27. GPUs are great at Parallelism

                     CPU     GPU
     Clock Speed     1,400   450
     Cores           2       4

     Just avoid data transfer.
  28. It isn’t always optimal for drawing. It’s for 3D, but we work around it.
  29. Shaders are tiny programs. Vertex Shaders run on every vertex. Fragment Shaders run on every pixel.
  30. Vertices are points that make up your 3D object. You can include additional data with each point, for use in rendering later. (The slide shows a raw dump of per-vertex values: -0.0378297 0.12794 0.00447467 0.850855 0.5, -0.0447794 0.128887 0.00190497 0.900159 0.5, …)
  31. -1.0, 1.0   -1.0, -1.0   1.0, 1.0   1.0, -1.0
     Vertex shaders are mostly used to go from abstract coordinates into the screen space, -1.0 to +1.0.
  32. Vertex shaders are mostly used to go from abstract coordinates into the screen space, -1.0 to +1.0. Fragment shaders actually output pixels. You write shading algorithms for the desired effects.
  33. Vertex shaders are mostly used to go from abstract coordinates into the screen space, -1.0 to +1.0. Fragment shaders actually output pixels. You write shading algorithms for the desired effects. GPUs are really good at sampling pixels from textures, bitmaps uploaded to the GPU.
  34. Setup: Vertex Data, Shaders, Other Settings.
      On the GPU: Vertex Shaders Run, Fragment Shaders Run.
      glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
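
Roughly what that setup half looks like in code before the draw call fires, assuming a compiled-and-linked program and an attribute location already fetched with glGetAttribLocation (illustrative, not the actual Periscope code):

     // Setup: pick the shaders and point the position attribute at a full-screen quad.
     glUseProgram(program);

     GLfloat vertices[] = { -1.0f,  1.0f,
                            -1.0f, -1.0f,
                             1.0f,  1.0f,
                             1.0f, -1.0f };
     glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, vertices);
     glEnableVertexAttribArray(positionAttribute);

     // On the GPU: the vertex shader runs once per vertex (4 of them),
     // then the fragment shader runs for every pixel the triangles cover.
     glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
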
  35. In a game, you loop through every object

  36. -1, 1 1, -1 -1, -1 1, 1

  37. None
  38. None
  39. attribute vec4 position;
      attribute vec4 inputTextureCoordinate;
      varying vec2 textureCoordinate;

      void main() {
          gl_Position = position;
          textureCoordinate = inputTextureCoordinate.xy;
      }
  40. varying highp vec2 textureCoordinate;
      uniform sampler2D inputImageTexture;

      void main() {
          gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
      }
  41. So let’s add another draw call.

  42. None
  43. glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

  44. glDrawArrays(GL_POINTS, 0, length);

  45. None
  46. None
  47. None
  48. None
  49. None
  50. There’s got to be something simpler than that snapshot stuff… Hmm… We’ve got a simulation, with realtime graphics, driven by user input.
  51. http://gameprogrammingpatterns.com/game-loop.html
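
On iOS, the usual way to drive that kind of loop is a CADisplayLink that ticks once per display frame. A minimal sketch; sketchEngine and its methods are hypothetical names, not the shipped code:

     // A CADisplayLink callback is the iOS flavor of a game loop.
     - (void)startRenderLoop {
         CADisplayLink *link =
             [CADisplayLink displayLinkWithTarget:self selector:@selector(renderFrame:)];
         [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
     }

     - (void)renderFrame:(CADisplayLink *)link {
         [self.sketchEngine advanceByTime:link.duration]; // age particles, consume new touch input
         [self.sketchEngine draw];                        // re-render everything still alive
     }
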

  52. The Final Design
     • Treat it like a particle system
     • Append new particles to the vertex buffer
     • Every frame increments its age
     • Only render particles where age < max life

      typedef struct {
          GLfloat x;
          GLfloat y;
          GLfloat radius;
          GLfloat age;
          GLfloat dissolveAngle;

          // Color
          GLfloat red;
          GLfloat green;
          GLfloat blue;
          GLfloat alpha;
      } SketchPoint;
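
A rough sketch of what the CPU side of that design implies, using the SketchPoint struct above; the NSMutableData-backed vertexData buffer and the method names are my guesses, not the shipped code:

     // Append a particle for each new touch sample.
     - (void)addPointAtX:(GLfloat)x y:(GLfloat)y {
         SketchPoint point = {
             .x = x, .y = y,
             .radius = 8.0f,
             .age = 0.0f,
             .dissolveAngle = (GLfloat)(arc4random_uniform(360) * M_PI / 180.0),
             .red = 1.0f, .green = 0.2f, .blue = 0.2f, .alpha = 1.0f,
         };
         [self.vertexData appendBytes:&point length:sizeof(SketchPoint)];
     }

     // Every frame: age everything, then draw only the particles still under max life.
     - (void)tickWithDelta:(GLfloat)delta maxAge:(GLfloat)maxAge {
         SketchPoint *points = (SketchPoint *)self.vertexData.mutableBytes;
         NSUInteger count = self.vertexData.length / sizeof(SketchPoint);

         NSUInteger firstAlive = 0;
         for (NSUInteger i = 0; i < count; i++) {
             points[i].age += delta;                 // every frame increments its age
             if (points[i].age >= maxAge && firstAlive == i) {
                 firstAlive = i + 1;                 // oldest particles sit at the front of the buffer
             }
         }

         glBufferData(GL_ARRAY_BUFFER, self.vertexData.length, points, GL_DYNAMIC_DRAW);
         glDrawArrays(GL_POINTS, (GLint)firstAlive, (GLsizei)(count - firstAlive));
     }
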
  53. Day 2: Use a Particle Model

  54. Day 2: Add Interpolation

  55. Day 2: Add Fade/Burn

  56. float life = (aAge / uMaxAge);
      lowp float outroValue = smoothstep(0.75, 1.0, life);
      gl_Position.x += (cos(aDissolveAngle) * outroValue * 0.01);
      gl_Position.y += (sin(aDissolveAngle) * outroValue * 0.01);
  57. Day 3: Particles

  58. Day 3: Tone it down

  59. uniform float uMaxAge;
      uniform mat4 uTransform;
      uniform float uPointScale;

      attribute vec4 position;
      attribute float radius;
      attribute vec4 aColor;
      attribute float aAge;
      attribute vec4 inputTextureCoordinate;
      attribute float aDissolveAngle;

      varying lowp vec4 vColor;
      varying lowp float vLife;
      varying vec2 textureCoordinate;

      void main() {
          gl_Position = position;
          gl_PointSize = radius * uPointScale;
          vColor = aColor;

          float life = (aAge / uMaxAge);
          vLife = life;

          lowp float introValue = (1.0 - smoothstep(0.0, 0.05, life));
          lowp float outroValue = smoothstep(0.75, 1.0, life);
          lowp float flairUpValue = smoothstep(0.7, 0.8, life);
          lowp float shrinkValue = 1.0 - smoothstep(0.7, 0.95, life);

          gl_Position.x += (cos(aDissolveAngle) * outroValue * 0.01);
          gl_Position.y += (sin(aDissolveAngle) * outroValue * 0.01);
          gl_Position *= uTransform;

          vColor.rgb = mix(vColor.rgb, vec3(1.0, 1.0, 1.0), (introValue * 0.6));
          vColor.rgb = mix(vColor.rgb, vec3(1.0, 1.0, 1.0), flairUpValue * 0.6);

          gl_PointSize *= (introValue + 1.0) * shrinkValue;
      }
  60. varying highp vec2 textureCoordinate;
      varying lowp vec4 vColor;
      varying lowp float vLife;
      uniform sampler2D uBrushTexture;

      void main() {
          lowp float outroFade = (1.0 - smoothstep(0.95, 1.0, vLife));
          gl_FragColor.a = texture2D(uBrushTexture, gl_PointCoord).r * outroFade;
          gl_FragColor.rgb = vColor.rgb * gl_FragColor.a;
      }
  61. “What about a color picker?”

  62. None
  63. Act 3: Real World Issues

  64. Video stabilization latency

  65. Touch Interpreter Sketch Engine Video Processor Network Stack

  66. Touch Interpreter Sketch Engine Video Processor Network Stack Preview View Sketch Engine
  67. OpenGL Multithreading
     • GCD Serial Queues are not dedicated threads
     • Sometimes GPUImage would dealloc on the main thread
     • Sometimes OpenGL contexts got crossed, messing up state
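
One common fix is to funnel all GL work through a single serial queue and re-assert the context at the top of every block, since GCD makes no promise about which thread runs it. A hedged sketch; the queue label, framebuffer, and particleCount are illustrative:

     // All GL work goes through one serial queue; each block makes the context
     // current itself, because a serial queue is not a dedicated thread.
     dispatch_queue_t glQueue = dispatch_queue_create("com.example.gl-render", DISPATCH_QUEUE_SERIAL);
     EAGLContext *glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

     dispatch_async(glQueue, ^{
         if ([EAGLContext currentContext] != glContext) {
             [EAGLContext setCurrentContext:glContext];   // re-assert our state machine
         }
         glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
         glDrawArrays(GL_POINTS, 0, particleCount);
     });
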
  68. Performance
     • Test on real hardware: iPod Touch 5th Gen
     • Test under realistic load: near heart rate limit
     • Don’t guess. Measure.
  69. Epilogue

  70. The GPUImage Codebase
     • 29,744 lines of Objective-C; 16,094 are filters
     • Hardware capability checks
     • Resource pooling
     • Presents rendering on screen
  71. Investigating a Slim Renderer
     • 1,500 lines of code
     • Reduces device utilization from 29% to 4%
     • Smaller surface area to understand
     • Metal is awesome, but not all devices support it
  72. Credits
     • Aaron Wasserman
     • Sara Haider
     • Geraint Davies
     • Pablo Jablonski
     • Tyler Hansen
     • Veronika Hecko Wu
     • Joe Bernstein
     • Kayvon Beykpour
  73. Q & A