George Mandis • Google Developer Expert in Web Technologies • Once spent a year living & working from 18 different countries (remote-working FTW) • Konami-JS, a frivolous easter-egg script: https://github.com/snaptortoise/konami-js • Ran a marathon in Pyongyang, North Korea. Accidentally cheated.
of my tools in wrong and unexpected ways. Today we're going to talk about making things: specifically, crap art via JavaScript, and some of the tools & techniques that allow us to do so.
expression of something captured only through the work itself. Embracing one's curiosity in pursuit of an answer to a question. More succinctly: not precisely useful, but not exactly pointless. A Definition For Today
better developers • Making art forces us to think about problems differently • Advances in early computing owe a lot to this sort of creative exploration • Opportunities for dabbling in interesting, cutting-edge technologies (AI/ML)
(1956) • Animation (1967) • 3D Animations + Interactivity (1971) • Literature & Poems (1952) (A woefully short and incomplete summary) Art & Early Computing
in 1956 dollars. That's ~$2.24 billion in 2019 dollars, in case you were wondering. The Never-Before-Told Story of the World's First Computer Art: https://www.theatlantic.com/technology/archive/2013/01/the-never-before-told-story-of-the-worlds-first-computer-art-its-a-sexy-dame/267439/
tracking.js) • Web-based Services (Azure Face API, Google Cloud Vision, Amazon Rekognition) • Shape Detection API, new(ish) in Chrome (experimental: chrome://flags/#enable-experimental-web-platform-features) Facial Recognition in JavaScript: Different Approaches
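The Shape Detection API exposes face detection natively through a FaceDetector interface. A minimal sketch, assuming Chrome with the experimental flag above enabled (the interface and its options may change, since the API is experimental):

// Feature-detect, then find faces in an image, video, or canvas element
async function detectFaces(source) {
  if (!('FaceDetector' in window)) {
    console.warn('FaceDetector is not supported in this browser')
    return []
  }
  const detector = new FaceDetector({ fastMode: true, maxDetectedFaces: 5 })
  const faces = await detector.detect(source)
  return faces.map(face => face.boundingBox) // one DOMRect per detected face
}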
we can recognize faces and expressions. Get permission to access the webcam and display it in our video element.

const video = document.getElementById('inputVideo')

async function loadModels() {
  // Load the face detector, expression, and recognition models
  await faceapi.nets.ssdMobilenetv1.load('face-api.js/weights')
  await faceapi.nets.faceExpressionNet.load('face-api.js/weights')
  await faceapi.loadFaceRecognitionModel('face-api.js/weights')
  // Request webcam access and pipe the stream into the video element
  const stream = await navigator.mediaDevices.getUserMedia({ video: {} })
  video.srcObject = stream
}
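To kick this off, a sketch of the call site (assuming a <video id="inputVideo" autoplay muted> element on the page, per the snippet above):

// Start model loading + webcam capture, surfacing any permission/network errors
loadModels().catch(err => console.error('Model or webcam setup failed:', err))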
determine how strictly we want to match and pass the video element directly. If we get a result, we act on it. Either way, we use requestAnimationFrame to continue processing new frames.

video.addEventListener('play', onPlay)

async function onPlay() {
  const options = new faceapi.SsdMobilenetv1Options({ minConfidence: .5 })
  const result = await faceapi.detectSingleFace(video, options).withFaceExpressions()
  if (result) {
    // do something with face & expressions object
  }
  window.requestAnimationFrame(onPlay)
}
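When detection succeeds, result.expressions maps expression names (happy, sad, angry, and so on) to confidence scores. A minimal sketch of pulling out the strongest one, assuming that shape:

// Pick the expression with the highest confidence score
function dominantExpression(result) {
  const entries = Object.entries(result.expressions) // e.g. [['happy', 0.93], ['neutral', 0.05], ...]
  const [name, score] = entries.reduce((best, cur) => (cur[1] > best[1] ? cur : best))
  return { name, score }
}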
element directly, so we need to take a snapshot of the current frame using a canvas and convert it to a data URL (data:).

// Assumes a canvas element sized to match the video frame
const ctx = canvas.getContext("2d");
ctx.drawImage(video, 0, 0);
let data = canvas.toDataURL("image/jpeg");
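As an aside, you can skip the data-URL detour: canvas.toBlob hands you a Blob directly, which is what the upload step below actually needs. A minimal alternative sketch:

// Grab a Blob straight from the canvas, avoiding the encode/decode round trip
canvas.toBlob(blob => {
  // blob can be POSTed directly as the request body (see the next step)
}, "image/jpeg");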
the data URL to return a Blob object that we can then send along (i.e. upload) to the service for processing.

// Example query string (illustrative); the detect endpoint accepts parameters
// such as returnFaceAttributes to request emotion data
let parameters = "returnFaceAttributes=emotion";
let subscriptionKey = "xxxxxx"; // your Azure subscription key
let uriBase = "https://westus2.api.cognitive.microsoft.com/face/v1.0/detect";

fetch(data)
  .then(res => res.blob())
  .then(blobData => {
    return fetch(uriBase + "?" + parameters, {
      method: "POST",
      headers: {
        "Content-Type": "application/octet-stream",
        "Ocp-Apim-Subscription-Key": subscriptionKey
      },
      body: blobData
    });
  })
  .then(response => response.json())
  .then(response => {
    // act on response data
  })
  .catch(err => console.error(err));
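For reference, the detect response is a JSON array with one object per detected face; with emotion attributes requested, each entry carries per-emotion confidence scores. A sketch of picking the dominant one (the field names follow Azure's Face API documentation as I recall it; verify against the current docs):

// Assumed response shape:
// [{ faceId, faceRectangle, faceAttributes: { emotion: { happiness: 0.9, ... } } }]
function dominantEmotion(response) {
  if (!response.length) return null;
  const emotions = response[0].faceAttributes.emotion;
  return Object.entries(emotions)
    .reduce((best, cur) => (cur[1] > best[1] ? cur : best))[0]; // e.g. "happiness"
}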