George Mandis
• Independent developer for ~13 years, based out of Portland, Oregon
• Google Developer Expert in Web Technologies
• Once spent a year living & working from 18 different countries. Remote-working FTW
• Konami-JS: a frivolous easter-egg script, https://github.com/snaptortoise/konami-js
• Ran in a marathon in Pyongyang, North Korea. Accidentally cheated.
I like to break things and make things, and to use my tools in wrong, unexpected ways. Today we're going to talk about making things: specifically, crap art via JavaScript and some of the tools & techniques that allow us to do so.
A Definition For Today
• Anything created through joyful (?) exploration of a discipline
• An expression of something captured only through the work itself
• Embracing one's curiosity in the pursuit of an answer to a question
More succinctly: not precisely useful, but not exactly pointless.
Why JavaScript?
• It's Everywhere! Browsers, servers, hardware...
• Low Barrier to Entry: taught in many places, forgiving...
• It's Where the People Are: conferences, meetups, GitHub projects...
Why JavaScript?
JavaScript is uniquely suited for creating interactive, expressive works of digital art of all kinds. Why is this?
• Ubiquity
• The Browser
• Industry & Community Support
Why Is This Important?
• Through play we can become better developers
• Making art forces us to think about problems differently
• Advances in early computing owe a lot to this sort of creative exploration
• Opportunities for dabbling in interesting, cutting-edge technologies (AI/ML)
Why Is This Important? We can't be experts at everything. We are guaranteed to be amateurs at something. Let's revel in our amateurism and make fun things!
Art & Early Computing
(A woefully short and incomplete summary)
• Computer Music (1951)
• Synthesized Speech (1961)
• Imagery (1956)
• Animation (1967)
• 3D Animations + Interactivity (1971)
• Literature & Poems (1952)
Imagery (1956)
SAGE Pin-Up Girl: a $238 million military computer in 1956 dollars. That's ~$2.24 billion in 2019 dollars, in case you were wondering.
"The Never-Before-Told Story of the World's First Computer Art"
https://www.theatlantic.com/technology/archive/2013/01/the-never-before-told-story-of-the-worlds-first-computer-art-its-a-sexy-dame/267439/
3D Animations (1967)
"Mythical Creature" — Charles Csuri
Charles A. Csuri Project at Ohio State University
https://csuriproject.osu.edu/index.php/Detail/objects/581
Literature & Poems (1952)
Algorithmic Love Letters — Christopher Strachey
A House of Dust, 1967 — Alison Knowles
Originally written in Fortran IV; recreated in JavaScript by Chad Weinard
https://glitch.com/~house-of-dust
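At its core, a generative poem in the spirit of A House of Dust is just random selection from phrase lists. A minimal sketch (the word lists here are illustrative, not Knowles' originals):

const materials = ['sand', 'dust', 'glass', 'paper']
const locations = ['in a desert', 'among high mountains', 'by the sea']
const inhabitants = ['vegetarians', 'ghosts', 'people who sleep very little']

// Pick one random entry from a list
const pick = list => list[Math.floor(Math.random() * list.length)]

console.log([
  `a house of ${pick(materials)}`,
  `     ${pick(locations)}`,
  `          inhabited by ${pick(inhabitants)}`
].join('\n'))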
Hardware
• Espruino: embedded JavaScript interpreter
• Tessel: Linux-based, runs Node.js
• Johnny-Five: runs on a variety of hardware (see the sketch below)
• WebMIDI: accessible in the browser without special plugins (caveats)
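As a taste of the hardware side, a minimal Johnny-Five sketch, assuming an Arduino-compatible board connected over USB (pin 13 is the classic onboard LED):

const { Board, Led } = require('johnny-five')

const board = new Board()

board.on('ready', () => {
  // Blink the onboard LED every 500ms once the board is ready
  const led = new Led(13)
  led.blink(500)
})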
Facial Recognition in JavaScript: Different Approaches
• Client-side: pre-trained models or train your own (TensorFlow.js, face-api.js, pico.js, tracking.js)
• Web-based services (Azure Face API, Google Cloud Vision, Amazon Rekognition)
• Shape Detection API: new(ish) in Chrome (Experimental: chrome://flags/#enable-experimental-web-platform-features); sketch below
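For a sense of how lightweight the Shape Detection API approach can be, a sketch assuming Chrome with the flag above enabled and an image element with a hypothetical id of 'inputImage':

if ('FaceDetector' in window) {
  const detector = new FaceDetector({ fastMode: true })
  const img = document.getElementById('inputImage')

  detector.detect(img)
    .then(faces => {
      // Each detected face comes back with a boundingBox
      faces.forEach(face => console.log(face.boundingBox))
    })
    .catch(err => console.error('Face detection failed:', err))
}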
face-api.js Client-side Facial Recognition
Load the networks and models so we can recognize faces and expressions, then get permission to access the webcam and display it in our video element.

const video = document.getElementById('inputVideo')

async function loadModels() {
  // Load the detection, expression, and recognition models
  await faceapi.nets.ssdMobilenetv1.load('face-api.js/weights')
  await faceapi.nets.faceExpressionNet.load('face-api.js/weights')
  await faceapi.loadFaceRecognitionModel('face-api.js/weights')

  // Ask for webcam access and pipe the stream into our video element
  const stream = await navigator.mediaDevices.getUserMedia({ video: {} })
  video.srcObject = stream
}
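One thing the slide leaves implicit: loadModels() still has to be called. A minimal kickoff, assuming the markup includes a <video id="inputVideo" autoplay muted playsinline> element:

loadModels().catch(err => console.error('Could not load models or webcam:', err))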
face-api.js Client-side Facial Recognition
Pass options to the model to control how strict a match we want, and pass the video element directly. If we get a result we act on it; otherwise we use requestAnimationFrame to keep processing new frames.

video.addEventListener('play', onPlay)

async function onPlay() {
  // Only accept detections with at least 50% confidence
  const options = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.5 })
  const result = await faceapi.detectSingleFace(video, options).withFaceExpressions()
  if (result) {
    // do something with the face & expressions object
  }
  window.requestAnimationFrame(onPlay)
}
Face API Azure Cognitive Services
Set up the parameters to send to the service.

const params = {
  returnFaceId: "true",
  returnFaceLandmarks: "true",
  returnFaceAttributes:
    "age,gender,headPose,smile,facialHair,glasses,emotion," +
    "hair,makeup,occlusion,accessories,blur,exposure,noise"
};

// Build the query string the service expects
let returnValue = [];
for (const param in params) {
  returnValue.push(`${encodeURIComponent(param)}=${encodeURIComponent(params[param])}`);
}
const parameters = returnValue.join("&");
Face API Azure Cognitive Services
We can't pass the video element directly, so we need to take a snapshot of the current frame using canvas and export it as a data URL (data:).

ctx.drawImage(video, 0, 0); // assumes a canvas context sized to the video (see sketch below)
let data = canvas.toDataURL("image/jpeg");
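The canvas and ctx are assumed to exist already; a sketch of the setup the slide omits, sizing an offscreen canvas to match the video frame:

const canvas = document.createElement("canvas");
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
const ctx = canvas.getContext("2d");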
Face API Azure Cognitive Services
We can use fetch on the data URL to get back a Blob object that we can then send along (i.e. upload) to the service for processing.

fetch(data)
  .then(res => res.blob())
  .then(blobData => {
    let subscriptionKey = "xxxxxx";
    let uriBase = "https://westus2.api.cognitive.microsoft.com/face/v1.0/detect";

    fetch(uriBase + "?" + parameters, {
      method: "POST",
      headers: {
        "Content-Type": "application/octet-stream",
        "Ocp-Apim-Subscription-Key": subscriptionKey
      },
      body: blobData
    })
      .then(response => response.json())
      .then(response => {
        // act on response data
      });
  });
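If the request succeeds, the response is JSON: an array of detected faces, each with a faceRectangle and the requested faceAttributes. A hedged sketch of acting on it:

// Assuming `response` is the parsed JSON array from above
response.forEach(face => {
  const { faceRectangle, faceAttributes } = face;
  // e.g. faceAttributes.emotion is a set of scores: { happiness: 0.99, ... }
  console.log(faceRectangle, faceAttributes.emotion);
});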