
Report of the Magical Interfaces track, MIT Media Lab Design and Innovation Workshop 2014, Mumbai

Anirudh
February 11, 2014

Dates: 27 Jan - 2 Feb 2014
Location: Weschool, Bombay
Instructors: Anirudh Sharma, Valentin Heun; Remote Instructor: Marco Tempest
Guest speaker: Prof. Pattie Maes


Transcript

  1. MIT Media Lab Design and Innovation Workshop Magical Interfaces Track

Track Description: Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic." We are fascinated by systems that do things that go against common convention: a ball that suddenly jumps while rolling, a screen that can actuate itself, two bicycles that can fuse together to make a rickshaw, a shirt collar that converts to an airbag for bikers during an accident, invisible inks, and transparent displays. We'll begin this track with an introduction to magicians' secrets, illusions, optics, stereoscopy, Pepper's ghost, and holographic illusions – techniques used by magicians for over 250 years. We'll then move to interaction design, building systems, solutions, technologies, and prototypes chosen for their ability to surprise and please their users. Artists, designers and engineers with a sense of mischief and an interest in the mysterious are highly encouraged to apply.

This document shows 10 projects done over a period of one week with a team of 40 students. The work was inspired by magic, science fiction, dreams, and ideas otherwise not discussed. We chose 40 students for our track, a diverse mix of designers, engineers, artists and architects.

Dates: 27 Jan - 2 Feb 2014
Location: Weschool, Bombay
Instructors: Anirudh Sharma, Valentin Heun; Remote Instructor: Marco Tempest
Guest speaker: Prof. Pattie Maes

Magical Interfaces poster, advertised over the past 3 months

All web-based content for the workshop is released under a Creative Commons license. All the code/schematics are under the MIT license. As an experiment in documenting the design process, we used the Build-in-progress CMS. All projects documented during the workshop can now be accessed at http://india.media.mit.edu/projects
  2. Day 1: Introductions, team building, field trip, brainstorming session We

started the workshop by teaching students to design cognitive and anamorphic illusions, puzzles, Wizard-of-Oz setups, and Mechanical Turk-style pranks. We then introduced them to new-media projects such as ZeroN, inFORM, and SixthSense, and showed popular clips from Minority Report, Star Trek, etc. We then shared popular techniques such as Pepper's ghost, holography, and projector-camera setups to get them acquainted with applied optics for new-media art. For the field trip, students went to the Jehangir Art Gallery and the streets of Colaba's famous gallery district, with an assignment to find a small object they could use for creating a magical interface. Upon returning from the 2-hour field trip, teams had bought a painting they wanted to add sound zones to, a light-saber, a low-cost handheld vegetable cutter, a putty-based Spider-Man toy that could crawl down walls, an e-cigarette that produces dense smoke, etc. We all met again in the workshop venue and discussed what the teams had found. The idea was to build up a conversation around a simple toy/object. ~30 sketches were made.
3. Roadside disentanglement-puzzle maker | Low-cost handheld vegetable chopper We

had a 40-minute session in which students tried to solve a disentanglement puzzle bought from a street seller. The puzzle could be related to the constraints and affordances in The Fifth Element (sci-fi): when a little bit of the required element is provided to the placed stones, there is immediate feedback as small rectangles open just a bit near the tops. It is this partway state that indicates to the protagonists that, even though they haven't completely supplied enough material, they are on the right track. This clue gives them enough of a signal to keep trying to deduce control of the interface. A lot of us tried the simple but mind-bending puzzle for the first time. (Unsolved | Solved)

On ZeroN, one student commented: 'I think there is something fundamental behind motivations to liberate physical matter from gravity and enable control. The motivation has existed as a shared dream amongst humans for millennia. It is an idea found in mythologies, desired by alchemists, and visualized in science-fiction movies.'

Students were not yet allowed to use computers or the Internet for any background research. We formed teams of 4 members each. The idea was to separate friends, form new groups, and encourage interdisciplinary thinking among the teams. The teams then went to dinner together.

Day 2: Team reshuffling, idea madness, introduction to magical/sci-fi materials

At the beginning of day 2, students were introduced to a range of materials, a few of which, such as glow paint and artificial snow polymer, are used in movies/sci-fi to create special effects:
• Aerogel – an ultralight material that is mostly air
• Paper that changes its color when exposed to sunlight
• Thermal liquid crystal
• Ferrofluids
• Thermochromic ink
• Artificial snow powder
• Euler's disk
• EL wire
• Butterfly actuated by muscle wires
• Party shades with optical illusions, prisms
4. Ferrofluids | Aerogel and materials intro. After the introduction, there was

another brainstorming session; the new teams were supposed to come up with their 2-3 best ideas inspired by sci-fi/magic. All the magical ideas, freshly hand-sketched on posters, were spread on the walls around the track room. Students were each given 4 voting stickers to vote for the four best ideas they'd like to pursue. And democracy ruled! There was chaos and noise, convincing and discussion. The 10 ideas with the most votes were selected, and students divided themselves into teams based on their preferences. After getting deep into fiction and fantasies for the two initial days, students were finally given their laptops/Internet to start researching. An interesting observation during the workshop was that there were a lot of micro-iterations between the initially intended and the final prototypes. Due to the limited time and fixed resources, students improvised and came up with very
5. ingenious low-cost solutions. The $2 fog-screen prototype later

in the document is a good example. In the evening there were tutorials in the style of How to Make Almost Anything: students were acquainted with Arduinos, the FabLab, 3D printing, laser cutting, etc.

3D printing and laser-cutting tutorials

At the end of day 2, while prototyping, students got a chance to have their ideas critiqued by Marco Tempest for a good duration, refining and structuring them.

Our Magical Interfaces group after brainstorming with Marco Tempest
Idea filtering/critique session with Pattie
  6. PROJECTS 1. MistKonection! REFERENCE 1 The datastream on a Cylon

basestar seems to actually consist of water; humanoid Cylons work with it by gently touching this illuminated panel covered by a thin sheet of water. Possibly a display as well as an interface? The ultimate touch screen.

REFERENCE 2: Have you ever wondered what it would be like to have a virtual presence of the person with whom you are talking, like the way Sirius Black used to talk with Harry Potter in the fireplace of the Gryffindor common room using the Floo Network? Or to control an interface in thin air like Tony Stark? Well, if you consider that magic, then it surely is magical! This project aims at creating a 3D projection, on a mist screen, of the person you are talking to over a long distance. As shown in the diagram below, air flows into the device, is modified, and is then ejected and illuminated to produce the image. Nothing is added to the air; nothing affects air quality. Images can be seen up to 75 degrees off aspect, similar to an LCD screen; no special glasses or projection screens are required.
  7. (From Heliodisplay diagram) The image is two-dimensional, not volumetric. And

of course we all remember this scene. ("Help me, Obi-Wan Kenobi. You're my only hope.")

Why a mist screen? Mist has always been related metaphorically to magic and wonder; the very word 'mystical' evokes 'mist' and is often used to describe things that we do not understand or find curious. Since the track is called "Magical Interfaces", we found it highly pertinent to go ahead with the project "MistKonection", which uses a Kinect to project the image of the person on the other end of the line onto a mist screen. We can also make the screen interactive, detecting our gestures or letting us write on it.

Attempt 1 - ultrasonic fog machine. Ultrasonic foggers, background: a piezoelectric transducer (resonant frequency 1.6 MHz) vibrates, causing the water to turn into droplets, which vaporize to turn
  8. into fog particles. Unlike thermal or heat-based foggers, the fog

generated by an ultrasonic fogger is cold and wet. These foggers are small devices. They have an external AC/DC adapter for power supply. They are cost-effective. [source: Buzzle]

Failure of the fog machine: the fog machine we received was small, and the fog it produced was very low in intensity. It was not possible to project an image on it. It was actually mist, not fog.

Alternative ideas - HACKING our way! We got down to some brainstorming and "jugaad" (an Indian word for 'hack'). The following ideas came up for how we could possibly make an interactive 3D projection:
1) Water curtain
2) Glycerin-water solution
3) Steam from an electric kettle
4) Dry ice
5) Soap-bubble solution
And thus began the journey to unravel the mystery of the mist! This was followed by our attempts at creating an invisible fog screen:

1) Steam from boiling water: even though it was unrestrained, it showed good projection for laser light. When we tried projecting an image on it, the results were not so good.

2) Water with glycerin: professional fog machines used at parties use a glycerin mixture in the right proportion.
Very thick smoke: 30% glycerin | 70% water
Medium-thick smoke: 20% glycerin | 80% water
Less thick smoke: 15% glycerin | 85% water
We made a solution with 30% glycerin and 70% water, expecting the fog to be denser; the results were still not sufficient to produce the required fog.

3) Water with milk: milk is added to a hookah to make its smoke thicker/heavier. So at 1 AM we procured milk and tried to see whether the steam produced was dense enough. Another failure.

4) Water curtain: since we had failed to produce fog, we tried to create a water curtain instead. We experimented with a water hose with a horizontal slit and passed water through it from a normal tap. We realized that if we could raise the water pressure we might get a curtain. The next day we got a water pump, but its pressure was so dismal that we had to drop the idea. (Failed attempts to create laminar water flow)
9. 5) Water curtain on glass: since the water pump failed, we

thought we might try having the water flow over a glass plate for the required effect. This was OK, but not that great, since the water adhered to the glass plate and quickly formed streams in the flow.

6) Water sprinkler: we dabbled with this idea, since we read that Disney uses water sprinklers to create screens. We had a major resource crunch, since this would require an outdoor, dark environment.

7) Dry ice in boiling water: procuring dry ice was a very difficult task in Mumbai, but we finally managed to get it, which gave us our hope back. When the dry ice was put in water the effect was so good that we could practically think of creating a fog screen. To increase the intensity we boiled the water. Voila! The result was amazing; all we had to do now was to create a laminar flow.

8) Designed a less-than-$2 fog machine: we sat down to design our own fog machine. The prototype was drawn on paper as shown in the figure below. For making the fog screen laminar we acquired PVC pipes, batteries, biscuit tins and whatnot, so that we could build a contraption for producing a controlled outlet for the dry ice.

Interaction with gestures: to make the interaction look like Tony Stark's in Iron Man, we had to include a Kinect in our arsenal. To be able to create a "WOW" factor, we had to incorporate custom gestures into our project. Thankfully, a recently updated Kinect Toolbox library came to our rescue. A gesture is represented as a union of defined states, and each state is described in terms of the relative positions of the skeleton joints. For example, in the zoom-in action that we implemented, the initial state is when both hand-palm joints are close together in front of the abdomen, and the final state is when they are well separated from each other. The initial state translates to the following skeleton description:
• Z-coordinate of the right and the left hand-palm joint is less than the Z-coordinate of the abdomen joint (i.e. they are in front of the abdomen)
  10. • Y-coordinate of the right and the left hand palm

joint is less than the Y-coordinate of the head joint but greater than the Y-coordinate of the hip joint (i.e. they are positioned between the head and hip joints)
• X-coordinate of the right and the left hand-palm joint is within a given threshold of the other and inside the X-coordinate of the respective elbow (i.e. they are held close to each other, in between the two elbows)

The description of the final state is similar: the Z-coordinate and Y-coordinate constraints are the same as above; only the X-coordinate constraint changes, to denote that the hand joints are now well separated, to the left and right respectively. Overall, the code composes a gesture in which initially both palms are close to each other and then move in opposite directions. The Gesture Factory of the Kinect Toolbox executes at regular intervals and checks for the presence of any gesture defined in its list.

To connect these gestures to the actual manipulation of the image and the displayed objects, we implemented the image manipulation in C#. To enlarge an image we:
- created a new BitmapImage object and copied the original image data onto it, then scaled the width and the height of the new BitmapImage using simple mathematical operations;
- connected this new image to the completion of the gesture-recognition event of the Kinect (i.e. when the Gesture Factory recognizes the zoom-in event, it triggers a function that displays the new bitmap image in the placeholder of the original image).

However, there was one issue with this approach: human gestures are continuous, not impulse-like, whereas the image change the computer makes is instantaneous. To give a human-like feeling, we used the animation facilities in C#, which let the change in image (zooming in or out) take place over a span of time, giving it a continuous feel. Similar to this zoom-in option, we also deployed zoom-out, image-change and swipe options. Image change was triggered by a push gesture of the right hand, whereas the swipe option was indicated by a wave gesture of the right hand.

On the D (demo) Day, these gesture controls did amaze the audience; however, we noticed that the gesture recognition wasn't that sound when multiple skeletons were detected. Resolving multiple skeletons and incorporating more generic gestures would be our work ahead with respect to the Kinect coding.
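To make the two-state gesture logic concrete, here is a minimal sketch in C++ of the zoom-in check described above. (The actual project used C# with the Kinect Toolbox; the struct layout, thresholds, and coordinate conventions here are illustrative assumptions, not the project's code.)

```cpp
#include <cmath>

// Hypothetical 3D joint position as a skeleton tracker would deliver it.
struct Joint { float x, y, z; };

// The subset of skeleton joints the zoom-in gesture inspects.
struct Skeleton {
    Joint head, hip, abdomen;
    Joint leftHand, rightHand;
    Joint leftElbow, rightElbow;
};

// Shared constraints: palms in front of the abdomen (smaller Z = closer
// to the sensor) and between head and hip height.
static bool handsInFrontAndInBand(const Skeleton& s) {
    bool inFront = s.leftHand.z < s.abdomen.z && s.rightHand.z < s.abdomen.z;
    bool inBand  = s.leftHand.y  < s.head.y && s.leftHand.y  > s.hip.y &&
                   s.rightHand.y < s.head.y && s.rightHand.y > s.hip.y;
    return inFront && inBand;
}

// Initial state: palms held close together, in between the two elbows.
bool isZoomInitialState(const Skeleton& s, float closeThresh = 0.15f) {
    bool together = std::fabs(s.leftHand.x - s.rightHand.x) < closeThresh;
    bool inside   = s.leftHand.x  > s.leftElbow.x &&
                    s.rightHand.x < s.rightElbow.x;   // between the elbows
    return handsInFrontAndInBand(s) && together && inside;
}

// Final state: same depth/height constraints, palms well separated.
bool isZoomFinalState(const Skeleton& s, float apartThresh = 0.5f) {
    bool apart = std::fabs(s.leftHand.x - s.rightHand.x) > apartThresh;
    return handsInFrontAndInBand(s) && apart;
}

// A Gesture Factory-style loop would poll these at regular intervals and
// report a zoom-in once the initial state is followed by the final state.
```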
11. There are a number of directions we need to

focus on before we can achieve our final aim of a portable, low-cost, non-solid projection screen. For the sake of clarity, we list the areas we need to focus on.
• Controlling the flow velocity of the smoke - during our testing, we noticed that the smoke coming out of the pipe holes was deflected after a very short span. If we could use pressure pumps and other devices so that it gushes out at a higher velocity, we believe better results could be obtained.
• Alternatives to fog/mist - there are a number of other approaches we have yet to try, including smoke rings, aerosols and smoke-filled soap bubbles. Suggestions and ideas from day-to-day life are most welcome.

Final demo in the darkroom
  12. 2. Houdini: Virtual+Physical Rube Goldberg Houdini is an interactive installation

that generates a chain of events between physical objects and digital simulations of those same physical objects. The project explores how our understanding of movement in architectural space may change in a future of digitally enabled environments. The installation consists of a ball, pieces of physical dominoes and an interactive projection. Together these create a hybrid chain-reaction system in which each event is triggered by the digital or real action that preceded it. This project is along the lines of mixed/hybrid reality, aiming to join the control of the gaming/animation/screen experience with physical gestures from the user, without any hardware interface. The control and graphics of the game shift from physical space to virtual and vice versa, depending on the dynamics of the game's progress on screen.

Any kind of interaction with the computer has always been an 'external' experience, but our reality remains rooted in the personal and social. A century on, however, technology is granting us the ability to alter our perception of reality, construct multiple representations of ourselves like avatars, and have relationships with artificial agents like robots. All of these are simultaneously expanding and destabilizing our sense of self.
  13. Technology is a “second self,” as MIT professor Sherry Turkle

has explained: a new interface between others and us. Debates over whether social technologies cause "detachment" from reality miss the point that we are entering a new hybrid reality in which assumptions about authenticity are fundamentally challenged: Who is real? What is the line between physical and virtual? Do we each get to live our own version of the truth?

Let us begin with technology's growing ability to manipulate how much information we have about the world around us. Google Glass, and soon pixelated contact lenses, will allow us to augment reality with a layer of data. Future versions may provide a more intrusive view, such as sensing your vital signs and stress level. Such augmentation has the potential to empower us with a feeling of enhanced access to "reality." Whether or not this represents truth, however, is elusive. Consider the opposite of augmented reality: "deletive reality." If pedestrians in New York or Mumbai don't want to see homeless people, they could delete them from view in real time. This not only diminishes the diversity of reality; it also blocks us from developing empathy.

We were enormously fascinated to take up this project of ours, "Houdini", which deals with bringing together the virtual and real worlds. The domains of inter-reality span the physical to the utterly psychological. They augment our perception of the world around us; this perception is partly digital and partly real. This dichotomy between the real and the digital is a hindrance to seamless interaction. Our attempt is to narrow this gap to a minimum by not only giving the user the flexibility to engage with the computer through his actions, but also giving the computer an opportunity to act with the user in physical space. The computer interprets the gesture, proceeds with the program, and gives output both in physical space and in virtual space. How do we propose this to happen? And what is the problem that we solve? The video above should give you an insight.
14. Idea Construction: The idea originally came as a bizarre thought

of a laser-sword ninja. The problem with implementing this idea was the size of the laser beam. With a bit more brainstorming, we decided to keep the idea but change the implementation. It was at this time that Anirudh Sharma, our track mentor, explained to us some ideas that have fascinated mankind, and the Media Lab, for a long time: the Rube Goldberg machine and mixed-reality interfaces. What if we constructed a Rube Goldberg machine that has action both on the screen and in space? We could have a series of steps that generate motion between objects, where that motion is carried into the screen as animation and returns to the user as motion in the real world from the machine. Inspired by this idea, we thought of an action-actuated Angry Birds game.

Implementation: the problem with implementing this idea was mostly in having the object's projectile and orientation detected by the sensor. To make a simple working prototype, we decided to build something simpler that kept the novelty of the idea. We decided to have a striker hit a stack of dominoes that fall, with the falling motion perpetuated outside the screen as well. Our objective behind the implementation was to make a gadget that was inexpensive yet did not compromise the 'magic' of the experience.

Our gesture action-reaction system, Houdini, was coded in Processing. We used a simple microphone to record the pace at which the user is blowing air, and accordingly calibrated the speed of the ball in Processing. When the ball reaches the edge of the screen, a servo motor triggers the real ball. An ultrasonic sensor on the other side of the ramp detects the speed/motion of the real ball and accordingly launches the on-screen projectile ball with the matching speed. On screen, the projectile crashes into a stack of dominoes, which triggers another servo motor that causes real domino blocks outside the screen to fall.
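A minimal Arduino-style sketch (C++) of the physical side of this chain is shown below, condensed to one ultrasonic sensor and one servo. The pin numbers, distance threshold, and one-character serial protocol to the Processing sketch are assumptions for illustration, not the project's actual code:

```cpp
#include <Servo.h>

const int TRIG_PIN  = 9;    // HC-SR04-style ultrasonic sensor (assumed)
const int ECHO_PIN  = 10;
const int SERVO_PIN = 6;

Servo kicker;  // servo that strikes the real ball / dominoes

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  kicker.attach(SERVO_PIN);
  kicker.write(0);           // rest position
  Serial.begin(9600);        // serial link to the Processing sketch
}

// Distance in cm from a single ultrasonic ping.
float readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000);   // 30 ms timeout
  return us * 0.0343f / 2.0f;
}

void loop() {
  // Physical -> virtual: when the real ball passes the sensor, tell
  // Processing so it can launch the on-screen projectile.
  if (readDistanceCm() < 10.0f) {
    Serial.println("BALL");
  }

  // Virtual -> physical: when Processing's ball (or domino chain)
  // reaches the screen edge, it sends 'K'; the servo then knocks
  // the real dominoes.
  if (Serial.available() && Serial.read() == 'K') {
    kicker.write(90);        // strike
    delay(300);
    kicker.write(0);         // return to rest
  }
  delay(50);
}
```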
  15. The ingenuity of this idea is having a completely randomized

input that depends on the whim of the user. Everything, right from the speed to the object's dimensions, is in the user's control. Nothing is 'pre-programmed' or timed. Every action of yours in the real world has a reaction back inside the virtual world, and any change in the virtual world affects causality in the real world. The physics and the hand-off of objects between the real and the virtual are so seamless that the pseudo-gap between the digital and the real is reduced to only the presence of a screen.

Final demo of the mixed-reality system
16. 4. Tangible Events: Paper computer. Inspired by the project 'Hi, a real human

interface' from Multitouch-Barcelona: "Did you ever think that your computer was alive? That there was someone inside working for you? 'Hi, a real human interface' is a metaphor for how interaction with technology should be. It was our attempt to create the perfect interface; one that really understands our deepest needs, a human interface indeed."

Humans have grown up in an environment where they interact with a lot of physical objects around them. The ability to associate actions with these objects is a skill that plays a major role in interacting with them. Although these objects are intuitive and tangible to use, their use is parochial, as they are not associated with any actions in the digital world. To overcome this, we designed an interface that maps actions on real-world look-alike objects (fake objects that imitate real-world objects) to the corresponding digital-world actions. The concept is to work with real-look-alike objects in real time to operate the computer. Think: what if you could cut a picture with scissors and put it in another folder to actually perform the same action on a computer, crush a paper sheet and throw it into the dustbin to perform the delete operation, or send mail using a post box? The computer becomes a fun place to be, except the computer is our real room, where all the things like files, folders, the music player, CDs, pens, pencils etc. are placed, and we interact with these objects to actually interact with the computer. The following use cases were selected after a lot of brainstorming.
17. Throw files in the dustbin to delete the file on

your PC: just throw, and the item is deleted.

Fake CD/DVD to play music, see images and watch movies: the actual data is on the computer; the user inserts fake CDs into a fake music player to play real music! How cool is that! (See images above.)

Fake post box to send electronic mail: just write whatever you want on paper, draw images (the only things you need are pen and paper) and drop it in the post box. Swipe the card of the user you want to send the mail to against it, and the mail is sent.

Magic ball that lets you Skype: this feature lets you Skype your loved ones; just wave your hand around the ball with the user's name written on a plastic card, and the Skype call starts!

CDs, folders, and the magic ball for music, storage and email respectively

Implementation
- RFID readers, interfaced with an Arduino, were embedded in the objects.
- The objects that would trigger an interaction had RFID tags embedded within them.
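As an illustration of this implementation, here is a minimal Arduino (C++) sketch of the tag-to-action mapping. It assumes a serial-output RFID reader (EM-18 style, which prints a 12-character ASCII tag ID at 9600 baud); the tag IDs, pins, and command strings are all hypothetical:

```cpp
#include <SoftwareSerial.h>
#include <string.h>

// Assumed: an EM-18-style 125 kHz reader that prints a 12-character
// ASCII tag ID over serial at 9600 baud.
SoftwareSerial rfid(2, 3);  // RX, TX (TX unused by the reader)

const char CD_TAG[]   = "1A00C5F2B301";  // hypothetical tag on the fake CD
const char MAIL_TAG[] = "1A00C5F2B302";  // hypothetical tag on the letter

void setup() {
  Serial.begin(9600);   // link to the PC that performs the real action
  rfid.begin(9600);
}

void loop() {
  static char id[13];
  static int n = 0;
  while (rfid.available()) {
    id[n++] = rfid.read();
    if (n == 12) {                       // full tag ID received
      id[12] = '\0';
      if (strcmp(id, CD_TAG) == 0)        Serial.println("PLAY_MUSIC");
      else if (strcmp(id, MAIL_TAG) == 0) Serial.println("SEND_MAIL");
      else                                Serial.println("UNKNOWN_TAG");
      n = 0;
    }
  }
}
```

A small script on the PC would listen on the serial port and map each command string to the real desktop action (playing music, sending the mail, deleting the file).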
  18. 5. Paperthin: Interactions with flexible displays Reference 1 How thin

is your computer display? Some of the newer laptops have displays in the upper part of the "clamshell" that are a quarter of an inch thick or less. But if you want to talk thin... how about paper-thin?

"Smart paper consisted of a network of infinitesimal computers sandwiched between mediatrons. A mediatron was a thing that could change its color from place to place... Bud took a seat and skimmed a mediatron from the coffee table; it looked exactly like a dirty, wrinkled, blank sheet of paper. "'Annals of Self-Protection,'" he said, loud enough for everyone else in the place to hear him. The logo of his favorite meedfeed coalesced on the page. Mediaglyphics, mostly the cool animated ones, arranged themselves in a grid. Bud scanned through them until he found the one that denoted a comparison of a bunch of different stuff, and snapped at it with his fingernail. New mediaglyphics appeared, surrounding larger pictures in which Annals staff tested several models of skull guns against live and dead targets." - From The Diamond Age, by Neal Stephenson. Published by Bantam Books in 1995.

Reference 2: From A World Out of Time, by Larry Niven. Published by Random House in 1976. (Red Planet movie rollable map display)

What if you could use any piece of paper as an interactive display? PaperThin converts ordinary paper into flexible, foldable, and affordable displays. This can be used in any industry: medical, educational, entertainment, etc. Interaction with computers has always been on unintuitive surfaces. PaperThin aims to provide an intuitive and tangible way of interacting with computers. A simple object such as a sheet of paper can be turned into a tangible, flexible and foldable display, regardless of size.

Initial iteration
  19. This is achieved using a depth sensor that performs material

analysis to detect a sheet of paper or any similar surface.

UI adapting to various screen sizes

Implementation
- Projector + Kinect setup
- Calculate the homography for the projections
- Map a game onto the projection
- Use paper as the projection screen, up to 5 folds
- Restriction: has to be square-ish
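A minimal sketch of the homography step, assuming OpenCV in C++: the corner coordinates below are placeholders, and we assume the paper's corners have already been mapped into projector pixel coordinates via the Kinect/projector calibration (the project's actual pipeline is not published here):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Corners of the detected sheet of paper, in projector pixel
    // coordinates (placeholder values; in the real setup these come
    // from the depth sensor plus camera-projector calibration).
    std::vector<cv::Point2f> paperCorners = {
        {420.f, 210.f}, {860.f, 230.f}, {840.f, 610.f}, {400.f, 590.f}};

    // Content frame (e.g. one frame of the game) and its corners.
    cv::Mat content = cv::imread("game_frame.png");
    std::vector<cv::Point2f> contentCorners = {
        {0.f, 0.f}, {(float)content.cols, 0.f},
        {(float)content.cols, (float)content.rows},
        {0.f, (float)content.rows}};

    // Homography that maps content pixels onto the paper's location.
    cv::Mat H = cv::findHomography(contentCorners, paperCorners);

    // Warp the content into the projector framebuffer (1280x800 assumed)
    // so the projected image lands exactly on the sheet of paper.
    cv::Mat framebuffer;
    cv::warpPerspective(content, framebuffer, H, cv::Size(1280, 800));

    cv::imshow("projector", framebuffer);  // full-screen on the projector
    cv::waitKey(0);
    return 0;
}
```

Recomputing the paper corners every frame is what lets the projection follow the sheet as it moves or folds.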
  20. 6. Delayed Reality- what if we had glasses that could

slow our perception?

Reference: "He is wearing shiny goggles that wrap halfway round his head; the bows of the goggles have little earphones that are plugged into his outer ears. The earphones have some built-in noise cancellation features. This sort of thing works best on steady noise... The goggles throw a light, smoky haze across his eyes, and reflect a distorted wide-angle view of a brilliantly lit boulevard that stretches off into an infinite blackness. This boulevard does not actually exist; it is a computer-generated view of an imaginary place." From Snow Crash, by Neal Stephenson. Published by Bantam in 1992.

Brainstorming
- We discussed ideas for playing with the way our brain is tuned to the sense of vision. We thought about creating a delay in the sense of sight by using a camera and a display screen, and discussed problems and solutions in designing such a prototype.
- Many interesting ideas came up for future add-ons to this project, for example:
* Projecting a delayed sight into the real-time view, so one can see a trail of the movements of things around, giving a trippy feel.
* Shifting the camera to the back of the head, so that while walking forward one would perceive oneself as walking backwards.

Research
- We began researching similar projects online and discussed how devices such as the Oculus Rift or OpenDive could help us create stereovision.
- We looked for software that could delay the camera's real-time video capture, and learned from YouTube videos how similar things work.
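The core delay technique is simple to sketch. Below is a minimal C++/OpenCV example that buffers frames in a queue and displays them N frames late; the team ultimately used phone software for this, and the buffer length and camera index here are assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <deque>

int main() {
    cv::VideoCapture cam(0);          // head-mounted camera (assumed index)
    if (!cam.isOpened()) return 1;

    const size_t DELAY_FRAMES = 30;   // ~1 s of delay at 30 fps (assumed)
    std::deque<cv::Mat> buffer;

    cv::Mat frame;
    while (cam.read(frame)) {
        buffer.push_back(frame.clone());   // store the newest frame
        if (buffer.size() > DELAY_FRAMES) {
            // Show the oldest frame: the wearer sees the world N frames late.
            cv::imshow("delayed reality", buffer.front());
            buffer.pop_front();
        }
        if (cv::waitKey(1) == 27) break;   // Esc to quit
    }
    return 0;
}
```

For the stereo headset, the same delayed frame would simply be drawn twice, side by side, behind the two biconvex lenses.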
  21. Prototyping Phase 1: Model Making - We tried to 3D

print a copy of the 'OpenDive' design available online on the MakerBot, but it didn't really work out for us.
- So we designed our own model and made it by hand; we had to test it on a Samsung Galaxy Quattro.
- While working on our own prototype, we later sent the file for 3D printing on the other machine.

Prototyping Phase 2: 3D printed, vision
- With the prototypes made, we began working on the display. Since we were unable to source an LCD screen, we took the mobile phone of one of the team members (a Samsung Galaxy Quattro).
- The problem now was creating a 3D view while keeping the screen at a very close distance from the eyes.
- We found software that could split the screen into two videos, and we used two biconvex lenses to give a sense of depth.

Final prototype demo: Delayed Reality glasses
  22. 3. SOLE SCORCHERS: The Angry Shoes ‘Problem two, and it's

a doozey, is one of fuel. You see, there's another factor that can be seen even in the early images of Iron Man flying, and that is smoke. It appears that when Tony flies he produces a smoke trail, which makes sense given the name "rocket boots." But rockets require fuel, and lots of it. If you've ever seen a shuttle launch you probably remember the giant orange external tank (or ET, NASA has an acronym for everything) attached to the underside of the shuttle fuselage. That's liquid oxygen fuel for the shuttles' main engines. Seriously, that whole thing is one big gas tank. Once in orbit the shuttle just drops the empty ET to burn up in the atmosphere. So rockets take a lot of fuel. Since Iron Man isn't usually depicted with giant tanks of frozen gas on his back, something else must be factoring into his propulsion system.' - Iron Man's flight capabilities, Marvel.com

These shoes are hot, and we mean that literally! This project is all about fun, magic and playing with flames. The concept is to construct funky shoes that emit fire as you walk, with the added bonus of mystical smoke. We planned to use an alcohol-based spray coupled with an electric lighter to ignite the gas and bring out a flame as you apply pressure on the soles, and a simple dry ice + water setup for the smoky effect. Shoes can't get hotter than this. Fire-walks, people!

On Jan 29th, the third day of the workshop, we had an amazing opportunity to interact with Marco Tempest via Skype. We got a chance to discuss our ideas with him and the problems we faced during prototyping. Later that evening, we had another brainstorming session and modified the entire idea. The aerosol + lighter
  23. concept was probably not all that safe for use. Inspired

by the concept of imitating flames with LEDs, we started programming Arduinos for a fire effect instead of actual fire. Earlier we had experimented with using a deodorant can and a lighter to emit fire; as cool and exciting as it looks, it is very dangerous. After this experiment, we had to think of an alternative to take our idea forward safely. Calling the LEDs to the rescue, we programmed them to emit red light at rest and rainbow colors when pressure is applied. We used Arduino-programmed RGB LEDs around the shoe and kept the pressure sensor at the heel of the shoes.

Fire imitation with LEDs and smoke
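A minimal Arduino (C++) sketch of this behavior is below; it assumes a common-cathode RGB LED on three PWM pins and a force-sensitive resistor (FSR) voltage divider at the heel, with pin numbers and the pressure threshold as illustrative guesses:

```cpp
const int R_PIN = 9, G_PIN = 10, B_PIN = 11;  // PWM pins for the RGB LED
const int FSR_PIN = A0;                       // pressure-sensor divider
const int STEP_THRESHOLD = 300;               // tune per sensor (0-1023)

void setColor(int r, int g, int b) {
  analogWrite(R_PIN, r);
  analogWrite(G_PIN, g);
  analogWrite(B_PIN, b);
}

void setup() {
  pinMode(R_PIN, OUTPUT);
  pinMode(G_PIN, OUTPUT);
  pinMode(B_PIN, OUTPUT);
}

void loop() {
  if (analogRead(FSR_PIN) > STEP_THRESHOLD) {
    // Heel pressed: cycle through rainbow-ish colors for one step.
    for (int hue = 0; hue < 255; hue += 5) {
      setColor(hue, 255 - hue, (hue * 2) % 255);
      delay(10);
    }
  } else {
    setColor(255, 0, 0);   // at rest: steady red, like embers
  }
}
```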
  24. FOG EFFECT We succeeded in getting the LED to emit

fire (at least the illusion of it), but what about the smoke effect? We decided to experiment with dry ice and water. The idea was to build a breathable reservoir of dry ice in the sole, and dispense water into the reservoir under the wearer's control. We expect to develop this idea in the near future. Another alternative was a smoke machine, which we are trying to beg, borrow or steal (just kidding). For the demo (as shown in the picture), we just used incense sticks for the smoke effect!
  25. [not yet submitted/documented] Magicus Papyrus Write on paper and get

your handwriting beautified! Using a Wacom Bamboo tablet, we were able to transfer handwriting written on paper to the computer. We then used the MyScript Stylus API, simple software that converts the handwritten text into a standard format, and finally, using Processing, we converted the font and displayed it on a screen and on a piece of paper!

One of the major issues we faced was actually taking the input. The initial thought was to take an image of the handwriting and convert that into a standard font. We considered using OpenCV to read the data, but the programming required was a bit too much for such a process. The idea of using Tesseract, an open-source OCR (optical character recognition) engine, also came up. But feeling that it would be a roundabout process, we decided to use the Wacom tablet, which simplified our work to a great extent. :)

Magic ball - the electronic porcupine
26. Other highlights
- All code/schematics open-source on GitHub
- 112 projects under the MIT open-source license (e.g. here)
- FabLab setup at the workshop venue: four 3D printers, two laser cutters, a milling machine, etc.