
Video as a prototyping tool for connected products

by Martin Charlier

Published March 23, 2017 in Design

Martin Charlier @ O'Reilly Design Conference 2017
This is a talk about how video is a powerful tool for rapid low-fi prototyping of connected products & the Internet of Things. It argues why this method is useful, shows how it builds on existing prototyping methods, and gives a practical example of how you can apply it. Finally, the talk shows how the same ideas and principles of animated or filmed artefacts can be used at different levels of fidelity and focussed on different purposes.

Hi, I'm Martin. A bit about me:
- Product Manager, coming from an industrial design background; I've been working in UX and service design in recent years.
- Worked at various agencies in Germany and London - industrial design at first, more recently UX strategy and research.
- Co-author of Designing Connected Products; check it out at the O'Reilly booth.
- Now Product Manager at Unmade, a fashion technology startup in London building technology for on-demand production and customisable garments with supply chain integration.
I want to give a shout out to my friend and collaborator Tom Metcalfe here. Video prototyping is something I learned from him, and together we developed it further.
So, I'm here to talk about video as a prototyping tool for connected products.
Let’s begin by saying that prototyping is probably my favourite part of any project
I’m a big believer in early, low fidelity rapid prototyping.
One of my favourite tools is Keynote. Its built-in animation and transition effects can so easily be re-purposed to bring user interfaces to life, and yet the crudeness of the tools and the clear mis-use of them force you not to labour over the prototype, but to create something that's good enough.
But more important than the tool you choose: I believe early, rough prototyping is one of the most important things to do in any product development - yet it doesn't always happen.

Example: Jeff Hawkins, Palm, wooden block & chopstick.
Low-fi prototyping is particularly useful for asking big questions.
Sunk cost fallacy: it's a logical fallacy - a flaw or tendency humans have in their thinking - where you continue to "spend money" or incur cost (not just financially) because of what you have already spent ("the sunk cost"), even though, on a rational basis, it may be better to change direction or abandon the thing you spent money on.
Prototyping for connected products has some particular challenges.
Let's look at some typical software UX prototyping tools & methods:
Paper prototyping
Wireframe user flows
Clickable screen prototypes
Native prototypes
My criticism of these for connected products would be:
How about the tools & methods from industrial design and hardware?
Form mock ups
Looks-like & works-like model
Integrated prototype
Again, not generically appropriate for connected products.
Lastly, service design methods:
Enactment & role play
Wizard of Oz
Business origami
Now let me tell you about video and why I think it's interesting.
The first association for 'video' in design is often a polished, CGI-heavy vision film illustrating a shiny, seamless future.
But - these videos:
Took lots of time and money
The makers didn’t learn anything doing itBut generally speaking, there are some qualities of video that make it great and interestingAnd video is a natural fit for interaction design
If you look at the things that go into a film, you can see the parallels and why it's interesting.
So I think video is great - but how do you apply it early and rapidly?
I want to show you some examples of this technique Tom and I have been exploring.
This is a prototype journey for a connected herb garden sensor.
It’s prototyping specifically a section of the setup and pairing experience.
More importantly: it took 30 minutes to make, and it was filmed using Instagram (which I'll talk about in a second) - 30 minutes during which you work through many initial but key interaction design questions quickly.
Here is what went into it.
UI states are separate pieces of paper
A post-it note is used as a storytelling interstitial to show that time has passed.
So Tom and I have run workshops around this at some conferences, and I want to show you some of the prototypes the teams created.
First off, this is a prototype from a team that was testing how a gesture-controlled cooking hob might work, the advantage being that you can control it touch-free. But the question was: how, fundamentally, might one approach the design of this hob? [VIDEO]
Within around two hours, using cardboard mock-ups of pots and a hob, the team iterated through a few different ways the gesture interface might work, trying out different gestures for controlling the hob.
So you can see how useful this technique can be to rapidly iterate options - but still have them shareable and something to discuss at the end.
For example, think about the "set" for the filming. For the video to make sense to the people watching it, the set needs to be authentic. So we encourage teams to develop and film their prototypes in the right locations.
For example, if you’re working on the hob problem, then go and find a kitchen to work in.
If you’re working on a connected fishing rod (like this team here), then go to where fishing happens.
This lets you understand the physical context of the user and gain empathy that way.
This team prototyped a connected coffee cup with integrated loyalty functionality. The task was to see how it might alter the coffee ordering process and to decide whether it would be an improvement or a distraction. [VIDEO]
What's interesting here is that the team had multiple actors and two devices that interacted with each other.
This team had quite a challenging, abstract brief: prototyping how interacting with a robotic kitchen aid might work. The idea was a robotic helper arm that could take over simple cooking tasks or follow pre-defined recipes. [VIDEO]
This showed how a sort of wizard-of-oz approach, with a human playing the robot, could also be integrated into the video technique.
So, overall, the briefs for those teams weren't always about coming up with a feasible solution; they were also about using this technique to get an initial opinion of and feeling for the product - would it even remotely work? If so, how might it have to be designed?
So the value of this technique is largely in the act of applying it, and partly in the outcome itself.
And I would say this goes for most uses of video, actually - whether low-fidelity or higher fidelity.
I want to tell you a little more about this technique and address the two most common questions we get.
So, why Instagram?
Well, when Tom first started playing with this idea, it was actually using Vine - which maybe some people remember.
Two things, I think, came together that led to the Instagram idea: 1) smartphone cameras are now pretty good for simple video, and 2) Vine and Instagram have established a much quicker model for filming and editing. Rather than filming footage in one step and then editing it all together in a second, Instagram works by filming while you hold the button down and not filming when you let go. So you edit while shooting.
Another point about Instagram is that it forces users to keep the video short - there is a maximum length.
This means our "scene" can't get too long, and the designers really have to think about what actions take place and in what order. It's a nice simplification challenge, both for the storytelling and for the user journey you're prototyping.
So, how does it work?
We didn't know this at the start, but it turns out what we're doing is actually an old cinematic "effect" (so there is the special effect I mentioned earlier). It's called the "stop trick" or "substitution splice" and was used by filmmakers as early as 1895 to make things appear or disappear.
It's pretty simple though: you record some action, then you stop the camera, replace or move things around, then carry on filming. It's a bit of a cross between stop motion (which uses still images) and filming.
[VIDEO] Here is what it looks like from another angle - a "behind the scenes", if you will.
You see the set including the camera, and if you pay close attention to my thumb - you see when I stop and start filming. While I’m not filming - the substitution happens.
So this is a little bit about good timing - which is why it makes sense for the cameraman and the main actor tapping the UI to be the same person, filming with one hand, tapping with the other - this means you can really synchronise the two things you're doing.
[VIDEO] Here is the final result, once again.
You can still see little jumps in this - that's because it actually takes a bit of practice to keep the camera completely still and not move the props around when substituting.
That's why Tom and I are sort of dreaming of an app that is better suited for this.
It would show you the last frame of your previous clip as a transparent overlay while you’re not recording, so you can really line up the shot again before you carry on filming.
It would also let you move around clips and remove them - which gives you a second chance if you mess up.
We’ve searched lots but couldn’t find one.
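To make that idea a bit more concrete, here is a very rough sketch of how the overlay part could work in a browser, in TypeScript, using the standard getUserMedia and MediaRecorder APIs. This is purely my own illustration of the concept, not an existing app - the element IDs and the 40% overlay opacity are arbitrary assumptions.

    // Rough sketch of the "onion skin" idea: record only while a button is held down,
    // and when recording stops, freeze the last live frame as a semi-transparent guide
    // so the next clip can be lined up with the previous one. Element IDs are made up.
    async function setUpOnionSkinCamera(): Promise<void> {
      const preview = document.querySelector<HTMLVideoElement>("#preview")!; // live camera feed
      const ghost = document.querySelector<HTMLCanvasElement>("#ghost")!;    // overlay canvas on top of the preview
      const button = document.querySelector<HTMLButtonElement>("#record")!;  // hold-to-record button

      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      preview.srcObject = stream;
      await preview.play();

      const clips: Blob[] = [];                  // recorded segments, to be stitched together later
      const recorder = new MediaRecorder(stream);
      recorder.ondataavailable = (e) => clips.push(e.data);

      // Instagram-style interaction: film while the button is held, pause when it's released.
      button.addEventListener("pointerdown", () => {
        ghost.style.opacity = "0";               // hide the guide while filming
        recorder.start();
      });

      button.addEventListener("pointerup", () => {
        recorder.stop();
        // Capture the current frame and show it at 40% opacity over the live preview,
        // so you can line the shot up again (and do your substitution) before carrying on.
        ghost.width = preview.videoWidth;
        ghost.height = preview.videoHeight;
        ghost.getContext("2d")!.drawImage(preview, 0, 0);
        ghost.style.opacity = "0.4";
      });
    }

    setUpOnionSkinCamera();
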
So this is a bit of a shout out, if there are any interested developers in the audience who want to get involved and help us build it.
So this is all well and good for super low fidelity - but I want to show you a few video techniques that go beyond this, to show you the potential.
First off: the same sort of technique, just with more time and classic filming & editing.
This is a really famous example that was done a while ago by two students at the RCA, Anab Jain and Louise Klinker.
It was a project for Mattel / Hotwheels
Let's watch a minute or so.
It works by moving cars around with magnets.
The cars, incidentally, are deliberately low-fidelity - it's basically an interaction sketch, where the attention of the viewer needs to be guided to the things that matter, and away from the things that don't. So in order to get feedback not on the design of the cars but on the idea overall, the cars were made to look plain and "undesigned".
This example showed another really nice aspect of videos like this: they can convey an ambitious vision really well and inspire a team to get on and build it. That's what happened with a team of researchers in Germany who took this idea to the next level and started building functional prototypes to match the idea shown in the video.
Another thing that happened was that people who saw the video prototype online were curious about when this might be available - signalling purchasing intent - so you can even use this to test desirability with users, to some degree.
Another interesting way to use video is to take storyboards to the next level. [https://www.youtube.com/watch?v=QOeaC8kcxH0]
Now, I actually think storyboarding is often not used to its full potential. I see a lot of designers create storyboards of just what happens in the screen UI and ignore the rest of the context: the physical location, interaction with other people or other devices, and so on. Also, things can get lost in translation when others have to read a storyboard document by themselves.
I actually think it’s relevant here to take a look at how the craft that first came up with storyboards uses them.
Here is a really nice clip I found of Pixar talking about storyboarding and showing how they read a storyboard.
Some key statements Joe Ranft makes in the video.
So, a few things stand out:
The storyboard is actually almost performed by somebody - it’s much more than just the drawing, the whole sense of the scene and actions are conveyed.
It’s an in-person presentation - so it’s not put into a PDF and sent to somebody, because the context would be missing.
I think this is a really nice example to take on board when storyboarding your next UX or UI project: think about the whole context, not just what's on screen, and convey that when you walk somebody through it.
[https://www.youtube.com/watch?v=IXhIJgX7GR4]
Now, the other interesting thing Pixar (and many other movie studios, I imagine) do is that they don't go straight from storyboard to film - they go through an intermediate step, which is to animate the storyboards into a film. Here is a short clip where you can see the animated storyboard (sometimes called an animatic) next to the final film.
Now, here is an example from Cooper that shows what this can look like for the user experience of a connected product-service system. Taking a storyboard (and notice that they included the context around it, not just the screen UI), they simply added narration and turned it into a video. [VIDEO]
So if you're struggling with live-action - and if you're actually building more of a service than a simple product - then this technique might be for you.
What I think is really powerful about this - and this applies to all these video techniques - is that it is one single artefact, one video, that actually becomes relevant input for work that is often done by different teams that may not always be on the same page in terms of the desired user experience.
Shipping, packaging, user manual, physical device design, software UX - these are often all different teams. This specifies one overarching experience for all of them, and it isn't much effort to create.
Lastly, if you're getting good at filming and at editing software such as After Effects, then you can even create video output that looks like the real thing - even though underneath it is actually still a fairly low-fidelity prototype that didn't take very long to make. [https://www.youtube.com/watch?v=Isw7yOOzm5A]
Apologies in advance that I'm in this - that's a bit awkward - but I'll show you a short clip of a video Tom and I made for a project looking into the potential of printed electronics. The idea was to explore what a user experience might look like, given what the technology might be capable of in a few years. So one example we explored was the idea that a printed magazine would hold all its content as audio and could play it back through a headphone jack in the spine.
Now, making this took one afternoon and involved buying a copy of The Economist, mocking up the controls that would be inside the magazine and making them look & feel like they are part of it, then gluing them into the places where they fit best, and lastly chopping apart a headphone extension cable to get the socket end and gluing that into the spine. [VIDEO]
Here is another example of a useful little technique.
This was a bit of a just-for-fun exploration into what a physical Trello might be like. The idea was: what if you could buy a set of e-ink displays that are connected to a Trello board, and you could tape them to the wall to have the board represented in space? One of the things that would have to happen is that the physical Trello cards tell you when additional cards have been added - and here it asks you to add two e-ink cards to the wall. Here is a short clip of that.
The technique here is also very quick and simple once you've played with it.
It's actually just a still image and an animated bit of UI; the two are put together in After Effects and then wiggled around to look like they were filmed with a handheld camera, to make it feel more real. [VIDEO]
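Incidentally, the "wiggling" is literally After Effects' built-in wiggle() expression applied to the layer's position. Expressions are JavaScript-based, and something as small as this is enough to make a static composition read as handheld footage - the exact values here are my guess, not what was used in the prototype:

    // After Effects position expression (expressions are written in JavaScript).
    // wiggle(frequency, amplitude): roughly two small random shakes per second,
    // drifting by up to about 10 pixels - the values are illustrative only.
    wiggle(2, 10);
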
As I mentioned in the beginning, Keynote is actually one of my favourite tools for prototyping. It's a brilliant tool for fast prototyping, and because it's limited (and you're essentially mis-using presentation software) you can't go too far with your prototype. Here are three prototypes for a photo interaction to explore the meta-data behind a photo, all prototyped in a matter of hours and exported as video files so you can test them on the actual device form factor. [VIDEO]
Here is another example exploring a photo viewer that automatically stacks photos that seem to belong together (time, geolocation, etc) when you “squeeze” the timeline together.
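As a small aside, the grouping rule behind a prototype like this doesn't need to be sophisticated. A sketch along these lines - entirely my own assumption, written in TypeScript and using only a time threshold, where a real version would also compare geolocation and so on - is enough to fake convincing behaviour:

    // Hypothetical stacking rule: photos taken within a short window of each other
    // collapse into one stack when the timeline is squeezed. The threshold is made up.
    interface Photo {
      takenAt: Date;   // capture timestamp (a real version would also carry geolocation)
    }

    function stackPhotos(photos: Photo[], maxMinutesApart = 30): Photo[][] {
      // Sort chronologically, then greedily extend the current stack while each
      // new photo is close enough in time to the previous one.
      const sorted = [...photos].sort((a, b) => a.takenAt.getTime() - b.takenAt.getTime());
      const stacks: Photo[][] = [];
      for (const photo of sorted) {
        const current = stacks[stacks.length - 1];
        const previous = current?.[current.length - 1];
        const minutesApart = previous
          ? (photo.takenAt.getTime() - previous.takenAt.getTime()) / 60000
          : Infinity;
        if (current && previous && minutesApart <= maxMinutesApart) {
          current.push(photo);    // close enough in time: same stack
        } else {
          stacks.push([photo]);   // otherwise start a new stack
        }
      }
      return stacks;
    }
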
This example also shows how you can both create something that explores a realistic look & feel, and equally quickly create something that is more of an explanatory sketch for developers to understand what is happening.
I hope you saw value in this idea of using video to build on the techniques shown here.
And you saw that quick to create doesn't necessarily mean a rough outcome, and how useful it is to create a single artefact that defines the experience across departments and teams.
And that the applications of this technique are wide-ranging - from using it for yourself to iterate and refine, to convincing investors or backers (like on Kickstarter, where it's actually pretty common for the videos to be faked to show the eventual experience).
Any questions?