Architecture for Aliens: Using IA for Smarter Things

My 20-minute talk for IA Summit 2017 in Vancouver discusses how we can use information architecture to bridge the gap between humans and the artificial creatures we're releasing into our world.


Andrew Hinton

June 13, 2017

Transcript

  1.
  2.

    So we’ve heard a lot about AI this week. And we can’t seem to get away from the news about it — it’s definitely on a major hype curve. But there are legitimate questions we need to rigorously work through. How do we know what the behavior will be if the behavior is different each time? What is a digital agent — a member of our household? A visitor owned by someone else? Does it have rights? And are we assuming these digital smart things can accomplish more than they actually can? I mean, what could possibly go wrong with robot policemen, right?
  3.

    The real story isn’t about the devices themselves or the vast technical networks and systems they’re part of — it’s about how they play a role in the overall environment.
  4.

    But as I usually do, I want to start us from a different place — one that’s not so much about the marvels of the latest gadgets. The ecological perspective takes us out of the technology headspace and grounds us in the flesh-and-blood reality of humans trying to meet their needs in concrete environments. We evolved for a very long time in environments like this before the very recent explosion of technology in the last few hundred years. Framing my work as environmental design for humans has helped me get at the principles that sit underneath all these complexities.
  5.

    So here’s the reason for the title of this talk: I like Ian Bogost’s way of framing “smart things” as aliens. “...for the computer to operate at all for us first requires a wealth of interactions to take place for itself. As operators or engineers, we may be able to describe how such objects and assemblages work. But what do they experience?”
  6.

    Jakob Johann von Uexküll’s concept of “umwelt” is a useful lens for this idea. Creatures can inhabit the same environment from a god’s-eye view, but be in very different environments in terms of how they perceive and act. A frog and a spider may traverse the same foliage, and may even eat the same food, but the frog acts based on different structures and affordances than the spider. For instance, the frog only pays attention to the twigs, but the spider is interested in the space between them. Even their sensory and cognitive experience is different, due to the way they evolved.
  7.

    We have two Boston Terriers, Sigmund and Edgar. We’ve had to learn that if we anthropomorphize them and treat them like little humans, it ends up making everyone unhappy, including them. Even though dogs have been bred over time to be very human-centric — such as the behavior of looking humans in the eye — they’re still very different creatures. We have to establish a clear communication framework of signals and rewards, and we also have to arrange our environment to be as dog-friendly as it is human-friendly.
  8.

    But there’s a big difference between digital agents and dogs or other animals, including humans: they didn’t evolve as an embodied species like we did. We’re born as embodied, squishy creatures that have to learn over time how to use language and think in abstractions. Digital agents start with abstraction and learn to artificially simulate an embodied understanding of the world.
  9.

    I often use a model that breaks down the situation the user is in and what needs and tasks spawn from that situation. It helps me think through what connective tissue, so to speak, is needed to bridge the gap between the artificial, binary nature of the system and the organic, messy nature of the human. But I’ve also argued that we need to think of it the other way around. Digital systems — artificial agents, in this case — have situations and needs and tasks too. But we almost never map them out the way we would for our human users (assuming we get a chance to do it even for the humans in a given project).
  10.

    We have to broaden what we mean by “taxonomy” to be about all environmental elements that can be interpreted as signifiers for all the creatures in a system — including our alien friends. There’s this concept of “legibility” of the built environment, which refers to the clarity of information in a given set of surroundings — not necessarily linguistic information, but surfaces and objects: where can I walk, where can I sit, how close is the exit? When we’re designing the structure of an environment, we need to pay attention to what these artificial agents perceive and understand, and make sure the environment has the right signifiers and structures for the agents as well. This gets back to the ecological perspective, which is that this is an ecosystem — not a one-way relationship.
  11.

    Of course, the primary concern here is the human being, so it makes sense to start from that perspective. What questions does the human need answered about the environment and the role of digital agents in it? Is one agent going to behave like another that looks the same? Can I even perceive the agent, or only its effects on my surroundings? Are the agents colluding with each other? And so on. These could be different questions for any given service or product context — but it’s the sort of thing you want to get from doing research in your domain.
  12.

    Also, what agents are humans needing to interact with or understand? What role does your product or service play in a broader ecosystem of interconnected things? Mapping this out can be really powerful — and it’s another form of architecture, because you’re establishing a schema of attributes and types. Not in the abstract, but based on the human’s perception. (1) A person in an environment might have a digital agent they interact with. They can see it and understand what it does. The perception and action are known — the human knows that if they do X, it means X and only X to the agent. (2) But that’s almost never the case. There’s a sort of shadow perception going on, where our actions mean
  13.

    something different in addition to what we think they mean. Facebook’s “like” and other interactions are used by algorithms to shape what you see on the platform. In the case of a Nest thermostat, you have explicit interactions that you know about and see, both with the device and its app, but it’s also watching the activity in the house and making decisions based on that. So, in essence, you walking down the hall isn’t just about you walking down the hall anymore — it’s being used to make decisions about how to affect your environment. (3) In turn, that’s being driven by other systems you don’t see or understand, out of your perception — and being used by a corporation for decisions far removed from you. (4) Or there can be agents that invisibly keep up with factors of your behavior that you aren’t even fully aware of — which may also be reporting back to some other out-of-sight system. (5) The same goes for systems that only communicate with you through some mediated format — where the “agent” isn’t embodied in anything you can see… you only deal with it via a communication medium. (6) Then there are things that can keep up with you in some way that is invisible to you — such as the data gathered from shopping at Target — but there’s explicit messaging or interaction that you can perceive, and you’re left wondering why they think you need baby food until you discover you’re pregnant, and they knew it before you did. (7) There may be things that only gather info on you, and you never hear about it. (8) You may interact explicitly with an agent that’s using data
  14.

    gathered through a different agent you know about, but you don’t know about the connection between them, because what’s driving that background info is invisible to you. (9) And of course, all of these things could be talking to each other and you wouldn’t necessarily know it. (10) And finally, keep in mind that most of these systems are managed and supplemented by humans — who, themselves, are increasingly immersed in assistive and other artificial agents.
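To make the idea of a schema of attributes and types concrete, here's a minimal sketch of how you might map the agents in a person's environment from the human's perception. All names, attributes, and the thermostat/doorbell examples are hypothetical illustrations, not a real product's data model.

```python
# Hypothetical schema: each agent is described by what the person
# knows they're giving it vs. what it senses without their awareness.
from dataclasses import dataclass, field

@dataclass
class AgentMapping:
    name: str
    human_can_perceive_it: bool                               # can I see/hear the agent itself?
    explicit_interactions: list = field(default_factory=list) # actions I know it reads
    shadow_interactions: list = field(default_factory=list)   # actions it reads that I don't realize
    reports_to: list = field(default_factory=list)            # other systems it feeds, seen or unseen

def invisible_sensing(agents):
    """Agents using behavior the person doesn't know they're giving off."""
    return [a.name for a in agents if a.shadow_interactions]

thermostat = AgentMapping(
    name="thermostat",
    human_can_perceive_it=True,
    explicit_interactions=["set temperature", "use app"],
    shadow_interactions=["walk down the hall"],  # motion used to infer occupancy
    reports_to=["vendor cloud analytics"],
)
doorbell = AgentMapping("doorbell camera", True, ["press button"])

print(invisible_sensing([thermostat, doorbell]))  # -> ['thermostat']
```

The point of a map like this is that the rows come from research with the person, not from the system diagram — the "shadow" column is exactly what the diagram tends to leave out.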
  15.

    We also want to understand how the digital agent understands its environment, which includes us. I’m putting it in the first person here not to treat it like a human, but to imagine what it’s like to have its perception and logic. What parts of the environment do I perceive? Color, sound, movement, temperature? How do I interpret the data I perceive? What makes me happy?
  16.

    An example of this is something as basic as a Fitbit. A Fitbit doesn’t perceive or comprehend actual calories burned. It’s averaging an estimate based on what you’ve told the system about your body and the movements it senses throughout the day. The language it perceives and understands is very narrow and easy to misconstrue — just based on what it presumes are arm movements. Understanding how the Fitbit perceives and acts upon that perception is key to creating…
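A toy sketch of the kind of estimate described above: the device never perceives "calories" — it combines a self-reported body profile with sensed movement counts. The formula and constant here are illustrative assumptions, not Fitbit's actual model.

```python
# Hypothetical estimator: the device's whole "umwelt" is two numbers.
def estimated_calories(weight_kg: float, steps: int) -> float:
    # Assumed rule of thumb: on the order of 0.0005 kcal burned
    # per step per kg of body weight, treated as a constant.
    per_step = 0.0005 * weight_kg
    return steps * per_step

# Two very different activities can look identical to the sensor:
# 8,000 real steps and 8,000 arm swings while seated produce the
# same input, and therefore the same estimate.
print(round(estimated_calories(weight_kg=70, steps=8000), 1))  # prints 280.0
```

The narrowness is the point: everything outside step-like movement and the stored profile is simply not part of the device's world.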
  18.

    In a sense, these things we’re creating are new creatures in our world. So, not unlike the schemas we use to map and relate organic creatures, we ought to be able to do the same for artificial ones. The opportunity, though, is for our taxonomy to be not just descriptive but even prescriptive — helping to define the nature of the agent, or what we need it to eventually be.
  19.

    There’s been some good work done on this question of how to categorize. I really like this paper from 2010, which classifies “smart objects” along several dimensions. How interactive is it? How aware of the environment is it — just activities, or also how defined policies apply to those activities? Can it logically model for itself just some basic, linear functions, or can it keep up with complex workflows — the difference between Siri just doing a web search for you versus carrying on a conversation about the topic in question? This is a great framework for working with IT and business architects, because it speaks their language. But I think we also need work that’s not so much from the technology point of view.
  20.

    Here’s a good example from Mike Kuniavsky at PARC, which he presented last year at O’Reilly Design. It’s a more human-centered scale, specifically about the level of automation something has. This is a taxonomy, a classification scheme, that we could borrow from for our own work; it puts a name to the inflection points along a continuum of agency.
  21.

    Kuniavsky’s talk also has this example of iconography that Timo Arnall worked on with students about a decade ago — the icons signify hidden functionality of RFID devices. This is similar to the laundry tags in clothing, with symbols for how to wash, dry, and iron a garment, and whether it’s ok to use bleach. Kuniavsky muses that maybe we need something like this for predictive systems and digital agents. This is very much related to IA, because any system of signifiers is a taxonomical system of meaning that brings clarity to the complexity of the human environment.
  22.

    These dimensions are all framed in ways any person can learn to understand. Is this thing visible to me, or is it obscured — just in the woodwork? Is it an independent agent, like a Roomba, or is it really part of a collective intelligence, like Alexa or Siri? Is it networked, sending and receiving data? If so, what sort of information about my life is in the stream? That in itself needs its own schema. Is there a human behind this at all? Can I contact that human? Or is it nameless people behaving in a mechanical-turk fashion backstage? There’s also the issue of invariant vs. variant behavior — we’re used to being able to predict how things behave once we have used them. Automobiles, toasters, and other mechanical
  23.

    things have always been singular in their behavior — a toaster doesn’t turn into a microwave or a television overnight. But software can change what it is without any external clue. That also speaks to this issue of learning — we’re used to more linear, predictable, procedure-driven systems that behave in consistent ways. But things that learn will change their behavior accordingly. A simple example: we get used to typing a few letters of someone’s name into an email and letting auto-complete finish it for us. We get used to that shortcut, but then the system learns about someone else with the same few letters, and we hit “Return” before realizing we selected the wrong person. Imagine that sort of error writ large across our homes and schools and workplaces. This is why the ecological perspective is so powerful — it reminds us that we’re creatures in an environment and we behave in messy, creaturely ways. So the environment needs to not work against our nature.
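The human-facing dimensions above can be sketched as a classification schema. The dimension names and the Roomba/Alexa example values are assumptions for illustration; a real taxonomy would come out of domain research, not a code listing.

```python
# Hypothetical schema for classifying agents along human-legible dimensions.
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    VISIBLE = "visible"          # I can point at the thing
    EMBEDDED = "embedded"        # it's in the woodwork

class Agency(Enum):
    INDEPENDENT = "independent"  # self-contained, like a Roomba
    COLLECTIVE = "collective"    # a front end for shared intelligence

class Behavior(Enum):
    INVARIANT = "invariant"      # acts the same way every time, like a toaster
    VARIANT = "variant"          # learns, so its behavior drifts over time

@dataclass
class AgentProfile:
    name: str
    visibility: Visibility
    agency: Agency
    behavior: Behavior
    networked: bool
    reachable_human: bool  # is there a person behind it I can contact?

roomba = AgentProfile("Roomba", Visibility.VISIBLE, Agency.INDEPENDENT,
                      Behavior.INVARIANT, networked=False, reachable_human=True)
alexa = AgentProfile("Alexa", Visibility.VISIBLE, Agency.COLLECTIVE,
                     Behavior.VARIANT, networked=True, reachable_human=False)

# The hardest things for a person to predict are the networked learners:
unpredictable = [a.name for a in (roomba, alexa)
                 if a.behavior is Behavior.VARIANT and a.networked]
print(unpredictable)  # -> ['Alexa']
```

Making these dimensions explicit is what lets the taxonomy be prescriptive as well as descriptive: you can specify where on each axis an agent is allowed to sit before it ships.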
  24.

    So, coming back to this diagram — the system of signifiers between the agent and the human needs to be reframed to work for both. And the environmental structures we co-inhabit — both the digital and the built environments — need to be more attuned to allowing us to be part of a coherent ecosystem, bridging the umwelts. This should allow us to better understand what digital agents are capable of and what to expect from them, and vice versa.