Title: Designing Agentive Technology
Author: Christopher Noessel
Publisher: Ingram
Genre: Marketing, PR, Advertising
ISBN: 9781933820705
So those are the basics. Agentive technology watches a data stream for triggers and then responds with narrow artificial intelligence to help its user accomplish some goal. In a phrase, it’s a persistent, background assistant.
If those are the basics, there are a few advanced features that a sophisticated agent might have. It might infer what you want without your having to tell it explicitly. It might adopt machine learning methods to refine its predictive models. It might gently fade away in smart ways such that the user gains competence. You’ll learn about these in Part II, “Doing,” of this book, but for now it’s enough to know that agents can be much smarter than the basic definition we’ve established here.
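To ground the definition, here is a minimal sketch of the basic agentive loop: a persistent background process that watches a data stream for triggers and then acts on the user's behalf. All of the names here (ToyStream, run_agent, and so on) are hypothetical illustrations written for this section, not code from the book or any real agent framework.

```python
import time


class ToyStream:
    """Stand-in for a real data feed (fares, sensor readings, and so on)."""

    def __init__(self, batches):
        self._batches = iter(batches)

    def latest(self):
        """Return whatever new items have arrived since the last check."""
        return next(self._batches, [])


def run_agent(stream, trigger, act, cycles=3, poll_seconds=0.1):
    """Watch `stream` for items matching `trigger`; `act` on each match."""
    for _ in range(cycles):              # a real agent would loop indefinitely
        for item in stream.latest():
            if trigger(item):            # narrow intelligence: a focused rule or model
                act(item)                # do the task for the user, per their preferences
        time.sleep(poll_seconds)         # stay out of sight between checks


if __name__ == "__main__":
    stream = ToyStream([[3, 9], [12], [7, 15]])
    run_agent(stream, trigger=lambda x: x > 10,
              act=lambda x: print(f"Acting on {x}"))
```

The point of the sketch is the shape, not the code: the agent runs unattended, checks its stream, and only surfaces when a trigger fires. The "smarter" features above would replace the simple trigger rule with an inferred or learned model.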
How Different Are Agents?
Since most of the design and development process has grown up around building good tools, it’s instructive to compare and contrast good tools with good agents, because they are different in significant ways.
One of the main assertions of this book is that these differences are enough to warrant different ways of thinking about, planning for, and designing technology. They imply new use cases to master and new questions for evaluating them. They call for a community of practitioners to form around them.
TABLE 2.1 COMPARING MENTAL MODELS
| A Tool-Based Model | An Agent-Based Model |
| --- | --- |
| A good tool lets you do a task well. | A good agent does a task for you, per your preferences. |
| A hammer might be the canonical model. | A valet might be the canonical model. |
| Design must focus on strong affordances and real-time feedback. | Design must focus on easy setup and informative touchpoints. |
| When a tool is working, it’s ready-to-hand, part of the body, almost unconsciously doing its thing. | When the agent is working, it’s out of sight. When a user must engage its touchpoints, they require conscious attention and consideration. |
| The designer’s goal is often to get the user into flow (in the Mihaly Csikszentmihalyi sense) while performing a task. | The designer’s goal is to ensure that the touchpoints are clear and actionable, helping the user keep the agent on track. |
Drawing a Boundary Around Agentive Technology
To make a concept clear, you need to assert a definition, give examples, and then describe its boundaries. Some things will not be worth considering because they are obviously in; some things will not be worth considering because they are obviously out; but the interesting stuff is at the boundary, where it’s not quite clear. What is on the edge of the concept, but specifically isn’t the thing? Reviewing these areas should help you get clear about what I mean by agentive technology and what lies beyond the scope of my consideration.
It’s Not Assistive Technology
Artificial narrow intelligences that help you perform a task are best described as assistants, or assistive technology. We need to think as clearly about assistive tech as we do about agentive tech, but we already have a solid foundation for designing assistive tech: we have been building it for the last seven decades or so, and recent work on heads-up displays and conversational UI is making headway toward best practices for assistants. It’s worth noting that designing agentive systems will often entail designing assistive aspects, but the two are not the same thing.
It seems subtle at first, but consider the difference between two ways to get an international airline ticket to a favorite destination. Assistive technology would work to make all of your options, and the trade-offs between them, apparent as you make your selection, helping you avoid spending too much money or winding up with a miserable five-layover flight. An agent would vigilantly watch all airline offers for the right ticket and pipe up when it found one that fit your preferences. If it were very confident and you had authorized it, it might even make the purchase for you.
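To make the contrast concrete, here is a minimal sketch of how such a fare-watching agent might decide what to do with each incoming offer. Everything in it (the Offer and Preferences structures, the confidence score, the 0.9 threshold) is a hypothetical illustration under assumed data shapes, not a real booking API.

```python
from dataclasses import dataclass


@dataclass
class Offer:
    price: float
    layovers: int
    destination: str


@dataclass
class Preferences:
    destination: str
    max_price: float
    max_layovers: int
    auto_purchase: bool  # has the user authorized buying on their behalf?


def matches(offer: Offer, prefs: Preferences) -> bool:
    """Does this offer fall within the user's stated preferences?"""
    return (offer.destination == prefs.destination
            and offer.price <= prefs.max_price
            and offer.layovers <= prefs.max_layovers)


def consider(offer: Offer, prefs: Preferences, confidence: float) -> str:
    """Decide what the agent should do with one incoming offer."""
    if not matches(offer, prefs):
        return "ignore"       # keep watching silently
    if prefs.auto_purchase and confidence > 0.9:
        return "purchase"     # confident and authorized: act for the user
    return "notify"           # otherwise, pipe up and let the user decide


if __name__ == "__main__":
    prefs = Preferences("LIS", max_price=600.0, max_layovers=1,
                        auto_purchase=True)
    offer = Offer(price=540.0, layovers=1, destination="LIS")
    print(consider(offer, prefs, confidence=0.95))  # -> "purchase"
```

Note where the agency lives: the assistive version of this product would surface every option for the user to weigh, while the agentive version applies the preferences itself and only escalates when it lacks confidence or authorization.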
It’s Not Conversational Agents
“Agent” has traditionally been used in service industries to mean “someone who helps you.” Think of a customer service agent. The help they give you is, 99 percent of the time, synchronous: they help you in real time, in person or on the phone, doing their best to minimize your wait. In my mind, this is much more akin to an assistant. But even that’s troubling, since “assistant” has also been used to mean “that person who helps me at my job,” both synchronously—as in “please take dictation”—and agentively—as in “hold all my calls until further notice.”
These blurry usages are made even blurrier because human agents and assistants can act in both agentive and assistive ways. But since I have to pick, given the base meanings of the words, I think an assistant should assist you with a task, and an agent should take agency and do things for you. So “agent” and “agentive” are the right terms for what I’m talking about.
Complicating that rightness is a recent trend in interaction design: conversational user interfaces, or chatbots. These are distinguished by having users work in a command-line-like interface inside a chat framework, interacting with software that is fairly good at understanding and responding to natural language. Canonical examples feature users purchasing airline tickets (yes, like a travel agent) or movie tickets.
Because these mimic the conversations one might have with a customer service agent, they have been called conversational agents. I think they would be better described as conversational assistants, but nobody asked me, and now it’s too late. That ship has sailed. So, when I speak of agents, I am not talking about conversational agents. Agentive technology might engage its user through a conversational UI, but they are not the same thing.
It’s Not Robots
No. But holy processor do we love them. From Metropolis’ Maria to BB-8 and even GLaDOS, we just can’t stop talking and thinking about them in our narratives.
One main reason, I think, is that they’re easy to think about. We have lots of mental equipment for dealing with humans, and a robot can be thought of as a metal-and-plastic human. So between the abstraction that is an agent and the concrete thing that is a robot, it’s easy to conflate the two. But we shouldn’t.
Another reason is that robots promise—as do agents—“ethics-free” …