Designing Agentive Technology. Christopher Noessel

Title: Designing Agentive Technology

Author: Christopher Noessel

Publisher: Ingram

Genre: Marketing, PR, advertising

ISBN: 9781933820705


      So between the Nest Thermostat in the prior chapter and the handful just covered, you’ve seen some examples of agentive technology, but rather than relying on inference, let’s get specific about what an agent is and isn’t.

      In the simplest definition, an agent is a piece of narrow artificial intelligence that acts on behalf of its user.

      Looking at each aspect of that definition will help you understand it more fully. First, let’s take the notion of narrow intelligence, and then acting on behalf of its user.

      The Notion of Narrow Artificial Intelligence

      When most people think of AI, they think of what they see in the movies. Maybe you imagine BB-8 rolling along the sands of Jakku, werping and wooing as it trails Rey. If you know a bit of sci-fi history you might also have a black-and-white robot in mind, like Gort or Robbie, policing or protecting per their job description. Or maybe you realize that AI doesn’t need to be embodied in robot form, as with Samantha, the disembodied “operating system” OS1 in the movie Her—one minute sorting her user’s inbox, the next falling in love with and then abandoning him. Or if you have an affinity for the darker side of things, you might think of either HAL’s glowing red eye or MU/TH/R 6000’s cold, screen-green text, each AI assaulting its crew members to protect their secret missions.

      These sci-fi AIs—and, in fact, the overwhelming majority of sci-fi AIs—are examples of strong artificial intelligence. To see where agents fit, it helps to distinguish three broad categories of AI.

      The first is the most advanced category of strong AI, called artificial super intelligence (ASI). It describes an AI with capabilities far beyond human capabilities, and far beyond what you can even imagine. As a bird’s intelligence is to human intelligence, a human intelligence is to ASI. As the scenario goes, if you program AGIs to evolve, or to make better and better copies of themselves, the result is ever-accelerating improvement until they achieve what you can only call a godlike intelligence. Samantha from Her is a good sci-fi example. By the end of the movie, she is accessing and contributing to the total body of human endeavor and holding simultaneous conversations and relationships with users and other AIs, all while evolving to such a degree that she and the other AIs ultimately decide to leave humans behind, as they sort of self-rapture to something or somewhere incomprehensible to humans.

      The second is artificial general intelligence, or AGI, so called because it displays a general or abstract problem-solving capability similar to a human intelligence. BB-8 and HAL are examples of this. They are artificial, but are fairly human in their capabilities. They’re one of the team. If/when we ever get to this, we’ll be in a categorically different place than agentive tech.

      The third category is “weak” or artificial narrow intelligence, or ANI. This is much more constrained AI, which might be fantastic at, say, searching a massive database of tens of millions of songs for a new one you’re likely to love, but is still unable to play a game of tic-tac-toe. The intelligence these systems display cannot generalize, cannot of its own accord apply what it “knows” to new categories of problems. It’s the AI that is in the world today, so familiar that you don’t think of it as AI as much as you think of it simply as smart technology.

      Whether or when we actually get to strong AGI is a matter for computer scientists, but for the purposes of design, it is immaterial. If AGI ever makes it to your Nest Thermostat, it will be making decisions about how best to use its resources to manage its task and communicate with its users, that is, to create its own interface and experience. Designers will not be specifying such systems as much as acting as consultants to the early AGIs on best practices. But until we’ve got AGI around to worry about, we have increasing numbers of examples of products and services built around ANI, and those will need good design to make them humane and useful.

      As you saw in the prior examples, narrow intelligence isn’t a binary quality. Different agents can embody different levels of intelligence. An agent can be said to be more intelligent when it has the following characteristics:

      • Its model of its domain is more reticulated and closer to our own. Anyone who has been plunged into darkness by spending “too much” time in a restroom with a motion-sensing light switch knows that it is less smart than one that could “see” when there is a human there who still needs the light.

      • It successfully monitors more—and more complex—data streams. Drebbel’s device monitored a single variable, but the Nest Thermostat monitors dozens.

      • It can make smart inferences, figuring out what given data means and reacting accordingly. Steady weight gain over the course of a month might mean that a homecare patient’s sedentary choices are increasing their body mass index. But rapid weight gain can mean dangerous swelling in the tissues, a sign of a more serious medical concern.

      • It can plan. This means considering multiple options for achieving a goal, taking into account the trade-offs between them, and selecting the best one.

      • It is adaptable. It’s able to use feedback to track its progress toward its goal and adjust its plans accordingly.

      • In advanced agents, this can mean the capability to refine predictive models with increasing experience and as new real-time information comes in. Called machine learning in the vernacular, this helps narrow AIs adapt to an individual’s behavior and get better over time. I’ll touch on machine learning a bit more later, but for now understand that software can be programmed to make itself better at what it does over time.
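      The characteristics above can be pulled together in a minimal sketch of an agent loop: it monitors a data stream, infers what readings mean, and adapts its internal model as new data comes in. Everything here is illustrative, not any real product’s logic; the weight-gain thresholds and the simple moving-average “learning” are stand-ins chosen only to echo the homecare example.

```python
# Minimal sketch of a narrow agent: monitor, infer, adapt.
# Thresholds and the baseline update are illustrative stand-ins.

class WeightAgent:
    def __init__(self, baseline: float):
        self.baseline = baseline  # the agent's model of "normal" for this patient

    def infer(self, reading: float) -> str:
        """Infer what a new reading means relative to the baseline."""
        delta = reading - self.baseline
        if delta > 2.0:           # rapid gain: possible tissue swelling
            return "alert"
        if delta > 0.5:           # steady gain: a lifestyle trend worth noting
            return "note"
        return "ok"

    def step(self, reading: float) -> str:
        """One cycle: monitor a reading, infer, then adapt the model."""
        meaning = self.infer(reading)
        # Adapt: slowly fold new data into the baseline (a toy form of learning).
        self.baseline = 0.9 * self.baseline + 0.1 * reading
        return meaning

agent = WeightAgent(baseline=70.0)
print(agent.step(70.2))  # small change: "ok"
print(agent.step(73.5))  # rapid gain: "alert"
```

      A real agent would monitor many such streams at once and plan among competing responses, but the shape is the same: sense, interpret, act, and update the model.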

      So agents are properly defined as artificial narrow intelligence—AI that is strictly fit to one domain. But where ANI is the category, the agent is the instance, the thing that acts on behalf of its user. So let’s talk about that second aspect of the definition.

      Acting on Behalf of Its User

      Similar to intelligence, agency can be thought of as a spectrum. Some things are more agentive than others. Is a hammer agentive? No. I mean if you want to be indulgently philosophical, you could propose that the metal head is acting on the nail per request by the rich gestural command the user provides to the handle. But the fact that it’s always available to the user’s hand during the task means it’s a tool—that is, part of the user’s attention and ongoing effort.

      Less philosophically, is an internet search an example of an agent? Certainly the user states a need, and the software rummages through its internal model of the internet to retrieve likely matches. But this direct, practically instantaneous cause-and-effect makes it more like the hammer. Still a tool.

      But as you saw before, when Google lets you save that search, such that it sits out there while you pay attention to other things, and lets you know when new results come in, now you’re talking about something that is much more clearly acting on behalf of its user in a way that is distinct from a tool. It handles tasks so that you can spend your limited attention on something else. This part of “acting on your behalf”—that it does its thing while out of sight and out of mind—is foundational to the notion of what an agent is, why it’s new, and why it’s valuable. It can track something you would find tedious: a particular moment in time, a special kind of activity on the internet, or security events on a computer network.
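      The saved-search behavior can be sketched in a few lines. The core of it is remembering what the user has already seen and surfacing only what is new; the fetch function here is a hypothetical stand-in for whatever search API a real agent would poll, and the simulated results are invented for the example.

```python
# Sketch of "acting on your behalf": a saved search that runs out of
# sight and only interrupts when something new appears.

def check_once(query: str, seen: set, fetch) -> set:
    """One polling cycle: fetch current results, return only unseen ones."""
    current = fetch(query)
    new = current - seen
    seen |= new  # remember them, so the user is never told twice
    return new

# Simulated usage with a fake fetch function returning canned results:
results_over_time = [{"a", "b"}, {"a", "b", "c"}]
fetch = lambda q: results_over_time.pop(0)

seen: set = set()
print(check_once("rare vinyl", seen, fetch))  # first poll: everything is new
print(check_once("rare vinyl", seen, fetch))  # second poll: only "c" is new
```

      A real agent would run this cycle on a schedule and notify the user only when the returned set is non-empty; the rest of the time it stays silent, which is exactly the out-of-sight quality that makes it an agent rather than a tool.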

      To do any of that, an agent must monitor some stream of data. It could