Title: Smart Swarm: Using Animal Behaviour to Organise Our World
Author: Peter Miller
Publisher: HarperCollins
Genre: Sociology
ISBN: 9780007411078
At this point, the ant computes the quality of the solution and lays down a pheromone trail whose strength reflects that quality. This process is repeated, ant after ant, thousands of times. “The nice part is that, when you’re near the finish, you will see that the ants have left a clear distribution of pheromones around your system,” Donati says. Each new solution is compared to the best previous one. If it’s better, it becomes the best one. It’s all a matter of balance between exploration and exploitation, he says.
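To make that loop concrete, here is a minimal Python sketch of an ant-style optimizer for a generic routing problem. The objective (total route length), the parameters ALPHA, BETA, and EVAPORATION, and the toy demo are illustrative assumptions, not the pilot project's actual code.

```python
import random

ALPHA, BETA, EVAPORATION = 1.0, 2.0, 0.1   # how strongly ants follow pheromone vs. distance, and how fast trails fade

def build_route(cities, distances, pheromone):
    """One ant builds a route stop by stop, biased by pheromone strength and distance."""
    route = [random.choice(cities)]
    unvisited = set(cities) - {route[0]}
    while unvisited:
        here = route[-1]
        options = list(unvisited)
        weights = [
            (pheromone[(here, c)] ** ALPHA) * ((1.0 / distances[(here, c)]) ** BETA)
            for c in options
        ]
        nxt = random.choices(options, weights=weights)[0]
        route.append(nxt)
        unvisited.remove(nxt)
    return route

def route_length(route, distances):
    """Total length of the closed tour."""
    return sum(distances[(a, b)] for a, b in zip(route, route[1:] + route[:1]))

def ant_colony(cities, distances, n_ants=50, n_iterations=200):
    """Each ant proposes a route; the best so far is kept and its trail reinforced."""
    pheromone = {pair: 1.0 for pair in distances}
    best_route, best_length = None, float("inf")
    for _ in range(n_iterations):
        for _ in range(n_ants):
            route = build_route(cities, distances, pheromone)
            length = route_length(route, distances)
            if length < best_length:              # compare to the best previous solution
                best_route, best_length = route, length
        for pair in pheromone:                    # old trails evaporate...
            pheromone[pair] *= (1.0 - EVAPORATION)
        for a, b in zip(best_route, best_route[1:] + best_route[:1]):
            pheromone[(a, b)] += 1.0 / best_length   # ...and the best route is reinforced
            pheromone[(b, a)] += 1.0 / best_length   # in proportion to its quality
    return best_route, best_length

if __name__ == "__main__":
    # Tiny demo: four points on a unit square, symmetric distances.
    pts = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (1, 0)}
    dists = {(a, b): ((pts[a][0] - pts[b][0]) ** 2 + (pts[a][1] - pts[b][1]) ** 2) ** 0.5
             for a in pts for b in pts if a != b}
    print(ant_colony(list(pts), dists, n_ants=10, n_iterations=50))
```

The EVAPORATION term is what keeps the balance Donati describes: evaporation erases old trails so the ants keep exploring, while reinforcing the best route found so far concentrates them on what already works.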
The pilot project was a big hit at Air Liquide, proving to managers that an ant-based model was flexible enough to handle the complexities of their routing problem. But what Air Liquide really wanted was to optimize production, since the cost of producing gas was ten times that of delivering it. So they enlisted Bios Group, which by then had merged with a company called NuTech Solutions, to develop a tool to optimize production. That tool, completed in 2004, is the one Air Liquide uses to guide its business today.
Technicians in the control center run this optimizer every night. They begin at eight p.m. by entering new data about plant schedules, truck availabilities, and customer needs into the model. A telemetry-based system called SCADA (Supervisory Control and Data Acquisition) feeds real-time information about the efficiency of each plant, gas levels in storage tanks, and the cost of power, among other factors. A neural network forecast engine provides estimates of which customers must get deliveries right away, based on telemetry readings and previous customer-usage patterns. Weather forecasts by the hour are entered, as are estimated power costs for the next week. Finally, any miscellaneous information is added that might affect schedules, such as which plants need maintenance in the near future.
The optimizer is then asked to consider every permutation—millions of possible decisions and outcomes—to come up with a plan for the next seven days. To do so, it combines the ant-based algorithm with other problem-solving techniques, weighing which plants should produce how much of which gas. To speed up run times, technicians divide the country into three regions: west of the Rockies, Gulf Coast, and the eastern states. Then they run the model three times for each region. By the time the day crew arrives at work at six a.m., the optimizer has solutions for each region.
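The shape of that nightly run can be pictured roughly as follows. This is a purely hypothetical sketch: the region names follow the text, but every input field, function name, and the stubbed optimize() are placeholders for illustration, not Air Liquide's or NuTech's actual tool.

```python
from datetime import date, timedelta

REGIONS = ("west_of_rockies", "gulf_coast", "eastern_states")
HORIZON_DAYS = 7  # the plan covers the next seven days

def gather_inputs(region):
    """Stand-in for the 8 p.m. data entry plus the SCADA and forecast feeds."""
    return {
        "region": region,
        "plant_schedules": [],        # entered by technicians each evening
        "truck_availability": [],
        "customer_needs": [],
        "scada_telemetry": {},        # plant efficiency, tank levels, power costs
        "urgent_deliveries": [],      # neural-network forecast of who needs gas first
        "hourly_weather": [],
        "power_cost_forecast": [],
        "planned_maintenance": [],
    }

def optimize(inputs, horizon_days=HORIZON_DAYS):
    """Stub for the ant-based optimizer: returns an empty seven-day plan skeleton."""
    start = date.today()
    return {
        "region": inputs["region"],
        "days": [start + timedelta(days=d) for d in range(horizon_days)],
        "production_plan": {},        # which plants make how much of which gas
        "delivery_plan": {},          # which trucks go where
    }

def nightly_run():
    """One run per region, so plans are ready before the day crew arrives at 6 a.m."""
    return {region: optimize(gather_inputs(region)) for region in REGIONS}
```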
People still make all the decisions. But now, at least, they know where they need to go. “One of the scientists called this approach the Yellow Brick Road,” Harper says. “Basically, instead of worrying about the absolute answer, we let the optimizer point us toward the right answer, and by the time we take a few more steps, we rerun the solutions and get the next direction. So we don’t worry about the end point. That’s Oz. We just follow the Yellow Brick Road one step at a time.”
This ant-inspired system has helped Air Liquide reduce its costs dramatically, primarily by making the right gases at the right plants. Exactly how much, company officials are reluctant to say, but one published estimate put the figure at $20 million a year.
“It’s huge,” Harper says. “It’s actually huge.”
Lessons from Checkers
During the 1950s, an electrical engineer at IBM named Arthur Samuel set out to teach a machine to play checkers. The machine was a prototype of the company’s first electronic digital computer called the Defense Calculator, and it was so big it filled a room. By today’s standards, it was a primitive device, but it could execute a hundred thousand instructions a second, and that was all that Samuel needed.
He chose checkers because the game is simple enough for a child to learn, yet complicated enough to challenge an experienced player. What makes checkers fun, after all, is that no two games are likely to be exactly the same. With twelve pieces on each side and thirty-two playable squares on the board (checkers is played only on the dark squares), the number of possible configurations from start to finish is practically endless. You can play over and over and never repeat the same sequence of moves. This gives checkers what complexity experts call perpetual novelty.
For Samuel’s computer, that was a problem. If every move theoretically could lead to billions of possible configurations of the game board, how could it choose the best one to make? Compiling a comprehensive list of results for each move would simply take too long—just as it would for Marco Dorigo in the traveling salesman problem. So Samuel gave the machine a few basic features to look for. One was called pieces ahead, meaning the computer should count how many pieces it had left on the board and compare that with its opponent’s. Was it two pieces ahead? Three pieces ahead? If a particular move resulted in more pieces ahead, it was likely to be favored. Other features specified favorable regions of the board. Penetrating the opponent’s side was considered advantageous, for example. So was dominating the middle. And so on.
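A weighted evaluation over board features of this kind can be sketched in a few lines of Python. The board encoding, the particular features, and the weights below are illustrative assumptions, not a reconstruction of Samuel's program.

```python
# Hypothetical square numbering over the 32 playable (dark) squares.
CENTER_SQUARES = {13, 14, 17, 18}      # a stand-in for "the middle of the board"
OPPONENT_HALF = set(range(16, 32))      # the far half, as seen from one side

weights = {"pieces_ahead": 1.0, "center_control": 0.3, "penetration": 0.5}

def features(board, player):
    """board: dict {square: 'B' | 'W' | None} over the 32 playable squares."""
    mine = [sq for sq, owner in board.items() if owner == player]
    theirs = [sq for sq, owner in board.items() if owner not in (player, None)]
    return {
        "pieces_ahead": len(mine) - len(theirs),
        "center_control": sum(1 for sq in mine if sq in CENTER_SQUARES),
        "penetration": sum(1 for sq in mine if sq in OPPONENT_HALF),
    }

def evaluate(board, player):
    """Weighted sum of features: a higher score means a more favorable position."""
    return sum(weights[name] * value for name, value in features(board, player).items())

def choose_move(candidate_boards, player):
    """Score the board each candidate move would produce and pick the highest."""
    return max(candidate_boards, key=lambda board: evaluate(board, player))
```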
Samuel also taught the computer to learn from its mistakes. If a move based on certain features failed to produce a favorable outcome, then the computer gave less weight to those features the next time around. In addition, he showed the computer how to recognize “stage-setting” moves—those that didn’t help out in an obvious way right now, such as a move that sacrificed a piece, but set up a later move with a bigger payoff, such as a triple jump. The machine did this after the fact by increasing the weight of features that favored the stage-setting move. Finally, he told the computer to assume that its opponent knew everything that it knew so the opponent would inflict the greatest damage possible whenever it could. That forced the machine to factor in potentially negative consequences of moves as well as positive ones. If it got surprised by an opponent anyway, it adjusted the weights to avoid that mistake next time.
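The learning step can be sketched in the same spirit. The update rule below is a simplified stand-in for Samuel's actual scheme: it nudges the weights of whichever features a move changed, up if the move paid off and down if it backfired, and it assumes the opponent always picks its strongest reply. The `evaluate` and `features` arguments are passed in, for instance the functions from the previous sketch.

```python
LEARNING_RATE = 0.05

def best_opponent_reply(reply_boards, opponent, evaluate):
    """Assume the opponent knows what we know and plays its strongest reply."""
    return max(reply_boards, key=lambda board: evaluate(board, opponent))

def adjust_weights(weights, features, board_before, board_after, player, outcome):
    """Nudge feature weights after seeing how a move turned out.

    outcome > 0: the move paid off (perhaps a stage-setting sacrifice that set
    up a triple jump), so the features it strengthened gain weight.
    outcome < 0: the move backfired, so those features get less weight next time.
    """
    before = features(board_before, player)
    after = features(board_after, player)
    for name in weights:
        change = after[name] - before[name]
        if change:
            weights[name] += LEARNING_RATE * outcome * change
    return weights
```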
Samuel’s project was so successful that the computer was soon beating him on a regular basis. By the end of the 1960s, it was defeating checkers champions.
“All in all, his was a remarkable achievement,” writes John Holland, another pioneer of artificial intelligence, in his book Emergence: From Chaos to Order. “We are nowhere near exploiting these lessons that Samuel put before us almost a half century ago.”
To Holland, who shared a lab with Samuel at IBM, the true genius of the checkers program was the way it modified the weights of a handful of features to cope with the game’s daunting complexity. Because it was impractical at the time to “solve” the game of checkers mathematically by calculating the perfect sequence of moves, as you might do with a simpler game, such as tic-tac-toe, Samuel just wanted his computer to play the game better each time. “The emergence of good play is the objective of Samuel’s study,” Holland wrote.
What Holland meant by emergence was something quite specific. He was referring to the process by which the computer formed a master strategy as a result of individual moves, or, as he put it more generally, the phenomenon of much coming from little. Although everything the program did was “fully reducible to the rules (instructions) that define it,” he says, the behaviors generated by the game were “not easily anticipated from an inspection of those rules.”
We saw the same thing, of course, in Colony 550. Even though individual ants were following simple rules about foraging, their pattern of behavior as a group added up to a surprisingly flexible strategy for the colony as a whole. One colony might tend to be more aggressive in its style of foraging, sending out lots of foragers, while another might be more conservative, keeping them safe inside. Each colony didn’t impose its strategy on the foragers; the strategy emerged from their interactions with one another.
The same could be said about many complex systems, from beehives and flocks of birds to stock markets and the Internet. Whenever you have a multitude of individuals