Title: The Digital Big Bang
Author: Phil Quade
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119617402
More than just speeding transaction processing, the speed with which data can be collected, analyzed, and acted on may serve to protect both consumers and banks. For example, by analyzing spending patterns more quickly, banks can detect anomalous transactions sooner and alert consumers to potential fraud. Perhaps more interestingly, banks themselves benefit from new computer security protections, particularly as they embrace cloud-based services. The reason is simple: In the cloud, we can harness new security capabilities that reduce security response times so that they more closely track the speed of malware distribution. This is best explained by example. Defenders of networks have long highlighted the importance of information sharing. Using the Financial Services Information Sharing and Analysis Center (FS-ISAC) as an example, if Bank A sees an attack, it notifies the ISAC. The ISAC notifies other banks, which then look for indicators of compromise on their networks. Other sectors have ISACs as well.
Although this model works well, it has challenges. First, it requires information sharing, which happens well in some sectors but not as well in others. Second, information distribution and analysis can take time. Third, the information may not flow to all players (for example, in the case of the financial sector, smaller banks may be less engaged). By contrast, as banks move to the cloud, the speed of response can increase dramatically. For example, Microsoft's Advanced Threat Protection strips attachments from emails, runs them in detonation chambers, and looks for malware. If found, that malware can be searched for throughout the cloud. This means if Bank A is attacked, all other banks—and other customers outside the financial sector—can quickly be protected from that attack. Simply put, protections can be broadly deployed to more entities without information-sharing delays. Moreover, as more customers move to the cloud, the amount of telemetry increases and protections improve. Thus, in the context of these financial transactions, speed provides significant benefits.
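The contrast between these two models can be made concrete with a short sketch. The classes, names, and hashes below are hypothetical and are not drawn from any real ISAC process or cloud API; they are only a minimal Python illustration of hub-and-spoke notification versus a shared, cloud-side blocklist that protects every tenant as soon as one detection is made.

```python
# Minimal sketch (hypothetical names, no real APIs) contrasting the two
# protection models described above: ISAC-style notification versus
# cloud-side propagation of a newly detected malware indicator.

from dataclasses import dataclass, field


@dataclass
class Bank:
    name: str
    blocked_hashes: set = field(default_factory=set)

    def apply_indicator(self, malware_hash: str) -> None:
        # Each member must receive the indicator and act on it itself.
        self.blocked_hashes.add(malware_hash)


def isac_model(reporter: Bank, members: list, malware_hash: str) -> None:
    """Hub-and-spoke sharing: the reporting bank notifies the ISAC,
    which relays the indicator to every member for local action."""
    for member in members:
        if member is not reporter:
            member.apply_indicator(malware_hash)


class CloudEmailService:
    """In the cloud model, one detection (for example, in a detonation
    chamber) updates a shared blocklist covering every tenant at once."""

    def __init__(self, tenants: list):
        self.tenants = tenants
        self.global_blocklist = set()

    def detonate_and_learn(self, attachment_hash: str, is_malicious: bool) -> None:
        if is_malicious:
            # One detection immediately protects all tenants.
            self.global_blocklist.add(attachment_hash)

    def is_blocked(self, attachment_hash: str) -> bool:
        return attachment_hash in self.global_blocklist


# Usage: the ISAC path depends on each member receiving and applying the
# report, while a detection at one cloud tenant protects all tenants at once.
banks = [Bank("Bank A"), Bank("Bank B"), Bank("Credit Union C")]
isac_model(reporter=banks[0], members=banks, malware_hash="sha256:abc123")

cloud = CloudEmailService(tenants=["Bank A", "Bank B", "Credit Union C"])
cloud.detonate_and_learn("sha256:abc123", is_malicious=True)
assert cloud.is_blocked("sha256:abc123")
```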
CONTEXT: AUTONOMOUS VEHICLES
Autonomous vehicles, which have started cruising the streets of major cities, offer a long list of benefits, including safer streets, more efficient use of vehicles (and thus less congestion), increased mobility for seniors, and more. Data collection and analysis speeds are again critical, especially as vehicles analyze data on roads crowded with other cars, bikes, pedestrians, and traffic lights. It is also important that such technologies not only leverage the cloud (where data from multiple sources can be stored and analyzed) but also make “local” decisions without the latency caused by transmitting data to and from the cloud (the “intelligent cloud/intelligent edge” model).
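The intelligent cloud/intelligent edge trade-off can be illustrated with a minimal sketch. The latency figures and the routing function below are assumptions for illustration only, not measurements or any real vehicle API; the point is simply that decisions whose deadlines cannot absorb a cloud round trip must be made locally, while less urgent analysis can be sent to the cloud.

```python
# Minimal sketch (hypothetical numbers and names) of the "intelligent
# cloud/intelligent edge" split: time-critical decisions stay on the
# vehicle, while non-urgent work is routed to the cloud for richer analysis.

ROUND_TRIP_TO_CLOUD_MS = 120.0   # assumed network latency, illustrative only
EDGE_INFERENCE_MS = 10.0         # assumed on-vehicle model latency


def choose_processing_site(decision_deadline_ms: float) -> str:
    """Route a task to the edge when a cloud round trip cannot meet its
    deadline; otherwise allow the cloud's aggregated data and models."""
    if decision_deadline_ms < ROUND_TRIP_TO_CLOUD_MS + EDGE_INFERENCE_MS:
        return "edge"
    return "cloud"


# A braking decision for a pedestrian a few meters ahead cannot wait for the
# cloud, but route re-planning from aggregated traffic data can.
print(choose_processing_site(decision_deadline_ms=50))    # -> "edge"
print(choose_processing_site(decision_deadline_ms=5000))  # -> "cloud"
```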
At the same time, we will be presented with new risks as the technology matures and perhaps comes under attack. Indeed, as of December 17, 2018, the California Department of Motor Vehicles had received 124 Autonomous Vehicle Collision Reports, and there have been reports of researchers hacking automated vehicles. Still, while autonomous vehicles may not be completely “safe,” they will, if correctly designed, undoubtedly be “safer”: Autonomous vehicles can process data far more quickly than humans, they will not panic in an emergency situation, and they are unlikely to be distracted by cell phones.
CONTEXT: AUTONOMOUS LETHAL WEAPONS
It is in the third scenario, international relations and autonomous lethal weapons, that speed may pose risks to humanity itself. That is a dramatic statement to be sure, but one that is not alarmist when the future of technology is considered in a military context. The problem is that human thinking and the processes humans have established to manage their international affairs do not map to Moore's law—we do not become exponentially faster over time.
In the last century, intercontinental ballistic missiles shortened military response times from days to minutes. Since then, the introduction of cyberwarfare capabilities has arguably reduced response times further. As General Joseph Dunford, the outgoing Chairman of the Joint Chiefs of Staff, noted in 2017, “The speed of war has changed, and the nature of these changes makes the global security environment even more unpredictable, dangerous, and unforgiving. Decision space has collapsed and so our processes must adapt to keep pace with the speed of war [emphasis added].” To take advantage of the remaining decision space, Dunford noted, there must be “a common understanding of the threat, providing a clear understanding of the capabilities and limitations of the joint force, and then establishing a framework that enables senior leaders to make decisions in a timely manner.” This suggests the need for greater pre-decisional planning, from collecting better intelligence about adversaries' capabilities and intentions to better scenario planning so that decisions can be made more quickly.
At the same time that we attempt to adapt, we must also grapple with a fundamental question: As the need for speed increases, will humans delegate decision making—even in lethal situations—to machines? That humans will remain in the loop is not a foregone conclusion. With the creation of autonomous lethal weapons, some countries have announced new policies requiring a “human-in-the-loop” (see Department of Defense Directive Number 3000.09, November 12, 2012). Additionally, concerned individuals and organizations are leading calls for international treaties limiting such weapons (see https://autonomousweapons.org/). But as we have seen with cybersecurity norms, gaining widespread international agreement can be difficult, particularly when technology is new and countries do not want to quickly, and perhaps prematurely, limit their future activities. And as we have seen recently with chemical weapons, ensuring compliance even with agreed-upon rules can be challenging.
THE RISK
The risk here is twofold. The first relates to speed: If decision times collapse and outcomes favor those who are willing to delegate to machines, those who accept delay by maintaining adherence to principles of human control may find themselves disadvantaged. Second, there is the reality that humans will, over time, come to trust machines with life-altering decisions. Self-driving cars represent one example of this phenomenon. At first, self-driving vehicles required a human at the wheel “for emergencies.” Later, automated vehicles with no steering wheel and no human driver were approved. While this may be, in part, because humans have proven to be poor at paying attention and reacting quickly in a crisis, experience and familiarity with machines may have also engendered complacency with automated vehicles, much as we have accepted technology in other areas of our lives. If this is correct, then humans may ultimately conclude that with decisional time shrinking, machines will make better decisions than the humans they serve, even when human lives are at stake.
ABOUT THE CONTRIBUTOR
Scott Charney – Vice President of Security Policy, Microsoft
Scott Charney is vice president for security policy at Microsoft. He serves as vice chair of the National Security Telecommunications Advisory Committee, as a commissioner on the Global Commission for the Stability of Cyberspace, and as chair of the board of the Global Cyber Alliance. Prior to his current position, Charney led Microsoft's Trustworthy Computing Group, where he was responsible for enforcing mandatory security engineering policies and implementing security strategy. Before that, he served as chief of the Computer Crime and Intellectual Property Section (CCIPS) at the U.S. Department of Justice (DOJ), where he was responsible for implementing DOJ's computer crime and intellectual property initiatives. Under his direction, CCIPS investigated and prosecuted national and international hacker cases, economic espionage cases, and violations of the federal criminal copyright and trademark laws. He served three years as chair of the G8 Subgroup on High-Tech Crime, was vice chair of the Organization of Economic Cooperation and Development (OECD) Group of Experts on Security and Privacy, led the U.S. Delegation …