       Normative blindness: Algorithms also pursue wrong objectives

      Better drunk than poor.9 That, apparently, is how car insurance companies feel about their customers in some parts of the US, where nothing drives up insurance rates like not being creditworthy. In Kansas, for example, customers with low credit ratings pay up to $1,300 per year more than those with excellent ratings. If, on the other hand, the police catch someone driving drunk, his insurance rate is increased by only $400.

      A similar example from the state of New York: An accident where the driver is at fault increases her premium by $430, drunk driving by $1,170, but a low credit rating sends it skyrocketing by $1,760. Driving behavior has less influence on the insurance rate than creditworthiness. In other words, anyone who is in financial difficulties pays significantly more than a well-to-do road hog.

      This practice was uncovered by the non-profit Consumer Reports. The consumer-protection organization evaluated and compared more than two billion policies from 700 insurance companies across the US, showing that the algorithms most insurers use to calculate their rates also forecast the creditworthiness of each customer. To do so, the computer programs draw on the motorists’ financial data – the same data banks use to calculate the probability of loan defaults.

      The decisive difference between the two sectors is that such an algorithmic forecast is appropriate for a bank, because there is a plausible correlation between creditworthiness and the probability that a loan will be repaid. For car insurers, however, the financial strength of their customers should be an irrelevant criterion. It does not allow any conclusions to be drawn about driving behavior or the probability of an accident, even though the insurance rate should depend solely on those factors. Car insurance is compulsory and everyone, regardless of social status, should have equal access to it – and the premiums should provide an incentive to behave in a compliant and considerate manner on the road. This would benefit all drivers and thus society as a whole.

      The insurance practice denounced by Consumer Reports does not completely ignore this incentive; misconduct while driving continues to be sanctioned. However, the incentive is undermined if accident-free driving is worth less to the insurer than the customer’s account balance. Those who suffer from this are often poorer people who depend on their cars. Thus, those who are already disadvantaged by low income and low creditworthiness are burdened with even higher premiums.

      Economically it may make sense for an insurance company to have solvent rather than law-abiding drivers as customers. That is not new. Now, however, algorithmic tools are available that can quickly and reliably assess customers’ creditworthiness and translate it into individual rates. Without question, the computer program in this example works: It fulfils its mission and acts on behalf of the car insurers. What the algorithmic system, due to its normative blindness, is unable to recognize on its own is that it works against the interests of a society that wants to enable individual mobility for all citizens and increase road safety. By using this software, insurance companies are placing their own economic interests above the benefits to society. It is an ethically questionable practice to which legislators in California, Hawaii and Massachusetts have since responded. These states prohibit car insurers from using credit forecasts to determine their premiums.

       Lack of diversity: Algorithmic monopolies jeopardize participation

      Kyle Behm simply cannot understand it.10 He has applied for a temporary student job at seven supermarkets in Macon, Georgia. He wants to arrange products on shelves, label them with prices, work in the warehouse – everything you do in a store to earn a few extra dollars while going to college. The activities are not excessively demanding, so he is half horrified, half incredulous when one rejection after another arrives in his e-mail inbox. Behm is not invited to a single interview.

      His father cannot understand the rejections either. He looks at the applications that his son sent – there is nothing to complain about. Kyle Behm even has experience in retail and is a good student. His father, a lawyer, starts investigating and discovers the reason. All seven supermarkets use similar online personality tests. Kyle suffers from bipolar disorder, a mental illness, which the computer programs recognized when they evaluated the tests. All the supermarkets rejected his application as a result.

      Behm’s father encourages him to take legal action against one of the companies. He wants to know whether it is permissible to categorically block a young man from entering the labor market simply because an algorithm is being used – especially since Behm is in treatment for his illness and is on medication, and his doctors have no doubt that he could easily do the job he applied for. Before the case goes to trial, the company offers an out-of-court settlement. Behm evidently has a good chance of winning his case.

      Larger companies in particular are increasingly relying on algorithms to presort candidates before inviting some of them to an interview. The method is effective and inexpensive. An algorithmic system has no problem doing it, even when several thousand applications have to be considered. However, it can become a problem for certain groups of people if all companies in an industry use a similar algorithm. Where in the past a single door might have closed, now they all close at once. The probability of such “monopolies” forming is increasing because digital markets in particular follow the principle “The winner takes it all,” i.e. one company or product wins out and displaces all competitors. Eventually only one software application remains – to presort jobseekers or to grant loans.

      Many companies are not bothered by this: Such software allows them to save time and increase the effectiveness of their recruiting procedures. And for some applicants the algorithmic preselection also works out, since their professional competence and personal qualities count more than the reputation of the university they attended, or their name, background or whatever else might have previously prevented them from getting the job (see Chapter 12). Yet while some people’s chances on the labor market increase and become fairer, other groups are threatened with total exclusion, such as those who, like Behm, suffer from a health condition. Such collateral damage cannot be accepted by a society that believes in solidarity. In areas affecting social participation, an oversight authority is therefore required, one that recognizes algorithmic monopolization at an early stage and ensures that diverse systems remain available (see Chapter 16).

       No blind trust

      As these six examples have shown, algorithms can be deficient and produce unwanted results, data can reflect and even reinforce socially undesirable discrimination, and people can program software to pursue the wrong objectives or allow dangerous monopolies to take shape. Blind faith is therefore inappropriate. Algorithms are merely tools for completing specific tasks, not truly intelligent decision makers. They can draw wrong conclusions even while fulfilling their mission perfectly. After all, they do not understand when their goals are inappropriate, when they are not subject to the necessary corrections or when they deprive entire groups of the opportunity to participate in society. They can do considerable harm with machine-like precision. When algorithms are mistaken, we cannot let them remain in their error – to return to the adage by Saint Jerome quoted in the last chapter. People are responsible for any wrongdoing of this sort. They determine which objectives algorithms pursue. They determine which criteria are used to reach those objectives. They determine whether and how corrections are carried out.

      And just like Carol from the Little Britain sketch, they hide behind algorithms when they do not want to or cannot talk about the software’s appropriateness. For example, the head of Human Resources at Xerox Services reported that algorithms are helping her department reduce the high turnover at the company’s call center. The software used to parse applications predicts