We Humans and the Intelligent Machines. Jörg Dräger

In 1999, one doctor examined 110 images per patient; by 2010, that figure had risen to 640. Mayo Clinic hired additional staff, but not as fast as the volume of data to be analyzed grew. The result is a challenge: While in 1999 a doctor viewed and evaluated three images per minute, in 2010 she had to look at more than 16 images per minute – one every three to four seconds – in order to cope with the information flooding in over the course of an eight-hour work day.2
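The workload arithmetic in the paragraph above can be verified with a few lines of Python – a simple sketch using only the numbers quoted in the text:

```python
# Back-of-the-envelope check of the radiology workload figures
# cited in the text (all numbers come from the passage itself).

WORKDAY_MINUTES = 8 * 60  # an eight-hour working day

def seconds_per_image(images_per_minute):
    """Average time available to assess each image."""
    return 60 / images_per_minute

# 1999: about 3 images per minute -> 20 seconds per image.
time_1999 = seconds_per_image(3)

# 2010: more than 16 images per minute -> 3.75 seconds per image,
# i.e. "one every three to four seconds", as the text says.
time_2010 = seconds_per_image(16)

# Sustained over a full day, that pace means thousands of images:
images_per_day_2010 = 16 * WORKDAY_MINUTES  # 7,680 images per day
```

The calculation confirms the text's "one every three to four seconds" and makes vivid why the pace cannot simply be increased tenfold.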

      For patients, the extra data can be life-saving. When Michael Forsting, Director of Radiology at the University Hospital in Essen, looked at cross-sectional images of the brain as a young doctor in the 1980s, each one showed a section 10 to 12 millimeters thick. There was a significant probability of overlooking a metastasis seven millimeters in diameter. Today, each image depicts one millimeter of the brain. The seven-millimeter metastasis, which used to remain undetected between images, is now visible in seven pictures. New technical processes are capturing reality in much greater detail. Hospitals, however, no longer have the human resources to take full advantage of the quality of their findings. As Forsting says: “We have 10 times more pictures. A CT of the brain used to consist of 24 images, now it’s 240. And someone has to take a look at them.”3

      The challenge in radiology exemplifies those present in other areas, such as identifying the fastest route through urban traffic or coping with the mass of scientific literature on any given subject. Technical advances are increasing the amount and improving the quality of data, and technology must help determine which parts of this flood of information are relevant. Doctors can now create images of the body down to the smallest cell using computed tomography. Instead of palpating for tumors, radiologists use CT or MRI scans to search for abnormal cellular changes. These days, more data are available than a physician can effectively process using traditional methods. Even the best radiologists would not be able to evaluate 160 images per minute instead of today’s 16. Any attempt to achieve better results this way is doomed to fail, since the quality of a physician’s judgment declines as he or she grows tired.

      An increase in personnel would not be a solution. Apart from the question of funding such a move in today’s already expensive healthcare system, the race against the constantly growing amount of data cannot be won with new hires. Algorithmic tools are needed instead, and doctors should be open to them. After all, monotonously processing x-rays in a darkened room is not what humans do best, nor is it the core competence of highly trained radiologists – and it is certainly not the reason why someone chooses this profession.

       Flawed reasoning: Making mistakes and discriminating

      Tim Schultheiss and Hakan Yilmaz have a lot in common. Both are looking for an apprenticeship. Both were born in Germany in 1996 and are German citizens. Both attend a secondary school in a medium-sized town. Their biographies are almost identical – apart from their names. Tim and Hakan do not really exist. The two were invented by researchers for a study on discrimination in Germany’s vocational training system, commissioned by the Expert Council of German Foundations on Integration and Migration.4

      The researchers sent applications from Tim and Hakan to 1,794 companies. Result: Tim was invited to interview considerably more often than Hakan, whose success rate was 50 percent lower. Such discrimination is higher for small companies than for large ones, and higher for automotive technicians than for office clerks. Despite any differences in the details, however, the overall findings are clear: Even when they have the same qualifications as other candidates, applicants with a Turkish name are clearly at a disadvantage when they go looking for a position as an apprentice in Germany.

      Other studies show discrimination based on heritage in many countries: in Ireland against German names, in the Netherlands against Surinamese, in Norway against Pakistani, in Switzerland against Portuguese, in Spain against Moroccan, and in the US against typically African-American names such as Lakisha and Jamal, as opposed to names such as Emily and Greg.5 This discrimination, whether intentional or not, is not limited to the labor market. It shapes our everyday lives. A field trial at US universities has shown that professors are more likely to grant interviews to white male PhD candidates than to women; Hispanic, black, Chinese and Indian candidates were also at a disadvantage.

      We are guided in important decisions by mental associations and unconsciously activated stereotypes. Our judgments and also our perception and our thinking are subject to systematic errors. These so-called cognitive biases have been documented by psychologists for decades. They seem to be almost universally human and unconscious despite the various ways they are expressed.6

      Well over 100 such malfunctions in our thinking are known.7 Many of these have been researched by psychologist Daniel Kahneman, who was awarded the Nobel Prize for his work on these shortcomings. This is how Kahneman has summed up his many decades of empirical research on the way humans process information: “The confidence we experience as we make a judgment is not a reasoned evaluation of the probability that it is right. Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable.”8

      Cognitive biases lead to a wide variety of misjudgments. When a plane has just crashed and the media report about it, people temporarily estimate the frequency of plane accidents to be higher than usual, and higher than it actually is. We overestimate the risk of being murdered and underestimate the likelihood of having a stroke. People react more favorably towards others if they find them outwardly attractive. For example, they judge the same offense more mildly when it has been perpetrated by a good-looking defendant and consider attractive candidates for political office to be more knowledgeable.9

      In other words, in many areas of life people act neither rationally nor fairly. We use inappropriate criteria without necessarily being aware of it. In our memories, we are often firmly convinced that we have solely assessed a person’s expertise or competence, even if it was their appearance, skin color or gender that ultimately tipped the scales. Some misjudgments in everyday life are harmless or even amusing. However, systematic biases can also have significant negative social consequences. Human rights are based on the principle of equality and protection against discrimination. We should therefore use the “intelligence” offered by machines to create equitable opportunities for all demographic groups.

       Inconsistency: Rating the same things differently

      Two lawyers, three opinions – Germans use this aphorism to bemoan the inconsistency of legal judgments. Yet lawyers are not the only ones known for judging the same facts differently. Contradictory viewpoints are human and part of everyday life among physicians as well. This is why many patients with serious illnesses seek a second opinion before agreeing to treatment. This does not necessarily reflect a lack of confidence in the medical profession. It more likely attests to a healthy gut instinct which senses that even experts can disagree with one another and make mistakes.

      A test by the Boston radiologist Hani Abujudeh shows how justified this gut feeling is. In 2010, he invited three of his colleagues at Massachusetts General Hospital, specialists in abdominal and pelvic conditions, to participate in an experiment. Abujudeh gave each of them CT scans from 60 old medical cases, which he had randomly selected from the hospital’s database. What the three doctors did not know was, first, what diagnosis had originally been made and, second, that in half of the cases they had made it themselves. Abujudeh asked his colleagues to examine the scans again, then compared the new findings with the original ones. The result: In one case in four, the radiologists deviated considerably from their own past diagnosis. And when looking at scans that had previously been evaluated by someone else, the doctors came to a different conclusion one-third of the time.10

      Just so no wrong conclusions are drawn: These findings do not mean that incorrect diagnoses or improper treatment are widespread.