The Success Equation. Michael J. Mauboussin

To illustrate this paradox, he tells the story of Sony's Betamax and MiniDisc. At the time those products were launched, Sony was riding high on the success of its long string of winning products, from the transistor radio to the Walkman and the compact disc (CD) player. But when it came to Betamax and the MiniDisc, says Raynor, “the company's strategies failed not because they were bad strategies but because they were great strategies.”15

      The case of the MiniDisc is particularly instructive. Sony developed the MiniDisc to replace cassette tapes and compete with CDs. The discs were smaller and less prone to skipping than CDs and had the added benefit of being able to record as well as play music. Announced in 1992, MiniDiscs were an ideal replacement for cassettes in the Walkman, allowing that device to remain the portable music player of choice.

      Sony made sure that the MiniDisc had a number of advantages that put it in a position to be a winner. For example, existing CD plants could produce MiniDiscs, allowing for a rapid reduction in the cost of each unit as sales grew. Furthermore, Sony owned CBS Records, so it could supply terrific music and make even more profit. The strategy behind the MiniDisc reflected the best use of Sony's vast resources and embodied all of the lessons that the company had learned from the successes and failures of past products.

      But just as the MiniDisc player was gaining a foothold, seemingly out of nowhere, everyone had tons of cheap computer memory and access to fast broadband networks, and could swap files of a manageable size containing all of their favorite music, essentially for free. Sony had been hard at work on a problem that vanished from beneath its feet. Suddenly, no one needed cassette tapes. No one needed discs either. And no one could possibly have foreseen that seismic shift in the 1990s. In fact, much of it was unimaginable. But it happened. And it killed the MiniDisc. Raynor asserts, “Not only did everything that could go wrong for Sony actually go wrong, everything that went wrong had to go wrong in order to sink what was in fact a brilliantly conceived and executed strategy. In my view, it is a miracle that the MiniDisc did not succeed.”16

      One of the main reasons we are poor at untangling skill and luck is that we have a natural tendency to assume that success and failure are caused by skill on the one hand and a lack of skill on the other. But in activities where luck plays a role, such thinking is deeply misguided and leads to faulty conclusions.

      Most Research Is False

      In 2005, Dr. John Ioannidis published a paper titled “Why Most Published Research Findings Are False” that shook the foundation of the medical research community.17 Ioannidis, who has a PhD in biopathology, argues that the conclusions drawn from most research suffer from bias, whether because researchers want to reach certain conclusions or because they run too many tests. Using simulations, he shows that a high percentage of the claims made by researchers are simply wrong. In a companion paper, he backed up his contention by analyzing forty-nine of the most highly regarded scientific papers of the prior thirteen years, as measured by the number of times those papers were cited. In three-quarters of the cases where researchers had claimed an effective intervention (for example, that vitamin E prevents heart attacks), other scientists tested the claim. His analysis showed a stark difference between randomized trials and observational studies. In a randomized trial, subjects are assigned at random to one treatment or another (or none). These studies are considered the gold standard of research because they do an effective job of finding genuine causes rather than simple correlations. They also eliminate bias in many cases, because the people running the experiment don't know who is getting which treatment. In an observational study, subjects volunteer for one treatment or another, and researchers have to take what is available. Ioannidis found that more than 80 percent of the results from observational studies were either wrong or significantly exaggerated, while about three-quarters of the conclusions drawn from randomized studies proved to be true.18

      Ioannidis's work doesn't touch on skill as we have defined it, but it does address the essential issue of cause and effect. In matters of health, researchers want to understand what causes what. A randomized trial allows them to compare two groups of subjects who are similar but who receive different treatments to see whether the treatment makes a difference. By doing so, these trials make it less likely that the results derive from luck. But observational studies don't make the same distinction, allowing luck to creep in if the researchers are not very careful in their methods. The difference in the quality of the findings is so dramatic that Ioannidis recommends a simple approach to observational studies: ignore them.19
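
      To see how a hidden factor lets luck masquerade as cause and effect, consider a minimal simulation (the “health” confounder, group sizes, and numbers below are invented for illustration, not drawn from Ioannidis's work). The treatment has zero true effect, yet the observational comparison reports a large one, while randomization wipes it out:

```python
import math
import random
import statistics

random.seed(1)
N = 50_000

def outcome(health):
    # The outcome depends only on underlying health; the "treatment"
    # itself has zero true effect in this toy world.
    return health + random.gauss(0, 1)

# Observational study: healthier people are more likely to opt in, so
# the treated and untreated groups differ before any treatment begins.
obs_t, obs_c = [], []
for _ in range(N):
    health = random.gauss(0, 1)                      # hidden confounder
    opted_in = random.random() < 1 / (1 + math.exp(-2 * health))
    (obs_t if opted_in else obs_c).append(outcome(health))

# Randomized trial: a coin flip decides who is treated, so the hidden
# confounder is balanced across the two groups on average.
rct_t, rct_c = [], []
for _ in range(N):
    health = random.gauss(0, 1)
    (rct_t if random.random() < 0.5 else rct_c).append(outcome(health))

print(f"Observational 'effect': {statistics.mean(obs_t) - statistics.mean(obs_c):+.3f}")
print(f"Randomized 'effect':    {statistics.mean(rct_t) - statistics.mean(rct_c):+.3f}")
```

      The observational comparison shows a sizable “benefit” that is entirely an artifact of who chose the treatment; the randomized comparison correctly finds essentially nothing.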

      The dual problems of bias and conducting too much testing are substantial, and by no means limited to medical research.20 Bias can arise from many factors. For example, a researcher who is funded by a drug company may have an incentive to find that the drug works and is safe. While scientists generally believe themselves to be objective, research in psychology shows that bias is most often subconscious and nearly unavoidable. So even if a scientist believes he is behaving ethically, bias can exert a strong influence.21 Furthermore, a bit of research that grabs headlines can be very good for advancing an academic's career.

      Doing too much testing can cause just as much trouble. There are standard methods for dealing with excessive testing, but not all scientists use them. In much of academic research, scientists lean heavily on tests of statistical significance. These tests are supposed to indicate the probability of getting a result at least as extreme by chance alone (more formally, when the null hypothesis is true). There is a standard threshold, conventionally 5 percent, that allows a researcher to claim that a result is significant. Here's where the trouble starts: if you test enough relationships, you will eventually find a few that pass the test but that are not really related as cause and effect.22
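
      A minimal sketch makes the arithmetic vivid (the test count, group size, and threshold below are illustrative choices, not from the text). Every comparison is between two groups drawn from the same distribution, so any “significant” difference is a false positive produced by chance alone:

```python
import math
import random

random.seed(0)

ALPHA = 0.05       # conventional significance threshold
NUM_TESTS = 1000   # number of unrelated "relationships" examined
N = 200            # subjects per group

def null_test_pvalue():
    """Compare two groups drawn from the SAME distribution, so the true
    effect is zero and any 'significant' difference is pure chance.
    Returns a two-sided z-test p-value for the difference in means."""
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    z = (sum(a) / N - sum(b) / N) / math.sqrt(2 / N)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

hits = sum(null_test_pvalue() < ALPHA for _ in range(NUM_TESTS))
print(f"Expected false positives: {NUM_TESTS * ALPHA:.0f}")
print(f"Found 'significant' by chance: {hits}")  # roughly 50 of 1,000
```

      Run it repeatedly and the count hovers around fifty: the 5 percent error rate doesn't shrink with honest intentions; it multiplies with the number of tests.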

      One example comes from a paper published in Proceedings of the Royal Society B, a peer-reviewed journal, suggesting that women who eat breakfast cereal are more likely to give birth to boys than girls.23 The paper naturally generated a great deal of attention, especially in the media. Stan Young, a statistician at the National Institute of Statistical Sciences, along with a pair of colleagues, reexamined the data and concluded that the finding was likely the product of chance resulting from too much testing. The basic idea is that if you examine enough relationships, some will pass the test of statistical significance by virtue of chance. In this case, there were 264 relationships (132 foods and two time periods), and the pattern of significance across those relationships was completely consistent with chance. Young and his collaborators conclude flatly that their analysis “shows that the [findings] claimed as significant by the authors are easily the result of chance.”24
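
      The back-of-the-envelope version of Young's point (assuming the conventional 5 percent threshold; this is a sketch of the expectation, not of the paper's actual method):

```python
# With 264 independent tests and a 0.05 threshold, chance alone
# is expected to flag about 13 "significant" relationships.
tests = 132 * 2   # 132 food items x 2 time periods
alpha = 0.05      # conventional significance threshold
print(f"Expected chance 'discoveries': {tests * alpha:.1f}")  # 13.2
```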

      So if we don't consider a sample that is large enough, we can miss the fact that a single strategy can always give rise to unanticipated results, as we saw in the case of the Sony MiniDisc. In contrast, we can comb through lots of possible causes and pick one that really has nothing to do with the effect we observe, such as women eating cereal and having boys as opposed to girls. What's common to the two approaches is an erroneous association between the effect, which is known, and the presumed cause. In each case, researchers fail to appreciate the role of luck.

      Where Is the Skill? It's Easier to Trade for Punters Than Receivers

      Many organizations, including businesses and sports teams, try to improve their performance by hiring a star from another organization. They often pay a high price to do so. The premise is that the star has skill that is readily transferable to the new organization. But the people who do this type of hiring rarely consider the degree to which the star's success was the result of either good luck or the structure and support of the organization where he or she worked before. Attributing success to an individual makes for good narrative, but it fails to take into account how much of the skill is unique to the star and is therefore portable.

      Boris Groysberg, a professor of organizational behavior at Harvard Business School, has studied this topic in depth. His research shows that organizations tend to overestimate the degree to which a star's skills are transferable. His most thorough study was of analysts at Wall Street …