Winning with Data, by Frank Bien

passenger pickups by responding over the CB, even if they are the furthest cab from the customer. “How long until the taxi arrives?”

      Dispatchers can handle only one request at a time, serially. In rush hour, potential passengers redial after hearing a busy tone. Let too much time elapse coming from the other side of town and your passenger has already jumped into an Uber. For the Yellow Cab driver, the gas, time, and effort are all wasted because of an information asymmetry. In comparison to Uber, Yellow Cab drivers are driving blind to the demand of the city, and Yellow Cab customers are blind to the supply of taxi cabs.

      Uber changes its pricing as a function of demand, telling drivers when it makes sense to start and stop working. Surge pricing, though controversial, establishes a true market for taxi services. Yellow Cab drivers don't know the best hours to work and prices are fixed regardless of demand.

      Data improves more than marketplace efficiency. Uber retains drivers based on customer satisfaction ratings provided by riders. Drivers who score below 4.4 on a 5.0 scale risk “deactivation” – losing access to Uber's passenger base. Meanwhile, the Yellow Cab company maintains an average Yelp review of less than 1.5 stars out of 5.

      The data teams that optimize Uber driver locations, maximize revenue for drivers, and drive customer satisfaction operate on a different plane from the management of the Yellow Cab company. Blind, Yellow Cab drivers are completely outgunned in the competitive transportation market. They don't have what it takes to compete: data.

      But the Uber phenomenon isn't just a revolution in the back office. It's also about a new generation of taxi drivers, who operate their own businesses in a radically different way. What cabbie in the 1990s could have dreamed that upon waking early in the morning, a mobile phone would suggest there's more money to be made in the financial district of San Francisco than at the airport? But the millennial driver knows the data is attainable: It's just a search query or text message away. This is the fundamental, secular discontinuity that data engenders.

      The Era of Instant Data: You Better Get Yourself Together

      Instant Karma's gonna get you

      Gonna knock you right on the head

      You better get yourself together

      Pretty soon you're gonna be dead

– John Lennon

      The demand for instant data will increase inexorably. Like Uber drivers seeking a passenger at this very moment, we expect answers instantly. If you're making Baked Alaska for company tonight, and you've forgotten the ratio of sugar to egg whites in the meringue that houses the ice cream, your phone will answer the question in just a few seconds.

      Where is Priceline stock trading? Where do the San Francisco Giants stand in this year's pennant race? When hiring a litigation attorney, what are the key questions to ask? Are there any grammatically sound sentences in English where every word starts with the same letter?

      All of these questions are instantly answerable. These are the types of questions we ask at the dinner table or when sharing a drink with a friend at a bar, and answer in a few seconds with a search query on a phone.

      Because of this new instant access to just about every kind of information, we expect the same instantaneity of answers at work. Why did our sales team outperform last quarter? Which of my clients are paying the most? Does this marketing campaign acquire customers more efficiently than the others? Should we launch our product in Japan in December?

      In most companies, these questions require days or weeks to answer. Consequently, data is a historical tool, a useful rearview mirror to the well-managed business. It's a lens through which we can understand what happened in the past. And, if we're lucky, it can help us understand a little bit about why the past unfolded in a particular way.

      But this level of analysis pales in comparison with the practices of best-in-class companies that operationalize their data. These are businesses that use the morning's purchasing data to inform which merchandise sits on the shelves in the afternoon.

      What have those companies done to access instant data? First, they've changed the way they manage themselves, their teams, and their companies; they've changed how they run meetings, how they make decisions, and how they collaborate. Employees are data literate: They understand how to access the data they need, how to analyze it, and how to communicate it well.

      Second, these companies have developed functional data supply chains that send insight to the people who need it. A data supply chain comprises all the people, software, and processes related to data as it's generated, stored, and accessed. While most of us think of data as the figures in an Excel spreadsheet or a beautiful bar chart, these simple formats often hide the complexity required to produce them.

      The simple Excel spreadsheet hides a churning sea of data, coursing through the company's databases, that must be synthesized and harmonized to create a single, accurate view of the truth. A data infrastructure that permits easy, instant access to answers to business questions by anyone in the company is the second step.
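
      To make “synthesized and harmonized” concrete, here is a minimal sketch in Python. It assumes two hypothetical source extracts – a billing export and a CRM export – that describe the same customers under different field names, casings, and types; none of these names comes from the book, and the cleanup rules are illustrative only.

```python
# Minimal sketch of one harmonization step in a data supply chain.
# Two hypothetical source extracts describe the same customers with
# different keys, casings, and types; we reconcile them into one view.

billing_export = [
    {"cust_id": "1001", "company": "Acme Corp", "mrr_usd": "1200"},
    {"cust_id": "1002", "company": "Globex", "mrr_usd": "800"},
]

crm_export = [
    {"account_id": 1001, "name": "ACME CORP", "segment": "Enterprise"},
    {"account_id": 1003, "name": "Initech", "segment": "SMB"},
]

def harmonize(billing, crm):
    """Merge billing and CRM rows into a single record per customer ID."""
    merged = {}
    for row in billing:
        key = int(row["cust_id"])                  # normalize the key type
        merged[key] = {
            "customer_id": key,
            "company": row["company"].title(),     # normalize casing
            "mrr_usd": float(row["mrr_usd"]),      # strings -> numbers
            "segment": None,
        }
    for row in crm:
        key = int(row["account_id"])
        record = merged.setdefault(key, {
            "customer_id": key,
            "company": row["name"].title(),
            "mrr_usd": None,
            "segment": None,
        })
        record["segment"] = row["segment"]
    return [merged[k] for k in sorted(merged)]

for row in harmonize(billing_export, crm_export):
    print(row)
```

      Every reconciliation rule hidden inside a spreadsheet is a step like this one; a functioning data supply chain makes those steps explicit, automated, and shared rather than rebuilt by each analyst.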

      Third, these businesses create a data dictionary, a common language of metrics used by the company. When sales and marketing refer to a lead, the definition of a lead must be consistent across both teams. Often, different teams within a company define metrics in their own ways. Though convenient for the individual team, this approach creates confusion, inconsistency, and consternation. A robust data dictionary, enforced throughout the data pipeline, ensures a universal language across the company.
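
      As an illustration only – the book does not prescribe a particular implementation – a data dictionary can start as small as a single shared module that defines each metric once, so that sales and marketing compute “lead” from the same rule. The field names and the qualification rules below are hypothetical.

```python
# Hypothetical shared metric definitions: "lead" and "active customer"
# are defined in one place, so every team computes them the same way.

from datetime import date, timedelta

def is_lead(contact: dict) -> bool:
    """A contact counts as a lead once its email is verified and it has
    responded to at least one campaign (illustrative rule)."""
    return bool(contact.get("email_verified")) and contact.get("campaign_responses", 0) >= 1

def is_active_customer(account: dict, as_of: date) -> bool:
    """An account is active if it logged in within the last 30 days."""
    last_login = account.get("last_login")
    return last_login is not None and (as_of - last_login) <= timedelta(days=30)

# Sales and marketing reports both import these functions instead of
# re-deriving their own definitions in separate spreadsheets.
print(is_lead({"email_verified": True, "campaign_responses": 2}))               # True
print(is_active_customer({"last_login": date(2016, 5, 20)}, date(2016, 6, 1)))  # True
```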

      This combination of bottom-up data literacy, top-down data infrastructure, and a single metrics lexicon has transformed many businesses. Google was one of the first to empower its employees with unfettered access to critical business data. Consequently, Google employees were able to leverage the company's enormous reach and resources to develop breakthrough products.

      That innovation in the early 2000s cascaded through many other companies, large and small, including Facebook, LinkedIn, and Zendesk. Above all, these companies architected data supply chains that enable their employees to extract the insights they need to advance the company's causes. Unfortunately, most businesses still operate with outdated supply chains buckling under the strain of data demand. You better get your data together, or pretty soon you're gonna be dead.

      Data Supply Chains: Buckling Under the Load

      Slow data is caused by an inefficient supply chain. Today's data supply chains suffer from a fundamental flaw in their architecture: The number of people seeking data dwarfs the number of people supplying data. The taxi dispatcher relaying passenger pickups by phone serves scores of drivers, each seeking their next fare. In many companies, this ratio may be much greater than 100:1. Is it any surprise that the data analyst team is seen as an enormous bottleneck, a chokepoint for the organization?

      In the past, this flawed architecture functioned because most companies had a relatively small amount of data, most of it created by humans, and the competition wasn't using data for a competitive advantage. Without a substantial corpus of data to interrogate, only a handful of executives asked questions of their company's data, limiting the total number of requests. Most of the time, these requests were financial in nature and managed by the CFO and his organization.

      But the amount of data that companies store today has exploded. According to IDC, from 2013 to 2020, the digital universe will grow by a factor of 10, from 4.4 trillion to 44 trillion gigabytes. It more than doubles every two years. This supernova of data contains insights relevant for every person within an organization.

      Today, computers generate data at rates that far outstrip humans. Facebook records more than 600 petabytes of data daily on its users, almost all of it generated by computers. This trend isn't constrained to social networks. For example,