
… will increasingly be replaced by new technologically driven solutions. New job descriptions will emerge. Old campaigners who cannot learn the new tools of the industry may find it difficult to cope. Professional institutes will increasingly need to reflect this new working environment in their training and examinations. The insurance industry as a whole also comprises multiple relationships (Figure 1.2), some of which are complex in nature.

Figure 1.2 Relationships between parties

Even within single insurance organizations there are many functions and departments. Some operate as relative silos with little or no interference from their internal peers. Others such as Head Office functions like HR sit across the entirety of the business (Figure 1.3). All of these functions have the propensity for change, and at the heart of all these changes rests the topic of Big Data and Analytics.

Figure 1.3 Insurance functions

      1.1.1 Big Data Defined by Its Characteristics

      Big Data may be ‘big news’ but it is not entirely ‘new news’. The rapid growth of information has been recognized for over 50 years, although, according to Gil Press, who wrote about the history of Big Data in Forbes,1 the expression was first used in a white paper published in 2008.

With multiple definitions available, Big Data is best described by five key characteristics (Figure 1.4) which are:

Figure 1.4 Big Data defined by its characteristics

      ■ Volume – the sheer amount of structured and unstructured data that is available. There are differing opinions as to how much data is being created on a daily basis, usually measured in gigabytes or petabytes, one suggestion being that 2.5 billion gigabytes of information is created daily.2 (A ‘byte’ is the basic unit of computer memory, typically representing a single letter or number. A ‘gigabyte’ is one thousand million, or 10^9, bytes. A petabyte is 10^15 bytes.) But what does this mean? In 2010 the outgoing CEO of Google, Eric Schmidt, said that as much information – around five exabytes, an exabyte being 10^18 bytes – is now created every 48 hours as had existed from ‘the birth of the world to 2003.’ For many it is easier to think in terms of numbers of filing cabinets and whether they might reach the moon or beyond, but such comparisons are superfluous. Others suggest that it is the equivalent of the entire contents of the British Library being created every day.

      It is also tempting to try and put this into an insurance context. In 2012 the UK insurance industry created almost 90 million policies, which, at a conservative ten pages per policy, equates to somewhere around 900 million pages of policy documentation. The 14 million books in the British Library (at, say, 300 pages apiece) equate to about 4.2 billion pages, or roughly five years' worth of UK policy documentation. In other words, it would take insurers about five years to fill the equivalent of the British Library with policy documents (assuming they wanted to). But let's not play games – it is sufficient to acknowledge that the amount of data and information now available to us is at an unprecedented level.
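      To make the arithmetic above explicit, the short Python sketch below reproduces the back-of-envelope calculation. The inputs (2.5 billion gigabytes a day, 90 million policies, ten pages per policy, 14 million books of roughly 300 pages each) are the illustrative round numbers quoted in the text, not audited statistics.

        # Rough, illustrative arithmetic only; inputs are the round numbers quoted above.

        # Daily data creation: 2.5 billion gigabytes.
        daily_gigabytes = 2.5e9
        daily_bytes = daily_gigabytes * 10**9          # a gigabyte is 10^9 bytes
        daily_petabytes = daily_bytes / 10**15         # a petabyte is 10^15 bytes
        print(f"Daily data creation: {daily_petabytes:,.0f} petabytes")  # ~2,500

        # UK policy documentation versus the British Library.
        policies_per_year = 90e6                       # policies written in 2012
        pages_per_policy = 10                          # conservative assumption
        policy_pages_per_year = policies_per_year * pages_per_policy   # ~900 million

        library_books = 14e6
        pages_per_book = 300
        library_pages = library_books * pages_per_book                 # ~4.2 billion

        years_to_fill_library = library_pages / policy_pages_per_year
        print(f"Years of UK policy documents to fill the British Library: "
              f"{years_to_fill_library:.1f}")          # roughly 4.7, i.e. about five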

      Perhaps because of the enormity of scale, we seek to define Big Data not just by its size but by its characteristics.

      ■ Velocity – the speed at which the data comes to us, especially in terms of live streamed data. We also describe this as ‘data in motion’ as opposed to stable, structured data which might sit in a data warehouse (which is not, as some might think, a physical building, but rather a repository of information that is designed for query and analysis rather than for transaction processing).

      ‘Streamed data’ presents a good example of data in motion in that it comes to us through the internet by way of movies and TV. Its speed is measured not in linear terms but in bytes per second, and it is governed not only by the ability of the source to transmit the information but also by the ability of the receiver to ‘absorb’ it. Increasingly the technical challenge is not so much creating the bandwidth to support high-speed transmission as managing the security of the information.

      In an insurance context, perhaps the most obvious example is the whole issue of telematics information, which flows from mobile devices not only at the speed of technology but also at the speed of the vehicle (and driver) involved.
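      As a purely illustrative sketch of ‘data in motion’, the Python fragment below handles telematics readings as they arrive rather than waiting for a complete batch to be stored. The field names (vehicle_id, speed_kmh, timestamp), the sample feed and the speed threshold are invented for the example and are not taken from any particular telematics platform.

        import time
        from typing import Dict, Iterator

        def telematics_feed() -> Iterator[Dict]:
            """Stand-in for a live feed; a real source would be a message queue
            or socket delivering readings from in-vehicle devices."""
            sample = [
                {"vehicle_id": "V123", "speed_kmh": 52, "timestamp": time.time()},
                {"vehicle_id": "V123", "speed_kmh": 87, "timestamp": time.time() + 5},
                {"vehicle_id": "V456", "speed_kmh": 110, "timestamp": time.time() + 7},
            ]
            for reading in sample:
                yield reading

        def process_stream(feed: Iterator[Dict], speed_limit_kmh: float = 100) -> None:
            """Handle each reading as it arrives ('data in motion'), flagging
            events of interest instead of storing everything for later analysis."""
            for reading in feed:
                if reading["speed_kmh"] > speed_limit_kmh:
                    print(f"Speeding event: {reading['vehicle_id']} "
                          f"at {reading['speed_kmh']} km/h")

        process_stream(telematics_feed())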

      ■ Variety – Big Data comes to the user from many sources and therefore in many forms – a combination of structured, semi-structured and unstructured. Semi-structured data presents problems as it is seldom consistent. Unstructured data (for example plain text or voice) has no predefined structure whatsoever.
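      A minimal illustration of the three forms, using hypothetical claim data (the field names and values are invented): a structured record has a fixed schema, a semi-structured one carries tagged but inconsistent fields, and unstructured data is free text with no schema at all.

        # Structured: fixed fields, every record looks the same (e.g. a database row).
        structured_claim = ("CLM-001", "2015-03-14", 2450.00, "motor")

        # Semi-structured: tagged fields, but not every record carries the same ones.
        semi_structured_claim = {
            "claim_id": "CLM-002",
            "type": "property",
            "weather_at_loss": {"wind_mph": 68},   # present here, absent elsewhere
        }

        # Unstructured: plain text (or voice, images); no predefined fields at all.
        unstructured_note = (
            "Policyholder called to say the garden fence blew down in last "
            "night's storm and the neighbour's car was damaged."
        )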

      In recent years an increasing proportion of data has been unstructured, perhaps as much as 80%. It is suggested that the winners of the future will be those organizations which can obtain insight, and therefore extract value, from this unstructured information.

      In an insurance context this might comprise data which is based on weather, location, sensors, and also structured data from within the insurer itself – all ‘mashed’ together to provide new and compelling insights. One of the clearer examples of this is in the case of catastrophe modeling where insurers have the potential capability to combine policy data, policyholder input (from social media), weather, voice analysis from contact centers, and perhaps other key data sources which all contribute to the equation.
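      The sketch below gives a flavour of how such a ‘mash-up’ might look in Python, joining structured policy records to external weather observations by postcode. The data, field names, rainfall threshold and simple join are all invented for illustration and bear no relation to any real catastrophe model.

        # Hypothetical, simplified data; a real catastrophe model would draw on far
        # richer sources (social media, contact-centre voice analysis, sensors, ...).
        policies = [
            {"policy_id": "P-100", "postcode": "EX1 1AA", "sum_insured": 250_000},
            {"policy_id": "P-200", "postcode": "YO1 7HH", "sum_insured": 180_000},
        ]
        weather_obs = {
            "EX1 1AA": {"rainfall_mm_24h": 95},
            "YO1 7HH": {"rainfall_mm_24h": 12},
        }

        FLOOD_RAINFALL_THRESHOLD_MM = 75  # illustrative trigger, not a real criterion

        # 'Mash' the two sources together to flag policies exposed to the event.
        exposed = [
            {**p, **weather_obs.get(p["postcode"], {})}
            for p in policies
            if weather_obs.get(p["postcode"], {}).get("rainfall_mm_24h", 0)
               >= FLOOD_RAINFALL_THRESHOLD_MM
        ]
        print(exposed)   # [{'policy_id': 'P-100', ..., 'rainfall_mm_24h': 95}]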

      ■ Veracity – This is normally taken to mean the reliability of the data. Not all data is equally reliable, as it comes from different sources. One measure of veracity is the ‘signal-to-noise’ ratio, an expression for the usefulness of information compared with false or irrelevant data. (The expression has its origin in the strength of a radio signal compared with the background noise.)

      In an insurance context this may relate to the amount of ‘spam’ or off-topic posts on a social media site where an insurer is looking for insight into the customers' reaction to a new media campaign.

      As organizations become obsessed with data governance and integrity, there is a risk of assuming that any data which is less than perfect cannot be relied upon. This is not necessarily true. One major UK bank, for example, gives a weighting to the veracity, or ‘truthfulness’, of its data, which allows it to use imperfect information in its decisions. The reality is that, even in daily life, decisions are made on the best information available to us, even if it is not perfect, and our subsequent actions are influenced accordingly.
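      As a sketch of the weighting idea described above (the records, sources and scores are invented; the bank's actual method is not described in the text), each data point carries a veracity score between 0 and 1 and contributes to an estimate in proportion to that score rather than being discarded outright.

        # Each observation of, say, a policyholder's annual mileage comes from a
        # source of different reliability; the veracity weight is an assumed score (0-1).
        observations = [
            {"source": "telematics device", "annual_mileage": 9_800,  "veracity": 0.95},
            {"source": "application form",  "annual_mileage": 8_000,  "veracity": 0.70},
            {"source": "social media post", "annual_mileage": 15_000, "veracity": 0.30},
        ]

        # Veracity-weighted estimate: imperfect data still contributes, but less.
        weighted_total = sum(o["annual_mileage"] * o["veracity"] for o in observations)
        total_weight = sum(o["veracity"] for o in observations)
        estimate = weighted_total / total_weight
        print(f"Weighted mileage estimate: {estimate:,.0f}")   # 9,954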

      ■ Value – the final characteristic, and one not widely commented on, is the value of the data. This can be measured in different ways: value to the user of the data in terms of giving deeper insight into a certain issue, or perhaps the cost of acquiring key data to provide that insight, for example the creditworthiness of a customer.

      There is a risk in thinking that all essential information is out there ‘in the ether’ and that it is simply a matter of finding it and creating a mechanism for absorbing it. It may well be that certain types of data are critical to particular insights, and that there is a cost-benefit case for actively seeking them out.
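      One trivially simple way to frame that cost-benefit case, with every figure invented for illustration, is to compare the expected improvement in the decision against the price of the data.

        # Hypothetical figures for buying an external creditworthiness score.
        cost_of_data_per_customer = 1.50        # price of the external data feed
        expected_loss_without_data = 12.00      # expected cost of mispricing per customer
        expected_loss_with_data = 9.00          # expected cost once the score is used

        benefit_per_customer = expected_loss_without_data - expected_loss_with_data
        worth_buying = benefit_per_customer > cost_of_data_per_customer
        print(f"Benefit {benefit_per_customer:.2f} vs cost "
              f"{cost_of_data_per_customer:.2f}: buy = {worth_buying}")   # buy = True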

      In an insurance context, one example might be where remote aerial information obtained from either a satellite or unmanned aerial device (i.e., …