Self-Service Data Analytics and Governance for Managers. Nathan E. Myers

order when we have our meals delivered? If you consider the vast array of high-velocity data types and the number of observations being collected, it is easy to see how the term “Big Data” was coined. How your organization mobilizes to connect to this data for logistical and client relationship benefits will set the tone for success in the next decade. How you, as an individual, can adopt emerging technologies to get on board in the new digital landscape will position you for personal success along this time frame.

      Cloud Storage and Cloud Computing

      We have already described that data is being captured at a rate never before seen. Some say that today, companies like Amazon, Google, and others know what we need before we do. They capture data surrounding how we shop, what we buy, our online browsing patterns, our spending patterns, and the likely order of our transactions. One consequence of harnessing the enterprise utility of customer data is that data volumes have exploded over the last 10 to 15 years. In many cases, enterprises require data storage that far exceeds what can be accommodated with their own hardware in their own facilities, and the number of operations performed on the data has increased commensurately. Fortunately, with advances in connectivity, the availability of capacious networks, increased speed of information transmission, and advances in data security, companies may elect to upload their data to data centers outside of their organization, in the cloud, to be administered by cloud service providers (CSPs).

      Since the advent of cloud computing, many companies no longer deem it necessary to purchase licenses and install software for dozens of required programs on every individual connected machine across an enterprise. Instead, every computer on a connected network can subscribe to and run software that is housed in the cloud to process data that is stored in the cloud, assisted by the expertise of CSPs. A user may only pull down fully processed information and outputs, as required for local consumption. It is important to introduce the cloud, given some of the largest service providers are packaging up tools and expertise surrounding some of the subject technologies of this book – artificial intelligence, machine learning, and analytics, to name a few. Let's begin by providing an introductory overview of artificial intelligence.

      Artificial Intelligence

      Artificial intelligence (AI) is one of the broadest and most all-encompassing terms in data analytics that the reader will encounter. It is the overarching theory and science of developing computer systems and processes that can consider facts and variables to perform tasks that typically require human intelligence, including the uniquely human capability of learning new things and applying them. AI draws on any number of sciences and disciplines, such as mathematics, computer science, psychology, and linguistics, among many others. One need only picture the ways that humans think, interact, and understand one another to perform daily tasks to see the breadth of fields, disciplines, and specialty branches of learning that must be brought to bear.

      Blockchain and Distributed Ledger Technology

      The next technology we will introduce in this chapter is distributed ledger technology (DLT), upon which blockchain is based. In order to transact digitally and with confidence, the ownership chain of assets of value must be trackable and auditable. If we think about all the transactions our companies engage in, one activity that often represents manual work and a break in straight-through processing (STP) is verifying transactions when questions arise after the fact. Think of the number of reconciliations performed across accounting, finance, and operations functions in business today. Often, reconciliations are aimed at comparing and agreeing items like transactions, assets, securities, and account balances to confirm the true state of a ledger. A reconciliation is essentially the comparison of two datasets, either to confirm their agreement or to identify any exceptions or breaks. Once exceptions are identified, countless hours of investigation can follow: tracing the exceptions back to transactional source data to confirm which of the two datasets under comparison is correct, and taking the necessary steps to correct the faulty dataset. What if this could be solved in a different way?
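The reconciliation described above can be pictured in a few lines of code. The sketch below is purely illustrative, not taken from the book: it assumes each ledger is a simple mapping of transaction IDs to amounts, and reports any ID where the two sides disagree or an entry is missing on one side.

```python
# Hypothetical sketch of a reconciliation: compare two ledgers keyed by
# transaction ID and report exceptions ("breaks"). All names are illustrative.

def reconcile(ledger_a, ledger_b):
    """Compare two {transaction_id: amount} mappings and return the breaks."""
    breaks = {}
    for txn_id in ledger_a.keys() | ledger_b.keys():
        a = ledger_a.get(txn_id)
        b = ledger_b.get(txn_id)
        if a != b:
            breaks[txn_id] = (a, b)  # an entry missing on one side shows as None
    return breaks

internal = {"T1": 100.00, "T2": 250.00, "T3": 75.50}
counterparty = {"T1": 100.00, "T2": 255.00, "T4": 40.00}

print(reconcile(internal, counterparty))
```

Each break identified here (an amount mismatch on T2, and T3/T4 present on only one side) would then trigger exactly the kind of manual investigation the paragraph describes.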

      What if the counterparties to a transaction were both (or all) participants on the same distributed ledger? If each of them agreed on the validity of ownership and asset movements, and each subscribed to the resulting golden source of truth, would there be a need for the vast numbers of after-the-fact reconciliations, or for the audits undertaken to resolve exceptions? Would there be an opportunity for exceptions to emerge at all? So goes the theoretical benefits case for distributed ledger technology in the accounting, finance, and operations functions of large organizations.
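The shared, tamper-evident ledger behind this benefits case can be sketched in miniature. The toy example below is an assumption-laden illustration, not production blockchain code: each block records a transaction and the hash of the previous block, so any participant can independently verify that the history they subscribe to has not been altered.

```python
# Minimal sketch of a hash-chained ledger. Illustrative only: real DLT adds
# consensus, signatures, and distribution across participants.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transaction):
    """Append a new block that links back to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transaction": transaction})
    return chain

def verify(chain):
    """Any participant can confirm the chain is internally consistent."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, {"from": "A", "to": "B", "amount": 100})
append_block(chain, {"from": "B", "to": "C", "amount": 40})
print(verify(chain))  # True: an untampered chain verifies
chain[0]["transaction"]["amount"] = 999
print(verify(chain))  # False: tampering breaks the hash links
```

Because every participant holds the same verifiable history, there is no second dataset to reconcile against after the fact, which is precisely the opportunity for exceptions that this model is meant to eliminate.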