Title: Heterogeneous Computing
Author: Mohamed Zahran
Publisher: Ingram
Genre: Computer Hardware
Series: ACM Books
ISBN: 9781450360982
Preface
The term heterogeneous computing has become popular lately (lately, meaning in the last five years!). It has started infiltrating many articles. Research papers have been, and are still being, written about heterogeneous computing and its implications for both software and hardware. The definition of this term is quite straightforward: executing programs on a computing platform with computing nodes of different characteristics. What is tricky is whether this is a good thing or a bad thing.
From a hardware perspective, as we will see later in this book, it is a good thing. Each computing node is efficient at specific types of applications. Efficiency here means it gets the best performance (e.g., speed) at the lowest cost (e.g., power). This excellence in price-performance is very much needed in our current era of big data, severe power constraints, and the road to exascale computing. So if we can assign to each node the part of the program that it excels at, we get the best price-performance overall; making that assignment is the main challenge facing the software community.
From a software perspective, heterogeneous computing seems like bad news because it makes programming much more challenging. As a developer, you have way more tasks than with traditional homogeneous computing. You need to know about different execution units, or at least learn about the computing nodes in the system you are writing code for. Then you need to pick algorithms to make your program, or different parts of your program, suitable for these nodes. Finally, you need to tweak your code to get the needed performance by overcoming many bottlenecks that certainly exist in heterogeneous computing, like communication overhead between the different units, overhead of creating threads or processes, management of memory access of those different units, and so on.
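To make these bottlenecks concrete, here is a minimal, hypothetical sketch (written in CUDA as my own illustration, not an example from this book) of adding two vectors on a GPU. Even in this toy program the developer has to manage two separate memories, pay for explicit data transfers between the CPU and the GPU, and launch thousands of device threads; a CPU-only version would be a three-line loop.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);

    // Host (CPU) memory.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Separate device (GPU) memory: managing two memory spaces is one of the
    // overheads mentioned above.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);

    // Explicit data movement between the two units: communication overhead.
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launching many lightweight GPU threads: thread-creation overhead.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // The result must be copied back before the CPU can use it.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}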
We cannot then say that heterogeneous computing is good or bad news. But we can say that heterogeneous computing is now the norm and not the exception. It is here, it will continue to be here, and we need to deal with it. But how do we deal with it? This is the topic of this book.
This book discusses the topic of heterogeneous computing from different angles: hardware challenges, the current state of the art in hardware, software issues, how to make the best use of current heterogeneous systems, and what lies ahead. All the systems we use, from portable devices to supercomputers, embody some type of heterogeneity. The main reason for that is to achieve good performance with power efficiency. However, this opens the door to many challenges that we need to deal with at all levels of the computing stack, from algorithms all the way to process technology. The aim of this book is to present the big picture of heterogeneous computing. Whether you are a hardware designer or a software developer, you need to know how the pieces of the puzzle fit together.
This book will discuss several architecture designs of heterogeneous systems, the role of the operating system, and the need for more efficient programming models. The main goal is to bring researchers and engineers to the forefront of research in this new era, which started a few years ago and is expected to continue for decades.
Acknowledgments
First and foremost, I would like to thank all my family for their support, encouragement, and unconditional love. I would like to thank Steve Welch, who gave me the idea of writing this book in the first place. A big thank-you also goes to Tamer Özsu, the editor-in-chief of ACM Books, for his encouragement, flexibility, and willingness to answer many questions. Without him, this book wouldn’t have seen the light of day.
Anything I have learned in my scientific endeavors is due to my professors, my students, and my colleagues. I cannot thank them enough. Dear students, we learn from you as much as you learn from us.
Mohamed Zahran
September 2018
1 Why Are We Forced to Deal with Heterogeneous Computing?
When computers were first built, about seven decades ago, there was one item on the wish list: correctness. Soon a second wish appeared: speed. The notion of speed has of course changed from those early days and applications to today’s requirements, but in general we can say that we want fast execution. After a few more decades and the proliferation of desktop PCs and then laptops, power became the third wish, whether in the context of battery life or electricity bills. As computers infiltrated many fields and were used in many applications, such as military and health care, we were forced to add a fourth wish: reliability. We do not want a computer to fail during a medical procedure, for example; and it would have been a big loss (financially and scientifically) if the rover Curiosity, which NASA landed on Mars in 2012, had failed. (And yes, Curiosity is a computer.) With the interconnected world we live in today, security became a must, and this is the fifth wish. Correctness, speed, power, reliability, and security are the five main wishes we want from any computer system. The order of these items differs based on the application, societal needs, and the market segment. This wish list is what directs the advances in hardware and software; at the same time, the enabling technologies for fulfilling it lie in hardware advances and software evolution. So there is a cycle between the wish list and hardware and software advances, and this cycle is affected by societal needs. This chapter explains the changes we have been through, from the dawn of computer systems until today, that made heterogeneous computing a must.
In this chapter we see how computing systems evolved into the current status quo. We learn about the concept of heterogeneity and how to make use of it. At the end of this chapter, ask yourself: Have we reached heterogeneous computing willingly? Or against our will? I hope by then you will have an answer.
1.1 The Power Issue
In 1965 Gordon Moore, cofounder of Intel together with Robert Noyce, published a four-page paper that became very famous [Moore 1965]. This paper, titled “Cramming More Components onto Integrated Circuits,” predicted that the number of components (he did not mention transistors specifically, but the prediction evolved to mean transistors) in an integrated circuit (IC) would double every year. This prediction evolved over time to be two years, then settled on 18 months. This is what we call Moore’s law: the number of transistors on a chip is expected to double every 18 months. The 50th anniversary of Moore’s law was in 2015! More transistors per chip means more features, which in turn means, hopefully, better performance. Life was very rosy for both the hardware community and the software community. On the hardware side, faster processors with speculative execution, superscalar capabilities, simultaneous multithreading, etc., were coupled with better process technology and higher frequencies, producing faster and faster processors. On the software side, you could write your program and expect it to get faster with every new generation of processors without any effort on your part! Until everything stopped around 2004. What happened?
Around 2004 Dennard scaling stopped. In 1974 Robert Dennard and several coauthors [Dennard et al. 1974] published a paper predicting that voltage and current should be proportional to the linear dimensions of the transistors. This has become known as Dennard scaling. It works quite well with Moore’s law: transistors get smaller and hence faster, their voltage and current also scale down, and so power can stay almost constant, or at least will not increase quickly. However, a closer look at the Dennard scaling prediction shows that the authors ignored leakage current (which was insignificant at the time the paper was published). As transistors get smaller and smaller, leakage becomes more significant.
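To see why Dennard scaling kept power in check, here is the standard textbook back-of-the-envelope argument (my simplification, not the exact derivation in [Dennard et al. 1974]). Dynamic power is roughly

$$P_{\text{dyn}} = \alpha \, C \, V^2 f,$$

where $\alpha$ is the switching activity, $C$ the capacitance, $V$ the supply voltage, and $f$ the clock frequency. If every linear dimension, and with it $C$ and $V$, shrinks by a factor $k$ while $f$ rises by $k$, then the power per transistor scales as $(1/k)\cdot(1/k)^2\cdot k = 1/k^2$, which matches the $1/k^2$ shrinkage in transistor area, so power density stays roughly constant. The static term $P_{\text{leak}} \approx V \cdot I_{\text{leak}}$ is exactly what this argument leaves out, and it is the term that grew large enough to end the free ride around 2004.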