Title: The Self-Taught Computer Scientist
Author: Cory Althoff
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119724339
While algorithms are a fundamental concept in computer science, computer scientists have not agreed on a formal definition. There are many competing definitions, but Donald Knuth's is among the best known. He describes an algorithm as a definite, effective, and finite process that receives input and produces output based on this input.
Definiteness means that the steps are clear, concise, and unambiguous.
Effectiveness means that you can perform each operation precisely to solve the problem.
Finiteness means that the algorithm stops after a finite number of steps.
A common addition to this list is correctness. An algorithm should always produce the same output for a given input, and this output should be the correct answer to the problem the algorithm solves.
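To make these four properties concrete, here is a minimal sketch of my own (not the book's example): an algorithm that finds the largest number in a non-empty list and satisfies all four requirements.

def find_max(numbers):
    largest = numbers[0]           # definite: each step is clear and unambiguous
    for number in numbers[1:]:     # finite: one pass through the list, then it stops
        if number > largest:       # effective: every comparison can be performed exactly
            largest = number
    return largest                 # correct: the same input always yields the right answer

print(find_max([3, 41, 5, 2]))
>> 41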
Most, but not all, algorithms fulfill these requirements, and some of the exceptions are important. For example, when you create a random number generator, your goal is to generate randomness so someone can't use the input to guess the output. Also, many algorithms in data science are not strict about correctness. For example, it may be sufficient for an algorithm to estimate output, as long as the estimate's uncertainty is known. In most cases, however, your algorithms should fulfill all the previous requirements. If you write an algorithm for making scrambled eggs, the user might not be happy if, occasionally, the algorithm produces an omelet or boiled eggs instead.
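Python's built-in random module illustrates the random number generator exception: calling the same function with the same arguments is designed to produce different output on different runs.

import random

# The arguments are identical on both calls, yet the results will typically differ by design:
print(random.randint(1, 100))
print(random.randint(1, 100))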
Analyzing Algorithms
There is often more than one algorithm we can use to solve a problem. For example, there are several different ways to sort a list. When several algorithms solve a problem, how do you know which one is best? Is it the simplest? The fastest? The smallest? Or something else?
One way to judge an algorithm is by its run time. An algorithm's run time is the amount of time it takes your computer to execute an algorithm written in a programming language like Python. For example, here is an algorithm in Python that counts from 1 to 5 and prints each number:
for i in range(1, 6):
    print(i)
You can measure this algorithm's run time using Python's built-in time module to track how long your computer takes to execute it:
import time

start = time.time()
for i in range(1, 6):
    print(i)
end = time.time()
print(end - start)

>> 1
>> 2
>> 3
>> 4
>> 5
>> 0.15141820907592773
When you run your program, it prints the numbers from 1 to 5 and outputs the time it took to execute. In this case, it took 0.15 seconds.
Now, rerun your program:
import time

start = time.time()
for i in range(1, 6):
    print(i)
end = time.time()
print(end - start)

>> 1
>> 2
>> 3
>> 4
>> 5
>> 0.14856505393981934
The second time you run your program, you should see a different run time, and if you rerun it, you will see yet another. The algorithm's run time keeps changing because the processing power your computer has available when it runs your program varies from moment to moment, which in turn affects the run time.
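You can observe this variance directly by timing the same loop several times in one program. This sketch of my own uses time.perf_counter, a clock better suited to measuring short intervals than time.time:

import time

for trial in range(3):
    start = time.perf_counter()
    for i in range(1, 6):
        print(i)
    end = time.perf_counter()
    # Each trial typically reports a slightly different elapsed time.
    print(f"trial {trial}: {end - start} seconds")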
Further, this algorithm's run time would be different on another computer. If you ran it on a computer with less processing power, it would be slower, whereas it would be faster on a more powerful computer. Furthermore, this program's run time is affected by the programming language you wrote it in. For example, the run time would be shorter if you ran this same program in C, because C can be faster than Python.
Because an algorithm's run time is affected by so many different variables, such as your computer's processing power and the programming language, run time is not an effective way to compare two algorithms. Instead, computer scientists compare algorithms by looking at the number of steps they require. You can input the number of steps involved in an algorithm into a formula that can compare two or more algorithms without considering the programming language or computer. Let's take a look at an example. Here is your program from earlier that counts from 1 to 5:
for i in range(1, 6):
    print(i)
Your program takes five steps to complete (it goes through a loop five times and prints i each time). You can express the number of steps your algorithm requires with this equation:
f(n) = 5
If you make your program more complicated, your equation will change. For example, you may want to keep track of the sum of all the numbers you are printing:
count = 0
for i in range(1, 6):
    print(i)
    count += i
Now, your algorithm takes 11 steps to complete. First, it sets the variable count to zero. Then, it prints five numbers and increments count five times (1 + 5 + 5 = 11).
This is the new equation for your algorithm:
f(n) = 11
What happens if you change your code so that the loop runs n times instead of exactly five?
count = 0
for i in range(1, n + 1):
    print(i)
    count += i
Your equation changes to this:
f(n) = 1 + 2n
Now the number of steps your algorithm takes depends on the value of n. The 1 in the equation represents the first step: count = 0. Then, there are two times n steps after that: for example, if n is 5, f(n) = 1 + 2 × 5 = 11. Computer scientists call the variable n in an equation that describes the number of steps in an algorithm the size of the problem. In this case, you could say the time it takes to solve a problem of size n is 1 + 2n, or in mathematical notation, T(n) = 1 + 2n.
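One way to check this equation for yourself is to instrument the algorithm with a counter that increments once per step. This is a sketch of my own, not the book's code:

def count_steps(n):
    steps = 0
    count = 0          # the single setup step
    steps += 1
    for i in range(1, n + 1):
        print(i)       # one print step per iteration
        steps += 1
        count += i     # one increment step per iteration
        steps += 1
    return steps

print(count_steps(5))
>> 1
>> 2
>> 3
>> 4
>> 5
>> 11

The final 11 matches T(5) = 1 + 2 × 5.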
An equation describing the number of steps in an algorithm is not very helpful, however, because, among other things, you can't always reliably count the number of steps in an algorithm. For example, if an algorithm has many conditional statements, you have no way of knowing which of them will execute in advance. The good news is, as a computer scientist, you don't care about the exact number of steps in an algorithm. What you want to know is how an algorithm performs as n gets bigger. Most algorithms perform fine on a small data set but may be a disaster with larger data sets. Even the most inefficient algorithm will perform well if n is 1. In the real world, however, n will probably not be 1. It may be several hundred thousand, a million, or more.
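As a quick illustration of why conditionals make exact step counts unpredictable, here is a sketch of my own with a branch whose execution depends on the data rather than on n:

def count_evens(numbers):
    count = 0
    for number in numbers:
        # Whether this increment runs depends on the values in the list,
        # so you cannot know the exact number of steps in advance.
        if number % 2 == 0:
            count += 1
    return count

print(count_evens([1, 2, 3, 4]))   # the increment executes twice
print(count_evens([1, 3, 5, 7]))   # the increment never executes

Both calls pass a list of four numbers, yet they take different numbers of steps.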
The important thing to know about an algorithm is not the exact number of steps it will take but rather an approximation of the number of steps it will take as n gets bigger. As n gets larger, one part of the equation will overshadow the rest of the equation to the point that everything else becomes irrelevant. Take a look at this Python code:
def print_it(n):
    # loop 1
    for i in range(n):
        print(i)
    # loop 2
    for i in range(n):
        print(i)
        for j in range(n):
            print(j)
            for h in range(n):
                print(h)
What part of this program is most important for determining how many steps your algorithm takes to complete? You may think both parts of the function (the first loop and the second loop containing other loops) are important. After all, if n is 10,000, your computer will print many numbers.
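To see which loop actually dominates, you can tally the print calls in each loop instead of performing them. This counting variant of print_it is a sketch of my own, not the book's code:

def count_prints(n):
    loop1 = 0
    loop2 = 0
    for i in range(n):
        loop1 += 1         # loop 1 contributes n steps
    for i in range(n):
        loop2 += 1         # loop 2 contributes n steps at this level...
        for j in range(n):
            loop2 += 1     # ...n squared more here...
            for h in range(n):
                loop2 += 1 # ...and n cubed here
    return loop1, loop2

for n in (1, 10, 100):
    print(n, count_prints(n))
>> 1 (1, 3)
>> 10 (10, 1110)
>> 100 (100, 1010100)

As n grows, the nested loop's cubic contribution quickly overwhelms everything else.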