Title: The Self-Taught Computer Scientist
Author: Cory Althoff
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119724339
In a linear algorithm, as n gets bigger, the number of steps your algorithm takes increases by however much n increases (Figure 1.3).
Figure 1.3 Linear complexity
Log-Linear Time
An algorithm that runs in log-linear time grows as a combination (multiplication) of logarithmic and linear time complexities. For example, a log-linear algorithm might evaluate an O(log n) operation n times. In big O notation, you express a log-linear algorithm as O(n log n). Log-linear algorithms often divide a data set into smaller parts and process each piece independently. For example, many of the more efficient sorting algorithms you will learn about later, such as merge sort, are log-linear.
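To make that pattern concrete, here is a minimal sketch (not taken from the book) of an O(log n) binary search repeated once for each of the n items in a list, which gives O(n log n) overall; the binary_search helper and the numbers list are illustrative assumptions:

def binary_search(sorted_items, target):
    # Halve the search range on every pass: O(log n) steps.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

numbers = list(range(16))
for n in numbers:
    # An O(log n) search performed n times: O(n log n) overall.
    binary_search(numbers, n)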
Figure 1.4 shows what a log-linear algorithm looks like when you plot it on a graph.
Figure 1.4 Log-linear complexity
As you can see, log-linear complexity is not as efficient as linear time. However, its complexity does not grow nearly as quickly as quadratic, which you will learn about next.
Quadratic Time
After log-linear, the next most efficient time complexity is quadratic time. An algorithm runs in quadratic time when its performance is directly proportional to the problem's size squared. In big O notation, you express a quadratic algorithm as O(n**2).
Here is an example of an algorithm with quadratic complexity:
numbers = [1, 2, 3, 4, 5]
for i in numbers:
    for j in numbers:
        x = i * j
        print(x)
This algorithm multiplies every number in a list of numbers by every number in the list (including itself), stores the result in a variable, and prints it.
In this case, n is the size of your numbers list. The equation for this algorithm's time complexity is as follows:
f(n) = 1 + n * n * (1 + 1)
The (1 + 1) in the equation comes from the multiplication and print statement. You repeat the multiplication and print statement n * n times with your two nested for loops. You can simplify the equation to this:
f(n) = 1 + (1 + 1) * n**2
which is the same as the following:
f(n) = 1 + 2 * n**2
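For example, with the five-element numbers list above, that works out to f(5) = 1 + 2 * 5**2 = 51 steps, and the 50 steps from the nested loops dwarf the single step for creating the list.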
As you may have guessed, the n**2 part of the equation overshadows the rest, so in big O notation, the equation is as follows:
f(n) = O(n**2)
When you graph an algorithm with quadratic complexity, the number of steps increases sharply as the problem's size increases (Figure 1.5).
Figure 1.5 Quadratic complexity
As a general rule, if your algorithm contains two nested loops running from 1 to n (or from 0 to n − 1), its time complexity will be at least O(n**2). Many sorting algorithms such as insertion and bubble sort (which you will learn about later in the book) follow quadratic time.
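As a preview of that idea, here is a minimal bubble sort sketch (a simplified illustration, ahead of the book's fuller treatment) whose two nested loops over the list follow the pattern just described:

def bubble_sort(items):
    n = len(items)
    for i in range(n):
        # Each pass compares adjacent pairs, so the two nested loops
        # perform on the order of n * n comparisons: O(n**2).
        for j in range(n - 1):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 3, 1, 4, 2]))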
Cubic Time
After quadratic comes cubic time complexity. An algorithm runs in cubic time when its performance is directly proportional to the problem's size cubed. In big O notation, you express a cubic algorithm as O(n**3). An algorithm with a cubic complexity is similar to quadratic, except n is raised to the third power instead of the second.
Here is an algorithm with cubic time complexity:
numbers = [1, 2, 3, 4, 5]
for i in numbers:
    for j in numbers:
        for h in numbers:
            x = i + j + h
            print(x)
The equation for this algorithm is as follows:
f(n) = 1 + n * n * n * (1 + 1)
Or as follows:
f(n) = 1 + 2 * n**3
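With the five-element list above, that works out to f(5) = 1 + 2 * 5**3 = 251 steps, compared to 51 steps for the quadratic version.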
Like an algorithm with quadratic complexity, the most critical part of this equation is n**3, which grows so quickly that it makes the rest of the equation, even if it included an n**2 term, irrelevant. So, in big O notation, cubic complexity is as follows:
f(n) = O(n**3)
While two nested loops are a sign of quadratic time complexity, having three nested loops running from 0 to n means that the algorithm will follow cubic time. You will most likely encounter cubic time complexity if your work involves data science or statistics.
Both quadratic and cubic time complexities are special cases of a larger family of polynomial time complexities. An algorithm that runs in polynomial time scales as O(n**a), where a = 2 for quadratic time and a = 3 for cubic time. When designing your algorithms, you generally want to avoid polynomial scaling when possible because these algorithms can get very slow as n gets larger. Sometimes you can't escape polynomial scaling, but you can take comfort in knowing that polynomial complexity is still far from the worst case.
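If it helps to see the general O(n**a) pattern in code, here is a small sketch (an illustration, not from the book) that counts the n**a combinations a polynomial algorithm would touch; the polynomial_steps name and the use of itertools.product are assumptions made for this example:

from itertools import product

def polynomial_steps(items, a):
    # product(items, repeat=a) yields one tuple per combination,
    # so this loop runs len(items)**a times: O(n**a).
    count = 0
    for _ in product(items, repeat=a):
        count += 1
    return count

print(polynomial_steps([1, 2, 3, 4, 5], 2))   # 25 (quadratic)
print(polynomial_steps([1, 2, 3, 4, 5], 3))   # 125 (cubic)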
Exponential Time
The honor of the worst time complexity goes to exponential time complexity. An algorithm that runs in exponential time contains a constant raised to the size of the problem. In other words, an algorithm with exponential time complexity takes c raised to the nth power steps to complete. The big O notation for exponential complexity is O(c**n), where c is a constant. The value of the constant doesn't matter. What matters is that n is in the exponent.
Fortunately, you won't encounter exponential complexity often. One example of exponential complexity is trying to guess a numerical password consisting of n decimal digits by testing every possible combination, which is O(10**n).
Here is an example of password guessing with O(10**n) complexity:
pin = 931
n = len(str(pin))           # number of digits in the PIN
for i in range(10**n):      # try every possible n-digit value
    if i == pin:
        print(i)
The number of steps this algorithm takes to complete grows incredibly fast as n gets larger. When n is 1, this algorithm takes 10 steps. When n is 2, it takes 100 steps. When n is 3, it takes 1,000 steps.