Algorithms For Dummies. John Paul Mueller
because longer algorithm runs determine the maximum number of nodes created to solve the problem. In addition, it’s important to consider the branching factor, which is the average number of nodes created at each step in the problem space graph to solve a problem. For the same solution, an algorithm with a higher branching factor will generate more nodes than one with a lower branching factor.

      Going random and being blessed by luck

      Solving a search problem using brute-force techniques (described in “Avoiding brute-force solutions,” earlier in this chapter) is possible. The advantage of this approach is that you don’t need any domain-specific knowledge to use one of these algorithms. A brute-force algorithm tends to use the simplest possible approach to solving the problem. The disadvantage is that a brute-force approach works well only for a small number of nodes. Here are some of the common brute-force search algorithms:

Breadth-first search
Description: Begins at the root node, explores each of the child nodes first, and then moves down to the next level. It progresses level by level until it finds a solution.
Cons: Must store every node in memory, which means that it uses a considerable amount of memory for a large number of nodes.
Pros: Can check for duplicate nodes to save time, and it always comes up with a solution.

Depth-first search
Description: Begins at the root node and explores a set of connected child nodes until it reaches a leaf node. It progresses branch by branch until it finds a solution.
Cons: Can’t check for duplicate nodes, which means that it might traverse the same node paths more than once.
Pros: It’s memory efficient.

Bidirectional search
Description: Searches simultaneously from the root node and the goal node until the two search paths meet in the middle.
Cons: Complexity of implementation, translating into a longer development cycle.
Pros: It’s time efficient, uses memory more efficiently than other approaches, and always finds a solution.
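
      As one concrete illustration of the first technique above, here is a minimal breadth-first search sketch in Python. The breadth_first_search function, the dictionary-based graph, and the node names are assumptions made for this example, not code from the book.

```python
from collections import deque

def breadth_first_search(graph, root, goal):
    # graph maps each node to a list of its child nodes (illustrative layout).
    frontier = deque([[root]])    # Queue of partial paths, oldest first
    visited = {root}              # Duplicate check: skip nodes already seen

    while frontier:
        path = frontier.popleft() # Expanding the oldest path first = level by level
        node = path[-1]
        if node == goal:
            return path           # First complete path that reaches the goal
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None                   # The goal isn't reachable from the root

# Hypothetical graph used only for illustration.
graph = {"root": ["A", "B"], "A": ["C", "D"], "B": ["E"], "E": ["goal"]}
print(breadth_first_search(graph, "root", "goal"))  # ['root', 'B', 'E', 'goal']
```

      Note how every partial path sits in the queue at once, which is the memory cost described above, while the visited set implements the duplicate-node check.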

      Using a heuristic and a cost function

       Pure heuristic search: Expands nodes in order of their cost. It maintains two lists. The closed list contains the nodes it has already explored; the open list contains the nodes it has yet to explore. In each iteration, the algorithm expands the node with the lowest possible cost. All its child nodes are placed in the closed list and the individual child node costs are calculated. The algorithm sends the child nodes with a low cost back to the open list and deletes the child nodes with a high cost.

       A* search: Tracks the cost of nodes as it explores them (choosing the least expensive ones) using this equation: f(n) = g(n) + h(n), where
       n is the node identifier.
       g(n) is the cost of reaching the node so far.
       h(n) is the estimated cost to reach the goal from the node.
       f(n) is the estimated cost of the path from n to the goal.
       (A minimal code sketch of this search appears after this list.)

       Greedy best-first search: Chooses the path that is closest to the goal using the equation f(n) = h(n). It can find solutions quite quickly, but it can also get stuck in loops, so many people don't consider it an optimal approach to finding a solution.
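
      To make the f(n) = g(n) + h(n) idea concrete, here is a minimal Python sketch of A* search on a small weighted graph. The a_star function, the dictionary-based graph, and the heuristic estimates are illustrative assumptions for this example rather than code from the book.

```python
import heapq

def a_star(graph, h, start, goal):
    # graph maps each node to a dict of {neighbor: step cost}.
    # h(node) returns the estimated cost from that node to the goal.
    # The open list is a priority queue ordered by f(n) = g(n) + h(n).
    open_list = [(h(start), 0, start, [start])]
    best_g = {start: 0}              # g(n): cheapest known cost to reach each node

    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path              # Least-cost path found
        for neighbor, cost in graph.get(node, {}).items():
            new_g = g + cost         # Cost of reaching the neighbor through node
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                new_f = new_g + h(neighbor)   # f(n) = g(n) + h(n)
                heapq.heappush(open_list,
                               (new_f, new_g, neighbor, path + [neighbor]))
    return None                      # No path between start and goal

# Hypothetical graph and heuristic estimates used only for illustration.
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
estimates = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, estimates.get, "A", "D"))   # ['A', 'B', 'C', 'D']
```

      Because the heuristic in this example never overestimates the remaining cost, the first path that reaches the goal is also the least expensive one.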

      Gaining insights into precisely how algorithms work is important because otherwise you can’t determine whether an algorithm actually performs in the way you need it to. Also, without good measurements, you can’t perform accurate comparisons to know whether you really do need to discover a new method of solving a problem when an older solution works too slowly or uses too many resources. Knowing the basis for comparing different solutions, and how to decide between them, is an essential skill when dealing with algorithms.

      The issue of efficiency has been part of discovering and designing new algorithms since the concept of algorithms first came into being, which is why you see so many different algorithms competing to solve the same problem. The concept of measuring the size of the functions within an algorithm and analyzing how the algorithm works isn’t new; both Ada Lovelace and Charles Babbage considered the problems of algorithm efficiency in reference to computers as early as 1843 (see https://www.computerhistory.org/babbage/adalovelace/).

      Analysis of algorithms requires some mathematical understanding and some computations, but it’s extremely beneficial in your journey to discover, appreciate, and effectively use algorithms. This topic is considerably more abstract than other topics in this book. To make the discussion less theoretical, later chapters present the practical side of such measurement by examining algorithms in detail. The following sections give you the basics.

      Simulating using abstract machines

      The more operations an algorithm requires, the more complex it is. Complexity is a measure of algorithm efficiency in terms of time usage because each operation takes some time. Given the same problem, complex algorithms are generally less favorable than simple algorithms because complex algorithms require more time. Think about those times when speed of execution makes the difference, such as in the medical or financial sector, or when flying on automatic pilot on an airplane or space rocket. Measuring algorithm complexity is a challenging task, though a necessary one if you want to employ the right solution. The first measurement technique uses abstract machines like the Random Access Machine (RAM).
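
      As a rough illustration of what measuring complexity on an abstract machine means, the Python sketch below counts the basic steps of a simple summation. The one-step-per-operation convention and the count_operations helper are assumptions made only for this example.

```python
def count_operations(n):
    # Count basic steps for summing the numbers 0..n-1, assuming each
    # assignment or addition costs exactly one step on the abstract machine.
    steps = 0
    total = 0
    steps += 1                     # Initializing the running total
    for value in range(n):
        total += value
        steps += 1                 # One addition per loop iteration
    return steps

for n in (10, 100, 1000):
    print(n, count_operations(n))  # Step count grows in direct proportion to n
```

      Doubling the input size doubles the count, which is exactly the kind of relationship that complexity analysis tries to capture.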

      Abstract machines aren’t real computers but rather theoretical ones — computers that are imagined in their functioning. It’s sort of like daydreaming for computer scientists. You use abstract machines to consider how well an algorithm would work on a computer without testing it on the real thing, while still being bound by the type of hardware you’d use. A RAM computer performs basic arithmetic operations and interacts with information in