Title: Algorithms For Dummies
Author: John Paul Mueller
Publisher: John Wiley & Sons Limited
Genre: Programs
ISBN: 9781119870005
Applying greedy reasoning
Greedy reasoning is often used as part of an optimization process. The algorithm views the problem one step at a time and focuses just on the step at hand. Every greedy algorithm makes two assumptions:
You can make a single optimal choice at a given step.
By choosing the optimal selection at each step, you can find an optimal solution for the overall problem.
You can find many greedy algorithms, each optimized to perform particular tasks. Here are some common examples of greedy algorithms used for graph analysis (see Chapter 9 for more about graphs) and data compression (see Chapter 14 for more about data compression), along with the reasons you might want to use them:
Kruskal’s Minimum Spanning Tree (MST): The algorithm chooses the edge between two nodes with the smallest value, not the greatest value as the word greedy might initially convey. This sort of algorithm might help you connect a set of locations on a map using the least total length of road or cable, or perform other graph-related tasks.
Prim’s MST: This algorithm works on an undirected graph (one in which direction isn’t considered) by dividing its nodes into two sets: those already connected to the growing tree and those not yet reached. At each step, it selects the lowest-weight edge that crosses between the two sets, keeping the total weight of the tree as small as it can be. You might find this algorithm used in a maze game to generate the maze or to connect every part of the maze at the lowest total cost.
Huffman Encoding: The algorithm assigns a code to every unique data entry in a stream of entries, with the most commonly used data entry receiving the shortest code. For example, the letter E would normally receive the shortest code when compressing English text, because you use it more often than any other letter in the alphabet. By changing the encoding technique, you can compress the text and make it considerably smaller, reducing transmission time.
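To see the greedy choice in Huffman encoding at work, here is a minimal Python sketch (an illustration, not code from the book) that builds a code table using the standard heap-based merging of the two least frequent entries; the sample text and function name are assumptions made for the example. The most frequent symbol in the sample ends up with the shortest code, just as the letter E would in English text.

import heapq
from collections import Counter

def huffman_codes(text):
    # Greedy Huffman construction: repeatedly merge the two least
    # frequent entries until a single code table remains.
    freq = Counter(text)
    if len(freq) == 1:                       # Edge case: one distinct symbol.
        return {symbol: "0" for symbol in freq}
    heap = [(count, i, {symbol: ""})
            for i, (symbol, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)                          # Tie-breaker keeps tuples comparable.
    while len(heap) > 1:
        count1, _, codes1 = heapq.heappop(heap)   # Least frequent subtree.
        count2, _, codes2 = heapq.heappop(heap)   # Second least frequent subtree.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (count1 + count2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("this is an example of huffman encoding")
for symbol, code in sorted(codes.items(), key=lambda item: len(item[1])):
    print(repr(symbol), code)

Carrying a small dictionary of partial codes in each heap entry avoids building an explicit tree, which keeps the sketch short; a real encoder would normally build the tree and store it alongside the compressed data so the receiver can decode it.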
Reaching a good solution
Scientists and mathematicians use greedy algorithms so often that Chapter 15 covers them in depth. However, what you really want is a good solution, not just a particular solution. In most cases, a good solution provides optimal results of the sort you can measure, but the word good can include many meanings, depending on the problem domain. You must ask what problem you want to solve and which solution solves the problem in a manner that best meets your needs. For example, when working in engineering, you might need to weigh solutions against weight, size, cost, or other considerations, or perhaps some combination of all these factors that meets a specific requirement.
To put this issue in context, say that you build a coin machine that creates change for particular monetary amounts using the fewest coins possible (maybe as part of an automatic checkout at a store). The reason to use the fewest coins possible is to reduce equipment wear, the weight of coins needed, and the time required to make change (your customers are always in a hurry, after all). A greedy solution solves the problem by using the largest coins possible. For example, to output $0.16 in change, you use a dime ($0.10), a nickel ($0.05), and a penny ($0.01).
A problem occurs when you can’t use every coin type in creating a solution. The change machine might be out of nickels, for example. To provide $0.40 in change, a greedy solution would start with a quarter ($0.25) and a dime ($0.10). Unfortunately, there are no nickels, so the coin machine then outputs five pennies (5 × $0.01) for a total of seven coins. The optimal solution in this case is to use four dimes instead (4 × $0.10). As a result, the greedy algorithm provides a particular solution, but not an optimal solution. The change-making problem receives considerable attention because it’s so hard to solve. You can find additional discussions such as “Combinatorics of the Change-Making Problem,” by Anna Adamaszek and Michal Adamaszek (https://www.sciencedirect.com/science/article/pii/S0195669809001292) and “Coin Change” by Mayukh Sinha (https://www.geeksforgeeks.org/coin-change-dp-7/).
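The following Python sketch (an illustrative example, not code from the book or the articles above) contrasts the two approaches: a greedy routine that always takes the largest coin that fits, and a simple dynamic-programming routine that finds the true minimum. Amounts are expressed in cents to avoid floating-point issues, and the coin sets shown are assumptions matching the scenario above.

def greedy_change(amount, coins):
    # Greedy change-making: always take the largest coin that still fits.
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result if amount == 0 else None   # None means exact change failed.

def optimal_change(amount, coins):
    # Dynamic programming: best[c] holds a minimum-coin solution for c cents.
    best = {0: []}
    for cents in range(1, amount + 1):
        candidates = [best[cents - coin] + [coin] for coin in coins
                      if coin <= cents and best[cents - coin] is not None]
        best[cents] = min(candidates, key=len) if candidates else None
    return best[amount]

print(greedy_change(16, [25, 10, 5, 1]))   # [10, 5, 1]: three coins
print(greedy_change(40, [25, 10, 1]))      # Seven coins once the nickel is gone
print(optimal_change(40, [25, 10, 1]))     # [10, 10, 10, 10]: four coins

With a full drawer of US coins the two routines agree, but removing the nickel reproduces the seven-coin greedy result against the four-dime optimum.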
Computing Costs and Following Heuristics
Even when you find a good solution, one that is both efficient and effective, you still need to know precisely what the solution costs. You may find that the cost of using a particular solution is still too high, even when everything else is considered. Perhaps the answer comes almost, but not quite, on time or it uses too many computing resources. The search for a good solution involves creating an environment in which you can fully test the algorithm, the states it creates, the operators it uses to change those states, and the time required to derive a solution.
Often, you find that a heuristic approach, one that relies on self-discovery and produces sufficiently useful results (not necessarily optimal, but good enough) is the method you actually need to solve a problem. Getting the algorithm to perform some of the required work for you saves time and effort because you can create algorithms that see patterns better than humans do. Consequently, self-discovery is the process of allowing the algorithm to show you a potentially useful path to a solution (but you still have to count on human intuition and understanding to know whether the solution is the right one). The following sections describe techniques you can use to compute the cost of an algorithm using heuristics as a method of discovering the actual usefulness of any given solution.
Representing the problem as a space
A problem space is an environment in which a search for a solution takes place. A set of states and the operators used to change those states represent the problem space. For example, consider a tile game that has eight tiles in a 3-x-3 frame. Each tile shows one part of a picture, and the tiles start in some random order so that the picture is scrambled. The goal is to move one tile at a time to place all the tiles in the right order and reveal the picture. You can see an example of this sort of puzzle at https://www.proprofsgames.com/puzzle/sliding/.
The combination of the start state, the randomized tiles, and the goal state — the tiles in a particular order — is the problem instance. You could represent the puzzle graphically using a problem space graph. Each node of the problem space graph represents a state (the eight tiles in a particular position). The edges represent operations, such as moving tile number eight up. When you move tile eight up, the picture changes — it moves to another state.
Winning the game by moving from the start state to the goal state isn’t the only consideration. To solve the game efficiently, you need to perform the task in the least number of moves possible, which means using the smallest number of operations. The minimum number of moves used to solve the puzzle is the problem depth.
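As a concrete model of states, operators, and problem depth, here is a small Python sketch (an illustration, not code from the book) that stores each board as a tuple of nine numbers, generates successor states by sliding a tile into the blank square, and uses breadth-first search to count the minimum number of moves; the scrambled start states are assumptions chosen to keep the search tiny.

from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 stands for the empty square.

def successors(state):
    # Operators: slide a neighboring tile into the blank square.
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            swap = r * 3 + c
            board = list(state)
            board[blank], board[swap] = board[swap], board[blank]
            yield tuple(board)

def problem_depth(start):
    # Breadth-first search: the first time GOAL appears is the minimum
    # number of moves, which is the problem depth.
    frontier = deque([(start, 0)])
    seen = {start}                     # Every stored state costs memory.
    while frontier:
        state, moves = frontier.popleft()
        if state == GOAL:
            return moves
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, moves + 1))
    return None                        # The start state can't reach the goal.

print(problem_depth((1, 2, 3, 4, 5, 6, 7, 0, 8)))   # 1 move
print(problem_depth((1, 2, 3, 4, 0, 6, 7, 5, 8)))   # 2 moves

The seen set is also a direct way to watch space complexity grow: every state the search keeps in memory lives there, which previews the memory concerns discussed next.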
You must consider several factors when representing a problem as a space. For example, you must consider the maximum number of nodes that will fit in memory, and whether the number of nodes that memory can support matches the expected number of nodes necessary to solve the problem, which represents the space complexity. When you can’t fit all the nodes in memory at one time, the computer must generate them only when necessary and then discard the previous nodes to free memory or store some nodes in other locations. To determine whether the nodes will fit in memory,