List of Figures

Chapter 1. Small problems

Figure 1.1. The height of each stickman is the previous two stickmen’s heights added together.

Figure 1.2. The recursive function fib(n) calls itself with the arguments n-2 and n-1.

Figure 1.3. Every non-base-case call of fib2() results in two more calls of fib2().

Figure 1.4. The human memoization machine

Figure 1.5. Compressing a str representing a gene into a 2-bit-per-nucleotide bit string

Figure 1.6. A one-time pad results in two keys that can be separated and then recombined to re-create the original data.

Figure 1.7. The challenge is to move the three discs, one at a time, from tower A to tower C. A larger disc may never be on top of a smaller disc.

Chapter 2. Search problems

Figure 2.1. A nucleotide is represented by one of the letters A, C, G, and T. A codon is composed of three nucleotides, and a gene is composed of multiple codons.

Figure 2.2. In the worst case of a linear search, you’ll sequentially look through every element of the array.

Figure 2.3. In the worst case of a binary search, you’ll look through just lg(n) elements of the list.

Figure 2.4. In depth-first search, the search proceeds along a continuously deeper path until it hits a barrier and must backtrack to the last decision point.

Figure 2.5. In a breadth-first search, the closest elements to the starting location are searched first.

Figure 2.6. Euclidean distance is the length of a straight line from the starting point to the goal.

Figure 2.7. In Manhattan distance, there are no diagonals. The path must be along parallel or perpendicular lines.

Figure 2.8. The missionaries and cannibals must use their single canoe to take everyone across the river from west to east. If the cannibals ever outnumber the missionaries, the cannibals will eat the missionaries.

Chapter 3. Constraint-satisfaction problems

Figure 3.1. Scheduling problems are a classic application of constraint-satisfaction frameworks.

Figure 3.2. In a solution to the Australian map-coloring problem, no two adjacent parts of Australia can be colored with the same color.

Figure 3.3. In a solution to the eight queens problem (there are many solutions), no two queens may threaten each other.

Figure 3.4. A classic word search, such as you might find in a children’s puzzle book

Figure 3.5. The circuit board layout problem is very similar to the word-search problem, but the rectangles are of variable width.

Chapter 4. Graph problems

Figure 4.1. A map of the 15 largest MSAs in the United States

Figure 4.2. A graph with vertices representing the 15 largest MSAs in the United States and edges representing potential Hyperloop routes between them

Figure 4.3. An equivalent graph to that in figure 4.2, with the location of Miami moved

Figure 4.4. The shortest route between Boston and Miami, in terms of the number of edges, is highlighted.

Figure 4.5. A weighted graph of the 15 largest MSAs in the United States, where each of the weights represents the distance between two MSAs in miles

Figure 4.6. In the left graph, a cycle exists between vertices B, C, and D, so it is not a tree. In the right graph, the edge connecting C and D has been pruned, so the graph is a tree.

Figure 4.7. The highlighted edges represent a minimum spanning tree that connects all 15 MSAs.

Chapter 5. Genetic algorithms

Figure 5.1. The general outline of a genetic algorithm

Figure 5.2. An example of roulette-wheel selection in action

Chapter 6. K-means clustering

Figure 6.1. An example of k-means running through three generations on an arbitrary data set. Stars indicate centroids. Colors and shapes represent current cluster membership (which changes).

Figure 6.2. State governors, as of June 2017, plotted by state longitude and governor age

Figure 6.3. Data points in cluster 0 are designated by circles, and data points in cluster 1 are designated by squares.

Chapter 7. Fairly simple neural networks

Figure 7.1. A researcher studies fMRI images of the brain. fMRI images do not tell us much about how individual neurons function or how neural networks are organized.

Figure 7.2. A single neuron combines its weights with input signals to produce an output signal that is modified by an activation function.

Figure 7.3. A simple neural network with one input layer of two neurons, one hidden layer of four neurons, and one output layer of three neurons. The number of neurons in each layer in this figure is arbitrary.

Figure 7.4. The mechanism by which an output neuron’s delta is calculated during the backpropagation phase of training

Figure 7.5. How a delta is calculated for a neuron in a hidden layer

Figure 7.6. The weights of every hidden layer and output layer neuron are updated using the deltas calculated in the previous steps, the prior weights, the prior inputs, and a user-determined learning rate.

Figure 7.7. The sigmoid activation function (S(x)) will always return a value between 0 and 1. Note that its derivative is easy to compute as well (S'(x)).

Chapter 8. Adversarial search

Figure 8.1. The one-dimensional list indices that correspond to each square in the tic-tac-toe board

Figure 8.2. A minimax decision tree for a tic-tac-toe game with two moves left. To maximize the likelihood of winning, the initial player, O, will choose to play O in the bottom center. Arrows indicate the positions from which a decision is made.

Chapter 9. Miscellaneous problems

Figure 9.1. The burglar must decide what items to steal because the capacity of the knapsack is limited.

Figure 9.2. Five cities in Vermont and the driving distances between them

Figure 9.3. The shortest path for the salesman to visit all five cities in Vermont is illustrated.

Figure 9.4. The Phone app in iOS retains the letters on keys that its telephone forebears contained.