Focuses only on the final configuration, not the path to it. Keeps track of the current state only, moving to neighboring states until a goal state is reached. Not systematic: it might never search the portion of the space where the goal is. Uses less space and can work in large search spaces.
Often suitable for optimization problems, where the best configuration is found by optimizing an objective function.
Objective Function
Evaluates the quality (fitness) of a given configuration or solution. Denoted by f(s), where s is a state.
The objective function is minimized or maximized depending on the use case. The search continues until a local maximum or local minimum is reached.
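As a concrete illustration (an assumption here, not from the source), an 8-queens objective function can count attacking pairs of queens, which the search would then minimize:

```python
def attacking_pairs(state):
    """Objective function for n-queens: state[i] is the row of the queen
    in column i. Counts pairs of queens that attack each other (lower is better)."""
    count = 0
    n = len(state)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = state[i] == state[j]
            same_diagonal = abs(state[i] - state[j]) == j - i
            if same_row or same_diagonal:
                count += 1
    return count

# A solved 8-queens board has objective value 0.
print(attacking_pairs([0, 4, 7, 5, 2, 6, 1, 3]))  # 0
```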
Algorithms
Hill Climbing
Also known as greedy local search. Starts at a random state and moves to the neighboring state with the highest value of the objective function. If no such state exists, the search stops. Incomplete.
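The loop above can be sketched as follows; the toy objective and neighbor function are illustrative assumptions, not part of the source:

```python
def hill_climbing(initial_state, objective_function, neighbor_function):
    """Greedy local search: repeatedly move to the best neighbor until
    no neighbor improves the objective."""
    current = initial_state
    while True:
        neighbors = neighbor_function(current)
        if not neighbors:
            return current
        best = max(neighbors, key=objective_function)
        if objective_function(best) <= objective_function(current):
            return current  # local maximum (or edge of a plateau)
        current = best

# Toy example: maximize f(x) = -(x - 3)^2 over integers; neighbors are x +/- 1.
f = lambda x: -(x - 3) ** 2
result = hill_climbing(0, f, lambda x: [x - 1, x + 1])
print(result)  # 3
```

On this single-peaked toy landscape the greedy loop always reaches the global maximum; the ridge and plateau cases below are where it fails.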
Ridge
A region that is higher than its neighbors but itself has a slope. A special kind of local optimum that makes it very difficult for greedy algorithms to navigate.
Plateau
A region where all neighboring states have the same value. Might result in a random walk.
Variations of Hill Climbing
Stochastic Hill Climbing
Chooses at random from among the uphill moves; the probability of selection can vary with the steepness of the uphill move. This usually converges more slowly than steepest ascent, but in some state landscapes, it finds better solutions.
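A sketch of this weighting, assuming `random.choices` for the proportional selection and the same toy one-dimensional landscape (both assumptions, not from the source):

```python
import random

def stochastic_hill_climbing(initial_state, objective_function, neighbor_function,
                             max_steps=1000):
    """Pick randomly among uphill moves, weighting steeper moves more heavily."""
    current = initial_state
    current_value = objective_function(current)
    for _ in range(max_steps):
        gains = [(s, objective_function(s) - current_value)
                 for s in neighbor_function(current)]
        uphill = [(s, gain) for s, gain in gains if gain > 0]
        if not uphill:
            return current  # local maximum
        states, weights = zip(*uphill)
        # Selection probability is proportional to the improvement (steepness)
        current = random.choices(states, weights=weights, k=1)[0]
        current_value = objective_function(current)
    return current

f = lambda x: -(x - 3) ** 2
print(stochastic_hill_climbing(0, f, lambda x: [x - 1, x + 1]))  # 3
```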
First-Choice Hill Climbing
A variant of stochastic hill climbing. Generates successors randomly and selects the first one that is better than the current state, without evaluating all successor states. Used when a state has too many successors. Might be less optimal.
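A minimal sketch, assuming a `random_successor` generator and a cutoff after a fixed number of consecutive non-improving tries (both assumptions, not from the source):

```python
import random

def first_choice_hill_climbing(initial_state, objective_function,
                               random_successor, max_tries=100):
    """Sample random successors one at a time and take the first improvement.
    Stops after max_tries consecutive non-improving successors."""
    current = initial_state
    current_value = objective_function(current)
    tries = 0
    while tries < max_tries:
        candidate = random_successor(current)
        candidate_value = objective_function(candidate)
        if candidate_value > current_value:
            current, current_value = candidate, candidate_value
            tries = 0  # improvement found, reset the counter
        else:
            tries += 1
    return current

f = lambda x: -(x - 3) ** 2
result = first_choice_hill_climbing(0, f, lambda x: x + random.choice([-1, 1]))
print(result)
```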
Random-Restart Hill Climbing
A series of hill-climbing searches is run from different starting positions. The local optima reached are saved.
If all states have an equal probability of being generated as starting points, it is complete with probability approaching 1, since a goal state will eventually be generated. Surprisingly effective if there aren't too many local maxima or plateaus.
Finding the required number of restarts becomes the new problem.
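A sketch of the restart wrapper; the `random_state` generator and the small hand-made landscape with one local and one global peak are illustrative assumptions:

```python
import random

def random_restart_hill_climbing(random_state, objective_function,
                                 neighbor_function, restarts=20):
    """Run hill climbing from several random starts; keep the best local optimum."""
    def hill_climb(state):
        while True:
            best = max(neighbor_function(state), key=objective_function)
            if objective_function(best) <= objective_function(state):
                return state
            state = best

    results = [hill_climb(random_state()) for _ in range(restarts)]
    return max(results, key=objective_function)

# Toy landscape over states 0..9 with a local peak (value 5 at state 2)
# and a global peak (value 10 at state 7).
heights = [0, 3, 5, 3, 1, 4, 8, 10, 6, 2]
f = lambda x: heights[x]
neighbors = lambda x: [y for y in (x - 1, x + 1) if 0 <= y < len(heights)]
best = random_restart_hill_climbing(lambda: random.randint(0, 9), f, neighbors)
print(best, f(best))  # 7 10 (with high probability)
```

A single hill climb that starts at states 0..3 gets stuck on the local peak at 2; restarting makes landing in the global peak's basin almost certain.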
Simulated Annealing
A method that combines randomness and hill climbing, allowing the possibility of escape from a local optimum.
```python
from math import exp
from random import random

def simulated_annealing(initial_state, objective_function, neighbor_function, schedule):
    current_state = initial_state
    current_value = objective_function(current_state)
    t = 0
    while True:
        temperature = schedule(t)
        if temperature == 0:
            break

        next_state = neighbor_function(current_state)
        next_value = objective_function(next_state)
        delta = next_value - current_value

        if delta > 0:
            # Always accept an improving move
            current_state = next_state
            current_value = next_value
        else:
            # Accept a worsening move with probability e^(delta / temperature)
            probability = exp(delta / temperature)
            if random() < probability:
                current_state = next_state
                current_value = next_value

        t = t + 1
    return current_state
```

Probability of a bad move decreases with the amount delta by which it worsens the objective, and increases with the temperature.
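The `schedule` parameter maps the step counter to a temperature. One common choice (an assumption here, not specified in the source) is exponential cooling with a cutoff that terminates the search:

```python
def exponential_schedule(initial_temp=100.0, decay=0.95, min_temp=1e-3):
    """Return a schedule function: temperature decays geometrically and is
    cut to 0 once it drops below min_temp, which terminates the search."""
    def schedule(t):
        temperature = initial_temp * (decay ** t)
        return temperature if temperature > min_temp else 0
    return schedule

schedule = exponential_schedule()
print(schedule(0))   # 100.0
print(schedule(10))  # ~59.87
```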
Local beam search
Keeps track of k states rather than just one. Begins with k randomly generated states. At each step, all successors of all k states are generated. If a goal is reached, the algorithm halts. Otherwise, it selects the k best successors from the complete list and repeats.
Stochastic local beam search chooses the successors with probability proportional to their fitness, which increases diversity.
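A sketch of the deterministic variant, keeping the k best successors at each step; the toy goal test and landscape are illustrative assumptions:

```python
def local_beam_search(initial_states, objective_function, neighbor_function,
                      is_goal, max_steps=100):
    """Keep k states; each step pools all successors and keeps the k best."""
    beam = list(initial_states)
    k = len(beam)
    for _ in range(max_steps):
        for state in beam:
            if is_goal(state):
                return state
        successors = [s for state in beam for s in neighbor_function(state)]
        if not successors:
            break
        beam = sorted(successors, key=objective_function, reverse=True)[:k]
    return max(beam, key=objective_function)

# Toy: maximize f(x) = -(x - 3)^2; the goal is reaching x == 3; beam width 3.
f = lambda x: -(x - 3) ** 2
result = local_beam_search([-10, 0, 10], f, lambda x: [x - 1, x + 1],
                           lambda x: x == 3)
print(result)  # 3
```

Because the k states share one successor pool, the beam quickly concentrates around the most promising region, unlike k independent hill climbs.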
Genetic algorithm
Operates on multiple states in parallel, similar to local beam search. A variant of stochastic local beam search in which a successor state is generated by combining two parent states.
Start with a population of randomly generated states. The next generation is created by:
- selection: the best states from the current population
- crossover
- mutation
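As a concrete illustration of the last two operators (the bit-string representation and single-point crossover are assumptions, not from the source):

```python
import random

def crossover(parent1, parent2):
    """Single-point crossover: splice a prefix of one parent onto a suffix of the other."""
    point = random.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:]

def mutate(state, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return ''.join(bit if random.random() > rate else ('1' if bit == '0' else '0')
                   for bit in state)

child = crossover('11111111', '00000000')
print(child)   # a prefix of 1s followed by 0s, e.g. '11100000'
print(mutate(child))
```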
Inspired by natural evolution. Can find solutions when local search gets stuck, but has many tunable parameters.
Mutation rate
Section titled “Mutation rate”How often successor states have random mutations.
Elitism
Include a few top-scoring parents from the previous generation in addition to newly formed offspring.
Culling
Discard individuals whose fitness is below a given threshold.
```python
from random import random

def genetic_algorithm(population, objective_function, crossover_function,
                      mutation_function, selection_function, generations,
                      mutation_rate, elitism_count):
    # select_top_individuals and select_two are assumed helper functions
    for gen in range(generations):
        # Evaluate fitness for each individual
        fitness_scores = [objective_function(individual) for individual in population]

        # Select top individuals for elitism
        elites = select_top_individuals(population, fitness_scores, elitism_count)

        # Select parents for crossover
        parents = selection_function(population, fitness_scores)

        # Generate offspring through crossover and mutation
        offspring = []
        while len(offspring) < len(population) - elitism_count:
            parent1, parent2 = select_two(parents)
            child = crossover_function(parent1, parent2)
            if random() < mutation_rate:
                child = mutation_function(child)
            offspring.append(child)

        # Form new population with elites and offspring
        population = elites + offspring

    # Return the best individual in the final population (re-evaluated, since
    # the earlier fitness scores belong to the previous generation)
    return max(population, key=objective_function)
```