Local Search
A type of search that cares only about the final configuration, not the path taken to reach it. Keeps a single current state in memory and moves to neighboring states until a goal state is reached. Not systematic: it may never explore the portion of the space where the goal lies. Uses little memory and can work for very large search spaces.
Often suitable for optimization problems, where the best configuration is found by optimizing an objective function.
Objective Function
A mathematical expression that evaluates the quality or fitness of a given configuration or solution.
In local search algorithms, the objective function assigns a numerical value to each possible state, allowing the algorithm to compare different states and decide which direction to move in the search space. The goal is typically to maximize or minimize this function, depending on the problem, in order to find the best possible solution.
Can be visualized as a landscape, where each point’s height represents the value of the objective function at that point. The algorithm starts at a random point and moves to neighboring points with higher or lower values, depending on the optimization goal. The search continues until a local optimum is reached, which may not be the global optimum.
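As a concrete illustration, consider an objective function for the n-queens problem (the choice of problem is an assumption for illustration; the notes do not fix one). It counts the number of non-attacking pairs of queens, which a local search would then maximize:

```python
# A sketch of an objective function for n-queens. A state is a tuple where
# state[col] gives the row of the queen in that column; the value is the
# number of non-attacking pairs, so higher is better.
def objective(state):
    n = len(state)
    non_attacking = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = state[i] == state[j]
            same_diagonal = abs(state[i] - state[j]) == j - i
            if not (same_row or same_diagonal):
                non_attacking += 1
    return non_attacking

# For n = 4, a solved board attains the maximum C(4,2) = 6 pairs.
print(objective((1, 3, 0, 2)))  # → 6
```

In this landscape, each board is a point and its height is the number of non-attacking pairs; a solution sits at a global maximum.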
Algorithms
Different algorithms can be used for choosing the state to visit next:
- Hill climbing
- Simulated annealing
- Genetic algorithms (extension for multiple states)
Hill Climbing
Also known as greedy local search. Starts from a random state and examines all neighboring states, moving to the neighbor with the highest value of the objective function. If no neighbor is better than the current state, the algorithm terminates. Incomplete: it can get stuck at a local optimum.
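The loop just described can be sketched as follows; `objective` and `neighbors` are assumed problem-specific helpers, and the toy usage at the end is purely illustrative:

```python
# A minimal steepest-ascent hill-climbing sketch: move to the best
# neighbor; stop as soon as no neighbor improves on the current state.
def hill_climb(state, objective, neighbors):
    while True:
        best = max(neighbors(state), key=objective, default=None)
        if best is None or objective(best) <= objective(state):
            return state  # local optimum (or plateau) reached
        state = best

# Toy usage: maximize -(x - 3)^2 over the integers with step-1 neighbors.
result = hill_climb(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1])
print(result)  # → 3
```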
Ridge
A region that is higher than its neighbors but itself has a slope; a sequence of local maxima. Very difficult for greedy algorithms to navigate, since from each point every available single move leads downhill.
Plateau
A flat region where all neighboring states have the same objective value. The search degenerates into a random walk.
Variations
- Stochastic hill climbing
Chooses at random from among the uphill moves; the probability of selection can vary with the steepness of the uphill move. Usually converges more slowly than steepest ascent, but in some state landscapes it finds better solutions.
- First-choice hill climbing
Implements stochastic hill climbing by generating successors randomly until one is generated that is better than the current state. A good strategy when a state has many (e.g., thousands of) successors.
- Random-restart hill climbing
- Start separate hill-climbing searches from random starting positions, stopping when a goal is found.
- Save the best result from any search so far.
- If all states have an equal probability of being generated, it is complete with a probability approaching 1 (a goal state will eventually be generated).
- Finding an optimal solution becomes the question of a sufficient number of restarts.
- Surprisingly effective if there are not too many local maxima or plateaus.
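The restart loop can be sketched as below. The inner `hill_climb` and the two-peak objective `f` are illustrative assumptions, not code from the notes:

```python
import random

# Minimal steepest-ascent step loop used by each restart.
def hill_climb(state, objective):
    while True:
        best = max((state - 1, state + 1), key=objective)
        if objective(best) <= objective(state):
            return state
        state = best

# Random-restart wrapper: run many searches from random starts and
# save the best result from any search so far.
def random_restart(objective, random_state, restarts):
    best = None
    for _ in range(restarts):
        candidate = hill_climb(random_state(), objective)
        if best is None or objective(candidate) > objective(best):
            best = candidate
    return best

def f(x):
    # two peaks: a local maximum at x = 2 (value 3) and the
    # global maximum at x = 9 (value 5)
    return max(3 - abs(x - 2), 5 - abs(x - 9))

# A single run may stop at x = 2, but with enough restarts the global
# maximum at x = 9 is found with probability approaching 1.
best = random_restart(f, lambda: random.randint(-5, 15), restarts=20)
```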
Simulated Annealing
A pure random walk is very inefficient. Simulated annealing combines randomness with hill climbing, allowing the search to escape from local optima.
import math
import random

def simulated_annealing(initial_state, objective_function, neighbor_function, schedule):
    current_state = initial_state
    current_value = objective_function(current_state)
    t = 0
    while True:
        temperature = schedule(t)
        if temperature == 0:
            break  # system is "frozen": annealing finished
        next_state = neighbor_function(current_state)
        next_value = objective_function(next_state)
        delta = next_value - current_value
        if delta > 0:
            # always accept an improving move
            current_state, current_value = next_state, next_value
        else:
            # accept a worsening move with probability exp(delta / T)
            probability = math.exp(delta / temperature)
            if random.random() < probability:
                current_state, current_value = next_state, next_value
        t = t + 1
    return current_state
The probability of accepting a worse move decreases with how much worse the move is (the magnitude of delta) and increases with the temperature.
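The `schedule` parameter is left abstract above; one common choice (an assumption here, not something the notes specify) is geometric cooling. The acceptance probability exp(delta / T) then behaves as described: the same bad move is accepted often at high temperature and almost never at low temperature:

```python
import math

def geometric_schedule(t, t0=10.0, alpha=0.95, eps=1e-3):
    # T(t) = T0 * alpha^t, reported as 0 once negligibly small
    # so that simulated_annealing terminates
    temperature = t0 * alpha ** t
    return temperature if temperature > eps else 0

delta = -2.0  # a move that is 2 units worse
p_hot = math.exp(delta / geometric_schedule(0))     # T = 10   → p ≈ 0.82
p_cold = math.exp(delta / geometric_schedule(100))  # T ≈ 0.06 → p ≈ 2e-15
```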
Local beam search
Keeps track of k states instead of one. Begins with k randomly generated states. At each step, all successors of all k states are generated. If any successor is a goal, the algorithm halts; otherwise it selects the k best successors from the complete list and repeats.
Stochastic local beam search chooses the k successors with probability proportional to their fitness, which increases diversity.
Genetic algorithm
A variant of stochastic local beam search in which a successor state is generated by combining two parent states.
Start with a population of randomly generated states. A fitness function is used to evaluate the quality of each state (higher = better). The next generation is created by:
- selection of the best states from the current population
- crossover
- mutation
Similar to biological evolution. Can find solutions when local search gets stuck, but has many tunable parameters.
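For bit-string states, the two genetic operators can be sketched as follows (the representation, the single crossover point, and the one-bit mutation are assumptions for illustration):

```python
import random

def crossover(parent1, parent2):
    # single-point crossover: the child takes a prefix from one
    # parent and the remaining suffix from the other
    point = random.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:]

def mutate(state):
    # flip one randomly chosen bit
    i = random.randrange(len(state))
    return state[:i] + (1 - state[i],) + state[i + 1:]

child = crossover((1, 1, 1, 1), (0, 0, 0, 0))
```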
Mutation rate
How often offspring have random mutations.
Elitism
Include a few top-scoring parents from the previous generation in addition to newly formed offspring.
Culling
Discard individuals below a given threshold.
import random

def genetic_algorithm(population, fitness_function, crossover_function,
                      mutation_function, selection_function, generations,
                      mutation_rate, elitism_count):
    for gen in range(generations):
        # Evaluate fitness for each individual
        fitness_scores = [fitness_function(individual) for individual in population]
        # Select top individuals for elitism
        order = sorted(range(len(population)),
                       key=lambda i: fitness_scores[i], reverse=True)
        elites = [population[i] for i in order[:elitism_count]]
        # Select parents for crossover
        parents = selection_function(population, fitness_scores)
        # Generate offspring through crossover and mutation
        offspring = []
        while len(offspring) < len(population) - elitism_count:
            parent1, parent2 = random.sample(parents, 2)
            child = crossover_function(parent1, parent2)
            if random.random() < mutation_rate:
                child = mutation_function(child)
            offspring.append(child)
        # Form new population with elites and offspring
        population = elites + offspring
    # Return the best individual of the final population
    return max(population, key=fitness_function)