Sahithyan's S3 — Artificial Intelligence

Informed Searching

Informed search uses domain-specific hints about the location of goals. These hints come in the form of a heuristic function.

Heuristic Function

Estimates the cost of the cheapest path from the state at node n to a goal state. Denoted by h(n).

Used to guide the search algorithm towards promising paths, making search more efficient than uninformed methods.

Examples:

  • In route finding: Straight-line distance to destination
  • In 8-puzzle: Number of misplaced tiles or Manhattan distance
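
The two 8-puzzle heuristics can be sketched as follows (assuming states are tuples of 9 values read row by row, with 0 for the blank — an encoding chosen here for illustration):

```python
def misplaced_tiles(state, goal):
    """Number of non-blank tiles not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    """Sum of each tile's horizontal + vertical distance from its goal
    position on a 3x3 board (the blank, 0, is ignored)."""
    total = 0
    for tile in range(1, 9):
        i, j = divmod(state.index(tile), 3)
        gi, gj = divmod(goal.index(tile), 3)
        total += abs(i - gi) + abs(j - gj)
    return total
```

Both are admissible: a single move fixes at most one misplaced tile and changes one tile's Manhattan distance by at most one.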

Admissible

When a heuristic function never overestimates the true cost to reach the goal.
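
Admissibility can be checked on a small explicit graph by comparing h against the true optimal costs-to-goal, here computed with Dijkstra's algorithm run backwards from the goal (a sketch; the edge-list representation is an assumption):

```python
import heapq

def true_costs_to_goal(edges, goal):
    """Dijkstra over reversed edges: optimal cost-to-goal for each node.
    edges is a list of (node, successor, step_cost) triples."""
    rev = {}
    for u, v, w in edges:
        rev.setdefault(v, []).append((u, w))
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in rev.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def is_admissible(h, edges, goal):
    """h never overestimates: h(n) <= true cost for every reachable node n."""
    return all(h.get(n, 0) <= d for n, d in true_costs_to_goal(edges, goal).items())
```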

Consistent/Monotonic

h is consistent iff:

h(n) ≤ c(n, a, n') + h(n')

Here:

  • h(n) is the heuristic of node n
  • h(n') is the heuristic of node n'
  • n' is the successor node of n after applying action a
  • c(n, a, n') is the cost of the action a from node n to node n'
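
With states and edges made explicit, the consistency condition is a per-edge check (a sketch; the edge-list representation is an assumption):

```python
def is_consistent(h, edges):
    """edges: list of (n, n_prime, step_cost) triples.
    Checks h(n) <= c(n, a, n') + h(n') on every edge."""
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)
```

Note that a consistent heuristic with h(goal) = 0 is also admissible, since the inequality telescopes along any path to the goal.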

Algorithms

Best-First Search

A general search approach that prioritizes nodes based on an evaluation function. At each step, it selects the most promising node from the frontier according to some measure.

The frontier is typically implemented as a priority queue ordered by the evaluation function value.

Evaluation Function

Denoted by f(n). Used to determine which node to expand next. Assigns a numeric value to each node representing its promise or desirability. The exact definition depends on the best-first search variant.

Greedy Best-First Search

Uses the heuristic function h(n) as the evaluation function: f(n) = h(n). Not complete — it can run into dead ends or infinite loops. Not optimal, but typically efficient.

from itertools import count
from queue import PriorityQueue

def greedy_best_first_search(start, goal_test, successors, heuristic):
    """
    Args:
        start: The initial state.
        goal_test: Function(state) -> bool, returns True if state is a goal.
        successors: Function(state) -> list of (action, next_state).
        heuristic: Function(state) -> estimated cost to goal.
    Returns:
        path: List of (action, state) from start to goal, or None if no path found.
    """
    counter = count()  # tie-breaker so equal-h entries never compare states
    frontier = PriorityQueue()
    frontier.put((heuristic(start), next(counter), start, []))
    explored = set()
    while not frontier.empty():
        h, _, state, path = frontier.get()
        if goal_test(state):
            return path + [(None, state)]
        if state in explored:
            continue
        explored.add(state)
        for action, next_state in successors(state):
            if next_state not in explored:
                frontier.put((heuristic(next_state), next(counter), next_state,
                              path + [(action, state)]))
    return None
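
To make the non-optimality concrete, here is a self-contained toy run (the graph and heuristic values are invented for illustration): greedy follows the node with the lowest h even when that route is more expensive overall.

```python
import heapq

# Toy graph: {node: [(neighbour, step_cost), ...]} (invented).
graph = {
    'S': [('A', 1), ('B', 4)],
    'A': [('G', 12)],
    'B': [('G', 2)],
    'G': [],
}
# Invented heuristic: A looks closer to G than B does, misleadingly.
h = {'S': 5, 'A': 2, 'B': 3, 'G': 0}

def greedy_path(start, goal):
    """Compact greedy best-first: orders the frontier by h alone."""
    frontier = [(h[start], start, [start])]
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for nbr, _cost in graph[node]:
            if nbr not in explored:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

path = greedy_path('S', 'G')
# Greedy takes S -> A -> G (total cost 13); the optimal route is
# S -> B -> G (total cost 6), but h never looks at step costs.
```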

A* Search

The most widely known form of best-first search. It uses the evaluation function f(n) = g(n) + h(n), where:

  • f(n) - evaluation function
  • g(n) - cost so far to reach node n from the start state
  • h(n) - estimated cost from n to the goal

Combines the benefits of Dijkstra's algorithm (which considers path cost) and greedy best-first search (which uses a heuristic). It expands nodes in order of their f(n) values, maintaining completeness while being more efficient than uniform-cost search.

Optimal when h(n) is admissible. If h(n) is also consistent (satisfies the triangle inequality), A* becomes even more efficient, as it never needs to reopen closed nodes.

By combining h(n) with the actual cost g(n), A* explores the node that appears closest to the goal while accounting for the cost already paid. It is complete, and optimal given an admissible heuristic.

import heapq
from itertools import count

def a_star_search(start, goal_test, successors, heuristic):
    """
    Generic A* search algorithm.
    Args:
        start: The initial state.
        goal_test: Function(state) -> bool, returns True if state is a goal.
        successors: Function(state) -> list of (action, next_state, cost).
        heuristic: Function(state) -> estimated cost to goal.
    Returns:
        path: List of (action, state) from start to goal, or None if no path found.
    """
    counter = count()  # tie-breaker so equal-f entries never compare states
    frontier = []
    heapq.heappush(frontier, (heuristic(start), 0, next(counter), start, []))
    explored = {}  # state -> best g seen so far
    while frontier:
        f, g, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path + [(None, state)]
        if state in explored and explored[state] <= g:
            continue
        explored[state] = g
        for action, next_state, cost in successors(state):
            new_g = g + cost
            new_f = new_g + heuristic(next_state)
            heapq.heappush(frontier, (new_f, new_g, next(counter), next_state,
                                      path + [(action, state)]))
    return None
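
On a small toy graph (invented for illustration), a compact, self-contained A* run shows the g(n) term steering the search onto the cheaper overall route, even though the first step towards the other route is cheaper:

```python
import heapq

# Toy graph: {node: [(neighbour, step_cost), ...]} (invented).
graph = {
    'S': [('A', 1), ('B', 4)],
    'A': [('G', 12)],
    'B': [('G', 2)],
    'G': [],
}
# Invented admissible heuristic: never above the true cost to G.
h = {'S': 5, 'A': 2, 'B': 1, 'G': 0}

def a_star_path(start, goal):
    """Compact A*: orders the frontier by f = g + h."""
    frontier = [(h[start], 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        for nbr, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h[nbr], g + cost,
                                      nbr, path + [nbr]))
    return None

path, cost = a_star_path('S', 'G')
# A* returns S -> B -> G with total cost 6, the optimal route.
```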

Bidirectional A* Search

Runs A* from the start and from the goal simultaneously, stopping when the two frontiers meet. Can be more efficient than plain A* search, since each direction only needs to reach roughly half the solution depth.

Beam Search

Puts a limit on the size of the frontier, keeping only the best few nodes at each step. Not complete, since promising paths may be discarded. Not optimal.
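
A minimal beam-search sketch, assuming the same successors/heuristic conventions as the earlier functions (the beam_width default and the toy graph below are arbitrary choices):

```python
def beam_search(start, goal_test, successors, heuristic, beam_width=2):
    """Keep only the beam_width best states (by heuristic) at each depth;
    everything else is discarded, which is what costs completeness and
    optimality."""
    beam = [(heuristic(start), start, [])]
    visited = {start}
    while beam:
        candidates = []
        for _, state, path in beam:
            if goal_test(state):
                return path + [(None, state)]
            for action, next_state in successors(state):
                if next_state not in visited:
                    visited.add(next_state)
                    candidates.append((heuristic(next_state), next_state,
                                       path + [(action, state)]))
        candidates.sort(key=lambda c: c[0])
        beam = candidates[:beam_width]
    return None

# Toy run (graph and h invented): with beam_width=2, node C (h=5) is
# pruned at the first level and never expanded.
graph = {'S': ['A', 'B', 'C'], 'A': ['G'], 'B': [], 'C': [], 'G': []}
h = {'S': 3, 'A': 1, 'B': 2, 'C': 5, 'G': 0}
result = beam_search('S', lambda s: s == 'G',
                     lambda s: [((s, n), n) for n in graph[s]],
                     lambda s: h[s], beam_width=2)
```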

Comparison

| Algorithm | Completeness | Optimality | Time Complexity | Space Complexity |
| --- | --- | --- | --- | --- |
| Greedy Best-First | No | No | O(b^m) worst | O(b^m) worst |
| A* Search | Yes | Depends | O(b^d) exponential | O(b^d), stores all nodes |
| Bidirectional A* | Yes | Yes (with consistent h) | O(b^(d/2)) | O(b^(d/2)) |
| Beam Search | No (discards paths) | No | O(b·k), where k = beam width | O(k), linear in beam width |

Where:

  • b = branching factor
  • d = depth of the shallowest goal
  • m = maximum depth of the search tree

The effectiveness of informed search depends on the quality of the heuristic function. A good heuristic can significantly reduce the search space and time; a poor one can make the search inefficient.

A good heuristic function can be constructed by:

  • Relaxing the problem definition
  • Storing precomputed solution costs for subproblems in a pattern database
  • Defining landmarks
  • Learning from the experience with the problem class

Heuristics from Relaxed Problems

A relaxed problem is a problem with fewer restrictions on the actions. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. For example, in the 8-puzzle, allowing a tile to move to any adjacent square regardless of the blank's position yields the Manhattan-distance heuristic, and allowing a tile to move anywhere in one step yields the misplaced-tiles heuristic.

Heuristics from Pattern Databases

The solution cost of a subproblem can also be used as an admissible heuristic for the original problem.

A pattern database stores these exact solution costs for every possible subproblem instance.
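
Building such a database can be sketched as an exhaustive breadth-first search backwards from the goal pattern over the abstracted state space (the toy pattern space below, and the `project` function mentioned in the comments, are assumptions for illustration):

```python
from collections import deque

def build_pattern_database(goal_pattern, neighbours):
    """Backwards BFS from the goal pattern. With unit-cost moves, BFS
    depth equals the exact solution cost of every reachable pattern."""
    pdb = {goal_pattern: 0}
    queue = deque([goal_pattern])
    while queue:
        p = queue.popleft()
        for q in neighbours(p):
            if q not in pdb:
                pdb[q] = pdb[p] + 1
                queue.append(q)
    return pdb

# Toy abstract space (invented): patterns are the integers 0..5 and each
# move shifts the pattern by one; the goal pattern is 0.
pdb = build_pattern_database(0, lambda p: [q for q in (p - 1, p + 1) if 0 <= q <= 5])
# During search, the heuristic would be looked up as pdb[project(n)],
# where project (hypothetical here) maps a full state onto its pattern.
```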