
CS188 Search Lecture Notes III

Here are the notes for CS188 Local Search.

Local search algorithms look for local maxima or minima of an objective function, in order to find a configuration that satisfies some constraints or optimizes that objective.

Local Search

Hill-Climbing Search

The hill-climbing search algorithm (or steepest-ascent) moves from the current state towards the neighboring state that increases the objective value the most.

The “greediness” of hill-climbing makes it vulnerable to being trapped in local maxima and plateaus.

There are variants of hill-climbing. Stochastic hill-climbing selects an action randomly from among the possible uphill moves. Allowing random sideways moves (moves that don't strictly increase the objective) helps the algorithm escape "shoulders."

function HILL-CLIMBING(problem) returns a state
    current ← make-node(problem.initial-state)
    loop do
        neighbor ← a highest-valued successor of current
        if neighbor.value ≤ current.value then return current.state
        current ← neighbor

The algorithm iteratively moves to a state with a higher objective value until no such progress is possible. Hill-climbing is incomplete.

Random-restart hill-climbing conducts a number of hill-climbing searches from randomly chosen initial states. It is trivially complete.
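The loop above can be written as a short Python sketch. The `neighbors` and `value` callables and the toy objective are illustrative choices, not from the notes:

```python
def hill_climbing(initial, neighbors, value):
    """Steepest-ascent hill climbing: move to the best successor until
    no successor improves the objective; returns a local maximum."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current
        current = best

# Toy objective: maximize f(x) = -(x - 3)^2 by stepping left or right.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(0, step, f))  # climbs 0 → 1 → 2 → 3 and stops
```

Since this toy objective has a single peak, the climb always ends at x = 3; on a multimodal objective the same code would stop at whichever local maximum is nearest.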

Simulated Annealing Search

Simulated annealing aims to combine random walk (randomly moving to nearby states) and hill-climbing to obtain a complete and efficient search algorithm.

The algorithm chooses a random move at each timestep. If the move leads to a higher objective value, it is always accepted. If it leads to a smaller objective value, the move is accepted with some probability.

This probability is determined by the temperature parameter, which initially is high (allowing more “bad” moves) and decreases according to some “schedule.”

Theoretically, if the temperature is decreased slowly enough, the simulated annealing algorithm will reach the global maximum with a probability approaching 1.

function SIMULATED-ANNEALING(problem, schedule) returns a state
    current ← problem.initial-state
    for t = 1 to ∞ do
        T ← schedule(t)
        if T = 0 then return current
        next ← a randomly selected successor of current
        ΔE ← next.value − current.value
        if ΔE > 0 then current ← next
        else current ← next only with probability e^(ΔE/T)
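A direct Python translation of this pseudocode might look as follows; the successor function and the geometric cooling schedule are illustrative assumptions:

```python
import math
import random

def simulated_annealing(initial, random_successor, value, schedule):
    """Simulated annealing; schedule(t) returns temperature T (T = 0 ends the search)."""
    current = initial
    t = 0
    while True:
        t += 1
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random_successor(current)
        dE = value(nxt) - value(current)
        # Always accept uphill moves; accept downhill moves with probability e^(dE/T).
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt

# Toy run: maximize f(x) = -(x - 3)^2 with geometric cooling.
f = lambda x: -(x - 3) ** 2
succ = lambda x: x + random.choice([-1, 1])
sched = lambda t: 0 if t > 2000 else 10 * 0.99 ** t
result = simulated_annealing(0, succ, f, sched)
```

Early on the temperature is high and almost any move is accepted (a random walk); as T decays, downhill moves become vanishingly unlikely and the behavior approaches pure hill-climbing.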

Local Beam Search

Local beam search is another variant of the hill-climbing search algorithm.

The algorithm starts with a random initialization of k states, and at each iteration it selects k new states: the k best successors drawn from the combined successor lists of all the threads. If any of the threads finds a goal state, the algorithm stops.

The threads can share information between them, allowing “good” threads (for which objectives are high) to “attract” the other threads to that region as well.

function LOCAL-BEAM-SEARCH(problem, k) returns a state
    states ← k randomly generated states
    loop do
        if any state in states is a goal then return that state

        all_successors ← an empty list
        for each state in states do
            all_successors.add(all successors of state)

        current_best ← the highest-valued state in all_successors
        if current_best.value ≤ max(states).value then
            return the highest-valued state in states   (local maximum reached)

        states ← the k highest-valued states from all_successors
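A minimal Python sketch of the same loop, reusing the toy objective from before (the `neighbors`/`value` callables are illustrative):

```python
def local_beam_search(initial_states, neighbors, value):
    """Keep the k best states among all successors of the current k states."""
    k = len(initial_states)
    states = list(initial_states)
    while True:
        successors = [n for s in states for n in neighbors(s)]
        best_now = max(states, key=value)
        # Stop when no successor beats the best current state (local maximum).
        if not successors or value(max(successors, key=value)) <= value(best_now):
            return best_now
        states = sorted(successors, key=value, reverse=True)[:k]

f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(local_beam_search([0, 10], step, f))  # both beams converge to x = 3
```

Note how the pooled successor list lets the higher-valued beam "attract" the other: after a few iterations both slots are spent near the best region.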

Genetic Algorithms

Genetic Algorithms are a variant of local beam search.

Genetic algorithms begin as beam search with k randomly initialized states called the population. States (called individuals) are represented as strings over a finite alphabet.

Their main advantage is the use of crossovers — large blocks of letters that have evolved and lead to high valuations can be combined with other such blocks to produce a solution with a high total score.

function GENETIC-ALGORITHM(population, FITNESS-FN) returns an individual
    inputs: population, a set of individuals
            FITNESS-FN, a function that measures the fitness of an individual

    repeat
        new_population ← empty set
        for i = 1 to SIZE(population) do
            x ← RANDOM-SELECTION(population, FITNESS-FN)
            y ← RANDOM-SELECTION(population, FITNESS-FN)
            child ← REPRODUCE(x, y)
            if (small random probability) then child ← MUTATE(child)
            add child to new_population
        population ← new_population
    until some individual is fit enough, or enough time has elapsed
    return the best individual in population, according to FITNESS-FN

function REPRODUCE(x, y) returns an individual
    inputs: x, y, parent individuals

    n ← LENGTH(x); c ← random number from 1 to n
    return APPEND(SUBSTRING(x, 1, c), SUBSTRING(y, c + 1, n))
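As a runnable sketch of the same scheme, here is a toy genetic algorithm over bitstrings. The fitness function, mutation operator, and population sizes are illustrative assumptions, not from the notes:

```python
import random

def reproduce(x, y):
    """Single-point crossover of two equal-length parent strings."""
    c = random.randint(1, len(x) - 1)
    return x[:c] + y[c:]

def flip(s):
    """Mutate one random character of a bitstring."""
    i = random.randrange(len(s))
    return s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:]

def genetic_algorithm(population, fitness, mutate, generations=100):
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        new_population = []
        for _ in range(len(population)):
            # Fitness-proportional random selection of two parents.
            x, y = random.choices(population, weights=weights, k=2)
            child = reproduce(x, y)
            if random.random() < 0.1:  # small mutation probability
                child = mutate(child)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)

# Toy problem: evolve an all-ones bitstring; fitness counts the '1's.
fitness = lambda s: s.count("1") + 1  # +1 keeps selection weights positive
pop = ["".join(random.choice("01") for _ in range(8)) for _ in range(20)]
best = genetic_algorithm(pop, fitness, flip)
```

The crossover point `c` is what lets high-scoring blocks from two parents combine, which is exactly the advantage described above.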

CS188 Search Lecture Notes II

Here are the lecture notes for UC Berkeley CS188 Lecture 3.

Heuristics

Heuristics are estimates of distance to goal states.

Heuristics are typically solutions to relaxed problems (where some of the constraints of the original problem have been removed).

With heuristics, it becomes easy to implement logic in our agent that lets it "prefer" expanding states estimated to be closer to goal states when deciding which action to perform.

Greedy Search

Greedy Search always selects the frontier node with the lowest heuristic value for expansion.

The frontier is represented by a priority queue.

Greedy search is not guaranteed to find a goal state if one exists, nor is it optimal, particularly when a very bad heuristic function is selected.

A* Search

A* search is a strategy for exploration that always selects the frontier node with the lowest estimated total cost for expansion.

A* search also uses a priority queue to represent its frontier.

A* search is both complete and optimal, if the heuristic is admissible.

  • g(n): The function representing total backwards cost, computed by UCS.
  • h(n): The heuristic value function, or estimated forward cost, used by greedy search.
  • f(n): The function representing estimated total cost, used by A* search.

f(n) = g(n) + h(n)

Admissibility

The condition required for optimality when using A* tree search is known as admissibility.

Defining h*(n) as the true optimal forward cost to reach a goal state from a given node n, we can formulate the admissibility constraint mathematically as follows:

∀n: 0 ≤ h(n) ≤ h*(n)

Theorem. For a given search problem, if the admissibility constraint is satisfied by a heuristic function h, using A* tree search with h on that search problem will yield an optimal solution.

We can prove this theorem using proof by contradiction.

Graph Search

We keep track of which states we've already expanded, and never expand them again.

Tree search with this added optimization is known as graph search.

function A*-GRAPH-SEARCH(problem, frontier) returns a solution or failure
    reached ← an empty dict mapping states to the cheapest known cost to each one
    frontier ← INSERT((MAKE-NODE(INITIAL-STATE[problem]), 0), frontier)
    while not IS-EMPTY(frontier) do
        node, cost-to-node ← POP(frontier)
        if problem.IS-GOAL(node.STATE) then return node
        if node.STATE is not in reached or reached[node.STATE] > cost-to-node then
            reached[node.STATE] ← cost-to-node
            for each child-node in EXPAND(problem, node) do
                frontier ← INSERT((child-node, cost-to-node + child-node.COST), frontier)
    return failure

Note that in implementation, it's critically important to store reached as a hash-based set or dict rather than a list, so that membership checks take constant time instead of scanning every expanded state.
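A compact Python version of A* graph search, with the priority queue ordered by f = g + h. The graph, zero heuristic, and successor format are illustrative assumptions:

```python
import heapq

def astar(start, is_goal, successors, h):
    """A* graph search. successors(s) yields (next_state, step_cost) pairs."""
    frontier = [(h(start), 0, start, [start])]  # entries are (f, g, state, path)
    best_g = {}                                 # cheapest known cost per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        if state in best_g and best_g[state] <= g:
            continue  # already reached this state at least as cheaply
        best_g[state] = g
        for nxt, cost in successors(state):
            heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Tiny weighted graph; h is the (trivially admissible) zero heuristic here.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
path, cost = astar("A", lambda s: s == "C", lambda s: graph[s], lambda s: 0)
print(path, cost)  # ['A', 'B', 'C'] 2
```

With h = 0 this behaves exactly like UCS, finding the cheap A→B→C route instead of the direct A→C edge; a nonzero admissible heuristic would only reorder expansions, not change the answer.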

Dominance

If heuristic a is dominant over heuristic b, then the estimated goal distance for a is greater than or equal to the estimated goal distance for b for every node in the state space graph. Mathematically,

∀n: h_a(n) ≥ h_b(n)

Dominance very intuitively captures the idea of one heuristic being better than another - if one admissible heuristic is dominant over another, it must be better because it will always more closely estimate the distance to a goal from any given state.

Additionally, the trivial heuristic is defined as h(n) = 0, and using it reduces A* search to UCS.

All admissible heuristics dominate the trivial heuristic.

As a general rule, the pointwise max of multiple admissible heuristics is also admissible: each component underestimates the true cost, so their max does too, and it dominates every component.
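This combination rule is one line of code; the two component heuristics below are made-up examples:

```python
# The pointwise max of admissible heuristics is itself admissible
# (each component underestimates the true cost, so the max does too)
# and dominates every component.
def combined(heuristics):
    return lambda n: max(h(n) for h in heuristics)

# Two hypothetical admissible estimates of distance-to-goal (goal at n = 10).
h1 = lambda n: abs(n - 10)
h2 = lambda n: (10 - n) * 0.5 if n < 10 else 0
h = combined([h1, h2])
print(h(4))  # max(6, 3.0) = 6
```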

Consistency

A heuristic function is said to be consistent, or monotone, if its estimate is always less than or equal to the estimated distance from any neighbouring vertex to the goal, plus the cost of reaching that neighbour.

Mathematically, this is expressed as the triangle inequality:

∀n, n′: h(n) ≤ c(n, n′) + h(n′)

where c(n, n′) is the actual cost of the edge from n to n′.

If hh is consistent, the first time A* reaches a node, it is guaranteed to be via the optimal path.


CS188 Search Lecture Notes I

Here are the lecture notes for UC Berkeley CS188 Lecture 1 and 2.

Agents

The central problem of AI is to create a rational agent. A rational agent is an entity that has goals or preferences and tries to perform a series of actions that yield the best/optimal expected outcome given these goals.

The agent exists in an environment. Together, the agent and the environment constitute the world.

Types of Agents

A reflex agent selects its action based on only the current state of the world.

A planning agent maintains a model of the world and uses this model to simulate performing various actions.

Sometimes agents fail due to wrong world models.

Task Environment

The PEAS (Performance Measure, Environment, Actuators, Sensors) description is used for defining the task environment.

The performance measure describes what utility the agent tries to increase.

The environment summarizes where the agent acts and what affects the agent.

The actuators and the sensors are the methods with which the agent acts on the environment and receives information from it.

Types of Environment

We can characterize the types of environments in the following ways:

  • partially observable environments
  • fully observable environments
  • stochastic environments (uncertainty in the transition model)
  • deterministic environments
  • multi-agent environments
  • static environments
  • dynamic environments
  • environments with known physics
  • environments with unknown physics

State Space and Search Problems

A search problem consists of the following elements:

  • A state space - The set of all possible states in your given world
  • A set of actions available in each state
  • A transition model - Outputs the next state when a specific action is taken at the current state
  • An action cost - Incurred when moving from one state to another after applying an action
  • A start state - The state in which an agent exists initially
  • A goal test - A function that takes a state as input and determines whether it is a goal state

A search problem is solved by first considering the start state, then exploring the state space using the actions, transition model, and costs, iteratively computing children of various states until we arrive at a goal state.

A world state contains all information about a given state, whereas a search state contains only the information about the world that’s necessary for planning.

State Space Graphs and Search Trees


A state space graph is constructed with states representing nodes, with directed edges existing from a state to its children. These edges represent actions, and any associated weights represent the cost of performing the corresponding action.

State space graphs can be enormous, but they represent states economically: every state occurs exactly once.

Search trees encode the entire path (or plan) from the start state to the given state in the state space graph.

Since there often exist multiple ways to get from one state to another, states tend to show up multiple times in search trees.

Since these structures can be too large for computers, we can choose to store only states we’re immediately working with, and compute new ones on-demand.

Uninformed Search

The standard protocol for finding a plan from the start state to a goal is to maintain an outer frontier and continually expand it by replacing a node with its children.

Most implementations of such algorithms will encode information about the parent node, distance to node, and the state inside the node object. This procedure is known as tree search.

function TREE-SEARCH(problem, frontier) returns a solution or failure
    frontier ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), frontier)
    while not IS-EMPTY(frontier) do
        node ← POP(frontier)
        if problem.IS-GOAL(node.STATE) then return node
        for each child-node in EXPAND(problem, node) do
            add child-node to frontier
    return failure

The EXPAND function appearing in the pseudocode returns all the possible nodes that can be reached from a given node by considering all available actions.
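A small Python version makes the role of the frontier explicit: the same loop becomes BFS or DFS depending only on which end of the frontier is popped. The toy graph is an illustrative assumption:

```python
from collections import deque

def tree_search(start, is_goal, expand, frontier_pop):
    """Generic tree search; the frontier discipline decides the strategy."""
    frontier = deque([[start]])  # the frontier holds whole paths
    while frontier:
        path = frontier_pop(frontier)
        if is_goal(path[-1]):
            return path
        for child in expand(path[-1]):
            frontier.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
bfs = lambda frontier: frontier.popleft()  # queue  → breadth-first
dfs = lambda frontier: frontier.pop()      # stack  → depth-first
print(tree_search("A", lambda s: s == "D", graph.get, bfs))  # ['A', 'B', 'D']
print(tree_search("A", lambda s: s == "D", graph.get, dfs))  # ['A', 'C', 'D']
```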

When we have no knowledge of the location of goal states in our search tree, we are forced to do uninformed search.

  • The completeness of each search strategy - if there exists a solution to the search problem, is the strategy guaranteed to find it given infinite computational resources?
  • The optimality of each search strategy - is the strategy guaranteed to find the lowest cost path to a goal state?
  • The branching factor b - the increase in the number of frontier nodes each time a frontier node is dequeued and replaced with its children. At depth k in the search tree, there exist b^k nodes.
  • The maximum depth m.
  • The depth of the shallowest solution s.

Depth-First Search

Depth-First Search (DFS) always selects the deepest frontier node.

The frontier is represented by a stack.

Depth-first search is not complete: if there exist cycles in the state space graph, the corresponding search tree is infinite in depth, and DFS may descend forever without finding a goal.

Depth-first search is not optimal.

Breadth-First Search

Breadth-first search (BFS) always selects the shallowest frontier node.

The frontier is represented by a queue.

If a solution exists, then the depth of the shallowest solution s must be finite, so BFS will eventually search that depth. Hence, it's complete.

BFS is generally not optimal.

Iterative Deepening

Iterative Deepening (ID) is basically BFS implemented with DFS. It repeatedly executes depth-first search with an increasing depth limit.

The frontier is represented by a stack.

Iterative deepening is complete. Because it exhaustively searches each depth level before increasing the limit, it avoids the infinite loops of DFS and will find a solution if one exists at a finite depth.

ID is generally not optimal.

If the cost is uniform, ID and BFS will be optimal.

Uniform Cost Search

Uniform cost search (UCS) always selects the lowest cost frontier node.

The frontier is represented by a priority queue.

Uniform cost search is complete and optimal, provided all edge costs are positive.
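UCS is the tree-search loop with a priority queue keyed on cumulative path cost; a minimal sketch using Python's `heapq` (the weighted toy graph is an illustrative assumption):

```python
import heapq

def uniform_cost_search(start, is_goal, successors):
    """UCS: always pop the lowest-cumulative-cost frontier path."""
    frontier = [(0, start, [start])]  # entries are (cost so far, state, path)
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, cost
        for nxt, step in successors(state):
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

# The direct A→B edge costs 5; the detour through C costs only 2.
graph = {"A": [("B", 5), ("C", 1)], "C": [("B", 1)], "B": []}
print(uniform_cost_search("A", lambda s: s == "B", lambda s: graph.get(s, [])))
# (['A', 'C', 'B'], 2)
```

Because the cheapest path is popped first, UCS returns the detour A→C→B rather than the expensive direct edge.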

The three strategies outlined above are fundamentally the same, differing only in their expansion strategy.
