Problem-Solving Agent

Intelligent agents are supposed to maximize their performance measure. To solve a problem, the agent can adopt a goal and aim to satisfy it. Here are the steps:

  1. Goal formulation: identify what the agent should achieve, based on the current situation and the agent’s performance measure.
  2. Problem formulation: given a goal, we need to decide what actions and states to consider.
  3. Take action: the agent might encounter both known and unknown environments. An agent with several immediate options of unknown value can decide what to do by first examining future actions that eventually lead to states of known value.
  4. Assuming each action is deterministic, following the chosen action sequence will eventually lead the agent to a goal state.

The process of looking for a sequence of actions that reaches the goal is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
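As a rough illustration of problem formulation, the sketch below encodes a tiny route-finding problem as plain data; the map, the city names, and the goal are invented for this example, and a solution is simply a list of driving actions.

```python
# Hypothetical route-finding problem: states are cities, actions are drives
# along roads. The map, city names, and goal below are made up for illustration.
ROAD_MAP = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

initial_state = "A"   # the current situation
goal_state = "D"      # goal formulation: reach city D

def actions(state):
    """Problem formulation: the legal actions are drives to neighboring cities."""
    return [f"drive to {city}" for city in ROAD_MAP[state]]

# A search algorithm would take this problem as input and return an action
# sequence such as ["drive to B", "drive to D"].
```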

Single-state problem definition

A problem is defined by the following items:

  1. The initial state the agent starts in.
  2. The actions available to the agent in each state (often given as a successor function mapping a state to its reachable states).
  3. A goal test, which checks whether a given state is a goal state.
  4. A path cost function, which assigns a numeric cost to each sequence of actions.

A solution is a sequence of actions leading from the initial state to a goal state.
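A minimal sketch of this definition as an abstract class, assuming the four standard components listed above; the class and method names here are illustrative, not a fixed API.

```python
class Problem:
    """Abstract single-state problem; subclasses fill in the concrete details."""

    def __init__(self, initial_state, goal_state=None):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def actions(self, state):
        """Return the actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Return the state reached by applying `action` in `state` (deterministic)."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if `state` satisfies the goal."""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """Cost of taking `action` in `state`; 1 by default, so path cost = number of steps."""
        return 1
```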

Searching for Solutions

The possible action sequences starting at the initial state form a search tree with the initial state at the root; the branches are actions and the nodes correspond to states in the state space of the problem.

Starting from the initial state, we expand the current state: we apply each legal action to it, thereby generating a new set of states. We add all the resulting branches from the parent node, each leading to a new child node, and then choose which of the newly added child nodes to consider further. At any point, the nodes that have been generated but not yet expanded are the leaf nodes of the tree, and the set of all leaf nodes available for expansion is called the frontier.
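The expansion process described above can be sketched as a generic tree search; it assumes the hypothetical `Problem` interface from the previous section and uses a simple FIFO frontier (breadth-first order) purely for illustration.

```python
from collections import deque

class Node:
    """A search-tree node: a state plus the action and parent that produced it."""
    def __init__(self, state, parent=None, action=None):
        self.state = state
        self.parent = parent
        self.action = action

    def solution(self):
        """Follow parent pointers back to the root to recover the action sequence."""
        actions = []
        node = self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

def tree_search(problem):
    """Expand leaf nodes from the frontier until a goal state is found."""
    frontier = deque([Node(problem.initial_state)])   # leaf nodes awaiting expansion
    while frontier:
        node = frontier.popleft()                     # choose which leaf to consider next
        if problem.goal_test(node.state):
            return node.solution()                    # action sequence from root to goal
        for action in problem.actions(node.state):    # expand: apply each legal action
            child_state = problem.result(node.state, action)
            frontier.append(Node(child_state, node, action))
    return None                                       # no solution found
```

A FIFO frontier expands the shallowest leaf node first; swapping it for a stack or a priority queue changes which child is considered next, which is what distinguishes different search strategies.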