Intelligent agents are supposed to maximize their performance measure. To solve a problem, the agent can adopt a goal and aim at satisfying it. In outline, the steps are: formulate a goal, formulate the problem, search for a solution, and execute it.
The process of looking for a sequence of actions that reaches the goal is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
A problem is defined by the following items:

- the initial state the agent starts in;
- the set of actions available in each state;
- the transition model, which describes the state that results from applying an action in a state;
- the goal test, which determines whether a given state is a goal state;
- a path cost function, which assigns a numeric cost to each sequence of actions.

A solution is a sequence of actions leading from the initial state to a goal state.
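To make these pieces concrete, here is a minimal sketch of a problem formulation in Python. The class and method names (`Problem`, `actions`, `result`, `is_goal`, `step_cost`) are illustrative assumptions, not a specific library's API.

```python
# A minimal, hypothetical sketch of a search problem formulation.
class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def actions(self, state):
        """Return the actions available in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by applying `action` in `state`."""
        raise NotImplementedError

    def is_goal(self, state):
        """Goal test: does `state` satisfy the goal?"""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """Cost of taking `action` in `state` to reach `next_state` (unit cost by default)."""
        return 1
```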
The possible action sequences starting at the initial state form a search tree with the initial state at the root; the branches are actions and the nodes correspond to states in the state space of the problem.
Starting from the initial state, we expand the current state: we apply each legal action to it, generating a new set of states. We add a branch from the parent node to each new child node, and then choose which of the newly added child nodes to consider further. At any point, the set of all leaf nodes available for expansion is called the frontier.
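The expansion loop and the frontier can be sketched as follows. This version is only one possible instantiation: it uses a FIFO queue, so nodes are expanded in breadth-first order, and it assumes a `Problem` object like the sketch above. Each frontier entry pairs a state with the action sequence that reached it, so a solution can be returned directly.

```python
from collections import deque

def tree_search(problem):
    # The frontier initially contains only the root: the initial state with an empty path.
    frontier = deque([(problem.initial_state, [])])
    while frontier:
        state, path = frontier.popleft()           # choose a leaf node to expand
        if problem.is_goal(state):
            return path                            # solution: a sequence of actions
        for action in problem.actions(state):      # apply each legal action
            child = problem.result(state, action)  # generate a new child state
            frontier.append((child, path + [action]))
    return None                                    # no solution found
```

Swapping the FIFO queue for a stack or a priority queue changes which frontier node is expanded next, which is what distinguishes the different search strategies discussed later.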