Agents and environments play a big part in Artificial Intelligence, and in this post I am just going to lay out the basics of what agents and environments are made up of.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. You can think of an agent as being like a robotic player in a chess game. Its sensors give it the ability to see the other player's moves in the game. The environment is the game of chess: the board, the other player, and all of the pieces. The actuators of the chess agent could be a robotic arm, or in software simply the ability to make moves. There are many examples of agents and environments in artificial intelligence today; in a self-driving car, for instance, the car is the agent and the world is the environment.
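To make the percept/action loop concrete, here is a minimal sketch in Python. The class names, the toy state, and the placeholder action strings are my own illustrative assumptions, not a real chess engine:

```python
class Environment:
    """Holds the world state; returns percepts and applies actions."""
    def __init__(self):
        self.state = {"board": "start", "turn": 0}

    def percept(self):
        # What the agent's sensors can observe about the world.
        return dict(self.state)

    def apply(self, action):
        # The actuators' effect: the action changes the world state.
        self.state["turn"] += 1
        self.state["last_action"] = action


class Agent:
    def program(self, percept):
        # Maps the current percept to an action (a placeholder here).
        return "move-%d" % percept["turn"]


env, agent = Environment(), Agent()
for _ in range(3):
    action = agent.program(env.percept())
    env.apply(action)

print(env.state["last_action"])  # move-2
```

The important shape is the loop itself: the agent never touches the state directly, it only sees percepts and emits actions.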
A rational agent can be seen as an agent that tries its best to make the right decision.
The definition of a rational agent is:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and the agent's built-in knowledge.
The performance measure is an objective criterion for success of an agent's behavior. It is generally defined in terms of the desired effect on the environment, not on the actions of the agent.
The task environment must be defined before a rational agent can be designed, and when specifying a task environment we use what is called PEAS.
PEAS: Performance measure, Environment, Actuators, Sensors
- Performance Measure: a function the agent is maximizing (or minimizing)
- Environment: a formal representation for world states
- Actuators: actions that change the state according to a transition model
- Sensors: observations that allow the agent to infer the world state
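To make PEAS concrete, here is a quick sketch in Python using the self-driving car from earlier. The exact field values are my illustrative guesses at a reasonable PEAS description, not an official specification:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: str
    environment: str
    actuators: str
    sensors: str

# Hypothetical PEAS description for a self-driving car.
self_driving_car = PEAS(
    performance_measure="safe, fast, legal trip with comfortable passengers",
    environment="roads, other traffic, pedestrians, weather",
    actuators="steering, accelerator, brake, horn, signals",
    sensors="cameras, GPS, speedometer, accelerometer",
)

print(self_driving_car.sensors)
```

Writing the four fields out like this before coding anything is often enough to expose what the agent can and cannot know about its world.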
There are many different types of environments. It is good to know what type of environment your agent will be interacting with, because the environment type tells you how difficult designing your agent will be.
Fully observable vs. Partially observable
Do the agent's sensors give it access to the complete state of the environment? For any given world state, are the values of all the variables known to the agent?
Deterministic vs. Stochastic
Is the next state of the environment completely determined by the current state and the agent's action? Strategic: the environment is deterministic except for the actions of other agents.
Episodic vs. Sequential
Is the agent's experience divided into unconnected single decisions/actions, or is it a coherent sequence of observations and actions in which the world evolves according to the transition model?
Static vs. Dynamic
Is the world changing while the agent is thinking?
Semi-dynamic: the environment does not change with the passage of time, but the agent's performance score does.
Discrete vs. Continuous
Does the environment provide a fixed number of distinct percepts, actions, and environment states?
Are the values of the state variables discrete or continuous?
Time can also evolve in a discrete or continuous fashion
Single Agent vs. Multi Agent
Is an agent operating by itself in the environment?
Known vs. Unknown
Are the rules of the environment (transition model and rewards associated with states) known to the agent?
With the types of environments laid out, combinations of them range from easy to hard:
Easy: Fully Observable, Deterministic, Episodic, Static, Discrete, Single Agent
Hard: Partially Observable, Stochastic, Sequential, Dynamic, Continuous, Multi-Agent
The environment type largely determines the agent design.
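As a sketch of how this classification works in practice, here are two familiar tasks scored along the dimensions above. The classifications follow the usual textbook treatment; the dict layout and the little scoring function are my own illustrative assumptions:

```python
environments = {
    "chess with a clock": {
        "observable": "fully",
        "determinism": "strategic",   # deterministic except for the opponent
        "episodes": "sequential",
        "change": "semi-dynamic",     # the clock runs while the agent thinks
        "values": "discrete",
        "agents": "multi",
    },
    "taxi driving": {
        "observable": "partially",
        "determinism": "stochastic",
        "episodes": "sequential",
        "change": "dynamic",
        "values": "continuous",
        "agents": "multi",
    },
}

# The "hard" side of each vs. pair from the lists above.
HARD = {"partially", "stochastic", "sequential", "dynamic", "continuous", "multi"}

def hardness(name):
    # Count how many dimensions fall on the hard side.
    return sum(value in HARD for value in environments[name].values())

print(hardness("taxi driving"))       # 6 -- hard on every dimension
print(hardness("chess with a clock")) # 2
```

A crude count like this is not a real difficulty metric, but it makes the point of the easy/hard lists: taxi driving sits at the hard end of every single dimension.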
The Structure of Agents:
There are four basic types of agents. Here they are in order of increasing generality:
- Simple Reflex Agents
- Reflex Agents with State
- Goal-based Agents
- Utility-based Agents
Each kind of agent program combines particular components in particular ways to generate actions.
A Simple Reflex Agent handles the simplest kind of world. This agent embodies a set of condition-action rules; it basically works as "if perception, then action." The agent takes in a percept, determines which rule applies, and performs the associated action. The action depends on the current percept only. This type of agent only works well in a fully observable environment.
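A simple reflex agent can be sketched as nothing more than a lookup table of condition-action rules. The two-square vacuum world used here (locations A and B, each clean or dirty) is an illustrative assumption:

```python
# Condition-action rules: (location, status) -> action.
RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "right",
    ("B", "clean"): "left",
}

def simple_reflex_agent(percept):
    # The action depends on the current percept only -- no memory at all.
    return RULES[percept]

print(simple_reflex_agent(("A", "dirty")))  # suck
```

Because there is no memory, the agent will happily bounce between two clean squares forever, which is exactly the limitation that motivates the next type.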
A Model-Based Reflex Agent works as follows: when it receives a percept, it updates its internal state, chooses a rule to apply, and then schedules the action associated with the chosen rule.
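Continuing the same toy vacuum world (still an illustrative assumption), a model-based version keeps an internal model and updates it from each percept before choosing a rule:

```python
class ModelBasedReflexAgent:
    """Keeps an internal model of the world, updated from each percept."""
    def __init__(self):
        self.model = {"A": "unknown", "B": "unknown", "location": "A"}

    def update_state(self, percept):
        # Fold the new percept into the internal model.
        location, status = percept
        self.model["location"] = location
        self.model[location] = status

    def choose_action(self):
        loc = self.model["location"]
        if self.model[loc] == "dirty":
            return "suck"
        other = "B" if loc == "A" else "A"
        # Use the model: skip squares already known to be clean.
        if self.model[other] == "clean":
            return "noop"
        return "right" if loc == "A" else "left"

    def program(self, percept):
        self.update_state(percept)
        return self.choose_action()

agent = ModelBasedReflexAgent()
print(agent.program(("A", "dirty")))  # suck
```

The "noop" branch is what the simple reflex agent could never do: it requires remembering something the current percept does not show.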
A Goal-Based Agent is like a model-based agent, but it also has goals. It considers the state it is in and then, depending on the goals it has, takes actions aimed at reaching them.
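A goal-based agent can reason about sequences of actions that lead to the goal, for example by searching the transition model. The 3x3 grid world and move names below are illustrative assumptions; the search itself is plain breadth-first search:

```python
from collections import deque

GOAL = (2, 2)
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def plan(start, goal=GOAL):
    # Breadth-first search over the transition model to reach the goal.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for name, (dx, dy) in MOVES.items():
            nxt = (state[0] + dx, state[1] + dy)
            if nxt not in visited and 0 <= nxt[0] <= 2 and 0 <= nxt[1] <= 2:
                visited.add(nxt)
                frontier.append((nxt, actions + [name]))

print(plan((0, 0)))  # a shortest 4-step plan to the goal
```

Unlike the reflex agents, the choice here is driven entirely by whether an action sequence ends in a goal state, not by rules keyed on the current percept.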
A Utility-Based Agent is the same as a goal-based agent, except it also evaluates how good the action it performs to achieve its goal will be. In other words, how happy will the agent be in the state that would result if it took a given action?
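A utility-based choice can be sketched as "simulate each action with the transition model, score the resulting state with a utility function, pick the best." The driving scenario, the action set, and the utility values below are all illustrative assumptions:

```python
def utility(state):
    # "How happy" the agent would be in this state: closer to a
    # hypothetical target speed is better.
    target_speed = 50
    return -abs(state["speed"] - target_speed)

def result(state, action):
    # Transition model: the state an action would lead to.
    delta = {"accelerate": 10, "brake": -10, "coast": 0}[action]
    return {"speed": state["speed"] + delta}

def utility_based_agent(state, actions=("accelerate", "brake", "coast")):
    # Choose the action leading to the highest-utility successor state.
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_agent({"speed": 30}))  # accelerate
```

A goal-based agent could only say whether the target speed had been reached; the utility function also ranks all the states that miss it, so the agent can prefer "closer" over "farther."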
Finally, there are Learning Agents. It says above that there are four agent types, but a learning agent is a special kind of agent. One part of the learning agent is a utility-based agent, and it is connected to a critic, a learning element, and a problem generator. These three other parts make the learning agent able to tackle problems that are very hard. The critic of a learning agent is just what it sounds like: it criticizes the agent's actions with some kind of score, so the agent knows the difference between good actions and bad actions. The problem generator is used by the learning element to introduce a small measure of exploration, because if the agent only ever performs the actions the critic has graded highest, it may miss a more optimal solution simply because it has never tried something that looked unlikely but was actually better.
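The interplay of critic, learning element, and problem generator can be sketched with a tiny exploration loop. Everything here is a made-up toy: the three actions, their hidden scores, and the 20% exploration rate are all illustrative assumptions:

```python
import random

random.seed(0)

# The critic's true scores; hidden from the agent, which only sees
# feedback for actions it actually tries. All values are made up.
TRUE_REWARD = {"a": 1.0, "b": 2.0, "c": 5.0}

estimates = {action: 0.0 for action in TRUE_REWARD}
counts = {action: 0 for action in TRUE_REWARD}

def select_action(epsilon=0.2):
    # Problem generator: occasionally suggest an unproven action.
    if random.random() < epsilon:
        return random.choice(list(estimates))
    # Performance element: otherwise exploit the best-rated action so far.
    return max(estimates, key=estimates.get)

for _ in range(300):
    action = select_action()
    reward = TRUE_REWARD[action]                       # critic's score
    counts[action] += 1
    # Learning element: update the running-average estimate.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))
```

Without the problem generator's occasional random pick, the agent would lock onto the first action it rated above zero and never discover that "c" scores far better.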
I hope you liked this post. I am going to continue doing more Artificial Intelligence posts if I get the time as I am very busy. I hope you learned a bit about agents and environments in AI because making this post has helped me solidify some of this knowledge in my own mind.