Conversation
Signed-off-by: bardia-mhd <bardia.mohammadi@yahoo.com>
| { | ||
| "cells": [ | ||
| { | ||
| "cell_type": "markdown", |
First of all, lecture notes must be in markdown format. This means your output should be a .md file, not a Jupyter notebook!
| " <br>\n", | ||
| " <br>\n", | ||
| " <br>\n", | ||
| " <h1 style=\"font-size: 40px; margin: 10px 0;\">AI - Intelligent Agent</h1>\n", |
| "source": [ | ||
| "# Intelligent agents\n", | ||
| "An <b>intelligent agent</b> is anything that perceives its environment through sensors and acts upon that environment through its actuators. \n", | ||
| " we will use the term <b>percept</b> to refer to the agent's perceptual inputs at any given moment.\n", |
Every sentence must start with a capital letter, and there are similar problems in the next sections as well. Please revise them all.
| "metadata": {}, | ||
| "source": [ | ||
| "# Rational agents and performance measure\n", | ||
| "a <b>rational</b> agent choose the set of action in order to maximize its performance. agents use a performance measure to evaluate the desirability of any given sequence. In other words, an agent will choose the action (or a sequence of them) that maximize the expected value of its performance measure." |
| "metadata": {}, | ||
| "source": [ | ||
| "#### Rationality vs perfection\n", | ||
| "Keep in mind that rationality is distinct from omniscience. an omniscience agent knows the actual outcome of its actions but in reality, an agent only knows the expected outcome of its action.\n", |
an omniscience agent -> an omniscient agent
Provide an example to clarify this.
| "#### Rationality vs perfection\n", | ||
| "Keep in mind that rationality is distinct from omniscience. an omniscience agent knows the actual outcome of its actions but in reality, an agent only knows the expected outcome of its action.\n", | ||
| "#### Autonomy\n", | ||
| "a rational agent should be autonomous meaning it mustn't only rely on the prior knowledge of its designer and must learn to compensate for partial or incorrect prior knowledge. In other words, rational agents should learn from experience. for example, in the vacuum world our agent could start to learn when the rooms usually get dirty based on its experience." |
Some of your sentences look very similar to the reference book! It would be better if you try to express them in your own words.
| "source": [ | ||
| "# Task environment (PEAS)\n", | ||
| "we have already talked about performance measure, task environment, actuators and sensors. we group all these under the heading of the <b>Task enviroment </b> and we abbreviate it as <b>PEAS</b>(<b>P</b>erformance measure, <b>E</b>nviroment, <b>A</b>ctuators, <b>S</b>ensors). When designing an agent our first step should be specifying the task enviroment.\n", | ||
| "#### Types of environment\n", |
This part is too brief. You should explain them in much more detail using examples. The reader should gain more information from your markdown than from the slides!
| "metadata": {}, | ||
| "source": [ | ||
| "#### PEAS example\n", | ||
| "here are a few example of specifying PEAS for different agents.\n", |
| "metadata": {}, | ||
| "source": [ | ||
| "# Type of agents\n", | ||
| "In this section we will introduce three basic kinds of basic agent programs.(The agent program is simply a program which implement the agent function.)\n", |
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "# Type of agents\n", |
Try to expand this part. There are other types of agents that aren't in the slides, but you can cover them here.
| "metadata": {}, | ||
| "source": [ | ||
| "## Goal-based agents\n", | ||
| "This kind of agent has a specific goal and its tries to reach that goal efficiently. They have a model of how the world evolves in response to actions and they make decisions based on (hypothesized) consequences of actions to reach their goal state. Search and Planning are two subfields that are closely tied with these kind of agents. In other words, this kinds of agents act on <b>how the world WOULD BE.</b> \n", |
| "source": [ | ||
| "## Reflex agents\n", | ||
| "This is the simplest kind of agent. they choose their next action only based on their current percept. In other words, they do not consider the future consequences of their actions and only consider <b>how the world IS.</b> \n", | ||
| "as an example look at this Pacman agent below at each turn the agent look at its surrounding and chooses the direction that has a point in it and stops when there are no points around it.\n", |
nimajam41 left a comment:
- Try to write out more details in the commented sections.
- Find and fix the grammatical errors in your text.
- Start every sentence with a capital letter.
- Try to express the sentences in your own words.
| - [Conclusion](#Conclusion) | ||
| - [References](#References) | ||
| # Introduction |
Write out at least a paragraph for this section and try to explain why this topic is important.
| # Conclusion |
In this part, it's better to write some sentences instead of just listing sub-topics. For example:
"We discussed intelligent agents which are ... "
"We also tried to explain PEAS using some examples ... "
| - [Properties of task environments](#Properties-of-task-environments) | ||
| - [Types of environment](#Types-of-environment) | ||
| - [Types of environment example](#Types-of-environment-example) | ||
| - [Type of agents](#Type-of-agents) |
| An <b>intelligent agent</b> is anything that perceives its environment through sensors and acts upon that environment through its actuators. | ||
| We will use the term <b>percept</b> to refer to the agent's perceptual inputs at any given moment. | ||
| We can describe an agent's behavior by the agent function. | ||
| <b>Agent function</b> maps any given percepts sequence to an action. But how does the agent know what sequence it must choose? we will try to answer this question using a simple example. |
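Since the vacuum world comes up later in the notes, a minimal Python sketch of an agent function for a two-room vacuum world could look like this (the encoding of percepts and actions is my own assumption, and the function is simplified to map only the latest percept rather than the whole percept sequence):

```python
# Minimal sketch of an agent function for a two-room vacuum world.
# A percept is a (location, status) pair; the function maps it to an action.

def vacuum_agent(percept):
    """Suck if the current square is dirty, otherwise move to the other room."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

print(vacuum_agent(("A", "dirty")))  # -> "suck"
print(vacuum_agent(("A", "clean")))  # -> "right"
```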
| #### Rationality vs perfection | ||
| Keep in mind that rationality is distinct from omniscience. An omniscient agent knows the actual outcome of its actions but in reality, an agent only knows the expected outcome of its action. For example, imagine your trying to cross the street and no cars are on the street Naturally, you will cross the street to reach your goal. now imagine as you are passing the street a meteorite falls on you. Can anyone blame you for being irrational and not expecting a meteorite to flatten you? |
| # Properties of task environments | ||
| #### Types of environment | ||
| we can categorize an environment in many ways, you will find some of the most important ones listed below. |
| <ul> | ||
| <li><b>Fully observable or partially observable</b> (Do the agent sensors give access to the complete state of the environment at each time?)</li> |
| <li><b>Single agent or multiagent</b> (Are there more than one agent in the environment?)</li> | ||
| <ul> | ||
| <li>We say an environment is a multiagent environment if there is more than one agent operating in it otherwise we say the environment is sigle agent.</li> |
we say the environment is single agent.
| </ul> | ||
| <br> | ||
| <li><b>Single agent or multiagent</b> (Are there more than one agent in the environment?)</li> |
Using hyphen is better: single-agent and multi-agent
| <ul> | ||
| <li>In some cases, we can model our environment both as a single agent and multiagent environment. For example, imagine an automatic taxi agent. Should this agent treat the other cars as objects or as another agent? It's better to model our environment as a multiagent environment if the behavior of the other entities can be modeled as an agent seeking to maximize its performance measure which is somehow affected by our agent.</li> | ||
| <li>a multiagent environment could be competitive or cooperative or even a mix of both.</li> | ||
| <li><b>examples</b>: chess and automatic driving are multiagent environments. solving a crossword puzzle is a single agent environment.</li> |
| </ul> | ||
| <br> | ||
| <li><b>Episodic or sequential</b> (Is the agent's experience divided into atomic "episodes“ where the choice of action in each episode depends only on the episode itself?)</li> |
The double quotation marks are not in the same format: the first one is " while the other is “.
| <ul> | ||
| <li>We say an environment is episodic if the agent experience can be divided into atomic "episodes" In a way that the action taken in an episode is independent of the previous episodes actions.</li> |
| <li>We say an environment is sequential if the current decision could affect all future decisions. </li> | ||
| <li><b>examples</b>: Chess and automatic driving are sequential. a part picking robot is episodic.</li> |
| <li><b>Static or dynamic</b> (Is the environment unchanged while an agent is deliberating?)</li> | ||
| <ul> | ||
| <li>We say an environment is dynamic if it can change while the agent is deliberating.</li> | ||
| <li>There is a special case that the environment doesn't change but the performance score has a time penalty we call these environments semi-dynamic.</li> |
| <br> | ||
| <li><b>Discrete or continuous</b> (Are there a limited number of distinct, clearly defined states, percepts, and actions?)</li> | ||
| <ul> | ||
| <li>We say an environment's state is discrete if there are a finite number of distinct states otherwise we say the environment's state in continuous.</li> |
we say the environment's state is continuous.
| </ul> | ||
| #### Types of environment example | ||
| Here are a few examples of Identifying an environment's different dimensions. |
| | environment| Fully observable? | Deterministic? | Episodic? | Static? | Discrete?|Single agent?| |
| </ul> | ||
| ## Reflex agents | ||
| This is the simplest kind of agent. They choose their next action only based on their current percept. In other words, they do not consider the future consequences of their actions and only consider <b>how the world IS.</b> | ||
| As an example look at this Pacman agent below, at each turn the agent look at its surrounding and chooses the direction that has a point in it and stops when there are no points around it. |
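A minimal Python sketch of that reflex rule (the grid encoding, with "." for a point and "P" for Pacman, is an assumption for illustration):

```python
# Minimal sketch of the reflex Pacman described above: look only at the
# four neighbouring cells, move toward a point, stop if none is adjacent.
# Assumed encoding: "." = point, "P" = Pacman, " " = empty cell.

DIRECTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def reflex_pacman(grid, row, col):
    """Choose an action from the current percept (the four adjacent cells)."""
    for action, (dr, dc) in DIRECTIONS.items():
        r, c = row + dr, col + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == ".":
            return action
    return "stop"  # no points around -> the agent stops

grid = [[" ", ".", " "],
        [" ", "P", "."],
        [" ", " ", " "]]
print(reflex_pacman(grid, 1, 1))  # -> "up" (first adjacent point found)
```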
| ## Goal-based agents | ||
| This kind of agent has a specific goal and it tries to reach that goal efficiently. They have a model of how the world evolves in response to actions, and they make decisions based on (hypothesized) consequences of actions to reach their goal state. Search and Planning are two subfields that are closely tied with these kinds of agents. In other words, these kinds of agents act on <b>how the world WOULD BE.</b> | ||
| as an example look at this Pacman agent below. the goal is to collect every point. |
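A minimal Python sketch of such a goal-based agent: instead of reacting only to adjacent cells, it searches (breadth-first) for a shortest path to the nearest point, i.e. it plans using how the world would be after each action. The grid encoding mirrors the reflex sketch above and is my own assumption:

```python
# Minimal sketch of a goal-based Pacman: plan a shortest action sequence
# to the nearest point with breadth-first search over hypothetical states.
from collections import deque

DIRECTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def plan_to_nearest_point(grid, start):
    """Return the shortest list of actions that reaches a '.' cell."""
    queue, visited = deque([(start, [])]), {start}
    while queue:
        (row, col), path = queue.popleft()
        if grid[row][col] == ".":
            return path  # goal state reached in the agent's model
        for action, (dr, dc) in DIRECTIONS.items():
            r, c = row + dr, col + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and (r, c) not in visited:
                visited.add((r, c))
                queue.append(((r, c), path + [action]))
    return []  # no points left, nothing to do

grid = [[" ", " ", "."],
        [" ", "P", " "],
        [" ", " ", " "]]
print(plan_to_nearest_point(grid, (1, 1)))  # -> ["up", "right"]
```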
| ## Utility-based agents | ||
| This kind of agent like goal-based agents has a goal. But they also have a Utility function they seek to reach their goal in a way that maximizes the utility function. For example, think about an automated car agent. They are many ways for this agent to get from point A to point B. But some of them are quicker, safer, cheaper. The utility function allows the agent to compare different states with each other and ask the question how happy am I in this state. |
But they also have a utility function. Also, end that sentence with a period.
| In other words, this kind of agent act on <b>how the world will LIKELY be.</b> |
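A minimal Python sketch of such a utility function for the car example (the weights and route data are invented for illustration):

```python
# Minimal sketch of a utility-based choice between routes (made-up data).
# The utility function collapses time, safety and cost into one score,
# answering "how happy am I in this state?".

routes = {
    "highway":    {"time_min": 20, "safety": 0.7, "cost": 12.0},
    "back_roads": {"time_min": 35, "safety": 0.9, "cost": 6.0},
}

def utility(route):
    """Higher is better: reward safety, penalize travel time and cost."""
    return 100 * route["safety"] - route["time_min"] - 2 * route["cost"]

best = max(routes, key=lambda name: utility(routes[name]))
print(best)  # -> "back_roads" (utility 43 vs. 26)
```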
| ## Learning agents | ||
| This kind of agent usually has 4 parts. the most important two are "the learning element", which is responsible for making improvements, and the "performance element", which is responsible for selecting external actions. The learning element uses feedback from a "critic" on how the agent is doing and determines how the performance element, or "actor", should be modified to do better in the future. |
| The last part of these agents is the "problem generator" which is responsible for suggesting actions that will lead to new unexplored states. | ||
| These agents try to do their best by both exploring the environment and using the gathered information to decide rationally. one of the advantages of Learning agents is that they can be deployed in an environment that they don't have a lot of prior knowledge on. they will gain this knowledge over time by exploring that environment. |
Capitalize the sentence openings here: "One of the advantages of learning agents is ..." and "They will gain ...".
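A minimal Python skeleton of those four parts (every class and method name here is illustrative, not a fixed API):

```python
# Minimal skeleton of a learning agent's four parts (illustrative names).
import random

class LearningAgent:
    def __init__(self):
        self.action_values = {}  # knowledge maintained by the learning element

    def performance_element(self, percept):
        """Select an external action using what has been learned so far."""
        known = self.action_values.get(percept)
        return max(known, key=known.get) if known else self.problem_generator()

    def critic(self, percept, action, outcome):
        """Score how well the action did (stubbed: just pass the outcome on)."""
        return outcome

    def learning_element(self, percept, action, feedback):
        """Use the critic's feedback to improve the performance element."""
        self.action_values.setdefault(percept, {})[action] = feedback

    def problem_generator(self):
        """Suggest an exploratory action that may lead to unexplored states."""
        return random.choice(["left", "right", "suck"])

agent = LearningAgent()
action = agent.performance_element("room_A")        # nothing learned yet -> explore
feedback = agent.critic("room_A", action, outcome=1.0)
agent.learning_element("room_A", action, feedback)  # next time, exploit this
```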
| # Conclusion | ||
| We discussed the concept of an intelligent agent and the difference between a rational agent and a perfect agent. | ||
| then we talked about specifying the task environment for an agent and how can we categorize some main concepts of an environment. We also talked about some agent architectures that are commonly used. |
nimajam41 left a comment:
There are some problems with grammar, etc. that should be solved. But the content is good!