So we made a big deal about the importance of independence assertions in deciding the shape of the joint distribution. How come we didn't need to think about independence assertions while in propositional logic?

Rao

Hello,

We've had a few questions from students on Project 2, so we've made a small 'FAQs' section on the project page. Please take a look at it as it may answer some of your own questions. If you have others, please send them in to the TA help address. We will update the FAQs as we get more questions, so do keep track of that particular section.

Thanks.

Hi Srijith,

__If you are downloading the IPC3 domains__, you might want to try something from the "Tests1" or "Tests2" suite. "Tests3" may not always work. Also, **please stick to the "Strips" sub-folder** (since the others have some features that Sapa Replan may not currently support, and that we don't need for this project).

1. Yes, it supports preconditions. The problem with UMTS is the "or" that is present in the precondition line you've pasted below - Sapa Replan doesn't support those. Right now, if you want an action with 'or' semantics in its precondition, you have to implement it as two different actions with no 'or'.
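For illustration, here is how such an 'or' precondition might be split (a sketch with hypothetical `move` action names, not names from the project domain):

```lisp
;; Unsupported by Sapa Replan:
;;   :precondition (and (at ?p) (or (connected ?h ?p) (connected ?p ?h)))
;; The same semantics as two separate actions, one per disjunct:

(:action move-via-hp                  ; hypothetical name
 :parameters (?p ?h)
 :precondition (and (at ?p) (connected ?h ?p))
 :effect (and (not (at ?p)) (at ?h)))

(:action move-via-ph                  ; identical except for the precondition
 :parameters (?p ?h)
 :precondition (and (at ?p) (connected ?p ?h))
 :effect (and (not (at ?p)) (at ?h)))
```

The planner can then choose whichever of the two applies, which is exactly what the single disjunctive action would have allowed.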

More generally, for things of the nature of connected, if you don't want to manually encode (a conn. b) <-> (b conn. a), then you can think about some kind of 'domain axiom' action that fires and gives that effect. That is one way; the other, of course, is to just encode it manually.

2. Yes, you can give object types as parameters. See the sample files for examples. Not too sure what your question is ... if you mean "can I use one action for reporting in both rooms and hallways", yes, you can. Think about the line where I said they are both subtypes of the larger 'location' type. Does that help?
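A minimal sketch of that typing idea in PDDL (the predicate names here are illustrative, not from the project domain):

```lisp
(:types room hallway - location)   ; room and hallway are subtypes of location

(:action report
 :parameters (?loc - location)     ; ?loc binds to any room or any hallway
 :precondition (at ?loc)
 :effect (reported ?loc))
```

Because `?loc` is declared with the supertype `location`, a single `report` action covers both subtypes, with no need for separate report-room and report-hallway actions.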

Thanks.

On Sun, Feb 26, 2012 at 1:10 AM, Srijith Ravikumar wrote:

1. Does the planner jar you gave us support "or" in preconditions? I seem to get an error on the domains available at http://planning.cis.strath.ac.uk/competition/domains.html (UMTranslog-2)

Here is my code:

```
:precondition (and (at ?p) (or (connected ?h ?p) (connected ?p ?h)))
```

Output:

```
Exception while parsing domain robby!
edu.asu.sapa.parsing.ParseException: Encountered "(" at line 13, column 29.
Was expecting:
    ")" ...
	at edu.asu.sapa.parsing.PDDL21Parser.generateParseException(PDDL21Parser.java:4230)
	at edu.asu.sapa.parsing.PDDL21Parser.jj_consume_token(PDDL21Parser.java:4109)
	at edu.asu.sapa.parsing.PDDL21Parser.AllIConditions(PDDL21Parser.java:1063)
	at edu.asu.sapa.parsing.PDDL21Parser.InstantAct(PDDL21Parser.java:1139)
	at edu.asu.sapa.parsing.PDDL21Parser.Action(PDDL21Parser.java:843)
	at edu.asu.sapa.parsing.PDDL21Parser.ActionList(PDDL21Parser.java:835)
	at edu.asu.sapa.parsing.PDDL21Parser.parse_domain_pddl(PDDL21Parser.java:62)
	at edu.asu.sapa.Planner.main(Planner.java:73)
```

It works if I change the "or" to "and".

2. Is there a way to give "or" in the parameters? For example, Report needs to take in either a room or a hallway. Currently I solved it using two actions, report-room and report-hallway. I feel there should be a better way.

I checked the tutorials you attached at the bottom of the project description. I couldn't find a direct answer to my question there, which is why I am mailing you.

Regards,

Srijith Ravikumar

Homework 2 socket is open and there are three problems there on planning graphs, inference rules and propositional resolution refutation.

You should do the problems when the material is fresh in your minds. I can only guarantee 1 week lead time before I close the socket and make it due.

cheers

Rao

Hello,

We've put up a short 16 minute tutorial to help you get started with the project. We'd strongly recommend going through this before you start on the project, as it will answer a lot of your questions about PDDL and the project. The tutorial can also be accessed on the project 2 specification page.

Please start early!

Thanks.

For our next topic on probabilistic propositional logics, you can start reading chapters 13 and 14.

Sections 13.2--13.5 are a quick revision of your high school probability theory; we will go through it fast--you should read to make sure that your memory is refreshed.

Rao

Folks

Project 2 is available for preview from the projects page. It becomes officially assigned tomorrow, and will be due on March 8th.

Please note that this project doesn't require any coding, but rather requires writing a domain model (like we did for blocks world in the class), and feeding it to a planner we provide for you.

Please note that I do not plan to do an extension of the deadline for this project. We think it takes about one week, and so we are in a way already giving you double time.

Please don't wait until the last minute to get started--we really do not have the infrastructure to hold your hands the night before the deadline.

Regards

Rao

This is meant for the students in the 471 section.

I have funds from the National Science Foundation through their **Research Experiences for Undergraduates** program that can support undergraduate students at $10/hour, 10 hours/week, to work in my group and get research experience.

My group works in AI and Information Integration. With respect to this class (CSE471), the topic that is closest

to our research interests is planning (which we discussed, and you will do a project next).

If you either liked planning or are finding AI interesting, and want to explore the opportunity, talk to me.

A list of recent publications from the group can be found at http://rakaposhi.eas.asu.edu/papers.html

Rao

ps: A list of students in my group can be accessed by the URL

http://rakaposhi.eas.asu.edu/rao.html#students

-----

Subbarao Kambhampati

Just when you thought the cap was gone for good.. Here are some thinking cap questions you can think about/comment on:

1. If I have n propositions, how many possible clauses are there? (Assume that clauses of the form A V A are automatically simplified to A). Does this help in convincing you that the resolution theorem-proving search procedure is *complete* in that it will terminate whether or not the theorem you are trying to prove actually holds?
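If you want to sanity-check your pencil-and-paper count, a brute-force enumeration for small n is easy. This sketch uses one convention (a clause is a set of literals, so A V A is already simplified, and the empty clause and tautologous clauses are counted); whether you exclude those changes the exact number:

```python
from itertools import chain, combinations

def clauses(n):
    """Enumerate all clauses over n propositions, treating a clause
    as a set of literals (so A V A collapses to A automatically)."""
    literals = [(p, sign) for p in range(n) for sign in (True, False)]
    # every subset of the 2n literals is a distinct clause
    # (including the empty clause)
    return list(chain.from_iterable(
        combinations(literals, k) for k in range(len(literals) + 1)))

print(len(clauses(2)))  # 16 clauses for n = 2, i.e. 2^(2n)
```

The point for the thinking-cap question is simply that the count is finite, so a saturation-style resolution search has only finitely many clauses it could ever derive.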

2. Suppose you have a propositional knowledge base KB, and you just proved a theorem alpha. Now more facts are added to the KB to give KB'. Will the theorem you proved in KB still hold in KB'? What if the facts I added made KB' *inconsistent*? [The property we are talking of here is called "Monotonicity"--propositional logic is monotonic.]

2.1. Question 2 points out that propositional logic is no John Kerry (or Mitt Romney--for you dems)--in that it doesn't "flip-flop". But is not "flip-flopping" really all that great a property? Consider this. I tell you Tweety is a bird. Do you think Tweety flies?

Now I tell you Tweety is an ostrich. Do you think it flies? Now I tell you Tweety is a magical ostrich. Do you think it flies? Now I tell you Tweety just got cursed by the local witch/druid. Do you think it flies?

Do you think it would be good to not "flip-flop" in the example above?

3. The "informedness" of a heuristic is not as great an idea as it is touted to be. You will likely have found, during your project 1, that search with the manhattan distance heuristic sometimes does *more* expansions than search with the misplaced tiles heuristic! In this thinking-cap question you will try to understand why this happens.

Consider a search problem, for which you have two *admissible* heuristics. h1(n) is h*(n)/5 ; h2(n) is a constant C (except of course at goal nodes, where it has to be zero), such that h2(n) is always higher than h1(n). Which is more informed? Which do *you* think is a better heuristic?

In particular, given a heuristic h2 which is more informed than h1 (in that for all nodes n, h2(n) >= h1(n)), we can only show that the "interior f-contour" nodes expanded by search with h2 are also expanded by search with h1.

Specifically, consider a node n' such that f2(n') = g(n') + h2(n') < f* (where f*, the optimal cost of the goal node, is the same for both heuristics). So, n' will clearly be expanded by the search with h2. Since h1 <= h2, clearly f1(n') is also less than f*, and so search with h1 also expands node n'. This shows that the number of nodes on the interior f-contours for h1 can not be fewer than the nodes on the interior f-contours for h2. (An interior f-contour is the set of equi-f-value nodes with f-values less than f*)

The tricky part is that this subsumption doesn't necessarily hold for nodes that are on the "f* contour"--i.e. nodes that have f1 or f2 equal to f*. Based on the tie-breaking strategy, a subset of these nodes will likely be expanded before the search terminates. If the subset expanded by h2 is larger than that expanded by h1, search with h2 can do more expansions even though it is more informed.

See if you can think of possible reasons why this could happen. [hint: there is a slide in the informed search lecture--with annotation "advanced"--that gives the insights needed, if you are having trouble..]


Rao

At the risk of stating the obvious, note that you have to click the submit vote for your input

to be recorded.

rao

On Tue, Feb 21, 2012 at 9:38 PM, Subbarao Kambhampati <rao@asu.edu> wrote:

Folks:

Here is a mr. poll link where you can give anonymous text-based feedback on the course.

<Removed from Blog; see email for link>

I encourage you to participate in it.

Rao

Folks:


Here is a mr. poll link where you can give anonymous text-based feedback on the course.

<Link Removed from blog; see email for the link>

I encourage you to participate in it.

Rao

Before we forget all about the exam, I thought I should post the solutions. So I did.

There is both a khan academy style 41min recording of me developing the solutions (just 46MB--very easy download), and the pdf file that got developed that way.

Note that I only guarantee the completeness of the video--the doc is like the trace left on the board..

I highly encourage you to view the video.

Rao

ps: ..and you thought Bill Gates is busier than me! Tsk. Tsk. The guy is retired, you know...

I graded test 1. I will return them today in class.

Here are the statistics:

**471 section**: Avg 32.73, Stdev 17.37, Max 62, Min 4.5

**598 section**: Avg 47.6, Stdev 12.3, Max 64, Min 23.5

Rao

Instead of 1-2, I will be available 2:30--3:30 for office hours today

rao

The maximum possible points are 76.

The maximum homework scores are 65 and 64. After that are a handful of mid-50s. The median score is something like 43.

The mean is 39ish, the mean excluding the lowest scores is 41ish. The vast majority of the class scored in the mid 30s to mid 40s.

(The minimums are several 0s, of course, and among actual assignments are several sub-20 scores such as 6 and 12, as I recall.)

So a mid 40 is something like a B or B+. Low 50s and higher is definitely an A; I think there are about 7 such scores.

--

As I'm sure you can tell I began the grading in `reviewer' mode, i.e., how I would review a submission to an international conference or journal on artificial intelligence. This is a mistake which was fortunately halted before total catastrophe ensued. Please don't take my comments personally; indeed, feel free to entirely ignore them. Note that the grading is consistent and fair, at least, as well as can be expected from a human being. In particular the drastic change in the level of commentary between the first and second halves means little besides a drastic change in the level of commentary --- the grading itself was just as `harsh'.

It is worth noting that at least one (if not several) student(s) got a perfect (or above perfect) score on any given question (not the same students each time).

Also, the entire class average would likely go up by 10 or even 15 points had all the assignments been in fact complete.

The following is a break-down of how the final judgments were converted into points. This varies a little from question to question, but the general idea is that correctness is worth a point and justification is worth a point, and that each question no matter how large or how small was worth 2. Except question 1, where its 8 sub-questions are only worth 1 each.

--

All rounding is rounding up; all sub-scores are truncated to the maximum possible where scoring over maximum was possible.

Question 1: 8 possible points.

0/3 of a point for no example,

1/3 of a point for a flawed example,

2/3 of a point for a dubious example,

3/3 of a point for a fine example.

For any agent whatsoever in any domain, say X, a fine example consists of:

X with faulty/good sensors, with/without multiple agents, and with faulty/good effectors.

Other idioms are more appropriate for certain domains. For driving, for example, one might say:

Driving (not) in a blizzard, (not) on a closed course, and (not) drunk.

For poker:

Poker with(out) Extra-sensory perception, with(out) opponent's families as hostages, and drunk/sober. (Or dealer/not-dealer.)

Question 2: 6 possible points.

1/4 of a point for an incorrect answer with a faulty explanation.

2/4 of a point for a correct answer with faulty explanation, or, for an incorrect answer with a good explanation.

3/4 of a point for a correct answer with a dubious explanation, and in some cases perhaps even for an incorrect answer (with fantastic explanation).

4/4 of a point for a correct answer with correct explanation.

The correct answers are that accessibility and determinism depend on the agent design, respectively due to sensor and motor/effector quality, while staticness is an environmental parameter usually independent from agent design. (But there are exceptions.)

Questions 3 through 6: 22 possible points.

These are summed and rounded in one group. There are three possible marks in three possible positions:

A leading +/- indicates that the answer provided was deemed technically correct/incorrect. (X for not committing to a single clear answer.)

The following +/- indicates that the explanation provided was deemed adequate/inadequate. (X for not giving any explanation at all, or in some cases, an explanation worse than nothing at all.)

Each of those is worth a point.

Optionally a trailing +/- is a positive or negative half-point, modifying the score for the explanation.

All combinations are possible. In particular one could give an interesting explanation for a wrong answer and receive as much as 1.5 out of 2 points.

It is also possible to score 2.5 out of 2 points, which is the point of summing all of the questions together before rounding and truncating to the maximum score.

As the score "- - -" would be worse than no answer, these are first altered to "- - X", removing the negative modifier.

Most of the time the trailing modifiers cancel out or nearly so.

In theory one could have scored 2.5 points on questions 3 and 4, 10 points on 5.1 and 5.2, and another 2.5 points on question 6, which I may have neglected to truncate to the maximum score which should be possible on the entire set: 2+2+8+8+2 = 22. If you have a sub-score above 22 on that part, you might want to avoid bringing my attention to it ;).

Question 7: 8 points total.

7.1 used the scale above. The correct answer to 7.1 is of course DFS, because of its vastly superior space-complexity and no loss of quality under these ideal-for-DFS conditions.

The other 3 parts were marked using +/- letter grades with the following meaning, as I recall:

A+ = 2.5

A- to A = 2

B+ = 1.5

C+ to B = 1

D to C = 0.5

X = 0.

The order these are listed in is 7.2.1, 7.2.3, and 7.2.2. A+ on 7.2 was for drawing the tree to depth 3 proper (and getting everything else right).

The correct answers to 7.2.1 and .3 are virtually the same; a barren sub-tree is effectively a special case of a highly non-uniform goal distribution at the leaves. The grading was looking for non-uniformity as the reason to prefer iterative-broadening over DFS in 7.2.1, and for an appropriate connection to barrenness being made in the answer to 7.2.3. Many answers were extremely difficult to parse or understand. Using specific examples greatly benefited those students who did so.

It was also a good observation to note that: if the left-most goal is halfway through the leaves in such a way that it is found at the cutoff b'=b/2 then IBDFS has a huge (astronomical for, say, b=100) advantage. Note the solution guide suggests that the 3.1 etc. nodes are good for IBDFS --- this is incorrect. b'=b is the worst-case for IBDFS. For b=3 and d=3 there are actually only 4 possible leftmost goals for which IBDFS would perform better than DFS, and so, `on average' it likely would not perform better except of course that we have little idea what `on average' means except that it *isn't* a straight average over the leaves of the tree treating each as having equal likelihood of being the leftmost goal.

Questions 8-12: 32 possible points.

2 points per question mark plus 2 more for "Is the manhattan distance heuristic admissible." which I think was missing a question mark.

+ means full credit, i.e., 2.

- means partial credit, i.e., 1.

x means no credit.

These were all pretty straightforward.

-Will

Further notes on what was considered correct/incorrect:

Question 8 (2 points): There were quite a few similar strategies for attempting to prove question 8 using simpler arguments than the standard of "consider any other path at the time A* returns".

Virtually no one got points for this question.

Question 3 (2 points): In order of difficulty: partial-accessibility, complete inaccessibility, full accessibility. With partial-accessibility one has the option of designing the agent to pre-plan for contingencies, or re-plan when the unexpected happens. With no sensors, neither is possible, so neither need be implemented. Not implementing is of course easier than implementing.

Question 4 (2 points): Detail is not always helpful. Up to a point a better model can help an agent improve its performance --- but a model of the entire universe at nanometer resolution is too far. There are two points: the computational burden of too much detail; also, the more precise your model, the more likely it is that it is inaccurate, perhaps gravely so. (In learning theory this is called *overfitting*, and deliberate simplification of a model to avoid such is called regularization. In planning one would say abstraction rather than regularization.)

Question 5.2: There was a lot of confusion on this question. One point in particular is that the solution guide uses a very strict notion of what it means to achieve a goal -- ruling out stochastic achievement almost as a matter of definition. It also isn't very clear that A2 isn't meant to be doubly handicapped.

Question 6 (2 points): IDDFS is a complete algorithm and so cannot possibly be performing infinitely more work than DFS --- the conclusion doesn't make sense. The actual ratio is the ratio of the sum of the integers from 1 to D (which is (D+1) choose 2, i.e. D(D+1)/2) to just D, which is (D+1)/2. (Where D=d+1, the number of vertices in the single path that is the entire search tree.)
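That ratio is easy to confirm mechanically; a quick sketch for the degenerate single-path search tree:

```python
def dfs_expansions(D):
    # the whole search tree is one path of D vertices; DFS visits each once
    return D

def iddfs_expansions(D):
    # depth-limited search with limit k re-visits the first k vertices,
    # so iterating k = 1..D costs 1 + 2 + ... + D = D*(D+1)/2
    return sum(range(1, D + 1))

for D in (3, 9, 99):
    # the overhead ratio is exactly (D+1)/2
    assert iddfs_expansions(D) / dfs_expansions(D) == (D + 1) / 2
```

So for a path of 9 vertices, IDDFS does 45 visits against DFS's 9: five times the work, not infinitely more.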

EC (4 points): The ratio (b+1)/(b-1) stems from an average-case analysis, which involves much more annoying algebra than the far easier worst-case analysis.

I think that everyone who made a serious attempt got at least 1 point? Elegant pseudo-proofs got 2.


Hello:

We got a question yesterday as to whether you can submit your project both today and next Tuesday. If the announcement was not clear enough, the answer is: please submit your project only once. If you have already submitted (I saw some submissions on Blackboard), you are done.

Best,

Tuan

On 13 February 2012 12:01, Tuan A. Nguyen <tanguye1@asu.edu> wrote:

Hi all: I have created a link on Blackboard so that you can submit your Project-1 *code* (your report should be submitted before class, and in hard-copy). Please name your file as <first_name>_<last_name>.lisp. And as a reminder: the "early" deadline is 4:30pm, 2/14; and the extended one is exactly one week later. Please let me know if you have any question. Thanks, Tuan

Hi all:

I have created a link on Blackboard so that you can submit your Project-1 *code* (your report should be submitted before class, and in hard-copy). Please name your file as **<first_name>_<last_name>.lisp**.

And as a reminder: the "early" deadline is 4:30pm, 2/14; and the extended one is exactly one week later.

Please let me know if you have any question.

Thanks,

Tuan

Folks

As you know if you attended the planning graph class and/or viewed the video afterwards, our next topic is knowledge representation--and in particular representations based on logic. Please read Chapter 7 (3rd ed--or equivalent in earlier editions).

Rao

Dear all:

In view of the continuing questions about project 1, we will extend the deadline for submission by one week (i.e., due 2/21).

I realize that deadline extensions like this can seem unfair to those who actually structured their lives to get the project done by the original deadline. In view of this, we will give 5% extra credit to anyone who turns in the completed project tomorrow (note that this will be 5% of the grade you get on the project--so turning in an empty project will only get you 0 extra credit).

To those of you who are just about done, you might consider using the additional time to improve your analysis and/or do additional extra-credit tasks.

regards

Rao

Hello all:

I hope you are doing well on your project 1, which will be due on the lovely *Valentine's Day.*

You will need to submit both the code and a hard copy of your report. I will send another email on how to submit your code, but for your report please make sure that it includes:

(1) the commented code,

(2) the sample runs for input cases,

(3) the statistics as required in the project description, and

(4) your analysis based on the results.

Although there is no specific list of points you must address in your report, it should at least show that you understand the search algorithms and the differences between them, and give us a feeling that your code actually works (which will also be tested). Here are some guidelines:

- What are your observations on the performance of the search strategies (both individually and in comparison to each other), in terms of running time, space, completeness, the admissibility/informedness of heuristics, etc.? Do they actually match the theory you learned in class?

- Displaying the results on graphs, especially for those comparing different approaches, would be very helpful.

- Some runs to show that your code is correct (part of the trajectory, for example).

- Any suggestion on how to improve the algorithms, data structures etc.

- Write your report as clearly as possible (for the code, a fixed-width font like Courier reads well, and you can reduce the font size and line spacing as well).

Also, please print your report double-sided.
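As an illustrative sketch of the kind of statistics the guidelines mention (this is not part of the project skeleton, and the grid domain and function names here are purely hypothetical), one way to tabulate node expansions for two strategies on a toy problem:

```python
# Hypothetical illustration: comparing node expansions of BFS and A* (with a
# Manhattan-distance heuristic) on a small empty 4-connected grid.
from collections import deque
import heapq

GRID_W, GRID_H = 10, 10
START, GOAL = (0, 0), (9, 9)

def neighbors(pos):
    x, y = pos
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
            yield (nx, ny)

def bfs(start, goal):
    """Breadth-first search; returns (path cost, nodes expanded)."""
    frontier, seen, expanded = deque([(start, 0)]), {start}, 0
    while frontier:
        pos, g = frontier.popleft()
        expanded += 1
        if pos == goal:
            return g, expanded
        for nxt in neighbors(pos):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, g + 1))

def astar(start, goal):
    """A* with the (admissible) Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    # Heap entries are (f, -g, pos): ties on f are broken toward deeper nodes.
    frontier, best_g, expanded = [(h(start), 0, start)], {start: 0}, 0
    while frontier:
        f, neg_g, pos = heapq.heappop(frontier)
        g = -neg_g
        if g > best_g.get(pos, float("inf")):
            continue  # stale heap entry
        expanded += 1
        if pos == goal:
            return g, expanded
        for nxt in neighbors(pos):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), -(g + 1), nxt))

for name, algo in (("BFS", bfs), ("A*", astar)):
    cost, expanded = algo(START, GOAL)
    print(f"{name}: path cost {cost}, nodes expanded {expanded}")
```

Both strategies find the same optimal cost, but A* expands far fewer nodes; numbers like these, plotted over a range of problem sizes, are exactly the kind of comparison the report should contain.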

Thanks,

Tuan

Folks

I hope you all remember that there will be a test on Thursday during class time.

The test will be closed book and closed notes. It will be a short exam (by my standards anyway) and you should have plenty of time to complete it. (It will also be the first ever "Made in Taiwan" exam that many of you may have taken ;-)

Rao

The video and audio are online.

Thanks to the folks who showed up. The others should make sure to watch the lecture so they can do the planning graph homework problems (which will be included in the next graded homework). [As I said, what I covered in this lecture is not going to be on the test on Thursday.]

Rao

BYENG--just half floor above our class room, next to the advising office.

rao

On Fri, Feb 3, 2012 at 12:12 AM, Ivan Zhou <izhou@asu.edu> wrote:

Is this BYENG or BYAC?

On Thu, Feb 2, 2012 at 6:12 PM, Subbarao Kambhampati <rao@asu.edu> wrote:

Just a reminder that Friday (tomorrow's) make-up class is going to be in BY 210. It will be from 10AM to 11:15AM.

Rao


Folks

**You are strongly encouraged to look at the solutions once before Tuesday and go to Tuesday's optional class if you have questions. The TA will be there to help go over them with you. You can also go to the TA office hours.**

I posted the solutions to homework 1 on the homework page

I also posted the solutions to the progression and regression part of the planning question. These parts are fair-game for the Thursday test.

I didn't post the solutions to the rest of the planning questions--especially on planning graph--because it is not going to be on the test and I want to make it part of a required graded homework later.

By the way, your graded homeworks are not likely to be returned in time for Thursday's test.

cheers

Rao

Just a reminder that Friday (tomorrow's) make-up class is going to be in BY 210.

It will be from 10AM to 11:15AM.

Rao

Folks

I added a planning question to the homework (see http://rakaposhi.eas.asu.edu/cse471/hw1-s12-planning-qn.txt )

This part will not be graded (i.e., you don't need to submit it), but you should do it so you have an idea of the kinds of questions that can be asked on the planning topics covered yesterday and tomorrow.

I will supply the answers by next week.

Rao

Folks

You may want to note that Qn 11 on the homework, i.e., the one at http://rakaposhi.eas.asu.edu/cse471/hw2-f06-qn3-4.pdf, is taken directly from a previous midterm. So it gives you an idea of what the exam questions will look like.

Also, I might ask True/False *with explanation* short answer questions such as the ones below:

For each of the following statements, indicate whether the statement is true or false, and give a brief but precise justification for your answer. Correct answers *with correct justifications* will carry 2 points. No points will be awarded for answers without correct justifications.

Example qn: The time and memory requirements of IDA* can be improved by using the A* algorithm to do the search in individual iterations.

Answer: False. Because A* in the worst case can take as much memory as breadth-first search, using A* in the individual iterations will make IDA* require exponential memory (instead of linear memory).

A. A* search does b^(d/2) node expansions when searching a uniform tree of branching factor b and depth d, using a perfect heuristic.

B. Consider a uniform search tree of depth d and branching factor b, where there are many goal nodes, all of which are uniformly distributed at the leaf level d. Assuming that memory consumption is not a problem, we are better off using breadth-first search than depth-first search in this scenario.

C. A* search with heuristic h=0 will always have to search the entire tree before finding the optimal solution.

D. Suppose A* search uses an evaluation function f(n) = (1-w) g(n) + w h(n). For any value of w between 0 and 1 (inclusive), A* will terminate and return an optimal solution.

E. If h1 and h2 are two admissible heuristics, and h3 is defined as h3(n) = max(h1(n), h2(n)), then A* search with h3 is guaranteed to return an optimal solution while expanding as many or fewer nodes than with either h1 or h2.
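To get a feel for a statement like E, you can try it out empirically. The sketch below (purely illustrative; the grid domain and names are my own, not from any assignment) uses two admissible heuristics on an empty 4-connected grid, distance along each axis alone, and their pointwise max, and counts A* node expansions for each:

```python
# Hypothetical sketch for statement E: h1 and h2 are admissible (each is a
# lower bound on the true Manhattan distance), and h3 = max(h1, h2) is also
# admissible and dominates both, so A* with h3 should expand no more nodes.
import heapq

W, H = 10, 10
START, GOAL = (0, 0), (9, 9)

def astar(h):
    """A* on the grid; returns (path cost, nodes expanded) using heuristic h."""
    # Heap entries are (f, -g, pos): ties on f are broken toward deeper nodes.
    frontier = [(h(START), 0, START)]
    best_g = {START: 0}
    expanded = 0
    while frontier:
        f, neg_g, (x, y) = heapq.heappop(frontier)
        g = -neg_g
        if g > best_g.get((x, y), float("inf")):
            continue  # stale heap entry
        expanded += 1
        if (x, y) == GOAL:
            return g, expanded
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < W and 0 <= ny < H and \
                    g + 1 < best_g.get((nx, ny), float("inf")):
                best_g[(nx, ny)] = g + 1
                heapq.heappush(frontier, (g + 1 + h((nx, ny)), -(g + 1), (nx, ny)))

h1 = lambda p: abs(p[0] - GOAL[0])   # admissible: counts only the x distance
h2 = lambda p: abs(p[1] - GOAL[1])   # admissible: counts only the y distance
h3 = lambda p: max(h1(p), h2(p))     # still admissible, dominates h1 and h2

for name, h in (("h1", h1), ("h2", h2), ("h3=max(h1,h2)", h3)):
    cost, exp = astar(h)
    print(f"{name}: cost {cost}, expanded {exp}")
```

All three runs return the same optimal cost, and the max-combined heuristic expands no more nodes than either component, which is the behavior statement E is probing (the formal argument is that a dominating admissible heuristic never forces A* to expand extra nodes).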

Folks

Too many people looked lost yesterday during the discussion of state-variable models and progression.

I will be continuing with the discussion of regression and planning graph heuristics tomorrow.

Please make sure to read ahead, even if cursorily, so you have some bearings during the lecture.

[Note that yesterday's and tomorrow's material will be included in the test]

rao
