Monday, April 30, 2012

A little levity after the final

I saw this and thought I had to share it with the class. Enjoy.

Homework 3 Grading

Folks,

Homework 3 has been graded (the points were entered last week before the grade snapshot was sent to you) and can now be picked up from the AI lab at BY 557 (the homeworks are in a pile on the circular table).

The grading scheme was as follows:

A. 2+3
B. 2+3+5
C. 5
D. 4+1
E. 2+3
F. 5
G. 1+2+5
H. 3+2+2

Total: 50

If you have questions about the grading, please let me know soon. You can send me an email to set up a time to meet if required.

Thanks
--
Kartik Talamadupula

Re: CSE471: Where to Turn In Final

Also, I understand that the dept office will be closed from 4pm today (Monday). If you come after 4, you can slide your exam under my office door.

Rao


On Mon, Apr 30, 2012 at 3:56 PM, Subbarao Kambhampati <rao@asu.edu> wrote:
Yes. If I am not in my office, you can turn it in at the main office and ask them to put it in my mailbox (they will stamp it and put it in my mailbox).

Rao


On Mon, Apr 30, 2012 at 3:47 PM, Kevin Wong <kkwong5@asu.edu> wrote:
Hello,

Will the take-home final exam be turned in at your office?

Thank you



Sunday, April 29, 2012

Missing image in part G of homework 3 Re: CSE 471: Homework 3 solution

Here is the image that was missing.. I am just simulating XOR with OR and AND gates (both of which are linearly separable and so can be simulated by
perceptrons)
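(In case the image still fails to load for someone: below is a minimal Python sketch of the same construction. The threshold weights are ones I picked for illustration, not necessarily the ones in the posted solution.)

    # XOR(x1, x2) = (x1 OR x2) AND NOT (x1 AND x2): two perceptrons feeding a third.
    def perceptron(weights, threshold, inputs):
        # Classic threshold unit: fires iff the weighted sum exceeds the threshold.
        return int(sum(w * x for w, x in zip(weights, inputs)) > threshold)

    def xor(x1, x2):
        h_or = perceptron([1, 1], 0.5, [x1, x2])    # OR: linearly separable
        h_and = perceptron([1, 1], 1.5, [x1, x2])   # AND: linearly separable
        return perceptron([1, -1], 0.5, [h_or, h_and])  # OR but not AND

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, xor(x1, x2))  # prints 0, 1, 1, 0 as the XOR column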

Rao


On Sun, Apr 29, 2012 at 12:56 PM, Phien Pham <pdpham@asu.edu> wrote:
Hi,
In the solution for part G of homework 3, the image cannot be displayed. Can you check this, please?
Thank you

--
Phien Pham


Saturday, April 28, 2012

Reminder about the "NO CONSULTATIONS WITH OTHER STUDENTS" clause of the take-home

Folks

 I just wanted to repeat that the final take-home must be done by you without consulting anyone (other than requesting clarifications from me).
You can use the class resources such as the text, notes, videos, etc., but you are not allowed to trawl for answers on the whole wide web. Your signature on the front page
of the exam is your oath that you followed the rules.

This is not a homework, but a test that gives you ample time and learning opportunity. 

I reserve the right to interview you on your answers to verify your understanding.

regards
Rao

Friday, April 27, 2012

Final Exam released..

Folks

 The final exam is available at http://rakaposhi.eas.asu.edu/cse471/final-12.html 

Please note that there may be a few changes and potential additions. 
In the interests of giving you plenty of time to work on the exam, I am releasing the exam right now rather than waiting until it is completely set in stone.

regards
Rao




Thanks for all the ballot stuffing ;-)

To 
  The students who took classes with me in 2011-12:


Dear all:

 I have been told by the department that I was voted by the students as the best teacher in CSE for the 2011-12 year.

Assuming that my constituency would have been the students who took classes with me this year, which includes you, I would like to thank you for all the energetic ballot stuffing ;-)

I am supposed to get the (no doubt very substantial cash) award on Monday during the CIDSE awards shindig.


Cheers
Rao

 

Thursday, April 26, 2012

Heads up on Final Exam release..

Folks

 I am planning to release the final exam tomorrow morning (Friday) and it will be due Tuesday.

Rao

Wednesday, April 25, 2012

Cumulatives with homework 3 grades

Folks

HW3 has already been graded. 

 Attached please find the cumulatives with the homework 3 grade included.

Rao



Monday, April 23, 2012

Fwd: course evaluations

As per the message attached below, the Dean requested that we remind you about taking part in teaching evaluations. So, I am sending you this message. 

I do take your comments on the teaching evaluations seriously and thus encourage you to take the time to complete the evaluations. 

For those of you who are new to ASU, note that evaluations are returned to us, in an anonymized form, well after your grades are reported. So your comments cannot affect your course grades in any way.

regards
Rao


---------- Forwarded message ----------
From: James Collofello <JAMES.COLLOFELLO@asu.edu>
Date: Mon, Apr 9, 2012 at 9:20 AM
Subject: course evaluations
To: "DL.WG.FSE.Faculty" <DL.WG.FSE.Faculty@mainex1.asu.edu>


Colleagues,

 

I sent the message below to all of our students.  Please reinforce this messaging with your students.

 

Engineering Students,

 As this semester comes to an end, you will soon be receiving requests to complete engineering course evaluations.  We are committed to continuously improving the educational experience of our students.  Your course evaluations provide important input to this process which is reviewed by Program Chairs and School Directors as they evaluate curriculum.  Your course evaluations are also an important component in faculty performance evaluations impacting decisions regarding reappointment, tenure, promotion and merit.  Please complete your evaluations.

 

James S. Collofello

Associate Dean of Academic and Student Affairs

Professor of Computer Science and Engineering

School of Computing Informatics and Decision Systems Engineering

Ira A. Fulton Schools of Engineering

Arizona State University

 


Agenda for tomorrow's class---is in your hands..

Folks

 Tomorrow being the last meeting of CSE471, I have two alternative agendas we can follow:

1. You can come equipped with questions about the whole semester (questions about connections between topics,
about the state of the art in various areas, or about deeper issues in those areas are welcome; very specific questions of the
"can you show me one more worked-out example of X" variety are probably better handled during office hours)

2. I can do a fast introduction to first-order logic--a topic I normally cover but didn't get to this year.

I am fine with either alternative. You can vote with your questions (or lack thereof, as the case may be).

You have been forewarned (and I hope you won't be forearmed). 

Rao


Saturday, April 21, 2012

Re: Question related to previous lecture.



---------- Forwarded message ----------
From: William Cushing <william.cushing@gmail.com>
Date: Fri, Apr 20, 2012 at 11:25 PM
Subject: Re: Question related to previous lecture.
To: Subbarao Kambhampati <rao@asu.edu>
Cc: Juan Guzman <jcguzma1@asu.edu>


Well I suppose Rao CC'd me since I know a bit about games.  So here's info...but perhaps more than you wanted...

Opponent modeling is (very, very, ...) hard; an opponent can figure out that you've figured them out (or just get worried about the possibility) and suddenly change their style of play (for example, play randomly for a spell).

Last time I checked the case of Poker, which was perhaps as much as 5 years ago (or more!), the state-of-the-art had shifted back to not using opponent modeling, partially because of this sort of issue,
and partially because the state-of-the-art in finding the optimal line of play had advanced.
Specifically, the advance that I'm aware of is to use state abstraction in order to produce a smaller, simpler game---
one that can be brute-force solved for the optimal strategy.
In that instance, this approach beat the then-reigning approach based on opponent modeling.

One thing to keep in mind about Poker is that `bluffing' is a part of optimal play---in the Nash sense---and is something that two infinitely computationally powerful and rational agents would do to one another.
In *real-life* poker, the people who make the money are the people who ``play the odds''. 
It is only in popular culture that we imagine Poker to be a game largely about opponent modeling. 
In truth it is a fantastic exercise in relatively complicated statistical theory---and self-control.
The practical variant is of course investing in the stock market.

The most interesting work in opponent modeling that I'm aware of is for Rock-Paper-Scissors.
Here the Nash strategy is easy to compute, so, in a tournament setting, if you want to win the tournament, you *have* to be better at opponent modeling than everyone else is.
That is, the Nash player(s) for RPS will place in the dead-middle of a tournament.
Those who attempt to win will make profit off of each other; half will place above average, and half below.
The winner from the year I looked at in detail is called Iocaine (a reference to The Princess Bride, and descriptive of its algorithm).
You may have seen the relevant Star Trek episode in which Data plays against a grand master of a made-up game (Strategema).
(The lesson is that the only way to lose is to try to win.)
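(A tiny numeric check of that claim, with a made-up opponent mix: the uniform Nash strategy is guaranteed not to lose in expectation, but it also cannot profit from a biased opponent the way a modeler can.)

    # Payoff to the row player: 1 = win, -1 = loss, 0 = tie.
    PAYOFF = {("R", "R"): 0, ("R", "P"): -1, ("R", "S"): 1,
              ("P", "R"): 1, ("P", "P"): 0, ("P", "S"): -1,
              ("S", "R"): -1, ("S", "P"): 1, ("S", "S"): 0}

    def expected_value(mine, theirs):
        # Expected payoff of one mixed strategy against another.
        return sum(p * q * PAYOFF[(m, t)]
                   for m, p in mine.items() for t, q in theirs.items())

    nash = {"R": 1/3, "P": 1/3, "S": 1/3}
    biased = {"R": 0.6, "P": 0.3, "S": 0.1}   # hypothetical exploitable opponent

    print(expected_value(nash, biased))        # ~0.0 against any opponent mix
    print(expected_value({"P": 1.0}, biased))  # 0.5: the modeler makes the profit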

--

Final word on opponent modeling: No one really wants to play in such lopsided settings.
Either the stronger player is teaching, in which case the point is far from trying to punish the student as much as possible for making mistakes,
or a handicap is imposed---which is still a form of teaching, in that the stronger player isn't going to be upset by losing (but they also aren't going to make any hasty or kind moves).
Kasparov against a not-chess-genius 10-year-old at a two-rook and queen handicap, for example, could be excused for taking 10 minutes to think.

Some like to look at a handicap as a challenge in opponent modeling; how can you provoke the weaker player into mistakes, working towards getting rid of their material advantage?
But the better way is just to play normally, and let the mistakes happen naturally, if they do.
If they don't, then play another game, and reduce the handicap :).

Long story short, even when you far outclass your opponent in domain knowledge or raw computational ability, in almost all situations (but not tournament-RPS) it still makes sense to pretend that you don't.

-Will

P.S. Oh, and regarding Deep Blue, basically IBM hired every grandmaster they could get their hands on, especially those with the most experience playing Kasparov, and they collaborated on extensive study of all of his games ever recorded (something such masters of course had already spent years of their life doing), largely to produce what is known as an ``opening book'' geared especially towards playing Kasparov, that is, one containing the greatest pertinent details on every game he has ever played (and has since been analyzed to death).

I think little about the hardware or algorithms was very specifically targeted at Kasparov; after all, how can you?  He wouldn't be best if he had blind spots that anyone else can identify.  Recently much that was mysterious about Deep Blue is no longer `classified', but still much about the whole affair is hush-hush.  Whether or not that opening book helped at all is debatable; obviously the games were not repeats of past games.

From what I understand, the modern assessment is that Kasparov was defeated by his own humanity; the final games he lost in frustration, that is, it seems he should have been able to force a win if not for the unusual stress he had no experience in dealing with.  (The type of stress is hilarious: he couldn't stare-down Deep Blue; his inability to intimidate the opponent befuddled him, or so the story goes.  The poor technician tasked with implementing the computer's moves on the physical board describes his fear and trepidation upon approaching Kasparov quite colorfully.)

Modern chess-playing software is largely held to have surpassed Deep Blue, without resorting to special hardware, and without any concerns about being opponent-specific.
I don't know what the current reigning champ is, but for a spell I know Fritz was used as a study tool by most grandmasters (read: "It is stronger than they are.").



On Fri, Apr 20, 2012 at 5:51 PM, Subbarao Kambhampati <rao@asu.edu> wrote:
Opponent modeling is done a lot of the time (and for games like Poker, it is pretty much de rigueur).

It may be less critical in chess where, at least at the grand master level, "optimal opponent" is a reasonable assumption.

I don't offhand know whether Deep Blue did opponent modeling to any extent--the classic reference on
Deep Blue is this paper:


Rao


On Fri, Apr 20, 2012 at 5:02 PM, Juan Guzman <jcguzma1@asu.edu> wrote:

I had this thought during class, but I wasn't sure if it was necessarily relevant to the lecture.
Do agents like Deep Blue always assume an optimal opponent such as Kasparov? It seems to me that one would be able to deduce the expertise level of an opponent using the existing information of the game tree and analyzing the opponent's moves. Like the example you gave in class: if Kasparov took 10 minutes to make a move against a 5-year-old, we would consider it silly. If Deep Blue saw that the opponent kept making moves that were highly in its favor (mistakes), could we use that information to make the agent execute moves more suited to the situation? Rather than assuming that it's playing against a grandmaster and using min-max, could we calculate the possibility of the opponent making an arbitrary (non-optimal) move and make "bolder" moves as a result? Or does Deep Blue already make optimal moves regardless of the skill level of opponents?

- Juan
Sent from my mobile device






Thursday, April 19, 2012

*Mandatory* Interactive Review Question (Must answer as a comment on the BLOG)

As mentioned in the class today, all of you are required to answer the following Interactive Review question on the Blog.
It has to be done by 2PM on Tuesday (notice that this is *before* the start of the last class).

===========

List five or more non-trivial ideas you were able to appreciate during the course of this semester. 

(These cannot be gratuitous jargon dropping of the "I thought Bayes Nets were Groovy" variety--and have to include some justification).  

The collection of your responses will serve as a form of interactive review of the semester.

=============


If you need to refresh your memory, the class notes page at http://rakaposhi.eas.asu.edu/cse471 has descriptions of what was covered in each of the lectures.

Rao

A cool Java applet for Chess that shows the computer "thinking" its moves using min-max and alpha-beta & Quiescence search

Here is a cool applet that allows you to play chess with the computer, and shows all the moves that
the computer is considering and their relative strength..


See the "About" link on the page for information on what they use..

Rao

Office hours 2-3pm today..

Folks

 I have to attend an event until 1:30pm today, so I will hold office hours 2-3pm today.

rao

Monday, April 16, 2012

Bayes Net Project Answers and Stats

Max Possible Points: 69

Undergrad:
 max 64.5
 min 42
 avg 56.5
 stddev ~7.2

Grad:
 max 67.5
 min 41.5
 avg 61
 std dev ~7

--

Check marks are one point, slashes are half points, and x's are no points. 
Plus and Minus are for exceptional answers.

Showing the screenshots is worth a point, except for part 3, which is worth two points, and part 2, which is worth no points.
Every `interesting' number in a CPT is worth a point.
Short answer questions are one point each.

The following is a detailed breakdown, meant only as a reference, not gospel --- it may contain errors since I wrote it from memory.

--

Question 1:
The correct numbers are straightforward, very few lost points. 
1 point for the diagram, 10 more for the CPTs, 5 for the calculations, 2 for the short answers.

Intuition:
Our belief in inferior plutonium should increase if we see that the slushies are liquefied, since a possible cause of that is core meltdown, and in turn, a possible cause of core meltdown is inferior plutonium.
Of course, if we directly observe a core meltdown then our belief in inferior plutonium should only be that much stronger.
At that point the state of the slushies is irrelevant, since slushie liquefaction is just a poor man's test for predicting whether core meltdown occurred.
Interestingly, if we know that a core meltdown has occurred and the water is poor quality, then we already have a good explanation for the core meltdown, and so our belief in a possible fault with the plutonium should be somewhat lessened.
(For two things to both have gone wrong at the same time is harder to believe than just one thing going wrong!)

D-SEP: Regarding the irrelevance of slushies once we know core meltdown: the formal test is to cut every path between the two nodes (with or without evidence, depending on the type of path).
There is just one path connecting IP and SL, going through CM, and the configuration is not -> CM <-.  So knowing CM cuts that, and thus all, paths between IP and SL, and we have that they are D-separated.
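(All of these intuitions, including the D-separation claim, can be checked numerically by brute-forcing the joint distribution. A small Python sketch; the CPT numbers below are made up for illustration--they are not the assignment's actual values--but the qualitative behavior is the same.)

    from itertools import product

    # Network: IP, LW -> CM -> SL. Hypothetical priors and CPTs.
    P_IP, P_LW = 0.2, 0.3
    P_CM = {(True, True): 0.95, (True, False): 0.8,
            (False, True): 0.7, (False, False): 0.05}   # P(CM | IP, LW)
    P_SL = {True: 0.9, False: 0.1}                      # P(SL | CM)

    def weight(ip, lw, cm, sl):
        # Probability of one complete world under the factored joint.
        w = (P_IP if ip else 1 - P_IP) * (P_LW if lw else 1 - P_LW)
        w *= P_CM[(ip, lw)] if cm else 1 - P_CM[(ip, lw)]
        w *= P_SL[cm] if sl else 1 - P_SL[cm]
        return w

    def prob(query, evidence=lambda *w: True):
        worlds = list(product([True, False], repeat=4))
        num = sum(weight(*w) for w in worlds if evidence(*w) and query(*w))
        return num / sum(weight(*w) for w in worlds if evidence(*w))

    ip = lambda ip, lw, cm, sl: ip
    print(prob(ip))                              # 0.20: the prior
    print(prob(ip, lambda i, l, c, s: s))        # ~0.40: SL observed, belief rises
    print(prob(ip, lambda i, l, c, s: c))        # ~0.46: CM observed, rises more
    print(prob(ip, lambda i, l, c, s: c and s))  # ~0.46: SL adds nothing given CM
    print(prob(ip, lambda i, l, c, s: c and l))  # ~0.25: LW explains CM away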

--

Question 2:

All but a handful got this wrong, so, a little theory first.
There are two key words: "perfect" and "exhaustive".
For a cause to be perfect means that it produces its effects with 100% probability; in propositional logic, a forward implication.
For causes to be exhaustive means that the effect occurs with 0% probability in the absence of all causes.  In propositional logic, one could write a backwards implication (the converse), or the inverse.

So, encoding 1 (effects+converses):
 (LW or IP) <=> CM
 CM <=> GD
 CM <=> SL

Or encoding 2 (effects+inverses):
 (LW or IP) => CM
 CM => GD
 CM => SL
 not (LW or IP) => not CM
 not CM => not GD
 not CM => not SL

There were 6 possible points for the encoding, 3 points for the calculations, and 2 points for the short answers.

Interpretation:
1) If SL, since causes are exhaustive, a cause had to be true, and there is only one: core meltdown.  Again there must be a cause, but, there are two possible causes.  In the absence of any further evidence, the best we can do is count the situations where IP is true versus false and infer the appropriate ratio, i.e., the best we can do is compute the probability.  Assuming you didn't change the priors, the number is 0.52.
2) As before, but now we have additional evidence ruling out bad water as a cause.  Left with only one possible explanation, again by the assumption of "exhaustive", it must be that the plutonium was bad.  So the probability is 1.0.
3) The probability is 0, because there is no possible world in which IP holds given the evidence.  This query is special though, because the probability of any and all queries given the evidence ~GD and SL is 0. 
By exhaustive causes and SL, CM must be true.  By perfect causes, then GD must also be true.  But since ~GD is given, there cannot be any world satisfying any property. 

Relation to Propositional Logic:
1) We could count possible worlds even in propositional logic, but, not in a `weighted' way (otherwise we are just doing straight up probabilistic propositional calculus already). 
One can get the tool to tell you what the right numbers are by giving priors on IP and LW that are fair coin flips.
2) Exactly the same as in the probabilistic case, formally: SL => CM and SL gives CM; CM => (LW or IP) and CM gives (LW or IP), finally (LW or IP) and ~LW gives IP.
3) Given a contradiction, we can infer anything we like, the opposite of the Bayes Net situation.  (In the net, IP either true or false is probability 0.  In the propositional encoding, one could infer that it is simultaneously true and false.)
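(Both of the boundary cases above can be sanity-checked mechanically by enumerating models of encoding 1; a small Python sketch, standing in for the actual tool:)

    from itertools import product

    # Encoding 1 from above: effects plus converses.
    def kb(lw, ip, cm, gd, sl):
        return ((lw or ip) == cm) and (cm == gd) and (cm == sl)

    worlds = [w for w in product([False, True], repeat=5) if kb(*w)]

    # Query 2: given SL and ~LW, IP should be entailed (probability 1.0).
    consistent = [w for w in worlds if w[4] and not w[0]]
    print(all(ip for (lw, ip, cm, gd, sl) in consistent))  # True

    # Query 3: given ~GD and SL, there should be no satisfying world at all.
    print([w for w in worlds if w[4] and not w[3]])        # [] -- contradiction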


One could of course still get points for shorter answers, I am just being verbose.  It was very unlikely to get points for either interpretation or relation to propositional logic if your encoding was wrong to begin with.

As a total freebie the CPTs were given 10 points for this question, where one was even permitted to change the priors around somewhat on IP and LW.

--

Question 3:

2 points for giving GD just the parent CM, LW the parent CM, and IP the parents LW and CM.
1.5 points for extra edges.
Less for downright wrong networks.

In the ideal network there are 11 probabilities that need to be assessed, and there are 11 points awarded if your network includes those values in the right places (duplicated however many times needed by redundant parents).
Note that you can use the original network as an oracle for more than just values --- you can also test conditional independence assumptions in order to figure out that GD does not need SL as a parent, yet, IP does need LW as a parent.

1 point if you said this network was worse than the original, or implied it well enough. 

1 point if you demonstrated that your network reproduced the right values.

--

Question 4:
part a) 1 point for the diagram; 5 more points for the `interesting' numbers, i.e., the CPTs that are new/different.
part b) 1 point for the diagram; 8 more points for the differing numbers. 
  I was kind here regarding rounding---not that there was a choice since only one student noticed that one needed to provide 8 digits of precision to do the question justice.
  (In general throughout the assignment any answer rounded consistently was acceptable.)

--

I was not kind regarding simple oversights, like swapping two numbers; the tool+assignment are too easy to use and complete, and anyway there were tons of freebie points.


-Will


Latest cumulatives (with project 3 as well as the at-home version of test 2 included)

Folks

 Thanks to your hardworking TAs, here are the latest cumulatives for the course (out of approximately 84 points) 

These include Project 3--which was included at a 10% weight--and the at-home version of test 2.

The formula for the effective grade of test 2 is

max(in-class, w*in-class + (1-w)*at-home)   where w = .5 for the 471 section and .6 for the 598 section

The max is there so that the lowest you can get is your in-class score (thus the couple of people who didn't do
the test at home don't lose any effective points).
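(The formula in code form, in case you want to check your own entry; a direct transcription, not an official calculator:)

    def effective_test2(in_class, at_home, w):
        # w = 0.5 for the 471 section, 0.6 for the 598 section.
        return max(in_class, w * in_class + (1 - w) * at_home)

    print(effective_test2(30, 40, 0.5))  # 35.0: the at-home version helps
    print(effective_test2(30, 20, 0.5))  # 30:   ...and can never hurt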

The only other grades to be entered are the homework 3, final exam and participation credit. 


If you find any discrepancies, please let us know. 

regards
Rao

Sunday, April 15, 2012

Applets URL corrected for homework 3

As some of you pointed out, the URL for the applets specified in the last part of homework 3 has changed.


I modified the homework writeup also to reflect this.

Rao

Saturday, April 14, 2012

Thinking Cap: A Post-Easter Resurrection..

Considering that this is the last quiet weekend before the beginning of the end of the semester, I could sense a collective yearning
for one last thinking cap. So here goes...

1. We talked about classification learning in class over the last couple of days. One important issue in classification learning is access to training data that is "labeled"--i.e., training examples that are pre-classified.   Often, we have a lot of training data, but only part of it is pre-classified.
Consider, for example, spam mails. It is easy to get access to a lot of mails, but only some of them may be known for sure to be spam vs. non-spam.   It would be great if learning algorithms could use not just pre-labeled data, but also unlabeled data. Is there a technique that you can think of that can do this?  (Hint: think a bit back beyond decision trees..)

(Learning scenarios where we get by with some labeled and some unlabeled data are called "semi-supervised learning tasks".)


Okay. One is enough for now, I think..

Rao

Thursday, April 12, 2012

Next topic: Search in perfect information games: Chapter 5.1--5.3

Folks

 Rather than do first-order logic, I decided you might enjoy hearing about game tree search (and how Deep Blue works, for example) --as it also gives us a chance to think about scenarios that involve more than one agent. 

 So we will do Chapter 5--section 5.1--5.3 next. 

Rao

Tuesday, April 10, 2012

Last homework-cum-micro-mini-project released

Folks

 I added the third homework with one multi-part question that asks you to do decision trees, a naive Bayes classifier and perceptrons for a Seinfeld party example.
Note that the last part asks you to experiment with applets.

This will be due on the last day of the class.

I might add some first-order logic questions, for which I will provide answers--but you won't need to submit your answers.

Rao

My mis-statement in class about the standard resolution refutation proof error

In class, I said that a standard error that I had pointed out to you during the resolution refutation class was still made by 3-4 people in the class.

Actually, I misspoke.

The error that students made in the exam was to say that

~(A V B) resolves with A V B to give the empty clause.

This is semantically correct (~(A V B) is indeed inconsistent with A V B, and allows us to derive false) but syntactically wrong, in that ~(A V B) is not in clausal form.

If you put it in clausal form, you will get two clauses, ~A and ~B, which can then be resolved with A V B in sequence to get the empty clause.


The error I had pointed out in class earlier was a more egregious one, in that it is both syntactically and semantically incorrect.
This one involves saying that ~A V ~B resolves with A V B to give the empty clause.

Here ~A V ~B is not actually inconsistent with A V B (the world A=True, B=False is a model of both formulas, for example).
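(A four-world truth-table check makes the difference plain; a quick Python sketch:)

    from itertools import product

    worlds = list(product([False, True], repeat=2))  # all worlds over (A, B)

    # The egregious error: ~A V ~B vs. A V B. These are NOT inconsistent:
    print([(a, b) for a, b in worlds
           if (not a or not b) and (a or b)])  # two common models exist

    # The exam error: ~(A V B) vs. A V B. Semantically inconsistent indeed:
    print([(a, b) for a, b in worlds
           if not (a or b) and (a or b)])      # [] -- no common model

    # But resolution needs the clausal form of ~(A V B), i.e. {~A} and {~B}:
    # resolve ~A with (A V B) to get B, then B with ~B to get the empty clause.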

Rao


Grading for your homework 2

Hi all:
You will receive the grading for your homework 2 today. The maximum possible score is 81. Here are the stats:
- Undergrad: max = 60, min = 9, average = 37.6, stdev = 15.5 (excluding one student who did not submit the homework)
- Graduate: max = 79, min = 37, average = 65.2, stdev = 10.4.

And here are the detailed points for each question and some observations that I have:

Question 1: 14 points in total
e, f: 3 each
g: 4 (1 for each heuristic)
h: 4 (2 for the mutex propagation, 1 for each heuristic)
Most of you did well on this planning question (some students, however, did not know the difference between a relaxed planning graph and a transition graph over the state space).

Question 2: 8 points in total
Soundness using a truth table: 4 (you should be able to explain it correctly to get full credit). Wrong explanation: -2
Completeness: 2. Wrong explanation: 0 (many students confused the completeness of an inference rule with the "equivalence" relation between two formulas).
Resolution: 2

Question 3: 13 points in total
Propositional theory: 2
Clausal form: 2
Resolution refutation: 3 each. Total: 9
One mistake that I saw was that some students represented "mortal mammal" with a single proposition. Although with this KB you can still prove/disprove the conjectures about "magical", "horned" and "mythical", doing so means that "mortal mammal" and "immortal" are two independent propositions, which they are not.

Question 4: 30 points in total
4.1 (Short answer questions): 7 in total
A: 3
B: 4 (2 for each question)
4.2 (Modeling the Springfield nuclear power plant): 23 in total
A: 5
B: 4 (1 point for the first two, 2 points for the last)
C: 3
D: 3
E: 8
Most of you did well on this question (however, some students didn't even write Bayes' rule correctly for 4.1.A). For 4.2.C, many students used the argument that knowing "inferior plutonium" reduces the belief in "low-quality heavy water" to claim that p3 < p1 (or the opposite), which is not true. It only means that p3 <= p2.

Question 5: 16 points in total; four for each part.
Many of you didn't convince me that you actually knew why P(A,C) != P(A)P(C) for both 5a and 5c, so you didn't get full credit there; fortunately the other two parts were easier. Also note that if you didn't actually try to prove these parts using probability and Bayes' rule, you didn't get any credit.

Please let me know if you have any questions.
Thanks,
Tuan





Monday, April 9, 2012

Current Grade Book (with test 2 and homework 2 scores)

Folks

 Attached please find the current gradebook with test 2 and homework 2 scores. 

The current "totals" are computed with 20% weight for each of the tests, 10% weight for each of the projects, 5% weight for each of the homeworks, and 1% for the initial Lisp homework
(the weights thus sum to 71%).   This is only roughly indicative of the cumulatives--the weights for the tests will have to come down to make room for project 3 (which is still being graded), the final test, as well as
the last homework/project.
the last homework/project. 


Rao


Saturday, April 7, 2012

An Easter egg in the form of a Test 2 do-over..

Folks

 If you want a second coming of Thursday's test, here is your opportunity. 

I am attaching a pdf version of the test. You can do it  and submit it on Tuesday, and your test grade will be a weighted average of
your in-class and at-home grades. Note that if you already thought you did well, this may not be worth the time for you.

The at-home version will be open book and notes, but you are required to do it yourself without consulting anyone else.


Happy Easter..
Rao


Wednesday, April 4, 2012

Availability tomorrow

I will be generally available most of the day tomorrow for any last-minute exam-related questions. Just drop by my office (BY 560)

(You can call 965-0113 if you want to confirm that I am there before coming)

Rao

Some errata corrected in the Springfield problem solution

The following parts are corrected (this is the correct version)

Part B. [4pt] Given your network above, which of the following (conditional) independences hold? Briefly justify your answer in each case:

    1. Independence of inferior-quality plutonium and low-quality heavy water in the Springfield nuclear plant

Given no evidence at all, they are independent of each other.

    2. Independence of inferior-quality plutonium and low-quality heavy water in the Springfield nuclear plant, given watery slurpees over at Apu's

Using D-SEP, we can see that once SL is given, IP and LH are not independent (the "common-effect" case; neither CM nor any of its descendants can be in the evidence for IP and LH to be independent).

    3. Independence of low-quality heavy water and wateryness of slurpees at Apu's, given core meltdown

Using D-SEP again, LH is independent of SL given CM (the only path from LH to SL goes through CM, it is a causal chain, and it gets blocked once CM is known). Another way of seeing this is that CM is the Markov blanket of SL, and so given CM, SL is independent of everything else.
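(For those who want to check such claims mechanically: d-separation can be tested directly on the graph. A sketch assuming networkx 2.4 or later, where d_separated is available--newer releases rename it is_d_separator:)

    import networkx as nx

    # The Springfield network: LH, IP -> CM -> SL, GD
    G = nx.DiGraph([("LH", "CM"), ("IP", "CM"), ("CM", "SL"), ("CM", "GD")])

    print(nx.d_separated(G, {"IP"}, {"LH"}, set()))   # True: independent a priori
    print(nx.d_separated(G, {"IP"}, {"LH"}, {"SL"}))  # False: common effect unblocked
    print(nx.d_separated(G, {"LH"}, {"SL"}, {"CM"}))  # True: chain blocked by CM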

Tuesday, April 3, 2012

Cleaned up most of the UTF-8 error chars in the Springfield Bayes net example

Folks

 I cleaned up most of the annoying UTF-8 error chars (the dreaded question-mark-in-a-diamond thingies)--you may want to reload the page for a less annoying
(but no more correct) version

rao

Forgot to tell you about the Turing award for PAC...

In my haste to complete the class on time, I forgot to mention that the Turing award for last year
went to Leslie Valiant for inventing the notion of PAC (probably approximately correct) as part of his work
on the theory of the learnable.


(I hope you still remember that the Turing award for this year went to Judea Pearl for his work on
bayes networks)

Rao

ps: I misspoke a bit when talking about "sample complexity"--it is the least number of examples needed to learn the
     concept by any algorithm (not the "most"). You saw that the inequality on the slide was N >= 
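(For reference, the standard form of that inequality for a consistent learner over a finite hypothesis space H--the usual textbook statement, which may differ cosmetically from the slide--is:

    N >= (1/epsilon) * (ln |H| + ln (1/delta))

i.e., with at least that many examples, any hypothesis consistent with the training data is, with probability at least 1 - delta, within error epsilon of the target concept.)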


Re: Review for Thursday

Some students expressed an interest in having a review session prior to the Thursday test.  I'll be holding a review in BYENG 210 from 12:00 to 1:00 pm tomorrow (and normal office hours before that).

If you cannot make those times work for you, let me know and we can arrange to meet individually at another time.

-Will

Monday, April 2, 2012

Project 2 Grading

Hello,

Project 2 will be returned to you in class tomorrow. Most of you did quite well.

The project was graded out of a total of 100 points. The undergrad section average was 87.8, and the graduate average was 95.8.

If you've had points marked off, I have indicated in most places why that is, if it isn't fairly obvious already. Extra credit has been assigned based on the nature of the extensions (if any), and is marked separately from your total points on the main part.

If I have asked for clarifications from you (via comments on the report), or if you have questions about the grading, please send me an email at krt@asu.edu.

Thanks

Solutions to Homework 2 posted online

FYI 

Rao