Thursday, April 19, 2012

*Mandatory* Interactive Review Question (Must answer as a comment on the BLOG)

As mentioned in the class today, all of you are required to answer the following Interactive Review question on the Blog.
It has to be done by 2PM on Tuesday (notice that this is *before* the start of the last class).

===========

List five or more non-trivial ideas you were able to appreciate during the course of this semester. 

(These cannot be gratuitous jargon dropping of the "I thought Bayes Nets were Groovy" variety--and have to include some justification).  

The collection of your responses will serve as a form of interactive review of the semester.

=============


If you need to refresh your memory, the class notes page at http://rakaposhi.eas.asu.edu/cse471 has a description of what was covered in each of the lectures.

Rao

38 comments:

  1. 1. The Davis-Putnam procedure for SAT solving. I especially liked how it started with a straightforward algorithm and gradually added refinements, and how variable ordering (which people ignored for a long time) ended up being a nice addition.

    2. The hill-climbing style SAT solver. I liked how straightforward it was and how well it ended up doing overall, given how little computation the algorithm actually has to do (see the sketch after this list).

    3. The resolution rule for propositional logic. I liked how there was just one rule that was sound and complete, and how there was an algorithm that fit very well with that rule.

    4. The Naive Bayes classifier. I liked how there was just one topology that worked for any situation, and how we went through all those calculations only to get what we expected. I also liked the idea of adding a small pseudo-count for each outcome (smoothing) when estimating the probabilities.

    5. Alpha-beta pruning. I wanted to hear something about game-playing AI in this class and I found this idea really neat. I liked how it essentially allows the algorithm to skip a whole subtree. (I liked the whole part about game trees in general)
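
    For illustration, a minimal GSAT-style hill-climbing SAT sketch in Python (not the course code; the clause encoding and parameters are illustrative):

        import random

        def gsat(clauses, n_vars, max_flips=1000, max_tries=10):
            """Hill-climbing SAT: clauses are lists of nonzero ints, where
            literal v means variable v is true and -v means it is false."""
            def num_satisfied(assign):
                return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

            for _ in range(max_tries):
                # random restart: start from a fresh random truth assignment
                assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
                for _ in range(max_flips):
                    if num_satisfied(assign) == len(clauses):
                        return assign
                    # greedily flip whichever variable satisfies the most clauses
                    best = max(range(1, n_vars + 1),
                               key=lambda v: num_satisfied({**assign, v: not assign[v]}))
                    assign[best] = not assign[best]
            return None  # gave up; the formula may still be satisfiable

        # (a or b) and (not a or c) and (not b or not c)
        print(gsat([[1, 2], [-1, 3], [-2, -3]], 3))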

    Replies
    1. Oh also, I liked how each learner had distinct classes of situations where it did well, and how some couldn't do well at all in certain cases. For example, a single-layer perceptron being entirely unable to learn XOR.

  2. 1. A* Search: It was interesting to learn how you can search a near-infinite space with iterative deepening, and how influential the heuristic function is in cutting down the search time (see the sketch after this list).

    2. Planning: With a set of actions, each with preconditions and effects, you can derive a sequence of steps to reach a goal. It gave me the idea that planning could be used to build a Google Maps-style route finder for buses with fixed timetables.

    3. Horn Satisfiability: Given a set of Horn clauses, we can answer questions by negating the query, adding it to the model, and checking whether the model becomes inconsistent. It is an interesting method to use along with NLP to answer true-or-false questions given a knowledge base.

    4. Bayes Network: Given a database, we can learn the Bayes network and its conditional probabilities. With these probabilities we can fill in missing values or find inconsistencies in the database itself.

    5. Alpha-Beta Pruning: I was always interested in chess-playing agents. Alpha-beta pruning was an interesting way to make the search more efficient; it helps go deeper in the search tree with fewer resources. Online search was a simple and effective idea for searching against a time constraint.
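
    A minimal A* sketch in Python, assuming a successor-function interface (the toy problem and names are illustrative):

        import heapq, itertools

        def astar(start, is_goal, successors, h):
            """A*: expand the frontier node minimizing f(n) = g(n) + h(n).
            With an admissible h, the first goal popped is optimal.
            successors(state) yields (next_state, step_cost) pairs."""
            tie = itertools.count()  # tie-breaker so states are never compared
            frontier = [(h(start), next(tie), start, 0, [start])]
            best_g = {start: 0}
            while frontier:
                _, _, state, g, path = heapq.heappop(frontier)
                if is_goal(state):
                    return path, g
                for nxt, cost in successors(state):
                    g2 = g + cost
                    if g2 < best_g.get(nxt, float('inf')):  # cheaper route to nxt
                        best_g[nxt] = g2
                        heapq.heappush(frontier,
                                       (g2 + h(nxt), next(tie), nxt, g2, path + [nxt]))
            return None, float('inf')

        # toy number-line world: reach 5 from 0; |5 - s| is admissible
        path, cost = astar(0, lambda s: s == 5,
                           lambda s: [(s - 1, 1), (s + 1, 1)],
                           lambda s: abs(5 - s))
        print(path, cost)  # [0, 1, 2, 3, 4, 5] 5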

  3. 1. I thought Bayes Networks were a really neat (as in cool and as in elegant) way to express the way certain events affect the probabilities of other events and so on. Without them, it's a lot more confusing to derive a probability with many factors.

    2. The value of heuristics in informed search was not totally apparent at first, but completely makes sense now. I found it especially interesting to see the details of the trade-off between more expensive heuristics and lengthier search times.

    3. State-variable models and planning: This seems like one of the most important concepts because it very effectively allows an intelligent agent to decide on a course of action based on its understanding of the world and the options available to it.

    4. Propositional logic was especially interesting to me because this is where the intelligent agent seemed to be thinking like a person and deriving facts from other ones.

    5. Learning is very interesting to me because the intelligent agent can now take past experience into account when making predictions or potentially even deciding what to do next (if applied that way).

    Replies
    1. (From Elliot D., by the way - forgot to mention in original post)

  4. 1. Evolution of Search: I liked how people invented various search algorithms, from BFS and DFS to IDDFS, and finally reached A* search. The unique properties of each type of search (memory, time complexity, optimality), the advantages and disadvantages of each, and the wide range of applications of each kind were extremely interesting, mainly because no single search can be called the best for all applications.

    2. Heuristics: The idea of heuristics both for search and planning. After learning about heuristics for search, it became clear why the more complex the heuristic is, the simpler the search. Also, in the case of planning, the war between progression and regression, and how heuristics change for both, was interesting to know. It explained why people always strive for the hardest and most complete algorithms, but eventually settle! (relaxed plan)

    3. Logic (Propositional to Probabilistic): This one really made me think about a parameter I would never have considered otherwise: the expressiveness of a language. It showed the important role a language plays, and how difficult things become if there are certain measures (probability) that can never be expressed in a particular language. The idea of Bayes networks and how conditional independence helps to simplify complex problems, especially the enumeration technique, was also striking.

    4. Expectation-maximization: As you always mentioned, "it looks like getting something out of nothing." It made me think of an analogy to energy: it is said that energy can never be created or destroyed, only converted from one form to another, and that whoever could create energy would rule the world. This algorithm looks as good as that!

    5. Alpha-Beta pruning: Even though many have mentioned this, the list would be incomplete without it. The way you explained it with a simple game like tic-tac-toe helped me understand the significance of this algorithm.

    6. A note on projects: All the projects were highly interesting, and the one I liked the most was modelling Robby. It gave me a feel of creating my own robot :)

  5. 1. Alpha-Beta Pruning
    In general, I enjoyed the game theory section the most. I can appreciate alpha-beta pruning because, like Rao said in class, it is clever. It seems like a very simple idea, but it can avoid a great deal of work: in the best case it examines roughly b^(d/2) nodes instead of b^d, effectively doubling the depth you can search (see the sketch at the end of this comment). Ideas that are "clever" always seem to be like that: they are a lot of help and are simple, yet beg the question "Would anyone else have thought of that?"

    2. Bayes Nets and Variable Elimination
    I think the Bayes net section was great also. Inference is something we use all the time, especially for finding strong relationships (not necessarily causal). The section interested me because I can see how to apply it to troubleshooting. The part I appreciate most about Bayes nets was variable elimination. Personally, I didn't understand it very well at first, and I think if Rao hadn't given the take-home midterm, I still wouldn't. I spent a lot of time walking through examples with the Bayes net tool we used for our projects to understand marginalizing and normalizing. I appreciate it more because it was difficult for me, but I finally got it down.

    3. A* Search
    I appreciated using A* search with various levels of heuristic algorithms. I think I got the most out of A* when we implemented it for solving the 8-puzzle. I tend to understand and get more out of learning algorithms when I have to actually use them. I also liked that Rao had us use the different heuristics and explain the results. This helped me understand the relationship between the time spent calculating the heuristic, the time spent searching, and the overall search time.

    4. Perceptron Networks
    I know we didn't talk very much about perceptrons, but I found them interesting nonetheless. I was intrigued by the analogy to a neuron. I am often curious about ways that technology can benefit from biomimicry. It is also nice to know there is a relatively simple learning algorithm for linearly separable data sets.

    5. The EM Algorithm
    I appreciate that, even though the idea behind EM is complex, it is simple to use. Also, it is useful for missing data. Since most of the world is at least partially stochastic, I think this algorithm is great for learning because we are more than likely not going to have the complete picture in our data set.
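
    A minimal minimax-with-alpha-beta sketch in Python over a toy nested-list game tree (the encoding is illustrative):

        def alphabeta(node, alpha, beta, maximizing):
            """Minimax with alpha-beta pruning; an int is a leaf utility,
            a list is a choice node for the player to move."""
            if isinstance(node, int):
                return node
            if maximizing:
                value = float('-inf')
                for child in node:
                    value = max(value, alphabeta(child, alpha, beta, False))
                    alpha = max(alpha, value)
                    if alpha >= beta:   # MIN above will never allow this line
                        break           # prune the remaining children
                return value
            else:
                value = float('inf')
                for child in node:
                    value = min(value, alphabeta(child, alpha, beta, True))
                    beta = min(beta, value)
                    if alpha >= beta:   # MAX above already has something better
                        break
                return value

        # classic textbook tree: the root value is 3, and the last two leaves
        # of the middle branch are never examined
        tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
        print(alphabeta(tree, float('-inf'), float('inf'), True))  # 3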

  6. 1. A* Search: This was one of the most interesting topics. It was interesting to understand how it uses an admissible heuristic to estimate the cost to the goal state.

    2. Alpha-beta pruning: I liked the way it cuts down the branches that cannot influence the final decision.

    3.Progression and Regression: I liked the idea that in progression you start from what you know(initial state) and try to reach what you want to prove(final state). In regression you start from what you want to prove(goal state) and see if you can get back to what you want to know(initial state). Progression searches in the space of complete states (2^n) and Regression searches in the space of partial states (3^n). I also liked the bidirectional search where it stops when a (leaf) state in the progression tree entails a (leaf) state in the regression tree.

    4. Refutation was also a really good concept learned during the course. The mad chase for the empty clause :) You must have everything in CNF clauses before you can resolve, and the goal must be negated before it is converted into CNF (see the sketch after this list).

    5. D-separation: The way it was applied to a Bayes network to find the independence between nodes.
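
    A minimal propositional resolution-refutation sketch in Python, using integer literals (illustrative, not the course code):

        from itertools import combinations

        def resolve(c1, c2):
            """All resolvents of two clauses (frozensets of int literals)."""
            return [frozenset((c1 - {l}) | (c2 - {-l})) for l in c1 if -l in c2]

        def entails(kb, negated_goal):
            """KB entails the goal iff KB plus the negated goal resolves
            to the empty clause."""
            clauses = set(map(frozenset, kb + negated_goal))
            while True:
                new = set()
                for c1, c2 in combinations(clauses, 2):
                    for r in resolve(c1, c2):
                        if not r:          # empty clause: contradiction found
                            return True
                        new.add(r)
                if new <= clauses:         # nothing new: cannot refute
                    return False
                clauses |= new

        # KB: (p -> q) and p, i.e. clauses {~p, q} and {p}; query q.
        # Negate the query first, convert to CNF, then chase the empty clause.
        print(entails([[-1, 2], [1]], [[-2]]))  # True: KB entails q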

  7. 1. Search Algorithms: I am now able to understand the differences between DFS, BFS, IDDFS and others, and when a particular algorithm will be advantageous compared to the others depending on the scenario.

    2. Heuristic Algorithms: During the class I came to understand heuristic algorithms, having always wondered how to calculate heuristics in real-world scenarios. Project 1, the 8-puzzle game, showed a practical example of arriving at a solution by calculating heuristics.

    3. Lisp Project 0: The first project, which helped me learn to write recursive algorithms.

    4. Davis-Putnam: I had always wondered how we can reduce the number of possibilities in a SAT problem. The answer came from the Davis-Putnam procedure.

    5. Bayes Network project: I liked the project and the tool used for it, which made me understand the conditional independence concept.

  8. 1. Bayes net: the Bayes net representation of conditional independence between a set of random variables, encoding the local dependence structure and using the D-Sep criterion. These ideas helped me immensely when I was trying to understand a paper on the confounding effect of homophily on information cascades in a social network ("Homophily and Contagion are Generically Confounded in Observational Social Network Studies" - Shalizi et al.).
    2. The equivalence of Bayesian learning and Bayesian inference, and how the hypothesis can itself be considered a random variable in Bayesian learning. MAP and MLE learning.
    3. Game Trees: how min-max trees can be used to model a game, and how finding a winning strategy is equivalent to searching the tree.
    4. Factored representation of state: how a factored representation of states helps find heuristics in a domain-independent way, and how intelligent agents plan to achieve goals in a world with a factored representation of states.
    5. Search: how search in AI differs from search in graph theory (i.e., when the complete graph is given and stored in memory as an adjacency matrix or adjacency list vs. when the complete graph is too big to fit in main memory and is explored only through the child-generator function). A* search, and how knowledge of heuristic functions (and sophisticated heuristics) can drastically improve the performance of the search.
    6. Programming in Lisp: I didn't know Lisp before and this was my first interaction with the language. I loved programming in Lisp in the first two projects. I especially liked its syntax, which is, on one hand, so simple and easy to learn, and on the other so powerful (higher-order functions and macros etc.). I wish I could do some more coding in Lisp!

  9. 1) A* Search & Heuristics: It was intriguing to me that, based on an A* search tree, an 8-tile puzzle could be solved. I didn't see any relation at first between a tree and a graph-like problem, but after adapting it to the problem, the solution didn't seem that complicated. This was my first time solving a puzzle with a program. The heuristic chosen is also important for A* search.
    2) MiniMax & Alpha-Beta Pruning: I would never have guessed how simple, at least graphically, the concept behind the algorithm designed to play chess is. It looks easier than some sorting algorithms I've seen. I'd heard that one gets to see the chess game in the AI class, and I was glad I got to understand what's going on in there. Grandmasters and 12-ply... wow, that was mind-blowing. Alpha-beta pruning is also clever in not wasting time on nodes that cannot change the decision.
    3) Bayes Network & D-Sep: I liked how the Bayes network helped greatly in computing probabilities, and how easy it was to build one from a text description. Also, the conditional independence assertions with the help of D-Sep were easy enough to grasp, just like addition or subtraction in math.
    4) Planning Graphs: I learned how to build the relaxed graphs and choose a relaxed plan from there, involving all the propositions and actions. It's simple to tell when the graph levels off (see the sketch after this comment). The mutex section was complicated because of all the rules that apply, but I believe I got it down.
    5) Entailment & Soundness: I learned how the entailment of a sentence from a knowledge base can be shown through the use of truth tables.
    6) Lisp: This was my second time using something like this (after Scheme). It was really good to revisit it, and I had forgotten how entertaining I find it. It was sad that we had only one project based on it, since this language makes coding so much shorter and more challenging. Its error messages are just hard to decipher, though.
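
    A minimal relaxed-planning-graph sketch in Python, assuming STRIPS-style (name, preconditions, add-effects) actions with delete effects ignored (the encoding is illustrative):

        def relaxed_planning_graph(init, actions, goal):
            """Grow proposition levels until every goal appears; return the
            level at which each goal first appears, or None if the graph
            levels off (no new propositions) before reaching the goal."""
            level, props = 0, set(init)
            first_level = {p: 0 for p in props}
            while not all(g in props for g in goal):
                new = set()
                for name, pre, adds in actions:
                    if pre <= props:        # action applicable at this level
                        new |= adds - props
                if not new:                 # leveled off: relaxed goal unreachable
                    return None
                level += 1
                for p in new:
                    first_level[p] = level
                props |= new
            return {g: first_level[g] for g in goal}

        acts = [("boil", {"pot"}, {"hot_water"}),
                ("brew", {"hot_water", "tea"}, {"cup_of_tea"})]
        print(relaxed_planning_graph({"pot", "tea"}, acts, {"cup_of_tea"}))
        # {'cup_of_tea': 2}: such level values give a set-level heuristic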

    -Ivan Zhou

  10. 1) Search Algorithms: This course offered deep insight into various search algorithms and how they can be used to solve real-world problems. In particular, the concept of heuristics introduced with A* search was paired with examples for a better understanding of the concept. The project also helped me understand the algorithm better, not just learn it theoretically.

    2) Planning Graph: This was one of the concepts I liked the most, because it taught me how to design and evaluate a planning graph. This part also offered insight into creating a real plan by combining planning graphs with mutexes.

    3) Bayes Net D-Sep: The Bayes net was one of the most interesting concepts of this course. The concept of conditional independence and the D-Sep conditions were very well explained, and I liked the fact that there were projects on this topic, which made it very interesting for me.

    4) Learning Decision Trees and Neural Networks: This part of the course was very informative and explained how to select the attributes in a decision tree using information gain (see the sketch after this comment). It also explained the concept of linear separability and how to overcome the XOR problem using multi-layer neural networks.

    5) Min-Max and Alpha-Beta Pruning: Explaining this concept through comparisons with actual games really helped me understand it. Undoubtedly this was the most interesting part of the course, as it helped me understand how game-playing programs are created.
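
    A minimal information-gain sketch in Python (the toy data are illustrative):

        from collections import Counter
        from math import log2

        def entropy(labels):
            """H = -sum p log2 p over the class proportions."""
            n = len(labels)
            return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

        def information_gain(rows, labels, attr):
            """Expected entropy reduction from splitting on attribute `attr`."""
            n = len(labels)
            split = {}
            for row, lab in zip(rows, labels):
                split.setdefault(row[attr], []).append(lab)
            remainder = sum(len(ls) / n * entropy(ls) for ls in split.values())
            return entropy(labels) - remainder

        # attribute 0 predicts the label perfectly, attribute 1 not at all
        rows = [("sunny", "hot"), ("sunny", "cold"), ("rainy", "hot"), ("rainy", "cold")]
        labels = ["yes", "yes", "no", "no"]
        print(information_gain(rows, labels, 0))  # 1.0 bit
        print(information_gain(rows, labels, 1))  # 0.0 bits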

    --
    Arun Balaji Buduru

  11. Zachary Rodriguez

    1) I was surprised to learn about how admissible heuristics were found for A* search, and that they even worked as well as they do.

    2) Refutation resolution in propositional logic appeared very straightforward.

    3) It was interesting to see how the use of a Bayesian Network could encode the data present in a joint probability distribution with little space.

    4) Defining the planning domain in project 2 was an interesting experience. Seeing the output of the solver gave a small insight into the workings of planners.

    5) Adversarial search simplified the idea of a game into an optimization problem. The performance metrics used for adversarial search are reminiscent of A* search.

  12. 1) Bayes nets:
    I found this topic absolutely fascinating. While the concepts themselves were a bit difficult to grasp at first, I found that the applications (at least so far as we used them in this course) were relatively straightforward and easy to understand. I did a bit of research on my own on this subject and was amazed to find how many applications this has in the “real world”.

    2) Game Trees:
    This was a subject that I was very much hoping would be covered in this course, so I was delighted to hear that it would be. Much like Bayes nets, I love the idea that a few relatively straightforward (but extremely clever) ideas can greatly affect the complexity of the problem.

    3) General learning:
    Although I have taken probability and statistics courses in the past, I have never learned to apply them in such an interesting and powerful manner. The idea that different methods and structures for learning yield different results was an interesting topic to me.

    4) A* search
    I felt like this was a great way to get my feet wet with the subjects covered in this course. Compared to what I've learned in the past, this was a very unique search method. I especially enjoyed implementing it in the first project.

    5) SAT problems:
    I found it particularly interesting that the hard SAT problems are located in such a narrow band of the constraint-to-variable ratio, and that a relatively simple method such as hill-climbing can prove extremely effective in solving them.

  13. 1. Blind and informed searching. BFS and DFS are basics, but it was interesting to note that there was more to it than just the optimality/completeness versus memory consumption argument. IDDFS is optimal, complete, and has the compact memory usage of DFS, yet performs only mildly worse than either BFS or DFS (though it still isn't best for all situations). Also noted was the use of heuristics in search algorithms, since admissibility, informedness, and computation time all play a part in how well the overall search performs.

    2. SAT problems are easy when the ratio of constraints to variables is either very low or very high (which results in the probability of a solution being very high or very low, respectively). It makes some sense that in the middle, when we aren't sure of whether a solution exists, that we need to work for the answer. It is interesting to note that there seems to be a "magic number" where the difficulty increases drastically.

    3. Applying Bayesian probabilities for handling uncertainty (as long as we know the probabilities, anyway). Bayes networks were also useful for visualizing the interaction of events (though topologies need not reflect the causal ordering), such as conditional independence.

    4. Perceptron learning appears to be a curious creature. While noted that perceptrons can only learn linear decision boundaries, it is interesting that they still were (and are) useful in so many applications nonetheless (see the sketch after this comment). Multi-layer neural networks are something else entirely, though, being difficult to visualize and use.

    5. Obligatory alpha-beta pruning, seeing as how the only reason to not use alpha-beta pruning is if one doesn't know what it is (in which case, one should learn). For some reason, it seems odd that something so simple can have such a major impact on how game trees should be done.
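
    A minimal perceptron-learning sketch in Python, showing it master AND but not XOR (the learning rate and epoch count are illustrative):

        def train_perceptron(data, epochs=25, lr=1.0):
            """Perceptron rule: w += lr * (target - output) * x.
            Converges only if the data are linearly separable."""
            w, b = [0.0, 0.0], 0.0
            for _ in range(epochs):
                for (x1, x2), target in data:
                    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                    err = target - out
                    w[0] += lr * err * x1
                    w[1] += lr * err * x2
                    b += lr * err
            return w, b

        def accuracy(data, w, b):
            return sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
                       for (x1, x2), t in data) / len(data)

        inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
        AND = [(x, int(all(x))) for x in inputs]
        XOR = [(x, x[0] ^ x[1]) for x in inputs]
        w, b = train_perceptron(AND); print("AND:", accuracy(AND, w, b))  # 1.0
        w, b = train_perceptron(XOR); print("XOR:", accuracy(XOR, w, b))  # stuck below 1.0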

  14. 1) I was surprised to see A* search using various heuristic approaches to solve the 8-puzzle problem, and it was interesting to compare the performance of the various heuristics in the first project using LISP. It was my first time writing a code snippet that executed for 30 minutes :)

    2) I was amazed to see how powerful the alpha-beta pruning algorithm is; it was used in the Deep Blue machine that beat the human world champion at chess. I also became convinced that small logical techniques can solve complex puzzles.

    3) Using a planning graph in the project to model a domain for Robby made me feel like I was programming a general-purpose robot. I was amazed to see something like the Sapa Replan planner, which is a general-purpose planner.

    4) I come from a traditional Indian background. Now I understand why people believed something bad would happen when they saw a black cat on their way out. This can be related to Bayes networks: when there is already a cause that explains something, we discount the other causes that could explain it.

    5) Perceptron learning and decision trees helped me understand how to use training data to learn the patterns in a model and predict future instances.

  15. 1) Search algorithms: I learned about the differences between several search algorithms like BFS, DFS, IDDFS, and A*, and also learned about admissible heuristics in A* search. Implementing A* search in Project 1 was a big challenge, but an interesting one. It was my first time using LISP, but I really like this programming language.
    2) Planning graph: I liked constructing a planning graph to get to a goal, given a set of actions with their preconditions and effects. The mutex concept was complicated in the beginning, but it became easier once I understood all of the mutex rules.
    3) Using refutation resolution in propositional logic was an interesting topic. I came to understand the difference between forward resolution and refutation resolution, and how to use refutation to prove a theorem.
    4) Davis-Putnam for SAT solving: a straightforward algorithm used to reduce the number of possibilities in a SAT problem.
    5) Bayes network: I think this is one of the most interesting topics in this class. I learned about Bayes rule, and how to draw a Bayes network and compute the needed probabilities given the database. Determining the independence between nodes using D-Sep is also an interesting problem.
    Phien Pham

  16. This comment has been removed by the author.

  17. 1) Searching: Many clever adaptations to search algorithms were presented. I especially liked iterative deepening, a simple idea, though one I had never considered before. A* search was also very interesting in that it revealed the power of using heuristics.

    2) Resolution rule: I found it interesting that a single powerful inference rule subsumes simpler inference rules that I have previously seen such as modus ponens.

    3) Bayes nets: This concept showed me how one can consider complex probabilistic relationships in a succinct way. I like how it is possible to determine what is dependent on what. I was surprised to see this term show up in my bioinformatics class, and wonder how many other times I have been exposed to it without realizing it.

    4) Game trees: This is definitely among the more interesting topics of the class. I have been intrigued by Deep Blue ever since seeing the documentary about it, "Game Over: Kasparov and the Machine". I have wanted to figure out how to implement an AI that can play a simple game at varying levels of difficulty. Now I feel I have a basic understanding of the problem and I am going to give it a try.

    5) Model finding vs. theorem proving: Revealing the connection between these two as "yin and yang" was rather enlightening, and is a connection I likely would not have made on my own.

  18. 1. The search algorithms.
    I am glad I learned about the various kinds of searching strategies (BFS, DFS, IDDFS, A*). It allowed me to appreciate why BFS is usually a stupid search strategy (see the iterative deepening sketch at the end of this comment). Simply learning about A* search would not have given me this intuition.

    2. Planning domains.
    I really enjoyed the planning domain project and working with PDDL. I was always curious about how I could model an environment in a way that a machine could understand and interact with. I felt that it merged AI theory and application very well.

    3. The LISP programming language.
    Before this class, I never really understood why recursion was necessary. I always felt that, for simplicity, I could write something equivalent in a non-recursive form. However, after seeing how A* search works, I am not so sure anymore.

    4. The power of heuristics.
    Before this class, I felt that it was better to exhaust the search just to ensure that the best solution is found. However, I learned that this is not always feasible, and that exhausting the search is not necessary at all to confirm that the best solution has been found.

    5. Game trees.
    I now have an appreciation for how artificial intelligence works when playing a game against a computer. Even though the tic-tac-toe example in class was relatively simple, I can only imagine how complicated it can get for a game like chess. I very much enjoyed the discussion regarding opponent modeling.
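
    A minimal iterative-deepening DFS sketch in Python (the toy problem is illustrative):

        def depth_limited(state, is_goal, successors, limit, path):
            """Plain DFS that gives up below the depth limit."""
            if is_goal(state):
                return path
            if limit == 0:
                return None
            for nxt in successors(state):
                found = depth_limited(nxt, is_goal, successors, limit - 1, path + [nxt])
                if found is not None:
                    return found
            return None

        def iddfs(start, is_goal, successors, max_depth=50):
            """DFS memory, BFS-like shallowest-solution guarantee: re-expanding
            the shallow levels is cheap because the deepest level dominates."""
            for limit in range(max_depth + 1):
                found = depth_limited(start, is_goal, successors, limit, [start])
                if found is not None:
                    return found
            return None

        # toy: count from 0 to 6 by +1 or +2; IDDFS finds a fewest-steps path
        print(iddfs(0, lambda s: s == 6, lambda s: [s + 1, s + 2]))  # [0, 2, 4, 6]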

    Richard McCahill

  19. This comment has been removed by the author.

  20. 1. Searching using heuristics, AKA A* search:
    I found this topic very interesting. Before this course, I did not even know that there was a search technique that uses heuristics and does better than the usual search techniques. I enjoyed implementing A* search in the first project.

    2. Planning:
    This topic was very interesting and I learned a lot from it. The planning graph heuristics especially were very instructive. I learned how interactions between actions are handled properly.

    3. Propositional probabilistic logic (Bayes nets):
    This covered some of the most interesting and challenging topics in the class. I learned many concepts here, such as applying Bayes networks for handling uncertainty, calculating any probability given the joint probability distribution, D-Sep, the Markov blanket, etc.

    4. Learning:
    I learned a lot from "learning of Bayes nets" as well as general learning. Concepts like EM, decision trees, NBC, and smoothing of NBC were very good.

    5. Game Tree:
    This was one of the best topics in the whole course, especially alpha-beta pruning, a simple and clever idea. I learned how Deep Blue was able to compete with humans.

    6. Knowledge Representation:
    This part has changed the way I think and talk. Everything I know is my knowledge base KB; whenever I have to answer something, I first check whether the answer is entailed by the knowledge base. I use sentences like "Please do not tell me this, I do not want to make my knowledge inconsistent" and "I can infer from the given conditions that you must be joking."

  21. 1. I thought the A* search was cool. I'd never heard of it prior to the course. I was fascinated by the adaptability of the search; simply use a good heuristic function and it can be applied to solve many problems. The comparison between it and breadth-first and depth-first algorithms definitely highlighted the shortcomings of the latter two methods.

    2. I also found the planning unit interesting. The regression and progression search techniques are both things that can happen intuitively when people plan a series of actions, so to see them formalized was neat. I enjoyed playing with PDDL as well, as it made defining a problem and finding a solution seem very simple.

    3. Modeling probabilities and the relationships between different variables via Bayes networks was also intriguing. I thought it was a very useful way to produce an informative model even in the presence of unknowns. Manually computing probabilities from the network can still be a pain in the butt, though.

    4. I found the Davis-Putnam technique (and resolution refutation itself) to be a really novel way to determine whether or not a problem is satisfiable (or a sentence is valid). It seemed like something I should have learned way back in our discrete mathematics class given its simplicity.

    5. The final section on game trees was one of the ones that really got my attention. This is one of the areas of computer science that I have often seen (in games), but never quite understood. Now I have some insight as to how AI opponents decide their moves, even under a time constraint. I agree with Richard that the discussion on opponent modeling and the assumptions we make was enjoyable; it certainly gave me food for thought.

    Replies
    1. Also, with respect to #5, alpha-beta pruning really does seem like a no-brainer in terms of cutting down on useless searching.

  22. 1. A* Search: It was interesting to see how heuristics reduce search time to a large extent. Comparing all the search algorithms gave a very clear picture of when to use what (memory considerations, depth of tree, etc.). The project, which involved solving the 8-puzzle, gave a clear picture of how much better A* search is! Though I had heard about A* search in the first lecture of the Stanford AI course, I understood it much better here!

    2. Planning: It was interesting to see how, given a set of actions with their preconditions and effects, we can find whether a goal state can be reached. The project was simple but gave a lot of insight into how an intelligent agent can plan to reach its goal in a modeled environment.

    3. SAT problems: They sound so simple but are so hard to solve! It was interesting to see how the hill-climbing "guesser" solved SAT problems with ease!

    4. Bayes network: The idea that a well-formed Bayes network reduces the number of probabilities to be specified, and the concept of conditional independence.

    5. Game trees: I have always wondered how games like chess work and how they can compute "good" moves so fast against a human opponent. I wish we could have spent more time on this, written some small games, and understood the concepts better... I guess we will do that in the graduate AI course.

  23. 1. A* and IDA* - These topics gave me a good understanding of using heuristics, and of using measures like informedness and admissibility to judge a heuristic's effectiveness.

    2. I enjoyed learning LISP and implementing the 8-puzzle problem. I got to code different heuristics and see them in action. I really liked programming in LISP.

    3. I found DPLL very interesting. I liked how two simple methods - unit propagation and pure literal elimination - can be used for theorem proving (see the sketch after this comment).

    4. D-Sep Approach - This is a very interesting topic and was explained very well in class. I learnt how to determine independence in a Bayes network.

    5. Game Trees - The most interesting among all the topics discussed. I realized how a simple technique like alpha-beta pruning can help reduce the search space.
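
    A minimal DPLL sketch in Python with unit propagation and pure-literal elimination (integer literals; illustrative, not the course code):

        def simplify(clauses, lit):
            """Set literal `lit` true: drop satisfied clauses, shrink the rest."""
            return [[l for l in c if l != -lit] for c in clauses if lit not in c]

        def dpll(clauses, assignment=None):
            assignment = dict(assignment or {})
            while True:
                if not clauses:
                    return assignment                  # every clause satisfied
                if any(not c for c in clauses):
                    return None                        # empty clause: dead end
                units = [c[0] for c in clauses if len(c) == 1]
                lits = {l for c in clauses for l in c}
                pures = [l for l in lits if -l not in lits]
                if units:
                    lit = units[0]                     # unit propagation
                elif pures:
                    lit = pures[0]                     # pure literal elimination
                else:
                    break
                assignment[abs(lit)] = lit > 0
                clauses = simplify(clauses, lit)
            lit = clauses[0][0]                        # split on some variable
            for choice in (lit, -lit):
                result = dpll(simplify(clauses, choice),
                              {**assignment, abs(choice): choice > 0})
                if result is not None:
                    return result
            return None

        print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # {1: True, 3: True, 2: False}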

  24. 1. Heuristics:

    Truth be told, I did not grasp the idea of heuristics when Dr. Rao explained the concept of the Manhattan distance heuristic. Only in the next class, when Dr. Rao gave the quiz and explained the concept of admissibility with graphs, could I understand the true power of the idea of heuristics. The relaxed plan heuristic and pattern database heuristics are some of the more interesting heuristics, in the sense that the idea behind their implementation is deep. And the idea of calculating a moderately expensive heuristic that minimizes the total search time, instead of a cheap heuristic with more search time, is truly great.

    2. SAT problems:

    The satisfiability problems are also interesting in the sense that solving them is nothing but a search for a model in a finite space, with all the candidate models at the leaf level, where any satisfying model is a solution. To make it efficient, we considered branching on the most constrained variable first. Alternatively, we can assign arbitrary values to the propositions and, if the assignment does not satisfy the clauses, flip a variable chosen by a min-cost rule; this is essentially hill climbing.

    3. Bayes Networks and D-sep Criterion:

    This concept is very interesting because, given a Bayes network and the CPTs, we can compute the joint probability distribution, from which any probabilistic query can be answered (see the enumeration sketch after this comment). Inference on Bayes networks using different techniques, especially enumeration, is interesting. The D-Sep criterion, which specifies conditional independence given evidence, is also very useful because it reduces the number of values we need to know to compute the joint probability distribution: if two nodes are independent, we can just multiply their probabilities.

    4. Naive Bayes Classifier:
    Bayesian learning, and learning with unknown topology, missing data, and hidden variables, are another set of interesting topics. The concept of Expectation-Maximization is very powerful in the sense that it combines both inference and learning, and it can be applied in the case of missing data and hidden variables. The Naive Bayes classifier is useful in the case of unknown topology, and the fact that it works in so many cases is truly amazing.

    5. Alpha-Beta Pruning:

    This technique of alpha-beta pruning in game trees is very useful because it does not sacrifice optimality to lessen the computation time. We saw some techniques, like decision trees, where we are greedy and not guaranteed to be optimal. But alpha-beta pruning yields the same answer as the full search, in the sense that it cuts down only the parts of the tree that (provably) cannot contain the relevant outcomes.
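
    A minimal inference-by-enumeration sketch in Python on a toy rain/sprinkler/wet-grass network (the CPT numbers are illustrative):

        from itertools import product

        # network: Rain -> Sprinkler, and {Sprinkler, Rain} -> Wet
        def p_rain(r):         return 0.2 if r else 0.8
        def p_sprinkler(s, r): return (0.01 if s else 0.99) if r else (0.4 if s else 0.6)
        def p_wet(w, s, r):
            p = {(True, True): 0.99, (True, False): 0.9,
                 (False, True): 0.9, (False, False): 0.0}[(s, r)]
            return p if w else 1 - p

        def joint(r, s, w):
            # chain rule, using the network's conditional independences
            return p_rain(r) * p_sprinkler(s, r) * p_wet(w, s, r)

        # P(Rain | Wet) by enumeration: sum out the hidden variable Sprinkler
        num = sum(joint(True, s, True) for s in (True, False))
        den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
        print(num / den)  # ~0.385 with these CPTs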

    Vishnu Teja Kilari

  25. Intelligent Agents: The discussion of agents and the different types of environments at the beginning of the course was very interesting and gave an idea of what AI is about.

    Searching: How an agent finds a solution through searching, and how heuristics are applied, was interesting. Also, practically implementing A* in LISP was a good way to learn.

    Planning: The project where we wrote the PDDLs helped me understand the planning concepts clearly.

    Propositional Logic and Probability: Though it used basic mathematics, it was quite difficult to understand, and I had to put a lot of effort into it. I liked the DPLL search algorithm and the way a lot of cases were considered in designing it.

    Learning: I liked the way probability learning was explained in class, especially MAP and ML. But I felt a little more time could have been spent on perceptron learning.

  26. 1 - Bayesian learning and its connection with inference: It was quite surprising to me that Bayesian learning is, at bottom, basically the same as doing inference.

    2 - Game Trees: I had once implemented a crazy (and sometimes suicidal) version of the English checkers game, so it was interesting to see how it's really supposed to be done, especially the ways of optimizing it with alpha-beta pruning.

    3 - The leap from propositional to probabilistic propositional logic: It was interesting to see the limitations of propositional logic, and how probabilistic propositional logic addresses them. This was especially interesting in the Bayesian project, when we used our Simpsons topology with CPTs that reflected a non-probabilistic way of thinking, showing in practice that normal propositional logic is a special case of probabilistic propositional logic.

    4 - The comparison between the different approaches to general learning: what each of the learning methods is good or bad at. This is interesting not only from the AI point of view, but also as a discussion of what humans like and use as valid hypotheses.

    5 - Searching and heuristics: how to compute and use heuristics while searching for solutions. Also, the discussion on pattern databases was very interesting, adding to the discussion of time spent computing heuristics vs. time spent finding the solution.

  27. 1) Search Algorithms: I had a basic idea about the search algorithms earlier, but learning about their complexity and execution time was something I liked; plus, their partial implementation as a project was fun and a challenge. Projects 0 and 1 were really nice to work on.

    2) Propositional and Probabilistic Logic: The transition from propositional to probabilistic logic, and then the transition to the Bayes network (which cuts down the number of values that must be specified), was interesting. I liked working on the problems and concepts related to Bayes networks. In my data mining class the professor taught us the transition from Naive Bayes to the Bayes net; studying the reverse transition was interesting and added to my knowledge.

    3) Game Theory (Alpha-Beta Pruning): I really enjoyed studying the pruning part. Tic-tac-toe is one of the most popular games, and this theory actually explained the way we think while playing it; it gave me an idea of how a two-player game can be won by thinking a few plies ahead ;)

    4) Planning: Understanding progression and regression as two different ways to plan was something new. It may be something human brains already do (backward thinking), but the explicit explanation, and the example considering mutual exclusions while planning, was interesting, and it took me time to get it right (maybe I'm a slow learner).

    5) EM: The methodology of improving the classifier so it classifies the test records correctly, using bootstrapping, was interesting. Implementing it would be a big challenge, but it would be a good project to work on.

    6) The projects and homework were very good; they made me think and understand all the concepts from the basics. Also, the online lectures were a great help. Thank you.

  28. This comment has been removed by the author.

  29. IDDFS and A* search were interesting because it was the first time I had seen a heuristic (in the case of A*) and iterative deepening (in both) used in search. The improvement that adding a heuristic can make to a search amazed me.

    I found Bayes Networks to be particularly interesting, both because they provided a succinct way to determine the probability of an event based on multiple factors and because they provide an easy way to visualize how probabilities of different events were related to each other.

    Propositional logic was interesting to me for reasons similar to Bayes networks. It provides an easy way to visualize how certain facts are derived from others, and forward resolution and refutation resolution were interesting ways to think about how different things relate to each other.

    Though it is connected to logic, I also thought that the resolution rule was fascinating, because it was very interesting to me that a single rule that is both sound and complete could be used on its own to solve problems.

    I have always been an avid game player, so game trees were another interesting topic to me. The idea that better players look deeper in the game tree when deciding their moves made me feel pretty good about how I play the games that I am good at, though the idea that a novice chess player looks 4 moves ahead reassured me that I am, indeed, terrible at chess.

  30. 1) A* search: This is the first course where I learned about heuristics and extensive searching using various algorithms. I liked its effectiveness in solving the puzzle problem, and the importance of the trade-off between the cost of computing the heuristic and the cost of searching.

    2) Propositional Logic: This seemed simple, but it was interesting to see how we can represent a problem domain in propositional logic and how effectively problems can be solved there.

    3) Alpha-beta pruning: This seemed too simple and trivial, yet it's astonishing how effective it can be in reducing the complexity of problems.

    4) Regression: I liked regression because until now my whole problem-solving approach was progression. The regression approach gives good insight into a problem and sometimes helps break deadlocks.

    5) Bayes network: I liked the Bayes network and conditional probability part, as it helped me revisit my high-school math. I enjoyed solving probability problems throughout the course.

  31. 1) The variations of search algorithms, like the depth-limited variation of DFS, iterative deepening DFS, A*, and IDA*; how heuristics matter for the A* algorithm; and the pattern database heuristics.

    2) The D-Sep rule for deciding independence in a Bayes network, and how it simplifies inference on the network, is amazing, as it makes our task easier; likewise the approximate methods for sampling the Bayes network. Hidden Markov Models: we don't get to observe the state; we only observe the observation variables of the state.

    3) Bayes net learning and how it helps us make predictions; how hard it is to come up with a topology and complete data to fit it, and hence why we use the MAP and ML approximations and Laplacian smoothing to overcome the zero problem (see the sketch after this comment); the NBC and the reason why it works even though theory might suggest it shouldn't.

    4) Learning decision trees: how entropy lets us decide which attribute to split on, and how it sometimes causes the decision tree to overfit the data, because it searches for a pattern even when the pattern may not be statistically significant.

    5) Alpha-beta pruning in game trees, and how it helps us cut branches that would otherwise have been such a huge task to explore, thus helping the agent decide its next move faster.
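
    A minimal Naive Bayes sketch in Python with Laplace (add-one) smoothing for the zero problem (the toy data are illustrative):

        from collections import Counter, defaultdict

        def train_nbc(examples, labels, alpha=1):
            """NBC with add-alpha smoothing so unseen feature values
            never get probability zero."""
            label_counts = Counter(labels)
            feat_counts = defaultdict(Counter)   # (feature index, label) -> value counts
            for x, y in zip(examples, labels):
                for i, v in enumerate(x):
                    feat_counts[(i, y)][v] += 1
            values = [set(x[i] for x in examples) for i in range(len(examples[0]))]

            def predict(x):
                best, best_score = None, float('-inf')
                for y, n_y in label_counts.items():
                    score = n_y / len(labels)    # prior P(y)
                    for i, v in enumerate(x):    # smoothed likelihoods P(x_i | y)
                        score *= (feat_counts[(i, y)][v] + alpha) / \
                                 (n_y + alpha * len(values[i]))
                    if score > best_score:
                        best, best_score = y, score
                return best
            return predict

        # toy weather data: (outlook, wind) -> play?
        X = [("sunny", "weak"), ("sunny", "strong"), ("rain", "weak"), ("rain", "strong")]
        y = ["yes", "no", "yes", "no"]
        predict = train_nbc(X, y)
        print(predict(("sunny", "weak")))  # yes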

  32. 1) The fact that admissible heuristics lead A* search to find the optimal solution, and the incredible speed increase that an informed heuristic can make. This was particularly noticeable in the 8-puzzle project: using a heuristic that was always 0 took a very long time, while using Manhattan distance, for example, led to very speedy solutions, despite the fact that both are admissible heuristics.

    2) The effectiveness of hill-climbing algorithms, especially for 3-SAT. It seems like such an algorithm should never work and the fact that it actually works quite well was a very important and interesting lesson.

    3) On the subject of algorithms that seem like they should not work, the EM algorithm for learning with unlabeled data. As Dr. Rao said in class, it seems very much like the EM algorithm invents knowledge out of thin air, but it is in fact a very clever way of extracting information that already exists in the data. When Dr. Rao pointed out that using EM on randomly-generated data would be pointless, this connection became clear (see the sketch after this comment).

    4) Resolution refutation to prove or disprove statements given a knowledge base. I was very surprised to learn that resolution is both sound and complete, while the inference rules I am used to using (modus ponens, modus tollens) are sound but not complete.

    5) Min-max agents and alpha-beta pruning. It is surprising how effective such agents are, considering how simple the theory behind them is. While many of the topics in the class were very theoretical and took quite some effort to understand, the theory behind these agents is very intuitive.
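
    A minimal EM sketch in Python for the classic two-coins setting: each flip sequence comes from one of two biased coins, but which coin produced it is hidden (the data and starting guesses are illustrative):

        def em_two_coins(sequences, steps=20, theta=(0.6, 0.5)):
            tA, tB = theta
            for _ in range(steps):
                # E-step: expected responsibility of coin A for each sequence
                aH = aT = bH = bT = 0.0
                for seq in sequences:
                    h, t = seq.count('H'), seq.count('T')
                    likeA = tA ** h * (1 - tA) ** t
                    likeB = tB ** h * (1 - tB) ** t
                    wA = likeA / (likeA + likeB)
                    aH += wA * h; aT += wA * t
                    bH += (1 - wA) * h; bT += (1 - wA) * t
                # M-step: re-estimate each coin's bias from its expected counts
                tA = aH / (aH + aT)
                tB = bH / (bH + bT)
            return tA, tB

        data = ["HHHHHHHHTT", "HTHTTTHHTT", "HHHHHHHTHH", "HTTTTTHHTT", "THHHTHHHTH"]
        print(em_two_coins(data))  # the estimates separate, roughly 0.8 vs 0.4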

  33. The concept of the most-constrained variable being the variable to branch on first in SAT problems. The analogy that all of us apply this in real life sub-consciously to make decisions was very interesting.

    The difference between frequentists and Bayesians, and the fact that a Bayesian model can be right as long as it is consistent with its beliefs/knowledge, irrespective of how the numbers originated.

    The recursive loop of theorem proving involving model search, and model search involving theorem proving/inference.

    The use of elementary probability in constructing powerful Bayes networks to find conditional probabilities of events.

    The D-Sep conditions for figuring out the independence between two nodes, which decrease the number of conditional probabilities required to calculate the joint probability distribution, were a nice concept for making the network more efficient.

  34. 1. This is the first time I understood A* search perfectly and got how to apply it correctly.
    2. I liked the SAT-Z solver idea, how it beats the heck out of other approaches, and how it made people wonder whether truly hard problems exist.
    3. The planning assignment. It was really nice to model the domain and put it into the planner.
    4. Alpha-beta pruning: such a simple idea, but it works great in practice.
    5. The Bayes network topic as a whole: how it reduces the number of probabilities to be considered by modelling the domain as a network. Bayesian learning.

  35. 1) Atomic agents were very interesting to me; it surprised me how search functions were used to find goals.

    2) A* Search: I felt like this class really taught me how to apply A* and how incredibly useful it is in many applications. It surprised me to see how many games and problems can be solved using it, especially since, at first, search functions seemed a bit too simple to solve problems such as the tile puzzle.

    3) Bayes networks were interesting to me because you can model such large amounts of interacting data in such a concise way. It really helps with visualizing certain probability problems (such as the one where a woman with a positive mammogram still has a large chance of not actually having cancer, due to the extremely low prior probability of having it in the first place; see the worked example after this comment).

    4) D-Separation was great to know, especially since it really cuts down on the time it takes to solve problems within Bayes nets.

    5) Game Trees were extremely interesting, as I did not know how agents such as Deep Blue behaved. It was surprising to see that it worked much like a two-agent search function.
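
    A worked Bayes-rule sketch in Python for that mammogram example (the numbers are the commonly quoted illustrative ones, not figures from the course):

        prior = 0.01         # P(cancer)
        sensitivity = 0.80   # P(positive | cancer)
        false_pos = 0.096    # P(positive | no cancer)

        p_positive = sensitivity * prior + false_pos * (1 - prior)
        posterior = sensitivity * prior / p_positive
        print(posterior)     # ~0.078: under 8% even after a positive test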
