Hacker News Comments on Artificial Intelligence: A Modern Approach
81 HN points · 16 HN comments · Ranked #26 all time

Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this book.

> study textbooks. Do exercises. Treat it like academic studying

This. Highly recommend Russell & Norvig [1] for high-level intuition and motivation. Then Bishop's "Pattern Recognition and Machine Learning" [2] and Koller's PGM book [3] for the fundamentals.
Avoid MOOCs, but there are useful lecture videos, e.g. Hugo Larochelle on belief propagation [4].
FWIW this is coming from a mechanical engineer by training, but self-taught programmer and AI researcher. I've been working in industry as an AI research engineer for ~6 years.
[1] https://www.amazon.com/Artificial-Intelligence-Modern-Approa...
[2] https://www.amazon.com/Pattern-Recognition-Learning-Informat...
[3] https://www.amazon.com/Probabilistic-Graphical-Models-Princi...
⬐ jimmy-dean
Oof, those are all dense reads for a newcomer... For a first dip into the waters I usually suggest Introduction to Statistical Learning. Then from there move into PRML or ESL. Were you first introduced to core ML through Bishop? +1 for a solid reading list.
⬐ sampo
PGMs were in fashion in 2012, but by 2014, when Deep Learning had become all the rage, I think PGMs almost disappeared from the picture. Do people even remember PGMs exist now in 2019?
⬐ srean
Fashion is relevant only if you want to approach it as a fashion industry.
⬐ vazamb
PGMs also provide the intuition behind GANs and variational autoencoders.
⬐ KidComputer
You'll find plate models, PGM junk, etc. in modern papers on explicit density generative models and on factorizing latents in such models.
⬐ godelmachine
Hands up for Bishop and Russell & Norvig. Russell & Norvig should be treated as a subtle intro to AI. Then start Bishop to understand concepts.
⬐ magoghm
I would also include some books about statistics. Two excellent introductory books are:
Statistical Rethinking https://www.amazon.com/Statistical-Rethinking-Bayesian-Examp...
An Introduction to Statistical Learning http://www-bcf.usc.edu/~gareth/ISL/
Would you enjoy something that gives a broad overview? Norvig's AI book https://www.amazon.com/Artificial-Intelligence-Modern-Approa... should give you a very broad perspective of the entire field. There will be many course websites with lecture material and lectures to go along with it that you may find useful.
The book website http://aima.cs.berkeley.edu/ has lots of resources.
But it sounds like you are specifically interested in deep learning. A Google researcher wrote a book on deep learning in Python aimed at a general audience - https://www.amazon.com/Deep-Learning-Python-Francois-Chollet... - which might be more directly relevant to your interests.
There's also what I guess you would call "the deep learning book". https://www.amazon.com/Deep-Learning-Adaptive-Computation-Ma...
(People have different preferences for how they like to learn and as you can see I like learning from books.)
(I apologize if you already knew about these things.)
⬐ mlejva
Thank you for the tips. The Deep Learning Book (http://deeplearningbook.org) was one of my main studying materials. How would you compare the other DL book you mentioned (https://www.amazon.com/Deep-Learning-Python-Francois-Chollet...) against this one?
I think you misunderstood me. I do not believe "you need to understand all these deep and hard concepts before you start to touch ML." That is a contortion of what I said.
First point: ML is not a young field; the term was coined in 1959. Not to mention the ideas are much older. *
Second point: ML/'AI' relies on a slew of various concepts in maths. Take any 1st-year textbook -- I personally like Peter Norvig's. I find the breadth of the field quite astounding.
Third point: Most PhDs are specialists -- i.e., if I am getting a PhD in ML, I specialize in a concrete problem domain/subfield, since I can't specialize in all subfields. For example, I work on event detection and action recognition in video models. Before being accepted into a PhD you must pass a qual, which ensures you understand the foundations of the field. So comparing to this is a straw-man argument.
If your definition of ML is taking a TF model and running it, then I believe we have diverging assumptions of what the point of a course in ML is. Imo the point of an undergraduate major is to become acquainted with the field and be able to perform reasonably well in it professionally.
The reason why so many companies (Google, FB, MS, etc.) are paying for this talent is that it is not easy to learn and takes time to master. Most people who just touch ML have a surface-level understanding.
I have seen people who excel at TF (applied to deep learning) without having an ML background, but even they have issues when it comes to understanding concepts in optimization, convergence, and model capacity that have huge bearings on how their models perform.
* https://en.wikipedia.org/wiki/Machine_learning
https://www.amazon.com/Artificial-Intelligence-Modern-Approa...
For what it's worth, I feel like we already have a version of that book by Peter Norvig: https://www.amazon.com/Artificial-Intelligence-Modern-Approa...
What? No. Why in the world do people even ask this kind of question? To a first approximation, the answer to the "is it too late to get started with ...?" question is always "no".

> If no, what are the great resources for starters?
The videos / slides / assignments from here:
http://ai.berkeley.edu/home.html
This class:
https://www.coursera.org/learn/machine-learning
This class:
https://www.udacity.com/course/intro-to-machine-learning--ud...
This book:
https://www.amazon.com/Artificial-Intelligence-Modern-Approa...
This book:
https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-T...
This book:
https://www.amazon.com/Introduction-Machine-Learning-Python-...
These books:
http://greenteapress.com/thinkstats/thinkstats.pdf
http://www.greenteapress.com/thinkbayes/thinkbayes.pdf
This book:
https://www.amazon.com/Machine-Learning-Hackers-Studies-Algo...
This book:
https://www.amazon.com/Thoughtful-Machine-Learning-Test-Driv...
These subreddits:
http://machinelearning.reddit.com
These journals:
This site:
> Any tips before I get this journey going?
Depending on your maths background, you may need to refresh some math skills, or learn some new ones. The basic maths you need includes calculus (including multi-variable calc / partial derivatives), probability / statistics, and linear algebra. For a much deeper discussion of this topic, see this recent HN thread:
https://news.ycombinator.com/item?id=15116379
Luckily there are tons of free resources available online for learning various maths topics. Khan Academy isn't a bad place to start if you need that. There are also tons of good videos on Youtube from Gilbert Strang, Professor Leonard, 3blue1brown, etc.
Also, check out Kaggle.com. Doing Kaggle contests can be a good way to get your feet wet.
And the various Wikipedia pages on AI/ML topics can be pretty useful as well.
TL;DR - read my post's "tag" and take those courses!

---
As you can see in my "tag" on my post - most of what I have learned came from these courses:
1. AI Class / ML Class (Stanford-sponsored, Fall 2011)
2. Udacity CS373 (2012) - https://www.udacity.com/course/artificial-intelligence-for-r...
3. Udacity Self-Driving Car Engineer Nanodegree (currently taking) - https://www.udacity.com/course/self-driving-car-engineer-nan...
For the first two (AI and ML Class) - these two MOOCs kicked off the founding of Udacity and Coursera (respectively). The classes are also available from each:
Udacity: Intro to AI (What was "AI Class"):
https://www.udacity.com/course/intro-to-artificial-intellige...
Coursera: Machine Learning (What was "ML Class"):
https://www.coursera.org/learn/machine-learning
Now - a few notes: For any of these, you'll want a good understanding of linear algebra (mainly matrices/vectors and the math to manipulate them), stats and probabilities, and to a lesser extent, calculus (basic info on derivatives). Khan Academy or other sources can get you there (I think Coursera and Udacity have courses for these, too - plus there are a ton of other MOOCs, plus MIT's OpenCourseWare).
Also - and this is something I haven't noted before - the terms "Artificial Intelligence" and "Machine Learning" don't necessarily mean the same thing. Based on what I have learned, most of what gets called "AI" today revolves around artificial neural networks and deep learning - which is a subset of machine learning. Machine learning, though, also encompasses standard "algorithmic" learning techniques, like logistic and linear regression.
The reason why neural networks are a subset of ML is that a trained neural network ultimately implements a form of logistic regression (categorization, true/false, etc.) or linear regression (a continuous range) - depending on how the network is set up and trained. The power of a neural network comes from not having to hand-specify all of the dependencies (iow, the "function"); instead the network learns them from the data. It ends up being a "black box" algorithm, but it can handle datasets that are much larger and more complex than what the classical algorithmic approaches allow for (that said, the algorithmic approaches are useful, in that they use much less processing power and are easier to understand - no use attempting to drive a tack with a sledgehammer).
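(To make the "learns the function from data" point concrete, here is a minimal sketch - assuming nothing beyond numpy - of a tiny network learning XOR via feed-forward and back-prop; the layer sizes, seed, and learning rate are arbitrary illustrative choices, not from any particular course:)

```python
import numpy as np

# A tiny 2-3-1 network learning XOR: feed-forward, then back-prop via the chain rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))  # input -> hidden
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Feed-forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Back-propagation of the squared-error gradient, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]; re-seed if training stalls
```

Nobody told the network what XOR is; the weights simply drift toward a function that reproduces the four examples, which is the "black box" learning described above.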
With that in mind, the sequence to learn this stuff would probably be:
1. Make sure you understand your basics: Linear Algebra, stats and probabilities, and derivatives
2. Take a course or read a book on basic machine learning techniques (linear regression, logistic regression, gradient descent, etc.) - a minimal gradient descent sketch follows this list.
3. Delve into simple artificial neural networks (which may be a part of the machine learning curriculum): understand what feed-forward and back-prop are, how a simple network can learn logic (XOR, AND, etc), how a simple network can answer "yes/no" and/or categorical questions (basic MNIST dataset). Understand how they "learn" the various regression algorithms.
4. Jump into artificial intelligence and deep learning - implement a simple neural network library, learn tensorflow and keras, convolutional networks, and so forth...
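(As promised under step 2, here is a minimal sketch of batch gradient descent fitting a straight line; the synthetic data, learning rate, and step count are arbitrary choices for illustration:)

```python
import numpy as np

# Fit y = w*x + b to noisy data by batch gradient descent on mean squared error.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0.0, 1.0, size=100)  # true w = 3, b = 2

w = b = 0.0
lr = 0.01
for step in range(5000):
    err = (w * x + b) - y
    # d/dw mean(err^2) = 2*mean(err*x);  d/db mean(err^2) = 2*mean(err)
    w -= lr * 2.0 * np.mean(err * x)
    b -= lr * 2.0 * np.mean(err)

print(round(w, 2), round(b, 2))  # should land near 3 and 2
```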
Now - regarding self-driving vehicles - they necessarily use all of the above, and more - including more than a bit of "mechanical" techniques: use OpenCV or another machine vision library to pick out details of the road and other objects, which might then be processed by a deep learning CNN. For example: have a system that picks out "road sign" objects from a camera, then categorizes them to "read" them and uses the information to make decisions on how to drive the car (come to a stop, or keep at a set speed). In essence, you've just made a portion of Tesla's vehicle assist system (the first project we did in the course I am taking now was to "follow lane lines" - the main ingredient behind "lane assist" technology - using nothing but OpenCV and Python). You'll also likely learn stuff about Kalman filters, pathfinding algos, sensor fusion, SLAM, PID controllers, etc.
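(For a taste of the "mechanical" OpenCV side of lane finding, here is a rough sketch of the classic Canny-plus-Hough pipeline. The file name and every threshold below are placeholder values you would tune for real footage - this is not the actual course project code:)

```python
import cv2
import numpy as np

img = cv2.imread("road_frame.jpg")            # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)      # smooth noise before edge detection
edges = cv2.Canny(blur, 50, 150)              # edge thresholds tuned by eye

# Mask everything outside a trapezoid roughly covering the road ahead.
h, w = edges.shape
roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                 (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
mask = np.zeros_like(edges)
cv2.fillPoly(mask, roi, 255)
edges = cv2.bitwise_and(edges, mask)

# Probabilistic Hough transform: straight segments become candidate lane lines.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                        minLineLength=40, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 3)  # draw detected segments

cv2.imwrite("lanes_out.jpg", img)
```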
I can't really recommend any books to you, given my level of knowledge. I've read more than a few, but most of them would be considered "out of date". One that is still being used in university level courses is this:
https://www.amazon.com/Artificial-Intelligence-Modern-Approa...
Note that it is a textbook, with textbook pricing...
Another one that I have heard is good for learning neural networks is:
https://www.amazon.com/Make-Your-Own-Neural-Network/dp/15308...
There are tons of other resources online - the problem is separating the wheat from the chaff, because some of the stuff is outdated or even considered non-useful. There are many research papers out there that can be bewildering. Until you know which is what, take them all with a grain of salt - research papers and websites alike. There's also the problem of finding diamonds in the rough (for instance, LeNet was created in the 1990s - in the middle of an AI winter, and some of the stuff written at the time isn't considered as useful today - but LeNet is a foundational work of today's ML/AI practices).
Now - history: you would do yourself good to understand the history of AI and ML, the debates, the arguments, etc. The foundational work comes from McCulloch and Pitts' concept of an artificial neuron, and where that led:
https://en.wikipedia.org/wiki/Artificial_neuron
Also - Alan Turing anticipated neural networks of a kind that weren't seen until much later:
http://www.alanturing.net/turing_archive/pages/reference%20a...
...I don't know if he was aware of McCulloch and Pitts' work, which came prior; they were coming at the problem from the physiological side of things - a classic case where interdisciplinary work might have benefited all (?).
You might want to also look into the philosophical side of things - theory of mind stuff, and some of the "greats" there (Minsky, Searle, etc); also look into the books written and edited by Douglas Hofstadter:
https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
There's also the "lesser known" or "controversial" historical people:
* Hugo De Garis (CAM-Brain Machine)
* Igor Aleksander
* Donald Michie (MENACE)
...among others. It's interesting - De Garis was a very controversial figure, and most of his work - for whatever it is worth - has kinda been swept under the rug. He built a few computers that were FPGA-based hardware neural network machines that used cellular automata a-life to "evolve" neural networks. There were only a handful of these machines made; aesthetically, their designs were as "sexy" as the old Cray computers (seriously).
Donald Michie's MENACE - interestingly enough - was a "learning computer" made of matchboxes and beads. It essentially implemented a simple reinforcement learner that learned how to play (and win at) noughts and crosses (tic-tac-toe). All in a physically (by hand) manipulated "machine".
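(Mechanically, MENACE amounts to a simple reinforcement scheme: one matchbox per board state, beads as move weights, beads added or confiscated after each game. A rough sketch of just that update rule - the tic-tac-toe game loop is left out, and the exact bead counts here are simplified stand-ins, not Michie's originals:)

```python
import random
from collections import defaultdict

# One "matchbox" per board state; bead counts are weights over legal moves.
boxes = defaultdict(dict)

def choose_move(state, legal_moves):
    box = boxes[state]
    for m in legal_moves:
        box.setdefault(m, 3)                  # seed a new box with 3 beads per move
    moves, weights = zip(*box.items())
    return random.choices(moves, weights)[0]  # draw a bead at random

def reinforce(history, won):
    # history: list of (state, move) pairs MENACE played in one game.
    for state, move in history:
        if won:
            boxes[state][move] += 3           # add beads: winning moves become likelier
        else:
            boxes[state][move] = max(1, boxes[state][move] - 1)  # confiscate a bead
```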
Then there is one guy, who is "reviled" in the old-school AI community on the internet (take a look at some of the old comp.ai newsgroup archives, among others). His nom-de-plume is "Mentifex" and he wrote something called "MIND.Forth" (and translated it to a ton of other languages), that he claimed was a real learning system/program/whatever. His real name is "Arthur T. Murray" - and he is widely considered to be one of the earliest "cranks" on the internet:
http://www.nothingisreal.com/mentifex_faq.html
Heck - just by posting this I might be summoning him here! Seriously - this guy gets around.
Even so - I'm of the opinion that it might be useful for people to know about him, so they don't go too far down his rabbit hole; at the same time, I have a small feeling that there might be a gem or two hidden inside his system or elsewhere. Maybe not, but I like to keep a somewhat open mind about these kinds of things, and not just dismiss them out of hand (but I still keep in mind the opinions of those more learned and experienced than me).
EDIT: formatting
⬐ ep103
Oh man, thank you! Thank you!
Introduction to Algorithms (CLRS) - probably the closest you'll get for algorithms, next to Knuth. It's also updated relatively often to stay cutting edge.
https://mitpress.mit.edu/books/introduction-algorithms
AI: A Modern Approach (Russell & Norvig) - for classic AI stuff, although nowadays it might fade a bit with all the deep learning advances.
https://www.amazon.com/Artificial-Intelligence-Modern-Approa...
While it's not strictly CS, Tufte's The Visual Display of Quantitative Information should probably be on every programmer's shelf.
https://www.amazon.com/Visual-Display-Quantitative-Informati...
I found this on Amazon; although pricey, it looks like it might be a good solid introduction: https://www.amazon.com/Artificial-Intelligence-Modern-Approa...
I would be curious if anyone has gone through this book and what their thoughts were on it.
Depending on your level of programming ability, one algorithm a day is, IMHO, completely doable. A number of comments and suggestions say that one per day is an unrealistic goal (yes, maybe it is), but the idea of setting a goal and working through a list of algorithms is very reasonable.
If you are just learning programming, plan on taking your time with the algorithms, but practice coding every day. Find a fun project to attempt that is within your level of skill.
If you are a strong programmer in one language, find a book of algorithms using that language (some of the suggestions here in these comments are excellent). I list some of the books I like at the end of this comment.
If you are an experienced programmer, one algorithm per day is roughly doable. Especially so, because you are trying to learn one algorithm per day, not produce working, production level code for each algorithm each day.
Some algorithms are really families of algorithms and can take more than a day of study; hash-based lookup tables come to mind. First there are the hash functions themselves. That would be day one. Next there are several alternatives for storing entries in the hash table, e.g. open addressing vs. chaining; days two and three. Then there are methods for handling collisions: linear probing, secondary hashing, etc.; that's day four. Finally there are important variations: perfect hashing, cuckoo hashing, robin hood hashing, and so forth; maybe another 5 days. Some languages are less appropriate for playing around and can make working with algorithms more difficult; instead of a couple of weeks, this could easily take twice as long. After learning other methods of implementing fast lookups, it's time to come back to hashing and understand when it's appropriate, when alternatives are better, and how to combine methods into more sophisticated lookup schemes.
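(For a sense of scale, the day-two/day-four material - open addressing with linear probing - fits in a toy sketch like this; no resizing or deletion, so it's illustrative only:)

```python
class LinearProbingMap:
    """Toy open-addressing hash table with linear probing (no resizing, no deletion)."""

    def __init__(self, capacity=16):
        self.slots = [None] * capacity      # each slot holds (key, value) or None

    def _probe(self, key):
        # Start at the hashed slot, then walk forward until we find the key
        # or an empty slot -- this forward walk is the "linear probing".
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        entry = self.slots[self._probe(key)]
        if entry is None:
            raise KeyError(key)
        return entry[1]

m = LinearProbingMap()
m.put("cuckoo", 1); m.put("robin hood", 2)
print(m.get("robin hood"))  # -> 2
```

A real implementation adds resizing at a load-factor threshold and tombstones for deletion, which is exactly where the extra study days go.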
I think you will be best served by modifying your goal a bit and saying that you will work on learning about algorithms every day and cover all of the material in a typical undergraduate course on the subject. It really is a fun branch of Computer Science.
A great starting point is Sedgewick's book/course, Algorithms [1]. For more depth and theory try [2], Cormen and Leiserson's excellent Introduction to Algorithms. Alternatively the theory is also covered by another book by Sedgewick, An Introduction to the Analysis of Algorithms [3]. A classic reference that goes far beyond these other books is of course Knuth [4], suitable for serious students of Computer Science, less so as a book of recipes.
After these basics, there are books useful for special circumstances. If your goal is to be broadly and deeply familiar with Algorithms you will need to cover quite a bit of additional material.
Numerical methods -- Numerical Recipes 3rd Edition: The Art of Scientific Computing by Teukolsky and Vetterling. I love this book. [5]
Randomized algorithms -- Randomized Algorithms by Motwani and Raghavan. [6], Probability and Computing: Randomized Algorithms and Probabilistic Analysis by Michael Mitzenmacher, [7]
Hard problems (like NP) -- Approximation Algorithms by Vazirani [8]. How to Solve It: Modern Heuristics by Michalewicz and Fogel. [9]
Data structures -- Advanced Data Structures by Brass. [10]
Functional programming -- Pearls of Functional Algorithm Design by Bird [11] and Purely Functional Data Structures by Okasaki [12].
Bit twiddling -- Hacker's Delight by Warren [13].
Distributed and parallel programming -- this material gets very hard so perhaps Distributed Algorithms by Lynch [14].
Machine learning and AI related algorithms -- Bishop's Pattern Recognition and Machine Learning [15] and Norvig's Artificial Intelligence: A Modern Approach [16]
These books will cover most of what a Ph.D. in CS might be expected to understand about algorithms. It will take years of study to work though all of them. After that, you will be reading about algorithms in journal publications (ACM and IEEE memberships are useful). For example, a recent, practical, and important development in hashing methods is called cuckoo hashing, and I don't believe that it appears in any of the books I've listed.
[1] Sedgewick, Algorithms, 2015. https://www.amazon.com/Algorithms-Fourth-Deluxe-24-Part-Lect...
[2] Cormen, et al., Introduction to Algorithms, 2009. https://www.amazon.com/s/ref=nb_sb_ss_i_1_15?url=search-alia...
[3] Sedgewick, An Introduction to the Analysis of Algorithms, 2013. https://www.amazon.com/Introduction-Analysis-Algorithms-2nd/...
[4] Knuth, The Art of Computer Programming, 2011. https://www.amazon.com/Computer-Programming-Volumes-1-4A-Box...
[5] Teukolsky and Vetterling, Numerical Recipes 3rd Edition: The Art of Scientific Computing, 2007. https://www.amazon.com/Numerical-Recipes-3rd-Scientific-Comp...
[6] https://www.amazon.com/Randomized-Algorithms-Rajeev-Motwani/...
[7] https://www.amazon.com/gp/product/0521835402/ref=pd_sim_14_2...
[8] Vazirani, https://www.amazon.com/Approximation-Algorithms-Vijay-V-Vazi...
[9] Michalewicz and Fogel, https://www.amazon.com/How-Solve-Heuristics-Zbigniew-Michale...
[10] Brass, https://www.amazon.com/Advanced-Data-Structures-Peter-Brass/...
[11] Bird, https://www.amazon.com/Pearls-Functional-Algorithm-Design-Ri...
[12] Okasaki, https://www.amazon.com/Purely-Functional-Structures-Chris-Ok...
[13] Warren, https://www.amazon.com/Hackers-Delight-2nd-Henry-Warren/dp/0...
[14] Lynch, https://www.amazon.com/Distributed-Algorithms-Kaufmann-Manag...
[15] Bishop, https://www.amazon.com/Pattern-Recognition-Learning-Informat...
[16] Norvig, https://www.amazon.com/Artificial-Intelligence-Modern-Approa...
To anyone who's never done the Pacman projects: I highly recommend them [1]. They are an absolute blast and incredibly satisfying. Plus, if you don't know Python, they are a great way to learn. The course I took used the Norvig text [2] as a textbook, which I also recommend.
[1] http://ai.berkeley.edu/project_overview.html. See the "Lectures" link at the top for all the course videos/slides.
[2] http://www.amazon.com/Artificial-Intelligence-Modern-Approac... Note that the poor reviews center on the price, the digital/Kindle edition, and the fact that the new editions don't differ greatly from the older ones. If you've never read it and you have the $$, a hardbound copy makes a great learning and reference text, and it's the kind of content that's not going to go out of date.
⬐ togelius
My intro to AI course at NYU uses the Ms. Pac-Man vs Ghost Teams framework for all of the assignments. It is indeed a very good starter problem.
⬐ navbaker
I'll second the recommendation for Norvig and Russell's text. It's the first textbook I've ever actually wanted to sit down and read outside of assignments. (edit for spelling)
Peter Norvig's Artificial Intelligence: http://www.amazon.com/Artificial-Intelligence-Modern-Approac...
Has plenty of examples in Python. You can also look at different Udacity courses. They have a couple dealing with ML in Python.
⬐ craigching
I have Norvig's book; I'm not sure I'd recommend that as an introduction ;) Awesome book though!
⬐ plinkplonk
AIMA is about as introductory as these texts get (and still be valuable). It is an undergrad textbook, after all.
⬐ craigching
I guess my point is that it's such a broad overview of all topics that fall under artificial intelligence that you don't get much of a good introduction to applying machine learning. But point taken, you're right, it is an introductory text.
I recently implemented depth-first iterative deepening in an Artificial Intelligence class project to solve the classic missionaries and cannibals problem [0]. The professor remarked that while there have been some optimizations over the last few decades, using them can be quite messy – to the point where the combination of A* and iterative deepening is still commonly used in the field.
I'm fairly certain that the claim in the introduction – "Unfortunately, current AI texts either fail to mention this algorithm or refer to it only in the context of two-person game searches" – is no longer true.
From my current textbook (Artificial Intelligence: A Modern Approach [1]):
"Iterative deepening search (or iterative deepening depth-first search) is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit. It does this by gradually increasing the limit — first 0, then 1, then 2, and so on — until a goal is found... In general, iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known."
[0] http://en.wikipedia.org/wiki/Missionaries_and_cannibals_prob...
[1] http://www.amazon.com/Artificial-Intelligence-Modern-Approac...
Combinatorial game theory would be the wrong tool for this, I believe, since this isn't a strictly finite turn-based combinatorial game. (And in my opinion combinatorial game theory is more useful for analysis and proof techniques than for creating AI algorithms, but I've only had one class on it.) The "fruitwalk-ab" entry mentioned in the blog uses minimax (with the alpha-beta optimization, of course), which is the bread-and-butter algorithm for these kinds of games, and I expected it to be in 1st place. Sure enough, it is at the moment. (Edit2: No longer.)
In an intro machine learning course you'd learn about minimax and others, but skip paying any money and just read the basic algorithms here and look up the wiki pages for even more examples: http://www.stanford.edu/~msirota/soco/blind.html (The introduction has some term definitions.)
Edit: Also, the obligatory plug for Artificial Intelligence: A Modern Approach http://www.amazon.com/Artificial-Intelligence-Modern-Approac...
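(For reference, here is a minimal self-contained sketch of minimax with alpha-beta pruning - the bread-and-butter algorithm mentioned above. Since the fruit game's rules are long, it's applied to a toy "take 1 or 2 stones" Nim instead; the game choice is purely illustrative:)

```python
def legal_moves(state):
    return [m for m in (1, 2) if m <= state]   # take 1 or 2 stones

def alphabeta(state, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning for toy Nim: whoever takes the last stone wins."""
    if state == 0:
        # No stones left: the player now to move has already lost.
        return -1 if maximizing else 1
    if maximizing:
        best = float("-inf")
        for m in legal_moves(state):
            best = max(best, alphabeta(state - m, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:    # opponent would never allow this line:
                break            # prune the remaining moves
        return best
    else:
        best = float("inf")
        for m in legal_moves(state):
            best = min(best, alphabeta(state - m, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

print(alphabeta(3, float("-inf"), float("inf"), True))  # -> -1 (losing position)
print(alphabeta(4, float("-inf"), float("inf"), True))  # ->  1 (winning position)
```

For a real game you swap in its move generator and a heuristic evaluation at a depth cutoff; the pruning logic stays exactly the same.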
⬐ lacksconfidence
Thanks for the info, interesting stuff. I've got a first-gen minimax going right now; it sorta works but gets stuck in a repeating move pattern. Interesting stuff to work on though, I really appreciate the pointers. Hopefully this weekend I can work out a better heuristic valuation of a game state. For a first run I'm just counting my objects vs theirs (very simple), but I need to somehow weight it such that it prefers picking objects up sooner rather than later. And of course it would match better if it also took groupings into account, but that's for after I get basic board-clearing working.
I wish.
http://www.amazon.com/Artificial-Intelligence-Modern-Approac...
⬐ andyking
Won't the second-hand price of this particular book be pushed up by the numbers of people after it for the free Stanford AI class that uses it?
⬐ fuzzythinker
The price drop after the class is over may be higher than the increase.
⬐ dman
Tracking price and availability of used books might be interesting to plot against course difficulty ramp-up.
⬐ Malcx
Look to the international editions. These are often shipped from India - quite thin paper but the same content:
My Multiagent Systems class at UT Austin (taught by Peter Stone) discussed the Kiva system, along with many, many other topics pertaining to AI today.
A couple of videos about Kiva: http://www.raffaello.name/KivaSystems.html
A paper on Kiva: http://www.raffaello.name/Assets/Publications/CoordinatingHu...
http://www.cs.utexas.edu/~pstone/Courses/344Mfall10/assignme... has all of the readings for the class (with more on the 'resources' page). If you want to learn something about AI, it's certainly a good place to start!
If you're more into the algorithms side of AI, you should certainly read AI: A Modern Approach (http://www.amazon.com/Artificial-Intelligence-Modern-Approac...). It's a textbook, don't get me wrong, but it clearly explains many relevant algorithms in AI today with accompanying pseudocode and theory. If you've got a CS background, it's a great reference/learning tool. I bought mine for a class, and won't be returning it!
⬐ jah
Ah very nice, the chess position shown on the cover appears to be the final position from game 6 of the 1997 Kasparov vs. Deep Blue match.
http://www.amazon.com/gp/product/images/0136042597/ref=dp_im...
http://en.wikipedia.org/wiki/Deep_Blue_%E2%80%93_Kasparov,_1...
⬐ physcab
If anyone is looking for a good reference to one of AI's subtopics - machine learning - then I highly recommend Christopher Bishop's Pattern Recognition and Machine Learning. I believe the book was published in 2006, so a vast majority of the material is cutting edge. It's a difficult read, and not really meant for the duct tape programmer. But if you have the patience to stick with this book for as long as I have (an entire year), then you'll be well positioned to tackle any problem in Artificial Intelligence.
⬐ osipov
Does anyone know what the differences from the 2nd edition are?
⬐ gcheong
Will it be available and optimized for the Kindle DX?
⬐ dmix
Well, it's a good thing I didn't spend $100 on the 2nd edition last month at the book store.
⬐ ovi256
Same here, I'm reading the 2nd ed. we have at work, but was thinking to buy my own, and now I'll get the 3rd directly. With the revised introduction, now I'll have to re-read that :-)
⬐ holdenk
The 3rd ed. was announced, IIRC, over a year ago, and sample chapters have been released.
⬐ mahmud
Any plans for an e-book of some format (pdf, mobi, azw) or otherwise?
⬐ pnorvig
A revised web site should be up shortly, detailing what's new. Here's what the preface says:
This edition captures the changes in AI that have taken place since the last edition in 2003. There have been important applications of AI technology, such as the widespread deployment of practical speech recognition, machine translation, autonomous vehicles, and household robotics. There have been algorithmic landmarks, such as the solution of the game of checkers. And there has been a great deal of theoretical progress, particularly in areas such as probabilistic reasoning, machine learning, and computer vision. Most important from our point of view is the continued evolution in how we think about the field, and thus how we organize the book. The major changes are as follows: \begin{itemize} \item We place more emphasis on partially observable and nondeterministic environments, especially in the nonprobabilistic settings of search and planning. The concepts of {\em belief state} (a set of possible worlds) and {\em state estimation} (maintaining the belief state) are introduced in these settings; later in the book, we add probabilities. \item In addition to discussing the types of environments and types of agents, we now cover in more depth the types of {\em representations} that an agent can use. We distinguish among {\em atomic} representations (in which each state of the world is treated as a black box), {\em factored} representations (in which a state is a set of attribute/value pairs), and {\em structured} representations (in which the world consists of objects and relations between them). \item Our coverage of planning goes into more depth on contingent planning in partially observable environments and includes a new approach to hierarchical planning. \item We have added new material on first-order probabilistic models, including {\em open-universe} models for cases where there is uncertainty as to what objects exist. \item We have completely rewritten the introductory machine-learning chapter, stressing a wider variety of more modern learning algorithms and placing them on a firmer theoretical footing. \item We have expanded coverage of Web search and information extraction, and of techniques for learning from very large data sets. \item 20\% of the citations in this edition are to works published after 2003. \item We estimate that about 20\% of the material is brand new. The remaining 80\% reflects older work but has been largely rewritten to present a more unified picture of the field. \end{itemize}
⬐ silentbicycle
To ask a somewhat nuanced question: what do you feel the modern relevance of Lisp and Prolog is in AI? After writing a great deal about both language families, your first "go-to" language these days seems to be Python. Have major features for exploratory programming historically associated with Lisp been incorporated into dynamic/scripting languages such as Python, Ruby, and Lua?
⬐ pnorvig
I think that when I was in grad school, Lisp was unique in the power it brought for the type of exploratory programming that was necessary for AI. I think that today Lisp is still a great choice, but there are other choices that are also good - as you say, other languages have incorporated many (but not all) of the good parts of Lisp, so that today the choice of language can be made based on other factors: for example, what language do you already know, do your friends know, etc.
There is a lot of content in an AI course, and I didn't think it made sense for an instructor to take a week or two out of the semester to teach Lisp, so we added Java and Python support. Java because it is widely known, and Python because it is fairly widely known and because, of all the languages I know, it happens to be closest to the pseudocode we invented in the book.
I never programmed at a serious level in Prolog, so I'll let other people comment on that.
⬐ zitterbewegung
Do you plan on releasing the book as an ebook anywhere? Such as the Kindle or any other form?
⬐ pnorvig
That choice is up to the publisher -- the authors have no control. Pearson has traditionally released eBooks, but so far not in Kindle format.
⬐ rsaarelm
⬐ klipt
There used to be an "I'd like to read this book on Kindle" option on Amazon to request an ebook edition from the publisher. In fact I'm sure that option was available a few days ago, but now it seems to have disappeared...
However halfway down the page it does say "If you are a publisher or author and hold the digital rights to a book, you can sell a digital version of it in our Kindle Store."
Is there going to be any mention of the recent developments in researching Artificial General Intelligence (http://www.cis.temple.edu/~pwang/Writing/AGI-Intro.html) as opposed to GOFAI and narrow AI?
⬐ ced
> there has been a great deal of theoretical progress, particularly in areas such as probabilistic reasoning, machine learning
Those are wide fields; are there any breakthroughs that really stand out in this decade?
⬐ osipov
Does the book discuss the Markov logic network (MLN) formulation for structured representations of belief states? In your opinion, how promising is the MLN approach? Thank you!
⬐ mrduncan
I took the liberty of cleaning up the formatting on that, hope you don't mind:
This edition captures the changes in AI that have taken place since the last edition in 2003. There have been important applications of AI technology, such as the widespread deployment of practical speech recognition, machine translation, autonomous vehicles, and household robotics. There have been algorithmic landmarks, such as the solution of the game of checkers. And there has been a great deal of theoretical progress, particularly in areas such as probabilistic reasoning, machine learning, and computer vision. Most important from our point of view is the continued evolution in how we think about the field, and thus how we organize the book. The major changes are as follows:
- We place more emphasis on partially observable and nondeterministic environments, especially in the nonprobabilistic settings of search and planning. The concepts of belief state (a set of possible worlds) and state estimation (maintaining the belief state) are introduced in these settings; later in the book, we add probabilities.
- In addition to discussing the types of environments and types of agents, we now cover in more depth the types of representations that an agent can use. We distinguish among atomic representations (in which each state of the world is treated as a black box), factored representations (in which a state is a set of attribute/value pairs), and structured representations (in which the world consists of objects and relations between them).
- Our coverage of planning goes into more depth on contingent planning in partially observable environments and includes a new approach to hierarchical planning.
- We have added new material on first-order probabilistic models, including open-universe models for cases where there is uncertainty as to what objects exist.
- We have completely rewritten the introductory machine-learning chapter, stressing a wider variety of more modern learning algorithms and placing them on a firmer theoretical footing.
- We have expanded coverage of Web search and information extraction, and of techniques for learning from very large data sets.
- 20% of the citations in this edition are to works published after 2003.
- We estimate that about 20% of the material is brand new. The remaining 80% reflects older work but has been largely rewritten to present a more unified picture of the field.