Hacker News Comments on "The Next Generation of Neural Networks"
Google TechTalks · YouTube · 35 HN points · 33 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

There's an interesting Google TechTalk on YouTube called "The Next Generation of Neural Networks" from 2007 [1]. In that video there is a demo [2] that shows the neural network recognizing numbers when given a drawing of a number as input. More interesting is the follow-on, where the demo fixes the output to a given number and lets the neural network generate the "input" image, showing what it thinks that number can look like. That is a strong indication to me that this particular neural network has a good understanding of what the number glyphs look like, even if it does not know what those numbers are conceptually or how they relate to each other and to mathematics -- that is, the network would not be able to work out what the number 42 is, how it relates to 37 (i.e. 37 < 42), or how to manipulate those numbers (e.g. 2 + 7 = 9).

Dall-E will likely be similar, in that it is effectively doing that perception step in reverse: you fix the text description from the classifier output and run the network backwards to show what it is "seeing" when it is "thinking" about that given output. So it won't be able to describe features of a giraffe, or information about where giraffes live, etc., but it will be able to show you what it thinks they look like.
[1] https://www.youtube.com/watch?v=AyzOUbkUf3M [2] https://youtu.be/AyzOUbkUf3M?t=1293
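The "run it in reverse" trick described above can be sketched with a toy RBM: clamp the label units to the chosen digit and alternate Gibbs sampling between the hidden and visible units. This is a minimal sketch with random, untrained weights (so the generated "dream" is noise); all sizes and the sampling schedule are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy RBM over a 28x28 "image" plus 10 label units. The weights are
# random and untrained, so the "dream" is noise; this only illustrates
# the sampling mechanism, not Hinton's trained model.
n_vis, n_lab, n_hid = 784, 10, 64
W = rng.normal(0, 0.1, (n_vis + n_lab, n_hid))

def dream(label, steps=50):
    """Clamp one label unit and Gibbs-sample the visible image units."""
    lab = np.zeros(n_lab)
    lab[label] = 1.0
    vis = rng.random(n_vis)                           # random starting image
    for _ in range(steps):
        v = np.concatenate([vis, lab])                # label stays clamped
        p_h = sigmoid(v @ W)
        h = (rng.random(n_hid) < p_h).astype(float)   # stochastic hidden units
        p_v = sigmoid(h @ W.T)[:n_vis]
        vis = (rng.random(n_vis) < p_v).astype(float)
    return vis.reshape(28, 28)

img = dream(label=2)
print(img.shape)  # (28, 28)
```

With trained weights, repeating this loop is exactly how the demo in the video produces images of "what the network thinks a 2 looks like".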
Geoffrey Hinton's tech talk in 2007 (!) at Google is a great watch (with a heavy dose of technical jargon [0] plus some dry British humour interlaced throughout). He explains digit recognition (vs SVMs) [1], document classification (vs LSH), and briefly summarises image classification [2] problems and how they were solved: https://youtu.be/AyzOUbkUf3M

You could instantly see that the results he presents were way better than the state of the art at that time. Amazing.
---
[0] Grant Sanderson (3Blue1Brown) started a youtube-series covering Neural Networks (4 episodes, so far) that helps gain an intuitive grasp on the topic: https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_6700...
⬐ li4ick
Although 3b1b's videos are awesome, his NN videos are heavily inspired by this: http://neuralnetworksanddeeplearning.com/. Even the code on his GitHub page is taken from there.
⬐ ignoramous
Yes. I think Sanderson does mention as much in one of his videos, that he's borrowing the course and structure from Michael Nielsen.
These three had a big impact on me:

Geoff Hinton, The Next Generation of Neural Networks (2007): https://www.youtube.com/watch?v=AyzOUbkUf3M
While the exact approach described there (restricted Boltzmann machines) didn't end up being necessary, all the summaries of the competition results made me realize machine image and voice recognition was going to accelerate massively and rival humans in many areas in the very near term.
----
Cracking the neural code: Speaking the language of the brain with optics (2009): https://www.youtube.com/watch?v=5SLdSbp6VjM
This one made me realize how gene manipulation would be a near term thing and how big of an impact it would have. They used mostly old techniques but all the in situ modifications of cells in mammals were something I hadn't been aware were possible to that degree. One of the guys from his lab, Feng Zhang, went on to be one of the major forces behind CRISPR.
----
Breakthrough in Nuclear Fusion? - Prof. Dennis Whyte (2016): https://www.youtube.com/watch?v=KkpqA8yG9T4
New design for a tokamak fusion reactor, made much cheaper by new super conductors that use liquid nitrogen instead of helium/etc. and which have more structural strength by being bound into a metallic ribbon. This one made me really optimistic (it hasn't been borne out like the others yet, but they recently raised $50 million).
⬐ robertelder
I wasn't going to bother commenting on this topic, but when I read the title I instantly thought of Geoff Hinton, The Next Generation of Neural Networks (2007): https://www.youtube.com/watch?v=AyzOUbkUf3M
I watched that talk probably 10 times after it first came out and wrote some Visual Basic to try to replicate his results.
He was a well-known name in the '80s and came back with RBMs in the '00s: https://www.youtube.com/watch?v=AyzOUbkUf3M . He and Sejnowski are among the few names I remember from the NN class I took a long time ago. He insisted on working on neural nets when many others saw them as a peripheral curiosity for their careers. What's with everyone here?
Technically, deep learning started before 2008. Here is a trends paper from back then: http://www.cs.toronto.edu/~fritz/absps/tics.pdf
Here is a google tech talk from 2007:
https://www.youtube.com/watch?v=AyzOUbkUf3M
Companies didn't pick it up until more recently. GPU-ification happened in 2009 with Ng's group:
http://robotics.stanford.edu/~ang/papers/icml09-LargeScaleUn...
And yes, Krizhevsky et al (Hinton's lab) applied GPU deep learning to ImageNet in 2010:
https://papers.nips.cc/paper/4824-imagenet-classification-wi...
⬐ strebler
Those are great links! But the 2008 Hinton paper would not be considered deep learning; it is classic neural nets. It makes no mention of CNNs or GPUs, which is what really got this all going back in 2012 with ImageNet / Krizhevsky.

The ImageNet paper is from 2012, not 2010. That's when the computer vision community really went "wow". IIRC, almost every entry in ImageNet 2013 was using CNNs.
⬐ mattkrause
> it is classic neural nets. It makes no mention of CNNs or GPUs

Is using a GPU "essential" for something to be deep learning? I'd always thought that the important part was some sort of hierarchical representation learning.
GPUs certainly help, in that you don't want to wait all day while your code does that, but they're not necessary.
⬐ edfernandez
I think Tsvi Achler's video here will be useful to understand better what the article is about: https://www.youtube.com/watch?v=9gTJorBeLi8
⬐ daveguy
Good call on the 2012, not 2010, date. I missed that. GPUs are not a requirement of deep NNs. Hinton's pseudo-Bayesian + ReLU approach was the last piece of the deep neural net functionality. CNNs date back to 1995-1998 with LeCun and Bengio, although GPUs do accelerate deep NNs enough to be feasible on image data (thanks to Ng).
Geoffrey Hinton, "The Next Generation of Neural Networks". A Google tech talk from 2007 about this newfangled "deep neural network" thing:
I think it went 'mainstream' around 2007, with Hinton's TechTalk at Google: http://www.youtube.com/watch?v=AyzOUbkUf3M And even being pretty far from the ML/AI community at the time, I remember playing with Bengio's GPU-based Theano deep learning tutorial around 2008/09. What has happened now is just that it has finally started beating SVMs consistently and is fast enough to be used for practical purposes.
⬐ WillNotDownvote
Hinton's Coursera class a couple years ago pulled me in. I wish there were more Coursera classes at that level.
Here's a talk by Geoffrey Hinton back in 2007 about Deep Learning neural networks:
This is a very welcome read, but it hypes neural networks a bit. I've been working with neural networks in JavaScript using IndexedDB, and while researching I was disappointed to find that some smart people seem to think they are much more limited than they are made out to be here and elsewhere: https://www.youtube.com/watch?v=AyzOUbkUf3M#t=242

To summarize, people generally abandoned backpropagation-trained neural networks for Support Vector Machines because neural nets require labeled (and therefore limited) datasets and train slowly, especially when dealing with multiple layers, which is sort of the whole point.

In my work in JavaScript, I was only able to pull off a single-layer perceptron; it is neat but limited in what it can model.
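A single-layer perceptron like the one mentioned above is small enough to sketch in full. A minimal sketch (in Python rather than JavaScript for brevity; the AND task, learning rate, and epoch count are illustrative choices): it learns linearly separable functions like AND, but famously cannot learn XOR, which is exactly the limitation the comment describes.

```python
import numpy as np

# Minimal single-layer perceptron trained on the AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
for _ in range(20):                       # perceptron learning rule
    for x, target in zip(X, y_and):
        pred = 1 if x @ w + b > 0 else 0
        w += (target - pred) * x          # update weights only on mistakes
        b += (target - pred)

preds = [1 if x @ w + b > 0 else 0 for x in X]
print(preds)  # [0, 0, 0, 1]
```

Swapping `y_and` for XOR targets `[0, 1, 1, 0]` makes the loop cycle forever without converging, since no single linear boundary separates the classes.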
⬐ jbarrow
It's true that the beginning of the talk is about how people abandoned backprop-trained neural networks because they often underperformed SVMs, but the rest of the talk is about deep learning, which introduced a new generation of neural nets (since 2006) that are the current state of the art for a lot of problems.

In fact, deep neural networks are trained in an unsupervised manner at first, and then backpropagation is used to "fine-tune" and improve the results. Because they can make use of unlabeled data sets, and can perform so well, research into neural networks has experienced a recent resurgence.
By the way, any talk by Geoff Hinton is fantastic. If you are interested in neural networks and their capabilities, and you haven't already seen it, his Coursera course [1] builds up from a simple linear perceptron to the current deep learning methods.
[1] https://class.coursera.org/neuralnets-2012-001 (You'll have to sign in to see it)
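The pretrain-then-fine-tune recipe described above can be sketched with stacked autoencoders in place of RBMs (a simplification of what Hinton's talk does). Everything here is an illustrative assumption: the tied-weight autoencoder, layer sizes, learning rate, and toy data.

```python
import numpy as np

rng = np.random.default_rng(1)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(X, n_hid, epochs=100, lr=0.1):
    """Greedy unsupervised step: fit a tied-weight autoencoder to X."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hid))
    b, c = np.zeros(n_hid), np.zeros(n_in)
    for _ in range(epochs):
        H = sig(X @ W + b)                # encode
        R = sig(H @ W.T + c)              # decode with the same weights
        dR = (R - X) * R * (1 - R)        # squared-error output gradient
        dH = (dR @ W) * H * (1 - H)       # backprop through the encoder
        W -= lr * (X.T @ dH + (H.T @ dR).T)
        b -= lr * dH.sum(0)
        c -= lr * dR.sum(0)
    return W, b

# Stack two layers on toy data: each layer is trained on the codes
# produced by the layer below, with no labels involved at any point.
X = rng.random((32, 16))
codes, weights = X, []
for size in (8, 4):
    W, b = pretrain_layer(codes, size)
    weights.append((W, b))
    codes = sig(codes @ W + b)

print(codes.shape)  # (32, 4)
# Backprop "fine-tuning" would now start from `weights` instead of
# random values, adjusting them with a (possibly small) labeled set.
```

The point of the recipe is initialization: backprop starting from pretrained weights tends to land in much better solutions than backprop from random weights, which is what revived deep nets after 2006.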
⬐ agibsonccc
I want to add to this that there's a lot of work that doesn't require pretraining. I recently implemented the more advanced Hessian-free optimization methods, which don't require pretraining, in my deep learning framework. The results are amazing. I'm hoping to demonstrate a lot of the tradeoffs of the different methods in a more comprehensive manner here shortly. This was an updated extension by some of Hinton's students. The paper I implemented was:
⬐ taylorbuley
Thanks for the extra resources. Any more are very welcome. Neural nets sometimes feel like quite the black box, despite their ease of implementation and apparent power.
⬐ ma2rten
You are linking to a talk by Geoffrey Hinton, a strong advocate for neural networks. A single-layer perceptron is limited, but a neural network with hidden layer(s) is a very powerful tool, and we have now figured out how to deal with multiple hidden layers (given enough data).
⬐ agibsonccc
Frankly, the reason there's hype around neural nets again is the newer ways they can be trained to augment backprop. Neural nets have made a lot of progress in representation learning in recent years. A major problem that I'm hoping to fix is having the industry catch up to what academia and some of the bigger companies are doing with nets now. Neural nets are far from a silver bullet, and shouldn't be used where feature introspection is a huge requirement (this is why decision trees/random forests are popular), but they are far from being what they were in the 90s.

Note that I have a commercial interest in this, so there's going to be inherent bias in my opinions.
To be fair, JavaScript isn't a scientific computing language. To do most interesting training with neural nets, you're going to want to scale them out, add more layers, and/or use GPUs. That being said, a neat toy example in JavaScript is convnetjs[1].
Here's a pretty good talk about restricted Boltzmann machines by Geoffrey Hinton. He explains the concepts and problems very well and basically without maths:
⬐ Excavator
His Neural Networks for Machine Learning course¹ is quite a pleasant journey, going into everything from simple perceptrons to RBMs and DBNs² and their uses. As a bonus, he's got a quirky sort of dry humour that kept things interesting.
Thirded. Before Diaspora, I first read "Wang's Carpets"[1], which is a short story of his. Then I found out this story had later been incorporated as a chapter into the book. I remember basically immediately ordering said book that night.

fwiw, that "Webly-Supervised Visual Concept Learning" reminds me of the stuff that Hinton et al. do re: unsupervised (concept, etc.) learning (using restricted Boltzmann machines, and so on). Good talk on the subject (of deep learning, etc.): https://www.youtube.com/watch?v=AyzOUbkUf3M
[1]: read online here: http://bookre.org/reader?file=222997
⬐ MachineElf
Umm... fourthed? I just couldn't help but jump in and also recommend Greg Egan's "Permutation City". That book is just wonderful... think simulation, cellular automata as a model for computation, artificial life, and all that other good stuff :).

Also, about the LEVAN thing: given the amount of data available online, in both structured and unstructured formats, don't be surprised if deep learning yields better and better results moving forward. To me, though, these results mostly seem evolutionary rather than revolutionary. I mean, if you look back at the AI field, huge amounts of data were one thing researchers before the "AI winter" didn't have available. This is not to say that there haven't been recent advances in learning algorithms at all...
⬐ agibsonccc
This is happening now in deep learning. Deep autoencoders[1] are allowing for computer representations of "similar" concepts. I recently gave a talk on this very concept, to assist in QA systems.
⬐ arethuza
As well as adding my own strong recommendations for Egan's "Permutation City" and "Diaspora", I would also recommend "Quarantine", which has a rather splendid idea for mobile apps: "neural mods" that actually augment the brain's own cognitive capabilities (including augmenting sensory data for the ultimate in VR).

And there is what one group chooses to do with a very special neural mod...
That reminds me of a video I saw about something called restricted Boltzmann machines:
⬐ kyzyl
Caveat emptor. Geoff's videos are a great way to launch into the field of deep learning, but do bear in mind that they are beginning to age. A lot of the stuff about why things work, what is state-of-the-art, and where the work is headed is now dated (even according to Hinton himself).
For starters, see Hebbian theory. [1]

Backprop falls within the class of 'supervised learning', which can indeed be said not to be very biologically realistic. However, reinforcement learning is observed, so the overall picture is probably much more complex: e.g. associative/recurrent/etc. networks with Hebb-like unsupervised learning developing neuronal group testing and selection systems that involve reinforcement learning. (See the first lecture/talk in [3].)

Perhaps worth a watch is a very nice talk by Geoffrey Hinton [2], which is often referred to on HN. (Hinton does refer to the notion of biological plausibility in this talk, as far as I recall, but the focus is elsewhere: developing next-generation, state-of-the-art, mostly unsupervised machine learning techniques and systems.)
[1]: https://en.wikipedia.org/wiki/Hebbian_theory
[2]: https://www.youtube.com/watch?v=AyzOUbkUf3M
[3]: http://kostas.mkj.lt/almaden2006/agenda.shtml (The original summary HTML file is gone from the original source, so this is a mirror; the links to videos and slides do work, though.) The first and the second talks are somewhat relevant (particularly the first one, re: bio plausibility etc ("Nobelist Gerald Edelman, The Neurosciences Institute: From Brain Dynamics to Consciousness: A Prelude to the Future of Brain-Based Devices")), but all are great. Rather heavy, though. (Also, skip the intros.)
edit: that first talk/lecture from Almaden (Edelman's) is actually a very nice exposition of the whole paradigm in which {cognitive, computational, etc.} neuroscience rests; it does get hairy later on; overall, it's a great talk for the truly curious.
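For readers who don't want to chase the links above, Hebb's rule itself is tiny: a weight grows when pre- and post-synaptic activity coincide, with no error signal or labels, which is why it is considered more biologically plausible than backprop. A toy sketch (the bias term, learning rate, and input pattern are arbitrary choices for illustration):

```python
import numpy as np

eta = 0.1          # learning rate
w = np.zeros(3)    # synaptic weights onto one neuron

pre = np.array([1.0, 0.0, 1.0])      # presynaptic activity pattern
for _ in range(10):
    post = pre @ w + 1.0             # postsynaptic activity (constant drive)
    w += eta * pre * post            # Hebb: delta_w = eta * pre * post

print(w)  # weights on active inputs grow; the silent input stays at zero
```

Note the update is purely local (each weight sees only its own pre/post activity), and that unchecked Hebbian growth is unstable, which is why real models add normalization or decay terms.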
Restricted Boltzmann Machines: https://www.youtube.com/watch?v=AyzOUbkUf3M

This video drives the point home, and is made by the author of this technique.
As Radim noted, there might be better approaches than using SVMs. One example may be Restricted Boltzmann Machines (see Hinton's Google tech talk [0]). Some folks have tried using them to detect spam (or at least using RBMs as a part of another architecture), achieving better results than SVMs (they actually did a proper comparison). [1] [2] Might be something worth looking at ;) At any rate, RBMs are rather fascinating; I also plan to experiment with them when I have time.

[0] https://www.youtube.com/watch?v=AyzOUbkUf3M
[1] For a general overview of the study, here are its slides (pptx): http://users.cs.uoi.gr/~gtzortzi/docs/publications/Deep%20Be...
[2] The same study in full (pdf): http://users.cs.uoi.gr/~gtzortzi/docs/publications/Deep%20Be...
"The Next Generation of Neural Networks" -- a Google TechTalk by Geoffrey Hinton in 2007. I have never been able to sit through 60 minutes of lectures without fidgeting constantly; however, this one managed to keep my attention until the end.

Truly an amazingly great talk and worth watching all the way through (even if you only peripherally care about ANNs).
Something really interesting must be happening with AI at Google, because in the past few months both Ray Kurzweil (the best-known proponent of the singularity) and Geoff Hinton (the crazy-talented individual who invented deep belief networks using interconnected "restricted Boltzmann machines"[1]) have joined the company.
[1] For an overview of deep belief networks, see these videos: http://www.youtube.com/watch?v=AyzOUbkUf3M , http://www.youtube.com/watch?v=VdIURAu1-aU , and http://www.youtube.com/watch?v=DleXA5ADG78
⬐ notatoad
I think it's pretty clear what's so interesting: Google has some of the largest datasets ever assembled, on a hugely diverse range of subjects. If you want to play with that data, you've got to work for Google. Probably the only other place you get to play with that much data is the NSA, and their goals are a bit less fun than Google's.
⬐ simonster
There's also the fact that Google likely has more computing power than any other organization worldwide, which seems to be the attraction for Hinton, at least according to the last sentence of his post.
First, it's Geoffrey, not Gregory Hinton.

Here's a very good tech talk from him about RBMs: http://www.youtube.com/watch?v=AyzOUbkUf3M

That said, both approaches only loosely mirror the function of the brain: neurons are not simple threshold devices, and neither backpropagation nor the RBM training algorithm has a biophysical equivalent.
⬐ tjake
Oh sorry. I fixed it. Sorry Geoffrey!
⬐ freyr
Second, it's Geoffrey, not Gregory Hinton.
⬐ wfn
That's a very good lecture, by the way, basically explaining RBMs in more detail and showcasing some interesting applications of deep unsupervised learning.
OMG, I have been waiting for something like this. Deep belief networks have been smashing machine learning records in just about every domain. The only problem was that they were annoyingly slow to converge, and hard to program/debug.

See Hinton's Google talk for more info on how powerful these things are: http://www.youtube.com/watch?v=AyzOUbkUf3M (that's 2007; things are even spicier now)
⬐ deadairspace
That was a great talk, and an impressive demo of feature generation.
I really liked the Google talk http://www.youtube.com/watch?v=AyzOUbkUf3M and there are a bunch of advances in machine learning mixing technologies, like inductive learning & genetic programming. The Google video also shows some combinations of techniques that make it learn much faster.

Fortunately I can find videos and whitepapers on all those subjects, but the libraries all seem very much stuck in the past. Maybe I just don't know about some, but is there a library/toolbox like Weka which implements all the modern & old algorithms and lets you play with datasets, mixing and matching them? Maybe I just couldn't find it, but Weka seems too primitive for that.
Disclaimer: I majored in AI a long time ago and I understand most of these concepts, but I have never touched it after I finished, so I'm not up to date/aware of everything, so sorry if I missed a famous tool or something.
⬐ utunga
If you enjoyed Geoff Hinton's talk you will probably find the Theano 'deep learning' library to be of use. It is still undergoing quite a lot of iteration, but it's powerful, and you get to run your stuff on the GPU for added fun: http://deeplearning.net/software/theano. Incidentally, Hinton gave another Google tech talk in 2010: http://www.youtube.com/watch?v=VdIURAu1-aU.
⬐ tluyben2
Thank you for that deeplearning link; I guess that's my missing link! I did Google for that many times, and it's the first hit, so I have no clue how I missed it. Anyway, thanks!
There's a very good Google Tech Talk by Geoff Hinton (who has worked closely with Dahl on a lot of this research and developed some of the key algorithms in this field) that explains how to build deep belief networks using layers of RBMs: http://www.youtube.com/watch?v=AyzOUbkUf3M

That video focuses on handwritten digit recognition, but it's great for understanding the basics. There's a second Google Tech Talk video from a few years later that talks directly about phoneme recognition as well: http://www.youtube.com/watch?v=VdIURAu1-aU
Have you seen the online courses?

https://www.coursera.org/course/ml (From one of the authors of this paper!)
https://www.coursera.org/course/vision
https://www.coursera.org/course/computervision
Prof. Hinton's videos are very watchable:
⬐ sown
Yes, often. Thanks for the links.
⬐ magoghm
If you like math, Caltech's "Learning from Data" is awesome: http://work.caltech.edu/telecourse.html
The article mentions two videos on ML; I believe these are the two:

1. Hinton on RBMs: http://www.youtube.com/watch?v=AyzOUbkUf3M&feature=plcp
2. Gilbert Strang on SVD: http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebr...
Rereading my post, I realize it sounds like a big breakthrough occurred in the last few months. That is not the case. I was basically talking about deep learning neural networks. Anyway, here are a number of things that caused me to change my mind:

1. I saw this talk:
http://videolectures.net/nips09_collobert_weston_dlnl/
this is their paper: http://ronan.collobert.com/pub/matos/2008_nlp_icml.pdf but if you are interested in the bigger picture, I think you have to see the talk.
2. I saw this talk by Geoffrey Hinton about deep learning neural networks:
http://www.youtube.com/watch?v=AyzOUbkUf3M
3. I saw a talk and read some papers by these guys:
http://www.gavagai.se/ethersource-technology.php
It is not necessarily that I believe in their approach, but it made me rethink my idea of meaning.
EDIT: this is also a cool paper in this regard http://nlp.stanford.edu/pubs/SocherLinNgManning_ICML2011.pdf
⬐ karpathy
I'm a PhD student at Stanford working with Andrew Ng, who is known for his work on deep learning. I've worked on these networks for the last few years.

I think it's great that people get excited about these advances, but it is also easy to over-extrapolate their capabilities, especially if you're not familiar with the details.
Indeed, we are making good progress but most of it relates specifically to perceptual parts of the cortex-- the task of taking unstructured data and automatically learning meaningful, semantic encodings of it. It is about a change of description from raw to high-level. For example, taking a block of 32x32 pixel values between 0 and 1 and transforming this input to a higher-level description such as "there is stimulus number 3731 in this image." And if you were to inspect other 32x32 pixel regions that happen to get assigned stimulus id 3731, you could for example find that they are all images of faces.
This capability should not be extrapolated to the general task of intelligence. The above is achieved by mostly feed-forward, simple sigmoid functions from input to output, where the parameters are conveniently chosen according to the data. That is, there is absolutely no thinking involved.
The mind, an intelligence, is a process of combining many such high-level descriptions, deciding what to store, when, and how, retrieving information from the past, representing context, deciding relevance, and an overall loopy process of making sense of things. A deep network is much less ambitious, as it only aims to encode its input in a more semantic representation, and it's interesting that you can do a good job at that just by passing inputs through a few sigmoids. Moreover, as far as I'm aware, there are no obvious extensions that could make the same networks adapt to something more AI-like. Depending on your religion, you may think that simply introducing loops in these networks will do something similar, but that's controversial for now, and my personal view is that there's much more to it.
Overall, I found this article to be silly. There is no system that I'm currently aware of that I consider to be on a clearly promising path to Turing-like strong AI, and I wouldn't expect anything that can reliably convince people that it is human in the next 20 years at least. Chat bot is a syntactical joke.
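The "mostly feed-forward, simple sigmoid functions" point above is easy to make concrete. A sketch with random, untrained weights (the layer sizes and the size of the stimulus-id vocabulary are invented for illustration; with trained weights the argmax would be a meaningful id like the "stimulus number 3731" example):

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

# The "change of description" is literally a few matrix multiplies and
# sigmoids: pixels in, a high-level stimulus id out. No loops, no
# memory, no "thinking" anywhere in the pipeline.
layer_sizes = [32 * 32, 256, 64, 4096]       # pixels -> ... -> stimulus ids
weights = [rng.normal(0, 0.05, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def describe(pixels):
    """Map a 32x32 block of pixel values to a stimulus id."""
    h = pixels.reshape(-1)
    for W in weights:
        h = sig(h @ W)                       # purely feed-forward pass
    return int(np.argmax(h))                 # "there is stimulus X here"

patch = rng.random((32, 32))
sid = describe(patch)
print(sid)
```

The whole function is deterministic given its weights, which is exactly why the comment argues this capability should not be confused with general intelligence.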
⬐ ma2rten
I am currently working on an algorithm for unsupervised grammar learning. Part of what made me change my mind about this is the realization that what is required to learn the syntax of language is also what is required to learn semantic relationships between objects, based on that syntactic data. You just have to go up one level of abstraction.

I believe we are not too far from having algorithms which can parse a natural language sentence into a semantic representation that links abstract concepts in a way powerful enough for e.g. question answering beyond just information retrieval (statistical guesswork based on word frequencies). I am not so sure how, or if, we can build this into strong AI, though.
The linked video is a must-see as well: http://www.youtube.com/watch?v=AyzOUbkUf3M
Geoffrey Hinton, "Next Generation Neural Networks": http://www.youtube.com/watch?v=AyzOUbkUf3M

-It is more biologically plausible than any other NN algorithm I've seen
-It results in creativity (in the video he has the computer "imagine the number 2")
-It pretty much explains why we need to sleep/dream. The network has to be run both forward (accepting sensory input) and backwards (generating simulated sensory input) in order to learn
-It emphasizes the point that the brain is NOT trying to do matrix multiply (or any other deterministic calculation) with random elements (if it was trying to be an analog computer it would be). The randomness is an essential part of the algorithm.
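The point about randomness being essential can be made concrete with a single stochastic binary unit, the building block of Boltzmann machines (the sample count below is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A stochastic binary unit: the neuron fires with probability
# sigmoid(total input) rather than computing a deterministic value.
# The noise drives the Gibbs sampling that Boltzmann machine learning
# depends on; it is part of the algorithm, not an imperfection.
def stochastic_unit(total_input, n_samples=10000):
    p = 1.0 / (1.0 + np.exp(-total_input))
    return (rng.random(n_samples) < p).astype(int)

samples = stochastic_unit(0.0)
print(samples.mean())   # ~0.5: fires about half the time at zero net input
```

A deterministic analog unit would output exactly 0.5 here; the stochastic unit instead commits to 0 or 1 on every sample, and the learning algorithm exploits the statistics of those samples.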
⬐ reader5000
I agree. Hopfield networks, of which Hinton's Boltzmann machines are substantial elaborations, have many human-like properties:
- can fill in details as a result of noisy or missing input
- can sometimes "see" patterns in random noise
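The fill-in-from-noisy-input behaviour of Hopfield networks is easy to demonstrate with a tiny example (one stored pattern and an 8-unit network, sizes chosen purely for illustration):

```python
import numpy as np

# Tiny Hopfield network: store one bipolar pattern with the
# outer-product (Hebbian) rule, then recover it from a corrupted copy.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                      # no self-connections

noisy = pattern.copy()
noisy[0] = -noisy[0]                        # flip two of the eight bits
noisy[3] = -noisy[3]

state = noisy.copy()
for _ in range(5):                          # synchronous updates to a fixed point
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True: the stored pattern is recovered
```

The stored pattern is an attractor of the dynamics, so nearby corrupted states fall back into it, which is the "filling in missing details" property the comment describes.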
To answer your question, you can watch this presentation by Prof. Hinton: http://www.youtube.com/watch?v=AyzOUbkUf3M

He shows how he trained a restricted Boltzmann machine to recognize handwritten numbers and how he can run it in reverse as a generative model; in effect, the machine 'dreams' about all kinds of numbers it has not been trained on, but nonetheless makes up properly formed, legible digits.
In heuristics, I like asynchronous teams [ http://www.cs.cmu.edu/afs/cs/project/edrc-22/project/ateams/... ]

In multi-robot coordination, I like the free market system [ http://www.frc.ri.cmu.edu/projects/colony/architecture.shtml ]

And regarding machine learning, I like neural nets (MLPs); however, the algorithms that currently blow my mind are convolutional neural networks (CNNs) and deep belief networks trained with autoencoders. See http://www.youtube.com/watch?v=AyzOUbkUf3M at the 21-minute mark to see them in action.
http://www.youtube.com/watch?v=AyzOUbkUf3M

Great Google TechTalk about exactly this simulation and so much more.
:D Love this video.
There was a great Google tech talk posted here a while ago, The Next Generation of Neural Networks by Geoffrey Hinton (http://www.youtube.com/watch?v=AyzOUbkUf3M), that might be applicable.
There's a great Google tech talk on this subject:
⬐ robg
You're not going to get a more up-to-date state of the field than from Hinton. Plus, their approach works on real-world problems: they're consistently at the top of the Netflix leaderboard.
⬐ SomeIdiot
Do you know what his/their team name is?
⬐ lsb
The making of the 2 and 5 was pretty cool; how far advanced are techniques for content generation?
⬐ Tichy
Any texts on the subject available? Watching a one-hour video requires a lot of patience... I mean not texts on neural networks, but on this next-generation thing.
⬐ zyroth
Search for 'deep belief hinton' on Google.
⬐ nikolaj
This makes me want to figure out how to make a neural network... Of the 20% I understood, very interesting.
⬐ ivankirigin
Mitchell's book on Machine Learning is a great introduction.
⬐ Kaizyn
Programming Collective Intelligence (the O'Reilly book) also discusses this along with other AI topics. Everything in the book has fairly straightforward Python code demonstrating it as well.
⬐ downer
Nice that YouTube's player can finally skip forward.
⬐ Tichy
Now they only need subtitles, and we can finally watch them Microserfs-style (fast-forwarding while reading the subtitles).
⬐ lkozma
That would be mightily cool and save everyone a lot of time. Aren't there several people on news.yc who work on such a thing (adding content to existing videos)?
⬐ bkmrkr
Try viddler.com

I was a bit disappointed that he didn't mention Kohonen's self-organizing maps, another interesting unsupervised method, or independent component analysis, which was also used successfully for image feature extraction. SOM was also used for a similar document mapping task; the demo is online here: http://websom.hut.fi/websom/milliondemo/html/root.html