HN Books @HNBooksMonth

The best books of Hacker News.

Hacker News Comments on
How to Create a Mind: The Secret of Human Thought Revealed

Ray Kurzweil · 3 HN comments
HN Books has aggregated all Hacker News stories and comments that mention "How to Create a Mind: The Secret of Human Thought Revealed" by Ray Kurzweil.
View on Amazon [↗]
HN Books may receive an affiliate commission when you make purchases on sites after clicking through links on this page.
Amazon Summary
The bold futurist and bestselling author explores the limitless potential of reverse-engineering the human brain. Ray Kurzweil is arguably today’s most influential—and often controversial—futurist. In How to Create a Mind, Kurzweil presents a provocative exploration of the most important project in human-machine civilization—reverse engineering the brain to understand precisely how it works and using that knowledge to create even more intelligent machines. Kurzweil discusses how the brain functions, how the mind emerges from the brain, and the implications of vastly increasing the powers of our intelligence in addressing the world’s problems. He thoughtfully examines emotional and moral intelligence and the origins of consciousness and envisions the radical possibilities of our merging with the intelligent technology we are creating. Certain to be one of the most widely discussed and debated science books of the year, How to Create a Mind is sure to take its place alongside Kurzweil’s previous classics, which include Fantastic Voyage: Live Long Enough to Live Forever and The Age of Spiritual Machines.
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this book.
May 20, 2013 · maeon3 on How Did Einstein Think?
I've been working through the following book:

http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/06...

And it has chapters (2 and 3) stepping through exactly what algorithms and data structures Einstein used to figure out that time itself slows down for a moving observer, which explains why traveling at a fraction of the speed of light does not change how fast light passes you by. More important than guessing time as the variable property, he was expert at creating experiments to disprove his hypotheses: "If this is the case, we should be able to do this exact experiment to expose the exact value for time dilation as you approach the speed of light."

The book is about creating a program which exposes the operating principles of Einstein's neocortex and can do what Einstein did: create simple models that explain the underlying principles of physics, with the ability to say, "If we model the phenomenon like so, then we should be able to observe the following phenomenon," and then go out and perform a test gathering evidence for or against it. Then brute-force this process, selecting for the simplest model that explains all available data.

Show all hypotheses that explain all available data and have not been disproved, sorted by complexity of the model, with the most evidence collected for them and the least evidence levied against them.

If each of these processes could be automated, we could use the world's supercomputers to crunch out 500 years of scientific discovery in physics in a few years.
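
A toy sketch of that loop, under the assumptions that the model space can be enumerated (here, just polynomials fit with numpy) and that "simplest" means lowest degree; nothing here is from the book:

    import numpy as np

    # Toy version of the loop described above: enumerate candidate models,
    # discard the ones the data disproves, then prefer the simplest survivor.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 3.0 * x + 2.0 + rng.normal(0, 0.1, x.size)    # hidden "law": y = 3x + 2

    survivors = []
    for degree in range(6):                           # model space: polynomials
        coeffs = np.polyfit(x, y, degree)
        mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
        if mse < 0.05:                                # "not disproved" by the data
            survivors.append((degree, mse))

    survivors.sort(key=lambda m: m[0])                # simplest adequate model first
    print("simplest surviving model: degree", survivors[0][0])   # -> degree 1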

Your brain is a neural network of neural networks. During sleep, a cost function is applied across the entire grid: important aspects of your day are run, and re-run at high velocity, simultaneously (leading to dreams).

Cost-benefit analyses are run against what you might have done and the results of that; actions that would have caused more desirable outcomes are projected, as best the system can see, and habits and motor neurons are reconfigured accordingly. This explains why, when you get good sleep and wake up, you find yourself much better able to do tasks than had you not slept. If you don't sleep, you die.

Source of these points:

http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/06...

https://www.coursera.org/course/ml

The title is misleading: this function also has to do with encoding short-term memories as long-term memories. Since the mind has only limited space (a limited number of neurons to configure), only the most useful memories are stored to permanent disk. Disruption of the 7-to-9-hour sleep cycle garbage-collects the memories that were about to be stored. The mind queues them up to be dealt with the following day, but sometimes they are displaced or missed because of more pressing things in the present.

Sleep is one of the most important things you can do to maintain your mind and keep it in top running condition for as long as possible: not too little, not too much, and in intervals of 90 minutes. If you consume garbage knowledge on a daily basis, your mind will encode that garbage to permanent disk, and you will become that garbage.

Conspiracy theorists suffer from a mental misconfiguration where the cost function applied to the neural network of neural networks suffers from "overfitting": finding patterns in randomness leads to conclusions that are not valid. A lambda term can be applied to the cost function which will alleviate this. I can do it in software, and when I discover the operating principles of the neocortex, I will be able to fix all the conspiracy nuts in the local nut house. Take care not to take for granted the fresh slate of your mind while you are young, because when you are old it'll be mostly full, and encoding new skills to disk will be much more difficult; the cost function is more reluctant to modify the grids, since doing so would damage your ability to consume resources, find mates and create more of you. Fill your mind with timeless wisdom and get good sleep before your hard disks become full.
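
Reading the "lambda" here as the regularization parameter from the linked Coursera ML course (an assumption), a minimal sketch of a regularized cost function:

    import numpy as np

    # Regularized least-squares cost in the style of the linked Coursera
    # course: the lambda term penalizes large weights, the standard cure
    # for overfitting.
    def cost(w, X, y, lam):
        m = len(y)
        errors = X @ w - y
        penalty = lam * (w[1:] @ w[1:])    # don't penalize the bias term
        return (errors @ errors + penalty) / (2 * m)

    X = np.column_stack([np.ones(5), np.arange(5.0)])  # bias column + feature
    y = np.arange(5.0)
    print(cost(np.array([0.0, 1.0]), X, y, lam=0.0))   # perfect fit -> 0.0
    print(cost(np.array([0.0, 1.0]), X, y, lam=10.0))  # same fit, now penalized -> 1.0

Raising lam forces simpler fits; at the extreme it underfits, which is the failure mode discussed a few replies down.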

elteto
I can't say if you are entirely accurate, but you couldn't have explained that in better terms!
etherael
> Conspiracy theorists suffer from a mental misconfiguration where the cost function applied to the neural network of neural networks suffers from "overfitting": finding patterns in randomness leads to conclusions that are not valid. A lambda term can be applied to the cost function which will alleviate this. I can do it in software, and when I discover the operating principles of the neocortex, I will be able to fix all the conspiracy nuts in the local nut house.

Wouldn't this work the other way, too? Not finding patterns in what turns out to not be randomness sometimes ends up getting you killed. It's a fine line between paranoia and attention to detail. Anyone with aspirations to "fix" this should probably take that into consideration.

maeon3
Correct. The opposite of overfitting is underfitting: knowing that whenever you talk to Joe you get punched, and you have 10 training examples, but this time is different, he's wearing his brown shirt, so it's probably safe now.

It's not finding the signal in the noise: Joe hitting me 10 times in a row is not conclusive, because most humans never hit me, and Joe is wearing new clothing, so it's safe, because Joe is a human.

rgbrenner
"If you don't sleep, you die."

No human has ever died from simply not sleeping (excluding accidents, etc., caused by lack of sleep).

http://www.scientificamerican.com/article.cfm?id=how-long-ca...

http://www.abc.net.au/health/talkinghealth/factbuster/storie...

fuzzythinker
See also http://en.wikipedia.org/wiki/Thai_Ngoc and http://en.wikipedia.org/wiki/Al_Herpin
The field is now called AGI (artificial general intelligence). It isn't mentioned in this article; everyone seems to be ignoring the whole field, or maybe they truly are ignorant of it.

Anyway, suffice to say, AI and AGI didn't stop progressing, and Chomsky is no longer any sort of expert in those fields.

Even Norvig isn't up to speed on the most advanced approaches to AGI, but at least he enters the same room with people who are aware of the field. For example, he gave a talk at the recent Singularity Summit.

The Fifth Conference on Artificial General Intelligence is going to be in Oxford in December. http://agi-conference.org/2012/

Here is some information for people who are interested in pertinent ideas related to AGI.

http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/06...

http://opencog.org/theory/

>OpenCog is a diverse assemblage of cognitive algorithms, each embodying their own innovations — but what makes the overall architecture powerful is its careful adherence to the principle of cognitive synergy.

>The human brain consists of a host of subsystems carrying out particular tasks — some more specialized, some more general in nature — and connected together in a manner enabling them to (usually) synergetically assist rather than work against each other.

http://wiki.opencog.org/w/Probabilistic_Logic_Networks

> PLN is a novel conceptual, mathematical and computational approach to uncertain inference. In order to carry out effective reasoning in real-world circumstances, AI software must robustly handle uncertainty. However, previous approaches to uncertain inference do not have the breadth of scope required to provide an integrated treatment of the disparate forms of cognitively critical uncertainty as they manifest themselves within the various forms of pragmatic inference. Going beyond prior probabilistic approaches to uncertain inference, PLN is able to encompass within uncertain logic such ideas as induction, abduction, analogy, fuzziness and speculation, and reasoning about time and causality.
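
As a flavor of what "uncertain inference" means concretely, here is a much-simplified sketch of truth values that carry both a strength and an evidence count, with a deduction step that assumes independence; this is only an illustration, not PLN's actual formulas:

    # Simplified (strength, evidence-count) truth values, loosely in the
    # spirit of the PLN description above. The deduction rule assumes
    # independence and is NOT PLN's actual inference formula.
    class TV:
        def __init__(self, strength, count):
            self.strength = strength    # how true, roughly P(conclusion)
            self.count = count          # how much evidence supports it

    def deduce(ab, bc):
        # From A->B and B->C, conclude A->C; evidence can only shrink.
        return TV(ab.strength * bc.strength, min(ab.count, bc.count))

    ravens_are_birds = TV(0.98, 500)
    birds_fly = TV(0.85, 60)
    ravens_fly = deduce(ravens_are_birds, birds_fly)
    print(round(ravens_fly.strength, 3), ravens_fly.count)   # 0.833 60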

http://wiki.opencog.org/w/AtomSpace

> Conceptually, knowledge in OpenCog is stored within large [weighted, labeled] hypergraphs with nodes and links linked together to represent knowledge. This is done on two levels: Information primitives are symbolized in individual or small sets of nodes/links, and patterns of relationships or activity found in [potentially] overlapping and nesting networks of nodes and links. (OCP tutorial log #2).
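
A minimal sketch of such a weighted, labeled hypergraph store; the class and field names are illustrative, not OpenCog's actual API:

    # Minimal weighted, labeled hypergraph in the spirit of the AtomSpace
    # description above. Names are illustrative, not OpenCog's actual API.
    class Atom:
        def __init__(self, atom_type, name=None, outgoing=(), weight=1.0):
            self.atom_type = atom_type        # label, e.g. "ConceptNode"
            self.name = name                  # set for nodes
            self.outgoing = list(outgoing)    # set for links: atoms they connect
            self.weight = weight              # simplified truth/attention value

    atomspace = []
    cat = Atom("ConceptNode", name="cat")
    animal = Atom("ConceptNode", name="animal")
    # A link is itself an atom, so links may point at other links; that
    # nesting is what makes the store a hypergraph rather than a plain graph.
    isa = Atom("InheritanceLink", outgoing=[cat, animal], weight=0.95)
    atomspace += [cat, animal, isa]
    print(len(atomspace), isa.outgoing[0].name)   # 3 cat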

http://www.izhikevich.org/publications/large-scale_model_of_...

Large-Scale Model of Mammalian Thalamocortical Systems

> The understanding of the structural and dynamic complexity of mammalian brains is greatly facilitated by computer simulations. We present here a detailed large-scale thalamocortical model based on experimental measures in several mammalian species. The model spans three anatomical scales. (i) It is based on global (white-matter) thalamocortical anatomy obtained by means of diffusion tensor imaging (DTI) of a human brain. (ii) It includes multiple thalamic nuclei and six-layered cortical microcircuitry based on in vitro labeling and three-dimensional reconstruction of single neurons of cat visual cortex. (iii) It has 22 basic types of neurons with appropriate laminar distribution of their branching dendritic trees. The model simulates one million multicompartmental spiking neurons calibrated to reproduce known types of responses recorded in vitro in rats. It has almost half a billion synapses with appropriate receptor kinetics, short-term plasticity, and long-term dendritic spike-timing-dependent synaptic plasticity (dendritic STDP). The model exhibits behavioral regimes of normal brain activity that were not explicitly built-in but emerged spontaneously as the result of interactions among anatomical and dynamic processes. We describe spontaneous activity, sensitivity to changes in individual neurons, emergence of waves and rhythms, and functional connectivity on different scales.
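
The unit that model scales up to a million of is Izhikevich's two-variable spiking neuron, which is compact enough to sketch in full. The parameters below are his published "regular spiking" settings; the full model in the paper layers multicompartmental dendrites, receptor kinetics and STDP on top of this:

    # Izhikevich's two-variable spiking neuron (Izhikevich 2003), with the
    # published "regular spiking" parameters.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = c, b * c          # membrane potential (mV) and recovery variable
    dt, I = 0.5, 10.0        # time step (ms) and injected current
    spikes = []
    for step in range(2000):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: reset v, bump the recovery variable
            spikes.append(step * dt)
            v, u = c, u + d
    print(len(spikes), "spikes in 1 second of simulated time")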

http://www.sciencebytes.org/2011/05/03/blueprint-for-the-bra...

Essentials of General Intelligence: The direct path to AGI

http://www.adaptiveai.com/RealAI_chap_ver2c.htm

>General intelligence, as described above, demands a number of irreducible features and capabilities. In order to proactively accumulate knowledge from various (and/ or changing) environments, it requires:

>1. Senses to obtain features from ‘the world’ (virtual or actual),

>2. A coherent means for storing knowledge obtained this way, and

>3. Adaptive output/ actuation mechanisms (both static and dynamic).

>Such knowledge also needs to be automatically adjusted and updated on an ongoing basis; new knowledge must be appropriately related to existing data. Furthermore, perceived entities/ patterns must be stored in a way that facilitates concept formation and generalization. An effective way to represent complex feature relationships is through vector encoding (Churchland 1995).

>Any practical applications of AGI (and certainly any real-time uses) must inherently be able to process temporal data as patterns in time – not just as static patterns with a time dimension. Furthermore, AGIs must cope with data from different sense probes (e.g., visual, auditory, and data), and deal with such attributes as: noisy, scalar, unreliable, incomplete, multi-dimensional (both space/ time dimensional, and having a large number of simultaneous features), etc. Fuzzy pattern matching helps deal with pattern variability and noise.

>Another essential requirement of general intelligence is to cope with an overabundance of data. Reality presents massively more features and detail than is (contextually) relevant, or that can be usefully processed. This is why the system needs to have some control over what input data is selected for analysis and learning – both in terms of which data, and also the degree of detail. Senses (‘probes’) are needed not only for selection and focus, but also in order to ground concepts – to give them (reality-based) meaning.
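
A minimal sketch of the "vector encoding" plus "fuzzy pattern matching" combination the quoted passage describes, reduced to cosine similarity against stored feature vectors with a noise-tolerant threshold; all numbers are invented:

    import numpy as np

    # Fuzzy recognition over vector-encoded features: nearest stored
    # pattern by cosine similarity, accepted only above a threshold.
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    memory = {
        "cup":  np.array([0.9, 0.1, 0.4]),
        "ball": np.array([0.1, 0.9, 0.8]),
    }

    def recognize(features, threshold=0.9):
        best = max(memory, key=lambda k: cosine(features, memory[k]))
        return best if cosine(features, memory[best]) >= threshold else None

    print(recognize(np.array([0.85, 0.15, 0.45])))   # noisy cup -> "cup"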

http://en.wikipedia.org/wiki/Hierarchical_temporal_memory

> A typical HTM network is a tree-shaped hierarchy of levels that are composed of smaller elements called nodes or columns. A single level in the hierarchy is also called a region. Higher hierarchy levels often have fewer nodes and therefore less spacial resolvability. Higher hierarchy levels can reuse patterns learned at the lower levels by combining them to memorize more complex patterns.

> Each HTM node has the same basic functionality. In learning and inference modes, sensory data comes into the bottom-level nodes. In generation mode, the bottom-level nodes output the generated pattern of a given category. The top level usually has a single node that stores the most general categories (concepts) which determine, or are determined by, smaller concepts in the lower levels which are more restricted in time and space. When in inference mode, a node in each level interprets information coming in from its child nodes in the lower level as probabilities of the categories it has in memory.

>Each HTM region learns by identifying and memorizing spatial patterns - combinations of input bits that often occur at the same time. It then identifies temporal sequences of spatial patterns that are likely to occur one after another.
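
A drastically simplified sketch of the two learning steps that last quote names, memorizing spatial patterns and then the temporal sequences between them; this is only the shape of the idea, not Numenta's actual algorithm:

    from collections import Counter, defaultdict

    # (1) memorize spatial patterns: recurring sets of active input bits;
    # (2) memorize temporal sequences: which pattern tends to follow which.
    spatial = {}                         # pattern -> id
    transitions = defaultdict(Counter)   # id -> Counter of successor ids

    def learn(stream):
        prev = None
        for bits in stream:              # each item: frozenset of active bits
            pid = spatial.setdefault(bits, len(spatial))
            if prev is not None:
                transitions[prev][pid] += 1
            prev = pid

    def predict(bits):
        pid = spatial.get(bits)
        nxt = transitions.get(pid)
        return max(nxt, key=nxt.get) if nxt else None

    a, b = frozenset({1, 4, 7}), frozenset({2, 5, 8})
    learn([a, b, a, b, a, b])
    print(predict(a))                    # prints the id assigned to pattern b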

sown
>1. Senses to obtain features from ‘the world’ (virtual or actual),

>2. A coherent means for storing knowledge obtained this way, and

>3. Adaptive output/ actuation mechanisms (both static and dynamic).

What does that even mean?

If it's so easy to sum it up in a chapter of a book, why don't they build it and allow others to examine it, submit it for review, write papers and submit to ACM, build fantastic machines based on it? All I want is a bit of proof.

You and I have similar interests: we would like for AGI to happen. Even though I'm not sure what AGI means. It's a sort of dream right now for me, but perhaps more of a reality for you?

Most of that large post is neat, but it's not going to convince someone who has never heard of AGI and who already knows something about AI.

The most you can do is what I mentioned earlier: go build systems, investigate, write papers and go to conferences, and I don't mean conferences where it's just AGI people.

ilaksh
Maybe you misunderstood. Those projects haven't achieved AGI or claimed to. But they are making tangible progress as I specified.
novaleaf
You, sir, are awesome for providing such a thought-provoking and detailed explanation... with references!
ilaksh
Thanks :)
SeanLuke
> The field is now called AGI.

No it's not.

driax
Could you please stop doing that? If you don't know anything, don't say anything. If you know that it truly isn't AGI (as you may do), say why!
SeanLuke
AI has forever been filled with buzzwords and trendy absurdity. Most of this lies on the soft computing end of the field, where self-styled visionaries hold forth with holistic mumbo-jumbo while valuable work is done by reputable researchers elsewhere.

AGI is one of those little goofy microtrends: so far as I can tell it's essentially a rebranding of Strong AI by soft computing reactionaries responding to AI getting dominated by domain specificity (otherwise known as "being successful"). To claim, as the grandparent appears to be doing, that AI is now properly called Artificial General Intelligence, is crackpottery at its finest.

Wake me up when AGI even appears on the first results page of a Google search for "AGI".

confluence
I thoroughly agree - see my comment above. The Singularity and SENS peeps give me the exact same feeling - as does Noam Chomsky. All talk - no walk.

I think it's fundamentally the difference between soft bullshit and hard calculations. Everyone can talk about AI, or linguistics, or statistics (or any complex field) in very general, undefined and bullshitty terms.

But what we need, and what the machine learning guys are bringing, is hard calculation: 1 + 1 = 2, or: take input data, get features and make decisions well above human abilities.

My question to all the fringe folks: Where's the beef? What have they done? Where are the automatic cars built on Chomsky's theories? Where are the talking robots from the AGI? What methods have the SENS people got? Are the singularity folks just leeches off gullible rich people - selling them a future and taking their cash in the process without providing any real value?

corporalagumbo
http://en.wikipedia.org/wiki/Noam_Chomsky_bibliography

Chomsky's a pretty substantial walker.

confluence
I think you mean talker. Walker would be things that actually did stuff - you know, like search, translation, locomotion or prediction. That's a lot of words and a lot of books in a bunch of the soft sciences (linguistics and politics), which are highly susceptible to class A bullshit. All of that doesn't mean anything - no different from Richard Dawkins, who irritates me in a similar fashion. I ask once again - where's the beef?

> Every time I fire a linguist, the performance of the speech recognizer goes up.

-- http://en.wikipedia.org/wiki/Frederick_Jelinek

I still don't see Chomsky robots walking around, Chomsky translation translating my text to French or Chomsky AI driving cars. Nope - all Google/IBM/Microsoft/DARPA/Boston Dynamics/etc. AKA Hard science-engineers utilising statistics, not soft science blowhards.

Only thing I see Chomsky doing is talk - a lot.

cdavid
If your definition of real science is something that has good prediction/application potential, that's a rather unusual definition of science. Are mathematicians and theoretical physicists all talk as well?
confluence
String theorists are bullshitty, yes.

Mathematicians aren't - you can prove their results' correctness over abstract planes and use them to, for example, run a hedge fund or a software company to trillions in revenue while making testable predictions in macro reality.

Theoretical physicists who use mathematics to make testable predictions aren't either. You can use their results to make electric engines, statistical extractors and accurate physical simulations that are corroborated by empirical evidence.

If the prediction is not testable, is unfalsifiable, is unreproducible, is not independent and is not supported by overwhelming evidence - it is bullshit - no ifs, buts or ands.

Volpe
Better let Andrew Wiles know his 8 years spent on Fermat's Last Theorem were just a bullshit waste of time, because he couldn't use it to run a hedge fund, or software company...

It can only be 'proved' in the 'contrived' world of pure mathematics... what bullshit!

confluence
I said mathematicians weren't bullshit.
Volpe
No, you provided a qualification of why they weren't... I gave you an example of a mathematician who broke your qualification, and logically should fall into your definition of a 'quack'.

The idea being, that you'd have to back pedal, and change your qualification. Which I could then use to apply to other fields, that you deem as 'quackery', and thus undo the foundation of your argument.

Instead, you just denied the reality of what you said... I didn't count on that. Well done.

adrianhoward
As somebody who used to do a bit of NLP work in the dim and distant past I can testify that Chomsky's work on context free grammars, etc. (certainly used to anyway - not been poking at it for about fifteen years now) got applied a fuck of a lot.

The model probably doesn't have a great deal of relation to what happens inside folks heads - but it was stupidly useful for making computers do stuff though. Might be better techniques now - I don't know. But saying that it wasn't applicable practical work is basically ignoring the NLP stuff that was happening in the 80's and 90's.

(let alone the more obvious useful stuff for us geeky folk - the formal grammar stuff we use and think about for compilers. Chomsky Hierarchy, etc.)
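
For readers who only know Chomsky through politics, a toy context-free grammar of the kind meant here, with a naive top-down recognizer; the grammar and code are illustrative, not from any actual NLP system or compiler:

    # A context-free grammar: the same formalism behind 80s/90s NLP parsers
    # and the parser generators compiler folk use.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"]],
        "VP": [["V", "NP"]],
        "N":  [["dog"], ["cat"]],
        "V":  [["chased"]],
    }

    def parses(symbol, words):
        """Yield remainders of `words` after deriving `symbol` from a prefix."""
        if symbol not in GRAMMAR:                  # terminal symbol
            if words and words[0] == symbol:
                yield words[1:]
            return
        for production in GRAMMAR[symbol]:
            rests = [words]
            for sym in production:
                rests = [r2 for r in rests for r2 in parses(sym, r)]
            yield from rests

    sentence = "the dog chased the cat".split()
    print(any(rest == [] for rest in parses("S", sentence)))   # True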

mturmon
Amen.

And, I'm not enough of a historian of science to know, but it seems to me that the basic results Chomsky proved on Regular Languages and CFGs paved the way for the Hidden Markov Models (HMMs) that have been so effective in language understanding. Basically, the HMMs are the natural probabilistic extension of Regular Languages.

I'm not sure if Viterbi and the other developers of the basic HMM toolkit were directly influenced by Chomsky, or if state machines were just in the air. Certainly Chomsky's basic work in the late 1950s predated Viterbi's work in the 1960s.
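
To make that point concrete: an HMM is essentially a probabilistic finite-state machine, and Viterbi decoding recovers the most likely hidden state path. A toy part-of-speech example with invented numbers:

    # Two hidden states, a regular language's state machine with
    # probabilities attached; Viterbi picks the best state path.
    states = ["NOUN", "VERB"]
    start  = {"NOUN": 0.6, "VERB": 0.4}
    trans  = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
              "VERB": {"NOUN": 0.8, "VERB": 0.2}}
    emit   = {"NOUN": {"dogs": 0.6, "bark": 0.4},
              "VERB": {"dogs": 0.1, "bark": 0.9}}

    def viterbi(words):
        best = {s: (start[s] * emit[s].get(words[0], 0), [s]) for s in states}
        for w in words[1:]:
            best = {s: max(((p * trans[prev][s] * emit[s].get(w, 0), path + [s])
                            for prev, (p, path) in best.items()),
                           key=lambda t: t[0])
                    for s in states}
        return max(best.values(), key=lambda t: t[0])[1]

    print(viterbi(["dogs", "bark"]))   # ['NOUN', 'VERB']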

ilaksh
I'm talking about the efforts to really reproduce human-like intelligence. I'm not saying that AI isn't a field, I am saying if you want human-like intelligence, the AGI people are the most far along, or at least the most serious about it.

Did you really look into AGI, for example the past conferences or those projects, and conclude that it is just valueless holistic mumbo-jumbo?

That is so unfair and inaccurate, I can't see how you can possibly be evaluating things rationally if you really came to that conclusion.

ilaksh
What they are talking about is the idea of human-like general intelligence. AI mostly doesn't try to do that anymore, although there are some people who are seriously trying and calling it AI, and even a few who are sort of aware of what the AGI people are doing or have projects that are as sophisticated. But most of the researchers who are farthest along and most serious about it have been calling it AGI.

Anyway, you have to at least include AGI if you are serious about human-like AIs.

sown
> and AI mostly doesn't try to do that anymore,

I think part of the problem is that we don't even know what AGI really is. How do you define consciousness in a rigorous way? That doesn't mean it can't be done without such work, but it just seems soooo undefined right now that people are suspicious if someone comes along and claims to have a partial solution.

Do we even know what we're looking for? When we do know, or have an idea, I am willing to imagine that we'd have more AI research in that area and AGI would be taken more seriously.

koningrobot
The point of AI is not conscious machines but machines that can do useful things. Consciousness is not useful except to the extent that it helps a machine do useful things.
ilaksh
You mean 'consciousness', which is actually mainly a philosophical distraction in common usage. Anyway, there are a lot of good starting definitions for what AGI is, including a bit in my comment and also in the descriptions of the projects that I referred to.
xyzzy123
Right; as far as I know there is no serious research in AGI. AI is applied statistics, and anyone who doesn't have that point of view isn't producing results right now.
krichman
The parent post spent a lot of time trying to inform us, including multiple links. To post a dismissive comment in response, with no explanation whatsoever, is well beyond rude.
sown
I think people are suspicious or dismissive of AGI, HTM, etc. because... well... there doesn't seem to be anything really to it. People I've talked to who know AI either don't know anything about HTM or have mildly negative things to say about it. Ditto for AGI. It's a contentious topic and people just get defensive.

Many of those links in grandparent post were from or about opencog. I can make long blog posts about opencog that refer to opencog as proof, too...but it wouldn't mean anything. Religious people do that sort of thing all the time.

The proof would be in the pudding, right? So if AGI at least has some hypothesis, then it should be able to produce some results, right?

I very much want AGI to happen. You want AGI to happen. Our interests are in agreement. However, there isn't really much proof about any current hypothesis, as far as I can tell, that can produce any real system. It's a dream so far.

I don't mean we need a really solid understanding of consciousness, but it's such an undefined, unknown area right now that we can't even approach it.

Instead of making long blog posts and replies to comments, and then getting offended when people don't buy into it, the most people can do right now is to go investigate, hypothesize and try to build something.

krichman
I'm totally fine with empiricism! Your post here is helpful. It's just rude and not helpful to respond to a post like that with nothing more than "no".
Volpe
It's pretty rude to post religious gumph as though it's fact too... I'd argue the response was reasonable.
adrianhoward
> The field is now called AGI

S'funny... when I was at The University of Sussex a few weeks back I met a lot of folk who seem to think they're still doing AI. There are still journals on the topic http://www.journals.elsevier.com/artificial-intelligence/. There are still degrees on the topic http://is.gd/5ii0Eu. I know folk doing postgrad work on the topic.

My degree was in Computing and AI and is, I admit, more than 20 years old now. But I still keep a weather eye on the field - and have many friends and acquaintances who work in it. There are certainly some who use the AGI name for a subset of the strong AI work, but it's not anywhere close to a universally used term. It certainly hasn't taken over from AI - a field that seems to be quite happy trundling along, thank you very much ;-)

confluence
I've heard of OpenCog before, and it, along with the Singularity crowd, gives me the same weird amateur, bullshitty, vague, generalist feeling that Noam Chomsky does. Basically - where's the beef? What has been done by either crowd apart from taking credit from those who do things in the actual industry/real world?

My fundamental aversion to both OpenCog and the entire Singularity crowd is that a) their statements are so general as to be useless and b) they don't do anything. Google makes search simple - go to google.com and find out. Google makes cars drive themselves - ask Nevada/California, and if you're a member of the press, request a test drive today. IBM's Watson definitively beat world champions in front of everyone, and before that they did it with Deep Blue.

Everyone in the other communities falls under this category: all talk - no walk.

The entirety of what I've gotten out of both groups is essentially little more than what religious people get out of going to a sermon at a church. The future will be grand, lots of bullshitty buzz words, lots of hand waving with huge claims - no hard calculations, no hard examples of what they've actually achieved.

I'll stick with Norvig/Google and his/their demonstrated achievements and knowledge over the talk, hype and vaporware projects of groups that have yet to show any hard progress apart from a bunch of lectures to rich people with a lot of vague words.

The SENS movement gives me the exact same feeling.

All talk - no walk.

corporalagumbo
Chomsky's expertise is in linguistics and political analysis. Steven Pinker's The Language Instinct is a good, readable introduction to some of Chomsky's work (and the wider field to which he is pivotal). Chomsky's Manufacturing Consent is probably his classic work of political analysis.

http://en.wikipedia.org/wiki/Noam_Chomsky_bibliography

He's no quack.

confluence
You know, in the soft sciences everyone is a quack, because fundamentally they don't practice - wait for it - science. Science stops false connections by correctly attributing cause to its respective effect. Social sciences do not. For all intents and purposes, the vast majority of social science is unreproducible, vague, mixes correlation with causation, relies on dependent variables, is poorly reasoned, rests on statistical quirks, is pushed by agendas, or is fundamentally flawed.

> are effective and powerful ideological institutions that carry out a system-supportive propaganda function by reliance on market forces, internalized assumptions, and self-censorship, and without overt coercion

-- http://en.wikipedia.org/wiki/Manufacturing_Consent:_The_Poli...

That's pretty self-evident to the point of being, well, pointless - admen of the 60s made their bread using this, and the PR pioneers of the 30s were already experts. But please let's all listen to what he has to say next. Let me guess: killing people is bad, and not killing people is good. If you call that amazing thinking, I'd hate to see the idiotic version.

Even better:

> Geoffrey Sampson maintains that universal grammar theories are not falsifiable and are therefore pseudoscientific theory. He argues that the grammatical "rules" linguists posit are simply post-hoc observations about existing languages, rather than predictions about what is possible in a language. Similarly, Jeffrey Elman argues that the unlearnability of languages assumed by Universal Grammar is based on a too-strict, "worst-case" model of grammar, that is not in keeping with any actual grammar. In keeping with these points, James Hurford argues that the postulate of a language acquisition device (LAD) essentially amounts to the trivial claim that languages are learnt by humans, and thus, that the LAD is less a theory than an explanandum looking for theories.

> Sampson, Roediger, Elman and Hurford are hardly alone in suggesting that several of the basic assumptions of Universal Grammar are unfounded. Indeed, a growing number of language acquisition researchers argue that the very idea of a strict rule-based grammar in any language flies in the face of what is known about how languages are spoken and how languages evolve over time. For instance, Morten Christiansen and Nick Chater have argued that the relatively fast-changing nature of language would prevent the slower-changing genetic structures from ever catching up, undermining the possibility of a genetically hard-wired universal grammar. In addition, it has been suggested that people learn about probabilistic patterns of word distributions in their language, rather than hard and fast rules (see the distributional hypothesis). It has also been proposed that the poverty of the stimulus problem can be largely avoided, if we assume that children employ similarity-based generalization strategies in language learning, generalizing about the usage of new words from similar words that they already know how to use.

> Another way of defusing the poverty of the stimulus argument is to assume that if language learners notice the absence of classes of expressions in the input, they will hypothesize a restriction (a solution closely related to Bayesian reasoning). In a similar vein, language acquisition researcher Michael Ramscar has suggested that when children erroneously expect an ungrammatical form that then never occurs, the repeated failure of expectation serves as a form of implicit negative feedback that allows them to correct their errors over time. This implies that word learning is a probabilistic, error-driven process, rather than a process of fast mapping, as many nativists assume.

> Finally, in the domain of field research, the Pirahã language is claimed to be a counterexample to the basic tenets of Universal Grammar. This research has been primarily led by Daniel Everett, a former Christian missionary. Among other things, this language is alleged to lack all evidence for recursion, including embedded clauses, as well as quantifiers and color terms. Some other linguists have argued, however, that some of these properties have been misanalyzed, and that others are actually expected under current theories of Universal Grammar.

-- http://en.wikipedia.org/wiki/Universal_grammar#Criticisms

Looks like I'm not the only one that sees through bullshit.

Let me repeat - just to imprint on people's minds:

> This implies that word learning is a probabilistic, error-driven process, rather than a process of fast mapping, as many nativists assume.

Chomsky's theories are, and always were, DOA.

wololo
to nitpick:

Everett is very controversial, for example:

Everett (2005) has claimed that the grammar of Pirahã is exceptional in displaying 'inexplicable gaps', that these gaps follow from a cultural principle restricting communication to 'immediate experience', and that this principle has 'severe' consequences for work on universal grammar. We argue against each of these claims. Relying on the available documentation and descriptions of the language, especially the rich material in Everett 1986, 1987b, we argue that many of the exceptional grammatical 'gaps' supposedly characteristic of Pirahã are misanalyzed by Everett (2005) and are neither gaps nor exceptional among the world's languages. We find no evidence, for example, that Pirahã lacks embedded clauses, and in fact find strong syntactic and semantic evidence in favor of their existence in Pirahã. Likewise, we find no evidence that Pirahã lacks quantifiers, as claimed by Everett (2005). Furthermore, most of the actual properties of the Pirahã constructions discussed by Everett (for example, the ban on prenominal possessor recursion and the behavior of WH-constructions) are familiar from languages whose speakers lack the cultural restrictions attributed to the Pirahã. Finally, following mostly Gonçalves (1993, 2000, 2001), we also question some of the empirical claims about Pirahã culture advanced by Everett in primary support of the 'immediate experience' restriction. We conclude that there is no evidence from Pirahã for the particular causal relation between culture and grammatical structure suggested by Everett. -- Pirahã Exceptionality: A Reassessment, http://dash.harvard.edu/handle/1/3597237

Pirahã actually has two color terms, 'dark' and 'light', which is Stage I in http://en.wikipedia.org/wiki/Basic_Color_Terms:_Their_Univer..., http://en.wikipedia.org/wiki/Linguistic_relativity_and_the_c...

corporalagumbo
Angry much? Have you actually read Chomsky, or are you just taking snippets from Wikipedia pages and saying told-you-so? Perhaps you should try reading Manufacturing Consent; it's a very careful and thorough work of analysis and not nearly as bleedingly obvious as you try to portray it.

One point: Sampson's criticisms about linguists producing post-hoc descriptions could just as easily have been (and were, I believe) applied to Newton's theories. Good science includes mapping and describing phenomena.

Another point: negative feedback on errors is not enough to account for the explosive speed of language acquisition in children. Not to say that this sort of feedback doesn't occur, or isn't useful, but it only really is used when children learn exceptions (i.e. irregular verb forms in English) or vocabulary (and even much of vocabulary is rule-generated). Basic language rules are encoded, and children's brains only require minimal stimulus to record the specific settings of the rules for the language they are learning.

G5ANDY
> social science is either unreproducible, vague, mixing correlation with causation, uses dependent variables, poorly reasoned, statistical quirks, pushed by agendas or fundamentally flawed...

Dr. Freud would have had a good deal to say about your apparent fixation with bovine feces...

Seriously though, your comments are playing fast and loose with a range of fields that you’re conflating and dismissing. Not all social sciences are “soft” and many have empirically-based real-world applications that shape your (and everyone’s, really) everyday lives.

Volpe
> Science stops false connections by correctly attributing cause to its respective effect.

So was Aristotle a quack as well?

I ask because he was pre-science, and pretty much laid the foundation for what became the scientific method (i.e. empiricism).

Perhaps before you dismiss large bodies of knowledge you should look up the history of science, and see that it has flaws in and of itself...

justin66
> You know in the soft sciences everyone is a quack because fundamentally they don't practice - wait for it - science.

I wonder if you know you're being ironic here. Plenty of us have never even read Chomsky's political works and have been exposed to him solely through mentions in the CS literature, like the Dragon book, or more in-depth stuff on his theory of context-free grammars. There is a startling amount of proof that he not only writes about politics but, at one time or another, actually worked for a living and helped our field produce useful stuff.

ilaksh
So I figured it out. Basically, they take the idea of AGI seriously, and actually consider and talk about the repercussions, and therefore you dismiss them and their ideas as fringe and not worth investigating. I know that because, if you had investigated at all, you would see that all of those projects have really interesting results and that these people are not being vague and hand-wavy.

Not all of those projects I listed identify themselves as AGI. However, they should go in the same group.

And anyway, all of those projects have demonstrated progress. If you looked into them at all then you would see that. Ben Goertzel is using some aspects of his AGI research in mainstream (narrow) AI projects. OpenCog has released a number of solid demonstrations of current features. And Goertzel isn't hand-waving or bullshitting in his numerous books and scientific papers, for example Probabilistic Logic Networks: A Comprehensive Framework for Uncertain Inference (336 pages).

Hawkins has demonstrated very interesting progress with his software and has a commercial application: https://www.numenta.com/grok_info.html

Voss is using his system at Adaptive AI as a commercial enterprise.

Qualcomm is funding Brain Corporation (Izhikevich et al.), so obviously they are taking it seriously. A bakery in Tokyo has tested Brain Corporation's machine vision technology to power a semi-automated cashier system:

http://www.diginfo.tv/v/12-0145-r-en.php

joe_the_user
I'm sympathetic to the aims of both Chomsky and OpenCog.

I know Chomsky is a serious scientist with considerable accomplishment.

I have seen totally loony stuff in videos of AGI conferences (tachyons and stuff). OpenCog may be better than that. But it hasn't proved that it is better than that.

The AI of the 1970s-80s involved the Chomskyan paradigm of "draw up a naive design of the mind and/or brain and implement it". That failed so badly that you need a really good argument for why you can do things differently - at least to move into mainstream science. That is, Ben Goertzel seems nice, smart and enthusiastic, but I can't see him bringing anything new to the "table". Jeff Hawkins had interesting ideas with his temporal paradigm, but it seemed like the model he chose to instantiate wasn't all that different from that used by the statistical-brute-force crowd. And Numenta has had really few announcements for a six-year-old enterprise.

As for the companies paying for AI to be added to their systems: that happened from the start, but it wasn't ever enough. What's different here from the stuff from twenty years ago?

bengoertzel
This is Ben Goertzel...

AGI is mainstream science, these days. The keynote of the 2012 AAAI conference (the major mainstream AI research conference each year), by the President of AAAI, was largely about how the time has come for the AI field to refocus on human-level AI. He didn't use the term "AGI" but that was the crux of it.

The "AI winter" is over. Maybe another will come, but I doubt it.

What's different from 20 years ago? Hardware is way better. The Internet is way richer in data, and faster. Software libraries are way better. Our understanding of cognitive and neural science is way stronger. These factors conspire to make now a much better time to approach the AGI problem.

As for my own AGI research lacking anything new, IMO you think this because you are looking for the wrong sort of new thing. You're looking for some funky new algorithm or knowledge structure or something like that. But what's most novel in OpenCog is the mode of organization and interaction of the components, and the emergent structures associated with them. I realize it's a stretch for most folks to realize that the novel ingredients needed to make AGI lie in the domain of systemic organizational principles and emergent networks rather than novel algorithms, data structures or circuits -- but so it goes. It wouldn't be the first time that the mass of people were looking for the wrong kind of innovation, hmm?

Regarding tachyons in videos of AGI conferences, could you provide a reference? AGI conference talks are all based on refereed papers published by major scientific publishers. Some papers are stronger than others, but there's no quackery there.... (There have been "Future of AGI" workshops associated with the AGI conferences, which have had some freer-ranging speculative discussions in them; could you be referring to a comment an audience participant made in a discussion there?)

joe_the_user
Thank you for your reply, Ben.

I wish you luck (well sort-of - with great power would come great responsibility and all-that).

I wasn't making up the tachyon guy. If I have time, I'll dig up the video (it'd be a little hard since the hplus website reorganized). He was a presenter, not an audience member, and had at least a paper at one of these conferences. I can easily believe the AGI conferences have gotten better.

I would stick to the point that AGI needs to make clear how it will overcome previous problems - being clear to mainstream science is useful for funding, but being clear to yourselves, so you have ways to proceed, is most important.

I don't necessarily agree exactly with Hubert Dreyfus' critique, but I think that at a minimum a counter-critique to his critique is needed to clarify how an AGI could work.

A good summary of his argument would be: http://leidlmair.at/doc/WhyHeideggerianAIFailed.pdf

I mean, I have worked in computer vision (not that much even). There's no shortage of algorithms that solve problem X but nothing in particular weds them together. Confronted with a new vision problem Y, you are forced to choose one of these thousand algorithms and modify it manually. You get no benefit from the other 999.

As far as open source methodologies solving the AGI question: I've followed multiple open source projects. While certain things might indeed work well developed in the "bazaar" style, I haven't seen something as exacting as a computer language come out of such a process - languages tend to require an individual designer working rather exactly, with helpers certainly, but in many, many situations almost alone (look at Ruby, Perl, Python, etc.). I would claim AGI is at least as exacting as a computer language, possibly more so. Further, just consider how the "software crisis" - the limitations involved in producing large software with large numbers of people - expresses the absence of AGI. Essentially, to create AGI you would need to solve something like a bootstrapping problem, so that the intentions of the fifty or five thousand people working together add up to more than what fifty or five thousand intentions normally add up to in normal software engineering. I suppose I believe some progress on that very basic level is needed to address this.

mturmon
As a point of reference, here's the agenda for the most recent AGI conference:

http://agi-conf.org/2011/conference-schedule/

And, just for comparison, here's the agenda for the most recent ICML conference:

http://icml.cc/2012/schedule/

To me, the AGI conference seems to have a much higher ratio of "speculative ideas"/"technical results" talks. Also to me, this pretty much justifies the "all talk - no walk" assessment.

bengoertzel
This is Ben Goertzel, chief founder of the AGI conference series.

You are correct that the AGI conferences have a higher ratio of "speculative ideas"/"technical results" than ICML. This is intentional and, I believe, appropriate -- because AGI is at an earlier stage of development than machine learning, and because it's qualitatively different in character from machine learning.

Machine learning (in the sense that the term is now typically used, i.e. supervised classification, clustering, data mining, etc.) can be approached mainly via a narrowly disciplinary approach. Some cross-disciplinary ideas have proved valuable, e.g. GAs and neural nets, but the cross-disciplinary ideas there have quickly been "computer-science-ized"...

OTOH, I think AGI is inherently more complex and multifarious than ML as currently conceived, and hence requires more "out of the box" and freely multi-disciplinary thinking.

I think that in 10-15 years, when the AGI field is much more mature, the conferences will seem a bit more like ML conferences in terms of the percentage of papers reporting strong technical results. BUT, they will never seem as narrowly disciplinary as ML conferences, because AGI is a different sort of pursuit...

mturmon
Thanks for the kind reply. I said ICML, but NIPS would have been a better point of reference -- since it was originally conceived as a cross-disciplinary enterprise. The NIPS TOC looks like this:

http://nips.djvuzone.org/nipsxx-toc.html

which indicates it's possible to have a selection of papers both technically sharp and interdisciplinary. We should all be so lucky to attract such a set of papers.

I'm reminded of the 1958 editorial by Peter Elias in the IEEE Information Theory Transactions ("Two Famous Papers"): http://oikosjournal.files.wordpress.com/2011/09/elias1958ire...

I sincerely wish you, your conference, and your research enterprise the best.

xyzzy123
I think AGI is an important field of study - but only from an ethical viewpoint!

In terms of engineering yeah, the trend in AI at the moment is applied statistics for sure, and it wins hard.

bengoertzel
Hi, this is Ben Goertzel, the chief founder of the OpenCog AGI-focused software project and of the AGI conference series.

Comparing Google Search and IBM Watson to OpenCog and other early-stage research efforts is silly. Google Search and IBM Watson have taken fairly mature technologies, pioneered by others over decades of research, and productized them fantastically. OpenCog is a research project and is aimed at breaking fundamentally new research ground, not at productizing and scaling-up technologies already basically described in the academic literature.

Lecturing is a very small percentage of what those of us involved with OpenCog do. We are building complex software and developing associated theory. Indeed parts of our approach are speculative, and founded in intuition alongside math and empirics. That's how early-stage research often goes.

Of course you can trash all early-stage research as not having results yet. And the majority of early-stage research will fail, probably making you tend to feel vindicated and high and mighty in your skepticism ;p .... But then, a certain percentage of early-stage research will succeed, because of researchers having the guts to follow their intuitions in spite of the ceaseless tedious sniping of folks like you ;p ...

- Ben Goertzel

HN Books is an independent project and is not operated by Y Combinator or Amazon.com.