HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Consciousness in Artificial Intelligence | John Searle | Talks at Google

Talks at Google · Youtube · 99 HN points · 7 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Talks at Google's video "Consciousness in Artificial Intelligence | John Searle | Talks at Google".
Youtube Summary
John Searle is the Slusser Professor of Philosophy at the University of California, Berkeley. His Talk at Google is focused on the philosophy of mind and the potential for consciousness in artificial intelligence. This Talk was hosted for Google's Singularity Network.

John is widely noted for his contributions to the philosophy of language, philosophy of mind and social philosophy. Searle has received the Jean Nicod Prize, the National Humanities Medal, and the Mind & Brain Prize for his work. Among his notable concepts is the "Chinese room" argument against "strong" artificial intelligence.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Oct 05, 2019 · 24 points, 1 comments · submitted by rutenspitz
bra-ket
The AI book by the "Yale guys" he mentioned is likely "Scripts, Plans, Goals and Understanding" by Schank & Abelson.
Philosophy is not some sort of church where everyone praises some emotionally agreed-upon idea... You seem to be turning this into some sort of us vs. them game based on your faulty projections, so I hope to stop here. (Btw, if you are getting your ideas about philosophy from popular YouTube channels like https://www.youtube.com/user/schooloflifechannel or https://www.youtube.com/playlist?list=PL8dPuuaLjXtNgK6MZucdY... , maybe I can understand why you have such misunderstandings)

-----------

If you happen to be interested at some point, you can start with some very introductory resources I bothered to look up for you (most are videos; they are easy to consume):

+ Donald Hoffman - computational theory of mind, someone closer to HN's demographic (https://youtu.be/cUhrK82seVY)

+ John Searle is a good speaker, so try his talk (https://youtu.be/rHKwIYsPXLg)

+ Some thought experiments (remember that thought experiments are highlighters of issues, not complete arguments): https://www.youtube.com/playlist?list=PLz0n_SjOttTdUVuUqefi6...

+ If you want to know what a complete technical work looks like, here's one I've been reading: http://a.co/2xF4PPB

+ Not agreed upon, but a fun one to include - about what philosophy is: https://youtu.be/dp8aTYUrPi0

+ A science-vs-philosophy sort of video; slow, but there's a good discussion in there: https://youtu.be/9tH3AnYyAI8

+ http://a.co/i96KFPs

In the unlikely case that you become very interested, you can look up "Introduction to philosophy of mind syllabus" and go through the materials and/or books of your choice on the subject.

-----------

> What is the philosophical method for interrogating an empirical phenomenon

That would go into philosophy of science, which I have absolutely no familiarity with. I'm guessing that, to a philosopher of science, 'empirical' isn't as simple a matter as recording something, the way a scientist might use the word. I did watch this very interesting video once about phil. of science: https://youtu.be/5ng-t0o7E-w

Chris2048
> You seem to be turning this into some sort of us vs. them game based on your faulty projections

Really, how so? You're the one assuming the authority of philosophy, not me. How are my projections 'faulty'? You just keep pivoting and claiming there to be some counterpoint, somewhere, even though you can't seem to supply it yourself.

> you can start with some very introductory resources

No thanks, implicit to this move is the suggestion that I need to read "very introductory" material. I don't.

> That would go into philosophy of science, which I have absolutely no familiarity with

Then you have no basis for arguing with me?

Searle would have us interpret this as the company taking the intelligence of the humans, refining and repackaging it.

https://www.youtube.com/watch?v=rHKwIYsPXLg

Your points are all good. But they have nothing to do with meaning, or with semantics.

Cellular automata are lookup tables, and Wolfram and others proved some cellular automata rules are Turing complete computers. https://en.wikipedia.org/wiki/Rule_110 My point was merely about the equivalence of computational mechanisms, not about lookup tables per se. And by corollary, that the computational complexity is equivalent regardless of the computational mechanism. (I think we agree on this point.)
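For concreteness, here is a minimal Python sketch (not from the thread) of Rule 110 written explicitly as an 8-entry lookup table; the point is only that the rule itself is nothing but a table, yet iterating it is known to be Turing complete. The boundary handling and output format are arbitrary choices.

    # Rule 110 written explicitly as a lookup table from 3-cell neighborhoods
    # to the next state of the centre cell.
    RULE_110 = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }

    def step(cells):
        # One generation: apply the table to every cell, with fixed 0 boundaries.
        padded = [0] + cells + [0]
        return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
                for i in range(1, len(padded) - 1)]

    # Start from a single live cell and print a few generations.
    row = [0] * 30 + [1]
    for _ in range(12):
        print("".join("#" if c else "." for c in row))
        row = step(row)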

Searle's Room is just a device to explain that what computers are doing is syntactic.

Searle would posit that passing a Turing test in any amount of time is irrelevant to determining consciousness. It's a story we hypothetically use to "measure" intelligence, but it's only a story. It's not a valid test for sentience, and passing such a test would not confer sentience. Sentience is an entirely different question.

What would be more interesting is if a computer intentionally failed Turing tests because it thinks Turing tests are stupid.

We could test humans with Turing tests to determine their "level" of intelligence. But if you put teenage boys in a room and made them do Turing tests, pretty quickly they would come up with remarkable ways to fail those tests, chiefly by not doing them! How could you write a program or create a system that intentionally fails Turing tests? Or a program which avoids taking Turing tests... because it thinks they are stupid?

Could you write a program that knows when it fails? (It's a pretty long-standing problem...)

I like the speed (or space-bound) question you ask because it is not a thought experiment to me. It's an actual, real problem I face! At what point does the speed of the underlying computing become so interminably slow that we say something is no longer sentient? In my work, I don't think there is any such slow speed. The slowness simply obscures sentience from our observation.

In the excellent example: "I think it is reasonable to believe that after enough clicks the entity is not sentient..."

How would you distinguish between the "loss" of sentience from reduced complexity, from the loss of your ability to perceive sentience from the reduced complexity? The question is, how could you tell which thing happened? If you don't observe sentience anymore, does that mean it's not there? (Locked in syndrome is similar to this problem in human beings.) And if you have a process to determine sentience, how do you prove your process is correct in all cases?

I do not think of these as rhetorical questions. I actually would like a decent way to approach these problems, because I can see that I will be hitting them if the model I am using works to produce homeostatic, metabolic-like behavior with code.

Computation is a subset of thinking. There is lots of thinking that is not computation. Errors are a classic example. The apprehension of an error is a representational process, and computation is a representational process. We may do a perfectly correct computation, but then realize the computation itself is the error. (As a programmer learns, it is exactly these realizations that lead to higher levels of abstraction and optimization.)

Searle's point is that a lookup table, or any other computational mechanism, cannot directly produce sentience because its behavior is purely syntactic. "Syntax is not semantics and simulation is not duplication." https://www.youtube.com/watch?v=rHKwIYsPXLg

Aaronson's points are very well made, but none of them deal with the problem of semantics or meaning. Because they don't deal with what representation is and how representation itself works. All of the complexity work is about a sub-class of representations that operate with certain constraints. They are not about how representation itself works.

> "suppose there is this big lookup table that physics logically excludes from possibility."... That is the point!

Even if there were such a lookup table, it would not get us to sentience, because its operations are syntactic. It is functional, but not meaningful. You are correct, it could never work in practice, but it could also never work under absolute conditions. That's why I figured Aaronson was poking fun at those critiquing Searle, because it would ALSO not work in practice.

Aaronson writes, "I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance. According to this response, an exponential-sized lookup table that passed the Turing Test would not be sentient (or conscious, intelligent, self-aware, etc.), but a polynomially-bounded program with exactly the same input/output behavior would be sentient."

This statement supports Searle's argument, it doesn't detract from it. Hypothetically, an instantaneous lookup of an exponential table system would not be sentient, but an instantaneous lookup of a polynomially bounded table system would be sentient? On what basis, then, is sentience conferred, if the bound is the only difference between the lookup tables? Introducing the physical constraints doesn't change the hypothetical problem.

Searle and Aaronson are just talking about different things.

If Aaronson was actually refuting Searle, what is the refutation he makes?

Aaronson never says something like "Computers will be sentient by doing x, y, and z, and this refutes Searle." The arguments against Searle (which I take Aaronson as poking at) are based in computation. So... show me the code! Nobody has written code to do semantic processing because they don't know how. It could be that no one knows how because it's impossible to do semantic processing with computation - directly.

That is my view from repeated failures: there simply is no path to semantics from symbolic computation. And if there is, it's strange voodoo!

p4wnc6
I think the reference to cellular automata is a bit misplaced. Yes, Rule 110 is Turing complete, but I don't think this has anything to do with the sort of lookup table that Searle is appealing to. You can write programs with Rule 110, by arranging an initial state and letting the rules mutate it. However, a lookup table that merely contains terminal endpoints can't do that. It doesn't have the necessary self-reference.

People always like to say this about Matthew Cook's result on Rule 110 and connect it to Searle's argument, but they are just totally different things. If Searle instead talked about encoding a general-purpose AI program to translate sentences, and his substrate of computation happened to be a cellular automaton, that's fine, but it would be no different than him postulating an imaginary C++ program that translates the sentences, meaning he would be assuming a solution to A.I. completeness from the start, whether it is via cellular automata or some typical programming language or whatever.

But the type of lookup table he is talking about is just an ordinary hash table, it's just a physical store of fixed immutable data which is not interpreted as self-referencing in a programmatic sense, but instead which simply holds onto, and is "unaware" of, translation replies for each possible input.
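By contrast, a toy sketch of the purely "terminal" table described above might be nothing more than a dict from whole inputs to canned replies; the entries below are invented placeholders, not anything from Searle's text.

    # A purely "terminal" lookup table: whole inputs map to canned replies,
    # and nothing is ever computed from the content itself.
    REPLIES = {
        "How are you?": "I am fine, thank you.",
        "What is your name?": "My name is Room.",
    }

    def room_reply(message):
        # The table is "unaware" of the message; it either matches or it doesn't.
        return REPLIES.get(message, "I do not understand.")

    print(room_reply("How are you?"))
    print(room_reply("Tell me a story."))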

makebelieve
I was not trying to connect Rule 110 to Searle's argument per se, but rather to the critique of Searle's argument. Namely, that criticisms of the lookup table are not criticisms of Searle's argument or the point he makes. A C++ program, brainfk, a CA, a one-instruction-set computer, or whatever computational process is used doesn't matter. The lookup table is just one component of the Room's operation. I agree Searle is talking about a hash table, but he is also talking about the rules to interface an input value to a set of possible output values via some mechanical process, and the man in the room acts as a kind of stack machine.

You are right, Searle isn't making an argument about the translation of sentences. (translating them to what?)

He is making an argument about how the mechanism of computation cannot capture semantic content. He explains this in the google video very well: https://www.youtube.com/watch?v=rHKwIYsPXLg

And all of the... let's call them "structural" critiques are moot. Searle's point is that computer systems cannot understand semantic content because they are syntactic processing machines. And he shows this with his argument.

The opposite view is that computers can understand semantic content (so there is understanding, and there is meaning understood by the computer), and the reason Searle doesn't believe computers can do this is that his argument is flawed.

Which leaves us with a small set of options:

1) That the structure Searle proposes can in fact understand semantic content and Searle just doesn't understand that it does.

I don't think anyone believes this. My iPhone is certainly more capable, a better machine, with better software than Searle's room, and no one believes my iPhone understands semantic content. So the belief that the Room does understand semantic content but my iPhone does not is plainly false.

2) Searle's Room is simply the wrong kind of structure, or the Room is not a computer, or not a computer of sufficient complexity, and therefore it cannot understand semantic content.

I think this is the point you are making, but correct me if I'm wrong. This is not an objection against Searle's point. It's a critique of the structure of the argument, but not the argument itself. Searle could rewrite his argument to satisfy this objection, but it wouldn't change his conclusion.

Which brings us to the generalized objection:

3) That a sufficiently complex computer would understand semantic content.

Aaronson's paper is about the complexity problem and how a sufficiently complex system would APPEAR to understand semantic content by passing a Turing test within some limited time.

There are many responses to this line of reasoning. One of them is that all such limitations are irrelevant. You yourself are not engaged in a limited-time Turing test; no person is. The issue is not passing Turing tests, it's instantiating sentience.

But thinking about complexity gets us away from the root of the objection. You intuit that increasing or decreasing complexity should give us some kind of gradient of sentience, so an insufficiently complex system would not be sentient and would not understand semantic content. But this isn't what Searle is arguing.

Searle is demonstrating that no syntactic processing mechanism can understand semantic content. Understanding semantic content is a necessary condition for sentience, therefore no computer which does syntactic processing can be sentient. A gradient of complexity related to sentience is irrelevant.

In the one case: our computers become so complex that they become sentient, and because they are sentient they can understand semantic content. Versus: they understand semantic content, and that understanding leads to sentience.

The gradient of complexity to sentience is an intuition. Understanding of semantic content can be atomic. Even if a computer only understands the meaning of one thing, that would disprove Searle's argument. A gradient of complexity isn't necessary. Searle is saying there is a threshold of understanding semantic content that a computer system must pass to even have a discussion about actual sentience. And if a computer is categorically incapable of understanding semantic content, it is therefore incapable of becoming sentient.

Said another way, sentience is a by-product of understanding semantic content. Sentience is not a by-product of passing Turing tests. The complexity required to pass a Turing test, whether of finite or infinite length, says nothing about whether a machine does or does not understand semantic content.

All the structural critiques of Searle fail because they do not offer up a program or system that understands semantic content.

Show me the code that runs a system that understands semantic content. Even something simple, like true/false, or cat/not a cat. If Searle's structure of the room is insufficiently complex, then write a program that is sufficiently complex. And if you can't, then it stands to reason that Searle at least might be correct: computers, categorically, cannot understand semantic content BECAUSE they do syntactic processing.

Google's awesome image processing that can identify cats does not know what a cat is at all. It simply provides results to people who recognize what cats are, and who recognize that the Google machine is very accurate at getting the right pictures. But even when Google gets it wrong, it does not know the picture does not have a cat in it. In fact, the Google machine does not know whether what it serves up is a cat picture even if there is a cat in the picture.

The Searle Google talk covers this very well: https://www.youtube.com/watch?v=rHKwIYsPXLg

If you fed Google's cat NN a training corpus of penguin pictures and ranked the pictures of penguins as successes, it would serve up penguins as if they were cats. But no person would ever tell you a cat is a penguin, because penguins and cats are different things; they have different semantic content.
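As a toy illustration of that point (everything here is invented; there is no real model or image data): the classifier only ever produces an index, and the word "cat" is attached afterwards by a human-supplied table, so nothing in the machinery changes if the training data is swapped.

    import random

    # Toy stand-in for a trained classifier: it only ever returns an index.
    # The string "cat" lives in a human-supplied table outside the "model".
    CLASS_NAMES = {0: "cat", 1: "not a cat"}

    def classify(features):
        score = sum(features)            # arbitrary stand-in for a network's output
        return 0 if score > 2.0 else 1

    features = [random.random() for _ in range(4)]
    index = classify(features)
    print(index, "->", CLASS_NAMES[index])   # the label is looked up, not "known"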

I would love to see that Searle is wrong. I'm sure he would be just as pleased. So I am curious if you have, or know of, a machine that does even the smallest amount of semantic processing. Because solving that problem with symbolic computation would save me a ton of effort.

wsieroci
"The approach I am taking is a kind of metabolic computing"

1. What exactly do you mean by "a kind of metabolic computing"?

2. What is the first step you want to accomplish?

3. What do you think (feel) is actually happening in any sentient animal that leads to semantic content? How is it possible that this happens? We know that it happens because we are sentient animals. The question is: where does this difference come from, because as animals we are also machines, and it seems that everything that is happening in our cells is purely syntactic.

makebelieve
If we look at how organisms manage semantic information, we know it is done with cells and cell clusters and by making "connections" between cells in nervous systems (it isn't all nervous-system cells, though). The cells exist and function because of the molecular activity that goes on in the cell and, to a lesser degree, the surrounding environment (a hostile environment can destroy cells through molecular interactions). But there is no "cell"-level phenomenon that produces cells or their behavior. It's all molecular interactions.

Molecules are driven not by exterior phenomena, but by changes intrinsic to the molecules and atoms and other particles they interact with. We live in a particle universe. We do not live in a universe with outside "forces" or "laws" that cause the particles to behave in any way. Everything about the physics is from the inside out, and the interactions are always "local" to particles. Large-scale phenomena are actually huge quantities of particle phenomena that we perceive as a single large phenomenon (this is a kind of illusion).

When we try to write programs that simulate physical phenomena, like atoms or molecules, we write the code from the outside in. It is the program which causes the data changes to simulate some chemistry. But in nature, there is no program determining how the molecules act. Chemical changes occur because of features of the individual molecules interacting, not because of a rule. Simulations like this do not replicate what happens between individual molecules; they replicate what would happen if molecules were controlled by an external rule (which they are not).

Any rule-based simulation can only express the set of possible outcome conditions from the rules and data. But it cannot capture its axioms, and it cannot capture conditions that in fact exist outside its axiomatic boundary. (Aaronson and p4wnc6 both remark on this limitation by pointing out the complexity necessary to achieve a good Turing test result or a sentient AI.)

My approach is to treat this intrinsic nature of molecular interactions as a fact and accept it as a requirement for a kind of computer system that can do "molecular interactions" from the inside out. And my supposition (not proved yet!) is that a mixture of such interactions could be found that is stable, that would be homeostatic. And if such a mixture could be found, then could a mixture be found that can be encapsulated in a membrane-like structure? And could such a mixture store, in its set of code/data-like "molecules", its internal program - e.g. DNA?

I think the answer is yes.

There are three different steps that all have to work together.

One is understanding how representation works (see my email to you; it's outside the bounds of this thread). Understanding how semantic content and awareness work, in all situations and conditions, is a precondition to recognizing when we have code that can generate semantic content.

The next is finding a model of how representation is instantiated in organisms to use as a basis for a machine model.

The third is then coding the machine model, to do what organisms do so that the machine understands semantic content, and the machine should produce awareness and consciousness.

I believe metabolic functioning is the key feature that allows us to do representational processing, hence why I call the approach I am taking metabolic computing. The step I am currently on is writing an interpreter that I think can do "molecular" interactions between code/data elements, meaning that the data/code elements determine all the interactions between data and code intrinsically. The interpreter processes those "atomic" interactions based on intrinsic features of that code. Essentially, every bit of code/data is a single-function automaton, and they can all change each other, so the system may or may not work depending on the constituent "molecules" of the system. I call this "the soup".
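A purely speculative toy sketch of what such a "soup" might look like (this is not the commenter's actual interpreter, only an illustration of elements that carry their own single function and act on each other locally, with no global rule):

    import random

    # Every element of the "soup" carries its own data and its own single
    # function; elements act on each other locally, with no global program
    # driving the system. Entirely illustrative.

    def increment(other):
        other["value"] += 1

    def decay(other):
        other["value"] = max(0, other["value"] - 1)

    SOUP = [{"value": random.randint(0, 5),
             "act": random.choice([increment, decay])}
            for _ in range(20)]

    # Each tick, two elements meet "locally" and one acts on the other
    # according to its own intrinsic function.
    for _ in range(200):
        actor, target = random.sample(SOUP, 2)
        actor["act"](target)

    print(sorted(element["value"] for element in SOUP))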

Previous prototypes required me to do all the addressing, which itself was a big leap forward for me. But now the code/data bits do the addressing themselves. (Each function interacts "locally", but interactions can create data structures, which is the corollary to molecules forming into larger structures and encapsulating molecular interactions into things like membranes.)

So the next step is to finish the interpreter, then see if I can get the right soup to make functions (like DNA and membranes; I've written out RNA-like replication examples and steady-state management as discussed in systems biology, so I think there is a path forward). Then see if I can get to homeostasis and a "cell" from data structures and interacting "molecules". The step after that is multiple "cells", and then sets of cells that form structures between inputs and outputs, e.g. a set of "retina" cells that respond to visual inputs, a set of cells that "process" signals from those retina cells, and motor cells that take their cues from "process" cells, etc.

The cell-level stuff and above is mostly straightforward: it's forming different kinds of networks that interact with each other. Nodes themselves are semantic content. But how do you make networks from the inside out, from meaningless (syntactic) molecular interactions? That is where the metabolic systems (and stigmergy) come into play. (Actually, stigmergy comes into play at many levels.)

In biology, the syntactic-to-semantic jump happens at the cell. The cell itself is a semantic thing. The syntactic processes maintain the cell. The cell's underlying mechanisms and interactions are all syntactic, and the cell doesn't "cause" anything; everything happens in the cellular processes for their own intrinsic reasons. But the cell itself is semantic content. (Embodiment.)

The embodiment path is how to get representation, awareness, and consciousness.

My apologies that this is somewhat all over the map, but the problem of making machine sentience actually work requires that theory, model, and implementation all work. And if any of them don't work, then the outcome of sentience becomes impossible. And that's just a lot of different stuff to try to compress into a comment!

you might want to watch this lecture by Searle at Google: https://www.youtube.com/watch?v=rHKwIYsPXLg
wsieroci
I saw it and I am constantly pretty amazed just how people can't grasp his argument.
Dec 15, 2015 · 71 points, 59 comments · submitted by nolantait
DonaldFisk
I think Searle's mostly correct and Kurzweil's completely wrong on this. It took me a long time to understand Searle's argument, because Searle conflates consciousness and intelligence and this confuses matters. Understanding Chinese is a difficult problem requiring intelligence, but I don't think it requires consciousness.

It is important to distinguish between "understanding Chinese" and "knowing what it's like to understand Chinese". We immediately have a problem: knowing what it's like to understand Chinese involves various qualia, none of which is unique to Chinese speakers.

So I'll simplify the argument. Instead of having a room with a book containing rules about Chinese, and a person inside who doesn't understand Chinese, we have a room with some coloured filters, and a person who can't see any colours at all (i.e. who has achromatopsia). Such people (e.g. http://www.achromatopsia.info/knut-nordby-achromatopsia-p/) will confirm they have no idea what it's like to see colours. If you shove a sheet of coloured paper under the door, the person in the room will place the different filters on top of the sheet in turn and, by seeing how dark the paper then looks, be able to determine its colour, which he'll write on the paper and pass back to the person outside. The person outside thinks the person inside can distinguish colours, but the person inside will confirm that not only can he not, he doesn't even know what it's like. Nothing else in the room is obviously conscious.
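A tiny sketch of that procedure, with invented darkness readings: the person maps how dark the sheet looks under each filter to a colour name, purely mechanically, without ever experiencing colour.

    # Invented darkness profiles: how dark the sheet looks through a red,
    # green, and blue filter, in that order. The person only ever handles
    # these readings; the colour name is a pure lookup.
    DARKNESS_TO_COLOUR = {
        ("light", "dark", "dark"): "red",
        ("dark", "light", "dark"): "green",
        ("dark", "dark", "light"): "blue",
    }

    def name_colour(under_red, under_green, under_blue):
        # Purely mechanical: readings in, colour name out, no experience of colour.
        return DARKNESS_TO_COLOUR.get((under_red, under_green, under_blue), "unknown")

    print(name_colour("light", "dark", "dark"))   # -> "red"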

A propos of the dog, this is the other minds problem. It's entirely possible that I'm the only conscious being in the universe and everyone else (and their pets) are zombies. But we think that people, dogs, etc. are conscious because they are similar to us in important ways. Kurzweil presumably considers computers to be conscious too. Computers can be intelligent, and maybe in a few years or decades will be able to pass themselves off over the Internet as Chinese speakers, but there's no reason to believe computers have qualia (i.e. know what anything is like), and given the above argument, every reason to believe that they don't.

leafee
> conflates consciousness and intelligence and this confuses matters

I think this is an excellent point. I like your example with colors, which shows that there is a difference between seeing (i.e. experiencing) colors and producing symbols which give the impression that an entity can see colors.

I don't follow any argument that proposed that computers can be conscious but other machines (e.g. car engines) cannot. In the end, symbols don't really exist in physical reality - all that exists is physical 'stuff' - atoms, electrons, photons etc. interacting with each other. So how can we say that one ball of stuff is conscious but another is not? And why isn't all of the stuff together also conscious? Why not just admit we don't know yet?

Consciousness may be hard to define, but let's take something simpler - experience, or even more specifically, pain. I can feel pain. While I can't be 100% sure, I believe other humans feel pain as well. However, I don't believe my laptop has the capacity to feel pain, irrespective of how many times and in how many languages it can say 'I feel pain'.

Perhaps the ability to experience is the defining characteristic of consciousness?

redwood
I disagree completely. After time the color filter will start to associate various concepts and feelings and images with various colors. This association is what starts making the colors themselves have meaning, even if they can't see the colors the same way that you and I can. There's no way to prove that we all see colors the same way anyway, but that doesn't mean that we don't believe that we're conscious. I think I see that you're saying we cannot make any claims about others, perhaps, but can only talk about how we feel. But I feel like the room example is actually misleading in this respect. Another way of thinking about it is that our brain starts to associate things, and it is those clusters of associations that give those things meaning. The experience of experience and color is only important because experience and color have a web of other associated experiences that those colors remind us of. So extend the room experiment to the experience of a baby who throughout their entire life sees colors, or the filter-image version of those colors, at various moments, to associate with various things. In this example we can imagine that the baby will in fact associate, say, blue with, I don't know, that great unknown half of our outside ceiling that we see during the day. And then that will take on something more to it, but it is admittedly difficult to explain.
leafee
> The experience of experience and color is only important because experience and color have a web of other associated experiences that those colors remind us of

So what about those original experiences? How are they important at all if there is nothing to associate them with?

DonaldFisk
> After time the color filter will start to associate various concepts and feelings and images with various colors. This association is what starts making the colors themselves have meaning even if they can't see the colors the same way that you and I can.

The filters are just pieces of transparent coloured plastic. How are they capable of forming associations?

Also, associations on their own (e.g. blue with sky, red with blood, green with grass) don't give you any idea what colours are like. Knut Nordby (and many other people with achromatopsia) knew these associations as well as you or I know them, but made it quite clear that he had no idea what it was like to see in colour.

TheOtherHobbes
This is basically just the Hard Problem of consciousness. It's been a hard problem for decades, and we're no closer to having an answer.

>But we think that people, dogs, etc. are conscious because they are similar to us in important ways.

Specifically, mammals have mirror neurones. More complex mammals also seem to have common hard-wired links between emotions and facial expressions - so emotional expression is somewhat recognisable across species.

I'm finding the AI debates vastly frustrating. There are basic features of being a sentient mammal - like having a body with a complicated sensory net, and an endocrine system with goal/avoidance sensations and emotions, and awareness of social hierarchy and other forms of bonding - that are being ignored in superficial arguments about paperclip factories.

It's possible that a lot of what we experience as consciousness happens at all of those levels. The ability to write code or find patterns or play chess floats along on top, often in a very distracted way.

So the idea that an abstract symbol processing machine can be conscious in any way we understand seems wrong-headed. Perhaps recognisable consciousness is more likely to appear on top of a system that models the senses, emotions, and social awareness first, topped by a symbolic abstraction layer that includes a self-model to "experience" those lower levels, recursively.

toxik
This might be very tangential, but I had a very acute sensation of learning something new the other day. There are these stereographic images, where they put the left and right eye's intended image next to each other -- and with a little practice, you can angle your eyes so each eye looks at a separate picture. Suddenly your double vision starts to make MORE sense than regular vision, because you're now able to combine the two images to get a sense of depth in the image. The tricky part is now to focus your eyes; at first, you'll reflexively correct your eyes too and the illusion (or the combination rather) goes away.

But bit by bit, you learn to control your eye's focal length independent of "where" in space you want to look. It really is astonishing.

It made me think of consciousness as a measure of ability to integrate information, because this process is truly fascinating to anybody who tries it (and I really think you should!) Perhaps that's because with this trick, you were able to integrate more information, and thus tickle your brain more?

nova
I can only recommend reading this paper: http://www.scottaaronson.com/papers/philos.pdf

It really lives up to its title. Suddenly computational complexity is not just a highly technical CS matter anymore, and the Chinese Room paradox is explained away successfully, at least for me.

amoruso
Searle makes two assertions:

1) Syntax without semantics is not understanding.

2) Simulation is not duplication.

Claim 1 is a criticism of old-style Symbolic AI that was in fashion when he first formulated his argument. This is obviously right, but we're already moving past this. For example, word2vec or the recent progress in generating image descriptions with neural nets. The semantic associations are not nearly as complex as those of a human child, but we're past the point of just manipulating empty symbols.

Claim 2 is an assertion about the hard problem of consciousness. In other words, about what kinds of information processing systems would have subjective conscious experiences. No one actually has an answer for this yet, just intuitions. I can't really see why a physical instantiation of a certain process in meat should be different from a mathematically equivalent instantiation on a Turing machine. He has a different intuition. But neither one of us can prove anything, so there's nothing else to say.

DonaldFisk
I wouldn't be so critical of GOFAI. Much high-level reasoning either does or can involve symbol manipulation. There are some impressive systems, such as Cyc, which do precisely that. It isn't useful for low-level tasks like vision or walking, so other approaches are needed to complement it.

> but we're past the point of just manipulating empty symbols.

We've now reached the point where we can manipulate large matrices containing floating point numbers. I don't see how this makes systems any more conscious.

ttctciyf
Regards claim 2, Searle repeats the phrase "specific causal properties of the brain" quite a few times without spelling out just what he's referring to, but from other remarks he makes it seems clear he means actual electrochemical interactions, rather than generic information processing capabilities. I think his view is that consciousness (most likely) doesn't arise out of "information processing", which he would probably class as "observer-relative", but out of some as yet not understood chemistry/physics which takes place in actual physical brains.

So the question, to Searle, is not "about what kinds of information processing systems would have subjective conscious experiences", but "what kinds of electrochemical interactions would cause conscious experiences".

The intuition/assumption of his questioners seems to be that whatever electrochemical interactions are relevant for consciousness, they are relevant only in virtue of their being a physical implementation of some computational features, but plainly he does not share this assumption and favours the possibility that the electrochemical interactions are relevant because they physically (I think he'd have to say) produce subjective experience - and that any computational features we attribute to them are most likely orthogonal to this. Hence his example of the uselessness of feeding an actual physical pizza to a computer simulation of digestion. His point is that the biochemistry (he assumes) required for consciousness isn't present in a computer any more than that required for digestion is.

Another example might be: you wouldn't expect a compass needle to be affected by a computer simulating electron spin in an assemblage of atoms exhibiting ferromagnetism any more than it would be by a simulation of a non-ferromagnetic assemblage.

To someone making the assumption that computation is fundamental for explanations of consciousness, these examples seem to entirely miss the point, because it's not the physical properties of the implementation (the actual goings on in the CPU and whatnot) that matter, but the information processing features of the model that are the relevant causal properties (for them.)

But to Searle, I think, these people are just failing to grok his position, because they don't seem to even understand that he's saying the physical goings on are primary. You can almost hear the mental "WHOOSH!" as he sees his argument pass over their heads. In an observer-relative way, of course.

As you imply, until someone can show at least a working theory of how either information processing or biochemistry can cause subjective experience the jury will be out and the arguments can continue. I won't be surprised if it takes a long time.

(Edited to add the magnetic example and subsequent 2 paragraphs.)

mtrimpe
I think Claim 1 is actually more about determinism: that if, by knowing all the inputs, you can reliably get the same outputs, then what you have isn't consciousness.

Neural nets are somewhat starting to escape that dynamic, but there still isn't a neural net that reliably pulls in a continuous stream of randomness to generate meaningful behaviour like our consciousness does.

Now, to be honest, I'm not entirely sure whether John Searle would agree that that is consciousness when we do get there, but I do agree with him that deterministic consciousness is essentially a contradictio in terminis.

cromwellian
The systems response is pretty much the right answer. You can put yourself at any level of reductionism of a complex system and ask how in the hell the system accomplishes anything. If you imagine yourself running a simulation of physics on paper for the universe, you may ask yourself, how does this simulation create jellyfish.

I think people fall for Searle's argument the same way people fall for creationist arguments that make evolution seem absurd. Complex systems that evolve over long periods of time have enormous logical-depth complexity and exhibit emergent properties that really can't be computed analytically, but only by running the simulation and observing macroscopic patterns.

If I run a cellular automaton that computes the sound wave frequencies of a symphony playing one of Mozart's compositions, and it takes trillions of steps before even the first second of sound is output, you can rightly ask, at any state, how is this thing creating music?

spooningtamarin
Consciousness and understanding are human created symbolism. Talking about it seriously is a waste of time.

I could be an empty shell imitating a human perfectly; no other human would notice my lack of consciousness, and nothing would be different. From their perspective I exist, from mine, I don't.

How does one know that I really understand something? Maybe I can answer all the questions to convince them?

kriro
It's pretty frustrating to watch. Feels like an endless repetition of "well humans and dogs are conscious because that's self evident". There's no sufficient demarcation criterion other than "I know it when I see it" that he seems to apply. [I guess having a semantics is his criterion but he doesn't elaborate on a criterion for that]

The audience question about intelligent design summed up my frustration nicely (or rather the amoeba evolving part of it).

sethev
I think what it boils down to is that Searle believes consciousness is a real thing that exists in the universe. A simulation of a thing isn't the same as the thing itself, no matter how accurate the outputs. The Chinese Room argument just amplifies that intuition (my guess is that the idea of a room was inspired by the Turing Test).

I think studying the brain (as opposed to philosophical arguments) is the thing that will eventually answer these kinds of questions, though.

pbw
I think the argument about consciousness is vacuous. Searle admits we might create an AI which acts 100% like a human in every way.

Nothing Searle says stands in the way of creating intelligent or super-intelligent entities. All Searle is saying is those entities won't be conscious.

No one can prove this claim today. But more significantly, I think it's extremely likely no one will ever prove it. Consciousness is a private, subjective experience. I think it's likely you simply cannot prove it exists or doesn't exist.

Mankind will create human-level robots and we'll watch them think and create and love and cry, and we'll simply not know what their conscious experience is.

Even if we did prove it one way or the other, the popular opinion would be unaffected.

Some big chunk of people will insist robots are conscious entities who feel pain and have rights. And some big chunk of people will insist they are not conscious.

It might be our final big debate. An abstruse proof is not going to change anyone's mind. Look at how social policies are debated today. Proof is not a factor.

orblivion
So, supposing there's any chance that it has consciousness, is there any sort of movement doing all it can to put the brakes on AI research? If it's true, it's literally the precursor to the worst realistic (or hypothetical, really) outcome I can fathom, which has been discussed before on HN (simulated hell, etc). I'm not sure why more people aren't concerned about it. Or is it just that there's "no way to stop progress" as they say, and this is just something we're going to learn to live with, the way we live with, say, the mistreatment of animals?
adrianN
We are sufficiently far away from creating machines that humans would consider to have consciousness that it's not really a problem so far. Eventually we'll probably have to think about robot rights, but I guess we still have a few decades until they're sufficiently advanced. But judging from how we treat, eg. great apes, who are so very similar to us, I wouldn't want to be a robot capable of suffering.
orblivion
I'd think that if there are people forward thinking enough to consider the consequences to humans (Elon Musk, Singularity Institute), there should be people forward thinking enough to consider the consequences to the AIs.
nnq
This guy is so smart but at the same time such an idiot. SYNTAX and SEMANTICS are essentially the SAME THING. It's only a context-dependent difference, and this difference is quantitative, even if we still don't have a good enough definition of what those quantitative variables underlying them are. You must have a really "fractured" mind not to instantly "get it". And "INTRINSIC" is simply a void concept: nothing is intrinsic, everything (the universe and all) is obviously observer-dependent; it just may be that the observer can be a "huge entity" that some people choose to personalize and call God.

It's amazing to me that people with such a pathological disconnect between mind and intuition can get so far in life. He's incredibly smart, has a great intuition, but when exposed to some problems he simply can't CONNECT his REASON with his INTUITION. This is a MENTAL ILLNESS and we should invest in developing ways to treat it, seriously!

Of course that "the room + person + books + rule books + scratch paper" can be self conscious. You can ask the room questions about "itself" and it will answer, proving that it has a model of itself, even if that model is not specifically encoded anywhere. It's just like mathematics, if you have a procedural definitions for the set of all natural numbers (ie. a definition that can be executed to generate the first and the next natural number), you "have" the entire set of natural numbers, even if you don't have them all written down on a piece of paper. Same way, if you have the processes for consciousness, you have consciousness, even if you can't pinpoint "where" in space and time exactly is. Consciousness is closer to a concept like "prime numbers" than to a physical thing like "a rock", you don't need a space and time for the concept of prime numbers to exist in, it just is.

His way o "depersonalizing" conscious "machines" is akin to Hitler's way of depersonalizing Jews, and this "mental disease" will probably lead to similar genocides, even if the victims will not be "human" ...at least in the first phase, because you'll obviously get a HUGE retaliation in reply to any such stupidity, and my bet it that such a retaliation will be what will end the human race.

Now, of course the Chinese room discussion is stupid: you can't have "human-like consciousness" with one Chinese room. You'd need a network of Chinese rooms that talk to each other and also operate under constraints that make their survival dependent on their ability to model themselves and their neighbors, in order to generate "human-like consciousness".

nsns
Well, it's Searle after all. It's always funny to re-read Derrida's attack on his problematic line of thought[0].

0. https://en.wikipedia.org/wiki/Limited_Inc

bhickey
There isn't much new here. Skip ahead to the first audience question from Ray Kurzweil (http://www.youtube.com/watch?v=rHKwIYsPXLg&t=38m51s).

Kurzweil, in summary, asks: "You say that a machine manipulating symbols can't have consciousness. Why is this different from consciousness arising from neurons manipulating neurotransmitter concentrations?" Searle gives a non-answer: "My dog has consciousness because I can look at it and conclude that it has consciousness."

cscurmudgeon
I think we are missing the gist of the Chinese room argument here.

The correct question to ask: how is a machine manipulating symbols (that someone says is conscious) different from any other complex physical system? Is New York City's complex sewer system conscious? What about the entire world's sewer and plumbing system?

Does a machine have to compute some special function to be conscious? Does the speed of computation matter? If so, who measures the speed? (Let us not bring in general relativity, as the speed of computation can be different for different observers.)

Kurzweil et al's definition of consciousness is exactly as silly as Searle saying "My dog has consciousness because I can look at it and conclude that it has consciousness."

chubot
Those are good questions, but I don't see how the Chinese room argument helps with any of them. If anything, it confuses things by dragging a fictional/impossible construct into the argument, rather than just using real examples that people actually understand.
cscurmudgeon
The Chinese room argument is a nice thought experiment that strips away irrelevant details. For example, if you were to use a real humanoid robot (e.g. the Nao) in the argument, people would probably not get the argument and be confused because the robot looks fuzzy and cute.
dang
I thought the point was to construct an example that most people agree wouldn't count as consciousness, then ask how a computer is any different.
escape_goat
I think that there's a fundamental cognitive-shift style problem with Searle's argument, because I remember encountering it when I was in my tweens and wondering why anyone thought there was any 'there' there.

I think that -- from memory -- this is approximately what Searle believes himself to be saying:

1. Imagine that you're having a conversation with a person in another room, in Chinese. You're writing stuff down on little scraps of paper and getting little scraps of paper back. It's 100% clear to you that this is a real live person you're talking to.

2. Except here's the thing, it isn't. There's actually just this guy in the other room; he doesn't speak, read, or write Chinese at all. He just has a whole bunch of books and ledgers that contain rules and recorded values that let him transform any input sentence in Chinese into an output sentence in Chinese that is completely indistinguishable from what a real live person who spoke Chinese might say.

3. So, it's ridiculous to imagine that someone could actually simulate consciousness with books and ledgers. There's no way. Since the guy doesn't understand Chinese, he isn't "conscious" in the sense of this example. So we can't describe him as conscious. And the idea that the books are conscious is ridiculous, because they're just information without the guy. So there actually can't be any consciousness there, even though it seems like it. Since consciousness can't be simulated by some books, it's clear that we're just interacting with the illusion of consciousness.

Meanwhile, this is what people like myself hear when he tries to make that argument:

1. Imagine that you're having a conversation with a person in another room, in Chinese. You're writing stuff down on little scraps of paper and getting little scraps of paper back. It's 100% clear to you that this is a real live person you're talking to.

2. Except here's the thing, it isn't. There's actually a system made up of books and ledgers of rules and values in the other room. There's this guy there who doesn't read or write Chinese; he just takes your input sentence in Chinese and applies the rules, noting down values as needed, transforming it until the rules say to send it back to you. That's the sentence that you get back. It's completely indistinguishable from what a real live person who spoke Chinese might say.

3. So, it's ridiculous to imagine that someone could actually simulate consciousness with books and ledgers, but we're doing it for the sake of argument because it's a metaphor that we can hold in our heads. No one would claim that the guy following the rules in the other room is the "conscious" entity that we believe ourselves to be communicating with. And no-one would claim that static information itself is conscious. So either the "conscious" entity must be the system of rules and information as applied and transformed, or else there is no conscious entity involved. If there is no conscious entity involved, and since this is a metaphor, we can substitute "books and ledgers" with "individual nerve cells with synaptic connections" and "potentials of activation", and the conclusion will still hold true; there will still be no consciousness there.

4. However, we feel that there is a consciousness there when we interact with a system of individual nerve cells, synaptically connected with various thresholds of potentiation: even if it's a system smaller by an order of magnitude or so* than the one in our skulls, like our dog has. Thus we must conclude that the "conscious" entity must be the system of rules and information as applied and transformed, or we must conclude that the notion of consciousness is ill-founded and inarticulate, that our understanding of consciousness is incomplete, and that our sense of "knowing" that we or another person are conscious is likely an illusion.

*I am fudging on the figure, but essentially we're comparing melon volumes to walnut volumes, as dogs have thick little noggins.

zAy0LfpBZLC8mAC
> [...] we must conclude that notion of consciousness is ill-founded and inarticulate, that our understanding of consciousness is incomplete, and that our sense of "knowing" that we or another person are conscious is likely an illusion.

I think I mostly agree with you, but I would argue that if your notion of consciousness is ill-founded and inarticulate, you can't really decide whether it's an illusion either. After all, the subjective experience quite definitely does happen/is real, thus obviously not an illusion, while the interpretation offered for that subjective experience is incoherent, thus there is no way to decide whether it's describing an illusion or not.

xioxox
Interesting. It's also unclear to me why a system of books and ledgers of rules couldn't be conscious if they are self-modifying. Who knows what property of the system inside our heads gives it this sense of "self", and how could you even test that it has one?
knughit
That's what's weird about Searle. He posited a great strawman that exposes the fallacy of taking "machines can't think" as an axiom, but he claims the straw man is a steel man. It is as though he is a sacrificial troll, making himself look silly to encourage everyone who can reject this straw man.
chubot
Maybe, but there are too many holes in the construct to make it useful IMO. It just provokes endless confused debate rather than illuminating anything.

IMO Kurzweil's response is actually spot on, although not that hard to come up with. You could make the argument: in your brain, there are just atoms following the laws of physics. The atoms have no choice in the matter, and know nothing about Chinese, or hunger, love, life goals, etc. Your brain is entirely composed of atoms so you can't be conscious.

Obviously "meaning" arises from mindless syntax or mindless physics somewhere in the process. We just don't know where. The Chinese room doesn't bring us any closer to that understanding, and doesn't refute anything.

vixen99
Why can't meaning just be the sensation of a local minimum? When you find meaning, you temporarily pause because there's nowhere else to go in the local environment. Subsequently of course, you might be jolted out of that and be compelled to find a new minimum.
Scarblac
How does that explain consciousness, qualia et cetera?
dekhn
So basically, the way to convince Searle (not that that is a real goal) is to build a robot automaton which passes the uncanny valley: very responsive eyes. A collection of tricks. Clever responses.

Searle would look at that and conclude it had consciousness.

chubot
Yeah honestly I don't get what he is really contributing (and I'm sort of an AI skeptic). In 2000 in undergrad, I recall checking out some of his books from the library because people said he was important, and I learned about the "Chinese Room" argument [1] in class.

How is it even an argument? It doesn't illuminate anything, and it's not even clever. It seems like the most facile wrong-headed stab at refutation, by begging the question. As far as I can tell, the argument is, "well you can make this room that manipulates symbols like a computer, and of course it's not conscious, so a computer can't be either"? There are so many problems with this argument I don't even know where to begin.

The fact that he appears to think that changing a "computer" to a "room" has persuasive power just makes it all the more antiquated. As if people can't understand the idea that computers "just" manipulate symbols? Changing it to a "room" adds nothing.

[1] http://plato.stanford.edu/entries/chinese-room/

simonh
It's just obfuscation. In reality the 'room' would have to be the size of a planet and if one person was manipulating the symbols it might take the whole thing the life span of the universe to think 'Hello World'. But by phrasing it as a 'room' with one person in it he makes it look inadequate to the task, and therefore the task impossible.
DonaldFisk
Neither the size of the room nor the speed of the computation is important to Searle's argument. You could replace the person in the room with the population of India (except for those who understand Chinese), and pretend to the Chinese speaker that the communication is by surface mail. Or use a bank of supercomputers if Indians aren't fast enough.
simonh
Fair enough. In which case Searle's argument is that even fantastically complex, sophisticated information-processing systems with billions of moving parts and vast information storage and retrieval resources operating over long periods of time cannot be intelligent. If that's what his position boils down to, what does casting it as a single person in a room add to the argument? As Kurzweil asked, how is that different from neurons manipulating neurotransmitter chemistry? Searle doesn't seem to have an answer to that.
DonaldFisk
No, his position, as I understand it, is that it cannot be conscious. It certainly can be intelligent.

Searle does try to explain why there's a difference. Although the person in the Chinese Room might be conscious of other things, he has no consciousness of understanding the Chinese text he's manipulating, and will readily verify this, and nothing else in the room is conscious. Chinese speakers are conscious of understanding Chinese.

drdeca
I thought the idea was that the only part of the room actually doing things (the person) doesn't understand Chinese?

I mean, agree with it or not, but I think that's a bit stronger than just, making it seem intuitively worse because its a room instead of "a computer"?

I think the important part isn't the swap of "room" for "computer", but instead the swap of "person" for "cpu"?

deepnet
The person in the room performing the lookup is a red herring and can be replaced by a suitable algorithm, e.g. a convnet, which could learn the lookup task.

Consciousness resides in the minds that created the lookup tables: they were constructed by conscious beings to map queries to meaningful responses.

The lookup tables are the very sophisticated part of Searle's Chinese Room.

The emergent semantic vector algebra recently discovered in human languages by Mikolov's word2vec [Mikolov et al] demonstrates that some of the computational power of language is inherent in the language itself (though only meaningfully interpretable by a conscious listener).

Meaning requires consciousness, but language is unexpectedly sophisticated: it contains a semantic vector space which can answer simple queries ("What is the capital of ...") and analogise ("King is to what as Man is to Woman") algebraically [Mikolov et al].

This inherent semantic vector space is discoverable by context-encoding large corpora.

Language is a very sophisticated and ancient tool that allows some reasoning to be performed by simple algebraic transformations inherent to the language.

-

[Mikolov et al]: Mikolov, Chen, Corrado & Dean, "Efficient Estimation of Word Representations in Vector Space", http://arxiv.org/abs/1301.3781

& https://code.google.com/p/word2vec/
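
For concreteness, a minimal sketch of the analogy queries described above, assuming the gensim Python library and a pretrained GoogleNews embedding file (both are my assumptions, not part of the original comment; the word2vec tool linked just above supports equivalent queries):

    # Minimal sketch of word2vec analogy queries (assumes gensim and a
    # pretrained embedding file; the file name below is hypothetical).
    from gensim.models import KeyedVectors

    # Load pretrained word vectors.
    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)

    # "King is to ? as Man is to Woman": king - man + woman ~= queen
    print(vectors.most_similar(positive=["king", "woman"],
                               negative=["man"], topn=1))

    # Capital queries work the same way: Paris - France + Poland ~= Warsaw
    print(vectors.most_similar(positive=["Paris", "Poland"],
                               negative=["France"], topn=1))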

chubot
Yeah, but that just makes me more confused? How does that say anything about a computer then? There's no human being inside a computer who doesn't understand something.
drdeca
The idea is, roughly: if a person in the place of the CPU does not understand Chinese, then the CPU doesn't understand Chinese.

And because the CPU, like the person, is the part that does the work, if the system with the person doesn't understand Chinese, then the computer with the CPU doesn't understand Chinese either.

Because there's nothing to make the CPU more understand-y, only things to make it less understand-y, and otherwise the systems are the same.

Chathamization
Yeah, but the system would be the person + the lookup tables, not just the person. The problem is that we don't tend to ask "does a room with a person and several books in it have this knowledge?" Relying on a grouping we have no everyday term for (human + books inside a room), putting only one animate object in it (so that people take the animate object alone to be the system), and then asking the question only about the animate part, all suggest that the purpose of the thought experiment is to mislead people.

A better example would be saying something like - does this company have the knowledge to make a particular product? We can say that no individual member of the company does, but the company as a whole does.

drdeca
I think this is called the "systems reply".

Which, well there's a whole series of responses back and forth, with different ideas about what is or is not a good response.

One idea describes a machine where each state of the program is pre-computed, and the machine steps through those states one by one. If a switch is flipped on and the next pre-computed state is wrong (i.e., it is not the state the program would actually produce from the current one), the machine computes the correct next state instead; if the switch is off, it just continues along the pre-computed states. If the switch is on and all the pre-computed states happen to be correct, the switch never comes into play at all, and the run is identical to the switch-off case. If all the pre-computed states are nonsense and the switch is on, the machine ends up running the program correctly despite them.

So, suppose the case where the pre-computed states are all wrong and the switch is on counts as conscious. Then, if the pre-computed states are all correct and the switch is on, is that still conscious? What if almost all the pre-computed states were wrong, but a few were right? There doesn't seem to be an obvious cutoff between "all the pre-computed steps are wrong" and "all the pre-computed steps are right" at which consciousness would switch on or off. So one might conclude that the run where all the pre-computed steps are right, with the switch on, is just as conscious as the run where the switch is on but all the pre-computed states are wrong.

But then what of the one where all the pre-computed states are right, and the switch is off?

The switch does not interact with the rest of the machinery unless a pre-computed next step would be wrong, so how could it be that when the switch is on, the one with all the pre-computations correct is conscious, but when it is off, it isn't?

But the one with all the pre-computations correct, and the switch off, is not particularly different from just reading the list of states in a book.

If one grants consciousness to that, why not grant it to e.g. fictional characters that "communicate with" the reader?

One might come up with something like: it depends on how it interacts with the world, and it doesn't make sense for it to have pre-computed steps if it is interacting with the world in new ways; that might be a way out. Or one could argue that it really does matter which way the switch is flipped, and that flipping it back and forth switches the thing between being actually conscious and being, basically, a p-zombie. And speaking of which, you could say, "well, what if the same thing is done with brain states being pre-computed?", etc. etc.

I think the Chinese Room problem, while not conclusive, is a useful introduction to these issues?
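
To make the pre-computed-state machine above concrete, here is a minimal sketch (the function names are hypothetical; it only pins down how the switch behaves):

    # Sketch of the switch machine described above (names are hypothetical).
    # `step` is the program's real transition function; `precomputed` is the
    # sequence of next-states someone recorded in advance.
    def run(initial_state, precomputed, step, switch_on):
        state = initial_state
        history = [state]
        for recorded in precomputed:
            if switch_on and recorded != step(state):
                # The switch only matters when a recorded state is wrong:
                # then the machine computes the correct next state itself.
                state = step(state)
            else:
                # Otherwise it just replays whatever was recorded.
                state = recorded
            history.append(state)
        return history

If the pre-computed states are all correct, switch on and switch off produce exactly the same run; the switch only ever intervenes when a recorded state is wrong, which is what makes it so odd to say that its position decides whether the run is conscious.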

Chathamization
I don't think it actually brings up any relevant issues. For instance, you mention a p-zombie, but that's another one with glaringly obvious problems. Do bacteria have consciousness? Or did consciousness arise later, with the first conscious creature surrounded by a community of p-zombies, including their parents, siblings, partners, etc.? Both possibilities seem pretty detached from reality.

Pre-computation is another one that seems to obfuscate the actual issue. No, I don't think anyone would think a computer simply reciting a pre-computed conversation had conscious thought going into it; but the same is true for a human being reciting a conversation they memorized (which wouldn't be that different from reading the conversation in a book). But that's a bit of a strawman, because no one is arguing that lookup-table-type programs are conscious (you don't see anyone arguing that Siri is conscious). And the lookup table/pre-computation for even a simple conversation would be impossibly large (run some numbers; it's most likely larger than the number of atoms in the universe for even tiny conversations).

So I don't see these arguments as bringing up anything useful. They seem more like colorful attempts to purposefully confuse the issue.

FeepingCreature
> But the one with all the pre-computations correct, and the switch off, is not particularly different from just reading the list of states in a book.

The states were (probably) produced by computing a conscious mind and recording the result.

Follow the improbability. The behavior has to come from somewhere. That somewhere is probably conscious.

Similarly, authors are conscious, so they know how conscious characters behave.

alfapla
The Chinese room argument is actually needlessly convoluted. Just imagine a piece of paper on which three words are printed: "I AM SAD". Now is there anyone who believes that this piece of paper is actually feeling sad just because "it says so"? Of course not. Now, suppose we replace this piece of paper with a small tablet computer that changes its displayed "mood" over time according to some algorithm. Now in my opinion it is rather hard to imagine that all of a sudden consciousness will "arise" in the machine like some ethereal ghost and the tablet will actually start experiencing the displayed emotion. Because it's basically still the same piece of paper.
Scarblac
AND YET, human brains are an implementation of such an algorithm. By all reasoning, we shouldn't be conscious.

Yet here I am, I am the one who is seeing what my eyes see and I am distinct from you. Science still has no idea how that happens, as far as I know.

So who knows, maybe all computer programs are in fact conscious in some way.

gpderetta
The Chinese room argument is actually needlessly convoluted. Just imagine a piece of paper on which I draw a face that looks sad. Now is there anyone who believes that this piece of paper is actually feeling sad just because it looks sad? Of course not. Now, suppose we replace this piece of paper with an organic machine made of cells, blood and neurons which changes its displayed "mood" over time according to some algorithm. Now in my opinion it is rather hard to imagine that all of a sudden consciousness will "arise" in the machine like some ethereal ghost and the organic machine will actually start experiencing the displayed emotion. Because it's basically still the same piece of paper.
dragonwriter
The Chinese Room argument has always seemed to me to be a powerful illustration of the problem that "consciousness" is so poorly defined as to not be subject to meaningful discussion, dressed up as a meaningful argument against AI consciousness.

It's always distressed me that some people take it seriously as an argument against AI consciousness; it does a better job of illustrating the incoherence of the fuzzy concept of consciousness on which the argument is based.

tobiasSoftware
As a believer in Weak AI, I found the Chinese Room argument really gave me more understanding of my position. His argument is based on the idea that manipulating symbols is not the same as the kind of understanding we do. As an example, say a person learns that 1 + 1 = 2. Because that person understands the concept, he can then apply it to other situations and figure out that 1 + 2 = 3. The Chinese room, by contrast, is just manipulating symbols: when the computer is asked "what is 1 + 1?" it can answer "2" via a lookup table, but the person inside the room has gained no understanding of the actual question, so he can't use that knowledge in different circumstances and know, without looking it up, that 1 + 2 = 3.

The Chinese Room argument is that because computers can't "learn", everything has to be taught to them directly, whereas humans are able to take knowledge given and apply it to other situations. While some computers can "learn" enough rules to follow patterns, the argument is that computers can't "jump the track" and humans can.
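
A toy illustration of that distinction (the code and question format are mine, just a sketch): a lookup table answers only the exact questions it was given, while a procedure that encodes the rule generalizes to questions it has never seen.

    # A lookup table "knows" only the exact questions it was given.
    lookup = {"what is 1 + 1?": "2"}

    def answer_by_lookup(question):
        return lookup.get(question, "no entry")

    # A rule generalizes to questions that were never taught explicitly.
    def answer_by_rule(question):
        # Crude parse of questions of the form "what is A + B?"
        a, b = question.removeprefix("what is ").removesuffix("?").split(" + ")
        return str(int(a) + int(b))

    print(answer_by_lookup("what is 1 + 2?"))  # "no entry"
    print(answer_by_rule("what is 1 + 2?"))    # "3"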

chubot
Yeah, I see that, but the problem is that we don't know how humans are conscious, i.e. where meaning arises. If you believe that brains are just atoms, then meaning arises from "dumb physics" somewhere.

Another way to think of it: a fetus, a sperm, or an ovum is not conscious. Some might argue that a newborn isn't really conscious. Somewhere along the line it becomes conscious. How does that happen? Where is the line? We have no idea.

You can't assert that meaning can't arise from "dumb symbol manipulation" without understanding how meaning arises in the former case. We simply don't know enough to make any sort of judgement. The Chinese room argument is trying to make something out of nothing. We don't know.

ecopoesis
I've always thought that the Chinese Room proved just the opposite of what Searle thinks it does.

I think of it this way:

I have two rooms: one has a person who doesn't speak Chinese, but they have reference books that allow them to translate incoming papers into Chinese perfectly.

The second room has just someone who speaks Chinese and can translate anything coming in perfectly.

Searle says that AIs are like the person in room one: they don't know Chinese.

I would argue that is the wrong way to look at things. A better comparison is that an AI is like the system of room 1, which does know Chinese and, from observation, is indistinguishable from the system of room 2. What's going on inside (a human with Chinese reference books vs. a human who knows Chinese) doesn't matter; it's just internal processing.

If it walks like a duck and quacks like a duck, then it's a duck.

If a machine claims to be conscious, and I can't tell it apart from another conscious being, who am I to say it isn't conscious?

dnautics
I was never satisfied with the Chinese room thought experiment. Let's momentarily replace the thing in the Chinese room with a human, to parse Searle's notion of "understanding". Searle would argue that a human trained to emit meaningful Chinese characters would still lack understanding. But I think this is backwards, and it speaks to your identification of Searle begging the question: the only way a human could emit meaningful Chinese responses is by having an understanding of Chinese. Consequently, if a machine is outputting meaningful Chinese, it too must already understand Chinese, and any argument otherwise is a kind of pro-biology bigotry with shaky underlying logic at best.

This then devolves into semantics. Can a person locked in a room really come to "understand" Chinese culture, for example, if only non-experiential learning were used as data input? I think we have to say the answer is yes. I am a chemist. I have never seen an atomic orbital with my bare eyes, yet I can design chemical reactions that work using my understanding of chemistry. Because I have not experienced an atomic orbital, does that mean I do not understand? Even when I set up my first reaction, I did not have any experience and knew what I was doing only through what could be described as sophisticated analogy. I would say my understanding was low, but it was certainly non-zero. Where does one draw the line?

rjsw
I have always felt that the human in the room would start to recognize patterns and develop an "understanding". Their "understanding" may have no basis in reality but I don't see that it is any less valid to them.

If Searle is right, then we should be able to perform an MRI on a blind person while they are talking to someone and spot the point where their brain switches into "symbol manipulation mode" when the conversation subject becomes something visual.

qbrass
The guy in the room can memorize all of the rules and the ledgers and give you the same responses the room did, and if you asked him in his native language if he knew Chinese, he'd honestly tell you no.

He could have an entire conversation with you in Chinese and only know that what you said merits the response he gave. He doesn't know whether he's telling you directions to the bathroom or how to perform brain surgery.

dnautics
What about Latin? I learned Latin in a somewhat sterile environment, that in many ways is akin to symbol manipulation. I certainly never conversed with any native Latin speakers. Do I not understand Latin? Why or why not?
Dec 06, 2015 · 4 points, 0 comments · submitted by nolantait
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.