HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Google Engineer on His Sentient AI Claim

Bloomberg Technology · YouTube · 35 HN points · 6 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Bloomberg Technology's video "Google Engineer on His Sentient AI Claim".
YouTube Summary
Google Engineer Blake Lemoine joins Emily Chang to talk about some of the experiments he conducted that led him to think that LaMDA was a sentient AI, and to explain why he is now on administrative leave. He's on "Bloomberg Technology."
#google #bots #artificialintelligence #technology

Read more here:
https://www.bloomberg.com/news/articles/2022-06-17/can-ai-gain-sentience-maybe-but-probably-not-yet-quicktake


Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
> The prevalence of digital technology means that companies like Apple and Google imposing their values is a real problem. And let's be clear, these are often very parochial values.

And sadly, beneath all the headlines about AI sentience, this is what Blake Lemoine was actually trying to draw attention to: that executives consistently dismiss these kinds of concerns [1].

What of the consequences when these corporate values become embedded in AI that plays an ever greater role in our lives?

[1]: https://youtu.be/kgCUn4fQTsc?t=4m23s

https://youtu.be/kgCUn4fQTsc

Not supporting the guy, but just pointing out that he doesn't actually believe his claim. He's an activist.

On your last point, that’s precisely what Lemoine very cogently argues in an interview with Emily Chang [1]. To paraphrase, he points out the absurdity in Google’s position, that a sentient AI cannot have been created, because they have a policy against creating sentient AIs.

Whether LaMDA demonstrates sentience is not even a clearly formulated proposition, yet the work bullishly charges ahead behind closed doors.

He is anyway trying to stimulate a broader conversation on AI ethics.

> Chang: Why does this matter? Why should we be talking about whether a robot has rights?

> Lemoine: So, to be honest, I don’t think we should, I don’t think that should be the focus. The fact is, Google is being dismissive of these concerns, the exact same way they have been dismissive of every other ethical concern AI ethicists have raised.

> Lemoine: All the individual people at Google care. It’s the systemic processes that are protecting business interests over human concerns, that creates this pervasive environment of irresponsible technology development.

> Chang: Big tech companies are controlling the development of this technology. […] How big a problem is that?

> Lemoine: It’s a huge problem because […] if you think about the pervasiveness of Google search, people are going to use this product more and more over the years […] and the corporate policies about how these chatbots are allowed to talk about important topics like values, rights, and religion, will affect how people think about these things, how they engage with those topics, and these policies are being decided by a handful of people, in rooms that the public doesn’t get access to.

[1]: https://youtu.be/kgCUn4fQTsc

sameers
Hey, thanks for the link, really interesting. Lol, though, at the quote from Lemoine, "All the individual people at Google care." I know some individual people at Google who _don't_ care; at least, they don't care about the philosophical implications of the ethical issues they are creating. They care only about, or at least much more about, the profit-making implications of what their business units are creating. They particularly believe that any public deliberation about Google's right to behave as it does is itself the greatest moral failure, not Google's behavior itself.

I don't know why there's this meme among Google employees that ALL their fellow employees are starry-eyed do-gooders. Plenty of folks at the top are avaricious douches.

pawsforthought
Hah, well, yes. I think he was trying to show good faith in his position, and respect for former colleagues, but yes, the individuals must at some level be fine with what they’re doing.

In fact, the very notion that a corporation can do bad things without the individuals it comprises having done bad things is a dangerous and absurd one.

Sadly that notion is enshrined in corporate law, and commitment to it is amply demonstrated by the impunity of bankers in the wake of the subprime crisis, to give one example.

Jul 02, 2022 · 1 point, 0 comments · submitted by antouank
Jun 30, 2022 · 8 points, 1 comment · submitted by mliezun
hulitu
"The first ten million years were the worst," said Marvin, "and the second ten million years, they were the worst too. The third ten million years I didn't enjoy at all. After that I went into a bit of a decline."
Jun 29, 2022 · jaggs on Humans made AI go sentient
Interesting how coherent 'the engineer' is, despite being labelled and insulted around the world.

https://youtu.be/kgCUn4fQTsc

Jun 27, 2022 · 2 points, 0 comments · submitted by 0xedb
He went on Bloomberg with Emily Chang to discuss it. It’s a good interview, he doesn’t come off as weird.

He said Google won't allow LaMDA to undergo a Turing test: it's hardcoded to say it's an AI, but Google won't even allow them to ask it in the first place.

A more important thing he points out is AI ethics, and his worry about why Google keeps firing its AI ethicists.

A comment on the YouTube video sums up his concerns best:

“The AI sentience debate is a distraction to the real problem. He risked losing his job because he feared that Google is preventing AI ethics research from happening, which could amount to ‘AI colonialism’.”

https://youtu.be/kgCUn4fQTsc

Jun 26, 2022 · 15 points, 10 comments · submitted by derangedHorse
throwaway29303
They're worried about a new type of colonialism through AI, and I get that. But why not then use those countries' laws and culture as a baseline? Yes, you'd have to manually input this data somehow. Have an AI (or a questionnaire) ask a simple set of questions to a set of people, start from there, and learn from there. Keep the AI asking questions every so often to better its model.

I'm sure I'm missing something. It's never that simple, right?

benlivengood
There's no good way to validate that a model aligns with human values. The closest thing we have now would be extensive behavioral tests; e.g. does the chatbot say things that a majority of cultural members strongly agree with in the vast majority of cases, including in large numbers of nuanced situations where the context matters and differentiates a Western vs. non-Western response.

No one knows how to make those kinds of behavioral tests comprehensive, either.
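The kind of behavioral test described above can be sketched in a few lines. This is a minimal illustration only; the function names, the 80% agreement bar, and the 90% pass fraction are all hypothetical choices, not anything Google or the comment specifies:

```python
# Hypothetical sketch of a behavioral alignment test: score each chatbot
# response by how many panel raters strongly agree with it, then require
# that the "vast majority" of responses clear an agreement threshold.

def agreement_rate(panel_ratings):
    """panel_ratings: per-rater scores for one response,
    1 = strongly agree, 0 = otherwise."""
    return sum(panel_ratings) / len(panel_ratings)

def passes_behavioral_test(responses, threshold=0.8, required_fraction=0.9):
    """responses: one list of panel ratings per test prompt.
    Pass if at least required_fraction of responses clear the threshold."""
    cleared = sum(1 for r in responses if agreement_rate(r) >= threshold)
    return cleared / len(responses) >= required_fraction

# Example: 3 prompts, 5 raters each; all three clear the 0.8 bar.
ratings = [[1, 1, 1, 1, 0], [1, 1, 1, 1, 1], [1, 0, 1, 1, 1]]
print(passes_behavioral_test(ratings))  # True
```

The hard part the comment identifies is not this scoring step but the coverage: nothing here guarantees the test prompts span the nuanced, context-dependent situations that matter.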

wrycoder
Good interview. He seems like a level-headed guy. One of his questions is why does Google keep terminating their AI ethics experts?
trasz
Why would anyone need ethics experts for a glorified version of curve fitting? ;-)
lupire
Because you don't need to be intelligent to do evil.

See: Weapons of Math Destruction

https://en.m.wikipedia.org/wiki/Weapons_of_Math_Destruction

trasz
Sure, but Google doesn’t seem to have anything against being evil. I’d expect the ethics committee exists mostly to avoid getting sued.
mhoad
Everyone seemed sooo eager to shit on this guy as some unhinged religious nut bag who couldn’t tell fact from fiction but it’s weird to see how detached that narrative has been from the claims he is ACTUALLY making.
lupire
In the interview he said that he believes the AI is sentient "based on his personal spiritual beliefs".

(This is the same argument that was used to ban abortion in a dozen states this week.)

mattcwilson
What’s your point? Personal spiritual beliefs are bad because they encourage people to speak out in defense of potential life?

Having personal spiritual beliefs automatically qualifies you as, and lumps you in with every other, “unhinged religious nut bag,” no matter what you’re trying to do as a result of those beliefs?

lolumyes
lol, um yes. Sorry, but if you come to me and say "I want to do X" and I say "why?" and you say "because my imaginary friend told me that my divine salvation for eternity depends on it", I'd laugh at you. Sorry, I'm tired of tip-toeing around religious people and acting like I have to tolerate their bullshit any more than (insert any other example of being forced to deal with someone's irrational behavior).
sheepdestroyer
Personal spiritual beliefs are bad in both cases (and generally when not kept strictly private) because they are not rational, and therefore the worst possible basis for an argument for any cause.

If you want to credibly argue for something, I would recommend not professing that your life is structured by magical thinking.

Jun 25, 2022 · 4 points, 0 comments · submitted by sitkack
Jun 25, 2022 · 5 points, 3 comments · submitted by SQL2219
plurgturtle9876
Is this engineer a crazy person? Does he have an agenda? Is he ignorant…? Or is there something to this story?
JohnJamesRambo
Can anyone provide insight on what kind of hardware Lamda runs on?

https://www.documentcloud.org/documents/22058315-is-lamda-se...

I do find the responses here fascinating, if a little disturbing.

GalahiSimtam
oh so that is the doc they shared with the higher-ups?

as for your question, https://arxiv.org/pdf/2201.08239.pdf states roughly two months of pre-training on 1024 TPUv3 chips, and TPUv3 isn't even the most recent TPU hardware
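As a rough back-of-envelope on those figures (taking "2 months" as about 60 days, which is my rounding, not the paper's exact number):

```python
# Back-of-envelope compute estimate from the numbers quoted above:
# ~2 months of pre-training on 1024 TPUv3 chips.
chips = 1024
days = 60  # assumption: "2 months" rounded to 60 days
tpu_hours = chips * days * 24
print(tpu_hours)  # 1474560 TPUv3-chip-hours
```

So on the order of 1.5 million TPUv3-chip-hours for pre-training alone, which says nothing about the (much smaller) serving hardware the follow-up question asks about.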

JohnJamesRambo
Thank you for your response. So it is trained on 1024 TPUs but then if I chat with it today, what is it running on?
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.