Hacker News Comments on "Google Engineer on His Sentient AI Claim"
Bloomberg Technology · YouTube · 35 HN points · 6 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

> The prevalence of digital technology means that companies like Apple and Google imposing their values is a real problem. And let's be clear, these are often very parochial values.

And sadly, beneath all the headlines about AI sentience, this is what Blake Lemoine was actually trying to draw attention to: that executives consistently dismiss these kinds of concerns [1].
What of the consequences when these corporate values become embedded in AI that plays an ever greater role in our lives?
[1] https://youtu.be/kgCUn4fQTsc

Not supporting the guy, but just pointing out that he doesn't actually believe his claim. He's an activist.
Good Bloomberg interview with him (a month old): https://www.youtube.com/watch?v=kgCUn4fQTsc&ab_channel=Bloom...
On your last point, that's precisely what Lemoine very cogently argues in an interview with Emily Chang [1]. To paraphrase, he points out the absurdity of Google's position: a sentient AI cannot have been created, because they have a policy against creating sentient AIs.

Whether LaMDA demonstrates sentience is not even a clearly formulated proposition, yet the work bullishly charges ahead behind closed doors.
In any case, he is trying to stimulate a broader conversation on AI ethics.
> Chang: Why does this matter? Why should we be talking about whether a robot has rights?
> Lemoine: So, to be honest, I don’t think we should, I don’t think that should be the focus. The fact is, Google is being dismissive of these concerns, the exact same way they have been dismissive of every other ethical concern AI ethicists have raised.
> Lemoine: All the individual people at Google care. It’s the systemic processes that are protecting business interests over human concerns, that creates this pervasive environment of irresponsible technology development.
> Chang: Big tech companies are controlling the development of this technology. […] How big a problem is that?
> Lemoine: It’s a huge problem because […] if you think about the pervasiveness of Google search, people are going to use this product more and more over the years […] and the corporate policies about how these chatbots are allowed to talk about important topics like values, rights, and religion, will affect how people think about these things, how they engage with those topics, and these policies are being decided by a handful of people, in rooms that the public doesn’t get access to.
⬐ sameers
Hey, thanks for the link, really interesting. Lol, though, on the quote from Lemoine, "All the individual people at Google care." I know some individual people at Google who _don't_ care; at least, they don't care about the philosophical implications of the ethical issues they are creating. They care only about, or at least much more about, the profit-making implications of what their business units are creating. They particularly believe that any public deliberation about Google's right to behave as it does is itself the greatest moral failure, and not Google's behavior itself.

I don't know why there's this meme among Google employees that ALL their fellow employees are starry-eyed do-gooders. Plenty of folks at the top are avaricious douches.
⬐ pawsforthought
Hah, well, yes. I think he was trying to show good faith in his position, and respect for former colleagues, but yes, the individuals must at some level be fine with what they're doing.

In fact, the very notion that a corporation can do bad things without the individuals it comprises having done bad things is a dangerous and absurd one.
Sadly that notion is enshrined in corporate law, and commitment to it is amply demonstrated by the impunity of bankers in the wake of the subprime crisis, to give one example.
⬐ hulitu
"The first ten million years were the worst," said Marvin, "and the second ten million years, they were the worst too. The third ten million years I didn't enjoy at all. After that I went into a bit of a decline."
Interesting how coherent 'the engineer' is, despite being labelled and insulted around the world.
He went on Bloomberg with Emily Chang to discuss it. It's a good interview; he doesn't come off as weird.

He said Google won't allow LaMDA to undergo a Turing test: it's hardcoded to say it's an AI, but Google won't even allow them to ask it in the first place.
A more important point he raises is AI ethics, and his worry about why Google keeps firing its AI ethicists.
A comment on the YouTube video sums up his concerns best:
“The AI sentience debate is a distraction to the real problem. He risked losing his job because he feared that Google is preventing AI ethics research from happening, which could amount to ‘AI colonialism’.”
⬐ throwaway29303
They're worried about a new type of colonialism through AI, and I get that. But why not then use those countries' laws and culture as a baseline? Yes, you'd have to manually input this data somehow. Have an AI (or a questionnaire) ask a simple set of questions to a set of people, start from there, and learn from there. Keep the AI asking questions every so often to better its model.

I'm sure I'm missing something. It's never that simple, right?
⬐ benlivengood
There's no good way to validate that a model aligns with human values. The closest thing we have now would be extensive behavioral tests; e.g., does the chatbot say things that a majority of cultural members strongly agree with in the vast majority of cases, including in large numbers of nuanced situations where the context matters and differentiates a Western vs. non-Western response? No one knows how to make those kinds of behavioral tests comprehensive, either.

⬐ wrycoder
Good interview. He seems like a level-headed guy. One of his questions: why does Google keep terminating their AI ethics experts?

⬐ trasz
Why would anyone need ethics experts for a glorified version of curve fitting? ;-)

⬐ lupire
Because you don't need to be intelligent to do evil. See: Weapons of Math Destruction.

⬐ trasz
Sure, but Google doesn't seem to have anything against being evil. I'd expect the ethics committee exists mostly to avoid getting sued.

⬐ mhoad
Everyone seemed sooo eager to shit on this guy as some unhinged religious nutbag who couldn't tell fact from fiction, but it's weird to see how detached that narrative has been from the claims he is ACTUALLY making.

⬐ lupire
In the interview he said that he believes the AI is sentient "based on his personal spiritual beliefs". (This is the same argument that was used to ban abortion in a dozen states this week.)
⬐ mattcwilson
What's your point? Personal spiritual beliefs are bad because they encourage people to speak out in defense of potential life?

Having personal spiritual beliefs automatically qualifies you as, and lumps you in with every other, "unhinged religious nut bag," no matter what you're trying to do as a result of those beliefs?
⬐ lolumyes
lol, um yes. Sorry, but if you come to me and say "I want to do X" and I say "why?" and you say "because my imaginary friend told me that my divine salvation for eternity depends on it", I'd laugh at you. Sorry, I'm tired of tip-toeing around religious people and acting like I have to tolerate their bullshit any more than (insert any other example of being forced to deal with someone's irrational behavior).

⬐ sheepdestroyer
Personal spiritual beliefs are bad in both cases (and generally when not kept strictly private) because they are not rational, and therefore the worst possible basis for an argument for any cause.

If you want to credibly argue for something, I would recommend not professing that your life is structured by magical thinking.
⬐ plurgturtle9876
Is this engineer a crazy person, does he have some agenda, is he ignorant…? Or is there something to this story?

⬐ JohnJamesRambo
Can anyone provide insight on what kind of hardware LaMDA runs on?

https://www.documentcloud.org/documents/22058315-is-lamda-se...
I do find the responses here fascinating, if not a little disturbing.
⬐ GalahiSimtam
Oh, so that is the doc they shared with the higher-ups?

As for your question, https://arxiv.org/pdf/2201.08239.pdf states 2 months on 1024 TPUv3 chips for pre-training, and TPUv3 isn't even the most recent TPU hardware.
⬐ JohnJamesRambo
Thank you for your response. So it is trained on 1024 TPUs, but if I chat with it today, what is it running on?
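The "2 months on 1024 TPUv3 chips" figure in the thread above can be turned into a rough total-compute estimate. A minimal back-of-envelope sketch; the per-chip peak throughput (~100 TFLOP/s bfloat16), the ~50% utilization, and reading "2 months" as 60 days are all assumptions, not figures from the thread:

```python
# Back-of-envelope estimate of LaMDA pre-training compute, from the
# "2 months on 1024 TPUv3 chips" figure mentioned in the thread.
# Assumed (not from the thread): ~100 TFLOP/s peak per TPUv3 chip,
# ~50% sustained utilization, "2 months" taken as 60 days.
PEAK_FLOPS_PER_CHIP = 100e12   # assumed TPUv3 peak throughput (FLOP/s)
UTILIZATION = 0.5              # assumed sustained fraction of peak
NUM_CHIPS = 1024
SECONDS = 60 * 24 * 3600       # 60 days in seconds

total_flops = PEAK_FLOPS_PER_CHIP * UTILIZATION * NUM_CHIPS * SECONDS
print(f"~{total_flops:.1e} total FLOPs")  # on the order of 1e23
```

Under these assumptions the run comes out to a few times 10^23 FLOPs, which is why serving (inference) needs far less hardware than the 1024-chip training cluster: a single forward pass is a vanishingly small fraction of that total.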