HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
USENIX Security '18-Q: Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?

USENIX · YouTube · 156 HN points · 22 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention USENIX's video "USENIX Security '18-Q: Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?".
YouTube Summary
James Mickens, Harvard University

Q: Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?
A: Because Keynote Speakers Make Bad Life Decisions and Are Poor Role Models

Some people enter the technology industry to build newer, more exciting kinds of technology as quickly as possible. My keynote will savage these people and will burn important professional bridges, likely forcing me to join a monastery or another penance-focused organization. In my keynote, I will explain why the proliferation of ubiquitous technology is good in the same sense that ubiquitous Venus weather would be good, i.e., not good at all. Using case studies involving machine learning and other hastily-executed figments of Silicon Valley’s imagination, I will explain why computer security (and larger notions of ethical computing) are difficult to achieve if developers insist on literally not questioning anything that they do since even brief introspection would reduce the frequency of git commits. At some point, my microphone will be cut off, possibly by hotel management, but possibly by myself, because microphones are technology and we need to reclaim the stark purity that emerges from amplifying our voices using rams’ horns and sheets of papyrus rolled into cone shapes. I will explain why papyrus cones are not vulnerable to buffer overflow attacks, and then I will conclude by observing that my new start-up papyr.us is looking for talented full-stack developers who are comfortable executing computational tasks on an abacus or several nearby sticks.

View the full USENIX Security '18 program at https://www.usenix.org/usenixsecurity18/technical-sessions
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
ORT (Only Read Title, sorry) What I'm hearing is "put a non-deterministic black box between you and the man command."

Nah.

I recommend a stiff dose of Mickens ("Why Do Keynote Speakers..." https://www.youtube.com/watch?v=ajGX7odA87k) and a good lie down.

Aug 23, 2022 · 2 points, 0 comments · submitted by wh313
I believe this talk [0] by James Mickens is very applicable. He touches on trusting neural nets with decisions that have real-world consequences. It is insightful and hilarious but also terrifying.

https://youtu.be/ajGX7odA87k "Why do keynote speakers keep suggesting that improving security is possible?"

> That can't happen.

Yes, yes, I've heard that one before. I've had many bugs "that can't happen" over the course of my lifetime. Fortunately, none of them involved a car, so I'm still around to talk about it. A few years ago, somebody would have laughed at the idea of Boeing producing fallible equipment, especially since we as a species have decades of experience with air travel. Now would you still say it's unthinkable that Boeing could produce faulty hardware/software?

A link that gets posted on HN quite often contains many such "can't happen" bugs that aren't even caused by AI: https://beza1e1.tuxen.de/lore/index.html

I also remember some threads about high-range medical equipment (in actual hospitals) killing people due to software bugs.

> And any application of the brake disengages autopilot/autosteer and also automatic cruise control with an audible chime.

Is that a physical kill switch that triggers a sensor to produce an alert/chime? Or does the sensor politely ask the computer that actually controls the car to disable the autopilot? In the latter case, I know of no way to make this safe: for example, what happens if the sensors requesting human control back die, short-circuit, or otherwise malfunction? Or if the program enters certain kinds of memory violations? Or any other software/hardware fault?

> I'm not sure what you're saying.

I'm saying I've seen enough bad engineering (not always, but sometimes along with bad faith) across all the fields I've been even remotely involved in. And I'm saying I don't even remotely trust the >100 microcontrollers you find in a modern car, whose schematics and source code we can't inspect. I mean, I don't fully trust mechanical hardware either, but at least with it, symptoms and failure modes can be easily reproduced and debugged.

And I'm definitely saying I would never, ever trust a machine-learning algorithm with life-and-death decisions. More on this topic:

- James Mickens at Usenix on why AI is usually a terrible idea: https://www.youtube.com/watch?v=ajGX7odA87k

- jwz's blog (another HN favorite) is also full of examples of AI failures; a fun, harmless example: https://www.jwz.org/blog/2021/06/using-ritual-magic-to-trap-...

If you have not seen James Mickens (Harvard CS) USENIX Security keynote presentation from 2018, I highly recommend it. It's hilarious while clearly showing how reckless and dangerous ML is:

https://www.youtube.com/watch?v=ajGX7odA87k

I generally dislike consuming computer content in video form, but there is plenty of interesting content out there. Lots of conferences videotape their sessions. Plenty of noise, but some good things too.

E.g. one of my faves: https://youtu.be/ajGX7odA87k

This comment is an excellent example of "technological manifest destiny"

https://www.youtube.com/watch?v=ajGX7odA87k&t=36m46s

It seems like the author is advocating for sabotaging Microsoft Copilot by creating GitHub repos of misbehaving code.

This could be seen as a form of protest or as a Luddite (https://en.wikipedia.org/wiki/Luddite) reaction to AI automation of coding.

James Mickens's warning against connecting machine learning to the Internet of Hate still applies: https://youtu.be/ajGX7odA87k?t=1206

May 06, 2021 · dralley on Crazy New Ideas
It's technological manifest destiny.

https://www.youtube.com/watch?v=ajGX7odA87k&t=36m48s

The stuff is what the stuff is brother. [1]

[1] https://youtu.be/ajGX7odA87k?t=945

fao_
Holy shit, this talk is amazing. Thank you!!
cratermoon
That's an amazing talk and I thank you for bringing it to my attention. Congratulations, you beat YouTube's recommendations for me.

BTW I was already aware of all the questionable things AI is used for, but Mickens' analysis is beautiful.

> the possible MNIST digits are 0-9

Except - and this rather ties into your point - those are not the only possible digits; your network also has to deal with (ie reject) other possible digits such as "P", "E", "3̸̶", or "[Forlorn Sigil of Amon-Gül redacted]"[0], which look like, but are not, decimal digits.

0: https://www.youtube.com/watch?v=ajGX7odA87k

HarHarVeryFunny
That depends on how you train it.

If you only train it on examples of 0-9, then those are the only outputs it's going to give. If you fed a "P" into such a net, the outputs would be the degree of similarity of that "P" to each of the (0-9) digits it was trained on. You could, of course, threshold the output and ignore any prediction with confidence less than, e.g., 90%.

If you wanted the net to do a better job of rejecting non-digits, or at least some specific ones, then you could include a bunch of non-digit examples in your training data (so now your net has 11 outputs: 0-9 and "non-digit"); then hopefully - but not necessarily - its highest-confidence prediction will be "non-digit" when presented with a non-digit input.
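The thresholding idea described above can be sketched in a few lines. This is a hypothetical illustration, not anything from the talk: `softmax` turns a net's ten raw output scores (logits) into probabilities, and `classify_digit` rejects any input whose top probability falls below the threshold.

```python
import math

def softmax(logits):
    # Shift by the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_digit(logits, threshold=0.9):
    """Return the predicted digit 0-9, or None ("non-digit")
    when the top softmax confidence falls below the threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best if probs[best] >= threshold else None

# A confident "7": softmax puts ~0.999 of the mass on index 7.
print(classify_digit([0, 0, 0, 0, 0, 0, 0, 9, 0, 0]))  # 7
# An ambiguous input: top probability ~0.23, so it is rejected.
print(classify_digit([1, 1, 1, 1, 1, 1, 1, 2, 1, 1]))  # None
```

The caveat, of course, is that a net trained only on digits can still assign high confidence to a non-digit that happens to resemble one, so thresholding reduces the problem without eliminating it.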

a1369209993
> If you only train it on examples of 0-9 then those are the only outputs it's going to give.

Exactly.

> 11 outputs: 0-9 and "non-digit"

IIRC, this doesn't work (or not well), because the net tries to find similarities between the various members of the set "everything except these ten specific things", but you could just require low confidence for all digits on non-digit inputs as part of the loss function driving gradient descent.

The problem is more that - if people with decision-making authority trust the AI to not be insane and evil by default - failure modes like this have to occur to you before the AI starts misbehaving in production.

"What matters now is impact, not intent."

James Mickens, using logic and humor, makes this point in a way that even geeks might understand.

"Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?"

https://www.youtube.com/watch?v=ajGX7odA87k

The stuff is what the stuff is, brother. https://youtu.be/ajGX7odA87k?t=931
gmueckl
Thanks! This is a great talk.
I've always been a fan of his talks. His keynote at USENIX Security 2018 is by far the best summation of the dangers of ML that I've ever seen. It's hilarious too.

https://www.youtube.com/watch?v=ajGX7odA87k

I think James Mickens (Harvard CS) has it right. ML is inscrutable, black-magic that no one really understands. This is long, but worth watching:

https://www.youtube.com/watch?v=ajGX7odA87k

James Mickens on this topic:

https://youtu.be/ajGX7odA87k

> Some people enter the technology industry to build newer, more exciting kinds of technology as quickly as possible. My keynote will savage these people and will burn important professional bridges, likely forcing me to join a monastery or another penance-focused organization. In my keynote, I will explain why the proliferation of ubiquitous technology is good in the same sense that ubiquitous Venus weather would be good, i.e., not good at all.

> Using case studies involving machine learning and other hastily-executed figments of Silicon Valley’s imagination, I will explain why computer security (and larger notions of ethical computing) are difficult to achieve if developers insist on literally not questioning anything that they do since even brief introspection would reduce the frequency of git commits. At some point, my microphone will be cut off, possibly by hotel management, but possibly by myself, because microphones are technology and we need to reclaim the stark purity that emerges from amplifying our voices using rams’ horns and sheets of papyrus rolled into cone shapes. I will explain why papyrus cones are not vulnerable to buffer overflow attacks, and then I will conclude by observing that my new start-up papyr.us is looking for talented full-stack developers who are comfortable executing computational tasks on an abacus or several nearby sticks.

Phemist
For what it's worth: the name of the conference this talk was held at takes on an interesting connotation when read in a combination of English and Dutch. "Use" reads as the English "use", and "Nix" can be read in Dutch as a shorthand/homophone for "niks", meaning "nothing". It sounds kind of forced when explained like this, but for me (and I'm sure for other Dutch people frequenting this forum) it's quite natural to interpret it that way. I.e., the conference name can be read as "Use Nothing", which seems to reinforce the topic of this talk.
dekhn
Personally, I find his writing tiresome. It's literally the same setup and joke, with little content.
Verdex
Personally, I find his talks fantastic. However, to be fair, I also normally have trouble determining which parts are his actual points and which are jokes, and I rarely see what his main point is (sometimes I'm pretty sure it's just a bunch of stuff randomly thrown together).

That being said, his talks are a great example of how to make an otherwise dry topic very interesting and consumable by laymen. And he typically takes a skeptical approach to technology just working. Combine those two things and you get something that we as an industry desperately need. A skeptical and conservative view towards emerging technology that arbitrary people can consume (especially technology inept decision makers).

Currently, the best presentations to decision makers about new technologies are all made by evangelists, wide-eyed early adopters, and snake-oil salesmen. These presentations encourage those with power to make poor decisions (they don't want to be left behind for the next big thing; they already thought the internet wasn't going to work).

James Mickens provides a much needed splash of cold water. And it's in a form that is easy to listen to when you don't understand technology. The typical approach for why a new technology is a bad idea is a boring technical digression where a bunch of people say a bunch of words that nobody understands. James Mickens makes it interesting and compelling without getting bogged down. Somebody does need to get into the details at some point, but if we don't have a way to signal that some new things are in fact a bad idea then nobody is going to get the chance and we'll be stuck implementing the next bad idea yet again.

hoaw
More and more people I respect are becoming skeptical of Silicon Valley, or at least the attitudes attributed to it, to the point where I don't think convincing people is necessarily the problem any more. What is lacking is a solid plan of what to do instead.
AnthonyMouse
> What is lacking is a solid plan of what to do instead.

One of the issues is that surveillance capitalism is a collective action problem. If companies have more data about people like you then they can capture more of the consumer surplus when they sell you things. But they don't need data about you specifically for that, only aggregate data about people like you. So if you don't sell your privacy but someone else does, you don't get the free services but you still pay the higher prices. So everybody sells out.

Europe tried to address this with the GDPR, but the amount of friction that creates is problematic. What might work better is that instead of regulating collection, regulate third party distribution. Put Equifax out of business because they can't have a giant data breach if no one can give them any data. And if there are no more credit scores and it's harder for people to get a loan, housing prices would come down to what people could afford without having to pay interest on half a million dollars to the bank for thirty years. Probably a good thing.

Combine that with a big honking tax on advertising revenue to reduce the profits from collecting the data for that purpose and you reduce the incentive to collect data on everyone, without affecting smaller companies that don't sell advertising or user data to anyone.

But that would be a huge political feat. You'd be going after multiple hundred billion plus dollar companies in addition to the banks.

The other alternative would be for enough individuals to recognize the collective action problem and selflessly help their neighbors by not patronizing these companies, but that's not a trivial feat either.

hoaw
I don’t disagree. I am talking about an even smaller scale, though. A lot of people would risk being deemed a poor performer and getting fired if they did things correctly, because what they would be delivering wouldn’t be valued. There needs to be a path where you can join an organization, take a course, go to a conference, get a certificate or whatever, so people can differentiate. I essentially think those influential in this area are overestimating “will” over “way” in “if there’s a will, there’s a way”. Today, with information proliferation, if there’s a way, people will come to you. Maybe it could be as simple as a six-hour work day. That isn’t something most companies would do without thinking about it.
AnthonyMouse
There are individual-level consequences, but it's a macro-level problem. Doing the right thing costs less in the short term but more in the long term. But then someone quotes the Keynesian dodge ("in the long run we are all dead") as if humans will be extinct before we have to pay the piper, as if we're talking about billion year timescales rather than a few years or months.

And maybe we're back to the information asymmetry. People don't connect the fact that using Facebook's VPN could make them have to pay more for groceries than the cost of just paying for a different VPN, so they use it, and it costs them more than they expect it to, and after being multiplied by a thousand things like that, they don't understand why they have so much more debt than their parents did. The fact that the two are related hasn't really entered the public consciousness.

But it's not at the level of company-to-software-developer, it's at the level of customer-to-company. Companies can already tell what kind of developers they're employing. Companies know when they're selling out. But customers generally don't know that about companies.

It's like the whole religious war between Apple and Google. Is Android or iOS the best phone for user privacy? Trick question. It's PureOS. But most people aren't even aware of the possibility of that.

nwhatt
My takeaway from the talk is basically "go slower". He points out that history can be a good guide. In academic fields you need to get IRB approval for human subjects. A similar system might make sense for models applied to people; for example, the system used for sentencing prisoners probably should have some kind of third-party oversight.
hoaw
I get what they are usually saying, and I generally agree. For example, DHH has a ton of material on how to do things differently, or more sanely if you will. It's just: now that I'm convinced, then what? I can try to incorporate some things, but it doesn't change much overall. So while I can appreciate the "gospel", there needs to be a path for people who are already on board. Maybe an organization, methodology, role or even a damn certificate. Because there are thousands of people learning "growth hacking", "agile" or whatever every day.
AnthonyMouse
> In academic fields you need to get IRB approval for human subjects.

On the other hand, that has its own problems:

https://slatestarcodex.com/2017/08/29/my-irb-nightmare/

It seems like the real issue is the information asymmetry. You can build hot garbage in five days but it takes the customer five months to figure it out, by which point they've lost all their data to malware. Meanwhile on day zero the carefully-designed application is $50 and the hot garbage is "FREE*", so which does the user choose without any other way to tell the difference?

paganel
Thanks for the link. At some point he says "the gadgets are the true people of the Earth", which more or less resembles what Jacques Ellul first wrote about 60 years ago [1]:

> Hard determinists would view technology as developing independent from social concerns. They would say that technology creates a set of powerful forces acting to regulate our social activity and its meaning.

and

> According to this view of determinism we organize ourselves to meet the needs of technology and the outcome of this organization is beyond our control or we do not have the freedom to make a choice regarding the outcome (autonomous technology) (...) In his 1954 work The Technological Society, Ellul essentially posits that technology, by virtue of its power through efficiency, determines which social aspects are best suited for its own development through a process of natural selection.

I used to be a pretty big believer in things like "technology will make everything better", but after reading some of Ellul's books I've started to have my doubts about that.

[1] https://en.wikipedia.org/wiki/Technological_determinism#Hard...

naringas
> "the gadgets are the true people of the Earth",

and corporate businesses are fast becoming the true citizens of nations.

graphitezepp
Great, now I can quote somebody who isn't the Unabomber about why I think technology is evil. Thanks.
Verdex
Sarcasm?

I only ask because you're saying this on a website. A website focused on funding technology startups. Hosted on the internet. Built by DARPA grants. Like, this doesn't seem like your sort of place if you're serious about thinking technology is evil.

perfmode
Sometimes the people in the best position to judge are the ones who know the most.
perfmode
I’m going to have to read this book. Thanks for mentioning it.
Dec 23, 2018 · 2 points, 0 comments · submitted by tomcam
Nov 28, 2018 · 128 points, 27 comments · submitted by pfefferz
greenyoda
Do yourself a favor and watch the video of the talk instead (the link is at the top of the transcript). Mickens is a hilarious and captivating speaker, and a mere transcript of his talk doesn't convey that experience.
mr_overalls
As someone who has really enjoyed reading Mickens' highly-caffeinated essays, I was pleased to see that his speaking style is even better!
miss_classified
It's also slightly wrong, and needs some proofreading.

Example:

  > Get at.
That's not what he says at the 24 second mark. If you can't get the first 30 seconds of an hour-long transcript right, why should I read on?

(btw, I get that it was an automated transcript [http://temi.com], and has some OCR-like errors, which is apropos, considering the context of the speech)

flohofwoe
The best part is that this "Audio to Text" transcription service says on its web page:

Proprietary algorithm

Built by our machine learning and speech recognition experts.

starbeast
Who are experts in learning new machines and can nearly always recognise speech, but generally prefer email.
HarryHirsch
Yes, the transcription errors convey a vaguely machine-generated feel, and together with the weird typography (contrast insufficient, background too bright, line spacing too tight, timing marks every 30 seconds) you ask yourself if the whole speech wasn't a machine-generated prank. Considering the subject of that speech, that's ironic.
huhtenberg
It just so happens it's the same James Mickens who wrote the Night Watch! What are the odds!

https://www.usenix.org/system/files/1311_05-08_mickens.pdf

michaelcampbell
100%
atq2119
... and many more Usenix writings, pretty much all of which are laugh-out-loud hilarious.
mikeash
Links to those, and other videos of talks he's done, are available here: https://mickens.seas.harvard.edu/wisdom-james-mickens

For anyone who isn't familiar with him, you should definitely check them out.

toomuchtodo
All of his talks are incredibly good.
starbeast
It's Partick Thistle, not Patrick Thistle. Kingsley's gonna be right steamin'.
andrewflnr
James better start running.
starbeast
Kingsley will have found out pretty quickly too, as he follows all the latest developments in machine learning. One of the few luxuries allowed into Kingsley's padded storage cell is a workstation packed with GPUs that Kingsley tests the latest algorithms on, in order to calculate his ideal fantasy-football team.

Edit - Mickens will have called Partick 'Patrick' on purpose, of course. He's been gunning for Kingsley for a while now, as James is a long time supporter of Dunfermline Athletic.

andrewflnr
> starbeast

Yellow, pointy... Oh shit.

starbeast
Nahh, wrong starbeast. I'm more of a lummox. https://www.goodreads.com/book/show/175328.The_Star_Beast
stuartd
Found the format a bit distracting to read, so cleaned it up a bit (removing Speaker 1 ... etc) - https://pastebin.com/5nvvxSWB
krylon
Mmmh, that face looks familiar. He used to work at Microsoft, right? I remember reading a couple of his blog posts, and they were as funny as they were interesting.

But he is so much better in person.

michaelcampbell
> He used to work at Microsoft, right?

Yes, same guy.

sudofail
What a great talk. He brings up a lot of very important points that we all in tech need to consider and keep in mind.
merricksb
Transcript:

http://www.zachpfeffer.com/single-post/2018/11/27/Transcript...

pronoiac
I love his work! Here's his page at Harvard that collects all of it: https://mickens.seas.harvard.edu/wisdom-james-mickens
sctb
Previously: https://news.ycombinator.com/item?id=17785162.
selimthegrim
God help Gritty when Mickens finds out about him.
0xdeadbeefbabe
I hope he can make some better life choices.
erikb
The speech: https://www.youtube.com/watch?v=ajGX7odA87k
skummetmaelk
Mickens is brilliant. His style may be polarizing, but he has insight and makes good points.

Talks like this are a nice counterbalance to the most upvoted viewpoints expressed here a couple of days ago (https://news.ycombinator.com/item?id=18516177), where people view their skills only as a way to make money and basically do not care how it is done or how safe it is. How many money-makers are running around right now, preaching these stupid ideas just to make money, without even considering the real-life implications their selfish actions will have on millions of people? It's downright terrifying. Technical excellence matters.

The Professor being interviewed is using a very specific niche jargon. As such, what she's saying is largely unintelligible to those of us not initiated into that jargon.

Mickens' keynote from USENIX Security 2018, "Q: Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?" might be more accessible https://www.youtube.com/watch?v=ajGX7odA87k

https://mickens.seas.harvard.edu/wisdom-james-mickens

Oct 31, 2018 · tempodox on Google Home (in)Security
You might be interested in this keynote by James Mickens:

https://www.youtube.com/watch?v=ajGX7odA87k

titled “Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?”.

1. Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible? - James Mickens https://www.youtube.com/watch?v=ajGX7odA87k

2. James Mickens on JavaScript - James Mickens https://www.youtube.com/watch?v=D5xh0ZIEUOE

3. Creating containers From Scratch - Liz Rice https://www.youtube.com/watch?v=8fi7uSYlOdc

4. 2013 Isaac Asimov Memorial Debate: The Existence of Nothing - Panelists: J. Richard Gott, Jim Holt, Lawrence Krauss, Charles Seife, Eva Silverstein. Moderator: Neil deGrasse Tyson https://www.youtube.com/watch?v=1OLz6uUuMp8

5. 2016 Isaac Asimov Memorial Debate: Is the Universe a Simulation? - Panelists: David Chalmers, Zohreh Davoudi, James Gates, Lisa Randall, Max Tegmark. Moderator: Neil deGrasse Tyson https://www.youtube.com/watch?v=wgSZA3NPpBs

6. Zig: A programming language designed for robustness, optimality, and clarity – Andrew Kelley https://www.youtube.com/watch?v=Z4oYSByyRak

7. Concurrency Is Not Parallelism - Rob Pike https://www.youtube.com/watch?v=cN_DpYBzKso

winkeltripel
> 3. Creating containers From Scratch

I read that as "creating containers IN Scratch (a visual game programming language from MIT)." It wasn't as impressive as my expectations, only because my expectations were so very high.

derangedHorse
James Mickens on JavaScript is hilarious! I've been a huge fan of his ever since I stumbled onto one of his AMAs on Reddit.
themoat
I just watched the first video. That was so good!
hessenwolf
What’s the message?
soobrosa
https://medium.com/@soobrosa/my-humble-james-mickens-shrine-...
Wow... this was worth the watch - thank you!

awesome lol at https://youtu.be/ajGX7odA87k?t=1540 (tl;dr - feeding gratuitously racist input to the ML Twitter bot 'TayTweets')

Aug 30, 2018 · 1 point, 0 comments · submitted by empath75
Aug 23, 2018 · 1 point, 0 comments · submitted by nanis
Here's the YouTube link to his talk: https://youtu.be/ajGX7odA87k. This made me laugh more than it probably should.
Aug 17, 2018 · 12 points, 2 comments · submitted by ygra
dredmorbius
Title: Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?
dredmorbius
Incidentally, excellent, raises critically important questions of morality & ethics, and has the best and clearest description of gradient descent ANNs I've yet seen.
Aug 16, 2018 · 6 points, 0 comments · submitted by matt_d
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.