
Hacker News Comments on
USENIX Enigma 2016 - NSA TAO Chief on Disrupting Nation State Hackers

USENIX Enigma Conference · YouTube · 16 HN points · 6 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention USENIX Enigma Conference's video "USENIX Enigma 2016 - NSA TAO Chief on Disrupting Nation State Hackers".
YouTube Summary
Rob Joyce, Chief, Tailored Access Operations, National Security Agency

From his role as the Chief of NSA's Tailored Access Operations, home of the hackers at NSA, Mr. Joyce will talk about the security practices and capabilities that most effectively frustrate people seeking to exploit networks.

A transcript of this talk is available:
https://www.usenix.org/conference/enigma2016/conference-program/presentation/joyce

Sign up to find out more about Enigma conferences:
https://www.usenix.org/conference/enigma2016#signup

Watch all Enigma 2016 videos at:
http://enigma.usenix.org/youtube

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
When you're on the defense side (I am) you often read a lot of research and watch conference talks about cutting edge stuff. It makes you wonder - why don't attackers do these things?

I actually asked a criminal I was in contact with once why he didn't attempt to perform an attack a certain way that I thought would be very lucrative and significant. His answer was that there was no point, he made thousands of dollars a month with very little effort, and he was more interested in refining his existing work through improved C2 communications as opposed to what I had been suggesting (academically, I never supported that work).

The title's a bit clickbaity too of course. The end is more reasoned:

> However, I think the final explanation is most likely. Whoever developed the code was probably in a hurry and decided using more advanced hiding techniques wasn’t worth the development/testing cost.

Yes, naturally that is exactly what happened. There is no question at all that the NSA has people capable of doing more advanced work; they just really don't have to.

https://www.youtube.com/watch?v=bDJb8WOJYdA

Rob Joyce gives a great talk about his work on TAO. The short version is that TAO doesn't have to do anything crazy; they just have to know who their target is and spend the time figuring out the environment they'll be working in - then they only need to clear the bar just beyond what that environment can handle.

Homomorphic encryption is gonna be pretty overkill. Then again, the NSA also leveraged the first publicly known attack that used an MD5 collision, which probably cost quite a bit of money, so they can flex when they decide it's worth it.

Just the other day I suggested using a yubikey, and someone linked me to the Titan side channel where researchers demonstrated that, with persistent access and a dozen hours of work, they could break the guarantees of a Titan chip[0]. They said "an attacker will just steal it". The researchers, on the other hand, stressed how fundamentally difficult this was to pull off due to the very limited attack surface.

This is the sort of absolutism that is so pointless.

At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.

The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.

I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO, talking about how to make TAO's job harder.

[0] https://arstechnica.com/information-technology/2021/01/hacke...

[1] https://www.youtube.com/watch?v=bDJb8WOJYdA

cratermoon
> I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.

I'm beginning to worry that, with Rust being mentioned as a solution for every memory-unsafe operation, we're moving towards an irrational exuberance about how much value that safety really has over time. Maybe let's not jump too enthusiastically onto that bandwagon.

ddalcino
What’s with the backlash against Rust? It literally is “just another language”. It’s not the best tool for every job, but it happens to be exceptionally good at this kind of problem. Don’t you think it’s a good thing to use the right tool for the job?
zo1
It's unusually or suspiciously "hyped". Not to the extent the other sibling comment exaggerates, but enough for it to be noticeable and to rub people the wrong way, myself included. It rubs me the wrong way because something feels off about the way it's hyped/pushed/promoted. It's like the new JavaScript of the programming world. And if we allow it (like we did with JS), it'll take up way too much mindshare, to the detriment and neglect of all the others.
cratermoon
> What’s with the backlash against Rust?

What's with the hyping of Rust as the Holy Grail, the solution to everything short of P=NP and the Halting Problem?

pdimitar
No serious and good programmer is hyping Rust as the "Holy Grail". You are seeing things due to an obvious negative bias. Link me 100x HN comments proving your point if you like but they still mean nothing. I've worked with Rust devs for a few years and all were extremely grounded and practical people who arrived at working with it after a thorough analysis of the merits of a number of technologies. No evangelizing to be found.

Most security bugs/holes have been related to buffer [over|under]flows. Statistically speaking, it makes sense to use a language that eliminates those bugs by the mere virtue of the program compiling. Do you disagree with that?

maqp
I like what tptacek wrote in the sibling comment. IIUC Rust keeps getting mentioned as "the" memory-safe language because it's generally about as fast as C. And it's mainly C and C++ that are memory-unsafe. So Rust is a good language to counter the speed argument (speed is often interchangeable with profit in the business world, especially if security issues only cost a flat rate of cyber insurance).
tptacek
Nobody seriously thinks it's "Rust" that's the silver bullet either; they just believe memory-safe languages are. There are a bunch of them to choose from. We hear about Rust because it works in a bunch of high-profile cases that other languages have problems with, but there's no reason the entire iMessage stack couldn't have been written in Swift.
staticassertion
Totally. I said Rust because I write Rust. Like, that's (part of) my job. Rust is no more memory safe (to my knowledge) than Swift, Java, C#, etc.

I also said "way, way less" not "not at all". I still think about memory safety in our Rust programs, I just don't allocate time to address it (today) specifically.

noizejoy
If you had mentioned those other languages in your original post, it might have amplified your valuable and important point even better, rather than triggering some readers into effectively accusing you of shilling.

I don’t mean this in a very critical spirit, though.

Communication is really hard - especially in a large setting where not everyone reads you in the same context, and not everyone means well.

On balance, your post was valuable to me!

staticassertion
I mentioned Rust because I write Rust professionally. If I wrote Java professionally, as I used to, I would have said "java". So you're probably correct that I could preempt stupid people's posts, but I don't care about the dregs of HN reading into my very clear, simple statement, just because they're upset about rust or whatever. It's just not worth it to me.

I'm glad the post was of value to you. The talk is really good and I think more people should read it.

noizejoy
I hear you, and it’s your prerogative to choose how much to invest in reducing the attack surface for your communication.

On the other hand, you could choose to think about communications in an analogous way to your code, both being subject to attack by bad actors trying to subvert your good intentions.

So, the argument could be made, that removing attack surface from communication is analogous to hardening your code.

I also come from a coding background (albeit a long time ago) and, with the help of some well-meaning bosses, over time eventually came to realize that my messages could gain more influence by reducing unnecessary attack surface. Doesn't mean I always get it right, even now - but I am aware and generally try hard to do just that.

pdimitar
> So, the argument could be made, that removing attack surface from communication is analogous to hardening your code.

That's true, but this is one of the cases where obtaining the last 5-10% of clarity might require 90% of the total effort.

Now whether one actually already has plucked all the low-hanging fruit in their own communication and if it's already good -- that's a separate discussion.

staticassertion
Yep, I definitely get what you're saying and strategic communication is totally worthwhile (I'm a CEO, the value is absolutely not lost on me). It's just not something I prioritize on HN, that's just the personal call I make.
noizejoy
fair enough! :-)
paavohtl
If we include data race safety in the definition of memory safety (which ultimately it is part of), then Rust is safer than any commonly used garbage collected language with access to multithreading, including Swift, Java and C#.
tptacek
This is a RESF trope. We do not include Rust's notion of data race safety in the definition of memory safety as it is used in security. Not all bugs are created equal.
pdimitar
Agreed. I was simply mostly addressing this person's obvious beef with Rust.
option_greek
It also doesn't help that Rust has this addictive nature: once you've tasted your first major Rust program and tamed the borrow checker, you want to keep using it everywhere. And that's the reason why people keep looking around for something to rewrite in Rust. It's in the same category as any other banned drug :)
tptacek
That has not been my experience.
tialaramex
Fair. Two further thoughts:

1. Rust also has other safety features that may be relevant to your interests. It is Data Race Free. If your existing safe-but-slow language offers concurrency (and it might not) it almost certainly just tells you that all bets are off if you have a Data Race, which means complicated concurrent programs exhibit mysterious hard-to-debug issues -- and that puts you off choosing concurrency unless it's a need-to-have for a project. But with Data Race Freedom this doesn't happen. Your concurrent Rust programs just have normal bugs that don't hurt your brain when you think about them, so you feel free to pick "concurrency" as a feature any time it helps.

2. The big surface area of iMessage is partly driven by Parsing Untrusted File Formats. You could decide to rewrite everything in Rust, or, more plausibly, Swift. But this is the exact problem WUFFS is intended to solve.

WUFFS is narrowly targeted at explaining safely how to parse Untrusted File Formats. It makes Rust look positively carefree. You say this byte from the format is an 8-bit unsigned integer? OK. And you want to add it to this other byte that's an 8-bit unsigned integer? You need to sit down and patiently explain to WUFFS whether you understand the result should be a 16-bit unsigned integer, or whether you mean for this to wrap around modulo 256, or if you actually are promising that the sum is never greater than 255.

WUFFS isn't in the same "market" as Rust; its "Hello, world." program doesn't even print Hello, World. Because it can't. Why would parsing an Untrusted File Format ever do that? It shouldn't, so WUFFS can't. That's the philosophy iMessage or similar apps need for this problem. NSO up against WUFFS, instead of whatever an intern cooked up in C last week to parse the latest "must have" format, would be a very different story.
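To make those three choices concrete, here is a rough Rust analogue of the explicitness WUFFS demands for a single byte addition (a hypothetical sketch, not actual WUFFS syntax; the function names are made up for illustration):

```rust
// Adding two bytes pulled out of an untrusted format forces you to state what
// overflow should mean, instead of silently wrapping or trusting the input.

// Choice 1: the sum is allowed to exceed 255, so widen to 16 bits first.
fn add_widened(a: u8, b: u8) -> u16 {
    u16::from(a) + u16::from(b)
}

// Choice 2: we explicitly want arithmetic modulo 256.
fn add_modulo(a: u8, b: u8) -> u8 {
    a.wrapping_add(b)
}

// Choice 3: we promise the sum never exceeds 255; fail loudly if we're wrong.
fn add_checked(a: u8, b: u8) -> Option<u8> {
    a.checked_add(b)
}

fn main() {
    assert_eq!(add_widened(200, 100), 300);
    assert_eq!(add_modulo(200, 100), 44);    // (200 + 100) % 256
    assert_eq!(add_checked(200, 100), None); // the promise didn't hold
}
```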

amelius
It is good to keep in mind that the Rust language still has lots of trade-offs. Security is only one aspect addressed by Rust (another is speed), and hence it is not the most "secure" language.

For example, in garbage collected languages the programmer does not need to think about memory management all the time, and therefore they can think more about security issues. Rust's typesystem, on the other hand, can really get in the way and make code more opaque and more difficult to understand. And this can be problematic even if Rust solves every security bug in the class of (for instance) buffer overflows.

If you want secure, better use a suitable GC'ed language. If you want fast and reasonably secure, then you could use Rust.

staticassertion
> Rust's typesystem, on the other hand, can really get in the way and make code more opaque and more difficult to understand.

I don't disagree with the premise of your post, which is that time spent on X takes away from time spent on security. I'll just say that I have not had the experience, as a professional rust engineer for a few years now, that Rust slows me down at all compared to GC'd languages. Not even a little.

In fact, I regret not choosing Rust for more of our product, because the productivity benefits are massive. Our rust code is radically more stable, better instrumented, better tested, easier to work with, etc.

tptacek
I don't think this is a good take. Go, Java, Rust, Python, Swift; they all basically eliminate the bug class we're talking about. The rest is ergonomics, which are subjective.

"Don't use Rust because it is GC'd" is a take that I think basically nobody working on memory safety (either as a platform concern or as a general software engineering concern) would agree with.

tialaramex
A thing to remember about GC is that it solves only one very important resource. Memory.

If your program loses track of which file handles are open, which database transactions are committed, which network sockets are connected, GC does not help you at all with those resources. When you are low on heap, the system automatically looks for some garbage to get rid of; but when you are low on network sockets, the best it can do is hope that cleaning up garbage happens to disconnect some of them for you.

Rust's lifetime tracking doesn't care why we are tracking the lifetime of each object. Maybe it just uses heap memory, but maybe it's a database transaction or a network socket. Either way though, at lifetime expiry it gets dropped, and that's where the resource gets cleaned up.

There are objects where that isn't good enough, but the vast majority of cases, and far more than under a GC, are solved by Rust's Drop trait.
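A minimal sketch of what that looks like in practice (a hypothetical type; the "session" is just a stand-in for a socket, file handle, or transaction):

```rust
// Whatever owns a Session cleans it up when its lifetime ends, whether the
// scope exits normally, via an early return, or during a panic; there is no
// close() call for a caller to forget.
struct Session {
    id: u32,
}

impl Session {
    fn open(id: u32) -> Session {
        println!("open session {id}");
        Session { id }
    }
}

impl Drop for Session {
    fn drop(&mut self) {
        // Runs exactly once at the end of the Session's lifetime: the place
        // to roll back a transaction, close a socket, release a handle, etc.
        println!("close session {}", self.id);
    }
}

fn main() {
    let _outer = Session::open(1);
    {
        let _inner = Session::open(2);
    } // session 2 is dropped (and "closed") here
} // session 1 is dropped when main returns
```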

valenterry
It's not like Rust is the only or even the best language for solving the problems you mentioned. It might be the best performance-focused / low-level language, though.
vanviegen
It's not the best language for solving this type of problem? What (kind of) language would you say is even better for that?
valenterry
Any language that offers some kind of effect system that has support for brackets and cancelation, for example Haskell or Scala.

There isn't even specific language support necessary; it's handled at the library level.

rurban
Actually safe languages, for example. Pony guarantees all three safeties (memory, type and concurrency), whilst in Rust it's only a plan, just not implemented: stack overflows, type unsafety, deadlocks. POSIX-compatible stdlib.

Concurrent Pascal or Singularity also fit the bill, with actual operating systems being written in it.

kaba0
High-level languages can provide abstractions, though, that can manage object life cycles for you to a degree - for example dependency injection frameworks like Spring.

Not disagreeing, just mentioning.

juki
And many languages also provide convenient syntax for acquiring and releasing a resource for a dynamic extent (Java try-with-resources, C# `using`, Python `with`, etc.), which cover the majority of use cases.
fauigerzigerk
Yes, but these features are usually optional. Library users can easily forget to use them and neither library authors nor the compiler can do anything to enforce it.

The brilliant thing about RAII-style resource management is that library authors can define what happens at the end of an object's lifetime, and the Rust compiler enforces the use of lifetimes.

jolux
I agree that RAII is superior, but it’s not true that compilers and library authors can’t do anything to enforce proper usage of Drop-able types in GC’d languages. C# has compiler extensions that verify IDisposables are used with the using statement, for example. Granted, this becomes a problem once you start writing functions that pass around disposable types.
staticassertion
I'm a security professional so it's based on being an experienced expert, not some sort of hype or misplaced enthusiasm.
colonelxc
The article we are commenting on is about targeted no-interaction exploitation of tens of thousands of high profile devices. I think this is one of the areas where there is a very clear safety value (not just theoretical).
bitwize
Whole classes of bugs -- the most common class of security-related bugs in C-family languages -- just go away in safe Rust with few to no drawbacks. What's irrational about the exuberance here? Rust is a massive improvement over the status quo we can't afford not to take advantage of.
UncleMeat
> how much value that safety really has over time

Billions and billions of dollars. Large organizations like Microsoft and Google have published numbers on the proportion of vulns in their software that are caused by memory errors. As you can imagine, a lot of effort is spent within these institutions to try to mitigate this risk (world class fuzzing, static analysis, and pentesting) yet vulns continue to persist.

Rust is not the solution. Memory-safe languages are. It is just that there aren't many such languages that can compete with C++ when it comes to speed (Rust and Swift are the big ones) so Rust gets mentioned a lot to preempt the "but I gotta go fast" concerns.

Ar-Curunir
… it is a solution for every memory-unsafe operation, though?
choeger
No. Rust cannot magically avoid memory-unsafe operations when you have to deal with, well, memory. If I throw a byte stream at you and tell you it is formatted like so and so, you have to work with memory and you will create memory bugs.

It can however make it extremely difficult to exploit and it can make such use cases very esoteric (and easier to implement correctly).

Ar-Curunir
That's not memory-unsafety. Memory-safety means avoiding bugs like buffer overflow, ROP, etc.
UncleMeat
That's totally untrue, unless you are using a really weird definition of "memory safety". A rust program that doesn't make use of the unsafe keyword will not have memory safety bugs. We've had programming languages for decades that are able to happily process arbitrary bytestreams with incredibly buggy code without ever actually writing to a memory region not reachable through pointers allocated by the ordinary program execution.

A Java program can't write over the return address on the stack.
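For a concrete illustration of the difference, here is a toy length-prefixed parser in safe Rust (a hypothetical sketch, not taken from any real codebase). A malicious length field can only produce an error; it cannot read or write memory outside the input slice:

```rust
// Toy format: [len: 1 byte][payload: len bytes].
fn parse_record(input: &[u8]) -> Result<&[u8], &'static str> {
    let (&len, rest) = input.split_first().ok_or("empty input")?;
    // `get` performs a bounds check and returns None rather than reading
    // past the end of the buffer.
    rest.get(..len as usize).ok_or("length exceeds input")
}

fn main() {
    assert_eq!(parse_record(&[3, b'a', b'b', b'c']), Ok(&b"abc"[..]));
    // Attacker claims 255 bytes of payload but supplies only one.
    assert_eq!(parse_record(&[255, b'a']), Err("length exceeds input"));
}
```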

bogomipz
>"A Java program can't write over the return address on the stack."

Could you say why Java is not susceptible to ROP?

UncleMeat
ROP isn't the vulnerability, but instead the exploitation technique. "Memory safety errors" were around for decades before ROP was widely understood.

A Java program, by construction, cannot write to memory regions not allocated on the stack or pointed to by a field of an object constructed with "new". Runtime checks prevent ordinary sorts of problems and a careful memory model prevents fun with concurrency errors. There are interesting attacks against the Java Security Manager - but this is independent of memory safety.

bogomipz
Yes I'm well aware of buffer overflows/stack smashing. I was asking why Java wasn't susceptible to something like ROP.
UncleMeat
All memory access in Java goes through fields or array offsets.

There are runtime checks around class structure that ensure that a field load cannot actually read some unexpected portion of memory.

There are runtime checks that ensure that you cannot read through a field on a deallocated object, even when using weakreference and therefore triggering a GC even while the program has access to that field.

There are runtime checks around array reads that ensure that you cannot access memory outside of the allocated bounds of the array.

I have no idea why "susceptible to something like ROP" is especially relevant here. ROP is not the same as "writing over the return address". ROP is a technique you use to get around non-executable data sections, and it happens after you abuse some memory safety error to write over the return address (or otherwise control a jump). It means "constructing an exploit via repeated jumps to already existing code rather than jumping into code written by the attacker".

But just for the record, Java does have security monitoring of the call stack that can ensure that you cannot return to a function that isn't on the call stack so even if you could change the return target the runtime can still detect this.

scoutt
> A rust program that doesn't make use of the unsafe keyword will not have memory safety bugs

https://www.cvedetails.com/vulnerability-list/vendor_id-1902...

What if the bug is in std?

What if I use a bugged Vec::from_iter?

What if I use the bugged zip implementation from std?

You'll probably blame unsafe functions, but those unsafe functions were in std, written by the people who know Rust better than anyone.

Imagine what you and me could do writing unsafe.

Imagine trusting a 3rd party library...

cute_boi
Then the same logic applies to Python and Java too? What if there is a bug in the internal implementation?

Rust strives for safety, and safety is the number one priority. Regarding the unsafe in std, please read the source code just to see how careful they are with the implementation. They only use unsafe for performance, and even unsafe Rust doesn't give you that much freedom tbh.

The third-party thing you are referring to sounds childish. Those bugs are not the Rust language's fault tbh. If you don't trust a library, don't use it. It is as simple as that.

So I think it's fine to tell people that a Rust program without unsafe will not have memory safety bugs. Exceptions to this statement do occur, but they are rare.

UncleMeat
Sure, and the JVM can contain an exploitable buffer overrun.

We are on a thread about "a case against security nihilism".

1. Not all vulnerabilities are memory safety vulnerabilities. The idea that adopting memory safe languages will prevent all vulns is not only a strawman, but empirically incorrect since we've had memory safe languages for many decades.

2. It is the case that a tremendously large number of vulns are caused by memory safety errors and that transitioning away from memory-unsafe languages will be a large win for industry safety. 'unsafe' is a limitation of Rust, but compared to the monstrous gaping maw of eldritch horror that is C and C++, it is small potatoes.

3. You are going to struggle to write real programs without ever using third party code.

jvanderbot
Language absolutism.
TaupeRanger
There's literally zero evidence that a program written in Rust is actually practically safer than one written in C at the same scale. And there won't be any evidence of this for some time because no Rust program is as widely deployed as an equivalent highly used C program.
rrdharan
I’d wager Dropbox’s Magic Pocket is up there with equivalent C/C++ based I/O / SAN stacks:

https://dropbox.tech/infrastructure/extending-magic-pocket-i...

staticassertion
That's not true, actually. There is more than "literally zero" evidence. I don't feel like finding it for you, but at minimum Mozilla has published a case study showing that moving to Rust considerably reduced the memory safety issues they discovered. That's just one example, I believe there are others.

There are likely many other examples of, say, Java not having memory safety issues. Java makes very similar guarantees to Rust, so we can extrapolate, using common sense, that the findings roughly translate.

Common sense is a really powerful tool for these sorts of conversations. "Proof" and "evidence" are complex things, and yet the world goes on with assumptions that turn out to hold quite well.

TaupeRanger
Not sure what your last sentence means - without evidence, there are cases when we guess right, and those when we guess wrong. Are you just choosing to ignore the latter?

The Mozilla case study is not a real world study. It simply looks at the types of bugs that existed and says "I promise these wouldn't have existed if we had used Rust". Would Rust have introduced new bugs? Would there be an additional cost to using Rust? We don't know and probably never will. What we care about is preventing real world damage. Does Rust prevent real world damage? We have no idea.

staticassertion
> Not sure what your last sentence means - without evidence, there are cases when we guess right, and those when we guess wrong. Are you just choosing to ignore the latter?

What I'm saying is that truth is a matter of debate. We believe lots of things based on evidence much less rigorous than a formal proof in many cases - like most modern legal systems, which rely on various types of evidence, and then a jury that must form a consensus.

So saying "there is no evidence" is sort of missing the point. Safe Rust does not have memory safety issues, barring compiler bugs, therefor common sense as well as experience with other languages (Java, C#, etc), would show that that memory safety issues are likely to be far less common. Maybe that isn't the evidence that you're after, but I find that compelling.

To me, the question of "does rust improve upon memory safety relative to C/C++" is obvious to the point that it really doesn't require justification, but that's just me.

I could try to find more evidence, but I'm not sure what would convince you. There's people fuzzing rust code and finding far fewer relevant vulns - but you could find that that's not compelling, or whatever.

tialaramex
Not just memory safety. Rust also prevents data races in concurrent programs. And there are a few more things too.

But these tricks have the same root: What if we used all this research academics have been writing about for decades, improvements to the State of the Art, ideas which exist in toy languages nobody uses -- but we actually industrialise them so we can use the resulting language for Firefox and Linux not just get a paper into a prestigious journal or conference?

If ten years from now everybody is writing their low-level code in a memory safe new C++ epoch, or in Zig, that wouldn't astonish me at all. Rust is nice, I like Rust, lots of people like Rust, but there are other people who noticed this was a good idea and are doing it. The idea is much better than Rust is. If you can't do Rust but you can do this idea, you should.

If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.

Imagine it's 1995, you have just seen an Internet streaming radio station demonstrated, using RealAudio.

Is RealAudio the future? In 25 years will everybody be using RealAudio? No, it turns out they will not. But is this all just stupid hype for nothing? Er, no. In 25 years everybody will understand what an "Internet streaming radio station" is; they just aren't using RealAudio. The actual technology they use might be MPEG audio layer III aka MP3 (which exists in 1995 but is little known), or it might be something else - they do not care.

pcwalton
Well, Zig isn't memory safe (as implemented today; they could add a GC), so it's not a good example of a Rust alternative in this domain. But I agree with your overall point, and would add that you could replace Zig with any one of the dozens of popular memory safe languages, even old standbys like Java. The point is not to migrate to one language in particular, but rather to migrate to languages in which memory errors are compiler bugs instead of application bugs.
tialaramex
I could have sworn I'd read that Zig's ambition was to be memory safe. Given ten years I don't find that impossible. Indeed I gave C++ the same benefit of the doubt on that timeline. But, when I just searched I couldn't find whatever I'd seen before on that topic.
dralley
The Zig approach is "memory safe in practice" vs "memory safe in theory". They don't have any aspirations to total memory safety like Rust, but they want to get most of the way there with a lot less overhead.

Basically they have a lot of runtime checks enabled in debug mode, where you do the majority of your testing, that are then disabled in the release binary.

Additionally the approach they've taken to allocators means that you can use special allocators for testing that can perform even more checks, including leak detection.

I think it's a great idea and a really interesting approach but it's definitely not as rigorous as what Rust provides.

adwn
> Basically they have a lot of runtime checks enabled in debug mode, where you do the majority of your testing, that are then disabled in the release binary.

But there's the problem: Testing can't and won't cover all inputs that a malicious attacker will try [1]. Now you've tested all inputs you can think of with runtime checks enabled, you release your software without runtime checks, and you can be sure that some hacker will find a way to exploit a memory bug in your code.

[1] Except for very thorough fuzzing. Maybe. If you're lucky. But probably not.

littlestymaar
It's not “memory safe in practice”. It's “we provide tools to write in our memory-unsafe language with as few memory issues as possible”. Is it better than what C or C++ offer out of the box? Yes. It's totally reasonable to think that it may be as good as C or C++ with state-of-the-art tooling (which most programmers aren't using today because they don't want to invest the effort), so this is big progress over C.

But this shouldn't be called “memory safety”.

pcwalton
I don't think Zig is going to be memory safe in practice, unless they add a GC or introduce a Rust-like system. All of the mitigations I've seen coming from that language--for example, quarantine--are things that we've had for years in hardened memory allocators for C++ like Chromium PartitionAlloc [1] and GrapheneOS hardened_malloc [2]. These have been great mitigations, but have not been effective in achieving memory safety.

Put another way: Anything you could do in the malloc/free model that Zig uses right now is something you could do in C++, or C for that matter. Maybe there's some super-hardened malloc design yet to be found that achieves memory safety in practice for C++. But we've been looking for decades and haven't found such a thing--except for one family of techniques broadly known as garbage collection (which, IMO, should be on the table for systems programming; Chromium did it as part of the Oilpan project and it works well there).

There is always a temptation to think "mitigations will eliminate bugs this time around"! But, frankly, at this point I feel that pushing mitigations as a viable alternative to memory safety for new code is dangerous (as opposed to pushing mitigations for existing code, which is very valuable work). We've been developing mitigations for 40 years and they have not eliminated the vulnerabilities. There's little reason to think that if we just try harder we will succeed.

[1]: https://chromium.googlesource.com/chromium/src/+/HEAD/base/a...

[2]: https://github.com/GrapheneOS/hardened_malloc

pron
You understand "memory safe in practice" as soundly eliminating all memory safety issues. This is not how I understand it. Zig can exceed Rust's memory safety in practice without soundly eliminating all issues. The reason is that many codebases rely on unsafe code, and finding problems in Zig can be cheaper than finding problems in Rust w/ unsafe. This is even more pronounced when we look at security overall because while many security issues are memory safety issues, many aren't (and most aren't use-after-free bugs); in other words, it's certainly possible that paying to eliminate all use-after-free harms security more than just catching much of it more cheaply. So there is no doubt that Rust programs that don't use unsafe will have fewer use-after-free bugs than Zig programs, but it is very doubtful that they will, on average, be more secure as a result of this tradeoff.
pcwalton
The idea that being memory safe is less secure than not being memory safe is at odds with the opinion of more or less the entire security community.
pron
> Well, Zig isn't memory safe (as implemented today; they could add a GC), so it's not a good example of a Rust alternative in this domain.

While the first part of the sentence is mostly true (although the intention is to make safe Zig memory safe, and unsafe Rust isn't safe either), the second isn't. The goal isn't to use a safe language, but to use a language that best reduces certain problems. The claim that the best way to reduce memory safety problems is by completely eliminating all of them, regardless of type and regardless of cost, is neither established nor sensical. Zig completely eliminates overflows and, instead of paying the cost of soundly eliminating use-after-free, makes detecting and correcting it, and other problems, easier.

dralley
Zig isn't memory safe but it's still leaps and bounds above C.

I have to admire the practicality of the approach they've been taking.

pa7ch
I remain unconvinced that race-proof programs are nearly as big a deal as memory safety. Many classes of applications can tolerate panics, and it's not a safety or security issue. I don't worry about a parser or server in Go like I would in C.

(I realize that racing threads can cause logic-based security issues. I've never seen a traditional memory exploit from racing goroutines, though.)

nyanpasu64
I've seen Qt Creator segfault due to the CMake plugin doing some strange QStringList operations on an inconsistent "implicitly shared" collection, that I guess broke due to multithreading (though I'm not sure exactly what happened). In RSS Guard, performing two different "background sync" operations causes two different threads to touch the same list collections, producing a segfault. (These are due to multiple threads touching the same collection/pointers; racing on primitive values is probably less directly going to lead to memory unsafety.)

Apparently in Golang, you can achieve memory unsafety through data races: https://blog.stalkr.net/2015/04/golang-data-races-to-break-m... (though I'm not sure if a workaround has been added to prevent memory unsafety).

tialaramex
A Race in Go is Undefined Behaviour. All bets are off, whatever happens, no matter how strange, is OK.

If you have a race which definitely only touches some simple value like an int and nothing more complicated then Go may be able to promise your problem isn't more widespread - that value is ruined, you can't trust that it makes any sense (now, in the future, or previously), but everything else remains on the up-and-up. However, the moment something complicated is touched by a race, you lose, your program has no defined meaning whatsoever.

gpderetta
Of course, but when talking about security, a race in Go would be very hard to exploit.

It is a different story in languages meant to run untrusted code of course.

raxxorrax
I take that bet about C being pretty prominent 10 years from now.

A language and a memory access model are no panacea. 10 years is like the day after tomorrow in many industries.

perryizgr8
> also prevents data races in concurrent programs.

I have another neat trick to avoid races. Just write single threaded programs. Whenever you think you need another thread, you either don't need it, or you need another program.

kaba0
You do realize that data races can happen between multiple programs as well, when shared resources are used? Which is pretty much a requirement for many things.
perryizgr8
Yes, and rust can't prevent those.
adwn
It can prevent data races in memory shared between processes in the same way it can prevent them in memory shared between threads. Data race prevention isn't built into the Rust language; it is constructed using a combination of the borrow checker and the type system.
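A minimal sketch of the familiar thread case (a hypothetical counter example using only the standard library; a cross-process abstraction over shared memory would follow the same pattern): the shared state can only be reached through types that make the sharing and locking explicit, so the unsynchronised access that would constitute a data race is rejected at compile time rather than discovered in testing.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Handing four threads a plain `&mut u64` would not compile; the type
    // system forces the sharing (Arc) and the locking (Mutex) to be explicit.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4_000);
}
```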
perryizgr8
As I understand the borrow checker, it wouldn't detect races that result from the interaction of separate processes, since that would be outside the bounds of the compilation unit. But my knowledge is limited here, so I may be wrong.
staticassertion
Rust has no concept of a process, same as it has no concept of a thread. So you'd build a safe abstraction for sharing memory across processes the same way you do today with threads.
SolarNet
> If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.

I mean to be clear, modern C++ can be effectively as safe as rust is. It requires some discipline and code review, but I can construct a tool-chain and libraries that will tell me about memory violations just as well as rust will. Better even in some ways.

I think people don't realize just how much modern C++ has changed.

staticassertion
It must have changed a shitload in the last 2-3 years if that's the case. What tools are you referring to? I'm pretty familiar with C++ tooling but I haven't paid attention for a little while.
SolarNet
The modern standard library, plus some helpers, is the big part of it. Compiler warnings treated as errors are very good at catching bad situations if you follow the rules (e.g. don't allow people to just make raw pointers, follow the rule of five). I never said it was as easy to do as in Rust.

As for tooling, things like valgrind provide an excellent mechanism for ensuring that the program was memory safe, even in its "unsafe" areas or when calling into external libraries (something that Rust can't provide without similar tools anyway).

My broader point is that safety is more than just a compiler saying "ok you did it", though that certainly helps. I would trust well written safety focused C++ over Rust. On the other hand, I would trust randomly written Rust over C++. Rust is good for raising the lower end of the bar, but not really the top of it unless paired with a culture and ecosystem of safety focus around the language.

tialaramex
Address sanitizers and valgrind tell you whether your program did something unsafe while you were analysing, but they can't tell you whether the program can do something unsafe.

Since we're in a thread about security this is a crucial difference. I'm sure Amy, Bob, Charlie and Deb were able to use the new version of Puppy Simulator successfully for hours without any sign of unsafety in the sanitizer. Good to go. Unfortunately, the program was unsafe and Evil Emma had no problem finding a way to attack it. Amy, Bob, Charlie and Deb had no reason to try naming a Puppy in Puppy Simulator with 256 NUL characters, so, they didn't, but Emma did and now she's got Administrator rights on your server. Oops.

In contrast safe Rust is actually safe. Not just "It was safe in my tests" but it's just safe.

Even though it might seem like this doesn't buy you anything when of course fundamental stuff must use unsafe somewhere, the safe/unsafe boundary does end up buying you something by clearly delineating responsibility.

For example, sometimes in the Rust community you will see developers saying they had to use unsafe because, alas, the stupid compiler won't optimise the safe version of their code properly. For example, it has a stupid bounds check they don't need, so they used "unsafe" to avoid that. But surprisingly often, another programmer looks at their "clever" use of unsafe and finds that actually they did need that bounds check they got rid of; their code is unsafe for some parameters.
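A minimal sketch of that kind of "clever" unsafe (hypothetical functions for illustration): the safe version keeps the bounds check, so the worst case is a None; the unchecked version shifts the proof obligation onto every caller.

```rust
// Safe version: indexing through `get` carries its own bounds check.
fn third_byte(data: &[u8]) -> Option<u8> {
    data.get(2).copied()
}

// "Clever" version: the author asserts the slice is always long enough and
// removes the check. If that promise is ever wrong, this is undefined
// behaviour - which is exactly why the function should be marked `unsafe`.
unsafe fn third_byte_unchecked(data: &[u8]) -> u8 {
    // SAFETY contract: the caller must guarantee data.len() > 2.
    *data.get_unchecked(2)
}

fn main() {
    let data = [10u8, 20, 30];
    assert_eq!(third_byte(&data), Some(30));
    // The caller now owns the obligation that data.len() > 2.
    assert_eq!(unsafe { third_byte_unchecked(&data) }, 30);
}
```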

For example, just like the C++ standard vector, Rust's Vec is a wasteful solution for a dozen integers or whatever: it does a heap allocation, and it has all this logic for growing and shrinking that I don't need for a dozen integers. There are at least two Rust "small vector" replacements. One of them makes liberal use of "unsafe", arguing that it is needed to go a little faster. The other is entirely safe. Guess which one has had numerous safety bugs... right.

Over in the C++ world, if you do this sort of thing, the developer comes back saying duh, of course my function will cause mayhem if you give it unreasonable parameters, that was your fault - and maybe they update the documentation or maybe they don't bother. But in Rust we've got this nice clean line in the sand: that function is unsafe, and if you can't do better, label it "unsafe" so that it can't be called from safe code.

This discipline doesn't exist in C++. The spirit is willing but the flesh (well, the language syntax in this case) is too weak. Everything is always potentially unsafe and you are never more than one mistake from disaster.

SolarNet
> This discipline doesn't exist in C++.

And this is where the argument breaks down for me. The C++ vector class can be just as safe if people are disciplined. And as you even described, people in rust can write "unsafe" and do whatever they want anyway to introduce bugs.

The language doesn't really seem to matter at the end of the day from what you are telling me (and that's my main argument).

With the right template libraries (including many parts of the modern C++ STL) you can get the same warnings you can from Rust. One just makes you chant "unsafe" to get around it. But a code review should tell off any developer doing something unsafe in either language. C++ with only "safe" templates is just as "actually safe" as Rust is (except with a better recovery solution than panics!).

deckard1
> If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.

It's 26 years after Java was released. Java has largely been the main competitor to C++. I don't see C++ going away nor do I see C going away. And it's almost always a mistake to lump C and C++ developers together. There is rarely an intersection between the two.

I think you do not understand how short 10 years is. There are tons of people still running computers on Sandy Bridge.

tialaramex
> I think you do not understand how short 10 years is. There are tons of people still running computers on Sandy Bridge.

Ten years is about the time since C++ 11. I may be wrong, but I do not regret my estimate.

api
There's still a lot of macho resistance to using safe languages, because "I can write secure code in C!"

"You" probably can. I can too. That's not the point.

What happens when the code has been worked on by other people? What happens after a few dozen pull requests are merged? What happens when it's ported to other platforms with different endian-ness or pointer sizes or hacked in a late night death march session to fix some bug or add some feature that has to ship tomorrow? What happens when someone accidentally deletes some braces with an editor's refactor feature, turning a "for { foo(); bar(); baz(); }" into a "for foo(); bar(); baz();"?

That's how bugs creep in, and the nice thing about safe languages is that the bugs that creep in are either caught by the compiler or result in a clean failure at runtime instead of exploitable undefined behavior.

Speed is no longer a good argument. Rust is within a few percentage points of C performance if you code with an eye to efficiency, and if you really need something to be as high-performance as possible, code just that one thing in C (or ASM) and code the rest in Rust. You can also use unsafe to squeeze out performance if you must, sparingly.

Oh and "but it has unsafe!" is also a non-argument. The point of unsafe is that you can trivially search a code base and audit every use of it. Of course it's easy to search for unsafe code in C and C++ too... because all of it is!

If we wrote most things and especially things like parsers and network protocols in Rust, Go, Swift, or some other safe language we'd get rid of a ton of low-hanging fruit in the form of memory and logic error attack vectors.

UncleMeat
> "You" probably can. I can too. That's not the point.

I'm not even sure that's true. I do agree with you that the argument that you need to hire other people is more convincing, but I'd wager that no single human on the planet can actually write a vuln-free parser of any complexity in C on their first attempt - even if handed the best tools that the model checking community has to offer.

Macho is the best word to describe it. It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++.

3gg
Beat me to it. The macho effect is there for sure, but on what grounds do you claim you can write secure C? As far as I know, you can't really prove anything about C unless you severely restrict the language, and those restrictions include pointer usage. So at best, you can do a hand-wavy read through code and have some vague notion of its behaviour.
api
It depends on the size of the parser. As they get big and complex I would start to agree with you.
kobebrookskC3
what about https://github.com/seL4/seL4?
paavohtl
I would argue formally verified C does not count, because in the grand scheme of things only the tiniest fraction of C in existence is formally verified. The effort and knowhow it requires is not a realistic option for the vast majority of users.
Animats
It doesn't do enough. It's so low level that you have to run another OS on top of it. So all it does is provide a virtual machine. Typically people load Linux on top, which means you have all the security holes of Linux. You just get to run a few copies of Linux, possibly at different security levels.

I would have liked to see a secure QNX as a mainstream OS. The microkernel is about 60 KB, and it offers a POSIX API. All drivers, file systems, networking, etc. are in user space. You pay about 10%-20% overhead for message passing. You get some of that back because you have good message passing available, instead of using HTTP for interprocess communication.

kobebrookskC3
I was responding to the claim "It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++". Of course, the feasibility part is questionable.
UncleMeat
"on their first attempt" is part of that sentence.
kaba0
It was written by top experts of the field through multiple years and is formally verified. It could have been written in brainfuck as well, since at that point the language is not important.
meowface
>Macho is the best word to describe it. It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++.

It reminds me a little of some of the free-wheeling nuclear physicists in the Manhattan Project - probably some of the smartest people on the planet - being hubristically lax with safety: https://en.wikipedia.org/wiki/Demon_core#Second_incident

>[...] The experimenter needed to maintain a slight separation between the reflector halves in order to stay below criticality. The standard protocol was to use shims between the halves, as allowing them to close completely could result in the instantaneous formation of a critical mass and a lethal power excursion.

>Under Slotin's own unapproved protocol, the shims were not used and the only thing preventing the closure was the blade of a standard flat-tipped screwdriver manipulated in Slotin's other hand. Slotin, who was given to bravado, became the local expert, performing the test on almost a dozen occasions, often in his trademark blue jeans and cowboy boots, in front of a roomful of observers. Enrico Fermi reportedly told Slotin and others they would be "dead within a year" if they continued performing the test in that manner. Scientists referred to this flirting with the possibility of a nuclear chain reaction as "tickling the dragon's tail", based on a remark by physicist Richard Feynman, who compared the experiments to "tickling the tail of a sleeping dragon".

>On the day of the accident, Slotin's screwdriver slipped outward a fraction of an inch while he was lowering the top reflector, allowing the reflector to fall into place around the core. Instantly, there was a flash of blue light and a wave of heat across Slotin's skin; the core had become supercritical, releasing an intense burst of neutron radiation estimated to have lasted about a half second. Slotin quickly twisted his wrist, flipping the top shell to the floor. The heating of the core and shells stopped the criticality within seconds of its initiation, while Slotin's reaction prevented a recurrence and ended the accident. The position of Slotin's body over the apparatus also shielded the others from much of the neutron radiation, but he received a lethal dose of 1,000 rad (10 Gy) neutron and 114 rad (1.14 Gy) gamma radiation in under a second and died nine days later from acute radiation poisoning.

greyhair
I have used a Yubikey for years. Nothing is perfect, but as you mentioned, the only hacks of them have been with persistent physical access, or somehow getting the end user to hit the activate button tens of thousands of times.

On any system, if you give an attacker physical access to the device, you are done. Just assume that. If your Yubikey lives in your wallet, or on your key chain, and you only activate it when you need it, it is highly unlikely that anyone is going to crack it.

As far as physical device access, my last employer maintained a 'garage' of laptops and phones for employees traveling to about a half dozen countries. If you were going there, you left your corporate laptop and phone in the US and took one of these 'travel' devices with you for your trip. Back home, those devices were never allowed to connect to the corporate network. When you handed them in, they were wiped and inspected, but IT assumed that they were still compromised.

Lastly, Yubikey, as a second factor, is supposed to be part of a layered defense. Basically forcing the attacker to hack both you password and your Yubikey.

It bugs me that people don't understand how important two factor auth is, and also how crazy weak SMS access codes are.

ajsnigrutin
This depends...

I've had an argument here about SMS for 2FA... Someone said that SMS for 2FA is broken because some companies misuse it for 1FA (e.g. for password reset)... but in essence, a simple SMS verification solves 99.9% of issues with e.g. password leaks and password reuse.

No security solution is perfect, but using a solution that works 99% of the time is still better than no security at all (or just one factor).

tialaramex
I'm pretty sure I've written on HN before that SMS 2FA doesn't do much against phishing, which we know is a big problem, but worse it creates a false reassurance.

The user doesn't correctly reason that the bank sent them this legitimate SMS 2FA message because a scammer is now logging into their account; they assume it's because this is the real bank site they've reached via the phishing email, and therefore their concern that it seemed maybe fake was unfounded.

andi999
The phisher needs to know your phone number though to do that.
nimih
Why would the phisher need to know your phone number? Once you've clicked the link in the email and are on the phisher's website, they can just trigger the 2FA SMS through the bank's own login flow, display a 2fa prompt on the phishing site, then relay the credential on their end.

This isn't unique to SMS, obviously, since the same attack scenario works against e.g. a TOTP from a phone app.

andi999
Of course. I was thinking man in the middle, but it is not needed here.

Edit: thinking about it, without a man in the middle the phisher can log in, but cannot make transfers (assuming the SMS shows what transfer is being authorized). Still bad enough.

tialaramex
Crooks also thrive on confusion†. We can and should make software more robust against getting confused by bad guys, but Grannie we can't do much about.

So alas, even if on every previous transaction, Grannie was told, "Please read the SMS carefully and only fill out the code if the transfer is correctly described", she may not be suspicious when this time the bank (actually a phishing site) explains, "Due to a technical fault, the SMS may indicate that you are authorising a transfer. Please disregard that". Oops.

† e.g. some modern "refund" scams involve a step where the poor user believes they "slipped" and entered a larger number than they meant to, but actually the bad guys made the number bigger, the user is less suspicious of the rest of the transaction because they believe their agency set the wheels in motion.

ajsnigrutin
But the scammer needs the username, the password, and to phish the user... this is still more than just username+password (which could be reused on e.g. LinkedIn, Adobe, or any of the other hacked sites), and if the scammers do the phishing attack, they can also get the OTP from the user's app in the same way they would get the code from an SMS.
viztor
For anyone who's fresh to cyber security, the fundamental axiom is that anything can be cracked; it's only a matter of computation (time times resources). Just as the dose makes the poison (sola dosis facit venenum).

Suppose you have a secret that is RSA-encrypted. With the kind of computers we have now, we might be looking at three hundred trillion years to crack it, according to Wikipedia. Obviously the secret would have lost its value by then, and the resources required to crack it would be worth more than the secret itself. Even with quantum computing, we are still looking at 20+ years, which is enough for most secrets: you have plenty of time to change it, or it will have lost its value by then. So we say that's secure enough.

alabamacadabra
If that’s a fundamental axiom of cyber security then it’s obvious that it’s a field of fools. This is a poor, tech-driven understanding of security that will leave massive gaps in its application to technology.
o8r3oFTZPE
From the video: "Cloud computing is really a fancy name for someone else's computer."

He goes on to discuss the expansion of "trust boundaries".

Big Tech: Use our computers, please!

yarcob
> The use of memory unsafe languages for parsing untrusted input is just wild.

I think some of the vulnerabilities have been found in image file format or PDF parsing libraries. These are huge codebases that you can't just rewrite in another language.

At the same time, Apple is investing huge amounts of resources into making their (and everyone elses) code more secure. Xcode/clang includes a static analyzer that catches a lot of errors in unsafe languages, and they include a lot of "sanitizers" that try to catch problems like data races etc.

And finally, they introduced a new, much safer programming language that prevents a lot of common errors, and as far as I can tell they are taking a lot of inspiration from Rust.

So it's not like Apple isn't trying to improve things.

UncleMeat
These are stopgaps, not long term solutions.

MSan has a nontrivial performance hit and is a problem to deploy on all code running a performance-critical service. Static analysis can find some issues, but any sound static analysis of a C++ program will rapidly havoc and report false positives out the wazoo. Whole-program static analysis (which you need to prevent false positives) is also a nightmare for C++ due to the single-translation-unit compilation model.

All of the big companies are spending a lot of time and money trying to make systems better with the existing legacy languages and this is necessary today because they have so much code and you can't just YOLO and run a converter tool to convert millions and millions of lines of code to Rust. But it is very clear that this does not just straight up prevent the issue completely like using a safe language.

blowski
I was with you until the parsing with memory unsafe languages. Isn’t that exactly the kind of “random security not based on a threat model” type comment you so rightly criticised in the first half of your comment?
titzer
Based on the hundreds, perhaps thousands of critical vulnerabilities that are due directly to parsing user input in memory-unsafe languages, usually resulting in remote code execution, how's this for a threat model: attacker can send crafted input that contains machine code that subsequently runs with the privileges of the process parsing the input. That's bad.
staticassertion
The attack surface is the parser. The ability to access it is arbitrary. I can't build a threat model beyond that for any specific case, but in the case of a text messaging app I absolutely expect "attacker can text you" to be in your threat model.
kmeisthax
There are very few threat models that a memory unsafe parser does not break.

Even the "unskilled attacker trying other people's vulns" threat basically depends on the existence of memory-safety related vulnerabilities.

blowski
Then we’re right back in the checklist mentality of “500 things secure apps never do”. I could talk to somebody else and they’d tell me the real threat to worry about is phishing or poor CI/CD or insecure passwords or whatever.
_jal
It sounds like you should talk to fewer people until you do your own threat modeling. Nobody not at your company shares the exact same threats, risks and consequences of breaches. You really need a solid look at what you're facing before anyone can offer clear advice.

That said, it is easy to talk about high-risk mistakes that people make over and over and over. And processing untrusted input in memory-unsafe languages is absolutely a bullet that finds so many feet that once my generation of programmers (raised on C) are dead, I think it will just be conventional wisdom.

staticassertion
There is no "real threat". Definitely phishing is one of the top threats to an organization, left unmitigated. Thankfully, we now have unphishable 2FA, so you can mitigate it. When you choose to prioritize a threat is going to be a call you have to make as the owner of your company's security posture - maybe phishing is above memory safety for you, I can't say.

What I can say is that parsing untrusted data in C is very risky. I can't say it is more risky than phishing for you, or more risky than anything else. I lack the context to do so.

That said, a really easy solution might be to just not do that. Just like... don't parse untrusted input in C. If that's hard for you, so be it, again I lack context. But that's my general advice - don't do it.

lanstin
Inarguable these days.
ExtraE
I mean, the threat model is that: 1. memory leaks/errors are bad; 2. programmers make those mistakes all the time; 3. using memory safe languages is cheap. Therefore: 4. we should use memory safe languages more often.
legulere
The threat-model there is that the attacker controls the text that is parsed.
jvanderbot
Add this language absolutism to the list of things we need to avoid.
lisper
I think you must have misunderstood the point the parent comment was trying to make. Memory-safety issues are responsible for a majority of real-world vulnerabilities. They are probably the most prevalent extant threat in the entire software ecosystem.
IggleSniggle
It may sound like I’m being snarky, but I’m not:

Don’t users / social engineering make up the actual majority of real-world vulnerabilities, and pose the most prevalent extant threat in the entire software ecosystem?

lisper
A fair point, but that's not really a problem with the technology. (And I did hedge with "probably" :-)
staticassertion
Yes, but I think that within the context of discussing a memory safety vulnerability in a text messaging app it's reasonable to talk about memory safe parsers, no?

Beyond that, I've already addressed phishing at our company, it just didn't seem worth pointing out.

wahern
Buffer overflows are common in CVEs because it's the kind of thing programmers are very familiar with. But I'm pretty sure that in terms of real-world exploits things like SQL injection, cross-site scripting, authentication logic bugs, etc, are still far more common. Almost all of those are in bespoke, proprietary software. A Facebook XSS exploit doesn't get a CVE.
steveklabnik
First Microsoft, then two different teams at Google, and then Mozilla, and then someone else, all found that roughly 70% of security vulnerabilities reported in their products are due to memory unsafety issues. That roughly the same number keeps coming up across all of the biggest companies in our industry lends it some weight.

Here's the first Microsoft one: https://www.zdnet.com/article/microsoft-70-percent-of-all-se...

And Chrome: https://www.zdnet.com/article/chrome-70-of-all-security-bugs...

wahern
Yes, I'm well aware of what the data says, as well as what the data is measuring--CVEs and bug reports in well-known C/C++/Java projects.

But not too long ago, before SaaS, social media, etc, displaced phpBB, WordPress, and other open source platforms, things like SQL injection reigned supreme even in the reported data. Back then CVEs more closely represented the state of deployed, forward-facing software. But now the bulk of this software is proprietary, bespoke, and opaque--literally and to vulnerability data collection and analysis.

How many of the large state-sponsored penetrations (i.e. the ones we're most likely to hear about) used buffer overflows? Some, like Stuxnet, but they're considered exceptionally complex; and even in Stuxnet buffer overflows were just one of several different classes of exploits chained together.

Bad attackers are usually pursuing sensitive, confidential data. Access to most data is protected by often poorly written logic in otherwise memory-safe languages.

UncleMeat
SQL Injection is a good lesson here. How is it mitigated effectively? By telling devs to write code carefully? No. It is mitigated by prepared statement libraries that are structurally resistant to SQL Injection. Similarly, "here are some static analysis tools - try your best to write safe C" is not a winning move.
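For anyone newer to this, here is a minimal sketch of what "structurally resistant" means, assuming the rusqlite crate (any driver with bound parameters works the same way): the untrusted value is bound as data, so it cannot change the shape of the statement.

    use rusqlite::{params, Connection, Result};

    fn main() -> Result<()> {
        let conn = Connection::open_in_memory()?;
        conn.execute("CREATE TABLE users (name TEXT NOT NULL)", [])?;

        // A classic injection payload arrives as user input...
        let untrusted = "Robert'); DROP TABLE users;--";

        // ...but a prepared statement sends the SQL and the value separately,
        // so the input can only ever be a name, never new SQL.
        conn.execute("INSERT INTO users (name) VALUES (?1)", params![untrusted])?;

        let count: i64 = conn.query_row("SELECT COUNT(*) FROM users", [], |row| row.get(0))?;
        println!("users table still exists, rows: {count}");
        Ok(())
    }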
kaba0
The thing is, SQL injection and cross-site scripting are both trivial to defend against — at least compared to memory safety. It has a small surface area and most frameworks do help with it, or at least it is in their realm of possibility.

Preventing buffer overruns requires language-level support.

steveklabnik
My understanding was that while some of these are about CVEs and such, not all are. Like my understanding was the Microsoft numbers are from across all products, proprietary and open source.
comex
> How many of the large state-sponsored penetrations (i.e. the ones we're most likely to hear about) used buffer overflows?

It really depends on the target. If you’re attacking a website, then sure, you’re more likely to find vulnerability classes like XSS that can exist in memory-safe code. When you’re talking about client-side exploits like the ones used by NSO Group, though, almost all of them use memory corruption vulnerabilities of some sort. (That doesn’t only include buffer overflows; use-after-free vulnerabilities seem to be the most common ones these days.)

EricE
“The use of memory unsafe languages for parsing untrusted input is just wild.” Indeed! The casualness of attitudes towards input validation continues to floor me. “Computer science” is anything but :p
bigiain
> I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust

Can you though? Where/how are you deploying your Rust executables that isn't relying deeply on OS code written in "wild" "memory unsafe languages"?

I mean, I _guess_ it'd be possible to write everything from the NIC firmware all the way through your network drivers and OS to ensure no untrusted input gets parsed before it hits your Rust code, but I doubt anyone except possibly niche academic projects or NSA/MOSSAD devs have ever done that...

staticassertion
Yeah I mean, 100%, I hate that I run my code on Linux, which I don't consider to be a well secured kernel. It's an unfortunate thing, but such is life.

But attackers have significantly less control over that layer. This is quite on topic with regards to security nihilism - my parser code being memory safe means that the code that's directly interfacing with attacker input is memory safe. Is the allocator under the hood memory safe? Nope, same with various other components - like my TCP stack. But again, attackers have a lot less control over that part of the stack, so while unfortunate, it's not my main concern.

I do hope to, in the future, leverage a much much more security optimized stack. I'd dive into details on how I intend to do that, but I think it's out of scope for this conversation.

bsder
> Just the other day I suggested using a yubikey

The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.

Everybody defaults to a small number of security/identity providers because running the system is so stupidly painful. Hand a YubiKey to your CEO and their secretary. Make all access to corporate information require a YubiKey. They won't last a week.

We don't need better crypto. Crypto is good enough. What we need is better integration of crypto.

staticassertion
> Hand a YubiKey to your CEO and their secretary.

Well, I'm the CEO lol so we have an advantage there.

> The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security.

Totally, this is a huge issue to me. I strongly believe that we need to start getting TPMs and hardware tokens into everyone's hands, for free - public schools should be required to give it to students when they tell them to turn in homework via some website, government organizations/ anyone who's FEDRAMP should have it mandated, etc. It's far too expensive today, totally agreed.

edit: Wait, per month? No no.

> We don't need better crypto.

FWIW the kicker with yubikeys isn't really anything with regards to cryptography, it's the fact that you can't extract the seed and that the FIDO2 protocols are highly resistant to phishing.

tialaramex
> The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.

But what does this have to do with the FIDO authenticator?

At first I thought you said $100 per user, and I figured, wow, you are buying them all two Yubikeys, that's very generous. And then I realised you wrote "per month".

None of this costs anything "per month per user". You're buying some third party service, they charge whatever they like, this is the same as the argument when people said we can't have HTTPS Everywhere because my SSL certificate cost $100. No, you paid $100 for it, but it costs almost nothing.

I built WebAuthn enrollment and authentication for a vanity site to learn how it works. No problem, no $100 per month per user fees, just phishing proof authentication in one step, nice.

The integration doesn't get any better than this. Having watched a video today of people literally wrapping up stacks of cash to FedEx their money to scammers, I shouldn't underestimate how dumb people can be, but really, even if you struggle with TOTP, do not worry: WebAuthn is easier than that as a user.

bsder
And how do I use my YubiKey to access mail if it's not Gmail/Office365?

And how do I enroll all my employees into GitHub/GitLab?

And how do I recover when a YubiKey gets lost?

And how do I ...

Sure, I can do YubiKeys for myself with some amount of pain and a reasonable amount of money.

Once I start rolling secure access out to everybody in the company, suddenly it sucks. And someone spends all their time doing internal customer support for all the edge cases that nobody ever thinks about. This is fine if I have 10,000 employees and a huge IT staff--this is not so fine if I've got a couple dozen employees and no real IT staff.

That's what people like okta and auth0 (now bought by okta) charge so bloody much for. And why everybody basically defaults to Microsoft as an Identity Provider. etc.

Side note: Yes, I do hand YubiKeys out as trios--main use, backup use (you lost or destroyed your main one), and emergency use (oops--something is really wrong and the other two aren't working). And a non-trivial amount of services won't allow you to enroll multiple Yubikeys on the same account.

tialaramex
> And a non-trivial amount of services won't allow you to enroll multiple Yubikeys on the same account.

For WebAuthn (and its predecessor U2F) that "non-trivial" amount seems to be precisely AWS. The specification tells them to allow multiple devices to be enrolled but they don't do it.

bbarnett
I am scared to death of rust.

It appears that if one uses it, one becomes evangelized to it, spreads the word "Praise Rust!", and so forth.

Anything so evangelized is met with strong skepticism here.

INTPenis
What scares me about Rust is that people put so much trust in it. And part of that is because of what you mention, the hype in other words.

I don't follow this carefully but even I have heard of at least one Rust project that when audited failed miserably. Not because of memory safety but because the programmer had made a bunch of rookie mistakes that senior programmers might be better at.

So in other words, Rust's hype is going to lead to a lot of rewrites and a lot of new software being written in Rust. And much of that software will have simple programming errors that you can make in any language. So we're going to need a whole new wave of audits.

bbarnett
I recall a time at the grocery store, years ago. I wanted some sliced meat, but when I approached the counter a young woman was sweeping the floor.

Naturally, she was wearing gloves.

Seeing me, she grabbed the dustpan, threw away her sweepings, put the broom away, and was prepared to now serve me...

Still wearing the same gloves. Apparently magic gloves, for she was confused when I asked her to change them. She'd touched the broom, the dustpan, the floor, stuff in the dustpan, and the garbage. All within 20 seconds of me seeing her.

Proper procedure and an understanding of processes are far more effective than a misused tool.

Is rust mildly better than some languages? Maybe.

But it is not a balm for all issues, and as you say, replacing very well maintained codebases might result in suboptimal outcomes.

o8r3oFTZPE
From the Ars reference: "There are some steep hurdles to clear for an attack to be successful. A hacker would first have to steal a target's account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning-were it ever to happen in the wild-would likely be done only by a nation-state pursuing its highest-value targets."

"only by a nation-state"

This ignores the possibility that the company selling the solution could itself easily defeat the solution.

Google, or another similarly-capitalised company that focuses on computers, could easily succeed in attacking these "user protections".

Further, anyone could potentially hire them to assist. What is to stop this, if secrecy is preserved?

We know, for example, that Big Tech companies are motivated by money above all else, and, by-and-large, their revenue does not come from users. It comes from the ability to see into users' lives. Payments made by users for security keys are all but irrelevant when juxtaposed against advertising services revenue derived from personal data mining.

Google has an interest in putting users' minds at ease about the incredible security issues with computers connected to the internet 24/7. The last thing Google wants is for users to be more skeptical of using computers for personal matters that give insight to advertisers.

The comment on that Ars page is more realistic than the article.

Few people have a "nation-state" threat model, but many, many people have the "paying client of Big Tech" threat model.

tialaramex
> This ignores the possibility that the company selling the solution could itself easily defeat the solution.

How do you imagine this would work?

The "solution" here is just a cheap device that does mathematics. It's very clever mathematics but it's just mathematics.

I think you're imagining a lot of moving parts to the "solution" that don't exist.

PeterisP
A key part of various such tamper-resistant devices is an embedded secret that's very difficult/expensive to extract. However, the manufacturer (i.e. "the company selling the solution") may know the embedded secret without extracting it. Because of that, trust in the solution provider is essential even if it's just simple math.

For a practical illustration, see the 2011 attack on RSA (the company) that allowed attackers access to secret values used in generating RSA's SecurID tokens (essentially, cheap devices that do mathematics) allowing them to potentially clone previously issued tokens. Here's one article about the case - https://www.wired.com/story/the-full-story-of-the-stunning-r...

tialaramex
That's true. Yubico provide a way to just pick a new random number. Because these are typically just AES keys, just "picking a random number" is good enough, it's not going to "pick wrong".

If you worry about this attack you definitely should perform a reset after purchasing the device. This is labelled "reset" because it invalidates all your credentials, the credentials you enrolled depend on that secret, and so if you pick a random new secret obviously those credentials stop working. So, it won't make sense to do this randomly while owning it, but doing it once when you buy the device can't hurt anything.

However, although I agree it would be possible for an adversary who makes keys to just remember all the factory set secrets inside them, I will note an important practical difference from RSA SecurID:

For SecurID those are actually shared secrets. It's morally equivalent to TOTP. To authenticate you, the other party needs to know the secret which is baked inside your SecurID. So RSA's rationale was that if they remember the secret they can help their customers (the corporation that ordered 5000 SecurID dongles, I still have some laying around) when they invariably manage to lose their copy of that secret.

Whereas for a FIDO token, that secret is not shared. Each key needs a secret, but nobody else has a legitimate purpose for knowing it. So whereas RSA were arguably just foolish for keeping these keys, they had a reason - if you found out that say, Yubico kept the secrets that's a red flag, they have no reason to do that except malevolence.

o8r3oFTZPE
All I am suggesting is that "hacker" as used by the Ars author could be a company, or backed by a company, and not necessarily a "nation-state". That is not far-fetched at all, IMO. The article makes it sound like "nation-states" are the only folks who could defeat the protection or would even have an interest in doing so. As the comment on the Ars page points out, that is ridiculous.

Assuming "hacker" could be a company what company would have such a motivation and resources to spy on people. The NSO's of the world, sure. Anyone else. Companies have better things to do than spy on people, right. Not anymore.

What about a company whose business is personal data mining, who goes so far as to sniff people's residential wifi (they lied about it at first when they got caught), collect audio via a "smart" thermostat (Nest), collect data from an "activity tracker" (FitBit), a "smartphone OS", a search engine, e-mail service, web analytics, etc., etc. Need I go on? I could fill up an entire page with all the different Google acquisitions and ways they are mining people's data.

Why are security keys any different? 9 out of 10 things Google sells or gives away are designed to facilitate data collection, but I guess this is the 1 in 10. "Two-factor authentication" has already been abused by Facebook and Twitter, where they were caught using the data for advertising, but I suppose Google is different.

These companies want personal data. With the exception of Apple, they do not stay in business by selling physical products. Collecting data is what they do and they spend enormous amounts of time and effort doing it.

"That's all I know."

tialaramex
> That is not far-fetched at all, IMO.

The problem with your neat little model of the world is that it doesn't provide you with actionable predictions. Everything is a massive global conspiracy against you, nothing can be trusted, everybody is in on it, and so you can dismiss everything as just part of the charade, which feels good for a few moments, but still doesn't actually help you make any decisions at all.

> "Two-factor authentication" has already been abused by Facebook and Twitter where they were caught using the data for advertising

Right, I mean, if somebody really wanted to help provide working two factor authentication, they'd have to invent a device that offered phishing-proof authentication, didn't rely on sharing "secrets" that might be stolen by hackers, and all while not giving up any personal information and ensuring the user's identity can't be linked from one site to another. That device would look exactly like the FIDO Security Keys we're talking about... huh.

Actually no, if they weren't really part of a massive conspiracy against o8r3oFTZPE there would be one further thing, instead of only being from Google you could just buy these Security Keys from anybody and they'd work. Oh right.

staticassertion
It's just tinfoil hat nonsense, it's not worth responding to.
o8r3oFTZPE
But you just did. :)
o8r3oFTZPE
They want more data/information. Today it is two factors. Tomorrow it will be three. You love your Big Tech. I get it.

But personal attacks are not cool. Keep it civil, please.

tialaramex
In what sense is it "more data" ? Did you know you can hook up a CRNG and just get endless streams of such "data" for almost nothing? If "they" just want "more data" they could do that all they like.

Earlier you gave the example of Facebook harvesting people's phone numbers. That's not just data that's information. But a Yubikey doesn't know your phone number, how much you weigh, where you live, what type of beer you drink... no information at all.

The genius thing about the FIDO Security Key design is figuring out how to make "Are you still you?" a question we can answer. Notice that it can't answer a question like "Who is this?". Your Yubikey has no idea that you're o8r3oFTZPE. But it does know it is still itself and it can prove that when prompted to do so.

And you might think, "Aha, but it can track me". Nope. It's a passive object unless activated, and it also doesn't have any coherent identity of its own, so sites can't even compare notes on who enrolled to discover that the same Yubikey was used. Your Yubikey can tell when it's being asked if it is still itself, but it needs a secret to do that and nobody else has the secret. All they can do is ask that narrow question, "Are you still you?".

Which of course is very narrowly the exact authentication problem we wanted to solve.
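A minimal sketch of that "Are you still you?" exchange, assuming the ed25519-dalek (v2, with its rand_core feature) and rand crates; real FIDO2/WebAuthn adds origin binding, counters and attestation on top of this, so treat it as the bare idea only:

    use ed25519_dalek::{Signer, SigningKey, Verifier, VerifyingKey};
    use rand::rngs::OsRng;

    fn main() {
        // Enrollment: the authenticator mints a keypair for this site only
        // and hands back just the public half; nothing about the user leaks.
        let per_site_key: SigningKey = SigningKey::generate(&mut OsRng);
        let registered: VerifyingKey = per_site_key.verifying_key();

        // Login: the site sends a fresh random challenge...
        let challenge = b"fresh-random-nonce-from-the-server";

        // ...the authenticator signs it...
        let signature = per_site_key.sign(challenge);

        // ...and the site can only confirm "this is the key I enrolled".
        assert!(registered.verify(challenge, &signature).is_ok());
    }

Because a distinct keypair is minted per site, two sites comparing their enrolled public keys learn nothing about whether they belong to the same person.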

o8r3oFTZPE
Who created that "problem we are trying to solve". It wasn't the user.

If the solution to the "problem" is giving increasingly more personal information to a tech company, that's not a great solution, IMO. Arguably, from the user's perspective, it's creating a new problem.

Most users are not going to purchase YubiKeys. It's not a matter of whether I use one, what I am concerned about is what other users are being coaxed into doing.

There are many problems with "authentication methods" but the one I'm referring to is giving escalating amounts of personal information to tech companies, even if it's under the guise "for the purpose of authentication" or argued to be a fair exchange for "free services". Obviously tech companies love "authenticating" users as it signals "real" ad targets.

The "tech" industry is riddled with conflicts of interest. That is a problem they are not even attempting to solve. Perhaps regulation is going to solve it for them.

tialaramex
> Who created that "problem we are trying to solve". It wasn't the user.

Sure it was, if you didn't want this problem you'd be fine with remaining anonymous and receiving only services that can be granted anonymously. I understand reading Hacker News doesn't require an account, and yet you've got one and are writing replies. So yes, you created the problem.

Now, Hacker News went with 1970s "password" authentication. Maybe you're good at memorising a separate long random password for each site, and so this doesn't really leak any information, it's just data. Lots of users seem to provide the names of pets, favourite sports teams, cultural icons; it's a bit of a mish-mash but certainly information of a sort.

In contrast, even though you keep insisting otherwise, Security Keys don't give "escalating amounts of personal information to tech companies" but instead no information at all, just that useful answer to the question, "Are you still you?".

o8r3oFTZPE
I think you misunderstood. I am not insisting anything about security keys (physical tokens) requiring escalating amounts of personal information. I am referring to "two-factor authentication" as it is promoted by "tech" companies (give us your mobile number so you can use our website or "increase your security"). Call me a tinfoil hat if you like, but I am skeptical^1 when the "solution" to "the problem of authentication" is giving ever-increasing amounts of information to Big Tech.

Regardless of intent, it seems very much in the spirit of trying to solve a complex problem by adding more complexity, a common theme I see in "tech".

There is nothing inherently wrong with the idea of "multi-factor authentication" (as I recall some customer-facing organisations were using physical tokens long before "Web 2.0") however in practice this concept is being (ab)used by web-based "tech" companies whose businesses rely on mining personal data. The fortuitous result for them being intake of more data/information relating to the lives of users, the obvious examples being email addresses and mobile phone numbers.

1. This is not an issue I came up with in a vacuum. It is shared by others. I once heard an "expert" interviewed on the subject of privacy describe exactly this issue.

tialaramex
> I think you misunderstood. I am not insisting anything about security keys

And yet here's a thread in which you did exactly that.

o8r3oFTZPE
"In contrast, even though you keep insisting otherwise, Security Keys don't give "escalating amounts of personal information to tech companies" but instead no information at all, just that useful answer to the question, "Are you still you?"."

No, I am responding to the above assertion that I have insisted security keys give escalating amounts of personal information to "tech" companies.

This is incorrect. Most users do not have physical security tokens. But "tech" companies promote authentication without using physical tokens: 2FA using a mobile number.

What I am "insisting" is that "two-factor authentication" as promoted by tech campanies ("give us your mobile number because ...") has resulted in giving increasing amounts of personal information to tech companies. It has been misused; Facebook and Twitter were both caught using phone numbers for advertising purposes. There was recently a massive leak of something like 550 million Facebook accounts, many including telephone numbers. How many of those numbers were submitted to Facebook under the belief they were needed for "authentication" and "security". I am also suggesting that this "multi-factor authentication" could potentially increase to more than two factors. Thus, users would be giving increasing amounts of personal information to "tech" companies "for the purposes of authentication". That creates additional risk and, as we have seen, the information has in fact been misused. This is not an idea I came up with; others have stated it publicly.

tialaramex
Whilst you're clearly much more comfortable with your "Facebook are bad" line, the problem is that this isn't the thread about how Facebook are good actually, this thread was about your completely bogus claim about Security Keys:

> This ignores the possibility that the company selling the solution could itself easily defeat the solution.

I'm sure you really are worried about how "Facebook are bad", and you feel like you need to insert that into many conversations about other things, but "Facebook are bad" is irrelevant here.

You made a bogus claim about Security Keys. These bogus claims help to validate people's feeling that they're helpless and, eh, they might as well put up with "Facebook are bad" because evidently there isn't anything they can really do about it.

So your problem is, which is more important, to take every opportunity to surface the message you care about "Facebook are bad" in contexts where it wasn't actually relevant, or to accept that hey, actually you're wrong about a lot of things, and some of those things actually reduce the threat from Facebook ? I can't help you make that choice.

staticassertion
Yes, if you don't trust Google don't use a key from Google. Is that what you're trying to say? If your threat model is Google don't buy your key from Google. Do I think that's probably a stupid waste of thought? Yes, I do. But it's totally legitimate if that's your threat model.
o8r3oFTZPE
"But it's totally legitimate if that's your threat model."

Not mine. I have no plans to purchase a security key from Google. I have no threat model.

Nothing in the comment you replied to mentioned "trust" but since you raised the issue I did a search. It seems there are actually people commenting online who claim they do not trust Google; this has been going on for years. Can you believe it. Their CEO has called it out multiple times.^1 "[S]tupid waste of thought", as you call it. (That's not what I would call it.) It's everywhere.^2 The message to support.google and the response are quite entertaining.

1. For example, https://web.archive.org/web/20160601234401/http://allthingsd...

2.

https://support.google.com/googlenest/thread/14123369/what-i...

https://www.inc.com/jason-aten/google-is-absolutely-listenin...

https://www.consumerwatchdog.org/blog/people-dont-trust-goog...

https://www.wnd.com/2015/03/i-dont-trust-google-nor-should-y...

https://www.theguardian.com/technology/2020/jan/03/google-ex...

https://www.forbes.com/sites/kateoflahertyuk/2018/10/10/this...

ignoramous
I'll conclude with a philosophical note about software design: Assessing the security of software via the question "can we find any security flaws in it?" is like assessing the structure of a bridge by asking the question "has it collapsed yet?" -- it is the most important question, to be certain, but it also profoundly misses the point. Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera); secure software should likewise be designed to tolerate failures within individual components. Using a MAC to make sure that an attacker cannot exploit a bug (or a side channel) in encryption code is an example of this approach: If everything works as designed, this adds nothing to the security of the system; but in the real world where components fail, it can mean the difference between being compromised or not. The concept of "security in depth" is not new to network administrators; but it's time for software engineers to start applying the same engineering principles within individual applications as well.

-cperciva, http://www.daemonology.net/blog/2009-06-24-encrypt-then-mac....
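A minimal sketch of that margin-of-safety idea in Rust, assuming the hmac and sha2 crates; the cipher is a hypothetical stream_cipher placeholder, since the point is only the ordering: the tag is verified before any decryption code touches attacker-controlled bytes.

    use hmac::{Hmac, Mac};
    use sha2::Sha256;

    type HmacSha256 = Hmac<Sha256>;

    // Hypothetical stand-in for whatever cipher the application uses.
    fn stream_cipher(_key: &[u8], _data: &mut [u8]) { /* ... */ }

    fn decrypt_verified(
        mac_key: &[u8],
        enc_key: &[u8],
        ciphertext: &[u8],
        tag: &[u8],
    ) -> Result<Vec<u8>, &'static str> {
        // 1. Check the MAC over the ciphertext first (constant-time comparison).
        let mut mac = HmacSha256::new_from_slice(mac_key).map_err(|_| "bad key")?;
        mac.update(ciphertext);
        mac.verify_slice(tag).map_err(|_| "authentication failed")?;

        // 2. Only now does the (possibly buggy) decryption code see the input.
        let mut plaintext = ciphertext.to_vec();
        stream_cipher(enc_key, &mut plaintext);
        Ok(plaintext)
    }

    fn main() {
        // Dummy wiring; a real system derives independent keys for each purpose.
        let (mac_key, enc_key) = ([0u8; 32], [1u8; 32]);
        let mut mac = HmacSha256::new_from_slice(&mac_key).unwrap();
        mac.update(b"ciphertext");
        let tag = mac.finalize().into_bytes();
        assert!(decrypt_verified(&mac_key, &enc_key, b"ciphertext", &tag).is_ok());
    }

If everything else is correct the MAC adds nothing; if the decryption path has a bug or an oracle, forged or tampered ciphertext is rejected before it can ever reach it.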

hikihiki123
Also, pointing your (or anyone's) finger at the already overworked and exploited engineers in many countries is just abysmal, in my opinion. It's not an engineer's decision what the deadline for finishing a piece of software is. Countless companies are controlled by business people, so point your finger at them, because they are the ones who don't give a flying f*&% whether the software is secure or not. We engineers are well aware of both the need for security and its implications. So the security community must stop this kind of name-shaming, now and forever, in my opinion.
jl2718
I’ve found that software, uniquely among the engineering disciplines, is managed as a manufacturing line rather than a creative art. In the other disciplines, the difference between these two phases of a project is much more explicit.
hikihiki123
I think this quote is fundamentally wrong and intentionally misleading. The equivalent question would be "can we find any cracks in it?", which makes complete sense, and is in fact frequently asked during inspections. The security-flaw question should be asked in the same vein.
User23
This is one of the best examples I’ve ever seen supporting the claim that analogies aren’t reasoning.

Edit: apparently elaboration is in order. In mechanical engineering one deals with smooth functions. A small error results in a small propensity for failure. Software meanwhile is discrete, so a small error can result in a disproportionately large failure. Indeed getting a thousandth of a percent of a program wrong could cause total failure. No bridge ever collapsed because the engineer got a thousandth of a percent of the building material’s properties wrong. In software the margin of error is literally undefined behavior.

sameerds
From the GP comment:

> Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera)

I am not a mechanical engineer, but none of these examples look like smooth functions to me. I would expect that an unexpectedly high wind can cause your structure to move in a way that is not covered by your model at all, at which point it could just show a sudden non-linear response to the event.

User23
They are smooth in that they are continuously differentiable.
AussieWog93
>No bridge ever collapsed because the engineer got a thousandth of a percent of the building material’s properties wrong.

Perhaps not with building properties, but very small errors can cause catastrophic failure.

One of the most famous ones would be the Hyatt Regency collapse, where a contractor accidentally doubled the load on a walkway because he used two shorter beams attached to the top and bottom of a slab, rather than a longer beam that passed through it.

https://en.m.wikipedia.org/wiki/Hyatt_Regency_walkway_collap...

In electrical engineering, it's very common to have ICs that function as a microcontroller at 5.5V, and an egg cooker at 5.6V.

Microsoft lost hundreds of millions of dollars repairing the original Xbox 360 because the solder on the GPU cracked under thermal stress.

It's definitely not to the same extreme as software, but tiny errors do have catastrophic consequences in physical systems too.

3pt14159
I've worked as a structural engineer (EIT) on bridges and buildings in Canada before getting bored and moving back into software.

There are major differences between designing bridges and crafting code. So many, in fact, that it is difficult to even know where to start. But with that proviso, I think the concept of safety versus the concept of security is one that many people conflate. We design bridges to be safe against the elements. Sure, there are 1000-year storms, but we know what we're designing for, and it is fundamentally an economic activity. We design these things to fail at some regularity because to do otherwise would require an over-investment of resources.

Security isn't like safety because the attack scales up with the value of compromising the target. For example, when someone starts a new social network and hashes passwords the strength of their algorithm may be just fine, but once they have millions of users it may become worthwhile for attackers to invest in rainbow tables or other means to thwart their salted hash.

Security is an arms race. That's why we're having so much trouble securing these systems. A flood doesn't care how strong your bridge is, or where it is most vulnerable.
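To spell out the salted-hash example, here is a minimal Rust sketch assuming the sha2 crate (a real system would use a memory-hard KDF such as Argon2, not a single SHA-256 pass). The per-user salt is what forces the attacker's precomputation cost to scale with the number of targets, which is exactly the arms-race dynamic described above.

    use sha2::{Digest, Sha256};

    // Illustration only: real password storage should use a memory-hard KDF.
    fn hash_password(salt: &[u8], password: &str) -> Vec<u8> {
        let mut hasher = Sha256::new();
        hasher.update(salt);                 // unique random salt per user
        hasher.update(password.as_bytes());
        hasher.finalize().to_vec()
    }

    fn main() {
        // Two users with the same password produce different records, so a
        // single precomputed (rainbow) table cannot cover both; the attacker
        // must pay per salt, i.e. per account worth attacking.
        let a = hash_password(b"salt-for-alice", "hunter2");
        let b = hash_password(b"salt-for-bob", "hunter2");
        assert_ne!(a, b);
    }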

rapind
So it's like building a bridge... that needs to constantly withstand thousands of anonymous, usually untraceable, and always evolving terrorist attacks.
rocqua
But it's also a case where "perfect" exists. A case where you can, in principle, have perfect information about the internals of your bridge at any point. A case where you can, in theory, design the bridge to handle an infinite load from above.

In software, you can spec the behavior of your program. And then it is possible to code to that exact spec. It is also possible, with encryption and stuff, to write specs that are safe even when malicious parties have control over certain parts.

This is not to say that writing such specs is easy, nor that coding to an exact spec is easy. Heck, I would even doubt that it is possible to do either thing consistently. My point is, the challenge is a lot harder. But the tools available are a lot stronger.

It's not a lost cause just because the challenge is so much harder.

perl4ever
>But it's also a case where "perfect" exists

Only where you draw a boundary around your system and refuse to acknowledge any context beyond that.

mLuby
That kind of perfect is possible in math but not in software, which runs on physical machines and was written and verified by humans. It's like building your bridge inside a vacuum chamber with no entrances or exits—possible but not practical.
greyhair
Or climate change.
jcims
...in which the attackers have free access to copies of the bridge where they can silently test attack strategies millions of times per second for months or years on end.

The safety vs security distinction made above is fundamental. Developers are faced with solving an entire class of problems that is barely addressed by the rest of the engineering disciplines.

lisper
> where they can silently test attack strategies millions of times per second for months or years on end

Remotely, anonymously, at virtually no risk to themselves.

TeMPOraL
And then, when they finally perfect their technique, they can just sell or give away the plan to other people in an instant, who can then put it into practice almost for free, against any compatible bridge they like.
vlovich123
I agree that safety & security are frequently conflated, but I don't think the important aspect is that there's no analogy between IT & construction.

IT safety = construction safety. What kind of cracks/bumps does your bridge/building have, can it handle increase in car volume over time, lots of new appliances put extra load on the foundation etc. IT safety is very similar in that way.

IT security = physical infrastructure security. Is your construction safe from active malicious attacks/vandalism? Generally we give up on vandalism from a physical security perspective in cities - spray paint tagging is pretty much everywhere. Similarly, crime is generally a problem that's not solvable & we try to manage. There's also large scale terrorist attacks that can & do happen from time to time.

There are of course many nuanced differences because no analogy is perfect, but I think the main tangible difference is that one is in the physical space while the other is in the virtual space. Virtual space doesn't operate the same way because the limits are different. Attackers can easily maintain anonymity, attackers can replicate an attack easily without additional effort/cost on their part, attackers can purchase "blueprints" for an attack that are basically the same thing as the attack itself, attacks can be carried out at a distance, & there are many strong financial motives for carrying out the attack. The financial motive is particularly important because it funds the ever-growing arms race between offense & defense. In the physical space this kind of race is only visible in nation states, whereas in the virtual space both nation states & private actors participate in this race.

Similarly, that's why IT development is a bit different from construction. Changing a blueprint in virtual space is nearly identical to changing the actual "building" itself, & the cost is several orders of magnitude lower than it would be in physical space. Larger software projects are cheaper because we can build reusable components that have tests that ensure certain behaviors of the code, & then we rerun them in various environments to make sure our assumptions still hold. We can also more easily simulate behavior in the real world before we actually ship to production. In the physical space you have to do that testing upfront to qualify a part. Then if you need a new part, you're sharing less of the design, whereas in virtual space you can share largely the same design (or even the exact same design) across very different environments. & there's no simulation - you build & patch, but you generally don't change your foundation once you've built half the building.

Beldin
Aside: The distinction between safety and security I know:

- safety is "the system cannot harm the environment"

- security is the inverse: "the environment cannot harm the system"

To me, your distinction has to do with the particular attacker model - both sides are security (under these definitions).

jl6
I wonder how this distinction plays out in languages that use the same word for safety and security, e.g. German and Portuguese.
allendoerfer
You would use "protection" (Schutz) to make this distinction. Also German verbs can have many suffixes, which often help with the direction of an action and thereby changing the meaning (e.g. sichern, absichern, besichern, versichern).
kwhitefoot
suffixes -> prefixes
TeMPOraL
That's an interesting distinction, but I think GP meant something else - and I'm willing to agree with their view:

- Safety is a PvE game[0] - your system gets "attacked" by non-sentient factors, like weather, animals, or people having an accident. The strength of an attack can be estimated as a distribution, and that estimate remains fixed (or at least changes predictably) over time. Floods don't get monotonically stronger over the years[1], animals don't grow razor-sharp titanium teeth, accidents don't become more peculiar over time.

- Security is a PvP game - your system is being attacked by other sentient beings, capable of both carefully planning and making decisions on the fly. The strength of the attack is unbounded, and roughly proportional to how much the attacker could gain from breaching your system. The set of attackers, the revenue[2] from an attack, the cost of performing it - all change over time, and you don't control it.

These two types of threats call for a completely different approach.

Most physical engineering systems are predominantly concerned with safety - with PvE scenarios. Most software systems connected to the Internet are primarily concerned with security - PvP. A PvE scenario in software engineering is ensuring your intern can't accidentally delete the production database, or that you don't get state-changing API requests indexed by web crawlers, or that an operator clicking the mouse wrong won't irradiate their patient.

--

[0] - PvE = "Player vs Environment"; PvP = "Player vs Player".

[1] - Climate change notwithstanding; see: estimate changing predictably.

[2] - Broadly understood. It may not be about the money, but it can be still easily approximated in dollars.

Jul 08, 2021 · 1 points, 0 comments · submitted by nceqs3
While I have no information to share on this specific malware, here is the NSA's TAO Chief on what makes their jobs harder:

https://www.youtube.com/watch?v=bDJb8WOJYdA

The company I've founded is very much about providing better detection capabilities, but I'd say this is an oversimplification.

First of all, detection is methodologically bankrupt. There is almost no consensus on how detection should be done - only in the last 5 years have we even started to improve here.

In my opinion, detection of attackers, which is what the industry focuses on today, is a huge waste of time and resources - it's the last step in the process that I would recommend.

I would personally say that detection should be staged as:

1. System inventory (can you attribute an IP or hostname to a device identity, a user, etc.?)

2. Policy enforcement (can you detect when policies change, or are violated?)

3. Unexpected behaviors - go to the people building systems - ask them what's expected, what isn't, and build rules for that, or even better, have them build and maintain the rules under your guidance.

4. Attacker behaviors - finally, spend some time building rules for attacker behaviors.

Most organizations skip straight to 4, and then you have a team of defenders who have no idea how the network they're supposed to detect is supposed to actually work. This is throwing away the greatest advantage defenders have - that they know where the attack will take place, and they know all of the stakeholders for those environments.
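A toy sketch of that ordering in Rust (the event shape, rule names, and strings here are all hypothetical): an event only falls through to the expensive attacker-behavior stage after inventory, policy, and expected-behavior checks have had their say.

    use std::collections::{HashMap, HashSet};

    // Hypothetical, highly simplified event: which device talked to which port.
    struct Event<'a> {
        src_ip: &'a str,
        dst_port: u16,
    }

    fn triage<'a>(
        inventory: &HashMap<&'a str, &'a str>, // 1. IP -> owning system/team
        allowed_ports: &HashSet<u16>,          // 2. policy: permitted ports
        expected: &HashSet<(&'a str, u16)>,    // 3. behaviors the owners expect
        event: &Event<'a>,
    ) -> &'static str {
        if !inventory.contains_key(event.src_ip) {
            return "1: unknown device - fix inventory before hunting attackers";
        }
        if !allowed_ports.contains(&event.dst_port) {
            return "2: policy violation";
        }
        if !expected.contains(&(event.src_ip, event.dst_port)) {
            return "3: unexpected behavior - ask the owning team";
        }
        "4: in scope for attacker-behavior rules"
    }

    fn main() {
        let inventory = HashMap::from([("10.0.0.5", "billing-db")]);
        let allowed_ports = HashSet::from([443u16]);
        let expected = HashSet::from([("10.0.0.5", 443u16)]);

        let event = Event { src_ip: "10.0.0.9", dst_port: 443 };
        println!("{}", triage(&inventory, &allowed_ports, &expected, &event));
    }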

Here's the chief of the NSA Tailored Access Operations saying this at USENIX four years ago.

https://youtu.be/bDJb8WOJYdA?t=83

"If you really want to protect your network you really have to know your network"

None of this is as simple as "detection" - it means working with the policy teams, with your infrastructure teams, with your product teams, to better understand your environment.

The NSA invests heavily in its defensive role, providing guidance and awareness training both to the US privately and globally through open recommendations.

It doesn't matter. We've invested so heavily in offense that we can tell people how to defend themselves and we'll still own them.

Here's one of my favorite talks,

NSA TAO Chief on Disrupting Nation State Hackers

https://www.youtube.com/watch?v=bDJb8WOJYdA

This is the NSA Chief of Tailored Access Operations telling people how to defend themselves against the NSA (and other similar capabilities organizations), and it's all really valuable. None of it is fancy ML 0day detection or whatever, it's just about understanding your network better than an attacker can.

rmrfstar
TAO's biggest fear is an out of band network tap monitored by a curious sysadmin.
Feb 14, 2017 · 4 points, 1 comments · submitted by vanburen
saycheese
>> "Rob Joyce, Chief, Tailored Access Operations, National Security Agency: From his role as the Chief of NSA's Tailored Access Operation, home of the hackers at NSA, Mr. Joyce will talk about the security practices and capabilities that most effectively frustrate people seeking to exploit networks."

BIO, Slides, etc: https://www.usenix.org/conference/enigma2016/conference-prog...

Feb 02, 2016 · 3 points, 0 comments · submitted by suhitg
Jan 29, 2016 · 8 points, 0 comments · submitted by cyberviewer
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.