HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
OpenBSD was Right - Linux Kernel Developer Greg Kroah-Hartman

TFiR · Youtube · 265 HN points · 2 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention TFiR's video "OpenBSD was Right - Linux Kernel Developer Greg Kroah-Hartman".
Youtube Summary

Discussing the state of security on Linux, Greg credited the OpenBSD community for being right about their ideology of security over performance. This is just a clip, watch the full interview here: https://www.tfir.io/2019/09/01/lets-talk-to-linux-kernel-developer-greg-kroah-hartman-open-source-summit-2019/

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
I mean, Greg Kroah-Hartman of Linux agrees[1,2] as far as Spectre etc. go; more generally there seems to be almost no microarchitectural isolation between HT threads on x86 implementations, so the leaks probably won’t ever really cease there.

The performance benefits of HT are also not at all universal. I mean, they do exist in some cases—if you’re running closely related scalar-compute- (or, better, dependency-) bound tasks that bang on the same small piece of memory, or if you run several things that are just bad at loading the CPU so HT-induced contention doesn’t matter. (The latter scenario was obviously much more important at the time HT was introduced, when neither people nor compilers were particularly good at optimizing for superscalars.) Or, though it’s not a performance benefit, it can help if you have few cores and the OS scheduler is bad at letting interactive tasks run in the presence of long-running batch ones or heavy swapping. But properly optimized multithreaded data crunching like video coding or compression doesn’t really run faster when given extra threads to run on HT “cores” rather than just leaving them disabled or idle.

[1] https://www.youtube.com/watch?v=jI3YE3Jlgw8

[2] https://news.ycombinator.com/item?id=20865492

sph
Would it make sense to disable mitigations and hyperthreading on a workstation?

Perhaps we're getting to a point where having both disabled is faster than having both enabled (and suffering the mitigation penalties).
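
For anyone weighing that trade-off, both knobs are visible from userspace on a reasonably recent Linux kernel: SMT state lives under /sys/devices/system/cpu/smt/ and per-vulnerability mitigation status under /sys/devices/system/cpu/vulnerabilities/, while the actual switches are the nosmt and mitigations=off boot parameters. A minimal sketch (paths assumed present; older kernels may not expose them) to see where a machine currently stands:

    #!/usr/bin/env python3
    """Report current SMT state and mitigation status on Linux.
    Assumes a kernel recent enough to expose /sys/devices/system/cpu/smt/
    and /sys/devices/system/cpu/vulnerabilities/."""
    from pathlib import Path

    def read(p: Path) -> str:
        try:
            return p.read_text().strip()
        except OSError:
            return "not exposed by this kernel"

    smt = Path("/sys/devices/system/cpu/smt")
    print("SMT active: ", read(smt / "active"))   # "1" if sibling threads are online
    print("SMT control:", read(smt / "control"))  # on / off / forceoff / notsupported

    vulns = Path("/sys/devices/system/cpu/vulnerabilities")
    if vulns.is_dir():
        for f in sorted(vulns.iterdir()):          # e.g. spectre_v2, l1tf, mds, ...
            print(f"{f.name:20} {read(f)}")

Booting with mitigations=off and SMT left on is the all-performance end of that spectrum; nosmt plus the default mitigations is the conservative end.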

sillystuff
This is interesting. The mitigations have had a major impact. Especially with the recent loss of retpoline[1] on older AMD/Intel. Maybe enough that there isn't an advantage anymore to enabling SMT on some processor families?

At an old job, we disabled SMT on our hypervisor hosts in an effort to mitigate some of the security issues before patches were available. We had to roll back. Without SMT, the hosts struggled to keep up with load. If I had to quantify it, I'd estimate that we lost, at least, 30% performance by disabling SMT (probably more). Those servers were replaced before I left, but I'd be interested to compare disabled SMT to enabled SMT with all the current mitigations in place.

[1] https://www.makeuseof.com/new-spectre-vulnerabilities-amd-in...

Indeed, the OpenBSD guys said early on that hyperthreading should just be disabled for security reasons. It took a while for everyone else to catch up, but see Greg KH's comments that if users are untrusted, one should do so: https://www.youtube.com/watch?v=jI3YE3Jlgw8
Sep 03, 2019 · 265 points, 268 comments · submitted by rodrigo975
robmusial
For those that can't watch the video, Greg KH says that OpenBSD was right to disable hyper-threading earlier than Linux in response to Spectre and Meltdown, and now Linux disables it too.

He also caveats it by saying they were right for "a little bit of the wrong reasons" but at least in this clip doesn't expand on what he meant by that or what those wrong reasons were or why they were wrong.

dooglius
Why should the kernel be making this decision at all, rather than leaving it up to the distro or a command-line boot argument?

EDIT: can't see the video so if it is just a default rather than a forced disablement that's fine.

Canada
defaults matter
admax88q
Because sane defaults are important.
Crinus
Isn't this a "sane" default only in specific contexts though? (VMs). For a desktop PC that almost always runs a single heavy task (games, rendering, video encoding, etc) hyperthreading can be a day and night difference.
x0x0
Your desktop PC is regularly running largely unverified code, some of it potentially hostile: all the javascript in your browser.
cosarara
Browsers have mitigations in place, don't they? Aren't they enough, at least on paper?
SAI_Peregrinus
Not if hyperthreading is enabled. The point is that it's a hardware flaw, so any software mitigations can be bypassed.
cosarara
Is there a PoC exploit for current firefox that would work if I set mitigations=off and enable HT on linux, for instance?
mikedelfino
I can't provide proper citations right now, but there have been working exploits of some of the latest CPU bugs using just JavaScript in a browser, as if we were browsing regular websites.
cosarara
If you could provide citations at some other time that would be great!
diegoperini
Lol,

amount of untrusted dll injections to mod, by default, unmoddable games; 3rd party VR tools and drivers; video and audio multiplexer drivers; compatibility drivers for normally unsupported console cameras/controllers etc; ultra demanding last gen console emulators; macro tools that are essentially keyloggers; anti-cheat daemons running as admin to read memory of other processes;

Windows gaming is wild! I literally sell my soul to gain a few more fps or immersion. Browser js looks almost too innocent in this whole mess.

favorited
Except most users aren't modding their PC games. Every user runs untrusted Javascript.

If someone is patching DLLs they can figure out how to enable hyperthreading.

phaer
Yes, "sane" is context-specific, but it's useful to err on the side of more security and less performance, rather than the other way round.
beatgammit
Exactly. If you need more performance, there's always something you can tune.

It would be especially awesome if this could be enabled at runtime so it could be turned on when doing specific tasks, like rendering or compiling something big. Even better, it would be nice to lock that to specific processes (i.e. enable it on a few cores and lock privileged processes to those cores).
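
On Linux, both halves of that wish exist in rough form already, with caveats: recent kernels let root flip SMT at runtime by writing to /sys/devices/system/cpu/smt/control, and a process can be confined to chosen cores via the sched_setaffinity interface, though there is no supported way to hand the sibling threads to some processes while hiding them from others short of cpusets. A sketch of those two knobs (the CPU numbers are illustrative, the control file is assumed writable, and the write needs root):

    #!/usr/bin/env python3
    """Toggle SMT at runtime and pin a process to specific CPUs (Linux).
    Assumes /sys/devices/system/cpu/smt/control is writable (root, recent kernel);
    the CPU numbers below are illustrative only."""
    import os
    from pathlib import Path

    SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")

    def set_smt(enabled: bool) -> None:
        # Global toggle: "on" brings sibling threads online, "off" takes them down.
        SMT_CONTROL.write_text("on" if enabled else "off")

    def pin(pid: int, cpus: set[int]) -> None:
        # Restrict a process (and its future children) to the given logical CPUs.
        os.sched_setaffinity(pid, cpus)

    if __name__ == "__main__":
        set_smt(True)                 # e.g. just before a long render or compile
        pin(os.getpid(), {0, 1})      # keep this process on CPUs 0 and 1 only
        print("current affinity:", os.sched_getaffinity(0))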

AndrewUnmuted
HT doesn't actually make video encoding go any faster. It just allows the system to remain more stable while performing some other task at the same time. You really should not be running other tasks on your system while encoding video; your encoder will need all the resources it can get.
Wowfunhappy
Uh, are you sure of that? My experience has been that x264, for instance, benefits greatly from hyperthreading.

Hyperthreading allows you to queue up additional instructions (in a different thread) that the executor can switch to when it would otherwise be just waiting for the next instruction in the primary thread.

AndrewUnmuted
You are correct that x264 does benefit from hyperthreading. According to the devs, the speed increase is about 20% until you are on the veryfast or ultrafast presets, which at this point is usually bottlenecked by decode speed, and not the speed of the encode.

In making my previous claim, I was limiting the concept of 'encoding' to the more old-school definition: lossless compression of media bytestreams. This would include things like HuffYUV, ProRes, etc. But to be fair, these days it is quite likely that even a few of the newer intermediate/mezzanine codecs benefit from hyperthreading. I'd edit my post to clarify but the edit window has passed.
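
For anyone who wants to reproduce that kind of number on their own machine, one rough way is to time the same encode restricted to one logical CPU per physical core (approximating hyperthreading off) versus all logical CPUs. The sketch below reads sibling lists from sysfs and assumes an x264 binary plus a test.y4m input exist; both are placeholders, and the actual speedup will depend on the preset and source material:

    #!/usr/bin/env python3
    """Time an x264 encode on one hyperthread per core vs. all logical CPUs.
    Assumes Linux sysfs topology files, the taskset and x264 binaries, and an
    input file test.y4m -- all illustrative placeholders."""
    import subprocess, time
    from pathlib import Path

    cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
                  key=lambda p: int(p.name[3:]))

    def one_thread_per_core() -> set[int]:
        # The first entry of each thread_siblings_list is one hyperthread per core.
        chosen = set()
        for cpu in cpus:
            first = (cpu / "topology/thread_siblings_list").read_text()
            chosen.add(int(first.replace("-", ",").split(",")[0]))
        return chosen

    def timed_encode(cpu_set: set[int]) -> float:
        cpu_list = ",".join(map(str, sorted(cpu_set)))
        start = time.monotonic()
        subprocess.run(["taskset", "-c", cpu_list, "x264", "--preset", "slow",
                        "--threads", str(len(cpu_set)), "-o", "/dev/null", "test.y4m"],
                       check=True)
        return time.monotonic() - start

    physical = one_thread_per_core()
    logical = {int(c.name[3:]) for c in cpus}
    print(f"{len(physical)} threads (siblings ignored): {timed_encode(physical):.1f}s")
    print(f"{len(logical)} threads (siblings used):    {timed_encode(logical):.1f}s")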

sharpneli
One must consider the potential damage that can happen when the default is wrong, as it's nigh certain that people are going to be careless and not change it.

If a desktop PC has the wrong default, performance is bad. Still functional, though.

If the default is wrong in the VM case, we will read another headline about how N million customers of $company got their personal data leaked.

admax88q
I can't imagine a desktop PC that is unlikely to ever be used to browse the web. Spectre and similar mitigations, like disabling hyper-threading, are required to properly isolate JavaScript.

Defaults should be set for the lowest common denominator. A lay computer user shouldn't have to understand and make decisions upon things like this.

Gibbon1
My feeling is current processors are completely fast enough to do what most users need. And it's the programming community producing bloated, slow frameworks that is the problem. Layers of shit are covered over with yet another layer of shit. It needs to stop.
Crinus
I think we'll read such headlines regardless of that setting :-P.

I'm actually wondering if there should be some sort of premade "profiles" when it comes to default settings. Debian, for example, is used in a lot of contexts, so perhaps it'd make sense to ask what sort of usage you intend when installing and provide different defaults based on that (not just at initial installation time but also when installing some package; the default settings would be based on the profile you chose).

icebraining
Yes, it does make sense: https://help.ubuntu.com/community/Tasksel
robotresearcher
Defaults are what you get before profiles are applied, or, if you insist on a profile, it’s the default one.
beatgammit
It's pretty easy to change a default, so it's completely reasonable for a gaming distro (say, SteamOS) to have different defaults than a server distro (say, CentOS). Most Linux distributions don't ship the kernel in its default configuration anyway, so this would really only affect those who use the kernel directly from source.
rhabarba
As if the reasons were important!
melicerte
He also said: if you are running a system and you don't trust your users, you should definitely disable hyperthreading.

Hence I believe hyperthreading being disabled is the sane default.

Skunkleton
Are scripts loaded in a web browser considered "users"? If so, don't most people run systems which can't trust all of their users?
beatgammit
No, the person using the browser that loads the scripts is the user. The user has the option to disable JavaScript, so they are responsible for whatever they allow to be run on the machine.

If you don't trust yourself to use the web safely, then put limits on what a naughty script can do. One option is to disable hyperthreading.

Skunkleton
That's like saying that it's the user's fault if their box was compromised by Heartbleed. You never know what exploits are out there.
danarmak
Two of the three (current) top-level replies compare BSDs to Linux in general, but that really has nothing to do with whether you disable HT. Using Linux should not have stopped anyone from listening to Theo and disabling HT months ago. Your security authorities don't have to be your kernel developers.
josteink
> Using Linux should not have stopped anyone from listening to Theo and disabling HT months ago.

Iirc OpenBSD actively disabled it for you, making it the default.

Nobody else did that.

baybal2
In fact, the P3 cache timing attack was known to be "theoretically possible" back in the nineties, but was booed out of the conversation by "big name security analysts".
h2odragon
"I told you so" gets less satisfying each time you get to say it, yaknow? (* commiseration, no criticism)
danarmak
If that's true, it's wonderful and I applaud them for making this decision.

I haven't been following this conversation closely; is there any serious chance of Linux (some distributions, or the kernel upstream) disabling HT by default?

beefhash
> Iirc OpenBSD actively disabled it for you, making it the default.

They have, as of this commit[1] on 2018-06-19.

[1] https://marc.info/?l=openbsd-cvs&m=152943660103446&w=2

bagacrap
How many other decisions made by kernel developers are you supposed to second-guess because you know better?
ertian
On your desktop? Probably none.

On the image that you're pushing out to your fleet of production servers? Maybe a couple.

beatgammit
What about everyone else? Many companies only operate a handful of servers, and they often don't have the staff to know the kernel that intimately, so they rely on sane defaults. These companies are also not typically using their CPUs to the max, so disabling HT seems like a reasonable default.

If you know enough to know whether you should be using HT, you can enable it yourself.

ajsnigrutin
A wee bit off-topic, but if we look at VW/dieselgate, and the aftermath of it all, and the class actions, returns, refunds, etc., and Hyundai/Kia lying about gas mileage and people getting refunds for gas...

...when is something like this going to happen to intel?

We've bought CPUs with expectations of promised performance (like people did with emissions and gas mileage expectations), they messed up, and we get lower speed, and now no hyperthreading, and still no refunds? If I bought a 60" TV and the picture was only 50" with a black border around it, I'd return it immediately... why isn't there some action regarding CPUs?

Crinus
Isn't the reason that technically the CPU is still fast and it is the OS (that is outside Intel's control) that slows it down? And AFAIK all OSes can disable these mitigations (are they even a concern for personal computers, especially for cases like gaming?) so if you really want you can get your performance back.
ajsnigrutin
If you look at it this way, then the computer manufactures are to blame, and you should be refunded by them.

That's the same as buying a car from CarCompany(TM) with A/C and an Android Car touchscreen interface/radio/..., and an automatic Android update disables your A/C and changes the engine parameters so you have 20hp less... wouldn't you expect "them" to fix it? As a consumer, you shouldn't have to worry whether it's Google's fault or CarCompany's(TM) fault; you should be able to take it to the dealer and have them fix it or give you a refund, or at least 'do something'.

Crinus
I'm not sure I follow the reasoning or the example you gave. You can change and/or update the OS, both being outside of the computer manufacturer's control, and that is assuming there was even a computer manufacturer and it wasn't a desktop PC you built yourself (are you going to blame yourself for allowing Windows to install updates that slow down the CPU?).
floatingatoll
IIRC (long time ago!) Krypton Lock used to offer a lifetime warranty against manufacturing defects and against properly-locked bicycles stolen through lock damage. Then the Bic pen cap unlock strategy was found. I’m not sure if they still do offer that lifetime defect warranty or not anymore, but my hometown was excluded from their warranty back then.
jdmichal
Because there's a huge philosophical and technical difference between specifically gaming tests and having a vulnerability discovered that invalidates technologies you were using to legitimately pass a test?
bagacrap
Even if you think any broken promise amounts to fraud, Intel didn't intentionally commit fraud.
fpgaminer
If you strip away all the safety features of a car, the car will weigh quite a bit less, and thus be able to achieve better performance and gas mileage. Should VW be allowed to market their cars as having that performance? Of course not, because no one should be driving their cars without a firewall. So why is Intel allowed to market their 8-core chips as having 16 threads when in practice you need to disable hyperthreading?

So there's a strong argument that Intel, which is currently marketing their chips this way, is committing fraud. Maybe it could be argued that Intel didn't previously commit fraud. But as soon as the bugs became known, and Intel continued to market their chips as having hyperthreading, from that point forward they were committing fraud.

umvi
If I sell you a lock, and then 10 years later someone finds a vulnerability with the lock I sold you, should I refund you? That seems absurd. You are basically saying the product has to be perfect and the architects have to be able to see the future. Even if your hardware is formally verified, people can do physical attacks like listening to high frequency chirps of your cpu and using that to break security. Do you still deserve a refund?

There is no such thing as perfect security. It is a cat and mouse game that will continue until the end of time, requiring ever greater resources. Therefore... all software and hardware should be free because all software and hardware is defective?

gamache
> If I sell you a lock, and then 10 years later someone finds a vulnerability with the lock I sold you, should I refund you?

Kryptonite did exactly this when someone figured out they could open their U-locks with a Bic pen barrel. Full recall of vulnerable products, with free replacement, regardless of age.

ajsnigrutin
10 years? Of course not. But if i bought it yesterday, I'd expect a refund. Just consider that they were still selling affected CPUs even when they knew about the vulnerabilities and even after the papers were published.
throwaway2048
They are still selling them today.
umvi
> Of course not. But if i bought it yesterday, I'd expect a refund.

If you bought it yesterday, why wouldn't you be able to get a refund? I don't know of any major vendor that would deny you a refund on grounds that the unit is defective.

ajsnigrutin
I know of no mass refunds (as there were with Volkswagen) due to Spectre/Meltdown, and slowing your PC down by 30% after the first patch and, as it seems, losing hyperthreading makes it a defective unit to me.
stickfigure
Have you asked the vendor that sold you your PC for a refund? I don't know if it would work, but that's the avenue you would have to take - including sending your PC back. Then what are you going to buy? Another PC with the same issue?
floatingatoll
Locks often offer a lifetime warranty against manufacturing defects in their locks.

Is this a manufacturing defect in CPUs?

(The defect is baked into hard silicon out in the world, so the analogy is plausible.)

dymk
It’s not a defect, so the analogy doesn’t work
vorpalhex
How is hyperthreading being insecure not a defect?
dymk
A new technique to pick locks is discovered. Does that mean all locks are defective?
vorpalhex
When the Kaba Simplex (a commercial door lock) was discovered to be easily bypassed by holding a magnet near it, yes, it was in fact a design defect and the company had to correct it by giving repair kits out to purchasers.
dymk
Intel and others did give out a repair kit; they give you the option of disabling hyperthreading and a whole host of other optimizations. Those optimizations are both what provides this new side-channel of attack, and an immense speedup when they're enabled. You can't have one without the other.
andoriyu
Except lock buyers still got the door lock, and in the case of Intel you lost threads.
vorpalhex
Except they didn't advertise that way. They advertised the hyperthreaded performance, without disclosing its security implications.
dymk
You're asking for something impossible.

Lock manufacturers can't advertise that their locks are hardened against specific yet-to-be-discovered attacks.

Intel can't advertise that their CPUs are hardened against specific yet-to-be-discovered attacks.

They can only provide mitigations after the fact.

Forbo
Yet they are still advertising the number of threads without any mention of the vulnerabilities involved, well after those vulnerabilities have been disclosed. It's deceptive advertising at best.
toast0
This is a design defect, not a manufacturing defect.

In case of design defects in highly regulated fields (cars), there is often a campaign to make things right. When Intel processors couldn't divide properly, they had a campaign to replace them. In this case, it looks like we're not getting much.

derpherpsson
It's not like that.

Intel took shortcuts to make their CPUs faster. At least some of the chip architects working on their implementation of hyperthreading should have understood that they sacrificed security for speed - without telling anyone.

rrss
> should have understood that they sacrificed security for speed

And what if they didn't?

It's pretty much exactly like that. Intel has been making CPUs for well over a decade that are vulnerable to various side channel attacks, and the only thing that has changed is the community's understanding of the vulnerabilities (i.e. there's a new way to pick the lock).

wahern
It strains credulity to believe that Intel wasn't aware that they were trading side-channel resistance for performance. The problems are just too deep and pervasive. None of AMD, ARM, Power, or SPARC came close to the number and severity of issues in Intel chips. There were problems in those chips, but their nature and limited scope shows that everybody had a rough idea about how far they could go before they made privilege separation worthless from a confidentiality perspective. Yes, some went a little too far, but it seems clear that Intel just said, "f-it", and stood on the gas pedal.

Hyperthreading/SMT is a trickier issue because it had obvious and even proven side-channel potential from the beginning. But 1) everybody had to hold their nose in order to compete with Intel on SMT performance, and 2) technically the operating system communities should have made the effort to keep unrelated processes from sharing an SMT'd core. And that still needs to happen--we need smarter schedulers.

rrss
> It strains credulity

I don't agree.

Meltdown: Intel, IBM, some ARM

Spectre v1: Intel, ARM, IBM

Spectre v2: Intel, ARM, IBM, AMD

Spectre v3a: Intel, ARM

Spectre v4: Intel, ARM, IBM, AMD

L1TF: Intel, IBM

Meltdown-PK: Intel

Spectre-PHT: Intel, ARM, AMD

Meltdown-BND: Intel, AMD

MDS: Intel

RIDL: Intel

That doesn't look to me like "everybody had a rough idea about how far they could go."

It is really easy for me to believe that a ton of designers could add optimizations without consideration of side channels. Nobody appreciated the vulnerabilities that speculation introduced.

(And keep in mind Intel has probably 90+% market share in the search for exploitable behavior.)

> The problems are just too deep and pervasive

One could also say that it strains credulity that the entire community failed to realize the existence of these vulnerabilities that are so fundamental to speculation, and yet here we are - that's exactly what happened.

wahern
Not all those named side-channel exploits are the same in terms of severity and difficulty to mitigate, nor are the chips vulnerable in the same way.

For example, Meltdown exposed severe negligence in Intel's design. For ARM Meltdown was limited to values of a single register, for which there's no reason to believe it was anything other than an unintentional bug--i.e. you don't get any substantial performance benefits from permitting speculation through that single register, though it perhaps simplified some other aspect of the chip.

Basically, if you go down the line Intel's issues were both more severe and pervasive, as-if they just didn't care about preventing speculation across privilege domains.

Notwithstanding the ARM's Meltdown mistake, both ARM and AMD very clearly had designs that attempted to prevent speculation across privilege domains. And they mostly succeed. The major issues are at syscalls where intra-privilege (not cross-privilege) speculation can indirectly be exploited by unprivileged callers. But like with SMT, it was always sort of understood that it was the operating system's responsibility here; there really are no good hardware mitigations.

Basically, the exploits for AMD and ARM (notwithstanding the lone register issue) are intrinsic to speculative execution, period. And everybody sort of understood this, especially in the cryptographic community with work on constant-time algorithms. It's just that everybody was too lazy to take it seriously more generally until Meltdown/Spectre lit a fire under everybody's pants. And once they began to pay attention, it immediately became clear that Intel's designs made patently and grossly unsafe design choices.

The details on IBM Power chips are spartan. I think their Meltdown issue was similar to ARM--a bug with a register--but I can't confirm that. My impression is that Power pushed the envelope more heavily than AMD and ARM, but not like Intel. Power went all-in on SMT, though, and though SMT is fundamentally anathema to cross-privilege confidentiality, Intel's and IBM's SMT implementations seem to leak more than AMD's.

cies
Indeed, it's the difference between lying and making a mistake. Of course they hoped, with plausible deniability, to mask those lies as mistakes, but they got caught. Hence the class-action suits.
3JPLW
Have there been any rumors of internal discovery at Intel prior to any of these disclosures?
dymk
No.
3JPLW
Downvoted for the straw man — Intel is currently selling processors with, e.g., 8 cores and 16 threads without any asterisks or caveats. That's in their official marketing materials. Currently.

https://a.sellpoint.net/a/Qo3wL1no.jpg (via NewEgg)

https://www.intel.com/content/www/us/en/products/processors/...

jcranmer
So does AMD, for what it's worth. And I presume POWER, and anyone else who sells processors with SMT.
rrss
And it's true...

Has Intel ever said "we guarantee that hyperthreads are entirely isolated from one another?"

andoriyu
All car makers put asterisks when talking about performance and mileage; it didn't keep VW from being fined and prosecuted.

We were given certain benchmark numbers and performance targets, and it all went to shit with a single microcode update.

Somehow most people expect the CPU not to give random JavaScript on the Internet a private key from an encrypted file system.

Intel got off so easy from that drama. Imagine a car maker selling you a car with 4 seats, but when the back seats are used you might lose steering. Would that be okay? Nowhere does it say you get 4 usable seats.

3JPLW
One quick google later:

https://www.intel.com/content/www/us/en/architecture-and-tec...

> By combining one of these Intel® processors and chipsets with an operating system and BIOS supporting Intel® HT Technology, you can:

> * Run demanding applications simultaneously while maintaining system responsiveness

> * Keep systems protected, efficient, and manageable while minimizing impact on productivity

rrss
I have no idea what "keep systems protected" means (and probably neither did the person who wrote that).

That statement is a long way from an actual guarantee that there is no way for one logical thread to extract information about another.

3xblah
Full interview: https://www.youtube.com/watch?v=sDrRvrh16ws

One of the things the Linux kernel developer says in the interview is that researchers are going through Intel patents to find "security bugs".

He says this is "fun". Twice. He seems pretty nonchalant about these issues. Like it is great to fix them, but not like it is too important or anything to worry about. He says what is most important to him is that Linux "succeeds". The attitude is reminiscent of Microsoft in their heyday. Drunk on success. He even calls out a Microsoft employee he is working with. He says the companies contribute "selfishly". This is no different from BSD. Users do not determine how much security is prioritized; the contributors do. However, what happens when the biggest kernel contributors are companies?

He also indicates he does not agree with Stallman philosophically on technology issues. We do not get any details of the specifics of their disagreement.

As a user, I think it is somewhat easier to keep track of Net/OpenBSD kernel contributions than it is to keep track of Linux kernel contributions. I might be wrong on that. As far as I can tell, the biggest contributors to Net/OpenBSD kernels are still individuals and are not acting directly on behalf of corporations.

asveikau
I haven't had a chance to watch but it seems like you are berating him for figures of speech and/or speaking styles. As far as I know his contributions to the kernel are pretty vast. We are all human and security isn't his only focus so maybe give him a break.
Syzygies
I disable hyperthreading for better performance.

In my experience as a mathematician building parallel compute servers, hyperthreading generates more heat than it is worth. I can overclock further without hyperthreading, to more than overtake the faint advantage that hyperthreading offers at a given clock speed. So I now buy binned, delidded processors from Silicon Lottery, choosing the best reasonably priced speed of the best cpu without hyperthreading. That would today be the i7-9700 @ 5.1GHz for $400.

pingyong
That really depends on the workload though. x264 benefits massively from hyper threading for example. Way more than the 6% performance you get from more overclocking headroom.
xuhu
I guess you mean i7-9700k instead of i7-9700.
hinkley
The number of generations of processors where this has been true is really astounding to me. It really makes me wonder why they persist with this line of effort instead of doing something else, like cores that share logic units only.
gen3
Use cases are different. I would imagine the poster above is able to saturate all the cores, but in the case of the regular user, the cores spend most of their time waiting for data.
AaronFriel
Because for a large number of workloads, hyperthreading gives real performance improvements.

The vast majority of consumers aren't running compute heavy workloads that are more amenable to SIMD work (which it sounds like this might be) than the sort of highly branching, often stalled work that general purpose programs do.

dnautics
Perhaps someone might know better than me: why is hyperthreading necessarily bad? Can't you just keep it on and give your tenants cores with affinity? For example, some tenant wants two cores, you give them two vcores on the same core; some tenant wants four cores, you give them four vcores over two physical cores, etc.
dnautics
Actually, I just took a shower and answered my own question (I think): many of the hyperthreading bugs don't breach the process divide, they use errors in hyperthreading statefulness as a side channel to breach the memory divide, and the cores share memory regardless of who's on which core, so if any one core gets compromised, you could potentially access any of the cores' memory.
muricula
Close but not quite -- sibling hyperthreads (logical cores) share cache state. Physical cores do not share cache state. Different processes, threads, or VMs on sibling hyperthreads (by definition on the same physical core) can infer the other's memory state based on the cache state.

If an attacker is pinned to one hyperthread, and the victim is pinned to another which isn't a sibling hyperthread, none of the spectre attacks will work since the cache state isn't shared.

As an attacker with code exec on a core, you can theoretically play games with the OS scheduler until you're running on a sibling core with your victim thread/process/vm.
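
The sibling relationship muricula describes can be read straight out of sysfs, which is also how you'd do the defensive version of this pinning. A minimal Linux-specific sketch (the topology files are assumed present and the PIDs are hypothetical) that keeps two processes off hyperthread siblings of each other:

    #!/usr/bin/env python3
    """Pin two processes to logical CPUs that do not share a physical core.
    Linux-specific sketch; topology files assumed present, PIDs are placeholders."""
    import os
    from pathlib import Path

    def siblings(cpu: int) -> set[int]:
        text = Path(f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list").read_text()
        out = set()
        for part in text.strip().split(","):    # formats like "0-1" or "0,4"
            lo, _, hi = part.partition("-")
            out.update(range(int(lo), int(hi or lo) + 1))
        return out

    def pin_apart(pid_a: int, pid_b: int) -> None:
        cpu_a = min(os.sched_getaffinity(0))
        # Any CPU that is not a hyperthread sibling of cpu_a will do for the second process.
        cpu_b = min(os.sched_getaffinity(0) - siblings(cpu_a))
        os.sched_setaffinity(pid_a, {cpu_a})
        os.sched_setaffinity(pid_b, {cpu_b})

    # usage: pin_apart(victim_pid, untrusted_pid)   # both PIDs hypothetical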

dnautics
That's not true. Spectre works due to speculative execution leaking memory data through a side channel exposed by hyperthreading. That memory can be in use by any of the threads, not just the ones on sibling threads.
muricula
The side channel for most of the spectre variants is the latency of misses on the cache lines. L1 and L2 cache lines are local to a physical core. As far as I know, nobody has made any of the spectre variants work by measuring the latency of L3 cache misses, which are local to NUMA nodes if I understand correctly, but I'd love to hear otherwise.

The most recent round of spectre variants measured the latency of the line fill buffers and other parts which are local to a physical core's memory subsystem.

throwaway2048
Because unless you make hyperthreading a purely opt-in thing, there is no way for the kernel to know if two things that are security-sensitive relative to each other are being run on the same core as hyperthreading neighbors.

Two threads in a web browser that are assigned to different websites for instance.

rrss
> there is no way to know as a kernel if two things that are security sensitive relative to each other

The kernel has a way, and it is process isolation. The kernel doesn't care if you want two threads in the same process to be isolated from one another - that's your problem, not the kernel's.

Anyway, thread isolation already doesn't work even without hyperthreading-specific attacks: "we have discovered that untrusted code can construct a universal read gadget to read all memory in the same address space through side-channels. In the face of this reality, we have shifted the security model of the Chrome web browser and V8 to process isolation" (from https://arxiv.org/pdf/1902.05178v1.pdf).

throwaway2048
Chrome doesn't exclusively use process isolation; several sites will share the same Chrome process. This is only one example; another might be, for instance, authenticating users on one thread and handling sensitive authenticated data on another, which is an extremely common pattern.
rrss
Chrome turned on site isolation by default in chrome 67: https://security.googleblog.com/2018/07/mitigating-spectre-w...

My point is that the OS has no responsibility to isolate threads from one another, regardless of how many applications may try to do it themselves anyway.

gamache
It's not necessarily bad, but it is not secure by default. That's why it ships turned off.

messe
That's mostly fine, but it complicates the scheduler and doesn't necessarily help performance. IIRC hyperthreading performs best when the workload on each thread is different (not taking caches into account), so running threads from the same process on the same core can (although isn't always; I'd hesitate to claim anything concrete here without benchmarks) be detrimental.

OpenBSD devs are likely open to it, but there are other inefficiencies in the kernel like locking that have priority.

MPSimmons
You have to be absolutely sure that your tenants are not executing arbitrary code (e.g. JavaScript), because then they can get compromised, too.
Woodi
"Disable HT when you don't trust your users"

Also:

Do not run code that is not your own, e.g. JavaScript.

There is way too much short-lived code.

Why do we not build software libraries as we learn to use computers, and then use them for the rest of our lives?

Programmers should switch to programming human domain problems, not constantly reimplementing the Start Menu hierarchy.

Btw, does anybody use Tripwire? E.g. with apt-get or pacman and saving hashes to a read-only device? ;)

mikece
Why aren’t the *BSD operating systems more popular in the server and workstation spaces?
UI_at_80x24
It is if you know where to look or who to talk to.

That is the exact same question I was asked ~20 years ago regarding Linux vs Windows.

The barrier to entry is higher on *BSD than it is on Linux. But with the appropriate skills/time/energy it is very much worth the effort.

ALL of my edge devices run OpenBSD (since 2011). Most of my internal servers run FreeBSD (90+%), with the remainder on OpenBSD. I made the decision to migrate away from Linux when SystemD was made default in Debian. In my mind they make more sense. I can grok the config files and init process. Man pages are much easier to understand. Network config is brain-dead simple and powerful; that's a combination that shouldn't be overlooked.

I freely admit that I'm an old-fart. My manager thinks I'm a hippy, and my co-workers think I'm a Unix-greybeard. In reality it's just this simple: It does what I want, and gets out of the way.

majewsky
It's hilarious how one can distinguish systemd fans and haters by how they write either "systemd" (pro) or "SystemD" (against).
davidcuddeback
This is pretty much my experience, too, almost word-for-word. I also run OpenBSD on my edge nodes and FreeBSD for most app servers. After using FreeBSD on servers for about two years, I felt more comfortable with it than I do with Linux after 20 years. A large part of that is FreeBSD's simplicity, consistency, and documentation. It means I can pull on a thread and follow it myself, often without resorting to mailing lists. On Linux, I often feel like I'm trying to piece together information from a variety of sources, sometimes outdated or not applicable to the distro I'm running. BSD feels more cohesive to me, and I think that makes me more self-reliant.
dleslie
At a time when the commercial BSD companies were fighting among themselves and allowing their technology to become stale and remain expensive, Linux came along with RedHat and SuSE.

Those two made an effort to meet directly with business leaders, attend all the trade shows, and gave their product away for free. Their model was at least as shocking as their license; hitherto, hardly any business software was free and paired with consulting services. It caused a storm, and gave birth to a whole industry that wasn't possible under the expensive BSD model.

technofiend
Even prior to RedHat and SuSE for some reason System V-based systems were deemed more suitable to business, at least by Sun Microsystems. I stand corrected based on beef's comment below: they migrated from BSD to Sys V as part of Solaris aka SunOS V. I was misremembering their adding streams support to SunOS 4 as the big switch but I was wrong about that.
dleslie
Sun probably had a good sales team. ;)
beefhash
SunOS 4 was still BSD-based. You can find the source code for SunOS 4.1.4 on TUHS's code comparison site[1], but for some reason, you can't browse the tree.

Solaris was when Sun made the jump to System V. AFAIK part of that was because Sun had financial difficulties at the time and AT&T offered to help them -- in exchange for Sun to rebase on System V.

[1] e.g. https://minnie.tuhs.org/cgi-bin/utree.pl?file=SunOS-4.1.4/us...

msla
A number of good replies in this thread, and I usually reach for the USL lawsuit as the go-to explanation, but another explanation I've heard probably has some merit as well: Linux came up from the PC world, whereas BSD came from academia and, later, techie companies.

Therefore, bedroom hackers would be more likely to be able to install Linux on the hardware they had, not the hardware they wished they had, so they'd reach for Linux when their employer wanted some kind of backroom system that didn't cost an arm and a leg.

Therefore, there was more Linux out there beyond the explicitly techie companies, the ones who'd have already bought Solaris or IRIX or HP-UX, and the current userbase drives both features and future userbase.

pas
Because you can get the same technical outcome by using off-the-shelf Linux and setting it up to be conservative, just like OpenBSD. (Or simply run sensitive workloads on separated/isolated machines.) The dreaded TCO is likely lower with Linux, because it's easier to work with, has better drivers, better performance, more software available natively, and so on. (The whole Linux ecosystem seems more efficient for business, even if OpenBSD/FreeBSD is simply better at a lot of particular stuff.)
moviuro
> setting it up to be conservative just like OpenBSD

Meh. Probably for superficial stuff, yes. But would Linux kill cat(1) if it did some network calls? I bet not. See https://www.openbsd.org/innovations.html

ailideex
You could block its network access with network namespaces: http://man7.org/linux/man-pages/man1/unshare.1.html

I'm sure you could engineer something to kill it, too, with current kernel functionality, though I would need to do more research to say what.
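
As a concrete illustration of the namespace route (a sketch, assuming util-linux's unshare is installed and unprivileged user namespaces are enabled; otherwise run it as root and drop -r): run cat in its own empty network namespace, so any network call it tried to make has nowhere to go.

    #!/usr/bin/env python3
    """Run a command with no usable network via a private network namespace.
    Assumes util-linux's unshare binary and unprivileged user namespaces."""
    import subprocess

    def run_without_network(cmd: list[str]) -> int:
        # -r maps the current user to root inside a new user namespace,
        # -n gives the command a fresh network namespace with only a downed loopback.
        return subprocess.run(["unshare", "-r", "-n", *cmd]).returncode

    run_without_network(["cat", "/etc/hostname"])        # fine: no network involved
    run_without_network(["ping", "-c", "1", "1.1.1.1"])  # fails: no route in the namespace

Unlike pledge, this doesn't kill the process on violation; it just leaves network calls with nothing to talk to. Seccomp, mentioned below, is the closer analogue for kill-on-violation behaviour.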

majewsky
BPF can filter syscalls.
pas
AppArmor does that. I don't know what the default profile for cat is, but a simple one-line "deny network;" works. It's up to the distribution.
roryrjb
You can do this with seccomp, but of course not by default like pledge and OpenBSD.
verytrivial
Without trying to sound facetious: "marketing". By that I mean the BSDs aren't trying to capture $$$ but seem to be focusing on good craft and sound engineering first, and on serving markets where that matters more than other concerns. Turns out that market isn't as big as others.
peterwwillis
Linux had more bleeding edge development, and that's what developers wanted, so that's what they wrote their apps for, and so that's what people used for workstations, so that's what got all the newest hardware drivers, and new kernel features, and new desktop work, etc etc.

Linux just had "more stuff". Without any other particular reason to pick one over the other, most people picked the one with "more stuff". Nobody wanted to install an OS for a server just to find out it couldn't run the latest software, or didn't have the latest drivers. In addition, the userland of the BSDs was different from GNU tools; if you were already installing GNU tools on every other OS you adminned, you might as well run the OS that's based on it. Finally, if you wanted Enterprise support, Linux was the only Open Source choice, afaik.

Some at the time claimed performance benefits from one or the other, but various benchmarks showed Linux and BSDs each had their respective performance strengths that could generally be overcome by tweaking.

cntlzw
The BSD wars of '92.
masklinn
The BSD troubles rather than wars: it wasn't a spat between BSD distros, it was BSDi and UC (Berkeley) getting sued by USL.

It cast a long shadow over the viability of Berkeley at the height of the UNIX wars, and just as Linux was appearing as a completely independent and unencumbered UNIX supported by the maturation & advocacy of GNU and the FSF.

Conan_Kudo
In retrospect, it's probably a good thing. BSD splinters have largely become incompatible with each other, whereas Linux distro splinters have (largely) remained compatible due to licensing and cultural differences.

We probably would have had BSD wars for real not long after, if it had become popular. To some extent, this is by design, as there's no culture or ethos that pushes people back together.

kees99
Linux stayed cohesive thanks to a stable ABI between kernel and userspace (whereas BSD has a stable API higher up the stack, between libc and applications).

The only minor splinter one could argue exists among Linux distros is "systemd" vs. "non-systemd"; Firefox and BlueZ 5 are non-functional without systemd, there's a largely different workflow to start/stop/view logs, etc. Luckily, based on usage numbers, it's "systemd" vs. "rounding error" at this point in time.

danarmak
On the other hand, if BSDs had been more popular, one of these splinters might have dominated the market as much as Linux does, due to the many positive feedback effects of being the most successful free OS.
geggam
It still is a possibility. The Linux community is becoming a mess of things stuck together, and it's getting to be really interesting to support.
AnIdiotOnTheNet
The BSDs aren't really much better in that regard once you leave the base install, and their hardware support is substantially worse.
geggam
I will agree hardware still lags but for the target of servers and serving its fine.

As far as BSD being a mess after base I completely disagree. Using and understanding a package manager makes life pretty simple.

That said if the Linux community did that they would probably realize how silly containers are :)

AnIdiotOnTheNet
Linux uses a bunch of package managers; it doesn't solve anything, yet creates the problem of "X isn't in the repo, now what?".
geggam
If you don't understand that you need to automate and manage your own packages (including containers) for Linux that meet your requirements, then no framework will help you.

You also need to do this for any pipeline for any OS.

masklinn
> whereas Linux distro splinters have (largely) remained compatible due to licensing and cultural differences.

Seems to me this had nothing to do with licensing or culture, but rather that if you wanted your distribution you'd use the upstream kernel and build your own userland with blackjack and hookers and whatever direction you were interested in, so there's some measure of strong relatedness in the kernel everyone shares. You might add a few modules or patches, but few people bother (or need) to fork the kernel itself.

Since BSDs are systems, if you want to go your own way you fork the entire system (that's literally the genesis of both OpenBSD and Dragonfly).

thefz
IIRC Netflix is using FreeBSD.

Outside the server scope, OSX is mainly BSD with a different kernel.

FreeNAS is based on FreeBSD too.

messe
It also has a bit of an outdated userland, imported from FreeBSD/NetBSD years ago.
roryrjb
They are using FreeBSD for their CDN appliances that they have dotted around the world; everything else runs on Ubuntu out of AWS.
throw0101a
NetApp, Dell-EMC Isilon, Juniper, iXsystems, pfSense, etc:

* https://en.wikipedia.org/wiki/List_of_products_based_on_Free...

If you follow the commit logs, you'll regularly see "Sponsored by" messages:

* https://www.freshsource.org/commits.php

Not just for the core OS, but also in ports and drivers (Intel, Chelsio, Mellanox).

FreeBSD in particular has always been persnickety about acknowledging work done on behalf of others. Something that would have prevented the IBM-SCO lawsuit if Linux had used commit/patch tracking from the beginning.

marmaduke
Not all EMC systems are FreeBSD; at least the VNX I managed is a Linux derivative
throw0101a
Hence why I wrote Dell-EMC Isilon.

Isilon and their OneFS was a stand-alone company (like Panasas still is). EMC bought Isilon. Then Dell and EMC merged.

siffland
I have been using FreeBSD since 1999; it's a great OS and I love it. At work I am a Linux system administrator.

One of the huge reasons people use FreeBSD is simply licensing. If you don't want to release any source code, simply build your custom app on FreeBSD and only include BSD-licensed software. Makes being proprietary simple.

Note, this does not mean companies that do this do not give back to the project; they do, in the way of code commits and sometimes donations.

throw0101a
Companies that do not give back non-secret sauce patches will find they will contribute to their own pain (unless they're big enough to fork and not care about going back).

Isilon did not contribute back for a while, and then the FreeBSD project kept moving forward, and so the patches they kept in-house kept getting bigger and bigger, which was overhead in their development.

They've basically caught up now:

* https://en.wikipedia.org/wiki/OneFS_distributed_file_system#...

notacoward
Because the large companies that deploy hundreds of thousands of servers need to hire lots and lots of people to make or maintain local enhancements, at all levels from the OS to libraries and utilities. In that environment it's not hard to make a local change that saves a million dollars in running-hardware costs, and it's also not hard to find Linux developers to make those changes. OpenBSD developers? Not so much. Then smaller companies do what the big ones are doing even when the potential for million-dollar enhancements isn't there, often because their founders and/or most of their personnel came from those big companies. It's the "rich get richer" effect familiar from social networks, but for software platforms.
toast0
The kernel teams at most large companies, even large tech companies, are not that big.

Sure, the pool of currently experienced Linux kernel developers is large compared to the BSDs, but if you hire a smart person (like a Linux kernel developer), and provide them with means, opportunity, and motive you'll have a BSD kernel developer soon enough.

Commercial support and network effects and costs of running multiple operating systems are more likely to be a deciding factor than developer availability. If you need to run someone else's software, it probably runs on Linux or Windows. If you run a BSD for your stuff, and Linux for theirs, that means you'll probably need more people to understand both.

If you're going to run on other people's hardware, Linux is almost always supported, although there's been some movement on BSD in clouds. Like with other niche platforms, if it's not currently supported and you want to do it, you might need to do the work -- there's a lot less community to rely on to magically make things work.

notacoward
> The kernel teams at most large companies, even large tech companies, are not that big.

It's not just about the kernel. You're talking about entire ecosystems of core utilities, filesystems, network stacks, security mechanisms, virtualization and resource-control subsystems (e.g. cgroups), performance profiling and tuning, etc. They're different systems from top to bottom, just like when I switched from 4.3 to V.2 thirty years ago.

> Commercial support

Stop right there. FAANG companies aren't going to buy support contracts. They'll hire the maintainers instead (like they did with me and hundreds of others like me at my current company). They're not going to split that effort across multiple platforms. They'll focus on one, then hire thousands of developers who don't even realize the tools and interfaces they use are Linux-specific. Then all of the wannabes will copy them, for the reasons I already mentioned. The network effect has gone way beyond any chance of reversal. Sorry.

big_chungus
In some places they are. For instance, Netflix appliances use FreeBSD. One of the deciding factors here is licensing: Linux is GPL, meaning Netflix would have to contribute back its changes, whereas by using a BSD-licensed alternative, it can keep those changes in-house for a performance advantage. You can dispute the merits or ethics of this, but that's the choice a good few people make. I believe Yahoo used it for similar reasons. Some consoles (Sony PlayStation 3, Nintendo Switch, possibly others?) use it internally, again so they don't have to open-source console code. Others use it for appliance-type devices, again for similar reasons.
derpherpsson
Nothing stops you from switching. Make an informed choice.
gerbilly
In the early 1990s there was some legal dispute over the BSD license.

It spread FUD and prevented people from developing on BSD, and steered them towards Linux.

This is from memory, maybe someone else can fill in the details.

nnq
no docker support on *BSD unfortunately :|

...this can be a show stopper

cat199
no jail support on linux..
nnq
you miss the other point of Docker: cross-platform dev (and only occasionally cross-platform deployment)...

I can run same set of docker containers that make up an app, including scaling multiple instances of each container, on (1) my macbook, (2) my ubuntu linux laptop, (3) my centos server (4) my windows laptop, (5) my other windows laptop, (6) my client's windows server.

I can do this without bothering to configure my app specifically, or without even knowing or learning what the dependencies of my app are!

BSD's are the "loner children" playing by themselves while everyone expects to have everything running everywhere and expect it by default...

Sorry, but unless effort is made to embrace kuber and docker, the niche will shrink.

One way out of the problem is to have entities like the FreeBSD Foundation or the OpenBSD devs invest effort into Docker development to make it capable of running BSD containers inside it... but this will never happen because of endless ego :|

cat199
... all 6 of which were running a Linux VM until recently, and still predominantly so (Windows containers are hugely niche and have just about as much 'loner children' baggage in the larger community)

writing a GUI to spin up the VM flavor du jour with one hipster command that you can show in an animated terminal on your bloated website doesn't make something cross-platform underneath the hood, and that same VM runtime could run just about anything else.

and no, I don't miss the point. You miss the point - you are arguing market share as if it is technical merit. I'll take 1 true-school Unix hacker over 5 million 'noders' who can't debug their super hip k8s clusters they provisioned 'in the cloud' with wget|sh when they break.

geggam
Because the people who pay folks to work on systems don't like to pay enough money to get that level of talent.
chrisseaton
> Because the people who pay folks to work on systems don't like to pay enough money to get that level of talent.

People like Google can't pay enough? Do all the BSD developers work as front-end quants or something? How are they all earning so much that nobody can afford them?

geggam
Google doesn't use BSD. (to my knowledge)

Yahoo! did but they also had BSD developers on the payroll.

Any Silicon Valley company will pay enough but that is why when you leave Silicon Valley or a big tech hub everything is windows. The pay attracts the talent.

chrisseaton
You said the reason people don't use BSD was because the people who work on it were too expensive. I know Google don't use BSD, but your idea that it's because they couldn't afford to pay BSD developers doesn't seem to add up, since Google happily pays Linux kernel developers maybe half a million dollars or more. Are the BSD developers really paid so much that Google can't afford them over the Linux developers? That doesn't seem likely to me. I think the reason they're not using BSD is something other than what you're suggesting.
geggam
When I said "work on the systems" I meant support, maintain, care for, and feed.

Linux has become mainstream so there are more people in the talent pool to pay to support it.

Windows ... has more so it is cheaper

BSD simply doesn't have enough people who know the system well enough to support it

Paying 5 developers to build something cool doesn't mean you have the support system to run it.... You actually need people who understand your product to use it as a business system

cat199
there's only slightly more learning curve between Linux Distro X and BSD Y than there is between Linux Distro X and Linux Distro Y.

In some ways, and depending on the situation, more so, since Linux distros are often trying to do things drastically differently from each other in order to differentiate themselves, whereas BSDs are often sharing code because of the small developer base, sharing-compatible licenses, and lack of bottom-line-oriented corporate sponsorship.

geggam
Having worked in the industry for a couple decades and been interviewing people for the last decade I can tell you the number of people who understand low level systems management is not high.

Used to be a system admin could modify a kernel module in Linux. These days it's getting hard to find one who can use the CLI properly.

zdw
To speak to the specific case, Google does use some BSD components - large portions of the C library in android are OpenBSD sourced:

https://undeadly.org/cgi?action=article&sid=20140506132000

Google also has donated to the OpenBSD foundation historically in relatively small amounts: https://www.openbsdfoundation.org/contributors.html

phicoh
I'd say, release early, release often.

In general, *BSDs take more time to work out technical details, make sure that the design is right.

The approach in the Linux community is much more: release something that mostly works now and fix it later.

mkr-hn
Ubuntu kicking everyone into a 6 month release cycle pushed Linux, as a kernel and as a software ecosystem, forward a lot. Quality and quantity seem to converge over a long enough view.
hackworks
I think it again falls into kernel versus whole system. Releasing an update to the whole system is more work than releasing the kernel.

A little orthogonal: it sort of rhymes with the mono-repository versus collection-of-micro-repositories mindset. Does BSD have more code reuse since it is a whole system?

mruts
OpenBSD has had a six-month release cycle for a very long time.
marcosdumay
It has lower performance than Linux, and some compatibility problems that appear once in a while.

It used to be fashionable to run BSD on security focused machines (like firewalls). I'm not sure why Linux won there too.

AnIdiotOnTheNet
Generally people prefer stuff that works over stuff that is nebulously "more secure".
derpherpsson
Stuff breaks when it is hacked.

State actors are now targeting whole populations. It takes less and less time to enumerate the entire IPv4 address space. Knowledge about hacking is becoming accessible to a larger and larger group of people. This problem is not going away - it is growing larger every year.

If you don't believe me: Just attach an object to the internet and watch the logs.

AnIdiotOnTheNet
> Stuff breaks when it is hacked.

Yep, but you have to grade it based on likelihood. Which costs more: a thing that doesn't work right now, or a thing that might not work in the future, maybe? If you can make it work now and be secure, good; otherwise people will, quite rationally, prefer that it work.

zdw
Having used both BSD- and Linux-based appliances, the Linux-based ones tend to have more features (VyOS, etc.) and cover more hardware (OpenWRT, etc.), but they aren't as long-term stable and problem-free as the BSD ones.

OpenBSD has much more complete and accurate documentation, which is a plus when you're debugging a problem.

I would opine that Linux won because it was familiar but not ideal - see also people trying to shoehorn Windows into embedded systems where a Unix variant would be a better pick.

dwheeler
Linux came after the BSDs, so you would think the BSDs would have won.

There are many reasons Linux-based systems are generally much more popular than the BSDs in the server and workstation spaces. Here's why I think that happened:

* GPL vs. BSD license. Repeatedly someone in the BSD community had the bright idea of creating a proprietary OS based on a BSD. All their work was then not shared with the OSS BSD community, and the hires removed expertise from the OSS BSD community. In contrast, the GPL forced the Linux kernel and GNU tool improvements to stay in the community, so every company that participated improved the Linux kernel and GNU tools instead of making their development stagnate. This enabled the Linux kernel in particular to rocket past the BSDs in terms of capabilities.

* Bazaar vs. Cathedral. The BSDs had a small group who tried to build things elegantly (cathedral), mostly in "one big tree". GNU + Linux were far more decentralized (bazaar), leading to faster development. That especially applies to the Linux kernel; many GNU tools are more cathedral-like in their development (though not to the extent of the BSDs), and they've paid a price in slower development because of it.

* Multi-boot Installation ease. For many years Linux was much easier to install than the BSDs on standard x86 hardware. Linux used the standard MBR partitioning scheme, while the BSDs required their own scheme that made it extremely difficult to run a BSD multi-boot setup. For many people computers (including storage) were very expensive - it was much easier to try out Linux (where you could dual-boot) than BSDs. The BSDs required an "all-in" commitment that immediately caused many people to ignore them. I think this factor is underappreciated.

* GNU and Linux emphasis on functionality and ease-of-use instead of tiny-ness. GNU tools revel in all sorts of options (case in point: cat has numerous options) and long-name options (which are much easier to read). The BSDs are often excited about how small their code is and how few flags their command lines have... but it turns out many users want functionality. If tiny-ness is truly your goal, then busybox was generally better once that became available circa 1996 (because it focused specifically on tiny-ness instead of trying to be a compromise between functionality and tiny-ness).

Some claim that the AT&T lawsuit hurt the BSDs, but there was lawsuit-rattling for Linux and GNU as well, so while others will point to that, I don't think it was a serious factor.

Here's one article discussing this:

https://www.channelfutures.com/open-source/open-source-histo...

msla
> Some claim that the AT&T lawsuit hurt the BSDs, but there was lawsuit-rattling for Linux and GNU as well, so while others will point to that I don't think that was serious factor.

The SCO lawsuit was a joke and everyone knew it. A bare shell of a company, a mere coat rack they could hang a lawsuit on, was going up against IBM with evidence it wouldn't even release for an embarrassingly long period of time, and when it did, it was laughed out of Slashdot and Groklaw. Microsoft really didn't get its money's worth out of that little venture.

As for lawsuits against GNU, I don't know of any off the top of my head. Can you name one?

blihp
By the time things like the SCO lawsuit happened, Linux already had the momentum and corporate financial backing so it could weather it. Had that lawsuit happened in the early days, history would have likely played out differently. (i.e. imagine you were in college getting sued by AT&T over a project that barely anyone had heard about... I suspect most would decide to find a different hobby project)

When the AT&T lawsuit happened it had the direct effect of steering people from *BSD to Linux at a critical time for both, so I'd say that it was definitely a factor.

smhenderson
> Some claim that the AT&T lawsuit hurt the BSDs, but there was lawsuit-rattling for Linux and GNU as well

Do you mean the SCO lawsuit against IBM? Because I would argue that 1) it happened long enough after Linux had established itself that people were too invested to be immediately scared off, and 2) people put a lot of faith in IBM and their legal team to defend Linux. I think the community around Linux was able to basically laugh the whole thing off once SCO started presenting their actual "evidence".

Without a large company to defend them, and with no one really using them for anything serious yet, the AT&T lawsuit against the BSDs looked a lot scarier at the time.

All good points by the way, but I do think AT&T rattling their sword did have a pretty chilling effect on BSD adoption as well.

stock_toaster
> Do you mean the SCO lawsuit against IBM?

No, pretty sure the parent meant the USL vs BSDi one. Though I disagree with the parent, and believe it _was_ impactful on BSD adoption.

https://en.wikipedia.org/wiki/UNIX_System_Laboratories,_Inc.....

smhenderson
I meant the part where he mentioned "but there was lawsuit-rattling for Linux and GNU as well".

I don’t remember anything before SCO so I was curious if there was something before that I missed.

I must admit I was downright obsessed with that case and Groklaw’s coverage at the time so I guess it’s not surprising that it was the first thing I thought of...

dwheeler
I meant the USL vs. BSDi case. That case was, of course, focused on the BSDs. While GNU and Linux were independently implemented, I recall there being some claims at the time that the USL vs. BSDi case also implicated GNU and Linux.

The SCO vs. Linux/IBM/the universe travesty that attacked Linux came a little later; that started in 2003 and seems to never really end. There were also legal accusations around that time (circa 2004) about Linux raised by Kenneth Brown (from the Alexis de Tocqueville Institution). Both were focused on Linux and not on BSD, and both failed to slow down Linux.

Reasonable people can definitely disagree on whether or not the USL vs. BSDi legal case seriously impacted BSD adoption as compared to Linux. I don't think it was a key factor in the long run. The USL vs. BSDi case was raised in 1992, and resolved in 1994, so the legal issues didn't stick around very long. In addition, the BSDs had a head start and plenty of time to recover after the legal issues were resolved... if legal issues were the only problem. But again, reasonable people can disagree. We'll need two universes, where it did and did not occur, to really answer the question :-).

chongli
> GNU + Linux were far more decentralized (bazaar)

That's the first time I've ever heard GNU associated with the bazaar. The original thesis of the book (CatB) is the observation that GNU is a cathedral (with saint rms at its head) and Linux (the kernel) is a bazaar.

celticmusic
I think the point is that it's all relative.
dwheeler
Yes, that's what I meant. It's true that many GNU projects were (and are) cathedrals, but their cathedrals were more like a bazaar compared to the BSDs. GNU projects tend to be run mostly-independently, but the BSDs were (and are) maintained as entire monoliths... kind of the ultimate cathedral approach. And while cathedrals are beautiful, they take hundreds of years to build, and that's a problem in the software development world.
gerbilly
BSD is less popular because BSD is objectively better, which meant it was slower to reach market. See: https://en.wikipedia.org/wiki/Worse_is_better
spamizbad
An excellent summary. I think by the late 90s BSD could operate cleanly alongside Windows but by then Linux had become the default "free" *nix choice.

I ran FreeBSD for about a year around 1999, but lasted about a week before breaking down and installing a GNU userspace - excellent CLI ergonomics for the day.

Linux had quite a few "easy to install" distros where, if your critical hardware was fully supported, you had something that was easier to get up and running than Windows 95/98. X configuration and sound drivers were sticking points back then, though. BSD had no such "easy mode" gateway drugs.

693471
No, by the late 90s BSD had dominated the hosting world because the TCP/IP stack allowed better scaling on the same hardware. Tons of ISPs ran nothing but FreeBSD.
cpeterso
Yahoo and Hotmail ran FreeBSD for a long time, too.
kiney
I'm not up to date with all the Spectre mitigations. Is AMD's SMT implementation still considered secure?
derpherpsson
Yes.
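For anyone who wants to check rather than take a one-word answer: recent Linux kernels publish their own per-vulnerability verdict under /sys/devices/system/cpu/vulnerabilities/. Below is a minimal C sketch to dump it, assuming a kernel new enough (roughly 4.15+) to expose that directory; the exact file names and wording vary by kernel version, and the entries that mention SMT (l1tf, mds, and friends on affected parts) are the ones relevant to this question.

    #include <dirent.h>
    #include <stdio.h>

    /* Print the kernel's own assessment of each known CPU vulnerability,
       e.g. "spectre_v2   Mitigation: ..." or "meltdown   Not affected". */
    int main(void) {
        const char *dirpath = "/sys/devices/system/cpu/vulnerabilities";
        DIR *d = opendir(dirpath);
        if (!d) { perror(dirpath); return 1; }

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;                       /* skip "." and ".." */
            char path[512], line[256];
            snprintf(path, sizeof path, "%s/%s", dirpath, e->d_name);
            FILE *f = fopen(path, "r");
            if (f && fgets(line, sizeof line, f))
                printf("%-16s %s", e->d_name, line);
            if (f) fclose(f);
        }
        closedir(d);
        return 0;
    }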
tptacek
The title of this should be "OpenBSD Was Right About Hyperthreading", which is all he says in the video. "They were right for a little bit of the wrong reasons, but they were right".
Causality1
Would it be that difficult to run chips in a secure mode with no hyperthreading or branch prediction when handling sensitive information, and then take the brakes off for normal operation? I mean, I wouldn't really care if someone was watching everything I do for 95% of my computer use time.
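For the SMT half of that question, Linux does expose a runtime knob: writing to /sys/devices/system/cpu/smt/control takes the sibling threads offline without a reboot. (Branch prediction is a different story; the partial controls there are boot parameters and per-task speculation flags, not something you flip for 5% of the time.) A minimal sketch, assuming a kernel recent enough (roughly 4.19+) to provide the file; writing requires root:

    #include <stdio.h>

    /* Read, and optionally set, the kernel's runtime SMT switch.
       Typical values: "on", "off", "forceoff", "notsupported". */
    int main(int argc, char **argv) {
        const char *path = "/sys/devices/system/cpu/smt/control";
        char buf[32];

        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }
        if (fgets(buf, sizeof buf, f))
            printf("current SMT state: %s", buf);
        fclose(f);

        if (argc > 1) {                         /* e.g. ./smt off */
            f = fopen(path, "w");
            if (!f) { perror("writing (are you root?)"); return 1; }
            fprintf(f, "%s\n", argv[1]);
            fclose(f);
        }
        return 0;
    }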
lcall
There was a longish recent thread on a mailing list where people described well (and linked to) why they like OpenBSD.

https://marc.info/?l=openbsd-misc&m=156700281107546&w=2

The most recent one I saw gave a reasonable sense of some tradeoffs:

https://marc.info/?l=openbsd-misc&m=156750048426578&w=2

I'm sure they welcome donations. :) The software they have written has benefited many, directly or indirectly (like OpenSSH).

...but the whole thread was interesting I thought.

Animats
If you run BSD on AWS are you running a hypervisor underneath that's still doing hyperthreading? On most AWS instances, a "virtual CPU" is really a hyperthread.[1] "Each vCPU is a thread of either an Intel Xeon core or an AMD EPYC core, except for T2 and m3.medium."

Is anybody actually attacking AWS this way? It seems a promising attack vector; you get to run your own code on the same machines as others.

[1] https://aws.amazon.com/ec2/instance-types/#instance-details
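You can at least see the pairing from inside an instance: the kernel's topology files list each logical CPU's hyperthread sibling(s). A small sketch reading those sysfs files (present on any reasonably recent Linux; on a host without SMT each CPU simply lists only itself):

    #include <stdio.h>

    /* Print each logical CPU's hyperthread sibling list. On a typical
       EC2 instance this shows which pairs of vCPUs are two threads of
       the same physical core, e.g. "cpu0  siblings: 0,2". */
    int main(void) {
        for (int cpu = 0; ; cpu++) {
            char path[128], line[64];
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                     cpu);
            FILE *f = fopen(path, "r");
            if (!f)
                break;                  /* no such CPU: we've seen them all */
            if (fgets(line, sizeof line, f))
                printf("cpu%-3d siblings: %s", cpu, line);
            fclose(f);
        }
        return 0;
    }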

kinghajj
I don't think that vector would work, as only a single tenant's VM is scheduled on any particular core. That's why the minimum number of vCPUs for instances on hyperthreaded hosts is 2.
imtringued
Hyperthreading is cool in theory, but most workloads that take full advantage of it are filled with code that does nothing but follow pointers. It is generally unoptimized code, or code written in an interpreted language that doesn't focus on speed anyway. It also requires the program to take advantage of multiple threads in the first place. This means languages like Python don't benefit at all, while code written in C/C++ is so optimized that hyperthreading does nothing but divide the cache into two halves.
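The pointer-chasing case is easy to picture concretely: a walk over a randomly shuffled chain of indices stalls on a memory load almost every step, which is exactly the idle time a sibling hyperthread can soak up. Here is a minimal C sketch of that access pattern (just the latency-bound loop, not a benchmark harness; running two copies pinned to sibling threads versus separate cores is one way to compare on your own machine):

    #include <stdio.h>
    #include <stdlib.h>

    /* A latency-bound "pointer chase": every load depends on the previous
       one and almost always misses cache, so the core mostly sits stalled --
       the kind of workload where a sibling hyperthread can fill the gaps. */
    #define N (1u << 24)                /* ~16M entries, far larger than L3 */

    int main(void) {
        size_t *next = malloc((size_t)N * sizeof *next);
        if (!next) return 1;

        /* Sattolo shuffle: turns 0..N-1 into one big random cycle. */
        for (size_t i = 0; i < N; i++) next[i] = i;
        srand(1);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        /* Chase the chain; each iteration waits on a memory load. */
        size_t p = 0;
        for (size_t step = 0; step < N; step++) p = next[p];

        printf("finished at index %zu\n", p);   /* keep p live */
        free(next);
        return 0;
    }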
mixmastamyk
I bought a Core i5 a few years ago merely to save money, as I edit text files for a living. By chance it turns out not to offer hyper-threading. Not sure it matters much, but interesting nonetheless.
shmerl
Does it refer to Intel specifically (hyperthreading is Intel's term), or to any processor (in which case the general term should be SMT)?
fierarul
From the video: "two weeks ago: another one was released... publicly"

So... there's more to come.

jpm_sd
Anyone have a TL;DW? What was OpenBSD right about?
vallismortis
I'm still not - um - disabling, ah, hyperthreading, because, I'm paying for, ah, 4 cores on a 2 core ah, processor.
segfaultbuserr
The title has a clickbaity tendency, it should be changed from "OpenBSD was Right" to "OpenBSD was Right (on disabling hyperthreading)".
big_chungus
There will be a large number of people who don't like this. However, consider the following: most people, even those who use Linux, don't want to bother with too much configuration. For those people, it is probably "good policy" to disable a potential security risk. Leave the option for those who understand what they are doing and wish to go ahead, by all means, but make the default secure. For instance, I have server machines at home which do not handle critical or particularly sensitive data. They are securely firewalled, and shouldn't be touched by anything not trusted by me. On those machines, I sacrifice some potential security for performance. No patches for Meltdown, Spectre, Foreshadow, L1TF, etc., as they have significant impacts on performance and these boxes don't really need it. For my firewall, however, I turn on all patches and sacrifice quite a bit of performance (in other areas, not just patches which cause slow-downs) for more security. It handles untrusted input; other boxes do not. I can plan accordingly.

The important thing to remember is that I am intentionally making those choices and considering that trade-off. Most users are not. It is somewhat better to leave things like ASLR enabled and hyperthreading disabled by default, because that is what is best for the average user.

Lastly, it is of note that if something such as this is disabled and there is a resulting security breach, everyone will run around screaming, "Linux is insecure; ahh!" This will happen even if the user is informed he is supposed to do this for things which need to be secure.

baybal2
Greg is a very important man; many bet he will be the one to take over Linux development after Linus.
dleslie
History will show that Theo was right in a manner similar to how Stallman was right: technically correct analysis and eerily accurate predictions, but lacking in sufficient charisma to create more than a small following.

To some extent, you might say they are like Cassandra; speaking the truth but not believed or listened to.

ScottFree
Does that apply here? The BSD guys chose security over speed, as is their mantra, but companies that run linux for profit prioritize speed and cost per computing unit over security. I think 'disable hyperthreading' would be a difficult sell even for Steve Jobs.
x0x0
This is why we need enormous fines for security breaches, and smaller fines just for not following best practices.

Right now, only the people worried about paying more for performance, dev time, or security engineers are listened to. We need the legal teams inside companies to have something more substantial than possible negative publicity with which to motivate the CEO and CTO as a countervailing balance.

Just like in the majority of other industries, we need real negative consequences for when we dump incompetent code out into the world. We've tried the "no consequences at all" plan for a long time and it's gotten us, well, continual data breaches via the easiest possible things to control: S3 buckets and databases open to the world, an inability to patch known CVEs in under 3 months (hi Equifax!).

atheowaway4z
Whatever incentive you create and whatever analogy you make, the hardware/software/internet stack has no parallel. I've been thinking about the place of software in the context of other disciplines, and here is the thing: if you were thrown back into prehistory with a 50-person dream team of engineers and told to recreate ... something - let's say the train station I was just in - a rudimentary train network could be created in maybe 50 years? (Starting with how to make steel.) Whatever estimate you have, the work required before they recreate the LCD screens showing the time of arrival is easily double that.

With that as a barrier to entry, the only solution I could see working for security is public domain hardware and software.

It's the only solution, I believe.

AnIdiotOnTheNet
> If you are thrown back to the prehistory with 50 man dream team of engineers, and are told to recreate ... something. Let’s say the train station I was just in. A rudimentary train network could be created in maybe 50 years?

I'm no expert, but this seems like a ludicrously optimistic timeline.

ebog
Don't know why your comment is grayed; we absolutely need heavy monetary penalties for the worst kinds of data breaches. The abstract idea of a class action lawsuit isn't enough, even after the Equifax breach.
homologate
Do you have a similar opinion in regards to crimes? Do you think that there will be less crime if there are harsher prison sentences? Are you in favor of mandatory minimum sentences?

If not, why do you think harsher punishments are needed here but not for crimes?

throwaway2048
Do you think there would be fewer murders, or more, if there were no punishment at all for murdering people?

That's where we are atm with security breaches.

monocasa
If street crime had lesser penalties than the profit of said crime, then yes, I'd be pushing for harsher sentences.

That's pretty exclusively the purview of white collar crime behind a corporation though.

yjftsjthsd-h
Compared to effectively zero penalty, probably.
ebog
White-collar crimes (which this should be treated as) are all about making value calculations. Take the famous Ford Pinto memo. They decided the risk to their customers' lives was smaller (in terms of pure dollar amount, after potential litigation) than fixing the gas tank issue. If you penalize reckless security practices that lead to data breaches, companies will be far more inclined to look after their customers. We already issue fines like this with COPPA, so it's not a new concept.

Street crimes have a far different cause and should be treated differently. I'm surprised I even have to type that, it seems obvious.

washadjeffmad
Is there anything about how breaches are currently remediated that might contribute to better outcomes than if we adopted a higher and harsher penalty system?

It seems like it might create some perverse incentives as the risk escalates.

ebog
That's true. I'm sure that the perverse incentive could be resolved with some system for self-reporting and fixing.
jimbob45
In all honesty, security is just really hard and we're really bad at it. Perhaps an alternative would be to establish standards when it comes to security team headcount and salary in an organization? That way they're incentivized to follow the rules and you have more leeway to punish them if they don't follow the baseline.
csande17
The solution to being bad at security isn't to establish quotas (that's a great way to make sure DevOps engineers get rebranded as Dev-Ops-Sec engineers, and not much else), but to get better at security.

Imagine if any other field said that. "Not burning people's houses down with electrical wiring is just really hard and we're really bad at it." "Keeping bridges standing is just really hard and we're really bad at it." "Flying across the country without killing any passengers is just really hard and we're really bad at it."

mikepurvis
Isn't GDPR supposed to be an attempt at this kind of thing, treating privacy issues as a punishable negative externality similar to pollution?

I only ask because that all makes perfect sense to me, but I see a lot of negativity about GDPR on here, that all it ever does is stifle innovation and produce ever more cookie-agreement popups.

ScottFree
The EU has seen poor results with fines. The big tech companies (Google, Amazon, etc) pay them with the change they find in their couch cushions. Then, they continue doing whatever they want to do. It doesn't dissuade them.
tyfon
We haven't really seen the "end game" deployed by the EU yet (4% of global annual turnover fines).

I suspect when that happens the companies will launch a massive PR campaign and fight it in court but eventually lose. If they pull out of the EU or pay I have no idea.

Edit: seems like 4% of Alphabet's 2018 global revenue [1] is "only" 5.44 billion dollars. Wonder if it can be applied multiple times.

[1] https://www.statista.com/statistics/266206/googles-annual-gl...

gwright
Not taking anything away from your point, I think we should also have real negative consequences for the people who commit security breaches.

There is a real social stigma with regard to committing robbery, burglary, breaking and entering, etc. I feel like there isn't so much with online crime. As a community we really pile the blame on the victim for not being prepared and seem to give the perpetrators a pass for taking advantage of the situation.

Also, there is a real tension between anonymity on the Internet and the ability to identify perpetrators. It is a difficult tradeoff.

wahern
> It is a difficult tradeoff.

It's not a tradeoff we can make because the nature of computer security is that unless you fix the software and networks, you can't even identify the criminals, let alone catch them, presuming they're even in your legal jurisdiction. There's a tremendous asymmetry between attacker and defender in terms of cost+benefit, and it heavily favors the attacker.

In any event, computer crimes are punished with an iron fist in the U.S. What's not criminally prosecuted and punished very well is harassment. Yes, if social media platforms offered less anonymity, we could deal with harassment more easily. But organized crime groups don't need the anonymity of Twitter to pilfer and fence credit card numbers; they have the anonymity of zombie networks and stolen accounts. And you can't address that with harsher penalties. If you penalized that activity with summary execution, the problem would substantially remain. And in fact in some respects it could get worse by deterring security research.

We have no choice but to fix the vulnerabilities. We have to make it more difficult to execute these attacks from a technical perspective, dramatically increasing the likelihood of identification and capture, before we can even hope of using criminal penalties as a substantial deterrent. We're a long way off from that day.

gwright
I agree with you about the asymmetry, which I was alluding to but didn't really spell out. I also agree with you that we are limited by our current software/network infrastructure and fundamental changes in that area may be necessary to get to a better security "story".
AnthonyMouse
The problem with fines is that they happen after the fact and only if the worst actually happens. Tons of companies have totally abominable security and never get breached only out of dumb luck. So you'll still get lots of companies playing Russian Roulette where they make higher profits for ten years before they may or may not suffer a breach and get fined into oblivion, at which point they file for bankruptcy and start over.

You also end up creating a lot of really perverse incentives, like nefarious companies not disclosing data breaches because disclosing them would result in liability even though that's necessary for the victims to take steps to mitigate the damage. There's a reason the NTSB does no-fault investigations.

And a lot of mediocre but still harmful incentives like cargo culting decades-old security checklists to satisfy compliance requirements even though they don't actually result in improved security, but do create a false sense of security.

More than that, the problem is that humans are fallible, so even if you do 99.9% of everything right you can still make a mistake. A company with one security vulnerability can get just as compromised as a company with ten thousand. Does it really make sense to destroy OpenBSD with fines as soon as they have one security vulnerability? Or every random company that uses OpenSSH on a day that a not publicly known 0-day is being exploited in the wild? Or a company that updates to the latest version of some software that claims to have fixed a CVE even though it didn't?

The real problem here is architectural. It shouldn't be possible for someone to breach Equifax and get all your information because they shouldn't have that information to begin with. They shouldn't exist. Your data should be yours, on your device, so that it isn't possible for someone to get it by breaching a third party because the third party doesn't have it.

letstrynvm
These large, rich tech companies are really responsive to 'compliance' with the letter and spirit of laws that might otherwise cause severe losses. Look at, e.g., GDPR, and Google suddenly getting religion about you being able to mass-download your data. Yes, you can legislate solutions to corporate behaviours.
unilynx
Letter, yes. Spirit, I'm not so sure; it feels like Google and FB want to keep doing what they're already doing, and comply where they have to, instead of reconsidering whether they actually need all that data and need these dark patterns for consent (which would be the spirit of GDPR).

And the smaller-than-FAANG companies... too many checklists, contracts and theater ("GDPR requires us to disable autofill on this form") and not enough actual rethinking what they're doing and if they should change their approach to data... so we'll still be seeing plenty of breaches where they shouldn't even be having the breached data

It'll probably be a decade before we see real effect from the GDPR...

AnthonyMouse
"These large rich tech companies" are not the ones getting breached. The likes of Google and Microsoft take security seriously already. The problem is the likes of Equifax and Capital One and government databases with poor security that nonetheless contain all kinds of sensitive information that they shouldn't be aggregating and retaining to begin with and they certainly shouldn't be required by law to collect and store, even though they frequently are right now.

Also:

> and google suddenly getting religion about you being able to mass-download your data.

They had that even before the GDPR.

beatgammit
If you make the fine large enough that it may cause the company to go under, you can bet they'll buy some insurance. And you can bet the insurance companies will have some standards to reduce the risk of a company getting breached, such as doing audits regularly.

For example, if Equifax faced a fine of $5B (more than 1/4 of their market cap) instead of $500M, you can bet they'd be more serious about audits in the future. However, we've conditioned business to expect minor consequences for breaches, so security becomes an afterthought. Likewise, the $5B fine against Facebook is unlikely to change anything, though a $200-300B (20-30% market cap) fine would be much more convincing.

The point isn't necessarily to ruin companies, but to set a precedent that says these types of issues will not be tolerated. It'll force companies to get insurance, and the insurance will have an incentive to avoid collection on the policy.

AnthonyMouse
Using fines that large is how you get them to not buy insurance, because it would cause the insurance to be prohibitively expensive, assuming you could even find someone to sell you a policy that large.

It also doesn't make any sense to base fines on market cap because the two things have nothing to do with one another. All that would really do is cause corporations to restructure their operations to separate the entity that does all the dirty work from the one that owns all the assets, so that the entity that exists in your jurisdiction and is susceptible to being fined is renting/leasing everything and has only a nominal market cap, whereas the one with all the assets is a totally independent company that isn't even in your jurisdiction and never does anything "wrong" because all it ever does is lease and license things to a different entity.

It also seems kind of obvious that even if you could try to impose a fine equal to 20-30% of a company's global market cap, all that would do is cause the local entity to declare bankruptcy, dissolve, and abandon your jurisdiction without actually paying the fine, because that large of a fine would exceed the long-term value of operating there. Especially when there isn't any guarantee it won't happen again if they stay. For that matter it would tend to make companies not want to operate there to begin with, because it's possible to do your best and still fail, and that kind of uncertainty is precisely how you drive businesses away.

But most importantly, it still generally isn't the large tech companies who are the ones with poor security. It's the other industries, especially finance and government, that are collecting just as much data but then doing a much worse job of securing it. What does a fine mean to the DMV or OPM?

knocte
> but lacking in sufficient charisma to create more than a small following.

IMO it's not that Linus has more charisma than Theo, it's simply the network effect of one project over the other.

Andrew_nenakhov
The difference is because of the license. GPL license (created by Stallman, btw) won over BSD license.

The emergence of a GPL-licensed kernel was inevitable. If Linux hadn't appeared when it did, some other kernel would have. Maybe folks would have had more motivation to work on Hurd, and it would be the main kernel for everything now.

proverbialbunny
Why is that the case? Isn't the BSD license more relaxed? I would think companies like RedHat would have liked the BSD license more.
dleslie
It was inevitable insofar as, at least, the GNU project was intending to make one.
Miraste
Its more relaxed nature is the problem. Linux is strong not because of Red Hat or any of the other contributors individually, but from their collective efforts--which they're required to make open by the GPL. BSD doesn't have this, so you end up with companies like Apple building whole ecosystems around a BSD kernel (not so much now, but once upon a time) without improving BSD as a whole at all.
spacemanmatt
Seems like the history of science and technology is littered with crackpots (alchemists, egotists, etc) who made amazing discoveries.
dleslie
It wasn't Linus; it was RedHat, SuSE, and the rest of the early commercial Linux endeavours.
ggg3
which makes the permissive BSD license even worse in hindsight.

It allowed Microsoft and Apple to profit hugely while giving nothing back. And in the end the GPL produced a much better product that was used by the industry despite the less permissive license.

...and today we are making the exact same mistake with AGPL.

knocte
Right, this is what I said.
ur-whale
"lacking in charisma" sort of implies that they have too little, but still some and doesn't really account for how they actively piss off other technologists and push them away, however right they may be technically.
tyingq
Tanenbaum is perhaps somewhere in the middle. Huge install base, but little notoriety.
mitchtbaum
the middle is kind of a funny territory

appearances would say it's between two polar complements

as some bridges extend beyond the scene, even from behind it

reminds me of 'a message that discredits the medium that carries it'

~~

sun and moon watercolor painting, "Polar Complements" original watercolor painting, zentangle art, wall art, decor living room decor

https://www.etsy.com/dk-en/listing/626399645/sun-and-moon-wa...

https://imgur.com/2Te9EBF

https://imgur.com/XIWBmum

~~

(maybe to slow down thy downvotes: I don't intend for this comment to make a whole lot of sense to you, right at this moment.. hopefully you'll at least enjoy the painting...)

throwaway2048
Minix only has a "huge install base" because of Intel ME firmware junk.

It's not really meaningful, because the firmware could be pretty much any arbitrary OS and it would make zero difference to any end user.

Tanenbaum himself didn't even know about Intel using MINIX in their ME firmware until recently, so that should show you how much relevance it has.

morpheuskafka
Yes, but it is significant that an industry-leading company chose it over an RTOS or other embedded system for a high-volume project.
throwaway2048
Why? Intel hasn't contributed anything to the project to my knowledge.
beatgammit
I know they at least formally requested some changes, but I don't know if that means they contributed. I'm guessing they didn't want the public to know they were using it, likely because of the nature of IME.
hawski
But with that in mind would Android count for Linux?
yellowapple
I'm still looking forward to the day when someone figures out how to install X11 on Intel ME, thus ushering in the Year of the MINIX Desktop rather instantaneously.
red_phone
This is, at least in part, a result of MINIX using a highly permissive license. It’s easier to use an open source OS in relative secrecy if you aren’t required to release your modifications. And an organization with deep technical expertise like Intel would not likely need much assistance from the community for their implementation.
ryacko
They could still require redistributions to include a copyright notice. I think the real issue is that Intel might engage in product binning without changing the source code.
yjftsjthsd-h
I'm pretty sure they just forgot. MINIX is under a license that does actually still require a copyright notice, and I'm pretty sure that after the news came out about it being used, they went and fixed the fact that they'd apparently forgotten to include it.
chithanh
> This is, at least in part, a result of MINIX using a highly permissive license.

Not at all. MINIX was actually Intel's second choice; they first tried to fit Linux into their new x86-based ME. But the maintainers were uncooperative:

https://www.phoronix.com/scan.php?page=news_item&px=MTY4MzM

Intel then submitted similar patches to the MINIX kernel, which subsequently got accepted.

https://www.cs.vu.nl/~ast/intel/

beatgammit
And this shows that the license really isn't that critical. Vendors don't like to maintain operating systems, so they have a vested interest in upstreaming their modifications. Why maintain something yourself if you can get someone else to do it?

BSD licensed projects see plenty of contributions, they're just not as popular as Linux because of historical reasons. I and most BSD fans blame the AT&T lawsuit for BSD losing popularity and Linux gaining popularity. That being said, BSD is still quite popular, though somewhat niche.

Companies not upstreaming code will happen regardless of the license. Plenty of companies maintain Linux change sets because they're not obligated to release them, but plenty more upstream their changes when not strictly necessary. It just depends on the value proposition of releasing improvements.

voldacar
I think Stallman is more of a Tiresias
rhaps0dy
How so? I had to look Tiresias up on Wikipedia, a blind prophet, but I still don't see the connection. Would it be possible for you to please explain?
lettucehead
Tiresias' prophecies come true because of a poetic or literary recursion built into the Oedipus cycle or the Oresteia or whatever. (I'm AFK, where K means "my library.") So when the guy to whom Tiresias prophesies that said guy will commit patricide in fact kills his father, he does so because he tried to not kill his father, and in attempting to escape his destiny, inflicted it upon himself. Stallman's apocalyptic ravings have had their basic gist "come to pass," in this interpretation, because of the actions of the developer community or tech community or whatever, in the struggle to develop software, which software has spiraled beyond all reasonable control, even as the guy killed his father by trying not to do so. Thus this wise guy is hella Greek. And this parent poster is also a wise guy. And this patricide is about to become all-too-ironic, because throwing out one-liners of the parent's (post's) sort is a bleak literacy, which I have now killed. And literacy rises like a phoenix in my child post, and the guy who kills his father is blinded as punishment, and becomes a wandering seer himself. So there.

TLDR is that prophecies are self-fulfilling. Tiresias is the OG self-fulfilling prophet.

voldacar
The analogy of Cassandra is inferior because:

1. Cassandra is usually at the center of attention, an object of desire to Agamemnon and his troops, an object of hatred and jealousy to Clytemnestra and Aegisthus. Tiresias isn't nearly as ostentatious, and mostly exists passively in the background, waiting until someone else asks his opinion on something. Tiresias gives off a kind of awkward vibe, much like Stallman, compared to Cassandra, who's totally a social butterfly.

2. Cassandra is cool and sexy, Tiresias is a blind old dude. Richard Stallman eats stuff off his foot and often looks like he hasn't showered since the last emacs release.

3. Cassandra's prophecies are very straightforward, Agamemnon and his buddies understand what she says and even listen to her to some degree, they just don't care enough to do anything. Tiresias is much more cryptic and is always derided until the denouement when it is revealed that he was right all along, just in a way that nobody else could have foreseen. Likewise, Stallman's insights into the future of our technological dystopia seem absurd and maniacal until they inevitably come true a few years later.

I like your comment though =)

dleslie
I rather enjoyed this exchange between you folks; thanks.
3xblah
djb also comes to mind.
yazaddaruvala
lol, I enjoyed it, but you might consider citing any Greek mythology when it's obscure and easily confused with a database.
yborg
The story of Cassandra is one of the most well-known of the Greek myths and the allusion a standard rhetorical device in at least the English language. The fact that anyone would have confused the reference with a database in this context makes me sad for the state of 21st century humanities education.
draw_down
A corollary is that our field is not as coldly rational and technical as we'd like to think.
enriquto
> History will show that Theo was right in a manner similar to how Stallman was right: technically correct analysis and eerily accurate predictions, but lacking in sufficient charisma

Your assessment of charisma is foreign to me. I cannot imagine anybody who is more charismatic than these two men, Theo and rms.

alxlaz
Lacking in charisma hardly describes a person like Theo de Raadt. I've never met the man, but I doubt that someone who lacks charisma could have led a dedicated, opinionated team through more than 40 releases (counting OpenBSD releases alone!), over a period of almost 25 years now, a team which not only developed a sturdy (if equally opinionated) operating system but also a bunch of highly successful projects like, I dunno, Open-bloody-SSH.

I mean, maybe it's not too much charisma but I'd be glad if I had only like 10% of that...

Florin_Andrei
It simply means he can convince some people who are open to technical arguments and do not think too differently from him. And that's about it.

Charisma means being able to persuade all kinds of individuals, regardless of their inclinations and initial positions.

riffraff
the same applies to Stallman, if you consider that the FSF has been running for 34 years.

But I believe the grandparent didn't mean they lack charisma, but that they didn't have enough to sway the general public.

toyg
With charisma, it's not just quantity but quality. What appeals to some will repel others. De Raadt has qualities that endear him strongly to some, but are irksome to many, many more. See for example the trollish attitude to Comic Sans.
tyri_kai_psomi
And Stallman is very off-putting to many with how unabashed and unashamed he is about some of his more extreme progressive views. Unfortunately for him, that excludes almost half of the potential support he would receive.
pgcj_poster
I don't think Stallman's political views are that extreme when you consider that a lot of free software advocates are actual communists.
lowtolerance
I’ll take a commie over a pedophile apologist any day.
Koshkin
I think deep down all people are communists - they all like things for free.
beatgammit
I disagree. Just because people like free stuff doesn't mean they want to force others (and themselves) to provide that free stuff. Some certainly do, but there are plenty who don't.
Koshkin
But people are "forced" to work anyway, free stuff or not.
lone_haxx0r
"Forced" by human nature (hunger, cold, etc), but not forced by other people.
lone_haxx0r
I'm a right-wing libertarian, and I've never felt excluded or anything like that because of his leftist views. This for many reasons:

1) He's not a hypocrite in any way. He's honest and you can tell that he has truly thought about his opinions.

2) In my country, I get bombarded with a relentless stream of leftist ideology. The worst kind of leftism: the lazy 'slogan' leftism, from people too stupid to realize the full implications of their discourse. In comparison to them, Stallman is a moderate, thoughtful, sweet person.

3) I see his leftism as orthogonal to his software freedom ideology. I use and support free software, and I'm more pro-capitalism than Milton Friedman.

AmericanChopper
I don't care about his views on that kind of politics at all. But when it comes to free software he is an ideological maximalist. Take a look at how he uses computers personally; his values (at least currently) don't fit in with most users' expectations of what personal computing should be. His contributions aren't in question, and his voice is a valuable one to have in the choir, but he's not a person willing to accept a middle ground, and as such the RMS way of doing things is never going to be widely accepted.
TheSpiceIsLife
From my perspective, The RMS Way and Libertarianism have this in common:

They aren't meant to be widely adopted, but are meant, rather, as a critique of the status quo.

sifar
>> but he’s not a person willing to accept a middle ground

You need somebody as guideposts to stand at the extreme ends. I am glad RMS stands at his end.

thrwayxyz
To go to the nuclear option Stallman is pro voluntary pedophilia. This puts him so outside the mainstream of opinion that he might as well be shoving Jews into ovens himself.

The honesty is extremely refreshing for a small subset of people who will be fanatically loyal to him. These people will also be some of the most talented and difficult to work with, much like Stallman himself. In my circle of friends technical competency is directly proportional to your agreement with Stallman.

stuartd
> he might as well be shoving Jews into ovens himself.

In what way is this not utterly, appallingly offensive? For shame.

thrwayxyz
It was meant to be.

Now try and think it through instead of having knee-jerk outrage as your reaction.

dvfjsdhgfv
> 1) He's not a hypocrite in any way. He's honest and you can tell that he has truly thought about his opinions.

I think that's crucial. I remember he was discussing one of the aspects of software freedom with my friend and she said "I don't agree." He answered: "No problem, you have your view, I have mine, we don't have to agree on everything."

It struck me, as I had expected he'd try to convince her and win her over to his ideas.

toyg
RMS has seen (and likely started) more flamewars in his lifetime than most of us ever will. He has clearly developed (or adopted) a meta-system to deal with differing views. If you ever talk to a smart priest, it's the same playbook: when it comes to deep beliefs, there is no point hammering something in your face, because it won't stick. Either you get to "truth" out of your own reasoning, or it's not worth it.
ertian
To be fair, they're both making good, legitimate arguments. It could be that people listen to them in spite of their (lack of) charisma, rather than because of it.

There's more to leadership than just charisma.

cat199
I would posit that:

    impact = charisma * funding
so a 0.1 charisma score of some faceless tech exec * 1B in VC funding goes a lot further than a 2.0 charisma score * 50k of grassroots funding.
ixtli
I wish i could favorite a comment.
grzm
You can. Click on the timestamp of the comment, and then click "favorite"
ixtli
Woah thanks. Dunno why someone downvoted this lol
grzm
I'm not sure, but commenting on downvoting is likely to attract downvotes as it's explicitly against the guidelines.

https://news.ycombinator.com/newsguidelines.html

ixtli
Fair. But thank you for the information.
WatchDog
But funding is also a function of charisma, among other variables.
cat199
I would posit that:

    funding = charisma * suitability for capitalism
so a 0.1 charisma score of some faceless tech exec * 1B in VC funding goes a lot further than a 2.0 charisma score * 50k of socially oriented funding.