HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Linus Torvalds on why desktop Linux sucks

gentooman · YouTube · 90 HN points · 13 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention gentooman's video "Linus Torvalds on why desktop Linux sucks".
YouTube Summary
Linus highlights several pain points with regards to desktop Linux.
From DebConf 14 https://www.youtube.com/watch?v=5PmHRSeA2c8

0:00 Application distribution is a huge PITA
2:52 Distros break things and ignore backwards compat.
5:53 Distros waste too much effort on package management
8:26 Linus roasts his own package maintainer
8:50 Windows has a better app distribution experience
9:29 Linux distros expect users to compile everything

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
> What do distributions have to do with the need for containers?

A lot (but not everything).

It's important to recognise where the "stable API/ABI boundaries" are. In Linux, the stable boundary is the kernel. Userspace is unstable. There are many distributions, with many small and big differences. Different C runtimes, different folder structures, different configs, modules, etc...

In Windows, the stable boundary is the Win32, COM, and .NET APIs, which are in user space. The kernel boundary is not stable[1] however, which also matters.

Even Linus Torvalds has trouble distributing his hobby dive-computer software to Linux![2] He has no issues with MacOS and Windows, because they have stable APIs and distribution mechanisms. With Linux, the kernel is stable, but essentially nothing else is. If you want to distribute software to everyone, then this is a complex problem with many moving parts that you have to manage yourself.

Containers sidestep this by packaging up the distribution along with the software being deployed. This works because the kernel ABI is stable.
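To make that boundary concrete: the stable thing a container relies on is the kernel's syscall interface. A minimal C sketch (illustrative only; real programs normally go through libc rather than raw syscalls):

    /* The "stable boundary" in practice: talking to the Linux kernel
     * directly via the syscall ABI, bypassing distro-specific userspace.
     * Syscall numbers and semantics stay stable across distros and kernel
     * versions, which is what lets a container ship its own userspace
     * and still run anywhere. */
    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "hello from the stable kernel ABI\n";
        /* write(2) via raw syscall: fd 1 is stdout */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }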

[1] In the Windows world you don't really need to package up the "Windows distribution" because there is only one: Microsoft Windows. Conversely, Windows containers aren't truly portable because the Windows kernel ABI isn't stable. However, Microsoft changed this with Windows 11 and Server 2022: the kernel ABI is now "stable", or at least sufficiently stable for Server Core containers to be portable for 5+ years as-is.

[2] He's complaining about "desktop applications", but the exact same rant would apply to server software also. This one talk made me understand containers and why they're so important in the Linux world: https://www.youtube.com/watch?v=Pzl1B7nB9Kc

mr_toad
> In Linux, the stable boundary is the kernel.

Different distributions run different kernels. A container from a different distribution will probably work, until it doesn’t.

AlphaSite
I've heard this before, but the only case I've heard of people running into issues is if you hit a kernel bug or depend on a really new feature that's not present in the kernel yet.
Maursault
> Different distributions run different kernels.

Not so much, no. Linux is the kernel, and all Linux distros, every single one, employs Linux as its kernel, or else it would not be Linux. Stepping back from my pedantry, I believe what you must mean is that different Linux distributions will customize the kernel a bit. That doesn't change what it is. Just because you have automatic windows and I have cranks, and you have leather interior and I have plush doesn't mean our vehicles aren't the exact same year, make and model.

mr_toad
They customise, backport patches, and use different kernel versions.

If you run kernel version A in Docker and try to run a container that was designed to use kernel version B, you may run into problems.

josephcsible
Note that because of Linus's strict policies on userspace compatibility, A>B is basically guaranteed to always be safe, and only A<B will ever cause problems, unless distros add patches that break things.
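As a concrete (and purely hypothetical) illustration of the A-versus-B rule: a binary that depends on newer kernel features can check the running kernel at startup with uname(2). The 3.10 baseline below is an arbitrary example, not anything from this thread:

    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) != 0) { perror("uname"); return 1; }

        int maj = 0, min = 0;
        sscanf(u.release, "%d.%d", &maj, &min);  /* e.g. "5.15.0-generic" */

        /* Require at least kernel 3.10 (hypothetical feature baseline):
         * running on an older kernel (A < B) is the risky direction. */
        if (maj < 3 || (maj == 3 && min < 10)) {
            fprintf(stderr, "kernel %s too old, need >= 3.10\n", u.release);
            return 1;
        }
        printf("running on kernel %s, OK\n", u.release);
        return 0;
    }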
Maursault
Sure, I can imagine, but that is merely a bug, or perhaps the compatibility issue stems from misuse of a general but inaccurate claim of "Linux compatible." My expectation is specific compatibility is stated plainly in the documentation and on the box, if there is one, without ambiguity.
moondev
Even if Windows containers are not as portable, the ability to manage them from the same Kubernetes control plane that manages Linux containers seems pretty powerful. It also enables you to do it outside of Azure/Service Fabric, on other clouds or on metal.

https://kubernetes.io/docs/concepts/windows/

https://docs.microsoft.com/en-us/azure/aks/windows-faq?tabs=...

jiggawatts
You're trading one convenience for a world of hurt if you use Kubernetes with Windows. Just take a casual glance at the "Windows containers" issues list on GitHub: https://github.com/microsoft/Windows-Containers/issues

Some of the very recently closed issues were jaw-dropping, such as totally broken networking in common scenarios.

DNS resolution is very different in Windows compared to Linux, making much of the "neatness" of Kubernetes pod-to-pod communication not work.

There is no maximum memory limit in Windows kernel "job objects" (equivalent to cgroups), so one memory leak in one pod can kill an entire node or cluster. This is very hard to solve, and I've seen it take out Service Fabric clusters also.

Etc, etc...

moondev
Thanks for sharing, it all seems terrifying but still very interesting to explore. I was curious how the Windows nodes would even join - turns out they support cloud-init just like Linux machines.

https://cloudbase.it/cloudbase-init/

In theory this means they could then be managed by something like KubeVirt instead of treating them like a worker node. https://kubevirt.io/user-guide/virtual_machines/startup_scri...

Excited to see this space continue to evolve for sure.

jiggawatts
Cloudbase-init is interesting, but Microsoft doesn't support or (apparently) use it. There's no mention of it in their docs: https://www.google.com/search?q=site%3Adocs.microsoft.com+%2...

New Windows VMs are generally built using PowerShell DSC or a similar native tool.

Apr 01, 2022 · Flankk on Farewell, Elementary
Linus Torvalds also has many criticisms of the Linux desktop.[1] None of the commercial applications I have bought will run on Linux without wasting time tinkering. For programming, I'm forced to use Xcode because it's required on the App Store. I couldn't use Linux even if I did want to use something worse than macOS.

[1] https://www.youtube.com/watch?v=Pzl1B7nB9Kc

mcronce
"Worse than" lol
DebConf14: Linus on breaking library ABIs:

https://www.youtube.com/watch?v=Pzl1B7nB9Kc&t=168s

Linus Torvalds about this: https://www.youtube.com/watch?v=Pzl1B7nB9Kc

Distros (Debian in particular comes to mind) have some really annoying packaging rules, and as the maintainer of a Go program I find it a huge pain, so we decided to just set up a repo with https://cloudsmith.com/ instead of trying to deal with that. They require every dependency (indirect or not) to be packaged separately. We don't have the time for that. Way simpler to just build a static binary and ship it.

I watched a great rant by Linus T himself on how distributing code for Linux is a "pain in the arse", but is comparatively easier on Windows and MacOS: https://www.youtube.com/watch?v=Pzl1B7nB9Kc

Something I took away from that is that it is Linus himself, personally, who is responsible (in a way) for Docker existing.

Docker depends on the stable kernel ABI in order to allow container images to be portable and "just work" stably and reliably. This is a "guarantee" that Linus gets shouty about if the kernel devs break it.

Docker fills a need on Linux because the user-mode libraries across all distributions are a total mess.

Microsoft is trying to be the "hip kid" by copying Docker into Windows, where everything is backwards compared to the Linux situation:

The NT kernel ABI is treated as unstable, and changes as fast as every few months. No one ever codes directly against the kernel on Windows (for some values of "no one" and "ever".)

The user-mode libraries like Win32 and .NET are very stable, papering over the instability of the kernel interface. You can run applications compiled in the year 2000 today, unmodified, and more often than not they'll "just work".

There just isn't a "burning need" for Docker on Windows. People that try to reproduce the cool new Linux workflow however are in for a world of hurt, because they'll rapidly discover that the images they built just weeks ago might not run any more because the latest Windows Update bumped the kernel version.

I read all the way through this and wept: https://docs.microsoft.com/en-us/virtualization/windowsconta...

> Right, if you want Win32 API [...] Okay but which updates [...] Ok these are real things to ask for, that someone could feasibly attempt to do.

These are examples. I'm not asking to use Win32 or add updates to Gtk1 or Gtk2. The important part is keeping ABI and API stability. Also, I refer to what I'd like to see in a hypothetical library that did things right, not the current ones that either keep doing everything wrong (Gtk) or couldn't stay compatible even if they wanted to, due to the tech they're based on (Qt - though chances are that even if Qt could stay API- and ABI-compatible, they wouldn't, because they're primarily middleware for paying corporations and only secondarily a platform library that happened by chance thanks to KDE).

> OpenGL is not really a good example because while it might be easy to add some things like vsync and triple buffering, other things (like GPU parallelization) are basically impossible (or extremely impractical) to try to hack into in the GL implementation. The fixed function mode is always going to have its limitations.

It is a great example because it shows that it can remain API and ABI compatible for decades. Also, both CPU and GPU parallelization could be implemented with extensions (though GPU parallelization is something that GPU vendors seem to be moving away from) in a variety of ways, e.g. using display lists in different threads, with a per-thread shared context that records a thread-local command list for the operations inside the display list, which can later be executed via the render thread. Similarly, different contexts can be "bound" to different GPUs and yet allow for resource sharing. Those may or may not need new APIs (most likely will) but are still possible. Or a more low-level approach like Nvidia's command list extension could be used instead.

> But someone has to actually put in the work to get it. Keep in mind, it is not helpful to keep requesting that small open source projects do things exactly how a trillion dollar corporation is able to. You are wasting your time asking for that.

Well, I'm not explicitly asking for someone to do anything, especially not from projects that do not seem to be interested in any of that. My comment was about what I'd like to exist, not about what I ask of someone or some specific project.

Also

> This is an entirely separate problem of which there are multiple solutions to. Let's not get into this though, take it one step at a time.

This isn't a separate problem; it is a problem that exists because libraries do not have stable ABIs, and programs cannot rely on libraries that are on the system to provide fundamental functionality that won't break (or even disappear) in a couple of years (or whatever other short timespan).

There is this great commentary by Linus from almost a decade ago...

https://www.youtube.com/watch?v=Pzl1B7nB9Kc

...where he went into the issues, and yet they are still there, because instead of trying to fix the problem at the core (which is what he explicitly mentions about ABI stability, as is done in the Linux kernel - BTW, that would also be a good example), the developers who work on these projects decided that the best solution isn't to settle on and stick with a stable ABI, but to provide a standardized form of the "bundle your own libraries" approach - which barely solves anything related to not having stable ABIs.

The first sentence is probably true. The third one is objectively false:

1. Poor quality drivers. The money is not there. Intel and AMD might throw a couple of engineers on it but the process is always catching up: slow and incomplete. Think of video, audio, power management, etc. I recently bought an AMD laptop just to discover that modern standby and my headphone jack are not supported, that my internal mic and speakers occasionally stop working and are otherwise too quiet, etc. After two kernel releases all these problems persist, even though they were acknowledged by the (very resource-constrained) developers who are working on them. "Linux-ready" laptops might be a safer option, but in general they are not cheap and they share most of the shortcomings listed here; it's just a safer gamble and you pay the insurance premium. Of course, if Linux had been sold OEM for 30 years with hardware vendors providing state-of-the-art drivers even before releases, this would be a different story.

2. The state of the display server. Multiple HiDPI screens with different scaling factors are not properly supported. This is an all too common requirement nowadays: FHD laptop + FHD/UHD external screen. X+xrandr is slow and buggy with most video drivers. Even after ten years, Wayland adoption is slow both by compositors and by apps. And critical functionality is missing or in alpha-state, for example screen sharing (a must have in these WFH days), be it because of the design of Wayland itself or because it depends on technologies that are still more immature (PipeWire, portals, etc).

3. Fragmentation at all levels: distribution, package manager, desktop, toolkit. From the perspective of an app developer you get a tiny market share with no clear target platform and a propensity to break backwards compatibility every now and then. This of course enables a negative feedback loop with respect to my previous points. Must read/watch from kernel developers: Molnar: http://files.catwell.info/misc/mirror/ingo-molnar-what-ails-..., Torvalds: https://www.youtube.com/watch?v=Pzl1B7nB9Kc. These problems are not unknown to Windows users (WinForms|WPF|UWP|WinUI and MSI|MSIX|exe, for example) but are way worse in Linux.

I remember a Linus Torvalds video on this exact problem. I found it both blunt and insightful:

Linus Torvalds on why desktop Linux sucks : https://www.youtube.com/watch?v=Pzl1B7nB9Kc

AussieWog93
Oh my God, Linus hit the nail on the head there. Prior to stabilising on a policy of "AppImage or GTFO", this was by far the biggest headache when it came to supporting Linux.
This is common indeed, and isn't a bug in the affected source languages for reasons. How it displays when printed is irrelevant.

Here's Linus Torvalds explaining it better than I could: https://youtu.be/Pzl1B7nB9Kc?t=263

And sure you can transfer your string that someone else does not consider a string using alternative mechanisms, but then you are only not doing anything wrong because you are not doing it at all for entire categories of languages. There is no integration story for these, and once one mixes with optimizations like compact strings or has multiple encodings under the hood one cannot statically annotate the appropriate type anyhow. And sadly, adapter functions won't help as well when the fundamental 'char' type backing the 'string' type is already unable to represent your language's string.

I also do not understand where the idea that a single language always lives in a single component comes from. Certainly not from npm, NuGet, Maven or DLLs.

Extended this post to provide additional relevant context. It's not a bug, it's a feature.

conrad-watt
I agree with the linked quote - it captures an important reason why it is valuable to _enforce_ sanitisation at component boundaries, rather than merely documenting "please don't rely on isolated surrogates being preserved across component boundaries" (which would be a problem if we didn't enforce it, since an external component you don't control may be forced to internally sanitise the string if it relies on (e.g.) an API, language runtime, or storage mechanism that admits only well-formed strings).

EDIT: since a whole other paragraph was edited in as I replied, I will respond by saying that within a component, your string can have whatever invalid representation you want. Most written code will naturally be a single component (which could even be made up of both JS and Wasm composed together through the JS API). The code may interface with other components, and this discussion is purely about what enforcement is appropriate at that boundary.

EDIT2: please consider a further reply to my post, rather than repeatedly editing your parent post in response. It is disorientating for observers. In any case, my paragraph above did not claim that there will be one component per language, but that the code _one writes oneself_ within a single language (or a collection of languages/libraries which can be tightly coupled through an API/build system) will naturally form one component.

dcode
Sure, we could resolve this problem by either a) giving these languages a separate fitting string type to use internally or externally (Rust, for instance, can use 'string' everywhere) or b) integrating their semantics into the single one so they are covered as first-class citizens as well. And coincidentally, option b) would fit JavaScript perfectly, which makes it rather surprising that it is off the table in a Web standard. Yet we are polling on having a "single" "list-of-USV" string type, likely closing the door on these languages forever, with everything that implies.
conrad-watt
There is no problem, assuming that one believes that the list-of-USV abstraction (i.e. sanitising strings to be valid unicode) is the right thing to enforce at the component boundary, _including_ when the internals of the component are implemented using JavaScript.

I appreciate that this is exactly the point where we currently disagree, and accept that I won't be able to convince you here. However, the AS website's announcement did not make the boundaries of the debate clear.
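For concreteness, "sanitising to well-formed strings" at a boundary amounts to rejecting (or replacing) lone UTF-16 surrogates, so that every string maps to a list of Unicode scalar values. A minimal C sketch of such a check (the function name is invented for illustration):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Returns true iff the UTF-16 buffer contains no lone surrogates,
     * i.e. it can be interpreted as a list of Unicode scalar values. */
    bool utf16_well_formed(const uint16_t *s, size_t n) {
        for (size_t i = 0; i < n; i++) {
            if (s[i] >= 0xD800 && s[i] <= 0xDBFF) {       /* high surrogate */
                if (i + 1 >= n || s[i + 1] < 0xDC00 || s[i + 1] > 0xDFFF)
                    return false;                         /* lone high */
                i++;                                      /* valid pair */
            } else if (s[i] >= 0xDC00 && s[i] <= 0xDFFF) {
                return false;                             /* lone low */
            }
        }
        return true;
    }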

Linus recently said something that left me thinking that Linux on the desktop has much deeper problems that need to be addressed before it's worth even thinking about the fine details of GUI toolkits.

https://www.youtube.com/watch?v=Pzl1B7nB9Kc

rodelrod
> Linus recently said

At DebConf 2014

mumblemumble
Ha. That's what I get for looking at the video posting date and assuming it means anything.
pengaru
To be fair, 2014 is recent in Debian time.
AnIdiotOnTheNet
True, but it isn't like the things he talks about have changed significantly since then.
qwerty456127
Yet Linux has worked perfectly for me and for the many people/offices I migrated to it over the past decade (they didn't need any Windows-only apps). For about half that time, Windows and Mac users haven't even had to learn or tolerate anything new - everything is familiar (though today users of Ubuntu had better learn GNOME 3 workspaces, even that isn't necessary) and eye-candy.

So I really have zero idea of what problems he might mean. Besides the lack of native Photoshop and Visual Studio, the only thing that always annoyed me in desktop Linux was NumLock quirks. There was also a problem with games, but apparently that's no longer the case.

Okay I had to look up what Flatpak was since the link doesn't begin with a summary, abstract, thesis statement, or overview of whatever the subject is. Would it be so hard to start with "Flatpak, a cross distribution package format [1] for distributing Linux applications, is a security nightmare. We'll outline some flaws we believe to be deal breakers"?

On the other hand, this seems like an attempt to fix application packaging on Linux which is something Linus has, rightly, complained about very publicly [2].

[1] I don't know what to call it exactly, is it a package format or a package manager? Could be something else entirely, I'm not sure.

[2] https://www.youtube.com/watch?v=Pzl1B7nB9Kc

I think Linus really hits the nail on the head here. It's the fact that core libs, like glibc, are CONSTANTLY changing their ABI, making it damn near impossible to ship a binary from one version of the platform to the next: https://www.youtube.com/watch?v=Pzl1B7nB9Kc

You can still run Windows 95 apps on Windows 10. But running an Ubuntu 12.04 binary on Ubuntu 21.04? No way.

AnIdiotOnTheNet
It is so hilariously bad that Linux has an easier time running Windows applications than it does Linux applications from an older (or newer, or sideways) distro. It's been like this for decades, it has been complained about for decades, but there seems to be very little desire to fix it.
zozbot234
This is what containers are for. Wine is nothing special, it just installs a Windows userspace container alongside your existing system.
diegocg
Glibc constantly changing their abi? Uh?
ynik
glibc is perfectly backwards compatible. We still build software on Debian 7 (glibc 2.13 from 2011) and the binaries work just fine on Debian 11 (and also on most other glibc-based distributions). But you need to be careful to only use libraries that provide long-term binary compatibility. Most libraries you can find in the system package manager don't do this.

Also, on Linux it's weirdly difficult to do it the other way around: use the current distribution to build binaries that also run on older distributions. On Windows this is trivial: I can develop on Windows 10, with the current VS2019 and the current Windows 10 SDK, and a simple "#define _WIN32_WINNT 0x600" is sufficient to make my binary work all the way to Windows Vista (2006).

On Windows it's normal to ship libraries in the application directory instead of depending on a global /usr/lib/ installation; so non-Microsoft libraries that don't maintain a stable ABI are much less problematic.

Also, good luck if you're on Linux and want to use an up-to-date gcc (e.g. gcc-8 for C++17 support) on an older distribution. Building gcc from source is easy, but the resulting gcc-8 will produce binaries that require symbols from libgcc-8, which the system libgcc won't have. We actually ended up patching gcc to disable the optimizations that introduce uses of new symbols not yet present in deb7 libgcc.
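For the other direction ynik describes (build on a new system, run on an old one), one workaround that exists - not mentioned in the thread, so treat this as an aside - is glibc's symbol versioning: since glibc keeps old versioned symbols around forever, a build can pin its references to an old version. A minimal sketch, assuming x86-64 where the baseline version is GLIBC_2.2.5:

    /* Force references to memcpy in this translation unit to bind to the
     * ancient versioned symbol instead of the newest one (which on modern
     * glibc would be e.g. memcpy@GLIBC_2.14, absent from older distros). */
    #include <string.h>
    #include <stdio.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(int argc, char **argv) {
        char dst[64] = {0};
        const char *src = argc > 1 ? argv[1] : "portable";
        size_t n = strlen(src);
        memcpy(dst, src, n < 63 ? n : 63);  /* binds to memcpy@GLIBC_2.2.5 */
        puts(dst);
        return 0;
    }

Doing this for every versioned symbol a large program uses is tedious, which is presumably why building inside an old-distro environment, as ynik's team does, is the more common approach.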

May 08, 2021 · 81 points, 108 comments · submitted by IvanGoncharov
SV_BubbleTime
Worth watching. But if you don't... [Linux desktop sucks because application writers don't make binaries that "just work", pushing the user to always have to recompile for their system; the kernel tries to address this by not breaking things, but the distributions go and screw it all up, because breaking is fine so long as it's "improving".]

Tons of contrary points, I'm sure, but duh. It's really nice to be able to run Windows 3.1 code on Windows Server 2019; the idea is nice, practicality aside.

As someone who could run Linux desktop, I see it as the same reason I don't root an Android phone: I do enough tweaking and tinkering in my own code that sometimes I just want tools that immediately work, so I can do my other work. Linux as a system: great. As a desktop: I'll never set one up for someone I don't want to hear from all the time.

macksd
Instead of thinking of Linux for non-technical users, I think you can actually segment non-technical users into people who use a computer and people who use a browser. People who just use a browser for virtually everything (i.e. people who can use a Chromebook and think that it's basically the same as a normal laptop) are actually very common and can use Linux just fine. I had a friend who needed a home computer but was desperately poor. I gave them a machine I picked up at a garage sale with Ubuntu on it. They've been using it for years and have never looked back. After they had been using it for a while I mentioned something to them about their computer having an unusual operating system, and they really didn't have a clue what I was talking about. Because they had never paid attention to the operating system beyond clicking the browser icon.

Now yeah that's not true for people who will want to plug in different printers all the time, and install Microsoft Office specifically, etc. Surprisingly I think it's actually the people with a medium amount of technical knowledge that have the most trouble adapting to Linux.

But even as someone who virtually never uses anything but Linux, I don't really get the constant drive to have this "year of the Linux desktop". It works fine for me and I love it. I make my smug comment when Windows and OS X users have problems but I'm not in a rush to help them with their problems or give them new ones. If you're bothered enough by the mainstream options to switch, you can learn a few new things. If not, I don't see the problem.

Edit: and I think Linus focuses too much on Debian here. Even in 2014, Ubuntu was already a very easy-to-use, out-of-the-box distro. Even Fedora was pretty good. My distribution doesn't package Zoom, but if I go to their website and click install I get the right binary, and the experience is comparable in ease to Microsoft Windows installers. If you're on a fringe distro, you almost certainly love the power that comes with (and requires) building everything from source regularly. So again, I don't see the problem. If you want to be included in Debian, using Debian's shared libraries doesn't seem like an unreasonable policy. It's quite easy to ship an application that is easy to install and upgrade independently. Google Chrome is another app that does it super well. More people would do it if there was demand, but there isn't, and I don't think that's just an app packaging problem at all.

schmorptron
For Zoom: Even better, you just type flatpak install zoom or snap install zoom and get a community maintained version that has many potential incompatibilities taken care of. Or any user that doesn't want to use the terminal can just search gnome / ubuntu software and find zoom in there and click install.
myk9001
>> It’s really nice to be able to run Windows 3.1 code on a Windows Server 2019

You can also run the same version of a newest app on Windows 7 and Windows 10. At least, you could back when 7 was still supported, and you probably still can, but I didn't check.

Try installing a package targeting Ubuntu 21.04 on Ubuntu 20.04 LTS? :) (Replace Ubuntu with a distro of your choice in the previous sentence.)

For all its flaws -- and it has many -- Windows is a platform while a Linux distro is a ball of tightly-coupled packages.

>> I just want tools that immediately work.

Totally agree with you here.

schmorptron
Flatpak solves this :)
bachmeier
> I do enough tweaking and tinkering in my own code that sometimes I just want tools that immediately work

Interestingly, that's exactly the reason I continue to use Linux. Depends on what you're doing, but for the boring work I do, I never have to tinker. Install and use. Once in a while do a boring update.

bawolff
> Windows 3.1 code on a Windows Server 2019

Can you? I was under the impression 64bit windows dropped support for 16bit windows.

SV_BubbleTime
Good point. IDK, it was figurative, and I may have gone back too far.
yyyk
Windows 3.1 had some 32bit support - via Win32s. I think that these apps should still work.
loudtieblahblah
I use Linux top to bottom in my home.

My work laptop. My wife's PC. Our plex/jellyfin media server.

My sister's pc and my elderly parents PC are also ubuntu.

I have found my support time for all of them dramatically went down after I got them off windows.

Windows slowing down over time no longer happens, lengthening the useful life of hardware and pushing off OS reinstalls, which can at times cost money depending on the OEM model.

I can still suck more juice out of a machine with a RAM and drive upgrade, again without paying for a reinstall, saving them lots of money in the process.

LTS/stable distros just work, out of the box, for most people.

It's people like me, who have an incessant need to tweak everything, that will mess up a Linux desktop.

philliphaydon
I tried Ubuntu with my parents. Caused lots of problems. Moving dad back to Windows and my mum to a MacBook Air. I haven’t done the whole support call in 3 years now.
SV_BubbleTime
Is it a preference or a requirement that when someone points out issues with Linux - in this case, from Linus himself - Linux users must come tell you all about how they use it successfully?
watermelon0
> LTS/stable distros just work, out of the box, for most people.

Do you have any source for this? I know a few people who use Linux (or used it in the past, including myself), and number of complaints is definitely a lot higher than from the people on macOS/Windows.

loudtieblahblah
This has just been my experience.

The only real difficulty I've had is someone else buying them a scanner/printer combo and having to manually get drivers from the manufacturer's site and install them, which creates a problem when the driver packages depend on libraries no longer available in the repos. But for me, it takes about 20 minutes of googling around to find the solution.

And other than that problem, I rarely if ever, have to support them for anything.

Half the time when I do, it's because they got banned from a website and didn't understand it, or they were having internet problems and it was unrelated to the OS.

My mother doesn't "like" Linux, but she can't articulate why. She's been wanting to "upgrade" my dad's computer for years as this sneaky way of getting away from Linux. But the reality is, as old as my Dad is - the XFCE desktop environment is more familiar to him, coming from Windows XP/7, than Windows 8.1-10 is.

It's also pretty helpful, frankly, that when something - other than updates - requires root access, he just backs away and doesn't mess with things. Same for my sibling.

This, in itself, has prevented a lot of problems IMHO. People just click "yes" and escalate their privs in Windows without giving it a second thought.

bcrescimanno
While I think a lot of the complaints he raises here are still relevant 7 years later, it's interesting that both MacOS and Windows have trended more towards a package management system with their respective app stores--with MacOS especially discouraging downloading or running any apps from outside that ecosystem.

Since one can reasonably expect to target "Windows" or "MacOS", the packages in these app stores can be maintained directly by their developers, which avoids a lot of the problems that Linus talked about in this video. When you get past the surface-layer concept of "we've sort of overblown this whole package management thing", it's really an argument about the fragmentation of the distributions and about shared libraries that can't reasonably be shared.

Even using Arch with its rolling release model and making liberal use of the AUR for the most bleeding edge, I've found myself in exactly the situation Linus describes of needing a newer version of a package because the older one flat out doesn't work for me. I can make it work because I'm technical enough to roll my own package if I need to; but, even my wife who is pretty tech-savvy herself wouldn't be willing or able to go down that route.

IMO, the fact that the conversation in these comments will be rich with opinion and debate (almost all of which will be informed and intelligent) is the crux of the problem. Too many cooks have built too many kitchens - or some such metaphor. :)

myk9001
Agree. And I want to add one point which people seem to ignore in discussions like this.

>> needing a newer version of a package because the older one flat out doesn't work for me

The reverse situation is also a problem. For me personally, even a bigger problem.

Imagine, you're happy with a supported Ubuntu LTS version and don't want to upgrade to a non-LTS version. If there's a new version of this one package you would really like to use that targets a newer version of Ubuntu, you're out of luck, basically.

For example, maybe KDevelop replaced its home-grown C++ parsing engine with libclang. It's a huge change, you'd really like to give it a shot. Well, the only option is to upgrade to a newer version of Ubuntu.

Yes, there may be a PPA that has the new KDevelop targeting your LTS version. But there's an equal chance such a PPA doesn't exist. Or it has pretty much the whole set of KDE libs as its dependencies; those will not only pollute your system but also can mess up your KDE installation. Also an anonymous person made the PPA and you decide to trust them at your own risk. And when you're finally ready to upgrade, you'll have to re-add PPAs manually. Etc, etc, etc.

Snap solves both problems rather well. And in contrast to Flatpak, there are official builds of VSCode and the JetBrains IDEs in the Snap store. Maybe other software too - I didn't really look it up.

One problem with snap is forced updates, of course. Ubuntu developers really need to add an option to disable them.

bcrescimanno
Thanks for sharing! I do think that reverse situation is an often overlooked problem; though, one that I've never found to affect me personally.

I honestly don't know enough about Snap or Flatpak to really comment on them and their advantages and disadvantages. I've always shied away from them because I can't shake Xzibit in my head, "Yo dawg, I heard you like package managers so we put a package manager in your package manager!"

c-smile
What exactly is the Linux Desktop? There are too many of them.

"If you have several watches you cannot tell exact time". Likewise "several Linux WMs - no Linux Desktop at all".

Creating a Linux desktop application these days is the same task as creating a multiplatform application that will run on Windows, MacOS, Linux/GTK, Linux/KDE, Linux/... (~40 more of those).

Conceptually we can use XWindow primitives (if they are used in a particular WM), but that does not help at all when your application needs to show a "file open" dialog or the like.

If someone asked me to establish a Linux Desktop architecture, I'd start by defining a WindowProc(HWND,MSG,WPARAM,LPARAM,LRESULT*):BOOL abstraction, similar to the VERY stable WndProc concept of Microsoft Windows. Plus defining a stable API that all WMs should implement.

Only after that might we see a real Linux Desktop - not just WMs but a bunch of usable applications.
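For illustration, here's a rough C sketch of the kind of single, stable message-procedure ABI the comment proposes; every name below is invented, since no such cross-WM standard exists:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stable types, mirroring Win32's WndProc shape. */
    typedef uintptr_t LDWND;      /* opaque window handle */
    typedef uint32_t  LDMSG;      /* message id */
    typedef uintptr_t LDWPARAM;
    typedef intptr_t  LDLPARAM;
    typedef intptr_t  LDRESULT;

    enum { LDM_CREATE = 1, LDM_PAINT = 2, LDM_CLOSE = 3 };

    /* The one entry point an application exposes; the WM would deliver
     * every event through it. Returns true if the message was handled. */
    typedef bool (*ld_window_proc)(LDWND, LDMSG, LDWPARAM, LDLPARAM,
                                   LDRESULT *);

    static bool my_proc(LDWND wnd, LDMSG msg, LDWPARAM wp, LDLPARAM lp,
                        LDRESULT *result) {
        (void)wnd; (void)wp; (void)lp;
        switch (msg) {
        case LDM_PAINT: *result = 0; return true;   /* handled */
        case LDM_CLOSE: *result = 0; return true;
        default:        return false;   /* let the WM apply its default */
        }
    }

    int main(void) {
        /* Stand-in for the WM's dispatch loop delivering one message. */
        ld_window_proc proc = my_proc;
        LDRESULT r = 0;
        printf("paint handled: %d\n", proc((LDWND)1, LDM_PAINT, 0, 0, &r));
        return 0;
    }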

joerichey
Shouldn't this be marked with (2014) as it was made at DebConf 14?
SV_BubbleTime
Right, 2014. Although I'm not sure anything has improved on his main complaints. There is no possibility of compatibility between binaries for the end user, which means the experience will rarely be download-and-run. The distros will always be incompatible, the package managers don't care about you, and the core system components will break things in the name of progress (for them; hope you can come along!).
red_trumpet
Things like Flatpak or Snap try to solve these problems. Do you think they don't succeed?
tored
Has Linus any opinion on flatpak, snap or appimage?
Decabytes
Linus Torvalds uses AppImage for his diving app Subsurface. He is on record saying he endorses it, and you can find a quote by him on the AppImage website. I don't think he would have anything good to say about Snap. I don't think he has said anything about Flatpak.
intc
In my personal opinion it's not very advisable to run binaries "from somewhere". Some views regarding flatpak: https://www.flatkill.org/2020
spixy
They try, but since that means more formats that devs have to support (Flatpak, Snap, AppImage...), the problem is still the same.
tomrod
My family and I have used Linux desktops since 2008, when Windows Vista failed to run full-screen video, BSODing instead.

There was a learning curve for a few months, but I've never regretted the shift and it has made for significant contributions to my career.

YMMV.

crossroadsguy
I’m looking for a laptop. I can’t find any good laptop with 16-32GB RAM at a non-Apple price. Also they all have windows preinstalled.

Defeats the purpose. That’s how it is.

nucleardog
Got a Thinkpad during one of their sales last year for $1300 that had a generation old i7 and 48gb of RAM. Had the Dolby HDR 4K screen and a bunch of other odds and ends as well.

Didn’t look if there was an option without Windows preinstalled since it was for my wife.

Added bonus: lots of ports, onboard HDMI, onboard ethernet

If you need something now probably not a huge help, but if you’re not in a rush might be worth keeping an eye on.

throwaway77388
There's Tuxedo computers and System76.

https://www.tuxedocomputers.com/en

https://system76.com/

tristan957
Refurbished Lenovo Thinkpads on eBay or Amazon is what I've gone with for my past two machines. Not disappointed at all. Quality builds for quality prices.
tomrod
I like my Dell XPS 15. It came with Windows and runs Fedora flawlessly. I maxed out RAM, extra SSD, and GPU because I do a lot of ML consulting (AWS works well too, but it's nice to have my own hardware).

Much cheaper than Apple. I use an Apple 2015 MBP for a client's requirements. It screws up my muscle memory. The OS is ok I suppose but it's no Linux.

ghaff
There are Lenovo Thinkpad Linux configurations up to 16GB but they'll be in the same price range as MacBooks.
myk9001
Where I am, it's much much cheaper to get a laptop with the least possible amount of RAM preinstalled but -- importantly! -- with two RAM slots.

Then just buy two sticks of fast RAM and install them yourself. It often doesn't void your warranty (but make sure to check the warranty of your specific laptop, obviously).

Same applies to SSD, actually.

tachyonbeam
I switched to Linux around the same time as you. Before that I was a Windows user from 1997 until ~2008. In the late 1990s and early 2000s people just accepted that Windows crashing every day was a normal feature of that software, and they blamed every crash on user error, e.g. "your drivers are bad", "you're running buggy software", etc. My Windows 98 install couldn't run for more than ~4 hours without crashing; it was infuriating. Windows 2000 did better, but it would still crash once every couple of days.

Linux was a million times more stable and much more pleasant to use as a programmer (no SDKs to download with complex installation instructions). It also had a package manager which made installs/reinstalls a breeze. You could actually write a bash script to redo your setup automatically, wow! Never looked back.

I recently installed Windows 8 on an older computer I was setting up for my mom (she didn't want Linux, understandably). It was my first time using Windows in two years or so. Windows Update was broken out of the box. It wouldn't run. You had to manually download a patch to get it to work. I don't really understand why desktop Linux gets so much hate when commercial software is this bad.

cl0ckt0wer
because printer/scanners don't work with the included software, and Linux has a reputation.
ginko
I have an old Canon scanner that doesn't work on Windows because Canon never released a driver for anything after Windows XP. Works like a charm on Linux though.

Same experience with an old Nikon Super CoolScan negative scanner I recently bought used. It came with a Firewire PCI card. I installed the card, plugged in the scanner and it just worked.

yjftsjthsd-h
No, printers/scanners work without the manufacturer's software; if it's supported (which, I admit, is incomplete but more than you might expect), you just plug it in, tell CUPS to add a printer (which it can do seamlessly without installing extra garbage from the manufacturer), open Simple Scan (or any other SANE frontend) and off you go.
tachyonbeam
I guess my point was, Windows should have just as much of a reputation, given Microsoft has shipped release builds with absolutely horrible and blatant bugs (does anyone remember Windows ME?).

I did do some research before I bought my printer and found a Brother that connects to WiFi and works flawlessly with both Mac and Linux. I find you can also make your life much easier as a Linux user by choosing popular distributions. It's always easy to Google specific fixes for Ubuntu.

Gwypaas
Windows 8.1 reached end of mainstream support on January 9, 2018, over three years ago. Windows 8 support ended on January 12, 2016.

No wonder you ran into issues if you installed it and tried getting it "updated" recently.

https://docs.microsoft.com/en-us/lifecycle/faq/windows#windo...

tachyonbeam
Here is a post from 2016 outlining the same problem I was describing: https://superuser.com/questions/1103966/windows-update-doesn...

There is a bug in the Windows Update client on Windows 8.1, and it can't update itself.

pitay
Do note that Windows Update may simply not offer a major update, such as Windows 10 2004, because of concerns about driver compatibility. It happened to me because Microsoft and the manufacturer of the PC didn't come to an agreement about driver compatibility. Windows Update gave me a warning about updating to the new Windows 10 version (2004 or 20H2), but there was no update for it in Windows Update; I had to download the update manually to get it installed. I also had to do that manually for a previous version.
that_guy_iain
The real issue: macOS has one distribution, Windows has one distribution, and Linux has so many that I doubt anyone really keeps track. The issue with Linux is that they're working in fragments instead of as one. The problem isn't the operating system but the OSS culture it was developed in.
lbriner
I guess the theory is open competition and whoever builds the best thing gets all the market and everything else dies. The only problem with that is that if you have spent years developing something, especially if you weren't paid for it, you are not likely to want to accept defeat and lose your investment.

Maybe the way to sort this is to create Linux groups around certain topics like graphics or music production and then the people with the specific skills can contribute to multiple applications at the same time (at least their design work). This way, if one product dies, their work lives on in other products.

encryptluks2
This isn't really a bad thing. There are a lot of enterprisey distros whose package management and culture I absolutely despise. If they all operated under one umbrella, I can only imagine how political the whole process would be to get anything accomplished.
pjmlp
In the mid-90's, I had hope that GNU/Linux would eventually evolve into either GNOME or KDE as the main desktop environment with their frameworks filling the same role as Kits in OS X, BeOS, WinAPI.

Instead, it is not only the distributions: the whole desktop stack keeps being re-invented, way more than what Apple or Microsoft have done thus far.

Then there are the whole set of GNOME and KDE forks from those not happy with those reboots.

Not a surprise that only ChromeOS and Android have managed to stick as desktop/mobile variants of Linux-based OSes.

So those are the only "desktop" Linux that I ended up caring about.

ldiracdelta
I don't care about window transitions and compiz fusion or anything flashy about my window manager anymore. Xfce. Happy camper since 2015 as my main dev environment. Just run my programs please and provide some easy tools to position multiple windows.
pjmlp
You might not care, but everyone that doesn't want to be yet another Electron developer cares about the developer stack available across all desktops.

Since that is too much effort, you just get Electron apps instead.

zarkov99
I don't know. I understand what you are saying, but if there were just one Linux desktop it would almost certainly have to be tailored to the lowest-common-denominator user, and it would end up just being a less polished MacOS clone. The fragmentation forces some degree of interoperability, which enables more specialized and innovative approaches, like tiling window managers, functional package managers, weird shell languages. In other words, the fragmentation might be an innovation- and specialization-enabling feature rather than a bug.
kingsuper20
I hear what he is saying, although I haven't run into issues with applications so much. It makes you respect how much trouble Microsoft has gone to through the years.

If I've had a problem with desktop Linux, it's been on the driverish edges. Sleep behavior, printers, graphics cards, wifi, multiple monitors. That's also what seems to make OpenBSD a pain.

alexpotato
On the flip side, I found out recently that if you are sharing a printer connected to a Linux machine using CUPS, you can print to said printer FROM AN IPHONE!

This blew me away when I first tried it and then digging deeper realized it's b/c Apple was the maintainer of the CUPS library. Kudos to Apple for that integration!

turtletontine
It sounds like he's saying Linux desktop DEVELOPMENT sucks, not that using Linux desktop sucks.

I've been running (mostly ubuntu based) Linux since 2014. The only major problems I've had have been driver issues, userspace applications have largely worked out of the box. Maybe the devs behind everything I use are doing an exceptional job, but I've never noticed an update break a common application due to library changes.

willis936
Does appimage not solve this problem? Linus himself released software by appimage the next year (2015).

https://en.wikipedia.org/wiki/AppImage

uo21tp5hoyg
I think part of the problem is that there's so many "Does x not solve this problem?" solutions now all with their own unique downsides and upsides, it reminds me of that one xkcd[0].

[0] https://xkcd.com/927/

willis936
I spent time in MIPI PHY and I am very familiar with this issue. AppImage isn't the same kind of solution, though, because there were zero competing solutions before it. There were no universal Linux binaries. Having one is a much better situation.
gwmnxnp_516a
It only solves the problem partially. The drawback of AppImage is that it cannot package an application containing multiple executables, such as Emacs. The best solution may be copying Apple's app-bundle idea. An app bundle is just a folder containing metadata, executables, shared libraries (dylibs), text files and images. This package format can be installed by just dragging and dropping the folder into the /Applications directory and deleted by just removing the folder. The Finder file manager shows the app bundle as the application icon and allows running the app by clicking on the app-bundle folder. This idea would be easy to implement on Linux; it would decrease Linux fragmentation and the work duplication that happens when packaging an application.
throw7
Linus is right about being angry when binaries break. I remember Google Music actually provided a Linux upload client... in a later Fedora release, it broke. ISTR a library it depended on broke it in some way in the newer version. The issue is there's no Linus in library land, and there couldn't be anyway.
happyjack
I run RHEL 7 desktop, because it's what my vendors support (scientific/engineering desktop applications, simulators). I'm about to get a new Thinkpad, and I'm highly considering running Fedora, because its packages weren't written in 2002 (sarcasm), and using it side by side with my RHEL 7 desktop, even if some of my work applications won't run on it.

Linus is completely correct, though. I think the open source world has gotten way too mixed up with the "free" world. I understand that gcc and some of the HUGE packages are free and open source. They are cost-shared by companies, have many people that use them, and are maintained well. But other packages with small user bases? His dive example was perfect. Unless someone uses it personally, how can you expect it to be maintained unless you pay for it? Or, if the software is paid, how can you expect the company to maintain it for Linux, when their user base is going to be on Mac/Windows?

I think Ubuntu LTS is the most sane approach I've seen. You can get relatively new packages in a pretty stable environment. It's not RHEL-stable, and Canonical is starting to care less and less about desktop Linux, but it's not too bad. Ubuntu needs a vanilla GNOME option by default, though. Their default look is horrendous. I know, I know, it's Linux, you can customize it! Why should I have to waste my time?

sascha_sl
This has improved a lot. Hardware support has caught up. I'm using a 1 year old laptop with hard to support hybrid graphics (AMD integrated, NVIDIA dedicated). In Fedora, it all worked out of the box. And you get an extra 50% battery life over Windows. In 2010, when I last ran a laptop with hybrid graphics, setting up Bumblebee was a disaster.

The only line I had to type into a terminal in Fedora 34 to make it a GUI-usable system was adding the default flathub repo. That fixes the package manager issue. Most people have so much bandwidth and storage coming with their devices, shared libraries make less and less sense anyway.

Bancakes
I've no idea how people manage to flawlessly run their NVIDIA GPU. PRIME is OK, but having to open apps with prime-run, like sudo, is such a drag. Does your laptop have a BIOS switch?
austincheney
TLDR;

The Linux distribution landscape is diverse (complex) and the emphasis is on shipping source code, not something (a binary) that real people use. For example, something is broken: do you tell grandma to download source code and compile it, or do you give her a compiled binary that fixes the problem in one click?

tldr; tldr; Not simple (more than a few).

davidgerard
This is still the case. Snaps and flatpaks help a bit. But for a lot of stuff, the easiest way is to run the Windows binary under Wine. Including open-source stuff.
rektide
it'd be interesting to try to make a hybrid static/dynamic library program. assume you can find the libraries you want on the system, but have a fallback where you go download some Debian packages & install them into your XDG_CACHE_DIR & load libraries from there, when the system (whatever os it is) didn't have it.

part of the trick would be making this code significantly smaller than doing a static build.
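A rough sketch of that fallback idea in C, with dlopen; the cache layout and library name are made up for illustration, and the "download the Debian packages" step is omitted entirely:

    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Try the system's copy of a library first, then fall back to a
     * privately cached copy under $XDG_CACHE_HOME. */
    static void *load_lib(const char *soname, const char *cached_rel) {
        /* 1. system search path (ld.so.conf, LD_LIBRARY_PATH, ...) */
        void *h = dlopen(soname, RTLD_NOW);
        if (h) return h;

        /* 2. fall back to our own cache dir */
        const char *cache = getenv("XDG_CACHE_HOME");
        const char *home = getenv("HOME");
        char path[4096];
        if (cache)
            snprintf(path, sizeof path, "%s/%s", cache, cached_rel);
        else
            snprintf(path, sizeof path, "%s/.cache/%s",
                     home ? home : ".", cached_rel);
        return dlopen(path, RTLD_NOW);  /* NULL until the download ran */
    }

    int main(void) {
        void *h = load_lib("libz.so.1", "myapp/libs/libz.so.1");
        printf("zlib handle: %p\n", h);
        if (h) dlclose(h);
        return 0;
    }

(Link with -ldl on older glibc.)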

sascha_sl
I think you'd like Nix.
Wowfunhappy
I’m not the GP but I for one am quite sure I would love Nix.

...just, well, not quite enough that I want to spend time learning how to use it. :/

rektide
I feel similarly. It seems to have a lot of good things going for it.

When I first investigated it, like a decade ago, one majorly off-putting aspect was that I didn't see it as offering the capability to install multiple copies of things. I greatly prefer systems where I can blue-green deploy services locally, or otherwise have special-purpose instances, and Nix, at the time, didn't seem to have much interest in that. I believe things have changed a lot in that regard, or perhaps my couple of hours spent as an utter neophyte poking around for answers didn't surface the clues.

I would say that, these days, I am far less interested in configuring a system. I am far less interested in the desktop. To me, my time is 100% not worth investing in these goals. I am trying to focus on working with & orchestrating a fleet of machines, in a coordinated fashion. I believe free-software folks should all be re-investing, re-focusing similarly, on running & operating personal/manorial/federalized "cloud" systems. I happen to agree with a lot of the philosophy of Kubernetes - a desired-state management system for resources, and controllers/operators that autonomically enact & maintain that state - and invest my time & work trying to make it a useful fabric for my & hopefully my friends' digital homes.

gwmnxnp_516a
One of the greatest problems of desktop Linux is the huge fragmentation. The Linux desktop has fewer users than MacOSX and Windows, and lots of distributions, each one with different packaging methods and incompatible dependencies, which may result in work duplication and dependency hell; that can happen if one needs to install an application from outside the repository, but it needs a dependency that overrides an already-installed library. Another problem of Linux distributions is that they do not allow keeping multiple versions of the same application or multiple versions of the same compiler. The packaging issue is not relevant for people using Linux as a server, due to Docker, which can package any application alongside the configuration, dynamic linker, GlibC and shared libraries.

Another reason why it is hard to find binaries for Linux is GlibC's lack of forward compatibility. Even a fully statically linked application, or one packaged with all its shared library dependencies via LD_LIBRARY_PATH or RPATH, may crash if it was linked against a newer version of GlibC and is deployed on a Linux distro with an older version of GlibC. The linking error happens because an older GlibC cannot satisfy symbols from a newer one, and GlibC is the main bridge between user space and kernel space on Linux.

A possible solution for packaging an application for multiple distributions with minimal work duplication might be:

1. Minimize the number of dependencies.
2. Statically link as much as possible.
3. Instead of linking directly against shared libraries such as Curl, Gtk or SSL, load those libraries through the dlopen()/dlsym() Unix APIs (see the sketch after this list).
4. Build the application on a distro or Docker image containing an older version of GlibC, to deal with GlibC compatibility problems.
5. Pack all shared library dependencies in the same directory as the application, setting RPATH to a relative path pointing at those dependencies.
6. Embed all additional non-source files, such as images or text, in the binary using resource compilation.
7. Distribute the application as a zipped archive.
8. Use Go (Golang), which is able to statically link almost all dependencies and also bypasses GlibC by performing system calls directly; a Go binary will just run everywhere with minimal effort.
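A minimal sketch of point 3: loading a library at runtime with dlopen()/dlsym() instead of linking against it, so a missing or mismatched system library degrades gracefully instead of preventing the binary from starting. libm is used purely as a stand-in example; link with -ldl on older glibc:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *h = dlopen("libm.so.6", RTLD_LAZY);
        if (!h) {
            fprintf(stderr, "libm unavailable: %s\n", dlerror());
            return 1;  /* or fall back to a feature-reduced code path */
        }

        /* Look up the symbol by name; the cast is the usual POSIX idiom. */
        double (*cosine)(double) = (double (*)(double))dlsym(h, "cos");
        if (cosine)
            printf("cos(0) = %f\n", cosine(0.0));

        dlclose(h);
        return 0;
    }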

Linux distributions could make developers' lives easier if they copied the MacOSX app-bundle idea, where applications are distributed as <APPLICATION-NAME>.app directories containing metadata, the executable and all shared library dependencies. The advantage of this approach is that the application can be installed by just dragging and dropping a folder into the /Applications directory and uninstalled by just deleting a directory. Another benefit is that the user can keep multiple versions of the same application and not fear dependency hell or overriding a previously-installed library.

throwaway77388
People who haven't tried Linux on desktop for some years might like these distros:

Pop OS

KDE Neon

Linux Mint (Cinnamon version)

MX Linux (ahs/Advanced Hardware Support version)

schmorptron
I'd add Fedora 34 (Or anything else that runs Gnome 40) to that list, the gnome team really outdid themselves with how polished and snappy it feels, especially when you're on a laptop using the trackpad gestures.
dewlinedew2
Fedora Silverblue may now somewhat resemble what he's talking about in the video; give it a whirl!
brnt
How usable is it nowadays? It used to be a bit rocky.
intc
We have Linux on all desktops and laptops. This has been the case for more than 10 years by now. There are still a couple Windows VM's (mainly for checking how some things work on Windows in case need be).
ramchip
Who's "we"?
intc
https://fennosys.fi/
waynesonfire
does freebsd suffer from this problem as well? specifically, that application package needs to support "15 billion different versions"
forgotpwd16
No, since there's one FreeBSD like there's one Ubuntu. But you still have the problem of supporting older versions: FreeBSD 13, 12, ... have different libraries, and your program should take that into account.
Black101
Linus, please maintain a distribution along with everything else you are doing. I bet you could charge for it.
Decabytes
I'm going to vent a bit about this. My perspective is as someone who was a Windows user who moved to Linux 6 years ago after getting fed up with Microsoft.

I love the package managers on Linux distros. They are amazing. I feel physical pain whenever I have to jump through hoops to get software up and running outside of a package manager. That being said, I do a lot of hobby work on a Raspberry Pi 4. Not every package is built for arm64. Packages break, have issues with their dependencies, or fail cryptically for me. Building software from source for programs that do run on Linux systems can be hit or miss. I understand I chose this with my specific platform of choice, and I have accepted this. But if I can support a solution that will make my life easier, I will.

It's 2021 and we have Flatpak, Snap, and AppImage, and it's super frustrating. None of these tools solves the problem entirely, and they all come with their own sets of drawbacks. That's okay; it doesn't have to be perfect. But there are people who hate the entire concept of these tools and will crap all over them every time they come up. They have valid criticisms, don't get me wrong, but in my opinion doing something and shipping it is better than doing nothing at all. I would love to see the alternative to Snap, Flatpak, and AppImage, but I've yet to see anything from their biggest critics. I just can't be bothered anymore wasting my energy listening to people who aren't actively trying to find solutions. If you are reading this and are working on a solution, I appreciate you.

So which to choose? I think, based on Canonical's past and present behavior, Snaps just have too much baggage for most people in the Linux community. This sucks, as there is software packaged as Snaps from Microsoft, Nvidia, etc. that isn't available as a Flatpak or AppImage.

AppImage has the blessing of Linus Torvalds, but if you say that aloud you will have people say "So what?", as if Linus Torvalds is just some guy. For better or for worse it means something when he supports things, and he does provide his own software as an AppImage, so take that for what you will.

Then there is Flatpak. If you believe Flatkill.org, the tool is just a pile of lies and security holes. That being said, my money is on either AppImage or Flatpak.

tristan957
Flatkill.org is full of lies and deception; there are many sources on the internet explaining why. It's just FUD at this point.
bawolff
What he's saying is true, but I also think the app-store-type approach that apt-get provides is a major benefit.

With Windows I have to find some random program, hope it's not malware, possibly pay for it, etc. With Linux, I have reasonable assurance that packages (from the main repos) aren't evil, they are free (as in beer), and I can easily search through them and find something for my use case. I can't really do that on Windows.

skohan
I totally agree. This is one of the things that makes Windows feel like a second-tier OS to me compared with Linux and Mac. If I have to do something "tech-y" on Linux or Mac, I look for it in the package registry. I trust that the community would probably have taken action if a package were doing something malicious, and usually I can find the project on GitHub and peruse the source if I really want.

By contrast on Windows, it usually means finding some GUI-based utility on some sketchy website filled with ads, and maybe a fake download button. Probably it's freemium, so during the task maybe I have to dodge several calls to action to upgrade to the paid version. And then a week later, maybe I check the task manager, and find out it's gone and set itself to run in the background at startup, doing god knows what, without asking.

It's just one of the ways that Windows feels less like it's my computer.

candiodari
Uh... doesn't brew support Windows?

https://chocolatey.org/ https://brew.sh/2019/02/02/homebrew-2.0.0/

Also, in general I would say Windows is not lacking for software registries. Or software.

https://portableapps.com/apps

guhidalg
Right, because sourcing a Ruby file to install Homebrew on macOS is what “real” operating systems should do.
disgruntledphd2
If curl |sh was good enough for my father, then it's good enough for me ;)
cpach
It might not be pure, but IME it works very well.
oxguy3
The problem is that there are always going to be a ton of apps that aren't in the repos. The repos contain the top hundred apps that have a million users each, but they don't contain the top million apps that have a hundred users each. The repos get you most of the way there, sure, but they can't possibly provide every app that every user wants.
tachyonbeam
OTOH on Ubuntu much of the common software is either in the repo, or they're nice enough to provide a .deb file you can download from their website.
cturtle
It's not the right solution for everyone, but the AUR on Arch Linux has been wonderful for managing those "hundred users each" applications.
zeta0134
The AUR is wonderful. Ubuntu's support for PPAs comes in at a close second, though less from a tech standpoint and more because of Ubuntu's massive community. Both provide a middle ground, in between the ideal of the package manager (which almost always works) and the frustration of trying to build the software from source. With the AUR, someone else has resolved most of the kinks for me already, and that's time saved that I can really appreciate.
pjmlp
Strange, because I can do it:

https://www.microsoft.com/en-us/store/apps/windows

https://chocolatey.org/

cfn
Or https://scoop.sh/
teuna
https://github.com/microsoft/winget-cli
encryptluks2
Chocolatey is like 10x slower than Linux package management.
bordercases
Installation also happens only a handful of times per program, versus the amount of use you get out of it.
smackeyacky
Also the worst-named package manager ever. I wanted to play with an RTOS for an embedded system a few weeks ago, and the first step was "install chocolatey".

Not knowing what it was, I had to spend some time reading about Chocolatey.

I know it must be some third-generation pun or something, but it really put me off going any further. Names are important.

bawolff
Indeed, Windows is moving in that direction. Back in the Windows XP era, it really did feel revolutionary.
pjmlp
It is orthogonal to this discussion, but had Windows NT offered a serious POSIX environment, I would never have bothered with Linux to start with.
worble
The fact that people just listed 4 competing tools shows exactly why this is a problem:

2 of them are essentially community-run, and could theoretically at any time be taken over by a hostile (or even just an incompetent) entity and be used to distribute malware. Not that this couldn't happen through an official channel, but it's certainly far less likely.

Since software is spread unevenly across them, I currently have to check choco, scoop and winget for updates. It's slow and irritating, and if I need to uninstall or check a package, I need to figure out which tool I installed it with.

Software that crosses over between package managers can cause compatibility issues. Just today I accidentally broke Rider because I had the .NET Core runtime installed through choco, but the .NET Core SDK installed through scoop.

I get they're trying to finally fix this through WinGet, but I can't help feeling it's too little, too late.

jayd16
>The fact that people just listed 4 competing tools shows exactly why this is a problem

yum, apt, snap, flatpak, probably more?

pjmlp
RPM (Red Hat), RPM (SuSE), deb, tarballs, snap, flatpak, nix, ....

Yeah, thankfully it doesn't happen on GNU/Linux.

bawolff
Yeah, but the average Debian user just uses apt. Other people do other things, but as a Debian user my experience is basically just one place, which is what matters.
pjmlp
For the developers it matters, and maybe they won't care to deal with deb; tough luck.
ssivark
The two aren't mutually exclusive! The package repo need only be a list of "endorsed" applications/binaries once they've been packaged. The fact that it requires so much work on the package maintainers' part (essentially duplicating the effort for every distribution) is the main problem being pointed out, and that is orthogonal to what you're expressing.
atatatat
> With linux, i have reasonable assurance that packages (from main repos) aren't evil

Fallacy.

https://blogs.sap.com/2020/06/26/attacks-on-open-source-supp...

sgc
Except that all my packages are out of date, and I manually install a ton of stuff because of missing features or unpatched bugs. But it's a good first start.
hs86
Having a system-wide package manager where nearly all libraries are dynamically linked also has its drawbacks.

A seemingly minor update might cause a huge cascade of dependency updates, which pushes common Linux distributions toward one of two extremes: either fix all packages in place and freeze their version numbers, or just "give up" and update everything all the time. Both solutions feel like compromises to me.

Other end-user OSes don't act like this. On Android/iOS/macOS/Windows, I can have the latest 3rd party software without having to deal with intrusive updates to the OS infrastructure all the time. The BSDs handle this better, and maybe something like Ubuntu LTS + Nix on top of it might be a way around this.

yjftsjthsd-h
Right, Windows is totally stable and an OS update would never break basic functionality [0] or delete people's files [1].

[0] https://arstechnica.com/gadgets/2021/03/blue-screen-of-the-d...

[1] https://arstechnica.com/gadgets/2018/10/microsoft-suspends-d...

formerly_proven
> On Android/iOS/macOS/Windows, I can have the latest 3rd party software without having to deal with intrusive updates to the OS infrastructure all the time.

That's not really true. The .NET runtime isn't redistributable, and so has to be installed on the host OS, which usually works but not always (pre-Windows 10, newer versions of .NET required a bunch of KBs, which meant Windows Update had to actually be working and able to install them; this fairly frequently broke on Windows 7 due to the lack of Service Packs). Nowadays this is less of a problem, due to improved .NET compatibility and .NET 4.x coming pre-installed on Windows 10. Which honestly is great: you can compile and run .NET 4.x programs on any Windows 10 machine. Granted, it's some relatively outdated version by now, but it is still very nice to have a "proper" programming language out of the box, and also the ability to compile small .exe's.

Similarly for the MSVC runtimes (their installers are redistributable, but you are still in the situation of having to install them globally).

alkonaut
In what way is the runtime not redistributable?

The old one, “.NET Framework”, is an OS component, but it has enough compat that you can always upgrade it.

The newer one (.NET 5+) is typically fully bundled with each app, so there's no sharing.

MSVC also went this way: you bundle the runtimes rather than take a dependency on a system-wide one. These two (the .NET and C++ runtimes) were basically the last shared libs on Windows, and they're now either obsolete tech (.NET 4.x) or a no-longer-recommended deployment method (MSVC).

formerly_proven
You are right, but there are still a lot of applications reliant on the "old ways".
pjmlp
Given that the new way, SxS, was introduced back in Windows XP, the "old ways" date back a long time.
May 06, 2021 · 4 points, 1 comment · submitted by ig0r0
mikece
No operating system -- or desktop -- is perfect. But what would be a massive improvement is if all of the Linux distros adopted a single desktop stack to support for enterprise/business clients, while optionally supporting other desktop combos. I know this is more or less asking the impossible of the Linux community, who split hairs over everything, but if a ten-year "Okay, we'll all support this stack" agreement were reached, it would massively increase the Linux desktop market share and give app developers (e.g. Adobe) a stable target for porting non-trivial applications.

The ultimate losers in such a joining of the clans would be Windows and macOS, while Linux would rise rapidly in prominence. Why can't we do this?

May 05, 2021 · arunc on Linux and Glibc API Changes
glibc breaks ABI quite often. Linus has openly ranted about it in the past: https://www.youtube.com/watch?v=Pzl1B7nB9Kc

Notable quote from that: "If there's a bug that people rely on, it's not a bug, it's a feature."

wahern
Linux famously removed the sysctl syscall (the original, BSD-derived syscall version of /proc). It was justified because distros had already removed it. The removal was a huge API breakage and even broke security-sensitive software, like Tor, on countless deployed systems. But because the distros removed it first (Red Hat, specifically), Linus got to claim that "nobody was using it" and was shielded from the fallout.

Otherwise, both the kernel and glibc regularly break things accidentally. You rarely hear about it, though, because it's the nature of software development that the areas most likely to be broken are those where people rarely lurk. glibc makes at least as much effort as Linux in terms of supporting backward compatibility, but glibc's job is in some ways much more difficult, and they have far fewer contributors to help out. There's no shortage of bugs in glibc, and I have plenty of my own gripes, but by the standards of the industry (particularly of FOSS), they do an outstanding job of maintaining ABI compatibility.

Once upon a time people would claim that glibc's efforts were feeble compared to proprietary OSes like Solaris, AIX, or Windows. But these days those backward-compat stories are far more complex and less pristine, and glibc has well over a decade (or two?) of using ELF symbol versioning to maintain compat.
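
As a concrete (and hedged) illustration of that symbol versioning from the application side: with the GNU toolchain you can pin a reference to an older versioned glibc symbol via a .symver directive. The GLIBC_2.2.5 tag below is the x86-64 baseline and is an assumption; other architectures use different tags.

  /* oldmemcpy.c -- bind memcpy to the old GLIBC_2.2.5 version instead
   * of the newest default, so the binary also runs on older glibc.
   * Compile with -fno-builtin-memcpy so GCC emits a real call, and
   * verify the binding with: objdump -T a.out | grep memcpy */
  #include <string.h>
  #include <stdio.h>

  __asm__(".symver memcpy,memcpy@GLIBC_2.2.5");

  int main(void)
  {
      char dst[6];
      memcpy(dst, "hello", sizeof dst);
      puts(dst);
      return 0;
  }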

Denvercoder9
> The removal was a huge API breakage and even broke security sensitive software, like Tor, for countless deployed systems.

Honestly, I'd say that is on them. Its use has been discouraged since basically forever (it has been noted in all-caps in the manpage since at least 2001); the kernel started complaining about its usage in Linux 2.6.24, released in January 2008; and it finally disappeared in Linux 5.5, released in January 2020. That's a two-decade deprecation period.

wahern
Sure[1], but it was nonetheless a backward break that caused substantial trouble. I'm only trying to push back on the claim that Linux has a pristine and principled record in this regard, not that the removal wasn't reasonable for Linux. Linux can make certain claims because distros make many of the hard decisions for them. If projects were fronting glibc (like eglibc did for a while), glibc might be able to make similar claims resting on technicalities.

Also, the removal of sysctl by distros took away a facility, descriptor-less kernel entropy consumption via sysctl+RANDOM_UUID, that wouldn't be restored until getrandom was added many years later. Until then jail'd processes (or other code that couldn't make too many assumptions about its environment) had no easy way to seed their RNGs. Indeed, it likely created many unknown security issues that have [hopefully] been accidentally fixed with the adoption of getrandom by various libraries.
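
(For reference, the descriptor-less replacement looks roughly like this today; a minimal sketch assuming Linux 3.17+ and glibc 2.25+, which provide getrandom(2) and <sys/random.h>:)

  /* seed.c -- fetch entropy with no file descriptor and no /proc,
   * the capability that sysctl+RANDOM_UUID once provided. */
  #include <sys/random.h>
  #include <stdio.h>

  int main(void)
  {
      unsigned char seed[16];
      if (getrandom(seed, sizeof seed, 0) != (ssize_t)sizeof seed) {
          perror("getrandom");
          return 1;
      }
      for (size_t i = 0; i < sizeof seed; i++)
          printf("%02x", seed[i]);
      putchar('\n');
      return 0;
  }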

To this day Linux is still resolving issues and dilemmas caused by the removal of sysctl. There are many scenarios where /proc can't and shouldn't be accessible. (In most of those scenarios sysctl shouldn't be accessible, either, but especially since the addition of seccomp BPF it's easier to filter scalar syscall arguments than /proc opens.)

[1] Though, I don't remember any man page warning prior to 2008. (Or after, for that matter. I just remember the dmesg warnings, which because of the aforementioned dilemma regarding /proc put you between a rock and a hard place, waiting for the sword to fall, presuming you even caught it in time. Embedded developers might revisit a particular codebase only every couple of years.) Perhaps you're referring to notes that it wasn't portable? But there are countless interfaces that glibc documents as non-portable but infinitely less likely to disappear than even a Linux syscall. Do you have a link to a 2001 manual page?

Denvercoder9
> I'm only trying to push back on the claim that Linux has a pristine and principled record in this regard

For sure, I agree that Linux's record isn't perfectly clean. Just wanted to point out that if you were hit by that removal, part of the blame is on you.

> Do you have a link to a 2001 manual page?

I got it from the oldest manpages package from archive.debian.org. The git history on kernel.org doesn't go as far back.

The note I was referring to was the following:

  BUGS
       The object names vary between kernel versions.  THIS MAKES THIS SYSTEM CALL WORTHLESS FOR APPLICATIONS.  Use the /proc/sys interface instead.
Which in 2007 got replaced with the following (partly bolded):

  NOTES
       Glibc does not provide a wrapper for this system call; call it using syscall(2).

       Or rather... don't call it: use of this system call has long been discouraged, and it is so unloved that it is likely to disappear in a future kernel version.  
       Remove it from your programs now; use the /proc/sys interface instead.
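
(For anyone wondering, "use the /proc/sys interface instead" just means ordinary file I/O; a minimal sketch, where the path below is one arbitrary example:)

  /* procsys.c -- read a kernel parameter the modern way. */
  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/proc/sys/kernel/osrelease", "r");
      if (!f) {
          perror("fopen");
          return 1;
      }
      char buf[128];
      if (fgets(buf, sizeof buf, f))
          fputs(buf, stdout);  /* e.g. "5.15.0-86-generic" */
      fclose(f);
      return 0;
  }
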
Apr 28, 2021 · 5 points, 2 comments · submitted by pjmlp
nabla9
Exactly the problem.

The Linux kernel makes it possible, but the distributions don't care.

jqpabc123
I thought everyone knew this already?

The problem is not really the Linux distros; it is the open-source/community concept and ecosystem, which not only allows but promotes reinventing the wheel and the incompatibility that follows.

The only reason the kernel doesn't have the same problem is that a consensus has developed around making Linus "benevolent dictator for life".

In other words, Linus provides leadership and control on a global level in the kernel space that is missing from "distro" projects. There is only one kernel but there are many distros --- and this is a very real problem if you care about widespread marketplace impact.

I'll bet if you could talk privately and confidentially with Linus, he would tell you that he lives in fear of what will happen to the kernel once he is gone.

One of the main reasons why corporations like Microsoft exist is to alleviate the risk of any one individual becoming indispensable. Once Linus is gone, look for corporations to compete for developmental control over the kernel --- with Microsoft as the odds-on favorite.
