HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
A reimplementation of NetBSD using a MicroKernel (part 1 of 2)

Andrea Ross · Youtube · 153 HN points · 3 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Andrea Ross's video "A reimplementation of NetBSD using a MicroKernel (part 1 of 2)".
Youtube Summary
by Andy Tanenbaum

Based on the MINIX 3 microkernel, we have constructed a system that to the user looks a great deal like NetBSD. It uses pkgsrc, NetBSD headers and libraries, and passes over 80% of the KYUA tests. However, inside, the system is completely different. At the bottom is a small (about 13,000 lines of code) microkernel that handles interrupts, message passing, low-level scheduling, and hardware related details. Nearly all of the actual operating system, including memory management, the file system(s), paging, and all the device drivers run as user-mode processes protected by the MMU. As a consequence, failures or security issues in one component cannot spread to other ones. In some cases a failed component can be replaced automatically and on the fly, while the system is running, and without user processes noticing it. The talk will discuss the history, goals, technology, and status of the project.

Research at the Vrije Universiteit has resulted in a reimplementation of NetBSD using a microkernel instead of the traditional monolithic kernel. To the user, the system looks a great deal like NetBSD (it passes over 80% of the KYUA tests). However, inside, the system is completely different. At the bottom is a small (about 13,000 lines of code) microkernel that handles interrupts, message passing, low-level scheduling, and hardware related details. Nearly all of the actual operating system, including memory management, the file system(s), paging, and all the device drivers run as user-mode processes protected by the MMU. As a consequence, failures or security issues in one component cannot spread to other ones. In some cases a failed component can be replaced automatically and on the fly, while the system is running.

The latest work has been adding live update, making it possible to upgrade to a new version of the operating system WITHOUT a reboot and without running processes even noticing. No other operating system can do this.

The system is built on MINIX 3, a derivative of the original MINIX system, which was intended for education. However, after the original author, Andrew Tanenbaum, received a 2 million euro grant from the Royal Netherlands Academy of Arts and Sciences and a 2.5 million euro grant from the European Research Council, the focus changed to building a highly reliable, secure, fault tolerant operating system, with an emphasis on embedded systems. The code is open source and can be downloaded from www.minix3.org. It runs on x86 and the ARM Cortex-A8 (e.g., BeagleBones). Since 2007, the website has been visited over 3 million times and the bootable image file has been downloaded over 600,000 times. The talk will discuss the history, goals, technology, and status of the project.
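
To make the message-passing model in this description concrete: drivers and servers live in separate, MMU-protected address spaces and talk to each other only through small, fixed-size messages. Below is a minimal sketch in plain POSIX C of a synchronous request/reply exchange between two isolated processes. It is only an illustration of the idea; the message layout and the use of pipes are invented for the example and are not the MINIX kernel interface.

  /* Sketch of synchronous request/reply message passing between two
   * isolated processes (plain POSIX, not the MINIX kernel interface).
   * The parent plays "user process", the child plays "server". */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>
  #include <unistd.h>

  struct message {                /* fixed-size message, as in most u-kernels */
      int type;                   /* request/reply code */
      long payload;               /* one word of data for the example */
  };

  int main(void) {
      int to_srv[2], to_cli[2];
      if (pipe(to_srv) < 0 || pipe(to_cli) < 0) { perror("pipe"); exit(1); }

      pid_t pid = fork();
      if (pid < 0) { perror("fork"); exit(1); }

      if (pid == 0) {                          /* "server" process */
          struct message m;
          read(to_srv[0], &m, sizeof m);       /* receive the request */
          m.type = 200;                        /* build the reply */
          m.payload *= 2;
          write(to_cli[1], &m, sizeof m);      /* send the reply */
          _exit(0);
      }

      /* "Client" process: send a request and block for the reply,
       * roughly what a sendrec()-style call does in one step. */
      struct message m = { .type = 100, .payload = 21 };
      write(to_srv[1], &m, sizeof m);
      read(to_cli[0], &m, sizeof m);
      printf("reply: type=%d payload=%ld\n", m.type, m.payload);

      waitpid(pid, NULL, 0);
      return 0;
  }
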

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
I get an error when I press the play button. Perhaps the same/similar video? https://www.youtube.com/watch?v=0pebP891V0c
delinka
Error in Chrome on OS X and iOS
protomyth
weird: try https://www.youtube.com/watch?v=0pebP891V0c and https://www.youtube.com/watch?v=Bu1JuwVfYTc&index=22&list=PL...
Jun 17, 2015 · 153 points, 83 comments · submitted by agumonkey
agumonkey
Youtube video description:

Based on the MINIX 3 microkernel, we have constructed a system that to the user looks a great deal like NetBSD. It uses pkgsrc, NetBSD headers and libraries, and passes over 80% of the KYUA tests. However, inside, the system is completely different. At the bottom is a small (about 13,000 lines of code) microkernel that handles interrupts, message passing, low-level scheduling, and hardware related details. Nearly all of the actual operating system, including memory management, the file system(s), paging, and all the device drivers run as user-mode processes protected by the MMU. As a consequence, failures or security issues in one component cannot spread to other ones. In some cases a failed component can be replaced automatically and on the fly, while the system is running, and without user processes noticing it. The talk will discuss the history, goals, technology, and status of the project.

Research at the Vrije Universiteit has resulted in a reimplementation of NetBSD using a microkernel instead of the traditional monolithic kernel. To the user, the system looks a great deal like NetBSD (it passes over 80% of the KYUA tests). However, inside, the system is completely different. At the bottom is a small (about 13,000 lines of code) microkernel that handles interrupts, message passing, low-level scheduling, and hardware related details. Nearly all of the actual operating system, including memory management, the file system(s), paging, and all the device drivers run as user-mode processes protected by the MMU. As a consequence, failures or security issues in one component cannot spread to other ones. In some cases a failed component can be replaced automatically and on the fly, while the system is running.

The latest work has been adding live update, making it possible to upgrade to a new version of the operating system WITHOUT a reboot and without running processes even noticing. No other operating system can do this.

The system is built on MINIX 3, a derivative of the original MINIX system, which was intended for education. However, after the original author, Andrew Tanenbaum, received a 2 million euro grant from the Royal Netherlands Academy of Arts and Sciences and a 2.5 million euro grant from the European Research Council, the focus changed to building a highly reliable, secure, fault tolerant operating system, with an emphasis on embedded systems. The code is open source and can be downloaded from www.minix3.org. It runs on x86 and the ARM Cortex-A8 (e.g., BeagleBones). Since 2007, the website has been visited over 3 million times and the bootable image file has been downloaded over 600,000 times. The talk will discuss the history, goals, technology, and status of the project.

fmstephe
That is very exciting. I am glad to hear about fairly substantial amounts of money being granted for this kind of project. I wish them well, but I won't be jumping on board this bus for a while.
carussell
I took a serious look at MINIX over the winter, and digested several of Tanenbaum's talks around that time. (For anyone wondering if this talk contains anything substantially different from past ones, the answer is no.)

Here are some things to add:

- Nowadays x86 is built with LLVM by default, and ARM is using GCC.

- X11 is mentioned in the video, but the 3.3.0 release from last fall didn't ship with a working X server, although past releases have. There is a message on the mailing list from someone who writes that they've got it working on a subsequent snapshot release.

- You may have heard something in the past about 10 minute build times. That info is out of date as of the switch to NetBSD userspace for 3.3.0. MINIX itself (i.e., all the interesting parts) still only takes about 10-15 minutes to build, but there's no way to just slurp down the sources for its kernel/drivers/servers to build a MINIX "core" and then supplement it with the prebuilt binaries for userspace (at least, not without doing some significant work on your end to allow for that). The initial build for x86 on my modest machine takes about 3 hours, almost all of it spent building LLVM twice.

- For anyone looking for serious collaborators, MINIX is seriously lacking in infrastructure from a project/community standpoint. E.g., a fair bit of documentation is missing and much of what you will find on the wiki is out of date. Development processes are neither documented nor easily discoverable because there are effectively no development processes in place. Until about six months or so ago, MINIX was without a bugtracker. Organizationally/project-wise, the whole thing is pretty sparse.

- If you have watched previous talks on MINIX, e.g. FOSDEM 2010, you will be familiar with the open calls for those interested in working for pay, using the money from the two grants mentioned in this video. That money is now gone. During that time, MINIX was basically a research project run by grad students who were working ~full time on MINIX, with the funding from those grants. It's the same now as far as the student-run aspect goes, but with drastically fewer contributions. Not many of the paid man-hours seem to have gone towards scaffolding out project infrastructure, as I mentioned before, or towards the sorts of drudge work that volunteers are unlikely to take up.

- The code quality is... I dunno, fair? As I mentioned before, there are/were effectively no development processes in place; no code review, etc. So there's a fair bit of nastiness that got checked in directly, like copy and paste, especially among the arch-specific boot time stuff; the comments are fairly sparse; and you can find dead code and references to routines/fields that either no longer exist or have been renamed, even in the relatively short (~500 line) main.c.

For anyone thinking about starting to work with MINIX, I'd suggest assessing whether or not you would be comfortable striking out and doing things on your own, and then being prepared to do so. With MINIX, you aren't going to find a thriving community that you can just add your piece to, so as to contribute to the effort. You might run into a certain level of that sort of old-guard, paralyzing stop energy, so in a way it's got a lot of the downsides of a greenfield project except with few, if any, of the upsides.

fmstephe
Thanks for this post. That is very interesting. It seems like a shame, because a project like this is a long road. If they haven't built a place where work can continue it will likely fade away.

I would love to see a sustainable, non-vapourware, micro-kernel. I've _heard_ such great things about them :)

cyber
This is pretty cool. It would be neat to see some of this technology folded back into NetBSD (potentially with the already existing modules infrastructure).
luckydude
He lost me when he said "start a new window" as the work around to not having job control.

Neat idea but seems nowhere near done.

And as others have said, this was nicely handled by QNX way more than 15 years ago; I was running multiple users on an 80286 around 1986 or so. Really neat system.

sigzero
I loved QNX.
nickysielicki
I am 100% on the side of Linus Torvalds when it comes to microkernels.[0]

I will concede that in some instances a microkernel may outperform a monolithic kernel in stability or performance or both. I am not the least bit excited about any progress made in microkernels; I feel that it can only result in much more closed systems that are easier to implement in ways that make them harder to modify. This is why I wish for Hurd to continue to fail.

[0]: http://www.oreilly.com/openbook/opensources/book/appa.html

vezzy-fnord
Microkernels have been running the world behind the scenes for a while now, but most people don't seem to have gotten the memo and still regard Mach as representative of u-kernels in general.
pjmlp
Yeah, for example each Symbian phone has one.

Also, Mac OS X and Windows are hybrid in design, not the traditional monolithic UNIX way.

JoshTriplett
There have been wildly successful microkernels. One of Xen's greatest successes was demonstrating that to encourage widespread adoption of a microkernel, you rebrand it as a hypervisor. More recently, some people have started running software directly under Xen without a full OS, including language runtimes, all without ever calling it a microkernel.
cbd1984
Microkernels and hypervisors are not the same thing.
JoshTriplett
What's the difference between Xen and a microkernel? They both manage memory, do CPU scheduling, provide efficient message-passing, protect "processes" running underneath them from each other, and leave almost everything else to the "processes" running underneath them.
cbd1984
> What's the difference between Xen and a microkernel?

Xen allows full OSes to be guests and run on top of it. Microkernels only allow servers to run on top of them, and those servers have to be purpose-written and cannot meaningfully be ported.

Xen doesn't provide hardware abstraction and is fully invisible (except to the extent it advertises itself); microkernels are neither.

Paravirtualization (what you did before VT-x and similar) was an oddity, and blurs these lines a tiny bit, but the distinction is fairly clear otherwise.

nickpsecurity
Gernot Heiser states it here:

https://microkerneldude.wordpress.com/2008/04/03/microkernel...

Their highly-efficient microkernel has been doing so well that most people using it don't know it's in their smartphones. Do most virtualization solutions have a similar impact on user experience? ;)

JoshTriplett
> Xen allows full OSes to be guests and run on top of it. Microkernels only allow servers to run on top of it, and those servers have to be purpose-written and cannot meaningfully be ported.

L4 is an archetypal microkernel, and people often run full OSes or other ported software under it, including Linux.

> Xen doesn't provide hardware abstraction and is fully invisible (except to the extent it advertises itself); microkernels are neither.

Microkernels typically don't abstract any hardware other than CPU and memory; any other drivers would run under the microkernel.

And Xen is only "invisible" if you run full hardware virtualization and no paravirtualized drivers.

> Paravirtualization (what you did before VT-x and similar)

People still use "paravirtualization" today; see the "virtio" drivers typically used with KVM.

cbd1984
Microkernels are the wave of the future and always will be.
rodgerd
L4 is used in the secure element of every modern iPhone. OS X/XNU is based on microkernel designs. Windows is a hybrid.
bch
Are you asserting MacOS X is "based on microkernel designs" just because some versions[0] of Mach are a microkernel, or something else?

[0] https://en.wikipedia.org/wiki/Mach_%28kernel%29

bch
Hrmmm. Digging deeper (and trawling my memory), I'm getting conflicting information:

* https://en.wikipedia.org/wiki/XNU

* https://youtu.be/8RwlEZ88rKM

PuercoPop
I'm confused. I understand you think the Hurd is making a technical mistake, but why do you want it to fail?

Is it because you know that usage decisions of software are not based on technical merits? Or do you not want to be proved wrong? Or something else?

nickysielicki
Let's go to an alternative universe where Hurd was successful in the 90's and it reached common usage to the extent that Linux has today.

You're Western Digital in 2008 and you're making a TV set-top box called the WDTV-Live. I own one of these in the real-life universe. It runs linux, which is awesome, because that means that I can SSH into it. It runs an apache server in my home. It can download from usenet or torrents. I can control it via SSH instead of using the remote control.[0]

In this alternative universe, WDLX is going to use Hurd instead of linux, because for this small device it will certainly have better performance on their underpowered MIPS chip. And they're not going to ship anything besides what they have to, because this is a small embedded computer.

What happens to that homebrew community when they ship a microkernel with proprietary servers for everything, and nothing else? It's going to be profoundly difficult to develop on this. You might already see this if you own a chromebook or a WDTV-- missing kernel modules means that you simply can't do anything without compiling your kernel. Couple this with secureboot and you're locked in.

I'm no expert on these things, most of this is based on brief research from years ago. If you think that I'm wrong, please tell me why, I'd love to be proven wrong. But for the time being, I believe widespread implementations of microkernels would be very anti-general-purpose computing.

[0]: http://wdlxtv.com/

vezzy-fnord
Your example doesn't make much sense because Hurd is the servers. The microkernel component itself is GNU Mach.

Shipping an embedded appliance with a microkernel and proprietary servers again makes no sense, because it's akin to rewriting userspace from scratch on top of the base VMM, schedulers and disk I/O. Just for a TV set top?

KaiserPro
> And they're not going to ship anything besides what they have to, becasue this is a small embedded computer.

It happens already, to keep hardware costs down. The whole point of linux is that you can pick and choose which userland services to ship... (SSH being a userland service)

> ship a microkernel with proprietary servers for everything, and nothing else?

What's to stop them now? Effort. It costs real money to create proprietary programmes from scratch. One of the reasons they would have chosen linux in the first place is that half the work is done for them (decoding libraries, network stacks, hardware interfaces, communications daemons).

e12e
I suppose you've never run into an Android device that didn't run ssh out of the box, or had a locked bootloader? Perhaps not distributed with full sources so you could easily modify the system?

This is the reason for AGPL/GPL3 -- not much of an argument against modular software/kernels.

atmosx
> What happens to that homebrew community when they ship a microkernel with proprietary servers for everything, and nothing else?

In order to do what you did on your WD TV-Live, you flashed the image with a new one. Otherwise you wouldn't be able to. So even in the case of a micro-kernel you would just flash the pre-installed image with a new one (voiding the warranty).

But to get to the point, do you have any idea how much effort it would take for a corporation to write a reliable httpd server that has apache's capabilities, plugins, testing and support? Then write their own update system, dhcp client and so on? It would take a huge amount of $$$ and time. And most of them would probably be buggy. So either way they would have gone with Free Software if they wanted to stay in the current price range.

jdub
You'd achieve the same thing by building a proprietary userland, from the C library up, on top of the Linux kernel. As far as we know, no one has bothered (on a large scale) because it would be a massive waste of time and money.

The closest you'll find to this alternate universe is Android.

bitwize
What would happen is, RMS comes up with yet another GPL variant to prevent that scenario from happening.
carussell
Where/how do the proprietary servers come in for a kernel that doesn't want to allow them (and therefore would go through no special effort to make them possible)?

Remember that the reason you can link against glibc is because it's LGPL and not GPL. The LGPL was created for a reason. There's also a reason why when the decision was made to release Java under the GPL, Sun explicitly added a linking exception. It's because that isn't something you just automatically get for free.

isr
Isn't this dependent upon the microkernel's license? Wouldn't it be possible to use an open-source license which explicitly forces servers that use it to also be covered under the same license?

Something akin to the Affero GPL?

minthd
But isn't that basically always the tradeoff - if you want security, you play by the rules of the company that built it? Is there even a theoretical way out of this?
caf
The idea of the Hurd is that any user is able to run whatever server they want - GNU has been concentrating on microkernels not because it's the new hotness but because they believe it's good architecture for more openness.

So presumably in this hypothetical case you'd be able to upload and run whatever additional servers you needed on the WDTV. You might say "but they might make it impossible to log in and do that", but they could have done the same under Linux just by not running sshd - however, they didn't.

vezzy-fnord
MINIX 3 has been based on the NetBSD userland since the beginning, I think. That said, always interesting to hear Tanenbaum talk and the dynamic upgrade/checkpointing features sound interesting.
psgbg
Indeed. I really love the part about the reverse check. It would never have crossed my mind.
istvan__
Weren't they using FreeBSD?
carussell
No.
istvan__
It seems they were using some of the tools:

https://2011.eurobsdcon.org/papers/gras/minix-bsd.pdf

"ABSTRACT MINIX 3 has imported a significant amount of userland BSD code. The trend began several years ago, but the pace has quickened markedly. We have already imported NetBSD’s buildsystem, NetBSD’s C library, the pkgsrc package management infrastructure, and various userland utilities from NetBSD and FreeBSD."

carussell
I thought we were talking about whether or not MINIX was using FreeBSD userland before moving to NetBSD for 3.2.0.
istvan__
I was just generally asking. But thanks.
mahmud
Glad you clarified and said Minix 3; I remember the 90s version had its own userland, and its own C compiler (also written by AST); I know this because I hacked on them extensively. It was probably the first time I had seen extensive source code, more than a snippet or a *.c file.
boardwaalk
I was trying to install minix 3.3 for fun and ran into a bug in the e1000 driver that caused VirtualBox to throw up. It's fixed already, but not in 3.3:

https://github.com/Stichting-MINIX-Research-Foundation/minix...

virtio seems to be working.

codezero

  The latest work has been adding live update, making it possible to upgrade
  to a new version of the operating system WITHOUT a reboot and without
  running processes even noticing. No other operating system can do this.
Can anyone correct me if I'm wrong – but can't Linux do this with Ksplice, and the more recent live kernel patching by Red Hat?
glibgil
The video can correct you. Watch it.

> MUCH BETTER THAN KSPLICE

> KSPLICE can handle only small security patches

> KSPLICE patches the running process

> Over time, crud accumulates in the process

> If the update fails, there is no recovery

https://youtu.be/0pebP891V0c?t=47m56s

istvan__
Yes, and there are some cases when it works. I would not run our production system with Ksplice. The microkernel architecture makes it possible to restart kernel functionality seamlessly. Emphasis on restart. Ksplice just binary-patches the running kernel, overwriting the changed parts. Very different scenario.
vezzy-fnord
The talk addresses Ksplice specifically, noting that unlike the MINIX mechanism, it doesn't handle significant changes to data structures well.
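
The data-structure point is the crux of live update: the new version of a component has to ingest the running state of the old version, even when the layout of that state has changed. Here is a toy sketch of that state-transfer step; it is only an illustration of the idea, not MINIX's actual mechanism, and the structs and field names are invented for the example.

  /* Toy illustration of the state-transfer step in a live update.
   * The new version of a component must be able to ingest the old
   * version's state, even when the layout of that state has changed. */
  #include <stdio.h>
  #include <string.h>

  struct drv_state_v1 {            /* layout used by the old component */
      unsigned long packets;
      int link_up;
  };

  struct drv_state_v2 {            /* new version adds a field */
      unsigned long packets;
      unsigned long errors;        /* did not exist in v1 */
      int link_up;
  };

  /* Transfer function: maps old state into the new layout and fills in
   * sane defaults for anything the old version never tracked. */
  static void transfer_state(const struct drv_state_v1 *old,
                             struct drv_state_v2 *new)
  {
      memset(new, 0, sizeof *new);
      new->packets = old->packets;
      new->link_up = old->link_up;
      new->errors  = 0;            /* default for the new field */
  }

  int main(void) {
      struct drv_state_v1 running = { .packets = 123456, .link_up = 1 };
      struct drv_state_v2 upgraded;

      transfer_state(&running, &upgraded);     /* the "live update" moment */
      printf("packets=%lu errors=%lu link_up=%d\n",
             upgraded.packets, upgraded.errors, upgraded.link_up);
      return 0;
  }
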
nickpsecurity
An AIX admin claimed it could do a live update of even kernel code without a reboot. The solution for most others is clustering, needed to deal with hardware risk anyway. OpenVMS clusters have reportedly gone 20+ years without system downtime. Individual instances needed reboots so rarely that admins occasionally forgot how to do it. So, yes, there are precedents in other operating systems according to their users.

Note: As far as OpenVMS goes, I believe it because those designers built it like their jobs depended on it not failing. A cluster of that OS shouldn't experience any significant downtime, given about everything I've ever read on it.

wumbernang
We had 11 years of uptime on a VAX cluster at a company I worked for in the late 1990s.

They took it down in 2001 to replace it with something that took up 2U of rack space, used about 2 kW less power, and ran Windows 2000.

I turned up in 2012 to replace it again with something cloudy and it had 11 years of uptime (well done NT!) again[1] so YMMV.

The cloud based version has gone down about 10 times (thanks Azure!).

[1] not a great position but this was on an isolated network with locked down everything so less of a problem than a normally networked system.

nickpsecurity
That's funny. I swear I've considered just buying up a boatload of used Alpha and Itanium machines to keep a VMS cluster going another decade. Put a guard in front of it to block any attacks due to its age or protocols. People might laugh but my stuff would stay running no matter what. Example below:

http://h71000.www7.hp.com/openvms/brochures/commerzbank/comm...

Notice how the Intel hardware all failed when things heated up a bit. The AlphaServers running VMS just kept chugging along. The eventual fail-over didn't lose a single transaction. It aggravates me that I can't easily obtain such reliable IT hardware/software anymore outside eBay. I mean, HP NonStop sure as hell doesn't have a hobbyist program with used servers for $130. ;)

wumbernang
The mid-range HP stuff is pretty reliable. We had a DL380p Gen7 survive the switch underneath it catching fire. Had zero chassis failures on about 500 nodes in the last 12 years as well. Lose disks and power supplies all the time and the odd Ethernet interface but nothing else.

Agree with ebay. I still look around for Sun Ultra kit now and then but the wife has other ideas because it's noisy and expensive to run.

nickpsecurity
Wow! That is impressive. I appreciate the tip on those.

" I still look around for Sun Ultra kit now and then but the wife has other ideas because it's noisy and expensive to run."

The battle that never goes away. Haha. This is why you need a basement or soundproofed room for that stuff. That's on my todo list for next house.

fithisux
I believe microkernels are a very good abstraction/concept.

For me it is very interesting. I would like to see darwin running on a real micro kernel some day (e.g. an updated mach).

I also like the fact that Hurd is making some progress. When it's ready, I will definitely switch.

In any case hardware vendors must release more documentation on their hardware to revive the OS scene. Blobs do not do much good.

mzs
Oh wow, mklinux is still around... http://www.mklinux.org/

Sorry, this really has nothing to do with the video; it's just that the tangential thought of linux on mach made me wonder, and I was pleasantly surprised.

nickpsecurity
The thing I like best about Tanenbaum's videos is that the dude is a great presenter. Funny too. You just don't get bored easily. We might get more outreach on some of these topics if presenters in those areas dressed them up better.
brian_smith
I actually just bought the 3rd edition of Operating Systems: Design and Implementation (and Design of the Unix Operating System) when I watched this. Now I'm quite excited. Any recommendations for other books on this topic?
regularfry
The live update and reincarnation server in general remind me a lot of the ideas in Erlang.
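
The Erlang comparison is apt: the reincarnation server is essentially a supervisor that watches its children and restarts the ones that die. A stripped-down sketch of that restart loop in ordinary POSIX C follows; it is illustrative only, not the real reincarnation server, and the restart limit and the simulated crash are made up for the example.

  /* Minimal supervisor/"reincarnation" loop: start a component, wait for
   * it to die, and restart it if it died abnormally. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>
  #include <unistd.h>

  static pid_t start_component(void) {
      pid_t pid = fork();
      if (pid == 0) {                     /* child: the "driver" */
          printf("driver %d: running\n", (int)getpid());
          sleep(1);
          abort();                        /* simulate a crash */
      }
      return pid;
  }

  int main(void) {
      int restarts = 0;
      pid_t pid = start_component();

      for (;;) {
          int status;
          waitpid(pid, &status, 0);
          if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
              break;                      /* clean shutdown, leave it down */
          if (++restarts > 3) {
              printf("supervisor: giving up after %d restarts\n", restarts - 1);
              break;
          }
          printf("supervisor: component died, restart #%d\n", restarts);
          pid = start_component();        /* "reincarnate" the component */
      }
      return 0;
  }
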
Animats
That's nice, but late. QNX had that 10-15 years ago. With hard real time scheduling, too.

All you really need in a practical microkernel is process management, memory management, timer management, and message passing. (It's possible to have even less in the kernel; L4 moved the copying of messages out of the kernel. Then you have to have shared memory between processes to pass messages, which means the kernel is safe but processes aren't.)

The amusing thing is that Linux, after several decades, now has support for all that. But it also has all the legacy stuff which doesn't use those features. That's why the Linux kernel is insanely huge. The big advantage of a microkernel is that, if you do it right, you don't change it much, if at all. It can even be in ROM. That's quite common with QNX embedded systems.

(If QNX, the company, weren't such a pain... They went from closed source to partially open source (not free, but you could look at some code) to closed source to open source (you could look at the kernel) to closed source. Most of the developers got fed up and quit using it. It's still used; Boston Dynamics' robots use it. If you need hard real time and the problem is too big for something like VxWorks, QNX is still the way to go.)
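
On the shared-memory point: once the kernel stops copying messages, the two sides need a region they both map plus some signalling discipline, and a bug in either process can corrupt the shared buffer, which is the safety trade-off mentioned above. A bare-bones sketch using an anonymous shared mapping and a C11 atomic flag; this is illustrative only, not L4 or QNX code, and a real system would block instead of busy-waiting.

  /* Bare-bones shared-memory message passing: the "kernel" only provides a
   * region both processes can map; the processes copy data themselves. */
  #include <stdatomic.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/wait.h>
  #include <unistd.h>

  struct mailbox {
      atomic_int full;                             /* 0 = empty, 1 = message waiting */
      char text[64];
  };

  int main(void) {
      /* A region mapped into both processes; after fork() the child
       * shares the same pages because of MAP_SHARED. */
      struct mailbox *mb = mmap(NULL, sizeof *mb, PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
      if (mb == MAP_FAILED) { perror("mmap"); return 1; }
      atomic_init(&mb->full, 0);

      pid_t pid = fork();
      if (pid == 0) {                              /* receiver */
          while (atomic_load(&mb->full) == 0)      /* busy-wait, for the demo only */
              ;
          printf("receiver got: %s\n", mb->text);
          _exit(0);
      }

      strcpy(mb->text, "hello via shared memory"); /* sender writes the payload... */
      atomic_store(&mb->full, 1);                  /* ...then publishes it */
      waitpid(pid, NULL, 0);
      return 0;
  }
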

lambdaelite
Funny you mention L4... based on the HN title, my first thought/hope was that someone got seL4 running with the NetBSD userland.
justincormack
Are you interested in this? I am planning to get the NetBSD rump kernel running on sel4 at some point soonish, not quite the same but you would get much of userland. (Email in profile).
nchelluri
When is VxWorks inappropriate, but QNX appropriate?

EDIT: http://www.embeddedrelated.com/showthread/comp.arch.embedded... says:

> the most fundamental difference between VxWorks and QNX is as you have described, QNX lends itself to a message passing architecture while VxWorks lends itself to a shared memory architecture.

>

> My personal opinion is that a message passing architecture is easier to get to grips with and as such is potentially easier to understand and debug.

> However, the majority of software engineers with experience of an embedded RTOS will be very well informed about the Shared Memory architecture.

gte525u
I think it's less of an issue now than, say, 10 years ago. VxWorks 6.x added support for protection domains (MPU/MMU support) and RTPs (real-time processes). In VxWorks 5, everything operated in the kernel. Even with 6.x, very little typically runs by default in user space on a VxWorks setup.

With respect to message passing - both support messaging. VxWorks has several types of message queues - the VxWorks-proprietary msgQLib API, the POSIX API, etc. QNX has much the same: MsgSend/MsgRecv, which is the microkernel API, plus POSIX. QNX has an add-on PubSub middleware that the OP of the usenet group may be thinking of.

saosebastiao
Would this imply no support for mmap (or similar) in QNX? Or is it just not very optimal to use it?
gte525u
Both support mmap and shared memory - that's why I found the "shared memory" usenet post a little puzzling.
unethical_ban
I'm watching the video now, but are you suggesting that QNX, which is not Free and Open Source, has already accomplished MINIX's stated goals of OS reliability?

I would like to hear Mr. Tanenbaum's answer to the less provocative form of the sentiment: "What design decisions were made with MINIX3 that other RTOS with microkernels didn't consider?"

jacquesm
QnX achieved Minix's stated goals of OS reliability 20 years ago.

And Minix isn't a micro kernel in the same way that QnX is.

rbanffy
QNX achieved it years before MINIX existed.
betaby
Interesting - how is reliability measured in the context of an OS kernel? Also, I'm sure someone took a look at the QNX source code to understand which techniques were used to achieve that. Since the code is available but not open source, perhaps someone even re-implemented those ideas. Are there any good, up-to-date sources on the subject?
jacquesm
Reliability of an OS kernel is measured exactly the same way as any other complex piece of technology: you note how often it does not do what it should do. In the case of a large QnX deployment, over all the time that I worked for the company involved (several years), there were 0 incidents that we could attribute to the QnX kernel (or even any part of QnX).

If something didn't work it was either hardware or our own code, the way it should be. It's not magic but it is very good at what it does. And it is a way of building things that just works, message passing is a very powerful technique for making reliable distributed systems.

ptaipale
> Reliability of an OS kernel is measured exactly the same way as any other complex piece of technology: you note how often it does not do what it should do.

To me this sounds a bit funny - isn't the whole point of microkernel architecture that you achieve the reliability by simplifying the kernel? It's a less complex piece of technology, so it can be more reliable, and the complexity is offloaded into user-space processes.

Yes, it does not do what it should not do, because it does less.

jacquesm
A microkernel is simple, but the whole OS is not, and when you have all the lines counted, a micro kernel + associated programs will be about the same complexity as a macro kernel + associated programs.

It's just that with the microkernel more of the code will be in stand-alone programs. Every driver, file system, network layer and so on will be a program all by itself.

ptaipale
Exactly my point. Now, the interesting question is whether you have a smaller total number of bugs, and amount of impact, with this architecture or that architecture. I don't know about that, but if you are measuring the reliability of microkernels "exactly the same way as any other complex piece of technology", you must keep in mind that you are probably measuring the behaviour of a smaller and simpler thing, and the risk for bugs has been offloaded somewhere else; it has not disappeared.
jacquesm
You have a much smaller number of bugs because (a) each component is much simpler, (b) it runs as a separate process and so can be debugged and worked on by mere mortals, and (c) it works using a well-defined interface (message passing), which makes testing and debugging a much simpler affair.
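
A concrete way to see point (c): when a server's behaviour is defined entirely by the messages it accepts and returns, the request handler is just a function from message to message and can be exercised in an ordinary unit test, no kernel involved. A small sketch along those lines; the message layout and request codes are invented for the example.

  /* Sketch of why a message interface helps testing: the "server" logic is
   * a pure function from request message to reply message. */
  #include <assert.h>
  #include <stdio.h>

  struct message { int type; long arg; long result; };

  enum { REQ_ADD_ONE = 1, REP_OK = 2, REP_ERR = 3 };

  /* The handler under test. */
  static struct message handle_request(struct message req) {
      struct message rep = { .type = REP_ERR, .arg = req.arg, .result = 0 };
      if (req.type == REQ_ADD_ONE) {
          rep.type = REP_OK;
          rep.result = req.arg + 1;
      }
      return rep;
  }

  int main(void) {
      /* Drive the handler directly with crafted messages; no kernel needed. */
      struct message rep = handle_request((struct message){ .type = REQ_ADD_ONE, .arg = 41 });
      assert(rep.type == REP_OK && rep.result == 42);

      rep = handle_request((struct message){ .type = 999, .arg = 0 });
      assert(rep.type == REP_ERR);

      puts("all message-handler tests passed");
      return 0;
  }
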
stox
I think UNIX-RTR has met those goals.
pjmlp
Thanks for pointing it out. I wasn't aware of it.
nickpsecurity
Tanenbaum cited QNX in Round 2 of the microkernel debate between him and Linus. It's had all sorts of great traits for a long time. It also had plenty of development time and a rip-off open source model to give it capabilities. Like Tanenbaum said in his paper, Minix 3 has had a small number of core developers working on it for a relatively short amount of time. There's no way Minix 3 will trump QNX with such small resources, and I doubt they planned to. It's more a start on building something using better engineering principles that might eventually become a great alternative to other UNIXes and Linux.
bch
> If QNX, the company, weren't such a pain...

Well, regardless of how late, Minix is bringing it fuss-free to the rest of us now.

pjmlp
Not really free from the EU taxpayers' point of view.

I am not complaining as I find Minix great, just making the point that nothing is really free.

rbobby
RIM bought QNX a while back... which probably has tossed some spanners in the works in various ways.
soperj
Based on what, you not being a fan?
Animats
That's when it suddenly went from open source to closed source. One day, RIM removed the sources from the web and FTP servers.

QNX used to be a standalone company. They declined a buyout by Microsoft. Then one of their key people died, and they sold out to Harman, the car audio company. So then they were heavily into automotive dashboard applications, which continues. Then RIM bought them, which gave the Blackberry a better underlying OS but didn't fix Blackberry's problem of an obsolete UI and business model.

Meanwhile, QNX still sells to real-time and industrial automation customers, but those customers feel kind of neglected. Incidentally, the Boston Dynamics robots all have QNX managing the balance and the hydraulic servovalves. You need reliable hard real time for that.

nickpsecurity
Yeah, they have huge problems that QNX itself can't begin to fix. Yet the Blackberry Playbook (running on QNX) totally screamed in performance and responsiveness in the tests against the iPad. Showed how smart a decision they made to use a proper RTOS for their devices. I gave the product manager props for the decision.

I think they should stick to their strong suit and build on strengths in the business world. Collaboration, integration with legacy stuff, bake in more security than iPhone/Android, and so on. Build services on top of that like IBM does with all their stuff.

joshuapants
I don't really think that BBOS 10 has an obsolete UI. The Hub is actually really convenient and I wish more smartphones had something like it.

Blackberry isn't failing because of their OS, they're failing because they were way too late to market with robust smartphones and lost a huge amount of mindshare after Apple and Google ate their lunch.

vezzy-fnord
QNX is fascinating on its own, but MINIX 3 is still a different project in that its full adoption of a NetBSD userland will probably make it more useful for generic servers and workstations as well. They also seem to be going much deeper with checkpointing and dynamic upgrades/hot code reloading.

> If you need hard real time and the problem is too big for something like VxWorks, QNX is still the way to go.

There are all sorts of much tinier RTOSes like FreeRTOS, MicroC/OS and Contiki that are used out there for particularly critical and/or constrained environments.

david-given
QNX has a big advantage in that from userland it's basically a Unix. You can develop on it completely self-hosted on a desktop PC. The GUI's pretty good; it even comes with Java and Eclipse.

They've downplayed that recently, alas, but I believe that if you hunt around on their website you can find a bootable CD. I think platform support has slipped a bit so you might have trouble making it boot.

Way back when, there was a QNX demo floppy, which was a bootable 1.44 MB floppy disk which contained a full GUI, web browser, dialup modem support, etc. It'd run on a 386 with 8MB of RAM.

http://toastytech.com/guis/qnxdemo.html

QNX is pretty awesome.

Edit: Here's a similar writeup of the bootable CD, using a more recent version of the GUI. http://toastytech.com/guis/qnx621.html

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.