HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
BeOS DEMO VIDEO

koyhoge · YouTube · 125 HN points · 12 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention koyhoge's video "BeOS DEMO VIDEO".
YouTube Summary
Official BeOS Demo Video from Be, Inc.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Nov 15, 2022 · pcurve on 'Be' is nice, end of story
New-ish video of an actual BeBox running, in HD video quality: https://www.youtube.com/watch?v=RkM9WbB8cWM I didn't know it had a functional LED light meter on the front of the case. Very cool.

Obligatory demo from the 90s https://www.youtube.com/watch?v=BsVydyC8ZGQ

a-dub
it can play MULTIPLE VIDEOS AT THE SAME TIME (!)
unsupp0rted
It can show the content of a folder window while dragging it around the screen!
mkl
That kind of multitasking was one of the most impressive things about it. I could play four MPEG videos at once smoothly (with the interface and other windows still responsive) while Windows struggled to play one on the same hardware. That and it booted in 30 seconds including detecting and loading built-in drivers for all my hardware (Windows took 1-2 minutes and needed drivers manually installed).

I never used BeOS much though, as there wasn't much software available for it. This was in ~2000.

jandrese
It's easy to boot quickly when you have very little supported hardware to look for. I remember compiling custom FreeBSD kernels with only the hardware on my box and racing my roommate's BeBox to login prompt. Then he would start playing a bunch of movies at once and I'd open up Netscape.
a-dub
ironically, freebsd was late to the multiprocessor party, with a global kernel mode lock that existed well into the mid-2000s.
acdha
Even better, they had a real I/O scheduler. I could transfer video from a DV tape reader with a tiny buffer while Mozilla compiled in the background, and while the compiler got slower, the tape drive never stalled and the UI never lagged. That was quite a change compared to Windows or classic macOS, where you had to treat the system as a single-app device.
helf
yes! The Blinkenlights on the front of the case were a hoot. I had a prototype BeBox for a while and it was a delight. I linked to some images in another comment.
anentropic
Dude sure has enough plants on their desk
classichasclass
It's actually the system load on each 603e.

Source: dual BeBox/133 on my desk

helf
Still have one? I am jelly. Is there a working version of Haiku for them yet?
classichasclass
Sadly no, but I still get a lot of wear out of R5. I have it on this BeBox and on a 6500/275 that I'm trying to figure out sound issues on.
helf
I wonder if they will ever target such an old ISA.

What are your sound problems? I only briefly used BeOS on an actual Mac and it, amazingly enough, worked without a hitch.

classichasclass
Something with the AWACS driver (need to run off an instrumented build and boot the kernel in debug mode to see what's up, just gotta get a round tuit). The 6500 is technically Unsupported but Compatible in Be's support matrix. Some people report the sound works, some people report it doesn't. Ditto for the TAM, which is basically a 6500 in a cool suit with an LCD.
memsom
mmuman was still talking about it recently, but the issue is that the compiler used under BeOS is proprietary - Metrowerks mwcc - and is basically DOA now (do they even still make it commercially available?). The PowerPC version of BeOS uses the proprietary Apple PEF exe format too, and there are almost no modern compilers that support that format (Retro68 is the only one I know of). Haiku is therefore likely not to support anything legacy from BeOS PowerPC, and that makes even bootstrapping difficult, as you need a whole new OS to run the new code.
I installed BeOS a long time ago on a PC. It was something ahead of its time.

I still remember how incredible the rotating cube demo was: you could drag and drop images and videos onto the cube faces... it worked without a glitch on my Pentium.

Just found out that the demo video shows the application, with a GL wave surface playing a video over it: https://youtu.be/BsVydyC8ZGQ?t=1074

cptnapalm
By all the stars in heaven, that was an impressive demo.
tialaramex
It's about making a virtue of a necessity.

When Be wrote that demo the situation is that the other operating systems you might plausibly choose all have working video acceleration. Even Linux has basic capabilities in this area by that point. BeOS doesn't have that and doesn't have a road map to get it soon.

So, any of the other platforms can play full resolution video captured from a DVD for example, a use case actual people have, on a fairly cheap machine, and BeOS won't be able to do that without a beast of a CPU because it doesn't even have hardware colour transform acceleration or chromakey.

But - 1990s hardware video acceleration can only play one video at a time, because "I want to play three videos" isn't a top ask from actual users. So, Be's demo deliberately shows several different postage stamp videos instead of one higher resolution video, as the acceleration is no help to competitors there.

And then since you're doing it all in software, not rendering to a rectangle in hardware, the transform to have this low res video render as one side of a cube or textured onto a surface makes it only very slightly slower, rather than being impossible.

Audiences come away remembering they saw BeOS render videos on a 3D surface, and not conscious that it can't do full resolution video on the cheap hardware everybody has. Mission success.

anthk
Eh, multithreaded decoding could help a lot. And by the time DVD video was popular in computers (and the PS2), most people had a Pentium III 450 MHz at home, which was more than enough for DVD video with an ASM-optimized video player such as MPlayer and a good 2D video card.

2D acceleration was more than enough.

http://rudolfs-place.nl/BeOS/NVdriver/3dnews.html

On Linux you didn't need OpenGL, just Xv.

Source: I was there, with an Athlon. Playing DVDs.

forwhomst
Not to disagree much, but when Be was going around the country doing their "demo days" on the BeBox, nobody had a DVD player in their computer. DVD wasn't even on the market until after Be had ported BeOS to x86. People thought that playing the VHS-quality music videos that were included in Windows 95 was a hot demo (on a PC).
smallstepforman
BeOS R4.5 did have hardware-accelerated OpenGL for 3dfx Voodoo cards. I played Quake 2 in 1999 with HW OpenGL acceleration. For R5, Be Inc. wanted to redo their OpenGL stack, and the initial prototypes seeded to testers actually had more FPS on BeOS than under Windows.
kccqzy
Impressive. But it makes me think how far we've come; it's long been possible to do a rotating cube with video using pure HTML and CSS.
rusk
Remember when the Amiga bouncing ball demo was impressive? Ironically, 3D graphics ended up being the Amiga's specific Achilles heel once Doom and co came on the scene.
all2
That's curious to me. Doom is specifically not 3D. Was it a publishing issue (that Doom and co weren't produced for the Amiga), or a power issue, or something else?
rusk
Doom didn't use polygons, but it very much was 3D in any practical sense of the term.
anthk
No, it was "distorted" 2D, like cardboard cutouts put in perspective. Not 3D.
rasz
No free look meant no perspective distortions in Doom.
rusk
You are still getting confused by polygons. It was a 3D space that you could move around in. The matter of how it was rendered is an implementation detail.
likeclockwork
It really wasn't. Doom's gameplay almost entirely took place in a 2D maze with one-way walls. It was rendered to look 3D, and as you said, that's an implementation detail.
anthk
You couldn't look up and down, nor could you in DN3D.

I am not confused, quite the opposite. I grew up with that.

rusk
I grew up with it too ... I disagree with your categorical boundaries. The distinctions you draw are purely technical.
mtrower
Purely technical? You can't go above or below anything; no two objects can exist at the same X/Y; height doesn't exist in any true fashion (the attribute is used purely for rendering --- there is no axis!). How is the question of whether a third axis exists in a supposedly 3D environment purely technical?

With only two axes, it is literally a 2D space, which gives some illusion of 3D as an implementation detail --- not the other way around.

rusk
It isn't "literally" a 2D space. It is "topologically" a 2D space, in that you could represent it as a 2D space without losing information. It doesn't provide 6 degrees of freedom, but it is very much experienced as a 3D game environment.

EDIT also, using the term "literally" to talk about 3Dness when it is all rendered onto a 2D screen, is fairly precarious. No matter how many degrees of freedom, or how rendered, it will never be "literally" 3D, in the literal sense of the term.

rasz
You can look up/down in Duke3D; it's under the Home/End keys. It doesn't look pretty or correct, but you can do it.
forthac
Doom was a 2D space that looked like a 3D space due to rendering tricks. You could never move along the Z-axis though because the engine doesn't represent, calculate, or store one. That's why you can't jump, and there are no overlapping areas of the maps.
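
For what it's worth, the released Doom source bears this out: in the on-disk map format a vertex is just an (x, y) pair, and heights live on the sector as per-surface render attributes. The sketch below paraphrases the structs from doomdata.h.

    /* Paraphrased from the released Doom source (doomdata.h). A map
       vertex has no Z at all; floor/ceiling heights are per-sector
       attributes the renderer uses, not a third axis to move along. */
    typedef struct {
        short x;
        short y;                 /* no z */
    } mapvertex_t;

    typedef struct {
        short floorheight;       /* used to draw the sector... */
        short ceilingheight;     /* ...not to position objects in 3D */
        char  floorpic[8];       /* flat (texture) names */
        char  ceilingpic[8];
        short lightlevel;
        short special;
        short tag;
    } mapsector_t;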
rusk
Regardless of the “technicalities”, my point was that this and other 3D games were something the Amiga could not do well - whether 3D or “simulated 3D”.
T-hawk
Doom's 3Dness or lack thereof only mattered to programmers. Players didn't care; to them Doom looked entirely 3D.
nguoi
Players didn't have to aim up to shoot something above them
mtrower
Curious. As a player, I certainly cared. There's a world of difference between Doom and Quake...
joakleaf
The Amiga had planar graphics modes, while PC/VGA cards had a chunky mode (the 320x200x256-colour mode).

It means that, to set the colour of a single pixel on the Amiga, you had to manipulate bits at multiple locations in memory (5 bitplanes in 32-colour mode), while on the PC each pixel was just one memory location; in chunky mode you could just do something like videomem[320*y+x]=158 to set the pixel at (x,y) to colour 158, where videomem would point directly to the graphics memory (at address 0xa0000) -- it really was the simplest graphics mode to work with!
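
A minimal sketch of the difference in C, assuming a DOS-era environment where 0xa0000 is directly addressable (the planar routine just illustrates the Amiga scheme; it isn't actual Amiga code):

    #include <stdint.h>

    /* Chunky (VGA mode 13h, 320x200x256): one byte per pixel, one write. */
    static uint8_t *videomem = (uint8_t *)0xA0000;  /* assumes DOS-era flat addressing */

    void put_pixel_chunky(int x, int y, uint8_t color) {
        videomem[320 * y + x] = color;
    }

    /* Planar (Amiga-style, 5 bitplanes for 32 colours): the same pixel is
       one bit in each of five separate planes, so five read-modify-writes. */
    void put_pixel_planar(uint8_t *plane[5], int x, int y, uint8_t color) {
        int     offset = y * (320 / 8) + x / 8;
        uint8_t mask   = 0x80 >> (x % 8);
        for (int p = 0; p < 5; p++) {
            if (color & (1 << p))
                plane[p][offset] |= mask;
            else
                plane[p][offset] &= ~mask;
        }
    }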

If you just copied 2D graphics (without scaling/rotating), the Amiga could do it quite well using the blitter/processor, but 3D texture mapping was more challenging because you constantly read and write individual pixels (each pixel potentially requiring 5 memory reads/writes on the Amiga vs. 1 on the PC).

Doom's wall texture mapping was affine, which basically means scaling+rotation operations were involved. The sprites were also scaled. Both operations were a problem for the Amiga.

As software-based 3D texture-mapped games became the new hot thing in 1993-1997, the Amiga was left behind. It probably wouldn't have been a problem if the Amiga had survived until the 3D accelerators of the late 90s.

This is quite well described elsewhere. Google is your friend if you want to know more! :-)

rusk
Also, the Amiga didn't have hardware floating point, whereas the DX series of PCs in the 90s did. Essential for all those tricky 3D calculations and texture maps.
tialaramex
No. Hardware floating point was _Quake_

Quake has software full 3D, which runs appallingly if you can't do fast FP; it's targeting the new Pentium CPUs, which all have fast FPUs onboard. It runs OK on a fast 486DX, but it flies on a cheap Pentium.

Doom is just integer calculations, it's fixed point math.
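
A minimal sketch of 16.16 fixed point in C; the released Doom source really does use a 16.16 format it calls fixed_t, though these helper names are illustrative:

    #include <stdint.h>

    /* 16.16 fixed point: high 16 bits integer part, low 16 bits fraction. */
    typedef int32_t fixed_t;
    #define FRACBITS 16
    #define FRACUNIT (1 << FRACBITS)

    static inline fixed_t fixed_mul(fixed_t a, fixed_t b) {
        return (fixed_t)(((int64_t)a * b) >> FRACBITS);
    }

    static inline fixed_t fixed_div(fixed_t a, fixed_t b) {
        return (fixed_t)(((int64_t)a << FRACBITS) / b);
    }

    /* e.g. 1.5 * 2.0 == 3.0, entirely in integer registers:
       fixed_mul(3 * FRACUNIT / 2, 2 * FRACUNIT) == 3 * FRACUNIT */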

rusk
I didn't know Doom was all integer ... quite a feat.

In the general sense though, the lack of floating point, as well as of flat video addressing, seriously hampered the Amiga in the 3D ahem space.

EDIT I just remembered there is definitely at least one routine I know of that performs calculations based on IEEE 754 - "fast inverse square" or something. That could be at the root [badum] of my confusion vis-a-vis Doom ...

lmm
The famous "fast inverse square root" was in Quake 3.
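
For reference, the widely quoted routine from the GPL-released Quake III source (q_math.c), original comments elided; it approximates 1/sqrt(x):

    float Q_rsqrt(float number) {
        long i;
        float x2, y;
        const float threehalfs = 1.5F;

        x2 = number * 0.5F;
        y  = number;
        i  = *(long *)&y;                      /* reinterpret float bits as integer */
        i  = 0x5f3759df - (i >> 1);            /* the famous magic-constant step */
        y  = *(float *)&i;
        y  = y * (threehalfs - (x2 * y * y));  /* one Newton-Raphson iteration */
        return y;
    }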
rasz
Duke3D's Build engine did use the FPU for slopes :O http://fabiensanglard.net/duke3d/build_engine_internals.php Luckily you already needed at least a DX2-66 to play the game comfortably, so not many people stumbled onto this.
ng7j5d9
Agreed, I remember trying BeOS in the late 90s and I felt the way Tesla fans report feeling about their cars - "it just feels like the future".

The responsiveness of the UI was like nothing I'd ever seen before. Unfortunately BeOS fell by the wayside, but I have such fond memories I keep meaning to give Haiku a shot.

Apr 11, 2018 · jjrh on Fuchsia is not Linux
I watched a BeOS demo video ( https://www.youtube.com/watch?v=BsVydyC8ZGQ ) the other day and some of that stuff is /still/ impressive.
jandrese
I had a roommate in college who had a BeBox with the twin PPC 603(?) processors and a row of LEDs on the front configured to show the current load of each CPU.

It was super cool and could make these absolutely insane multimedia demos, but he was forever trying to get software to work on it. Whatever POSIX compatibility it had was absolutely insufficient for modern (at the time) applications. Everything required some rewriting by hand, and there were definitely crashes. Worse, Netscape didn't release a BeOS version of Navigator, so he was always hacking up the latest Mosaic release to try to get it working. I was running FreeBSD at the time and it was the polar opposite: sound barely worked, the only video players were slow and unreliable open-source school projects, but it was front and center on the newfangled Internet thing that was going around at the time.

Dec 18, 2016 · zamalek on Haiku booting in UEFI mode
You're missing the point entirely: it could be done in hardware (I'm sure they would be happy with a PR), but Haiku can do stuff in software that other OSes simply can't; media is just one of the better demos. It follows very strongly from the BeOS demos of the mid-90s[1].

[1]: https://m.youtube.com/watch?v=BsVydyC8ZGQ

dzamo_norton
To me this sounds unlikely. The CPU will spend all its time in codec library code here, so how much of a role will the OS get to play? More likely the difference comes from some departure from fairness in how the software codec was compiled.
voidz
Most of us get your point, I'm sure. :-) Some people just go against everything in their comments.
For an illustration of this, see [1]. Keep in mind that this was almost 20 years ago, so the hardware he's running is probably something like a dual-socket Pentium Pro or Pentium II platform with a 66 MHz bus and core clock speeds around 10% of what we see on modern processors, and nothing like a modern GPU (I'd guess that the graphics hardware has accelerated blitting and an overlay for the live video input, though; it's unlikely that software is pushing every pixel in that demo).

[1] https://youtu.be/BsVydyC8ZGQ?t=965

djsumdog
Oh man! I remember seeing this exact demo back in 2003/2004-ish .. I think. At that time, being able to capture from two video capture cards, in real time, on commodity hardware, was insane. So was being able to turn off individual processors.

When I first started University a few years earlier, I had a quad-boot Win98/2000/BeOS/Slackware box using the BeOS bootloader (it was the most colourful at the time).

/nostalgia

ashark
For reference, Win 98 (and Linux of the same vintage) used to stutter and pop on MP3 playback when you'd load a (mostly plain html back then) webpage.
niels_olson
Windows 7 still does this with Pandora in Chrome when loading a new tab. Oh, wait, the world moved on from 32-bit Windows 7? Let me tell my boss...
protomyth
Your boss is probably doing what we are doing, 7 -> 10, although why the heck you are on 32-bit is a little odd. I think we gave away all our 32-bit machines to students.
digi_owl
32-bit can still run Win16 code. 64-bit can't because of conflicting CPU modes. Could be they have some aging in-house software they just can't replace...
protomyth
You know, COBOL programs are easier to run on modern machines than a lot of Win16 programs. I can only imagine the utterly vile feeling a modern Windows programmer gets on seeing a Win16 program.
Sep 23, 2016 · dangom on Haiku Project
Here's an old demo video from BeOS, for those like me who were unaware of its existence.

https://www.youtube.com/watch?v=BsVydyC8ZGQ

Futurama reference ;) https://www.youtube.com/watch?v=VmCqn-DNSA0 though apparently popular enough that some people have made all-rush playlists ( https://www.youtube.com/watch?v=JsKBIBJj-4M&list=PL8BF75E7F0... )

On-topic - definitely a cool project, and one that I actually hadn't heard about; added to my queue of weekly source-code readings.

I'd also consider DirectFB similar, and another project that I have only in my vaguest of memories (I think they used a project name like Cairo that got Google-CV'd out of existence by the more recent one) from the 2001-02-ish era. They tried to attack X back then, but fell victim to the driver situation (but they had arbitrarily rotated windows!).

One of my stronger personal influences is BeOS though - Compare https://www.youtube.com/watch?v=BsVydyC8ZGQ to https://www.youtube.com/watch?v=3O40cPUqLbU&feature=youtu.be... :)

i336_
Ahh... I don't watch any^H^H^Henough TV. :P

I've been meaning to poke around PicoGUI myself - I personally love stuff that's tiny and efficient, always looking out for things like that. (I just found a bunch of old versions of Contiki, the ones with the GUI stack; the non-broken ones were fun to play with: http://hitmen.c02.at/html/tools_contiki.html) Very cool to hear I recommended something relevant! ^^

I've heard of DirectFB, but my understanding is that it just tames framebuffers, as opposed to dealing with everything below the toolkit level.

I don't recall anything called Cairo myself, but as for attacking X I do vaguely recall a company that made a closed-source alternate Linux display stack+desktop environment; it was very rudimentary and went nowhere because of that.

I like BeOS too. I keep meaning to install it (and OS/2... and QNX... and...), just preferably on real hardware. I have a bunch of old stuff hanging around here that I hope to use once I have a little file server and I can free up my dozens of old HDDs :P

And your video (I watched the other one in 2008 :D still remember it) was really awesome, and led to git cloning and compila--"wait it's done already?! Nice."

Now my main request is, please update the documentation on GitHub (particularly the quickstart instructions) so we all aren't stuck with just welcome.lua (which I initially thought was a builtin options screen then double-checked to see if I'd specified the args wrong, lol). I (and probably everyone else) want(s) to play with the stuff you're demoing in your videos!

Another tiny thing I'd mention about the video (it was the first thing I noticed actually) was that it would be a) awesome to watch and b) a great demo of your engine's graphics pipeline performance, if you have the media player update its graph at like 60fps. Or at least 24fps. Just a thought.

For my favorite reference of what fast VU looks like, I recommend rezound (http://rezound.sf.net/) - run it once, then edit ~/.rezound/registry.dat and set meterUpdateTime (in Meters {}) to something like 4 (it's a delay in microseconds I think, 4 is nice and doesn't flood X with updates too fast on my machine). The weird knob thing to the right of the VU (very bottom-right) adjusts the frequency response.

Another possible source of fast VU updates is the Linux port of Open Cubic Player (http://stian.cubic.org/project-ocp.php) - alt+c, set framerate to 60 or 120fps, set font to 4x4 (after loading music :D), and the result there looks nice, too - although it responds best (for me) with a non-fullscreen window. (This one's a bit of a project to learn all the shortcuts for, IMO.)

crazyloglad
I actually force myself to do at least some of the development on a raspberry pi (no cross-compilation), for the reason of keeping the build time snappy.

I had even forgotten there were instructions like that; outside the technical bits I'm probably the worst person to write helpful guides, as the workflow etc. is just so strongly internalized that most of it 'feels' obvious.

Now don't look at the code for generating the FFT (or anything else in the _decode frameserver for that matter); the reason it doesn't update more smoothly is just how much I don't get along with "lib"vlc, but I added it more as a novelty (spoiler: FFT precision is murdered and packed into a texture and the rest is a shader; it doesn't even synch well).

i336_
Are you me? That's my ideal approach! Except I'm not so crazy to consider running the build on the slow box (:D), rather my approach would be to have 1Gbps+ between the two machines, build on the fast machine, and run (perhaps directly from NFS?) on the slow machine. I was thinking of editing on the slow machine too (so it's the machine you use) but I don't think going that far is actually necessary. Now, as to how exactly I would implement this idea under DOS for my 486 is another story entirely...

I know what you mean by the obviousness thing. I however am sitting here with no idea how even to desktop with Arcan. I wouldn't mind finding out though.

And I was wondering if it was updating the FFT so slowly because of... yeah, something like that. I see. I have to acknowledge and agree that it does indeed not sync well. Fixing this sounds like a large pile of boringness; I can see that the other display components update pretty quickly, at least. (But now I'm wondering, is the video of a VNC server? I thought it was SDL. How are you updating the screen so quickly?!)

crazyloglad
If you poke me on IRC (letoram, #arcan @freenode) I can probably help you out in using the thing.

I think the hardest I've pushed the shared memory interface is basic computer vision (filtering, 9-segment display OCR, tracking an x-y plotter and some glowing devices) on 8-bit mono 2x1000fps 320x180-or-so cameras. Even then most time was spent waiting for synchronization bottlenecks because of OpenGL2.1 limitations.

you thinking about https://youtu.be/bQlHnW2qCh0?t=1m28s ? so the round-trip time there is gpu-composite -> readback -> vnc-server -> vnc-client -> back to gpu.

i336_
This is so annoying - I keep finding reasons to go on IRC, but am still stuck on what IRC client to use. ("No, my web browser feels weird." "irssi doesn't have a multiline text box." "weechat isn't configurable like irssi is." "I don't want to use GTK or Qt." Lost cause: check. Protip, don't drown yourself in chat client ideas for 4 years, you'll poison yourself to everything out there :X) - but the mention of IRC is duly noted. :D

I had a major derp, however: I forgot Durden and Arcan aren't, err, the same thing... let's just say I just installed Durden, and successfully played around for a bit. It's a bit slow on my frankensystem (GPU is older than motherboard+CPU... don't ask >.>) but still very cool.

And wow, that's a pretty awesome usage of the SHMIF, cool. I wonder if porting Arcan to Vulkan would produce interesting results...

And I meant https://youtu.be/3O40cPUqLbU?t=279 - in particular the part I seeked to, where you play and record video and blit it onto a 3D surface so on and so forth - the VU is noticeably slower than everything else. I just thought it would be cool if the video was like "look: EVERYTHING is updating at 60fps!" - but it's cool. :P

About the video you linked, that's pretty incredible too... wow.

crazyloglad
I have high hopes for the shmif- port to Vulkan. As far as I can tell, there's no good way to flag pinned memory as shared and build the shmif around that, but if it were possible it would mean predictable synch, controlled colorspace conversion and ... oh well, QEmu integration first :-)

You might be able to reduce the fillrate cost by trying config/system/simple displaymode.. it removes a lot of features but you save at least a full extra renderpass.

i336_
Wow, shared pinned memory sounds absolutely awesome. Please tell me NVIDIA doesn't have to alter their binary driver to get this working... please. :P

I'm not sure if you need to shout at NVIDIA, Linux or Vulkan to make this possible, but there are so many awesome things people could do if this were possible...

I would totally recommend you shout at all the relevant mailing lists - even NVIDIA's, if it comes to that :P to get this supported.

And how are you managing qemu integration? What do you mean by that? o.o

I tried the simple displaymode, which seems to be a tad faster, but it's still glitchy - the issue specifically is that opening a fullscreen terminal (at 1600x1200) basically makes my mouse a slideshow, and there's noticeable typing lag too. Resizing the terminal down makes it go away: this is proportional to terminal size.

And as an aside, -w 1600 -h 1200 makes the mouse cursor go halfsize, -w <= 1599 and -h <= 1199 makes the cursor normal size. Curious.

crazyloglad
seems we've hit some magical > reply depth limit. I'll get back to you via the gmail account in your "about".
i336_
That works fine - I've hit this before, I think that it takes a minute or two for the reply button to show up. I could be wrong though.

Clicking the "n minutes ago" thing opens a working reply form for me if I can't see the reply link, I'm not sure if it works in this case.

Either reply method works. :D

crazyloglad
QEmu integration just started working: https://github.com/letoram/qemu. It's far from complete enough that I'd try to upstream patches yet. Use -display arcan.

Your GPU is most likely fill-rate limited. Are you running this entirely natively (EGL/GBM/KMS stack) or using SDL through X? In the latter case there'll be so many fullscreen-sized buffer copies that your GPU cries. There are more special tricks I can do to get the fullscreen case to go faster, and it's on the near todo for Durden anyhow.

Also, the terminal emulator only really supports TrueType fonts (there's a built-in fallback that is quick, but it's awful and only 7-bit ASCII..), which are damn expensive to render.

i336_
Opens repository "Official QEMU mirror." "I see." So basically... you can display QEMU inside arcan. That's really neat.

My GPU is everything-limited :P I just tried arcan on my integrated video, no issues there. Maybe an almost imperceptible bit of slowdown, but just that, almost imperceptible.

I'm using an ancient ATI X1300-series [1002:7183] fanless GPU I yanked from an oldish workstation so I can have 3 screens in a pinch. It runs two displays; my i3's HD 2000 runs the 3rd. (And I can't move Chrome between :0.0 and :0.1, which X won the fight over me having. Yey :3)

As for drivers, I'm using the radeon driver w/ KMS (switching between X and tty is instantaneous); the only issue is that I occasionally see framedrops due to driver bugs, but that's the only problem I have; the driver is very fast. (But it should be, the card's a decade old. :P)

I'm not sure if I'm running truly fullscreen, incidentally - I can see vestiges of i3 (my windowmanager) in the form of a 1px border underneath the bottom of the arcan window.

...So I just tried -f... and you don't support screens with different resolutions. My center display is 1600x1200, my left is 1280x1024. I get fullscreen on the left, and a cute arcan window (that I can't move my mouse into) on my center display. (The left and middle are on the ATI card.) With fullscreen it's still slowish (which I understand is to be expected at this point).

AFAIK I'm using SDL; I tried building with X11 but the build balked, so I conceded to the instructions on github :P

About truetype and terminal emulators, I had an idea a while ago: glyph caching. (8x15 pixels per glyph) x (3 bytes for 24bpp) x (256 ASCII glyphs) = 92,160 bytes. That's remarkably manageable. Unicode blows the 256 out of the water, but how much of Unicode will the average terminal session see? Certainly not all of it, so the cache eviction algorithm won't need to be particularly aggressive or smart.

The only really major catch is that #222222 will not antialias the same as #FFFFFF, and trying to make one look like the other will look either really dim or really pukey, so the caching system would also need to cache each color of each character that it sees. However, this is not actually totally the end of the world, since glyph tables don't actually take up all that much space, as I've just noted. Very mathematically inelegant, but quite possibly worth it in practice.
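
A minimal sketch of that cache in C, keyed on (codepoint, colour); all names here are hypothetical, and render_glyph stands in for whatever rasterizer (e.g. FreeType) actually draws the glyph:

    #include <stdint.h>

    #define GLYPH_W     8
    #define GLYPH_H     15
    #define GLYPH_BYTES (GLYPH_W * GLYPH_H * 3)   /* 24bpp */
    #define CACHE_SLOTS 1024

    typedef struct {
        uint32_t codepoint;
        uint32_t fg;                    /* 0xRRGGBB, cached per colour */
        uint8_t  pixels[GLYPH_BYTES];
        int      valid;
    } glyph_slot;

    static glyph_slot cache[CACHE_SLOTS];

    /* Hypothetical rasterizer filling `out` with GLYPH_BYTES of pixels. */
    void render_glyph(uint32_t codepoint, uint32_t fg, uint8_t *out);

    const uint8_t *get_glyph(uint32_t codepoint, uint32_t fg) {
        glyph_slot *s = &cache[(codepoint * 31u + fg) % CACHE_SLOTS];
        if (!s->valid || s->codepoint != codepoint || s->fg != fg) {
            render_glyph(codepoint, fg, s->pixels);  /* miss: rasterize once */
            s->codepoint = codepoint;
            s->fg = fg;
            s->valid = 1;
        }
        return s->pixels;                            /* hit: just blit */
    }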

That's the point I would be stopping at; you may be interested in going full crazy with something like https://news.ycombinator.com/item?id=11440599 (this looks like a lot of fun to play with).

What's the builtin fallback font, out of curiosity?

Also, I want to clarify and emphasize that, running the terminal in arcan on my old ATI card, the size of the terminal window is directly proportionate to the input+video lag. As I resize the terminal smaller (in floating mode) it speeds up, as I resize it bigger it sl-ow-s ddoowwnn and gets stuttery and glitchy.

Lastly, I just discovered an extremely curious phenomenon I thought I'd mention. I wanted to screencap the terminal cursor to get the font size for the calculation above. After learning about "mouse lock > no" (good riddance :D) so I could take the screenshot, I soon found that the input lag I'd experienced was actually affecting my entire X session. Moving the arcan window offscreen seems to alleviate it; moving it back onscreen and eg running htop (inside a fullscreen terminal >:D) bogs everything down so badly that typing into Chrome (running on the i3 GPU, on :0.1) is very very very noticeably slow, and resizing xterm is... well I can sit there for 10-20 seconds watching it repaint. The weird thing: my CPU was near 0% utilization. This is either a bottleneck in my GPU, X or both (or X blocking on GPU bottlenecks).

I'm not highlighting any of this to make arcan look bad; I personally find it really interesting to throw code at suboptimal hardware and seeing what, if anything, can be done to speed it up - because if code runs well on worst-case scenario hardware, it'll fly on average kit. So I thought I'd mention the above in case it's interesting. If you have any ideas for benchmarks I can run or timing information I can collect I'm fine with supplying that. ^^

If we're talking tech, then your post couldn't be more wrong given my BeOS and QNX examples. Performance was equal to or better than the monoliths of the time. BeOS especially destroyed the competition in concurrency performance due to its architecture. QNX basically runs at hardware speed, with real-time properties and POSIX support. BeOS disappeared due to the Microsoft monopoly, with Haiku making an OSS clone. QNX was at $40 million a year in revenue when Blackberry bought it. Green Hills and VxWorks are doing OK, too, with VxWorks making more than QNX per quarter. Both have desktops virtualizing Windows, Linux, etc. on microkernels w/ Gbps throughput.

I don't see why we keep getting these theoretical counters given the proven results of microkernel performance in the field. Tell me why microkernels are too slow when they can do this on mere 90's-era hardware:

https://youtu.be/BsVydyC8ZGQ?t=16m9s

Then, tell us why we should sacrifice isolation of malice and faults in favor of kernel-mode code with properties like this:

https://www.cvedetails.com/product/47/Linux-Linux-Kernel.htm...

Our side produced highly reliable and secure systems, plus high-performance systems. It was always done by a small group with little time. The monoliths took a decade and thousands of man-hours to do the same. It's up to you people to justify why those hours were well spent.

"while for say an exokernel the opposite is mostly true."

An exokernel is a microkernel...

nwmcsween
An exokernel is definitely not a microkernel: one provides abstraction via (usually) server processes, the other does so via a library, which is vastly cheaper overhead-wise. I do understand that there will be a need for some sort of IPC, just not to the extent of a microkernel.
nickpsecurity
A microkernel is an abstraction over hardware with minimal code and API. An exokernel is a form of microkernel since it has these properties. It just does things very differently from most microkernels. Hence a name for that style.
burfog
I've seen the VxWorks code. VxWorks is not a microkernel.

That BeOS demo did not heavily use privileged interactions. Mostly it showed computation, which is the same on any OS. The best thing it showed was a process scheduler that was good at giving priority to things a user would care about. A more interesting test would be serving files or building software.

I think one should be careful not to read too much into CVE numbers. People aren't exactly trying to mess with KeyKOS, Haiku, QNX, and other weird things. Few people want to bother. None of the Linux problems are inherently specific to monolithic design. The best you could say is that you might have a sandbox that makes things more difficult for the attacker. On the other hand, restarting means you give attackers more chances to succeed.

nickpsecurity
The best thing you can say is that a bug in kernel code that hoses my whole system is less likely to happen, several times over. Suddenly, hackers or faults have to work through components' information flows. You keep ignoring that in your analyses. It's also why I brought up CVEs: it's impossible that the microkernels had as many in kernel mode, just by code size. There's still plenty to be found in privileged processes, but POLA and security checks are way easier when the memory model is intact.

Btw, one person here who wrote about the QNX desktop demo mentioned doing productivity stuff while compiles ran in the background with no lag. So there's that use case, except not for BeOS. The link below will show you BFS was more like a combo of a NoSQL DB, files, and a streaming server:

http://arstechnica.com/information-technology/2010/06/the-be...

Due to its nature, compilation and build systems are about the slowest things you can do on it. I've seen numbers ranging from 2.5x to 20x slower than Linux, but they didn't share specs. I'd swap out the magic filesystem for a simpler one on a development box. BeOS was aimed at creating, editing, and viewing streaming media, though. It did that very well.

Re sandbox more difficult

No kidding! That's the entire point: get it right, or at least make it harder to beat. Monoliths on mainstream hardware are amusement parks with free rides and victims everywhere for attackers. Microkernels on COTS hardware, and even modular, typed monoliths on POLA hardware, are a series of sandboxes with adult supervision during play and movement. Quite a difference in the number of problems showing up and the damage done.

Re more chances to succeed

You keep repeating this too, without evidence. Attackers need vulnerabilities to succeed. They'll know some to use ahead of time or they won't, if we're talking OS compromise. A flaw in one module lets them take one module, no matter how many restarts. A flaw in two with a flow between them means they'll get in on the first try. This is why you design it so each flow, and each individual op on it, follows security policy.

The only time restarts give attack opportunities is if you're using probabilistic tactics (eg ASLR) or they're waiting for an intermittent failure (eg MMU errata). Any high-assurance system had better not exclusively rely on tactics (ever) and should account for the latter (eg immunity-aware programming).

All in all, anything you've said about microkernel systems applies to monoliths in various ways. One model just limits system-hosing faults and hacks a lot better. The question is: do you want to accept that risk to squeeze out maximum performance, or eliminate that risk with acceptable performance? Microkernels choose risk reduction, while mainstream monoliths choose performance.

"Tanenbaum mentions commentators on slashdot who never tried a microkernel. That's me and I never tried to write an OS either. "

"Suppose I'm just interested in desktop applications. Linux is already treated as a lost cause. That's what gets me because at the moment, the microkernels don't seem like an alternative in my case. If I'm not yet convinced by the opponents, I'm looking for facts, not anecdotes or authority. "

Ahh, OK. Well, I'm guessing you probably used Windows/DOS desktops back in the 1990's, right? They were OK for apps but architecturally terrible. That meant they crashed a lot, slogged down under heavy load, and so on. UNIX and Linux didn't do much better outside of SGI's machines. Hell, they were barely usable. You'll appreciate what this microkernel-based, concurrency-oriented desktop is doing if you remember those days:

https://youtu.be/BsVydyC8ZGQ?t=16m8s

Note: I couldn't even drag videos without them skipping back then. They do that in the demo while other stuff is playing. Too bad Microsoft had a monopoly and Apple passed Be over for NeXT, whose OS later became Mac OS X.

An early great one that's in all kinds of stuff is the QNX microkernel. It did a POSIX-compatible implementation w/ a microkernel, fast message-passing, and hard real-time. It later showed off its capabilities with a desktop on a floppy, with GUI, web browser, etc. Users, including Animats (Nagle) here, said you could be doing day-to-day stuff with big compiles in the background with no hit to responsiveness, due to its great scheduler. Once Blackberry bought it, they deployed it in the Playbook. If you used an iPad of the time (circa iPad 2), then you'd be very impressed with the alternative's performance and responsiveness.

https://www.youtube.com/watch?v=vI1VgedbMUY

Note: I couldn't find the original demo showing it outperform the iPad on most stuff. Several fan-made ones did, but they were boring. I'll spare you. The important part was alternating between a web browser, (either a game or video here), and a Need for Speed game with no apparent slowdown. Much like the BeOS demo. This is despite Apple having a severe lead on them across the board. The results appear to come directly from QNX, as its desktop prototype & embedded solutions had the same properties.

As far as old school goes, the world still runs on IBM's mainframes, for transaction processing and batches especially. Nowadays, they also run thousands of Linux VMs. Shapiro's EROS on x86 was modeled after KeyKOS: a capability-based microkernel system aimed at mainframes. It performed well enough to do the job, plus virtualized IBM OS's, VMS, and UNIX. It allowed fine-grained separation, high-security enforcement, and persistence of all apps' state via checkpointing built into the VM. The core functionality supporting all that took 20 Kloc.

https://www.cis.upenn.edu/~KeyKOS/

Latest entrants are MINIX 3 and GenodeOS, focused on reliability & security respectively. Each has something like 2-3 people working on it for only a few years, doing a significant amount of clean-slate work. Thanks to its architecture, MINIX 3 already achieved reliability that took UNIXes decades to pull off. GenodeOS is alpha-grade usable on the desktop right now, while already deployed in embedded. A similar architecture in the OKL4 microkernel is in 1+ billion phones, mostly isolating basebands but sometimes virtualizing Symbian, Windows Mobile, or Android. It might be in your phone invisibly doing its job. Green Hills INTEGRITY, or a variant of it, is doing the same in Samsung Galaxies via KNOX. Feel free to try any of those products to see how your performance, etc. doesn't magically go away despite constant microkernel activity. ;)

So, hopefully these convey the experience of using a microkernel vs monolithic UNIX or mainframe stuff, even if you're not personally using one. I hope seeing BeOS and Blackberry's QNX system scream on limited hardware shows you the advantages or disadvantages aren't theoretical: the stuff works well enough, sometimes faster than the competition. I can only imagine how much more usable, reliable, upgradeable, and performant my desktop would be had a billion dollars' worth of labor been put into such architectures instead of UNIX/Linux. Especially given what small teams did with them.

"but Shapiro's rebuttal is lacking a bit of substance and doesn't quantify anything with a meaningful measure"

I gave you a link with tons of substance, supported by research prototypes including Shapiro's EROS. They theorized, built stuff to test, modified when results differed from expectations, rinse, repeat. Whether the specific memory predictions come out true or not is barely relevant. What matters is: will it perform well enough with the reliability, security, and maintenance advantages it claimed? There was enough data in that link to prove that out several times over. I added examples for desktops and tablets in this comment. Now it's up to Linus to prove BeOS and Playbook users were experiencing a mass hallucination when those boxes were faster, more stable, and more usable than Linux-based systems. :)

A combo of cheap hardware, legacy effects, and software lock-in killed off all desktop competition except projects that virtualize stuff compatible with the major players. That's why you are stuck with Linux, QubesOS, maybe GenodeOS eventually, etc. Economics and social factors, not technical ones. "Worse is Better."

A list of UNIX alternatives for you to investigate, to see what superior attributes it could have included or explicitly rejected:

https://news.ycombinator.com/item?id=10957020

Have fun with those. Especially Genera. Just imagine, with or without LISP, what developing and deploying on machines like that would be like. There's still nothing that can do all of that.

conceit
So the Mach kernel is just not up to date with the best security research? That's what MacOS runs on; I just noticed, and wondered why it still gets viruses. Which is just the point I was trying to make. Well, besides mocking Linus' abrasive type.
I'm not organized enough to just throw out a reference, and many disappeared over time as the old web faded. It's more something you see relative to other OS's than an absolute. I really need to try to integrate all the examples sometime. Here are a few, esp. from the past, that give you an idea.

Historical look at quite a few http://brinch-hansen.net/papers/2001b.pdf

Note: A number were concurrency-safe, had a nucleus that preserved consistency, or were organized in layers that could be tested independently. UNIX's was actually a watered-down MULTICS, & he's harsh on it there. I suggest you google it too.

Burroughs B5000 Architecture (1961-) http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...

Note: Written in an ALGOL variant; protected stack, bounds checks, dynamically type-checked procedure calls, isolation of processes, froze rogue ones w/ restart allowed if feasible, and shared components. Forward thinking.

IBM System/38 (became AS/400) https://homes.cs.washington.edu/~levy/capabook/Chapter8.pdf

Note: Capability architecture at HW level. Used intermediate code for future-proofing. OS mostly in high-level language. Integrated database functionality for OS & apps. Many companies I worked for had them and nobody can remember them getting repaired. :)

Oberon System http://www.projectoberon.com/ http://www.cfbsoftware.com/modula2/Lilith.pdf

Note: The brilliance started in Lilith, where two people in two years built HW, OS, and tooling with performance, safety, and consistency. Designed an ideal assembly, a safe system language (Modula-2), a compiler, an OS, and tied it all together. Kept it up as it evolved into Oberon, Active Oberon, etc. There's now a RISC processor ideal for it. Hansen did something similar, on the very PDP-11 that UNIX was invented on, with the Edison system, which had safety & Wirth-like simplicity.

OpenVMS https://en.wikipedia.org/wiki/OpenVMS

Note: Individual systems with good security architecture & reliability. Clustering released in the 80's with up to 90 nodes hundreds of miles apart, w/ uptimes up to 17 years. Rolling upgrades, fault tolerance, a versioned filesystem using "records," an integrated DB, clear commands, consistent design, and great cross-language support, since everything had to support the calling convention and such. Used in mainframe-style apps, UNIX-style, real-time, and so on. Declined, was pulled off the market, and was recently re-released.

Genera LISP environment http://www.symbolics-dks.com/Genera-why-1.htm

Note: LISP was easy to parse, had a REPL, supported all paradigms, let you customize it with macros, was memory-safe, and offered incremental compilation of functions; you could even update apps while running. Genera was a machine/OS written in LISP specifically for hackers, with lots of advanced functionality. Today's systems still can't replicate the flow and holistic experience of that. Wish they could, with or without LISP itself.

BeOS Multimedia Desktop http://birdhouse.org/beos/byte/29-10000ft/ https://www.youtube.com/watch?v=BsVydyC8ZGQ

Note: The article lists plenty of benefits that I didn't have with alternatives for a long time and still barely do. Mainly due to the great concurrency model and primitives (eg "benaphors"). Skip ahead to 16:10 to be amazed at what load it handled on older hardware. Haiku is an OSS project trying to re-create it.

EROS http://www.eros-os.org/papers/IEEE-Software-Jan-2002.pdf

Note: A capability-secure OS that redid things like the networking stack and GUI for more trustworthiness. It was fast. It also had persistence, where a failure could only lose so much of your running state. MINIX 3 and Genode-OS continue the microkernel tradition in a state where you can actually use them today. MINIX 3 has self-healing capabilities. QNX was first to pull it off with POSIX/UNIX compatibility, hard real-time, and great performance. INTEGRITY RTOS bulletproofs the architecture further with good design.

SPIN OS http://www-spin.cs.washington.edu/

Note: Coded the OS in the safe Modula-3 language with additions for better concurrency and type-safe linking. Could isolate apps in user mode, then link performance-critical stuff directly into the kernel, with the language & type system adding safety. Like Wirth & Hansen, it eliminates the abstraction gaps & inconsistency in the various layers on top of that.

JX OS http://www4.cs.fau.de/Projects/JX/publications/jx-sec.pdf

Note: Builds on the language-oriented approach. Puts drivers and trusted components in a Java VM for safety. A microkernel sits outside it. The internal architecture builds a security kernel/model on top of an integrity model. Already doing well in tests. Open-source. The high-tech answer is probably Duffy's articles on Microsoft Midori.

So, there's a summary of OS architectures that did vastly better than UNIX in all kinds of ways. They range from 1961 mainframes to 1970s-80s minicomputers to 1990s-2000s desktops. In many cases, aspects of their design could've been ported with effort, but just weren't. UNIX retained an unsafe language, root, setuid, discretionary controls, heavyweight components (apps + pipes), no robustness throughout, GUI issues, and so on. Endless problems many others lacked by design.

Hope the list gives you stuff to think about or contribute to. :)

chris_wot
Thanks!
Apr 08, 2015 · 125 points, 104 comments · submitted by NaOH
ab5tract
For anyone who is prone to ignore vertical dynamics, keep in mind that this OS was competing with Mac OS 8, Windows 98, and GNU/Linux in the Lesstif/GNOME 1.x era. In short, it trumped all the rest in speed, development environment, multimedia, stability...

But the Wintel cartel was stronger than ever. Microsoft even settled with Be years afterwards for colluding to keep their OS off of any vendor's computers. But nothing can retroactively tell us what computing would be like if the best OS of the time had been given a fair shake.

antocv
I can't understand why more people don't realize the US is not a capitalist or free market economy.

Or even the EU.

If you look closer at any industry you'll see cartels, monopolies, and government support skewing what and who gets the money. Who gets paid and who doesn't. The best products do not win; the best companies do not win.

It seems the difference vs. the USSR is the scale of the economy.

coldtea
>I can't understand why more people don't realize the US is not a capitalist or free market economy.

Because the presence of "cartels, monopolies, and government support skewing what and who gets the money" doesn't preclude it being "capitalist" or "free market".

There's the abstract "capitalism"/"free market" and there's the concrete reality, which is that it goes hand in hand with these things and always has.

It's like Christianity -- in theory it's this perfect thing touting peace and love -- but in reality it's involved in everything from the Crusades and the enslavement of the native Indians in the Americas to present-day right-wing bigotry.

antocv
In reality, what the EU and the USA are today is, economically and financially, not very different from what the USSR or SFRY was.

A planned economy. The private-public distinction doesn't even make sense, with state-subsidized private losses.

It's just a few groups of people controlling systems.

geon
Cartels and monopolies are the result of unregulated capitalism. To get a free market you need quite a bit of regulation.
glandium
Both KDE and GNOME were actually in their 0.x's when the first non-preview BeOS was released, and BeOS had been kicking asses long before those projects were even started. It was also kicking asses in the era of Mac OS 7 and Windows 95. Although at the time, I think it was pretty much limited to the BeBox (which in itself was an amazing machine).
ab5tract
Good point. I was referring to the 1999 date of the demo, but you are 100% spot on here.
fulafel
What would have been the plausible avenue via which it could have taken off, even if Microsoft didn't take interest? There was no software, and the only connection to outside software world was the POSIX API supported for text-only apps.

Their aim was to be acquired by Apple and get the throne that OS X now occupies, and after missing that chance they were left a niche hobbyist OS for Amiga refugees looking for a new technically superior underdog to root for.

ab5tract
There were quite a few interesting pieces of software on BeOS. It's always the same thing: "you have no software, so you have no users / you have no users, so you have no software."

If I had to guess, I would say the real growth markets would have been audio and video. Your claim that it was only ever a bid to be acquired by Apple is a bit ridiculous, IMO. They needed to look for a buyer because they could not get a single preinstall.

The 'OS X throne' you are talking about was by no means a guaranteed outcome. It took almost a decade from the NeXT purchase to having real traction amongst Mac professionals.

My only point is that NO ONE can say for sure what would have happened if there were not a monopoly standing in the way, using illegal tactics to shoulder out potential competition.

fulafel
The chicken and egg problem is no joke. All surviving desktop operating systems (NT/OS X/Linux [source only]) launched having compatibility with existing apps.

Of course nobody can say for sure, but their chances didn't look good.

ab5tract
You can go ahead and ignore the vertical dynamics of the illegal Wintel alliance on their chances if you want, but then I think we have nothing left to discuss.

Also, SheepShaver was available on the PPC version, allowing Mac OS apps to run.

danieldk
> Of course nobody can say for sure, but their chances didn't look good.

Ignoring POSIX compatibility. You have to remember that this was still Mac OS classic time, and a lot of Mac developers were interested in BeOS and porting to it. I used R4.5 and R5, and there was already quite some interesting software for it from former Mac developers, such as Gobe Productive and the Pe editor, and, if I remember correctly, some AAA games around the same time Loki existed.

pjmlp
It was given a 2nd opportunity, but its owner was too greedy.
72deluxe
Crazy eh? OSX could have had BeOS undergarments instead of BSD.
cturner
It might have been the VC behind it, rather than him directly. We won't know until an insider writes a history.
blackoil
Even without the MS push, it may not have been a different story. BeOS was too generic, without any niche. Without MS Office etc. it would have been difficult to capture any market. Linux was successful as a free UNIX clone: instead of paying thousands of dollars to Sun or IBM, you could have a free UNIX with a good webserver, with PHP, Java and all. BeOS never had such a ready market, nor a niche.
thom
It did have a niche, it just didn't have any software. I remember waiting/hoping in the 90s for Adobe and Quark to port over to BeOS so we could ditch Mac OS for publishing.

In fact, it would have been interesting for someone like Adobe to buy Be, and become the de facto standard platform for most kinds of creative work. I can't imagine in that universe that there'd be a MacBook on basically every creative's desk, as there is now.

Zardoz84
I remember that BeOS was publicised as the dream OS for multimedia creation.
optimusclimb
Was just chatting with someone about my feelings toward the "new" Microsoft. I admitted that, no matter what moves they make, I just wouldn't care, as I really can't forgive them as a company for all of the shitty, anti-progress moves they made. Embrace-and-extend, anyone?

Thanks for the reminder (which I had forgotten about) of just one of the many actions they took that make me wish corporate capital punishment had been an option.

If I were BillG I'd be trying to philanthropize my heart out too.

digi_owl
Sadly I fear that many companies will do just the same given the chance, open source or not.
mveety
They're amoral entities that are effectively programmed to make the most money. So really, they're working as intended if they do that shit, because it allows them to make more money.
carlesfe
I remember installing BeOS a long time ago, when I was fiddling with old Linux distros which were almost impossible to install (what is the model of your RAMDAC? Which is your interrupt controller? Manually configure your modem!)

I was absolutely amazed at the ease of install, running the software and the powerful GUI. I even ran it for some months as my only OS, since it had a web browser and e-mail client and the dialup config was very straightforward.

After that time, I started missing a lot of software, especially browser updates, so I tried Linux again and found that it had improved a bit, so I stuck with it.

But I'll never forget how probably the best OS of its time fell into misery for lack of software and promotion.

nailer
I remember Debian asking these questions for years after PCI was mainstream (PCI has device IDs that Red Hat would simply look up in the PCI map file to get the proper kernel module).
ab5tract
Hey carlesfe, see my previous post -- it was not a simple lack of promotion but a complete lockout from pre-installation for any PC vendor who didn't want to get royally stomped on by Microsoft.
carlesfe
Well, same as Linux, but since the later had a big community, it was more vocal and that lead to a larger userbase in the end.

Yes, pre-installed Windows (and the lack of drivers) used to suck back in the day, but BeOS was in an awkward middle point between being Microsoft and being Linux. I guess it was a bad time for that, it would have done better nowadays with an equivalent product.

ab5tract
Linux never had a chance with the average consumer at that time. Shall I pull up some GNOME 1 screenshots to show you? Do you know all of the refresh rates of your monitor, per resolution?

"If you get this wrong, your monitor may become inoperable or explode"

BeOS was a consumer OS, and was denied any access to consumers. Ignore vertical dynamics if you want, but the writing is clear as day to me.

optimusclimb
LOL - Thanks for the reminder of what fun Xconfigurator was, and setting up XFree86 in general. 2016 will be the year of the Linux desktop though ;)
pjmlp
You at least had GNOME 1 already.

When I looked into GNU/Linux for the first time, my options were twm, fvwm and OpenLook.

Zardoz84
SuSE Linux 5.3 -> fvwm95, twm, KDE 1.x, and I remember playing with GNOME 1. Those were weird times; I remember that I managed to run GNOME using KDE as the window manager.
72deluxe
I remember trying it back in the day and was mighty impressed with it; a pity that more consumer applications weren't ported, as that would have given it more momentum.

You were right about it being a consumer OS (other than the fiddly titlebars - too small!)

I remember running XConfigurator under Red Hat 5 and taking days to get my rubbish VESA card working. Such was life without the Internet (or with £££ dialup). I thought GNOME 1 looked squishy and nice, but when you look back on it and the KDE 2 screenshots, you realise how dated it was, back in the days when mice with scrollwheels didn't exist.

jeffbush
In early 2008, having graduated from college the year before, I was living in the midwest working at a company making warehouse management software. I decided I really wanted to work at Be. I sent them my resume for a position that was posted (I think I actually snail mailed it). I called the recruiter and she politely informed me that I wasn't qualified for it. Shortly after, I saw another position posted and applied again. Over the next 7 months, I continued to bug the recruiter regularly until someone finally broke down and gave me an interview. I bought a new suit just for it. When I showed up to interview, an engineer walked in with long hair and ripped up jeans.

I ended up getting the job and worked there through the end of 2000. Even though it ended up imploding, it was a great experience. I learned an incredible amount and I'm still friends with a bunch of people I met there. Many of the people from Be ended up at Android.

dang
I think you must mean 1998, not 2008?

This is a good story. I'd be curious to hear about things you learned that are related to unusual aspects of what Be was doing.

jeffbush
Oops, yeah, 1998.

I became very comfortable with multithreaded synchronization, since BeOS was "pervasively" multithreaded to try to scale to multiprocessor machines. Multithreaded programming is more common now. All synchronization in BeOS was based on counting semaphores (which you could acquire with a count > 1), which I think are still not super common.
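
A minimal sketch of that "count > 1" case, assuming the classic Be kernel kit calls (create_sem / acquire_sem_etc / release_sem_etc):

    #include <OS.h>  // Be/Haiku kernel kit: sem_id and friends

    int main()
    {
        // A counting semaphore guarding, say, a pool of 4 buffers.
        sem_id pool = create_sem(4, "buffer pool");

        // Acquire two slots atomically in one call -- the count > 1
        // case; this blocks until both slots are available at once.
        acquire_sem_etc(pool, 2, 0, 0);

        // ... use two buffers ...

        // Return both slots in a single call.
        release_sem_etc(pool, 2, 0);

        delete_sem(pool);
        return 0;
    }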

There was a philosophy of simplicity that still resonates with me. For example, the scheduler algorithm used a simple exponential priority scheme that was easy to reason about intuitively. It was straightforward to get glitch-free media playback. Many years later, I was trying to debug a glitch with audio playback on Linux. Each time it would glitch and someone would say "oh yeah, this other heuristic is kicking in, make these changes to the config." We'd do that, another glitch would happen, and they'd say "oh, these other two heuristics are interacting badly"; this went on for like a month. I don't mean to bash Linux, because it is far more sophisticated now than BeOS was and handles far more use cases, but I miss the elegance and simplicity of BeOS.
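
A toy illustration of what an exponential priority scheme can look like (a guess at the flavor, not Be's actual code): weight each ready thread by 2^priority and hold a lottery, so every extra priority level doubles a thread's odds of being picked:

    #include <cmath>
    #include <cstddef>
    #include <random>
    #include <vector>

    struct Thread { int priority; };  // higher = more urgent

    // Pick the next thread to run. A priority-10 thread wins
    // ~1024x as often as a priority-0 one -- easy to reason about.
    std::size_t pickNext(const std::vector<Thread>& ready, std::mt19937& rng)
    {
        std::vector<double> weights;
        for (const Thread& t : ready)
            weights.push_back(std::ldexp(1.0, t.priority));  // 2^priority
        std::discrete_distribution<std::size_t> lottery(weights.begin(),
                                                        weights.end());
        return lottery(rng);
    }

    int main()
    {
        std::mt19937 rng(42);
        std::vector<Thread> ready = {{1}, {5}, {10}};
        // The priority-10 thread should win the vast majority of draws.
        return static_cast<int>(pickNext(ready, rng));
    }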

The thing that I found most interesting was the chance to interact closely with experts from so many different domains. When I started there, all of engineering fit on one floor (I think there were around 50 engineers). For better or worse, almost everything was built in-house. There were people who worked on graphics, the windowing system, 3D, kernel internals, device drivers, filesystems, application frameworks, etc. Everything was in one source tree, so you could check it out, type make, and have a full OS image in a few hours.

I worked at Apple later, but there are thousands of engineers who work on MacOS, and you only get a chance to meet a small subset of them. There are hundreds of individual projects in their own source trees that are built individually and get pulled together by a complex mastering system. It seems almost impossible for one person to wrap their head around it.

The fact that so much was built from scratch might be part of the reason that BeOS felt so snappy: it hadn't accumulated much cruft.

wowtip
Tested OpenBeOS / Haiku, any opinion on the project?
jeffbush
Yeah, I've seen it, although I haven't used it or looked too closely at it recently.

When Be did the focus shift, we were in the midst of the next major revision of the desktop OS (which consequently was never released). Part of that was an overhaul of the user interface, which looked more sleek and modern, and was themable. The squarish tabs were replaced with a slightly inset title bar area with rounded edges. I don't know if any screenshots of that have survived.

There were many other things that we were exploring and talking about at the time, like putting transparency and animations in the interface, but the hardware at the time wasn't up to the task yet (there weren't real GPUs with shaders like we have today; the best acceleration we could get were opaque bit-blits, which weren't supported on all cards). Had Be been able to stick around longer, the interface would have been much different.

So, personally, I don't really get cloning the original interface. A lot has happened in the UI world in the 20+ years since BeOS was originally written. I guess I'm not that sentimental.

That said, the work the Haiku team has done is impressive. It's a cool project and I don't want to sound critical of it.

renox
I'm not sure why you focus on their interface work. I'm pretty sure you could have a BeOS-like interface on Linux with very little work; Haiku's main work is reproducing the BeOS APIs and making the new kernel work.

The second part is really unfortunate IMHO, but hackers gonna hack..

tbe
I guess the next OS revision you mention would be "Dano", which according to the Wikipedia article[0] was leaked on the day the company closed down. In that case, there should be quite a number of screenshots floating around, such as [1]. YellowTAB Zeta[2] was also based on Dano.

[0] https://en.wikipedia.org/wiki/BeOS_R5.1d0

[1] http://qube.ru/files/images/beos_r51d0_on_amd_athlon_xp_1600...

[2] http://www.osnews.com/story/3692

jeffbush
The tabs look different than I remember, but yeah, that's it.
sudioStudio64
I found my BeOS install CD and floppy the other day.

I never had hardware back then that could really use something like BeOS.

If you liked this then you should consider supporting the Haiku project. They are keeping BeOS alive.

https://www.haiku-os.org/

I'm wondering though if anyone here can speak to how much C++ was used in writing the kernel? I've heard multiple stories... some say that all of it was very OO, some say it was "C with classes" kind of stuff.

spain
If you like BeOS, you should check out Haiku which is an open-source OS inspired by BeOS [0].

[0] https://www.haiku-os.org/

72deluxe
It's a good project; thankfully there is still the BeOS book within it so you can write applications on it using the tidy BeOS API. I think there are GCC 2 and GCC 4 compatibility issues (attempting to maintain binary compatibility won't work with GCC 4), and they have a package management system which some feel is too Linux-like, but it's an interesting project and can breathe life into an old computer somewhere.

Speedy too!

brunorsini
I ran BeOS for almost a year around that time. I remember it as fairly stable and packed with neat little features that made it feel lean and modern. Then I started to get disappointed that the community didn't seem to be growing fast enough... It felt a bit like computing while stranded on a desert island
KMag
I was triple-booting Linux, Win2k, and QNX, and often booting the BeOS demo from floppy.

The BeOS driver for my 3c905 Ethernet card was buggy and about every other morning I'd wake up to find a dialog to the effect of "Your network driver has crashed. Click OK to restart the driver." It stunk to have a poor driver, but it was really cool that a bad driver didn't take down the system. A couple years later, I had a CD-R with some of my old financial records on it that got corrupted and would kernel panic Linux, Win2k, and OS X. These days, DVD and CD drivers should really be in user space.

I was playing around with semaphores, though, and found a way to reliably kernel panic BeOS, even though the exact same source ran perfectly on Linux. Enough kernel panics and I slowly got filesystem corruption. One morning I woke up to my disk light being on: the floppy drive had spun all night and worn a visible track into the coating of the disk. Both the floppy drive and the floppy were ruined. I used the vendor's version of tar to back up my data before reinstalling the OS. That's when I learned that all of the NetPositive browser bookmarks were stored as zero-sized files, with the links in metadata that wasn't preserved by tar. When resizing the partition once, I learned that metadata tar didn't preserve was also what made the userspace drivers show up in the BeOS task bar.
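
The metadata in question is BFS extended attributes, which live alongside the file data that tar actually archives. A sketch of the calls involved, assuming the classic fs_attr API ("META:url" is the attribute name NetPositive reportedly used; treat it as an assumption):

    #include <cstring>
    #include <fcntl.h>
    #include <unistd.h>
    #include <fs_attr.h>        // BeOS/Haiku attribute API
    #include <TypeConstants.h>  // B_STRING_TYPE

    int main()
    {
        // A zero-byte file whose entire payload is an attribute,
        // NetPositive-bookmark style. Plain tar archives the (empty)
        // file contents and silently drops the attribute.
        int fd = open("MyBookmark", O_CREAT | O_RDWR, 0644);

        const char* url = "https://www.haiku-os.org/";
        fs_write_attr(fd, "META:url", B_STRING_TYPE, 0,
                      url, strlen(url) + 1);

        char buf[256];
        fs_read_attr(fd, "META:url", B_STRING_TYPE, 0, buf, sizeof(buf));

        close(fd);
        return 0;
    }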

As much as it would have been cool to have OS X based on BeOS, QNX was much lighter weight and more robust. Plus, the QNX Photon GUI subsystem was like X11 and Synergy on steroids. OS X is a lot of things, but the XNU kernel is quite heavy, and it's not clear that it's possible to implement Mach ports in a lightweight manner. Both BeOS and QNX show it's possible to have a lightweight, high-performance kernel that's isolated from most driver bugs.

A_COMPUTER
Haha, I was doing the same thing in college. It was a great time for messing with OSes. I started running BeOS and QNX on a daily basis because for some reason my machine started crashing in Windows and I needed to get work done. BeOS beat out Linux because the install was simple, it booted fast, and the desktop was a hell of a lot nicer.

People forget this, but Sun created a Java runtime for BeOS before Linux.

I agree about QNX though, it wasn't as stylish as BeOS, but it was far more robust and polished.

xj9
This video always makes me wish Apple would have bought Be, Inc. instead of NeXT. The BeOS was much more interesting than NeXTSTEP!
dnautics
A lot of the Android architecture is inspired by BeOS... IIRC, especially the stuff around intent-based IPC.

Two key Android devs, Romain Guy and Dianne Hackborn, worked on BeOS.

jbrooksuk
It'd be interesting to know where Apple would be if they had bought Be, Inc. Would we have iOS? Would Apple be successful? All very interesting!
pjmlp
Assuming that Apple had been successful, it would mean both major desktop systems wouldn't be UNIX based.

On the other hand, Mac OS X being NeXTSTEP is what brought many back to the Apple world in those hard years.

I was at CERN when Apple used to visit us showing how Mac OS X was a good fit for UNIX developers, given its BSD heritage.

So the BeOS route might have meant failure to recover from their situation.

However as a fan of OS architectures, the alternative reality of a successful Apple with BeOS is quite interesting.

laumars
BeOS had a bash command line and partial POSIX compatibility, so it might still have fared well with UNIX developers. Particularly given that Linux was still playing catch-up at that point in time.
pjmlp
It gave first-class treatment to C++, and I doubt you could cleanly implement features like fork() or signal handling in its thread model.

Sadly the "The Be Book" is not clear on what was supported, and I don't remember it any longer.

KMag
Well, if they had gotten Steve Jobs but had BeOS, I imagine there would have been something iOS-like. The BeOS kernel is much lighter-weight than XNU.
laumars
If I remember correctly, NeXT was largely bought for Steve Jobs rather than NeXTSTEP. Also, the iPod and the iPhone (or rather the iPad, as that was what Apple were originally working on before they miniaturised it and released it as a smartphone) were Jobs' concepts. Obviously prior art existed, but the digital music player and tablet markets were pretty underwhelming to the average layman before Apple released their products.

So I don't think Apple's business would have exploded like it did if they had bought Be Inc. However, Be's core business was multimedia (like Apple's was pre-OS X), and that industry has since felt a little neglected by Apple as they seem to be concentrating more on gadgets than power users. So if I had to speculate, I'd say a Be Inc / Apple business would have grown with the media industry and stayed more faithful to their core users.

I also think Apple would have consumed more of Linux's market share than it has now, because an Apple-owned BeOS would still have retained at least partial POSIX compatibility, meaning the leap from UNIX / Linux would be equivalent to what it is now, but with less of the fanboyism that was largely drummed up by Jobs to increase sales. I think, perversely, that fanboyism has put off many would-be migrants to Apple from the FOSS community who want to use an open API without the corporate BS.

Lastly, I think Macs might have become the de facto standard web developer's machine. Partly because of being more true to its core multimedia business (and the web now being as much a multimedia platform as any other these days), partly because its POSIX compatibility would have made it trivial to run the same Linux / OS X development tools, partly because of it taking business from Linux (see above), partly because I think Apple would have continued to allow 3rd-party manufacturers to build Apple-compatible machines (IIRC that was one of the first things Jobs terminated when he rejoined Apple), and partly because Windows users would feel less threatened making the switch (BeOS always felt like a less significant paradigm shift from Windows than OS 9 / OS X did).

So in short, I think OS X would have had a greater market share if Apple bought Be Inc, but I think Apple would have been less successful as a company since their biggest market at the moment is consumer gadgets (specifically the iPhone).

This is all just guesswork, obviously. But sometimes it's fun to speculate.

georgeecollins
I do not think that is correct. Apple needed a new OS badly. I developed game software for PCs in the Win 98 / OS8 era for both platforms. Macs were still a nice experience but their performance relative to a PC was terrible.

Steve Jobs at that time did not have the reputation he does now, or even had a few years later. Many people viewed NeXT as a business failure at the time. They were still in business, but relative to the huge investment they had gotten they were doing pretty poorly. There is a book written at the time, "Steve Jobs and the Next Big Thing," that sums up the conventional wisdom of that era.

The fact that they were considering Be and NeXT at the same time tells you that they were looking for an OS.

laumars
I suspect you may have misread my post since you're offering a counter argument to points I wasn't discussing.

I wasn't suggesting that Apple shouldn't have bought or released a new OS. I was speculating about a future where Apple bought Be instead of NeXT (hence my frequent references to Be Inc).

However on the (unrelated) points you raised, I do completely agree with you.

jay-saint
Ah, the nostalgia. In 1999 I was a sophomore in college and built my first PC that was all my own. I used the legendary Abit BP6 motherboard http://en.wikipedia.org/wiki/ABIT_BP6 I had a pair of Socket 370 Celeron 366a processors overclocked to a 100MHz bus, which made the system think it had a pair of P3 550MHz CPUs.

BeOS was a great OS for experimenting with SMP. The CPU monitor in the video would show you the load for each CPU, and you could click the left side to toggle a CPU on and off. You could immediately see frame rates drop and the remaining CPU's usage spike.

strictnein
Crazy, I have almost the exact same story and the exact same setup. I replaced a new 400MHz Pentium 2 setup with that Abit BP6 board and ran the 366MHz processors at 466MHz, or 533MHz when it was cool out (I opened the window to help cool my system). Probably wasn't the greatest investment, since the 400MHz Pentium 2 was still a really nice processor when I did this (early 1999, I believe, towards the end of my freshman year of college).

I ran multiple OSes: BeOS and Red Hat Linux to do my CompSci work for school, and then Win98 (which only supported one processor, of course) to play games, mainly Quake 2 and Tribes.

cturner
Similar story. I put together a dual P2/350 on (I think) a Tyan motherboard in December 1998 as a dedicated BeOS host. I wasn't good at budgeting, found myself in another state having run out of money a few weeks later, and had a bit of a challenge getting home.

Parts of Haiku remain impressive. For example, there are two performant message-bus mechanisms available via standard system calls:

* The flagship IPC mechanism is the BMessage. You can exchange messages unicast, broadcast using BRoster, or pub/sub with StartWatching/SendNotices (see the sketch after this list).

* You can also treat the filesystem as an async messaging system using BPathMonitor. Yes, you can create RAM disks. They have FUSE as well.
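
A minimal sketch of the unicast case, assuming the classic application kit API (the receiver's signature and the 'blnk' message code are made up for illustration):

    #include <Application.h>
    #include <Message.h>
    #include <Messenger.h>

    int main()
    {
        // Every Be app identifies itself by a MIME signature.
        BApplication app("application/x-vnd.example-sender");

        // Address a (hypothetical) running app by its signature.
        BMessenger target("application/x-vnd.example-receiver");

        // Messages are typed dictionaries keyed by a 4-byte code.
        BMessage msg('blnk');
        msg.AddString("text", "hello from another team");
        msg.AddInt32("count", 3);

        target.SendMessage(&msg);  // asynchronous, cross-team delivery
        return 0;
    }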

It runs pretty well under VirtualPC. There's a trick to configuring the mouse to touchscreen once you've installed the seamless-mouse tools.

gonzo
I had an original BeBox, which I upgraded to the faster CPUs when available. I bought the BeOS team a case of champagne when they shipped a major release.

I'm happy Apple bought NeXT.

fit2rule
I've still got my BeBox and boot it every year just to confirm it still works. It still works! I have no clue what I'll do with it - it's not worth selling. Anyone got any clues?
crxgames
I'd be interested in it.
fit2rule
What do you think it'd be worth to you? Shipping from Europe.
robin_reala
http://www.computerhistory.org/ don’t list a BeBox in their collection…
72deluxe
Out of interest, why are you happy that Apple bought NeXT instead of Be?
bestham
Can't answer for parent, but Steve Jobs is a good enough reason.
gonzo
This. Apple wouldn't be what they are today without the return of Jobs.
npunt
Ah this brings back memories! I happened to go to the Be developer conference back in summer 1996 - didn't know anyone, wasn't even a developer, just a curious high schooler interning at a mac user group down in LA. Hopped on an airplane alone to come out to San Jose because why not?

I remember talking to some Apple exec sitting next to me in the presentation room as some Be people finished their speech. I asked about their prospects and he said something like 'yeah it's been tough for us but Gil Amelio is going to turn things around'. Gil had just joined as CEO back in February. I remember thinking that the guy didn't sound like he really believed what he was saying, though - there was a certain resignation, or just a moment where he didn't feel he had to keep the 'everything's great' performance up for a high school kid. After the NeXT acquisition I remember thinking about this guy and that he probably didn't have a job anymore. Who knows, maybe that was actually Jony Ive.

At the conference there was a whole group of Apple people checking things out and talking to everyone. I don't remember if it was public knowledge that they were negotiating with Be at the time, but it was no secret why they were there. There was a lot of excitement among the developers there, but also a lot of trepidation - there was just loose talk of being useful for really specialized AV fields but no path or believable / clear idea of how this could go to mass market. Even a 15 year old knew that. Plus, Microsoft was unstoppable back then. A fair amount of talk among the devs was about how this could be the next Apple OS, and I think that's what excited people. But everything hinged on acquisition.

Afterwards I think I exchanged a few emails with Dominic, because I was interested in BeFS. I didn't really know how to talk to him so I think I just offered him access to my mp3 server. Meanwhile, when I got back to the Mac user group, I installed the developer preview CD on some Power Computing Macs we had and they just transformed into beasts. BeOS was otherworldly; using it felt like you were 5 years in the future but in an alternative universe (full of tiny yellow title bars). Because all of the apps were just apps to show off Be's capabilities, it was also vaguely reminiscent of the 90s demoscene, with Be one-upping Future Crew by dropping an entire OS.

History shows NeXT was the right move. But Be... they were something special.

Scottn1
OMG, this brings back memories and makes me sad at the same time that it failed to gain ground. I was a BIG BeOS fan. In fact, I could probably dig the actual purchased BeOS package, and "The BeOS Bible" book I bought too, out of my storage. It was the "Opera Browser" of operating systems at the time to me. Lightweight, nimble, fast, and it completely felt like it was well thought out by brilliant engineers who weren't tied down to legacy.

I remember pitching it to my co-workers and praising its ability to keep playing multiple movies smoothly in a window being dragged, even with other tasks running in the background, which at the time was really cool considering Windows/Linux on the same hardware could not. And to this day, it had my all-time favorite newsreader (can't remember the name, but it was threaded news that used multiple windows).

In the end, it just didn't have the apps, the world was Microsoft-centric, this thing called "Linux" was the buzz, and Apple was gaining ground.

jason_slack
Wow, I haven't seen this video in a long time.

Buying a BeBox was one of the best things I ever did. I took out a loan for about $2,000 from my bank to get one and start writing apps. Not a lot materialized out of it financially but the experience was invaluable. I met a lot of people from the community that I still chat with to this day.

Be, Inc had some really good ideas.

crxgames
I owned a dual-66MHz model a few years back. Hands down the coolest piece of computing hardware I've ever purchased. I regret selling it at least once a month. I will never find another one.

It still blows my mind how good the OS was. You could deadlock the entire system, but I'll be damned if your CD was going to stop playing perfectly. Not even a skip.

jason_slack
I did a lot of experiments writing C++ to lock up resources in an effort to try and crash the system. It was very hard to do. It was amazing to be banging on the system and my tunes were still humming away! It is funny to hear others say this too.
fit2rule
Still got your BeBox? I turn mine on every now and then to make sure it still works - I don't think I can ever bring myself to sell it, but I'm still at a loss as to what I will do with it.
crxgames
Which model?
fit2rule
Rev A, 66MHz. I bought a new PC instead of upgrading to the 133MHz model, and ran NeXTSTEP on it, then BeOS... then Linux. ;)
jason_slack
I do! A Rev A 66, IIRC.
Aqueous
BeOS is a good example of why, if you're already big, you can stay proprietary, but if you're small it's better to release as open source (when there are big players in the market).

If Be had simply released its source early on, BeOS might still be very much alive and healthy today. Given that it achieved on the desktop what Linux still has not, it might have bested Linux in at least the desktop arena. Its main mistake was trying to make it in the proprietary OS market, up against both Microsoft and Apple. Certainly after Apple had failed to buy Be, their next best bet was to transform BeOS into a GPL- or BSD-licensed project.

Now, we have Haiku OS, but it is a bit late, and can only ever be an approximation of the original experience / API.

mnml_
When I was a kid I was running BeOS and it was really cool at the time.
endeavour
I remember being impressed that the BeOS media player could play audio CDs backwards in real-time. Not really sure why you'd want to but it was quite a good party trick.
LoSboccacc
Yeah, it could transcode anything to anything in real time and feed it to an appropriate player.

BeOS is the only OS which implemented the UX of Unix pipes in a way that actually made sense in a graphical environment.

lectrick
Is it time yet for a new OS that emphasizes immutability, security, concurrency and other functional-language paradigms?

I feel like even today, we are not seeing the true possible performance of our hardware due to all the "sediment layers" (as Jean-Louis Gassée put it) of our popular OSes. We're not seeing the possible reliability and security, either.

tel
MirageOS? (http://openmirage.org/)

I kid, but only a little.

agumonkey
I feel the same watching this or The Shawshank Redemption. There's some lost beauty in it. Even the BeOS source code / API was beautiful. I remember some code to query and filter FS results based on metadata, not far from a tiny SQL-like C++ eDSL.
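
That code was presumably the storage kit's BQuery, where you hand BFS a predicate over indexed attributes and it streams back matching entries. A rough sketch, assuming the classic BQuery interface (the attribute name and predicate are illustrative):

    #include <cstdio>
    #include <Entry.h>
    #include <Path.h>
    #include <Query.h>
    #include <Volume.h>
    #include <VolumeRoster.h>

    int main()
    {
        BVolumeRoster roster;
        BVolume boot;
        roster.GetBootVolume(&boot);

        // Ask the filesystem itself for files whose indexed
        // MAIL:from attribute matches a pattern -- no directory walk.
        BQuery query;
        query.SetVolume(&boot);
        query.SetPredicate("MAIL:from == \"*jlg*\"");
        query.Fetch();

        BEntry entry;
        BPath path;
        while (query.GetNextEntry(&entry) == B_OK) {
            entry.GetPath(&path);
            printf("%s\n", path.Path());
        }
        return 0;
    }
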
whoopdedo
At 13:00 he demonstrates positional audio. Whoever uploaded this to Youtube reversed the channels so the sound pans right when he moves the source to the left.

Twenty years later and why doesn't my computer come with a 3D mixer already?

mrsirduke
Running BeOS on my Pentium II Celeron 333 MHz was pure joy. It's such a shame it didn't amount to more than it did.

That and OS/2.

feld
There's plenty of OS/2 still in production. You should be concerned about that.
mrsirduke
You mean in ATMs? I'm almost more concerned by them running a full Windows XP, but that's just me.
Zardoz84
The ATMs of Santander Bank (Banco Santander in Spanish) keep using OS/2. I saw it a few months ago when one got bricked and showed the nice old OS/2 boot splash screen.
sigzero
Hey no OS/2 bashing. I was out of work and got a job building a satellite NOC using OS/2. :)
protomyth
I view OS/2 like I view the AS/400: set it up and let it run forever. Not cool, not current, but they do the job.
headgasket
This is the perfect example as to why we need to focus on the customer problem not the technical solution.
sigzero
I remember that. I also remember all the excitement around it and then it fizzled.
alper
Are current Linux distributions as good as this yet?
furyg3
I was lucky enough to have a friend working at Be at the very end. This is when they abandoned all hope of getting bought by Apple (erm... I mean, gaining presence on the desktop) and focused on "Internet Appliances" like the eVilla.

BeOS was really amazing at the time, especially in comparison to 'next generation OSes' like Windows NT, the vaporware that was Copland, Linux (which was unusable for mortals) or Solaris. It was clear to me that the future was going to look like BeOS or NeXT, but I could actually easily download and run BeOS on my Mac. It was incredibly usable... I installed it and was able to use it for months. It just needed apps.

When Apple went with Jobs (and thus NeXT) I thought that made sense. But when I saw the early releases of Rhapsody I really thought they had made a huge mistake. BeOS was just so much better.

It's really fun to fantasize about what the world would have been like if BeOS made it big. There's still a lot of tech there that has not yet made it into desktop OSes.

LordKano
Gassée really overplayed his hand.

Had he been more reasonable, Apple would have acquired Be Inc and based their next generation OS on BeOS.

I had a similar feeling when I tried Rhapsody DR1. Yeah, it was kind of cool and had potential but Be had a real working OS that was just waiting for that Apple polish.

In the end, Jobs and his vision were probably better for the long term health of Apple but at the time, I too thought that Apple was making a mistake.

I still like to muse about what could have been.

frik
BeOS was great, but compare Steve Jobs' NeXT presentation (1992) to the BeOS video: https://www.youtube.com/watch?v=gveTy4EmNyk

NeXTSTEP is the predecessor of OS X, the rest is history.

BeOS filesystem "BeFS" had extended attributes (metadata) with indexing and querying features similar to a relational database and Bill Gates' vision of "information at your fingertips". Though Microsoft failed to complete Cairo-OS as well as WinFS. The BeFS main developer wrote a book about it, as it is out of print now he released it as PDF for free: http://www.nobius.org/~dbg/practical-file-system-design.pdf , http://en.wikipedia.org/wiki/Be_File_System

Check out HaikuOS, an open-source BeOS reimplementation: http://en.wikipedia.org/wiki/Haiku_(operating_system) . And there were two BeOS-inspired OSes, ZETA-OS and SkyOS, with some shady history: http://en.wikipedia.org/wiki/Magnussoft_ZETA , http://en.wikipedia.org/wiki/SkyOS

Some of the BeOS vs. NeXTSTEP history is documented in Apple Copland OS article: http://en.wikipedia.org/wiki/Copland_(operating_system)#Canc...

Another interesting video is: https://www.youtube.com/watch?v=UGhfB-NICzg (In 1991 Steve Jobs' company commissioned a head-to-head programming competition to show how much faster and easier it was to program a NeXT computer vs a Sun workstation. The NeXT operating system went on to be the foundation for Apple's Macintosh OS X about a decade later.)

threeseed
What was amazing was all that BeOS could accomplish on almost exactly the same hardware Apple was shipping. A pretty extraordinary accomplishment at the time. There are some technologies in NeXT which are invaluable today, e.g. the native PS/PDF support and UNIX underpinnings, but I think BeOS would've delivered far more bang for the buck in the short to medium term. I agree it's great to look back and wonder what if.

One technology that always gets forgotten in this saga was WebObjects. Had Apple actually released it for free it could've been the standard for all web applications built during that time. Absolutely nothing came remotely close to it. Instead it went on to inspire technologies like Hibernate and JSP/JSF which dominated early web applications.

hedgehog
At one point you could get a version that ran on Apple's own PowerMac hardware. It was an impressive contrast to MacOS 8.
protomyth
Of the two, I found NeXTSTEP / OpenStep easier to program and a lot more feature rich. BeOS felt like it had better performance, but was harder to program and didn't really have the development tools.

In retrospect, it was a good thing Apple bought NeXT, but they sure lost a lot of cool things in the transition to OS X. BeOS seemed like the OS that could have still competed if they had skipped "Internet Appliances" and gone for the sub-$500 market. It could be performant on low-end hardware.

baldfat
The better technology has yet to win in desktop OSes.

Examples: 1) CP/M was so much better than DOS

2) Amiga was 7 years ahead of its time (Though the OS had a ton of bugs the first 18 months)

3) BeOS

4) We still deal with corporate IT that is 90% Windows shops for corporate desktops. (Though Windows has gotten MUCH better.)

5) Gnome and GTK+ gained more mind share than KDE :) Okay that one is personal opinion but :)

dheera
I think in most cases it's due to compatibility reasons. The "better" technology usually comes later and is thus less compatible with the rest of the ecosystem.

I used to use GNOME in the GNOME 2.0 / KDE 3.x days. Even though I thought KDE was better, it simply wouldn't play nice with GTK and GNOME apps. Conflicting sound servers would crash, themes and widgets were a mess between the two ecosystems, and there were never-ending font issues.

Narishma
All those cases mentioned by the post you replied to came out earlier than the "worse" technologies that ended up winning.
Zardoz84
OS/2
baldfat
That wasn't "better" :)
agumonkey
That's the way of the world.

'Better' is a fuzzy term; what catches on 'is' the best, in some way, if that's the metric we observe. Everything that was 'better' failed in some regard: it either came too early, wasn't aligned with the mass market of users (Lisp, Smalltalk, SML), neglected some detail that wasn't a detail for the mainstream (why PHP instead of Perl), etc.

I spent the last 10 years wondering why so many beautiful things died in the past while we suffer subpar systems. And every time there's a stupid, indirect, but perfectly valid reason. Less is more, or something of that kind. Nature.

romaniv
You're forgetting about external factors that have nothing to do with the product itself. Having a better marketing team or deeper connections in the industry does not make your actual OS any better.

Survival of the fittest doesn't really apply to human artifacts, and especially not to art and cutting-edge engineering.

I didn't know about the BeOS booting process, but I stumbled upon the good old demos[1] recently. I remember how amazed I was back in the day. I'm still amazed, more than before. Sad.

[1] http://www.youtube.com/watch?v=BsVydyC8ZGQ

Apr 29, 2012 · beosrocks on The Dawn of Haiku OS
dr_dank's comment on Slashdot (http://slashdot.org/comments.pl?sid=66224&cid=6095472) pretty much sums up its awesomeness:

BeOS was demonstrated to me during my senior year of college. The guy giving the talk played upwards of two dozen mp3s, a dozen or so movie trailers, the GL teapot thing, etc. simultaneously. None of the apps skipped a beat. Then, he pulled out the showstopper.

He yanked the plug on the box.

Within 20 seconds or so of restarting, the machine was chugging away with all of its media files in the place they were when they were halted, as if nothing had happened.

If you've never played with BeOS on an old Pentium II, it's hard to imagine the kind of performance it was able to squeeze out of that hardware. Here's a rough idea:

http://www.youtube.com/watch?v=BsVydyC8ZGQ#t=17m36s

falling
And now that Lion has reimplemented the feature, people hate it.
shadesandcolour
I think people dislike it because it's not as efficient. Lion takes a while to start everything up where you left it; given the time it takes, I sometimes wonder whether I would have been better off closing it all and reopening it myself. It's getting better though.
Lewisham
Lion takes a good 3-5 minutes to start up from cold to having restored my windows, due to all the disk thrashing it does loading all the programs back up at once. I get the feeling it doesn't often respect my choice of not opening the windows, so I get to sit there and watch it grind away opening programs I probably don't want it to open as that's the reason I rebooted in the first place.
alxp
It seems Lion's implementation of restarting apps depends on SSD-type speeds. Also, a fairly fat application like a browser has an entirely different amount of work to do before it finishes loading compared with the very thin media players this BeOS demonstration would have been showing. Apples to oranges, but BeOS being multi-thread-mandatory from the start, whereas Lion's application resume was an add-on feature to mature apps, makes the difference pretty glaring.
remixhacker
it loads mega slow
nkoren
I'd be more forgiving of Lion if it could play even one movie on my MacBook Pro without occasionally going into frame-dropping, VM-swapping fits. There's no question that my hand-built BeOS box, 15 years ago, had a far snappier userland experience.
batista
>I'd be more forgiving of Lion if it could play even one movie on my Macbook pro without occasionally going into frame-dropping VM-swapping.

Try my MacBook Pro then, I play movies all the time on it with no problem of frame-dropping or swapping. And it's a 2007 model.

neuroelectronic
Oh, why should I try a free OS to improve performance when I can just purchase a $2000 machine?
nkoren
Well, that's weird. Mine's a 2009 model and I sorely regret installing Lion on it. The thing absolutely crawls.
batista
I'm running Lion too. If you have repeatable problems, there should be some specific source for it. Tried "Activity Monitor" when it occurs?

Common problems can be: Spotlight doing indexing at the time, Flash fucking around, too little available hard drive space (less than 5GB), some rogue app, etc.

MBP 2007, 2GB RAM. I have now open: Chrome with 7 tabs, Sublime Text, Terminal (2 tabs, one SSH), Mail, iA Writer, Adium, iTunes, TunnelBlick VPN, Photoshop, Transmit, Dropbox, Alfred, Little Snitch and VLC and the movie plays just fine. I use either VLC or MPlayerX though, very rarely QuickTime w Perian.

Now, some people open 50 tabs and think that the browser should automagically handle them all, with 20 instances of Flash running in videos and apps, etc. Not so. VMs are also very resource hungry.

That said, the laptop is noticeably slower than my 4GB / i7 iMac, but not to the point it swaps --unless I start my 1GB linux VM (VMWare).

laaph
I have Lion, on a 2010 MBP, and I have Activity Monitor open ALL THE TIME.

On Snow Leopard, I found what you say to be true. On Lion, either the mysterious process is eluding me, or it is just much slower. I do development, and I can tell you that the iPhone Simulator is a frequent culprit, VirtualBox is also rough (but if I ssh in to my virtual machines rather than use the GUI it's fine), and the Time Machine daemon trying to back up causes things to slow down (even when not plugged in to a disk) - but even after all that, performance is downright crap compared to Snow Leopard.

If you have any idea what my specific source could be I'd be thrilled to hear it. Frequently at the top of my Activity Monitor are WindowServer and kernel_task, except when compiling or doing other fun things. Even when the interface is locking up I can't find anything fun.

Zirro
Do you by any chance have one or more external hard drives attached? I've been having a lot of lock-up issues while I have six of them connected (through a hub) to a MacBook Pro from 2011 running Lion. It looks like this (also shows it dropping to normal levels instantly):

http://cl.ly/1s3N2r2X1u2J2K010H2i

There's no process running wild from what I can see in the Activity Monitor, but it appears to happen a lot less frequently or not at all when I don't have my external hard drives attached.

laaph
No external hard drives attached, except when I remember to back up my machine (I frequently, but not always, plug it in when it is on my desk, but it is just for Time Machine).

Also, unless I am running heavy CPU things (Xcode, VirtualBox machines, etc.), I usually don't see a lot of CPU time like that.

ZenPsycho
Step 1: Go into /Library/QuickTime and move ALL the .component files out of there.
Step 2: Go into ~/Library/QuickTime and move ALL the .component files out of there.
Step 3: (Re)install Perian.
Step 4: Try playing a movie that usually craps out.

One of the annoying things about Mac OS X is that a rogue codec can mess up ALL media playing on the computer - so when you encounter problems, I've found the most successful strategy is to figure out which codec is messing things up.

batista
OK, I don't use VirtualBox; for performance reasons I use VMware Fusion. Even with this, I find, as you do, that using the GUI on the VM slows the Mac much more than ssh'ing into the VM. So the VM is a serious culprit. Certainly, if I have the VM running, I expect some slowdown and occasional swapping -- basically it means I just gave up 1 GB of memory to the VM (out of 2), plus tons of I/O scheduled by a different OS within my OS.

Other than that, I also don't run Time Machine at all -- I use Carbon Copy Cloner or Super Duper to make incremental and/or bootable backups every week or so. With iCloud, Dropbox etc for the important day to day documents, I don't see much need for Time Machine anymore.

So those would be two places to look at.

Besides those, you can try running some DTrace scripts for a more detailed look when your system starts to crap out.

_wwz4
The VirtualBox caught my eye... I loved using VirtualBox for a while. It's free. It basically works. It's free. Oh yeah, and it's free.

Recently, work sprung for a Parallels license for me. Yikes, I pity my past self for putting up with all the VB issues because it was free. Just little things like playing nicely with the app/spaces switch key combos, lack of crashes, and performance make me really regret wasting all those mental cycles on VB. Don't get me wrong, having a free x86 VM is wonderful and the VB developers deserve kudos galore... but I reminded myself that my time and sanity are worth a few bucks here and there.

gaius
it's hard to imagine the kind of performance it was able to squeeze out of that hardware

No no no, you have it backwards. It's hard to imagine what a "modern" OS (Windows, Linux, OSX) is actually doing with all those CPU cycles, that it can't do stuff like this even with >10x the compute power.

Bjartr
What are you talking about? A desktop today has no problem playing two dozen low-bitrate mp3s from disk, streaming a dozen movie trailers from youtube at 240p, playing WoW, and talking on Skype at the same time.

Now if you try to do that with only itunes or windows media player or equivalent "monolithic" player, you'll probably run into some slowdown, but only because those are designed to maximize the experience for one single piece of media at a time. Something like media player classic would do it in a heartbeat though.
