HN Theater

The best talks and videos of Hacker News.

Hacker News Comments on
Here's What Happens When an 18 Year Old Buys a Mainframe

SHARE Association · Youtube · 674 HN points · 8 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention SHARE Association's video "Here's What Happens When an 18 Year Old Buys a Mainframe".
Youtube Summary
Connor Krukosky is an 18-year-old college student with a hobby of collecting vintage computers. One day, he decided to buy his own IBM z890. This is his story.

Connor's presentation "I just Bought an IBM z890 - Now What?" was featured at SHARE in San Antonio in March 2016. For more great content like this, register for the next SHARE by visiting!

© 2016 SHARE Inc.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 license.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
I'm sure a lot of you have seen this already. If not, it's an entertaining 45 minutes (Here's What Happens When an 18 Year Old Buys a Mainframe)
The barrier is not always high
Are you trying to say that it is wide and heavy as well? :)

Joke aside, how are people actually getting started in this space? And more importantly, is it "worth it" financially? Is the mainframe skill shortage ("dreaded COBOL"?) a real thing?

> How are people actually getting started in this space?

Depends on your needs. Do you need near-perfect uptime? Most don't; their products aren't so critical, so they settle for a cheaper option. Do you want to run a data center? Most don't; it's so much easier to push code to AWS and let them handle the infrastructure demands.

Startups historically survive because of adaptation and speed. The mainframe may not fit those operational principles, though that's up to a given startup (see below for industries which might benefit the most from a mainframe).

> Is it "worth it" financially?

The System Z series excels at transactions and updating records. Industries which deal extensively with these types of computations are banks, airlines, credit card companies, stock brokers, insurance companies, and certainly others. If you're in one of those lines of business, you probably should look into mainframes. More generally, if you need to maintain system state at all costs, mainframes are probably a good option.
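The "maintain system state at all costs" strength described above is essentially atomic transaction processing: a multi-step update either completes entirely or leaves no trace. As a minimal, platform-neutral sketch of that pattern (using Python's sqlite3 purely for illustration; nothing mainframe-specific here):

```python
import sqlite3

def transfer(conn, src, dst, amount):
    """Move `amount` between accounts atomically: both updates commit
    together, or the whole transaction rolls back."""
    try:
        with conn:  # begins a transaction; commits on success, rolls back on error
            cur = conn.execute(
                "UPDATE accounts SET balance = balance - ? "
                "WHERE name = ? AND balance >= ?",
                (amount, src, amount))
            if cur.rowcount != 1:
                raise ValueError("insufficient funds or unknown account")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
    except ValueError:
        return False  # the `with` block already rolled everything back
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()
assert transfer(conn, "alice", "bob", 60)       # succeeds
assert not transfer(conn, "alice", "bob", 60)   # would overdraw: rejected whole
assert dict(conn.execute("SELECT name, balance FROM accounts")) == \
    {"alice": 40, "bob": 60}
```

The point is the all-or-nothing guarantee, not the storage engine: a failed leg leaves both balances untouched.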

I would not recommend mainframes for heavy, laborious computing loads (scientific computing, rendering boxes, etc.)

> Is the mainframe skill shortage ("dreaded COBOL"?) a real thing?

If a company wants to add a new feature to, or fix a bug in, a 40-year-old COBOL program, they'll likely have a hard time finding a young programmer, sure. Some older COBOL coders are helping fill the gap while they can. Don't forget that the System Z mainframes have a level of backwards compatibility that makes x86 blush; your COBOL program will certainly still run.

I wager most new programs (<15 years old) have been written in Java or C++, given that z/OS supports more languages than COBOL:

COBOL is dying, in the sense that it would be ridiculous to start a new project in COBOL. But many legacy systems still work, so why change them if they aren't broken?

I've made a comment in the past about IBM mainframes which you also might find informative:

Thank you for the answer and sorry for not being clearer, but I was asking from the perspective of an engineer that wants to enter the mainframe world.

Let's say I want to focus on developing, maintaining and running the software on mainframes. How do I get "in" and are the skills paid well in comparison to a typical Java/.NET/C++ developer position nowadays?

You touched a bit on dying workforce when you mentioned COBOL. Is that dying workforce a real problem making it financially lucrative for the people willing to learn that stuff? Or is it just a myth?

> Let's say I want to focus on developing, maintaining and running the software on mainframes. How do I get "in" and are the skills paid well in comparison to a typical Java/.NET/C++ developer position nowadays?

IBM has quite a few training and certification programs available [1][2][3][4]. (Number four is quite interesting: a yearly competition designed to teach mainframe skills.)

From my understanding, Java development on the mainframe isn't significantly different from standard Java programming. Much of the heavy lifting happens in the background, and the programmer's focus then becomes learning the platform's ins and outs.

As for compensation:

Mainframe dev: $74.9k. Software dev: $76.5k. Seems fairly equivalent.

> You touched a bit on dying workforce when you mentioned COBOL. Is that dying workforce a real problem making it financially lucrative for the people willing to learn that stuff? Or is it just a myth?

All good myths rely on elements of truth. :)

Will there be COBOL positions? Sure, for at least 10-20 years. But how do you want to set yourself up for additional growth?

If you invest time to learn traditional blacksmithing techniques, you might find a job at an interactive museum or on some specialized YouTube channel. Nothing wrong with that. Can't say there's lots of growth in that industry, though. The time and effort invested is about preserving the methods of the past. So, are you looking to preserve or create? Either option is perfectly fine, but each presents tradeoffs.







IBM has an internal program for training people in mainframe skills, but for the life of me I can't remember what it's called. I was going to be part of it when I graduated college, but then '08 happened and I was moved into Open Systems support after a RIF.

A Google search showed me this page.

Watch a teen rescue an old IBM z890 mainframe from the dead in his parents' basement:
Oct 18, 2019 · 6 points, 1 comments · submitted by gk1
Love this video, and bless his parents for being so supportive of an outrageous hobby! I'm going to shamelessly quote a comment from YouTube that always makes me laugh:

> "This young man has a screw loose. Kudos to his parents for not tightening it!"

Reminds me of the kid that bought an IBM z890 mainframe and brought it to his basement. And got it to work!
His presentation was seriously awesome! Thanks a lot.
Consumer tech has been refined and dumbed down to the point where everything just sort of magically works, and since it's "magic" you never have to dive into the nuts and bolts. By making it so easy to use, we are depriving people of their opportunities to organically learn like people in my generation had to.

Exceptions exist.

* Here's What Happens When an 18 Year Old Buys a Mainframe - YouTube

In what way were people able to introspect into their computing devices for learning? By being able to poke around OS details or the filesystem?

Doesn't it sound more important to have an IDE + fluid building so people don't have to care about the details?

It’s often useful to be able to know what goes on behind the scenes, but the hardware/software of today makes this increasingly difficult to ascertain.
Whether you are talking about old or new, it depends upon what you are trying to accomplish as well as how you go about it.

Think back to the computers of the early 1980s. It was quite easy for someone to pick up BASIC and create simple programs. Their understanding of programming would be basic, and their understanding of the machine would reflect that. That is quite similar to today.

Someone whose appetite has been whetted will move on to more advanced concepts. With early computers, that typically meant learning about the hardware or learning about algorithms: you either had to squeeze more performance out of your software, or you needed to extend your reach beyond the language (and its libraries). Today's programmer is, more likely than not, going to dive more deeply into libraries or learn more about their programming language. Yes, there will be exceptions based upon the domain that interests them or what they wish to gain from their learning.

Let's say that they decide to dive deeper into how the computer works. In the case of an early personal computer, the task was relatively easy. From a programmer's perspective, almost everything could be viewed as sequential execution shuffling data across a bus. If they were more interested in electronics, it was relatively easy to see what was going on by looking at a circuit diagram or the board itself, coupled with reading (admittedly harder-to-obtain) datasheets. With that knowledge, it was possible to modify the existing hardware and practical to build new hardware.

The notion that we are abstracting our understanding further and further from the workings of the machine, while letting that limited understanding decide what we can do with it, isn't exactly a new one. My first real encounter with that idea was from one of the pioneers of digital computers, who described the machine as fundamentally physical and analog. In other words, making the physical world digital is in some sense an abstraction in its own right.

Granted, the article's author had relatively little to say on the fronts of programming and electronics design. It was more of a critique of younger people using software with very little understanding of its features or how it works, the sort of thing that can get you into trouble when things are not working as expected. While I disagree that this problem is specific to this generation, it is a solid reminder that the stereotype that younger people are better with technology is just plain wrong.

There's no such thing as "don't have to care about the details."

This is the biggest difference between then and now. In the 70s and 80s computing was all about details, all the way down to the hardware, which most developers understood at the register level.

8-bit amateurs grew up with simple toy machines. They learned enough detail to allow a natural transition to professional development on mainframes and minis - far more detail and more complexity, more of an OS to deal with, but with a recognisably similar outline.

Now most development is more like LEGO - clip moving parts to other moving parts in a slightly precarious way and hope nothing breaks.

It's a completely different way of thinking about computing - kit and cookbook based, with less space for original creation and problem solving. This is partly because the details are hidden and can't be changed, but also because the ethic of experimentation and creativity based on deep domain knowledge driven by curiosity isn't the same.

> There's no such thing as "don't have to care about the details."

The whole point of the many abstractions between the user and the hardware is so they don’t have to care about the details.

Abstractions are, almost by definition, always imperfect. A perfect abstraction sort of ceases to be an abstraction and is simply "the way things are".
Not caring about the details leaves us with people who can't fix simple issues, and with applications that use far more CPU, memory, or energy than they really should. Sure, a lot of people don't care and just want to do the thing. But if you care at all that your computer, 1000x as powerful as the one you had years ago, doesn't really do a lot more, then you have to care. If you care about the users of your application and don't simply pass your lower development effort on to them as a performance cost, then you have to care. If you want to be able to fix your own issues, then you have to care.

Yes, the abstractions mean people can do more without caring, which in turn means more people can use them, but it comes at a cost too.

Every user application doesn't have to be written in C.
Of course not, but from reading HN, I'm not the only one frustrated with the performance of many applications. They don't have to be written in C to perform well. For example, I quite like using Qutebrowser, a minimalistic keyboard-centric browser that's more lightweight than Chrome or Firefox, yet it's written in Python (well, I guess the actual browser engine is C++/Qt).
I'm reading this (late) in qutebrowser! Mine has so many tabs open but doesn't hang (until it does), which is a fresh change from luakit (and Chrome, but not Firefox; alas, mine has many more tabs open there, so I don't open it anymore). And I'm using it because it's written in Python (and so is easily scriptable from my own Python programs).
Performance is a business decision.

It doesn’t come for free and ultimately other things might be more important.

If any performance gains from Moore's Law get canceled out by inefficient software due to Wirth's Law, then what's the point of all the work that we technologists are doing?

You can call it a business decision, but that basically means the tech people are doing the work and making our execs rich without actually creating anything that's more beneficial to humanity compared to the computer systems of 10 or 20 years ago.

Regarding computer literacy, there are two glaring differences between the current generation and the one that grew up with computers in the 90s. One is the one you mention: in the 90s we were our own tech support, so to speak, and when DOS didn't want to run that shiny game you just got, it was up to you to figure out the problem and hopefully find a solution. We learned a lot because we had to.

The other is that most devices are increasingly geared towards consumption and less towards experimentation and creation. On one hand, I used to do crazy things with assembler, like direct access to ports and interrupts, that no current OS would allow. On the other, the shift towards phones has greatly changed (and IMHO mostly reduced) what we do with our computers.

The good news is that the info is out there for the ones who do want to learn; we didn't have that privilege. Still, they are a minority, and in my experience the average 18-year-old preparing to start a degree in CS (or whichever career centered around programming) knows way less about computers than we did at their age.

The bad news is that you can't do any of that on modern computer systems anymore.

I started out learning how to program on Windows 95 to make games. I learned to use inline assembly to set the video mode, and then directly write to video memory at 0xA0000.

You can't do that anymore on modern Windows even if you knew how. Today if you wanted to start game programming, you'd probably read a bunch of articles recommending Unity, which doesn't even easily support C++ FFS.
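For readers who never touched that era: in VGA mode 13h (320×200, 256 colors, one byte per pixel) the framebuffer was a flat region at physical address 0xA0000, so plotting a pixel was pure offset arithmetic. A sketch of that arithmetic, with a bytearray standing in for the real hardware (Python here just for illustration; the original would have been C or assembly):

```python
VGA_BASE = 0xA0000        # physical start of the mode 13h framebuffer
WIDTH, HEIGHT = 320, 200  # mode 13h: 320x200, one byte (palette index) per pixel

def pixel_address(x, y):
    """Physical address of pixel (x, y): rows are stored consecutively."""
    if not (0 <= x < WIDTH and 0 <= y < HEIGHT):
        raise ValueError("pixel out of bounds")
    return VGA_BASE + y * WIDTH + x

# A bytearray stands in for the 64,000-byte video memory region.
framebuffer = bytearray(WIDTH * HEIGHT)

def put_pixel(x, y, color):
    framebuffer[pixel_address(x, y) - VGA_BASE] = color

put_pixel(0, 0, 15)      # top-left corner, palette index 15 (white by default)
put_pixel(319, 199, 4)   # bottom-right corner, palette index 4 (red by default)
assert pixel_address(0, 1) == 0xA0000 + 320   # next row starts one stride later
```

On real hardware the write went straight to the memory-mapped region, which is exactly what modern protected operating systems forbid.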

This vid has some good photos (also discussed on HN previously).

You don't want one if you ask me.

Jul 31, 2018 · dredmorbius on Hello World on z/OS
C'mon, you just need a little motivation. ;-)

Here's What Happens When an 18 Year Old Buys a Mainframe

Feb 17, 2017 · 7 points, 0 comments · submitted by gk1
Here is the long video from a while ago about his adventure

Mar 28, 2016 · 653 points, 176 comments · submitted by tbatchelli
There's money to be made in the IBM midrange market. The company I work for is still using the iSeries (although we're switching to TOP_5_CLOUD_ERP system as we speak), and we use Apache and PHP to connect to modern systems.

Every year, IBM is at the user conference for the software we currently use. They're always showing off their newest Linux machine with 200 cores and I don't know why, because half the people there aren't even using bar coding, and everyone there is a small fry.

Here's where a niche developer comes in. You can use Java which is supported by IBM, Zend has a nice PHP setup, and IBM's bastardized version of Apache. Want to build an API to interact with this stone age fucker? You can do it. Build a Slack integration and look like a hero. Heck build some crappy web app because it's easier and faster to do development with your favorite framework than it is to do it on the IBM with RPG and whatever the fuck that thing runs. SQL your brains out!

These customers aren't leaving the IBM without investing half a million dollars in a new platform. Every year there are fewer and fewer people there, but they'll gladly spend $50k to integrate some simple workflow.

Rent a booth for $1,000, give away an iPad or some FitBits and pass out some business cards. Go to 3-4 of these a year and you should be able to keep busy for a while. And you already know SQL, PHP, Apache, you just get to discover the 'intricacies' of doing it on the IBM.

You didn't do that work for a water treatment plant by any chance, did you?
What do you mean by barcoding? Are you talking about retail bar codes? Or is there something mainframe related?
A company I used to work for built online shopping carts. We were building one for an industrial company whose inventory system ran on an AS/400, and we were doing an integration. I had ZERO familiarity with IBM mainframes at the time and presumed (wrongly) that it ran some flavor of UNIX.

I was to work with their lead developer to get the integration going. He was maybe late 50s; I never met him face to face, only over the phone. I suggested that they build a "simple JSON or XML API" for us to connect to. This made their lead very uncomfortable. He did NOT want to put the machine on the public-facing Internet (reasonable enough) and had never worked with XML or JSON. He instead wanted to set up a system where, hourly, it would email us CSV diffs. Maintaining data by working from diffs sounded like a nightmare to me, plus I'd never programmatically worked with receiving emails. The whole thing sounded iffy. Plus, knowing their data model, it would not be shoved into CSV without a fight.

After a bunch of back and forth (weeks), we worked out a system where the server would upload a combination of CSV and XML to an FTP share we controlled, and I could reprocess the data hourly. It actually worked reasonably well. I'm curious if it's still in operation; I've long forgotten the name of the company we built this for, there were just so many.
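The arrangement described (full CSV/XML snapshots over FTP, reprocessed hourly) sidesteps the diff-maintenance nightmare by treating every upload as the complete truth. A rough sketch of that "reprocess the whole snapshot" idea; the field names here are invented for illustration, not taken from the actual integration:

```python
import csv, io

def apply_snapshot(inventory, csv_text):
    """Rebuild the inventory from a full CSV snapshot. Because each upload
    is complete, there is no diff state to reconcile: rows present are
    upserted, rows absent simply disappear."""
    fresh = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        fresh[row["sku"]] = {"name": row["name"], "qty": int(row["qty"])}
    inventory.clear()
    inventory.update(fresh)

inventory = {"C300": {"name": "Gasket", "qty": 7}}  # stale item from last hour
hourly_upload = """sku,name,qty
A100,Widget,12
B200,Flange,0
"""
apply_snapshot(inventory, hourly_upload)
assert inventory["A100"]["qty"] == 12
assert "C300" not in inventory  # absent from the snapshot, so it is gone
```

The tradeoff versus diffs is bandwidth for simplicity: a missed upload costs nothing, since the next one restores the full state.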
I'm almost certain I've worked with the same guy...
I believe I just found the Reddit thread he made when buying this machine. It has some great pics of him getting this monster into his basement.

That's some serious work right there, just dismantling the thing and putting it back together.

I hate it when I assemble computers and end up needing extra screws... or worse, end up with "spare" screws that leave me wondering what exactly I missed. Can't imagine doing that with something this big.

Or like when you buy something that comes packed neatly... and as soon as you unpack it, it becomes impossible to put it back and make it fit as before.

Maybe I'm the problem... uhmmm..

I started running a video camera when I do stuff like that.

Then you can go back and see what you missed.

That's a neat idea. Never thought of that before.

At most I laid down a white sheet of paper and put the screws in sort of the same shape as I picked them up from, so I knew these two long ones were at the top, the short ones at the bottom, etc.

But still... one accidental hit to the table and boom... everything is messed up.

That's why I use a white cardboard and tape all the screws to a diagram of the computer, camera, or whatever product it is.
I used to have a roll of sticky tape on the desk, which acted as a nice area to put all the screws in, with the added advantage that it would prevent them going walkies.
Try a magnetic bowl. Works wonders.
A pill box works great for holding screws. 28 boxes, all with lids. I numbered them and then kept a sheet documenting where each screw came from when I had to replace the failing drives in my Mac Mini server.
I take a digital photo and print it out, full-page size. Tape the screws to the location on the photo. Write the number in which they were removed.

This is especially helpful if what you're disassembling has a ton of slightly-different-length screws. Also helps to keep a record of the cable routing which can be tricky on laptops and cameras.

That reminds me of my first time racking some servers. The actual admins laughed at me for using all the screws/mounts. On one hand, I did a good job. On the other, those boxes were unscrewed and moved again twice that year...
Using all screws can be beneficial to performance -

That's just amazing. I can't believe this is a real effect.

I wonder how much has this been researched? Do you have any more links related to this effect?

I hate it when I assembled computers and I ended up needing extra screws... or worse, I ended up with "spare" screws which left me wondering what exactly I missed.

Nah, that's just Rap's Law of Inanimate Reproduction in action!


That's one very wise law. Never heard of that before.

I wish 100 Dollar bills had screws...

I can totally relate to this. While not a mainframe, I recently had to move, reassemble, and put into working order a film telecine. It's a huge beast of a machine that consumes 5 kW, weighs a ton or more, and comes with several computers and consoles. The fun part was that there's no support for it; the company went dead a few years back. It was a ton of fun though, figuring it all out from ye olde PDFs and hooking up a computer with PuTTY to various RS-232 terminal outputs to see what's going on. I got it into proper working condition in the end, and it works amazingly well and is now in production:

I even got to learn about HIPPI while at it. I wish I had more adventures like that.

I occasionally see Spirit 2Ks destined for the dump and wonder what would be entailed in running one. Since they were a little notorious for poor frame-to-frame registration, I wonder if that will get worse as the mechanics age.

It's a strange time for telecines, with so many companies having disappeared but newer, much more compact designs appearing, for archival scanning presumably. If you haven't discovered it already the archives of the TiG mailing list are a great resource for TK stuff - e.g. this thread about the new Blackmagic/Digital Vision/MWA/Kinetta scanners...

Most of the Spirits I've seen out there on the market are either being sold for parts or came from businesses that went bust. You could get one "relatively" cheap these days, though.

Pin registration is not what telecine is about. That's where you get scanners with their area sensors, versus telecine's linear array. We operate a few telecines (Cintel, MWA, Spirit) and a scanner (Arri), and tbh I've seen that "wobble" only on older telecines like the FDL series, also from Philips/Thomson/Grass Valley... the line has changed hands between quite a few companies over the years :) This particular telecine has a sprocket for the film and is rock solid.

Telecines are extremely useful, even these days. There's a difference between scanners and telecines, of course. A telecine's strength is that it's fast: it's realtime and, in the case of this machine, it can work all day in shifts and not break a sweat. This is extremely useful if you have a lot of material. Not all material on film is feature film; there is a ton of material out there on 16mm and 35mm, much of it from TV stations. This machine can do 35mm, 16mm, optical audio, and magnetic audio, all in realtime, and gives you 1080p/10-bit with sound. The sensor isn't everything it could be at 1080p, but for most material it's more than fine. 2K is important if you're doing 4:3 material from film, since you get 1556 pixels of vertical resolution, which is more than 1080. So, in a sense, you get a machine, for relatively cheap, that can be operated all day for all kinds of archival work.

There are better machines now, of course. Newer Spirits, Scanity, and you can even retrofit some of the older machines with Xena:

There was a lot of talk about Blackmagic's purchase of Cintel and their mini telecine. Personally, considering it's Blackmagic and the experience I have with their products, I would steer well clear of their stuff, especially if I were to run it all day long or, god forbid, run a business that relied on it. They are just not reliable, both on a product basis and as a company. MWA has interesting small scanners for 16mm, something which we also have. It's a nice machine, but the Datacine/Spirit is on another level hardware- and reliability-wise.

Kinetta is an interesting approach, and something I've had my eyes on. I would consider that for fragile/sensitive material considering that's what it's all about anyways.

The cool thing about the modern workflow with film is that the hardware is more or less the same. Telecines, 16mm/35mm optical blocks, light filters, scanners, (re)winders, wet-gate systems, audio heads, Lipsner-Smith film cleaners... it's all the same. The new things are new light sources (LED vs Xenon, though there's a debate about which one is better), cameras/sensors, and software. Software replaced a lot of the "boxes" around the hardware. If you have reliable machinery that outputs a good base image, then it's all about software, especially in the restoration area. It got a bit competitive there over recent years; everybody seems to have some sort of restoration software as an offering now. Heck, even I have some software of my own that I wrote a few years back for automating most of the cleanup tasks on images that come through from film.

Cool, years ago I spent many months going back & forth to Weiterstadt, working on Spirit DataCines and Spirit 2K/4K machines, adding support for controlling them via Baselight.

There's some considerable engineering in those machines, though you could replace almost all the boards with one or two FPGAs these days.

What are you using it with? BONES?

I was always curious how much use that Spirit support saw; it must have been a ton of work, but it seemed like it was added at the precise moment when everyone switched from running the film in real time during a session to scanning to DPXs beforehand!
We sold quite a few machines on the back of it, although those customers probably only got a couple of years use of it before their clients stopped shooting film.

Then most folk in London got rid of their Spirits.

Now this year half the big tent-pole movies are shooting 35mm film again!

When I first opened up the racks in the machine and saw all of those industrial electronics boards... initially I thought I was way in over my head. It took a bit of time, but now it's all clear and I see the ingenuity of it. It's a very elegant machine and works like a charm. I had to replace a few things, installed new lamps, cleaned up everything, and it's as good as new!

I have two signals coming out. There's an HD 1080p link coming from its HD serializers (actually four outputs via two serializers), which I laid out to several inputs (Tricaster, a custom-built PC with 10-bit input, monitors, etc.). For when I need full 2K/10-bit, the machine works at 6 fps, not realtime. Currently the machine has a HIPPI output, and with that I have limited options. Well, one option, to be specific: an SGI machine with a HIPPI card and Phantom Transfer Engine on it, which then transfers DPX files to a SAN. I would like BONES, or anything else for that matter, but that would require me to get a GSN instead of HIPPI, and BONES itself, of course. I'm still looking for options out there to see if there are any, preferably a PC-based solution for HIPPI. If not, GSN/BONES is what I'm considering. The price of that is a bit steep though (around $30k, I think), but it's still an option being considered.
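For a rough sense of why 2K/10-bit work drops to 6 fps on a HIPPI link: assuming standard 2048×1556 full-aperture DPX frames (10-bit RGB packed three components to a 32-bit word) and HIPPI's nominal 800 Mbit/s, six frames per second fits the link's budget but realtime 24 fps would swamp it roughly threefold. These figures are back-of-the-envelope assumptions for illustration, not measured specs of this particular setup:

```python
WIDTH, HEIGHT = 2048, 1556       # assumed full-aperture 2K film scan
BYTES_PER_PIXEL = 4              # DPX packs 10-bit RGB into one 32-bit word
HIPPI_BYTES_PER_SEC = 800e6 / 8  # HIPPI: nominal 800 Mbit/s, ~100 MB/s

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL  # ~12.7 MB per frame

def link_can_carry(fps):
    """True if the payload at this frame rate fits the link (ignoring overhead)."""
    return frame_bytes * fps <= HIPPI_BYTES_PER_SEC

assert link_can_carry(6)        # ~76 MB/s: fits
assert not link_can_carry(24)   # ~306 MB/s: realtime would need ~3x the link
```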

Yay an SGI in production! Octane?
Indeed, an Octane 2!
Is it… just for you? Can you just digitize your own 35mm film?
It's not for me only, of course! It's now in a fleet of telecines.
I am a mainframe developer. It's sad that IBM isn't more open about z/OS and z/VM to hobbyists. I think it really hurts the mainframe in the long run.
There is z/PDT for hobbyists, although it will cost $4k for a basic setup. It's an emulator that runs on a PC and has 3x the performance of the OSS hercules-390.
"although it will cost $4k for a basic setup"

It's a little more than two of my monthly salaries (after tax) here in the Czech Republic, and it's per year. I don't think I am nostalgic enough yet to pay that. :-)

Anyway, thanks for info.

Is it a cultural thing? Is there a reason for this?
It's a legacy of the era when IBM ruled the roost. All of the mainframe vendors were like that.

There was an office space in the middle of our data center at my first big IT job. Only Unisys people were allowed to enter, and crossing the threshold into that room, or opening a Unisys cabinet was a serious offense.

I'd be secretive too! That "mainframe" was a $25,000 Xeon server with a mainframe emulator running on it. They had these sweet contracts where they would lease the server based on performance -- MIPS/mo.

The IBM mainframes are big power boxes with custom asics for i/o and lots of mumbo jumbo to make it sound magical.

I still think that in IBM's case this depends on what you mean by mainframe. Midrange systems (i.e. the AS/400) are POWER boxes and are even sold as such today (IBM will sell you the hardware as an "IBM Power system" with a choice of AIX, OS/400 or Linux as the OS). On the other hand, I'm almost certain that System z CPUs are microcoded monstrosities that run z/Architecture machine code pretty much natively.
I've worked on the compilers for Z systems and on the z/PDT emulator that someone else mentioned. The processors they run are indeed monsters. The instruction set is especially interesting, as you can see the transition from 24- to 31- to 64-bit addressing. Messing around with the SAM (set address mode) instructions was always fun. They also have some interesting interruptible instructions that can move or compare up to 2GB of memory.
It's odd that we rarely hear from you guys on sites like this one. Why isn't there a VM that hosts JCL and which you can pass your own projects around on?

I wonder whether part of the problem might be that it's so awkward to replicate mainframe-style init on desktop operating systems. Desktop OS init pales in comparison to a mainframe's. I'm working on a project related to this at the moment.

Seems like a very well grounded kid. He took his GED at 16 and started attending a community college, using the extra time he gained by starting early to do career exploration. A very smart approach, in my opinion.
> very well grounded kid

This is important when working with ESD-sensitive electronics.

Great head on his shoulders, and feet on the ground
I use an old HP ProLiant as a workstation. For $900 you get 24 cores and 256GB of RAM. It's pretty loud and power-inefficient, but it works great for testing concurrent and memory-intensive code.
Woah. Where did you get it? Sounds cool. Do you run HP-UX on it or does some common Linux distro run well?
eBay. It runs Lubuntu; I develop Java.
Linux works fine.
Some similar machines (HP Proliant, 24 cores, 256GB ram) on eBay:

That one's $1,500. I'm sure $900 is possible if you keep an eye out; it seems like a pretty sharp deal though.

Sigh, I found one with 128GB RAM for $390. I thought great, I might actually consider this. Then I looked at the shipping cost: $2,000...
Wow, where are you?

I haven't found any with shipping (to Denver CO) more than ~$100. $2k sure is a bit steep....

And you can make a nice TensorFlow box by stuffing this puppy with 3-4 Nvidia cards ;-).
"IBM said the z890 will ship next month, with prices starting at $200,000." - 2004.

And he got it for $237. Good deal :)

The value seems to have depreciated even faster than for consumer hardware. I think you would expect to pay a bit more than $2 for a working laptop that sold for $2k in 2004.
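The arithmetic behind that comparison, using the $200,000 starting price quoted above and the $237 sale price: the z890 retained only about 0.1% of its value, and scaling a $2,000 laptop by the same ratio lands at a couple of dollars.

```python
z890_list_price = 200_000   # 2004 starting price quoted above
z890_sale_price = 237       # what Connor actually paid

retention = z890_sale_price / z890_list_price  # fraction of value retained
laptop_equivalent = 2_000 * retention          # same ratio applied to a $2k laptop

assert abs(retention - 0.001185) < 1e-12       # ~0.12% of list price
assert round(laptop_equivalent, 2) == 2.37     # "a bit more than $2"
```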
The laptop would be more useful to most people, even being a 2004 model. The market for folks with a use for a machine like this is very slim, even slimmer those who can deal with its weight/size/electrical draw.

That said, man I wish I had the space for one!

Yes, quite a deal. :) In the video @ 28:29 while showing the slide with his cost breakdown he says:

  "And I had someone from IBM run the serial number for me,
  and the original price tag was $350,000."
I would hope some corporate directors/CIOs see this and send this kid some spare parts for a great effort. Those scumbags at IBM couldn't care less about making this into a better story by helping out.
He's getting help from some IBMers via MAIN. I suspect sidechannel support may be viable.

I've only done a small bit of MF work, but the old-school IBM mainframers I've met (usually at conferences) have always been exceptionally helpful and friendly. It's a fairly small community. Young blood is often quite welcomed.

And those beasts will likely be around so long as there's mains power.

I realize that the z series mainframes are better than those of old, but man, are Mom and Dad going to be pissed when they get their next power bill. A z890 is still going to need a 240VAC 30 amp power connection. Imagine running your dryer 24/7. :-)
In the video he says it's been negligible so far.
Thanks. At work, can't (well, more "out of courtesy to the client that pays me by the hour, won't") watch the video. I'm not terribly familiar with the Z architecture, but I guess if you're not working the thing real hard maybe it doesn't use much power. On the ancient hardware I grew up on, just spinning the drives would probably double your residential power bill.
Now I want to ask my parents if their power bill dropped when I moved out of the house.
I concur with "I know mine did" :) On the other hand it is interesting that at the same time their heating bill went up.

And as my (bank's) house has mostly electric heating I somehow still don't care about power consumption of various computers and electronics much.

I know mine did. I had various machines, including a 233mhz PowerPC tangerine iMac running in a cupboard for a while. They dropped their power bill even lower by switching from a ~150W desktop to a 90W MacBook (which has ridiculously good standby power usage).

It's interesting now, I would have never thought of designing machines (off the shelf bought parts) for low power usage before I moved out.

When I did move out, power was 30 cents per kilowatt-hour and I definitely noticed the difference moving from NVidia to AMD (for a PC that ran 24 hours a day). That PC is now back on NVidia because a single GTX 970 was more efficient than my 2x6870s. I really wish they'd allow me to turn off the dedicated GPU and go back to the Intel iGPU when I'm only running Visual Studio. Lucid Virtu could have done this but it's really buggy.

These days I just make sure Wake On Lan works properly (a challenge in Windows 10 with commodity hardware) and run mostly ARM boxes otherwise.

I don't think I'd even consider running old mainframe equipment in Australia (unless I had 2+KW of solar on the roof)

I had problems with WOL on Windows 10; the solution was to just update the network card drivers to the latest version. Get them from the manufacturer.
Wow, I guess electric power really is cheaper in Norway than in Australia. I just did the math, and I paid around 11 Australian cents/kWh over the last two months. And that's including a pretty substantial (for me, on a 2577 kWh bill) fixed price. It's a shame bandwidth is so expensive here, or there would probably have been more big server hosts in Norway.

Of course with electrical heating most of the year, running power hungry hardware 24/7 is arguably more efficient than just straight burning the electricity for heat...

...but instead of a dryer, it's a z890! Totally worth it!
Truuuuue, but my clothes are still wet. Combine the two, though, and it'll dry clothes faster than you can put them in.*

*An incredibly lame joke playing off the I/O throughput of mainframes, and...oh, never mind.

Very entertaining talk! Awesome project. Almost brought a tear to my eye when he showed the picture of the entrance to the basement being excavated by his father...a kind of stoic supportive parenting to aspire to!
Does buying one of these on eBay get you the right to run z/OS? If so, that could be a lower-cost alternative for people interested in learning about mainframes, since if you had the usage rights you could run z/OS on Hercules and just store the actual hardware.
Another thing I wonder about, is how hard it would be to crack these things in order to run at full speed/cores. Assuming you're not a business dodging IBM licensing fees, I can't imagine the chances of getting caught are very big -- and in either case it's an interesting intellectual problem (ignoring for a moment the legality of breaking licence agreements).
You can run the OS but you can't run anything that a business would use on it (at least not without winning the lottery). Any software package you'll find in the real world is going to be $100k
In the USA you'd be able to copy any software available if it was for research, wouldn't you - provided it's also not commercial then there's a strong case for Fair Use exception.

Non-commercial use is not an absolute defence, nor a requirement, for Fair Use, but it is a factor, and with the educational/research facets ...

I'm pretty sure that's not correct. If it were why would research labs have to pay for things like Matlab ?
They're not researching Matlab, they're using it as a tool; it's also likely a commercial use (which isn't synonymous with profit-making FWIW).
For learning purposes, being able to run z/OS would probably be of considerable interest. If the machine actually requires 6 kW to run then I guess the electricity costs would be on the order of $1 per hour, depending on your local rates. Maybe you could set up a service to sell login access.

Note that I'm only speculating here. I'm not a mainframer, just a guy who's had a passing interest in these systems and when I was reading about them it appeared that the difficulty of gaining access to these environments was a significant obstacle to learning about them, whether for professional or hobbyist reasons, that despite the fact that there appears to still be quite a bit of demand for mainframe programmers.
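The "$1 per hour" guess above is easy to sanity-check. The 6 kW draw is the figure from the comment; the $/kWh rates here are illustrative assumptions, not real tariffs:

```python
# Rough hourly running cost for a machine drawing a constant electrical load.
# 6 kW is the draw quoted in the comment; the rates are assumed examples.
def hourly_cost(load_kw, rate_per_kwh):
    return load_kw * rate_per_kwh

print(hourly_cost(6.0, 0.15))  # at $0.15/kWh: about $0.90/hour
print(hourly_cost(6.0, 0.30))  # at an Australia-like $0.30/kWh: $1.80/hour
```

So the order-of-magnitude estimate holds across a realistic range of residential rates.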

In that case I'm not sure I see the point in buying something like this just to run Linux. Linux looks pretty much the same no matter what it's running on.
To learn and keep the basement nice and warm (all 6kw of them) . :-)
To preserve history and collect it.
Sure I get that but if you can't run the software that was/is most frequently used on these machines then you're only preserving half the history.
From the hercules-390 FAQ (of which I was a developer long ago):

OS/390, z/OS, and other ESA or z/Architecture operating systems are definitely licensed to a particular machine. Therefore, in practice you cannot run any classic ESA or z/Architecture operating system on your PC unless you can obtain a license from IBM allowing you to do so. It is believed that there are, however, four ways you could run z/OS, z/VM, z/VSE, OS/390, VM/ESA, or VSE/ESA under Hercules using currently available licenses:

1. Running under Linux on the Pentium processor of a P/390 which is licensed to run the OS.

2. Running under Linux/390 on a mainframe which is licensed to run the OS.

3. Running under the terms of a disaster recovery provision of the OS license (but I really don't recommend depending on Hercules to be your disaster recovery solution!).

4. Using the IBM OS/390 or z/OS DemoPkg, which is available only to IBM employees and IBM Business Partners

I love that someone offered to buy time on his machine.

Pro tip: when someone offers money for using something about which you don't care, say "Sure, how much do you think you need to get started?"

I own a number of old(er) systems programming manuals, JCL books, and even a couple of COBOL handbooks from a few years ago when I was playing around with hercules [1]. I had a ton of fun (from a hacker perspective). I recommend playing around with it, if only to get a look at how different things can be.


I've never heard of a computer running on three phase power before. That was unique.

My aged professors in college would joke about their interactions with mainframes. Some stories included making them vibrate at the right frequency such that they could move around. Apparently as a grad student one got in trouble for making them bump into each other. Kinda like having two Nokia phones vibrate in a "battle" against each other.

I wouldn’t say that it was that unusual.
Fantastic talk. Great speaker. Thanks so much for posting this
Great speaker, but he needs to learn not to do the powerpoint karaoke. The audience sees the slides, we see the slides, no reason to read them verbatim.
But it makes it accessible to audio-only listeners (be it watching in the background, or blind).
I'm not saying he should leave those things on the slides and not say them - that would be silly, it's good content. I'm saying that the slides don't have to have the same text. Give either summaries or extra info on them. Pictures are cool too. Just don't stop what you're talking about to look at the screen and read what's written there.

The disability argument doesn't really apply here - what about deaf users, why say anything that's not on the slides? Because that's what the transcript is for. Usability is orthogonal to what I'm saying.

The kid is 18. He's doing pretty well in the presentation department all things considered.

I'm sure he'll learn about principles of effective presentation i.e. "don't read the slides" in due time.


Exactly. 18, already capable of producing a witty, intelligent presentation like that and he can assemble a mainframe? He's going to go far.
How much processing power does IBM z890 have? What's it comparable to today?
Less than you think. Recent x64 is a great deal faster than recent System Z if you compare just a single core. I think the total bandwidth is still better, but I don't know with how much if you compare high end systems.

I would guess you can give a z890 a good match with a raspberry pi 3... Unless it's IO bound...

A quick google says: 2451 mips on a 4-core raspberry pi 3 against 9000 mips on a 32-core z990... hmmm.
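Taking the MIPS figures in the comment above at face value (a toy comparison only; MIPS numbers across different ISAs and eras aren't really comparable), the per-core arithmetic works out like this:

```python
# Per-core throughput from the figures quoted above.
# Caveat: cross-architecture MIPS comparisons are close to meaningless.
rpi3 = {"mips": 2451, "cores": 4}    # Raspberry Pi 3 figure from the comment
z990 = {"mips": 9000, "cores": 32}   # z990 figure from the comment

per_core_pi = rpi3["mips"] / rpi3["cores"]   # 612.75 MIPS/core
per_core_z  = z990["mips"] / z990["cores"]   # 281.25 MIPS/core
print(per_core_pi, per_core_z)
```

By that (crude) measure a Pi 3 core outruns a z990 core, which says nothing about the mainframe's I/O and aggregate throughput.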

Is this part of a new movement where people move "the cloud" into their own homes?
I have a couple of IBM X366 servers. They are quad-Xeon with 8GB of RAM each. They require two power supplies to be plugged in or otherwise will keep fans turned on full power constantly.

They are inefficient for daily use, but great for lab work. I spin them up, run whatever distributed task I need on 8 cores with 15K SAS drives, usually distributed Sony Vegas renderer, and shut them down. They also make a great lab for virtual machine work, so that does mean bringing the cloud in-house. :)

The X366 is a serious piece of hardware that later got renamed to 3850. Their current incarnation is

Same here. I have couple of old Opteron based servers which I start only for massive parallel ImageMagick/netpbm/tesseract computations. I think this has the best price per CPU ratio.
I thought it was awesome that he stuck with it to get it booted and running. Would be really fun to run MVS/TSO on it or VM/370.
Owning my personal mainframe is a long-held dream of mine, but practically, it wouldn't make a lot of sense unless somebody would let me put it into their datacenter.

My apartment is on the second floor, so a) I would not be able to get it upstairs and b) the floor would probably collapse under the weight. Plus, it is pretty big and would make my living room kind of ... crowded. And then, there is the electricity... And even if I somehow could solve all these problems, I would still need to attach a storage server to it, which kind of has the same problems.

So I envy this person, a lot, but realistically, I would not want to buy one for myself.

(IIRC, during the S/390 days, IBM built a 4U mini-mainframe one could rack-mount. There also was the P/390, which was a single-board implementation that came with one CPU, a channel controller and 128 or 256 MB RAM, all on a PCI card - I think it was meant for developers. Sadly, IBM has given up on small-ish mainframes...)

The P/390 is very nifty (and there were Microchannel versions as well as PCI), as is the earlier XT/370 & AT/370 (ISA-bus implemented in re-microcoded mc68000s).


These days there's always the Hercules emulator to get your IBM fix.

Me? I'm looking for a Micro A, the desktop version of the Burroughs B5000 mainframe.

... and the power bill would bankrupt you :)
Anybody else try to ssh into it?

   $ ssh [email protected]
I thought maybe the port would be still open and using password authentication. :-)

Still up, and it does prompt for a password. But he said the Chinese had been trying to get in for 2 weeks and failed, so it's probably not something obvious.

Ahhh, thanks for the find/replace.

Why is he not using public key authentication? Or even better, limiting access with a firewall.

It's unlikely to be a problem.

"The Chinese" (botnets) probe my ssh daemons all the time, but only with common, poor passwords.

Wow, I kind of assumed I was one of the only people under 30 interested in collecting old computers like this. Nice to know I'm not alone :).
I'm interested in collecting old machines, mainly older British machines like the ZX Spectrum and BBC Micro. I have nowhere near the space required, though. Then there's also the question of what I would actually do with them all, especially if I got into collecting old mainframes.
There's even a whole subreddit about it /r/retrocomputing I think.
There are dozens of us!

You can get old SPARC servers super cheap on eBay too. I'm super tempted to get one. I was also trying to find an Itanium the other day but those are darn impossible to get. :(

there was a hardware surplus store in portland that carried off-brand sparcs at a very good price ($5-10) in the early 2000's. i still kind of regret not buying one.

though i got rid of my e4500 a while ago, and was glad to see that one go.

I've been dreaming about getting one of those old Sun workstations with monochrome bitmapped display that used to be the thing when I was an undergrad CS student in 1984.

I remember being so impressed at the size of the monitor and the crispness of the display...and plus the fact that the workstations only seemed to be in offices that had "clout."

There was an old Ars thread on this:
Laughed out loud when he introduced the power roadblock, and the pic of that connector.
How does this machine compare to a modern-day consumer laptop (such as a MacBook Pro) in terms of processing power?
AFAIK it has high I/O efficiency and very high memory bandwidth, so for some tasks it might perform very well. If you are comparing single-core performance, probably much worse.
Someone asks "how much RAM does it have?" The answer: 8GB. So that's equal to a modern-day laptop.

In terms of pure processing power though, still no idea

This is very cool.

However I am wondering, what could it be used for nowadays?

I would do some benchmarking, I don't know, e.g. LINPACK or maybe even projects, BOINC ones, etc.

Other than that? Maybe you could do some 3D renderings? could you put it head to head with a GPU doing something embarrassingly parallel? probably not with anything like a Titan X, but maybe it could give a run for its money to something less powerful?

I am very intrigued by what you could achieve.

You could run a number of real IBM OSes on it and learn mainframes. There's still a huge market for mainframes.

You could run a couple of hundred or thousand Linux VMs for fun.

It's an IBM: it's not about how fast the CPU is, it's about how fast the whole system is (CPU, memory, I/O).

Would it be feasible to say, install Kubernetes?

I've read that a lot of what made zSeries "special" was that they had special purpose chips for a lot of things, from I/O to encryption and other stuff (not really well versed in the area so please correct me if I'm wrong).

Not sure how many of those special features you can use if you run Linux instead of z/OS, but could it in principle be something tractable, or not even close?

I suppose you can load whatever Linux thingy you want that has a port (I've never tried Kubernetes). That said, the thing that makes machines like this "special" is the aggregate throughput and availability you can get out of them (the dedicated processors are part of that). To really unlock that, you do want to run z/OS and its tools, and then run things like Linux on top of that. It's more complicated than "this is just another kind of Linux box".
Electric heat.

These days it's a somewhat inefficient electric heater that can also do some computing tasks.

Its efficiency at heating compared to an actual heater is very, very close to identical.
Not sure why you say "very, very close". The efficiency for heating of all electrical devices is identical, unless they're somehow transmitting energy to a different location.
The highest-efficiency electric heaters use heat pumps for efficiencies way above unity.
Heat pumps aren't usually called electric heaters though. Electric heaters are usually considered to be devices that generate heat by consuming electricity, not devices that transfer heat. In any case a heat pump obviously falls into the category of a device that transmits energy to a different location, which my comment excluded.
> unless they're somehow transmitting energy to a different location.

Computers do this in the form of sound, fans, magnetic fields and network connections.

Of course, that's a very small amount of energy.

The sound, fans and fields all end up as heat pretty soon too.

A tiny bit of energy is stored in the bits of permanent storage. That can be kept for (presumably) indefinite time, and not released as heat until the bits are erased.

The network connection energy is mostly lost as heat in the wire, but to the extent that it transmits information elsewhere that energy is also retained.

Hence very, very close, but not identical, unless you have the pathological condition of never storing any data or causing any other process to store data. To do this you'd have to compute away and no-one could ever learn the results.

Edit: I may be wrong about the net heating due to storage, since setting storage bits implies erasing their old state which releases heat into the room. Anyone with better thermodynamics training than me want to give a better answer?

Or maybe I should get back to work...

Ah OK, I hadn't thought of potential energy in permanent storage.
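The "energy retained in storage" point in the thread above can be bounded: by Landauer's principle, irreversibly setting or erasing one bit costs at least kT·ln 2 joules, so the fraction of a computer's power draw that ends up as stored state rather than heat is vanishingly small. A rough sketch (the 1 TB disk is an illustrative assumption):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 293.15           # room temperature, K (20 degrees C)

# Landauer bound: minimum energy to irreversibly set/erase one bit.
per_bit = K_B * T * math.log(2)   # roughly 2.8e-21 J

# Even rewriting every bit of a hypothetical 1 TB disk (~8e12 bits)
# would account for only about 2e-8 J at this bound.
whole_disk = per_bit * 8e12
print(per_bit, whole_disk)
```

Compared with the kilowatt-hours (megajoules) a machine dissipates as heat, the stored component is some fifteen orders of magnitude smaller, so "very, very close to identical" to a resistive heater is accurate.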
But its volume and thermal mass are somewhat out of normal range for space heaters.

Also, after watching the very impressive video about many of the things he learned from the whole process (with hardware that's only 12 years old!) it turns out that since he's not putting any real load on it the power consumption isn't really that high.

This reminds me of the computer collection one of the lecturers in my department owns. They have quite a number of historical pieces, finishing with a number of Crays, an IBM Blue Gene and a few nuclear plant control units.
Love this kid. It's great to see people discover their passions so early on and be able to dive fully into them. His experiences remind me a lot of Linus Torvalds and his book: "Just for fun". That said, I'm also a tad jealous of him. I would have loved to grow up with a crazy lab to hack around with!
Putting the IP address of your mainframe online and allowing password logins (not to mention as root) is not the greatest idea; if you cleaned up the audio you could probably count the number of symbols he enters as well.

Awesome project though and he presents well.

He actually talks about that in the Q&A at the end of the presentation. Funniest line: "China has been trying to login".

Very impressive kid.

Ah missed that bit.
Loved that he offered to give out the root password to anyone that wanted to experiment on it. He said something like "burn it down, I have backups". This kid is going to go far with that kind of passion and attitude. He's a true honest to goodness hacker. I love it.
Big thanks to the poster. I looked on eBay and bought an early-model Curta calculator for 50 euros. I thought I would never own one; I hope it arrives in good condition.
Just an update: of course the seller backed out after realizing the mistake he had made, so I will never own one.
Kinda funny at the end... they zoom out and so much grey hair and bald heads :)
Man, I regret that I arrived one term too late to get a full System/390 from my university. That thing would have been sweet in my dorm room.
He should find an old Sun e10000 or e15000 now to fill up his basement.
I'd love to read more about this.
I submitted this exact same video 18 hours earlier... WTF?


By coincidence, I just replied to you there:

It's largely random which submission of a story gets traction on HN. We plan to make the dupe detector more sophisticated. One of our goals is to privilege the first submitter more often, because we hope that will incentivize people to submit more unique things, which should benefit the community. In the meantime, though, it's a lottery in any one instance—but it does even out in the long run, the more good stories you submit.

I am getting downvoted a lot and I really don't see why.

I didn't mean to be offensive towards the user that posted this entry, but just highlight this "issue".

My main "wtf" is due to the fact that in the past, when I submitted links that were already in the system, I got redirected to the comments page, as usual. Considering this is the exact same case and there are eighteen hours between submissions, I guessed this was a noticeable glitch.

So please, stop downvoting me.

You're probably getting downvoted since your comment isn't relevant to this post.

Yes, it's probably a glitch in the HN dupe detector, but people didn't click through to this article to read about that.

I believe that comment is being downvoted because of its feel and tone. It's a quick one-line complaint with a negative "low-class" abbreviation at the end. It's a little disrespectful to both Y Combinator (whose system is being criticized) and to the person who submitted the video with better luck than you had. People who read a comment like this are more likely to feel emotions of annoyance, mild anger, and contempt.

These things happen sometimes, you say something and it gets interpreted in a way you didn't intend. I wouldn't worry about it too much.

Probably a better version of this comment would be something like: "I actually submitted this 18 hours before this one went up. It's funny how sometimes they catch and sometimes they don't. Super interesting video though, glad it's being appreciated!"

what is this crap and why does it have so many votes?

If you've worked a ton with these machines, you'll know they are pieces of crap, and there's a reason why technology has moved on.

This is so cool. Especially given that this thing can run Linux.

I had a sorta-kinda similar experience in college years ago. My college (a small community college in the middle of nowhere) had received an IBM AS/400 machine as a donation. I signed up for some "Survey of Operating Systems" class, and when I showed up the first day, the instructor (who already knew me) goes "Phil, your job is to take the AS/400, install an OS and get it on the Internet". Needless to say, at that point in time, I'd never seen or touched an AS/400 before.

Anyway, the instructor pointed me to a room with an AS/400, a huge stack of tapes (for the OS), a huge stack of manuals, and basically said "just do it." It was an interesting experience, but in the end, I got the thing on the 'net so you could telnet to it and work on it. And that indirectly led to the beginnings of my career in IT, as my first real IT job was as an AS/400 operator / Netware admin, and it was a job this instructor connected me with after this class.

So here's to taking vintage IBM equipment and dorking around with it and putting it on the Internet!

> Especially given that this thing can run Linux.

Interview with Linus:

What's the most amused you've ever been by the collaborative development process (flame war, silly code submission, amazing accomplishment)?

I think my favorite part is when somebody does something utterly crazy using Linux. Things that just make no sense at all, but are impressive from a technical angle (and even more impressive from a "they spent many months doing that?" angle).

Like when Alan Cox was working on porting Linux to the 8086. Or the guy who built his own computer using an 8-bit microcontroller that he wired up to some RAM and an SD card, then wrote an ARM emulator for it, and booted Linux (really really slowly) on his board.

I used to start the backup jobs on an AS/400 during a summer job for my technical course.

The most interesting thing was while researching about them almost a decade later, I came to discover their programming model with a common runtime and a kernel level JIT for software.

.NET on Windows/WP, Android and iOS/bitcode might be the closest thing we have on current systems about that programming model.

The other interesting thing about them is single-level store. With a 64 (65-ish) bit address space, all your RAM and DASD gets mapped into it. So the objects in your program just have memory offsets, and the OS handles the underlying storage.

Their reputation for reliability is well deserved. The only time ours went down was for upgrades, and the few times we ran out of UPS power during storms.

> The only time ours went down was for upgrades, and the few times we ran out of UPS power during storms.

I'm surprised that you didn't have UPS for stopgap until the generators kick in. If you have a mainframe, it's probably for something mission critical to where a generator (or two or three for redundancy) is a good idea.

I'm curious what kind of business you're in where you can justify a mainframe and UPS but not generator, if you can talk about it.

This was many years ago - the firm wrote and sold accounting software for the AS/400, so while they were production machines, there wasn't really a 100% uptime requirement since there weren't end-users involved. The UPS was there to tide us over short power outages, and ensure a clean shutdown (IPLs after a hard shutdown took a very long time, as the OS would have to verify the whole filesystem).

Me and another guy were the only Windows programmers there. We put a friendly UI in front of the AS/400 via network calls (SNA APPC, later TCP/IP sockets) to an endpoint that the RPG guys wrote for us. Later, we wrote an Excel add-in to allow the users to access their financial data from the AS/400, and this became so successful that the entire company is now based around the same technology. Sadly, I don't get a percentage.

Thanks for sharing! That makes a lot more sense.
> Anyway, the instructor pointed me to a room with an AS/400, a huge stack of tapes (for the OS), a huge stack of manuals, and basically said "just do it."

"You're gonna learn to swim the Stennis way."

Me and a friend went to a programming class ages ago that was teaching AS/400 and programming in RPG/400 on them. That was a funny language. I was working at the state for a bit too, doing smaller programs for the database of inhabitants. Some of the bigger industries had a few AS/400 too but never got a job there programming.
Yeah, this same college started teaching RPG around the same time all this happened. I took two semesters of RPG programming, but I've never used it for anything in the real world.

RPG, at least back then, didn't suit me. It was very much column oriented, with this oddball tool (SEU - Source Entry Utility) that you had to program in. I understand the ILE version(s) of RPG have gone to more of a free-form format, but back then it was a royal pain.

No, wasn't my favourite programming language but at least it worked :-)
RPG is "Repulsive Programming Garbage". It is the worst production programming language ever conceived. It was designed in the '60s based on the IBM 402 tabulator paradigm, to bridge programmers from plugboard wiring to the 360 stored-program model. It has long outlived its usefulness. Yet it lives on, entrenched in legions of schlubs who, like Br'er Rabbit, have never known anything else.
When was it? Must be at least 20 years ago judging by the Netware reference.
1996/1997 or thereabouts.
Did it at least support Ethernet by this point or did you get to learn all about Token Ring as well?
Wow, I dislike Token Ring. You just gave me flashbacks to my first network admin position way back in 1993.

Thanks. ;-)

Did it at least support Ethernet by this point

Luckily, yes. It had an Ethernet port and TCP/IP, so once it was up and running, getting it on the 'net wasn't too bad. The biggest challenge was actually getting the IT dept. to assign us an IP address, etc. Luckily I knew the IT guy pretty well from doing work-study with him earlier, so it wasn't too bad.

On the other hand, while you could telnet to the thing, you couldn't really use plain old command line telnet, which usually assumes something like a VT100 or VT200 terminal. AS/400's usually want 5250 terminal emulation, not VT100/220. So I had to find a 5250 terminal emulator to use to connect to the system and actually work with it.

Was it just the escape codes that weren't compatible, or was the as/400 trying to talk in EBCDIC?
A normal telnet connection will be assumed to be something VT100-ish, so it won't try to use EBCDIC. If you send one of a few specific values for the terminal type, though, the AS/400 will use block-oriented EBCDIC comms. This is even documented in RFCs 1205 and 4777.

A bigger issue can be with trying to send each of the 24 function keys when a VT102 didn't have any.
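The character-set half of that mismatch is easy to demonstrate: Python ships an EBCDIC code page (cp037) alongside ASCII in its standard codecs. (cp037 is just one common EBCDIC variant; AS/400s supported several, so this is an illustration rather than the exact code page any given box used.)

```python
text = "LOGIN"

ascii_bytes = text.encode("ascii")    # b'LOGIN'
ebcdic_bytes = text.encode("cp037")   # EBCDIC code page 037

print(ascii_bytes.hex())    # 4c4f47494e
print(ebcdic_bytes.hex())   # d3d6c7c9d5
```

Not a single byte matches, which is why a dumb VT100-style telnet client pointed at a 5250 data stream shows garbage: the terminal negotiation in RFCs 1205/4777 decides whether the host talks ASCII line mode or block-oriented EBCDIC.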

I would love to read a blog about this.

Perhaps a different person writing about a different system each month.

I'd write something about my experiences, but it was all so long ago now that I doubt I remember enough details to make it interesting. Still, something like this would be a fun project to do again in some ways. I've even toyed with the idea of doing something like the guy in this talk, but with an AS/400. The biggest reason I never did, other than the size/power/cooling issues, was the software licensing. I imagine an actual OS/400 license is crazy expensive, and if I just wanted a Linux server on vintage hardware, I would just fire up one of my extensive collection of old Sun SPARC boxes. :-)
I've installed OS's on plenty of AS/400's for a prior job.

When everything goes perfect, they are still a complete nightmare. I can't imagine how you pulled it off, frankly.

Let's just say it took me a while, and a lot of trial and error.
As someone who collects vintage computers my suggestion is to document the process. Having to re-discover an obscure command is just painful.
Please feel free to share some of these nightmares. These systems are so surrounded by an air of mystique and vague hints of danger, and I've mainly tried to read up on the supposed benefits of using mainframes to get why anybody would use them.

I'm honestly interested in all kinds of IT horror stories; I need a reminder every now and then to keep the IT side of things free from what breaks and requires tinkering (starting with unnecessary on-premise servers in small-business settings). Mainframes are obviously nine orders of magnitude beyond anything I ever touch, but still: the lessons of "expensive and proprietary" can be somewhat universal.

Big Iron still has a place between commodity clusters and super computers. My dad used to work on them and he once referenced doing some operation on his local PC circa 2000 realized it would take several hours, so moved data to the TeraData machine where it took "Approximately how long it took my finger to lift off the enter key."

For some things having silly amounts of RAM and a few hundred disk drives and multi gigabit interconnects just works really well.

Not sure if Teradata will classify under big iron, though it does provide channel based connection to mainframe. IIRC Teradata nodes are UNIX based.
Not really my area of expertise.

IMO, it's less about the OS than about the specialized hardware; you don't see multiple terabytes of RAM on normal motherboards. Plenty of mainframes ran a flavor of UNIX, just as plenty of supercomputers did.

But, I can see people using other definitions.

Here's a classic one I should've put in my other comment:

I still poke at them over this. Another was them running Hotmail on FreeBSD instead of Windows: lots of problems when they switched. At least both experiences were good for debugging purposes. ;)

Note: The only thing better would be finding out Apple ran on a mainframe or AS/400 given Jobs' hatred of IBM. I doubt I'll get the satisfaction.

Once you get them running, they stay running. Like OpenVMS systems, you'll occasionally hear stories of people losing them: they'd run without maintenance for so long that people would forget where they were by the time maintenance was needed. My company has one that's been running all the critical stuff for years. It's the only thing that always works.

Btw, the original design was an awesome capability-secure system called System/38:

It was originally part of an even more awesome concept called Future Systems, which was supposed to replace the entire IBM universe with capability-based single-level storage systems.

It never got off the ground. System/38 and AS/400 are the only visible remnants of it, survivors from an alien world.
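The System/38's capability architecture was enforced in hardware and microcode; as a rough, purely hypothetical sketch of the underlying idea (not the System/38 mechanism), a capability bundles a reference to an object with the rights to use it, and every access is checked against the capability rather than an identity-based ACL:

```python
# Toy illustration of capability-based access control.
# All names here are invented for illustration.
import secrets


class Capability:
    """An unforgeable token pairing an object with a set of rights."""

    def __init__(self, obj, rights):
        self.obj = obj
        self.rights = frozenset(rights)
        self._token = secrets.token_bytes(16)  # unguessable handle

    def restrict(self, rights):
        """Derive a weaker capability; rights can only shrink, never grow."""
        return Capability(self.obj, self.rights & frozenset(rights))


def invoke(cap, right, action):
    """Every access goes through the capability check, not an ACL lookup."""
    if right not in cap.rights:
        raise PermissionError(f"capability lacks {right!r}")
    return action(cap.obj)


record = {"balance": 100}
full = Capability(record, {"read", "write"})
read_only = full.restrict({"read"})

invoke(read_only, "read", lambda r: r["balance"])   # allowed
# invoke(read_only, "write", ...) would raise PermissionError
```

The point of the design is that possession of a capability is the authorization; you can hand a caller a restricted capability and never worry about it escalating its own rights.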

I didn't realize it was that big. I found this link in there...

...that illustrates the difference between knowledge and wisdom very well. Sowa has a number of wise observations and recommendations that still apply to enterprise software projects today. I've seen the same arguments, just usually not that many at once, given the narrow scope of any given enterprise technology.

He's definitely made me consider the Single-Level Store a weakness until proven otherwise. His argument that the customers saw no benefit except the performance tradeoff is disturbing and probably still applies. I mean, the ability to avoid crashes, hacks, much technical debt, and many failed integrations with a high-level machine is a huge enabler. All people cared about was the price-to-raw-performance ratio, though.
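For readers unfamiliar with the term: single-level storage means persistent data is addressed like memory rather than through explicit file I/O. A faint, loose analogy (only an analogy, not how OS/400 works) is a memory-mapped file, where a plain in-memory write persists to disk:

```python
# Loose analogy for single-level storage: mutate "memory", and the
# change persists, with no explicit read()/write() calls in between.
import mmap
import os
import struct

PATH = "counter.dat"  # illustrative file name

# Create the backing store once: a single 64-bit counter.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(struct.pack("<Q", 0))

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 8)           # map the file into memory
    (count,) = struct.unpack_from("<Q", mem, 0)
    struct.pack_into("<Q", mem, 0, count + 1)  # a "memory" write persists
    mem.flush()
    mem.close()
```

Run the script repeatedly and the counter keeps climbing across process restarts; a true single-level store extends that illusion to every object in the system, which is exactly the integration benefit Sowa's critics said customers never paid for.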

Interestingly, I've been thinking along similar lines on a number of angles as I look at these new projects. It's why I pushed the CHERI processor over SAFE despite liking SAFE more: CHERI is compatible with FreeBSD and MIPS while probably having less overhead. It's also why I pushed Java processors like JOP or Vega3 over another Secure Ada Target (or even a Rust target) for embedded or enterprise servers, and a Python runtime or machine over LISP. And so on.

Time shows that the path of least resistance with the greatest performance and legacy integration must be taken. So anyone wanting high reliability or high security has to work in a box, one that's been rained on continuously for decades. ;)

Working on the iSeries (AS/400) currently, and to be honest, I don't know of an easier system to maintain. DB2 maintenance is nearly automatic, and the only time we suffered an outage was out of paranoia, when part of a multiply redundant system went out and they wanted to swap the backup units in for the failed ones. These machines range in size from barely bigger than a PC to 64-way with hundreds of terabytes. IBM was smart in having pSeries and iSeries share hardware, though running a Linux VM/partition/etc is more for sheer fun of it than work.
I used to joke that slackers should specialize in AS/400 "repair" with revenue coming from support and maintenance contracts. I predicted it would be a really easy job. Just regularly run reports or something to hand management to justify your salary. Take credit for the uptimes. And so on. :)

"though running a Linux VM/partition/etc is more for sheer fun of it than work."

I thought that was to get the benefits of modern stacks running side-by-side with legacy OS/400 apps. You don't use it that way? And while we're at it, did they ever provide an easy, machine-usable interface between the two, instead of a web app simulating a terminal that types in individual characters, like some mainframe shops do? That seemed kind of ridiculous.

Mar 28, 2016 · 8 points, 1 comment · submitted by znpy
Comments moved to

Good post; sorry you lost the submission lottery. It evens out in the long run if you keep submitting good stories. But we also plan to change HN's duplicate detection to privilege the first submitter more often.
