HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Stanford Seminar - The Soul of a New Machine: Rethinking the Computer

Stanford Online · Youtube · 155 HN points · 14 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Stanford Online's video "Stanford Seminar - The Soul of a New Machine: Rethinking the Computer".
Youtube Summary
Bryan Cantrill
Oxide Computer Company

February 26, 2020
While our software systems have become increasingly elastic, the physical substrate available to run that software (that is, the computer!) has remained stuck in a bygone era of PC architecture. Hyperscale infrastructure providers have long since figured this out, building machines that are fit to purpose -- but those advances have been denied to the mass market. In this talk, we will talk about our vision for a new, rack-scale, server-side machine -- and how we anticipate advances like open firmware, RISC-V, and Rust will play a central role in realizing that vision.

View the full playlist: https://www.youtube.com/playlist?list=PLoROMvodv4rMWw6rRoeSpkiseTHzWj6vu

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
This company was created because somebody who actually built a large cloud with Dell/Supermicro hardware suffered years and years of pain.

If you do as you suggest you will have to do a whole lot more; at a minimum you need to set up (and pay for) VMware.

Then you will need to figure out how you do automated firmware upgrades, low level monitoring and if security is a concern you likely want to configure secure booting with attestation and things like that.

Consider how much work it is to do what you suggest, vs buying this rack. And then consider how good (and repeatable) the end result is.

That is at least the theory, see:

https://www.youtube.com/watch?v=vvZA9n3e5pc

walrus01
If you think the only solution to large scale virtualization on top of x86-64 bare metal is to pay vmware... yikes.
panick21_
It's not the only solution, but it's commercially by far the most successful one. Even if you use something else it's still a whole lot of work.
Bryan Cantrill mentions in this talk that they saw no correctable errors, until they suddenly got uncorrectable errors. Turns out that the firmware was hiding the fact that it had been correcting ECC errors the whole time. https://youtu.be/vvZA9n3e5pc?t=1193
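Not a detail from the talk, but for context on how such corrections can be observed at the OS level: on Linux, the EDAC subsystem exposes per-memory-controller corrected-error counters in sysfs. A minimal sketch (the default `edac_root` is the real sysfs path; the demo below runs against a fake tree so it works anywhere):

```python
import tempfile
from pathlib import Path

def corrected_error_counts(edac_root="/sys/devices/system/edac/mc"):
    """Read per-memory-controller corrected (CE) error counts from EDAC sysfs."""
    counts = {}
    for mc in sorted(Path(edac_root).glob("mc*")):
        ce_file = mc / "ce_count"
        if ce_file.is_file():
            counts[mc.name] = int(ce_file.read_text())
    return counts

# Demo against a fake sysfs tree (on real hardware you'd use the default path).
fake = Path(tempfile.mkdtemp())
(fake / "mc0").mkdir()
(fake / "mc0" / "ce_count").write_text("7\n")
assert corrected_error_counts(fake) == {"mc0": 7}
```

Polling a counter like this over time is one way to notice correctable ECC activity that platform firmware might otherwise absorb silently.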
I think it's an FPGA with a custom open-source OpenTitan chip on it (RISC-V). It is not really a traditional BMC; it's more like a service processor that does secure boot and gets you into the OS. It does a few other things, but I think they really want it to have minimal functionality.

This is a great talk about what they do and why: https://www.youtube.com/watch?v=vvZA9n3e5pc

bcantrill
This is a very reasonable inference, as it absolutely was when I gave that talk. ;) Very shortly after that talk, however, we came to the realization that the OpenTitan was not going to be what we needed when we needed it, and moved to a Cortex M7-based microcontroller for our service processor (and a separate M33-based microcontroller for our root of trust); Hubris is the operating system that runs on those two MCUs.
panick21_
Not that it matters all that much, but why a Cortex when you can get RISC-V chips from SiFive (or whoever) that do about the same stuff?

Does that mean the product will not have an FPGA? I kind of liked that idea of updating the hardware.

P.S.: Twitter Spaces about the history of computing are really fun. I'd love to hear more about all the dead computer companies you researched. There is not enough content about computer history out there.

bcantrill
Yeah... a bunch of reasons. We definitely have a couple of FPGAs in the product (Bluespec FTW!), and we anticipate that we will have more over time -- but not for the RoT or SP, or at least not now. The reasons are different for each, but include:

1. ASICs are out of the question for us for a bunch of economic reasons

2. FPGAs by and large do not have a good security story

3. The ones that do (or rather, the one that does) has an entirely proprietary ecosystem and a terrible toolchain -- and seems to rely on security-through-obscurity

4. Once you are away from a softcore, the instruction set is frankly less material than the SoC -- and the RISC-V SoC space is still immature in lots of little ways relative to the Cortex space

5. Frankly, there's a lot to like about ST: good parts, good docs, good eval boards, pretty good availability (!), minimum of proprietary garbage.

We tried really hard (and indeed, I think my colleagues would say that I probably tried a little too hard) to get FPGAs to be viable, and then to get FPGAs + hardcores to be viable, and then multicore hardcores to be viable (one for the SP, one for the RoT). Ultimately, all of these paths proved to be not yet ready. And while we're big believers in FPGAs and RISC-V, we're even bigger believers in our need to ship a product! ;)

bcantrill
Also, where are my manners?! Really glad you're enjoying our Twitter Spaces[0] -- and thank you for the kind words!

[0] https://github.com/oxidecomputer/twitter-spaces

Coincidentally I was just watching this talk from Bryan Cantrill: https://youtu.be/vvZA9n3e5pc. Around the 58 minute mark he shares an anecdote that is relevant to this thread. It boils down to this: in a situation where he would have written a simple bespoke balanced binary tree in C, he instead used a "naive" library approach when writing a reimplementation in Rust, and was surprised to see a 35% speedup -- which turned out to come from the library using a B-tree instead of a balanced binary tree.

This doesn't invalidate your point about there being use cases for binary trees that may necessitate writing your own; I take you at your word that you've often had strong use cases for this. But I do think it illustrates something I believe to be true, which is that people reach for linked data structures too often, when contiguous ones are a better default.
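To make that point concrete, here is a minimal Python sketch (not from the comment itself) contrasting a hand-rolled linked binary search tree with a contiguous sorted list searched via the standard `bisect` module. Both answer the same membership queries; the contiguous version keeps keys adjacent in memory, which is the cache-friendliness the comment alludes to:

```python
import bisect

class Node:
    """A node in a naive (unbalanced) linked binary search tree."""
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def bst_insert(root, key):
    """Insert key into the linked BST, returning the (possibly new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root

def bst_contains(root, key):
    """Membership test: follows pointers, so each step may miss cache."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

def sorted_contains(arr, key):
    """Membership test on a contiguous sorted list via binary search."""
    i = bisect.bisect_left(arr, key)
    return i < len(arr) and arr[i] == key

# Both structures answer the same queries.
keys = [5, 3, 8, 1, 4, 7, 9]
root = None
for k in keys:
    root = bst_insert(root, k)
arr = sorted(keys)

assert all(bst_contains(root, k) == sorted_contains(arr, k) for k in range(11))
```

Real-world libraries (like Rust's `BTreeMap` in the anecdote) go further by packing several keys per node, but the underlying trade-off -- pointer chasing versus contiguous memory -- is the same.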

Maybe instead of asking about their target market or audience: who are their competitors?

(Edit: Previous Discussions https://news.ycombinator.com/item?id=21682360 )

Also wondering if the website is unfinished? All the "Read More" links actually hide very little information; if so, why hide it? And it doesn't seem to explain the company very well. Seems like we need to listen to their podcast to find out what is going on. ( Edit: Found a Youtube Video about it https://www.youtube.com/watch?v=vvZA9n3e5pc )

>Get the most efficient power rating and network speeds of 100GBps without the pain of cable

100GBps would be impressive, 100Gbps would be ... not much?

An interesting thing is that all the terminal-like graphics are actually HTML/CSS and not images.

hujun
>Get the most efficient power rating and network speeds of 100GBps without the pain of cable

100GBps would be impressive, 100Gbps would be ... not much?

Even 100GBps (800Gbps) is not much for 2048 cores; it depends on the application. In certain applications, 32 cores could drive 100Gbps...

daddylongstroke
> ...who are their competitors?

Literally every single server vendor, and almost all (if not all?) storage vendors on the planet, are pushing HCI because that is what mid- and large-size companies want. This is the fastest-growing market segment in hardware (because they now realize that hybrid cloud is the preferred customer model, and most of their customers are now deploying or have already deployed their own internal cloud). Oxide appears to me to be HCI done correctly. I currently work for one of their competitors and, for one, am keeping an eye on their careers page!

Also, 100Gb meets requirements for 99.999% of the customers out there.

To take these in order: for what we're doing, I would recommend the talk I gave at Stanford about a year ago.[0] It's a bit dated (and of course we have much much more detail now), but it remains broadly on point.

For the parts of Sun that we want to take after: the good bits, of course! ;) There were things that have influenced us with respect to mission, and then also with respect to hardware/software co-design (we are, contrary to the impression you might have, doing a lot of hardware[1]). Lest I sound too rosy about Sun, it had a lot of problems too[2] -- so we're trying not to recreate those! ;)

Thank you for the kind words on the podcast -- very much looking forward to getting back in the garage and recording some new episodes!

Finally, on Spring: yes, I need to find it (we just moved) and then find a CD-ROM drive (feels easier) and then I'll get it uploaded!

[0] https://www.youtube.com/watch?v=vvZA9n3e5pc

[1] https://twitter.com/bcantrill/status/1342294298692206593

[2] https://news.ycombinator.com/item?id=2287033

caslon
Thanks for the link to the talk! Putting it on right now. The summary for it mentions open firmware in the generic sense; on the Sun note, will Oxide be reusing OpenFirmware/OpenBoot? It's one of Sun's most fantastic accomplishments, I think, and it would be a shame for it to die off.

I love Sun! You can never get rosy enough, unless you're talking about many of its executives in the last twenty years of its life...

I'm glad you guys are taking after Sun on the hardware part (and hopefully avoiding replicating its actions in driving SPARC to the ground with RISC-V)! It will be extremely welcome to have another contender in the hardware space!

Can't wait to listen to new podcast episodes!

And thank you so much on Spring! Any bit of computing history that can be preserved is a win for our field!

I wonder how what they are working on relates to what's discussed here https://www.youtube.com/watch?v=vvZA9n3e5pc
Oct 31, 2020 · bcantrill on Requests for Discussion
So, believe it or not, we have spent quite a bit of effort trying to make this process lightweight -- and having gone through it a bunch myself (and having a strong aversion to unnecessary process!), it doesn't actually feel unduly arduous. The reason a wiki won't work for us: it's far, far too loose for describing the things we're building and discussing the trade-offs. We're a computer company[1]: we are building a ton of hardware and low-level systems software that needs to be pretty well described and carefully considered (with revisions formally tracked). It's not aerospace or biomedical devices, but... it's also not a SaaS that is amenable to an MVP. So we do need more formalism than might be suitable elsewhere.

In terms of newcomers: I actually think that's a tremendous strength of the RFDs: unlike many startups, we have a lot written down -- and newcomers (and indeed, even prospective Oxide employees) are able to read the stuff that often exists in people's heads.

All of that said, it's definitely not perfect! If we could get a Google docs (or equivalent) in which the underlying artifact were stored as AsciiDoc and every modification appeared as a git commit (and the comments themselves were also tracked via a git commit), that would almost assuredly be better than the GitHub + custom infrastructure we've built. But already having outrageously broad ambition, we drew the line at not reinventing design document management... ;)

[1] https://www.youtube.com/watch?v=vvZA9n3e5pc

Oct 11, 2020 · bcantrill on Rust After the Honeymoon
Sorry if that's a bit thin; more detail on what we're doing and why can be found in a (pre-COVID) talk I gave at Stanford[1]. We're definitely not a Deep Stealth company, but we've also been very busy building; expect many more details over the coming year!

[1] https://www.youtube.com/watch?v=vvZA9n3e5pc

galonk
That video is great, it was linked on HN a while ago and I watched the whole thing even though it's not my area at all. Really interesting :)
For whatever it's worth, the talk I gave at Stanford in February[1] goes into (much) more detail -- and given some of your other comments here, you will find many aspects of the talk educational.

[1] https://www.youtube.com/watch?v=vvZA9n3e5pc

Jul 28, 2020 · 125 points, 36 comments · submitted by tosh
Animats
And we'll deal with all those old standards by having our own new standard!

He discusses "open firmware", which is not Open Firmware.[1] That was a boot ROM system from the 1990s. It was written in Forth, and was intended for use with a console interface. That's not what you want today. A good question to ask is, what do you want today at that level, and what do you not want. For example, a security oriented "cloud" company might want to load the machine, restart the machine, and freeze and dump the machine in an emergency, but not have the ability to examine or alter memory while running. Who patches a running production machine any more? Today's server firmware, with an administrative CPU that phones home and listens for commands to do who knows what, tends to have way too much capability for making small changes quietly and listening to what's going on.

[1] https://en.wikipedia.org/wiki/Open_Firmware

spikepuppet
I'm a big fan of Bryan's talks and this was another great one. It's easy to forget that once you move outside of the hyperscalers, what's available to you is really showing its age, or is filled with a whole bunch of honestly useless features. As such, I'm very keen to see what Oxide cooks up!
guerrilla
> While our software systems have become increasingly elastic, the physical substrate available to run that software (that is, the computer!) has remained stuck in a bygone era of PC architecture. Hyperscale infrastructure providers have long since figured this out, building machines that are fit to purpose -- but those advances have been denied to the mass market. In this talk, we will talk about our vision for a new, rack-scale, server-side machine -- and how we anticipate advances like open firmware, RISC-V, and Rust will play a central role in realizing that vision.
justicezyx
This is a bland, PR-oriented statement. I was roughly expecting this level of detail from the speaker.

The one statement feels rather bland: "those advances have been denied to the mass market"

What does this mean?

It was not denied; those advances were just too complex for the mass market. People are happy to pay AWS so that they don't have to worry about the machines, and can write JS code from day one.

jamwt
> People are happy to pay AWS

Many of them are not.

We have serious vendor lock-in now, where a very few companies are gatekeepers to almost any business that runs on the internet.

And their margins are _enormous_ on this business. It ends up costing much, much more to pay them to run our machines for us.

And increasingly, the expertise to do this is being consolidated in these companies, so the talent available to pursue any other way is diminishing as new grads never learn about the magic places their code runs.

The reliability outcomes are nearly the same, despite the deferral to their expertise.

Labor savings b/c you don't have to learn about provisioning your own machines? Not much. AWS is so sophisticated you need to develop a nearly equivalent amount of (non-portable) expertise to actually operate it well. Remember, the alternative isn't just rack your own, it's... dedicated hosting! And lots of other options with less lock-in and more standards.

It's sort of frightening how complicit the broader technology industry is in this power consolidation.

justicezyx
You beat vendor lock in by standardization.

Vendors refuse to take part in standardization if they have the leverage.

Remember Amazon's reluctance in joining the CNCF and container groups?

If you state you are not happy, Amazon is perfectly ready to do whatever they can to please you, as stated in their "customer obsession" (and I assure you that statement is as sincere as any human stating any commitment).

But back to the point: people in the mass market are primarily no longer interested in managing machines, let alone building them themselves.

bcantrill
No, they've been denied: I elaborate on this in the talk, but if you look at (say) an OCP-based system (e.g., Facebook's Tioga Pass[1]), the innovations in that system are simply not available for any price to the enterprise buyer. And yes, those buyers emphatically do exist -- and no, they are certainly not everyone deploying on elastic infrastructure.

[1] https://www.opencompute.org/documents/facebook-2s-server-tio...

sbierwagen
That's a 109 page PDF. If the innovations are listed in that PDF, they are not leaping out at me while skimming it.

Googling "tioga pass server" brings up https://engineering.fb.com/data-center-engineering/the-end-t... which says nothing and https://www.mitacmct.com/OCPserver_E7278_E7278-S who seem to be selling them.

Tioga pass appears to be a small dual-socket server. How is it different from a typical dual-socket blade server?

jhallenworld
The only advantage these hyperscale purpose built computing platforms provide is elimination of the profit taken by HP, IBM, Dell, etc.

They are complaining about the PC heritage in the server world- this is actually a huge convenience in that it is standardized hardware (so for example, it's easy to install any software made for PCs on them, including Linux). The cost of this compatibility is not very much these days (in terms of silicon area).

Blade servers had centralized power supplies since forever ago..

Also you can certainly get servers without CD drives :-) Rack front and back panel area is actually a limited commodity, so for example many modern servers are just packed with 2.5 inch drives..

rrss
I thought a number of vendors sell "OCP Accepted" products that use the OCP designs?

I've not yet watched the presentation, and I'm not familiar with this stuff, so apologies if I'm missing something, but what is the difference between buying a server from Oxide and buying e.g. https://www.opencompute.org/products/109/wiwynn-tioga-pass-a... (from one of the vendors on the right)?

justicezyx
There is always a need to make something commercially successful. But there needs to be proportional demand to justify it.

OCP vendors cannot bring their products to the mass market unless there is strong demand. Certainly it looks like the market mainstream is not too passionate about building or managing their own machines.

I don't deny that some people will always demand offerings different from the market mainstream.

And I totally understand why a statement like "a was denied to b" was used here.

I was merely stating that, for the mass market, there is no serious demand for what's claimed to be denied to them. And I am stating that from a technical perspective, not a marketing or PR one. (And I am very positive about the necessity of marketing and PR.)

wmf
In the current modular[1] structure of the industry where the server is a product and the network is a separate product and the hypervisor is yet another product etc, there's no demand for components that aren't compatible with the morass of existing standards. So yeah, there isn't enough demand for OCP servers and such.

It sounds like Oxide is trying to break out of that by providing the whole stack.

[1] https://stratechery.com/2013/clayton-christensen-got-wrong/

kaliszad
There are so many old (and frankly even new) line-of-business applications where the developers haven't much considered, among other things, laws of physics like the speed of light in optical fiber. These systems (client+server applications) tend to run much better on premise. The applications are often not automated much, aren't really secured that well (so you would probably need a VPN to the cloud to run them safely), and the bandwidth of internet connections at some of these companies is not really suitable for clients on premise and servers in the cloud anyway. You are lucky if the synchronization to a different location works well enough.

Also, cloud is very costly if you don't use the up and especially down scaling because your application/ infrastructure wasn't really designed for that. Also if you buy some new machine for the factory it usually comes with software (usually MS Windows Server + MS SQL Server + some machine control software) that has hardware requirements that don't really fit well with cloud pricing. Such machines tend to run for decades and the company certainly hasn't thought about being efficient with computing resources on the server. On premise hardware isn't that costly if you consider these factors, if the supplier cannot secure the machine properly, you slap it into its own VLAN and write an ACL for the RDP access (because that is how it is) and are done with it. Basically dedicated Gigabit speed with very little latency for any communication between the clients and the server. Remember, you are almost lucky if a Windows Update doesn't break the software/ software license on the server or the client...

kaliszad
For me as a systems engineer and systems administrator, appliances like VMware VxRail are totally infuriating at times. Especially the deeply object-oriented design of their APIs, which really hinders you from implementing, in a reasonable amount of time, anything not already present in Ansible or Terraform. I could fill a talk ranting. They really should take a hint from Rich Hickey and stuff like "Simple Made Easy" even if they don't write any Clojure at all.

In the end, the less sophisticated Citrix XenServer we use now for about 10 years seems to be more hackable in some ways.

chillfox
Don't forget their convoluted documentation for those APIs, or how their libraries are poorly maintained and somehow have even worse documentation.
guerrilla
Would someone like to tl;dr? I can't tell whether I want to watch this 1h26m video based on its vague title.
steveklabnik
This is probably the most thorough public explanation of what we're doing over at Oxide.
kaliszad
Good luck/ "kick ass and have fun" while pushing computing forward. I applaud the effort to make the very foundations of computing more robust and introspectable. The most laudable goal seems to me to be especially the general accessibility of some of these achievements in the long run, even to non-customers. Maybe open and robust firmware will become the standard. Please also embrace IPv6, to avoid taking all the brokenness in that area (e.g. network boot, remote management) along for the ride into the 21st century.
steveklabnik
Thank you! It has been a lot of fun so far. My colleagues are some of the smartest, most helpful people I've ever worked with. I look forward to that future too :)
kaliszad
We will evaluate the next hardware generation at our company probably sometime in 2022-2023. I sure would love to get my hands on an Oxide computer :-) though I fear we are more in the 3-10x 2U rack computer area for most locations. This is probably the sizing for most businesses in middle Europe, e.g. Germany.
ArtWomb
This is a great talk! I have to admit my first thought wasn't of the data center, which is obviously the predominant global energy waste, but of the "local" problem of green compute for IoT / drones / autonomous systems. What's the state of the art in OS development for high-efficiency embedded hardware such as Contiki, TinyOS, RIOT, Zephyr, Mbed and Brillo? And what are the major insights that are missing?
steveklabnik
I don't know! Since we're focused on the data center, that's where I have been too. I joined partially for personal growth; this is an area that I don't know as much about as I'd like to. It's only been a few weeks, but I've learned a ton. And there's enough of it in that space that I haven't had as much time or energy to look into other spaces. I do agree that there's a ton of IoTish things, and that it matters.
rudedogg
When you posted about joining Oxide I couldn't quite figure out what they (now you) do by looking at the homepage. I stumbled across this other lecture (https://youtu.be/3LVeEjsn8Ts?t=2189) that is along the same line of thinking, and it started to make sense.
steveklabnik
Thanks, I'll have to check this out!
iamjk
Their podcast, "On the Metal", provides more context on the work they're aiming to accomplish.
jpm_sd
Got a TL;DW for us? Video is 86 minutes long.
steveklabnik
The sibling comment is good.

The problem that we're trying to solve is basically laid out on this slide: https://youtu.be/vvZA9n3e5pc?list=PLoROMvodv4rMWw6rRoeSpkise...

The business is "we will be selling servers." You can't buy any yet, but in the future, you'll be able to.

The talk lays out a history of servers, describes the problems with the servers that you can buy from vendors today, and lays out why we think we can build better ones.

armitron
Good luck, you will need lots of it.
synack
A few years ago I had the opportunity to speak with the product manager for iDRAC at Dell. When I mentioned that I was unhappy with the opaque and proprietary firmware with no standard API beyond IPMI, he said that he had never heard any complaints about it before. I was baffled. Can't wait to see what Oxide does.
PhantomGremlin
To add to sibling comments, I think this endeavor involves RISC-V based hardware supported by Rust software. That info is buried deep within the talk, I scrubbed thru so YMMV.

The specific complaint I (and the GP) have is: synopsis, synopsis, synopsis. Give me a few paragraph summary before asking me to invest an hour and a half of my time.

kaliszad
Basically they want to build rack-scale computers with open/ auditable firmware all the way down, and really design the hardware for "hyper-scale"-like computing. That means no VGA/ USB/ DVD on the server; power and networking will probably be handled for many servers at once; and there will be APIs for all of the low-level stuff, which is probably inconsistent with your typical Dells, HPEs, Lenovos, SuperMicros.

I find, Bryan Cantrill talks are generally worth it to watch even just for entertainment if for nothing else.

agumonkey
I used to love listening to him, really.. (still remember his dtrace talk fondly) but this one was hard to focus on. Lots of uh ah hum. Surprising.
stevebmark
A 1.5 hour video with no context and no TL;DR top comment did NOT make it organically to the top of HN, it was upvoted strategically by members of Oxide. Could you at least do the rest of us a solid and post the TL;DR?
enneff
A new talk from Bryan Cantrill, a notable person in tech, seems worthy of upvotes on this tech focused website. Not everything is a conspiracy.
zeckalpha
The title is a reference to Tracy Kidder’s book: https://en.m.wikipedia.org/wiki/The_Soul_of_a_New_Machine
vaxman
yep, and was hoping the youtube would be a rock video...Question Stanford's position on this --DEC had a big office in Palo Alto, but it is where we kept the feral Unix (er Ultrix) monsters...and DEC won the 32-bit war baby (at least until the micro prism chip architecture was misappropriated into the 8080, er. Pentium "Pro").

TL;DR: Data General (a spin-off of DEC from before my time) with the Eclipse team was battling DEC (a 1950s tech giant that made its name subverting IBM) with the VAX team for "first 32-bit" bragging rights. (To make things interesting, for me at least, my Dad was an expert at Data General technology and I was a teenage mutant DEC nerd.) When I learned VAX MACRO32 they were drawing comparisons between it and the (even then, older than dirt) IBM 360 assembler and it was totally mind blowing. Getting rid of the memory limit on the (preceding) PDP11's separate code and data segments and introducing a 4.3GB virtual address space (the "Virtual Address eXtension") changed everything. Prior to that, computer scientists had to rely on complex "overlay" techniques to swap program and data segments in and out of memory at the application level (under RT11 and RSX11, the preceding operating systems to DEC's 32-bit VAX/VMS, and under RDOS, the preceding operating system to DG's 32-bit A/OS). Too hard for non-specialists, so someone wrote an entire operating system (RSTS/E) in BASIC, which was quickly starting to dominate in the run-up to VMS (and which also probably inspired Bill Gates and his BASIC interpreter ROMs for competing microprocessors).

iPhones/Macs and even Raspberry Pi are all dumping 32-bit now, but cost-effective 32-bit lives on in nRF- and ESP-class microcontrollers that will surely "eat the planet" (who needs to carry a phone, don AR glasses/earplugs or sit at a desk/tablet when all planar surfaces for as far as you can see are I/O devices run by $0.10 microcontrollers that network to edge nodes for any memory/compute heavy lifting).

teh_klev
It's a bit of a shame Cantrill jumps from the PDP11/70 to the Sun blob without even a brief mention of DG's finest (and the namesake company of the title of the presentation) such as the Nova and Eclipse ranges of their day. We should all feel cheated by this :) But anyway...

I was once-upon-a-time a Data General field engineer back in the 80's and bumped into The Soul of a New Machine around '87. It's a great read and a nice insight into DG as a company who were still considered a wee bit of a rogue outlier compared to DEC and IBM.

I'm a member of a closed group of ex DG employees on Facebook (was still permitted membership due to being in the broker game back then despite not being an employee), they're a really nice bunch of folks, though growing older by the day.

I had the pleasure and luck to work on older Novas (like the 800, 1200 and 3's) and the Nova 4 and the Eclipse 16-bit range such as the S/130's and all their associated peripherals (Phoenix and Gemini hard disks, Zebra disk drives the size of two washing machines etc). Fun fact - you could upgrade the Nova 4 to an S/140 by "obtaining" the correct microcode PROMs and performing some other minor patching; we did this. It wasn't always considered legal, but every other broker out there was also up to this game. DG didn't seem to mind because by then their mainline products were the MVs. DG was a very leaky company with regards to getting hold of "stuff"; I don't recall anyone being sued for unlicensed and pirated copies of RDOS or AOS etc. A thing that was an expensive item from DG but almost like a consumable were these things called paddle boards. They're basically passive PCBs that allow interfacing between the inside of the machine and the outside world. We never bought these from DG; they always came from "some guy", and a box of 20 cost less than a bona fide DG part. DG knew this but never complained.

The diagnostic tools were tremendous (DTOS and ADES) which coupled with a portable fiche reader allowed you to diagnose and fix most problems on site. These were good times and I learned a huge amount about problem solving as a young broth-of-a-boy engineer. I still have a copy of "How to Microprogram Your Eclipse Computer" where I learned about microcode and that assembler wasn't really the true bare metal of a CPU :) I have other war stories I should write down some time.

bcantrill
Sorry to sell DG short! (I did give it a brief mention at the top, it was just very very brief.) For whatever it's worth, I did go into DG in more depth in my blog entry on re-reading of Soul last year[1] -- one that attracted some comments from some very closely associated with the company and book!

[1] http://dtrace.org/blogs/bmc/2019/02/10/reflecting-on-the-sou...

teh_klev
Hey no worries Bryan. Reading your article there just triggered a re-read of Soul for myself :)
So, I was trying to keep this post short and to the point, but if you want a longer description, straight from the mouth of one of the founders, you may like https://www.youtube.com/watch?v=vvZA9n3e5pc

> Are they trying to offer folks AWS or Azure quality servers for purchase?

I think "quality" doesn't really capture 100% of it, but in a sense, yes. The goal is extremely high-quality, performant, robust servers.

I will have to defer the details to the above talk; I have a bunch of work to get started on, and would just end up re-typing what it says above, ha!

mwcampbell
Do I understand correctly that the software layer of Oxide's product will be a hypervisor running VMs with hardware virtualization? If so, that's super surprising to hear from @bcantrill, considering how many talks he gave about running containers without VMs.
steveklabnik
It is a bit too early to get into details like this, to be honest. It'll all become more clear publicly as more work gets done :)
masonic
Just FYI, in trying to sign up for the email list on your website, I got this error in one attempt:

There are errors below (email address) Too many subscribe attempts for this email address. Please try again in about 5 minutes.

jessfraz
Hey! Thanks for the heads up, can you email Jess [at] oxide [dot] computer and I will add you and try to figure it out! Thanks!
twic
It seems that video has been posted to HN twice, but didn't attract any comments either time, sadly.

I don't have the energy to watch a 90 minute talk, but some skipping around finds a bit more detail about what is in the plan:

https://youtu.be/vvZA9n3e5pc?t=1299

The theme seems to be giving the operator of the computer a lot more control - eliminating opaque binary blobs, exposing the hardware more directly, etc. I can't tell if this is a particular bee in the founders' bonnets, or a real market demand, or if they know about some scary threat they can't talk about.

Whilst this is definitely cool, and something that will get a lot of people on HN super pumped, it doesn't sound like something any company I have worked at would really care about. Unless these machines also deliver significantly better performance. But then, those companies were not hyperscalers.

steveklabnik
My personal take on this bit is that it's not something that customers care about directly, but impedes overall quality, which they do care about. We'll just see how it all goes, of course!
Their co founder Bryan Cantrill gave a talk at Stanford on what they are trying to do, essentially offer on prem servers comparable to what “hyperscalers” like Google and Facebook put in their data centers — highly efficient and customizable (in low level software) iirc.

https://youtu.be/vvZA9n3e5pc

Mar 07, 2020 · 4 points, 0 comments · submitted by tosh
Mar 06, 2020 · 23 points, 2 comments · submitted by wmf
sgt
Okay - I have to say this is really cool.
hinkley
I'm a little disappointed this didn't get more play. Only found it because I was curious what details Oxide had leaked out over the last couple months.
Mar 04, 2020 · 3 points, 0 comments · submitted by lrsjng
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.