HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Doom on an IKEA TRÅDFRI lamp!

next-hack · Youtube · 1009 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention next-hack's video "Doom on an IKEA TRÅDFRI lamp!".
Youtube Summary
We ported Doom to the Silicon Labs MGM210L RF module found in the IKEA TRÅDFRI RGB GU10 lamp (IKEA model: LED1923R5).

The module has only 108 kB of RAM, so we had to heavily optimize RAM usage.

The module has only 1 MB of internal flash, therefore we added an external SPI flash to store the WAD file, which can be uploaded using YMODEM.

The display is a cheap and widespread 160x128 16bpp, 1.8" TFT.

UPDATE: In the GitHub repository we have removed the mip-mapping on composite textures, with no performance penalty, so graphics will be more detailed than what is shown in this video.
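
A quick back-of-the-envelope check on those numbers (illustrative arithmetic only; the port's actual buffering strategy isn't described here and may well avoid holding a full framebuffer in RAM):

    # What a naive full framebuffer for the 160x128 16bpp TFT would cost on this module.
    # Illustrative arithmetic only; the actual port may render in smaller chunks.
    width, height = 160, 128          # 1.8" TFT resolution
    bytes_per_pixel = 2               # 16 bpp (e.g. RGB565)
    ram_total = 108 * 1024            # MGM210L RAM in bytes

    framebuffer = width * height * bytes_per_pixel
    print(f"Full framebuffer: {framebuffer} bytes ({framebuffer / 1024:.0f} kB)")
    print(f"Share of the 108 kB RAM: {framebuffer / ram_total:.0%}")
    # Full framebuffer: 40960 bytes (40 kB)
    # Share of the 108 kB RAM: 37%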


Links:

Article on next-hack website:
https://next-hack.com/index.php/2021/06/12/lets-port-doom-to-an-ikea-tradfri-lamp/

Article on Hackaday.io:
https://hackaday.io/project/180182-hacking-an-ikea-trdfri-lamp-to-run-doom

GitHub repo:
https://github.com/next-hack/MG21DOOM
HN Theater Rankings
  • Ranked #16 all time

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Jun 14, 2021 · 1009 points, 309 comments · submitted by kregasaurusrex
teekert
We should all take some time to consider that our light bulbs have more powerful computers than the first computer many of us once owned.

This perspective makes sci-fi stuff like "smart dust" seem a lot more feasible. Ubiquitous computing, what will it bring us?

npteljes
>Ubiquitous computing, what will it bring us?

Ads, propaganda and surveillance.

teekert
I'm sorry you had to go through your childhood without Star Trek ;)
npteljes
I really did go without Star Trek! I had some small exposure to Star Wars, but what really grabbed my attention was the Neuromancer novel, and later the Matrix film series. Of course I'm cherry-picking my experience, but it's a valid observation of yours that while I'm a technologist through and through, I often focus on its ugly side.
teekert
Yeah I really enjoy optimistic sci-fi :)

I do enjoy dark sci-fi every now and then too, but I generally like my heroes to be scientists and explorers, solving ethical questions.

npteljes
I like that too. I like it when the threat is external, and the people get together and collectively do something that makes the threat go away.
adrianN
Not too far off:

https://en.wikipedia.org/wiki/The_Game_(Star_Trek%3A_The_Nex...

Sohcahtoa82
I just lost the game.
numpad0
The concept of digital advertisement was unknown to Humans in ST universe until Ferengis brought an example to Federation Starbase Deep Space 9 in 2372, so that’s one divergence between our universe and Star Trek version of it.
SyzygistSix
Sao Paulo did away with public advertising for a couple decades.

I believe it is creeping back in now. But it can be done.

lanerobertlane
Off topic, but Deep Space 9 was not a Federation starbase. It was a Bajoran Republic station under Federation administration following the Cardassian withdrawal.
medstrom
S/he's very sorry for misusing "starbase", s/he means a station.
numpad0
Oh, I assumed it was under sole Federation control from the prefix “Deep Space”; I wasn’t aware it was under Bajoran ownership. I stand corrected.
selfhoster11
Star Trek was always a spherical cow as far as futures go. I'm not saying that it's not useful as an inspiration (socially, technologically and as emotional support), but realistically humans will pursue profit until we get a UBI system in place.
Andrex
Such a shame the current writers of the franchise apparently didn't, either. Which is depriving current and recent generations of kids of that optimistic ideal.

(Yes, Discovery season 3 is a thing I know about.)

sn41
There's an idea for a series: a space opera with a cutting-edge analytics-enhanced Ferengi ship where "its continuing mission is to explore strange new ad streams, to fleece out new life and new civilisations, to boldly trade where no one has gone before".

The main adversary will be the cookie monster.

SyzygistSix
Considering how much computing power is available and how much of it is used to deceive, misinform, or manipulate people, this sounds likely.

The only thing more disturbing and sad than this is how much consumer demand there is, and will be, for deception, misinformation, and manipulation.

nickpp
I think we already have those, even without ubiquitous computing. Hell, we had them even before we had any computing whatsoever. Sure, they are more efficient now (what isn't?), but they always existed...
selfhoster11
Older ads were far less powerful. They also couldn't embed themselves into your everyday life so much. "Hey Google, ..."/"Alexa, ..."/"Siri, ..." is an exercise in brand reinforcement every time you want to ask your smart assistant about something. At one point, I creeped myself out by realising that my first instinct is to call out "Hey Google" when I need something, even if I'm away from my Google Home.
TheOtherHobbes
Optimistic. It's a small step from those to compulsory "social credit" - like money, but worse - and other more or less overt forms of thought control and behaviour modification.
torginus
I remember reading about some circuit where they replaced a 555 timer, whose job was to generate a PWM signal, with a fully featured microcontroller, because it was cheaper that way.
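For a sense of how little firmware that swap takes, here is a minimal MicroPython sketch of a PWM output (the board and pin number are hypothetical, and the exact duty-cycle API varies between MicroPython ports):

    # Minimal PWM output in MicroPython (illustrative; the pin number is hypothetical
    # and the duty-cycle API differs between ports, e.g. ESP32 vs RP2040).
    from machine import Pin, PWM

    pwm = PWM(Pin(15))       # some GPIO pin driving the load
    pwm.freq(1000)           # 1 kHz carrier, roughly 555-astable territory
    pwm.duty_u16(32768)      # ~50% duty cycle on ports that expose duty_u16 (0..65535)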
canadianfella
How do you pronounce 555?
m_st
While I'm a great fan of ubiquitous tech/computing in general, I must also say that it feels weird applying firmware updates to light bulbs. However, you want to stay safe these days, so it's better to keep up to date, right?
flyinghamster
Unfortunately, my experience has been that Ikea will push out a firmware update, and then I don't discover it until the outside lights fail to turn on or off at their appointed time and have to be rebooted. Yes, we live in an age when you can reboot a light bulb.

Very much to their credit, though, the Trådfri hub doesn't depend on a cloud service just to operate. If that ever happens, thus endeth my foray into smart lighting. I've put my foot down: if it needs Somebody Else's Computer to function, I don't want it.

Nextgrid
> Ubiquitous computing, what will it bring us?

Ads, obviously.

SyzygistSix
When adblocking software becomes indistinguishable from stealth technology.
handrous
Ubiquitous spyware in everything we interact with, in order to make ads, on average, 5% more efficient—juuuuust enough more efficient that having a stream of massive-scale spyware data is necessary to compete in the ad sales market. Totally a good trade-off, making all the world's computing adversarial so ads work slightly better.
tomxor
> our light bubs have more powerful computers than the first computer many of us once owned

Mine don't, and I first owned an Atari ST with a 68000 CPU.

These are "smart bulbs". We are still in the .com bubble of IoT, so there are going to be a lot of silly things we can run Doom on for a while until it dies down. Lights don't need computers to operate, but that doesn't stop people trying to add "features" to lights with computers.

lupire
Computers have lights, why shouldn't lights have computers?
dTal
Nuclear power plants have phones, why shouldn't phones have nuclear power plants?
soheil
Given how computation is becoming more and more energy efficient and requires near-zero material to build, will there be a day when we consider computing cycles a priori a bad thing? Maybe there will be an argument about how terrible it is to have smart dust by those who consider it to be a new form of pollution and toxicity.
brainless
Thank you, I think I am just gonna quit my work now and spend the day thinking about it walking around and being anxious too. Sometimes it takes a while to understand how far we have come with miniaturization of tech.
nabla9
In 2018 IBM already demoed a 1 mm x 1 mm computer-on-a-chip concept for crypto anchors that can almost run Doom.
zelon88
I believe there is a point at which we will be emulating silicon so fast that we will be able to realize true Single Instruction Set computing.

Such a machine would be nowhere near as efficient as a CISC computer in terms of work per clock cycle, but what if our "silicon" in the future can run at an almost arbitrary frequency? We would be able to emulate any instruction set natively in real time. The perfect FPGA.

brokenmachine
>Ubiquitous computing, what will it bring us?

Ads.

Oh, and spying/tracking.

The future sucks.

clownpenis_fart
wow. I don't think anyone realized that before. really makes u think
Angostura
I always like the fact that the average musical birthday card, popular in the 90s, had more compute capacity than the computer in the Apollo command module.
schlupa
Mmmmh, the AGC was not that low-level. It was a 16-bit computer running at 1 MHz with 72 KB of ROM and 4 KB of RAM.
Eduard
I doubt it.
nl
https://hackaday.com/2011/11/22/musical-greeting-card-with-m... shows building a music card on an ATTiny 85.

These are around 20 MIPS. The Apollo guidance computer had around 1 MIPS.

kolinko
You can build a music card on ATTiny, but music cards didn’t use ATTiny.
nl
Sure, but I couldn't find a teardown.

It seems likely they are using something similar. It's difficult to find a cheaper, broadly available chip these days.

You can find Z80 clones, but even they are generally upgraded and therefore more powerful than the Apollo computer.

nl
Wow the downvotes on this are pretty harsh!

Do people really think the Apollo computer is more powerful? And have any evidence? I'd be surprised if you can get a microcontroller with as little processing power these days.

kolinko
I think the downvotes come because you keep saying that music cards use microcontrollers, when they do not.

Music cards use dedicated chips; they are not general purpose, and they don’t really do much computation aside from a few counters, AFAIK.

Now, in DIY tutorials they show microcontrollers, because they are easier to obtain, but it doesn’t mean that they are used in the commercial products.

nl
I realise they don't use microcontrollers, but the Apollo Guidance Computer only had ~12,000 transistors[1].

The cheapest, smallest (15s storage) "chip corder" (which is what these cards use) has GPIO, SPI, the ability to trigger different messages etc. There's no way this is under 12,000 transistors![2]

[1] https://en.wikipedia.org/wiki/Transistor_count#Transistor_co...

[2] https://www.datasheets.com/en/part-details/isd2115ayyi-nuvot...

rrrazdan
Citation please? That sounds unlikely!
notwedtm
This reminds me of the fact that 54% of all statistics are made up on the spot.
formerly_proven
Those ICs don't have any compute capacity at all. They're an analog oscillator driving an address counter connected to an OTP ROM whose data pins go into a DAC.
simias
A problem with this from my point of view is that while hardware engineers did an incredible job increasing the processing power of our thinking rocks, we software devs did a tremendous job of squandering 90% of it away. Of course there are also market incentives for doing so (time to market, dev costs etc...).

Empirically it seems that software simply doesn't scale as well as hardware does. I feel like this overhead would make "smart dust" impractical.

Or I guess I could put it this way: on the one hand you could be impressed that a modern light bulb can run Doom, on the other you could be alarmed that you need a Doom-capable computer to run a modern light bulb.

pwagland
Well, yes and no… as always!

The trick is that you probably don't need all, or even most, of that power to run the light. Sure, the Zigbee protocol is _probably_ being done via software, and not a dedicated chip, but even then. The big thing is that this chip is most likely so cheap, especially in bulk, that it doesn't make sense to get the "cheaper" variant, even if that was still available. This is kind of supported by the "new" Trådfri having an updated chip even though the Trådfri line never changed its capabilities: it was probably cheaper to get the new, more powerful chip, and/or they could no longer get the old one with a five-year supply guarantee.

a10c
In a similar vein, from memory the i3, i5 & i7 chips are absolutely identical in every physical way from a manufacturing point of view, except that the less powerful chips have cores/features disabled.
pc86
I have to wonder why this is done. I know it must make sense or it wouldn't be done, I just don't understand it.

If you're intentionally disabling functionality in order to sell at a lower cost, you're not actually saving any money because you still have to manufacture the thing. It also (I assume) opens up a risk to someone finding out how to "jailbreak" the extra cores and now you can buy an i7 for the price of an i3. Is the cost of having three different manufacturing processes so large that it's not worth switching? Is the extra revenue from having three different versions of the same physical chip enough to justify the jailbreak risk?

rickdeckard
This is done because your production yield is not 100%. So instead of throwing away every produced component which doesn't achieve the target of your 100%-product, you "soft-lock" the components with 80~99% performance into an 80%-product category, and the ones with 60~80% into a 60%-product. This way you increase the total yield rate and produce less waste. The counter-intuitive waste happens when demand for the 60%-product exceeds your "natural" supply of 60%-output, so you have to start "soft-locking" some of your 80%-product production to the 60% grade to fulfill demand...
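A toy model of that yield-plus-demand logic (purely illustrative numbers and tier names; real binning also considers clock speed, leakage, and so on):

    import random

    random.seed(42)

    def bin_chip(working_cores):
        """Map a die's test result to a SKU tier (illustrative thresholds only)."""
        if working_cores >= 8:
            return "top tier"
        if working_cores >= 6:
            return "mid tier"
        if working_cores >= 4:
            return "low tier"
        return "scrap"

    # Simulate a wafer run: most dice are fully functional, some lose cores to defects.
    dice = [8 if random.random() < 0.85 else random.choice([7, 6, 5, 4, 3])
            for _ in range(10_000)]
    tiers = {}
    for cores in dice:
        tier = bin_chip(cores)
        tiers[tier] = tiers.get(tier, 0) + 1
    print("natural yield:", tiers)

    # If demand for the low tier exceeds natural supply, fully working dice get
    # "soft-locked" down a tier -- the counter-intuitive waste described above.
    demand_low_tier = 2_500
    shortfall = max(0, demand_low_tier - tiers.get("low tier", 0))
    print(f"good dice down-binned to the low tier to meet demand: {shortfall}")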
zingar
But do these production defects really meet the demand of the lower tiers? Also how is it possible to predict the number of defects in advance so that they can make useful promises to distributors?
mekkkkkk
I'm curious about this as well. It seems inevitable that some batches will be "too good" to satisfy demand of low end chips.

Either they just accept the fluctuations in order to maximize output of high end chips, or they would have to cripple fully functional ones to maintain a predictable supply. Interesting business.

BeeOnRope
It's not primarily about using defective chips (but that's a nice side effect). As a process becomes mature, yield rates become very high and there wouldn't be enough defective chips to meet demand for a lower tier, so good chips are binned into those tiers anyway.

The primary purpose is market segmentation: extracting value from customers who would pay more while not giving up sales to more price sensitive clients who nevertheless pay more than the marginal cost of production.

mekkkkkk
That makes sense, thanks. I wonder if it would be possible to de-bin one of the lower end ones, assuming it is a binned version of a fully functional higher tier chip. Or perhaps they completely destroy the offlined cores/features.
Pet_Ant
Well, eventually, as yields improve, you start handicapping perfectly valid chips to maintain market segmentation.

I cannot say this for certain in CPUs but I know in other electronics with PCBs that this is how it is done. Sometimes lower-end SKUs are made by opening a higher-end one and cutting a wire or a trace.

HeavyStorm
Are you guys sure? I think manufacturing has nothing to do with it.

The real reason, IMHO, is to have a larger range of product prices so you can cater to specific audiences.

It seems people are confusing cost with price. Those two things are orthogonal.

Arrath
This tends to be the case later in a product's production run: as the manufacturer fine-tunes the process and works out most of the kinks, the pass rate of finished items increases.

At this point, yes they may lock down perfectly good high end CPUs to a midrange model spec to meet a production quota.

jffry
The term for this is "binning", and the explanation is wholly innocent. Manufacturing silicon chips is not an exact process, and there will be some rate of defects.

After manufacture, they test the individual components of their chips. These chips are designed in such a way that once they identify parts of a chip that are defective, they can disconnect that part of the chip and the others still work. (I believe they physically cut stuff with lasers, but my knowledge is out of date.) This process can also include "burning in" information on the chip itself, like setting bits in on-die ROMs, so that if your OS asks your CPU for its model number it can respond appropriately.

Interesting side note: The same thing happens when manufacturing even basic electronic components like resistors. All the resistors that are within 1% of the target resistance get sold as "±1%" resistors, which means it's pretty likely that if you buy the cheaper "±5%" resistors and test them, you'll find two clusters around -5% and +5% and very few at the target value.
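
A toy simulation of that resistor-binning effect (assuming the ±5% parts really are the leftovers of one Gaussian production run with the ±1% parts pulled out; as the reply below notes, real distributions don't always look like this):

    # "±5% resistors are what's left after the ±1% parts are binned out" -- toy model.
    import random

    random.seed(0)
    nominal = 10_000.0  # 10 kOhm target
    run = [random.gauss(nominal, nominal * 0.02) for _ in range(100_000)]

    one_pct  = [r for r in run if abs(r - nominal) <= nominal * 0.01]
    five_pct = [r for r in run if nominal * 0.01 < abs(r - nominal) <= nominal * 0.05]

    near_nominal = sum(1 for r in five_pct if abs(r - nominal) <= nominal * 0.01)
    print(f"±1% bin: {len(one_pct)}, ±5% bin: {len(five_pct)}")
    print(f"±5% parts actually within 1% of nominal: {near_nominal}")  # 0 by construction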

magicalhippo
> The same thing happens when manufacturing even basic electronic components like resistors.

EEVblog did some tests[1][2] some time ago on ±1% resistors, and found that while his samples were fairly Gaussian and within the spec, the ones from his second batch were consistently low. That is, none were above the rated value.

So yeah, don't assume a perfect Gaussian distribution when using resistors.

[1]: https://www.youtube.com/watch?v=1WAhTdWErrU

[2]: https://www.youtube.com/watch?v=kSmiDzbVt_U

Peaches4Rent
That's because there is huge variance in the quality of the chips produced, since the process isn't 100% precise.

So the best chips which have the least errors in the manufacturing process are sold as top tier. The ones which have more mistakes in them get their defective parts disabled and then get sold as lower tier ones

boygobbo
It's called 'market segmentation'. It's why there are different brands of soap powder from the same manufacturer even though they are all essentially the same.
HeavyStorm
Yep. I don't think it has anything to do with manufacture issues.
HumblyTossed
> I don't think...

Instead of assuming, it's easy enough to confirm that CPU binning is real.

lordnacho
I think if you Google price discrimination or similar economic terms you'll get some explanations for this.

If you just have one price, you cut out people who can't afford it and people who can afford to pay more get away with more of the surplus.

If you have several prices and create just enough difference in the product that it doesn't change the expense much, you can suck dry every class of user.

Bit of an MBA trick.

lupire
"suck dry" is excessive editorializing for a practice that make sit possible for a market to exist without, well, "sucking the manufacturer dry".
Rastonbury
Exactly, it is not an economics trick; if companies could only supply a certain level of the market, it might not even be profitable to produce at all. It is just economics...

We are talking about chips here so not the same bottle of soap with different labels, I'd call that a trick.

MontyCarloHall
This is done with cores/memory banks that didn’t pass QC. For example, a 6 core CPU and an 8 core CPU might have the same die as a 12 core CPU, but 6/4 cores, respectively, were defective, so they get disabled. I don’t think they’re crippling fully functional hardware.

See here: http://isca09.cs.columbia.edu/pres/09.pdf

Also here: https://www.anandtech.com/show/2721

“When AMD produces a Phenom II die if part of the L3 is bad, it gets disabled and is sold as an 800 series chip. If one of the cores is bad, it gets disabled and is sold as a 700 series chip. If everything is in working order, then we've got a 900.”

gurkendoktor
> ...but 6/4 cores, respectively, were defective, so they get disabled. I don’t think they’re crippling fully functional hardware.

Hmm, but what if 3 cores are defective? If that can happen(?), then it seems one extra functional core is disabled to get to an even core number.

Apple's M1 GPUs are the first where I've seen the choice between 7 and 8 cores (as opposed to 6/8 or 4/8).

geoduck14
> Hmm, but what if 3 cores are defective?

It gets sold as a coaster

islon
A coaster that allows you to play Doom.
Sohcahtoa82
More likely, a keychain:

https://www.amazon.com/Keychain-Ryzen-Threadripper-Computer-...

https://www.amazon.com/Keychain-Intel-Core-Computer-Chain/dp...

edgyquant
Kind of funny that Amazon recommends it be bought with thermal paste
simondotau
I imagine there is some trade off to be made between increasingly surgical disabling of components and avoiding a menagerie of franken-SKUs. Presumably the fault rate is low enough that tolerating a single GPU core drop takes care of enough imperfect parts.

Perhaps there is fault tolerance hidden elsewhere, e.g. the neural engine might have 17 physical cores and one is always disabled. Although this seems unlikely as it would probably waste more silicon than it would save.

flyinghamster
Specifically regarding Phenom II, I have a 550 Black Edition still plugging away, serving different roles over the years, and I was able to successfully unlock and run the two locked-out cores (via a BIOS option). It's never skipped a beat at stock clock. It could be that there was an oversupply of quad-cores, or perhaps (since it was a Black Edition part marketed to overclockers) the extra cores failed when overclocked. I know I wasn't able to have both overclock and four cores, but I considered the extra cores more important, since it was already a reasonably fast chip for its day.
SAI_Peregrinus
It's likely the latter (that it couldn't work when overclocked with all cores). The market for those is to allow overclocking, so if it can't do any overclocking with all cores AMD likely wouldn't want to sell it as a 4-core Black Edition, since it'd probably just get returned.
sly010
No manufacturing process is perfect, so you just sort, label and price the output accordingly. This is fairly normal practice. LEDs, vegetables, even eggs...
noir_lord
The problem is that the extreme effort to optimise hardware pays off for everyone, while the extreme effort to optimise "random software project" rarely does (unless random software project is a compiler/kernel, of course).

So the RoI is just different.

neolog
Browsers too
noir_lord
Indeed, browsers are a good example of ubiquitous software.
funcDropShadow
And they contain a compiler and almost an operating system ;-)
dwild
> you could be alarmed that you need a Doom-capable computer to run a modern light bulb.

An ESP8266 microcontroller can be bought in low quantities for less than a dollar. I mean, sure, any cost reduction at scale is meaningful, but I don't think the silicon is the expensive part at that point. It just doesn't make sense to give WiFi devices anything less than that performance; the gains in silicon space will be meaningless and you'll spend more managing that than anything.

2OEH8eoCRo0
I'm tired of the bloated software take. Hardware is meant to be used. Without these abstractions most software would be practically impossible to create. Without software solving more problems what's the point of the hardware?
ant6n
I don’t think the point of N GHz + 8 GB RAM hardware is for me to sit and stare at a spinning mouse pointer while waiting for Explorer to change to another directory.
2OEH8eoCRo0
I dislike Nautilus too
hasmanean
How’s this take:

What is the minimal computer you can both compile and run Doom on?

funcDropShadow
Of course good abstractions and tools can help make software possible that was practically impossible before. But there is also a tendency to add abstraction layers of questionable value. An Electron-based UI to copy a disk image to a USB stick comes to mind, for example. Certainly it is possible to create a GUI for a file-to-disk copy operation without two JavaScript engines, an HTML/CSS renderer, lots of network code, etc. This is just a silly example, I know. But this happens all the time. That phenomenon isn't even new. Anybody remember when the first end-user internet providers would all distribute their own software to dial in? In my experience, most problems with internet access at that time could be fixed by getting rid of that software and entering the phone number and dial-in credentials in the corresponding Windows dialog windows.
2OEH8eoCRo0
>An Electron-based UI to copy a disk image to an usb stick comes to my mind

Subjective. Questionable to you. Nobody is bloating dd

There is definitely bloated software, but it's not a huge issue. If it were, the customer would care; if the customer cared, the business would care.

N00bN00b
>A problem with this from my point of view is that while hardware engineers did an incredible job increasing the processing power of our thinking rocks, us software devs did a tremendous job of squandering 90% of it away.

That works both ways though. The highly qualified software devs did indeed squander some of it away.

But I'm a rather bad dev that writes really inefficient code (because it's not my primary concern, I'm not a programmer, I just need custom software that does the things I need done that can't be done by software other people write).

All this overpowered hardware allows my code to work very well.

I've been in situations where I could pick between "learn to program properly and optimize my code" or "throw more hardware at it" and throwing more hardware at it was definitely the faster and more efficient approach in my case.

psyc
What doesn’t scale well is John Carmack (and those of similar devotion).

And yes, I’m aware that there are also those with the chops, who are not permitted by their money masters (or alternately by their master, money) to write performant software.

bottled_poe
Those “money masters” are responding to market conditions. If the market demanded greater efficiency (e.g. through climate policy), we would quickly see a change in priorities.
Dah00n
So basically we need a climate tax on software to fix the problem. Putting the tax directly on energy would not cause much optimization in software, in my opinion. I don't believe software development has a culture that can take responsibility for its actions, in either energy usage or security, which all leads back to programmers not being an engineer kind of worker but more like an author/writer. Hardware engineers, on the other hand, can and often do take responsibility. All in all, I don't have any hope of software developers being up to the task if it landed in their lap, so if we wanted to force their hand the tax would need to be directly on software instead of hardware or energy. I don't believe this is mainly market driven, as the market is unlikely to be able to fix it. It's at least as much a culture problem.
photojosh
I think of John Carmack as the software equivalent of Roger Bannister, he of the first 4 minute mile. Yes, he is absolutely incredible, but once that magic is revealed to the world, it shows what is achievable and people can follow in his footsteps.

Of course, I'm over here with my poky 6 minute mile...

hasmanean
Yeah, Doom advanced the state of the art by maybe a decade(?). Instead of needing a Silicon Graphics workstation or a graphics accelerator, it allowed a generation of kids to play on any hardware.

If you want to know what the world would be like without video game programmers, just look at internal corporate software and how slow it is.

How many other advances are we tossing away because people don’t know how to optimize and code for speed and fun!

hasmanean
Someone should remake Doom using today’s “best practises” coding standards and see how much performance it gives up.
otabdeveloper4
> using today’s “best practises”

That would be WebGL and Javascript, I presume?

I tried running a Quake port in that vein, but sadly none of the computers I own were able to play it without stuttering.

Sohcahtoa82
I'd like to see the DOOM engine written entirely in Python without using any GPU rendering besides the frame buffer.

DOOM ran in 320x200 at 30 fps on a 33 MHz CPU, which gives it less than 18 clock cycles per pixel rendered. I doubt Python could get anywhere close to that.
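
That cycles-per-pixel figure checks out as rough arithmetic (ignoring game logic, memory stalls, and the fact that not every pixel changes every frame):

    # Rough check of the "less than 18 clock cycles per pixel" figure.
    width, height, fps = 320, 200, 30
    cpu_hz = 33_000_000  # 33 MHz 486-class CPU

    pixels_per_second = width * height * fps
    cycles_per_pixel = cpu_hz / pixels_per_second
    print(f"{pixels_per_second:,} pixels/s -> {cycles_per_pixel:.1f} cycles per pixel")
    # 1,920,000 pixels/s -> 17.2 cycles per pixel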

Narishma
Software seems to scale in the other direction. The faster the hardware you give developers, the slower the software they produce becomes.
Blikkentrekker
Reading up on some of the absolutely ugly and unmaintainable hacks that software writers had to rely upon two decades back to fit things into the hardware, I am honestly quite glad the æra of that type of programming is now past us.

It was certainly impressive, but it was also often a case of design having to give way to such hacks, such as the fact that many older first-person shooter games had to have their entire level design work around the idea that the engines could not support two walkable surfaces vertically above each other for performance reasons, or the famous Duke Nukem 3D “mirror” hack.

abraxas
Most web apps/websites aren't a whole lot more sophisticated than they were in the days of Netscape Navigator 3.0. They show text and pictures, accept form submissions and resize the viewport when I change the window size. Yet today they seem to require multi gigahertz CPUs to do anything acceptably fast. Yes there are a few that were not possible back then. Most though can't justify their resource consumption. Neither on the client nor on the server side.
sellyme
> Yes there are a few that were not possible back then.

"A few" is underselling it somewhat. I would posit that >95% of popular modern websites contain functionality that was not plausible in 1996.

abraxas
Excluding embedded video, which indeed was very low quality then (although it existed), what things do you have in mind?
alexashka
> us software devs did a tremendous job of squandering 90% of it away

No.

It's power hungry scum who have reduced computing down to a few hardware choices, a few operating systems and a few software products that they tyrannically control and extract profit from.

Most software devs are like most people - they are wage slaves who don't know they are wage slaves or that they live in a wage slave society. They don't know anything that didn't come from the correct-think megaphone, they don't want to know, they want to eat, sleep, fuck and do better than their neighbour.

bottled_poe
In the scheme of things, this is short term. Market incentives for pushing performance are currently minor, but will have increasing influence over the next decade. Factors such as processing power hitting physical limits and energy prices rising as a result of climate policy will force engineers to build more efficient systems.
limaoscarjuliet
I do not think we will see "processing power hitting physical limits" anytime soon. Moore's Law is not dead yet, and it is a good question whether it ever will be. As Jim Keller says, the only thing that is certain is that the number of people saying Moore's Law is dead doubles every 18 months.

https://eecs.berkeley.edu/research/colloquium/190918

jerf
Yes, it has to die. In this universe things can only grow indefinitely by less than an n^3 factor, because that's as fast as the lightcone of any event grows. Exponential growth, no matter how small the factor, will eventually outgrow n^3.

Once we attain the limits of what we can do in 2 dimensions, we aren't that many exponential growth events from what we can achieve in 3. Or once we achieve the limits of silicon technology, we aren't that many exponential growth events from the limits of photonics or quantum or any other possible computing substrate. Unless we somehow unlock the secret of how to use things smaller than atoms to compute, and can keep shrinking those, we're not getting very much farther on a smooth exponential curve.
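
A quick numerical illustration of that point (the constants are arbitrary; the only claim is that an exponential eventually overtakes any cubic, no matter how generous the cube's prefactor):

    # Any exponential eventually overtakes any polynomial; constants are arbitrary.
    doubling_period_years = 2.0   # Moore's-law-style doubling
    cube_prefactor = 1e9          # a very generous n^3 constant

    t = 1
    while 2 ** (t / doubling_period_years) <= cube_prefactor * t ** 3:
        t += 1
    print(f"the exponential overtakes the cubic after ~{t} years")  # ~100 with these constants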

squeaky-clean
Sure, it has to die eventually, but the key phrase is anytime soon. But do we have any evidence it will during our lifetime? Or even our great-great-great-great-great-great-grand-child's lifetime?
selfhoster11
Moore's law is dead as a doornail. Progress remains in parallel processing only.
Sohcahtoa82
Eh...for single-threaded processing, I'd say Moore's Law is dead and has been dead for more than a couple CPU generations now.

What we're seeing now is massive parallelism, which means if your task is embarrassingly parallel, then Moore's Law is very much alive and well. Otherwise, no.

dTal
Moore's law is a statement about transistor count on an integrated circuit - there is no "Moore's law for single threaded processing". And transistor counts continue to double.
_carbyau_
I feel this every time I go to a checkout register or am on the phone with a rep doing some minor account thing.

The person operating the console only has to click an option or fill in some text field, same as 20 years ago. But today, with added slowness.

Knowing the amazing advances in hardware in the meantime, this hurts me.

vsareto
Smart dust will probably be terrible for human respiratory systems anyway
IgorPartola
Not if it can work its way out of your lungs, being smart as it is.

One of my favorite dad jokes is that if Smart Water was so smart, why did it get trapped in a bottle?

selfhoster11
That's assuming benign smart dust. Imagine a smart dust deployment by a hostile foreign power that is less than benign.
kilroy123
My guess is that, in the future, we'll have computers writing some highly optimized software for certain things. I'm not saying all of us software people will be replaced 100%, but some stuff will be replaced by automation.

That's my prediction.

Blikkentrekker
That is already done; such software is called a compiler.

There is no reason to optimize the language that programmers work in when such optimizations can better be done on the generated machine code.

KMag
Will the specifications for the software also be machine-generated?

If the specifications are human-generated, then they're just a form of high-level source code, and your prediction boils down to future programming languages simultaneously improving programmer productivity and reducing resource usage. That's not a controversial prediction.

If I understand you correctly, I think you're correct that over time, we'll see an increase at the abstraction level at which most programming is done. I think the effort put into making compilers better at optimizing will largely follow market demand, which is a bit harder to predict.

One interesting direction is the Halide[0] domain-specific language for image/matrix/tensor transformations. The programs have 2 parts: a high-level description, and a set of program transformations that don't affect results, but make performance tradeoffs to tune the generated code for particular devices. The Halide site has links to some papers on applying machine learning to the tuning and optimization side of things.

I can imagine a more general purpose language along these lines, maybe in the form of a bunch of declarative rules that are semantically (though perhaps not syntactically) Prolog-like, plus a bunch of transformations that are effectively very high-level optimization passes before the compiler ever starts looking at traditional inlining, code motion, etc. optimizations.

At some point, maybe most programmers will just be writing machine learning objective functions, but at present, we don't have good engineering practice for writing safe and reliable objective functions. Given some of the degenerate examples of machine learning generating out-of-the-box solutions with objective functions (throwing pancakes to maximize the time before they hit the ground, tall robots that fall over to get their center of mass moving quickly, etc.), we're a long way from just handing a machine broad objectives and giving it broad leeway to write whatever code it deems best.

I suspect in the medium-term, we'll see a 3-way divergence in programming: (1) safety/security-critical programs generated from proofs (see Curry-Howard correspondence, and how the seL4 microkernel was developed) (2) performance-critical programs that are very intensive in terms of human expert time and (3) lots of cookie-cutter apps and websites being generated via machine learning from vague human-provided (under-)specifications.

[0] https://halide-lang.org/
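
A hypothetical pure-Python sketch of that algorithm/schedule split (this is not Halide's API; it only demonstrates that a loop-tiling "schedule" can change traversal order without changing the result):

    # Hypothetical illustration of the algorithm/schedule separation described above.
    # The "algorithm" defines each output value; the "schedule" only picks loop order.

    W, H, TILE = 64, 48, 16
    src = [[(x * 7 + y * 13) % 256 for x in range(W)] for y in range(H)]

    def algorithm(x, y):
        """A 3-wide horizontal box blur, defined pointwise (clamped at the edges)."""
        return (src[y][max(x - 1, 0)] + src[y][x] + src[y][min(x + 1, W - 1)]) // 3

    # Schedule 1: plain row-major loops.
    out_naive = [[algorithm(x, y) for x in range(W)] for y in range(H)]

    # Schedule 2: tiled loops -- a different traversal order, identical values.
    out_tiled = [[0] * W for _ in range(H)]
    for ty in range(0, H, TILE):
        for tx in range(0, W, TILE):
            for y in range(ty, min(ty + TILE, H)):
                for x in range(tx, min(tx + TILE, W)):
                    out_tiled[y][x] = algorithm(x, y)

    assert out_naive == out_tiled  # the schedule changes performance, never the result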

airbreather
An article yesterday said Google uses AI to design chips in 6 hours, so it sounds like "a long way" is now yesterday.
KMag
> google uses AI to design chips in 6 hours

No, Google's AI is floorplanning[0] (basically routing and layout) human-designed logic in 6 hours. That headline is misleading.

It's kind of like having a compiler with billions of optimization flags and using AI to select a pretty near-optimal set of flags for a particular human-generated source file. We wouldn't call the output an AI-designed program, even though such AI would be really helpful.

Google is using AI as a heuristic for decently fast approximate solutions to the floorplanning problem, where (IIRC) optimal solutions are NP-hard.

It's an important step forward, and presumably a big time saver, but they're nowhere near giving the AI an instruction set spec or examples of inputs and outputs and having the AI generate the logic.

[0] https://en.wikipedia.org/wiki/Floorplan_(microelectronics)

Gravityloss
Maybe John Carmack could look at web page rendering and solve the problem once and for all.

A significant part of my day is waiting on my or someone else's computer trying to view some tickets.

cwkoss
By the end of the century, AI will probably be able to write software better than humans.
papito
This guy gets it.
TheBigSalad
Go back to using the software of the 80s then, before the evil software engineers made it all so bad.
candu
I'll choose instead to be amazed that Doom-capable computers are now inexpensive and ubiquitous enough that it makes total financial sense to use one in a light bulb!

More seriously, I see this argument all the time: that we are just squandering our advances in hardware by making comparably more inefficient software! Considering that efficiency used to be thought of on the level of minimizing drum rotations and such: the whole point is that we're now working at a much higher level of abstraction, and so we're able to build things that would not have been possible to build before. I for one am extremely grateful that I don't have to think about the speed of a drum rotating, or build web applications as spiders' nests of CGI scripts.

Are there modern websites and applications that are needlessly bloated, slow, and inefficient? Certainly - but even those would have been impossible to build a few decades ago, and I think we shouldn't lose sight of that.

curtis3389
I get your point, but putting these 2 thoughts together:

> we are just squandering our advances in hardware by making comparably more inefficient software

> we're able to build things that would not have been possible to build before

We get that not only are we able to build things that weren't possible before, but we can build things that are more inefficient than was possible before.

We can expect in the future to see new levels of inefficiencies as hardware developments give us more to waste.

Without something to balance this out, we should expect to see our text editors get more and more bloated in cool and innovative ways in the future.

It makes me think of fuel efficiency standards in cars.

simias
I think I would be more willing to embrace this sort of tech if its computing resources were easily accessible to hack on.

If I could easily upload my code to this smart bulb and leverage it either for creative or practical endeavors then I wouldn't necessarily consider it wasted potential.

But here you have this bloated tech that you can't even easily leverage to your advantage.

I do agree with the general point that the progress we've made over the past few decades is mind blowing, and we shouldn't forget how lucky we are to experience it first hand. We're at a key moment of the evolution of humankind, for better or worse.

Sohcahtoa82
I dunno...this attitude scares me a bit, that you would just shrug away wasted CPU cycles and accept the low performance.

CPUs are getting faster, and yet paradoxically, performance is worse, especially in the world of web browsers.

The original DOOM ran at 30 fps in 320x200, which meant it rendered 1,920,000 pixels per second with only a 33 MHz CPU. That's less than 18 clock cycles per pixel, and even that's assuming no CPU time spent on game logic. If DOOM were written today with a software renderer written in C#, Python, or JS, I'd be surprised if it could get anywhere near that level of clocks/pixel.

These days, the basic Windows Calculator consumes more RAM than Windows 98, and that's just inexcusable.

derefr
What's "low performance"? Humans measure tasks on human timescales. If you ask an embedded computer to do something, and it finishes doing that something in 100ms vs 10ms vs 1us, it literally doesn't matter which one of those timescales it happened on, because those are all below the threshold of human latency-awareness. If it isn't doing the thing a million times in a loop (where we'd start to take notice of the speed at which it's doing it), why would anyone ever optimize anything past that threshold of human awareness?

Also keep in mind that the smaller chips get, the more power-efficient they become; so it can actually cost less in terms of both wall-clock time and watt-hours consumed, to execute a billion instructions on a modern device, than it did to execute a thousand instructions on a 1990s device. No matter how inefficient the software, hardware is just that good.

> These days, the basic Windows Calculator consumes more RAM than Windows 98

The Windows Calculator loads a large framework (UWP) that gets shared by anything else that loads that same framework. That's 99% of its resident size. (One might liken this to DOS applications depending on DOS — you wouldn't consider this to be part of the app's working-set size, would you?)

Also, it supports things Windows 98 didn't (anywhere, not just in its calculator), like runtime-dynamically-switchable numeric-format i18n, theming (dark mode transition!) and DPI (dragging the window from your hi-DPI laptop to a low-DPI external monitor); and extensive accessibility + IME input.

smoldesu
I think the other comment has a point though: these frameworks are definitely powerful, but they have no right to be as large as they actually are. Nowadays, we're blowing people's minds by showing 10x or 100x speedups in code by rewriting portions in lower-level languages; and we're still not even close to how optimized things used to be.

I think the more amicable solution here is to just have higher standards. I might not have given up on Windows (and UWP) if it didn't have such a big overhead. My Windows PC would idle using 3 or 4 gigs of memory: my Linux box struggles to break 1.

derefr
Have you tried to load UWP apps on a machine with less memory? I believe that part of what's going on there is framework-level shared, memory-pressure reclaimable caching.

On a machine that doesn't have as much memory, the frameworks don't "use" as much memory. (I would note that Windows IoT Core has a minimum spec of 256MB of RAM, and runs [headless] UWP apps just fine! Which in turn goes up to only 512MB RAM for GUI UWP apps.)

Really, it's better to not think of reclaimable memory as being "in use" at all. It's just like memory that the OS kernel is using for disk-page caching; it's different in kind to "reserved" memory, in that it can all be discarded at a moment's notice if another app actually tries to malloc(2) that memory for its stack/heap.

selfhoster11
Windows 98 had far more advanced theming than anything out there today. Today's dark mode is a far cry from what used to be possible.
IncRnd
That's well and good - when your program is the only software running, such as on a dedicated SBC. You can carefully and completely manage the cycles in such a case. Very few people would claim software bloat doesn't otherwise affect people. Heck, the software developers of that same embedded software wish their tools were faster.

> No matter how inefficient the software, hardware is just that good.

Hardware is amazing. Yet, software keeps eating all the hardware placed in front of it.

derefr
I mean, I agree, but the argument here was specifically about whether you're "wasting" a powerful CPU by putting it in the role of an embedded microcontroller, if the powerful CPU is only 'needed' because of software bloat, and you could theoretically get away with a much-less-powerful microcontroller if you wrote lower-level, tighter code.

And my point was that, by every measure, there's no point to worrying about this particular distinction: the more-powerful CPU + the more-bloated code has the same BOM cost, the same wattage, the same latency, etc. as the microcontroller + less-bloated code. (Plus, the platform SDK for the more-powerful CPU is likely a more modern/high-level one, and so has lower CapEx in developer-time required to build it.) So who cares?

Apps running on multitasking OSes should indeed be more optimized — if nothing else, for the sake of being able to run more apps at once. But keep in mind that "embedded software engineer" and "application software engineer" are different disciplines. Being cross that application software engineers should be doing something but aren't, shouldn't translate to a whole-industry condemnation of bloat, when other verticals don't have those same concerns/requirements. It's like demanding the same change of both civil and automotive engineers — there's almost nothing in common between their requirements.

nabaraz
Reminds me of Ship of Theseus.

"If you replace all the parts of a ship is it still the same ship?".

This project is equivalent to "Doom running on 40-MHz Cortex M4 found in Ikea lamps".

Good work nevertheless!

loritorto
I think that the goal of the project is not "Doom running on a 40 MHz Cortex-M4" (actually an 80 MHz M33...), which is pretty easy I guess, but "Doom running with only 108 kB of RAM", while keeping all the features (which is pretty hard, I guess). I recall that I had to bend over backwards to get it running on my 386 with only 4 MB RAM.
audunw
The game is cheating a little bit, since it loads a lot of read-only data from external SPI flash memory, and all the code is in the internal 1MB flash. On your 386, everything including the OS had to fit on that 4MB RAM.

It also doesn't have quite all the features. No music, and no screen wipe effect (I worked on a memory constrained Doom port myself, and that silly little effect is incredibly memory intensive since you need two full copies of the frame buffer)
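
Rough arithmetic on why that wipe effect is so memory-hungry (using the original 320x200 8bpp framebuffer; the lamp port's actual buffer sizes may differ):

    # Doom's screen-wipe ("melt") effect needs two full framebuffer copies.
    width, height, bytes_per_pixel = 320, 200, 1   # original 320x200, 8bpp palette mode
    one_frame = width * height * bytes_per_pixel
    print(f"one frame: {one_frame / 1024:.1f} kB, two frames: {2 * one_frame / 1024:.1f} kB")
    # one frame: 62.5 kB, two frames: 125.0 kB -- already more than the module's 108 kB RAM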

Retric
Not everything, only what’s relevant to the individual levels.
int_19h
Overlays were a thing in DOS days, as well. Not for Doom specifically, but I've seen quite a few 16-bit games using them.
Narishma
They're too slow for action games like Doom.
discardable_dan
Agreed. The actual lamp isn't... the thing. It's just reusing a chip with a monitor.
dspillett
Maybe this could be a new take on "a modern pocket calculator has far more computing power than the moon landing systems in '69". A modern lavalamp has as much computing power as my mid 90s desktop PC.
colonwqbang
A playstation doesn't have a monitor, controller, loudspeaker etc. built in. It's all external stuff you have to plug in before you can play.

Still, we say "I'm playing Doom on my playstation".

SiempreViernes
You also say you are "playing video games on my playstation" which doesn't make much technical sense, so clearly appeals to common idioms aren't without problems.

In any case, the argument is that the mini console they built is no longer a lamp, not that you can't play games on a console.

squeaky-clean
Ehhhh. Those things are meant to plug right in. I've never had to solder together my own breakout board and carrier board to hook a Playstation up to a TV while breaking the Playstation's main features in the process. That lightbulb is completely disassembled and won't function as a lightbulb anymore. And nothing they added was plug-n-play.

Edit: It's still a fun and cool project. But more like running Doom on hardware salvaged from an IKEA lamp.

user3939382
Thanks for the reference. I’ve pondered this for many years in the context of sports teams. If you can replace all the players, owners, coaches, logo, stadium, etc.. what are you really a fan of?
danellis
Yeah, "Game that ran on an 8MHz Arm in 1993 running on a 40MHz Arm in 2021" wouldn't have been as attention-grabbing.
loritorto
* "Game that ran in a 486 @ 66 MHz (for the same fps), with 4 MB RAM in 1993 running on a 80 MHz Cortex M33, with 0.108 MB RAM"
Narishma
At a quarter the resolution.
peoplefromibiza
tbf they ported a low RAM version, specifically the GBA version, not the original one.

So the OP is not entirely wrong: they ported a game played on a 16.8 MHz system with 256 kB of RAM to an 80 MHz system with 108 kB of RAM.

The writeup explicitly says «we could trade off some computing power of the 80Mhz Cortex M33 to save memory»

dolmen
So a port of a port isn't a port?
pigeck
Still, the original Doom features are all there, except multiplayer. They also restored some missing graphics features of the GBA port, like z-depth lighting. Yes, 4 MB vs 108 kB is more impressive than 256 kB vs 108 kB, but cutting the memory requirements in half is still noteworthy.
jonas21
I think it's fair to say they're playing Doom on the lamp (and even more impressively, it's not a whole lamp, but just a light bulb!). They use an external keyboard, monitor, speakers, and storage for the game data, but the processor and RAM are from the original bulb.

If someone said "I'm playing Doom on my PC" in 1993, they would also have been using an external keyboard, monitor, and speakers. And the game would have shipped on external storage (floppy disks).

loritorto
Actually, the correct technical term for "lightbulb" is "lamp", and the correct term for "lamp" is "fixture" :)
midasuni
Maybe in your language, but looking at my English dictionary it clearly says a lamp is a device for giving light, consisting of a bulb, holder and shade.

Historically a lamp would consist of the wick, oil and holder.

loritorto
(and yes, later it is stated: "Lamps are commonly called light bulbs;")
loritorto
From wikipedia (Electric light): " In technical usage, a replaceable component that produces light from electricity is called a lamp." EDIT: Yes, later in the same page: "Lamps are commonly called light bulbs;"
maybeOneDay
"In technical usage" means that this level of nitpicking isn't really accurate. When you say "they ran doom on a lamp" that isn't a piece of scientific literature. It's just conversational English and as such using the common dictionary definition of the word lamp as opposed to a technical definition is entirely appropriate.
Fatalist_ma
Before clicking I assumed the lamp had a small screen and they were using that screen...
Aeronwen
Was hoping they stuck the light behind a Nipkow Disk. I didn't really expect it to happen, but I still want to see it.
squeaky-clean
I was hoping to see them running a 1-pixel version of doom on an RGB bulb.
bayesian_horse
A corollary of Moores Law: the size of things you can run Doom on halves roughly every two years.
bognition
Carmack's law?
remarkEon
These are always awesome and I never stop being impressed by what folks can do.

What's left in the world for "DOOM running on ____"?

Here's my idea:

Could we do this with a drone swarm? And have players still control DOOM guy somehow? I'm imagining sitting on the beach at night and someone is playing a level while everyone looks up at the sky at a massive "screen".

Out_of_Characte
The current world record for drones is ~1300; 320x200 resolution has about 50 times more pixels than that. Therefore you'd need powerful 8-bit color drones and someone to design a good edge representation, or just a massive fleet for better resolution.

https://www.airlineratings.com/news/art-sky-check-spectacula...

remarkEon
Someone will do it, I'm sure.

Now, to figure out how to pump in the soundtrack ...

piceas
Just dangle a string of 50 LEDs below each drone. Close enough!
masswerk
What about a swarm of pigeons? Three RFCs can't be wrong…

[0] https://datatracker.ietf.org/doc/html/rfc1149

[1] https://datatracker.ietf.org/doc/html/rfc2549

[2] https://datatracker.ietf.org/doc/html/rfc6214

PhasmaFelis
Years ago, I saw someone implement a vector display using a powerful visible-light laser on a gimbal, instead of an electron gun with magnetic deflection.

Then they used it to play Tetris on passing clouds.

ant6n
Game Boy Color (8-bit CPU, 2 MHz, 40k RAM).

Supposedly the guy (Palmer) who created the commercial GBA version had done a tech demo for the GBC, but Carmack decided it was too stripped down and proposed a Commander Keen port for the GBC at the time instead. The GBA came out a couple of years later and was powerful enough.

dolmen
Or on a building.

See project Blinkenlights from the early 2000s (not Doom, but still video games). https://en.wikipedia.org/wiki/Project_Blinkenlights

usrusr
Too easy. At first I was imagining some amazing drone dance, but then I realized that it would be just a wireless screen with horrible battery endurance.
Cthulhu_
They've been playing Tetris and Snake on some weird things already (I've seen Tetris on a high rise, and Snake on a christmas tree)
SonicScrub
This reminds me of this Saturday Morning Breakfast Cereal Web Comic

https://www.smbc-comics.com/comic/2011-02-17

unhammer
DOOM over Avian Carrier (since obvs trained pigeons can implement a Turing machine, see also https://en.wikipedia.org/wiki/IP_over_Avian_Carriers ).

DOOM in Game of Life.

DOOM as a Boltzmann brain (might take a while before that's implemented, but I bet it'll happen eventually)

TheOtherHobbes
It would be hard to prove that it hasn't already.
TheCraiggers
Just because pigeons can deliver messages doesn't mean it's Turing complete. Although they could be used in data transfer. I've never seen anybody suggest that RFC1149 is Turing Complete, anyway.

Game of Life is totally Turing Complete though, so it's already proven that you can indeed run Doom on it.

unhammer
thus the hedge "trained" ;)
blauditore
Only tangentially related, but it has been bothering me: how does a simple rechargeable bicycle light cost upwards of $20?

- It can't be about the chip/logic, as that's a commodity these days (as this post celebrates).

- It can't be LEDs, because they are dirt cheap too, especially red ones.

- Building the plastic case doesn't seem to warrant such a high price.

- The battery needs very little capacity, magnitudes lower than e.g. that of a phone.

- Is it maybe the charging mechanism through USB? Are there some crazy patent fees?

spython
The bike lights that cost 15-20 € in Europe cost 2-5€ in bulk on alibaba. Literally the same model. I guess it's mostly shipping, import duties, taxes, marketplace fees, free shipping to customer, returns handling and profit margin.
_9vzr
Considering that you can pick up free portable chargers at trade shows, they must cost next to nothing to source. The LED adds a little bit more to the price but again not much. The whole package is very cheap wholesale and can probably be marked with whatever brand you want if you ask an Alibaba vendor. The $20 comes into play when it reaches the store. The store has already bought them from some company that bought them from China, so there's already a slight markup there and then the store adds a little more. They know people will buy them and, depending on what kind of store it is, can charge a little more if they know their customers well. A Walmart-like store isn't going to be able to sell them for too much above cost, but a specialty bike shop can mark them up more since their customers are already paying higher prices. A specialty bike shop might even order them with custom branding, adding a little more to the final price.

As for something like the Ikea bulb in the article, it includes an RF module that isn't that cheap. It costs about $7 per 1000 pieces. Maybe they get it for $5 or $6. Add in the slightly more expensive housing (it looks like a halogen bulb but is LED), then add in the cost of a quality RGB LED and the rest of the components, plus markup, to get $15. Ikea does win out compared to other stores for things like this because Ikea is buying the units from itself. The Ikea Sonos speaker is the only thing I've ever seen there that wasn't a pure Ikea product. They really have mastered horizontal and vertical integration.

rwmj
The price of something isn't (usually) the cost of the parts + a profit margin. There's a whole theory behind pricing.
jccooper
Low volume products need high margins to be worthwhile. Which is another way of saying "no one has found it worthwhile to sell a simple rechargeable bicycle light for $18."
webinvest
It costs $20 and up because somebody priced it at $20 and up.
Ekaros
Because they can charge that much?

Actually, I find it funny that the traditional solution of a light and dynamo is 6€ + 5€ shipping + taxes...

lddemi
"rechargeable bike light" on aliexpress (hell even amazon) yields several significantly below $20 options.
chasd00
The price will be what the market will bear. How could it be otherwise?
milesvp
Generally you'll find that the MSRP is going to be roughly 9x BOM (bill of materials). That leaves wholesale prices at roughly 3x BOM, so that there's some profitability at that stage. This is at least a common heuristic I use when designing hardware. It's easy to say, oh, this chip is way better and it only costs a dollar more in quantity, but now your final price is $9 more and you may have priced yourself out of the market. These numbers change depending on volume and on how many zeros the final price has. And of course demand will also inform the final price, but they're numbers that seem to hold across a lot of manufacturing going back to the early '80s.

As for the BOM cost, you're right that on the board the highest costs are probably the charge circuit followed by the processor. The battery probably costs the most overall, but don't discount the cost of the mould for the plastic; it's a high up-front cost that needs to be replaced more frequently than you'd guess.

In the end, that $20 bike lamp probably costs the shop $7-10 to acquire. And any shop that doesn't charge at least 2x their average cost for small items will tend to find their profitability eroded by fielding returns and other customer hassle.
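A minimal Python sketch of that heuristic; the 3x/9x multipliers are the rule of thumb described above, and the BOM figures are purely illustrative assumptions:

```python
# Rough sketch of the pricing heuristic described above.
# The multipliers (3x wholesale, 9x MSRP) are a rule of thumb,
# not fixed industry constants, and the BOM costs are made up.

def price_points(bom_cost: float, wholesale_multiple: float = 3.0,
                 msrp_multiple: float = 9.0) -> dict:
    """Estimate wholesale price and MSRP from a bill-of-materials cost."""
    return {
        "bom": bom_cost,
        "wholesale": bom_cost * wholesale_multiple,
        "msrp": bom_cost * msrp_multiple,
    }

# Example: a $1 "better chip" swap raises the estimated retail price by ~$9.
base = price_points(2.20)        # hypothetical BOM for a bike light
upgraded = price_points(3.20)    # same design with a $1 nicer part
print(base["msrp"], upgraded["msrp"])  # 19.8 vs 28.8 -> priced out of the $20 bracket
```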

ralmidani
Is it fair to say “anything that can run Doom, eventually will run Doom”?
grecy
When I worked at the Department of Defence I got Quake III running on a monster SGI supercomputer that was somewhere around the $5mil mark.
eloisius
And people talk about $500 hammers…
NaturalPhallacy
The navy has $100K keyboards. I've touched them with my own hands. And this was in 2004 money.
peterburkimsher
Another guy got Doom running on potatoes, so I'd say yes.

https://www.youtube.com/watch?v=KFDlVgBMomQ

dhosek
Back in the 80s/90s there were some questionable ports of TeX to unlikely hardware. Perhaps the most egregious of these was TeX running on a Cray supercomputer; time on those machines was metered heavily, and I can't imagine anyone actually used it for formatting papers. I had a dream of doing a hand port of TeX to 6502 assembly to see if I could get it running on a 128K Apple //e. I envisioned heavy use of bank switching to enable code to jump back and forth between the two 64KB banks, which could only be switched in predefined blocks as I recall (the page 1/2 text/lores blocks, the page 1/2 hires blocks, the high 16K, and I think the low 48K that wasn't part of the lores/hires blocks), but it's a long time since I played with that hardware.
bombcar
128k seems at least in the same ballpark as the PDP-10 so it should be possible - especially if disk is available.
andredz
The video, and the write-up (https://next-hack.com/index.php/2021/06/12/lets-port-doom-to...) seem to be unavailable.
alexweber
Here's a mirror: https://web.archive.org/web/20210615035229/https://next-hack...
kencausey
So, why does this device need such processing power? Can this really be cost effective?
eru
By today's standard Doom doesn't need much processing power.

You could probably find exactly the right chip, one with only as much RAM as you need for the lamp's functionality. But that would probably be more expensive to develop and source than just using standard parts?

Even more radical: I suspect with a bit of cleverness you could probably do everything the lamp needs to do with some relays and transistors. But today, that would be more expensive.

Compare https://www.youtube.com/watch?v=NmGaXEmfTIo for the latter approach.

jfrunyon
Implementing wifi/RF with "some relays and transistors" doesn't sound fun.
eru
Yes. I should have been less sloppy: you'd have to rethink what the lamp needs to be able to do slightly, too.
nxpnsv
Not to you perhaps, but I’d watch the video if some patient/insane/genius built it...
moftz
You could do something very basic with discrete components for controlling wireless lighting systems, but the system starts to get out of hand when you need to have a bunch of lights nearby. It's much cheaper, simpler, and smaller to reduce it down to a chip and move to a digital RF system. I've got a bunch of RF controlled outlets in my house but it's just about the dumbest system you can buy. It's on par with the typical remote garage door opener. You can program the on/off buttons on the remote for each outlet but that's as far as it goes. I'd like to be able to remotely control them away from home or be able to give each light or outlet its own schedule, and that requires either a central controller or each device having network access for network time and remote control.

Interestingly, a friend rented a house in college once that had a system of low-voltage light switches that ran back to a cabinet filled with relays controlling the lights and outlets. No major benefit to the user other than a control panel in the master bedroom that let you control the exterior and some interior lights. It was a neat system but definitely outdated. I'd imagine a retrofit would be to swap all of the relays for solid state and add a networked controller to monitor status and provide remote control.

foobar33333
It doesn't need it; it's just that chips that can run Doom are the dirt cheap bottom tier chips now. Rather than making some custom chip only just powerful enough to run the lamp software, you may as well just stick a generic one in.

These IKEA smart bulbs cost about $9 so yes, it is cost effective.

marcan_42
Chips that can run Doom are nowhere near the dirt cheap bottom tier. The dirt cheap bottom tier is this $.03 chip:

https://hackaday.com/2019/04/26/making-a-three-cent-microcon...

Chips that can run Doom, though, are just about at the low end for internet-connected devices. You can't run an IoT stack on that $.03 thing. The chip in the bulb is exactly in the right ballpark for the application. You do need a fairly beefy chip to run multiple network protocols efficiently.

foobar33333
There is no network stack on the IKEA bulbs. They only support local communication via Zigbee. No IP/TCP/etc. It's the gateway device that does WiFi/networking.
marcan_42
ZigBee is a network protocol with a network stack. Just because it isn't TCP/IP does not mean it's not a network. It has addressing, routing and routing tables, fragmentation and reassembly, discovery, error detection, packet retransmission, and everything else you'd expect from a functional network protocol.
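To make that point concrete, here is a toy Python sketch of the kind of bookkeeping even a minimal mesh-style network layer needs; the field names and sizes are simplified assumptions for illustration, not the actual Zigbee NWK frame format:

```python
# Toy illustration of why "Zigbee is a network stack": a mesh network layer
# needs addressing, routing tables, sequence numbers for retransmission and
# de-duplication, hop limits, etc. NOT the real Zigbee NWK frame layout.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NwkFrame:
    dst: int          # short address of the destination node
    src: int          # short address of the sender
    seq: int          # sequence number, used for retries/de-duplication
    radius: int       # remaining hop count
    payload: bytes

@dataclass
class Node:
    addr: int
    routes: dict = field(default_factory=dict)  # destination -> next hop

    def forward(self, frame: NwkFrame) -> Optional[int]:
        """Return the next-hop address for a frame, or None to stop forwarding."""
        if frame.dst == self.addr or frame.radius == 0:
            return None                      # deliver locally or give up
        frame.radius -= 1
        return self.routes.get(frame.dst)    # route discovery not shown

node = Node(addr=0x0001, routes={0x00AB: 0x0042})
hop = node.forward(NwkFrame(dst=0x00AB, src=0x0001, seq=7, radius=5, payload=b"\x01"))
print(hex(hop))  # 0x42
```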
Karliss
It is an IoT thingy with a wireless connection, which puts it in a category where certain factors combine.

* The cost-effective solution for a lightbulb would be not having a wireless connection at all, rather than having a less powerful MCU. So it being IoT already means the target audience doesn't care about the price that much.

* It uses an off-the-shelf, FCC-certified wireless module with an embedded antenna. For the product designer it makes sense to use a ready module because it avoids the need to do RF design and certification. It also simplifies the design if you run your user application inside the wireless module instead of having an additional MCU communicating with it. Such modules tend to have medium to high-end MCUs.

Why do wireless modules need such processing power?

* A 2.4 GHz antenna has certain size requirements, so the size constraints for the rest of the system aren't too tight.

* The wireless circuitry puts the module in a certain price category, so it makes sense to have the MCU in a similar price category. Wireless certification is a pain, so there will be fewer product variants for wireless modules compared to something like 8-bit microcontrollers, which come in a wide variety of memory, size and IO configurations. If you only have a single product variant, better to give it a slightly more powerful MCU, making it suitable for a wider range of applications.

* The wireless communication part itself has certain memory and computing requirements. Might as well split the resource budget on the same order of magnitude between the wireless part and the user application: N kB of memory for wireless and N kB for the user application, instead of N and 0.001N, especially since the first part can easily vary by 10% or more due to bugfixes and compiler changes. Similarly, there are basic speed requirements for the digital logic buffering the bits from the analog RF part and computing the checksums.

* Modern silicon manufacturing technologies easily allow running at a few tens of MHz, so if the SoC has memory in the range of 30-500 KB and isn't targeting the ultra-low-power category, it is probably capable of running at that speed.

marcan_42
The processing power needed to run a decent internet connected device with typical software stacks these days is about the same as the processing power needed to run Doom.
spookthesunset
Because mass produced microcontrollers are dirt dirt cheap. It’s easier to source a way overpowered CPU than some perfectly spec’d one.

Plus how else will malware people run their stuff?

foobar33333
The IKEA bulbs are actually pretty good malware-wise. The bulbs do not connect to the internet; they use Zigbee to communicate with a remote, which can either be a dumb offline remote or the gateway device. The gateway also does not connect to the internet, it is local network only and can be hooked up to the Apple/Google/Amazon systems for internet access.

If you had to design an IoT bulb, this is the ideal setup.

jfrunyon
> The gateway also does not connect to the internet, it is local network only and can be hooked up to the Apple/Google/Amazon systems for internet access.

In other words, it does connect to the internet, it also sits on the LAN to give attackers access to all your other devices, AND it sits on Zigbee to give attackers access to those as well.

foobar33333
There are 2 layers of relay devices in between. And the only one with direct internet access is a device you already have on the internet, developed by top brains and maintained for many years to come, unlike your average smart bulb directly hooked to the internet with minimal security.

If you buy the dumb remote, you get a useful smart light setup with no internet or even local network connectivity. It's useful because you can turn a room full of lamps on at once or adjust their color.

jfrunyon
Oh boy, it sure is impossible to exploit something through a proxy in a training diaper!
midasuni
You obviously put it on its own /30 on the LAN and limit its connection to what's needed
franga2000
No, it can be commanded from the Internet - big difference. It never has a direct connection to the Internet and even that is entirely optional and a hard opt-in (buying more hardware).

And if you have attackers on your LAN, you're at the point where controlling your lightbulb is the least of your problems. As for Zigbee, go on, present your alternative - I'm all ears!

jfrunyon
> No, it can be commanded from the Internet - big difference.

Big difference from what?

You do realize that the vast majority of remotely exploitable security vulnerabilities are in software which can be commanded from the Internet, right?

franga2000
Source?? I'm quite certain that it's much harder to exploit something that you can't even send a TCP packet to. If the device is only directly connected to the hub (amazon/google/apple box) and the hub is only connected to the cloud service, how would you even send a payload to the device, even if an exploit existed?

You could exploit the cloud service directly and gain control of the device, but that's like stealing the security guard's master keys - you can't call that a vulnerability in the door lock, can you?

jfrunyon
Why on earth do you think you can't send a packet to the device? How do you think the cloud service communicates with it?!
marcan_42
As far as I know only Apple does the local network stuff. If a device is Alexa or Google Home compatible, it talks directly to some cloud service from the manufacturer on the Internet which then talks to Google or Amazon. So it connects directly to the internet, and moreover there is the additional attack/privacy surface of the manufacturer's cloud service.

Source: I run a HomeAssistant local IoT hub and to integrate it with Google Home I had to give it a public hostname and sign up as an IoT vendor with Google to register it as a developer/testing mode service (if I were a real vendor it would be one cloud hub for all my customers, it wouldn't be Google to individual homes, it's just that in my case there is only one user and the server is at my house).

foobar33333
>If a device is Alexa or Google Home compatible, it talks directly to some cloud service

This is how some IoT devices work. As far as I can tell, IKEA has no servers or infrastructure for their devices, and the Apple/Google hubs manage everything for them.

vinay427
The IoT device can certainly work like that. The comment is specifically talking about Google Assistant support, which as HomeAssistant users have experienced, does require cloud server access even if this seems unnecessary in cases when the devices are only being controlled within a local network.
marcan_42
IKEA has to have servers for their devices to integrate with Google Home and Alexa. That's how those systems work. Only Apple offers direct local connectivity as far as I know.

These days Google Home has local fulfillment, but that seems to only be offered as an addition to cloud fulfillment. It always has a cloud fallback path.

Here's how you hook up Home Assistant to Google cloud. As you can see, turning it into a cloud service from Google's POV is required. You can either use Home Assistant Cloud (see? cloud service) or set up your own single-user cloud integration (which is what I do), turning your "local" server into a cloud service (with public IP and SSL cert and domain and everything) and registering yourself as an IoT vendor in their developer console, pointing at your "cloud" service URL.

https://www.home-assistant.io/integrations/google_assistant/

There is no way to keep the entire system local and have the Google Home devices only access it locally, without any cloud infrastructure. The commands flow from Google Home devices, to Google's cloud, to the vendor's cloud, to the vendor's devices. There is a bypass path these days for local access, but it is always in addition to the cloud path, and only an optimization.

psanford
I don't know how the IKEA hardware works. However it is not true that Alexa has to talk to a cloud service to integrate with all IoT devices.

I know this because I run some local Raspberry Pis that pretend to be WeMo devices, and I'm able to control them without any cloud connections from the Pis. The Echo discovers the WeMo devices via UPnP.

This has been a thing for quite a while[0].

I believe you are correct that Google Home has no local network device control.

[0]: https://hackaday.com/2015/07/16/how-to-make-amazon-echo-cont...
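For context, a minimal Python sketch of the SSDP discovery half of such a fake-WeMo responder, in the spirit of the fauxmo-style hack linked above. The local address is a placeholder, the exact search strings vary by Echo firmware, and a full emulator would also have to serve a WeMo-style setup.xml and action endpoints over HTTP at the advertised LOCATION (omitted here):

```python
# Sketch only: answer SSDP M-SEARCH queries so an Echo can "discover" a
# local device, fauxmo-style. The HTTP side (setup.xml, on/off actions)
# is not shown, and DEVICE_URL is a hypothetical local address.
import socket
import struct

SSDP_GROUP, SSDP_PORT = "239.255.255.250", 1900
DEVICE_URL = "http://192.168.1.50:8080/setup.xml"   # placeholder

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SSDP_PORT))
mreq = struct.pack("4sl", socket.inet_aton(SSDP_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(1024)
    msg = data.decode(errors="ignore")
    # Older Echo firmware multicasts an M-SEARCH for Belkin (WeMo) devices.
    if "M-SEARCH" in msg and "urn:Belkin:device" in msg:
        response = (
            "HTTP/1.1 200 OK\r\n"
            "CACHE-CONTROL: max-age=86400\r\n"
            "ST: urn:Belkin:device:**\r\n"
            "USN: uuid:Socket-1_0-fakewemo::urn:Belkin:device:**\r\n"
            f"LOCATION: {DEVICE_URL}\r\n"
            "\r\n"
        )
        sock.sendto(response.encode(), addr)   # unicast reply to the searcher
```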

franga2000
We're still talking about the light bulbs, aren't we? Bulb --Zigbee--> Zigbee Gateway --WiFi/eth--> LAN. If further integration is desired, an Internet gateway can be used (could be a Google/Apple/Amazon box thing, but could also be a Pi with HomeAssistant!). How that gateway connects to the Internet is up to it - but at no point is either the lightbulb or its LAN gateway in any way connected to the Internet. Therefore, neither the bulb nor the gateway poses a direct security or privacy risk. All the security is offloaded to the gateway, and you are entirely free to choose the vendor of your Internet gateway or indeed opt for none at all (and possibly use a VPN if external access is desired).
marcan_42
foobar33333 said "The gateway also does not connect to the internet", which cannot be true, because connecting to the internet to speak to a manufacturer-provided cloud service that then speaks to Google is required to integrate with Google Home. That's how it works. The IKEA gateway has to talk to an IKEA cloud service. If you think otherwise, please link to the Google Home docs that explain how that could possibly work, because I can't find them.

Here's how you hook up Home Assistant to Google cloud. As you can see, turning it into a cloud service from Google's POV is required. You can either use Home Assistant Cloud (see? cloud service) or set up your own single-user cloud integration (which is what I do), turning your "local" server into a cloud service (with public IP and SSL cert and domain and everything) and registering yourself as an IoT vendor in their developer console, pointing at your "cloud" service URL.

https://www.home-assistant.io/integrations/google_assistant/

There is no way to keep the entire system local and have the Google Home devices only access it locally, without any cloud infrastructure. The commands flow from Google Home devices, to Google's cloud, to the vendor's cloud, to the vendor's devices.

Bulb --> Zigbee --> Zigbee Gateway --> WiFi/eth --> LAN --> Your router --> WAN --> IKEA cloud --> Google cloud --> WAN --> Your router --> LAN --> WiFi --> Google Home device.

If that sounds stupid, congrats, this is why they call it the internet of shit.

franga2000
See, I have in fact set this up in the past, although not with IKEA lamps, but some other cheap Zigbee-compatible ones. The Zigbee-LAN gateway (along with all the other WiFi devices) sat on its own VLAN with no Internet access at all and a HomeAssistant box had access to both the IoT VLAN and the main one (that had Internet access). The HomeAssistant instance was configured with a dev account to work with Google's crap, but the devices themselves only ever talked to it, not Google or any vendor-provided server.

EDIT: Perhaps the terminology got somewhat twisted around here: when I talked about the LAN gateway, I meant specifically the thing that does Zigbee-LAN "translation". Now, that same physical box might also have the capability to work as a Zigbee-Alexa or Zigbee-Google translator, which would require a vendor server as you said, but those options are, well, optional. You can certainly disable them and use something like HASS or openHAB as the bridge to whatever cloud service you wish. Same way that my home router has a built-in VPN feature, but I don't use it because I run a VPN server on my NAS instead.

marcan_42
Of course, if you set up Home Assistant you can firewall them off the internet. That's how I do it too, with an IoT VLAN. It's not how these devices are intended to work, and not how they work if you just follow the manufacturer's instructions for Google/Alexa integration. You're replacing the vendor's cloud service with Home Assistant, effectively.

For example, I had to work out that in order to get Broadlink devices to stop rebooting every 3 minutes because they can't contact their cloud crap you have to broadcast a keepalive message on the LAN (it normally comes from their cloud connection, but their message handler also accepts it locally, and thankfully that's enough to reset the watchdog). This involved decompiling their firmware. I think that patch finally got merged into Home Assistant recently.

My point is that this is not the intended use for these devices. Normal people are going to put the gateways on the internet and enable the Google integration; in fact, it's quite likely that they will sign in to some IKEA cloud service as soon as you put the gateways on a network with outgoing internet connectivity, even before you enable the integration.

bouke
This is why HomeKit is superior: when you’re in your home, it doesn’t need WAN to function. The connection would be Bulb —> Zigbee —> Zigbee Gateway —> Wifi —> iPhone.

When you’re away from home, iCloud will be used, and no IoT vendor systems come into play. This means that all IoT devices can be kept offline and limited to your LAN. The connection would be Bulb —> Zigbee —> Zigbee Gateway —> Home Hub (Apple TV or iPad or HomePod) —> WAN —> iCloud —> WAN —> iPhone.

jfrunyon
> all IoT devices can be kept offline and limited to your LAN

> Home Hub (Apple TV or iPad or HomePod) —> WAN

bouke
IoT devices being smart appliances. Not Home Hub (Apple device).
jfrunyon
I'm not sure why you think that having a proxy in the middle will protect you.
bouke
Simple: I trust the security of Apple/iCloud over the security of the servers of any IoT vendor.
nerfhammer
I bet it connects to wifi and/or bluetooth so you can control it with some smartphone app
malux85
Economies of scale through mass production are more cost-effective than custom chips that exactly fit a requirement and no more.

The answer to the bigger question, "Does a lamp need a CPU?", is no, imho.

mrb
The system-on-chip is the MGM210L which needed to be powerful enough to run multiple wireless protocols (Zigbee, Thread, Bluetooth), so the lamp can be controlled by any of these. These are very complex protocols. The Thread spec for example is hundreds of pages. I did a formal security review of it on behalf of Google back in 2015. Bluetooth is even more complex. RF signal processing, packet structure, state machine, crypto, logical services, etc.

The software complexity of these protocols is greater than the complexity of a rudimentary 3D game like Doom, so it's expected that whatever chip can run these protocols can also run Doom.

Datasheet of MGM210L for the curious: https://www.silabs.com/documents/public/data-sheets/mgm210l-...

Cthulhu_
I would have thought they'd have specialized chip hardware already to deal with these, but maybe I don't know enough about that kinda thing. Pretty sure it'd have specialized hardware and/or instructions to deal with the cryptographic aspects though.
nabla9
It's already there.

The chip is an SoC with an Arm core, crypto acceleration, DSP extensions and an FPU, plus the radio parts.

gtsteve
Baking something like that into hardware is probably not a good idea because then you can't update it when vulnerabilities are found.
nekopa
Makes me smile to think that one day my bank account will be drained of all its BTC because I forgot to patch my bedside lamp last Tuesday...
kwdc
That would make me frown. Just saying.
dolmen
But do you get software update (and deployment to your device) when vulnerabilities are found? ;)

The real reason is it reduces the cost (and duration) of iterating in the development phase.

elondaits
My Hue lamps had their firmware updated through the app a couple of times, and I had them for 4-5 years.
kmadento
Since IKEA updates the firmware of the trådlös (wireless) units quite often and lists security updates in the changelog, I would guess... yes. They also mention improvements to stability and performance for different protocols in the changelogs.
oaiey
Complexity... maybe. 80 MHz... heck no. However, this is all off-the-shelf ware, and as long as the energy consumption is not a problem, I am fine with it.
marcan_42
It's burst processing. You do actually need high processing speeds for short periods of time to implement network protocols like these effectively. Think cryptography, reliability, etc. The CPU isn't doing anything most of the time, but it makes a big difference if it can get the job done in 500μs instead of 5ms when there is something to do (like process a packet).

Also, higher clock speed = lower power consumption. It sounds counterintuitive, but getting the job done quickly so you can go back to low power mode sooner actually saves power, even if the instantaneous power draw while processing is higher.
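A back-of-the-envelope Python sketch of that race-to-sleep argument; the power and timing figures are made-up but plausible assumptions, not measurements from this chip:

```python
# "Race to sleep": a faster clock draws more power while active, but spends
# far less time active, so the average energy can still be lower.
# All numbers below are illustrative assumptions.

def energy_per_second(active_mw: float, active_ms: float,
                      sleep_mw: float = 0.005) -> float:
    """Average energy (millijoules) over a 1-second window in which the MCU
    is active for active_ms and in low-power sleep for the remainder."""
    sleep_ms = 1000 - active_ms
    return active_mw * active_ms / 1000 + sleep_mw * sleep_ms / 1000

# Say 10 packets/second arrive. Slow clock: 5 ms each at 5 mW.
slow = energy_per_second(active_mw=5, active_ms=50)
# Fast clock: 0.5 ms each at 20 mW (4x the instantaneous draw, 10x the speed).
fast = energy_per_second(active_mw=20, active_ms=5)

print(f"slow: {slow:.3f} mJ/s, fast: {fast:.3f} mJ/s")
# -> the fast clock wins despite the higher instantaneous power draw.
```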

baybal2
Nearly all cryptography on MCUs is hardware based. It would otherwise be completely impossible to do timing-sensitive crypto work like WiFi or BT.

The WiFi, or BT protocol stacks themselves are however almost always in software on MCUs simply because nobody would bother making a separate ASIC IP for that which will be outdated by the next standard errata.

marcan_42
Symmetric cryptography is often hardware based, but asymmetric crypto rarely is. The latter is commonly used for pairing/key exchanges, and would be painfully slow on a slow MCU.
baybal2
> The Thread spec for example is hundreds of pages.

And this also assured it would be dead on arrival.

mrb
Actually Thread is alive and kicking! It has a lot of momentum at the moment. But if it gets displaced, IMHO it's going to be by Wi-Fi HaLow or its successors.
baybal2
It is not alive. I haven't seen a single device working with it, while I have at least seen one Apple Home compatible device in an Apple store.

Tuya, on the other hand, is in just about every retail outlet; it's just that people don't know Tuya is under the bonnet.

notwedtm
Please stop lying to people: https://www.threadgroup.org/thread-group#OurMembers
kortex
That just means little more than "who bought or used the spec at some point." It has little bearing on contemporary real-world commercialization of the thread protocol.
baybal2
I am not lying; the assortment of Thread-based devices that ever saw sale is much smaller than Thread's illustrious board of directors.

They literally have more talk shops per month than devices released.

mrb
Maybe not in your country(?) In the US there are quite a few commercial systems built on Thread: Apple HomePod mini, Google Nest security system, etc. Don't get me wrong: we are still very early in real-world deployments. It was just 2 years ago that the consortium/Google released the open source OpenThread which is when momentum really picked up.
phkahler
>> The software complexity of these protocols is greater than the complexity of a rudimentary 3D game like Doom

That's unfortunate. Protocols, particularly those used for security, should be as simple as possible. I know it's a hard problem, and people tend to use what is available rather than solving the hard problem.

wildpeaks
Both the video and the write-up appear to be gone? Did IKEA complain?
1m2r3a
https://web.archive.org/web/20210615035229/https://next-hack...
trzeci
Looks like the article and video were pulled down : (
kregasaurusrex
I'm curious as to why it was taken down too- the video, blog post, and Github repo are all gone. Here's a mirror to the original: https://archive.is/JguQH
HugoDaniel
the pregnancy test doom was fake! :O
djmips
This one is fake for me too since they had to add a screen and other stuff. Meh, I prefer my Doom runs on X to mean that X wasn't modified or beefed up.
abluecloud
it wasn't fake as much as misreported.
DudeInBasement
Underrated
djmips
The author helped that along.
stuntkite
With the chip shortage, you best believe some people are going to be buying some of these to scavenge for project parts.
kwdc
Is there a PCB shortage as well? I feel like that could put a dampener on the proceedings. Asking for a friend.
Doxin
I doubt it. PCBs are much more of a commodity than chips. You can make passable PCBs at home without a crazy amount of effort, and you can definitely get to professional-grade PCBs with some effort.
kwdc
I haven't etched a pcb for years. This will be a good weekender.
varjag
Yes, there is a copper laminate shortage at the moment.
stuntkite
I mean, maybe? I don't think it's that consequential for hobbyists compared to the lack of microcontrollers or other components. For instance, I can electroplate quite a few different substrates if I want to cut my own boards. Also, after years of doing this, I have so much proto crap I'm probably good till the next pandemic, but I probably only have a couple of unused Teensys and maybe an Arduino or two lying around. I don't see the supply chain ever springing back to what it was. We are in a whole new world of scarcity for at least a few years IMO. Which I'm not that upset about at all, really. It's inconvenient and possibly dangerous, but I think the reorganization will create resilience in our supply chains, and the lack of gear will encourage people to do very interesting things with stuff that would have been considered trash in 2019, and we need more of that. E-waste is a huge problem.
mschuster91
> I don't see the supply chain ever springing back to what it was. We are in a whole new world of scarcity for at least a few years IMO.

The current issues are driven by the automotive industry having screwed up and shitcoin miners snatching up GPUs. Neither is going to be a long term issue.

varjag
It's more than that, supply chain issues started around 2016.
sireat
This was a nice porting job! https://www.reddit.com/r/itrunsdoom/ - could use some new content.

Next level would be finding the cheapest modern mass produced device that can run Doom with no hardware modifications.

This means use whatever I/O the device comes with for controller and display.

Using external display sort of distracts from the coolness.

Second part - it has to be currently in production (like this Ikea lamp). I mean, you can find a $10 device/computer from 10 years ago that will run Doom.

djmips
I agree, adding a display and other mods isn't so impressive. They might as well order the microprocessor board from the manufacturer.
bayesian_horse
Truly a light bulb moment.
linuxhansl
"Only 108kb of ram."

That quote reminded me of my first computer: the ZX81, which had 1 kB of RAM! About 150 bytes of that was used by system variables, and depending on how you use the screen, up to 768 bytes are used by the display buffer.

And yet I managed to write code on it as a kid. :) (Of course nothing like Doom)

optimalsolver
Reminds me of this comic:

https://www.smbc-comics.com/comic/2011-02-17

pwagland
SMBC is starting to reach parity with XKCD… between the two of them there really _should_ be a comic for everything!
ck2
Kinda disappointed to see no-one has backported Doom to a TRS-80 (yet) but that's probably asking a bit too much.

The pregnancy-test-kit running Doom was fake? Somehow missed all the front-page retractions on that one.

sombremesa
Title is more misleading than the Pied Piper. I was expecting to see impressive visuals like the Line Wobbler game and inventive controls, not someone attaching a controller and a screen.
aasasd
It's a relief to see something other than ‘Doom running on a cinematograph projector’ where the device only acts as the video output.
roudaki
I know this sounds crazy now, but with improving brain-computer interfaces (and I know they are not as good as they say), in a couple of years, let's say 20, someone will run Doom on a brain implant. We all know we will see this in our lifetime. I mean, you can buy in a shop right now a device that helps you control your PC with your brain waves. How is that not exciting? Isn't it amazing to be a science nerd?
ravenstine
Wow, it runs even better on that lamp than it did when I installed it on my iPod! (using Linux)
winrid
This is just the case, right? None of the internals (chip, screen) are from the lamp?
icelancer
No, the chips and internals are from the bulb. The screen obviously is not.
winrid
Awesome that a light has 100 MB of RAM, if I remember from the video correctly.
tyingq
Not quite. The MGM210L has 108kB RAM, 1MB of Flash, and an 80MHz Cortex M33.

He added external memory (an 8MB W25Q64).

winrid
Ah. I only watched the video briefly. Thanks.

Amazing clock speed for a lamp. I guess they need it to get the OS started quickly...

grillvogel
kinda lame tbh
alkonaut
Agree it should be able to run on a 1x1 resolution (the lamp) and no audio, out of the box. It wouldn’t be a very cool video though.
beebeepka
Lamp, the final frontier.

What now, you might ask. Well, RGB brain implants running Quake, of course.

stefap2
Looks like it was hidden now.
detaro
and the blog post also removed. weird.
timonoko
Fake news (just a little bit). I was just thinking about using an unmodified lamp as a Linux terminal. I learned Morse in the army in the '70s. I already had a lamp which morsed the time of day, but it was annoying, because Morse numbers are so long.
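A minimal Python sketch of that idea; set_lamp() is a hypothetical stand-in for whatever actually toggles the bulb (e.g. a Zigbee command sent via a gateway), which is not shown:

```python
# Blink the current time in Morse on a lamp. set_lamp() is a placeholder
# for the real on/off command; timings follow the usual dit/dah convention.
import time

MORSE_DIGITS = {
    "0": "-----", "1": ".----", "2": "..---", "3": "...--", "4": "....-",
    "5": ".....", "6": "-....", "7": "--...", "8": "---..", "9": "----.",
}
DIT = 0.2  # seconds; a dah is three dits

def set_lamp(on: bool) -> None:
    print("ON " if on else "off")   # stand-in for the real lamp command

def blink(text: str) -> None:
    for ch in text:
        for symbol in MORSE_DIGITS.get(ch, ""):
            set_lamp(True)
            time.sleep(DIT if symbol == "." else 3 * DIT)
            set_lamp(False)
            time.sleep(DIT)              # gap between symbols
        time.sleep(3 * DIT)              # gap between digits

blink(time.strftime("%H%M"))  # e.g. 14:07 -> ".----" "....-" "-----" "--..."
```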
v4rp1ng
The video is now private, did someone upload it somewhere else?
ferros
Why does a simple lamp have a chip capable of running Doom?
intricatedetail
This is what you have to do if you run out of chips.
wonks
Has the "running DOOM" meme gone too far?
phkahler
No.
croes
Next step, can it run Crysis?
int_19h
Doom was released in 1993, so 28 years ago. Crysis was released in 2007, so ... maybe in 2035?
quickthrower2
But can it run Slack?
vmception
That was entertaining
accountofme
Excuse the swearing: but pretty fucking cool.
aninteger
"Of course it runs Doom"

Nice work!

RosanaAnaDana
We were so busy wondering if it could be done, we never stopped to ask if it should be done. Will the future forgive us?
swiley
You can get absolutely tiny chips (maybe 4x the area of the shadow of a ball-point pen ball) that can run Linux for ~$1. Computers in IoT cost nothing unless you need to do graphics or run Electron/ML.
baybal2
That's a very powerful lightbulb!

It has more CPU perf than my first computer, and costs 1000 times less at the same time.

The progress of semiconductor industry is fantastical.

failwhaleshark
Let's make a fortune by making lamp crypto malware.

Who's down?

How do I type IDSPISPOPD on this thing?

clownpenis_fart
wow who would have believed that doom could be easily ported to any sufficiently powerful cpu+framebuffer hardware
etrautmann
What an awesome project. I need to update my intuitions a bit
chews
Golf clap good human!
systemvoltage
Constrained resources often lead to exceptionally brilliant software. It’s as if resource constraints inspire a form of discipline and frugality that’s demanded of us. We become acutely aware of it and steer our brains away from complacency. From the Apollo program to today’s Internet-of-Shit, somewhere we lost our ability to focus and bloated ourselves on a sugar high of processing power and memory. Just because it exists, and is cheap, doesn’t mean it’s good for you.
runawaybottle
Constraint is the mother of all creativity. Like Andy Dufresne and the rock hammer: how about I give you nothing, could you possibly dig your way out?
Jare
Creativity would ensue, success probably not.
kregasaurusrex
The creator posted a full write-up here: https://next-hack.com/index.php/2021/06/12/lets-port-doom-to...
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.