HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Why Unreal Engine 5.1 is a Huge Deal

Unreal Sensei · YouTube · 156 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Unreal Sensei's video "Why Unreal Engine 5.1 is a Huge Deal".
YouTube Summary
Unreal Engine 5.1 launched and it finally brings the power of nanite to foliage.

Want to learn Unreal Engine 5 - checkout the UE5 Starter Course:

Unreal Masterclass:

00:00 - Intro
00:13 - Nanite Explained
1:18 - Nanite Foliage!
3:15 - Large Forest
3:38 - Leaf Scattering
4:17 - Glass and Water Reflections
6:15 - Lumen Improvements
7:05 - Path Tracing
7:25 - GPU Lightmass
7:41 - Large Worlds
8:15 - Layer Bars

Follow Me -
HN Theater Rankings
  • Ranked #1 this week · view
  • Ranked #5 this month (nov/dec) · view

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Nov 27, 2022 · 152 points, 95 comments · submitted by ksec
5.1 also includes improved DMX Plugin for proper lighting shows, the plugin adds support for DMX over Ethernet (sACN, Art-Net) and allows lighting fixtures based on devices imported via GDTF [1], [2].



I’m way into DMX and lighting but only familiar with Unreal as a game/3D engine. What kinds of lighting things are people doing with Unreal?
One big thing is Virtual Production, where we shoot video in front of large LED screens showing a 3D scene in Unreal. DMX allows seamless integration of live show control lighting with the lighting in Unreal.
If you want to help debug the build of UE5 on native elf/linux, on Steam, get the demo of "vein", play it, and report back the bugs to help the devs push them upstream.
When real-time rendering looks this good, low-budget CGI movies or TV series can look even better.
When it is real-time, we may see this get used for live programming too, like The Weather Channel, or for dynamic animated ads put over whatever you're watching.
Every kind of technology can be abused, doesn't mean progress should stop.
The Weather Channel and Unreal Engine 4, 4 years ago:
There is a stark cliff between good and "slightly off" CGI, made of effort, skill level, art direction, and knowing the limits of the technique. How else do you explain District 9 from 13 years ago vs. e.g. the $200 million budget Black Widow from a year ago?

Meaning: The best tools in the wrong hands can still produce mediocre results.

I recently watched Jurassic Park 1 and Jurassic World 1 back to back; it's the perfect example of that.
The VFX studio puts out what they are briefed to do. When you go to SIGGRAPH or FMX and watch the long making-of presentations, most of the time they had better designs or comps and had to change them because of the director/studio. The Cats presentation was fascinating: they had really good designs for the cats and had to go with the abomination we saw.
Related story from a few months ago: "Hollywood’s visual effects crisis":
District 9!! Thanks for reminding me about such a funny movie.
Why do you say funny?
Wasn't that Jackson's first big deal?
Blomkamp. Jackson's first big deal was Braindead/Dead Alive. Still his best movie, imho.
Ah! Thanks for the corrections!
Peter Jackson? If so, no. District 9 came out years after LOTR and Peter Jackson only produced it. The advertising however splashed his name all over the movie to try to associate it with LoTR so that more people would go and see it. "From the Director of a few car and shoe commercials, plus a couple of indie short films you definitely haven't seen" doesn't really have the same ring.

Fun fact: District 9 was born out of a failed attempt by Peter Jackson to make a Halo movie. Due to financing troubles the Halo movie ended up being put on hold, but since they already had a team and all these sets in place, they decided to use that to make a cheaper and quicker movie instead, and so District 9 was born.

Someone will mention AI in this thread I promise you.
Exaggeration in blockbuster movies doesn't help in this respect. Action sequences are so far beyond reality that no matter how well the CGI is done, it will still look unrealistic. District 9, on the other hand, in addition to artistic excellence, was also grounded in terms of plot, which helped with suspension of disbelief.
The TL;DR of the video is that 5.1 can render foliage (trees/shrubs) better, and that is a significant advance.

It's not that it's a bargain, or a good value for the price. The title of the video is "Why Unreal Engine 5.1 is a Huge Deal".

Without the word "Huge" it comes across rather differently.

There's a HN auto filter that removes unnecessary words from titles

That could be the cause

These are the weird and visible cases of HN's title rewriting algorithm going wrong.

I guess the cases where it does the right thing are not noticeable, and that's why I only see the bad cases. Are there examples where HN title rewriting did something good, or cases where, before HN did this, titles were very bad or clickbaity?

Thanks! (Including the descendant commenters.)

Wondering about that is why I bothered to comment at all. I assumed there must be some logical-but-not-quite-working reason. :-D

Expressions like "huge deal" should perhaps be left alone. Either that, or replaced with "significant" or "important". But that too seems silly.
Perhaps? It completely changes the original meaning.
That's definitely not all it can do.

Foliage without pop-in, now inside Nanite, is cool, but Nanite still has work to be done (AFAIK you still cannot deform and animate Nanite assets). Lumen's advances to get translucent materials working are massive. Large Worlds is super cool. And I can't remember if it's in 5.1 in beta, but it brings a new layering model for complex materials that works better than clear coat (the current 2-layer model, like a car with blue paint and a protective layer on top), with many more possibilities.

Putting those abilities into the hands of anyone is a massive thing (well, putting all of UE5). I've seen firsthand people with absolutely no experience in gamedev but in other media production get up to speed with building environments in less than a week, complete with animations & all. Making small virtual worlds is available to basically anyone.

And yes, the author of the video has a target audience of people who already use Unreal Engine. To these people, 5.1 is pretty damn massive.

2:20 for the impatient.

This really is a huge deal.

For gamers: Unreal 5.0 removed the awkward transitions between low- and high-poly assets by removing the need for low- and high-poly assets (called Levels of Detail, or 'LODs') at all. The engine just works out how many polys are necessary.

But 5.0 didn't do this for foliage - so trees and shrubs and forests look crappy when you zoom out.

5.1 does this for foliage.

So you can now...

See the forest from the trees.

(sunglasses emoji)

The next timestamp that jumped out at me was around 3:45. The camera pans around within the forest, showing that all the leaves are also now translucent, with the result that shadows suddenly become more accurate. Wow.
You could say you nailed it, nailer.

I saw this video in my YouTube recommendations, and man am I glad they're figuring out how to phase out billboarding. One of my pet peeves in video games is pop-in and billboarding of far objects.

> One of my pet peeves in video games is pop-in and billboarding of far objects.

Yep pop-in within natural environments is one of the last big immersion breakers. Fly or climb to what should be a majestic view and suddenly fake-looking trees spoil everything.

That's one of them for me. A few others are:

- visible banding, especially in skies, in some games. This is very possible even with 24-bit color in these types of gradients; sometimes the monitor itself can cause it too. Dithering should fix this (24-bit color dithering should be almost unnoticeable), and since banding looks so ugly this would be a great fix.

- textures that repeat themselves. Even in some modern games like Cyberpunk 2077, you'll see e.g. a repeated pattern of reflective rain water on road surfaces. Imho, have 2-4 variations of the same texture and tile them in a pseudorandom pattern; that completely eliminates this. Even if it meant 4x lower resolution to fit 4 variations in the same amount of memory, it'd look better.

- round things (barrels, round tables, a goblet, ...) that look like blocky polygons instead of circles, in games with otherwise incredible amounts of detail. Either use more triangles or, I wish these were a thing, quadratic surfaces. In games that are low-detail in general this matters less; there it fits the style.

Fix those things, and several immersion breakers would be gone!
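The dithering fix suggested above can be sketched with a classic ordered (Bayer) dither: before quantizing a smooth gradient to discrete color levels, add a tiny position-dependent threshold so banding breaks up into fine, near-invisible noise. A minimal Python sketch (the function names are illustrative, not from any engine; a shader would do the same math per fragment):

```python
# 4x4 Bayer matrix, values 0..15; normalized below to a [-0.5, 0.5) offset.
BAYER_4X4 = [
    [0, 8, 2, 10],
    [12, 4, 14, 6],
    [3, 11, 1, 9],
    [15, 7, 13, 5],
]

def dither_quantize(value, x, y, levels=256):
    """Quantize a [0, 1] value to `levels` steps, adding an ordered-dither
    offset based on pixel position so smooth gradients don't band."""
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0 - 0.5  # [-0.5, 0.5)
    q = round(value * (levels - 1) + threshold)
    return max(0, min(levels - 1, q)) / (levels - 1)
```

Over a 4x4 block, a mid-gray input quantized to two levels comes out as a half-and-half checker of black and white, which is exactly the noise that hides the band edges.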

> textures that repeat themselves.

Use this one weird trick to fix that.[1]

The theory: [2]

This can be done in the GPU, so you only need one copy of the repeating texture in VRAM. It's good for semi-random patterns such as dirt, gravel, water, grass, asphalt, forest floors, etc. Not good for regular patterns such as bricks and tiles.
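The simpler per-tile variation idea from the comment further up (2-4 texture variants picked pseudorandomly) boils down to hashing the integer tile coordinates; a GPU would do the same per fragment. A CPU-side Python sketch (all names are illustrative):

```python
def tile_hash(tx, ty, seed=0):
    """Cheap integer hash of tile coordinates: a stand-in for the
    per-tile randomness a shader would compute."""
    h = (tx * 374761393 + ty * 668265263 + seed) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return h ^ (h >> 16)

def pick_variant(u, v, tile_size, n_variants=4):
    """Map a texture coordinate to one of n_variants tile textures,
    so a regular grid of identical tiles becomes a pseudorandom mix."""
    tx, ty = int(u // tile_size), int(v // tile_size)
    return tile_hash(tx, ty) % n_variants
```

The hash is deterministic, so the same world position always samples the same variant (no shimmering), yet neighboring tiles rarely line up into a visible repeat.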



Years ago I worked on an ill-fated game project with the Unreal 1 engine. It is dumbfounding to me to compare that to the modern engine. It's just incredibly sophisticated.
Love it. It supports Ubuntu 22.04 with a 22GB download size. I assume I need to find a better desktop with a better graphics card to run this; any recommendations?
You can get an external GPU (eGPU) to keep your current setup. Check the Razer Core X, and add a graphics card of your choice.
Thanks! Looks like it only supports Windows and macOS though.
There are numerous posts of people getting it working on Linux, with some caveats:
The egpu forums might be a good resource:

Worked fine on i7-9900k + AMD RX 580. Not sure about supporting nanite/lumen though.
What about the epic launcher to download your marketplace assets or megascans?
Runs fine on Rocky 8.6 as well.
Just FYI, it also needs 200+ GB hard disk space to install, I believe minimum GPU recommendation is Nvidia 2080 (or equivalent) with 4-6 GB GPU VRAM. I've seen posts online where they recommend higher-end chips and much more GPU memory (12+ GB VRAM) for building larger worlds and higher poly resolutions.
Is Unreal 5 now mostly software-rendered in the Nanite mode, with only large triangles (>32 pixels in diameter) hardware-accelerated?
Anything animated isn't supported either; only static meshes.
Everything you see made by the Unreal engine is hardware rendered. More and more computations are being offloaded to the GPU over time, not less.
I thought so as well, then I read the Unreal 5 SIGGRAPH 2021 presentation where they stated:

"Turns out we can beat the hardware with triangles much bigger than expected, far past micropoly. We software rasterize any clusters whose triangles are less than 32 pixels long."

It is still done on GPU, you just don't use its builtin triangle rasterizer and do the rasterization yourself using "compute shaders" (programs that run on GPU). It doesn't mean they use CPU for rasterization.
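For intuition about what "do the rasterization yourself" means: the core is an edge-function coverage test evaluated at every pixel a small triangle might touch. A scalar Python sketch of that inner loop (illustrative only, not Epic's implementation; a compute shader would run these pixel tests massively in parallel):

```python
def rasterize_triangle(v0, v1, v2):
    """Scalar sketch of edge-function rasterization: test every pixel
    center in the triangle's bounding box against its three edges.
    Vertices are (x, y) pairs in counterclockwise order."""
    def edge(a, b, p):
        # Signed area of (a, b, p); >= 0 means p is on the inner side of edge a->b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    xs = range(int(min(v0[0], v1[0], v2[0])), int(max(v0[0], v1[0], v2[0])) + 1)
    ys = range(int(min(v0[1], v1[1], v2[1])), int(max(v0[1], v1[1], v2[1])) + 1)
    covered = []
    for y in ys:
        for x in xs:
            p = (x + 0.5, y + 0.5)  # sample at pixel centers
            if edge(v0, v1, p) >= 0 and edge(v1, v2, p) >= 0 and edge(v2, v0, p) >= 0:
                covered.append((x, y))
    return covered
```

For triangles only a few pixels across, the bounding-box loop is tiny, which is why this can beat the fixed-function rasterizer that was designed around much larger triangles.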
This looks like something Euclideon / Unlimited Detail promised 10 years ago.

Is Nanite foliage just a shader mesh?
Title has been mangled to the point that the meaning is gone.
That’s a huge deal ;)
A bit off topic, but are there any free 3D real-time rendered scenes, where you can move around, that exist just to be pretty / show off the graphics card's abilities?
Yeah, the UE5 engine comes with a bunch of demos, like a city you can walk around in, with a bunch of buildings and cars and streets and such.
Archviz (architecture visualization) is very good for this. Just search “ue5 archviz“ on YouTube
If you're looking for a raw "benchmark" style program to measure performance of your CPU and GPU for 3D tasks, I highly recommend the open-source Blender Benchmark . It also has a bunch of existing data points for a large number of CPUs and GPUs already.
There are a bunch of demos for the unreal engine that you can download. I wanted to try them out recently after I built my new computer but unreal engine crashes every time I launch it. Oh well.

Edit: funny, reducing my DDR5 memory speed seems to have fixed unreal engine. New platforms are fun.

Some Unreal Engine demos:

- 2013 - The Valley [1] A nice, explorable forest with weather. Requires a reasonably powerful PC and graphics card.

- 2017 - Superposition.[2] A very detailed lab scene. Requires a good gamer PC.

- 2021 - Valley of the Ancient [3] The first UE5 demo. Requires a really good gamer PC.

- 2022 - The Matrix Awakens [4] The high-end UE5 demo. If you have to ask what hardware it requires, you can't afford it.

The Valley is from 9 years ago, and it still looks good. Try it first.

From a metaverse engineering perspective, hardware requirements limit what you can do. What's possible today is incredibly good. What you can run on the average user's laptop in the browser is way, way below that level. This is the era of the $1000 phone and the $100 laptop. GPU price/performance hasn't improved much in years. NVidia doesn't market anything below US$250 any more, although some old cards are sometimes available.

The average Steam user has an NVidia 1060, released in 2016. It cost $250 back then, and it costs about $250 now on Amazon.

Yes, there's cloud gaming, where the GPU is in a data center. Now look at the pricing on cloud gaming. Many cloud gaming companies have gone bust, including Google Stadia, because the economics don't work. To get a mass market, you have to sell at a loss. To run at a profit, the pricing looks like Shadow PC.





You could actually already preserve the shadows when using billboards in Unreal by using distance field shadows.
Unreal is the platform for the metaverse
Disagree. A good metaverse (not the corporate hellscape metaverse of Snow Crash) requires copyleft, and UE is not it. I had been devving in UE since 4 was a paid product, until I realized there are a lot of questionable practices by Epic and TS, and that the license was actually quite bad, and moved to Godot. Are there missing features? Yes, but many features in 4 are quite nice, and not too far behind UE/Unity.

I had been thinking about my version of the metaverse for a long time and even before FB did their changeup, my conclusion was that this is exactly the reason the corporate metaverses will fail in the long run.

Look I'm no lover of Unreal or Unity and adore Godot but...

> Are there missing features? Yes, but many features in 4 are quite nice, and not too far behind UE/Unity.

Godot is about infinitely far behind Unreal. That doesn't mean it's not useful, but Unreal is borderline research: cutting-edge techniques applied in a production engine. And Godot will continue to lag until they start putting out techniques before Unreal. Unreal has the advantage that they can just hire researchers before their papers are published.

Their business model is sane, sustainable, and pretty flexible.

> until I realized there are a lot of questionable practices by Epic and TS

Such as?

> and that the license was actually quite bad

In what ways? It's about the best commercial license I can imagine.

Most of the stuff people talk about as next gen is more like feature sugar, while the core of Godot has lots of the same core features in a different state. I recompile main branch almost daily, and things like Vulkan are working very well. Of course I'm not saying feature parity, but more to the point that despite the lack thereof, licensing itself is one of the things that does and will make Godot so powerful and why I think it or another FOSS engine like it will become much more than what people assume now.

I won't go too much into Epic's lies about the engine and other issues (things like refusing all of the Linux editor PRs while a community-run fork was better maintained, not releasing a launcher, using phone-home stuff; to be fair, this was UE4 days), TS talking lots of shit about Linux, pushing more launcher fragmentation via store exclusives, and other things that just showed me repeatedly that Epic is not to be trusted... and that's before we even get to the license stuff.

Commercial licenses are bad, copyleft is good. Even for games, and sometimes I miss out on a few features by making almost all of my daily stack gpl-compat, but I also believe that computing is an inherently philosophical choice and I wish, more than anything on the topic, that people would get over their fear of selling copyleft software.

Tron fought for the user. That's what copyleft does. Commercial licenses do not. Neither do BSD-style licenses.

It's already a cooler name _for_ the metaverse if you think about it. Welcome to the Unreal.
Wow, yes! "The Unreal". That's really good.
“And so it was that men themselves conjured The Unreal, without hint of irony or foreshadowing, but rather out of boredom on a pretty standard Sunday afternoon.”
Just a few weeks ago I remember reading that Zuckerberg wanted to acquire Unity as part of their leaked VR strategy.

I wonder why they didn't consider Unreal instead?

Epic already has a big big backer in Tencent
I’m curious what Zuckerberg thinks when he sees things like this compared to Meta’s own attempts. The sheer magnitude of the difference has to make him wonder (and if not, shareholders certainly have!)
Unreal is a platform for creating applications. The Meta product you refer to is an application built on such platforms (not Unreal in this case).

Meta's app has to run on what is essentially a mobile phone. You can barely run Nanite in VR on desktop, let alone on a mobile phone.

Not saying Meta's product couldn't be better but we are talking "better like Rec Room" not "better like UE5's best efforts"

Take a look at Roblox and their 60M DAUs
Mark isn’t stupid. His metaverse is specifically targeting lower-end hardware so it can run on standalone headsets, and so eventually those headsets can be cheap.

A lot of the earlier PC Oculus UI was unreal engine anyway.

I know it’s not practical but I wish Oculus had a mode where you could plug it in and the graphics were supercharged.
Indeed. Carmack isn’t stupid either, and he is pushing harder than Mark for “more avatars” before “hi res avatars”.
If only.

Unreal Engine relies on two things large metaverses don't have - precomputation of entire scenes in the creation tools, and extensive asset reuse. In a real metaverse, there are thousands of creators, all making assets. Second Life has over 60,000 different chairs for sale. Each was created independently. The world is assembled dynamically. There is no external "level editor" for a good metaverse, only object editors. You build the world in the world. Although some of the low-end metaverses, like Decentraland, do make you edit your entire land parcel externally and upload it as a unit.

Epic might eventually offer a more dynamic system. It would involve back-end servers doing much of the optimization that the Unreal Editor does now. Maybe in UE6 or UE7. It's certainly possible, but a big job.

Nanite does not eliminate levels of detail. It just does them within meshes, rather than external to them. In most computer graphics, you have explicit objects which appear more than once. That's called instancing. But within a mesh, there's no optimizing out duplicate submeshes. That's what Nanite does. A Nanite mesh, rather than being big lists of vertices and triangles, is a directed acyclic graph, in which common subsections are combined. So, if you have a huge area of buildings, but not that many different windows, the windows are unduplicated.

Notice that in the UE5 demos, they have lots of dirt and rocks, lots of copies of the same statue, and buildings which have a lot of repetition. That's the key here. Those are all things for which this optimization works. Even then, the download for the Matrix Awakens demo is about 16GB, which expands to about 250GB after decompression.

The instancing is recursive. A good example would be a long chain-link fence. If you zoom in close enough, you can see the bends in the wire and the flaws in the galvanizing. Maybe now and then there's a piece of trash or a leaf stuck in the fence. If you zoom out far enough, you can see kilometers of fence. That can all be one Nanite mesh. And if you need to cut a hole in the fence somewhere, that will work.

The level of detail system is automatic. The mesh representation contains within it level of detail information. Nearer areas go further down the DAG to higher levels of detail. There's a clever geometry trick which makes the transitions look good. In general, the idea is to maintain about one triangle per screen pixel. The key to all this is that there are only so many pixels on the screen, and that controls how much geometry detail needs to be displayed. So there are still levels of detail, at a fine-grained level.
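The "about one triangle per pixel" target can be pictured as a simple descent: each step down the detail hierarchy halves triangle edge length, so you stop at the level where projected edge length reaches roughly one pixel. A toy Python sketch using a pinhole-camera model (all names and the exact camera math are illustrative, not Nanite's actual code):

```python
import math

def pick_lod(base_edge_world, distance, fov_y_deg, screen_height_px, max_level):
    """Pick the coarsest detail level whose triangles still project to
    about one pixel. Level 0 is coarsest; each finer level halves
    triangle edge length (and roughly quadruples triangle count)."""
    # World-space size covered by one pixel at this distance (pinhole model).
    world_per_pixel = 2.0 * distance * math.tan(math.radians(fov_y_deg) / 2.0) / screen_height_px
    level = 0
    edge = base_edge_world
    while level < max_level and edge > world_per_pixel:
        edge /= 2.0  # descend to a finer level
        level += 1
    return level
```

Because the stopping condition depends only on distance and screen resolution, total displayed geometry stays roughly proportional to pixel count, which is the whole point of the scheme.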

All this, unfortunately, turns out to be badly matched to what GPUs do. So about 60-70% of the rendering is done in the main CPUs. Nanite really needs a new generation of GPUs, ones that are good at chasing around complex data structures with internal links and offsets.

If you want to understand Nanite, here's the 155 page paper.[1] There's a video which goes with that. It's brilliant, but it's not magic.


The return of the raycasting renderer, with a vengeance. ;)
Those reflections and lighting effects are incredible!
Just in time for AI to solve the entire problem.
The narrator mispronounces foliage every single time. It has three syllables, not two.
English has a wide range of accents, and the correct pronunciation of even common words can vary widely.
They do the same with “quality” too so I guess it’s just their accent.
Yes, I was thinking ESL; no big deal either way. I noticed it with the pronunciation of cupboard as two separate words.
ESL is not a likely explanation for someone using a pronunciation of "foliage" that is counterindicated by the spelling. (It is a much more likely explanation for pronouncing cupboard according to the spelling.)
I think more along the lines of what people hear everyday. If you're in an English speaking country day in day out, you will hear it phonetically. If not, much more possibility of interpreting a written word differently.
> If you're in an English speaking country day in day out, you will hear it phonetically.

Yes, that's how you get pronunciations that differ from the spelling.

If a word is rare enough, its pronunciation will be regularized to match its spelling. And for a foreign speaker, almost all words are that rare.

That two-syllable pronunciation initially struck me as wrong too, but it turns out:

> The disyllabic pronunciation \ˈfō-lij\ is very common. Some commentators insist that foliage requires a trisyllabic pronunciation because of its spelling, but words of a similar pattern such as carriage and marriage do not fall under their prescription. [1]

The speaker otherwise has a fairly General American accent, so I'm curious if the two-syllable version is a geographic thing? Unfortunately, it was never included in the Harvard Dialect Survey [2].



>> The disyllabic pronunciation \ˈfō-lij\ is very common.

MW does also note that that pronunciation is nonstandard.

I checked the first 20 samples on youglish ( ). 16 used an unambiguous three-syllable pronunciation, 3 unambiguously omitted the middle syllable (#9, #16, #22†), and one (#12) was ambiguous. Of the disyllabic pronunciations, only #9 had a marked regional accent.

† Why is #22 one of the first 20 samples? Two of the earlier samples are different timestamps into the same video, which I only counted once, and one is missing, which I counted zero times.

I think it's a big deal because of efficient micropolygons. I know for myself, a custom sub-pixel-shaded micropolygon renderer running natively on Vulkan is what I am looking forward to most for Xmas holiday hacking ;)

Building a micropolygon rendering pipeline

This is really interesting, and brings to mind Larrabee, Intel's attempt at making a graphics card out of a chip with a ton of x86 cores and some specialized hardware, like texture samplers, but notably no rasterizer, as that was to be done in software, like Nanite does in UE5.

The traditional pipeline of `vertex shader->raster->pixel shader` has persisted basically since programmable graphics hardware has existed; it's refreshing to see some innovation now.

Nice recall about Larrabee - I seem to remember they were also going for a ray traced approach? Nanite feels like a stock 3D pipeline in most other ways when it comes to geo.

Something of note for Nanite, is that they still do use pixel shaders for some of the larger triangle workloads. But yes as you point out, no pixel shaders for the smallest micropolygons - they are running in compute - it's quite amusing.

Reading the CMU article, how do you actually make it fast? For each frame you need to collect all objects in the scene (for ray traced scenes anything that can be reflected), split them into micropolygons (either from a triangle mesh or from parametric meshes) and then render all this at e.g. 4K. Each of these steps is extremely demanding.
Segmentation on the CPU is pretty common, as there's a huge opportunity for caching. It's possible to push segmentation to the GPU as an optimization, but it might not be worth it if the GPU is busy enough with downstream operations. Even then, you still cache the results.

I would be surprised if they don't have a dynamic pipeline that can be optimized at runtime.

The segmentation happens while importing assets into the editor. 50% of the talk on Nanite is concerned with their new compression system for geometry, which enables seamless segmentation of meshes thanks to some clever graph theory trickery.
I can't remember the specifics to give you a summary, but this presentation was fascinating:
You do as much of that offline as possible. The "split them into micropolygons" happens at editor time, along with a search tree that makes it really easy to find them.

It's still very demanding! But the goal is that you bake as much of it as you can.
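The "search tree that makes it really easy to find them" can be pictured as a precomputed bounding-volume hierarchy over micropolygon clusters: baked offline, walked at runtime, rejecting whole subtrees with one test each. A toy Python sketch (the structure and the distance-based visibility test are illustrative, not the engine's actual criteria):

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    center: tuple                                  # bounding-sphere center (x, y, z)
    radius: float                                  # bounding-sphere radius
    children: list = field(default_factory=list)   # empty for leaf clusters

def visible_leaves(node, camera, max_dist):
    """Walk the precomputed cluster tree, skipping any subtree whose
    bounding sphere lies entirely farther than max_dist from the camera."""
    dx = node.center[0] - camera[0]
    dy = node.center[1] - camera[1]
    dz = node.center[2] - camera[2]
    if (dx * dx + dy * dy + dz * dz) ** 0.5 - node.radius > max_dist:
        return []  # entire subtree culled with a single sphere test
    if not node.children:
        return [node]
    out = []
    for child in node.children:
        out.extend(visible_leaves(child, camera, max_dist))
    return out
```

The expensive part (building the tree and splitting geometry into clusters) happens once at bake time; the per-frame query only touches the nodes near the camera.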

I suppose it depends on what you define "editor time" to be. Importing an asset into a scene live appears to be pretty seamless, at least in comparison to preprocessing stages of yore.
Nov 23, 2022 · 2 points, 0 comments · submitted by mfiguiere
Nov 22, 2022 · 2 points, 1 comment · submitted by ksec
because it works out-of-the-box on native elf/linux?
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.