HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Oculus Connect Keynote: John Carmack

Oculus · Youtube · 5 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Oculus's video "Oculus Connect Keynote: John Carmack".
Youtube Summary
Oculus CTO John Carmack discusses the Gear VR and shares development stories at Oculus Connect. Learn more about Oculus Connect: http://bit.ly/OculusConnect2014

Official Oculus Channels:
Oculus: http://www.oculus.com/
Facebook: https://www.facebook.com/oculusvr
Twitter: https://twitter.com/oculus
Instagram: http://instagram.com/oculusvr

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
> What was Carmack's development focus during Oculus?

I'm not sure exactly, but there are a bunch of keynotes from Carmack around Oculus Connect on YouTube. Here's one from 2014: https://www.youtube.com/watch?v=gn8m5d74fk8, and another from a year later where he's live coding in a VR environment: https://www.youtube.com/watch?v=ydyztGZnbNs. You'll find a bunch more if you search, too.

I remember skimming through them years ago, not because I'm into VR development, but because I've been a lifelong fan of Carmack's work and like listening to him talk about things.

Nov 08, 2016 · pfranz on Inside Magic Leap
Found where Carmack talked about it[1].

For VR, the consensus I've seen seems to push for multi-sample AA > full-screen AA > temporal AA (I feel like I've even seen claims that no AA is better than temporal AA). I'm not quite sure if it's performance, architecture (they really prefer forward renderers over deferred renderers for low latency), or aesthetics. In the little bit I've done, just playing with the knobs, temporal AA looks better to me. Without AA, specular highlights are way too distracting and pop too much.
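For what it's worth, the reason multi-sample AA pairs naturally with forward rendering is that the hardware shades once per pixel and resolves the coverage samples for you, whereas with a deferred G-buffer that resolve gets awkward and expensive. A rough sketch of the setup in desktop OpenGL (assuming an existing GL context and function loader; the 4x sample count and per-eye resolution are just illustrative):

    // Sketch only: a 4x multisampled render target for a forward renderer,
    // resolved to a single-sample framebuffer before lens warp / scan-out.
    GLuint msaaFbo = 0, msaaColor = 0, msaaDepth = 0;
    const int width = 1080, height = 1200;   // illustrative per-eye resolution

    glGenFramebuffers(1, &msaaFbo);
    glGenRenderbuffers(1, &msaaColor);
    glGenRenderbuffers(1, &msaaDepth);

    glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaDepth);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);

    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaColor);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msaaDepth);

    // ... draw the forward-shaded scene into msaaFbo ...

    // Resolve: average the 4 coverage samples per pixel into the default framebuffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);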

This reference[2] talks about the extra velocity buffer needed for temporal AA, and the fact that it tends to over-blur, which can fuzz out fine details (low resolution is a touchy subject in VR).
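To make that concrete, here's a rough CPU-side sketch of the core temporal AA resolve (plain C++, illustrative names, not any particular engine's code): use the velocity buffer to reproject each pixel into the history buffer, then blend mostly-history with a little of the current frame. The heavy history weight is exactly where the over-blur comes from:

    #include <algorithm>
    #include <vector>

    struct Vec2 { float x, y; };

    // Sketch only: one temporal-AA resolve step over flat grayscale arrays.
    // 'velocity' is the extra per-pixel buffer mentioned above: screen-space
    // motion from the previous frame to the current one, in pixels.
    void taaResolve(const std::vector<float>& current,   // current frame
                    const std::vector<float>& history,   // accumulated previous result
                    const std::vector<Vec2>&  velocity,  // per-pixel motion vectors
                    std::vector<float>&       output,
                    int width, int height, float blend = 0.9f)
    {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                int i = y * width + x;
                // Reproject: where was this pixel last frame?
                int px = std::clamp(int(x - velocity[i].x + 0.5f), 0, width  - 1);
                int py = std::clamp(int(y - velocity[i].y + 0.5f), 0, height - 1);
                float hist = history[py * width + px];
                // Blend mostly-history with a little of the current frame.
                // This is the source of the over-blur: any reprojection error
                // gets averaged in at ~90% weight, fuzzing out fine detail.
                output[i] = blend * hist + (1.0f - blend) * current[i];
            }
        }
    }

Production implementations also clamp the history sample against the current pixel's neighborhood to limit ghosting; the resolve above skips that to keep the basic idea visible.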

There are a lot of moving pieces in these decisions. In talking about interlacing, he was ideally talking about not having to create a whole image buffer before sending data to the hardware--just rendering the parts (well, scanlines) of the image that moved (sketched below, after the references).

[1] https://youtu.be/gn8m5d74fk8?t=10m54s [2] https://www.youtube.com/watch?v=Xk3WUk5T2TQ&feature=youtu.be...
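As a toy illustration of that idea (entirely hypothetical C++; pushScanline() is a made-up stand-in for whatever the display hardware would expose, not a real API): compare each scanline against the previous frame and push only the ones that changed, so fresh data can start scanning out before a complete frame exists:

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Stand-in for the display interface; assumed here, not real hardware API.
    static void pushScanline(int y, const uint8_t* pixels, int width)
    {
        (void)y; (void)pixels; (void)width;  // would send the line to the panel
    }

    // Sketch only: per-scanline updates instead of whole-frame presents,
    // using one byte per pixel to keep the example small.
    void presentChangedScanlines(const std::vector<uint8_t>& frame,
                                 std::vector<uint8_t>& previous,
                                 int width, int height)
    {
        for (int y = 0; y < height; ++y) {
            const uint8_t* cur  = &frame[y * width];
            uint8_t*       prev = &previous[y * width];
            // Only lines that actually changed get pushed to the panel.
            if (std::memcmp(cur, prev, width) != 0) {
                pushScanline(y, cur, width);
                std::memcpy(prev, cur, width);
            }
        }
    }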

Jun 12, 2015 · lhl on Oculus Rift
While this may be true for high-fidelity gaming, mobile GPUs are fine for moving some textured triangles around. They also can (and do) perform much better clock-for-clock than a PC (and especially a Mac), thanks to the ability to optimize drivers. For a fascinating discussion of the optimizations possible, Carmack's 2014 Oculus Connect keynote is well worth watching: https://www.youtube.com/watch?v=gn8m5d74fk8

This talk is slightly long, but it's a very informative discussion of the various challenges in reducing latency and dealing with PC rendering: https://www.youtube.com/watch?v=PoqV112Pwrs

Excellent list of things to keep in mind.

Of course, being The Statue is allowed, if you happen to be John Carmack, legs planted firmly on the stage like a colossus: https://www.youtube.com/watch?v=gn8m5d74fk8

Speaking of "frameless" rendering, I noticed during Carmack's Oculus keynote (https://www.youtube.com/watch?v=gn8m5d74fk8#t=764), he talks about trying to persuade Samsung to integrate programmable interlacing into their displays in order to give dynamic per-frame control over which lines are being scanned.

This would give you the same "adaptive per-pixel updating" seen in your link, though aimed primarily at the problems specific to HMDs (low persistence at high frame rates).
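Roughly what that per-frame control might look like from the software side (pure speculation; no such display interface shipped, and selectScanlines() is just an illustrative name): rank scanlines by how much motion they contain and spend a fixed scan budget on the ones that most need refreshing:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Sketch only: choose which scanlines to refresh this display cycle, given
    // a fixed budget, preferring lines with the most motion. The "programmable
    // interlacing" part -- telling the panel which lines to scan -- is the piece
    // Carmack wanted from the hardware; this selection logic is hypothetical.
    std::vector<int> selectScanlines(const std::vector<float>& motionPerLine,
                                     std::size_t budget)
    {
        std::vector<int> lines(motionPerLine.size());
        for (std::size_t i = 0; i < lines.size(); ++i) lines[i] = int(i);

        // Most motion first; static lines can coast on the previous scan.
        std::sort(lines.begin(), lines.end(), [&](int a, int b) {
            return motionPerLine[a] > motionPerLine[b];
        });

        lines.resize(std::min(budget, lines.size()));
        std::sort(lines.begin(), lines.end()); // scan out in top-to-bottom order
        return lines;
    }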

agumonkey
Weird, I missed this part. Vaguely reminds me of E. Sutherland's fully lazy, streamed computer-graphics generation, since they had no framebuffer at the time.
dgallagher
This AnandTech overview of nVidia's G-Sync is worth reading (meshes a bit with what Carmack mentioned about CRT/LCD refresh rates in that talk): http://www.anandtech.com/show/7582/nvidia-gsync-review

It's a proprietary nVidia technology that essentially does reverse V-Sync. Instead of having the video card render a frame and then wait for the monitor to be ready to draw it, as with normal V-Sync, the monitor waits for the video card to hand it a finished frame before drawing, keeping the old frame on-screen as long as needed. The article goes into a little more detail; they take advantage of the VBLANK interval (a legacy from the CRT days) to get the display to act like this.
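One loose way to picture the "who waits for whom" difference (sketch-quality C++ with invented timings; renderFrame() and swapBuffers() are stand-ins, not a real driver or display API):

    #include <chrono>
    #include <thread>

    static void renderFrame() { /* draw the scene; may finish early or late */ }
    static void swapBuffers() { /* present the finished frame */ }

    // Normal V-Sync: the GPU finishes a frame, then sits until the next
    // fixed refresh boundary (~16.7 ms at 60 Hz) before it can swap.
    void vsyncLoop()
    {
        using namespace std::chrono;
        const auto refresh = milliseconds(17);             // fixed panel refresh
        auto nextScanout = steady_clock::now() + refresh;
        for (;;) {
            renderFrame();
            auto now = steady_clock::now();
            while (nextScanout < now) nextScanout += refresh; // missed it: wait for the next one
            std::this_thread::sleep_until(nextScanout);        // GPU waits on the monitor
            swapBuffers();
            nextScanout += refresh;
        }
    }

    // G-Sync style: the panel holds the previous image (by stretching VBLANK)
    // until the GPU hands it a finished frame, then scans it out immediately.
    void variableRefreshLoop()
    {
        for (;;) {
            renderFrame();
            swapBuffers();   // monitor waits on the GPU; no fixed boundary to miss
        }
    }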

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.