HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
John Carmack's keynote at Quakecon 2013 part 4

Kostiantyn Kostin · Youtube · 357 HN points · 22 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Kostiantyn Kostin's video "John Carmack's keynote at Quakecon 2013 part 4".
Youtube Summary
Some of the things Carmack talks about in this video:
- OpenGL
- functional programming
- Haskell
- Lisp
- Scheme
- strong and weak typing
- multithreading
- events
- garbage collection
- QuakeC vs Scheme

Part 1:
new console cycle
AMD hardware
game controllers

Part 2:
Kinect
Digital distribution
Portable consoles
Android and iOS
Cloud gaming
Creative vision vs technology
Unified memory
PowerVR and tiled rendering

Part 3:
displays
head mounted display
movement tracking
sound
large scale software development
optimization
OpenGL

Part 4:
OpenGL
functional programming
Haskell
Lisp
Scheme
strong and weak typing
multithreading
events
garbage collection
QuakeC vs Scheme

Part 5:
programming

Q&A:
space
AMD vs Nvidia vs Intel GPUs
CPU architectures
GPU computing
id Tech 5
id Software company

Part 6 Q&A:
PC and upcoming console hardware
MegaTexture
virtual reality, augmented reality and Google Glass
voxel, ray tracing
AMDs virtual texturing
console cycle beyond Xbox One and PS4
SSD
strobe lighting in LCD technology
control devices advancement
when can a single person do a AAA game like MW3?

Part 7 Q&A:
id Tech5 and Tango Gameworks

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
> Lack of safety is a feature in game development,

Bullshit.

How many games have had releases delayed and major wrenches thrown into marketing plans because of progress-ruining heisenbugs? How many failed certification passes from banal data races and other undefined behavior?

We trap UI in actionscript or javascript, and gameplay programmers in other scripting languages - perhaps python or lua - for faster iteration times, hot reloading, and safety. Because it's difficult enough to keep the build stable when it's merely all the engine programmers who should know better screwing things up with C++.

This results in large messes of poorly performing, poorly optimized, garbage-collector-laden code that Rust would handle much faster. We're leaving lots of performance on the table, often for little other purpose than "safety".

Console first day patches may have taken off some of the pressure for getting the first release right, but handhelds aren't always online and still have a pretty high bar.

> the optimizations required are typically unorthodox

Rust's `unsafe` keyword and intrinsics let you do all the unaligned intrinsic-laden data-racey technically-undefined-behavior micro-optimizations you might want to do in C++ in Rust just fine.

It'll hopefully trigger a more stringent code review and force you to justify your pile of bugs, but that's a good thing. Or you can skip the code review if your entire company really disagrees.
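A minimal sketch of the kind of escape hatch being described (a hypothetical function, not from any real codebase): an unaligned read through a raw pointer, where the `unsafe` block marks exactly the spot a reviewer has to justify.

```rust
/// Read a little-endian u32 from an arbitrary (possibly unaligned)
/// offset in a byte buffer, the way a C++ codebase might type-pun.
/// The `unsafe` block is the part a reviewer is forced to look at;
/// the caller-visible function stays safe because we bounds-check first.
fn read_u32_unaligned(buf: &[u8], offset: usize) -> Option<u32> {
    if buf.len() < 4 || offset > buf.len() - 4 {
        return None;
    }
    // SAFETY: the bounds check above guarantees `offset..offset + 4`
    // is in range, and `read_unaligned` is defined for any alignment.
    let v = unsafe { (buf.as_ptr().add(offset) as *const u32).read_unaligned() };
    Some(u32::from_le(v))
}
```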

> a super strict language slows development down

C++ is also super strict, just in an unenforced-at-compile-time way that results in plenty of late nights chasing heisenbugs. Don't get me wrong - language strictness can slow development down - but that's one of the reasons people eschew C++, too.

As for C++ vs Rust? I'm going to spend more time and be far less certain of catching the issues in a C++ code review than I would be in a Rust code review. And while it took a few months for my development speed in Rust to catch up with my development speed in C++, it did happen.

Rust merely forces you to acknowledge when you're being sloppy.

> Additionally, object ownership can be unclear in a game development setting, which typically makes use of global variables for state.

I have solved so many sources of endless heisenbugs by eliminating some of these global variables. John Carmack as far back as 2013 was using phrases like "horror show" to describe similar parts of his own codebases[1] and has been agitating for more functional styles.

An unclear mudball of global variables is entirely possible - and easy - in Rust if you really want that. You merely need to make it either thread safe, or resort to `unsafe` if you really can't tolerate the overhead of not having threading-related heisenbugs.

[1]: https://youtu.be/1PhArSujR_A?t=125
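The "make it thread safe" option above can be as simple as wrapping the global in a lock; a minimal sketch with made-up names:

```rust
use std::sync::Mutex;

// A "global" in safe Rust: one instance, visible everywhere, but every
// access goes through the Mutex, so data races are impossible by
// construction. (The alternative being alluded to is `static mut`
// plus `unsafe`, which skips the lock and the guarantee.)
static FRAME_COUNTER: Mutex<u64> = Mutex::new(0);

fn tick() -> u64 {
    let mut n = FRAME_COUNTER.lock().unwrap();
    *n += 1;
    *n
}
```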

shultays
You tell me. How many?

I am a game developer that mostly works with C++ and my answer is none so far. Projects get delayed, but not because of the language.

Undefined behavior is C++'s strength. If it was not, we wouldn't have it in the first place

MaulingMonkey
Just counting my personal experience, I've seen multiple titles fail cert and delay release thanks to bug backlogs that were mostly filled with memory issues, even after months of working to reduce the backlog of crash bugs. Perhaps you can't blame that entirely on the C++ codebase, but you sure can't absolve it entirely either.

At every single studio I've worked at I've helped build out crash collecting, symbolizing, or deduplicating infrastructure just to get a lead on where shit is exploding, and seen major productivity outages when tooling crashed frequently enough to disrupt the workflows of my coworkers. Address sanitizer, valgrind, extra debug allocators - I've sunk a lot of time into just making these crashes shallower to debug and quicker to catch.

One of the smallest and quickest projects that comes to mind was an NDS -> iOS and Android port. Do you think most of the time was spent in porting APIs and control schemes? More than half the time on that project was spent just chasing down really stupid memory related crash bugs that were exacerbated in frequency and time by the port. I made damn sure to communicate my concerns about the schedule, and if my memory serves me correctly, we managed to soft launch "only" a week or two late.

C++'s strengths are its lean portability, speed, and huge existing codebases and talent pools. Undefined behavior is simply the cost we begrudgingly pay, not its strength, and even that we often mitigate by trying to figure out ways to write as little C++ as possible - keeping it only for the parts of our codebase that are actually performance sensitive. Tooling is often C#, Python, anything but C++. UB is a pox and a constant time sink.

Whenever people call out C++'s UB as its strength, I'm left wondering how many of their bugs they actually fix, and how many are begrudgingly fixed by their coworkers. Most of the people I've worked with who have that attitude don't pull their own weight in the debugging department, so they don't see the full costs of it.

shultays
I don't share the same experience. Crashes do happen but I wouldn't call the time sunk there a lot. If you get a ctd, usually a callstack and a bit of thought is enough for figuring out what is going on. At worst case you have valgrind. Very rarely it goes beyond that, but even then I would say it is a worthwhile trade-off for how relaxed C++'s memory management is compared to Rust's. And I believe C++ is relatively safe when you have well designed systems where ownership is clear enough. I have a feeling that if my company suddenly flipped to Rust now, the struggling against borrow checker would be a bigger time sink

For me the majority of game development is feature development and fixing gameplay related bugs, which will be the same regardless of the language you use. And depending on the language, feature development might be even slower

MaulingMonkey
> I don't share the same experience.

You can say that again!

> If you get a ctd, usually a callstack and a bit of thought is enough for figuring out what is going on.

Very different experiences. Sometimes that happens.

Sometimes the symbols have been evicted from the symbol server. Sometimes the minidump didn't capture relevant memory (and the full dump would be a couple dozen gigs, making external QA reluctant to constantly capture those). Sometimes the crash is the result of memory corruption from unrelated systems minutes ago, and doesn't reproduce when you enable any of your debug allocators because reproing relies on pointer reuse in a hashtable. Frequently custom allocators defeat tools like valgrind and address sanitizer, requiring extra work to either bypass them or explicitly annotate valid/invalid memory ranges. Sometimes the process only exit(3)s (technically not even a crash!) from an unrelated thread, on a specific bit of UI, with no relevant callstacks nor logging, and only if you open up the Windows charm bar for more than 10 seconds without a debugger attached - and you resort to bisecting p4 history once you've spent several days even figuring out repro steps.

That wasn't even our bug, but it was our workaround... even when the codebase is good, the middleware often isn't! And that is but one of many I've had to deal with.

> C++ is relatively safe when you have well designed systems where ownership is clear enough.

Meanwhile, the poster I was originally replying to was pointing out that you can have unclear object ownership full of globals and C++ is somehow supposedly good at dealing with this. Hopefully you're at least agreeing with me here, in disagreeing with that! :)

A lot of gamedev code isn't very well designed, IME, and "relatively" safe can be surprisingly unsafe as well.

> I have a feeling that if my company suddenly flipped to Rust now, the struggling against borrow checker would be a bigger time sink

The first couple months after I picked up Rust, I had that phase. Now it's quite easy for me - sometimes it requires a deep `.clone()` or two, but in the equivalent C++ codebase you wouldn't have even dreamed of not making the deep copy - because in equivalent C++ code it'd be impossible to do the equivalent fancy zero-copy borrowing nonsense even remotely safely.
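A sketch of that trade-off, with made-up types: borrow when the compiler can prove the lifetime works out, and fall back to an explicit `.clone()` (the same deep copy the equivalent C++ would have made silently) when it can't.

```rust
#[derive(Clone, PartialEq, Debug)]
struct Loadout {
    weapons: Vec<String>,
}

// Zero-copy: the borrow checker proves `loadout` outlives the returned
// slice, something a C++ reviewer would have to verify by hand.
fn first_weapon(loadout: &Loadout) -> Option<&str> {
    loadout.weapons.first().map(|w| w.as_str())
}

// When the borrow genuinely can't work (e.g. the source is about to be
// mutated), the fallback is an explicit deep copy: the same copy the
// equivalent C++ would have made anyway, just spelled out.
fn snapshot(loadout: &Loadout) -> Loadout {
    loadout.clone()
}
```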

I have no idea about how Doom Eternal does it, but John Carmack has some ideas on how to parallelize game engines here: https://www.youtube.com/watch?v=1PhArSujR_A&feature=youtu.be...
Reasoning about code requires reasoning about relevant state. On the one extreme, you have pure functional programming, where all state is passed in and returned out - all relevant state is explicit and "obvious". On the other extreme, you might use global state for everything - relevant state requires diving into all your code. This sounds unthinkable in the modern era, but similar styles aren't entirely uncommon in sufficiently old codebases that didn't really bother to use the stack.

This is part of the reason why memory corruption bugs can be so insidious in large codebases - if anything in your codebase could've corrupted that bit of memory, and your codebase is millions of lines of code, you have a large haystack to find your bugs in, and your struggle will be to narrow down the relevant code to figure out where the bug actually is. This isn't hypothetical - I've had system UI switch to Chinese because of a use-after-free bug relating to gamepad use in other people's code, for example.

(EDIT: Just to be clear - globals don't particularly exacerbate memory corruption issues, I'm just drawing some parallels between the difficulty in reasoning about global state and the difficulty in debugging memory corruption bugs.)
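The pure-functional extreme described above can be sketched with hypothetical names: everything the function can read or change is in its signature, so the relevant state is visible at the call site.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct PlayerState {
    ammo: u32,
}

// All state flows in through the argument and out through the return
// value; to reason about `fire` you read `fire`, nothing else. The
// global-state alternative would require reading every function that
// touches the ammo variable.
fn fire(s: PlayerState) -> (PlayerState, bool) {
    if s.ammo > 0 {
        (PlayerState { ammo: s.ammo - 1 }, true)
    } else {
        (s, false)
    }
}
```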

> Game development

John Carmack on the subject, praising nice and self contained functional code and at some point mentioning some of the horrible global flag driven messes that have caused problems in their codebase, mirroring my own experiences: https://www.youtube.com/watch?v=1PhArSujR_A&feature=youtu.be...

> you need globals in order to even have the game in many cases.

Simply untrue unless you're playing quite sloppy with the definition of "globals" and "global state". The problem isn't that one instance of a thing exists throughout your program, it's that access is completely unconstrained. Game problems do often involve cross cutting concerns that span lots of seemingly unrelated systems, but globals aren't the only way to solve these.

https://youtu.be/1PhArSujR_A?t=988 was Carmack's presentation on an immutable Haskell game engine. Unfortunately, he doesn't go into too much detail.

https://news.ycombinator.com/item?id=15036591 was the previous HN thread about it.

According to John Carmack, the acclaimed expert in the field of game programming, "Functional Programming is the Future" - https://www.youtube.com/watch?v=1PhArSujR_A
repolfx
Yes, I know, Tim Sweeney has also got an interest in PL theory.

Nonetheless, that was in 2013, it's now 2019, video games are still written in C++ and not any FP language. FP has been "the future" for as long as I've been alive and I don't think it'll ever happen.

Excellent read, I still remember a lot of it. There is also the accompanying Quakecon presentation where John Carmack talks at length about all this: https://www.youtube.com/watch?v=1PhArSujR_A
Jan 03, 2019 · aasasd on Scratch 3.0
John Carmack has mentioned that he uses a visual Lisp environment on iPad. Scheme-based, iirc.

It's somewhere in here, I think (but alas I'm not up to listening again through half an hour of the old lady voice): https://youtube.com/watch?v=1PhArSujR_A

I feel like marrying Lisp's meta-programmable DSLs to an easy visual environment is the ultimate cosmic dream of 'domain-centered' programming and customization for non-coders. But it also seems to me that Lisp is rather text-based, trading a readymade set of operations picked in the interface for infinite extendability through text. So I'm not sure a visual Lisp is at all convenient to use.

Also, for me the immediate downside of the Lisp environment in question is that it's only available for iPad.

open_bear
He uses Racket.
DonHopkins
I'd say Lisp is S-Expression based, and it's easier to make both text and graphical interface to S-Expressions than to the syntax of typical text based languages.

By Apple's decree and app store policy, any programming language on the iPad that isn't purely based on the JavaScript interpreter in the web browser isn't allowed to download and run executable code, much to the frustration of Alan Kay, who is weary of people comparing the iPad to the Dynabook without acknowledging that Apple left out and prohibited his most important idea on purpose: user programmability.

If Carmack's visual Lisp is implemented as an iOS app with locally running native code, and not purely in JavaScript running in the web browser, then he has to build it himself in XCode on his own Mac for his own use with his own Apple developer certificates, and he isn't allowed to distribute it on the app store.

If you can implement your language purely in JavaScript, like Snap! or the web version of Scratch 3.0, then it'll run everywhere, and isn't an iPad app, just a web app, so it's not bound by the restrictions of Apple's app store. That's the way to go if you can.

https://snap.berkeley.edu/

Jul 30, 2018 · pjmlp on Java's Magic Sauce
My games development experience is quite irrelevant, however Tim Sweeney and John Carmack might know a thing or two about game engines.

"The Next Mainstream Programming Languages: A Game Developer's Perspective" by Tim Sweeney

http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt

https://wiki.unrealengine.com/Garbage_Collection_Overview

"Considering that 8 bit BASICs of the 70s had range checked and garbage collected strings, it is amazing how much damage C has done."

John Carmack - https://twitter.com/id_aa_carmack/status/329210881898606593

Quakecon 2013 keynote section about GC

https://www.youtube.com/watch?v=1PhArSujR_A&feature=youtu.be...

undefuser
Honestly, this is a poor argument. While both Tim Sweeney and John Carmack are legends of the gaming industry, their opinions should still be taken with a pinch of salt.

Here's why: both Unreal Engine 3 and Unreal Engine 4 are giant OOP behemoths that power sluggish games and sluggish editors. They consume a ridiculous amount of memory, especially in large projects. Even in the slides you linked, they admitted that there is no obvious way to optimize the engine or scale it over multiple cores, because its performance suffers from death by a thousand cuts. Recently Epic spent a huge amount of time trying to optimize their garbage collector, man hours that, arguably, could have been better spent elsewhere if there were no GC at all. And then there's the new network replication blueprint, which is designed to overcome OOP performance issues. And Tim Sweeney swears by OOP. All of this while the industry is moving more and more towards Data Oriented design. Regarding John's tweet, an interesting question is whether he could have built the successful Doom games with BASIC. Most likely not.

peterashford
It's not a poor argument at all. The claim was: "And the notion that a garbage collector should ever be part of a game engine (or anywhere near it) is frankly baffling to me. .. unless you're making some turn based thing." But UE4 uses GC. One of the premium games engines uses the very thing that the post claims should never be used in a game engine. I have some sympathy for the idea that GC should at least be tightly controlled and that no-GC has its place. But to claim that GC should never be part of a game engine is simply contradicted by reality.
undefuser
Well I wasn't arguing with that claim, but rather with pjmlp's references to what Tim and John said, which I believe should be taken with a grain of salt.

With regards to GC, do check out Unity3D's new ECS/Job system/Burst compiler. Essentially sidestepping the garbage collector and C# safety checks, while trying to work with how the new generations of CPUs are designed to perform.

peterashford
I'm a Unity developer. I'm well aware of what the ECS/Jobs/Burst architecture brings. I still disagree with your assertion. Note the fact that this data oriented pivot occurs NOW with Unity. It's managed to do pretty well up to this point despite NOT having that feature. Using GC didn't stop it becoming wildly successful.
undefuser
Nowhere did I say that the GC hindered Unity's boom in any way. In fact I do agree that the GC, together with C#, allowed Unity to be wildly successful. But it did so by allowing too many people to write sloppy code that has performance problems, code that would leak memory all over the place without the GC. It is sort of eroding the art of programming.
pjmlp
They aren't sidestepping the garbage collector nor the C# safety checks for the whole engine, because HPC# is only used in a very focused area, everything else is plain old C# as always.

I can also state that in the old days C wasn't considered worthwhile for writing game engines, because proper game engines were written mostly in Assembly, with C, Pascal or Basic as their scripting layer, if at all.

For example,

https://www.atariarchives.org/

https://www.amazon.com/PC-Intern-Programming-Encyclopedia-De...

SamReidHughes
Early versions of the Build engine were actually written in QBASIC, I think. But I might be mixing my history up.
pjmlp
It was Visual Basic for Windows already.

https://www.gamasutra.com/blogs/DavidLightbown/20180109/3094...

SamReidHughes
No, I'm talking about the Build engine.
pjmlp
Ah sorry, I had my head on Unreal.

It was a mixture of C and QuickBasic.

http://advsys.net/ken/build.htm

pjmlp
Well, I would gladly like to know which other game engines, using beautiful Data Oriented design, free of OOP cruft and GC as you put it, are able to beat Unreal at this.

"A Star Wars UE4 Real-Time Ray Tracing Cinematic Demo"

https://www.youtube.com/watch?v=lMSuGoYcT3s

"Siren Real-Time Performance"

https://www.youtube.com/watch?v=9owTAISsvwk

After all, it should be relatively easy to challenge such giant bloated sluggish behemoth OOP engine.

As for BASIC, probably. If John made use of a proper BASIC compiler for MS-DOS like PowerBasic, nee TurboBasic, with the right set of compiler flags.

Naturally it would also have its share of inline Assembly, just like Doom, as every owner of "Zen of Assembly Language" first edition knows that C and C++ compilers for MS-DOS weren't the speed demons developers nowadays take them for.

You should spend some time learning how AMOS used to be loved on the Amiga Demoscene back in the day.

undefuser
Not surprised to see such things created with Unreal, because they have loads of money and manpower to throw at problems. But then again the 2 examples you show don't really fairly represent anything: the raytracing demo uses super expensive Nvidia hardware that not many people can touch, and the second demo requires specific mocap hardware and has only one character active in the whole scene. Data oriented is still relatively new, give it some time. But I would say if Unreal 4 had been written with data oriented design in the first place, it would have been much better, at least performance wise. Even if it is a mixture of OOP and DOD, no one says you cannot do that. It was a lost opportunity. Working with it right now is a pain, really. After all, if the engine's performance is already so great, why did they need to spend so much time fixing performance problems in their latest releases?

If you want to see what data oriented design can do, with regards to games, you can check out Unity's latest showing of their beta ECS and Job systems, and their custom C# compiler called Burst: essentially sidestepping the garbage collector and compiling away many of C#'s safety features, like bounds checks, under specific conditions, while processing entities in straight array-iteration fashion. All in the name of performance, undoing the damage caused in the past. I guess you know that newer generations of CPUs require effective use of the cache as well as multithreading to extract their full power. At least Unity Technologies should be applauded for trying to do that, and for trying to bring it to the masses.

I'm pretty sure that many game engines do use data oriented design to some capacity, especially those that used to run on PS3. But just because they are not available off-the-shelf, with no source code to look at, doesn't mean they don't exist. Off the top of my head there's Insomniac Games, an arguably successful studio which is also a strong advocate of DOD.

All this is to say that there are times when even the legends hold questionable beliefs, which we should take with a grain of salt.

pjmlp
Yes, I have seen all GDC and Unite 2018 presentations about HPC# and what is currently available.

You should find some submissions done by myself.

HPC#'s job is only to replace those parts currently written in C++; everything else is written in plain C#, with GC, as Unity always was. In fact they have migrated to .NET 4.6 with 4.7 on the roadmap.

Just like Unreal uses C++ with GC, but naturally in performance critical paths they resort to other techniques.

Which isn't much different from the old days: avoid malloc() and new on the performance-critical paths.

Just because there is a GC available doesn't mean one has to use it for every single allocation.
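One common shape of that advice is an object pool: allocate everything up front, so the per-frame hot path never touches the allocator (and, in a GC'd language, never creates garbage). A hedged sketch with made-up names, in Rust rather than C#:

```rust
#[derive(Clone)]
struct Particle {
    alive: bool,
    ttl: u32,
}

// All allocation happens once, in `with_capacity`. After that, `spawn`
// only flips flags in preallocated slots, so the frame loop is
// allocation-free no matter how many particles come and go.
struct Pool {
    particles: Vec<Particle>,
}

impl Pool {
    fn with_capacity(n: usize) -> Self {
        Pool { particles: vec![Particle { alive: false, ttl: 0 }; n] }
    }

    /// Reuse a dead slot instead of allocating; returns false if full.
    fn spawn(&mut self, ttl: u32) -> bool {
        for p in self.particles.iter_mut() {
            if !p.alive {
                p.alive = true;
                p.ttl = ttl;
                return true;
            }
        }
        false
    }

    fn live_count(&self) -> usize {
        self.particles.iter().filter(|p| p.alive).count()
    }
}
```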

Unity's ECS is implemented in an OOP language, C#, and anyone that has spent some time in the CS literature about component systems knows that they are just another variation on how to do OOP.

One of the best sources uses Component Pascal, Java and C++ (COM) to describe them (1997), with the 2nd edition (2002) updated to include C# as well.

"Component Software: Beyond Object-Oriented Programming"

https://www.amazon.com/Component-Software-Object-Oriented-Pr...

undefuser
Indeed, you can always say avoid the GC, avoid allocations on the hot path, but then in practice people write sloppy code all the time and trip over the GC; I'm guilty of it myself on numerous occasions in the past. Maybe, just maybe, without the GC in the first place, people are forced to write better code? The same goes for OOP. In particular, inheritance is severely abused in too many codebases, which then causes all sorts of problems. IDK, maybe because I hold an extreme view about those things, that they are better off being left out completely? ¯\_(ツ)_/¯
Sep 19, 2017 · akmiller on Global Mutable State
What you describe above is similar to what Carmack discussed at his QuakeCon keynote several years back when toying with a game engine on Haskell. Very interesting talk, https://www.youtube.com/watch?v=1PhArSujR_A
benaiah
I've watched that, and it almost certainly contributed to my comment above. Thanks for linking it for others to enjoy - it's an A+ watch IMO.
Aug 17, 2017 · 258 points, 84 comments · submitted by tosh
cthulhujr
I really like John's perspective on things, he can take a listener or reader from the most abstract concepts to the nitty-gritty without losing focus. He's a tremendous asset to the programming world.
ksk
I agree with the second part of your sentence. I have watched every single one of his talks, and while I find them entertaining, I don't think his brain-dump style of communication is appropriate for teaching/instruction (to clarify: I use his talks as a springboard for my own discovery, not as a terminal point to aggregate knowledge).
waivek
https://youtu.be/lHLpKzUxjGk

I disagree. See the above link for a lecture where he describes the difficulties in VR in a manner that anybody with minimal programming experience can understand.

pmarreck
He wrote a very good piece here: http://www.gamasutra.com/view/news/169296/Indepth_Functional... on the same topic (is it in fact the same?)
seanalltogether
I come back to this quote over and over when talking about desktop and mobile apps/games, especially since they tend to be highly state driven view collections and devs are always trying to come up with DRY patterns to bury important features in subclasses or helper utils.

> A large fraction of the flaws in software development are due to programmers not fully understanding all the possible states their code may execute in.

pmarreck
This is also simultaneously a strong argument for unit tests (which encode knowledge of that state into a proof of sorts).

The limit of a programmer is his/her brain's ability to contain all these possible states. Bugs always come from missing some mental modeling of state or having a flawed conception of it, either at the point of design, the version 1, or the rewrite.

OO just buries that state "elsewhere", so things seem easier superficially to contain mentally (but the state is still there, ready to get corrupted and pounce on you). FP makes the state explicit, so you're forced to deal with it at all times (this perhaps not uncoincidentally also makes FP easier to unit-test AND reason about). If managing that state upfront becomes unwieldy, then that becomes a good code smell/indicator that your design is suboptimal.

The upshot of all this is that I think that using FP in conjunction with unit-testing reduces bug production by some statistically-significant amount, especially as a codebase grows. We definitely need more empirical data about this, though. But that's what my intuition says.

nickpsecurity
It's an even stronger argument for Dijkstra's method of Design by Contract, esp with spec-based generation of tests. You not only document the intended behavior: you'll know exactly where it went wrong if any kind of testing violates one of the specs.
grogenaut
one of the hard things about game dev and testing is that there are all sorts of bits of code that don't really have strong "success criteria". A lot of it comes down to "did that feel good" or "does that look better" which is very hard to quantify in unit tests.

However I think there's also a huge swath of code in games that can be unit tested: scorekeeping systems, AI evaluation trees, etc. The industry as a whole has much more of an integration or manual testing focus than a unit testing one.
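For instance, a scorekeeping rule is exactly the kind of game code with a hard success criterion: given these events, the score must be this number. A tiny sketch with hypothetical point values:

```rust
// Pure scoring rule: trivially unit-testable, unlike "did that feel
// good", because the expected output is fully determined by the input.
fn score(kills: u32, deaths: u32, flag_captures: u32) -> i64 {
    i64::from(kills) * 100 - i64::from(deaths) * 50 + i64::from(flag_captures) * 500
}
```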

I will say that coming from a no-tester, all-dev unit/integration-test world to a no-automated-test world with 30-120 high-skill manual testers (30 gameplay testers, every other employee using new builds 2x daily), there is definitely something to having lots of really good manual testers with very short feedback loops. People noticed bugs that were hard to test for within 10-15 minutes of checkin on a regular basis.

Really if I did a studio I'd have both but then I'd likely be spending too much money and go out of business.

lallysingh
Why would unit testing cost extra? It saves money.
grogenaut
Engineers have to do work. I would unit test core libs. I would also hire manual testers because they were very useful. I'd likely not chase down 100% automated coverage. The goal is to ship a game, not have 100% coverage. And I work 100% pure TDD in my day job and side projects, so take that for what it's worth.
coldtea
Only if you intend to change/refactor stuff later. Which in games never happens: engines are mostly thrown out at the end of the game.
pmarreck
Hasn't Gamebryo evolved since the earliest days at Bethesda?
lgas
Like most technical work, it saves money when done well and costs money when done poorly. Most people find themselves somewhere in the middle most of the time so it turns out the discussion is a little more nuanced than that.
grogenaut
Exactly! But it's so much easier to be an unyielding zealot than to think and understand why and if something is working.

As I said above I'm a big advocate of tdd... Do it all the time. So when I say I wouldn't do it for parts of games maybe I have a reason (to gp not you)

bunderbunder
> there are all sorts of bits of code that don't really have strong "success criteria"

The approval testing model is really useful for this sort of stuff.

http://approvaltests.com/

jandrese
Interesting that he ran into friction exactly where you would expect him to.

He was talking about having the AI scripts running in different purely functional threads. Later (around 21 minutes in) he mentions what happens when two AIs decide to move to the same place at the same time, and has not figured out the solution.

Of course parallel programming is easy if you ignore the data dependencies, but eventually you run into unavoidable dependencies and your clean elegant system has to pick up some smell.
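One common way to tackle the collision case described above (a sketch, not Carmack's actual scheme) is to let every agent propose a move independently in the purely functional phase, then run a single deterministic arbitration pass over all the proposals:

```rust
use std::collections::HashMap;

// Each proposal is (agent id, current cell, desired cell). Agents
// compute these in parallel without touching shared state; this pass
// then resolves conflicts deterministically: lowest id wins the
// contested cell, everyone else stays put.
fn resolve_moves(proposals: &[(u32, (i32, i32), (i32, i32))]) -> Vec<(u32, (i32, i32))> {
    let mut winner: HashMap<(i32, i32), u32> = HashMap::new();
    for &(id, _from, to) in proposals {
        winner
            .entry(to)
            .and_modify(|w| {
                if id < *w {
                    *w = id;
                }
            })
            .or_insert(id);
    }
    proposals
        .iter()
        .map(|&(id, from, to)| if winner[&to] == id { (id, to) } else { (id, from) })
        .collect()
}
```

The arbitration is the "smell" the comment mentions: it reintroduces a sequential step, but it is small, deterministic, and isolated from the agents themselves.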

ece
He talked about a whole FP-powered architecture to solve this.
sololipsist
Motivated reasoning. He prefers imperative programming (or something other than functional programming), so he hears the "criticism" without hearing the solution. I mean, of course he physically hears both, but one is prominent in his consciousness. It doesn't matter that there is a solution, he heard what he needed to hear to get the evidence he needs to maintain his bias, and shuts off his frontal lobe.

To be clear: Everyone does this. I just think it's interesting.

RUG3Y
You can train yourself to recognize this pattern in your own behavior.
sololipsist
Mhm. I try to. Why?

Are you implying I prefer functional programming? I don't. I don't have a horse in this race.

RUG3Y
No, I was commenting on the statement "everyone does this". What I was trying to say is that with effort, you can begin to identify times when you are biased toward a certain thing (in this case, different programming styles), and consciously choose to leave your emotions out of the equation.
sololipsist
Okay. Sorry, I'm used to reddit, where everything is a veiled accusation.

Yeah, you can train yourself, but everyone does it nonetheless. Everyone does it by default, and I don't think it's reasonable to claim that anyone has completely or effectively rid themselves of motivated reasoning.

I try to do this as much as possible, though. It usually ends up with me just having no opinions, rather than an "unbiased" one.

Though I'm not sure how effective training yourself in this way is. There have been studies that show it's difficult to induce this in people, but that's statistical, of course. Some people likely take to it much better than others - some people may even take to it very well. I like to think I've had some success, but, you know, that might be motivated, so I don't really have a strong opinion.

scott_s
I think you misunderstand Carmack's position. As of this talk, he had not yet solved this problem, but he is optimistic that it can be solved, and ece mentioned the solutions he outlines. He goes on to say such a system would be beneficial to all game developers.
sololipsist
He says it's a problem that can be tackled, it just hasn't been yet. Like you look at a square block, and you need to get it in a square hole, you just haven't gotten to it.

It's the language of someone who is confident there's a perfectly fine solution. I mean, it's likely that other people have already figured this out, just not him.

jandrese
Yes, and I'm curious how it turned out. Are there any followup talks on this that anybody knows about? It's certainly a solvable problem, but how many of the advantages remain once the data dependency problem is solved?
ece
I think the crux of his point is really between constantly having to update state in a mutable way (with a decent probability of starvation and possibly deadlock) vs. pure functions, immutability and the async FP architecture he talked about. The advantage of the latter is it brings maintainable code, and this stays true throughout the life of the code.

Of course, you'll have to hire or train FPers as he mentions.

craigsmansion
As I heard it, he talked about independent agents which traverse a world by taking it as an input and reporting their own state as an output, and how easy to code such a model was in functional languages because the world was immutable. You don't have to mess around with the world data to keep the actions in sync.

Of course there's a level of conflict resolution needed at some point, which was something he hadn't figured out yet, but also something that was out of line with the elegance and simplicity of how the basic game mechanics were implemented so far.

As I understood it, it wasn't so much about easy parallel programming as it was his discovery of clean models and elegant solutions as encouraged by functional programming.

crimsonalucard
It's not about either. He is illustrating a problem that only occurs in functional programming:

There is no concept of time in pure functions.

In imperative programming when two actors approach the same spot, whether the application is parallel or not, one actor ALWAYS arrives first and the other will arrive subsequently. Therefore the second actor can read the state of the first and act accordingly...

This is not the case for functional programming. Per frame the state of each actor changes (or in other words: new actors with updated states are created) at the same time and thus you can have two actors approach the same spot at the same time. You will then need an extra step for conflict resolution. Of course there's a bunch of ways to deal with this. Carmack briefly mentioned something about using some other attribute of the actor as a priority number... it's not like this is some crazy issue that's impossible to solve.
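Carmack's aside about using some other attribute of the actor as a priority number can be sketched in Python. The `priority` field and the tie-breaking rule here are hypothetical, just to make the extra resolution step concrete:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Actor:
    name: str
    pos: tuple       # current grid cell
    target: tuple    # cell the actor wants to move into this frame
    priority: int    # hypothetical tie-breaking attribute

def resolve(actors):
    """Pure conflict-resolution pass: every actor proposes a move
    simultaneously; when two propose the same cell, the higher
    priority wins and the loser stays put."""
    claimed = {}
    for a in actors:
        best = claimed.get(a.target)
        if best is None or a.priority > best.priority:
            claimed[a.target] = a
    return [
        Actor(a.name, a.target, a.target, a.priority)
        if claimed[a.target] is a
        else Actor(a.name, a.pos, a.pos, a.priority)
        for a in actors
    ]
```

Note that `resolve` never mutates its input; it returns a fresh list of actors, which is exactly the "new actors with updated states are created" model described above.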

ece
But one actor can always read the state of the other actor, and modify their actions accordingly. The state of each actor is immutable, so each action taken based on the state is sure to be conflict free. As opposed to imperative programming, where each actor's position might not get updated correctly or in time until it's too late. Deadlock and starvation don't happen when you have pure functions with no side effects and immutable state like they do in non-FP languages.

Timing is harder to get right when everything is async, but if your game design is good, you just need to better implement it to get it right frame by frame. He also talks about having monads to handle things like this.

crimsonalucard
If two actors moved to one spot at the same time which actor occupies that spot? How can an actor act accordingly if they moved at the same time?

In functional programming this can happen... in imperative programming it NEVER happens, because the actors move imperatively, aka step by step or one at a time. This is the problem he is talking about.

ece
It can only never happen if you use a lock correctly. In separate threads, this can absolutely happen in imperative programming. As the reader-writer or dining philosophers problems show, two actors can try to do the same thing at the same time unless you EXPLICITLY stop them by using locks.

In FP, you will have an immutable variable for state instead of a mutable variable wrapped in a lock. You will be passing a whole new immutable state variable to a pure side effect free function every time, and won't have to worry about getting a lock and releasing it explicitly (these would be side effects). As long as the state is immutable and synced across threads, actors in any thread can just plot their next actions using that state.

In imperative programming, like I said, you'll have to explicitly get and release locks to make sure two actors don't occupy the same spot.

So, if you use atomic immutable variables with pure functions, and the logic in your actors can be conflict free, you can have horizontal scalability across as many cores as you want pretty easily. If your actors cannot be conflict free, you will need to wrap a lock of your choice in a monad and use that, but you will still have gained better debugging, testing and maintainability by using FP.

Now if only every zero-cost OO abstraction had a straightforward FP alternative that was also zero-cost, we'd all be doing FP as of yesterday.
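A minimal Python sketch of the idea described above: a pure per-actor step function reads an immutable snapshot of the previous frame, so the per-frame map can run across threads without locks. The `Agent` type and its movement rule are made up for illustration:

```python
from dataclasses import dataclass, replace
from concurrent.futures import ThreadPoolExecutor

@dataclass(frozen=True)
class Agent:
    x: int
    y: int

def step(agent, world):
    """Pure per-agent update: reads the frozen previous frame and
    returns a brand-new Agent. No locks are needed because `world`
    is never mutated."""
    occupied = {(a.x, a.y) for a in world if a is not agent}
    nx = agent.x + 1
    return agent if (nx, agent.y) in occupied else replace(agent, x=nx)

def next_frame(world):
    # Every agent sees the same immutable snapshot, so the map can
    # be spread across threads without data races.
    with ThreadPoolExecutor() as pool:
        return tuple(pool.map(lambda a: step(a, world), world))
```

This only sidesteps the write side of the problem; as the thread above notes, two agents can still propose the same cell in the same frame, and a separate resolution pass has to decide between them.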

crimsonalucard
Man. You are arguing with me as if I disagree. When did I bring up threads and locks? I'm just elucidating the problem Carmack brought up. Jesus.
ves
I don't think the latter point is true. FP requires a lot of thinking up front, which is not how most engineers like to work.

That's not to say that FP is at odds with iterative programming, but it means that you have to work out a "specification" of your code pretty completely from the beginning.

Although, even then it's not so bad because the excellent type systems and compilers mean you can develop the specification interactively (see Idris' typed holes).

ece
Which part? I think even John Carmack in the video talked about the upfront thinking required for FP. If every CS student learned category and type theory like they learn complexity theory; and FP compilers could optimize type classes and all the object creation, FP would definitely be more widely used. But, yes, there is definitely a cost to picking up FP today in almost any domain, but it's getting lower.
None
None
ves
FP is not gonna wall you off from doing stuff like this. For example, and this is just off the top of my head, you could:

* provide an explicit sequencing of when actors take their turns, passing updated resources to each in turn. This is your "step by step" imperative approach, and you're basically writing a main game loop.
* don't sequence the actors, but implement a lock on the resource using a TVar. This is the simplest async approach.
* many more approaches I haven't thought of

The point of FP is to provide as much information about the logic of the program as possible. If two actors can move to the same point at the same time, then you need to write down logic to handle that case, whether actors move in sequence or independently.

Not writing down that logic because "I have an imperative language" is how you get bugs, especially race conditions.
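The first option, explicit sequencing, can be sketched in a few lines of Python. `claim` is a toy turn function invented for the example, not anything from the talk:

```python
def run_turns(actors, world, take_turn):
    """Explicitly sequence actors: each takes its turn against the
    world left behind by the previous one, so 'who moved first' is
    part of the program's logic rather than an accident of timing."""
    for actor in actors:
        world = take_turn(actor, world)
    return world

def claim(actor, occupied):
    """Toy turn: an actor claims its target cell unless taken."""
    _, cell = actor
    return occupied if cell in occupied else occupied | {cell}
```

Because the world is threaded through the loop as a value (here a frozenset), later actors see earlier actors' claims, and the "two actors, one spot" case resolves deterministically by list order.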

crimsonalucard
I'm not arguing about which paradigm is better. I'm not saying there's no way around it. I'm just describing a single pitfall in the functional paradigm. That's all.
ece
Being able to write asynchronous (multi-threaded or not) code so easily using FP is most definitely not a pitfall.

Even in the simplest multi-threaded imperative programs, no compiler or hardware will give you any sort of ordering guarantees by default. You have to use atomics or volatile or something else to get specific ordering guarantees. It's the same in FP compilers too.

crimsonalucard
>Being able to write asynchronous (multi-threaded or not) code so easily using FP is most definitely not a pitfall.

Your point? You're saying this as if I disagreed with you. What are you trying to argue here? That FP is better than imperative? Did I ever dispute that claim? Where are you going with this?

craigsmansion
> There is no concept of time in pure functions.

Very true, but there is a fundamental concept of time, almost by definition, in game-worlds.

Whereas in imperative programming it's true that one actor always arrives first in computation, he might not have arrived first "in the game": conflict resolution is still a necessity.

As such I don't think this is a problem that only occurs in functional programming.

It's my impression that the events leading up to any conflict are much cleaner to express functionally.

crimsonalucard
Correct. But when things arrive at the same time and the same place "in game" the imperative programming style resolves this automatically by giving precedence to the object that was computed first, thus this problem is handled automagically in the imperative world. In fact you probably don't even have to really think about this issue when using the imperative style, unlike the functional style where this issue must be explicitly dealt with.

FYI I'm not saying either paradigm is worse or better, it is what it is.

babuskov
It actually starts here: https://youtu.be/1PhArSujR_A?t=2m7s
ballpark
Should add (2013) and [video] to the title
zerr
I guess we all had that FP click at some point in our lives, but then we got back to our regular (imperative) flow, hopefully as better engineers :)
yogthos
I got that FP click, switched to Clojure, and I'm pretty happy with that move 7 years later. :)
None
None
ece
I wonder about the present state of the functional projects he talks about and is currently working on. A bit old, but good video talking about FP and static typing from a veteran perspective.

Would like to see a conversation between Sid Meier and Carmack, the modern Civ engines seem to have made some strides in stability methinks.

vog
While he says a lot of interesting things, it is too bad he is really just sitting and talking. No slides, gestures or any other aids. Those would have brought more structure into the talk, making it easier to follow - especially for non-native speakers.
epigramx
Or more distracting and annoying.
zengid
The thing that's interesting about Haskell is that lazyness had a payoff of leading to the discovery of monad-based IO [1].

[1] https://youtu.be/re96UgMk6GQ?t=31m22s

bmc7505
Has there been any progress on garbage collection since then? He discusses GC in games starting here: https://youtu.be/1PhArSujR_A?t=1443
keymone
this is the best video to explain to your C++ friend why functional programming is worth it even in the gamedev world.
pjmlp
C++ has been getting functional programming goodies since C++11.

Nowadays with C++17, functional programming in C++ is a common talk subject at C++ conferences.

keymone
that's nice, but generally speaking the mindset is still very much imperative.
pjmlp
Well, it is still a huge fight to get many devs to stop writing "C with C++ compiler" style anyway.

The C++ community that cares about C++Now, CppCon, ACCU kind of material, does care about applying functional programming ideas to their daily coding.

f00_
is it bad that I prefer C to all the shit in C++?

Now I don't even write C/C++ regularly, so my opinion is probably shit. But I really like the simplicity of C over all the features of C++.

laythea
C + classes does the job for me. I hate decoding code - that's the compilers job!
pjmlp
What is so simple about more than 200 documented cases of UB and all the ways one can write unsafe code without even knowing about it?

C++ has stronger type safety, and offers the libraries and language features so that you only go down to C-style unsafe coding as a last resort.

scierama
TL;DR Famous game developer takes an hour here and there and "makes himself" do "some stuff" in Haskell, looks back through some old code on another project that was written in a functional style and forgotten about (forgetting all the other working code that was also forgotten about, but FP is being discussed, so why not) and declares that FP is really good.

Haskell disciples on HN dig out this video which mentions FP among other unrelated topics and post it on HN as the definitive reason why all other programming methods should be abandoned in favor of FP, _because Carmack_.

asdfnn88
Community cynic gets meta in a sarcastic tone, not realizing it's undermining the discourse, not contributing to it. The outcome generally being that the community cynic can carry around an "I'm so smart" feeling all day.
vog
EDIT: Why all those downvotes? Are famous developers not supposed to be criticized on HN?

While John has a lot of interesting things to say, the presentation is awful, almost an imposition to the audience.

There's not a single slide, or any repetition to clarify structure, or any notable gestures to make up for that. A simple overview, just a damn simple list of keywords, would already go a long way. That would add a lot of structure and would make the talk so much easier to follow, especially for non-native speakers.

Just because one is respected so much that the audience will tolerate anything, one should not act as if the audience will tolerate anything.

saganus
HN's downvote mechanism is a funny one.

As far as I know there's no clear meaning to what a downvote means and it seems like everyone has their own definition.

For some, downvoting is a way of "flagging" out of place comments, aggressive ones, etc. However for others it's just a way of disagreeing.

The guidelines don't seem to establish any particular definition to it, so it's kind of a community-driven thing.

At first I was certain that downvoting was a way for the crowd to silence unwanted comments, but "unwanted" has many values depending on the person. Personally, I prefer to downvote when there are clearly aggressive or out-of-place comments, and if I disagree I rather respond with my disagreement. That's the way to have a civil discussion in my opinion.

However, like I said, other people give downvoting a different meaning, so it's not so much that you can't criticize famous celebrities, but that the community seems to disagree with you. I don't see anything "flaggable" about your post so that's why I assume it's the reason.

But again, downvoting is an "undefined" behavior on HN. The guidelines don't mention what should or should not be downvoted, so it's kind of up for debate.

Don't worry too much about it, as long as you are not clearly being uncivil, downvotes are probably just a lazy way of saying "I don't agree with you" :)

optimusclimb
imho this isn't specific to HN, Reddit is plagued by "downvote to disagree."

Downvoting should be for, as you said, flagging aggressive, off-topic comments.

I've really lost interest in commenting on reddit, because if you say anything that disagrees with or goes against a certain sub-reddits current group think on a subject, you're just going to get downvoted into oblivion. It really just intensifies the echo-chamber effect.

dingo_bat
Do you think upvoting something you agree with is a valid action? If yes, does it not follow that downvote to disagree is also valid?
JdeBP
You need to observe that the asymmetry is built into Hacker News itself, thereby contradicting the hypothesis that these are supposed to be symmetric actions. There is, now, no longer a downvote button against your comment here, for example; there is still an upvote button, though.
savanaly
Personally, I watched it with rapt attention throughout and I'm eagerly anticipating having time later to do so for the other 6 parts of the keynote. Can't remember the last presentation that captured me so. I think he is a very good public speaker and the talk did not meander and was not hard to follow in any way.

I did crank the playback speed up a lot though, which helps considerably (I'm not sure I would enjoy it half as much if it were live, where of course I have to hear it at 1x speed).

rootlocus
This was a keynote at a Quakecon, a conference for game players, not a technical presentation for software developers.
vog
If the audience isn't even technical, isn't that even more a sign that more effort should be put into the presentation, especially for a keynote?
laythea
Yes but is the aim here not to bedazzle a star struck audience, rather than properly educate?
wtetzner
> Just because one is so much respected by the audience that they will tolerate everything, one should not act like the audience will tolerate everything.

He gives these talks because there's a demand for them (from previous audiences). He's not on stage talking because he wants to force people to consume the information.

I suspect it's a trade-off between a talk of this format or no talk at all. Preparing slides etc. takes a time investment, and if it takes too much time, maybe he just wouldn't be able to do the talks.

joncampbelldev
Perhaps native speakers enjoy this kind of talk a lot. I can't speak for others but I certainly did.

I don't feel that listening attentively for a couple of hours is that much of an imposition.

dingo_bat
I am not a native speaker. I watched the linked part while I was having my dinner (or whatever you'd call a couple of McBurgers at 10pm). I think it needs attention, but that is needed for anything that's not fluff.
Kiro
I downvoted you because:

1. You state your criticism like it's an objective fact. I think it was a brilliant talk.

2. Your edit. It violates the HN guidelines and you insinuate people only downvote you because they are Carmack fans.

optimusclimb
> You state your criticism like it's an objective fact, when in reality I'm sure most would disagree with you.

He stated his opinion, it's how people discuss things. Ironically, by saying "in reality, I'm sure most would disagree with you", you do the same thing (express your opinion as if it is fact), AND use the weasel words of "in reality" to add gravitas to your opinion. I didn't like the talk either FWIW.

JohnCarmack
I am aware that my presentations aren't optimal for communicating targeted information, and it does weigh on me more and more as the years go by.

So far, I haven't been able to justify to myself the time required to do a really professional job, so I just show up and talk for a few hours. I like to think there is some value in the spontaneity and unscripted nature, but I don't kid myself about it being the most effective way to communicate important information.

I'm taking some baby steps -- I at least made a rough outline to guide my talking at last year's Oculus Connect instead of being in full ramble mode.

pera
Actually I really like the format of your presentations: just yesterday I watched your live coding session with vrscript (https://www.youtube.com/watch?v=ydyztGZnbNs) and it was fantastic because, first, you were showing to your audience how to work with Racket (and some of the features in DrRacket), and second, you build from zero a demo in a very casual, easy going way. I honestly don't think that a standard slide-based presentation would have been better in any way...
dgritsko
While this may be true, please don't let it cause you to shy away from "full ramble mode" when the opportunity presents itself! I know I speak for many when I say that I have learned much from hearing these sorts of talks of yours over the years. Your willingness to share your wealth of experience is inspirational, regardless of the format.
bluejellybean
Agreed, full ramble isn't something many people do well, and it's fun to watch. Random but useful information will fall out of people's brains and it's great!
leggomylibro
Along the lines of 'functional' programming, Jonathan Blow reposted an old email of yours which I thought was a nice format for getting across some points about reliability, consistency, and minimizing side effects:

http://number-none.com/blow/blog/programming/2014/09/26/carm...

joncampbelldev
Just to say, from my perspective I love your talks, zero distractions, just a long cogent train of thought to follow. Please don't feel the need to make too many changes.
savanaly
For me, at least, it's not a problem that it doesn't optimally communicate targeted information. I watch talks like this for entertainment and maybe to pick up some information by osmosis and the undirected style is good for that. This was a very good talk by the way, and reinforced my positive feelings towards functional programming, thanks for doing it.
fatso83
Really interested in hearing if the ideas on how to transform game programming into a more productive form panned out. Did you end up using the ideas in a later product?
treebeard901
Your contributions to computing over the years more than outweigh any lack of preparation for a presentation.
mmargerum
I actually prefer your data dump continuous talking style. I find myself having a hard time paying attention in talks with lots of pauses or segues.

I really appreciate your insights because I'm dabbling in functional programming after 25 years of c/c++/objc

cm2187
Actually I prefer it a lot more to something scripted. It's normally hard to stay focused watching a guy talking for 1 hour on a youtube video. But with your presentations, I find myself looking for more at the end of the hour!
tosh
Might not be the easiest if you intend to get a certain thought or point across but I really enjoy the format. Reminds me of long-form radio shows.
jasonkostempski
When exploring lisp, did you have a chance to play with Clojure?
hdhzy
Ha, it's funny because for me the format is just perfect. There are numerous presentations on the internet focused on one specific topic, but your free-form style is just like talking with a friend. The interdisciplinary nature and hands-on examples are a nice added bonus.

Please don't change this unique approach too much.

The use of monads is a side-effect (ha!) of committing to purity throughout a language, and that's what FP is being equated to: pure statically-typed FP.

(You can argue about how justified that is, of course. I'm not going into that, but you might want to see what John Carmack has to say[0]. No, he doesn't end by saying "we should all convert to the church of Haskell now", but he does talk about how large-scale game programming refactors are made easier when you're working with no (or very disciplined) side effects.)

Monads are not the only way to deal with effects while keeping purity, although they were the first discovered and so on: algebraic effect systems as in Koka[1] (and as simulated in Idris or PureScript) are another alternative. Koka infers effects, so it's probably easier for a C-family programmer to pick up (I know little about it, though).

[0]: https://www.youtube.com/watch?v=1PhArSujR_A

[1]: https://www.microsoft.com/en-us/research/project/koka/

Raymond Hettinger's talk about good code reviews -- https://www.youtube.com/watch?v=wf-BqAjZb8M

Carmack's talk about functional programming and Haskell -- https://www.youtube.com/watch?v=1PhArSujR_A

Jack Diederich's "Stop Writing Classes" -- https://www.youtube.com/watch?v=o9pEzgHorH0

All with a good sense of humor.

someone7x
Came here to add "Stop Writing Classes", a fantastic talk to show how to refactor away from dogmatic OOP.
mixmastamyk
Yes, RH's Beyond PEP8 is great, even if you don't do Python. Will put the others in my queue.

I'm reminded of Crockford's "Good Parts" of Javascript, I believe where he introduced me to the "Mother of all Demos."

johnhenry
Everything I've seen by Crockford is great!
Jul 26, 2016 · 2 points, 0 comments · submitted by jstejada
Jan 10, 2016 · dottrap on Why I Write Games in C
John Carmack thinks there is potential in Haskell or other functional languages for game dev, so much so that he ported Wolfenstein 3D to Haskell as a summer project.

I think he talks about in in this QuakeCon 2013 segment: https://www.youtube.com/watch?v=1PhArSujR_A

Jan 03, 2014 · wting on Xkcd: Haskell
From my perspective the Haskell community is fairly active and growing. At the moment #haskell has 1100 users tying #python and easily beating out ##c, #c++, #clojure, #lisp, #java, #javascript, and #ruby on Freenode.

John Carmack is exploring functional programming[0], and this is Randall Munroe's second functional programming comic in recent weeks.

[0]: https://www.youtube.com/watch?v=1PhArSujR_A

[1]: https://xkcd.com/1270/

Backend devs can probably use more computer resources, particularly cores and RAM. We want to simulate whole clusters on our dev machines and instrument them with tools like Ansible and Docker, and then deploy multiple (fairly heavyweight) processes like JVMs to them. But yeah, 4 (fast) cores and 16GB of RAM is available in a laptop these days, along with an SSD and the best display you can buy, for $3k. (Of course I'm speaking of the MBPr).

Games can always use more resources. AFAIK there is still a lot of progress being made with GPUs. 60fps on a 4K display will be a good benchmark. The funny thing is that GPU makers have taken to literally just renaming and repackaging their old GPUs, e.g. the R9.[1] As for the game itself, there is a looming revolution in gaming when Carmack (or someone equally genius-y) really figures out how to coordinate multiple cores for gaming.[2]

But yeah, most everything else runs fine on machines from 2006 and on, including most development tasks. That's why Intel in particular has been focused more on efficiency than power.

[1] Tom's Hardware R9 review: http://www.tomshardware.com/reviews/radeon-r9-280x-r9-270x-r...

[2] Carmack at QuakeCon talking about functional programming (Haskell!) for games and multi-core issues: https://www.youtube.com/watch?v=1PhArSujR_A&feature=youtu.be...

jophde
You really can't get the screen, and the amazing OS support for it, anywhere else.
It depends on which type of games you're talking about.

AAA game engines are still all about performance and tight memory budgets due to the nature of the platforms they're developing on - consoles and PCs. Hence the aversion to garbage collected languages, although higher level languages usually make it in the engines as scripting languages for game logic: UnrealScript for UDK, C# and UnityScript for Unity, Lua for CryEngine... The industry is traditionally oriented towards C++ and imperative / OO languages, but there's a lot of potential for functional languages in the "embedded scripting" area. In his 2013 QuakeCon keynote [1], John Carmack talks a bit about his experience with functional languages, and how, for example, Scheme could be an ideal candidate as an embedded scripting language.

As far as smaller teams are concerned, and especially indie development, there is a lot of potential for OCaml, Clojure and other functional languages. Many mobile and indie dev teams already use a heck of a lot of C#, Python, Java and other specialized tools like Haxe and ActionScript. But all these languages have well liked and mature frameworks geared towards game development (XNA / MonoGame, PyGame, OpenFL, libgdx, ...), which is maybe what Clojure and OCaml are missing right now.

[1] http://www.youtube.com/watch?v=1PhArSujR_A

Google cache: http://www.google.ca/search?q=cache:functionaltalks.org/2013...

The article is essentially just a link to the fourth part of his keynote at Quakecon ( http://www.youtube.com/watch?v=1PhArSujR_A )

Here's the video: http://youtu.be/1PhArSujR_A (the text on the page just introduces Carmack)
haxorize
He starts talking about FP/Haskell at the 2:06 mark.

http://www.youtube.com/watch?v=1PhArSujR_A#t=126

Aug 11, 2013 · 90 points, 39 comments · submitted by macmac
msie
It's inspiring to hear how he struggles learning a new language and having trouble doing things not related to his work.
macmac
I agree. It is quite charming to hear him point quite precisely to the qualities that make him fond of Haskell, but then struggle to put words to the strengths of Scheme.
oacgnol
He's one of my favorite people to follow on Twitter [1] because of this; he regularly tweets about stuff he's learning, especially new languages. It's quite inspiring and uplifting to see such a figure in the industry still make time to learn on his own and candidly share his thoughts.

[1]: https://twitter.com/ID_AA_Carmack

BgSpnnrs
I strongly recommend listening to his Quakecon keynote in full. It meanders over a huge variety of topics and I personally find him a very engaging and easy to listen to orator.
Sprint
Absolutely! I uploaded it as single segment at https://www.youtube.com/watch?v=o2bH7da_9Os , will archive it to archive.org shortly.
sherbondy
Is there any chance you could extract the audio and put it online somewhere? I am currently finishing a cross-country bike trip, and would absolutely love to listen to his keynote, but only have a phone on me currently. It would mean a ton!
macmac
A phone that doesn't do YouTube - is that even legal?
Sprint
Sure thing! http://archive.org/details/Quakecon_2013_-_Welcome_and_Annua... includes the AAC track extracted from the video as well as a mono 32kbit/s VBR Opus in Ogg to save you bandwidth. Enjoy!
Sprint
Sorry, that ogg was borked. At least I cannot play it anywhere. Will replace it with a standard Ogg Vorbis one.
_random_
Glad to see him think that strong static typing is the way to go.
gnuvince
There's only so many times you can get an error in your programs before you begin thinking that static typing might offer non-trivial benefits.
voodoomagicman
I write ruby and javascript all day, and while I make plenty of errors, I find that they are rarely related to types. What are you working on where you regularly run into these? I hear this argument often enough that I assume there is something to it, but I struggle to understand it.
gnuvince
I don't want to write a big wall of text, but a lot of situations come up where bugs could (if you so choose) be encoded into the type system. Here are some examples:

- The Maybe/Option type: you explicitly declare that a value may be missing, so you cannot call methods/functions on it willy-nilly; the compiler will force you to handle both cases. Say bye-bye to NoneType object has no attribute 'foo' errors.

- Different types for objects that have the same representation: in a language like Python, text, SQL queries, HTTP parameters, etc. are all represented as strings. Using a statically-typed language, you can give them each their own representation and prevent them from being mixed with one another. See [1] for a nice description of such a system. See also how to separate different units of measurements instead of using doubles for everything.

- Prevent unexpected data injections. With Python's json module, anything that is a valid JSON representation can be decoded. This is pretty convenient, but it means you must be very careful about what you decode. With Haskell's Aeson, you parse a JSON string into a predefined type, and if there are missing/extra fields, you get an error.

- When I was doing my machine learning class homework, I very often struggled with matrix multiplication errors. An important part of that was that the treatment of vectors vs Nx1 matrices was different. I feel that if I could encode the matrix size in the types, I'd have had an easier time and fewer errors.

These are simple examples, but whenever I code in Python, I inevitably make mistakes that I know would've been caught by the compiler if I had been coding in OCaml or Haskell.

[1] http://blog.moertel.com/posts/2006-10-18-a-type-based-soluti...
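The first two ideas in the list above, the Maybe/Option type and distinct types for values that share a representation, can be approximated even in Python's gradual type system. A minimal sketch (the names `Html`, `Sql`, and `find_user` are hypothetical, made up for the example); a checker like mypy enforces the distinctions statically even though everything is a plain `str` or `None` at runtime:

```python
from typing import NewType, Optional

# Distinct types for values that share a representation: mypy will
# reject passing an Html value where Sql is expected, even though
# both are just str at runtime.
Html = NewType("Html", str)
Sql = NewType("Sql", str)

def run_query(q: Sql) -> None: ...

# Optional makes "may be missing" part of the signature, so callers
# are pushed to handle the None case instead of hitting a
# "'NoneType' object has no attribute ..." error at runtime.
def find_user(users: dict, name: str) -> Optional[str]:
    return users.get(name)

email = find_user({"ann": "ann@example.com"}, "bob")
greeting = email.upper() if email is not None else "no such user"
```

Unlike Haskell's Maybe, nothing stops you from ignoring the annotation and dereferencing the `None` anyway, so this only recovers the benefit if the type checker is actually run.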

CmonDev
Also, if you use the F# flavour of OCaml you can use units of measure that would catch even more errors at compile time (if those matrices had something to do with physics, for example).
rybosome
The bugs you make may not be related to types because your definition of what a "type" is refers to the very weak guarantees given by C, Java et al.

Have you ever had a bug due to something being null/nil when you didn't expect it? How about a string or list being unexpectedly empty? Perhaps you've discovered an XSS or SQL-injection vulnerability? What about an exception that you didn't anticipate, or really know what to do with?

In a more robust type system, these could all be type errors caught at compile time rather than run time. A concrete example of the null/nil thing: in Scala, null is not idiomatic (although you can technically still use it due to Java interop, which is understandable but kind of sucks). To indicate that a computation may fail, you use the Option type. This means that the caller of a flaky method HAS to deal with it, enforced by the compiler.

My "come to Jesus" moment with the Option type was when writing Java and using a Spring API that was supposed to return a HashMap of data. I had a simple for-loop iterating over the result of that method call, and everything seemed fine. Upon running it, however, I got a null-pointer exception; if there was no sensible mapped data to return, the method returned null rather than an empty map (which is hideously stupid, but that's another conversation). This information was unavailable from the type signature, and it wasn't mentioned in the documentation. The only way I had of knowing this would happen was either inspecting the source code, or running it; for a supposedly "statically-typed" language, that is pretty poor compile-time safety.

This particular example of a stronger null type would be doable in the "weaker" languages, but it isn't done for several reasons - culture and convenience are the two most prominent in my opinion. In this sense, "convenience" means having an interface that does not introduce significant boilerplate; any monadic type essentially requires lean lambdas to be at all palatable. "Culture" refers to users of the language tolerating the overhead of a more invasive type system, which admittedly does introduce more mental overhead.
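A minimal sketch of that boundary-normalization idea in Python terms — `fetch_mapping` is a hypothetical stand-in for the null-returning Spring call:

```python
from typing import Optional

def fetch_mapping() -> Optional[dict[str, str]]:
    # Stand-in for the flaky API: returns None instead of an empty map.
    return None

def safe_fetch() -> dict[str, str]:
    # Wrap the flaky API once, at the boundary, so None never escapes
    # into code that just wants to iterate.
    result = fetch_mapping()
    return result if result is not None else {}

for key, value in safe_fetch().items():  # safe: iterates zero times
    print(key, value)
```

This is exactly the kind of wrapper the caller shouldn't have to remember to write; an Option return type makes forgetting it a compile error instead of a runtime surprise.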

CmonDev
> "Have you ever had a bug due to something being null/nil when you didn't expect it? How about a string or list being unexpectedly empty?"

I understand that's not exactly the point, but I find that LINQ (Sequence monad), built-in Nullable<> and custom Maybe monad make your life easier in C# in that respect.

rybosome
That's certainly true. Although I'm not an expert on the really advanced type system features, this is an example of convenience without culture, IMO. C# having painless lambdas allows this particular example to exist, but it's not required (or idiomatic, in some code bases) - one can still use plain 'ol null. That said, I'd much rather use C# than Java for exactly this reason.
derefr
I have a feeling that all the people still arguing in favor of dynamic typing at all, are tilting at the windmill of static languages without type inference. Nobody really thinks those are good languages any more :)
brandonbloom
> all the people still arguing in favor of dynamic typing at all, are tilting at the windmill of static languages without type inference

Some of us who argue in favor of dynamic typing have a much more nuanced and informed view...

I, for one, am of the mind that there are precisely zero production-caliber statically typed environments that possess a sufficiently powerful type system for the kinds of problems I tackle on a regular basis. Haskell doesn't count, since you need to turn on about a dozen GHC language extensions in order to incorporate the last 20 years of research. There are also quite a few design warts that newer academic languages are starting to iron out. In particular, I don't think monad transformer stacks are a reasonable solution to computational effects.

That's not to say you can't write any program in an environment where the type system is constraining you. You can. You simply implement a "tagged interpreter", which is something so simple to do that people do it all the time without realizing. Either you have a run-time map or you pattern match on a sum type data constructor, then loop over some sequence of those things with a state value threaded through. Poof! You've got a little interpreter.

I find that this happens a lot. And I also find that a lot of problems are easier to reason about if you create a lazy sequence of operations and then thread a state through a reduction over that sequence. Now, in Haskell, I've got a type-correct interpreter for an untyped (likely not Turing-complete) language! Sadly, I can't re-use any of the reflective facilities of the host language because my host language tries to eliminate those reflective facilities at compile time :-(
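The "tagged interpreter" shape described above can be sketched in a few lines — tags and payloads standing in for a sum type, with state threaded through a reduction (the operation names are invented for the example):

```python
from functools import reduce

# Tagged operations: each is a (tag, payload) pair — a poor man's sum type.
program = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]

def step(stack, op):
    # Thread the state (here a stack) through each tagged operation;
    # dispatching on the tag is the "pattern match" of the sketch.
    tag, arg = op
    if tag == "push":
        return stack + [arg]
    if tag == "add":
        *rest, a, b = stack
        return rest + [a + b]
    if tag == "mul":
        *rest, a, b = stack
        return rest + [a * b]
    raise ValueError(f"unknown tag: {tag}")

final_stack = reduce(step, program, [])
# final_stack == [20], i.e. (2 + 3) * 4
```

In a dynamic language the host's reflection still works inside `step`; in Haskell the equivalent sum type is checked, but the embedded "language" itself stays untyped.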

I'm in favor of optional, pluggable, and modular type systems. I think that a modern dynamic language should come with an out-of-the-box default type system that supports full inference. If, for some reason, I build a mini interpreter-like thing, I should be able to reuse components to construct a new type system that lets me prove static properties about that little dynamic system. This level of proof should enable optimizations of both my general-purpose host language and my special-purpose embedded "language".

Additionally, I require that type checking support external type annotations, such that I can separate my types from my source code. In this way, type checking becomes like super cheap & highly effective unit tests: The `test` subcommand on your build tool becomes an alias for both `type-check` and `test-units`. You just stick a "types/" directory right next to your "tests/" directory in your project root. Just as a stale unit test won't prevent my program from executing, neither will an over-constrained type signature.
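Python's stub files are one existing approximation of this "external annotations" idea: types live beside the code and are consulted only by the checker, never at run time. A minimal sketch (`greet` is an invented example):

```python
# greet.py — untyped source; runs the same whether or not a stub exists.
def greet(name):
    return "hello, " + name

# A separate greet.pyi stub would hold the annotations externally:
#
#     def greet(name: str) -> str: ...
#
# Running `mypy` checks callers against the stub, much like running unit
# tests; a stale or over-constrained stub fails the check, not execution.
message = greet("world")
```

As in the comment above, the stub behaves like a cheap test suite: it can be wrong or out of date without ever preventing the program from running.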

CmonDev
> "there are precisely zero production-caliber statically typed environments that possess a sufficiently powerful type system for the kinds of problems I tackle on a regular basis"

This tells us nothing.

> "...since you need to turn on about a dozen GHC language extensions in order to incorporate the last 20 years of research..."

So it's a bad thing that you can turn off parts of the language that you don't like? Also, don't I need to make an effort to go and download the latest version of e.g. Python just to get "the latest 21 years of research" (yes, it's that old)?

brandonbloom
That's an apples and oranges comparison. Python's type system doesn't restrict me in the same ways a static type system does. There are Haskell programs that were not valid that are now valid if you enable a compiler flag and change something in the type sub-language as opposed to changing something in the term sub-language.
mietek
> Haskell doesn't count, since you need to turn on about a dozen GHC language extensions in order to incorporate the last 20 years of research.

I don't follow. Turning on a language extension is as simple as adding a single annotation to your source file. Is this a problem?

Are you objecting to "the last 20 years of research" not being part of the definition of the Haskell language standard? This is a little off the mark, as Haskell was conceived in 1990, and the latest version of the standard is from 2010.

Moreover, Haskell is evolving rapidly precisely due to the use of language extensions. Research is done, papers are written, and extensions are added — then validated through practical use, and either kept or removed.

> There's also quite a bit of design warts that newer academic languages are starting to iron out. In particular, I don't think monad transformer stacks are a reasonable solution to computational effects.

Granted, monad transformer stacks can get unwieldy. Fortunately, writing in this style is not required. Monad transformers are just library code, so it's not necessary to invent a new language to replace them.

brandonbloom
> Haskell was conceived in 1990, and the latest version of the standard is from 2010

Haskell was released in 1990, but design on Haskell itself started in 1987, and it was heavily based on prior languages, standardizing on a common, agreeable subset. The fact that there is a 2010 version of the spec provides zero insight into how much the language has evolved over that 23-year period. That's not to say it hasn't evolved, just that it's silly to pick on my obviously hyperbolic trivialization of 20 years of progress.

> Haskell is evolving rapidly precisely due to the use of language extensions

Sure. And the fact that it is still rapidly evolving, especially in the type system department, is proof that there are interesting classes of problems that don't fit into Haskell's type system in a sufficiently pleasing way.

Evolution is a good thing & I have a ton of respect for both Haskell & the PL research community. See the rest of my post for how I'd prefer an advanced language/type-system duo to work in practice.

_random_
Very valid point. Really hate spelling out things for the compiler these days. Unfortunately for some reason language designers think it's a good idea to go crazy about syntax when developing something functional, because it makes things "terse".
jcurbo
He wrote a really good article about using static code analysis with Visual Studio here: http://www.altdevblogaday.com/2011/12/24/static-code-analysi... I imagine this was on the same train of thought that led him to investigate statically typed languages.
macmac
At 1:44:00 https://www.youtube.com/watch?feature=player_detailpage&v=o2... it sounds like he is actively looking for an opportunity to use Scheme as an embedded language in a game. Does HN know of any examples of such use?
null_ptr
Abuse [1] uses Lisp for game logic. You'd have to dive in the source code to find out more, a quick online search didn't turn out much.

[1] http://en.wikipedia.org/wiki/Abuse_%28video_game%29

unknownian
Some guy on /g/ is developing a Scheme gamedev library: https://github.com/davexunit/guile-2d
JabavuAdams
Naughty Dog is the canonical example. They were founded by two ex-MIT AI Lab guys. They designed a couple of in-house languages for Crash Bandicoot, and Jak & Daxter.

In the early days of the PS2, they supposedly derived a benefit from having a higher-level language that could be compiled for any of the PS2's various processors (EE, VU0, VU1). I think that at the time, you couldn't do VU programming in C.

In the end, they had to revert to industry standard C/C++ due to hiring issues.

dadrian
Actually, only one was from the MIT AI Lab. The other was an economics major at the University of Michigan. But, same result nevertheless.
georgemcbay
While their games used to be even more Scheme-based than they are now, AFAIK they do still use PLT Scheme (at least as far up as the Uncharted games, haven't read too much on the tech under The Last of Us) for the sort of traditional scripting one would use UnrealScript or QuakeC for.
noelwelsh
PLT Scheme, or Racket as it is now known, is in use in the Last of Us: http://lists.racket-lang.org/users/archive/2013-June/058325....

A bit of insight into the use of Racket here (scroll down): http://comments.gmane.org/gmane.comp.lang.racket.devel/6915

Vekz
It blows my mind that this is his first foray into functional languages. It makes my imagination wander: what would the current state of the gaming and programming industries be like had he built Wolf 3D in a functional language and inspired everyone else from there?
Tloewald
Did you listen to the end? He wonders aloud what might have been if QuakeC had been QuakeScheme.

It's not his first foray into functional languages though. It's his first attempt to write production scale code in a pure functional language.

_ZeD_
While listening to the structuring of the Haskell version, with a "world" and an "actor" for each element, and the interaction as message-passing, I'd think Carmack would find Erlang very satisfying as another alternative :D
nabilhassein
Is there a transcript?
macmac
Not that I know of, besides the one auto-generated by YouTube.
mkilling
It struck me how well Carmack's ideas about running all actors in the game world in parallel map to Clojure's agents[1].

Clojure introduces a bunch of interesting ideas on how to handle parallel programming. "Perception and Action" by Stuart Halloway is a great talk to listen to if you're interested[2].

[1] http://clojure.org/agents [2] http://www.infoq.com/presentations/An-Introduction-to-Clojur...

macmac
In fact one of Rich Hickey's very first demos of Clojure was ants.clj which uses agents to simulate the ants. While not a game but rather a simulation it includes many elements that would also be present in a game. A literate version of ants.clj may be found here: https://github.com/limist/literate-clojure-ants/blob/master/...
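The agent idea behind ants.clj — a single piece of state, updated serially by functions sent asynchronously — can be sketched in Python with a queue and a worker thread. This is a rough illustration of the Clojure concept, not its actual semantics (no STM, no error handling):

```python
import queue
import threading

class Agent:
    """Minimal sketch of a Clojure-style agent: holds one value and
    applies sent functions to it, one at a time, on a worker thread."""

    def __init__(self, state):
        self.state = state
        self._inbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, fn, *args):
        # Asynchronous: enqueue the update and return immediately.
        self._inbox.put((fn, args))

    def _run(self):
        while True:
            fn, args = self._inbox.get()
            self.state = fn(self.state, *args)
            self._inbox.task_done()

    def await_value(self):
        self._inbox.join()  # block until all sent actions have run
        return self.state

# One "ant" whose position is updated by sent functions, like in ants.clj.
ant = Agent({"x": 0, "y": 0})
ant.send(lambda s, dx: {**s, "x": s["x"] + dx}, 5)
ant.send(lambda s, dy: {**s, "y": s["y"] + dy}, 3)
```

Because each agent serializes its own updates, many ants can run concurrently without locks around shared state — the property Carmack's parallel-actors idea and Clojure's agents have in common.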
Aug 11, 2013 · 2 points, 0 comments · submitted by wting
Aug 05, 2013 · 1 points, 0 comments · submitted by Lavinski
Aug 02, 2013 · 4 points, 0 comments · submitted by swannodette
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.