Hacker News Comments on
CppCon 2015: Scott Wardle “Memory and C++ debugging at Electronic Arts”

CppCon · Youtube · 143 HN points · 2 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention CppCon's video "CppCon 2015: Scott Wardle “Memory and C++ debugging at Electronic Arts”".
Youtube Summary
http://www.Cppcon.org

Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/cppcon/cppcon2015

Scott Wardle, a senior software engineer at Electronic Arts, will talk about the current memory and C++ debugging setup and tools used in games.

PS4 and Xbox One have virtual memory and 64-bit address spaces, and the GPU and CPU are getting closer in their ability to work with virtual memory, so our tools are getting better and better and closer to those on PCs. Most of a game's memory goes towards art and level data like bitmap textures and polygon meshes, so artists and designers need to understand how much their data takes up. Giving them call stacks of memory allocations does not help. They want to know how big a group of buildings is. Why is this group of buildings bigger than that one? Maybe this one has some animation data, or one of the textures is too big. But there are tens of thousands of objects built by hundreds of people all around the world.

Hey everyone, I am Scott Wardle. I have been in games for over 20 years, much of that at EA Canada in Vancouver (though I started my career at EA Japan). I like to solve hard problems. I love good data visualization and metrics systems, and using them to fix hard bugs. I also like to find good interfaces that use both tech and people together to flip hard problems so that they become easy and solve themselves.

Videos Filmed & Edited by Bash Films: http://www.BashFilms.com

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
This is a console game, and consoles are always starved for memory since it's such a huge BOM cost, so manufacturers put in as little as they can get away with. This ends up screwing the console game developers, who have to start accounting for every single byte.

So there are a lot of these things in games where they just run a script on all their assets and whatnot to figure out how much memory they will need, then allocate fixed pools at the start. These programmers would love to just throw that stuff into a vector, but given its growth strategy you can end up wasting half your memory. Even worse, every element added to the vector might cause it to resize, and when that happens during gameplay it might just ruin your 16 ms frame target. A fixed pool is as fast as you are going to get: no random chance your game is going to stutter, and no waste.

Here is a great talk on this:

https://www.youtube.com/watch?v=8KIvWJUYbDA
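
For illustration, a minimal fixed-capacity pool sketch in the spirit of the comment above; the type names and the 4096 budget are invented here, not EA's actual code:

  // Minimal fixed-capacity pool sketch: all memory is reserved up front,
  // so nothing ever reallocates (or stutters) during gameplay.
  #include <cassert>
  #include <cstddef>
  #include <vector>

  template <typename T>
  class FixedPool {
  public:
      explicit FixedPool(std::size_t capacity) { storage_.reserve(capacity); }

      // Fails loudly instead of growing: the budget was decided ahead of time
      // (e.g. by a script that measured the level's assets).
      T* allocate(const T& value) {
          assert(storage_.size() < storage_.capacity() && "pool budget exceeded");
          storage_.push_back(value);
          return &storage_.back();
      }

      void reset() { storage_.clear(); }  // e.g. on level unload

  private:
      std::vector<T> storage_;  // reserved once, never grows
  };

  struct Particle { float x, y, z; };

  int main() {
      FixedPool<Particle> particles(4096);  // budget computed offline
      Particle* p = particles.allocate({0.f, 1.f, 2.f});
      (void)p;
  }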

timthelion
Thanks, I learned something, though if they really were accounting for every byte, I'd say that function call obfuscation really didn't help! But I understand what you're getting at.
tgb
It's possible that's only in the PC version. I don't see anything on Google about Securom on xbox, so this seems likely.
Scott Wardle gave an interesting talk at CppCon 2015 which mentions EASTL a bit, with the reasons they use it (and other non-standard tools): “Memory and C++ debugging at Electronic Arts” https://youtu.be/8KIvWJUYbDA
jonkalb
CppCon 2015 was also the site of the first formal meeting of SG14, the game developer study group of the C++ standards committee. Michael Wong has just announced that it will meet again at CppCon 2016: https://groups.google.com/a/isocpp.org/forum/?fromgroups#!to...
Oct 07, 2015 · 143 points, 70 comments · submitted by adamnemecek
danbolt
Having had the privilege of working near Scott Wardle before, I can say he's an incredibly approachable and helpful guy. It's exciting to see him on the front page of Hacker News. Go Scott!
danjayh
OK. Maybe I just don't understand game development very well, but I don't get why some of the problems they had in the post-MMU era even existed.

1) Memory Fragmentation - why? This seems like it would only be a problem if they ran their entire game in the same virtual address space.

2) Subsystem A corrupting Subsystem B's RAM - once again, wouldn't one run the sim code in one address space, the rendering in another, etc. and have strategically shared pages for them to transfer data back and forth?

It really seems to me like the idea is 'yeah, we had virtual memory, and it was great, but we didn't really use it'... Granted, they put guard pages on some allocations, but that's akin to using a supercomputer to solve your family budget. Lots of power available, but used in a trivial way.

I acknowledge that there's some cost to a context switch, and that the 2005 generation used PowerPC with its fairly cache-inefficient hashed pagetable (poor locality), but the benefits of having a hard wall in memory between major system functions is immeasurable (I'm guessing that on PPC, they just set up a few Block Address Translation registers and called it 'good enough').

Then again, I come from a safety critical background .. we make aerospace code, so it's developed using the methods described in the article about 'the right stuff' that was up a day or two ago. We're really fond of having really hard walls between every resource we can (memory, CPU time, I/O bandwidth, FS bandwidth ... you name it) for major system functions. I realize that in game programming most of those walls aren't practical (due to performance impact) or necessary, but the memory one is fairly cheap and incredibly helpful.

PS - if you're wondering, we prevent memory leak problems by just disabling dynamic allocation post-boot, which happens on the ground. Our stack utilization is also required to be statically analyzable, which means that you can't do things like recursion. These limitations make the design of some algorithms extremely painful, but what other software have you ever heard of that can truthfully claim 0 memory leaks, guaranteed? We also do 100% code coverage (at the machine language level), 100% decision coverage, static and dynamic worst-case timing analysis, etc. etc. etc. Look up 'DO-178B' or 'DO-178C' if you're curious about what's done and why safety-critical stuff is insanely expensive. More recently we're throwing in fuzzing for good measure, but fuzzing doesn't turn up a whole lot in extremely thoroughly tested code that's been reviewed line-by-line by dozens of engineers (and is subject to strict design limitations).

corysama
1) Virtual memory only helps you to the granularity of a page. So, 4k at minimum. Meanwhile, a PS3 has exactly 256 megs of memory available to the CPU, and we are highly motivated to use every last goddamned byte of it because if we don't, our competitors will. Meanwhile 2, the vast majority of allocations are under 512 bytes. In practice, even that subset is mostly <64 bytes. In that situation, fragmentation within a single page becomes serious business. You can't afford to have 10% of a given page be 9-15 byte gaps between allocations. That could add up to 15-25 megs out of 256 wasted! When you get really tight, you can't even afford a heap header for each allocation -- thus the specialized small block allocators (see the sketch below).

2) A lot could be done better here. 4 gigs is not a ton of address space to protect different systems from each other. But, if you assume no system is ever going to need more than 64 megs, then you could divide memory into 64 spaces of 64 megs each and never have to reuse memory between different systems. I haven't heard of anyone doing this. You'll still get corruption from garbage data ending up in pointers. But virtual memory segregation would cut out a large class of problems.

Static allocation post-boot is not an option in games. You can be very strict and structured, but the needs of different situations are still too varied. Sometimes you need 90% UI, sometimes you need 10 gameplay regions, sometimes you need 100% devoted to a single, small room for a boss fight.

Similar to fuzzing, I've worked on games that use controller-monkeys. An AI "player" was set up to artificially progress through the game while spamming the system with random actions. We would set all of our machines to monkey-mode each night before going home. In the morning, we would have a fresh set of obscure crashes to investigate.
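
A rough sketch of the small-block-allocator idea from point 1) above (illustrative only, not any shipping allocator): carve a page into fixed-size slots and thread the free list through the slots themselves, so there is no per-allocation header wasting bytes.

  // Small-block page: fixed-size slots, intrusive free list, zero header waste.
  #include <cstddef>
  #include <cstdint>

  class SmallBlockPage {
  public:
      SmallBlockPage(void* page, std::size_t page_size, std::size_t slot_size)
          : free_list_(nullptr) {
          // Build an intrusive free list inside the page itself.
          auto* base = static_cast<std::uint8_t*>(page);
          for (std::size_t off = 0; off + slot_size <= page_size; off += slot_size) {
              auto* node = reinterpret_cast<FreeNode*>(base + off);
              node->next = free_list_;
              free_list_ = node;
          }
      }

      void* allocate() {
          if (!free_list_) return nullptr;   // page full
          FreeNode* node = free_list_;
          free_list_ = node->next;
          return node;
      }

      void deallocate(void* p) {
          auto* node = static_cast<FreeNode*>(p);
          node->next = free_list_;
          free_list_ = node;
      }

  private:
      struct FreeNode { FreeNode* next; };
      FreeNode* free_list_;
  };

  int main() {
      alignas(64) static std::uint8_t page[4096];
      SmallBlockPage pool(page, sizeof(page), 64);  // 64-byte slots
      void* a = pool.allocate();
      if (a) pool.deallocate(a);
  }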

danjayh
You actually get 4 gigs per process, so you could have 64 spaces of 4 gigs each, with the 256 MB of physical RAM mapped into those spaces as needed.
corysama
No app-space multiprocessing on consoles. The OS is extremely minimal. In the PS2 days, there was no OS. Just the BIOS and a few standard libraries to statically link.
panic
Since the VM system works at the granularity of pages, using N entirely different regions wastes up to $PAGE_SIZE * N bytes of memory just in the free space at the end of the region. You also make your fragmentation problem worse, since you can't place an allocation from one system within the free space of another.
danjayh
Not suggesting that everything go in separate regions, but let's say they have N major functions (where N < 10) ... maybe physics, AI, render, network, etc. that would each run in their own address space. If N were even, say, 20, this still only comes out to 20 * 4 KB (PPC page size) = 80 KB, which I would classify as trivial.
danjayh
Also of note is that fragmentation of physical RAM is completely irrelevant, aside from locality issues. They can map random physical pages into one nice, contiguous virtual space, and the only thing that matters is fragmentation in the virtual space. Since each process gets its own entire 4 GB virtual address space on a 32-bit system, fragmentation nearly becomes a non-issue.

This is also why I suspect that they were mapping their virtual address space using block translation instead of a page table - if they had a page table, even in one process, it shouldn't have been a big issue since the virtual address space is many times larger than the available physical ram on the 360/PS3/Wii. The other possibility is that much of their RAM needs to be accessible by the GPU, for which fragmentation is an issue.

roghummal
I humbly suggest that if by some horrible twist of fate you had to work on games instead of "proper" software, you would quit programming.
MaulingMonkey
> Since a each process gets its own entire 4GB virtual address space

2GB of which is reserved for kernel shenanigans unless everyone enabled /3GB in their kernel boot parameters (they haven't.)

Default allocators start failing at around half that amount.

Meanwhile I'm running out of memory on the new generation of consoles where I have 5GB of physical memory and no virtual memory fragmentation to worry about.

danjayh
Truth on the kernel. I'm used to working in an environment where we own the OS, the board design, and the design of many of the chips ... which means we can make it do whatever we need (downside is, you have to maintain it all too). However, our needs are unique enough to justify it -- computing is hard in an environment where atmospheric radiation can flip random bits for you (See: https://en.wikipedia.org/wiki/Single_event_upset).
MaulingMonkey
You have to manage memory for things like textures, models, and maps - each of which are generally going to be way larger individually than 80KB.

Here's an example of a pathological - but entirely reasonable to encounter - case:

You allocate several textures of various sizes. Your larger 3MB texture blocks might end up separated by smaller allocations for, say, icons. Or other 3MB textures that can be reused on the next level (to reduce load times). Long story short, unless you do work to ensure textures are correctly grouped by lifetime (which in many cases will depend on and diverge based on user input) you're going to end up with lots of 3MB holes.

You switch levels, and release many/most of your 3MB environment textures. But the icons remain loaded for your UI, so you don't free those. So now you have 3MB holes everywhere.

You start to load the next level. It's using a lot more alpha textures in its environment - lots of water and glass. The alpha channel adds another 33% to the size - these new textures are mostly 4MB. None of these new textures fit into the 3MB holes. You've already potentially just wasted about half your available memory (or address space), with a single texture size in two formats of note. Within a single system ("render"). Oops.
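
To make the hole problem concrete, here is a toy first-fit simulation of the scenario above; the 25 MB budget and the exact counts are invented purely for illustration:

  // Toy simulation: interleave 3 MB textures with small icon allocations,
  // free the textures on level unload, then try to place a 4 MB texture.
  #include <cstdio>
  #include <map>

  using Addr = long long;
  constexpr Addr MB = 1 << 20;

  std::map<Addr, Addr> free_space = {{0, 25 * MB}};  // start -> size, first-fit

  Addr alloc(Addr size) {
      for (auto it = free_space.begin(); it != free_space.end(); ++it) {
          if (it->second >= size) {
              Addr addr = it->first;
              Addr remaining = it->second - size;
              free_space.erase(it);
              if (remaining > 0) free_space[addr + size] = remaining;
              return addr;
          }
      }
      return -1;  // no contiguous hole big enough
  }

  void free_block(Addr addr, Addr size) {
      // Coalescing omitted: the freed holes are separated by live icons anyway.
      free_space[addr] = size;
  }

  int main() {
      Addr textures[8], icons[8];
      for (int i = 0; i < 8; ++i) {        // level 1: textures and icons interleave
          textures[i] = alloc(3 * MB);
          icons[i] = alloc(64 * 1024);     // UI icons stay loaded
      }
      for (int i = 0; i < 8; ++i)          // unload level 1 environment textures
          free_block(textures[i], 3 * MB);

      Addr big = alloc(4 * MB);            // level 2 wants 4 MB alpha textures
      std::printf("4 MB alloc %s\n",
                  big < 0 ? "FAILED: only 3 MB holes left" : "succeeded");
  }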

danjayh
OK. So the issue is the GPU then. That same scenario with data that could use virtual memory would be a non-issue, since you could mash the available fragmented physical RAM together into whatever size blocks were needed using the MMU.
MaulingMonkey
> That same scenario with data that could use virtual memory would be a non-issue

Untrue I'm afraid - the case I'm basing this example off of could (and did) happen in virtual address space as well.

> since you could mash the available fragmented phsyical ram together into whatever size blocks were needed using the MMU.

Okay, so your physical memory is no longer fragmented. Unfortunately, your virtual address space still is. If I need to allocate a 3MB contiguous chunk of virtual address space (I do), I'm still SOL.

All the MMU gains you in this respect is a "free" memcpy - twiddle some MMU bits instead of copying data around from one buffer to another. It won't fix up any of your pointers.

nikanj
By running everything in a single process and address space they can get n % more fps, where n>0.
danjayh
Perhaps, but it's possible that it was actually the other way around since Virtual Memory and multicore came at about the same time. By running everything in a single process, they probably avoided forcing clean interfaces and dependencies between their functions, which might have actually hurt their ability to utilize the CPU efficiently.
pandaman
Here is all you need to know to understand game development. It's just two things.

Firstly, on a home console there are just two playable frame rates: 30 fps and 60 fps. This means your frames have to come in under either ~16.6 ms or ~33.3 ms. The next step down is 20 fps, and if you were even allowed to ship a game like that it would not make any money, so you would likely get fired anyway.

You will do very unsafe things to stuff as much as possible into a game's frame because your competition is already doing it and your designers demand it. There is only one thing you could possibly sacrifice a little bit of performance for - development time.

Thus there is this second thing: a game has to ship before you run out of money. The most common production time is about 18-24 months. This is for millions of LOCs of source code and terabytes of source assets (textures, models, video, sound, etc.) written, often from scratch, and tested.

How long does it take to ship software for a new plane in your business? I'd be terrified to even approach a plane that had shipped in 1.5 years, let alone fly in it.

bluecalm
>>Firstly, on a home console there are just two playable frame rates: 30 fps and 60 fps.

This is probably very basic but I am going to ask anyway: why are 40fps, 45fps or 47fps not possible?

pandaman
The video out is fixed at 60 Hz, so at a constant framerate you can only output every frame, every two frames, every three frames, etc. In other words, your framerate is a factor of 60. You could triple buffer and do a variable rate so the average could be something like 40, but it would cost you 8M, and a variable framerate is perceived as worse than a constant one. If you cannot hit 60, you are better off spending more time on making the game prettier at 30 than wasting memory on a triple buffer and getting a stuttering game as a result. On the current gen memory is not as much of an issue, but a variable framerate is still not perceived well.
NickPollard
Because the display is normally running at 60hz, so if you don't want tearing (where the frame to be displayed changes halfway through monitor refresh, leading to a sharp tear across the screen), you need to run at an integer number of refreshes per frame (known as vsync). 1 = 60fps, 2 = 30fps, 3 = 20fps.
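
A quick worked version of that arithmetic (just restating the numbers above, nothing engine-specific):

  // At a fixed 60 Hz refresh, showing each frame for n refreshes gives
  // a framerate of 60/n and a frame budget of n/60 seconds.
  #include <cstdio>

  int main() {
      const double refresh_hz = 60.0;
      for (int n = 1; n <= 3; ++n) {
          double fps = refresh_hz / n;                  // 60, 30, 20
          double budget_ms = 1000.0 * n / refresh_hz;   // ~16.7, ~33.3, ~50.0
          std::printf("%d refresh(es) per frame -> %2.0f fps, %4.1f ms budget\n",
                      n, fps, budget_ms);
      }
  }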
jharsman
Traditionally TVs only support 60 Hz refresh rates (or 50 Hz for older PAL sets), so you either render a new frame for each frame the TV can refresh, or you display a frame for two TV refreshes.

This isn't strictly true anymore, since many TVs now support 72 Hz (to be able to display 24 fps content like film), but my guess is that doesn't have wide enough support to rely on.

MaulingMonkey
Because your monitor is (probably) refreshing at 60hz. Your frames will display for some integer number of monitor refreshes - at 1 you'll get 60fps, at 2 you'll get 30fps.

Over a second, it's of course entirely possible to get something in-between, where some frames are displayed for 1 refresh, and others for 2 refreshes. Inconsistently waffling between these is going to be more jarring than sticking to one.

Recently, you have things like nVidia's... G-Sync? Which makes the monitor refresh rate variable with compatible monitors, to match the game framerate, making 45fps entirely possible and reducing the jarring from varying frame lengths. But this is still recent, rare, and likely PC and/or GPU vendor specific.

melling
I imagine game developers are the few people left using C++. Does this come down to not being able to get around pauses in garbage collection?

I wrote a little C++ in iOS a few years ago when I thought I might try to share some code with Android. Unfortunately, Objective C++ was slow to compile. I think the biggest problem with C++ is that it's simply harder to get correct code:

http://www.gamasutra.com/view/news/128836/InDepth_Static_Cod...

[Update]

Sure, there's a lot of code that was started 15-20 years ago that was written in C++ because it was the best thing at the time. Java's JIT has greatly improved. I'm sure C# is also great. Large apps like Hadoop are written in Java. IBM's Watson is mostly Java.

http://www.drdobbs.com/jvm/ibms-watson-written-mostly-in-jav...

Why does C++ offer advantages over C# on Windows, for instance? In 2015, starting from scratch, where is C++ needed?

Please consider that I was using C++ when it was cfront. I seem to be getting a lot of "how naive" from people who've probably never used C++. Yes, there are dozens of desktop apps in C++, especially legacy, but there are millions of apps, mobile apps, and web apps that are in other languages.

plexchat
At PlexChat[1], we intend to use C++ to write a lot of our infrastructure (where it makes sense, that is). The language has advanced significantly in the last decade, and writing idiomatic, safe code is much easier than it used to be.[2]

Garbage collection is certainly a part of it, but really, it's about programming with a deterministic runtime. Controlling memory budgets and doing things described in this talk is possible in an environment where the programmer has control, but instrumenting a blackbox runtime to identify performance bottlenecks and pain-points can be a huge endeavor. Other languages optimize understandability of the programming semantics (what is this algorithm doing) but do very little to aid in expressing runtime semantics (how will this algorithm execute on the machine).

[1] http://plexchat.com [2] https://www.jetbrains.com/cpp-today-oreilly/books/Cplusplus_...

pjmlp
C++ isn't the only language that offers such guarantees.

What it is, is the 90's survivor of all wannabe C replacements.

pcwalton
> Other languages optimize understandability of the programming semantics (what is this algorithm doing) but do very little to aid in expressing runtime semantics (how will this algorithm execute on the machine).

I think this is painting other languages that aren't C++ with too wide a brush. There are many non-C++ languages that make it "easy" to control the low-level generated code (though I question really how easy it is with how incredibly aggressive in optimization modern C++ compilers are).

plexchat
Other languages, not all other languages ^^. Yes, of course there are several languages that map more directly to one's mental model of how code is executed. Rust (which you're working on) is one such exception. Optimizations are certainly aggressive, but the micro-optimizations are usually either well known (like NRVO, loop unrolling, function inlining) or not important relative to more costly operations that are occurring (allocations of heap memory, fetching from the heap, executing a system call, acquiring a mutex, etc).
inDigiNeous
I can only answer for myself, but for a new project started in December 2013, which I have now been working on for the past two years, related to custom geometry generated in VR, C++ was the only viable cross-platform option.

With C++ (using the C++11 standard) I can be pretty sure that the code compiles on Mac, Linux, Windows, Android, iOS and pretty much every platform.

With the VR platform we also need all the performance we can get, because the target FPS is 90 on a stereo-rendered area.

Also, library support, existing code to use and so on all win here.

C++ is definitely not the nicest language to get into; Objective-C, for example, is much nicer IMO. But the more you use C++, the nicer it gets as you learn the ropes. It's a very complex language, but knowing it pays off in the long run, and modern C++11 and C++14 are a completely different beast than the previous versions in terms of ease of use and syntax niceness.

So, cross-platform is a big thing. If I wrote it in C#, it would be tied to either Microsoft's C# or the Mono implementation. If I had chosen Obj-C, it would be pretty much Mac and iOS only.

When you need performance + crossplatform, C++ just wins.

pjmlp
However that is a consequence of available implementations, not of the language itself.

Back in the day, I went Turbo Pascal -> C++ exactly for that reason (Turbo Pascal -> C was never an option).

If the UNIX systems I had to work with had had a Turbo Pascal-compatible compiler, I would have kept using it.

inDigiNeous
Well, partly yes. If this was year 2030 and Rust was available on every platform so that everything worked perfectly, I would have probably used that.
aarongolliver
Desktop applications requiring high performance also use C++. Photoshop (and perhaps many parts of the Adobe suite?), and Tableau come to mind.

Your intuition about GC pauses is correct: when you have to push frames every 15-30 milliseconds, you can't afford any kind of unexpected pausing. Game devs also use a huge number of hacks to hit their performance targets, something that using C/C++ makes much easier due to their relative looseness.

vcool07
No, C/C++ is still very commonly used in many real-time systems, embedded systems, etc. But since most of the topics on HN are in the web/Android/iOS areas, you might not see many devs talking about it here. C/C++ (esp. on Linux/RTOS) still has a very active user base and market demand!
berkut
C++ is still very heavily used in industries where speed (and memory compactness) matter. Games, simulation, VFX...

For example, most software used to make films / CG imagery: from modelling (Maya, SI), to texturing (Mari), to rendering (PRMan, Arnold, VRay, Mantra, Hyperion, Manuka), to compositing (Nuke).

And it probably will be for many years to come as there's such an infrastructure there.

panic
It's quite likely that C++ code rendered this web page for you. If you're on a Mac, that code was itself likely compiled using C++ code.
jcranmer
Or on Linux or Windows: gcc is now written in C++, and I'm pretty sure MSVC itself is in C++.
StephanTLavavej
Yes, both MSVC's front-end C1XX and back-end C2 (aka UTC) are written in C++. Even our CRT is mostly C++ now.
Flow
Do you update it to follow the latest standards or is it early 2003-ish C++?
StephanTLavavej
C1XX uses new features fairly aggressively. For example, it uses lambdas, auto, range-for, etc.
davidgrenier
Too bad nobody directly addressed your question as to whether it has anything to do with pauses, which would be unbearable for the gamer's experience.

Perhaps the question is still open whether properly managing allocation, using object pools as well as other strategies, would enable writing triple-A games in a managed language.

Or perhaps all we need is a GC race (à la the JS engine race we've seen in the major browsers), which Google might very well be starting with the recent efforts on the Go garbage collector.

vvanders
See my comment below; GC isn't the problem. It's being able to lay out your memory appropriately so you don't cache miss, which becomes almost impossible when everything is a single heap allocation.
pcwalton
> Or perhaps all we need is a GC race (à la JS engine race we've seen in the major browser) which Google might very well be starting with the recent efforts on the Go garbage collector.

This has been happening for a long time with Java HotSpot and Azul C4, which still represent the state of the art. Go isn't really doing anything new—it's still not generational, for example.

plexchat
The speed of the GC isn't the issue (although it certainly can be). The non-determinism is. It's hard to control when a GC should happen, and when it happens at a bad time, the ramifications are perceived as an awful user-experience.
kayamon
It's not that you couldn't write a triple A game with a managed language.

It's that you can _quite easily_ write a triple A game without using a managed language.

For games, managing memory is not a big problem. For other kinds of apps, it can be. In some apps the lifetime of an object is not well defined. But in games, it tends to be very well defined.

If you want a good explanation of the problems game developers face when trying to use GC, go read Rich Geldreich's post on it (http://richg42.blogspot.com/2015/05/lessons-learned-while-fi...).

Zardoz84
There are some AA or AAA games using languages that are not C/C++. Severance: Blade of Darkness (aka Blade: The Edge of Darkness) has a lot of code in pure Python. And it's well known that Naughty Dog uses a variant of Lisp in its games.
kayamon
I'm sure there are - that's not what I said.

Also, Naughty Dog's games do _not_ use Lisp, they use a custom language that looks a little like Lisp, but has no garbage collector.

lispm
Lisps without GC have been used in various places for delivery.

An example was Thinlisp, which tries to be mostly compatible with Common Lisp, but without GC. The compiler for Thinlisp is written in Common Lisp.

Gensym's G2 is written in such a Lisp. Thinlisp also comes from them, IIRC. http://www.gensym.com

Some Lisps don't/didn't use a GC, but reference counting or similar schemes.

z3phyr
Naughty Dog used to use Game Oriented Assembly Lisp (GOAL), which was very Scheme-like. They dropped it for C++, although they still use a Racket dialect for some gameplay scripting.
vvanders
It's not garbage collection (although that's part of it; Unreal 3 actually had a GC).

It has to do with memory layout. Cache misses = lost performance and being able to structure your memory appropriately is super-important. Look up Mike Acton or any of the Data Oriented Design stuff for more details.
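
For anyone unfamiliar with the term, here is a generic data-oriented-design illustration (not from any particular engine) of what "structuring your memory appropriately" tends to mean:

  // Array-of-structs vs struct-of-arrays: the second layout keeps the fields a
  // hot loop actually touches contiguous in memory, so far fewer cache misses.
  #include <vector>

  // AoS: each particle's position is interleaved with data the update ignores.
  struct ParticleAoS {
      float x, y, z;
      float r, g, b, a;     // color, untouched by the physics update
      int   material_id;    // also untouched
  };

  void update_aos(std::vector<ParticleAoS>& ps, float dt) {
      for (auto& p : ps) p.y -= 9.81f * dt;   // drags color/material through cache
  }

  // SoA: the update streams through one tightly packed array.
  struct ParticlesSoA {
      std::vector<float> x, y, z;
      std::vector<float> r, g, b, a;
      std::vector<int>   material_id;
  };

  void update_soa(ParticlesSoA& ps, float dt) {
      for (float& y : ps.y) y -= 9.81f * dt;  // only the bytes we need
  }

  int main() {
      std::vector<ParticleAoS> aos(1000);
      ParticlesSoA soa;
      soa.y.resize(1000);
      update_aos(aos, 1.0f / 60.0f);
      update_soa(soa, 1.0f / 60.0f);
  }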

kayamon
Many, if not most, desktop applications are written in C++. Just because you don't use it doesn't mean no one else does.
dahart
> I imagine game developers are the few people left using C++. Does this come down to not being able to get around pauses in garbage collection?

That is one reason. But if you watch OP's video, you'll see a lot of other reasons. In particular, games frequently have their own allocators & memory management. This is one reason why it might be hard, inconvenient, or impossible to use Java or C#.

That said, a lot of gameplay logic these days is actually written in a scripting language like Lua, and C++ is used for lower level systems like rendering & physics.

> In 2015, starting from scratch, where is C++ needed?

The main question is: "starting what"? For most web dev, you'd be crazy to start with C++. For the internals of a game engine, like unreal or unity or a competitor, C++ is probably the only sane choice. Also anything embedded. I guess Arduino & others are getting interpreters now, but C/C++ (notably minus the crazy kind of stuff in the video) is the easiest way to start.

I used to do hobby projects in C++, mainly graphics & image processing. My reasons were I knew it, it compiled to very fast code compared to anything else, and it was often the easiest way to interface with whatever open source libraries I needed.

These days most libraries are available in your favorite scripting language, and the performance can be good if you pay attention. I don't personally have a reason to use C++ anymore, even for hobby projects that need high performance.

wsxcde
I use a fair amount of C++ for my code, and it's entirely because all the other tools I use - SAT solvers, model checkers, etc. - are written in C++. I don't like it very much, and many others in my community don't like it either, but we're all stuck in this local maximum.

Moving to an entirely new language is impractical for any single project, so all the new projects end up using C++, which in turn means the projects that come after them end up using C++ as well. For a concrete example, look at something like ABC (https://bitbucket.org/alanmi/abc). There is very little incentive to rewrite the whole thing in a safe language, but if you're doing new circuit verification research, having ABC's humongous library of transformations and verification algorithms will be a huge help in implementing new things.

There have been a few attempts to break away though. Some folks at SRI wrote the model checker SAL in Ocaml, a lot of CMU folks seem to like Ocaml too: for example the bitblaze project uses Ocaml. And z3py has helped make Python popular.

I've lost countless hours of my life debugging the sorts of issues this guy is talking about, so I really hope we do make the switch to something better.

vectored
I use C++ in my work - robotics/Comp-vision. We need it for the performance and the cross-platform characteristics. It is a highly powerful language once you get used to its complexity. In my experience, it is the most popular language in the robotics community, though not for prototyping (Matlab, python are more common for prototyping)
Kristine1975
Here's Tim Sweeney of Unreal fame talking about GC: http://lambda-the-ultimate.org/node/1277#comment-14252 (granted, it's from 2006, but I still think it's relevant)

TL;DR: It's not a problem.

adrianN
Check out this talk by Herb Sutter https://sec.ch9.ms/ch9/ddaf/d4642f30-491c-481d-97c5-62aa5ab6...
fungos
yeah, few people left using C++ ...
denim_chicken
Such naivety.
z3phyr
A major chunk of software developers use C/C++ (including a lot of game developers), although a lot of people see only the tip of the iceberg. Look into your systems. Almost everything down there, from your database to your compiler/interpreter/VM to your operating system, your web browser, your music player, the internals of your search engine and yes, your favorite games (renderer, physics, most of the gameplay), is probably being developed and maintained in C/C++.

The reason for C++ being used is not only performance, but also a sense of reliability that the implementation will actually work.

gh02t
C++ is huge in scientific programming. Fortran is the cliche language for science, but the vast majority of newer large-scale codes are being written in C++.
e12e
AFAIK Debian isn't very diligent about tagging packages with "implemented-in", but even so:

  dpkg -l |grep ^ii -c #Installed packages
  3608
  # packages tagged as being in c++, and installed:
  aptitude search '?tag(implemented-in::c++)' \
    |grep ^i -c
  930
I'm sure there's lots of c++ in the other 75% (or they depend on a runtime/compiler/library written in c++) -- but at any rate - one in four packages is nothing to sneeze at.
lultimouomo
Linux is more of a C than a C++ land, so the figure does not look unreasonable to me.
melling
This is completely irrelevant to the question. Furthermore, most of the code was written a decade ago. Why can't I use Go, for example?
lorenzhs
Your search query needs escaping for the pluses, otherwise it will match 'implemented-in::c' as well (aptitude's matching is weird).
e12e
Whoops, you're absolutely right. Eg:

  '?tag(implemented-in:c\+\+)'
On a different desktop right now, so can't check -- but that'll probably lower the count considerably (eg: on a different server I get 226 for "c/c++" and 19 for just "c++" ("c\+\+)").
pjmlp
I like C++, but I am also quite a bit older than the language and remember when it was seen as unsuitable for writing any of the systems you have mentioned.

Eventually it got adopted by people like myself who didn't care for the naysayers, and its compilers improved to the point that many think it was always like that.

Likewise, others will use languages that today's naysayers won't see as proper for writing those systems, and in ca. 20 years from now everyone will think they were always that way.

z3phyr
You are right. I see a huge potential in Rust. It seems very suitable for the C++-level domains. The only thing keeping me away from production usage today is the lack of maturity in the ecosystem and libraries.
jguegant
Rust is kinda the Godwin's law of C++ threads these days.
magoghm
In the early 90's I posted on some game developers forum that when consoles got more powerful processors it would be possible to write games in C instead of using assembly language. Everybody told me that I was an idiot.
wsxcde
They say they can't pass parameters to delete so they had to use a macro instead to be able to pick the right allocator. But why not just store the pointer to the allocator along with the block and then have operator delete call the allocator? Is it just because they don't want to store an extra pointer with each block?
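
One way the scheme in the question could look, sketched as a guess rather than EA's actual code (the Allocator interface and the header layout here are invented for illustration):

  // Stash a pointer to the owning allocator in a header in front of each block,
  // so a plain `delete` can route the free back to the right allocator. The
  // price is exactly the extra pointer per allocation the question mentions.
  // (Over-aligned types and the nothrow/sized overloads are ignored for brevity.)
  #include <cstddef>
  #include <cstdlib>
  #include <new>

  struct Allocator {
      virtual void* alloc(std::size_t n) { return std::malloc(n); }
      virtual void  free(void* p)        { std::free(p); }
      virtual ~Allocator() = default;
  };

  struct BlockHeader { Allocator* owner; };

  Allocator g_default_allocator;

  void* operator new(std::size_t size, Allocator& a) {
      auto* h = static_cast<BlockHeader*>(a.alloc(sizeof(BlockHeader) + size));
      if (!h) throw std::bad_alloc();
      h->owner = &a;
      return h + 1;                        // user memory starts after the header
  }

  void* operator new(std::size_t size) {   // plain `new` uses the default allocator
      return operator new(size, g_default_allocator);
  }

  void operator delete(void* p) noexcept {
      if (!p) return;
      auto* h = static_cast<BlockHeader*>(p) - 1;
      h->owner->free(h);                   // the block remembers who owns it
  }

  void operator delete(void* p, Allocator& a) noexcept {  // used if a ctor throws
      if (p) a.free(static_cast<BlockHeader*>(p) - 1);
  }

  int main() {
      Allocator texture_allocator;         // imagine one allocator per subsystem
      int* v = new (texture_allocator) int(42);
      delete v;                            // routed back to texture_allocator
  }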
duaneb
> Is it just because they don't want to store an extra pointer with each block?

I would imagine; this could add up quickly with an e.g. linked list.

kayamon
It's kinda an implicit rule in game development that you don't do at runtime what you can do at compile time.