HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
John Carmack Keynote - Quakecon 2013

IGN · Youtube · 10 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention IGN's video "John Carmack Keynote - Quakecon 2013".
YouTube Summary
Watch John Carmack give his annual speech at this year's QuakeCon.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Feb 16, 2021 · oconnor663 on Functorio
John Carmack spent some time in one of his famous keynotes talking about functional programming. On the one hand, obviously video games tend to be very stateful, and implementing them in imperative terms is natural. On the other hand, tons of specific things you do inside a game (or in any program) can be described as pure functions, and doing that is often a great idea. I think it was this part of this keynote: https://youtu.be/Uooh0Y9fC_M?t=4660
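The idea scales down to very small pieces of a game. Here's a minimal Python sketch (the names and damage rules are invented, not from the keynote) of pulling one stateful-looking operation out into a pure function:

```python
from dataclasses import dataclass, replace

# Hypothetical example: the damage calculation is a pure function,
# even though the surrounding game loop is stateful.
@dataclass(frozen=True)
class Player:
    health: int
    armor: int

def apply_damage(p: Player, amount: int) -> Player:
    """Pure: same inputs always give the same output, no side effects."""
    absorbed = min(p.armor, amount // 2)
    return replace(p, armor=p.armor - absorbed,
                   health=max(0, p.health - (amount - absorbed)))

p0 = Player(health=100, armor=10)
p1 = apply_damage(p0, 30)   # p0 is untouched; the result is a new value
```

Because `apply_damage` depends on nothing but its arguments, it can be unit-tested and reasoned about in isolation, which is the point Carmack makes.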
milesvp
Your comment made me feel compelled to mention a Blizzard talk I came across recently on the ECS (Entity Component System) architecture they used for Overwatch. It looks suspiciously like a function-first architecture in that they don't allow mixing of data and functions. I'm not sure how purely functional it is, but it certainly strikes me as the kind of architecture that Carmack would appreciate in dealing with the complexities that led him to a more functional programming style.

This talk was given 3 years after release and includes all the hindsight they gained in that time. The parts about the netcode are also quite amazing, as is the precision of their predictors, which they attribute to using ECS.

https://www.youtube.com/watch?v=W3aieHjyNvw&feature=youtu.be
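For a rough feel of what "no mixing of data and functions" means in an ECS, here's a toy Python sketch (entirely hypothetical, not Blizzard's design): components are plain data tables keyed by entity id, and a "system" is just a function over them.

```python
# Component stores: plain data, no methods attached.
positions = {1: (0.0, 0.0), 2: (5.0, 5.0)}
velocities = {1: (1.0, 0.0), 2: (0.0, -1.0)}

def movement_system(positions, velocities, dt):
    """A 'system': a function from component stores to a new store."""
    return {e: (p[0] + v[0] * dt, p[1] + v[1] * dt)
            for e, p in positions.items()
            for v in [velocities.get(e, (0.0, 0.0))]}

positions = movement_system(positions, velocities, dt=1.0)
```

Real ECS implementations lay components out in contiguous arrays for cache efficiency; the separation of data from behavior is what this sketch is meant to show.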

fredrikholm
> I'm not sure how purely functional it is

ECS overlaps with functional programming (FP) only in that both are data-first.

Purely functional would mean never mutating an existing value, functions only receiving one value and returning another, monadically binding side effects and so on.
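As a loose illustration of that last point in Python (a sketch, not real monadic machinery): a "pure" function can return a description of its side effects instead of performing them, leaving a separate interpreter to actually run them.

```python
# All names are invented for illustration.
def fire_weapon(ammo: int):
    """Pure: returns the new ammo count plus a description of effects."""
    if ammo <= 0:
        return ammo, ["click"]
    return ammo - 1, ["play_sound:bang", "spawn:bullet"]

def run(effects):
    """Only this interpreter actually performs the effects."""
    for e in effects:
        print(e)

ammo, effects = fire_weapon(3)   # pure: nothing has happened yet
run(effects)                     # effects happen only here
```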

To put it bluntly, a non-trivial computer game in pure FP would hog every resource in your system to barely hit single digit frame rates.

(Disclaimer: I love FP, and use it at my day job. It is not fit for games though)

John Carmack has some ideas on how to use immutable copies of the game state and functional programming concepts to create something that's easily parallelized. I saw it in an old QuakeCon keynote: https://youtu.be/Uooh0Y9fC_M
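A toy sketch of that idea (mine, not from the keynote): each entity's update reads only the frozen previous frame and produces its piece of the next frame, so per-entity updates can run in parallel without locks.

```python
from concurrent.futures import ThreadPoolExecutor

# Entities are (position, velocity) tuples; the world is an immutable tuple.
def update_entity(entity, old_world):
    x, v = entity
    return (x + v, v)   # depends only on the frozen previous frame

def step(old_world):
    # Every update reads old_world; nobody mutates it, so no locking.
    with ThreadPoolExecutor() as pool:
        return tuple(pool.map(lambda e: update_entity(e, old_world), old_world))

world = ((0, 1), (10, -2))
world = step(world)   # the old frame is never mutated
```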
jfkebwjsbx
Duplicating state has always been an approach for multithreading, but the problem is that it is not always fast enough and merging back can be hard.

There is no silver bullet. There are decades of academic research and engineering on this and we have yet to reach a good answer.

Perhaps possible in isolation or on a small team. I don't think this is practically achievable on teams that have grown past a certain size, though.

To paraphrase Carmack, "Any syntactically valid code, that the compiler will accept, will eventually make it into your code base." [1]

[1] https://www.youtube.com/watch?v=Uooh0Y9fC_M

AstralStorm
Not if you review your code reasonably well and use static analysis plus some testing, making it a process that keeps bad code out of your codebase.
occamrazor
At some point one will include external libraries, written with different styles and standards. And then one will modify those libraries, making them effectively part of the core code base.
This is the part... John discusses his functional programming adventures in Haskell at QuakeCon 2013.

"So what I set out to do was take the original Wolfenstein 3D, and re-implement it in Haskell."

[...]

"I've got a few conclusions coming from it. One of them is that, there's still the question about static vs dynamic. I know that there was a survey just coming out recently where the majority of programmers are still really not behind static typing. I know that there's the two orthogonal axes about whether types are strong or weak, and whether it's static or dynamic. I come down really pretty firmly, all my experience continues to push me towards this way, that strong, static typing has really significant benefits. Sometimes it's not comfortable, sometimes you have to build up a tight scaffolding to do something that should be really easy, but there are real, strong wins to it."

https://www.youtube.com/watch?v=Uooh0Y9fC_M#t=4876

(Starts at approximately 1:21:16 in case the direct link doesn't work correctly.)

AndrewOMartin
This is great when you're re-implementing Wolfenstein.

Not so much when you're doing exploratory computer science, or blue-sky prototyping. To build upon his analogy, scaffolding is most helpful when you have a reasonable knowledge of the shape of the building.

"It depends on the context" isn't exactly a shocking discovery however.

I love Carmack, love his presentation style, and when he talks from experience, I listen.

the_af
> Not so much when you're doing exploratory computer science

I'm not sure. The following experiment is outdated (I'd love to see it redone more rigorously and with modern languages) and has several methodology flaws, but "An Experiment In Software Prototyping Productivity" (1994, Paul Hudak et al) shows that Haskell and static types are actually great for rapid prototyping and exploratory programming, even in the face of vague or incomplete requirements. This runs contrary to common sense, which is why the experiment was fascinating.

hellofunk
Thanks very much for sharing this. Awesome.
meta_AU
Another great quote from that same talk is along the lines of

"Any syntactically valid code, that the compiler will accept, will eventually make it into your code base."

To sceptics: John Carmack believes that cloud gaming can be successful. Here's his view on this (2013): https://www.youtube.com/watch?v=Uooh0Y9fC_M&t=26m24s
For something like a decade (2004~2005 until 2013), Carmack gave a fluent hours-long speech/discussion/brain-dump at QuakeCon. I doubt a few minutes' speech is something he needs "plenty of time to prepare and rehearse" for at this point.

Recent examples:

2h50 2013 QuakeCon keynote https://www.youtube.com/watch?v=Uooh0Y9fC_M

2h20 2014 SMU talk https://www.youtube.com/watch?v=3_oTvUl88hs

1h30 2014 Oculus Connect keynote https://www.youtube.com/watch?v=nqzpAbK9qFk

bojo
His 2013 keynote where he touched on functional programming is what inspired me to attempt learning and writing a game server in Haskell.
Watch one of his keynotes. He sits down with some bullet points on a tablet and then talks for three hours:

https://youtu.be/Uooh0Y9fC_M

masklinn
Carmack's talks are mind-blowing, he just talks off the cuff fluently for two-three hours and it's consistently interesting, it's completely insane. Here's one he gave at SMU in 2014: https://www.youtube.com/watch?v=fOzkUKJCxTw
Boys and girls, I present my first game, Rocket Renegade.

"Don't you want a little taste of the glory, see what it tastes like?" [1]

I rocked all of the code and the music [2], so you can run and tell that.

The sprite-based graphics appear courtesy of Daniel Cook [3]. Thank you, Daniel, for the graphics that you lovingly created nearly 20 years ago on the Amiga 1200. I hope that my game makes you feel proud and nostalgic.

Thank you, John Dunbar [4]. John created Plasma Sky. He was kind enough to answer a number of development-related questions that I had. I'll add that you really need Plasma Sky in your line-up if you love you some shmups.

I wanted to talk about a feeling I've been experiencing. I think that you may be able to relate, but the most notable experience that I had during development was working to get comfortable with hardcore, radical, relentless persistence. What I mean here is, I worked incredibly hard to make this game a reality. I hit many roadblocks with Swift in the early betas, and it was tough. Near launch, I hit a bug where things ran flawlessly on hardware, but would have a yard-sale on some, but not all, simulators. Beyond a single hardware unit, I had to rely on simulators due to budget constraints. I've logged millions of points playing this game so that it could be the very best that I could deliver. Millions of points. I've broken down in tears, for a complex reason that's hard to explain, but I'll try, because I feel like it's important to talk about this; I can't be the only one:

It's this feeling that, overall, you just want to be a success. You know you want to finish, but at the same time, you want to rest, but the reality is you can't stop. Actually, you feel like you have the mental capacity and power to stop, you feel like you have complete control to sit back and relax a bit, but when you lift your foot off the accelerator, you find that the F1 vehicle keeps traveling at ~322 km/h (~200 mph) because you can't fight your very DNA; it turns out that you're wired that way. Or, at least the perception that you're wired that way is so strong that you might as well be, even if it's actually all mental. It's as if you know you need to rest, and you want to rest; you want to rein in elements of your life that you've let spiral out of control because of your game, but, simultaneously, while you are cognizant of this fact, there is a higher-order, autonomic function that is axiomatically in control, and overrides any of your attempts to stop working, to stop perfecting, to stop bringing it with everything that you have within you.

You retire to bed at a reasonable hour, but you are "eyes wide open" at 2:00am or 3:00am, literally waking up from a dream that's an answer to a problem in the code. The urgency kicks in... you try to fall back asleep, but it's futile, so you throw yourself in the shower, get dressed and light the fuse.

Unfortunately, too many nights like this cause your immune system to be compromised.

Being trapped between those two worlds (i.e., being driven to deliver and trying to rest) is absolutely heart-breaking. This feeling is exacerbated near launch, because you're literally a few hundred meters from the summit, but you're exhausted, delirious, hungry, thirsty, sleep-deprived, and everything else. The wind and the cold is cutting through your gear, making it feel as if you are wearing nothing but your small-clothes. Your visor is completely frozen over, and nightfall is looming, but you still have to summon everything from within you to bring it, because there is no one who can bring it but you. No one is going to summit for you. No one is going to slide their stacks "all in" but you.

Even the people around you won't understand the mental suffering that you are silently muscling through; you, torn between two worlds as the aforementioned autonomic function pushes and strains you to your personal limits. You measure the day's progress in centimeters rather than meters. You will either summit, or freeze to death on the mountain in your boots; the summit in sight, but just out of reach. Whichever event happens, you feel alone, either way, because no one is carrying the sheer weight of The Vision but you. It is all you. It has only ever been you. There is no one to save you.

I'm starting to tear up just trying to put this feeling into words. Has anyone experienced this?

So, this really is the story (perhaps not unlike yours) of grinding at the mine during the day, returning home, spending time with and cooking for the family... then, quietly donning the white coat and slipping into The Lab and clocking back in after dark to bring it for the next several, precious hours, working to make the dream real... then, catching some sleep, waking up, turning around and dropping the hammer all over again. It would be absolutely romantic for me to be able to headline this as a, "move-fast-and-break-things-built-in-N-hours-MVP", but I've no such lock and load glam-story to recount for you here.

I share all of this because I know that you are the type of people that can appreciate what goes into creating, developing and shipping a commercially-viable game; not just the technical aspects, but the personal aspects: the sweat, the tears, the schedule juggling, the grinding, etc.

Through it all, I've learned even more to embrace the grind.

In closing, I have two items that I wanted to ask you:

1. Have you bootstrapped game development to the point that you were able to launch out on your own? I'd enjoy hearing your thoughts and your story. I'm so far away from those shores that I can't see land at this point, but I hold hope, no matter how implausible it may seem. From my time here on HN, I know that some of you are stacking from a taste of that sweet SaaS, but have any of you rolled a knot in games?

2. Have you "gone functional" in your game development? I would like to travel this path eventually. As you likely know, Swift does offer functional elements. Plus, I've listened to John Carmack talk about his experiments with Haskell [5], and it sounds interesting. However, I'm simply not there yet. My simple first step was to be as immutable as possible where I could, and to become more aware of immutability as I developed, overall. A small start, but admittedly, a far cry from, "Warp speed, Mr. Sulu!" functional.

Finally, I would certainly field your questions, if you are gracious enough to have any. I enjoy answering questions, so AMA and you'll likely find yourself diving delightfully deep into a quite heavily-frosted TIL.

Thank you for reading this far, and for checking out the game. Here's wishing you a wonderful and productive 2015!

[1] https://www.youtube.com/watch?v=tkRvLFdrbTU#t=63

[2] Slip on your headphones, FTW!

[3] http://www.lostgarden.com

[4] http://plasma-sky.com

[5] https://www.youtube.com/watch?v=Uooh0Y9fC_M#t=4876

In case you missed it, John discussed his functional programming adventures in Haskell at QuakeCon 2013... well worth a listen.

"So what I set out to do was take the original Wolfenstein 3D, and re-implement it in Haskell."

[...]

"I've got a few conclusions coming from it. One of them is that, there's still the question about static vs dynamic. I know that there was a survey just coming out recently where the majority of programmers are still really not behind static typing. I know that there's the two orthogonal axes about whether types are strong or weak, and whether it's static or dynamic. I come down really pretty firmly, all my experience continues to push me towards this way, that strong, static typing has really significant benefits. Sometimes it's not comfortable, sometimes you have to build up a tight scaffolding to do something that should be really easy, but there are real, strong wins to it."

https://www.youtube.com/watch?v=Uooh0Y9fC_M#t=4876

(Starts at approximately 1:21:16 in case the direct link doesn't work correctly.)

virtualwhys
and for the whole picture vis-a-vis game dev:

"I do believe that there is real value in pursuing functional programming, but it would be irresponsible to exhort everyone to abandon their C++ compilers and start coding in Lisp, Haskell, or, to be blunt, any other fringe language.

To the eternal chagrin of language designers, there are plenty of externalities that can overwhelm the benefits of a language, and game development has more than most fields.

We have cross-platform issues, proprietary tool chains, certification gates, licensed technologies, and stringent performance requirements on top of the issues with legacy codebases and workforce availability that everyone faces." [0]

Which may be why we haven't heard a peep from Carmack on the subject since. I could see pure FP languages entering performance-critical domains like game development in the not-too-distant future, but for now the state of the art isn't quite there.

[0] http://gamasutra.com/view/news/169296/Indepth_Functional_pro...

thesteamboat
Another possible reason we haven't heard a peep from Carmack on the subject since is that he joined OculusVR and has a bunch of interesting hardware problems to work on.
m_mueller
I've been doing HPC work for a few years now and I'm familiar with FP concepts; I use them (albeit never purely so far) in non-performance-critical code where maintainability and testability are more important. My question: how could it even work in performance-critical domains? Most of the time memory bandwidth is the limiting factor, thus copying memory and cleaning up unused memory are very expensive. With pure FP, all I can do is hope the compiler won't create all the copies I'm instructing it to make and will optimize the code back to procedural in-place updates. When that doesn't work, all I can do is reimplement it procedurally. How does anyone think that this programming concept can be applied in high-performance domains? What am I missing?
lmm
You know how GCC compiles mutable C code? It converts it into immutable SSA form, optimizes it in this form, and it's only at the register allocation stage that things become mutable again - if two different reassignments of your variable end up compiling to the same register, at some level it's only coincidence that they did so. So unless you're doing handwritten assembly, you're already relying on the compiler's ability to optimize away redundant copies.

Why does GCC do it like this? For the same reason FP systems do: because it's easier to reason about immutable values, and easier to optimize things you can reason about. In general the closer we can get to telling the compiler our intent, the better the performance we should ultimately be able to unlock. If we tell the compiler to create a loop variable and increment it, it has to do that (or at least, has to simulate the effects of doing that). If we just tell the compiler to apply this function to every element of this list, it has the option of being smarter about it.
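To make the analogy concrete, here is a small Python illustration (not GCC's actual IR, just the idea) of the same computation written with reassignment and in a single-assignment, functional style:

```python
from functools import reduce

def total_mutable(xs):
    acc = 0
    for x in xs:
        acc = acc + x      # 'acc' is reassigned on every iteration
    return acc

def total_ssa(xs):
    # Each intermediate value is a fresh binding, never reassigned --
    # roughly what SSA conversion does to the loop above.
    return reduce(lambda acc, x: acc + x, xs, 0)
```

The two functions compute the same thing; the second form makes the data flow explicit, which is exactly what makes it easier for a compiler to reason about.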

There's an argument that this will be much more important as the world becomes ever more multicore. In a multicore environment immutable values become much easier to work with because they can be safely copied around; with mutable variables your program can end up spending more time on synchronization overhead than actually doing things. I'm not sure how much I believe this, but it could explain the recent rise in popularity of functional languages.

the_why_of_y
Indeed, here's an interesting paper arguing this point:

SSA is Functional Programming. Andrew W. Appel. http://www.cs.princeton.edu/~appel/papers/ssafun.ps

What is omitted there, however, is a discussion of pointer parameters and call-by-reference; probably these cannot be treated in the same way as local variables, since they point to a fixed address passed in by the caller, so at that point SSA can't be functional, unless I'm missing something.

ansible
The world is already pretty multi-core.

I was recently spec'ing out a high-end system for simulation work. You can now get Intel Xeons with up to 14 cores, and soon 18.

Talking about 36 high-end cores (dual-socket system) used to be the realm of Sun E10Ks and other very expensive systems. This is amazing to me.

Athas
A possible solution would be to devise a language with a type system that tracks allocation, such that a garbage collector is not needed and all allocation is deterministic, yet safe. Some work has been done on this already, in the form of region inference systems, although to my knowledge they have only been used to compile functional languages to run without a garbage collector, and have not been exposed in the user-visible type system. The MLkit compiler for Standard ML is the most prominent region user I am aware of.
codygman
I believe that ATS[0] has/can do this, though perhaps I'm confusing it with another feature. Anywho, there's also a very comprehensive ATS book[1].

0: http://www.ats-lang.org/ 1: http://ats-lang.sourceforge.net/DOCUMENT/INT2PROGINATS/HTML/...

twic
Obligatory Rust comment: Rust's type system relates allocation to variable lifetime in such a way that garbage collection is not needed, and almost all deallocation is deterministic, and yet safe.

Rust is still in flux, and its documentation is a bit all over the place, but here are some references:

http://doc.rust-lang.org/guide-lifetimes.html

http://pcwalton.github.io/blog/2013/03/18/an-overview-of-mem... (warning: contains old syntax and defunct features!)

orbifold
Well, if you are in HPC you are probably familiar with http://en.wikipedia.org/wiki/SISAL. What primarily necessitates garbage collection in modern functional programming languages is the insistence on functions as first-class values (essentially, they want to live in a cartesian closed category). If you look at something like Intel Thread Building Blocks or even CUDA, you will recognise that at its heart it really is a functional programming model. Sure, in the case of CUDA you can write the same variable multiple times, but in reality, to preserve parallelism, you would not implement, for example, matrix multiplication by overwriting one of the matrices in the process. Moreover, a more functional/mathematical view of most HPC code actually yields additional insight into how to parallelise a problem.
robmccoll
TBB and CUDA seem like odd choices to me for an example case. They are built much more heavily around a vectorized/SIMD style for regular, general-purpose operations. Calling them functional on the basis of the in-place-vs-not distinction is a stretch; they are very much bulk-parallel and procedural.
m_mueller
One of the pitfalls when analyzing HPC requirements is to start with a model that's too simplistic - and matrix multiplication is typical for that. What you usually want to run is a solver or simulation. These have timesteps and numerical approximation algorithms (e.g. Runge-Kutta) where you want to make sure that intermediate values live only exactly as long as they need. The reason being that when you distribute your main memory to your threads, especially for GPGPU, you only have a few hundred kilobytes per thread if you want to achieve saturation. So what do you do? In C you typically see the inner timestep functions being called with output and input pointers, then these are swapped for the next step - no allocation, no copying, no overhead, very simple code and nothing that any compiler could screw up. That's just one example of a trick that makes an HPC programmer's life easy, not just because it performs optimally 100% of the times it's used, but because it doesn't complicate performance analysis. In order to be able to analyse code performance properly, one must be able to understand the device code that comes out of the compiler, and how it will interact with the pipeline, the cache, etc. If there's too much of a mismatch, it becomes near impossible to understand what's going on. In theory compilers could always achieve the optimum for you and a programmer wouldn't have to care about hardware at all, and just live in his logical bubble. Experience shows that this ideal is pretty far off in the future.
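The pointer-swap trick described above can be sketched in a few lines of Python (a toy 1-D stencil, purely illustrative): two preallocated buffers trade roles each timestep, so no allocation or copying happens per step.

```python
def step(dst, src):
    """Write the next timestep into dst, reading only from src."""
    for i in range(1, len(src) - 1):        # toy 1-D diffusion stencil
        dst[i] = 0.5 * src[i] + 0.25 * (src[i - 1] + src[i + 1])

src = [0.0, 4.0, 0.0, 0.0]   # initial state
dst = [0.0] * 4              # scratch buffer, allocated once
for _ in range(2):
    step(dst, src)
    src, dst = dst, src      # swap roles: the new frame is the next input
# after the loop, the latest state lives in src
```

In C this is literally swapping two pointers; here swapping the two list references plays the same role.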
vog
At the beginning he says that he is kind of sold on most of Haskell, but doesn't know if lazy evaluation is so great.

I wonder why he didn't try OCaml then. He doesn't even mention this possibility, he just goes on to compare Haskell to Lisp.

Regarding issues with lazy evaluation, there may be some truth to it. This reminds me of a great article by Robert Harper:

https://existentialtype.wordpress.com/2012/08/26/yet-another...

fulafel
Notice that he stresses there are benefits, but nowhere does he say that the costs necessarily outweigh the benefits.
nickbauman
I used to think that way too. I've met programmers who still don't write unit tests and are not test-driven and remain brilliant. Carmack is one of these guys. I think the static/strong typing thing works well with them because they never developed the practices. If you're test-driven, you're basically building up your own, domain-specific compiler as you go, not having to play by a language-specific static type system's rules. Dynamic typing makes writing code this way really easy. Static typing doesn't. In the end I want the types of my system to work the way my system needs to. I could care less about the type system of merely a language (as applied to my system). And types are just one way to enforce correctness.

Lately I find with languages in the Lisp family, these other types of tools feel much more productive, easier to reason about and flexible to me than type systems. When I believed in static typing I felt very busy, but I was mostly spending my time making the compiler happy, rather than the other way around.

lmm
I've had just the opposite experience. I used to write a lot of tests, but now I can usually encode my constraints in the type system, where I get a lot for free thanks to the structure - often I just need to compose a couple of existing things specialized to the right types. All the useful ways I've found to enforce correctness boil down to types in one guise or another.

When I was writing tests I felt busier because I spent more time typing; now I spend more time thinking and less time writing code, but I'm ultimately more productive.
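A tiny Python sketch of that approach (all names are invented): distinct types for distinct identifiers let a static checker such as mypy reject a mixed-up call outright, a bug class that would otherwise need a unit test.

```python
from typing import NewType

# Both are ints at runtime, but distinct to a static type checker.
UserId = NewType("UserId", int)
OrderId = NewType("OrderId", int)

def cancel_order(order: OrderId) -> str:
    return f"cancelled order {int(order)}"

oid = OrderId(42)
uid = UserId(7)
cancel_order(oid)        # fine
# cancel_order(uid)      # mypy error: UserId is not OrderId
```

The constraint "don't pass a user id where an order id belongs" now lives in the types rather than in a test.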

codygman
> If you're test-driven, you're basically building up your own, domain-specific compiler as you go, not having to play by a language-specific static type system's rules.

> I could care less about the type system of merely a language (as applied to my system).

These two sentences are contradictory. You are merely making up your own language-specific static type system rules instead of learning how to use the ones already provided by existing static type systems.

I will agree that languages with less flexible type systems tend to get in the way, but languages like Haskell, OCaml, and other MLs do a lot of the work for you.

> If you're test-driven, you're basically building up your own, domain-specific compiler as you go, not having to play by a language-specific static type system's rules. Dynamic typing makes writing code this way really easy. Static typing doesn't.

Dynamic typing makes it easier to start writing code, not necessarily to get to your end result faster. Dynamic typing allows you to build your own "type system of merely a language (as applied to your system)", and you can apply your own lines of thought to it.

Static typing will require that you think about the types/type system of the language you are using and will inevitably slow you down at first. However it's not much different than the trade-off of using an existing library for a programming task or writing your own.

The marked difference is that I don't trust myself or many others to recreate a comprehensive type system rivalling that of mature compilers and people most likely much smarter than us.

The end result is that you have an ad-hoc type or effect system that isn't well defined, and the quality/correctness is assured through brute force by way of writing all the unit tests you can think of.

I assure you that you can't brute force test more edge conditions than your computer.

This post is getting long, but I feel like it really hits on why many "real world programmers" use Haskell and stronger static type systems: We don't trust ourselves, have been slapped in the face by our limitations and mistakes, so we would like to offload a ton of complexity to the compiler.
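One way to actually let the computer do the brute forcing is property-based testing. Here is a crude hand-rolled Python sketch (libraries like QuickCheck or Hypothesis do this far more cleverly): generate many random inputs and check an invariant, instead of hand-picking edge cases.

```python
import random

def clamp(x, lo, hi):
    """Restrict x to the inclusive range [lo, hi]."""
    return max(lo, min(x, hi))

random.seed(0)
for _ in range(10_000):
    x, lo, hi = (random.randint(-100, 100) for _ in range(3))
    if lo <= hi:
        # Property: the result always stays within the range.
        assert lo <= clamp(x, lo, hi) <= hi
```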

illumen
Nicely put.

I'd also argue that the divide between dynamic and static typing systems has fallen down. Almost completely.

Java/C# and even Haskell have dynamic typing. They even sort of have a scripting feel.

However, you can also statically type dynamic languages, and dynamic languages are getting static typing too. For example, JavaScript with asm.js. Lots of type inference is done by IDEs and lint systems now. Even with Python, static type checking is very possible (both at the C level and at the Python level).

moron4hire
I don't know about the other languages, but while C# may have dynamic typing, it's extremely uncommon to see it in use. Its introduction was to make certain types of interop easier. It was not intended to be a general-purpose programming feature.
Dewie
> When I believed in static typing I felt very busy, but I was mostly spending my time making the compiler happy, rather than the other way around.

And now you're busy writing tests instead?

nickbauman
Yes but I feel like I get to where I need to be faster than I did with static typing.
dllthomas
What statically typed languages have you used?
sbergot
Except that to perform the level of checks that a compiler does with a good type system, you would have to write a large amount of tests. Code is a liability, and testing code is no exception.

Writing tests in static languages is still required. You just have to write a smaller amount of them. What's more, type checking is enforced in a more systematic way. With tests, you have to rely on people to write them. If a type system is well designed, people will use it to model their code, and the type checking is done automatically.

What's more, types should not be reduced to the type checking phase. Types are a way to provide a high-level model/structure. You still need this if you are working on a medium/large project. With static types, this model has nicer semantics, and is more useful.

nickbauman
Ok, try this: Go model a square and a rectangle in your favorite statically typed language. You will find that square extends rectangle and setting the length on a square also sets the width. So far so good? So say I have a collection of rectangles which contains an unknown number of squares, and I set the width on each of them. You can't answer the question as to whether the result is correct or not. And I have just destroyed your type system's purpose by using polymoronism, expectoration and dispossession, the three-legged stool of static typing OO, with a very simple and well-defined domain subset of geometry.

Of course it's contrived, but since I've been programming (which is longer than I care to say here, because it really dates me) this is basically where I'm at with static type systems. As long as I've spent with them, I find they're not as useful as they should be. I haven't tried Haskell (I probably should) but I'm getting too much done with dynamic languages, especially Lisps, to look back right now. Too much power to look away. Tschau!

dragonwriter
> Go model a square and a rectangle in your favorite statically typed language. You will find that square extends rectangle and setting the length on a square also sets the width.

The coordinates of the vertices are part of the identity of a square and in any reasonable model of a square cannot be changed.

An object with mutable size that is constrained to always be a square probably cannot be a member of a class which extends a class representing objects with mutable length and width that are constrained to remain a rectangle (but not necessarily a square), since the properties of the former under mutation are not a simple extension of the properties of the latter, so the two do not really have an is-a relationship (unless the transformations applicable to the two are designed in an unusual way, as might be the case where the transformations on the rectangle were constrained to preserve the aspect ratio in all cases).

People often miss on initial analysis how mutability really destroys is-a relationships which are based on relations between immutable entities.

nickbauman
Mutability or not doesn't change the fundamental assertion that the result is not intuitive even to the author of the class hierarchy. My whole point is not whether the modeling of the problem in a typical type system is correct or not. The point is that a typical type system allows for a wide range of "incorrectness" that still compiles and runs and does exactly what it was designed to do but still leaves the programmer with an inscrutable result.

Therefore, it can be seen that there might be other, better ways to assert your system is correct than using types.

dllthomas
"with a very simple and well-defined domain subset of geometry."

I don't think you're actually working with a well-defined subset of geometry. What is breaking here is the notion of persistent identity over time - "take this square and change its width - leaving us with not a new square/rectangle but the same square changed" - which I don't recall encountering in geometry.

lmm
> You will find that square extends rectangle and setting the length on a square also sets the width.

"setting the length"?? Do you mean "making a copy with the length set to a different value"?

If you have a square, which is-a rectangle, you can of course use the rectangle's copy function, or a rectangle lens or some such, to make a copy with a different length. Which will be a rectangle. Utterly trivial.

the_why_of_y
Well if you have a Square type that's a subtype of Rectangle then you can't have methods on Rectangle that, when called on a Square instance, would invalidate the invariant of a Square. If your system is sufficiently dynamic (prototype-based?) you can exchange the class of an instance at runtime. If not, simply make your Rectangles immutable, so a setWidth method returns a new instance, and you can then return a new Rectangle or a new Square from a Rectangle's setWidth. That should work in any statically typed language with inclusion polymorphism.
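That immutable route can be sketched in Haskell, for instance (the `Shape` sum type mirrors the one used later in the thread, and this `setWidth` is my own illustration, not from the discussion):

```haskell
-- Illustrative sketch: an immutable "setWidth" that returns a new
-- value rather than mutating; a Square whose width changes simply
-- comes back as a Rectangle, so no invariant can be violated.
data Shape = Square Int | Rectangle Int Int deriving (Show, Eq)

setWidth :: Int -> Shape -> Shape
setWidth w (Square s)
  | w == s    = Square s        -- still a square
  | otherwise = Rectangle w s   -- no longer a square
setWidth w (Rectangle _ h)
  | w == h    = Square w        -- dimensions now equal: a square
  | otherwise = Rectangle w h
```

For example, `setWidth 3 (Square 4)` yields `Rectangle 3 4`, while `setWidth 4 (Rectangle 3 4)` collapses back to `Square 4`.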
nickbauman
Yes I think you're beginning to understand the point.
pjmlp
In an enterprise project some would be changing the width instead.
codygman
> Go model a square and a rectangle in your favorite statically typed language.

Okay.

    data Shape = Square Int | Rectangle Int Int deriving (Show)
> You will find that square extends rectangle and setting the length on a square also sets the width.

Why? I don't really see a need for square to extend a rectangle.

> So if I have a collection of rectangles which contains an unknown number of squares of which I set the width on each of them.

Like this?

    > let unknowns = [(5,4),(3,2),(4,4)]
    > let toShape (x,y) = if x == y then Square x else Rectangle x y
    > map toShape unknowns
    [Rectangle 5 4,Rectangle 3 2,Square 4]
Tomorrow I'll see if I can make this type-safe by using dependent typing as shown in: [0][1][2]

This was easily solvable in my statically typed language, as you can see. In something like Idris I could have very easily encoded the toShape function in the type signature.

0: http://www.alfredodinapoli.com/posts/2014-10-13-fun-with-dep...
1: https://www.fpcomplete.com/user/konn/prove-your-haskell-for-...
2: http://jozefg.bitbucket.org/posts/2014-08-25-dep-types-part-...
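Consuming that sum type is just pattern matching (repeating the `Shape` definition so the sketch is self-contained; the `area` function is my own addition, not from the thread):

```haskell
data Shape = Square Int | Rectangle Int Int deriving (Show)

-- A total function over the sum type; GHC's -Wincomplete-patterns
-- would warn at compile time if a constructor were left unhandled.
area :: Shape -> Int
area (Square s)      = s * s
area (Rectangle w h) = w * h
```

Applied to the shapes above, `map area [Rectangle 5 4, Rectangle 3 2, Square 4]` gives `[20, 6, 16]`.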

nickbauman
> I don't really see a need for square to extend a rectangle.

Except that a square is-a rectangle! Especially in terms of geometry. So you should be able to use this isomorphism in a type system as a perfect isomorphic mapping; this is the entire purpose of the Liskov Substitution Principle. But the polymorphic result doesn't work in a way that helps people reason about how the system works. It does things that are "correct" from an LSP and static-typing POV, but it isn't helpful.

the_why_of_y
Mathematically, a rectangle and a square are entities that do not have mutable state; any particular square in mathematics has a fixed width and height.

If you define something that has a setWidth method, then it is not isomorphic to anything in mathematics! "is-a" is defined in terms of the operations that are valid.

What semantics would you specify for the setWidth of a rectangle?

The only reasonable contract for a general Rectangle is that setWidth sets the width but does not affect the height. Your proposed implementation of setWidth on a Square would violate that contract, and any other implementation would violate the invariant of the Square, and so it's clear that mutable Square "is-a" mutable Rectangle is simply not true.

Basically the problem you are looking at is a variant of the reference typing problem, where, given S <: T, you have Ref S <: Ref T for reading and Ref T <: Ref S for writing, and in the general case of a read-write reference there is no subtype relationship between Ref S and Ref T.
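That variance split can be sketched even in a language without subtyping, by modeling read-only and write-only access as plain functions (all names here are mine, for illustration only):

```haskell
-- Sketch: read-only access is covariant, write-only access is
-- contravariant, and a read-write reference would need both at once.
data Square    = Square Int            deriving (Show, Eq)
data Rectangle = Rectangle Int Int     deriving (Show, Eq)

-- The "is-a" direction that holds for reading:
-- every Square can be viewed as a Rectangle.
view :: Square -> Rectangle
view (Square s) = Rectangle s s

type Source a = IO a        -- something you can only read an 'a' from
type Sink   a = a -> IO ()  -- something you can only write an 'a' to

-- Covariance: a Source of Squares stands in for a Source of Rectangles.
upcastSource :: Source Square -> Source Rectangle
upcastSource = fmap view

-- Contravariance: a Sink of Rectangles stands in for a Sink of Squares.
upcastSink :: Sink Rectangle -> Sink Square
upcastSink sinkR = sinkR . view

-- A read-write Ref would need conversions in both directions at once,
-- which is exactly why neither Ref Square <: Ref Rectangle nor the
-- reverse can hold.
```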

codygman
I'll admit I'm a little rusty on my geometry, but I'm fairly sure I was never told that a square is a rectangle. If you mean insofar as operations on rectangles should work on squares, this could be accomplished with typeclasses.
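A sketch of that typeclass idea (the class and instance names are mine, not from the thread): rectangle-generic operations are written against the class, so they work on squares without pretending Square extends Rectangle.

```haskell
-- Operations on "rectangular things" via a typeclass: shared behavior
-- without an inheritance relationship between the two types.
data Square    = Square Int
data Rectangle = Rectangle Int Int

class Rectangular a where
  width  :: a -> Int
  height :: a -> Int

instance Rectangular Square where
  width  (Square s) = s
  height (Square s) = s

instance Rectangular Rectangle where
  width  (Rectangle w _) = w
  height (Rectangle _ h) = h

-- Any function written against the class works for both shapes:
area :: Rectangular a => a -> Int
area r = width r * height r
```

So `area (Square 3)` and `area (Rectangle 2 5)` both typecheck, with no subtyping involved.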
nickbauman
Ah ha! Now you see the problem. Whether or not a square is a rectangle in a Euclidean sense, it is a normal thing to use a type system in this way. And yet a discussion of it has emerged, to wit this exchange. Gives one pause about the value of type systems for correctness, eh?
codygman
> Whether or not a square is a rectangle in a Euclidean sense it is a normal thing to use a type system in this way.

Why do I care whether or not it's a normal way (for some people/languages) to use a type system? I care about reality and having my types reflect it precisely.

> Gives one pause about the value of type systems for correctness eh?

I actually don't see how. It seems like you are saying "regardless of whether a square is a rectangle in reality, people (mis)use type systems to say this".

Someone's misuse of a tool doesn't call the tool's value into question.

Basically, I may not be able to model your exact relationships in the exact same way but I can model real world concrete things in the type-system which obtains the same or more flexibility and maintainability.

nickbauman
Exactly my point. This continuing misuse is VERY common even among experienced developers. The ongoing argument itself is a counterargument to type systems being useful for correctness. At least it should give you some pause?
dragonwriter
> Whether or not a square is a rectangle in a Euclidean sense

It is. A square is a rectangle whose length and width are equal.

> it is a normal thing to use a type system in this way.

It is a common mistake to misuse an OO type system this way -- to take an intuition about is-a relations of immutable objects and infer an is-a relationship about mutable cousins of those objects, which is almost always wrong without careful constraints on the mutations. It is this kind of mistake which is addressed by the Liskov Substitution Principle. Of course, that's the reason the LSP is usually taught fairly early on in any OO programming course, because the mistake is well-known.

> Gives one pause about the value of type systems for correctness eh?

No, type systems are great for enforcing correctness, but they are no better than the analysis behind the definition of the types.

Of course, violating the LSP with a bad subtyping relationship is just as problematic without a static type system.

nickbauman
> It is a common mistake to misuse an OO type system this way.

So common, I would argue, that it suggests there is a fundamental problem, or at least a question of utility, with using type systems for correctness. You're right that this has nothing to do with a _static_ type system specifically, but the two are so often used together that when you say "static" you are almost always heard as "statically typed", so the point is moot in my experience.

dragonwriter
> Except that a square is-a rectangle! Especially in terms of geometry.

The thing in geometry that is called a "square" which is a special case of the thing in geometry which is called a "rectangle" is an immutable object whose side lengths and other features are part of its defining identity.

Modeling this relationship works perfectly fine in an OO language -- but squares and rectangles of this type are immutable objects.

A mutable square with operations which mutate the side lengths while preserving squareness is not a special case of a mutable rectangle with operations which mutate side lengths while preserving rectangleness without also preserving aspect ratio.

The analytical problem here is mistakenly assuming an is-a relationship that applies to immutable entities in geometry applies to mutable objects whose mutation operations are defined in such a way that the is-a relationship does not logically hold.

> So you should be able to use this isomorphism in a type system as a perfect isomorphic mapping this is the entire purpose of the Liskov Substitution Principle.

Actually, the entire purpose of the Liskov Substitution Principle is to provide an analytical basis for excluding mistakes like subclassing a mutable "Rectangle" class for a mutable "Square" class with operations that aren't consistent with an is-a relationship simply because of an intuition about an is-a relationship between squares and rectangles in geometry which doesn't actually hold when you analyze the operations on the particular entities you are modelling (which are outside of the geometric definitions of squares and rectangle.) The LSP defines what can and cannot be a subclass, and it rules out the kind of mutable square to mutable rectangle relationship you have proposed in this thread.

nickbauman
If the LSP defines this, why do people keep modeling things this way? More importantly, why do all the type systems let me create models this way? They have one job: to help me with correctness. Looks like that failed. So you see my point.
pfultz2
That's cool. Here's the somewhat equivalent in C++:

    struct square
    {
        int x;
    };

    struct rectangle
    {
        int x;
        int y;
    };

    boost::variant<square, rectangle> to_shape(int x, int y)
    {
        // aggregate initialization: the structs have no constructors
        if (x == y) return square{x};
        else return rectangle{x, y};
    }

    std::vector<std::pair<int, int>> unknowns = {
        {5, 4},
        {3, 2},
        {4, 4}
    };

    auto shapes = unknowns | boost::adaptors::transformed(boost::fusion::make_fused(to_shape));
Of course, the structs are not polymorphic and can't be printed out. I just left that out because it requires a little more code. Obviously with Haskell it's much less code.

I would very much be interested in the dependently typed version.

codygman
Cool, it's always good to see other languages approaches (especially when idiomatic) to try out their way of thinking.

> I would very much be interested in the dependently typed version.

I'll try to get to it later today, I'm pretty busy until then.

he_the_great
And the same thing in D:

    struct Square
    {
        int sideLength;
    }

    struct Rectangle {
        int length;
        int width;
    }

    import std.variant;
    alias Shape = Algebraic!(Square, Rectangle);

    import std.typecons;
    auto to_shape(Tuple!(int, int) shape) pure nothrow
    {
        if (shape[0] == shape[1]) return Shape(Square(shape[0]));
        else return Shape(Rectangle(shape.expand));
    }

    void main() {
        auto unknowns = [
            tuple(5, 4),
            tuple(3, 2),
            tuple(4, 4)
        ];

        import std.stdio;
        import std.algorithm;
        writeln(unknowns.map!(to_shape));
    }
[Rectangle(5, 4), Rectangle(3, 2), Square(4)]
codygman
I've been debating whether to choose C++ or D as my "when performance really matters" language. I'm still going to learn Rust, but I'm very interested in D. I also like that it has a pure keyword ;)
If you liked his talk, definitely check out his QuakeCon keynotes and his talk about physically based lighting.

https://www.youtube.com/watch?v=wt-iVFxgFWk https://www.youtube.com/watch?v=Uooh0Y9fC_M https://www.youtube.com/watch?v=IyUgHPs86XM

Sammi
I've watched the one on lighting twice, just cause it blew my mind so much. I finally (think I) understand light.
DonHopkins
When I first read that, I thought "lighting" was some kind of mind altering drug. It is, if you do it right! ;)
iamshs
I finished watching first one, and my admiration for him grows even more. Immense knowledge with a gift of presenting mere facts. What a likable personality. Thanks for the links.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.