HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Joe Rogan Experience #1342 - John Carmack

PowerfulJRE · YouTube · 17 HN points · 16 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention PowerfulJRE's video "Joe Rogan Experience #1342 - John Carmack".
YouTube Summary
John Carmack is a computer programmer, video game developer and engineer. He co-founded id Software and was the lead programmer of its video games Commander Keen, Wolfenstein 3D, Doom, Quake, Rage and their sequels. Currently he is the CTO at Oculus.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
They have a very good thing going. Perhaps there is no great reason to bite off so much at one time. They can take their time and do that later if it makes enough sense. I would expect it would require a very substantial effort to rebuild their platform in a different language.

If you're at 75/100 of where you want to be on performance, it can be easy to lose immense amounts of time chasing a 95/100 ideal performance outcome when you can maybe far more easily get to 90/100 by making, e.g., straightforward caching improvements to what you already have, and not have to rewrite all of your code.
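To make that concrete, here's a minimal, purely illustrative Python sketch of that kind of straightforward caching (the slow_query function and the numbers are made up):

    import time
    from functools import lru_cache

    def slow_query(key):
        # stands in for a slow database or API call
        time.sleep(0.1)
        return key.upper()

    @lru_cache(maxsize=4096)
    def cached_query(key):
        return slow_query(key)

    cached_query("user:42")   # ~100 ms, hits the slow path
    cached_query("user:42")   # microseconds, served from the in-memory cache

A handful of lines like this on the hottest paths is often what gets you from 75/100 to 90/100 without touching the rest of the codebase.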

Good enough is almost always underrated in tech. People destroy opportunity, time, money, and entire businesses chasing what supposedly lies beyond good enough.

John Carmack has a good example of this in his Joe Rogan interview [1]: id Software burned six years on Rage, making incorrect (in hindsight) choices that involved trying to do too much. He regrets his old standard line and approach that "it'll be done when it's done." He wishes they had made compromises instead and shipped Rage several years earlier. That's a pretty classic storyline in all of tech: taking on far too much when 85% good enough most likely would have worked just as well.

[1] https://youtu.be/udlMSe5-zP8?t=8630

switch11
very good point

very good example

John Carmack is practically a machine.

He's openly talked about his work ethic in a bunch of places. He's the type of guy who, after a lifetime of coding, calculated that he's fully efficient up to 13-hour work days and then drops off[0]. Although he did mention that those long hours are often best spent working on multiple things instead of a single topic, but maybe with AGI there are a bunch of different avenues to explore.

[0]: https://www.youtube.com/watch?v=udlMSe5-zP8&t=4773

modeless
Sure, but that doesn't mean he wouldn't get any value from working in proximity to top minds in the field, which Facebook has in spades. Especially at the beginning when he has a lot to learn. And although you can easily rent unlimited amounts of compute power in the cloud and access it from home, data is harder to come by. Facebook has that too.

It seems weird to distance himself from Facebook for this work, when they have one of the best AI labs in the world. Maybe all the apocalyptically-bad press about Facebook is getting to him, and I wouldn't blame him for that. Or maybe this is just his way of retiring slowly.

codesushi42
Uuh, AI research has nothing to do with coding all-nighters. This is a common misconception among software engineers. It is more a science than an engineering problem; it is more about running experiments than it is about writing fancy algorithms.

You are bound by the amount of data and computational resources you have at your disposal. Neither are tied to man hours. You can stay up all night for days waiting for your model to train, and it will do you no good.

Rapzid
He must have some very good advice on getting a good night's sleep. Makes all the difference IMHO.
archagon
He did tweet that unlike many engineers, he can't be productive unless he gets a full 8 hours of sleep (IIRC).
I'm an idiot. I wrote John Cusack when I wanted to write John Carmack. I don't know how I could even confuse the two names.

Here's the Carmack podcast where he talks about it. Fantastic listen for anyone honestly. Carmack is a fascinating guy.

https://www.youtube.com/watch?v=udlMSe5-zP8

He did Joe Rogan's show a few days ago: https://www.youtube.com/watch?v=udlMSe5-zP8

I've heard a lot of good about this book: https://www.amazon.com/dp/B000FBFNL0

Not reading, but he just did a joe rogan podcast. https://www.youtube.com/watch?v=udlMSe5-zP8
He talks about AGI at https://youtu.be/udlMSe5-zP8?t=2776

I wonder if all the really smart people who think AGI is around the corner know something I don't. Well, clearly they know a lot of things that I don't, but I wonder if there's some decisive piece of information I'm missing. I'm a "strict materialist" too, but that doesn't mean I think we can build a brain or a sun or a planet or etc within X years, it just means that I think it's technically possible to build those things.

I don't see how we get from "neural net that's really good at identifying objects" to "general intelligence". The emphasis on computational power also makes no sense to me. If we had infinite compute today, what steps would you take to build AGI? Does anyone have any good ideas about that?

Sometimes I wonder if AGI (and the concept of a "technological singularity") isn't just "intelligent design for people with north of 140 IQ". Maybe really smart people tend to develop a blind spot for really hard problems (because they've solved so many of them so effectively).

dnadler
I agree with you that AGI is not around the corner. I think the people who do believe that are generally falling for a behavioral bias. They see the advances in previously difficult problems, and extrapolate that progress forward, when in reality we are likely to come against significant hurdles before we get to AGI.

Also, seeing computers perform tasks they haven't done before can convince people that the model behind the scenes is closer to AGI than it really is. The fact that deep neural networks are very hard to decipher only furthers the mystical nature of the "intelligence" of the model.

Also, tasks like playing Starcraft are very impressive, but are not very close to true AGI in my opinion. Perhaps there's a more formal definition that I'm not aware of, but in my mind, AGI is not being good at playing Starcraft; AGI is deciding to learn to play Starcraft in the first place.

That's my 2 cents, anyways.

xamuel
It's like if someone watches "2001: A Space Odyssey" and takes HAL as the model for AI, so they work really hard and create a computer capable of playing chess like in the movie. "Well, that's not really the essence of HAL, it's just that HAL happened to play chess in one scene." So then they work really hard some more, and extend the computer to be able to recognize human-drawn sketches. "Well, that's still not really the essence of HAL, it's just that HAL did that in one particular scene." So they work still harder and create Siri with HAL's voice, and improve its conversation skills until it can duplicate the conversations from the film (but it still breaks down in simple edge cases that aren't in the film). "Well, that's still not the essence of HAL..."

The Greeks observed these limitations thousands of years ago. Below is an excerpt from Plato's "Theaetetus":

Socrates: That is certainly a frank and indeed a generous answer, my dear lad. I asked you for one thing [a definition of "knowledge"] and you have given me many; I wanted something simple, and I have got a variety.

Theaetetus: And what does that mean, Socrates?

Socrates: Nothing, I dare say. But I'll tell you what I think. When you talk about cobbling, you mean just knowledge of the making of shoes?

Theaetetus: Yes, that's all I mean by it.

Socrates: And when you talk about carpentering, you mean simply the knowledge of the making of wooden furniture?

Theaetetus: Yes, that's all I mean, again.

Socrates: And in both cases you are putting into your definition what the knowledge is of?

Theaetetus: Yes.

Socrates: But that is not what you were asked, Theaetetus. You were not asked to say what one may have knowledge of, or how many branches of knowledge there are. It was not with any idea of counting these up that the question was asked; we wanted to know what knowledge itself is.--Or am I talking nonsense?

davinic
This is a good example of Nassim Taleb's Ludic Fallacy: https://en.wikipedia.org/wiki/Ludic_fallacy
klenwell
This is a great example of 1 of the 2 fundamental biases Kahneman identifies in Thinking Fast and Slow: answering a difficult question by replacing it with a simpler one.

The other one (also perhaps relevant to the general topic of this thread): WYSIATI (What You See Is All There Is).

mordymoop
The problem here seems to be that you think the state of the art resembles “being really good at identifying objects”. This makes it clear that you are not keeping up with the frontier. I recommend looking up DeepMind’s 2019 papers, they are easily discoverable.

When you read them, you will probably update in the direction of “AGI soon”. It’s possible that you won’t see what the big deal is, I suppose. I personally see what Carmack and others see, a feasible path to generality, and even some specific promising precursors to generality.

It also helps to be familiar with the most current cognitive neuroscience papers, but that’s asking a lot.

scep12
What are some of the highlights from DeepMind that gives you optimism for a path to AGI? I am not seeing it, personally.
guskel
Various meta-learning approaches and advancements in unsupervised learning and one-shot learning.
EGreg
I would like to know as well
criddell
Is there anything in the structure of the brain that makes you think "of course this is an AGI"? For me, the answer is no. That's why I think progress on narrow AI and AGI is going to be unpredictable. Nobody will see the arrival of an AGI until it's here.
coolspot
Some also think that nobody will see the arrival of an AGI even after it’s here, because after arrival there will be no one left to see.
rnernento
Can you explain the general path in layman's terms in a few sentences? As far as I can tell AI is really good at analyzing large datasets and recognizing patterns. It can then implement processes based on what it's learned from those patterns. It all seems to be very specific and human directed.
fulafel
Link to DeepMind's papers: https://deepmind.com/research?filters=%7B%22collection%22:%5...
leftyted
> The problem here seems to be that you think the state of the art resembles “being really good at identifying objects”. This makes it clear that you are not keeping up with the frontier. I recommend looking up DeepMind’s 2019 papers, they are easily discoverable.

I freely admitted that I'm not "keeping up with the frontier". My knowledge in this area is not significant and boils down to writing a neural net in a college class that identified hand-written numbers.

Anyway, you're telling me that "DeepMind’s 2019 papers" combined with "the most current cognitive neuroscience papers" are the "decisive piece of information I'm missing". I doubt I could make heads or tails of those papers but maybe I'll take a look.

It would be cool if you could tell me what I'm missing instead of saying "just go read some papers" but maybe these things are simply too complicated to lay out that way.

woeirua
You're going to have to be more specific about what constitutes a major advance forwards. So far DeepMind's work (while impressive) has proven to be very brittle, and not transferable without extensive "fine-tuning". Previous attempts at transfer learning have been mixed to say the least.

I'm going to be pessimistic and say that AGI is probably decades away (if not centuries away for a human-like AGI). There are clearly many biological aspects of the brain that we do not understand today, and likely will not be able to replicate without far more advanced medical imaging techniques.

tenaciousDaniel
I'm not well versed in this area, but from my perspective, I see this as the fundamental problem:

Every action my brain takes is made of two components: (1) the desired outcome of the thought, and (2) the computation required to achieve that outcome. No matter how well computers can solve for (2), I have no idea how they'd manage solving for (1). This is because in order to think at all, I have to have a desire to think that thought, and that is a function of me being an organism that wants to live, eat, sleep, etc.

So for me, I just wonder how we're going to replicate volition itself. That's a vastly different, and vastly more complicated, problem.

09bjb
I agree that there's some aspect of volition, desire, the creative process...whatever you want to call that aspect of human thought that seems to arise de novo.

But speaking of de novo, I'm not at all sure that a desire to think a thought is required in order to think. The opposite seems closer: the less one tries to think, the more one ends up thinking.

I'm pivoting from your point here, but I see that bit as the hurdle we're not close to overcoming. We are likely missing huge pieces of the puzzle when it comes to understanding human "intelligence" (and intelligence itself is not the full picture). With such a limited understanding, a replication or full superseding in the near future seems unlikely. Perhaps the blind spot of the experts, as /u/leftyted alluded to, is that their modest success so far has generated a reality-distorting hubris.

tenaciousDaniel
It's like the more advanced stage of "a little bit of information is a dangerous thing".
Izkata
If I'm remembering my terms right, "embodied AI" is one theory or group of theories about interaction with an environment creating the volition necessary before generalized AI can be created.
ankeshanand
There's active research in Model-Based RL right now that tries to tackle 1) and 2) together.
realbarack
It isn't hard to give an AI a goal, but it is hard to do so safely. As a toy example, we could design an AI that treated, say, reducing carbon emissions as its goal, just as you treat eating and sleeping as yours. The issue is that the subgoals for accomplishing that top-level goal might contain things we didn't account for, say destroying carbon-emitting technology and/or the people that use it.
etherealG
Humans have many basic goals that are very dangerous when isolated in that way. It seems to me that nature didn't care (and of course, can't care) whether it was dangerous at all when coming up with intelligence. Maybe we shouldn't either if we want to succeed in replicating it.

Worrying about some apocalypse seems counterproductive to me.

ellius
I think people also have a very hard time conceptualizing the amount of time it took to evolve human intelligence. You're talking literally hundreds of millions of years from the first nerve tissues to modern human brains. I understand that we're consciously designing these systems rather than evolving them, but nevertheless that's an almost incomprehensible amount of trial and error and "hacking" designs together, on top of the fact that our understanding of how our brains work is still incomplete.
whatshisface
Flight took a while to evolve too.
foobiekr
A difference here is that flight evolved and re-evolved over and over. General intelligence of the scale and sort humans possess has appeared just once (that we know of, and very likely just once in history).
whatshisface
That's influenced by the anthropic principle. The first species to obtain human-level intelligence is going to have to be the one that invents AI, and here we are.
jackcosgrove
I don't think that's the same. We're not trying to reverse engineer flight. We're trying to reverse engineer how we reverse engineered flight.
whatshisface
The thing is, airplanes are not based on reverse-engineered birds. Cutting edge prototypes still struggle to imitate bird flight, because as it turns out big jet turbines are easier to build. It could very well be easier to engineer a "big intelligence turbine" than it would be to make an imitation brain.
bart_spoon
> It could very well be easier to engineer a "big intelligence turbine"

Is that not what a computer is? We have continuously tried and failed to create machines that think, react, and learn like the brains of living things, and instead managed to create machines that manage to simulate or even surpass the capabilities of brains in some contexts, while still completely failing in others.

zawerf
I thought you were going to go the other direction with your first sentence. It took some 4 billion years to go from the first cell to the first Homo sapiens. Maybe another 400,000 years to get from that to how we are today.

That means 0.01% of the timeline was all it took for us to differentiate ourselves from regular animals who aren't a threat to the planet.

0.01% of 100 years is 3 days.

mrnobody_67
And just 4 hours for AlphaZero to teach itself chess, and beat every human and computer program ever created....

DNA sequencing went from $3b per genome to $600, in about 30 years, much, much faster than Moore's "law".

nkurz
Why do you say "much, much faster"? $600 to $3 billion is about the same as going 2^9 (512) to 2^32 (4.3B), which requires 23 doublings. Moore's law initially[1] specified a doubling every year (30 years would be 30 doublings), then was revised to every two years (15 doublings), but is often interpreted as doubling every 18 months (20 doublings). Seems pretty close to me!

[1] https://en.wikipedia.org/wiki/Moore%27s_law
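A quick check of that arithmetic in Python (rough numbers, just to show the comparison):

    import math

    cost_ratio = 3_000_000_000 / 600      # $3B per genome down to $600
    doublings = math.log2(cost_ratio)
    print(round(doublings, 1))            # ~22.3 halvings of cost over ~30 years

    # Moore's law over 30 years:
    #   doubling every 12 months -> 30 doublings
    #   doubling every 18 months -> 20 doublings
    #   doubling every 24 months -> 15 doublings
    # so ~22 doublings sits squarely in the Moore's-law range.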

pxndxx
That's a very anthropocentric view, and not how the timeline works. Unicellular organisms are also smart in a way computers can't exactly replicate. They hunt, eat, sense their environment, reproduce when convenient, etc. All of these are also intelligent behaviours.
meowface
Also as a strict materialist, after reading estimates from lots of different people in lots of different disciplines, and integrating and averaging everything, I think we'll likely have human-level or above AGI around 2060-2080. I think it's relatively unlikely it'll happen past 2100 or before 2050. I'd even consider betting some money on it.

I'm kind of coming up with these numbers out of thin air, but as much of a legend as he is, I agree Carmack's estimate seems way too optimistic to me. It's possible, but unlikely to me.

That said:

>The emphasis on computational power also makes no sense to me. If we had infinite compute today, what steps would you take to build AGI? Does anyone have any good ideas about that?

In this interview with Lex Fridman, Greg Brockman, a co-founder of OpenAI, says it's possible that increasing the computational scale exponentially might really be enough to achieve AGI: https://www.youtube.com/watch?v=bIrEM2FbOLU. (Can't remember where he said it exactly, but I think somewhere near the middle.) He's also making a lot of estimates I find overly optimistic, with about the same time horizon as Carmack's estimate.

As you say, it can be a little confusing, because both John Carmack and Greg Brockman are undoubtedly way more intelligent and experienced and knowledgeable than I am. But I think you're right and that it is a blind spot.

By contrast, this JRE podcast with someone else I consider intelligent, Naval Ravikant, essentially suggests AGI is over 100 years away: https://www.youtube.com/watch?v=3qHkcs3kG44. I think he said something along the lines of "well past the lifetimes of anyone watching this and not something we should be thinking about". I think that's possible as well, but too pessimistic. I probably lean a little closer to his view than to Carmack's, though.

mirceal
I believe that 100 years is optimistic. I would say that it's hundreds of years away if it's going to happen at all.

My bet is that humans will go the route of enhancing themselves via hardware extensions and this symbiosis will create the next iteration(s) in our evolution. Once we get humans that are in a league of their own with regards to intelligence they will continue the cycle and create even more intelligent creatures. We may at some point decide to discard our biological bodies but it's going to be a long transition instead of a jump and the intelligent creatures that we create will have humans as a base layer.

meowface
Carmack actually discusses this in the podcast when Neuralink is brought up. He seems extremely excited about the product and future technology (as am I), but he provides some, in my opinion, pretty convincing arguments as to why this probably won't happen and how at a certain point AGI will overshoot us without any way for us to really catch up. You can scale and adjust the architecture of a man-made brain a lot more easily than a human one. But I do think it's plausible that some complex thought-based actions (like Googling just by thinking, with nearly no latency) could be available within our lifetimes.

Also, although I believe consciousness transfer is probably theoretically achievable - while truly preserving the original sense of self (and not just the perception of it, as a theoretical perfect clone would) - I feel like that's ~600 or more years away. Maybe a lot more. It seems a little odd to be pessimistic of AGI and then talk about stuff like being able to leave our bodies. This seems like a much more difficult problem than creating an AGI, and creating an AGI is probably the hardest thing humans have tried so far.

I'd be quite surprised if AGI takes longer than 150 years. Not necessarily some crazy exponential singularity explosion thing, but just something that can truly reason in a similar way a human can (either with or without sentience and sapience). Though I'll have no way to actually register my shock, obviously. Unless biological near-immortality miraculously comes well before AGI... And I'd be extremely surprised if it happens in like a decade, as Carmack and some others think.

mirceal
I'm no Carmack but I do watch what is happening in the AI space somewhat closely. IMHO a "brain" or intelligence cannot exist in a void: you still need an interface to the real world, and some would go as far as to say that consciousness is actually the sensory experience of the real world replicating your intent (i.e. you get the input and predict an output, or you get the input plus perform an action to produce an output), plus the self-referential nature of humans. Whatever you create is going to be limited by whatever boundaries it has. In this context I think it's far more plausible for super-intelligence to emerge and be built on human intelligence than for super-intelligence to emerge in a void.
meowface
How would this look, exactly, though? If you're augmenting a human, where exactly is the "AGI" bit? It'd be more like "Accelerated Human Intelligence" rather than "Artificial General Intelligence". I don't really understand where the AI is coming in or how it would be artificial in any respect. It's quite possible AGI will come from us understanding the brain more deeply, but in that case I think it would still be hosted outside of a human brain.

Maybe if you had some isolated human brain in a vat that you could somehow easily manipulate through some kind of future technology, then the line between human and machine gets a little bit fuzzy. In that respect, maybe you're right that superintelligence will first come through human-machine interfacing rather than through AGI. But that still wouldn't count as AGI even if it counts as superintelligence. (Superintelligence by itself, artificial or otherwise, would obviously be very nice to have, though.)

Maybe you and I are just defining AGI differently. To me, AGI involves no biological tissue and is something that can be built purely with transistors or other such resources. That could potentially let us eventually scale it to trillions of instances. If it's a matter of messing around with a single human brain, it could be very beneficial, but I don't see how it would scale. You can't just make a copy of a brain - or if you could, you're in some future era where AGI would likely already have been solved long ago. Even if every human on Earth had such an augmented brain, they would still eventually be dwarfed by the raw power of a large number of fungible AGI reasoning-processors, all acting in sync, or independently, or both.

mirceal
yes. we probably have different definitions for AGI. For me artificial means that it’s facilitated and/or accelerated by humans. You can get to the point where there are 0 biological parts and my earlier point is that there would probably be multiple iterations before this would be a possibility. If I understand you correctly you want to make this jump to “hardware” directly. Given enough time I would not dismiss any of these approaches although IMHO the latter is less likely to happen.

also, augmenting a human brain for what I’m describing does not mean that each human would get their brain augmented. It’s very possible that only a subset of humans would “evolve” this way and we would create a different subspecies. I’m not going to go into the ethics of the approach or the possibility that current humans will not like/allow this, although I think that the technology part would not be enough to make it happen.

jackcosgrove
I am not an expert, but I don't think computational power is the limitation. It's the amount of data processed. Our brains are hooked up to millions of sensory signals, some of which have been firing 24/7 for decades. Also our brains come with some preformed networks (sensory input feeding into a region with a certain size and shape) that took millions of years to "train". Even then, our brains take 20-25 years to mature.

Machine learning at this point seems closer to a tool designed analytically (feeding it well-formed data relevant to the task, hand-designing the network) than to AGI.

gameswithgo
Things that support the notion that it is soon are that napkin math suggests the computational horsepower is here now, and that we have had a few instances of sudden, unexpected advances in how well neural networks work (AlphaGo, AlphaZero, etc.).
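For what it's worth, one commonly cited version of that napkin math, with very rough assumed figures (these are order-of-magnitude guesses, not measurements):

    synapses = 1e14          # rough count of synapses in a human brain
    firing_rate = 100        # Hz, a generous upper bound on average firing rate
    ops_per_event = 10       # fudge factor for per-synapse computation
    brain_ops = synapses * firing_rate * ops_per_event
    print(f"{brain_ops:.0e} ops/sec")   # ~1e17

    # The largest supercomputers today are on the order of 1e17 FLOPS, so by this
    # (very debatable) measure the raw horsepower is roughly in the same ballpark.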

One might extrapolate that there is a chance that in 10 years, when the computational horsepower is available to more researchers to play with, and we get another step-change advance, that we will get there.

My own feeling is that it is possible AGI could happen soon, but I don't expect it will.

someexgamedev
This is how I feel about AGI too, and I also include self-driving cars. I don't think those are just around the corner either.

In general I don't think our current approach to AI is all that clever. It brute forces algorithms which no human has any comprehension of or ability to modify. All a human can do is modify the input data set and hope a better algorithm (which they also don't understand) arises from the neural network.

It's like a very permissive compiler which produces a binary full of runtime errors. You have to find bugs at runtime and fiddle with the input until the runtime error goes away. Was it a bug in your input? Or a bug in the compiler? Who knows. Change whichever you think of first. It's barely science and it's barely a debug workflow.

What pushed me all the way over the edge was when adversarial techniques started to be applied to self-driving cars. That white paper made them look like death machines. This entire development process I am criticising assumes we get to live on the happy path, and we're not. The same dark forces that infosec can barely keep at bay on the internet, and has completely failed to stop on IoT, will now be able to target your car as well.

Worst thing is all our otherwise brilliant humans like Carmack are gonna be the guinea pigs in the cars as they head off toward their next runtime crash.

seagreen
The economics of the situation aren't friendly to humans, because human intelligence doesn't scale up well. Take energy consumption-- once you're providing someone 3 square meals they can't really use any extra energy efficiently. So we try training up lots of smart people and having them work together, but that causes lots of other problems-- communication issues, office politics, etc.

Additionally you can't replicate people exactly, so even when Einstein comes along we only have him for a short while. When he passes away we regress.

Computers are completely different. We can ring them with power plants, replicate them perfectly, add new banks of CPUs and GPUs, wire internet connections directly into them, etc.

This didn't use to matter because of the old "computers can only do exactly what you tell them to do, just really fast" limitation. Now that computers are drawing, making art, modifying videos, playing Chess and Go preternaturally, playing real-time strategy games well, etc., we can see that that limitation doesn't really hold anymore.

At this point the economics start to really kick in. More machine learning breakthroughs + much, MUCH bigger computers over the next decades are going to be interesting.

phatfish
Einstein comes along only once but his knowledge lives after his death. The same way he iterated on the knowledge of those before him.

If you give Deepmind "x" times the compute power (storage, whatever) it just plays Starcraft better. It's not going to arrange tanks into an equation that solves AGI.

That breakthrough will be assisted by computers I'm sure, but the human mind will solve it.

platz
Also I think that CS people's understanding of neurons is horribly oversimplified. The idea that there are bits 'in' neurons is a misconception. Each neuron is a complex cellular entity with varied modes of interaction and activation.

So these napkin estimates comparing brainpower to what server farms can do don't inform us at all about how that gets us closer to AGI.

abledon
I always wonder how they think AGI is close when neuroscience is still scratching in the dark with brain scans, and we don't know 100% how digestion works, nor how to build a single cell in a lab and have it skip millions of years of evolution to make a baby in 9 months. The AGI will definitely be different in structure than a human brain. Will it have a microbiome to influence its emotions and feelings?
ses1984
You don't need AGI to do serious damage. I think it's just easier for the layperson to reason about the ethics and implications of AGI than it is to reason about how various simpler ML models can be combined by bad actors to affect society and hurt the common good.
Kaiyou
One such missing piece could be that AGI already exists, but is kept behind NDAs.
xamuel
If someone has discovered AGI, it should be trivial for them to completely dominate absolutely everything. There would be no more need for capitalism or anything, we would be in a post-singularity world.
Kaiyou
Imagine you'd have discovered AGI and want to exploit it as best as you can without having everyone else notice that you have discovered AGI.
xamuel
You don't even need to do the imagining yourself. You can just have your AGI do that imagining for you. "Computer, come up with a way for us to take over the world without anyone noticing."
Kaiyou
And then hope that the answer isn't "I'm sorry, Dave, I'm afraid I can't do that."
chillee
Speaking as someone doing research in this field, I have an unbelievably hard time imagining this to be the case.

The ML community is generally extremely open, and people know what the other top people are working on. If an AGI was developed in secret, it would have to be without the involvement of the top researchers.

prepend
Without the involvement of who we think are the top researchers. If I were smarter and had more time, I would look for bright young researchers who published early and then stopped, but are still alive.
mirceal
you're assuming that AGI is going to come from ML. While interesting, I strongly believe that ML is not going to generate anything close to AGI ever. ML is more like our sense organs than it is like our brain. It can take care of processing some of the input we receive, but I don't see it moving past that. Super-advanced ML + something else will probably be at the root of what could evolve into AGI.
Kaiyou
You probably have a blind spot for people who are not able to speak English and are working in conditions that are kept secret by design. Coincidentally, I know someone in that situation who has worked on AI for at least two decades and who has kept radio silence for the last decade on what exactly he's working on.
carapace
Kind of a tangent but I can't see why we wouldn't be able to make "A"GI using human brain organoids.

https://en.wikipedia.org/wiki/Cerebral_organoid

I know of at least two people that are eager to make "Daleks" and given the sample size there must be many more.

marvin
> Sometimes I wonder if AGI (and the concept of a "technological singularity") isn't just "intelligent design for people with north of 140 IQ"

You're not the first person to express this idea, but it's pure speculation. There is obviously a possibility that it will be proved correct at some point in the future. But historically, very smart people have been ridiculed like clockwork for expressing ideas that were beyond their time but philosophically (and physically, eventually technologically) possible.

I'd be wary of adding to such sentiment. It also feels suspiciously like an ad hominem criticism, although in your case it's expressed more like a question. I think there is clearly something to the idea of very smart people having an intellectual disconnect with the reasoning of their closer-to-average peers (and hence expressing things that seem ludicrous, without considering how they will be received), but not one that negatively affects the quality of their deductions.

IMHO, the ideas of AGI and a "technological singularity" (let's call it economic growth that's extremely more powerful than anything seen up until now) aren't so different from earlier, profound developments in human history. The criticism of "smart people developing a blind spot" could have been applied equally to e.g. the ideas of agriculture and the following power shift, industry, modern medicine, powered flight and spaceflight, nuclear weapons or computers, networking and robotics.

All these ideas put the world into an almost unimaginably different state, when seen with the eyes of an earlier status quo. Maybe AGI is relatively different; it's hard to say without having lived in ancient Egypt. It's certainly qualitatively different, since it involves changes to intelligent life, but I'm not sure the idea feels much more alien than things we've already experienced.

simplecomplex
“AGI” seems exactly like “God.” It has no objective or scientific definition. It’s whatever you want it to be.

The singularity theory is based on the premise of accelerating progress, despite progress not being a quantitative thing and therefore not something that can accelerate.

vehementi
He did couch it in the caveat that once the hardware is there, it'd be more a matter of thousands of people throwing themselves at the problem -- we're waiting, I guess, for the hardware to be good/cheap enough for those people to be widespread.
drcode
I sort of agree with your skepticism, but you gotta admit that some of the things the ML folks are doing are uncanny in terms of how they seem to model the human visual system and perform other human-like tasks. Additionally, we already have tons of CPU horsepower that can get close in terms of raw processing ability. Even though we don't yet know what the missing "special sauce" is, I don't think it's inconceivable that someone in 5 years figures it out (though 50 years is just as likely)
jcims
I know it’s just a splinter of AGI, but conversational language understanding and generation is undergoing some rapid advancement. This subreddit is all GPT2 bots, and while most of it is still bad, there are glimpses of the future in there. (Note: Some of it is NSFW)

https://www.reddit.com/r/SubSimulatorGPT2/

AbrahamParangi
What I believe to be the core insight is that gradient descent in many dimensions (or weaker strategies like evolutionary algorithms or reinforcement learning) is unbelievably powerful.

It's less obvious what problems are not tractable for sufficiently large gradient descent than what are.

While you could solve the AGI problem with insight, it's credible that you can solve it in the near future with brute force search.
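As a toy illustration of what "sufficiently large gradient descent" means here (a made-up one-dimensional example, not anyone's research code):

    # Minimize f(x) = (x - 3)^2 by repeatedly stepping downhill along the gradient.
    def grad(x):
        return 2 * (x - 3)      # derivative of (x - 3)^2

    x, lr = 0.0, 0.1
    for _ in range(100):
        x -= lr * grad(x)

    print(round(x, 4))          # ~3.0, the minimum

The same loop, run over millions of parameters with gradients computed by backpropagation, is the brute-force search being described.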

jjaredsimpson
Reading the AI-go-FOOM debate solidified a lot of the mushy parts of my "singularitarianism".

I think the linchpin of my belief is recursive self improvement. I think machine intelligences are a different kind of substance with different dynamics than the ones we typically encounter.

I don't think someone will compile the first AGI and presto, there it is. I think a long-running system of processes will interact and rewrite its own code to produce something around which a reasonable boundary could eventually be drawn to distinguish the system, and anyone interacting with the system would say: "this thing is intelligent, the most intelligent thing on the planet". It would have instant access to all written knowledge, essentially unbounded power to compute new facts and information, and the ability to model the world to as accurate an approximation as needed to produce high-confidence utterances.

I just don't see how a system like that couldn't come into existence one day. Issues around timelines are completely unknowable to me. But I would put a distribution of something like I would be surprised if it happened in the next 50 years and shocked if it didn't happen within the next 1000. Very fuzzy, but it "feels" inevitable.

If a collection of unthinking cells can coordinate and produce the feeling of conscious experience then I can't see what would stop silicon from producing similar behavior without many bounds inherent in biological systems.

woeirua
But that's the rub. Biological systems are not just random interactions. The entire system is meticulously orchestrated by DNA, RNA, etc. We don't even fully understand yet how it all works together, but it's very clear that these processes have evolved to work together to achieve something that none of them could have ever achieved alone.
jjaredsimpson
Biological systems climb up energy gradients and outcompete other systems.

Artificial systems should be able to climb given a suitable gradient. I think the hard part of AGI is going to be designing the environment and gradient to produce "intention", I don't think the hard part is studying the human mind to find out the "secret of intelligence"

The goal of AGI isn't silicon minds isomorphic to human minds at each level of interpretation. Just the existence of an intelligent system.

sullyj3
> If we had infinite compute today, what steps would you take to build AGI? Does anyone have any good ideas about that?

https://en.wikipedia.org/wiki/AIXI
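Roughly, the AIXI agent defined there picks each action by maximizing expected reward over all computable environments, weighted by a Solomonoff prior (Hutter's formulation, reproduced here for reference):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m}
           [r_k + \cdots + r_m]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here the a's are actions, the o's and r's are observations and rewards, U is a universal Turing machine, and \ell(q) is the length of program q. The catch is that AIXI is incomputable (the Solomonoff prior can't actually be evaluated), so only approximations like MC-AIXI have ever been run.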

m3kw9
Looking at the trajectory of things, AGI is not impossible. There you have it; I think that's all anyone can say about AGI till another breakthrough comes, whatever that may be.
pacala
"Neural net that's really good at identifying objects, processing symbols and making decisions."

* Neural nets play Chess or Go better than us. They will soon play Mathematics better than us.

* They turn keywords into photo-realistic images in seconds, and will soon do the same for text. Literature and Arts down this path.

* They learn to play video games better than us, from Starcraft to Dota. Engineering down this path.

There is no hidden information. You just need to look at the breadth of the field. There is a credible challenge to all of our intelligent capabilities.

leftyted
I have no trouble believing that neural nets can beat people at all of these things (including, eventually, driving). And that, in itself, is incredibly impressive and incredibly useful.

The question is how you get from that to AGI.

tzs
One thing that is still missing, I believe, is adaptability. Take chess.

Between rounds at an amateur chess tournament you will often find players passing the time playing a game commonly called Bughouse or Siamese chess. It's played by two teams of two players, using two chess sets and two clocks. Let's call them team A, consisting of players Aw and Ab, and team B, consisting of players Bw and Bb.

The boards are set up so that Aw and Bb play on one board, and Ab and Bw on the other. They play a normal clocked game (with one major modification described below) on each board, and as soon as any player is checkmated, runs out of time on their clock, or resigns, the Bughouse game ends and that player's team loses.

The one major modification to the rules is that when a player captures something, that captured piece or pawn becomes available to their partner, who can later elect on any move to drop that on their board instead of making a move on the board.

E.g., if Aw captures a queen, Ab then has a black queen in reserve. Later, instead of making a move, Ab can place that black queen on Ab's board. The captured pieces must be kept where the other team can easily see them.

You can talk to your teammate during the game. This communication is very important because the state of your teammate's game can greatly affect the value of your options. For example, I might be in a position to capture a queen for a knight, and just looking at my board that might be a great move. But it will result in my partner having a queen available to drop, and my partner's opponent having a knight to drop. Once on the board a queen is usually worth a lot more than a knight--but when in reserve it is the knight that is often the more deadly piece. So, I'll ask my teammate if queen for knight is OK. My teammate might say yes, or no, or something more complicated, like wait until his opponent moves, so that he can prepare for that incoming enemy knight. In the latter case, if I've got less time on my clock than my teammate's opponent has, the latter might delay his move, trying to force me to either do the trade while it is still his turn, or do something else which will let his teammate save his queen. This can get quite complicated.

OK, now imagine some kid, maybe 12 years old or so, who is at his first tournament, and is pretty good for his age, and had never played Bughouse. He's played a ton of regular chess at his school club and with friends, and with the computer.

A friend asks him to team up, quickly explains the rules, and they start playing Bughouse.

First few games, that kid is going to cause his team to lose a lot. He'll be making that queen for knight capture without checking the other board, shortly followed by his partner yelling "WHERE DID THAT KNIGHT COME FROM!? AAAAAARRRRRGGGHHHHH!!!".

The thing is, though, by the end of the day, after playing a few games of Bughouse between each round of the tournament, that kid will have figured out a fair amount of which parts of his knowledge of normal chess openings, endgames, tactics, general principles, etc., transfer as-is to Bughouse, which parts need modification (and how to make those modifications), and which parts have to be thrown out.

To get his Bughouse proficiency up to about the same level as his regular chess proficiency will take orders of magnitude fewer games than it took for regular chess.

I don't think that is currently true for artificial neural nets. Training one for Bughouse would be as much work as training one for regular chess, even if you started with one that had been already trained for regular chess.
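For reference, "starting with one that had already been trained" usually means something like the standard fine-tuning recipe. A hypothetical PyTorch sketch (the architecture, sizes, and checkpoint name are made up; this is not a claim that it works well for Bughouse):

    import torch
    import torch.nn as nn

    class ChessNet(nn.Module):
        def __init__(self, n_moves):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(768, 512), nn.ReLU())  # board features -> hidden
            self.head = nn.Linear(512, n_moves)                         # hidden -> move logits

        def forward(self, x):
            return self.head(self.trunk(x))

    model = ChessNet(n_moves=4672)                    # regular-chess move space (AlphaZero-style encoding)
    # model.load_state_dict(torch.load("regular_chess.pt"))  # pretend this checkpoint exists

    for p in model.trunk.parameters():
        p.requires_grad = False                       # freeze the pretrained trunk

    model.head = nn.Linear(512, 6000)                 # new head sized for Bughouse moves (made-up number)
    optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
    # ...then train on Bughouse games; only the new head's weights get updated.

The open question raised above is whether any of the trunk's regular-chess knowledge actually transfers, or whether you end up paying nearly the full training cost anyway.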

pacala
"While neural nets are good at organizing a world governed by simple rules, they are not proven good at interacting with other intelligent agents." This is an interesting point, for example squeezing information through a narrow channel forces a kind of understanding that brute forcing does not. I've stopped paying close attention to the field a year ago, but I have seen a handful of openai and deepmind papers taking some small steps down this route.
09bjb
I disagree that the field of Mathematics can be reduced to a definable game. Many recent breakthroughs have been a creative cross-pollination of mathematical fields...not that this will be totally off-limits to a sufficiently general AI. I didn't do much math in college, but my impression was that after you've learned the mechanics of calculus, algebra, etc., there's no obvious way to advance the field. Lots of "have people thought of things this way before?" rather than "crunch the numbers harder!".

Anyone with more training want to chime in?

povertyworld
AGI is becoming like communism in that it seems theoretically possible, might usher in utopia or be really scary, and apparently intelligent people often believe in it. Along that line of thought one can imagine a scenario where some rogue military tech kills 100 million people, and the world moves to ban it, but a small cadre of intellectuals insist that "wasn't real AGI".
mmjaa
Until we can define intelligence, we cannot create artificial intelligence. We still do not know what intelligence actually is - bloviating academics clamoring for subsidies to support their habits notwithstanding.
AllegedAlec
> Until we can define intelligence, we cannot create artificial intelligence.

Until we define 'cake', we cannot create cake.

mmjaa
We have well and truly defined cake.

I mean, words have meaning, don't they? Or, if not, then what's the fucking point?

prepend
Only because we made it so much.
MAXPOOL
I think you have a point.

AGI is a scientific problem of the hardest kind, not an engineering problem where you just use existing knowledge to build better and better things.

Marvin Minsky once said that in mathematics just five axioms are enough to provide the amount of complexity that overwhelms the best minds for centuries. AGI could be a messy practical problem that depends on 10 or 25 fundamental 'axioms' that work together to produce general intelligence. "I bet the human brain is a kludge." - Marvin Minsky

The idea that if many people think very hard about this problem, it will be solved in our lifetime is prevalent. That's not true in math and physics, so why would AI be any different? Progress is made, but you can't know whether the breakthrough comes tomorrow or 100 years from now. Just adding more computational capability is not going to solve AI.

Currently it's the engineering applications and the use of the science that are exploding and getting funded. In fact, I think some of the best brains are being lured from fundamental research into applied science with high pay and resources. What the current state of the art can do has not been fully utilized in the economy yet, and this brings in the investments and momentum.

melling
Nature has already solved AGI. Now we just need to reverse engineer it.
SamReidHughes
Unfortunately, Von Neumann is long dead, so we only have damaged approximations of AGI to work with.
_nosaj
We can say this about anything in the universe though.
liability
Nah, not really; there is loads of stuff invented by humans that, as far as we know, did not appear in the universe before we did it. For example, I'm unaware of any natural implementation of a free-spinning wheel attached to an axle.
MAXPOOL
"just"?

Neuroscience is full of problems of the hardest kind.

melling
Yes, one of Paul Allen's gifts to the world should help:

https://alleninstitute.org/

bumby
A similar parallel is the enthusiasm (or hype) around self-driving cars. There was initial optimism fueled by the success of DL on perception problems. But conflating solving perception with the larger, more general problem of self-driving leads to an overly optimistic bias.

Much of the takeaway from this year's North American International Auto Show was that the manufacturers are reluctantly realizing the real scope of the problem and trying to temper expectations. [0]

And self-driving cars is still a problem orders of magnitude simpler than AGI.

[0] https://www.nytimes.com/2019/07/17/business/self-driving-aut...

esmi
Also, we know what a self-driving car is, how to recognize it, and even how to measure it.
gameswithgo
>And self-driving cars is still a problem orders of magnitude simpler than AGI.

You sure? It might very well be a single order of magnitude harder, or not any harder, given that solving all the problems of self-driving even delves into questions of ethics at times (who do I endanger in this lose-lose situation, etc.).

bumby
I could certainly be wrong, it's just speculation on my part on the assumption that self-driving issues would be a smaller subset of AGI problems.

I actually don't think the ethics part is all that hard if (and that's a big if) there can be an agreement on a standard approach. An example would be a utilitarian model, but this often is not compatible with egalitarian ethics. This approach reeks of technocracy but it's certainly a solvable problem.

xamuel
Re: Comparing self-driving cars to AGI: It's counterintuitive, but depending how versatile the car is meant to be, the problems might actually be pretty close in difficulty.

If the self-driving car has no limits on versatility, then, given an oracle for solving the self-driving car problem, you could use that to build an agent that answers arbitrary YES-NO questions. Namely: feed the car fake input so it thinks it has driven to a fork in the road and there's a road-sign saying "If the answer to the following question is YES then the left road is closed, otherwise the right road is closed."

Compare with e.g. proofs that the C++ compiler is Turing complete. These proofs involve feeding the C++ compiler extremely unusual programs that would never actually come up organically. But that doesn't invalidate the proof that the C++ compiler is Turing complete.

chronolitus
Very well put. And you could argue that it is not as much a stretch as it seems.

Self driving cars would realistically have to keep functioning in situations where arbitrary communication with humans is required (which happens daily), which tends to turn into an AI-hard problem quite quickly.

bumby
Good points.

I was thinking in terms of "minimum viable product" for self-driving cars, which I have a hunch will be of limited versatility compared to what you describe. To have a truly self-driving car as capable as humans in most situations, you may be right.

xamuel
They already made a minimum viable product self-driving car. It's called a "train".
bumby
I know this is meant jokingly, but for many cities (especially relatively remote ones), trains are not considered viable because they have strictly defined routes.

Many cities choose to forego trains for busses in large part due to the lower upfront costs and the ability to change routes as the needs of the populace change.

taneq
That's the problem with all of the fatuous interpretations floating around of "level 5" self-driving.

"It has to be able to handle any possible conceivable scenario without human assistance" so people ask things like "will a self-driving car be able to change its own tyre in case of a flat" and "will a self-driving car be able to defend the Earth from an extraterrestrial invasion in order to get to its destination".

They need to update the official definition of level 5 to "must be able to handle any situation that an average human driver could reasonably handle without getting out of the vehicle."

(Although the "level 1" - "level 5" scale is a terrible way to describe autonomous vehicles in any case and needs to be replaced with a measure of how long it's safe for the vehicle to operate without human supervision.)

On open sourcing Wolfenstein, Doom, Quake: https://youtu.be/udlMSe5-zP8?t=591

Neuralink: https://youtu.be/udlMSe5-zP8?t=2523

Artificial General Intelligence: https://youtu.be/udlMSe5-zP8?t=2778

Quantum Computing: https://youtu.be/udlMSe5-zP8?t=3106

“Engineering is figuring out how to do what you want with what you’ve actually got”: https://youtu.be/udlMSe5-zP8?t=3190

End of Moore’s Law / On CPU architecture: https://youtu.be/udlMSe5-zP8?t=3860

5G and streaming (games & video): https://youtu.be/udlMSe5-zP8?t=4288

edit: as already mentioned, there are a lot of topics covered, some for just a few sentences; the conversation flows, and it's worth watching the whole thing

ladybro
Here are these timestamps on one clickable video, feel free to add your own: https://use.mindstamp.io/video/XFnaNsKJ?notes=1
tosh
this is great!
algaeontoast
I'd be very curious to see an Ask HN on the premise of “Engineering is figuring out how to do what you want with what you’ve actually got”, but in the context of understanding what you can actually manage to learn or build with your current skills and known ability to learn. Could be a very interesting take on imposter syndrome and understanding effective ways to learn?
lucaspottersky
> “Engineering is figuring out how to do what you want with what you’ve actually got”

same lines as... _life is figuring out what to do with what you’ve got_.

Nokinside
Carmack briefly mentions 5G and streaming games. I think there is a good economic reason why 5G gaming is eventually coming (it may take some time): low latency enables it.

If you think about the price of a gaming PC or console, there is a huge discrepancy between the budget of the hardcore gaming enthusiast and the casual gamer. It would be nice to get a $5000 gaming tower into every house, but you don't. Many casual gamers would rent $5000 - $10,000 worth of gaming hardware for a few hours a week if it were simple. The only way to get bleeding-edge high-end gaming to the masses is to put the GPU and some other parts at the edge (or at least within the same city) and stream or partially stream the game to cheaper computing devices and screens.

Consider $5,000 worth of bleeding-edge hardware that costs $1/hour to run. If you rent it out for $8/hour, and it only sells for 5 hours per day for gaming, the hardware pays for itself in about 5 months. It could be rented out for other stuff in the meantime. Cloudflare, what do you think?
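The back-of-the-envelope math behind that payback estimate:

    hardware_cost = 5000     # dollars up front
    running_cost = 1         # dollars per hour while rented
    rental_price = 8         # dollars per hour
    hours_per_day = 5

    profit_per_day = (rental_price - running_cost) * hours_per_day   # $35/day
    payback_days = hardware_cost / profit_per_day
    print(round(payback_days))    # ~143 days, i.e. a bit under 5 months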

I could see a market emerging similar to old-school VHS/DVD/game rentals. There is a limited computational resource near you and you can rent it for gaming. If it's all taken (weekend evenings), the 'shelves' are empty. On working/school days and hours you get the same thing cheaper.

It probably happens in Japan, South Korea and Nordic countries first.

bluedevilzn
So.... Google Stadia? Except it's more like $10/month for 4k gaming.
mxfh
There seems to be confusion about what hardcore gaming means: competitive gaming, a high-end user experience, or bragging-rights spec-based enthusiasm?

My benchmark would be the XBox One X, which is perfectly capable of 4k 60 fps gaming with HDR and Atmos, try to get that stable running with any PC, good luck. That's 400 for the console and under 2000 in total if you need to get an additional OLED TV and an Atmos-enabled speaker set, which are not exclusively gaming budget.

The only shortcoming there is the absence of fast current-generation NVMe mass storage and an RTX-enabled GPU.

Reasonably high-end PC gaming is possible well below $2000 or even $1000; even $500 will get you decent performance for casual competitive gameplay.

The priciness of gaming rig setups comes from the insane demands of getting an edge at 4K 120+ fps in FPS shooters. That bracket can't be won over by any streaming service, given physical limitations, for at least the next 10 years or so.

mosselman
> That's 400 for the console

In reality it is 400 for the console + 60 / year for online gaming, which means it is about 640 for 4 years of gaming.

Others have mentioned that you can't get 60fps @ 4K, but I have never played on an Xbox One X, so I can't attest to that.

saiya-jin
You are not correct. The hardware in gaming PCs and consoles is more or less the same; the only advantage is that game devs can optimize for a specific hardware setup instead of all possible combinations like on PC.

If a console is running 4K@60Hz, it means the level of detail is probably somewhere around low settings compared to the PC version. For that, you don't need a $2000 PC; something much cheaper would be enough. On top of that you can use it, well, as a PC. But yes, consoles will be cheaper, just not by that much, and their purpose is much, much narrower.

satysin
> My benchmark would be the XBox One X, which is perfectly capable of 4k 60 fps gaming

Going to disagree with you here. I don't think there is even one game that runs at full 4K (not CBR) and maintains 60fps. If there is one I would love to know so I can check it out but so far everything I play on my Xbox One X is either running at well below 4K in order to maintain 60fps or runs at 30fps with 4K CBR. Very few games are true 4K.

okmokmz
>try to get that stable running with any PC

Easy. It will obviously be much more expensive than a console, but it's perfectly doable if you have the budget for it

mxfh
I have the budget, but not the time. Unless by budget you mean paying your own household QA technician. Dolby Atmos over HDMI plus HDR does not play well with NVIDIA drivers, or only with very few of them, which then again tends to break my VR.

https://www.reddit.com/r/nvidia/comments/ao4c0u/anyone_with_...

prepreference
I’m not so sure you know what you’re talking about.

4K and >120 FPS are incompatible goals. If you are optimizing for frames in a competitive title the first thing you will do is turn total render target size down to the minimum. Conversely, if you want to render at max quality, those pixels look a lot nicer as resolution than as frames.

If you’re a real pro, you can spend five times as much money to hit 80% of cutting-edge performance on both metrics simultaneously. You hardcore gamer, you.

People who talk like that are usually kids spending their parents’ money.

FYI, genuine demand for higher gaming performance is mostly being driven by VR, where sentences like “8K at 144hz” aren’t just big numbers.

mxfh
Me east german born orphan peasant gamer has of cause no idea what words mean. ¯\_(ツ)_/¯
fkdo
The Xbox One X doesn't maintain those specs. The resolution drops during gameplay.

You actually need a machine north of $2k to support 60fps 4k.

prepend
My kid plays 1-5 hours of Xbox every day. There’s no way I could pay $1/hour.

Renting game consoles only works for casual gamers or super hardcore people who want $5k rigs. Casual gamers don't care, I think, and use their 5-year-old iPhone. And there aren't a ton of super high-end gamers. And I hang out in gaming cafes where people pay $6/hour.

esmi
$1/hr at 5 hours per day is ~$150/month. I have no idea what you can afford but that’s basically the price of cable these days.
prepend
It’s also $1800/year. I can buy an Xbox or ps4 for $500. I can buy a gaming pc for $2k.

Paying $1800/year/per person forever is a bad deal.

homonculus1
It's also the entire cost of a console these days. The current status quo is a better comparison than a different service (cable), where cutting it out entirely is a meme due to ludicrous cost and subpar value.
Ajedi32
Thankfully, actual cloud gaming services are way cheaper than $1/hour. Stadia, for example, has no recurring monthly cost for 1080p gaming; you just have to pay for the games.
novocaine
Those countries already have fibre - are you saying 5G is better than fibre for streaming gaming?
Nokinside
Even in the wonderland of fiber, South Korea, fiber to the home has slowed down and fiber to the building is common. Korea Telecom still has lots of coaxial setups that they stretch into gigabit speeds using 1:N connections. Gigabit penetration in Seoul was still below 50% a few years back, and much lower elsewhere.

Btw. fiber has no latency advantage. What you need is servers at the edge in both use cases.

ovi256
Fiber has a latency advantage in real-world usage. In the common scenario of multiple computers and users sharing a single connection, if one of them does something latency-sensitive (gaming or video chat), they will be bothered a lot less by the others' heavy bandwidth use on fiber than on ADSL, or to a lesser extent cable.

That's because heavy bandwidth use (downloads, video streaming) will saturate a low-bandwidth link and packets will drop. Or bufferbloat will increase latency without dropping packets, which amounts to the same thing for latency-sensitive usage.

With multiple video streams going on in the average household being a common case nowadays, it's nice to have enough bandwidth.

cm2187
Do you really get significant cost savings? You can't really use hardware from other time zones / continents, because the latency would be very visible. So the hardware has to be reasonably close to the users.

Then I would imagine the problem is that everyone is playing video games at about the same time, i.e. in the evening or at certain times of the weekend. So you need to provision for peak usage.

So to me the only benefit to renting is if there are lots of occasional gamers who do not play every day / every week - though even they would produce peaks where they all suddenly play at the same time on certain days (Christmas, long weekends, the release of a new game, etc).

And then of course because the hardware progresses so quickly you need to amortise the hardware pretty fast.

I haven't seen the actual economics but I am surprised it would be much cheaper for consumers.

kevstev
I first heard about this a year ago, and was also amazed that the numbers could work and that RDP-style lag wouldn't make it unviable. I can't find the reddit post at the moment, but we had a few exchanges where it was explained that latency is not an issue, and availability is not an issue.

This is still a fairly niche market, and while there are certainly evening peak times, younger kids have a lot more free time than you might think.

How well the company hosting this is doing, I couldn't tell you, but the product itself seemed to work and work well, so much so that I have had it on my to-do list to try and experience it for myself.

ses1984
The speed of light is really fast, and Google has fiber connecting all of its data centers. I think the set of data centers viable for game streaming overlaps more than you think; it's limited by how much bandwidth Google can spare between data centers.

The reason most services need to be located close to clients is because you want to avoid data transit over the open internet. You want as little open internet as possible between you and the client. Traffic that's internal to Google's networks doesn't have that problem. You can use compute / gpu in US-west and transit that data to you via US-east and the additional latency would be measured in nanoseconds.

likpok
With a straight shot, latency from one coast to the other is at least 50 ms [0]. A more typical route is on the order of 80 ms.

That's definitely noticeable for latency-sensitive actions. I recently switched a server from Oregon to New Mexico, and I notice the latency increase with mosh.

Moreover, there's not a lot of timezone difference between the east and west coasts of the US. Going someplace like Europe is more like 180 ms.

I've played games with a ping like that, but a lot of the ping was my wifi. Doubling the latency would not make the game a better experience.

This could work well for certain types of game, those that are a little less latency-sensitive. But in general the latency is still a big issue.

[0]: speed of light in fiber is ~2e8 m/s; around 3000 miles (~4,800 km) between coasts gives ~24 ms one way, ~50 ms round trip
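
For what it's worth, a quick sanity check of those figures in Python (assuming ~4,800 km coast to coast and ~2e8 m/s propagation in fiber; real routes add detours and queueing on top):

    # Coast-to-coast propagation delay, back of the envelope (illustrative only)
    distance_m = 4_800_000   # ~3000 miles in metres
    c_vacuum = 3.0e8         # m/s
    c_fiber = 2.0e8          # m/s, roughly c divided by the fiber's refractive index (~1.5)

    one_way_ms = distance_m / c_fiber * 1000
    print(f"Fiber one way:    {one_way_ms:.0f} ms")      # ~24 ms
    print(f"Fiber round trip: {2 * one_way_ms:.0f} ms")  # ~48 ms -> the 'at least 50 ms' above
    print(f"Vacuum one way:   {distance_m / c_vacuum * 1000:.0f} ms")  # ~16 ms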

davinic
Exactly. And 60 fps (which is not that high for many games) gives you 16.67 ms per frame. At a 50 ms delay you're already 3 frames behind.
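
The same arithmetic spelled out, using the 60 fps and 50 ms example figures from the comment above:

    # Frames of lag implied by a given delay (example figures only)
    fps = 60
    delay_ms = 50
    frame_time_ms = 1000 / fps                                # ~16.67 ms per frame at 60 fps
    print(f"~{delay_ms / frame_time_ms:.0f} frames behind")   # ~3 frames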
msh
Not all games are that latency dependent. If I am playing Civilization I probably won't care that much where the server is.
FrojoS
Neither will you need to rent an expensive gaming rig by the hour, though.
ses1984
Wow, I never realized my intuition was off by so much. Speed of light in vacuum is still about 16ms between coasts, which is orders of magnitude worse than I thought. I thought most latency came from routers and switches but a significant portion of delay is indeed speed of light.
wmil
You might be able to make it work if you can get a gaming tier on a cloud platform. EC2 already has GPU focused instances.

I can see things working out, but still expensive, if the hardware is also being used for scientific computing and CGI rendering.

Although I'm not sure if gaming hardware can run 24/7.

Interesting perspective on work-life balance, certainly contrary to the usual discussion online on these issues https://www.youtube.com/watch?v=udlMSe5-zP8&t=1h27m10s
Starts right about here: https://www.youtube.com/watch?v=udlMSe5-zP8&feature=youtu.be...

2:10:56

plugger
Thank you so much!
elamje
This part through the end is pretty incredible! He’s a nonstop stream of intellect.
mifreewil
The entire podcast is like this, highly recommend the entire episode.
Aug 29, 2019 · 7 points, 2 comments · submitted by cryptozeus
mantis78
Best episode of Joe Rogan yet! Joe did great by just letting John speak most of the time. John is so fun to listen to.
dang
https://news.ycombinator.com/item?id=20826200
Aug 29, 2019 · 10 points, 0 comments · submitted by ekianjo
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.