Hacker News Comments on
John Hennessy and David Patterson 2017 ACM A.M. Turing Award Lecture
Association for Computing Machinery (ACM) · YouTube · 111 HN points · 5 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

Presentation slides: https://docs.google.com/presentation/d/1ZMtzT6nmfvNOlIaHRzda...

Related - John Hennessy and David Patterson 2017 ACM A.M. Turing Award Lecture: https://www.youtube.com/watch?v=3LVeEjsn8Ts
I consistently recommend Hennessy and Patterson's Turing Award talk[1] as a good overview of the past, present, and future of chip performance.

[1] This will take you directly to the meat of the discussion: https://www.youtube.com/watch?v=3LVeEjsn8Ts&t=28m10s
When you posted about joining Oxide I couldn't quite figure out what they (now you) do by looking at the homepage. I stumbled across this other lecture (https://youtu.be/3LVeEjsn8Ts?t=2189) that is along the same line of thinking, and it started to make sense.
⬐ steveklabnik
Thanks, I'll have to check this out!
⬐ iamjk
Their podcast, "On the Metal", provides more context on the work they're aiming to accomplish.
I recommend watching the Turing Award speech by Patterson and Hennessy, which covers this topic.

Spoiler: the way out is "Domain Specific Architectures".
⬐ fulafel
The problem with this is the likely fragmentation of programming languages and tool support. E.g. GPUs have been around for ~25 years and there's still no decent way to portably program them. We have Metal, Direct3D, OpenGL, OpenCL, CUDA, Vulkan, ... all of which have small market shares. It's markedly worse than microprocessors at 25.
(Not even getting into quality issues like buggy drivers and the closed, proprietary nature of most of the SW stacks.)
⬐ tambourine_man
Which is sad, in a way. There was something amazing about a general-purpose computer that next generations may not be able to appreciate.
⬐ kec
That's a bit like saying it's sad that CPUs added media instructions, or that the GPU was invented. Dedicating silicon to fixed tasks or task-specific designs can lead to massive performance increases and in no way means the death of general-purpose compute.
⬐ tambourine_man
⬐ silversconfused
And it is, in a way. At least to me. Would you rather have a burnt-on-silicon algorithm for video/encryption decoding, or a 10X faster general-purpose CPU that can do that and everything else?
⬐ kec
I think you underestimate just how much faster and more energy-efficient hardware acceleration can make certain tasks; the savings can be far in excess of 10x.
⬐ tambourine_man
⬐ None
You may underestimate the exponential function :) We were doubling our performance every year or two. For decades.
⬐ erikpukinskis
We were also watching 160x240 video clips.
⬐ imtringued
That doubling applies to any silicon circuit, not just CPUs.
⬐ tambourine_man
Of course it does, and your point is?
⬐ None
It was all a marketing illusion to sell Pentium chips and Windows licenses. Bittersweet to see it go, as it was a great time to start being a technician and I certainly personally benefited from it. We're on the way back to using the Right Tool For The Job now, and that's probably better for everyone. Workloads are unique, just like people.
⬐ tempguy9999
General-purpose chips aren't going to disappear, not even slightly. They're just going to offload specific work to circuitry that's best suited for it. I regret we're about to hit the wall, but I have long thought that we're going to start moving back to analogue and probabilistic algorithms, and in one way, having half-width floats is a step in that direction. I expect it to go further.
(All this is my opinion and may be wrong).
⬐ tambourine_man
Sure, but you might end up with computers for video editing, computers for DNA sequencing, etc. And we are already there for some workloads: AI and crypto mining come to mind.
Regardless, buying a fast computer that could do anything was really fun.
⬐ sgt101
I think you will see SoCs that have multiple specialised architectures, which will be lit up intelligently by the CPU depending on what the workload is.
⬐ joquarky
> They're just going to offload specific work to circuitry that it's best suited for.

Like the Amiga? That actually worked out pretty well. I look forward to it.
⬐ tambourine_man
Pow’s law at work here for me
⬐ chubot
Great talk! My main takeaway is that they advocate hardware-software co-design to improve performance. Performance issues push software in the direction of domain-specific languages, and hardware in the direction of domain-specific architectures (e.g. TPUs for machine learning).

I think DSLs have a lot of advantages for software engineering reasons, but not everyone agrees. But it doesn't appear to be debatable that they have advantages for performance!
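To make the co-design point concrete, here is a minimal pure-Python sketch of the idea behind domain-specific abstractions: when a program is expressed as a few whole-tensor operations rather than scalar loops, a runtime is free to map each operation onto specialized hardware without changing user code. All function names here (`matmul`, `relu`, `tiny_model`) are hypothetical illustrations, not anything from the talk.

```python
# Toy "tensor DSL": programs are built from a few high-level ops over
# lists of lists. Because each op is a coarse-grained unit, a backend
# could swap in an accelerated implementation (GPU, TPU, ...) for any
# of them; here they are just interpreted in plain Python.

def matmul(a, b):
    """Dense matrix multiply over lists of lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def relu(a):
    """Elementwise max(0, x)."""
    return [[max(0, x) for x in row] for row in a]

def tiny_model(x, w):
    # User code only sees whole-tensor ops, never inner loops,
    # which is what gives a compiler/runtime freedom to retarget.
    return relu(matmul(x, w))

x = [[1, -2], [3, 4]]
w = [[1, 0], [0, 1]]  # identity weights, so matmul returns x
print(tiny_model(x, w))  # [[1, 0], [3, 4]]
```

The same division of labor is what lets frameworks retarget one model description to CPUs, GPUs, or TPUs.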
Previous thread: https://news.ycombinator.com/item?id=18009581
(FWIW, the research projects mentioned there count as DSLs ...)
⬐ jacksmith21006
⬐ ivan_ah
I suspect Zircon will also be similar. The next step will be to create silicon that better fits how Zircon works, which is basically enabling breaking problems down into smaller pieces and then scheduling those pieces on different cores to work together to get a job done.
Kind of enabling scaling out versus up on a SoC or in a server.
With a monolithic kernel that was very difficult to do.
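The "break the problem into pieces and schedule them across cores" pattern described above can be sketched in a few lines of Python. This is only an illustration of the scheduling idea, not anything Zircon-specific; `chunked_sum` and `partial_sum` are made-up names, and a thread pool is used for simplicity (a real scale-out would use processes or separate cores).

```python
# Sketch of "scaling out": split a job into independent chunks,
# schedule the chunks across a pool of workers, then combine the
# partial results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # The per-worker unit of work: each chunk is independent.
    return sum(chunk)

def chunked_sum(data, workers=4):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() schedules one chunk per worker and preserves order.
        return sum(pool.map(partial_sum, chunks))

print(chunked_sum(list(range(1000))))  # 499500
```

The interesting part is that nothing in the caller changes as the worker count grows, which is the scale-out property the comment is describing.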
⬐ yazr
Well, the TPU for ML is a great example (about 1000x speed improvement for tensor computation over a CPU). BUT...

The TPU is just a specialized GPU, which has been around for ~20 years. So: very slow evolution of a pre-existing HW component.

Deep learning has been around for ~5 years and we already have 2 or 3 generations of frameworks. So: very rapid revolution of DSLs implemented on abstracted HW.

Which is the complete opposite of the HW-SW advocacy?
⬐ yazr
(cont.) Maybe what I am trying to say is that a "sufficiently clever compiler" can probably make more progress on cache performance than waiting for a great new memory-CPU architecture to appear?
Interesting off-the-cuff comments[1] about quantum computing in the question period:

Patterson: You may get a question, "Won't quantum computing ..." — No! Quantum computing is really exciting; it's not going to happen tomorrow.
Hennessy: It's about on the same time-scale as fusion design.
⬐ CaliforniaKarl
John and David will be giving a talk next Wednesday (October 10) in Stanford's Computer Systems Colloquium (EE380). The topic is computer architecture; the title has not yet been announced. Expect it to be up on YouTube at https://www.youtube.com/playlist?list=PLoROMvodv4rMWw6rRoeSp... on Thursday or Friday next week.
⬐ magoghm
"Guys, we've been giving you a free ride for 30 years while you write your crummy software and we made it faster. That's over!" -- John Hennessy

"Our job is to tell them: it's over. The free ride is over. There's no magic. The cavalry is not coming over the hill." -- David Patterson