HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Future Computers Will Be Radically Different (Analog Computing)

Veritasium · Youtube · 96 HN points · 15 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Veritasium's video "Future Computers Will Be Radically Different (Analog Computing)".
Youtube Summary
Visit https://brilliant.org/Veritasium/ to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription. Digital computers have served us well for decades, but the rise of artificial intelligence demands a totally new kind of computer: analog.

Thanks to Mike Henry and everyone at Mythic for the analog computing tour! https://www.mythic-ai.com/
Thanks to Dr. Bernd Ulmann, who created The Analog Thing and taught us how to use it. https://the-analog-thing.org
Moore’s Law was filmed at the Computer History Museum in Mountain View, CA.
Welch Labs’ ALVINN video: https://www.youtube.com/watch?v=H0igiP6Hg1k

▀▀▀
References:
Crevier, D. (1993). AI: The Tumultuous History Of The Search For Artificial Intelligence. Basic Books. – https://ve42.co/Crevier1993
Valiant, L. (2013). Probably Approximately Correct. HarperCollins. – https://ve42.co/Valiant2013
Rosenblatt, F. (1958). The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychological Review, 65(6), 386-408. – https://ve42.co/Rosenblatt1958
NEW NAVY DEVICE LEARNS BY DOING; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser (1958). The New York Times, p. 25. – https://ve42.co/NYT1958
Mason, H., Stewart, D., and Gill, B. (1958). Rival. The New Yorker, p. 45. – https://ve42.co/Mason1958
Alvinn driving NavLab footage – https://ve42.co/NavLab
Pomerleau, D. (1989). ALVINN: An Autonomous Land Vehicle In a Neural Network. NeurIPS, (2)1, 305-313. – https://ve42.co/Pomerleau1989
ImageNet website – https://ve42.co/ImageNet
Russakovsky, O., Deng, J. et al. (2015). ImageNet Large Scale Visual Recognition Challenge. – https://ve42.co/ImageNetChallenge
AlexNet Paper: Krizhevsky, A., Sutskever, I., Hinton, G. (2012). ImageNet Classification with Deep Convolutional Neural Networks. NeurIPS, (25)1, 1097-1105. – https://ve42.co/AlexNet
Karpathy, A. (2014). Blog post: What I learned from competing against a ConvNet on ImageNet. – https://ve42.co/Karpathy2014
Fick, D. (2018). Blog post: Mythic @ Hot Chips 2018. – https://ve42.co/MythicBlog
Jin, Y. & Lee, B. (2019). 2.2 Basic operations of flash memory. Advances in Computers, 114, 1-69. – https://ve42.co/Jin2019
Demler, M. (2018). Mythic Multiplies in a Flash. The Microprocessor Report. – https://ve42.co/Demler2018
Aspinity (2021). Blog post: 5 Myths About AnalogML. – https://ve42.co/Aspinity
Wright, L. et al. (2022). Deep physical neural networks trained with backpropagation. Nature, 601, 549–555. – https://ve42.co/Wright2022
Waldrop, M. M. (2016). The chips are down for Moore’s law. Nature, 530, 144–147. – https://ve42.co/Waldrop2016

▀▀▀
Special thanks to Patreon supporters: Kelly Snook, TTST, Ross McCawley, Balkrishna Heroor, 65square.com, Chris LaClair, Avi Yashchin, John H. Austin, Jr., OnlineBookClub.org, Dmitry Kuzmichev, Matthew Gonzalez, Eric Sexton, john kiehl, Anton Ragin, Benedikt Heinen, Diffbot, Micah Mangione, MJP, Gnare, Dave Kircher, Burt Humburg, Blake Byers, Dumky, Evgeny Skvortsov, Meekay, Bill Linder, Paul Peijzel, Josh Hibschman, Mac Malkawi, Michael Schneider, jim buckmaster, Juan Benet, Ruslan Khroma, Robert Blum, Richard Sundvall, Lee Redden, Vincent, Stephen Wilcox, Marinus Kuivenhoven, Clayton Greenwell, Michael Krugman, Cy 'kkm' K'Nelson, Sam Lutfi, Ron Neal

▀▀▀
Written by Derek Muller, Stephen Welch, and Emily Zhang
Filmed by Derek Muller, Petr Lebedev, and Emily Zhang
Animation by Iván Tello, Mike Radjabov, and Stephen Welch
Edited by Derek Muller
Additional video/photos supplied by Getty Images and Pond5
Music from Epidemic Sound
Produced by Derek Muller, Petr Lebedev, and Emily Zhang
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Fire control computers, like on a navy ship, were faster than digital computers of the day. YouTube has a number of videos on them.

An opamp performs multiplication faster than a digital computer (speed of light vs a few cycles). It's not super useful on its own, but it does fit the criteria.

In Veritasium's video 2/2 on analog computers [0] they show some startup products near the end.

[0]https://youtu.be/GVsUOuSjvcg?t=898

gaze
What no. Opamps don’t multiply and they don’t operate at the speed of light. They have some timescale that goes like their bandwidth, which depends on their feedback path.
boltzmann-brain
The propagation time around the feedback loop is still (length of loop) / (speed of light*slowdown constant), so yes, "at the speed of light".
osamagirl69
This is absolutely not true; the speed of analog circuits is (by a significant margin) determined by parasitic capacitance, inductance, and resistance of the components. To put numbers to it, a typical high-performance analog multiplier might have a loop length of 1 cm for the feedback path. This circuit should theoretically operate at 30 GHz, but realistically such circuits operate with a bandwidth measured in megahertz.
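
A rough way to see why parasitics, not loop length, set the speed is to model the feedback node as a single-pole RC low-pass. A minimal Python sketch; the R and C values below are assumptions for illustration, not measurements of any particular part:

  import math

  # Sketch: bandwidth of a feedback node modeled as a single-pole RC low-pass.
  R = 1_000      # ohms of effective source/feedback resistance (assumed)
  C = 2e-12      # farads of parasitic capacitance at the node (assumed)

  f_3db = 1 / (2 * math.pi * R * C)            # -3 dB bandwidth of the RC pole
  print(f"RC-limited bandwidth ~ {f_3db / 1e6:.0f} MHz")          # ~80 MHz

  # Compare with the naive "speed of light around a 1 cm loop" figure:
  f_light = 3e8 / 0.01
  print(f"Loop-length limit would be ~ {f_light / 1e9:.0f} GHz")  # 30 GHz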
Animats
Yes, feedback op-amps definitely have bandwidth limits. Although you can get ones in the gigahertz range now.

Analog multiplier ICs are available.[1] They're not common, and they cost $10-$20 each. Error is about 2% worst case for that one. There are several clever tricks used to multiply. See "Gilbert Cell" and "quarter square multiplier".

[1] https://www.digikey.com/en/htmldatasheets/production/1031484...
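
For the "quarter square multiplier" trick mentioned above, the whole idea is one identity: xy = ((x+y)^2 - (x-y)^2) / 4, which lets a circuit multiply using only sum, difference, and squaring elements. A minimal sketch of the identity (Python used purely for illustration):

  def quarter_square_multiply(x, y):
      """Multiply via the quarter-square identity: xy = ((x+y)^2 - (x-y)^2) / 4.
      Analog implementations realize the squaring with nonlinear elements."""
      return ((x + y) ** 2 - (x - y) ** 2) / 4

  assert quarter_square_multiply(3.0, -7.5) == 3.0 * -7.5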

Cool. Just realized they were also featured on this episode of Veritasium: https://youtu.be/GVsUOuSjvcg
Veritasium had an interesting presentation about analog computers a while back

https://youtu.be/GVsUOuSjvcg

Considering evolution optimized for biological fitness and not intelligence, it would be incredibly surprising if it wasn't possible to do better, especially using a vastly different architecture.

You could also build an AGI to run on a digital CPU but interact with a simulation that used analog coprocessors (which is a thing these days https://www.youtube.com/watch?v=GVsUOuSjvcg).

You can also include quantum coprocessors (which already exist in their infancy) for various things (probably only useful for quantum simulations at this point though).

Also considering recent ML work, an AGI is more likely going to run on something similar to a GPU than a CPU.

Jensson
> Considering evolution optimized for biological fitness and not intelligence, it would be incredibly surprising if it wasn't possible to do better, especially using a vastly different architecture.

The way I see it is that humans are the stupidest you can be and still create a society. Like a nuclear reaction, you need the average human to add a tiny bit of structure on average and it works; any less than that and society is impossible, any more than that and society forms extremely quickly, so fast that evolution has no time to work.

It's a big area of research in AI.

Veritasium has two great videos on it:

https://www.youtube.com/watch?v=GVsUOuSjvcg

https://www.youtube.com/watch?v=IgF3OX8nT0w

I found a hackernews discussion for the second one: https://news.ycombinator.com/item?id=29645610

Great Veritasium video on YouTube about analog neuromorphic architectures and their promise for lower-power operations: https://www.youtube.com/watch?v=GVsUOuSjvcg&t=1s
sshlocalhost98
Yup just watched it.
jillesvangurp
I should have quoted that; thanks for that.
Related Veritasium Video: https://www.youtube.com/watch?v=GVsUOuSjvcg Future Computers Will Be Radically Different
pclmulqdq
This is kind of a cool computer, but it's not a serious approach for most problems, and doesn't have much to do with the original article. Analog computers have very limited precision due to SNR, and this kills their usefulness for almost everything aside from niche simulations and machine learning.

Edit: Also, the host of this YouTube video doesn't sound like he knows what he's talking about.
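
One way to make the "limited precision due to SNR" point concrete is the standard relation between SNR and effective number of bits for an ideal converter, SNR_dB = 6.02*ENOB + 1.76. A small sketch; the SNR figures below are assumed for illustration, not taken from any specific analog chip:

  # Effective bits implied by a given signal-to-noise ratio (ideal-quantizer relation).
  def effective_bits(snr_db):
      return (snr_db - 1.76) / 6.02

  for snr_db in (40, 60, 80):   # assumed SNR values in dB
      print(snr_db, "dB ->", round(effective_bits(snr_db), 1), "bits")
  # 40 dB -> 6.4 bits, 60 dB -> 9.7 bits, 80 dB -> 13.0 bits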

spicybright
It's like how people have been saying lisp is the future of coding.

I feel most engineers (including myself!) become enamored by beautiful ideas enough that they'll want to try applying it to places it doesn't really work.

Doesn't mean I have to stop dreaming of a plan9 flavored 100% Lisp OS though!

Warning: Bit of a 'Hitchhiker's Guide to the Galaxy'/Conway's Game of Life yak shave for a reply.

Nice start to providing a simple way to visually interact with/experiment with/manipulate multiple different neural network concepts by loading/transforming any traditional code (working or not) & reshuffling/tessellating the structural representation of the code to generate a Turing tape path via color execution paths.

Switching to pi calculus could provide the theory to support coroutine/coprocessing/threading of multiple interacting tape paths/color combinations/path diffs/scaled recursion to perform assembly language instructions. Raster graphics without the yak ( https://verdagon.dev/blog/yak-shave-language-engine-game ). Feedback between tape & "traditional code" as neural network layer(s)!

Flexible code (via tessellations) without changing the actual imported source code! So, a complete SVG (tape interactions) web page as code source on how to interact with the web page, ( https://www.youtube.com/watch?v=r6sGWTCMz2k )

what's displayed/played, documentation & usage manual included within code structure!

Unicode permits implied piping of rich kern fonts without all the tech fluff. Unicode fails because it allows infinite multics and is therefore K&R incomplete. But this approach is way more dimensional/colorful than Unicode/tech stuff.

?? can one's code unintentionally provide a Wordle problem & solution in one package ?? (https://www.youtube.com/watch?v=v68zYyaEmEA )

?? Turing-complete Fourier transforms https://www.youtube.com/watch?v=spUNpyF58BY

toolkit concepts:

  Neural networks / analog stuff : https://www.youtube.com/watch?v=GVsUOuSjvcg
  perceptrons:  https://www.youtube.com/watch?v=GVsUOuSjvcg

  spreadsheets (: https://www.youtube.com/watch?v=UBX2QQHlQ_I
  (turing spreadsheets -> https://www.felienne.com/archives/2974 )

  https://www.quantamagazine.org/the-busy-beaver-game-illuminates-the-fundamental-limits-of-math-20201210/
  BB theory : https://www.scottaaronson.com/papers/bb.pdf
  BB(8000) : https://scottaaronson.blog/?p=2725

  #### electrons do not spin -> https://www.youtube.com/watch?v=pWlk1gLkF2Y
  #### prime number spirals -> https://www.youtube.com/watch?v=EK32jo7i5LQ
  #### modular arithmetic : https://www.youtube.com/watch?v=lJ3CD9M3nEQ

  speed things up : https://github.com/vqd8a/DFAGE
Can this approach be used to classify programs as contributing to verifying the 'Hitchhiker's Guide to the Galaxy' universal answer of 42, or must 42 be qualified as an imaginary number?
They essentially made a simple analog computer. Veritasium just did a video about the future of such computers: https://youtu.be/GVsUOuSjvcg
https://nitishpuri.github.io/posts/books/a-visual-proof-that...

This is a rather intuitive mathematical proof that neural nets can approximate arbitrary functions in many dimensions. Knowing integrals (Calc 102) would help.

With this information it’s either that self-driving is eventually possible or human brains are not mathematically representable (i.e. something akin to a soul).
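
The linked "visual proof" builds functions out of bump-shaped pieces, each bump made from a pair of shifted sigmoids, i.e. a tiny one-hidden-layer network. A minimal numpy sketch of that construction; the target function, bump widths, and sharpness are arbitrary choices for illustration, not anything taken from the linked post:

  import numpy as np

  def sigmoid(z):
      return 1 / (1 + np.exp(-z))

  def bump(x, center, width, height, sharpness=200.0):
      # Difference of two steep sigmoids ~ a rectangular bump of the given height.
      return height * (sigmoid(sharpness * (x - (center - width / 2)))
                       - sigmoid(sharpness * (x - (center + width / 2))))

  x = np.linspace(0, 1, 500)
  target = np.sin(2 * np.pi * x)

  centers = np.linspace(0.05, 0.95, 10)   # 10 bumps tiled across [0, 1]
  approx = sum(bump(x, c, 0.1, np.sin(2 * np.pi * c)) for c in centers)

  print("max error:", np.abs(approx - target).max())  # shrinks as you tile more, narrower bumps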

Meanwhile, the data collection/storage is getting better (all Teslas are continually testing FSD and reporting failure cases, synthetic generation of “good and unique” perfectly labeled data[0] is only a year or two “old”). Sensors/cameras are improving (and have a lot more room to improve). The compute hardware is getting better and cheaper (analog computers[1], TSMC 2nm and beyond, Apple/Google/Teslas neural cores, etc).

Additionally, given Waymo is already in production without humans in PHX it’s arguable that we already have “autonomous vehicles”. We now just need them to improve slightly and for Waymo or similar to test in the top 25 cities in the US. Which is mostly a matter of capital and time not technology limits. Maybe weather will be an issue for a few more years/decades - but even if Waymo only operated in Summer, I’d be happy to reduce my Lyft costs - and with enough data and maybe better sensors the neural nets will be able to handle weather too. Especially if the car just goes slow.

[0] https://youtu.be/j0z4FweCy4M

[1] https://youtu.be/GVsUOuSjvcg

[2] https://www.cnbc.com/2022/01/08/heres-what-it-was-like-to-ri...

simion314
>This is a rather intuitive mathematical proof that neural nets can emulate all functions in many dimensions.

Yes, but the problem is we don't know what function we want to approximate. So it is like interpolation: you give it some points and you get an approximation that is good enough around those points, but probably very wrong if you are far away from the points.

We don't even know how many layers and neurons are needed, so it seems to be a brute-force approach: throw more data and more hardware at it, and when stuff goes wrong maybe throw more data. But one day a Tesla might end up in a different environment where maybe people wear different hats, and it goes to shit because those hats were not in the training data; unlike other sciences, we don't have bounded errors.
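
A quick way to see the interpolation-vs-extrapolation point: fit something flexible to points sampled from a narrow range and evaluate it outside that range. A toy sketch, with numpy's polynomial fit standing in for a trained network (the function and ranges are arbitrary assumptions):

  import numpy as np

  rng = np.random.default_rng(0)
  x_train = rng.uniform(-1, 1, 30)        # training data only covers [-1, 1]
  y_train = np.sin(3 * x_train)

  coeffs = np.polyfit(x_train, y_train, deg=9)   # flexible, over-parameterized fit

  for x in (0.5, 2.0, 3.0):               # inside vs. outside the training range
      print(x, "true:", round(float(np.sin(3 * x)), 3),
            "fit:", round(float(np.polyval(coeffs, x)), 3))
  # Near the data the fit is close; outside it, it can be wildly wrong.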

Basically, we can do things like matrix operations and differential equations using specialized hardware that performs computation over analog signals rather than building discrete systems. The end result is that we don't need anywhere near the level of compute to do the job, since it's done using an analog of the physical process.

Here are two YouTube videos that explain it much better than I could. https://www.youtube.com/watch?v=IgF3OX8nT0w&ab_channel=Verit... https://www.youtube.com/watch?v=GVsUOuSjvcg&ab_channel=Verit...

Suffice to say, your average housefly exhibits behavior that absolutely dominates our best drones, and with a fraction of the circuitry.

Agreed. I've never really gotten into neural networks; most of the time I start but get distracted or bored/lost, and then sure enough a Veritasium video pops up and just about everything slots into place, stuff you've glossed over or heard before just makes sense. https://www.youtube.com/watch?v=GVsUOuSjvcg. Start at minute 4:00.
Mar 04, 2022 · 8 points, 3 comments · submitted by jstx1
someguydave
Error due to imperfections in circuit components will never be low enough to make this idea workable.
shadowgovt
He addresses that midway through the video. For general-purpose computers, probably not; digital logic has great error-minimization features that one really wants in operations.

Specialized tasks can (and do) gain advantages from this approach. When the system can be tuned around the imperfections, the imperfections become just one more parameter in the analog tuning logic.

(Sidebar: if he specifically says "for general purpose, maybe not," why is the title "We're Building Computers Wrong?" Clickbait is frustrating.)

desi_ninja
Blame the YouTube algorithm for forcing their hand.
Mar 04, 2022 · 75 points, 79 comments · submitted by lawrenceyan
phendrenad2
With a clickbaity title like that, I knew this would be Veritasium before I clicked it. And now, having verified that it is, I share this information with you.
kgwxd
And if you're curious as to why it's clickbaity, there's a Veritasium video for that too :) https://www.youtube.com/watch?v=S2xHZPH5Sng
ModernMech
I didn't watch the whole thing but is the ladder ever explained? I didn't understand why he's doing the video on a ladder. Is that part of the clickbait? You have to have a crazy face and be on a ladder, and that's how you make Youtube's algorithm happy?

At this point it seems like a vicious spiral. Someone clicks on a guy on a ladder and all of a sudden you're being recommended guys on ladders, so now everyone needs to get on a ladder to get on the front page.

droopyEyelids
Youtube runs the thumbnails through a sentiment analysis AI and promotes videos more when they rate with a high likelihood of the bucket of emotions that have been found to correlate with ad impressions.
l33t2328
Something about Veritasium absolutely rubs me the wrong way, and I generally like Youtube education creators, but his videos are borderline unwatchable for me.

Does anyone have an explanation for what it is about his videos that is so grating?

nodja
Can't say if it's the same feeling, but to me it's the fact that he plays everything off like he's doing a helpful thing and trying to inform you for your own benefit, but the reality is the motivations are warped and the video only exists because some company paid him money to do so.

I've unsubscribed from this channel and Smarter Every Day because, while I believe the authors feel like they're doing the right thing, they let themselves be so highly biased/influenced by their sponsors that I cannot watch the videos and believe that they're being honest.

ah9dja
Same, but it wasn't this unbearable in the beginning. I think it's partly me thinking there's no way he understands such wide ranging subjects at such great depths without significant research and prep for the video, which contradicts how casually and confidently he's presenting everything. I'm used to expecting a more scripted presentation in these cases, which it probably is, but it doesn't sound like it.
boxed
He shows himself doing conference calls with scientists explaining shit to him quite a bit though. So that just feels like you're cherry picking.
mardifoufs
Veritasium used to do a lot less A/B testing on his videos, and I guess maybe he just got too big too fast. The thing that bothers me the most, though, is that he went from almost exclusively explaining interesting scientific facts/experiments to a more "aggressive" style where a lot of his videos are about x being wrong and here is z reason why. And I personally just don't like that style, especially when it turns out that there is a lot more nuance than his titles would indicate. I mean, what are the odds of an entire field being wrong, and the truth actually being in a 15-minute YouTube video by a very general science channel?
jebr224
Same, it comes off as smug, or some desire to prove he's smarter, rather than genuinely trying to educate.

The videos are educational and high quality, and the content is accurate. But the presentation is not for me.

netizen-936824
I feel like it's more the assertiveness about how right the information he's sharing is, when not everything he makes a video on is quite so correct, but there isn't any of that nuance or doubt in his work. It doesn't feel like any of the professors I've spoken to; there's too much confidence. The only thing we can be that confident in is how wrong humans tend to be.
aroundtown
The two things that bother me about Veritasium are his brand of pseudo-intellectualism (smugness, maybe?) and the fact that his videos often aren't telling the whole picture.
chowells
All of his videos that I've seen have a weird thing where he challenges people to explain something, and then he reveals he was talking about something else and their explanation isn't relevant. This is just https://xkcd.com/169/

His videos don't strike me as intending to educate. They strike me as intending to show how much more clever he is than you - and it's entirely that he came up with some weird gotcha that people logically assume he couldn't be talking about because it's nonsense.

It's like watching a smug teenager who never grew out of it. No thanks, I don't need that reminder of how grating I was.

droidist2
That comic reminds me of all the times I've seen someone pose the Monty Hall Problem without mentioning that the host always reveals a goat in the first round.
fsh
I think he is easily one of the smartest people on YouTube and is producing some of the very best videos. He is also a brilliant educator. For examples, his videos on special [1] and general relativity [2] were real eye-openers for me, even though I have studied physics and have seen this stuff explained a million times before.

What I really don't like are the new video titles. At some point he started A/B testing them to maximize his revenue. He even made a video about it [3]. Of course, this selects for the most outrageous clickbait in order to get that sweet engagement. I am starting to wonder if he took into account how much this is starting to alienate long-term followers.

[1] https://www.youtube.com/watch?v=pTn6Ewhb27k

[2] https://www.youtube.com/watch?v=XRr1kaXKBsU

[3] https://www.youtube.com/watch?v=S2xHZPH5Sng

dabernathy89
I really enjoy his content, and I don't expect it to be exhaustive or cover all angles and approaches to a topic. To each his own ¯\_(ツ)_/¯
eternityforest
Doesn't he have some things that seem a bit "In this moment I'm euphoric"?
Brian_K_White
I don't know how to articulate it either.

It's some form of uncanny valley type of thing where, for every detail or facet or property I might point out, there are other people exhibiting the same nominal qualities without bothering me. But something here is just slightly off in some way, so subtly you can't say what it is.

I honestly can't say he's doing anything actually wrong. I don't think even the most dumbed down ones are really guilty of being wrong. Not enough that I'd complain about it in a casual yt video.

I can see some people maybe not liking his basic style of exposition. That slightly affected dreamy pondering thing.

One thing I can observe is that another guy who turns me right off even worse is the Smarter Every Day guy, and these two appear in each other's videos at least a few times. Or maybe it was just one time and that one time just bugged me that much :)

I don't know what it is but I frequently see a video from either of these guys where the topic they are presenting looks interesting, but then I do not like their video on that topic.

I think it's down to the presentation just seems a bit too affected. And it's not even as bad as seeming fake or insincere. They both actually strike me as perfectly sincere.

phendrenad2
To me the videos feel like a sustained form of clickbait. I keep expecting a payoff at the end of the video, but instead he sort of weaves together a grand narrative that nevertheless feels unfulfilling to my analytical mind. Take this video for instance. In the beginning he talks about analog computers and how they differ from digital computers. Then he talks about how neural networks work. Then he explains how neural networks could be better if they used analog values. But he doesn't ever get around to proving the title of the video ("we're building computers wrong"). He doesn't even convincingly show that we could build an analog neural network given current technology. His video format is just a lofty premise, a middle full of glossed-over science, and a disappointing conclusion with no hard takeaways.
Brian_K_White
I did like the impossible to measure the speed of light one, even though it has Smarter Every Day.

Ok that one has something I can identify clearly: the way Justin (Smarter Every Day) seems to be just badly hamming up the whole "this is so weird and hard to visualize" thing, as though to pander to an audience who are expected to be baffled by the idea being presented. That was Justin though, not Veritasium.

I agree with what you point out about this one.

Brian_K_White
Destin not Justin sorry. Derpyderpderp.
jalgos_eminator
It's interesting you mention Destin from Smarter Every Day, because he always came off as a very earnest learner in his videos. Yeah, he's still putting on a show, but it does seem like he genuinely enjoys and revels in the things that he does. It seems to me like Veritasium has a more contrarian thread of: "Oh, you thought the world worked like that? Well you were wrong, and here's a 15 minute video all about how you were wrong." There's an Onion video making fun of Vox that touches on this perfectly: https://www.youtube.com/watch?v=RpkQEq75y18

I guess to summarize, with Destin it feels like you are learning with him and Veritasium feels like you are being lectured.

Brian_K_White
Hey, I also don't like chocolate all that much, and I seem to be some kind of inexplicable alien because of that. So, it's very possible "it's just me" that I don't like good things because curmudgeon. :)

I see the distinction you make there now that you point it out.

boxed
Funny. Veritasium actually has some videos talking about why he has this approach :)

I saw something similar all the time as a dance teacher: explaining stuff and showing over and over made the students feel like they learned a lot. But they just didn't. It was mostly a total waste of time. I switched to showing ONCE then they had to try. Then I showed again. The speed of learning went up dramatically. The problem I had then was that the students didn't like it!

"Make people believe they're thinking and they'll love you. Make them actually think, and they will hate you."

l33t2328
See, I’ve heard that, but I like being challenged. I like to have to “pause and ponder”, as the excellent YouTuber 3Blue1Brown puts it, but Veritasium is just annoying.
boxed
Challenged? Or feel like you're being challenged?

It'd be nice to have a real scientific test! There's a lot of subjective feeling here and not anything very solid. Veritasium at least backs up his position with his own thesis research. I find that more compelling as an argument than a position not backed up by anything at all.

tronster
I'm subscribed to his channel, as he's covered some great science topics in the past. I skipped this video based on the title - it doesn't tell me anything about the content and "feels" more like a plea for the video to be watched than some good science broken down.

I imagine that, over time, this may shift his target audience. But if a new audience is comprised of a demographic he's aiming to reach, and it gets him the views, I can't fault his decision on how he titles the videos, even if they have me skipping them.

tacitusarc
He updated the title. I watched it when he posted it; at that time it ended in “(for AI)”.

I think he’s experimenting with clickbait.

lIIIllllIIII
I thought it was actually just native advertising for Mythic.

Besides that I think the topic is prescient; there's a company local to me in Australia called Brainchip that is doing something similar I think. Given that NNs are just a bunch of matrix multiplications it's a promising approach.

I occasionally used to watch Smarter Every Day, then he started doing a lot of this stuff; I think one of them might have been for an oil or defense company, which I found utterly gross, and I've avoided watching the channel ever since.

progre
He has openly talked about his clickbait titles. He tries a handful of titles and thumbnails for each video.
CSSer
Committed to, actually. He made a video about how they were going to start doing it more: https://youtu.be/fHsa9DqmId8?t=1035. Also, more recently: https://www.youtube.com/watch?v=S2xHZPH5Sng.
autokad
One thing about Veritasium is that they do get things wrong. They did one on electric circuits where, if you had a circuit that was miles long with the energy source sitting right next to you, you would get current as soon as you turned on the circuit, because 'electricity doesn't follow the wire' or something along those lines.

This has been disproven by people doing experiments

mannykannot
One could quibble over the details of the experiment, such as what sort of lamp to use, but Maxwell's laws would have to be repealed for the experiment to not work essentially as Veritasium claimed. Check out "electromagnetic induction" and "displacement current".
LordDragonfang
The problem is that he frames the whole experiment as if the inductive current is a property of the circuit rather than a property of two separate wires being next to each other - he heavily implies the inductance is a result of the current "starting" to travel through the loop, and IIRC never once uses the term "inductance" (which seems odd for a video purporting to promote understanding rather than just be clickbait).

(The fact that the loop is closed can't be relevant, by a simple thought experiment: otherwise it implies you could get FTL signaling by having a switch at the physically distant end and checking whether the "tiny current" flows when you close the source.)

mannykannot
Veritasium may not have mentioned either inductance or displacement current; I don't recall. He did, however, talk about an even more abstract concept which subsumes the effect of both: the Poynting vector. Pedagogically speaking, it was probably a mistake to go there without covering the prerequisites in considerable detail, but that does not, IMHO, make it clickbait: for someone who has some background, this could be food for thought.

Whether the distant end of the loop is closed or not makes a difference eventually, where 'eventually' means once the initial EM wave has reflected from the end and returned to the load.

LordDragonfang
> Pedagogically speaking, it was probably a mistake to go there without covering the prerequisites

And considering the fact that Derek has a Ph.D. in physics education research [1] that seems like an odd mistake for him to make, unless it was intentional.

As another commenter pointed out [2] Derek's content lately seems to be tailored to the maxim of "Make people believe they're thinking and they'll love you. Make them actually think, and they will hate you." It seems intended to invoke incredulity rather than understanding, presumably because he's decided the former is more marketable - because providing incomplete understanding and phrasing things in a way that's just technically correct enough to invite flame wars boosts engagement and therefore virality.

Yet another commenter [3] invoked https://xkcd.com/169/, which I think is apt.

[1] https://en.wikipedia.org/wiki/Derek_Muller#Early_life_and_ed...

[2] https://news.ycombinator.com/item?id=30558859

[3] https://news.ycombinator.com/item?id=30559815

mannykannot
You appear to have seen way more of Veritasium's videos than I have, which seems rather odd, given that you dislike them so much.
LordDragonfang
I've been following him for a long time. His videos didn't used to be this bad.
mannykannot
Your obsession with this issue is getting tedious.
lIIIllllIIII
I remember seeing that video. It was an insta-click for me because I understand very, very little about circuits and electricity. Something that actually bugs me quite a bit.

I understood even less after watching it I think..

gaze
No, you’re wrong. If you hook an ohmmeter to a 10 light-second long piece of LOSSLESS coax, you’ll measure the characteristic impedance (50 or 75 ohms, probably) of the cable for 20 seconds.

I have done this personally with shorter coax and TDR.
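
The arithmetic behind that thought experiment: with a one-way delay T, the near end sees only the characteristic impedance until the reflection from the far end returns at 2T. A sketch using the numbers assumed in the comment above (10 light-seconds, 50 ohms):

  # Round-trip timing for the lossless-coax thought experiment.
  one_way_delay_s = 10      # line is 10 light-seconds long (from the comment above)
  z0_ohms = 50              # characteristic impedance (assumed value)

  round_trip_s = 2 * one_way_delay_s
  print(f"Meter reads ~{z0_ohms} ohms for the first {round_trip_s} s,")
  print("then settles toward whatever terminates the far end once the reflection returns.")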

etskinner
Can you share the experiments that disprove it? The one video I saw proved it.

Someone set up some long wires over a farm field, and measured the time to first current flow, which was faster than if the current had to travel the whole wire length. Granted, the full current flow doesn't appear until later, but the first little bit of current is there, which is exactly what Derek said in the Veritasium video

autokad
I believe this is the same video; the author theorized (probably correctly) that it was because there was residual 'power' on the wire, as it acts like a giant capacitor, among other things. This is because the current that immediately showed up was extremely small and dissipated very quickly. https://www.youtube.com/watch?v=2Vrhk5OjBP8
dkislyuk
For a more realistic view on AI accelerators that doesn't overhype analog computing, I enjoyed the series by Adi Fuchs: https://medium.com/@adi.fu7/ai-accelerators-part-i-intro-822...

At the end of the day, specialized hardware, particularly on the analog side (neuromorphic, optical, etc.) locks development into the path of highly uniform feedforward networks by optimizing large matrix multiplications, and it is unclear if this tradeoff is worth it as we still have so much to figure out about which methods will make progress in AI.

GlenTheMachine
Neuromorphics aren't analog FWIW. They're just asynchronous.
aurelian15
Depends ;-)

First of all, there is no single accepted definition of "neuromorphic" [1]. Still, as a point in favour of the "neuromorphic systems are analogue" crowd: the seminal paper by Carver Mead that (to my knowledge) coined the term "neuromorphics" specifically talks about analogue neuromorphic systems [2].

Right now, there are some research "analogue" (or, more precisely "mixed signal") neuromorphic systems being developed [3, 4]. It is correct however that there are no commercially available analogue systems that I am aware of. Unfortunately, the same can be said for digital neuromorphics as well (Intel Loihi is perhaps the closest to a commercial product, and yes, this is an asynchronous digital neuromorphic system).

[1] https://iopscience.iop.org/article/10.1088/1741-2560/13/5/05...

[2] https://authors.library.caltech.edu/53090/1/00058356.pdf

[3] https://brainscales.kip.uni-heidelberg.de/

[4] https://web.stanford.edu/group/brainsinsilicon/documents/ANe...

periheli0n
Yes, analog computing can be more efficient and faster. But there are reasons why analog was eclipsed by digital.

1. Flexibility. Reprogramming digital computation is easy and quick.

2. Robustness. One can mass-produce devices that operate in the digital domain and sell them as working as spec'd. But operate them in the analog domain, and you will work with a million snowflakes that all operate slightly differently, giving different computation results. And when temperature changes, the result of your analog computation will inevitably change, too. You can work around that by adding more circuitry, and partly also on the algorithmic end but it will cost efficiency and precision.

Matrix multiplication on an analog device is great, unless you want an exact and reproducible result.

R0b0t1
He touches on this in his first video. The workaround is we are now getting into a regime of computing where exact reproducibility is not necessary.

It will take a while though. It is still quite hard to remove noise for opamp circuits.

periheli0n
> we are now getting into a regime of computing where exact reproducibility is not necessary.

I think that is a myth. Predictable, reproducible and explainable outcomes are the holy grail of computing, in particular in AI.

If stochasticity is desired, there are methods to inject it, with precise control of the level of stochasticity and the distribution.

This level of control is absent in analog computing. Device mismatch introduces some randomness, but it cannot be controlled in practice. Instead of adapting the randomness to the algorithm, one ends up adapting the algorithm to the randomness.

Working with analog computers is a fascinating academic exercise. For practical applications, I doubt we’ll see it compete with digital computation anytime soon.

R0b0t1
I said exact reproducibility. Every time you add numbers your neurons likely fire in slightly different ways, but the end result is what is reproducible and useful.

He's not totally correct because, like I said, the way we think of analog computers right now is likely not going to get us to where we need to be. Noise is a huge problem. Our own neurons do not seem to use continuous voltage, for example, and use pulse density coding, at least for some of their operation.

somat
On point 2, another way to put it is that digital computers can get by with much, much looser-tolerance components than analog computers.

A digital computer transistor has to operate (assuming TTL logic) at around 5 volts and 2 volts; the transistor's behavior in between does not really matter.

The analog computer transistor has to operate to a high precision across its whole voltage range.

Go ahead and start pricing out high-precision transistors and the environmental controls needed to keep them there, and you will see why we use digital computers.
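
A toy illustration of the tolerance point: the same 10% gain error that a digital threshold shrugs off shows up directly in an analog multiply. The voltages and values below are made up for the sketch:

  # Effect of a 10% component (gain) error on digital vs. analog operation.
  gain_error = 0.90

  # Digital: a TTL-style receiver only asks "above 2.0 V or below 0.8 V?"
  ideal_high = 5.0
  received = ideal_high * gain_error            # 4.5 V -- still reads as a clean '1'
  print("digital bit:", 1 if received > 2.0 else 0)

  # Analog: the same error corrupts the answer itself.
  x, y = 3.0, 4.0
  print("analog 3*4 =", x * y * gain_error)     # 10.8 instead of 12.0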

The history of the Navy NTDS air defense system is interesting because they were in a position where they could have gone either way: use known and understood analog computers, or go with the unknown new tech, digital computers.

https://ethw.org/First-Hand:No_Damned_Computer_is_Going_to_T...

Strilanc
I'm a bit worried for that analog-matrix-multiplication-for-AI company. I vaguely remember reading somewhere that the past half century is littered with companies whose value proposition was "our specialized thing does 10x better than digital transistors" and then they predictably just got steamrolled by Moore's law. And although Dennard scaling ended two decades ago, the flops-per-watt and flops-per-second of AI-specialized chips like TPUs have been improving substantially recently [1][2].

1: https://en.wikipedia.org/wiki/Tensor_Processing_Unit#Third_g...

2: https://arxiv.org/abs/2005.04305

uxp100
Fuzzy logic was pretty closely tied to Neural Networks at one point (80s to early 90s?) in terms of which researchers talked about them and even just which books info about them was located in. I think dedicated analog fuzzy logic chips were, as you say, steamrolled by Moore's Law.

(My Zojirushi rice cooker still mentions fuzzy logic, but they must be just implementing the fuzzy "transfer function" with an MCU at this point, right?)

snakke
You're correct that specialised analog companies have not done well historically. However, we don't find ourselves in exactly the same position in computer architecture/performance as we were in decades before.

There are some (relatively) new ideas that the performance of computers will now be pushed more by dedicated silicon for a dedicated purpose, and by tools. See for example "There's Plenty of Room at the Top" [1], or Hennessy's talk at Google [2].

This of course does not mean that analog computers are suddenly viable, but it does mean that they could potentially fill a niche where they failed previously.

Anecdotally, when looking at jobs for hardware design by the likes of Infineon, STM, Cyient etc. there seems to be a relatively high ask for (senior) analog designers, and a new focus on mixed-technology chips. It might turn out to be a dud still, but it isn't the same situation as decades before.

1: https://www.science.org/doi/abs/10.1126/science.aam9744 , or the IEEE article https://ieeexplore.ieee.org/abstract/document/7863324

2: https://www.youtube.com/watch?v=Azt8Nc-mtKM , around the 12:15 mark, but the entire video is relevant.

123pie123
TL;DW

Found it a bit interesting; it goes over the history of AI/neural networks and says these do not need the precision of digital computers (mainly requiring matrix math),

and that this can be done faster with analogue computers (using transistors as a kind of variable resistor).

jodrellblank
In the intro to this video he mentions analogue computers predicting tides. His video before this one[1] goes into detail on that, and it was incredible.

In the 1800s, Lord Kelvin spent years working on tidal rise and fall patterns, applying Fourier Transforms to break them down into ten individual sine waves, then combining those sine waves back together to predict future tides. And built analog computers to do all the involved integration, multiplying and summing. It's a part of the history of computing I'd never heard of, despite hearing quite a lot about the dawn of electro-mechanical computers in the early 1900s.

[1] https://www.youtube.com/watch?v=IgF3OX8nT0w
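
Kelvin's machine was computing exactly this kind of sum mechanically. A minimal numpy sketch of harmonic tide synthesis; the amplitudes, periods, and phases below are made-up illustrative constituents, not real tidal constants for any port:

  import numpy as np

  # Tide modeled as a sum of sine "constituents": amplitude (m), period (h), phase (rad).
  constituents = [
      (1.20, 12.42, 0.3),
      (0.60, 12.00, 1.1),
      (0.30, 25.82, 2.0),
      (0.15, 23.93, 0.7),
  ]

  def tide_height(t_hours):
      return sum(a * np.sin(2 * np.pi * t_hours / period + phase)
                 for a, period, phase in constituents)

  t = np.linspace(0, 48, 200)    # predict two days ahead
  print("predicted range:", tide_height(t).min(), "to", tide_height(t).max())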

ggm
The machines (or later derivatives) are in the Science Museum. One variant uses cones; I imagine they provide for a huge amount of variance in the parameters of the Fourier transform. Another uses pulleys, but changing out a pulley is a lot more work than moving where two cones are coincident, rubbing to transfer motion.

Along with a Meccano differential analyser, and bits of Babbage's original work.

Animats
That video has one of the best explanations of how and why deep learning works.

The argument for analog is weaker. I've tried some stuff with analog computers, and at one time I was into op-amp circuits. The basic problems are noise and inflexibility. However, that may change as people develop ICs that are reconfigurable, like an FPGA.

Flash memory cells as multipliers by a changeable but not dynamic value are a new idea. That only leads to analog ICs that do a pre-trained neural net, though. Running neural nets isn't that expensive. It's training deep neural nets that needs entire data centers. If they can figure out a way to do the whole backpropagation thing in analog, that would be impressive.
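
The flash-cell trick described above is Ohm's law plus Kirchhoff's current law: each stored conductance multiplies the applied voltage, and currents summed on a wire do the addition. A minimal numpy sketch of that crossbar matrix-vector product (sizes and values are arbitrary):

  import numpy as np

  # Analog crossbar multiply-accumulate: weights stored as conductances G (siemens),
  # inputs applied as voltages V, each output row sums currents: I = G @ V.
  rng = np.random.default_rng(1)
  G = rng.uniform(0, 1e-6, size=(4, 8))   # 4x8 array of cell conductances
  V = rng.uniform(0, 0.5, size=8)         # input voltages on the 8 columns

  I_out = G @ V                           # the matrix-vector product, from physics
  print(I_out)

  # A digital chip does the same with 4*8 multiplies and adds; the analog array
  # pays instead in noise, drift, and limited precision.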

antattack
If you think about it, Mythic analog AI accelerators will also develop neurological 'disease' as their flash cells wear out.
jecel
Since the chip is meant for inference but not for training, in theory you only write to the Flash once or twice. In this case data retention is a more limiting factor than wear.

For digital storage the stored charge can drift quite a bit before a 1 becomes a 0 or vice versa, but for analog even a small change can be a problem. So I don't know if we can extrapolate the typical Flash memory data retention number to the expected aging for this device.

rapjr9
Check out the work of Kofi Odame on implementing algorithms in analog circuitry on integrated circuits:

https://engineering.dartmouth.edu/community/faculty/kofi-oda...

He's interested in creating analog circuits that are more efficient than digital ones for signal processing and machine learning.

I programmed an analog computer when I was working in the military-industrial complex long ago, implementing a standard missile guidance algorithm for an IR missile using a dual feedback loop and adjustable nav ratio. It was kind of like programming with wires to implement math. It had noise limitations on accuracy, in part because of the many wires exposed to fluorescent lights and digital computers in the room. Quite interesting. I think The Analog Thing might make a good Eurorack music-making module! Turn math into music.

olliej
I liked some of Veritasium's older stuff, but it seems to be getting more and more clickbaity as time goes on (which is reasonable, as it's clear that YT is his job, and YT promotes clickbait above everything else).

A few people said that they "knew" it would be veritasium, but I had guessed it would be Mill computer folk :D

Dylan16807
> A few people said that they "knew" it would be veritasium, but I had guessed it would be Mill computer folk :D

They're trying to bring DSP-level efficiency to more general CPUs, so I could imagine a similar title from them but it wouldn't be this clickbait title.

mrvenkman
I have to say that the title is definitely clickbaity - I watched the video because I wondered what we were doing wrong. It is focused on how to improve AI when we run out of atom-sized hardware. I don't appreciate the title.
mikewarot
Let's use the LM358DT as an example of a modern op-amp. Just sitting there, it has current sources sinking 100uA of current. At 5 volts, it consumes about 0.7 mA with no load.

So, compare that to a 4 bit ALU (an old one, at that) the 74HC181. At 5 volts it consumes 80uA at idle.

Analog circuits require transistors to have bias current flowing through them at all times to ensure linearity. CMOS digital logic has the transistors either fully on or off, except during switching times.

Going analog isn't as great an option as this video makes it seem.

teleforce
For a wonderful introduction to digital and analog computers please check the book by Steiglitz, "The Discrete Charm of the Machine" [1].

[1] https://leonardo.info/review/2019/07/review-of-the-discrete-...

charcircuit
I feel like it was a bit misleading to mention the large amount of energy needed to train a model and then show a chip which can only be used for inference and not training. In general, people are optimizing for power use at the edge and not for training.
aidenn0
Are analog computers actually more energy efficient than a digital ASIC with similar accuracy (say 6-10 bits)?
boxed
Yes. By a lot.
agumonkey
I find the parallel between the uptake of reactive programming and analog computing funny (the same kind of real-time smooth coupling).
mastax
This video discusses the history and principles of classical analog computing and then says that analog computers have energy efficiency benefits over digital computers. This can give the incorrect impression that classical analog computers are more efficient than digital computers, which is absolutely not correct. It may be true that modern analog phase-change memory AI accelerators are more efficient than their digital counterparts, but this assertion should be distanced from the discussion of classical analog computers to not give the wrong impression.
eternityforest
People always love to say some old lost tech is better than the modern version

But it's like, a one in a thousand shot. Almost all modern tech is better, with regressions lasting a decade at most.

Occasionally there are rumors of some modern medical treatments being worse even though medicine as a whole improves, but mechanical and electrical stuff is much better understood and more controlled.

The exception is a few specific appliances that are seemingly designed to fail. Even then there's no lost tech. I'm pretty confident we could make a much better washing machine today for much less money if we tried, than anything from 30 or 50 years ago.

autokad
The analog computers on the Missouri-class battleships were never upgraded because the kill radius of the shells was larger than the margin of error the analog computers introduced, and because the analog computers required less electricity, which is good for something such as a ship. So in this way, yes, they are better (these computers were developed in the 30s). It depends on the application.

> Occasionally there are rumors of some modern medical treatments being worse even though medicine as a whole improves, but mechanical and electrical stuff is much better understood and more controlled.

Such as the iron lung probably being better than the invasive machines we have today.

> The exception is a few specific appliances that are seemingly designed to fail. Even then there's no lost tech. I'm pretty confident we could make a much better washing machine today for much less money if we tried, than anything from 30 or 50 years ago.

Look at the space industry: it actually costs us more (adjusting for inflation) to try and go back to the moon than it did before, when we had no idea how to do it.

Or look at tractors: farmers are clamoring for 1980s tractors over these new, expensive (they break down easily, with no way to repair them) 'modern' tractors John Deere is pushing.

ben_w
All the commentary I see about the moon program says that the rockets bit of NASA was set up to be a pork barrel for all of Congress just so nobody cancelled the Apollo mission, and that many people are annoyed that this structure is still in place.

Given that, the cost of the latest moon mission is “as much as we can squeeze the taxpayer for” rather than representative of the actual cost. The various new startups worldwide (not just SpaceX) are all much more interesting, though obviously they’ll only get a price comparison when they actually land a human.

I think it would’ve been neat if the Falcon Heavy had speed-run the Lunar X Prize, but alas they had better things to do: https://space.stackexchange.com/questions/27271/could-the-lu...

throwaway14356
It just isn't so binary. Old tech is always better than new in various ways, time-tested being the largest factor. We still move on for reasons.
eternityforest
People died going to the moon the first time. Better to not go at all, or to have it cost 10x as much, than to risk even one astronaut's life.

Besides, if we just redid what we had, we would not be developing any new tech that could be used on earth. And it would be less comfortable, a very bad thing if you're still fighting the losing battle to convince anyone that they'd like to be in space for more than a brief adventure.

AFAIK they have cuirass ventilation to replace iron lungs now.

Those new tractors are most likely way more reliable. In practice they may be worse because of artificial limitations put there on purpose, but were it not for those they would almost certainly be better (although people would probably still like the old ones, because no matter how reliable something is, people seem to like things with a "substantial" feel that they can understand).

BiteCode_dev
It depends on your criteria. E.g., old appliances are bulky, noisier, and suck more juice. You could say modern ones are then better. But my mother still owns stuff from decades ago that is consistently sturdier, lasts longer, is easier to repair, and has a better UI.

I just bought a food slicer. The motor was connected to the saw using a very soft plastic gear. Its teeth became smooth after a month of use, rendering the machine useless, and there's no way to buy a new gear on the internet.

This is modern tech at work: you get a cheaper machine, but to get there, the materials used suck. And there is no stock of replacement parts because it's expensive to keep them around.

eternityforest
They're probably deep in planned obsolescence since most people will use them about 3 times in 30 years.

The more common everyday things like kettles, vacuum cleaners, toasters, etc all seem to have very good options for not much money.

I've never driven a car, but my family seems to need far fewer trips to the mechanic than when I was a kid. Computers definitely seem better in every way, and of course all small electronics like tape players have been phone-ified and seem to be much more reliable.

It may well be that slicers are niche enough that consumer versions are worse.

Still, nylon gears can be very durable (unless any ozone gets on them; that seems to kill them).

A truly modern slicer with the same crappy materials would probably be using the MCU to predict the gear temperature based on motor current and limiting the duty cycle, and it would last a long time and perform acceptably well.

Either that or they'd have some direct-drive scheme for the really nice ones, or maybe even some kind of no-moving-parts linear motor.

Modern power tools do this all the time. They shut down or reduce power for what seems to be no reason or a minor reason, but the computer probably detected some subtle overload condition I wouldn't have. It's a bit annoying, but it makes them cheap and durable enough to not think twice about buying used.

The stuff at Wal-Mart usually sucks, but there's almost always some affordable modern version that beats the older tech.

It does seem that 3rd party gears exist though, for some slicers.

BiteCode_dev
I suppose there is also the problem that those same appliances were pricey at the time. Lots of cheap options today, but you get what you pay for.
daltont
Modern refrigerators and washing appliances can be really flaky from software bugs. I've talked to others who have $3000 fancy stainless steel refrigerators, and they seem vulnerable to weird and hard-to-diagnose problems.
austinl
The ancient tech myth I see the most is the idea that the recipe for "Roman concrete" was lost, and somehow modern engineers can't figure out how to make a superior mix.

Certainly, Roman engineers built incredible unreinforced concrete structures. But this was accomplished through structural engineering techniques designed to keep the concrete compressed (e.g. arches, domes). Modern structures like elevated highways and skyscrapers would be impossible to build this way, and require steel reinforcement.

While the mix the Romans used was slightly different (apparently it contained a bit of volcanic ash and used less water), modern engineers deliberately choose a different mix based on the structure. E.g. larger structures with natural reinforcement, like dams, will tend to use a mix closer to what the Romans used.

tonfreed
I just want a Damascus steel sword because they look cool
Mar 02, 2022 · 3 points, 1 comments · submitted by davidbarker
bklaasen
https://www.youtube.com/watch?v=GVsUOuSjvcg&t=1093

Start here to hear the deplorable and utterly uninspiring uses the guy comes up with for analogue chips, then rewind to the start.

The tech is great, but the (presumably most compelling) use cases this guy can come up with are automated surveillance and efficiencies in junk food production.

Mar 01, 2022 · 2 points, 0 comments · submitted by lawrencechen
Mar 01, 2022 · 8 points, 2 comments · submitted by alexb_
alexb_
Worth mentioning that, while this video is great and informative, Veritasium has made videos hyping up far-from-market technologies for money in the past [1]. Similar to the linked video, this one talks about extremely promising tech not by asking experts in the field, but instead by going to startups with an extreme incentive to hype up the technology (complete with VC-bait "metaverse applications" of the tech). It's still a great video, but this should be kept in mind.

[1] https://www.youtube.com/watch?v=yjztvddhZmI

ktpsns
The product presented: https://the-analog-thing.org/ made by https://anabrid.com/

The company presented: https://www.mythic-ai.com/

This is the second video (of a series of two) by Veritasium about analog computers. The first one was https://www.youtube.com/watch?v=IgF3OX8nT0w and discussed on Hacker News at https://news.ycombinator.com/item?id=29645610

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.