Hacker News Comments on "But what is the Fourier Transform? A visual introduction."
3Blue1Brown · YouTube · 333 HN points · 29 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

I recommend learning about the Fourier Transform, and then peeking at how frequency transformations are used to represent everyday images, like the Cosine Transform used in JPEG.
3Blue1Brown FFT: https://youtu.be/spUNpyF58BY
JPEG components https://en.wikipedia.org/wiki/JPEG#Discrete_cosine_transform
I worked at the Very Large Array radio telescope for a while, and even so the language didn't click for me until I thought about the signal transforms that have become ubiquitous in everyday life.
And a co-worker described the telescope as a double-slit experiment, with multiple 2-D holes instead of slits; imagine the interference pattern you get when you have multiple slits. The elements of an interferometer are the "slits", but they are receiving the light rather than emitting it. The VLA has 27 radio dishes for its elements, so if you poke 27 holes in a board and shoot a laser through them, you get a blurry pattern on the wall. That's the distortion from the shape of your telescope, and you're trying to cancel it out...
Then there's the time domain, because the Earth is rotating, so your telescope is sweeping a pattern across the sky. It can seem like a hopeless mess, but it's actually the case that for most observations, that sweep gives you even more coverage of the image. It can be a good thing.
But you have to keep track of one wavefront across all of those moving elements. The correlator is a computer doing those FFTs to find where the signal matches enough between the elements to start extracting finer detail.
I think some insight into how that's possible is around the 8-minute mark of this video:
https://youtu.be/nmgFG7PUHfo?t=8m
I have to admit that I'm still thinking about it all, and it's been 20 years. But JPEG is about at my level.
That's at the 22-minute mark of the same Veritasium video:
I can also highly recommend 3Blue1Brown's treatment of the topic: https://www.youtube.com/watch?v=spUNpyF58BY (...which I see now is cited in the video that you link to!)
Visual introduction to Fourier Transform from 3Blue1Brown: https://www.youtube.com/watch?v=spUNpyF58BY
If the part about the tack on the board under the clocks wasn't clear, and you're a visual learner, this video from 3Blue1Brown about the [classic] Fourier Transform might help clarify the relationship to periodicity:
Warning: Bit of a 'Hitchhiker's Guide to the Galaxy'/Conway's Game of Life yak shave for a reply.
Nice start to providing a simple way to visually interact with/experiment with/manipulate multiple neural network concepts by loading/transforming any traditional code (working or not) & reshuffling/tessellating the structural representation of the code to generate a Turing tape path via color execution paths.
Switching to pi calculus could provide the theory to support coroutine/coprocessing/threading of multiple interacting tape paths/color combinations/path diffs/scaled recursion to perform assembly language instructions. Raster graphics without the yak ( https://verdagon.dev/blog/yak-shave-language-engine-game ). Feedback between tape & "traditional code" as neural network layer(s)!
Flexible code (via tessellations) without changing the actual imported source code! So, a complete SVG (tape interactions) web page as a code source on how to interact with the web page, ( https://www.youtube.com/watch?v=r6sGWTCMz2k )
what's displayed/played, documentation & usage manual included within code structure!
Unicode permits implied piping of rich kerned fonts without all the tech fluff. Unicode fails because it allows infinite Multics and is therefore K&R incomplete. But this approach is way more dimensional/colorful than Unicode/tech stuff.
?? can one's code unintentionally provide a wordle problem & solution in one package ?? ( https://www.youtube.com/watch?v=v68zYyaEmEA )
?? turing complete fourier transforms https://www.youtube.com/watch?v=spUNpyF58BY
toolkit concepts:
Can this approach be used to classify programs as contributing to verifying the 'Hitchhiker's Guide to the Galaxy' universal answer of 42, or must 42 be qualified as an imaginary number?
Neural networks / analog stuff: https://www.youtube.com/watch?v=GVsUOuSjvcg
perceptrons: https://www.youtube.com/watch?v=GVsUOuSjvcg
spreadsheets: https://www.youtube.com/watch?v=UBX2QQHlQ_I (turing spreadsheets -> https://www.felienne.com/archives/2974 )
https://www.quantamagazine.org/the-busy-beaver-game-illuminates-the-fundamental-limits-of-math-20201210/
BB theory: https://www.scottaaronson.com/papers/bb.pdf
BB(8000): https://scottaaronson.blog/?p=2725
electrons do not spin -> https://www.youtube.com/watch?v=pWlk1gLkF2Y
prime number spirals -> https://www.youtube.com/watch?v=EK32jo7i5LQ
modular arithmetic: https://www.youtube.com/watch?v=lJ3CD9M3nEQ
speed things up: https://github.com/vqd8a/DFAGE
I never understood how multiple electrical frequencies can be in a copper cable. I always thought they must merge, and that it must therefore be impossible to recover the original set of frequencies and their strengths. Then I learned about the Fourier transform in this beautiful video: https://youtu.be/spUNpyF58BY
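That recovery is easy to demonstrate. A minimal numpy sketch (the three frequencies, amplitudes, and the detection threshold below are made up for illustration): three sine waves are summed into one "cable" signal, and the FFT reads the original components straight back out.

```python
import numpy as np

fs = 1000                 # sampling rate in Hz (chosen for illustration)
t = np.arange(0, 1, 1 / fs)

# Three "merged" frequencies travelling on the same wire
signal = 1.0 * np.sin(2 * np.pi * 50 * t) \
       + 0.5 * np.sin(2 * np.pi * 120 * t) \
       + 0.25 * np.sin(2 * np.pi * 300 * t)

# Normalize so each spectral peak reads off the original amplitude
spectrum = np.abs(np.fft.rfft(signal)) / (len(t) / 2)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The peaks sit exactly at the original frequencies, with their strengths intact
peaks = freqs[spectrum > 0.1]
print(peaks)  # peaks at 50, 120 and 300 Hz
```

The components never actually merge; superposition keeps them linearly separable, and the transform is just the change of basis that separates them.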
Dude really needs to subscribe to 3Blue1Brown: https://www.youtube.com/watch?v=spUNpyF58BY
(but anyone interested in stuff like this should be. I guess I'm surprised to see someone like this writing a blog post who isn't regularly checking out that channel)
3Blue1Brown has an amazing visual explanation which would go well with this: https://www.youtube.com/watch?v=spUNpyF58BY
⬐ N1H1L
I rediscovered my joy of mathematics with 3Blue1Brown
⬐ jstx1
And an FFT explanation by Reducible - https://www.youtube.com/watch?v=h7apO7q16V0
It's in a similar style to 3blue1brown (uses the same visualisation library) but the video focuses specifically on the FFT algorithm, not on Fourier transforms in general. I'm pretty sure Grant himself recommended the Reducible video at some point.
When I saw the link named "explaining [FFT's] ingenuity in more detail" I was nearly certain it would be a 3B1B video, and it was:
Agreed! It's one reason I'm bullish about our children learning and applying math better than us.
Btw, 3Blue1Brown's videos on linear algebra [1] are similarly awesome, and of course his video on Fourier is magnificent [2]. Another awesome math explanation is on game theory by Nicky Case [3].
[1] https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2x.... (you'll finally learn what a determinant is!)
[2] https://www.youtube.com/watch?v=spUNpyF58BY
[3] https://ncase.me/trust/
You’re in luck. There was a thread about it 5 days ago: https://news.ycombinator.com/item?id=27229836
My favorite was
What about the 3 blue 1 brown video on it? I think that is pretty concise and intuitive too but does not seem to appear on this thread https://www.youtube.com/watch?v=spUNpyF58BY&ab_channel=3Blue...
⬐ BlueTemplar
Haven't they "just" taken the above explanation, and made a video about it?
The 3b1b video was mind blowing. Link: https://www.youtube.com/watch?v=spUNpyF58BY
⬐ travisgriggs
Agreed! Came here to post it. First time I really "got" what an FFT was about.
This is getting at the idea behind the Fourier transform. 3b1b did an excellent video with exactly this visualization: https://www.youtube.com/watch?v=spUNpyF58BY
The idea is that wrapping a function around a circle at the right frequency (i.e. length per spiral) will cause peaks and troughs to align. It just happens here that the "right" frequency is 1/year.
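The winding idea can be sketched in a few lines of numpy (the 3 Hz test signal and the sampling grid below are arbitrary illustration choices, not taken from the video):

```python
import numpy as np

def winding_center(signal_fn, times, freq):
    """Wrap the signal around a circle at `freq` cycles per unit time
    and return the center of mass of the wound-up graph."""
    samples = signal_fn(times)
    wound = samples * np.exp(-2j * np.pi * freq * times)
    return wound.mean()

t = np.linspace(0, 4, 4000, endpoint=False)
f = lambda t: 1 + np.cos(2 * np.pi * 3 * t)   # a 3 Hz signal, shifted positive

# At the "right" winding frequency, the peaks pile up on one side of the circle...
print(abs(winding_center(f, t, 3.0)))   # far from zero
# ...at any other frequency, they spread evenly around it and cancel.
print(abs(winding_center(f, t, 2.0)))   # essentially zero
```

The `winding_center` here is exactly the Fourier integral at one frequency, just phrased as a center of mass.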
I think the key is finding an intuitive way to grasp the subject matter, and then it doesn't feel like such a grind.
If material like this had been available 20 years ago it would have made a huge difference.
This is an explanation (from the amazing three blue one brown) of the DFT not the DCT, but this was the first thing that ever really made sense to me. It is explained by someone who really knows math incredibly well and is an amazing teacher, with amazing visuals: https://www.youtube.com/watch?v=spUNpyF58BY
3Blue1Brown ELI15 the Fast Fourier Transform, which sets up the problem domain:
This is one of the few people who connects math and music in a way that's both useful and interesting. I followed this channel back when I found the video on Fourier transforms[1], but never looked at other videos. I've never really thought about the math of consonance and dissonance, but I use it all the time without realizing it.
For example: a hypersaw. That's 7 saws, slightly detuned, played at the same time. The more you detune them, the worse it sounds. If you own Serum, you can hear this in real time if you make a 7-voice saw and play with the detune knob.
You can use an LFO to modulate this and get a sound that's a little uneasy while still somehow being harmonious. It's perfect for dark/spooky music.
⬐ nicetryguy
A little detuning goes a long way! It gives it that full warbly phase sound. 5-10 cents down is usually my sweet spot for synth plugins. I even detune one string on each key on my piano about 5-10 cents down, it gives it some life! Natural phasing sounds awesome.
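For the curious, a rough numpy sketch of the hypersaw idea. Everything here is a simplification for illustration: a naive (non-band-limited) sawtooth, and voice count and detune amounts picked arbitrarily, nothing like Serum's actual oscillators.

```python
import numpy as np

def saw(freq, t):
    """Naive sawtooth in [-1, 1). Fine for illustration; it aliases at audio rates."""
    return 2 * (t * freq % 1.0) - 1

def hypersaw(f0, t, voices=7, detune_cents=10):
    """Sum `voices` saws spread evenly across +/- detune_cents around f0."""
    offsets = np.linspace(-detune_cents, detune_cents, voices)
    freqs = f0 * 2 ** (offsets / 1200)        # cents -> frequency ratio
    return sum(saw(f, t) for f in freqs) / voices

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
mild = hypersaw(110.0, t, detune_cents=5)     # lush, slowly-beating cluster
harsh = hypersaw(110.0, t, detune_cents=60)   # wider spread, rougher beating
```

The "worse it sounds" effect is visible in the spectrum: each voice contributes its own harmonic series, and wider detune pushes those series far enough apart to beat audibly against each other.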
In many cases, Wikipedia actually lists the analog for classical probability theory. For several theorems, the proof for Hilbert spaces can be directly brought to the classical world. The main difference to keep in mind is that we have amplitudes, not probabilities, and they can be negative or complex. We recover typical probability theory by taking norms/squares of amplitudes.
The uncertainty principle follows from doing signal analysis on Fourier-transformed wavefunctions. A fun professional treatment from Baez et al. is [0], and 3blue1brown has an excellent two-video visual explanation [1][2]. The uncertainty principle turns out to be a special case of the sampling theorem (yes, that one! [3]), which itself turns out to be a special case of a result in sheaf theory [4].
Measurements matter because "observables don't commute"; taking linear operators on the complex numbers or other Hilbert spaces can have lasting effects which can't easily be undone. Combine this with "conservation of probability", which is formally mostly abstract nonsense [5], and we get mostly to Aaronson's point of view. (I would go further using the Free Will Theorem. [6])
When Aaronson says that the no-cloning principle is provable using probability, but for complex numbers, he's referring to the standard proof [7]. There are two connections to draw to typical probability theory. The first, and bigger, connection is that random variables can't be cloned in typical probability theory either! The second, deeper, connection is that Hilbert spaces give linear logics, which imply conservation laws for the information representing the particles to be cloned.
Finally, for the Bell inequalities, again there is a standard proof on Wikipedia [9] using typical probability theory. I'd like to mention the overlooked Kochen-Specker theorem [8], which forms the backbone of the Free Will Theorem [6]. Measuring a particle is like taking a sample of a random variable: We decide how we want to ask the particle, and the particle chooses a response that is both allowed by its probability distribution and also correctly represents its context.
[0] http://math.ucr.edu/home/baez/photon/schmoton.htm
[1] https://www.youtube.com/watch?v=spUNpyF58BY
[2] https://www.youtube.com/watch?v=MBnnXbOM5S4
[3] https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampli...
[4] https://arxiv.org/abs/1405.0324
[5] https://en.wikipedia.org/wiki/Probability_current
[6] https://en.wikipedia.org/wiki/Free_will_theorem
[7] https://en.wikipedia.org/wiki/No-cloning_theorem#Proof
[8] https://en.wikipedia.org/wiki/Kochen%E2%80%93Specker_theorem
3blue1brown is great, e.g. his Fourier transform animation is super intuitive: https://youtube.com/watch?v=spUNpyF58BY
He has open-sourced his animation engine "manim", used in his videos: https://github.com/3b1b/manim
⬐ Sohcahtoa82
Thank you for showing me that Fourier transform video. I've never understood how it worked because usually they just show the integral and call it good.
3blue1brown has a great video on the topic: https://youtu.be/spUNpyF58BY
⬐ supernova87a
I have to say that video completely changed the level of my understanding about it. Especially the bit of visually and intuitively understanding why the imaginary terms are the integration of the wrapping of the frequency component. Well worth watching.
I recommend watching the "But what is the Fourier Transform? A visual introduction." https://www.youtube.com/watch?v=spUNpyF58BY. Beautiful.
3Blue1Brown has an extremely good explanation[1] of the intrinsic uncertainty, and why it's separate from measurement uncertainty. (the previous episode[2] is a recommended prerequisite for background on how the Fourier Transform works)
> emerges through a different, as yet unknown, mechanism.
In 3Blue1Brown's explanation[1], he shows how the intrinsic uncertainty is an inherent trade-off of trying to measure both position and frequency. A short wave packet only a few wavelengths long correlates with a narrow (precise) range of positions, but also correlates well with a very wide range of frequencies. A Heisenberg-like uncertainty exists any time you are working with wave packets whose length is near the wavelength. 3Blue1Brown gives a very good example using Doppler radar.
⬐ atomack
Yes, I like these sources too. Good for building intuition. I would just add that Heisenberg-like here means that both systems share features of wave mechanics. Doppler-type effects aren't quantum mechanical though.
When I suggest the mechanism is unknown, I mean that Heisenberg uncertainty is a postulate of quantum mechanics. In other words, the fundamental reason that quantum mechanics should appeal to wave mechanics isn't really established - we don't really know yet the fundamental objects and interactions that lead to quantum mechanics (despite much effort).
⬐ gus_massa
I'm not sure about the historical part, but now the uncertainty principle is not an independent postulate. It's deduced from the non-commutation of the operators that measure the position and the momentum of a particle. This can be done in the wave representation or in the matrix representation.
Moreover, similar calculations can be done with other measurements that don't commute. One that is very important is the spin of a particle in the x, y, and z axes.
Another is the polarization of a photon in directions that are at 45°. For example, most of (all?) the experiments of the EPR paradox are done with polarization instead of position-momentum, because polarization is much easier to measure. https://en.wikipedia.org/wiki/EPR_paradox
⬐ atomack
It's a good point that uncertainty relations exist for all kinds of physical observables. But whether they're expressed as commutation relations, as in Heisenberg's original formulation, or in whatever formulation you choose (wave mechanics, matrix mechanics, Dirac representation, QFT, or anything else one can think of), it's still asserted, rather than derived from an underlying set of fundamental physical objects and interactions.
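The position/frequency trade-off discussed in this thread is easy to observe numerically. This is a hypothetical sketch (the 50 Hz carrier, Gaussian envelope widths, and spread measure are all arbitrary choices): it builds wave packets of different lengths and shows that the shorter packet has the broader spectrum.

```python
import numpy as np

fs = 1000                        # samples per second (illustrative)
t = np.arange(-2, 2, 1 / fs)

def packet(width):
    """A 50 Hz wave packet under a Gaussian envelope of the given width (seconds)."""
    return np.cos(2 * np.pi * 50 * t) * np.exp(-(t / width) ** 2)

def spectral_spread(x):
    """Standard deviation of |FFT| treated as a distribution over frequency."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    p = mag / mag.sum()
    mean = (p * freqs).sum()
    return np.sqrt((p * (freqs - mean) ** 2).sum())

print(spectral_spread(packet(0.02)))  # short packet: broad spectrum
print(spectral_spread(packet(0.5)))   # long packet: narrow spectrum
```

The product of temporal width and spectral width stays bounded below, which is the Fourier-analytic shape of the Heisenberg inequality.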
Hahah, congratulations on completing the implementation! You might not believe it, but I visited the page just two days ago and saw the updates! I wanted to show mandalagaba to one of my little cousins and let him play with it for a bit. I connected my laptop to the TV screen, and then a mouse with a long cable to the laptop, so we wouldn't have anything in the way... we started playing with some fluid simulations, which are super fun... sadly, I couldn't get mandalagaba to work afterwards. I don't know if the HDMI connection caused some problem with the detected resolution, or if there was some other temporary issue... we had fun anyway, and I'll definitely show him the page another day.
Really amusing to meet again. I don't have any new idea in mind which doesn't kinda lead mandalagaba to become a full-fledged image editing program (layers, brushes, some effects, changing hues and tones, etc.), but if I come up with something nice I'll tell you. As seen in this thread, maybe devices (like a drawing compass) with multiple articulations? (you might also find some inspiration in polar coordinates and waves in this nice video about Fourier transforms by 3Blue1Brown [https://youtu.be/spUNpyF58BY?t=233], like at 3:53 or 5:14). Keep up the good work!
This is really cool. I especially like the rotating circles visualization. Another video that helped me finally "get" Fouriers is the 3Blue1Brown video:
I like this one by 3Blue1Brown: https://youtu.be/spUNpyF58BY
Ted Nelson's Computers for Cynics probably doesn't contain technical information that's new to any of you, but Ted has a knack for reframing things in ways that make the arbitrariness of certain historical decisions clear: https://www.youtube.com/watch?v=KdnGPQaICjk
On the subject of hypertext, The Web That Wasn't gives a nice history of the idea (for anybody who thinks it starts with TBL -- surprisingly many people!): https://www.youtube.com/watch?v=72nfrhXroo8
Another reframing-oriented talk is Clay Shirky's "It's not information overload, it's filter failure", which ultimately leads to Shirky suggesting the kinds of user-oriented filtering features that Mastodon has implemented: https://www.youtube.com/watch?v=LabqeJEOQyI
At the intersection of neurology and information science, Peter Watts always has something interesting to say, and as a former marine biologist focusing on the nervous system of starfish, this is absolutely in his wheelhouse: https://www.youtube.com/watch?v=6GAicTW7MGo
This one ("moving away from defensive programming") justified strong typing in a pretty clear way: https://www.youtube.com/watch?v=Csj3lzsr0_I
Dan Dennett is just as relevant as Doug Hofstadter when it comes to metacognition: https://www.youtube.com/watch?v=EJsD-3jtXz0
Forgotten Ideas in Computer Science starts slow, but if you don't have much of a historical background (like, if you're only vaguely aware of what happened in CS in the 70s), it's a laundry list of things you should look up and be aware of before you start your next project: https://www.youtube.com/watch?v=-I_jE0l7sYQ
Everybody should understand procedural generation: https://www.youtube.com/watch?v=WumyfLEa6bU
Likewise, since AI is hyped up right now, we should all remind ourselves that IA is a thing too: https://www.youtube.com/watch?v=narjui3em1k
More hypertext history: https://www.youtube.com/watch?v=i67rQdHuO-8
Even more hypertext / UX stuff: https://www.youtube.com/watch?v=gDrHkNgGQDs
A great explanation of Fourier transforms: https://www.youtube.com/watch?v=spUNpyF58BY
Allison Parrish does mindblowing things with corpus statistics by treating term vector spaces as generalizations of 2d image formats: https://www.youtube.com/watch?v=L3D0JEA1Jdc
Finally, these aren't tech talks but instead UI demo reels. If you have any interest in UI or UX, you should watch them. They are wonderfully cheesy, mostly doable, and despite being more than 20 years old, nobody has bothered actually implementing the useful features shown: https://www.youtube.com/watch?v=NKJNxgZyVo0 https://www.youtube.com/watch?v=hb4AzF6wEoc https://www.youtube.com/watch?v=1iAJPoc23-M
In case anyone missed it, the most genius description of the Fourier Transform was published recently by the 3Blue1Brown YouTube channel (https://www.youtube.com/watch?v=spUNpyF58BY). It is absolutely brilliant.
There are actually two reasons why 3blue1brown video lectures are superior to old-fashioned professor-and-blackboard lectures:
1. A video with animation and clear text and colours can be far more effective than chalk on a blackboard. For example, watch 60 seconds of this video on Fourier transforms: https://m.youtube.com/watch?feature=youtu.be&v=spUNpyF58BY&t... Such an effective, efficient presentation simply could not be done with chalk and blackboard.
2. The internet provides a way to watch the best of the best lectures in the world. Grant Sanderson, the guy behind 3blue1brown, is a Khan Academy talent search winner: https://www.khanacademy.org/about/blog/post/125876900000/mee...
⬐ zwieback
Lots of other great videos on the same channel, many of them previously discussed
⬐ MarkMc
I'm awed by the educational quality of Grant Sanderson's videos. No wonder he was a Khan Academy talent search finalist: https://www.khanacademy.org/about/blog/post/125876900000/mee...
⬐ nayuki
Later duplicate that ended up on the front page: https://news.ycombinator.com/item?id=16242103
⬐ Nokinside
Oh boy was that well presented. I wish I had seen this when I studied signal processing.
⬐ DoingIsLearning
Same! I feel that this 20-minute video reached an intuition level that is equivalent to what were probably a couple of months of my ECE undergrad.
⬐ magnat
Source code of the video: https://github.com/3b1b/manim/blob/master/active_projects/fo...
⬐ Patient0
He touches on it - but I’d love to see an intuitive explanation of why the response of each frequency to the input function is linearly independent, i.e. the fact that the Fourier transform of the sum is equal to the sum of the Fourier transforms. This is “why it works” - it’s what makes the frequency space an orthonormal basis - but it’s never been intuitively obvious to me. Otherwise, there would be more than one way of decomposing a function into a superposition. E.g. what would be useful is to give an example of a set of functions which are not linearly independent.
⬐ kortex
⬐ nayuki
Take your wrapping function from t1 to t2, at a frequency ƒ (signal) != Fs (sampling freq), then take the limit as t1/t2 goes to -∞/+∞. As your window gets longer, more cycles of the oscillation "cancel out", moving the center of mass towards 0+0i. This means the peak around ƒ narrows and rises. At ∞, ƒ becomes infinitely narrow and high (a Dirac delta*).
This is also why peaks on an FFT are Gaussian (finite window), and get sharper as the FFT window is increased.
* for cosine, technically there is a peak at -ƒ too. This is because a real cosine signal is ambiguous about whether it is "moving forward or backwards in time". Hence it has a peak at +/-ƒ. A complex exponential (a helix through time) has chirality due to its real and imaginary components, so it has a single peak at ƒ. And if you take a +ƒ (left-handed) and a -ƒ (right-handed) helix and add them, the complex parts cancel out, leaving only a real "up and down" wave.
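The limit argument above can be checked numerically. In this illustrative sketch (the sampling rate, the deliberately off-bin 7.37 Hz tone, and the 0.5 Hz band are all arbitrary), the fraction of a cosine's spectral energy that lands near its true frequency grows as the observation window lengthens, i.e. the peak sharpens toward a delta:

```python
import numpy as np

fs = 100.0
f_sig = 7.37   # deliberately NOT on an FFT bin, so there is spectral leakage

def concentration(window_seconds):
    """Fraction of spectral energy within 0.5 Hz of the true frequency
    for a cosine observed over a finite rectangular window."""
    t = np.arange(0, window_seconds, 1 / fs)
    x = np.cos(2 * np.pi * f_sig * t)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    near = np.abs(freqs - f_sig) < 0.5
    return power[near].sum() / power.sum()

# Longer windows: off-frequency cycles cancel, and the energy piles up
# into an ever-narrower peak around the true frequency.
for w in (1, 4, 16):
    print(w, concentration(w))
```

In the infinite-window limit the fraction tends to 1 for any band around ƒ, which is the discrete-sampled version of the Dirac delta statement.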
⬐ chestervonwinch
The orthogonality essentially follows from (1) integer-frequency complex sinusoids have an average value of zero over [0,2π], and (2) if you multiply two distinct integer-frequency complex sinusoids, you get another integer-frequency complex sinusoid. I'm not sure that this is any more intuitive.
> what would be useful is to give an example of a set of functions which are not linearly independent.
See [1,2] for example, which (I believe) has applications in compressed sensing and dictionary learning.
⬐ mlevental
⬐ totalZero
> The orthogonality essentially follows from (1) integer frequency complex sinusoids have an average value of zero over [0,2π], and (2) if you multiply two distinct integer frequency complex sinusoids, you get another integer frequency complex sinusoid. I'm not sure that this is any more intuitive.
i think these kinds of explanations are hilariously pointless. and i don't mean to disparage because you're just trying to answer op's question but all you've done is restated the proof in english - i.e. of course it follows from that, because what you've just said is that the inner product of basis functions is 0. well yes, of course, that's the definition of orthogonal.
Here's how I think about it.
You can play the individual notes of a chord on one piano or several, but they still come together to produce the same chorus of frequencies.
The Fourier series representation of a waveform is itself a sum, since trigonometric functions are waveforms as well. Thus, a combination thereof should be commutative because a sum of two sums retains the properties of addition.
⬐ rocqua
It actually follows from the 'centre of mass' explanation. If you take the centre of mass, as described, of f + g, you get the centre of mass of f plus the centre of mass of g. One way to explain this is to just say the + can be moved out of the integral.
Alternatively, consider the centre of mass only over the horizontal axis. Now say we only look at the 'contribution' of f + g at time t (ignoring the issue of that contribution being infinitesimal). That contribution is (f(x) + g(x)) * sin(theta), where theta is the angle of our point. Clearly this equals f(x) * sin(theta) + g(x) * sin(theta). These are the separate contributions of f(x) and g(x). The same argument holds for the centre of mass over the vertical axis (replacing sin with cos).
If we were to make the alternative explanation formal, we get back to the + being able to move outside the integral. Note that our decomposition into the horizontal and vertical parts is an alternative way to do the Fourier transform without complex numbers. The vertical part here is essentially the imaginary part of the Fourier transform.
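The linearity being discussed can be verified directly in a few lines (the random test signals below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(256)
g = rng.standard_normal(256)

# Linearity: transforming the sum equals summing the transforms.
# This is exactly the "+ moves out of the integral" argument, in discrete form.
lhs = np.fft.fft(f + g)
rhs = np.fft.fft(f) + np.fft.fft(g)
assert np.allclose(lhs, rhs)

# Scaling behaves the same way, completing linearity.
assert np.allclose(np.fft.fft(2 * f), 2 * np.fft.fft(f))
```

Note that linearity alone does not force a unique decomposition; uniqueness additionally needs the sinusoids to form a basis (orthogonality), which is the other half of the thread above.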
BetterExplained (Kalid Azad) has a good written article that covers the Fourier transform in a similar manner to the 3Blue1Brown video: https://betterexplained.com/articles/an-interactive-guide-to...
I have an article explaining step by step how to implement code for the discrete version of the Fourier transform: https://www.nayuki.io/page/how-to-implement-the-discrete-fou...
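As a taste of what such an implementation involves, here is a minimal direct DFT written straight from the definition (an illustrative O(n²) version, not the code from either linked article):

```python
import cmath

def dft(x):
    """Direct discrete Fourier transform from the definition:
    X[k] = sum over m of x[m] * exp(-2j*pi*k*m/n). O(n^2) on purpose."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                for m in range(n))
            for k in range(n)]

# Sanity check: a constant signal puts all its energy in bin 0 (DC).
print(dft([1, 1, 1, 1]))
```

The FFT computes exactly the same values, just by factoring this double loop into O(n log n) work.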
⬐ adamnemecek
I’ll just leave this here: http://tomlr.free.fr/Math%E9matiques/Math%20Complete/Analysi...
Mathematics of the discrete Fourier Transform by Julius O. Smith. (O stands for Orange I hope)
⬐ jacobolus
⬐ nsb1
Poking around the web turns up https://amzn.com/097456074X/ https://ccrma.stanford.edu/~jos/pubs.html
I really wish this stuff existed when I was learning about FFTs - this video describes the theory far better and in far less time than my broken-english college professors ever could.
⬐ madez
Sound waves don't add up linearly. However, it is a good enough idealization for many uses.
Fourier analysis is also approachable from the discrete setting of finite vectors instead of functions, where the Fourier analysis is just an orthogonal (orthonormal when sanely defined) linear function, i.e. it acts by matrix multiplication and is represented as that matrix.
This, appropriately extended to the continuous setting, leads to the Fourier transform on functions, and also gives intuition for why the Fourier transform uses integrals.
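The matrix view described above can be made concrete in a few lines of numpy: build the unitary DFT matrix and check that it really is orthonormal (the size n = 8 is an arbitrary illustration):

```python
import numpy as np

n = 8
m, k = np.meshgrid(np.arange(n), np.arange(n))
# Unitary DFT matrix: row k samples the complex sinusoid of frequency k.
F = np.exp(-2j * np.pi * k * m / n) / np.sqrt(n)

# Orthonormality: F times its conjugate transpose is the identity,
# so the DFT is just a change to an orthonormal basis of sampled sinusoids.
assert np.allclose(F @ F.conj().T, np.eye(n))

# Applying the matrix agrees with the library FFT (up to the 1/sqrt(n) scale).
x = np.arange(n, dtype=float)
assert np.allclose(F @ x, np.fft.fft(x) / np.sqrt(n))
```

Replacing the matrix-vector sum with an integral against e^(-2πift) is exactly the passage to the continuous transform that the comment describes.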
⬐ kranner
This one is related and (I think) quite good:
⬐ nwatson
I think it's much easier and more direct to visualize the time domain as a superposition of helical components, and the transform as an exploration of what happens when you twist the "cylinder" with varying "intensities". You avoid the vague center-of-mass spike depicted here and start from the get-go with the terms of the transform.
⬐ jacobolus
⬐ probinso
Perhaps you can explain what you mean by “exploration of what happens” and “terms of the transformation”? That’s pretty vague as a description of a visualization.
Maybe you’re talking about visualizing a discrete Fourier transform?
⬐ nwatson
Yep, it's vague, sorry, I'll need to try my hand at a video.
⬐ Nimitz14
⬐ probinso
the source code 3blue1brown uses for animations is open source
> I think it's much easier and more direct to visualize the time-domain as superposition of helical components and the transform as an exploration of what happens when you twist the "cylinder" with varying "intensities".
That doesn't sound very clear at all to me.
> You avoid the vague center-of-mass spike depicted here and start from the get-go with the terms of the transform.
The center-of-mass spike is the result of summing across all the different complex points/vectors; this is stated very clearly by the FT formula (an integral of points on a circle (exp(it)) amplified by signal strength (f(t))). Seems very explicit to me.
In fMRI data, we refer to the frequency space of volumetric image data as K-Space.
I would like a general term for the frequency space of a signal, without the use of the word `frequency`. This is because `frequency` is also used when describing histograms in general image processing, and is in general an overloaded term.
Any established words or phrases in the corpus? any tips?
⬐ whatshisface
K-Space is sufficiently general, because "k" is defined as the wavenumber (2π over wavelength). That's simply related to frequency in nearly every case, unless your medium is interstellar hydrogen or shockwaves in air or something. If you need to talk about frequency specifically, you could talk about the period (inversely proportional) or the angular frequency (factor of 2π).
⬐ ablaba
This Fourier transform simulation example from shadertoy is good: https://www.shadertoy.com/view/ltKSWD
⬐ codegladiator
The link kills my MacBook Pro. Makes the system unresponsive.
⬐ wendyjreichert
How do you animate something like this?
⬐ meseznik
⬐ tambourine_man
He wrote his own tools in Python to achieve this (repo: https://github.com/3b1b/manim).
⬐ wendyjreichert
Thanks!
https://news.ycombinator.com/item?id=16244908
⬐ knolan
3blue1brown’s videos are excellent. They build intuition in a calm and friendly way with an appropriate amount of useful animation. This is how we make mathematics accessible.
I’m currently considering moving back into academia, and there are a lot of topics in my field that I know students often struggle with that would be greatly helped by some simple animations. Fortunately I’m pretty competent with Blender and I relish the idea of developing something worthwhile.
⬐ jestinjoy1
Do you have any links showing how to do that with Blender?
⬐ knolan
⬐ bent_that_way
Anything in particular? I was thinking of things like viscosity and stress analysis for fluid mechanics. It would be easy enough to animate a stress tensor and show how each term behaves when prodded. Similarly, the basic concepts behind laminar boundary layers would be equally straightforward.
Mimicking 3b1b’s style would be trickier since he uses a lot of 2D plots. Of course you can run Python directly from Blender, so you never know.
He has an entire series on "the essence of linear algebra". I'm a PhD student in a technical field, and rewatch that series at least once a year. It's brilliantly accessible, clear, and visually explained. I recommend the series to anyone who asks me about anything to do with matrix operations.