HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
A Boy And His Atom: The World's Smallest Movie

IBM · Youtube · 142 HN points · 13 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention IBM's video "A Boy And His Atom: The World's Smallest Movie".
Youtube Summary
You're about to see the movie that holds the Guinness World Records™ record for the World's Smallest Stop-Motion Film (see how it was made at http://youtu.be/xA4QWwaweWA). The ability to move single atoms — the smallest particles of any element in the universe — is crucial to IBM's research in the field of atomic memory. But even nanophysicists need to have a little fun. In that spirit, IBM researchers used a scanning tunneling microscope to move thousands of carbon monoxide molecules (two atoms stacked on top of each other), all in pursuit of making a movie so small it can be seen only when you magnify it 100 million times. A movie made with atoms. Learn more about atomic memory, data storage and big data at http://www.ibm.com/madewithatoms
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Apr 03, 2022 · 1 point, 0 comments · submitted by mtviewdave
A notable inconsistency, or exaggeration:

> we print our maps using the highest resolution mediums available. ... There's no sense printing at a higher resolution than humans are able to perceive. For print, 300 dpi is the gold standard.

Higher resolution media such as https://www.norsam.com/lanlreport.html are around 20000 dpi. Current semiconductor processes https://www.extremetech.com/computing/296154-how-are-process... have feature sizes around 20 nm, which is about 1.3 million dpi.

This is significantly denser than 300 dpi. It's easy to see the difference between a page printed at 300 dpi and one printed at 600 dpi, so I'd say even 300 dpi doesn't reach "higher resolution than humans are able to perceive".
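A quick sanity check of the dpi conversions above; a minimal sketch (the constants are standard, the framing is mine, not the commenter's):

```python
# Convert a medium's smallest feature size to the equivalent dots per inch.
INCH_M = 0.0254  # one inch in meters

def feature_size_to_dpi(feature_m: float) -> float:
    """Dots per inch for a medium whose smallest feature is feature_m meters."""
    return INCH_M / feature_m

print(f"{feature_size_to_dpi(20e-9):,.0f}")   # 20 nm semiconductor feature: ~1,270,000 dpi
print(f"{feature_size_to_dpi(0.1e-9):,.0f}")  # 0.1 nm atomic precision: ~254,000,000 dpi
```

Both figures agree with the "1.3 million dpi" and "250 million dpi" quoted in the thread.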

How long have more detailed mediums (or media) been available? In 01949 George Harrison reported improving the control loop of a ruling engine to a precision of, in the medieval units then in use, 0.2 micro-inches (5 nm): https://www.osapublishing.org/josa/abstract.cfm?uri=josa-41-... so that he could cut grooves for a diffraction grating to that precision, which evidently amounts to a precision of 5 million dpi. This seems to have been about a factor of 70 improvement over what Michelson had achieved before 01900. But serious difficulties attended any attempts to use such diffraction ruling engines to cut irregular patterns such as these maps—as well, of course, as limits in the data bandwidth of the necessary control systems.

More detailed media still have been demonstrated; in 01989 hackers at IBM demonstrated the ability to use an STM to position atoms with atomic precision (≈0.1 nm) https://cen.acs.org/analytical-chemistry/imaging/30-years-mo... and in 02013 other hackers at IBM used this technique to make the famous stop-motion animation, "A boy and his atom" out of a few dozen carbon monoxide molecules on a metal surface: https://www.youtube.com/watch?v=oSCX78-8-q0

0.1 nm precision is about 250 million dpi, almost a million times more detailed than Ramble's maps, or a trillion times if you count by detail per unit area. This is almost as high resolution as you're going to get with matter made out of atoms, although you can improve on it by about an order of magnitude by using, say, lithium hydride. But this resolution has been available for something like 30 years now, though you could reasonably argue that xenon atoms adsorbed to cryogenic copper were not an adequately durable medium.

It's an interesting thought to think about a scale model of Earth printed with this resolution. Ramble carefully omitted any quantitative information about the resolution of their elevation models from this post, though in this thread they say their standard DEM data is ⅓", which is 10 meters; you can download free 30-meter-resolution DEM from USGS https://www.usgs.gov/faqs/where-can-i-get-global-elevation-d... and Airbus offers to sell you 12-meter resolution data https://www.intelligence-airbusds.com/imagery/reference-laye.... Much higher-resolution global data almost surely exists but is not available to the public—interferometric microwave SAR from satellites can get down to centimeter resolution https://earthdata.nasa.gov/learn/backgrounders/what-is-sar but is a strategic advantage for change detection (surveillance) and navigation of things like cruise missiles (when GPS is unavailable).

But suppose you have a 1-cm-resolution DEM of Earth, 5.1 exapixels of data (probably about 5 exabytes, 5.1 million terabytes, about US$100 million of disk), as surely the national spying agency of every spacefaring power does. If you were to print a relief map from it at single-atom resolution—0.1 nm—how big would that map be?

Well, the radius of the Earth is 6371 km (the pole-equator distance was supposed to be 10'000 km, which would have made the radius 6366 km, but the French meridian survey lamentably made a 0.08% error in their measurements that we must now live with), and scaling that down by the ratio 1cm:0.1nm, or 100 million to 1, we end up with 63.71 mm radius, or 127.4 mm diameter. The scale model of the Grand Canyon would be 19 microns deep and 290 microns wide. The model earth, accurate to the centimeter, would easily fit in your hand, although hopefully it would be equipped with handles so a stray sand grain on your finger wouldn't dig a kilometer-deep trench across Iowa.
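The scaling is easy to verify with a few lines; a back-of-the-envelope sketch (the ~1.9 km depth and ~29 km width for the Grand Canyon are my round assumptions, implied but not stated in the comment):

```python
# Check the 100-million-to-1 scale model numbers.
EARTH_RADIUS_KM = 6371
SCALE = 1e-2 / 0.1e-9   # 1 cm on Earth maps to 0.1 nm on the model: 1e8 to 1

model_radius_mm = EARTH_RADIUS_KM * 1e6 / SCALE   # km -> mm is *1e6
print(model_radius_mm)            # 63.71 mm radius, i.e. 127.4 mm diameter

# Grand Canyon, assuming ~1.9 km deep and ~29 km wide as round figures:
print(1.9e3 / SCALE * 1e6)        # depth in microns: 19.0
print(29e3 / SCALE * 1e6)         # width in microns: 290.0
```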

You might very reasonably protest that a map that can only be read with an electron microscope, because nearly all its detail is smaller than the wavelength of light, is less than useful. So if we limit the map's resolution to what you can see with visible light—say, 400 nm—its scale is 4000 times larger. Your scale model of Earth would then be 510 meters across, the size of a small town. But you would still need a very fine optical microscope to see most of its detail.

If you printed out sheets of this map on A3 paper, it would take 6.5 million pages, mostly ocean. Each sheet would cover 10.5 km × 7.4 km.
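The page count checks out; a minimal sketch (A3 dimensions from the ISO 216 standard, Earth's surface area taken as the usual 5.1 × 10⁸ km²; arithmetic mine):

```python
# At 400 nm pixels and 1 cm ground resolution, the map scale is 25,000 : 1.
SCALE = 1e-2 / 400e-9          # 1 cm -> 400 nm
A3_M = (0.420, 0.297)          # A3 sheet in meters (ISO 216)
EARTH_AREA_KM2 = 5.1e8

sheet_km = tuple(s * SCALE / 1000 for s in A3_M)
print(sheet_km)                # each sheet covers about 10.5 km x 7.425 km

sheets = EARTH_AREA_KM2 / (sheet_km[0] * sheet_km[1])
print(f"{sheets:,.0f}")        # ~6.5 million sheets, mostly ocean
```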

There's still a lot of room at the bottom!

neolog
I'm very impressed by your comments here. I would like to know what kind of background enables you to write like this?
jacobolus
> It's easy to see the difference between a page printed at 300 dpi and one printed at 600 dpi

This depends entirely on viewer and viewing distance. 300 dpi should be roughly at the limit of what someone with 20/20 vision can distinguish from a distance of about a foot.

If you get a teenager with 20/15 vision and put them as close as their eyes can focus (say, 4 or 5 inches), they’ll be able to see a clear difference between these. But if you take an average person and look at the two images from a distance of a few feet, it will be all but impossible to tell the difference.

kragen
Subjectively, I think it's easy to see the difference between the output of a "300 dpi" laser printer from a "600 dpi" one. But can we justify that mathematically? I think distinguishing the orientation of a set of bars separated by, in ancient Babylonian units, 1 arc minute, is Snellen's definition of 20/20 vision? 290 microradians? You might have, say, a 1-arc-minute-wide line, followed by a 1-arc-minute-wide space, and then another 1-arc-minute-wide line? At a "foot" an arc minute is 88.7 microns, which works out to 286 "dpi". So, I guess you're right: 300 dpi is fairly precisely the limit of what 20/20 vision can distinguish from one "foot".

In https://dercuano.github.io/notes/bokeh-pointcasting.html I estimated the visual acuity in my good eye experimentally at about 200 μradians, which I guess means I have about 20/15 vision. Too bad I can't focus any closer than a "foot" now that I'm old. I can still see the jaggies on 300-dpi laser prints, though.

But if you have 20/20 vision and can focus at 150 mm (as most of us can before we get old, or as we can with reading glasses, or just if we're a bit nearsighted, or if we're reading in bright sunlight so the bokeh is a bit smaller), then you can distinguish lines separated by a single white pixel at 600 dpi. So in theory you should be able to print out http://canonical.org/~kragen/bible-columns.png on a regular 600-dpi laser printer and then read it with a magnifying glass—the whole KJV Bible on three pages. Two sheets of paper, if you print double-sided. (So far, I haven't managed to get it to print that clearly, but I think that's probably a matter of printer drivers and pixel grid alignment to avoid resampling.)

Now, if you have 20/15 vision, at one foot you should have about 400 dpi of resolution. Or, at the 4 inches you suggest, 1200 dpi! (1145, actually.) You should be able to get the KJV Bible onto one side of one sheet of A4 paper!
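The acuity-to-dpi arithmetic in this exchange can be packaged in one small function; a sketch under the same small-angle assumptions (treating "acuity" as the finest resolvable line pitch in arc minutes, as in the Snellen discussion above):

```python
import math

INCH_M = 0.0254

def dpi_limit(acuity_arcmin: float, distance_m: float) -> float:
    """Finest resolvable dpi for a given visual acuity and viewing distance."""
    pitch_m = distance_m * math.radians(acuity_arcmin / 60)
    return INCH_M / pitch_m

print(dpi_limit(1.0, 0.3048))    # 20/20 at one foot: ~286 dpi
print(dpi_limit(0.75, 0.1016))   # 20/15 at four inches: ~1146 dpi ("1145, actually")
```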

Anyway, I think there's a big gap between the ambition described by "The World's Most Detailed Print Maps" and "we print our maps using the highest resolution mediums available" and even "no sense printing at a higher resolution than humans are able to perceive", and the standard you describe, "if you take an average person and look at the two images from a distance of a few feet, it will be all but impossible to tell the difference." I mean there are a lot of things that humans are able to perceive, but not from a distance of a few feet: the flavor of pizza, the scent of most kinds of roses, the difference between good wine and bad wine, the individual scales on a butterfly's wing, bad breath, the subtle impatience in an outwardly tender caress—and, perhaps, even the meaning of most printed text.

And there are many, many things that humans are able to perceive, but average people are not: the meaning of an English sentence (since 83% of people do not speak English), the difference between love and lust, the historical context of colonialism, the particular version of AutoCAD used to design a building, the form of engine malfunction represented by a particular rattling noise, the shoddiness of Dan Brown's writing, the self-serving dishonesty of those who promote the "Law of Attraction," the abusive nature of proprietary-software licenses, global warming, the difference between a Bristol accent and a London accent.

So I think you are redefining the stated goals of this work downward to a really remarkable extent. What an average person can easily see from a distance of a few feet is very far from the limits of human perception.

jacobolus
Marketing copy exaggerates a bit, film at 11.

You are right if they wanted to really push the limits of human perception they could develop custom printers or buy some machines used for physics experiments or integrated circuit fabrication or whatever and try to print at 1200+ pixels/inch.

Realistically, what they are doing is making full resolution prints on commercially available large-format photograph printers, with their effort going into tweaking the shaded relief algorithms and applying photoshop filters to boost local contrast.

mparr4
> Realistically, what they are doing is making full resolution prints on commercially available large-format photograph printers, with their effort going into tweaking the shaded relief algorithms and applying photoshop filters to boost local contrast.

Yes, exactly this.

> Marketing copy exaggerates a bit, film at 11.

And yes, also this, ha. We sell wall art, so the article is intended to convey that for something you look at on a wall, we're running up against the limits of what a human can see, with normal vision, at any reasonable viewing distance. It is absolutely true that someone with good vision, the right light, and a magnifying glass could probably see some dots, but that's not typically how wall art is viewed, which is what we design for.

kragen
You should probably fix that, because lying about your product in order to get people to buy it is fraud. Even if you don't care about the ethical issues, it could have negative repercussions for you in the future. It looks like it's good enough that people would buy it even if you didn't lie about it, so I think you should capitalize on that. (Maybe you disagree? People who notice the lie will think you don't think it's good enough to sell without lying about it, and they'll wonder what else you're lying about.)

Also 300 dpi is coarse enough that, despite my aging vision, I wouldn't need a magnifying glass or particularly good light to see the jaggies from 300 mm, and I think that is a reasonable viewing distance for wall art. 300 dpi is definitely not a "gold standard" for coffee-table books or wall art. Someone with good vision, the right light, and a magnifying glass can see features at 2400 dpi, 64 times the number of pixels you're using. Why do you think Linotype made 2450 dpi the Linotronic resolution 35 years ago? Perhaps you think they didn't have any experience with printing?

kragen
I think you're being unjustifiably uncharitable by accusing them of intentional fraud. I think they're just fooling themselves because they're carried away by their enthusiasm for what they've created, which is really pretty cool.

You don't need custom printers to print at 1200+ pixels per "inch". Most off-the-shelf inkjet printers and some laser printers can do it, though at high resolutions there's a serious tradeoff between precise color and precise position. https://news.ycombinator.com/item?id=27965938 points out that standard imagesetters are 2540 "dpi" (and they have been since the 01980s). Photographic processes routinely produce images with a resolution below 10μm (2540 "dpi") and, when desired, below 2μm (12700 "dpi"). I don't mean with some kind of super niche lab equipment; I mean Kodak point-and-shoot cameras with mass-market Kodachrome film. Metal machining routinely hits precisions of 25 μm (1000 "dpi") and has for a century and a half, sometimes using machines made out of junk using nothing but hand tools; when it matters, it gets down to 1 μm (25400 "dpi"), which is roughly the measuring repeatability of any old handheld micrometer. You can buy such a micrometer on eBay for US$30 https://www.ebay.com/itm/363415676100?hash=item549d4324c4:g:... though a quality one probably costs US$60 or more, even used.

What made Harrison's achievements with the diffraction-grating ruling engines 70 years ago remarkable was not that he could position the ruling engine in increments of 25 nanometers or better (a million dpi); that, and even 2-nanometer increments, had been achieved for small gratings about a century earlier. His achievement was maintaining a total error under 5 nanometers across a distance of over 200 mm during the weeks necessary to rule a full diffraction grating. (I think the Richardson Grating Lab still uses the Michelson ruling engine he overhauled in the 01940s, though with further improvements.)

bluenose69
I came here to tell you, kragen, that I am enjoying your zeros before 4-digit dates, your derision of old units, etc. Oh, that and your informative comments, of course.
jacobolus
Fraud? Huh? When they say that these maps are the “most detailed” in their marketing materials they clearly mean that they are using a digital elevation model with higher resolution than the ones used to create other similar images available for sale elsewhere. They obviously use relatively standard digital photo printing processes, and are not implying otherwise. If you have your own source file, you can get it printed/mounted using similar technology from your favorite commercial print shop.

One can certainly produce a higher resolution image using various technologies (as you say, a photographic negative potentially has very high resolution). But so what? I really have no idea why you are talking about micrometers, diffraction gratings, and so on.

Jun 22, 2021 · 2 points, 0 comments · submitted by manjana
Jan 22, 2021 · 3 points, 0 comments · submitted by dgellow
Jun 27, 2020 · 4 points, 0 comments · submitted by unreal37
May 21, 2020 · mncharity on X
> can be a huge barrier to entry for the student

I did a category theory class a few months back. Huge barrier at times - so many moving pieces. "Err, which properties do 𝒞 and mumble(𝒟) have here, in this paragraph???"

I wanted to create an interactive version of some topics. Just mouseover elaboration and graphics, to reduce the amount of state one has to keep track of mentally, and its associated cognitive load. But the state of in-browser rendering of category diagrams wasn't quite there yet.

Best part of the class was the "after-math" (that's a 3-interpretation pun). An iterative improvement on past versions of the class, formalizing an observed custom - after the 1 hr math lecture, the room was reserved for a second hour, so people could hang out, with the instructors, in front of blackboards. In part, clarifying and explaining things in different ways.

One instructor observed something very vaguely like "I'd never approach things the way I do in lecture, if I was tutoring someone one-on-one."

One can imagine technical fixes. Mouse-over equations. FAQs. Presenting the material from multiple perspectives.

Many of them could be implemented on clay tablets. And that newfangled wood pulp. My best understanding is that, as a society, incentives are poorly aligned with creating non-wretched education content in the sciences, and I assume in math.

So my hope for XR in education, is not just the power of tools like eye tracking, but that incentives are disrupted enough to get us unstuck.

Consider eye tracking. When I've shown people this video[1] (IBM's STM stop-motion) in person, it was very common for them to ask me "what are the ripples?". Apparently it was enough of a thing for IBM to do supplementary content on it. But if I'm not there, what UI would YouTube need to have a similar dynamic?

Someone did an art project. You sit by yourself and watch a drama. An interesting drama. But why is it interesting? The video is a graph, like one of those old "choose your own adventure" games. And there's an eye tracker. So if your gaze shows you're interested in character A...

Why not ripples? Imagine the richly interwoven tapestry of science and engineering as that graph, adaptively explored, driven by interest, misconceptions, and learning objectives.

And imagine being able to watch someone's gaze patterns as they try to understand a paragraph of math. Backed by a database of everyone's patterns. And being able to respond. Not as well as an excellent human tutor in some ways, but in some ways even better.

Or we could just copy the usual wretched content into 3D and call it a day.

[1] https://www.youtube.com/watch?v=oSCX78-8-q0

yyyk
"Consider eye tracking... imagine being able to watch someone's gaze patterns as they try to understand"

Let's not? Ever? It reminds me of dystopias where a government tries to sense how really interested people are in the latest propaganda and punishes those who are too bored. Or just a small school run as a totalitarian regime where you're not even allowed to be bored.

May 11, 2020 · mncharity on Visualizing Electricity
Apropos visualizing atoms and their electrons, here's a fun sidebar. This IBM video[1] was made with an STM (scanning tunneling microscope), which sees outer electrons. So while it shows the (IIRC) carbon monoxide molecules, standing up like buoys, as little balls, the background copper surface of delocalized electrons looks smooth, with ripples. You can't see the individual copper atoms. But if you instead used an AFM (atomic force microscope), which can feel inner electrons, then you could - here's one of silicon[2].

When crafting and teaching abstract representations, it's easy to forget that these are real physical objects.

[1] https://www.youtube.com/watch?v=oSCX78-8-q0 [2] https://imgur.com/a/7Onbz8s

IEEE VR starts tomorrow (Sunday-Thursday). Entirely online. Registration is waitlisted, until they see how their infrastructure holds up. But they also plan live streaming video of all presentation sessions (papers, panels, workshops, and keynotes) on Twitch, open to anyone.

streams: http://ieeevr.org/2020/online/ schedule: http://ieeevr.org/2020/program/overview.html hashtag: https://twitter.com/hashtag/IEEEVR2020

Also working on a video-rich[1] tweet thread "Atoms are {little, balls, sticky, jiggly, bouncy, stacking, etc}", as motivation while working towards an exemplar of transformatively improved science education content, which I hope will speed conversations about that. Anyone have any favorite media of atoms?

Also on desktop panning using head tracking. Also on using RealSense t265 tracking cameras with Google Mediapipe's TF hand tracking.[2] Also on a next rev of DIY 3D shutter glasses. Also on... sigh.

[1] example media: balls https://www.youtube.com/watch?v=oSCX78-8-q0 , sticky https://i.insider.com/5249da00eab8ea2172fa799a?width=700&for... , ball in the ball (nuclei) https://www.youtube.com/watch?v=AVKZDmYrTHo [2] low-performance browser demo: https://blog.tensorflow.org/2020/03/face-and-hand-tracking-i...

person_of_color
Don't you think the VR industry is now in jeopardy with decreased consumer spending?
mncharity
Much XR effort is commercial/industrial rather than consumer. FAGM's bets are long-term. Smaller OEMs seem to focus on industry and/or CN/SK/JA. Consumer XR software effort, but for games, is on phones/tablets. US consumer game VR, well, depression and a supply-chain-troubled Xmas, versus more time at home through next winter... shrug? My very fuzzy impression is people were already expecting merely-steady growth there.
seanmcdirmid
I personally think this will make VR even more popular, especially the fitness aspects that were incredibly useful when all the gyms closed.
Mar 05, 2020 · mncharity on Watching Magnets Rust
> but without chemical bonds?

Metallic bonds. My understanding is all bonding is atoms sticking together electrostatically, with (insanely fast) "slow" lumbering nuclei, nudged around by their local e-field, and wacky electrons, adopting weird spatial distributions based on the current nuclei positions, setting that field. The bond taxonomy descriptively sorts by characteristics. Here, one is delocalization of valence electrons. Perhaps picture https://www.youtube.com/watch?v=oSCX78-8-q0&list=PLaFe0BJiho... , STM imaging of cold CO molecules sitting on a copper surface, IIRC. You can't see the individual copper atoms, and there are extended ripples in surface electron density.

About 7 years after this: https://youtu.be/oSCX78-8-q0
mrleinad
That video is about manipulating atoms, not atoms bonding and breaking
From the first paragraph of the article:

> An atom typically can’t be assigned a definite position, for example — we can merely calculate the probability of finding it in various places.

Is this really the case? Or maybe author meant 'electron' instead of 'atom'?

I mean, if we weren't able to calculate the position of an atom, how could "A Boy And His Atom" be possible?

https://www.youtube.com/watch?v=oSCX78-8-q0

titzer
There is a limit to the precision of a position measurement of any particle, and atoms are made of many particles. The Heisenberg Uncertainty Principle states that the uncertainty of the position and momentum of a particle are related. In particular, the product of the errors has a constant lower bound. Visually, if you think of (1d) position and (1d) momentum plotted on a 2d graph, then Heisenberg's principle means that particles are not points, but rectangles with a minimum area.
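Putting rough numbers on titzer's point for a single CO molecule; a sketch (the 0.1 nm localization and the ~28 u molecular mass are my assumed round figures, not from the comment):

```python
# Minimum uncertainty from dx * dp >= hbar / 2 for one CO molecule.
HBAR = 1.054571817e-34          # reduced Planck constant, J*s
CO_MASS = 28 * 1.66053907e-27   # ~28 atomic mass units, in kg

dx = 0.1e-9                     # localize the molecule to ~0.1 nm (one lattice site)
dp = HBAR / (2 * dx)            # minimum momentum uncertainty, kg*m/s
dv = dp / CO_MASS               # corresponding velocity uncertainty, m/s

print(dp)   # ~5.3e-25 kg*m/s
print(dv)   # ~11 m/s: nonzero, which is why the surface binding matters
```

So the position is well-defined at the scale of the copper lattice; it's a "definite position" only down to the limits the uncertainty principle sets, which is titzer's point.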
> It's mind blowing

:) When people watch this[1], and I'm standing next to them, they'll often ask about the ripples. But if they were watching it alone, there isn't an easy way to do that.

Someone did an art project, where you sit and watch an interesting drama video. But why was it interesting? There's an eye tracker, and the video is a graph of scenes, like an old "choose your own adventure" story. So if you're interested in character A... the video explores/emphasizes character A.

AR/VR is coming. With eye tracking (eventually... patents). So...

Imagine science education that's consistently mind blowing, rather than boring. What might that be like?

Though a cautionary note: there's apparently a problem of students, who have consistently had excellent teachers, clearly and motivationally presenting material, reaching college without the skills or inclination to themselves wrestle with a body of knowledge and distill understanding. Another opportunity for improvement. :)

[1] stop-motion animation with individual molecules: https://www.youtube.com/watch?v=oSCX78-8-q0

Mar 13, 2019 · 1 point, 1 comment · submitted by supercopter
gus_massa
It's nice to see it again, but remember that this was done in 2013. More info:

Wikipedia page: https://en.wikipedia.org/wiki/A_Boy_and_His_Atom

How it was made: https://www.youtube.com/watch?v=xA4QWwaweWA

> I'm not as pessimistic as Bret Victor [...] ABriefRant

I'd actually go one step further than that "rant".

In the real world, small physical motions usually have little control significance. Mouse movement and keyboards are among the many exceptions. With XR's hand tracking, that can change. For example, while gaming is focused on "feeling like it's real", so game hands overlap real hands, for getting work done, it's nice to have stretchy arms, so you can reach across the room to grab things. For optimal long-term ergonomics, you want a mix - sometimes you want the exercise of waving your arms, and sometimes you want to achieve the same end by merely twitching a hand resting on a knee or keyboard.

So not only did the video reduce all of human motion vocabulary, including gaze and voice, down to a single finger over a sad UI, but even that single finger was used in an impoverished manner. ;)

> XR will work much better with some pointing and voice input

Eye tracking as well. "Look; point; click" has a seemingly redundant step.

When someone is watching an education video, say IBM's atom animation[1], you want to both be able to field common questions like "What are the ripples?", but also notice that they are looking at the ripples, and volunteer related content.

One can do prototyping now, with WebVR, google speech recognition, and in-browser or google voice synthesis.

[1] https://www.youtube.com/watch?v=oSCX78-8-q0

seanmcdirmid
Voice input on its own is almost worthless.

A conversational interface is a huge step beyond voice input; it's much more than that. We aren't nearly there yet.

WorldMaker
> Eye tracking as well. "Look; point; click" has a seemingly redundant step.

> When someone is watching an education video, say IBM's atom animation, you want to both be able to field common questions like "What are the ripples?", but also notice that they are looking at the ripples, and volunteer related content.

This is why I think the gaze tracking in the Windows MR platform (which lights up, quite literally in Fluent design applications, in the just released FCU) is more important than people yet realize. It's been a part of HoloLens demos from the beginning, but it's interesting how many people haven't noticed yet.

Oct 14, 2017 · 115 points, 32 comments · submitted by ssijak
BostonEnginerd
This is one of my favorite SPM videos.

My company makes a Scanning Probe Microscope (more commonly known as an AFM) which goes into a Zeiss Scanning Electron Microscope. You can unlock some really neat applications with the combination: scanning areas which would be hard to locate, observing the tip interaction with the sample, and using the Focused Ion Beam to remove material from the sample.

With SPM, you can do a variety of measurements - measure the topology, measure the electrical properties, surface potential, conductivity, etc.

Here's a neat video of the system in action:

https://www.youtube.com/watch?v=BrsoS5e39H8

JelteF
The video explaining how this is done is also very interesting: https://www.youtube.com/watch?v=xA4QWwaweWA
Bromskloss
The informative part is 2:57–3:23.

https://www.youtube.com/watch?v=xA4QWwaweWA&t=2m57s

jv22222
One thing that I don't quite get. If a single atom is the smallest thing (other than quantum foam etc), then what is it resting on in this movie. IE what is the background made of?
burkaman
Atoms aren't the smallest thing, but that's sort of irrelevant, because this was "filmed" with a scanning tunneling microscope: https://en.wikipedia.org/wiki/Scanning_tunneling_microscope

It's not like normal photography, it's more of a visualization of an electrical current sensor. The background is just the area of the plane they were scanning that didn't have any atoms.
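The "current sensor" framing can be made concrete; a minimal sketch of why STM contrast is so sharp, using the textbook exponential-tunneling model (the decay constant is a typical value I'm assuming for a ~4.5 eV work function, not a figure from the thread):

```python
import math

# Tunneling current falls off exponentially with the tip-sample gap:
#   I = I0 * exp(-2 * kappa * d)
# with kappa around 1e10 per meter for typical metal work functions.
KAPPA = 1.1e10  # decay constant, 1/m (assumed typical value)

def relative_current(extra_gap_m: float) -> float:
    """Current relative to I0 after widening the gap by extra_gap_m meters."""
    return math.exp(-2 * KAPPA * extra_gap_m)

print(relative_current(0.1e-9))  # one extra angstrom: current drops ~10x
```

That roughly order-of-magnitude-per-ångström sensitivity is why single adsorbed molecules stand out so starkly against the flat copper background.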

Koshkin
> didn't have any atoms

That sounds a bit weird...

burkaman
Just empty space. There's a smooth plane of atoms, then a few atoms that make up your frame on top of that, and everything else is vacuum.
throwanem
Didn't have any atoms sitting on it.
craftyguy
Well what was 'it' if it wasn't made of atoms?
throwanem
A slice of quantum foam.
craftyguy
Not quite.
SAI_Peregrinus
Other atoms, but further away. Effectively the background is blurred because it's out of focus. The depth of field is thin enough that this works. (These are analogies. It's not strictly correct.)
bdamm
It's so far from correct as to be not a useful analogy.

The real question is, how is the microscope's output converted into a visual picture? Hint, it isn't by lensing light. Thus there is no "focus" to be out of.

SAI_Peregrinus
An atomic force microscope produces images by sensing atoms with a probe. It scans in a planar layer, analogous to a very narrow depth of field for a film camera. The background atoms do exert force on the probe, but much more weakly.
21
I presume it's sitting on a flat layer of other atoms, but they are picturing only the top layer, kind of how a focused photo has a blurry background.
tyingq
The background is a copper plate.

"Copper plate: The scientists used copper 111 as the surface of the animation — the same material they used 10 years ago when they built the first computer that performed digital computation operations. Carbon monoxide (CO): ok The scientists chose carbon monoxide molecules to move around the plate. Carbon monoxide has one carbon atom and one oxygen atom, stacked on top of each other."

http://www.research.ibm.com/articles/madewithatoms.shtml#fbi...

I assume focus / depth of field is why the surface looks smooth. Or whatever the equivalent of that is for a scanning/tunnelling microscope.

dogma1138
Smoothness is likely due to the grounding of the backplate.

If anything I would posit that they've filmed the backplate, and the atoms are the shadow from the scattering of the electrons that didn't hit the backplate.

thatcherc
One thing I don't understand with that quote: it seems to imply this video was made only ten years after "first computer that performed digital computation operations". Is this a typo in the original or are they referring to something else?
tyingq
Probably something like logic gates assembled by moving atoms around?
RunawayGalaxy
Pedantically speaking, most movies are made by filming actual atoms.
pmoriarty
But not at a high enough resolution to distinguish one atom from another, nor to see how an individual atom behaves.
21
In that case, all data storage is already atomic.
cosban
This is interesting, but it was also posted to Youtube in 2013. Could we get an updated title with that information?
sprash
Could you build a super advanced processor with this SPM technology? Has this been done already by anyone?
hueving
How many other tech companies fund fundamental research like this? Has it come down to just IBM?
sethbannon
What exactly are the fields appearing around each atom?
applecrazy
Electron clouds? Maybe?
6nf
They are part of the atom - the electron cloud around the nucleus. https://phys.org/news/2015-10-electron-orbitals-molecules-d....
LarryMade2
Reminds me of Dancing Demon for the TRS-80
baron816
What kind of atoms are they?
XaspR8d
They're molecules of carbon monoxide, according to the making-of clip.
avsteele
You can find another movie of actual atoms taken by me at my website.

https://avsteele.com/index.html#science (scroll down a little)

Each dot is fluorescence from a single trapped barium ion. Excitation/fluorescence is at 493 nm, so the false-color map in the video is almost realistic.

More details are in the figure caption.

This very much reminds me of https://www.youtube.com/watch?v=oSCX78-8-q0 which they did a couple of years ago
Nov 02, 2016 · 1 point, 0 comments · submitted by npt4279
Jun 26, 2016 · 1 point, 0 comments · submitted by The_Fox
The linked video "A Boy And His Atom: The World's Smallest Movie" is entertaining. IBM made a stop-motion animated movie from individual atoms.

https://www.youtube.com/watch?v=oSCX78-8-q0&feature=youtu.be

asmithmd1
It was linked with the text "all manner of jackassery" I am glad to know I am not the only one who thought that video was a waste of time.
alva
Connecting incredibly difficult to understand scientific achievements with the simple notion of the common human desire for play and creativity is truly glorious.

A beautiful symbol of prowess, intelligence, creativity and humanity. I don't think anyone should underestimate the impact that a video like that can have on a child (or even a curious adult!)

quanics
It is a nice video.

Also be sure to check out IBM's STM gallery for some more amazing images.

http://researcher.watson.ibm.com/researcher/view_group.php?i...

Relevant IBM Video: A Boy and his Atom.

https://www.youtube.com/watch?v=oSCX78-8-q0

angersock
A Boy and His Dog is more fun.
Sep 02, 2014 · 5 points, 1 comment · submitted by bane
bane
How it was made

https://www.youtube.com/watch?v=xA4QWwaweWA

What are the ripples?

https://www.youtube.com/watch?v=bZ6Hv_du2Zo

Feb 15, 2014 · 3 points, 0 comments · submitted by vpj
I expect it to be about as difficult as this - http://www.sciencedirect.com/science/article/pii/S1466604901... - but not quite as difficult as this - http://www.youtube.com/watch?v=oSCX78-8-q0 - or this - http://www.youtube.com/watch?v=Pk7yqlTMvp8
EdwardCoffin
You believe that single atoms of the most reactive element (fluorine) will not be quite as difficult as carbon monoxide molecules [1] to push around?

[1] The pixels in the first of the two videos you linked to, according to http://www.research.ibm.com/articles/madewithatoms.shtml#fbi...

moocowduckquack
I don't believe that you would have to push them around at all in the way that the IBM guys are pushing around those atoms, I put that forward as something that is harder to do. You'd probably start with batch making a ton of very small doped samples and then testing if any have unusually low resistance.
May 02, 2013 · 2 points, 0 comments · submitted by mannjani
May 02, 2013 · 4 points, 0 comments · submitted by eamonncarey
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.