HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
NEW Hindenburg Disaster in Color

Neural Networks and Deep Learning · Youtube · 192 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Neural Networks and Deep Learning's video "NEW Hindenburg Disaster in Color".
Youtube Summary
The original Hindenburg footage has been denoised, upscaled, colorized and frame interpolated to 60fps using artificial intelligence. Using a newer colorizer here, let me know what you think. Still trying to work out all of the reds.

Support our content on Patreon: https://www.patreon.com/NeuralNetworksDeepLearning

Buy Topaz Video Enhance AI for 4K upscaling (Affiliate link): https://topazlabs.com/video-enhance-ai/ref/496/

Instagram: https://www.instagram.com/neuralnetworksdeeplearning/
Facebook: https://www.facebook.com/NeuralNetworksDeepLearning

#AI

0:00 Introduction
0:04 Edited by EUGENE W. CASTLE
0:18 ALL THE WORLD ACCLAIMS
0:50 Germany's pride completes its first crossing to America.
1:12 Millions hail a new group of air
2:32 Ernst A. Lehmann, Captain
2:40 The giant mistress of the skies arrives in New York City.
3:12 Dusk — the Hindenburg nears its mooring mast.
3:26 Suddenly – the fatal moment!
4:03 A tragic blast heard 'round the world!
4:14 Rescuers rushed into this mass of twisted
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Oct 15, 2021 · 192 points, 109 comments · submitted by DamnInteresting
marcodiego
Considering it is entirely automated, this is extremely good.

I'm not at all against ML restoration. In the early days we paid artists to colorize black-and-white film; this just replaces the artists with machines.

Judging from the video, it looks like no inter-frame relation is considered, so color varies wildly from one frame to another. The video still lacks some form of stabilization, and frames still have defects inherited from the film.

Also, it was clearly recorded at different speeds, so people have an uncanny walk in the last scenes.

I hope, in the not too distant future, these flaws will be taken care of and we'll see restorations that are very hard to differentiate from original footage.

scotty79
Lack of temporal consistency for colors is absolutely horrible.

I think it indicates that the problem posed to ML was defined wrongly.

xwolfi
Exactly. The upscaling is good, but any human intern could have fixed the color for free, even by choosing a random one, and it would have produced a result closer to expectations...
cheschire
Maybe then we'll finally get an HD release of Deep Space 9.
cf100clunk
Evidently all efforts of the following HN member to improve consumer-grade DS9 video have reached an end:

https://news.ycombinator.com/item?id=24417255

The studio itself won't commission the necessary professional work for an HD (or at least upscaled) re-release.

cheschire
The article you linked in that thread had an update from the same person.

https://www.extremetech.com/extreme/324466-tutorial-how-to-u...

boulos
He came back to it :).

> Update (7/17/2021): The article below has been superseded by the results discussed in “Far Beyond the Stars” and the accompanying tutorial, “How to Upscale Star Trek: Deep Space Nine.” These articles are the latest that I’ve published and the best showcase for my latest work.

[1] https://www.extremetech.com/extreme/323905-far-beyond-the-st...

[2] https://www.extremetech.com/extreme/324466-tutorial-how-to-u...

foobiekr
It really isn't very good. It's worse than what you'd get from hand retouching outsourced to a low-cost-of-labor region, as was done in the 80s.
dylan604
>as was done in the 80s.

as is done to this day. FTFY

wiz21c
>> The video still lacks some form of stabilization.

Yeah, I wonder why. Once you are able to create colors, I'd think stabilizing the picture must be really easy. And since we're doing fancy interpretation (I say "interpretation" because ML can't know the exact colors; just watch the craft turning red and blue all the time), it would be nice to go all the way: color, stabilization, speed fixing, the whole thing.

Besides, how do we know how close such "interpretation" is to reality? What sort of validity testing is done?

tdrdt
The channel Rick88888888 has a ton of restored movies: https://youtube.com/user/Rick88888888
xdennis
I'd sooner call it vandalism. Restoration is what's shown in "They Shall Not Grow Old" where they paid attention to detail and approached it with respect.

Sharp edges and shifting red-blue colours is not what I would call restoration.

seoulmetro
It's getting sadder and sadder to see people enjoying these fake restorations. Eventually we'll have so much of a mess that the original source will be forgotten or lost.
dylan604
Hahaha, that's their creative touch so they can claim copyright on this for the next 70 years
TaylorAlexander
It’s not vandalism unless the originals have been damaged. If someone makes a digital copy and goofs around with it I really don’t see any harm being done.
foobarian
In this particular case I think there is the extra complication that the ship skin was made of a special material that likely does not burn with the usual wood fire colors. Combined with the hydrogen on the inside I actually have no idea what color the fire would have been - the most accurate way to restore this would probably have been with an actual eyewitness in the loop, or by restoring a section of the ship including the flammable paint.
xdennis
I couldn't possibly disagree more. This is nothing more than the digital equivalent of the monkey Jesus restoration: https://en.wikipedia.org/wiki/Ecce_Homo_(Mart%C3%ADnez_and_G... It's extremely disrespectful. The colours are shifting all over the place. At times it's no better than overlaying a random gradient over the original.

> We've been paying artist to color BW in the early days, this is just replacing the artist by machines.

And some people are very upset over those. (Have you ever actually seen classic movies like "It’s a Wonderful Life" in colour?) But this ML nonsense doesn't even hold a candle to those. In the manual process, for every object, they pick a colour and stick with it.

axiosgunnar
How is this disrespectful if it's just a demo of the tech progress?

Are painting artists not allowed to train by repainting Picassos until they get good?

dylan604
Sure, you can train all you want, but showing off something inferior to what can already be done is just embarrassing. At least it should be. There's a difference between showing off progress to your parents or investors, and making a public post of something clearly inferior in a way that screams "look what I did." It's sad, really.
renewiltord
Interesting. This is a common consumer view. Since pure consumers aren't acquainted with anything but the best in the field, they demand very high quality of anything they witness. That makes sense.

At least part time content creators know that creation is harder - because that first step to actually doing something is mentally hard (pure consumers find it impossible). People will make crappy things before they make good things and they'll share them with each other.

Perhaps the problem is when pure consumers, seeking further stimulus, enter creator spaces. Or where part-creators accidentally expand the audience for their fellow creators into pure consumer spaces or more-consumer-than-creator spaces like HN.

dylan604
If you are attempting to make something from scratch that has a competitor on the market now, why would you ever think someone would use your product when it is so clearly inferior to other offerings? Hope to attract business by being the cheaper option? That just means you're going to get the clients nobody wants.

There's a difference between showing something to your friends and family and opening it up to the entire public by posting to YouTube; asking "tell me what you think" means you must be pretty proud of it. If you were proud of the results from this, then just wow.

In full disclosure, I've spent many an hour in the chair restoring film/video. I've written many a tool for this endeavor for internal use, as well as used professional tools. I have beta tested software that later went to production release. I have sent content out to outsource the work and critically evaluated the results. If this was the result someone sent me after suggesting they could do the work, I would never send them the work, and I would vocally tell others not to waste their time.

This isn't even good enough for transferring someone's 8mm footage for viewing on modern devices, let alone professional work.

renewiltord
It’s put up to share, man.

You’re writing this comment of yours and posting it publicly. Are you so incredibly proud of it? It’s riddled with typos and wouldn’t get you a passing grade in middle school.

That’s what I would say if I were being overly critical of someone sharing their opinion. It really doesn’t need this degree of hyperbolic value judgment. If it’s such rubbish it will easily be outcompeted for attention.

registeredcorn
The amount of arrogance you're expending in a field you appear to know very little about is, frankly, shocking.
wizzwizz4
Presumably, because they're training on a video of the Hindenburg disaster.
ramblerman
It's not like the original video is full of 'tact'.

> Suddenly - The fatal moment!

registeredcorn
I can't tell if you're being purposefully obtuse or not, but I'll assume good faith.

I can't speak for the other fellow who commented, but I find it offensive for at least two reasons:

1) The demo was of a video involving deaths. It would be equally distasteful if it were of a single-person car crash, even if the film didn't directly show the crash victim's skull and front teeth strewn across the hood.

Simply because this happens to be of a famous disaster does not make the choice of subject any less distasteful. To be clear, I am also not claiming that restoration work can't be done on content involving fatalities. Rather, I'm saying that such a technology should be presented in a less cavalier manner. This comes off as very callous and incredibly disrespectful. "Hey guys!! Look at this NEAT thing I did to a video of people burning to death! Cool, huh!?"

As of 2009, there were still two direct survivors of the disaster. I expect the number of surviving immediate family members to be significantly higher, and I doubt many would be thrilled that the deaths of long-dead loved ones were used for some inane tech demo. Imagine for a second that your mother died in that disaster. Would you feel so flippant about it? I sincerely hope not.

2) The field of B&W restoration is the collision of several major disciplines:

* Historian. You are handling and attempting to preserve materials beyond value to families.

* Genealogist. You are documenting and "breathing life" into an insanely personal, highly sensitive area of someone's bloodline. The work you do will directly impact multiple living people and possibly many future ones, and it will be interpreted as fact, not a suggestion or approximation. When the restoration is over, that is what's going to stick around hundreds of years from now. If you mess that up, you have committed a grave insult against everyone who wants to honor the memory and the legacy of that person. It puts you at the lowest of the low for your entire profession.

* Therapist. Your actions can help give solace and healing to a person who might have very, very little to remember a loved one, their father or brother. A child who died far too young. Your work is, in some small degree, to help mend that ache.

* Artist. Even if you have vast knowledge of the most likely colors of clothes a person might have worn in a specific era, it's impossible to get it right every single time. Getting as close as is reasonably possible takes not only "guessing" but a finesse around the idea of "suitability." Perhaps their shirt was more of a garish neon green. That might be the most likely thing that was worn, but is that what shows the most honor and respect to their memory? Is the client going to be insulted if their beloved uncle shows up in a puke-colored jacket and salmon-colored trousers? It might be what they actually had on, but is it going to help with that mending and healing process? Maybe, if they were known for wearing garish outfits. But for most, it would do a disservice, and be a net negative. This is where the human element comes into play. You need context for the person behind the photo. You need the background. You need that personal interaction, or else you aren't doing your job.

There are a few other areas that come to mind, but I'm sure my blathering is tedious enough already.

titzer
The original medium contains information collected at the time, with the technology available at the time. As such, it represents a time capsule that includes much more context than we can possibly realize; especially analog filmstock and audio. "Upscaling" and "autocolorizing" is injecting inferred (and often wrong) context from today using today's technology and context, tainting, corrupting (IMHO), and again, IMHO, ruining what the original artifact represents.

In our current technological, digital zeitgeist, anything with higher and better resolution seems universally better and must absolutely replace what was there before. But this is just a shifting fad, unrecognizable to the people who were alive then, and probably unrecognizable to people in the future.

Preserve, conserve, protect and pass on. Don't corrupt and "better" these things.

inglor_cz
The crazy, crazy fact:

Fatalities: 36 (13 passengers, 22 crewmen, 1 bystander)
Survivors: 62 (23 passengers, 39 crewmen)

Looking at the video, you certainly would not expect two thirds of the people on board to survive the hellish fire.

jacquesm
Indeed. A little over half a minute from the start of the fire to total destruction of the airframe; it is amazing that there were that many survivors. Important to note that the crew and the airframe itself were two separate components, but that the passenger area was embedded in the fuselage.

Even more miraculous, once you have seen that footage, is that some of the passengers and crew walked away without major injury.

marcodiego
It is mostly hydrogen, so most of the energy was dissipated in the first few seconds. Most of the other materials either do not burn well or have very low thermal capacity. It was not at its highest altitude when the burning started, and drag slowed the fall to speeds well below that of a free-falling person.

I'd estimate that most people who died were trapped or unable to move. Those who were free to run and had enough "air" were very likely to survive.

lovecg
I’ve read that most fatalities were people jumping out, and those who stayed inside until the burning airship touched down mostly walked away.

Edit: that’s not quite correct, Wikipedia has a better summary https://en.wikipedia.org/wiki/Hindenburg_disaster

m0llusk
Hydrogen does not burn energetically, especially before it has been mixed with oxygen. The tragedy was caused by the coating used on the exterior which was essentially solid rocket fuel.
speedybird
> Hydrogen does not burn energetically, especially before it has been mixed with oxygen.

Drop the word "especially" and I might agree, but a mere party balloon of hydrogen mixed with oxygen in a stoichiometric ratio going off will sound like a rifle and rattle windows.

jwiz
When I was young, I volunteered at a science museum that would, as part of its chemistry show, fill a balloon with hydrogen and ignite it with a Tesla coil. One time, we (~15-year-old kids) convinced the person giving the show (maybe a college-age kid?) to also mix O2 into the balloon, instead of only H2.

The boom was so loud that management folks on the other floors sent people down to see WTF was happening. I don't think he got into (much) trouble, but he sure never did that again.

zardo
I don't know why this theory is repeated as though it's fact. The explanation that a lifting gas bag ruptured, mixing its contents with the air, seems a better fit to the witness accounts.
Zuider
New footage shows the tail fins catch fire before the gas bladders erupted. Those tail fins must, therefore, have been quite flammable. The video suggests that the trigger for the fire was the release of a large build-up of static electricity as the ship approached the ground. An airship is essentially an oversized Leyden jar, and such a build-up would cause problems even for ones using helium for lift.

https://www.youtube.com/watch?v=UFCgipjR2ow

zardo
> New footage shows the tail-fins catch fire before the gas-bladders erupted.

In the film you link the entire rear half of the craft is on fire from the first frame.

evgen
The Hindenburg was also reportedly quite tail-heavy coming in for landing, which suggests a leak. If it was leaking at the tail and this was triggered by static discharge, it would fit most of the facts.
dangerbird2
That theory has been largely discredited. There weren't nearly enough reagents in the fabric to create a large thermite reaction. An oxyhydrogen explosion is the most likely explanation: there was probably a leak prior to the explosion, which allowed air to mix with hydrogen inside and around some of the gas bladders. Also, a walkway ran through the bladders and acted as an oxygen source once the bladders burst, as evidenced by flames being directed through the axial walkway.

https://en.wikipedia.org/wiki/Hindenburg_disaster#Incendiary...

Someone
Also, hydrogen being much lighter than air, a lot of the burning and associated heat was above the zeppelin.

Slightly related question: this film talks about “white hot steel”. It wasn’t steel (https://en.wikipedia.org/wiki/Duralumin#Aviation_application...). Was it white hot?

userbinator
Heat rises, and the passengers were all below the fire. I suspect many of the deaths were from being crushed as it fell on them.
contravariant
I've noticed this happens quite often with accidental explosions. Explosions are not as lethal as you might expect, unless someone deliberately ensured they would cause a lot of damage.
Zuider
It was a design flaw. The flames were able to travel quickly through the central passage which passed through the center of the gas-bladders. The footage shows flames venting from the nose of the airship.
mongol
Were they all in the zeppelin when it caught fire?
wodenokoto
Yes. It hadn't landed yet.
loeg
One of the deaths was a ground crewman, Allen Hagaman.
dangerbird2
Apparently, most of the passengers were along the window bay in preparation for landing. They were able to hop out of the windows once the guest compartment reached a safe distance from the ground, and before hot wreckage could fall on them.

The three things that saved them were: A) an airship, even one leaking and on fire, falls slowly enough to safely evacuate; B) the majority of fuel sources were above the crew and guests; and C) most of the guests had an easy exit.

jacquesm
About half of the guests had an easy exit; the other half were on the other side of the airship, where the door had become jammed, and most of the people on that side perished in the flames.
GeekyBear
I've really come to enjoy these sorts of upscaled videos that allow a view of various cities in the early 1900's.

New York: https://www.youtube.com/watch?v=85WMpJMv8aA

San Francisco: https://www.youtube.com/watch?v=VO_1AdYRGW8

Paris: https://www.youtube.com/watch?v=JOPaxhhgyd8

Wuppertal, Germany: https://www.youtube.com/watch?v=EQs5VxNPhzk

bsenftner
In the early 90's I was working in early streamed digital video. One of the projects was a history-of-aviation documentary. The original negatives of early flight, including the original Wright Brothers flights, are lost, but in many cases paper printouts of them remain: before copyright was extended to film, to get a copyright a film had to be printed to paper, and those paper printouts are all that survive. This being the early 90's, there was no machine learning, nor a formal field of computer vision as we know it now. We painstakingly scanned the thousands of feet of printed film and reconstructed them as CLUT-127 (color look-up table) animations. I seem to remember that project got streaming-media awards of some sort. It was a long time ago.
diggan
Me too; it's really interesting to see, and somehow easier to connect with when the quality is better, compared to the black-and-white versions with strange frame speeds.

Here is another one:

Barcelona, Spain (1911): https://www.youtube.com/watch?v=P-3NlMAdq9I

GeekyBear
I agree. Correcting the frame-rate jitter and adding motion stabilization (in addition to upscaling the resolution) really does result in a much more engaging view into the past.

I'm perfectly happy without colorization, but also willing to overlook it.

the-dude
Groningen, The Netherlands : https://www.youtube.com/watch?v=ogsTc9JLfWY
GeekyBear
This source material is very well done, given its age.
pikelet
That is truly interesting, but I also find it quite depressing. There's hardly a tree in sight. I'm glad that I did not live in such a place.

Edit: The one from the Netherlands is lovely though, and an example of how life in our cities could be without cars.

the-dude
The city council started pushing cars outwards from the late 70s onwards and the city is more or less back to the situation at the time of the video.
FiberBundle
This one of Berlin, though not as early as the ones you posted, is the most spectacular one I've found:

https://m.youtube.com/watch?v=_YSDgruADlE

This gives a much better view into what everyday life must have been like at that time; it almost reminds me in some ways of movies such as Koyaanisqatsi.

CamperBob2
This is absolutely astonishing. The colorization isn't great, but you can tell it will get much, much better as the algorithms improve.

The stabilization, however, is ... well, astonishing. There is zero visible judder except in a very few places.

andreasley
A beautiful animation of the ship's design and the disaster:

https://www.youtube.com/watch?v=VJy17qZmhjE

greenail
The temporal color jitter is pretty bad, but the other processing seemed to crisp it up. Based on my watching of Two Minute Papers, I have to guess they can do better on the temporal consistency of the color.
fastball
My guess is that they are doing this frame by frame with an NN built for still images, and that is why there is so much jitter.

Providing actual video as input to the NN could probably do away with this.

dane-pgp
Two more papers down the line, I'm sure they'll be able to fix that.
RotaryTelephone
What a time to be alive!
consumer451
Károly Zsolnai-Fehér is just so darn charismatic. That has to be one of the reasons Two Minute Papers[0] has >1M subs. Is there anyone in the space even close to that level of engagement?

[0] https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg

txdm
I never noticed until now there were big honkin swastikas on the tail.
kumarsw
While it's not very visible in the usual pictures of the disaster, it is very visible at the beginning of Indiana Jones and the Last Crusade. I'm a little curious why people have forgotten that, maybe the Last Crusade is no longer a Blockbuster staple?
gonzus
I did notice them as well. I wonder if the original video had already been edited so that the swastikas were (in my opinion) mostly clipped out, or if this is a more recent edit...
WarOnPrivacy
I remember swastikas in the 1970s. I don't think they were ever edited out.
GhostVII
I think it would be cool if you could have some kind of combined human+ML system, where you use ML to detect the different objects and track them between frames, and a human chooses what color to use for each one. In a couple of shots at least, the Hindenburg showed as fully red instead of silver, and I don't think it is always possible for an algorithm to know the correct color without external data. If the algorithm could just say "I've identified this object between frames," and then the human could choose the correct color, that could be the best of both worlds.

Or maybe you could feed in some reference photos with the same objects, but already colorized. And then the algorithm could match objects from the reference photos to those in the black and white ones to get the colors.
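A crude, global version of that reference-photo idea can be sketched by matching per-channel color statistics (the function name is hypothetical; a real system would match per detected object rather than over the whole frame):

```python
import numpy as np

def transfer_color_stats(colorized, reference):
    """Shift each channel of `colorized` so its mean/std match a
    reference photo; a crude global stand-in for per-object matching."""
    out = colorized.astype(float)
    for c in range(3):
        s, r = out[..., c], reference[..., c].astype(float)
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return out
```

This is essentially Reinhard-style statistics transfer done in RGB; it fixes the overall palette but cannot, on its own, keep an individual object the same color across shots.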

canjobear
It’s interesting that the color of the Hindenburg itself is always shifting. The object is so out of sample that the ML colorizer has no idea what to do with it.
userbinator
I was half-expecting it to turn into the colour of a manatee.
diklon
Although it visually appears to be high resolution, it feels like an illusion, like my brain still receives no additional information vs. the original. Which is odd, because I can look at individual elements and see more precision.
wly_cdgr
This sent me down a zeppelin Wikihole and led me to a much greater appreciation of the engineering and operational complexity of these airships. Thanks!
ghostly_s
Please add "(very poorly)" to the title.
oofbey
They should add a regularizing term between frames, like a simple L2 loss between output pixel values, to stabilize the color.
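A minimal sketch of that idea, assuming the colorizer's outputs are available as numpy arrays: the L2 term would be added to the training loss, while the exponential blend is a cheap post-hoc alternative (`alpha` is a made-up parameter):

```python
import numpy as np

def temporal_l2_loss(prev_out, cur_out):
    """Per-pixel L2 penalty between consecutive colorized frames;
    added to the training loss, it discourages color flicker."""
    return np.mean((cur_out.astype(float) - prev_out.astype(float)) ** 2)

def smooth_colors(frames, alpha=0.7):
    """Cheap post-hoc alternative: exponentially blend each frame
    toward the previous output to damp frame-to-frame jitter."""
    out = [frames[0].astype(float)]
    for f in frames[1:]:
        out.append(alpha * out[-1] + (1 - alpha) * f.astype(float))
    return out
```

The trade-off is the usual one: a stronger penalty (or larger `alpha`) means steadier colors but slower reaction to genuine scene changes.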
bsenftner
And stabilize the frames, an early assignment in many computer vision courses. No need for the text frames and video images to be so shaken/stuttered.
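For the stabilization part, a classic first step is phase correlation, which estimates the global translation between two frames from the peak of the FFT cross-power spectrum (a sketch of the integer-shift case; real stabilizers also handle rotation, zoom, and subpixel shifts):

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer (dy, dx) translation of frame b relative
    to frame a via phase correlation."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    h, w = a.shape
    # wrap shifts larger than half the frame back to negative values
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```

Once the per-frame shifts are known, stabilizing is just translating each frame by the negative of its accumulated shift.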
LargoLasskhyfv
Err...WOW! That was fast...

And I've been unaware that Manhattan had so many skyscrapers at the time. Or maybe not. But one can glimpse that for a few seconds in the video. Awesome.

jgwil2
Yeah it's wild how modern Manhattan looks in the background. I've seen pictures of course but in color/motion it is really striking.
jacquesm
Another way of looking at that is how little change a typical city will undergo once it is laid out. This goes for Rome and Amsterdam just as it does for NYC.

I was away from Amsterdam for about a decade before coming back, after having lived there for a stretch of nearly 28 years (with a few longer absences). What surprised me is how little had changed; and yet, here and there, there were buildings that I was pretty sure weren't there before, or familiar landmarks that had gone. Over many decades or even centuries that kind of change adds up. Even though for Amsterdam in particular there was at some point a plan to 'overhaul' the city and make it more modern (which fortunately was arrested at the earliest stage, though it did do a lot of damage to the east side of the river Amstel), the vast bulk of the city has been unchanged for decades.

What real change there is is expansion wherever there was undeveloped land, but that isn't change so much as it is simply growth; what was there before remains.

https://viewpointvancouver.ca/2019/10/27/the-1960s-when-the-...

https://en.wikipedia.org/wiki/Expansion_of_Amsterdam_since_t...

My mom still remembers that just behind the Olympic stadium in Amsterdam there were meadows and cows grazing! (That's now deep inside the city).

If you're interested in when buildings in Amsterdam were first constructed you can find that information in:

https://www.wozwaardeloket.nl/

Just zoom in on the city, click on any building outline on the map, and on the right-hand side you will find all kinds of interesting information, including the year that ground was broken or the building was formally entered into the registry. This information is not always available, but the site still contains a wealth of data.

nobrains
https://en.m.wikipedia.org/wiki/USS_Akron
fitzroy
Joseph Goebbels and the Amazing Technicolor Dream-Blimp
bsanr
Huh. Never noticed the swastikas before.
ChristianGeek
Manhattan has never looked so green!
spodek
Came here to comment on how much park space there was in lower Manhattan. Not the same type of loss as the explosion, but a tragic loss nonetheless.
teeray
This reminds me so much of the Gene Wilder Willy Wonka Tunnel Scene
marstall
a real tragedy, but also a succinct metaphor for Nazi Germany!
spodek
Or for our culture of growth, believing we can outdo nature, like stories in The Onion's reporting on the Titanic: "World's Largest Metaphor Hits Ice-Berg" https://lh4.ggpht.com/-gWV2MSK4odI/T4yLpd1STpI/AAAAAAAASrw/o...
michaelterryio
A bunch of the comments in this thread are inappropriate. There’s no justification for the over-the-top insulting language on this forum.
323
Needs TikTok song:

https://www.youtube.com/watch?v=fXLicO0CRvk

turtletontine
I don't know if this is a breakthrough but... Frankly this looks awful.

I think I could handle unrealistic colors but the way they flicker so much frame to frame is really jarring.

It's interesting that the algorithm seems to generate chromatic aberration at hard edges? Most clearly around the letters on the title cards.

xbkingx
I kept waiting for it to get better than a light brown haze with flashes of green... but it never did. The fire looked okay, but even that looked weird at the beginning of the sequence.

What really made me laugh was the blue smoke followed by a headline implying that was all at night. The AI filled in pristine blue skies and inverted the wreckage colors. It actually made up history. I'm calling it: Unintentional automated disinformation.

sp332
The original video was interlaced. The deinterlacing part of the algorithm is not very good, and I think they would have gotten better results with a special-purpose pass before handing off to the neural network.
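For illustration, the simplest such special-purpose pass splits each interlaced frame into its two fields and line-doubles them, a crude "bob" deinterlace (real deinterlacers such as FFmpeg's yadif do motion-adaptive interpolation instead):

```python
import numpy as np

def bob_deinterlace(frame):
    """Split an interlaced frame into its two fields and line-double
    each one: a crude 'bob' deinterlace that trades vertical detail
    for two clean progressive frames (doubling the frame rate)."""
    top = np.repeat(frame[0::2], 2, axis=0)     # even scanlines
    bottom = np.repeat(frame[1::2], 2, axis=0)  # odd scanlines
    return top, bottom
```

Feeding fields like these to the colorizer would at least avoid the combing artifacts that confuse a network trained on progressive images.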
jacquesm
What is missing is object continuity with respect to color. That would quiet things down tremendously. Right now it is as if every object gets re-painted from one frame to the next in a completely new (and often garish or wildly incorrect) color.
MauranKilom
I believe there already is quite some continuity; otherwise the colors would flicker much more strongly from frame to frame. In the video they vary smoothly from frame to frame.
Someone
I think they get that not because of putting in that constraint, but only because subsequent frames are similar, which makes the coloring algorithm pick similar colorings.
jacquesm
There are many instances of frame-to-frame discontinuity that I can't explain other than by a lack of object detection and labeling. It would be less wrong to use the color from the previous frame even if the lighting changes than to use an entirely different hue for the same object.

Only things like TV screens and other displays (and some interesting objects covered with micro surfaces that can cause light interference) can change color that rapidly given the same color incident light.

spqr0a1
Or using a better source video. As there's no way the original film was interlaced.
bagels
Original film may have flickered, or the playback process to capture may have induced flicker.
skrebbel
I'm now imagining a low-paid 1930s film job called "interlacer" which consisted of taking every frame and drawing tiny, perfectly straight black lines on it, with a tiny fineliner and a tiny ruler, on odd and even rows, every next frame.
speedybird
Analogue video (e.g. television) interlacing did actually begin in the 1930s, a few years before the Hindenburg disaster. But the original disaster footage would have been shot on film (not interlaced of course), cut into newsreels and then converted to a television signal via telecine.

Supposedly the original newsreels still exist, preserved by the National Film Registry: https://en.wikipedia.org/wiki/Hindenburg_disaster_newsreel_f...

dylan604
I know of a company that did the opposite to handle 2:3 pulldown for rotoscoping. They had a photoshop action that would select every other line, copy&paste to a new doc, collapse the empty space, and then paint the frame. Then, do that a second time for the other half. Finally, more actions to recombine.

I couldn't make this up. My jaw just hit the floor when it was explained to me the first time. I still shake my head typing it up to post here.

egorfine
A flow like this could have been someone's job security.
Tronno
I had to look this up: https://en.m.wikipedia.org/wiki/Three-two_pull_down

What tool should have been used to do that conversion (in reverse)? To their credit, it sounds like they solved the problem and moved on.

dylan604
They solved the problem by increasing the work unnecessarily. To explain the issue in more detail and spare peeps a jump to Wikipedia: 2:3 pulldown is how 24 fps film was converted to 29.97 fps interlaced video. This increases the number of frames by roughly 20%, and it does so by repeating fields of the original progressive data. When you step through such 29.97 fps video frame-by-frame, you see a repeating pattern of 3 progressive frames followed by 2 interlaced frames. To do this asinine, made-up workflow, they then increased the number of frames yet again. Instead, they should have done an inverse telecine (IVTC), which reverses the 2:3 pulldown and returns 24 fps progressive frames. When rotoscoping, you definitely do not want to be creating additional frames "just because". A small amount of logic should have suggested that these interlaced frames were odd, since the original source had nothing but clean progressive frames. If the interlacing was introduced in the conversion to video, surely it can be undone.

They, like you, decided it was acceptable to do whatever needed to be done at whatever expense, rather than taking 10 minutes to find the proper workflow, which would have saved them money. A simple phone call to the company they used to transfer the film to video could have explained how to do this in less than 5 minutes. I know because I worked for the company doing the film transfer and had helped several other clients with video-for-film post workflows. Today it's even easier, because there are a bazillion write-ups on how to do this posted on the web.

Edit: I didn't answer the question directly after pontificating. IVTC is the process that needed to be applied. Many tools exist(ed) for this. The tools at this company's disposal would have been After Effects, Avid, etc. Later more tools became available, like AVISynth, FFmpeg, and other dedicated tools people created to tackle this directly.
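The cadence described above can be sketched in a few lines of plain Python, treating each frame as a (top field, bottom field) pair. This is a toy model of 2:3 pulldown and its inverse, not the implementation of any real tool; in practice you'd reach for a filter like FFmpeg's fieldmatch/decimate or the AVISynth IVTC plugins.

```python
# Toy model: 2:3 pulldown turns 4 progressive film frames into 5 video
# frames (cadence AA BB BC CD DD); IVTC re-pairs the fields to undo it.

def pulldown_23(film_frames):
    """Convert groups of 4 progressive film frames into 5 video frames."""
    video = []
    for i in range(0, len(film_frames), 4):
        a, b, c, d = film_frames[i:i + 4]
        video += [
            (a[0], a[1]),  # A top / A bottom  (progressive)
            (b[0], b[1]),  # B top / B bottom  (progressive)
            (b[0], c[1]),  # B top / C bottom  (interlaced!)
            (c[0], d[1]),  # C top / D bottom  (interlaced!)
            (d[0], d[1]),  # D top / D bottom  (progressive)
        ]
    return video

def ivtc(video_frames):
    """Inverse telecine: for every 5 video frames, re-pair the fields
    and drop the duplicate to recover the 4 original film frames."""
    film = []
    for i in range(0, len(video_frames), 5):
        f0, f1, f2, f3, f4 = video_frames[i:i + 5]
        film += [
            (f0[0], f0[1]),  # A, untouched
            (f1[0], f1[1]),  # B, untouched
            (f3[0], f2[1]),  # C: top field from frame 4, bottom from frame 3
            (f4[0], f4[1]),  # D, duplicate fields collapse to one frame
        ]
    return film
```

The round trip is lossless, which is exactly why rotoscoping the pulled-down frames was wasted effort: the two interlaced frames per group carry no fields that aren't already in the recovered progressive frames.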

None
None
rob74
Looks like the AI can handle objects it "knows" (people, grass, sky etc.) okish, but is completely confounded by the Zeppelin - which is sad, as the Hindenburg is of course present in most of the shots. Maybe they should have trained it with some color(ized) photos of the Hindenburg (like this one: https://www.alamy.de/das-deutsche-zeppelin-luftschiff-die-hi...) first?
xyzzy21
This is where a human videographer could trivially do better or add value. The obsession with "automating everything" is a real disease in ML and CS generally. Sometimes it makes things worse! This is one of those situations.
gremloni
Only thing that looked good was the closeup of the captain. That one short segment was very well done.
AdrianB1
For an early alpha version, it is ok-ish. For anything but that, it is terrible.
lvs
And there's lots of geometric distortion too. Buildings don't look like this (0:38): https://youtu.be/GhoGTqSfUzs?t=38 . It looks like Google Maps's (terrible) 3D reconstructions of satellite imagery. (Aside: I wish Maps had never done that. The source imagery is 1000% more useful than their reconstructions.)
hinkley
I had to stop 60% of the way through. It was giving me a headache.

This video proves to me that you can't denoise, upscale, and then colorize without stabilizing first. The result is too jarring, given that image stabilization is a well-known transform.

Aardwolf
There are also certain frames where the smoke turns into green trees
Igelau
People complaining about the colors shifting don't realize it pulsated red in time to the phat beats it was dropping.
oofbey
The de-noising and super-resolution looks quite good to me. It's the colorization that's super unstable and looks ugly. If they'd just left it B&W it would IMHO be more impressive.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.