HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
DynaFlash: High-speed 8-bit image projector at 1,000fps with 3ms delay

Ishikawa Senoo Laboratory · YouTube · 176 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Ishikawa Senoo Laboratory's video "DynaFlash: High-speed 8-bit image projector at 1,000fps with 3ms delay".
YouTube Summary
DynaFlash can achieve 8-bit image projection at up to 1,000 fps with a minimum delay of 3 ms. The high frame rate is realized with a digital micromirror device (DMD), a high-brightness LED, and a high-speed processing module that controls these two devices. The small delay is realized by our own communication module between the computer and DynaFlash. We aim to develop new projector applications by integrating this high-speed projector with the high-speed vision systems we have developed previously. As a first example application, we have realized a projection mapping system for high-speed moving objects.
http://www.k2.t.u-tokyo.ac.jp/vision/dynaflash/
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Jul 30, 2015 · 176 points, 11 comments · submitted by dhotson
nitrogen
I'm so glad that we've finally moved past the silly arguments that human reaction times are too slow to need more than 60fps. It's not about reaction times, it's about latency. The future, in this regard, appears bright.
pol0nium
For those who didn't notice, the video was published on March 6, 2012.
idoco
I remember getting so excited watching Johnny Lee demo the movable projected displays 7 years ago on YouTube, and thinking that this could be the future - youtube.com/watch?v=liMcMmaewig

This looks like a really big step toward getting something like this into every house :)

hughes
Millisecond-level feedback is incredibly exciting. It's only at these rates that human perception seems unable to detect the disparity between the real world and digital interaction.

This strongly reminds me of a video[1] from Microsoft Research a few years ago in which touch-based interaction was demonstrated at 1ms latency. It's surprisingly more realistic than even the 10ms level.

[1] https://youtu.be/vOvQCPLkPt4?t=52s

zokier
That video is from 2012 and represents 100ms as typical at that time. I wonder if we have made any progress in the three years that have passed. Does anyone know what the touch latency of typical Android device/application would be?

I'd also be curious to know what sort of keyboard/mouse-to-display latency PCs typically have. I've heard some talk about display input latency, but I don't remember hearing anything about e.g. keyboard latency or OS latency in this context.

Qantourisc
Your link is one of the main reasons I use actual paper to write on.
Mithaldu
As much as I agree that latency reigns supreme for the human mind, I find that video a little disingenuous, since they compare vector drawing with bitmap drawing.
leni536
What about drawing in bitmap with low latency and building the vector drawing from buffered user input in the background? Then replace bitmap with vector when it's available. It could be low latency and vector drawing at the same time.
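The hybrid scheme leni536 describes can be sketched as a two-path pipeline: raw input is rasterized immediately for low latency, while a slower background pass fits vectors from the buffered input and swaps them in. This is a minimal illustrative sketch, not a real implementation; the class, its method names, and the trivial endpoint "curve fit" are all hypothetical stand-ins.

```python
from collections import deque

class HybridCanvas:
    """Low-latency bitmap preview with deferred vector replacement (sketch).

    on_input() paints raw samples right away; background_fit() later
    fits a vector stroke from the buffered input and replaces the
    bitmap preview with it.
    """
    def __init__(self):
        self.bitmap_strokes = []   # raw point lists, shown immediately
        self.vector_strokes = []   # fitted curves, swapped in later
        self.pending = deque()     # buffered input awaiting fitting

    def on_input(self, stroke_points):
        # Fast path: display the raw samples at once (millisecond budget).
        self.bitmap_strokes.append(stroke_points)
        self.pending.append(stroke_points)

    def background_fit(self):
        # Slow path: fit vectors from buffered input, then swap them
        # in for the bitmap previews.
        while self.pending:
            points = self.pending.popleft()
            curve = self.fit_stroke(points)
            self.vector_strokes.append(curve)
            self.bitmap_strokes.remove(points)

    @staticmethod
    def fit_stroke(points):
        # Placeholder for a real curve-fitting step (e.g. Bezier
        # fitting); here we just keep the stroke's endpoints.
        return (points[0], points[-1])
```

The user never waits on the fitting step, so perceived latency is that of the bitmap path, while the final document ends up fully vectorized.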
rasz_pl
>only at these rates that the limits of human perception seem to no longer detect the disparity

IF you use the brute-force approach of vomiting pictures in the general direction of an eye.

The human vision system is incredibly meager and clever at the same time. Your eye doesn't really capture the whole picture all at once: each eye has a blind spot where the optic nerve exits the retina, part of your nose obscures your vision, you can only see sharp shapes in the center and fast movement at the periphery, and the data stream is a shaky, blurry, fragmented mess. It's the brain that filters and glues it all together into a coherent picture.

Example: go to a mirror and look into your eye. Now find another person and look into their eye; you'll be surprised to find that yours looked steady while theirs is all over the place. Now check out your blind spot: http://io9.com/5804116/why-every-human-has-a-blind-spot---an...

Saccades produce a lot of blurry artefacts that are simply thrown away. I read somewhere that the brain ignores about 2-3 hours of visual data a day.

We already have incredibly fast micromirrors; I wonder if someone is working on a microprojector that tracks saccades and displays only the part of the picture the eye is currently fixated on. This would allow constructing high-resolution scenes using a lower-resolution projector. A 120° FoV requires >500 Mpixels, but you can only see ~7 Mpixels at a time. A quick back-of-napkin calculation tells me you could bump the perceived resolution of the Oculus by ~40x.
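Taking the comment's own figures at face value (they are back-of-napkin assumptions, not measurements), the pixel-budget saving of a saccade-tracking projector works out as follows; the separate ~40x Oculus figure depends on that headset's panel resolution, which isn't given here.

```python
# Back-of-napkin foveated-projection budget, using the comment's
# stated figures (assumed, not measured):
full_fov_mpix = 500   # Mpixels to cover a 120-degree FoV sharply
foveal_mpix = 7       # Mpixels the eye actually resolves per fixation

# A projector that tracks saccades and renders only the fixated
# region needs roughly this many times fewer pixels per frame:
savings = full_fov_mpix / foveal_mpix
print(f"~{savings:.0f}x fewer pixels per frame")  # ~71x
```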

random_passerby
Moreover, it seems our perception of time and space changes during saccadic eye movements.

At least according to this paper from ten years ago: http://www.nature.com/neuro/journal/v8/n7/abs/nn1488.html

Maybe there is more recent research in that field.

jay-saint

  " I read somewhere brain ignores about 2-3 hours of visual data a day."
I imagine that is mostly due to PowerPoint and meaningless dashboards.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.