HN Theater

The best talks and videos of Hacker News.

Hacker News Comments on
I Made My Own Image Sensor! (And Digital Camera)

SeanHodgins · Youtube · 121 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention SeanHodgins's video "I Made My Own Image Sensor! (And Digital Camera)".
Youtube Summary
It actually works! Finally got around to building my own digital camera from scratch. It's not an easy project, but if you want to recreate it, there are resources below!

Support my Free Open Source Projects by joining the Patreon! -

The 8-Bit Guy Gameboy Camera Video -


PCBWay Affiliate Link (Get $5):

Some Tools (Amazon Affiliate):
Soldering Iron Hakko FX888D -
ESD Safe Tweezers -
Rework Station -
Power Supply -
Oscilloscope -
3D Printer -

Tech Instagram:

Instrumentals Produced By Chuki
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Dec 31, 2019 · 106 points, 20 comments · submitted by glax
Visually, I'm reminded of this project.

I know CCDs for astrophotography have large photosites to increase the signal/noise ratio. It seems like this array would have the same qualities.

Do you know what the dynamic range is, how many stops the sensor can capture?
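For context on the question: dynamic range in photographic stops is just the base-2 log of the ratio between the largest and smallest usable signal. A minimal sketch, using hypothetical full-well and read-noise figures (nothing here is measured from this sensor):

```python
import math

def dynamic_range_stops(full_well_electrons, noise_floor_electrons):
    """Dynamic range in stops: log2 of the ratio between the
    brightest and dimmest distinguishable signal levels."""
    return math.log2(full_well_electrons / noise_floor_electrons)

# Hypothetical numbers for a large-photosite sensor:
# 100,000 e- full well, 12 e- read-noise floor.
print(round(dynamic_range_stops(100_000, 12), 1))  # 13.0
```

Large photosites help both terms: more full-well capacity and more collected photons per pixel relative to the noise floor.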

This is really interesting, so I'd really like to see a write-up instead of just a video.

How about making a (slow) medium/large format camera with a linear sensor from a scanner?
People have done it. I've seen a couple articles about it, but the one I was able to dig up today is from Mattias Wandel (of note):

This isn't exactly what you're looking for, but you run into some of the same limitations using a scanner as a large format back: moving objects end up distorted by the movement of the scanner "head" as it scans the image plane.

Funny to see Wandel here. I binged his channel but never knew he wrote articles. Anyway, distortion is also pretty interesting artistically.

I scavenged a dozen heads to make a giant wall-scanning bar.

Well done. The quality of explanation and production value were good. It seems like by adding more processing power, you can double or triple the scan rate.
From what I understand, the scan rate depends on exposure time, which is quite high for such a sensor.
I bet by changing up the sensor pixels this would be a pretty cost-effective way of making a low-res FLIR camera, or really any imaging band. Using a different pinhole you might even be able to image things with X-rays, neutrons, or strong 100 GHz through THz radio waves.
Fun and a great learning experience, sure. But, at least with respect to infrared, I'm not seeing the cost effective part when fully integrated solutions with ~19x greater resolution and then some can be had for $400 retail; the BOM for such a project will approach (if not exceed) this figure. Prototyping is often deceptively expensive.
Certain FLIR resolutions beyond a certain density are, let's just say, not unlike the early days of cryptography.
Instead of a front lens element you might get better results using a lens salvaged from an old medium format folder. The whole lens and shutter assembly comes off as a single standard sized unit.

These should be a good fit for the needed image circle and give better images as well.

Using a decapped DRAM chip as a photo sensor looks more promising:

That's how CMOS cameras got started in the first place.
Very interesting.

Any idea why this happens?

> Exposing the capacitor to light causes it to discharge faster.

My guess is this:

Capacitors in DRAM are usually implemented as PN junctions; sometimes even the parasitic capacitance of a transistor is used. Photons hitting the depletion region of the PN junction generate electron-hole pairs, increasing its conductivity and so discharging the stored charge faster.
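A toy model of that mechanism (all constants here are hypothetical, not measured from any real DRAM): treat the cell as an exponentially discharging capacitor whose leakage rate grows with incident photon flux.

```python
import math

def cell_voltage(t, v0=1.0, dark_tau=0.064, photons_per_s=0.0, k=1e-3):
    """Voltage on a DRAM cell after t seconds, modeled as simple
    exponential discharge. Photocurrent adds to the dark leakage."""
    rate = 1.0 / dark_tau + k * photons_per_s
    return v0 * math.exp(-rate * t)

# After one refresh interval (64 ms), a lit cell has decayed further
# than a dark one -- which is what the readout thresholds against.
dark = cell_voltage(0.064)
lit = cell_voltage(0.064, photons_per_s=10_000)
assert lit < dark
```

Reading out which cells flipped their bit before the refresh then gives a crude binary image; varying the refresh delay trades sensitivity against exposure time.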

I wonder how good images could be with modern DRAM. A 256MB DDR3 chip would theoretically have over 2 trillion pixels. Light sensitivity should be better too due to the smaller capacitors.
Who is downvoting this comment?
Probably because I was off by a factor of 1,000. Still, a billion pixels is pretty good.
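The arithmetic behind the correction, with one DRAM cell per stored bit:

```python
# One DRAM cell stores one bit, so a 256 MB chip has
# 256 * 2^20 bytes * 8 bits/byte worth of cells.
cells = 256 * 2**20 * 8
print(cells)                                          # 2147483648
print(f"{cells / 1e9:.1f} billion potential pixels")  # 2.1 billion
```

So roughly 2 billion cells, three orders of magnitude below the "2 trillion" in the earlier comment.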
Decapping and testing this won't be difficult, provided you understand how the individual transistors are laid out on the die so you can reorder the raw data into a planar image. I wonder if this information could be recovered by showing known patterns to such a sensor, then retrieving and analyzing the corresponding data.

Another interesting thing is that, for training neural networks for image recognition purposes, information about the location of individual pixels is not necessary at all.

Just move it about for a bit, with frames sampled in sequence, and you should be able to build up a pretty good map of the array of cells.
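A toy sketch of that calibration idea (everything here is hypothetical): model the sensor as a 1-D array whose readout order is scrambled, sweep a point light across it one position per frame, and recover the physical ordering from the frame in which each readout channel peaks.

```python
import random

random.seed(0)
n = 16
physical_order = list(range(n))
readout_order = physical_order[:]
random.shuffle(readout_order)          # unknown raw-data ordering

# frames[t][c] = response of readout channel c when the light
# is at physical position t.
frames = [[1 if readout_order[c] == t else 0 for c in range(n)]
          for t in range(n)]

# Each channel peaks in exactly one frame, so sorting channels by
# their peak frame recovers the physical layout.
peak_frame = [max(range(n), key=lambda t: frames[t][c]) for c in range(n)]
recovered = sorted(range(n), key=lambda c: peak_frame[c])
assert [readout_order[c] for c in recovered] == physical_order
```

With a real decapped chip the same principle applies in 2-D, just with two sweeps (horizontal and vertical) and noisy, thresholded responses instead of clean ones.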
> Another interesting thing is that, for training neural networks for image recognition purposes, information about the location of individual pixels is not necessary at all.

Convolutions use the spatial information. I'm less sure whether attention-based approaches typically use it.

Dec 29, 2019 · 15 points, 1 comment · submitted by Abishek_Muthian
You could move it around, taking lots of pictures, and construct a more detailed image.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.
~ [email protected]