HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Forge – mobile 3-D capture

aboundlabs.com · 82 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention aboundlabs.com's video "Forge – mobile 3-D capture".
Watch on aboundlabs.com [↗]
aboundlabs.com Summary
ABOUND: Mobile reality capture SDK for iOS engineers. Add 3D scanning to your iPhone app in minutes. The only real-time photogrammetry for mobile.
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Mar 26, 2014 · 82 points, 35 comments · submitted by acjohnson55
hatuman
Seems like the tip of a very big iceberg. I sent this on to a friend who creates 3D construction and real estate models. I think he needs to diversify before too long...
nobbis
I agree. Also, Forge's underlying streaming 3D technology has lots of applications - mobile capture is just the first.
diafygi
Great demo! A few questions:

1) What previously existing libraries are you using (if any)?

2) Since the video capture (assuming it's from the tablet's camera) is detached from the depth capture, how do you coordinate the two to make the grid layout in the video? Structure.io seems to require attaching the depth device to the video device (so the perspectives are relatively fixed).

nobbis
1) The core reconstruction technology is written from scratch (it's been a full-time project for over 18 months.)

Some open source projects it uses: Eigen, OpenNL, OpenMesh, OpenCTM, GLM, Protobuf, Redis, Gluster, Node.

2) Video capture is actually from the RGB-D sensor, so it's registered with the depth. I believe the Structure sensor requires that the camera is fixed to the mobile device so that its IMU can be used for tracking. No such requirement here.

mierle
No Ceres?
nobbis
Not yet, but I've been meaning to try it out. It sounds nice.
jrpowers
Jeff from the Structure Sensor team here. Structure Sensor comes with an attachment that holds it rigidly to an iPad (and CAD files so you can 3D-print attachments for other devices). Color-depth registration is handled by the SDK and is done using vision (not the IMU) -- though IMU data is available if desired. Hope to see this project ported to be compatible with the Structure Sensor!
nobbis
Definitely possible - Forge is depth camera agnostic.

Before it could happen, though, I'd need you to ship my Structure Sensor USB cable and to release an Android, Linux, or Windows SDK. Hope that'll happen soon.

bayesianhorse
Will it be free? Will the models be free? Would be very neat when used with blender...
nobbis
I'd really like for light usage to be free, but I'm undecided as to the best monetization strategy.

One option is for model creators to be paid when users download their public models.

bayesianhorse
The most profitable route would be aiming for a sharing platform: get early-stage investment, build a user base, then sell the company. Both Facebook and Google are eyeing this field.

It's not even clear that this is riskier than a paid model, because the internet landscape is littered with the corpses of paid-model web applications that didn't take off "enough".

rmc
I did not know you could buy off the shelf 3d cameras. Where can I buy one?
nobbis
DepthSense 325 - http://www.softkinetic.com/Store/tabid/579/ProductID/6/langu...

DepthSense 311 - http://www.softkinetic.com/Store/tabid/579/ProductID/2/langu...

Creative Senz3D - http://us.store.creative.com/Creative-Senz3D-Depth-and-Gestu...

Structure Sensor - https://store.structure.io/preorder

Asus Xtion Pro - http://www.newegg.com/Product/Product.aspx?Item=N82E16826785...

Pxl_Buzzard
Is it possible to use a Leap Motion? I don't think it has a video camera in it, but could it still be used to do 3D scanning?
nobbis
My understanding is that the Leap Motion is a wide-angle stereo IR camera. It's built to detect fingers/hands/pointers and doesn't produce a depth map, so it can't do 3D scanning.

A sensor with a similar form factor is the CamBoard pico, which does produce depth maps. I don't think it's shipping yet.

http://www.pmdtec.com/products_services/reference_design_pic...

MechSkep
This looks great! I'd like to use it to reduce the tech overhead for some mobile robotics applications. Is there any way the user's position in the world frame can be accessed? i.e. can this do SLAM over longish distances?

Basically I'm looking for Google's Project Tango, without the hardware constraints.

rmc
Hook it up to a GPS at the same time?
nobbis
Sure, the pose in the world frame is estimated at 30 Hz. Over longish distances there's some drift so a global optimization step is needed.
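The drift-plus-global-optimization idea described above can be illustrated with a toy pose-graph example (this is not Forge's actual code; the 1-D loop, drift value, and loop-closure weight are all illustrative assumptions). Dead-reckoned odometry accumulates a systematic error; a single loop-closure constraint, solved by least squares, redistributes that error across the trajectory:

```python
import numpy as np

# Dead reckoning around a closed 1-D loop: 10 steps that should sum to
# zero net displacement, but each odometry reading carries +0.05 drift.
n = 10
true_steps = np.array([1.0] * 5 + [-1.0] * 5)   # out and back
odom = true_steps + 0.05                        # drifting measurements

dead_reckoned = np.cumsum(odom)                 # endpoint drifts to 0.5, not 0

# Global optimization: least squares over poses x_1..x_n (x_0 = 0 fixed),
# with odometry constraints x_i - x_{i-1} = odom_i and one strongly
# weighted loop-closure constraint x_n = x_0.
A = np.zeros((n + 1, n))
b = np.zeros(n + 1)
for i in range(n):
    A[i, i] = 1.0
    if i > 0:
        A[i, i - 1] = -1.0
    b[i] = odom[i]
w = 100.0                                       # loop-closure weight
A[n, n - 1] = w                                 # w * x_n = 0
x, *_ = np.linalg.lstsq(A, b, rcond=None)

print(round(dead_reckoned[-1], 2))              # drifted endpoint: 0.5
print(round(x[-1], 3))                          # after optimization: ~0.0
```

Real SLAM back ends do this in SE(3) over thousands of poses, but the structure (local constraints plus loop closures, solved globally) is the same.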

I'd like to release a developer API (Forge currently runs on Android, Windows & Linux) hopefully later this year, which should be what you're looking for.

Also, I worked in mobile robotics at Willow Garage a few years ago (https://www.youtube.com/watch?v=0aqghgoeCWk), so I'm keen to see Forge used in robotics.

MechSkep
Neat. Looking forward to it.
yogrish
This seems to be in line with Project Tango of Google. https://www.google.com/atap/projecttango/
nobbis
Project Tango has a 320x180 depth sensor running at 5 fps, i.e. roughly 290k depth measurements per second. Compare this with off-the-shelf depth cameras (e.g. the DS325) that generate 320x240 at 60 fps, i.e. about 4.6M measurements per second.
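The measurement-rate figures above follow directly from the quoted resolutions and frame rates:

```python
# Depth-throughput arithmetic from the sensor specs quoted above
# (figures in the comment are rounded).
tango = 320 * 180 * 5    # Tango prototype: 320x180 @ 5 fps
ds325 = 320 * 240 * 60   # SoftKinetic DS325: 320x240 @ 60 fps

print(f"Tango: {tango:,} measurements/s")   # 288,000 (~290k)
print(f"DS325: {ds325:,} measurements/s")   # 4,608,000 (~4.6M)
print(f"Ratio: {ds325 // tango}x")          # 16x
```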

The reason for this is that mobile processors aren't fast enough to process more information. So the Tango prototype has to have, in addition to its depth camera, a special motion tracking camera with a fish-eye lens and 2 dedicated processors in order to robustly track.

Even then, with less depth information, the quality of any Tango reconstruction will be far inferior. Maybe in 5-10 years, mobile processors can approach what desktop GPUs are capable of today.

In any case, it remains to be seen if Google can persuade cellphone manufacturers to include 2 special cameras + 2 extra processors in their future devices.

therobot24
nice! I'd be interested in the method they use to put everything together. My best bet is some basic structure from motion weighted by the depth sensor...or maybe it's simpler than that...
nobbis
Author here. It uses color info as well as depth for tracking. Otherwise, it'd fail if you pointed the camera at featureless geometry, e.g. walls, floors.
joshvm
Looks neat - what depth sensor were you using to generate that video? It's not totally clear to me from the comments.
phorese
Looks great :)

In the video it looks like you are using a volumetric representation, perhaps an Octree+isosurface extraction?

nobbis
Correct. It stores a volumetric signed distance function in a tree-structure, which makes mesh generation (and raytracing) simple and fast.
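The representation described above can be sketched in miniature (this is a hypothetical illustration, not Forge's implementation; the sphere, voxel size, and dict-based sparse storage are stand-ins for a real octree): store a truncated signed distance function (TSDF) only near the surface, then find cells where the sign flips, which is where a mesher would extract the isosurface.

```python
import numpy as np

VOXEL = 0.05   # voxel size in metres (illustrative)
TRUNC = 0.15   # truncation distance: discard voxels far from the surface

def sphere_sdf(p, center=(0.5, 0.5, 0.5), radius=0.3):
    """Signed distance to a sphere: negative inside, positive outside."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(center)) - radius)

# Sparse storage: only voxels near the surface are kept, which is the
# point of octree/hash-based TSDF representations.
tsdf = {}
n = int(1.0 / VOXEL)
for i in range(n):
    for j in range(n):
        for k in range(n):
            p = ((i + 0.5) * VOXEL, (j + 0.5) * VOXEL, (k + 0.5) * VOXEL)
            d = sphere_sdf(p)
            if abs(d) < TRUNC:          # truncation: skip far-away voxels
                tsdf[(i, j, k)] = d

# Surface cells: a sign change against the +x neighbour means the
# isosurface (d = 0) passes between them; a mesher (e.g. marching cubes)
# would emit triangles there, and a ray tracer can march the SDF directly.
surface = [v for v, d in tsdf.items()
           if (v[0] + 1, v[1], v[2]) in tsdf
           and d * tsdf[(v[0] + 1, v[1], v[2])] < 0]

print(len(tsdf), "truncated voxels,", len(surface), "surface crossings")
```

Because the distance values are signed, both meshing (zero-crossing extraction) and ray tracing (sphere tracing along the distance field) fall out of the same structure, which is presumably why the comment calls them simple and fast.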
druidsbane
Can it stitch together much larger captures? e.g. a building exterior and interior?
nobbis
You can download and manually align separate captures to create a larger mesh but, no, Forge doesn't do this automatically yet.

On the roadmap, but won't be in the first version.

bromagosa
I tried the URL that's shown at the end of the video and it didn't work :(

Any live demos?

nobbis
No, sorry. I'm moving coast to coast next week, but hope to start beta testing late April.

Enter your email on the website and I'll let you know before beta testing starts.

jc_dntn
this is going to totally change the way we send dick pics.
ihnorton
At the end of the video, one camera is from PrimeSense, and the other (appears) to be a SoftKinetic (or Senz3D); is that correct? Does anyone have experience comparing those cameras for accuracy/fov/framerate/etc.?
nobbis
Correct. It's a SoftKinetic DS325.

Its FOV is wider than the PrimeSense sensor's, and it's capable of a higher frame rate, but it's a little less accurate and its range is currently shorter.

The two technologies (structured light vs time of flight) have different strengths/weaknesses, but I believe ToF is the future.

ihnorton
Thanks. Have been playing with the Senz3D (rebranded DS325) for a rather different application, but was a bit underwhelmed so far with both accuracy and range. (On the other hand, it's kind of amazing for $125).
nobbis
Well, the DS325/Senz3D is tuned for gesture recognition which explains why it's not great for some use cases.

I'd keep an eye out for SoftKinetic's new DS320 long-range camera, due late Q2.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.