HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Meshroom: Open Source 3D Reconstruction Software (2018)

alicevision.github.io · 262 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention alicevision.github.io's video "Meshroom: Open Source 3D Reconstruction Software (2018)".
Watch on alicevision.github.io [↗]
alicevision.github.io Summary
AliceVision is a Photogrammetric Computer Vision framework for 3D Reconstruction and Camera Tracking.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Apr 17, 2019 · 262 points, 39 comments · submitted by hereiskkb
jefft255
Really cool, especially now that Autodesk's photogrammetry software is no longer available for free. I think this is a good example of academic/public research resulting in useful free software!
anjneymidha
+1! There's such a wealth of computer vision approaches of practical value locked up in CVPR and other research papers that could be exposed this way to enthusiasts and semi-pro users
jeffchuber
+1 to this comment
pierrec
They link to the full Mikros showreel [1], implying that it's been used in a bunch of blockbusters already. But I couldn't find any information on how it's been used and in which projects; it would be cool to have more details on that. It's always interesting to see the film industry working with open source software, especially in cases like this where production companies are sponsoring development.

[1]: https://www.youtube.com/watch?v=pBy1g7gbXCQ

pfranz
> it's hard to tell how it's been used and in which projects

That's always the case in vfx. The best source is contemporaneous interviews directly with the people involved, which are sporadic and rare. Most demo reels, even for big software packages, are incredibly misleading. Company A might have some partnership with a production studio, slap their name in the credits, and use their work in the demo reel, but the output is junk, so the artist scraps it and does it the "old fashioned way" from scratch. Or the software gets used in a very specific, tangential way, like roughing something out or converting file formats back and forth.

VFX companies and software packages are trying to sell themselves for future work. So they tend to gloss over details and present themselves much better than they are in practice. After a show wraps, it's really difficult to get good information. The files on disk may not even open anymore and the artists have moved on.

carreau
My organization recently considered buying more licenses for PhotoScan/Metashape (https://www.agisoft.com/) but it was too expensive. I'm curious how this compares (I'm not a user myself so can't judge), and I couldn't find whether it can run on a distributed GPU cluster. If it can, I would love to push for replacing PhotoScan with this.
sherr
A useful addition to that page would be a short summary of what "photogrammetry" is and a description of what this software does. It need only be a couple of sentences. Otherwise, looks very interesting.
tacotime
It's described right on the page: "Meshroom is a ... 3D Reconstruction Software..." 1 minute into the 6 minute video you can see that the program generates a point cloud from a set of photos.
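For the curious, the geometric core of that step (recovering a 3D point from matched image features seen by two cameras) can be sketched in a few lines. This is an illustrative toy using midpoint triangulation of two rays with made-up camera positions, not Meshroom's actual pipeline:

```python
# Toy sketch of triangulation: given two camera centers and the ray each
# camera casts toward the same feature, recover the 3D point as the
# midpoint of the shortest segment between the two rays.

def triangulate_midpoint(o1, d1, o2, d2):
    """o1, o2: ray origins (camera centers); d1, d2: unit ray directions."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    w = [x - y for x, y in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b              # ~0 when the rays are parallel
    t1 = (b * e - c * d) / denom       # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom       # parameter of closest point on ray 2
    p1 = [o + t1 * s for o, s in zip(o1, d1)]
    p2 = [o + t2 * s for o, s in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras at x = -1 and x = +1 both looking at the point (0, 0, 2);
# the recovered point comes out close to (0, 0, 2).
point = triangulate_midpoint([-1, 0, 0], [0.4472, 0, 0.8944],
                             [1, 0, 0], [-0.4472, 0, 0.8944])
print(point)
```

Real pipelines triangulate each feature from many views at once and must estimate the camera poses themselves first (structure from motion), but the ray-intersection idea is the same.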
notthemessiah
If you somehow stumble onto the page and are too lazy to Google the term, I don't think you're in the target audience.
mkesper
Yes, the description right now is equivalent to "does magic with photographs". It's a lost opportunity for introducing the uninitiated reader, thereby reducing the chance of them watching the video.
ungzd
Underlying photogrammetry library, https://alicevision.github.io/, has more informative landing page.
h2odragon
Still CUDA only? Didn't see an answer to that in a few minutes of clicking around their site.
microcolonel
Seems like some of it is still CUDA-only, from the AliceVision INSTALL.md [0].

I have to wonder how AMD is letting NVIDIA win so big in academia, when their GPUs are actually quite competitive. I'm beginning to think that they have some management practice or culture which makes it difficult for them to retain software talent, otherwise I can't really understand how they always seem to have a glaring weakness somewhere that they're not addressing.

[0]: https://github.com/alicevision/AliceVision/blob/develop/INST...

TomVDB
Nvidia had (has?) CUDA centers of excellence for more than a decade, they gave GPUs to universities, they've had their own compute related conference for years, they have a huge set of libraries.

During all that time, AMD did essentially nothing. (They weren't financially in a position to do so.) Then they missed the AI boom.

Now they are trying, right at the time when the compute-specific performance lead that they once had is gone and their perf/W is terrible (which is not a good thing for datacenters).

There is really no mystery here.

errantspark
I don't think it's AMD's fault really; CUDA just has an insane amount of inertia in the space, and there's not a whole lot of incentive for anyone to do anything about it.

As much as I hate it, I've been vendor-locked to NV for years and as a result I don't really care anymore. I'll never buy an AMD card because I need to get work done and get paid. It sucks, but not enough for me to go through the pain of trying to work with the far, far inferior OpenCL-based software for rendering/photogrammetry etc. I don't even touch ML, but I assume the story is the same there.

What's the value proposition for writing software that works with OpenCL when your entire market already has NV cards? The price differential is tiny too, maybe if AMD came down like 30% for the equivalent computational power.

corysama
btw: https://www.reddit.com/r/photogrammetry/
sitdownyoungman
so...are shiny black objects better scanned into 3d objects now?
nobbis
Another great free alternative is COLMAP (https://demuc.de/colmap).

And for those interested in doing this on mobile in real-time, we've just released a beta SDK (https://abound.dev) that lets you add real-time photogrammetry to any iOS app.

gloflo
Considering this is a thread about open source software I feel like your self promotion of a proprietary service is out of place.
Sophistifunk
It's a thread about photogrammetry.
gloflo
> Meshroom: Open-Source Photogrammetry Software
ereyes01
Open source does not mean anti-commercial to most people.
gloflo
I never said that.
atdrummond
They also offered a link to a free library prior to doing so. I think the comment walks a fair line between substantive (in an area of which they clearly have domain knowledge) and advertisement.
Fission
Very cool stuff! Since you're using volumetric integration/fixed voxel size, one trick to get significantly improved visual texturing quality is to low-pass/smooth the mesh right before texturing.
nobbis
Yes, we support Laplacian filtering of the level set prior to meshing in the SDK (plus other mesh smoothing techniques.) There are trade-offs involved.
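For readers unfamiliar with the idea, Laplacian smoothing just pulls each vertex toward the average of its neighbors, which damps high-frequency noise before texturing. A toy sketch (not the SDK's implementation; the mesh data below is made up):

```python
# Toy Laplacian smoothing: repeatedly blend each vertex toward the
# average position of its neighbors.

def laplacian_smooth(verts, neighbors, lam=0.5, iters=10):
    """verts: list of [x, y, z]; neighbors: list of neighbor-index lists.
    lam controls smoothing strength (0 = none, 1 = jump to the average)."""
    for _ in range(iters):
        new = []
        for i, v in enumerate(verts):
            ns = neighbors[i]
            if not ns:                 # isolated vertex: leave in place
                new.append(list(v))
                continue
            avg = [sum(verts[j][k] for j in ns) / len(ns) for k in range(3)]
            new.append([v[k] + lam * (avg[k] - v[k]) for k in range(3)])
        verts = new
    return verts

# A noisy spike (z = 5) in a line of three connected vertices gets
# flattened toward its neighbors.
verts = [[0, 0, 0], [1, 0, 5], [2, 0, 0]]
neighbors = [[1], [0, 2], [1]]
smoothed = laplacian_smooth(verts, neighbors)
```

The trade-off mentioned above is visible even in this toy: smoothing also shrinks genuine sharp features, which is why it is applied carefully (e.g. to the level set before meshing) rather than aggressively to the final mesh.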
obviuosly
How do COLMAP, Meshroom and VisualSFM compare to one another?
atkbrah
Some stuff that comes into mind:

* COLMAP and Meshroom are able to create a mesh from the point cloud, whereas VisualSFM is not. If you wish to create a mesh from VisualSFM's output, you could use OpenMVS[1]

* COLMAP does not create a texture file for the resulting mesh - instead it uses vertex color for "texturing"

* COLMAP and Meshroom licenses permit commercial usage - VisualSFM does not (you need to license it)

Of these three, I've found that COLMAP works the best and VisualSFM comes second. Meshroom has a nice feature that allows you to add images into your model incrementally (I think without a complete recompute), but it didn't work that well yet.

As for open source photogrammetry in general, my opinion is that COLMAP + OpenMVS or OpenMVG[2] + OpenMVS are the tools to go with, though the command-line interface with its huge number of possible options can be a bit tricky at first.

1. https://github.com/cdcseacave/openMVS

2. https://github.com/openMVG/openMVG
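For anyone wanting to try the COLMAP + OpenMVS route, the pipeline looks roughly like this on the command line. This is a sketch based on the projects' documented tool names; exact flags and folder layouts vary by version, so check each tool's `--help` before running:

```shell
# Sparse reconstruction with COLMAP (camera poses + sparse point cloud)
colmap feature_extractor --database_path db.db --image_path images
colmap exhaustive_matcher --database_path db.db
mkdir -p sparse
colmap mapper --database_path db.db --image_path images --output_path sparse
colmap image_undistorter --image_path images --input_path sparse/0 \
    --output_path dense --output_type COLMAP

# Densify, mesh, and texture with OpenMVS
InterfaceCOLMAP -i dense -o scene.mvs
DensifyPointCloud scene.mvs
ReconstructMesh scene_dense.mvs
TextureMesh scene_dense_mesh.mvs
```

Each OpenMVS stage writes a new `.mvs` scene file that the next stage consumes, which is why the filenames grow suffixes as the pipeline progresses.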

obviuosly
Thanks for this overview.

> Meshroom has a nice feature that allows you to add images into your model incrementally

VisualSFM also does not recompute when adding photos later, IIRC from the manual.

feniv
Is it possible to use the Apple Face ID sensor to perform a kinect-like 3D mapping of an environment or is the range of those sensors too limited for that?
nobbis
Range is too limited, it's front-facing (can't see what you're scanning), and the OS-provided SLAM system isn't available (so tracking error accumulates.) Issues that will disappear in the long-run, of course.

One startup taking that approach is https://standardcyborg.com. Apologies if linking to a business is unseemly - they are YC-backed at least.

jeffchuber
Jeff from SC here. Thanks @nobbis!
Aeolun
This looks really cool. Is there a demo application published somewhere that I could use to test?
gloflo
And MicMac https://micmac.ensg.eu/index.php/Accueil

And OpenSFM https://github.com/mapillary/OpenSfM

I wish they documented their specific benefits and shortcomings better.

nobbis
I find COLMAP has better dense depth maps, a more configurable pipeline, is cross-platform, the code is solid, and the accompanying research is principled (it takes a Bayesian approach to stereo matching.)

What aspects of these make them great alternatives in your opinion?

gloflo
I wish I knew! They are all on my list to try, that's all for now. :)
nobbis
Got it. If you’re just listing open source photogrammetry software, there are many more you can add, e.g. Regard3D, OpenMVG.
hbosch
Meshroom has been very fun to work with for what I can only refer to as "closed-shell" or fully convex shapes. I tried my damnedest to collect a good photogrammetry-based model of a flower, and the mesh fidelity I got from my 100 or so photos was pretty terrible... probably because of the leaves and the bowl shape of the petals on top. If I had time, I probably would have separated the flower into distinct pieces (leaves, stem, top of flower) and done more assembly in my 3D program with discretely generated meshes...

However, stones and plant bulbs came out great.

Edit: After seeing some of the other links in this discussion, I definitely need to revisit this project! Some people seem to have had great success with foliage and flowers.

sdwisely
I tried a wicker basket as a test mesh this week and had a great deal of trouble with the handles.

For closed objects, though, the results are probably better than what I've found in commercial software.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.