Hacker News Comments on
"Making a Scalable Automated Hacking System" - Artem Dinaburg
Shakacon LLC · Youtube · 1 HN comment
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

I'm the speaker in this video. AMA!

If you're interested in more of the technical details of how a CRS (automatic bug-finding system) works, I recommend watching this presentation from my colleague Artem Dinaburg:
"Making A Scalable Automated Hacking System"
* https://www.youtube.com/watch?v=pOuO5m1ljRI
* https://github.com/trailofbits/presentations/blob/master/Cyb...
You should also keep your eye on https://github.com/trailofbits -- we are releasing the final component of our CRS as open-source in a very short time. Manticore, our symbolic execution framework, will be up there soon! I'm happy to give you early access if you get in touch with me on Twitter.
⬐ nuclx
Thanks for the additional information.

When experimenting with libFuzzer to test an audio-processing library, I was impressed by the results and also the ease of setup. In-process fuzzing is really the best option for that use case, which is why I chose libFuzzer over AFL.
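For reference, an in-process libFuzzer target boils down to exposing one function. A minimal sketch follows; `decode_sample` is a hypothetical stand-in for the library under test, and a real harness would be built with `clang -fsanitize=fuzzer,address harness.c` against the actual library:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in for the audio library's decode routine;
 * a real harness would call into the actual library here. */
static int decode_sample(const uint8_t *data, size_t size) {
    if (size > 0 && data[0] == 0xFF)
        return 1;  /* pretend 0xFF is a frame sync byte */
    return 0;
}

/* libFuzzer calls this once per generated input. The harness must be
 * deterministic and must not crash or trip a sanitizer on any input. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    decode_sample(data, size);
    return 0;  /* non-zero return values are reserved by libFuzzer */
}
```

libFuzzer supplies its own `main`, so the harness file contains no entry point; any corpus files passed on the command line seed the mutation engine.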
An open-source alternative of Microsoft's SAGE/Springfield would be cool. I'm sure there are things to come with the efforts in CRS you mentioned. Looking forward to where this goes and hope that your 2- and 5-year outlooks hold true.
⬐ dguido
> An open-source alternative of Microsoft's SAGE/Springfield would be cool

We're working on one (!) and I hope we can offer it for free to non-commercial projects. For now, there is Microsoft Springfield [1] for Windows software and Google OSS-Fuzz [2] for open-source software. It is extraordinarily hard not only to get the tech for something like that working but to bring it to market.
As noted in the video, nearly all the individual pieces of our CRS are open-source but you actually do not want a "CRS." The competition DARPA designed for them involved more than what is necessary to provide value to a development team, e.g., you don't want something that writes IDS signatures, considers "gameplay" or resource contention, or attempts to write automatic patches. You want something that accurately finds and reproduces bugs. We open-sourced the tools we wrote to do that or used tools that were already open-source, like Grr, Manticore, Radamsa, KLEE, and Z3.
⬐ nuclx
> We're working on one (!) and I hope we can offer it for free to non-commercial projects.

That's good to hear. Hope you can find a way to monetize it for commercial projects.
Getting the tech right certainly seems to be a hard problem, with Google's Konstantin Serebryany calling the symbolic-execution route rocket science. In my view, the problem is coming up with a solid solution instead of just heuristics (as with all multi-approach methods: when do you switch modes?) and making sure the tech is usable for testing arbitrarily complex pieces of software.
I'm dealing with a very old, large, somewhat rotting codebase. It's barely tested. Is fuzzing something for me to improve code quality, or are tests the lower-hanging fruit?

⬐ hannob
I'm not the original poster, but:

> Is fuzzing something for me to improve code quality?
Is it C/C++, and does it include parsers? Then definitely yes. If not, it depends.
> or are tests the lower hanging fruit?
Again: Is it C/C++? Then familiarize yourself with the sanitizer features of GCC and Clang, primarily AddressSanitizer. Lots of "rotting" code shows memory-safety errors just by running it with ASan.
⬐ dguido
Disclaimer: I have not looked at your codebase, so this should only be taken as my two cents. I might have a different recommendation if I had greater familiarity with your exact problem.

I'm biased, but I would start with a fuzzer. Fuzzers have two key advantages in your scenario that make me lean towards them:
They provide the will to act. You mentioned the codebase is old and rotting, so a normal bug may not attract enough attention to get fixed. Finding security bugs may give greater justification for an effort to start maintaining it properly.
One fuzzer exercises more than one test. I think you'll get more bang for your buck by integrating a fuzzer, whether it's libFuzzer, AFL, Radamsa, or anything else. Fuzzers are not targeted at a single unit and will, ideally, find bugs all over the place from a single, simple starting point.
That said, there are good arguments for diving into the codebase and writing tests too. In fact, writing tests may make your fuzzer more effective.
⬐ dguido
One extra thought: you only asked fuzzers vs. tests, but I'd offer that you should update your build settings before fuzzing. New compilers have a lot more diagnostics than they used to. After that, I'd set up libFuzzer or AFL.