In nearly all CTF competitions, organizers spend dozens of hours creating challenges that are compiled once, with no thought given to variation or alternate deployments. For example, a challenge may hard-code a flag, making it hard to change later, or hard-code a system-specific resource.
At ForAllSecure, we are building automatically generated challenges from templates. For example, when creating a buffer overflow challenge, you should be able to generate 10 different instances to practice on, and those instances should be deployable anywhere at a moment's notice. While we can't automate away the placement of subtle bugs and clever tricks, we can add meaningful sources of variance to challenges without much additional effort, with the added bonus that the challenges become easier to deploy.
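To give a flavor of the idea, here is a minimal sketch (our own illustration, not ForAllSecure's actual system) of template-based instance generation: each rendered instance gets a fresh flag and a varied buffer size, so neither is compiled in once and for all.

```python
import random
import string

# Hypothetical challenge template: {flag} and {buf_size} are filled in per
# instance; doubled braces are literal C braces.
CHALLENGE_TEMPLATE = """\
#include <stdio.h>

const char *FLAG = "{flag}";

int main(void) {{
    char buf[{buf_size}];
    gets(buf);              /* deliberate overflow for the exercise */
    printf("hello, %s\\n", buf);
    return 0;
}}
"""

def generate_instances(n, seed=None):
    """Render n distinct C sources from the same template."""
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase + string.digits
    instances = []
    for _ in range(n):
        # A per-instance flag: never hard-coded across builds.
        flag = "flag{" + "".join(rng.choices(alphabet, k=16)) + "}"
        # Vary the overflowable buffer so write-ups don't transfer verbatim.
        buf_size = rng.choice([16, 32, 64, 128])
        instances.append(CHALLENGE_TEMPLATE.format(flag=flag, buf_size=buf_size))
    return instances
```

Each source produced by `generate_instances(10)` can then be compiled and deployed independently; the real engineering work, of course, is in the bug itself, which the template only parameterizes.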
Continue reading “The Motivation and Design Behind Autogenerated Challenges”
Although we have been very busy at ForAllSecure, we finally found the time to redo our website, huzzah! This website is a bit easier on the eyes, and we hope to add more up-to-date information about our projects and what we're up to.
Part of this refresh is also a new blog. We plan to talk about interesting things we are working on, so check back frequently! To kick things off, here is a post about some of our work on DARPA’s Cyber Grand Challenge.
In June, ForAllSecure participated in DARPA's Cyber Grand Challenge (CGC) Qualification Event (CQE). During the event our automated system tweeted its progress, and to continue that trend of openness, we decided to publish a writeup with more details about our system. Our team (Thanassis Avgerinos, David Brumley, John Davis, Ryan Goulden, Tyler Nighswander, and Alex Rebert) spent many thousands of hours on our system, and now that the CQE is over, we're excited to give you a glimpse of its inner workings.
Continue reading “Unleashing the Mayhem CRS”