Microsoft Research
Published July 22, 2016, 16:50
Typical security contests focus on breaking systems or mitigating the impact of buggy ones. I will present the Build-it, Break-it, Fix-it (BIBIFI) contest, which aims to assess the ability to securely build software, not just break it. I will also present qualitative and quantitative analysis of data gathered from three runs of the contest, which reveals some interesting trends. In BIBIFI, teams build specified software with the goal of maximizing correctness, performance, and security. Security is tested when teams attempt to break other teams' submissions. Winners are chosen from among the best builders and the best breakers. BIBIFI was designed to be open-ended: teams can use any language, tool, or process they like. As such, contest outcomes shed light on factors that correlate with successfully building secure software and breaking insecure software. During 2015 we ran three contests involving a total of 116 teams and two different programming problems. Quantitative analysis of these contests found that the most efficient build-it submissions were written in C/C++, but submissions coded in a statically typed language were less likely to have a security flaw; build-it teams with diverse programming-language knowledge also produced more secure code. Shorter programs correlated with better scores. Break-it teams that were also build-it teams were significantly better at finding security bugs.