Exploiting Multiple Cores Today: Scalability and Reliability For Off-the-shelf Software

Published September 7, 2016, 16:22
Multicore CPUs are here and will soon be ubiquitous. The prevailing notion is that applications not originally designed for multiple processors must be rewritten to be multithreaded. Because writing correct and efficient multithreaded applications is difficult (race conditions, deadlocks, and scalability bottlenecks), this is a major challenge. This talk presents two alternative approaches that bring the power of multiple cores to today's software.

The first approach focuses on building highly concurrent client-server applications from legacy code. I present a system called Flux that lets users take unmodified, off-the-shelf *sequential* C and C++ code and build concurrent applications from it. The Flux compiler combines the Flux program with the sequential code to generate a deadlock-free, high-concurrency server. Flux also generates discrete event simulators that predict actual server performance under load. While the Flux language was initially targeted at servers, we have found it to be a useful abstraction for sensor networks, and I will briefly describe our use of an energy-aware variant of Flux in a deployment on the backs of an endangered species of turtle.

The second approach uses the extra processing power of multicore CPUs to make legacy C/C++ applications more reliable. I present a system called DieHard that uses randomization and replication to transparently harden programs against a wide range of errors, including buffer overflows and dangling pointers. Instead of crashing or running amok, DieHard lets programs continue to run correctly in the face of memory errors with high probability.

Joint work with Brendan Burns, Kevin Grimaldi, Alex Kostadinov, and Mark Corner (Flux), and Ben Zorn (DieHard).
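As an illustration of the first approach, here is a minimal sketch in plain C with pthreads, not in the Flux language itself: the handler functions stand in for unmodified sequential code, and the concurrency lives entirely in the driver around them, which is roughly the role the generated server plays. All of the names (`read_request`, `check_cache`, `fetch_from_disk`, `send_reply`) are hypothetical.

```c
/*
 * Conceptual sketch only (plain C with pthreads, NOT the Flux language).
 * The handlers are stand-ins for unmodified sequential code; all
 * concurrency is supplied by the surrounding driver.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    int         client_fd;
    char        path[256];
    const char *body;
} request_t;

/* Sequential, single-threaded handler code slots in here unchanged. */
static void read_request(request_t *r)      { snprintf(r->path, sizeof r->path, "/index.html"); }
static int  check_cache(const request_t *r) { (void)r; return 0; /* pretend it is a miss */ }
static void fetch_from_disk(request_t *r)   { r->body = "<file contents>"; }
static void send_reply(const request_t *r)  { printf("fd %d: %s (%zu bytes)\n",
                                                     r->client_fd, r->path, strlen(r->body)); }

/* One request flows through the handlers in order; the runtime runs many
 * such flows at once, here simply one worker thread per request. */
static void *handle_request(void *arg) {
    request_t *r = arg;
    read_request(r);
    if (!check_cache(r))
        fetch_from_disk(r);
    send_reply(r);
    free(r);
    return NULL;
}

int main(void) {
    pthread_t workers[4];
    for (int i = 0; i < 4; i++) {
        request_t *r = calloc(1, sizeof *r);
        r->client_fd = i;
        pthread_create(&workers[i], NULL, handle_request, r);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(workers[i], NULL);
    return 0;
}
```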
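And as an illustration of the second approach, here is a minimal sketch of the randomization idea alone, assuming a single size class and leaving out replication and real malloc interposition; the constants and function names are illustrative, not DieHard's. Objects are placed at random slots in a heap kept several times larger than needed, so an overflow or a write through a dangling pointer rarely lands on live data.

```c
/*
 * Sketch of the randomization half of the idea only: no replication,
 * no size classes, no malloc interposition. SLOT_SIZE, NUM_SLOTS, and
 * the function names are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define SLOT_SIZE 64     /* one object size class, for simplicity */
#define NUM_SLOTS 1024   /* several times the expected number of live objects */

static unsigned char heap[NUM_SLOTS][SLOT_SIZE];
static bool          used[NUM_SLOTS];

/* Place each object at a uniformly random empty slot. Because the heap is
 * kept mostly empty, a buffer overflow or a write through a stale pointer
 * is unlikely to hit live data. */
static void *random_alloc(size_t size) {
    if (size > SLOT_SIZE)
        return NULL;
    for (;;) {
        size_t i = (size_t)rand() % NUM_SLOTS;
        if (!used[i]) {
            used[i] = true;
            memset(heap[i], 0, SLOT_SIZE);
            return heap[i];
        }
    }
}

static void random_free(void *p) {
    size_t i = (size_t)((unsigned char (*)[SLOT_SIZE])p - heap);
    used[i] = false;   /* the slot is unlikely to be handed out again soon */
}

int main(void) {
    srand((unsigned)time(NULL));
    char *a = random_alloc(32);
    char *b = random_alloc(32);
    strcpy(a, "hello");
    printf("a=%p b=%p: randomly placed, almost never adjacent\n", (void *)a, (void *)b);
    random_free(a);
    random_free(b);
    return 0;
}
```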