Microsoft Research
Published August 17, 2016, 21:06
Model checking was introduced in the 1980s, providing a fully automated way to verify that a finite-state system satisfies a logical specification, or to generate a behavioral counterexample showing that it doesn't. Early model checking could verify (and sometimes find surprising bugs in) simple protocols and digital circuits. Making model checking into an effective tool for engineers has been a long process, however, and has required us to come to grips with the complexity of real circuits and systems (which in model checking manifests itself as the 'state explosion problem').

In this talk, I'll focus on one key aspect of this many-faceted problem, namely the question of relevance. That is, how can we abstract out just the relevant facts about a system for proving a given property, while ignoring large amounts of irrelevant information? It turns out that quite a bit has been learned about this question over the last decade, especially from the study of the Boolean satisfiability problem. We have learned how to produce relevant generalizations from special cases using deduction, and to focus on relevant facts by tightly coupling search and deduction. In the process, some surprising connections have arisen with classical results in proof theory (Craig's interpolation lemma) and more recent results in proof complexity, allowing us to exploit the explanatory power of machine-generated proofs.

These ideas have given us a toehold on the issue of relevance, and this in turn has allowed far more complex systems to be automatically verified using model checking. This, more than any other factor, has made it possible for engineers to apply model checking to real chip designs, and for model checking to be applied to software. I'll give a high-level overview of these developments, trying to draw out some of the fundamental ideas and pointing to some of the broader implications and some future possibilities.
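To make the "tightly coupling search and deduction" idea concrete, here is a minimal, hypothetical DPLL-style SAT sketch (not the speaker's code; all names and the clause representation are illustrative). Each branching decision (search) immediately triggers unit propagation (deduction), so the solver explores only assignments consistent with the facts it has deduced so far:

```python
# Minimal DPLL sketch: clauses are lists of nonzero ints, where a
# negative literal -v means "not v". Purely illustrative.

def unit_propagate(clauses, assignment):
    """Deduction: simplify clauses under the partial assignment; any
    clause reduced to a single literal forces that literal."""
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            reduced = [lit for lit in clause if -lit not in assignment]
            if not reduced:
                return None, assignment  # conflict: clause falsified
            if len(reduced) == 1:
                assignment = assignment | {reduced[0]}  # forced literal
                changed = True
            else:
                simplified.append(reduced)
        clauses = simplified
    return clauses, assignment

def dpll(clauses, assignment=frozenset()):
    """Search: after propagation, branch on an unassigned literal."""
    clauses, assignment = unit_propagate(clauses, assignment)
    if clauses is None:
        return None          # conflict: this branch is unsatisfiable
    if not clauses:
        return assignment    # all clauses satisfied
    lit = clauses[0][0]      # naive decision heuristic
    return (dpll(clauses, assignment | {lit})
            or dpll(clauses, assignment | {-lit}))

if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    cnf = [[1, 2], [-1, 3], [-2, -3]]
    print(dpll(cnf))  # a satisfying assignment, e.g. {1, 3, -2}
```

Modern CDCL solvers extend this loop by analyzing each conflict and learning a new clause from it; that learned clause is exactly a "relevant generalization from a special case" in the sense of the abstract, since it prunes every future branch that would fail for the same reason.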
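The reference to Craig's interpolation lemma can also be stated precisely. In its propositional form (the one used in interpolation-based model checking), it says:

```latex
% Craig's interpolation lemma, propositional form: if A and B are
% jointly unsatisfiable, an interpolant I over only their shared
% variables separates them.
\[
  A \land B \models \bot
  \;\Longrightarrow\;
  \exists\, I :\;
  (A \Rightarrow I), \quad
  (I \land B \models \bot), \quad
  \operatorname{vars}(I) \subseteq \operatorname{vars}(A) \cap \operatorname{vars}(B).
\]
```

In the model-checking application, A encodes the initial states plus one step of the transition relation and B the remaining steps to a property violation; the interpolant I is then an over-approximation of the reachable states expressed only in the shared, relevant state variables, and it can be extracted directly from the resolution refutation produced by a SAT solver, which is one sense in which machine-generated proofs have explanatory power.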