Workshop on Formal Approaches to Explainable VERification (FEVER 2017)


Co-Located with CAV 2017

The objective of this workshop is to increase the explainability and understandability of verification for humans. Traditionally, formal verification aims at providing guarantees on the behavior of a system model. However, the need for explainability or understandability in verification is not sufficiently addressed by state-of-the-art techniques. We see a large new application area and interdisciplinary research opportunities for computer-aided verification. Achieving satisfying levels of explainability, according to formal measures, will help establish trust in the results of the modeling and verification process.

The aim is to bring together researchers from different areas to lay the foundations of this new domain and to discuss existing approaches, ideas, and challenges. Topics include (but are not limited to) understandable modeling languages, such as probabilistic programs, accessible synthesis results, abstraction techniques, and explainable counterexamples and controllers. Interdisciplinary research with areas such as artificial intelligence, virtual reality, or human factors is desirable to explore possible applications, interactivity, benchmarking outside of real-world settings, and the limits of human perception.

For questions of any kind, please contact Nils Jansen at njansen 'at' utexas 'dot' edu


Jan 25, 2017: Website is online.
