Workshop on Formal Approaches to Explainable VERification (FEVER 2017)
JULY 23, 2017, HEIDELBERG, GERMANY
The objective of this workshop is to increase the explainability and understandability of verification for humans. Traditionally, formal verification aims at providing guarantees on the behavior of a system model. However, the need for explainability and understandability in verification is not sufficiently addressed by state-of-the-art techniques. We see a large new application area and interdisciplinary research opportunities for computer-aided verification. Achieving satisfying levels of explainability, according to formal measures, will help establish trust in the results of the modeling and verification process.
The aim is to bring together researchers from different areas to lay down the foundations of this new domain and to discuss existing approaches, ideas, and challenges. Topics include (but are not limited to) understandable modeling languages such as probabilistic programs, accessible synthesis results, abstraction techniques, and explainable counterexamples and controllers. Interdisciplinary research with areas such as artificial intelligence, virtual reality, or human factors is desirable to explore possible applications, interactivity, benchmarking outside the real world, and the limits of human perception.
For any questions, please contact Nils Jansen at njansen 'at' utexas 'dot' edu