Introduction to Symbolic Execution of Neural Networks - Towards Faithful and Explainable Surrogate Models

Maximilian Schlüter, Gerrit Nolte

Abstract


Neural networks are inherently opaque machine learning models and suffer from uncontrollable errors that are often hard to find during testing. Yet no other model attains their performance on current ML tasks. Thus, methods are needed to explain and understand neural networks and their decisions, or even to gain trust in them. However, many existing explainability methods are abstractions of the true model and therefore provide no reliable guarantees. For safety-critical tasks, both rigorous explanations and state-of-the-art predictive performance are required. For neural networks with piecewise-linear activation functions (such as ReLU), it is possible to distill the network into a surrogate model that is both interpretable and faithful, using decompositional rule extraction. We present a simple-to-follow introduction to this topic, building on a well-known technique from traditional program verification: symbolic execution. This is done in two steps: first, we reformulate the neural network as an intermediate imperative program that consists only of if-then-else branches, assignments, and linear arithmetic; then, we apply symbolic execution to this program to achieve the decomposition. Finally, we reintroduce a decision-tree-like data structure called Typed Affine Decision Structure (TADS) that is specifically designed to represent the symbolic execution of neural networks efficiently. Furthermore, we extend TADS to partial symbolic execution settings, which mitigates the path explosion problem, a common bottleneck in practice. The paper contains many examples and illustrations generated with our tool.
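
To make the two steps above concrete, the following minimal sketch (Python with NumPy; it is not the authors' tool, and names such as symbolic_execute are our own illustrative choices) symbolically executes a small fully connected ReLU network. Each neuron contributes an if-then-else branch on the sign of its pre-activation; a path condition is the resulting conjunction of linear constraints on the input, and each path ends in the affine function the network computes on that region, which is the kind of information a TADS organizes.

import numpy as np

def symbolic_execute(weights, biases):
    """weights[i], biases[i] define layer i; a ReLU follows every layer but the last.

    Returns a list of (constraints, A, c) triples: on the input region where
    a @ x + d >= 0 holds for every (a, d) in constraints, the network computes
    A @ x + c. The list corresponds to the leaves of a TADS-like structure.
    """
    dim_in = weights[0].shape[1]
    # Start with the identity map under an empty path condition.
    paths = [([], np.eye(dim_in), np.zeros(dim_in))]
    for layer, (W, b) in enumerate(zip(weights, biases)):
        last = layer == len(weights) - 1
        new_paths = []
        for cons, A, c in paths:
            # Pre-activation of this layer as an affine function of the input x.
            A_pre, c_pre = W @ A, W @ c + b
            if last:  # no ReLU on the output layer
                new_paths.append((cons, A_pre, c_pre))
                continue
            # Branch on the sign of every neuron: one if-then-else per ReLU.
            for pattern in _activation_patterns(len(c_pre)):
                branch_cons = list(cons)
                A_out, c_out = A_pre.copy(), c_pre.copy()
                for j, active in enumerate(pattern):
                    sign = 1.0 if active else -1.0
                    branch_cons.append((sign * A_pre[j], sign * c_pre[j]))
                    if not active:  # an inactive neuron outputs 0
                        A_out[j], c_out[j] = 0.0, 0.0
                new_paths.append((branch_cons, A_out, c_out))
        paths = new_paths
    return paths

def _activation_patterns(n):
    # All 2**n on/off patterns of a layer with n ReLU neurons.
    for bits in range(2 ** n):
        yield [bool((bits >> j) & 1) for j in range(n)]

# Tiny 2-2-1 example: two hidden ReLU neurons, hence up to four affine pieces.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])
for cons, A, c in symbolic_execute([W1, W2], [b1, b2]):
    print(len(cons), "constraints ->", A, c)

The naive enumeration above is exponential in the number of ReLU neurons, which is the path explosion problem mentioned in the abstract; in practice one would prune infeasible path conditions (e.g., with an LP or SMT check) and, as the paper proposes, resort to partial symbolic execution that stops branching early.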

DOI: http://dx.doi.org/10.14279/tuj.eceasst.82.1225

DOI (PDF): http://dx.doi.org/10.14279/tuj.eceasst.82.1225.1144
