8.-10. Aug. 2023, Høgskulen på Vestlandet (HVL), Bergen, Norway
Tuesday, August 8th
09:30 – 11:15 Session 1: Introduction and Trace Logic (chair: Bernhard Beckert)
First we define a denotational trace semantics for a While language that we show to be equivalent to a standard SOS semantics. Then we define a program logic based on chops and smallest fixed points for specifying program traces. Next we give a calculus for correctness judgments between programs and logic formulas. The calculus is again fully compositional and features a novel rule for loop invariants. It also turns out to be sound and complete. Completeness is shown by a natural construction that computes for each program a canonical formula that precisely represents its traces.
Contracts specifying a procedure’s behavior in terms of pre- and postconditions are essential for scalable software verification, but cannot express any constraints on the events occurring during execution of the procedure. This necessitates annotating code with intermediate assertions, preventing full specification abstraction. We propose a logic over symbolic traces that can specify recursive procedures in a modular manner, referring to specified programs only in terms of events. We also provide a deduction system based on symbolic execution and induction that we prove to be sound relative to a trace semantics. Our work generalizes contract-based to trace-based deductive verification by extending the notion of state-based contracts to trace-based contracts.
Recent work generalized state-based method contracts to trace contracts, which permit specifying the internal behavior of a procedure as a trace, such as calls or state changes. In this talk, we generalize trace contracts to context-aware trace contracts that allow specifying the call context through trace formulas, which cannot be expressed with the state-based Hoare-style contracts common in deductive verification. In particular, the behavior of concurrent, asynchronous procedures depends on the call context, because of the global protocol that governs scheduling. We propose a program logic of context-aware trace contracts for specifying the global behavior of asynchronous programs. We provide a sound proof system and transfer Liskov’s principle of behavioral subtyping to the analysis of asynchronous procedures.
11:15 – 11:45 Coffee
11:45 – 13:00 Session 2: Correctness-by-Construction (chair: Volker Stolz)
As the amount and complexity of software in safety-critical systems increases, stronger guarantees of correctness are needed that go beyond traditional software testing. Besides post-hoc verification, which verifies the correctness of a program after its construction, the Correctness-by-Construction (CbC) approach is a promising candidate for guaranteeing the correctness during program implementation. CbC aligns specification, verification, and implementation in a fine-grained, refinement-based process. It starts with a formal specification as an abstract Hoare triple and progressively refines it to a concrete implementation that satisfies the specification. CbC provides benefits such as early error detection and increased trust in the software through the structured development process. The CbC approach is supported by the CorC ecosystem, but expert knowledge is still needed to use the tool.
In this work, we introduce three levels of correctness guarantees – Specified, Tested, and Verified – to make the process accessible to developers without formal-methods expertise by providing clearer feedback at early stages of development. At the Specified level, the behavior of the program is formally defined, supported by clear error messages and troubleshooting tips. At the Tested level, test cases are automatically generated from the provided specification to detect coarse-grained errors in the program. At the Verified level, the program is verified with the KeY program verifier to guarantee its correctness. This level is supported by counterexample generation to reason about open proof goals.
The effectiveness of the approach is evaluated through expert interviews and a mutation-based analysis to assess its benefits and error detection capabilities.
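To make the three levels concrete, here is a small editorial illustration (not taken from the talk; the class and contract are invented): a Java method with a JML contract. The same artifact serves all three levels – the contract defines the behavior (Specified), test cases can be generated from it (Tested), and KeY can prove it (Verified).

```java
// Hypothetical example of a CbC-style starting point: a JML-specified
// method. The contract (requires/ensures/assignable) is the formal
// specification from which tests and proof obligations are derived.
public class Account {
    private int balance;

    /*@ requires amount > 0;
      @ ensures balance == \old(balance) + amount;
      @ assignable balance;
      @*/
    public void deposit(int amount) {
        balance += amount;
    }

    public int getBalance() {
        return balance;
    }
}
```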
The complexity of modern software continues to grow, making it increasingly challenging and time-consuming to develop new programs from scratch. Software product line (SPL) engineering offers techniques to efficiently develop families of software products. Code is implemented in features, which are composed into individual software variants. Besides complexity, the functional correctness of software is more important than ever, as bugs can have expensive consequences. As the most intuitive approach, testing can find bugs but cannot guarantee the correctness of a software system. As a more advanced approach, deductive verification offers the possibility to verify the behaviour of software against a formal specification. In general, deductive verification can also be applied to SPLs, but it then faces the challenge of an SPL’s variability. A commonly used approach for the verification of SPLs is product-based verification, where each product of an SPL is proven individually with respect to its specification. Since this approach does not scale for variant-rich product lines, we build on existing approaches of abstract specifications and reuse of proof parts to present our concept of partial proofs. We split proofs into a feature-specific and a product-specific part; the feature-specific part is proven only once for all products. We implement our concept of partial proofs in the tool VarCorC and evaluate it on three case studies. We found that both the number of executed proof steps and the verification time can be improved by using partial proofs. Further, we observe a trend of increasing improvements for large-scale SPLs.
Separation logic is an extension of first-order logic for reasoning about separate parts of the heap. Separation logic can be used in Reynolds’ extension of Hoare’s logic for reasoning about pointer-manipulating programs that dynamically allocate memory. Just like dynamic logic (DL) is related to first-order logic and Hoare logic, dynamic separation logic (DSL) is related to separation logic and Reynolds’ logic. In this talk, I present results concerning (new) foundations for separation logic, explain dynamic separation logic, and argue why it can be fruitful to integrate dynamic separation logic in a practical verification system like KeY.
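As a reminder of the setting (standard textbook material, not specific to this talk), the key proof rule that separation logic adds to Hoare-style reasoning is the frame rule, which allows local reasoning about the part of the heap a command actually touches:

```latex
\[
\frac{\{P\}\; C\; \{Q\}}
     {\{P * R\}\; C\; \{Q * R\}}
\qquad
\text{provided } C \text{ does not modify any variable free in } R.
\]
```

Here \(*\) is the separating conjunction: \(P * R\) holds of a heap that splits into two disjoint parts satisfying \(P\) and \(R\), respectively.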
13:00 – 14:00 Lunch
14:00 – 15:45 Session 3: Retrospective and Future of KeY (chair: Eduard Kamburjan)
In this series of very short talks, we present a selection of new features and recent developments in KeY.
- KeY on GitHub (Alexander Weigl)
- Assume and Assert (Florian Lanzinger)
- Free Invariants and Assignables (Florian Lanzinger)
- Proof Management (Wolfram Pfeifer)
- Support of Final Fields (Mattias Ulbrich)
- Redux Maker Tool (Lukas Grätz)
- Support for Java 21 (Alexander Weigl)
- Towards Language Independent Formulas (Daniel Drodt)
- SolidiKeY (Wolfgang Ahrendt)
Soundness is a primary goal for any software verification tool such as KeY. Nevertheless, one of KeY’s crucial calculus rules, the useMethodContract rule, has only been adequately proven sound for particular circumstances. In combination with KeY’s current proof management, this has led to a soundness hole, allowing the verification of incorrect programs. In this talk, we discuss the underlying problem, explore a restriction of rule applications that ensures soundness, and discuss a possible implementation. Further, we briefly give an outlook into improving KeY’s proof management.
15:45 – 16:15 Coffee
16:15 – 17:00 Session 4: Quantum Verification (chair: Thomas Baar)
Quantum computers open up new fields of application for hard-to-compute problems due to potential superpolynomial speedup. The design of quantum algorithms is complex and thus error-prone, which makes them a prime target for formal methods. We present our tool QIn to translate quantum circuits into a classical host language (Java). As a result, we can use any tool developed for the host language – and thereby leverage the full power of available formal methods for that language to reason about quantum circuits and create a verification/validation tool chain. Furthermore, QIn enables us to reason about hybrid programs, consisting of classical code and quantum circuits. This is crucial as, for the time being, only single subroutines of a program will be implemented on quantum computers. To show the possibilities of our approach, we present an example of a tool chain based on QIn. This tool chain relies on a software bounded-model checker and can prove correctness of hybrid programs combining the host language Java with quantum circuits. We use the Java Modeling Language (JML) as the specification language and show the feasibility of our approach on several examples including a bounded version of Shor’s algorithm.
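To convey the flavor of embedding quantum circuits in a classical host language, here is our own minimal sketch (not QIn’s actual encoding, which the talk describes): a single-qubit state represented by real-valued amplitudes, with a Hadamard gate as a plain Java method. Once gates are ordinary Java code, existing Java verification tooling can be applied to the circuit.

```java
// Editorial sketch: a qubit with real amplitudes for |0> and |1>.
// (Real quantum states need complex amplitudes; real values suffice
// for gates like Hadamard acting on basis states.)
public class Qubit {
    public double a0; // amplitude of |0>
    public double a1; // amplitude of |1>

    public Qubit(double a0, double a1) {
        this.a0 = a0;
        this.a1 = a1;
    }

    // Hadamard gate: H|0> = (|0>+|1>)/sqrt(2), H|1> = (|0>-|1>)/sqrt(2).
    public void hadamard() {
        double s = 1.0 / Math.sqrt(2.0);
        double b0 = s * (a0 + a1);
        double b1 = s * (a0 - a1);
        a0 = b0;
        a1 = b1;
    }
}
```

A property a classical checker could then verify: applying `hadamard` twice restores the original state, since H is self-inverse.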
18:30 – … Reception
Wednesday, August 9th
09:30 – 11:00 Session 1: Invited Talk and Dynamic Logic (chair: Mattias Ulbrich)
Reasoning about floating-point arithmetic is hard: special values and rounding errors make manual reasoning about numerical correctness essentially impossible for everyone but experts. In this talk, I will present our (not so recent anymore) integration of automated floating-point reasoning into KeY that allows proving several interesting properties, such as the absence of special values, within the existing KeY infrastructure. I will also discuss some of our recent work in scaling up automated static floating-point rounding error analyses that may be interesting for KeY, though they may require more substantial changes.
In this talk, I will revisit the foundations of dynamic logic, including its connection to related logics.
11:00 – 11:30 Coffee
11:30 – 13:00 Session 2: VeriFast and Separation Logic (chair: Reiner Hähnle)
The tool VeriFast is a direct competitor to KeY: It is designed to automatically verify implementation code (for example written in C) to be correct with respect to annotated pre-/post-conditions. This philosophy is often referred to as proof-carrying code.
However, the automatic verification of source code comes at a certain price: the user first has to enrich the source code with helpful annotations, for example with tailored function contracts. In this talk, we analyze the tool VeriFast from a user’s perspective: which steps does a user have to take in order to turn given source code (without any annotations) into an annotated version that can be verified automatically? In particular, we review the feedback mechanisms of VeriFast for unsuccessful proof attempts.
Deductive verification in VeriFast requires the user to formulate contracts as comments. These contracts can become complex and possibly confusing, especially for beginners like students. Our approach to assisting the user in formulating VeriFast contracts is based on visual representations. We have implemented this approach in a web-based tool.
In this talk, I will present which mechanisms for supporting the user’s cognition we have reviewed and chosen. Furthermore, I will show how we have implemented these mechanisms and what could be implemented in future work. Finally, we hope that this tool will be used at our university to help students use VeriFast and learn how to formulate VeriFast contracts.
This presentation explores the implementation details for visualizing textual VeriFast contracts by graphical representations. Our implementation has been realized as a Langium project, which has been extended with a React web application. The technical implementation includes writing a parser for C-code enriched with VeriFast annotations and transforming the resulting abstract syntax tree (AST) into visual representations through Reactflow. The presentation will also discuss the hands-on challenges and technical intricacies involved in the system’s architecture, highlighting the synchronization of the front-end visualization and back-end parsing.
13:00 – 14:00 Lunch
14:00 – 16:00 Session 3: New Features for KeY (chair: Richard Bubel)
JMLtk aims to be a modern framework for building applications based on the Java Modeling Language (JML). Currently, it supports parsing Java code with JML specifications into a common AST. In the future, additional analyses, such as linting and refactoring, will arrive under the hood of a Language Server for JML. In the more distant future, we plan to substitute the JML framework in KeY with this parser.
In this talk, I give insights into wrong and right ways of parsing JML, explain how my JML parser works, and show the new features to come in the next JML reference manual (hopefully).
The possibility for the user to interactively inspect the proof and apply rules is one of the distinguishing features of KeY. These user interactions are performed on the sequent and thus require in-depth knowledge about the underlying logic JavaDL. However, this logical representation is often difficult to understand, as it contains multiple elements that are not explicitly present in the Java/JML world.
In this talk, I will show a technique to represent proof goals (sequents) in JML, which is a view much closer to the user’s specification. In addition, I will present further recent usability improvements as well as ongoing work in this direction.
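For context, here is a small editorial example (invented for illustration, not from the talk) of the Java/JML level the user writes at. During verification, KeY turns such annotations into JavaDL sequents over an explicit heap; the technique above would render open goals back in this JML-like view instead.

```java
// Hypothetical JML-specified method: contract and loop invariant are
// what the user writes; KeY's proof goals normally expose the
// underlying heap terms, which this talk's view would hide.
public class Sum {
    /*@ requires a != null;
      @ ensures \result == (\sum int i; 0 <= i && i < a.length; a[i]);
      @*/
    public static int sum(int[] a) {
        int s = 0;
        /*@ loop_invariant 0 <= k && k <= a.length
          @   && s == (\sum int i; 0 <= i && i < k; a[i]);
          @ decreases a.length - k;
          @*/
        for (int k = 0; k < a.length; k++) {
            s += a[k];
        }
        return s;
    }
}
```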
16:00 – 16:30 Coffee
16:30 – 17:45 Session 4: Modern Domains for Formal Verification (chair: Wolfgang Ahrendt)
Neural networks have proven to be highly successful in a wide range of applications, which has also led to their deployment in sensitive or even safety-critical domains like credit scoring, self-driving cars, or airborne collision avoidance systems. As errors in these areas can have severe consequences, methods to obtain provable guarantees about the behaviour of neural networks are required.
In the beginning of this talk, I give a brief overview of properties of neural networks that can be verified and then focus on two approaches for solving the problem of neural network verification via
(i) optimization techniques, where I am going to concentrate on mixed integer linear programming, and
(ii) abstract interpretation, where I am going to talk about symbolic interval propagation and present current work on overapproximating neural networks with adaptive polynomials.
The comprehensive, understandable, and effective formal specification of complex systems is often difficult, especially for reactive and interactive systems like web services or embedded system components. In this paper, we propose contract automata, a new specification formalism for describing the expected behaviour of stateful systems. Contract automata combine two established concepts for formal system specification: contract-based specification and nondeterministic finite state automata. Contract automata restrict the effects that the operations of the specified system may have using input-output contracts. The automaton structure of a contract automaton describes when contracts are applicable. Contract automata support the refinement and composition of reactive systems, enabling modular verification of systems assembled from multiple subsystems. In this paper, we formally define the semantics of contract automata based on a two-party game between the system under test and its environment. We define the proof obligations and present techniques to prove a refinement relationship between contract automata, the validity of system compositions, and the compliance of source code with a contract automaton. We provide a tool for the generation of the proof obligations, which can be discharged with model checkers or static program analyses. We exemplify the use of contract automata by presenting the specification and verification of an emergency brake assistant.
Smart contracts manage resources on a blockchain platform. These resources exist in the form of cryptocurrency, but also, more generally, in the form of data that is stored on the ledger. Due to the peculiarities of blockchain networks, changing smart contracts after deployment is hard or even impossible. This means that smart contracts must be correct and secure upon deployment. However, frequent exploits show that smart contract security is still difficult to achieve.
In this talk, I describe a model-driven, capability-based approach for smart contract security: A developer, given a security policy for a smart contract application, first designs a model of the application. The model consists of state variables, functions, roles and capabilities. Our approach provides a definition of when the created model is consistent, and formal analysis methods to check consistency. Furthermore, we provide a definition of what constitutes a secure implementation w.r.t. this model, and describe how to achieve an implementation which fulfills this notion of security.
19:00 – … Social Dinner
Thursday, August 10th
09:30 – 11:00 Session 1: New Applications and Approaches of Deductive Verification (chair: Crystal Chang Din)
Finding semantic bugs in code is difficult and requires precious expert time. We cannot use verification or run-time assertion checking, since potential specifications or assertions are not yet established. We propose incremental retrospective assertion checking: at run-time we record (a large number of) program traces. Later, we assist the specification expert in designing inter-dependent assertions in an incremental procedure. Some assertions can be formally verified with deductive verification; others are only checked against the recorded program traces. When a check fails, we have found a bug. We position our approach in the middle ground between fully-fledged deductive verification and bug finding without semantic guidance.
Parallelization of programs relies on sound and precise analysis of data dependences in the code, specifically when dealing with loops. State-of-the-art tools tend to over-approximate and, occasionally, under-approximate dependences. The former misses parallelization opportunities; the latter can change the behavior of the parallelized program. We have developed a sound and highly precise approach to generate data dependences based on deductive verification. We implemented our approach in KeY. The evaluation shows that our approach can generate highly precise data dependences for representative code taken from HPC applications.
Most formal methods see the correctness of a software system as a binary decision. However, proving the correctness of complex systems completely is difficult because they are composed of multiple components, usage scenarios, and environments. We present QuAC, a modular approach for quantifying the error probability in service-oriented software systems by combining software architecture modeling with deductive verification. Our approach is based on a model of the service-oriented architecture and the probabilistic usage scenarios of the system. The error probability in a single service is approximated by a critical region, which is a formula describing input values that may lead to an erroneous execution. The critical regions can be determined by various analyses, e.g., formal verification, expert estimations, or testing. The critical regions and the software model are then combined into a probabilistic program from which the error probability of the given usage scenarios is computed using model counting. We also present an implementation of QuAC for Java using the modeling tool Palladio and the deductive verification tool KeY. We demonstrate its usability by applying it to a software simulation of an energy system.
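The core combination step can be illustrated numerically (our own minimal sketch, not the QuAC tool itself; the method and data are invented): given the probability of each usage scenario and an approximated probability that the scenario hits a critical region, the overall error probability follows by the law of total probability.

```java
// Editorial sketch of the probability combination underlying
// QuAC-style quantification (not QuAC's implementation).
public class ErrorProb {
    // p[i]: probability of usage scenario i (should sum to 1)
    // q[i]: approximated probability that scenario i leads to an
    //       erroneous execution (e.g., estimated via model counting
    //       over a critical region)
    public static double overall(double[] p, double[] q) {
        double total = 0.0;
        for (int i = 0; i < p.length; i++) {
            total += p[i] * q[i]; // law of total probability
        }
        return total;
    }
}
```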
11:00 – 11:30 Coffee
11:30 – 12:45 Session 2: Regression Verification and Information Flow (chair: Michael Kirsten)
Regression verification aims to derive a formal equivalence proof of two related programs, and complements regression testing to detect software errors. Previous work reduced this equivalence check to the satisfiability of Constrained Horn Clauses over uninterpreted predicates. However, a major drawback of this method is the required manual insertion of synchronization points into the source code, which makes the approach no longer automatic. To overcome this limitation, we propose a novel heuristic approach that employs bounded symbolic execution to automatically identify synchronization point candidates by finding relational linear combinations of program variables. Our implementation focuses on object-oriented Java programs and employs pattern-driven loop rewriting to simplify the program structures, thereby facilitating the verification process. Through experiments conducted on a benchmark suite of Java programs, we demonstrate improvements over existing state-of-the-art verification methods.
A secure software architecture is increasingly important in a data-driven world. When security is neglected, sensitive information might leak through unauthorized access. To mitigate this, software architects need tools and methods to quantify security risks in complex systems. In this talk, I will present doctoral research in its early stages concerned with creating constructive methods for building secure component-based systems from a quantitative information flow specification. The plan for the work is to leverage ideas from correctness-by-construction and quantitative information flow and then raise them to the architectural level in the form of component-based systems.
This work presents insights gained by investigating the relationship between algorithmic fairness and the concept of secure information flow. There is a strong correspondence between the two: if protected attributes such as race, gender, or age are treated as secret program inputs, then secure information flow means that these “secret” attributes cannot influence the result of a program. While most research in algorithmic fairness evaluation concentrates on studying the impact of algorithms (often treating the algorithm as a black box), the concepts and methods derived from information flow are based on a white-box analysis of an algorithm’s disparate treatment. In our recent work, we examine the relationship between quantitative as well as qualitative information-flow properties and fairness. Moreover, based on this duality, we derive a new quantitative notion of fairness called fairness spread, which can be easily analyzed using quantitative information flow. We demonstrate that off-the-shelf tools for information-flow properties can be used to formally analyze a program’s algorithmic fairness properties, including the new notion of fairness spread as well as established notions such as demographic parity.
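A toy illustration of the correspondence (our own example, not from the work presented): treating the protected attribute as a secret input, the first scoring function violates noninterference, because its output depends on the attribute (disparate treatment), while the second satisfies it.

```java
// Editorial toy example: fairness as noninterference with respect to
// a protected attribute treated as a secret input.
public class Scoring {
    // Unfair: the protected attribute influences the result,
    // i.e., information flows from the "secret" input to the output.
    public static int scoreBiased(int income, boolean protectedAttr) {
        return protectedAttr ? income / 2 : income;
    }

    // Fair in the noninterference sense: the result is independent
    // of the protected attribute.
    public static int scoreFair(int income, boolean protectedAttr) {
        return income;
    }
}
```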
12:45 – 13:00 Group Photo
13:00 – 14:00 Lunch
14:00 – 15:45 Session 3: Case Studies with KeY (chair: Alexander Weigl)
Red-black trees are a particularly efficient and widely used version of self-balancing binary search trees. The JDK uses them internally in java.util.HashMap and provides an implementation with java.util.TreeMap. In this case study, we push the limits of KeY by specifying and verifying the contains and add methods of this data structure. We observe that complex proofs over tree structures are doable with KeY’s dynamic frames, but require a lot of work regarding framing.
Features that are used extensively in this case study are model methods for the specification, as well as assertions for prover guidance during the verification – with the addition of JML scripts, a new feature for the automation and persistence of proofs.
In this experience report, we present the complete formal verification of a Java implementation of in-place super-scalar sample sort (ips4o) using the KeY program verification system. As ips4o is one of the fastest general-purpose sorting algorithms, this is an important step towards a collection of basic toolbox components that are both provably correct and highly efficient. At the same time, it is an important case study of how careful, highly efficient implementations of complicated algorithms can be formally verified directly. We provide an analysis of which features of the KeY system and its verification calculus are instrumental in enabling algorithm verification without any compromise on algorithm efficiency.
15:15 – … Coffee
15:15 – … Internal KeY Developer Meeting