If you are interested in this practical perspective of using formal methods, you can find Aniket’s article on ACM DL. More information on the SED is available in the Symbolic Debugging section on our website.


A program has secure information flow if it does not leak any secret information to publicly observable output. A large number of static and dynamic analyses have been devised to check programs for secure information flow. The authors present an algorithm that can carry out a systematic and efficient attack to automatically extract secrets from an insecure program. The algorithm combines static analysis and dynamic execution. The attacker strategy learns from past experiments and chooses as its next attack one that promises maximal knowledge gain about the secret. The idea is to provide the software developer with concrete information about the severity of an information leakage. The static analysis is based on KeY’s symbolic execution engine.
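To see what such an adaptive attack looks like in the simplest possible setting, consider this toy example (our own illustration, not taken from the paper): a program whose public output leaks one comparison bit about the secret, and an attacker that chooses each next experiment so as to halve the set of remaining candidate secrets, i.e., to maximise the expected knowledge gain.

```java
public class LeakDemo {
    // Toy insecure program: the publicly observable result reveals
    // one bit of information about the secret per call.
    static boolean leak(int secret, int guess) {
        return guess <= secret;
    }

    // Adaptive attacker: each experiment is the probe with maximal
    // expected knowledge gain, halving the candidate interval [lo, hi].
    static int extract(int lo, int hi, int secret) {
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (leak(secret, mid + 1)) {
                lo = mid + 1;   // secret > mid
            } else {
                hi = mid;       // secret <= mid
            }
        }
        return lo;
    }

    public static void main(String[] args) {
        System.out.println(extract(0, 1000, 42)); // prints 42
    }
}
```

With n experiments, this attacker narrows 2^n candidates down to a single value; the contribution of the paper is an algorithm that finds such maximally informative experiments automatically, for arbitrary leaks.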


Sorting is a fundamental functionality in libraries, for which efficiency is crucial. Correctness of the highly optimized implementations is often taken for granted. De Gouw et al. have shown that this certainty is deceptive by revealing a bug in the Java Development Kit (JDK) implementation of TimSort.

We have now formally analysed the *other* implementation of sorting in the JDK standard library: a highly efficient implementation of a dual pivot quicksort algorithm. We were able to deductively prove that the algorithm implementation is correct. However, a loop invariant annotated in the source code does not hold.

This post reports on the successful case study in which KeY was applied to a non-trivial real-world implementation of a non-trivial algorithm.

**Please find all details in the paper.**

While the worst-case runtime complexity of comparison-based sorting algorithms is known to be in the class O(n log(n)), there have been numerous attempts to reduce their “practical” complexity. In 2009, Vladimir Yaroslavskiy suggested a variation of the quicksort algorithm that uses two pivot elements. The figure above illustrates, by example, the arrangement of the array elements after the partitioning step. The pivot elements are shown as hatched bars. The first part (green in the figure) contains all elements smaller than the smaller pivot element, the middle part (blue) contains all elements between the two pivots (inclusive), and the third part (red) consists of all elements greater than the larger pivot. The algorithm proceeds by sorting the three parts recursively according to the same principle. Extensive benchmarking led to the adoption of Yaroslavskiy’s Dual Pivot Quicksort implementation as the OpenJDK 7 standard sorting function for primitive data type arrays in 2011. Conclusive explanations for its superior performance appear to be surprisingly hard to find, but evidence points to cache effects. Wild et al. conclude:

“The efficiency of Yaroslavskiy’s algorithm in practice is caused by advanced features of modern processors. In models that assign constant cost contributions to single instructions – i.e., locality of memory accesses and instruction pipelining are ignored – classic Quicksort is more efficient.”
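To make the partitioning scheme concrete, here is a minimal dual-pivot quicksort sketch (our own simplified illustration; the JDK implementation is far more elaborate, with insertion-sort fallbacks and pivot selection from five sampled elements):

```java
import java.util.Arrays;

public class DualPivotSketch {
    // Simplified dual-pivot quicksort: partition into three parts
    // (< p, between p and q inclusive, > q), then recurse on each.
    static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        if (a[lo] > a[hi]) swap(a, lo, hi);
        int p = a[lo], q = a[hi];          // the two pivots, p <= q
        int lt = lo + 1, gt = hi - 1, i = lo + 1;
        while (i <= gt) {
            if (a[i] < p) swap(a, i++, lt++);       // first (left) part
            else if (a[i] > q) swap(a, i, gt--);    // third (right) part
            else i++;                               // middle part
        }
        swap(a, lo, --lt);                 // move pivots into place
        swap(a, hi, ++gt);
        sort(a, lo, lt - 1);               // elements < p
        sort(a, lt + 1, gt - 1);           // p <= elements <= q
        sort(a, gt + 1, hi);               // elements > q
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 8, 3};
        sort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 8]
    }
}
```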

We formally analysed the class `java.util.DualPivotQuicksort`, contained both in Oracle’s JDK and in OpenJDK 8.

Like many modern programming languages, the standard library of Java uses a portfolio of various sorting algorithms in different contexts. This class, consisting of more than 3000 lines of code, makes use of no less than four different algorithms: merge sort, insertion sort, counting sort, and quicksort. For the `byte`, `char`, and `short` data types, counting sort is used. Arrays of other primitive data types are first scanned once to determine whether they consist of a small number of already sorted sequences; if that is the case, merge sort is used, taking advantage of the existing sorted array parts. For arrays with fewer than 47 entries, insertion sort is used. In all other cases, quicksort is used (e.g., for large integer arrays that are not partially sorted). This “default” option is the subject of our correctness proof.
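For the small value ranges of `byte`, `char`, and `short`, counting sort needs only a single counting pass plus a fixed-size table. A minimal sketch for `byte[]` (our illustration, not the JDK code):

```java
public class ByteCountingSort {
    // Counting sort for byte[]: only 256 possible values, so one counting
    // pass and one write-back pass suffice (O(n + 256) time).
    static void sort(byte[] a) {
        int[] count = new int[256];
        for (byte x : a) {
            count[x - Byte.MIN_VALUE]++;   // shift -128..127 to index 0..255
        }
        int i = 0;
        for (int v = 0; v < 256; v++) {
            while (count[v]-- > 0) {
                a[i++] = (byte) (v + Byte.MIN_VALUE);
            }
        }
    }
}
```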

```java
class DualPivotQuicksort {
    // ...
    /*@ public normal_behavior
      @ ensures (\forall int i; 0 <= i && i < a.length;
      @           (\forall int j; 0 < j && j < a.length;
      @             i < j ==> a[i] <= a[j]));
      @ ensures \seqPerm(\array2seq(a), \old(\array2seq(a)));
      @ assignable a[*];
      @*/
    void sort(int[] a) { ... }
}
```

This JML specification covers the following aspects of the behaviour of the method `sort`:

- On termination, the array is sorted in increasing order (the first `ensures` clause).
- On termination, the array contains a permutation of the initial array content (the second `ensures` clause).
- The implementation does not modify any existing memory location except the entries of the array (the `assignable` clause).
- The method always terminates (this is the default in JML if no `diverges` clause has been specified).
- The method does not throw an exception. This is implied by the contract being declared `normal_behavior`.

To modularise the problem, we broke the code down into smaller units by refactoring the large sort method into smaller new methods. Besides disentangling the different sorting algorithms, this significantly reduced the complexity of the individual proof obligations. The parts of the code that suggested themselves for method extraction were the partitioning implementation, the initial sorting of the five chosen elements, and several small loops for moving the indices used in the partitioning algorithm. Besides this modularisation into smaller sub-problems, we also reduced complexity by separating three parts of the requirement specification:

- the sortedness property,
- the permutation property, and
- the absence of integer overflows.

**Sortedness Property:**

**Permutation Property:**

**Integer Overflow:**

- DualPivotQuicksort_overflow.java
- SinglePivotParition_overflow.java
- DualPivotQuicksort_CBMC.java: This file was verified via the software bounded model checker CBMC.

**ZIP files with all sources and proofs:**

- all sources: DualPivot_KeY_Sources.zip
- all files: DualPivot_KeY_Proofs.zip
- employed KeY version: key-2.7_9c003….zip

If the list contains fewer than 47 elements, the portfolio engine falls back to insertion sort – in spite of its worse average-case performance – to avoid the comparatively large overhead of quicksort or merge sort. To be more efficient, a variant in which two elements are sorted at a time is used in this case. The challenge to verify this algorithm was put forth at the VerifyThis competition 2017. Michael Kirsten carried out a verification of the actual implementation using KeY.
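The “two elements at a time” idea can be sketched as a pair insertion sort (our simplified illustration, not the verified JDK code): each pair is ordered first, the larger element is inserted into the sorted prefix, and the smaller one then reuses the comparisons already made.

```java
import java.util.Arrays;

public class PairInsertionSort {
    // Simplified pair insertion sort: two elements per outer iteration.
    static void sort(int[] a) {
        int i = 1;
        for (; i + 1 < a.length; i += 2) {
            int big = a[i], small = a[i + 1];
            if (big < small) { int t = big; big = small; small = t; }
            int j = i - 1;
            // Insert the larger element first, leaving a gap of two...
            while (j >= 0 && a[j] > big) { a[j + 2] = a[j]; j--; }
            a[j + 2] = big;
            // ...then continue the same scan for the smaller element.
            while (j >= 0 && a[j] > small) { a[j + 1] = a[j]; j--; }
            a[j + 1] = small;
        }
        if (i < a.length) {              // odd length: one element remains
            int last = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > last) { a[j + 1] = a[j]; j--; }
            a[j + 1] = last;
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 3, 2, 1, 5};
        sort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}
```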

KeY is primarily a system for *heavyweight symbolic execution* (mainly of Java programs); programs are executed by a symbolic interpreter based on a set of symbolic execution rules in dynamic logic. Heavyweight symbolic execution is a powerful technique for program proving and other applications, such as symbolic debugging (see our SED tool). However, it suffers from the so-called **path explosion problem**: Basically, the execution splits into *independent branches at each static branching point* in the program, which makes it difficult to tackle large programs.

Consider the following program computing the Greatest Common Divisor (GCD) of two integers:

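The program was originally embedded as a Gist; the following is our own reconstruction from the description (the original may differ in details), showing the normalization that causes the early branching:

```java
public class Gcd {
    // Euclidean algorithm; inputs are first normalized to be non-negative.
    public static int gcd(int a, int b) {
        if (a < 0) {
            a = -a;        // branch on the sign of a
        }
        if (b < 0) {
            b = -b;        // branch on the sign of b: 2 x 2 = 4 paths
        }
        while (b != 0) {
            int t = b;
            b = a % b;
            a = t;
        }
        return a;
    }
}
```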

At the beginning, a seemingly harmless normalization takes place: Negative inputs are inverted to positive ones. *This is already a problem for symbolic execution*, since after the normalization of the two inputs, we already have four symmetric branches in the symbolic execution tree:

This is where state merging comes into play. It allows *merging nodes in the symbolic execution tree which share the same program counter*, i.e., the same remaining program to execute. You can use it interactively from within the GUI of KeY, or by adding **merge point specifications to your source code**. The latter is clearly the recommended way to proceed, since it is more transparent, easier to automate, and less error-prone. The following code sample shows these annotations within our GCD example:

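Again, the annotated code was embedded as a Gist; the sketch below is our reconstruction. It uses only the two annotation forms mentioned in the text, `//@ merge_point;` and `merge_proc "MergeByIfThenElse"`; the predicate-abstraction variant takes additional parameters whose exact syntax we omit here rather than guess:

```java
public class GcdMerged {
    public static int gcd(int a, int b) {
        if (a < 0) {
            a = -a;
        }
        /*@ merge_point
          @   merge_proc "MergeByIfThenElse";  // optional: this is the default
          @*/
        if (b < 0) {
            b = -b;
        }
        //@ merge_point;  // shorthand: if-then-else merging by default
        while (b != 0) {
            int t = b;
            b = a % b;
            a = t;
        }
        return a;
    }
}
```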

The `merge_point` specification elements (KeY-specific extensions to JML, the Java Modeling Language) indicate where branches should be merged. It can be as easy as writing `//@ merge_point;`, which merges branches based on a **non-parametric “if-then-else” procedure** (the `merge_proc "MergeByIfThenElse"` part is optional, since it is the default). If you want more control, you can use the **“Predicate Abstraction” technique**, as in the annotation for the second normalization in the example: there, we are interested in the facts that the inputs changed by the merge (the variable `b` in this case) are positive and either equal the initial value of `b` or its negation. The `conjunctive` part tells KeY to check which conjunctions of the predicates hold: for every non-empty element of the power set of the given predicates, KeY forms the conjunction of the predicates in that element; with two predicates, this yields three conjunctions, two of which consist of a single predicate. If KeY manages to prove that one of these conjunctions holds for the inputs, then the most specific one is used to abstract away from the actual differing values in the merged states during the merge. Instead of `conjunctive`, you can also specify that you are interested in all disjunctions (`disjunctive`) or in only the single predicates as they are (`simple`). If none of the abstract elements so created holds, then KeY uses a “top” element for abstraction, which basically means that you lose all knowledge about the differing variables of the merged branches except for the names and types of the variables.

As a **general guideline**, state merging should be **interesting for you if you analyze large programs with lots of splits** (a high *cyclomatic complexity*), or if you have **simple splits rather early in your program**, as in our GCD example. **State merging works best when applied as locally as possible**; in the example, we could also decide to merge at a later point in the program, which would however increase the proof size substantially, due to overly complex expressions in the merged state and a lower potential for savings, since there is less code left to execute.

The *Gcd example is included in the examples shipped with KeY*: In KeY, open “File -> Load example” and navigate to “Getting Started -> State Merging”. *For more information, have a look at the following paper*, which discusses the theory behind state merging in KeY:

A General Lattice Model for Merging Symbolic Execution Branches. In: Ogata, Kazuhiro; Lawford, Mark; Liu, Shaoying (Eds.): Formal Methods and Software Engineering – 18th International Conference on Formal Engineering Methods, ICFEM 2016, Tokyo, Japan, November 14–18, 2016, Proceedings, pp. 57–73, Springer International Publishing, 2016.

**Update:** The following journal paper discusses improvements of proof sizes in a case study on the TimSort sorting algorithm, which is part of the Java standard library. Several proofs became significantly shorter by using state merging; the proofs of two methods, which were out of reach without state merging, finally became feasible thanks to this technique.

Verifying OpenJDK's Sort Method for Generic Collections. In: Journal of Automated Reasoning, 2017, ISSN: 1573-0670.

Interested in their work and how the authors make use of KeY to verify properties of concurrent programs (which is not natively supported by KeY)? Head over to Springer and read their article “Synthesis of verifiable concurrent Java components from formal models”.

If you used or intend to use KeY as part of your own research, we would be interested to hear from you and to receive any feedback that can help improve KeY.

KeY is a deductive verification tool for sequential Java programs. It is based on a rich program logic for Java source code. KeY can perform functional verification of Java programs annotated with specifications in the Java Modeling Language. Specification elements include class invariants and method contracts. The rules of KeY’s program logic realize a symbolic execution engine for Java. Verification proceeds method-wise; unbounded loops are approximated by invariants, method calls by contracts. KeY incorporates state-of-the-art proof search and an auto-active mode that in many cases results in fully automatic proofs. Otherwise, the user can perform interactive steps or ask the system to search for a counterexample. KeY has been successfully used to verify complex legacy code, such as the JDK’s sort method, where a subtle bug was found and subsequently fixed. I will explain some of the theoretical underpinnings and design principles of KeY. I will also give a live demonstration of some of KeY’s capabilities.

At the core of the system is a theorem prover for the first-order Dynamic Logic of the respective target language, in particular Java, with a user-friendly graphical interface.

The KeY Symposium brings together researchers interested in KeY and related aspects. We will exchange recent achievements and current ideas, and discuss the next steps and milestones of the area, as well as future directions in general. The latest developments in the KeY tool are also presented and discussed.

As for the 11th Symposium in 2012, the KeY Symposium takes place at Chalet Giersch, located in the French Alps.

84 Chemin du Plan du Mont

F – 74230 MANIGOD

To be announced.

Dominic Scheurer

(to be continued)