0. Preface
Static program analysis is the art of reasoning about the behavior of computer programs without actually running them. This is useful not only in optimizing compilers for producing efficient code but also for automatic error detection and other tools that can help programmers.
As is known from the results of Turing and Rice, all nontrivial properties of the behavior of programs written in common programming languages are mathematically undecidable.
This means that automated reasoning about software generally must involve approximation.
One of the key challenges when developing such analyses is how to achieve precision and efficiency that are high enough to be practically useful.
We take a constraint-based approach to static analysis where suitable constraint systems conceptually divide the analysis task into a front-end that generates constraints from program code and a back-end that solves the constraints to produce the analysis results.
The analyses that we cover are expressed using different kinds of constraint systems, each with their own constraint solvers:
- term unification constraints, with an almost-linear union-find algorithm,
- conditional subset constraints, with a cubic-time algorithm, and
- monotone constraints over lattices, with variations of fixed-point solvers.
1. Introduction
Dijkstra: “Program testing can be used to show the presence of bugs, but never to show their absence.”
A program analyzer is a program that takes other programs as input and decides whether or not they have a certain property.
Rice’s theorem is a general result from 1953 which informally states that all interesting questions about the behavior of programs (written in Turing-complete programming languages) are undecidable.
At first, this seems like a discouraging result; however, it does not prevent approximate answers. While it is impossible to build an analysis that correctly decides a property for every analyzed program, it is often possible to build analysis tools that give useful answers for most realistic programs. As the ideal analyzer does not exist, there is always room for building more precise approximations (which is colloquially called the full employment theorem for static program analysis designers).
- We say that a program analyzer is sound if it never gives incorrect results (but it may answer maybe).
- A veri?cation tool is typically called sound if it never misses any errors of the kinds it has been designed to detect, but it is allowed to produce spurious warnings (also called false positives).
- An automated testing tool is called sound if all reported errors are genuine, but it may miss errors.
Program analyses that are used for optimizations typically require soundness: if given false information, the optimization may change the semantics of the program, whereas if given trivial information, the optimization fails to accomplish anything.
Although soundness is a laudable goal in analysis design, modern analyzers for real programming languages often cut corners by sacrificing soundness to obtain better precision and performance, for example when modeling reflection in Java.
It is impossible to build a static program analysis that can decide whether a given program may fail when executed. Moreover, this result holds even if the analysis is only required to work for programs that halt on all inputs. In other words, the halting problem is not the only obstacle; approximation is inevitably necessary.
2. A Tiny Imperative Programming Language
Abstract syntax trees (ASTs) as known from compiler construction provide a representation of programs that is suitable for flow-insensitive analyses, for example, type analysis, control flow analysis, and pointer analysis.
For flow-sensitive analysis, in particular dataflow analysis, where the order of statements matters, it is more convenient to view the program as a control flow graph, which is a different representation of the program code.
A control flow graph (CFG) is a directed graph, in which nodes correspond to statements and edges represent possible flow of control.
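As an illustrative sketch (hypothetical names, not taken from the text), such a graph can be represented directly with nodes that hold statements together with their successor and predecessor sets:

```python
# Minimal CFG representation sketch (hypothetical names): nodes hold
# statements, edges model possible flow of control between them.

class CFGNode:
    def __init__(self, stmt):
        self.stmt = stmt      # the statement at this node
        self.succ = set()     # successors: statements that may execute next
        self.pred = set()     # predecessors

def add_edge(a, b):
    """Record that control may flow from node a to node b."""
    a.succ.add(b)
    b.pred.add(a)

# Example: CFG for "x = 1; if (x > 0) { y = 2; } z = 3;"
n1, n2, n3, n4 = (CFGNode(s) for s in ("x = 1", "if (x > 0)", "y = 2", "z = 3"))
add_edge(n1, n2)
add_edge(n2, n3)   # branch taken
add_edge(n2, n4)   # branch not taken
add_edge(n3, n4)
```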
3. Type Analysis
We resort to a conservative approximation: typability. A program is typable if it satisfies a collection of type constraints that is systematically derived, typically from the program AST.
The type constraints are constructed in such a way that the above requirements are guaranteed to hold during execution, but the converse is not true. Thus, our type checker will be conservative and reject some programs that in fact will not violate any requirements during execution.
For a given program we generate a constraint system and define the program to be typable when the constraints are solvable.
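For example (a sketch using the common notation, not introduced above, where [[E]] denotes the type variable for the expression or variable E), an assignment x = y + 1 could give rise to the constraints [[y]] = int, [[1]] = int, and [[x]] = int; the program is then typable only if these constraints can be satisfied simultaneously with all other constraints generated from it.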
This class of constraints can be efficiently solved using a unification algorithm.
If solutions exist, then they can be computed in almost linear time using a unification algorithm for regular terms as explained below. Since the constraints may also be extracted in linear time, the whole type analysis is quite efficient.
The unification algorithm we use is based on the familiar union-find data structure (also called a disjoint-set data structure) from 1964 for representing and manipulating equivalence relations.
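As a minimal sketch of that data structure (hypothetical names, not the book's implementation), union-find with path compression and union by rank gives the almost-linear behavior mentioned above; unifying two type variables then amounts to merging their equivalence classes:

```python
# Union-find sketch with path compression and union by rank
# (hypothetical names; not the implementation used in the text).

parent = {}
rank = {}

def make_set(x):
    parent[x] = x
    rank[x] = 0

def find(x):
    """Return the canonical representative of x's equivalence class."""
    if parent[x] != x:
        parent[x] = find(parent[x])   # path compression
    return parent[x]

def union(x, y):
    """Merge the equivalence classes of x and y (union by rank)."""
    rx, ry = find(x), find(y)
    if rx == ry:
        return
    if rank[rx] < rank[ry]:
        rx, ry = ry, rx
    parent[ry] = rx
    if rank[rx] == rank[ry]:
        rank[rx] += 1

# Example: solving the constraints [[x]] = int and [[y]] = [[x]].
for v in ("[[x]]", "[[y]]", "int"):
    make_set(v)
union("[[x]]", "int")
union("[[y]]", "[[x]]")
assert find("[[y]]") == find("int")
```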
The type analysis is of course only approximate, which means that certain programs will be unfairly rejected.
4. Lattice Theory
We circumvent undecidability by introducing approximation. That is, the analysis must be prepared to handle uncertain information, in this case situations where it does not know the sign of some expression, so we add a special abstract value (⊤) representing “don’t know”.
Due to undecidability, imperfect precision is inevitable.
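As a small illustrative sketch of this idea (hypothetical encoding, not from the text), a sign analysis could abstract integers into -, 0, and +, together with ⊥ and the “don’t know” value ⊤:

```python
# Sketch of a sign abstraction with a "don't know" element ⊤
# (hypothetical encoding; the text introduces the idea abstractly).

BOT, NEG, ZERO, POS, TOP = "⊥", "-", "0", "+", "⊤"

def alpha(n):
    """Abstract a concrete integer into its sign."""
    if n < 0:
        return NEG
    if n == 0:
        return ZERO
    return POS

def abstract_add(a, b):
    """Abstract addition on signs; returns ⊤ when the sign is unknown."""
    if BOT in (a, b):
        return BOT
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    if a == b and a in (NEG, POS):
        return a          # (+) + (+) = +, (-) + (-) = -
    return TOP            # e.g. (+) + (-): the sign depends on magnitudes

assert abstract_add(alpha(3), alpha(5)) == POS
assert abstract_add(alpha(3), alpha(-5)) == TOP
```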
- partial order
- upper bound
- lower bound
- least upper bound
- greatest lower bound
- lattice
The least upper bound operation plays an important role in program analysis.
As we shall see, we use least upper bound when combining abstract information from multiple sources, for example when control flow merges after the branches of if statements.
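Continuing the hypothetical sign encoding sketched earlier, the join in the lattice ⊥ < {-, 0, +} < ⊤ simply returns ⊤ whenever two distinct non-bottom values are combined:

```python
# Least upper bound (join) in the sign lattice ⊥ < {-, 0, +} < ⊤,
# reusing the hypothetical encoding from the earlier sign sketch.

BOT, NEG, ZERO, POS, TOP = "⊥", "-", "0", "+", "⊤"

def lub(a, b):
    """Least upper bound of two sign-lattice elements."""
    if a == BOT:
        return b
    if b == BOT:
        return a
    if a == b:
        return a
    return TOP   # distinct non-bottom elements are only bounded above by ⊤

# If one branch of an if statement gives x the abstract value + and the other
# gives it 0, then the information after the merge point is their join:
assert lub(POS, ZERO) == TOP
assert lub(POS, POS) == POS
```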
- the height of a lattice
- powerset lattice
- reverse powerset lattice
- flat(A)
- product
- map lattice
- homomorphism
- isomorphism
- lift(L)
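As a brief sketch (hypothetical names) of two of the constructions listed above, a powerset lattice is ordered by ⊆ with join given by set union, and a map lattice is ordered and joined pointwise:

```python
# Sketch (hypothetical names) of the powerset lattice of a finite set,
# ordered by ⊆ with join = set union, and of a map lattice, which is
# ordered and joined pointwise.

def powerset_leq(a, b):
    """Order in the powerset lattice: a ⊑ b iff a ⊆ b."""
    return a <= b                       # subset test on (frozen)sets

def powerset_lub(*elems):
    """Join in the powerset lattice is union; the empty set is ⊥."""
    return frozenset().union(*elems)

def map_lub(m1, m2, elem_lub):
    """Pointwise join of two maps with the same keys."""
    return {k: elem_lub(m1[k], m2[k]) for k in m1}

# Example: joining two abstract states that map variables to sets of values.
s1 = {"x": frozenset({1}), "y": frozenset()}
s2 = {"x": frozenset({2}), "y": frozenset({3})}
joined = map_lub(s1, s2, powerset_lub)
assert joined == {"x": frozenset({1, 2}), "y": frozenset({3})}
assert powerset_leq(s1["x"], joined["x"])   # inputs are below the join
```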
Also notice that it is important for the analysis of this simple program that the order of statements is taken into account; analyses with this property are called flow-sensitive.
- monotone
As the lattice order, when used in program analysis, represents precision of information, the intuition behind monotonicity is that “more precise input does not result in less precise output”.
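As a small sketch (hypothetical helper, not from the text), monotonicity of a function on a finite lattice can be checked by brute force over all ordered pairs of elements:

```python
# Brute-force monotonicity check on a finite lattice (hypothetical helper):
# f is monotone iff x ⊑ y implies f(x) ⊑ f(y) for all elements x, y.

from itertools import product

def is_monotone(f, elements, leq):
    return all(leq(f(x), f(y))
               for x, y in product(elements, repeat=2)
               if leq(x, y))

# Example: on the powerset lattice of {1, 2} ordered by ⊆,
# the function "add 1 to the set" is monotone.
elems = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
assert is_monotone(lambda s: s | {1}, elems, lambda a, b: a <= b)
```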
- fixed-point
- least fixed-point
- equation system
- constraint functions
- solution
A solution to an equation system provides a value from L for each variable such that all equations are satis?ed.
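Concretely, an equation system x_1 = f_1(x_1, ..., x_n), ..., x_n = f_n(x_1, ..., x_n) over a lattice L can be combined into a single function F : L^n → L^n defined by F(x_1, ..., x_n) = (f_1(x_1, ..., x_n), ..., f_n(x_1, ..., x_n)).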
This clearly shows that a solution to an equation system is the same as a fixed-point of its functions. As we aim for the most precise solutions, we want least fixed-points.
The fixed-point theorem
In a lattice L with finite height, every monotone function f : L → L has a unique least fixed-point denoted fix(f) defined as
fix(f) = \bigsqcup_{i \geq 0} f^i(\bot)
The theorem is a powerful result: It tells us not only that equation systems over lattices always have solutions, provided that the lattices have finite height and the constraint functions are monotone, but also that uniquely most precise solutions always exist. Furthermore, the careful reader may have noticed that the theorem provides an algorithm for computing the least fixed-point.
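A minimal sketch of that algorithm (hypothetical names): compute the sequence ⊥, f(⊥), f²(⊥), … until two consecutive values coincide, which is guaranteed to happen when the lattice has finite height and f is monotone:

```python
# Naive fixed-point computation suggested by the theorem (hypothetical names):
# iterate f starting from ⊥ until the value stops changing. Termination is
# guaranteed for a monotone f on a lattice of finite height.

def fix(f, bottom):
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# Example on the powerset lattice of {1, 2, 3} with ⊥ = the empty set:
# f(S) = S ∪ {1} ∪ {x + 1 | x ∈ S, x < 3} has least fixed point {1, 2, 3}.
f = lambda s: s | {1} | {x + 1 for x in s if x < 3}
assert fix(f, frozenset()) == {1, 2, 3}
```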
The least fixed point is the most precise possible solution to the equation system, but the equation system is (for a sound analysis) merely a conservative approximation of the actual program behavior. This means that the semantically most precise possible (while still correct) answer is generally below the least fixed point in the lattice.