EECS665 – Compiler Construction – Drew Davidson $\newcommand{\bigsqcap}{\mathop ⨅}$

    Dataflow Solving

    DATAFLOW ANALYSIS Adapted from a set of notes by Professor Susan Horwitz, UW-Madison

    Contents

    • Topic Introduction
    • MOP Solution
    • Solving a set of equations
    • Iterative Algorithms
    • The Lattice Model of Dataflow Analysis
      Topic Introduction

      Once a dataflow problem has been set up, we can focus on solving instances of the problem. A solution to an instance of a dataflow problem is a dataflow fact for each node of the given CFG. But what does it mean for a solution to be correct, and if there is more than one correct solution, how can we judge whether one is better than another?

      Ideally, we would like the information at a node to reflect what might happen on all possible paths to that node. This ideal solution is called the meet over all paths (MOP) solution, and is discussed below. Unfortunately, it is not always possible to compute the MOP solution; we must sometimes settle for a solution that provides less precise information.

      The "Meet Over All Paths" Solution

      The MOP solution (for a forward problem) for each CFG node n is defined as follows:

      • For every path "enter → ... → n", compute the dataflow fact induced by that path (by applying the dataflow functions associated with the nodes on the path to the initial dataflow fact).
      • Combine the computed facts (using the combining operator, $\sqcap$ ).
      • The result is the MOP solution for node n.

      For instance, in our running example program there are two paths from the start of the program to line 9 (the assignment k = a):
       

      Path                 Constants associated w/ that path
      1 → 2 → 3 → 4 → 9    k=2, a=4, x=5
      1 → 2 → 6 → 7 → 9    k=2, a=4, x=8

      Combining the information from both paths, we see that the MOP solution for node 9 is: k=2 and a=4.
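The path-by-path computation can be sketched in a few lines of Python (a sketch only: the dict representation of facts and the variable names are illustrative, not part of the notes). Facts map variables to constants, and the combining operator keeps only the pairs on which all paths agree:

```python
# Sketch of the MOP computation for the two paths to node 9 above.
# A dataflow fact is a dict mapping variable -> constant value.

def meet(f1, f2):
    """Combine two facts: keep only the (var, const) pairs they agree on."""
    return {v: c for v, c in f1.items() if f2.get(v) == c}

fact_path1 = {"k": 2, "a": 4, "x": 5}   # fact induced by 1 -> 2 -> 3 -> 4 -> 9
fact_path2 = {"k": 2, "a": 4, "x": 8}   # fact induced by 1 -> 2 -> 6 -> 7 -> 9

mop_at_9 = meet(fact_path1, fact_path2)
print(mop_at_9)   # {'k': 2, 'a': 4}: x is not constant at node 9
```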

      It is worth noting that even the MOP solution can be overly conservative (i.e., may include too much information for a "may" problem, and too little information for a "must" problem), because not all paths in the CFG are executable. For example, a program may include a predicate that always evaluates to false (e.g., a programmer may include a test as a debugging device -- if the program is correct, then the test will always fail, but if the program contains an error then the test might succeed, reporting that error). Another way that non-executable paths can arise is when two predicates on the path are not independent (e.g., whenever the first evaluates to true then so does the second). These situations are illustrated below.


      Unfortunately, since most programs include loops, they also have infinitely many paths, and thus it is not possible to compute the MOP solution to a dataflow problem by computing information for every path and combining that information. Fortunately, there are other ways to solve dataflow problems (given certain reasonable assumptions about the dataflow functions associated with the CFG nodes). As we shall see, if those functions are distributive, then the solution that we compute is identical to the MOP solution. If the functions are monotonic, then the solution may not be identical to the MOP solution, but is a conservative approximation.

      Solving a Dataflow Problem by Solving a Set of Equations

      The alternative to computing the MOP solution directly is to solve a system of equations that essentially specifies that local information must be consistent with the dataflow functions. In particular, we associate two dataflow facts with each node n:

      1. n.before: the information that holds before n executes, and
      2. n.after: the information that holds after n executes.
      These n.befores and n.afters are the variables of our equations, which are defined as follows (two equations for each node n):
      1. n.before = $\bigsqcap$(p1.after, p2.after, ...)
        where p1, p2, etc are n's predecessors in the CFG (and $\sqcap$ is the combining operator for this dataflow problem).
      2. n.after = fn ( n.before )
      In addition, we have one equation for the enter node:
      • enter.after = init (recall that "init" is part of the specification of a dataflow problem)
      These equations make intuitive sense: the dataflow information that holds before node n executes is the combination of the information that holds after each of n's predecessors executes, and the information that holds after n executes is the result of applying n's dataflow function to the information that holds before n executes.

      One question is whether, in general, our system of equations will have a unique solution. The answer is that, in the presence of loops, there may be multiple solutions. For example, consider the simple program whose CFG is given below:


      The equations for constant propagation are as follows (where $\sqcap$ is the intersection combining operator):

        enter.after = empty set
        1.before = enter.after
        1.after = 1.before - (x, *) union (x, 2)
        2.before = 1.after
        2.after = if (x, c) is in 2.before then 2.before - (y, *) union (y, c), else 2.before - (y, *)
        3.before = $\bigsqcap$(2.after, 4.after )
        3.after = 3.before
        4.before = 3.after
        4.after = 4.before
      Because of the cycle in the example CFG, the equations for 3.before, 3.after, 4.before, and 4.after are mutually recursive, which leads to the four solutions shown below (differing on those four values).

      Variable    Solution 1       Solution 2       Solution 3       Solution 4
      1.before    { }              { }              { }              { }
      1.after     {(x,2)}          {(x,2)}          {(x,2)}          {(x,2)}
      2.before    {(x,2)}          {(x,2)}          {(x,2)}          {(x,2)}
      2.after     {(x,2) (y,2)}    {(x,2) (y,2)}    {(x,2) (y,2)}    {(x,2) (y,2)}
      3.before    { }              {(x,2)}          {(y,2)}          {(x,2) (y,2)}
      3.after     { }              {(x,2)}          {(y,2)}          {(x,2) (y,2)}
      4.before    { }              {(x,2)}          {(y,2)}          {(x,2) (y,2)}
      4.after     { }              {(x,2)}          {(y,2)}          {(x,2) (y,2)}

      The solution we want is solution 4, which includes the most constant information. In general, for a "must" problem the desired solution will be the largest one, while for a "may" problem the desired solution will be the smallest one.
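As a sanity check, all four solutions can be verified mechanically. The sketch below (illustrative only: facts are encoded as sets of (variable, constant) pairs) checks the mutually recursive equations for nodes 3 and 4, whose dataflow functions are identities and whose meet is set intersection:

```python
# Verify that each of the four candidate solutions satisfies the
# mutually recursive equations for nodes 3 and 4.

two_after = frozenset({("x", 2), ("y", 2)})   # 2.after, same in every solution

def satisfies_equations(v):
    """v is the common value of 3.before, 3.after, 4.before, 4.after."""
    three_before = two_after & v        # 3.before = meet(2.after, 4.after)
    three_after = three_before          # node 3 has an identity function
    four_before = three_after           # 4.before = 3.after
    four_after = four_before            # node 4 has an identity function
    return v == three_before == three_after == four_before == four_after

candidates = [frozenset(),
              frozenset({("x", 2)}),
              frozenset({("y", 2)}),
              two_after]
print([satisfies_equations(v) for v in candidates])  # [True, True, True, True]
```

Every subset of 2.after is a solution, which is exactly why the equations alone do not pin down a unique answer.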

      Self-Study #1

      Using the simple CFG given above, write the equations for live-variable analysis, as well as the greatest and least solutions. Which is the desired solution, and why?

      solution

      Iterative Algorithms

      Many different algorithms have been designed for solving a dataflow problem's system of equations. For our purposes, we will focus on a type of algorithm called an iterative algorithm.

      Most of the iterative algorithms are variations on the following algorithm (this version is for forward problems). It uses a new value $\top$ (called "top"). $\top$ has the property that, for all dataflow facts $d, \top \sqcap d = d$. Also, for all dataflow functions, fn($\top$) = $\top$. (When we consider the lattice model for dataflow analysis we will see that this initial value is the top element of the lattice.)

      • Step 1 (initialize n.afters):
        Set enter.after = init. Set all other n.after to $\top$.
      • Step 2 (initialize worklist):
        Initialize a worklist to contain all CFG nodes except enter and exit.
      • Step 3 (iterate):
        While the worklist is not empty:
          Remove a node n from the worklist.
          Compute n.before by combining all p.after such that p is a predecessor of n in the CFG.
          Compute tmp = fn ( n.before )
          If (tmp != n.after) then
            Set n.after = tmp
            Put all of n's successors on the worklist
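The three steps above can be run directly on the looping CFG from the previous section. The sketch below is illustrative, not the notes' own code: it assumes node 1 is x = 2, node 2 is y = x, and the two loop nodes are identity functions, matching the constant-propagation equations given earlier; facts are dicts, and a sentinel stands in for $\top$.

```python
from functools import reduce

# Forward worklist algorithm, instantiated for constant propagation
# on the looping CFG (enter -> 1 -> 2 -> 3 <-> 4).

TOP = None                                   # sentinel: TOP meet d = d

def meet(d1, d2):
    if d1 is TOP: return d2
    if d2 is TOP: return d1
    return {v: c for v, c in d1.items() if d2.get(v) == c}

preds = {1: ["enter"], 2: [1], 3: [2, 4], 4: [3]}
succs = {1: [2], 2: [3], 3: [4], 4: [3]}

def f1(d): return {**d, "x": 2}              # node 1: x = 2
def f2(d):                                   # node 2: y = x
    d = {v: c for v, c in d.items() if v != "y"}
    if "x" in d:
        d["y"] = d["x"]
    return d
identity = lambda d: dict(d)
fn = {1: f1, 2: f2, 3: identity, 4: identity}

def apply_fn(n, d):
    return TOP if d is TOP else fn[n](d)     # fn(TOP) = TOP

# Step 1 (initialize n.afters) and Step 2 (initialize worklist):
after = {"enter": {}, 1: TOP, 2: TOP, 3: TOP, 4: TOP}
worklist = [1, 2, 3, 4]

# Step 3 (iterate):
while worklist:
    n = worklist.pop(0)
    before = reduce(meet, (after[p] for p in preds[n]), TOP)
    tmp = apply_fn(n, before)
    if tmp != after[n]:
        after[n] = tmp
        worklist.extend(succs[n])

print(after[3])   # {'x': 2, 'y': 2} -- the greatest solution (solution 4)
```

Note that the algorithm converges to solution 4, the greatest solution, because every n.after starts at $\top$ and only moves down the lattice as needed.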

      Self-Study #2

      Run this iterative algorithm on the simple CFG given above (the one with a loop) to solve the constant propagation problem. Run the algorithm again on the example CFG from the examples section of the notes.

      The above algorithm works regardless of the order in which nodes are removed from the worklist. However, that order can affect the efficiency of the algorithm. A number of variations have been developed that involve different ways of choosing that order. When we consider the lattice model, we will revisit the question of complexity.

      The Lattice Model of Dataflow Analysis

      Recall that while we would like to compute the meet-over-all-paths (MOP) solution to a dataflow problem, direct computation of that solution (by computing and combining solutions for every path) is usually not possible. Therefore, dataflow problems are usually solved by finding a solution to a set of equations that define two dataflow facts (n.before and n.after) for each CFG node n.

      Three important questions are:

      1. How do we know that a solution to the equations exists?
      2. If there is more than one solution, which one do we want?
      3. How does the equation solution relate to MOP solution?

      The answers are provided by the framework first defined by Kildall. The next section provides background on lattices; the section after that presents Kildall's framework.

      Background

      Let us begin by considering some mathematical definitions that will be useful in our discussion of lattices (including the definition of "lattice" itself).

      Partially ordered sets

      Definition:

        A partially ordered set (poset) is a set S that has a partial ordering ⊆ , such that the ordering is:
        1. Reflexive: for all x in S, x ⊆ x
        2. Anti-Symmetric: for all x,y in S, x ⊆ y and y ⊆ x implies x = y
        3. Transitive: for all x,y,z in S, x ⊆ y and y ⊆ z implies x ⊆ z

        Note: "partial" means it is not necessary that for all x,y in S, either x ⊆ y or y ⊆ x. It is OK for a pair of set elements to be incomparable.

      Below are some examples of sets with orderings; some are partially ordered sets and some are not.

      Example 1: The set S is the set of English words, and the ordering ⊆ is substring (i.e., w1 ⊆ w2 iff w1 is a substring of w2). Here is a picture of some words and their ordering (having an edge w1 → w2 means w1 > w2).

                               candy                  then
                               /    \                /    \
                               v     v              v      v
                      annual  and   can            the   hen
                           \   |    /                \   /
                            \  |   /                  v v
                             v v  v                    he
                               an
                               |
                               v
                               a
      
      
      Note that the "substring" ordering does have the three properties required of a partial order:
      1. It is reflexive (since every word is a substring of itself).
      2. It is anti-symmetric (since if two words are substrings of each other, then they must be the same).
      3. It is transitive (since a substring of a substring of a word is also a substring of the word).

      Example 2: S is the set of English words, and the ordering ⊆ is "is shorter than or equal to in length".

                      candy
                        |
                        v
                 ____ then
                /    /  |  \
                v   v   v   v
               can and the hen
                 \  \   /   /
                  \  \ /   /
                   v v v   v
                     an
                      |
                      v
                      a
      
      Does this ordering have the three properties?
      1. Reflexive: yes.
      2. Anti-Symmetric: NO (Counter-example: "and" and "the" have the same length, but are not the same word.)
      3. Transitive: yes.
      Two out of three isn't good enough -- this is not a poset.
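The property checks for Examples 1 and 2 can be done mechanically over the finite word sets in the figures. A small sketch (the word list is taken from the diagrams above):

```python
from itertools import product

# Check the three poset properties for the two word orderings above.

words = ["a", "an", "and", "annual", "can", "candy", "he", "hen", "the", "then"]

def poset_properties(leq):
    reflexive = all(leq(x, x) for x in words)
    antisymmetric = all(x == y or not (leq(x, y) and leq(y, x))
                        for x, y in product(words, repeat=2))
    transitive = all(leq(x, z) or not (leq(x, y) and leq(y, z))
                     for x, y, z in product(words, repeat=3))
    return reflexive, antisymmetric, transitive

substring = lambda w1, w2: w1 in w2           # Example 1's ordering
shorter = lambda w1, w2: len(w1) <= len(w2)   # Example 2's ordering

print(poset_properties(substring))  # (True, True, True): a poset
print(poset_properties(shorter))    # (True, False, True): fails anti-symmetry
```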

      Example 3: S is the set of integers, and the ordering ⊆ is "less than or equal to". This is a poset (try verifying each of the three properties).

      Example 4: S is the set of integers and the ordering ⊆ is "strictly less than". This is not a poset, because the ordering is not reflexive.

      Example 5: S is the set of all sets of letters and the ordering is subset. This is a poset.

      Lattices

      Definition:

        A lattice is a poset in which every pair of elements has:
        1. a Least Upper Bound (the join of the two elements), and
        2. a Greatest Lower Bound (the meet of the two elements).
      The join of two elements x and y is defined to be the element z such that:
      1. x ⊆ z, and
      2. y ⊆ z, and
      3. for all w such that x ⊆ w and y ⊆ w, w ⊇ z.
      The first two rules say that z actually is an upper bound for x and y, while the third rule says that z is the least upper bound. Pictorially:
              z
             / \              z is the least upper bound of x and y
            v   v
            y   x 
      
              z  w
              |\/|
              |/\|             z is NOT the least upper bound of x and y
              vv vv            (they have NO least upper bound)
              y  x
      
      The idea for the meet operation is similar, with the reverse orderings.
      Examples:
      • Example 1 above (English words, ordered by "substring") is not a lattice. (For instance, "a" and "he" have no meet.)
      • Example 3 above (integers, ordered by "less than or equal to") is a lattice. The meet operator is "min" and the join operator is "max".
      • Example 5 above (sets of letters, ordered by subset) is a lattice. The meet operator is intersection and the join operator is union.
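For Example 5, the least-upper-bound property of union can be checked by brute force over a small alphabet. A sketch (the choice of {a..e} and of x and y is illustrative):

```python
from itertools import combinations

# In the sets-of-letters lattice, join = union. Check that union really
# is the *least* upper bound, over all subsets of {a..e}.

letters = "abcde"
subsets = [frozenset(c) for r in range(len(letters) + 1)
           for c in combinations(letters, r)]

x, y = frozenset("abc"), frozenset("bcd")
join = x | y

upper_bounds = [s for s in subsets if x <= s and y <= s]
print(join in upper_bounds)                   # True: join is an upper bound
print(all(join <= u for u in upper_bounds))   # True: and below every other one
```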

      Complete lattices

      Definition:
        A complete lattice is a lattice in which all subsets have a greatest lower bound and a least upper bound (the bounds must be in the lattice, but not necessarily in the subsets themselves).

      Note: Every finite lattice (i.e., S is finite) is complete.

      Examples:

      • Example 3 above (integers, ordered by "less than or equal to") is not a complete lattice. The set of all positive numbers is a subset of S; it has a lower bound but no upper bound.
        The set of all even numbers is a subset of S; it has neither an upper nor a lower bound.
      • The set: {1, 1/2, 1/4, 1/8, ... 0} with the usual numeric ordering is an example of an infinite complete lattice.

      Note: Every complete lattice has a greatest element, "Top" (written as a capital T) and a least element "Bottom" (written as an upside-down capital T). They are the least-upper and the greatest-lower bounds of the entire underlying set S.

      Monotonic and distributive functions

      Definition:
        A function f: L → L (where L is a lattice) is monotonic iff for all x,y in L: x ⊆ y implies f(x) ⊆ f(y).
      Definition:
        A function f: L → L (where L is a lattice) is distributive iff for all x,y in L: f(x meet y) = f(x) meet f(y).

      Every distributive function is also monotonic (proving that could be good practice!) but not vice versa. For the GEN/KILL dataflow problems, all dataflow functions are distributive. For constant propagation, all functions are monotonic, but not all functions are distributive. For example, the dataflow function f associated with this assignment

        x = a + b
      is not distributive. To see this, first recall that function f is defined as follows:
        f(S) = if S == T then T
        else if (a, v1) in S and (b, v2) in S then S - (x,*) + (x, v1+v2)
        else S - (x,*)
      i.e., if f is applied to the special "top" value, then the result is T; otherwise, if both a and b are constant, then x is set to have the value of a+b (and its old value, if any, is removed from the set of pairs); otherwise, x's old value, if any, is removed from the set of pairs.
    Now consider the following two sets:
      S1 = { (a,1) (b,2) } S2 = { (a,2) (b,1) }
    The meet of S1 and S2 is { }, so f(S1 meet S2) is { }. However, if we apply f to S1 and S2 separately and then take the meet, we get:
      f(S1) = {(a,1)(b,2)(x,3)}
      f(S2) = {(a,2)(b,1)(x,3)}
      f(S1) meet f(S2) = {(x,3)}
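This computation is easy to replay in code. A sketch (facts as dicts var -> const; meet keeps agreeing pairs; f models the assignment x = a + b, with the "top" case omitted since it does not arise here):

```python
# Demonstrate that the constant-propagation function for x = a + b
# is not distributive: f(S1 meet S2) != f(S1) meet f(S2).

def meet(d1, d2):
    return {v: c for v, c in d1.items() if d2.get(v) == c}

def f(d):
    d = {v: c for v, c in d.items() if v != "x"}   # remove old binding for x
    if "a" in d and "b" in d:
        d["x"] = d["a"] + d["b"]                   # add x = a + b if both constant
    return d

S1 = {"a": 1, "b": 2}
S2 = {"a": 2, "b": 1}

print(f(meet(S1, S2)))       # {}        -- applying f after the meet
print(meet(f(S1), f(S2)))    # {'x': 3}  -- the meet after applying f
```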

    Fixed points

    Definition:
      x is a fixed point of function f iff f(x) = x.

    Examples:

    Let L be the lattice whose elements are sets of letters (as above). Here are the fixed points for the functions we considered above:
    1. f(S) = S union {a}
      This function has many fixed points: all sets that contain 'a'.
    2. f(S) = if sizeof(S) ≤ 3 then S else empty-set
      The fixed points for this function are all sets of size less than or equal to 3.
    3. f(S) = S - {a}
      The fixed points for this function are all sets that do not contain 'a'.
    As an example of a function that has no fixed point, consider the lattice of integers with the usual "less than or equal to" ordering. The function: f(x) = x+1 has no fixed point.

    Here is an important theorem about lattices and monotonic functions:

    Theorem:

      If L is a complete lattice and f: L → L is monotonic, then:
      • f has at least one fixed point.
      • f's fixed points themselves form a (complete) lattice. (i.e. there exist greatest and least fixed points).
      • If L has no infinite descending chains, then the greatest fixed point of f can be found by iterating:
          f(T), f(f(T)), f(f(f(T))), ...
        until a fixed point is found (i.e., two successive values are the same). (Note that "T" means L's top element.)
      • Similarly, if L has no infinite ascending chains, then the least fixed point of f can be found by iterating
          f(bottom), f(f(bottom)) ...
        until a fixed point is found.

    Examples

    L is our usual lattice of sets of letters, with set union for the join.
    1. f(S) = S U {a}
      Recall that f is monotonic. The greatest fixed point of f is the set of all letters: {a, b, ..., z}. That is also the top element of the lattice, so we find the greatest fixed point in just one iteration:
        f(T) = T
      The bottom element is the empty set. If we apply f to that we get the set {a}. f({a}) = {a}, and we've found the least fixed point. Other fixed points are any set of letters that contains a.

    2. f(S) = if size(S) ≤ 3 then S else {}
      Recall that this function f is not monotonic. It has no greatest fixed point. It does have a least fixed point (the empty set). Other fixed points are sets of letters with size ≤ 3.
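The iteration described in the theorem can be run directly for example 1. A sketch, restricting the lattice to subsets of {a..e} so that top is finite:

```python
# Find the greatest and least fixed points of f(S) = S | {'a'} by
# iterating from the top and bottom of the subsets-of-{a..e} lattice.

TOP = frozenset("abcde")
BOTTOM = frozenset()

def f(S):
    return S | {"a"}

def iterate_to_fixed_point(start):
    cur = start
    while f(cur) != cur:       # stop when two successive values agree
        cur = f(cur)
    return cur

greatest = iterate_to_fixed_point(TOP)     # the whole alphabet, immediately
least = iterate_to_fixed_point(BOTTOM)     # {'a'}, one step up from bottom
print(greatest, least)
```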

    Creating new lattices from old ones

    We can create new lattices from old ones using cross-product: if L1, L2, ..., Ln are lattices, then so is the cross-product of L1, L2, ..., Ln (which we can write as: L1 x L2 x ... x Ln). The elements of the cross-product are tuples of the form:

      <e1, e2, ... , en>
    such that value ek belongs to lattice Lk

    The ordering is element-wise: <e1, e2, ..., en> ⊆ <e1', e2', ..., en'> iff:

      e1 ⊆ e1' and
      e2 ⊆ e2', and
      ..., and
      en ⊆ en'

    If L1, L2, ..., Ln are complete lattices, then so is their cross-product. The top element is the tuple that contains the top elements of the individual lattices: <top of L1, top of L2, ... , top of Ln>, and the bottom element is the tuple that contains the bottom elements of the individual lattices: <bottom of L1, bottom of L2, ... , bottom of Ln>.
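The element-wise ordering is straightforward to express in code. A sketch over a cross-product of two subset lattices (the particular tuples are illustrative):

```python
# Element-wise ordering on a cross-product lattice:
# <e1, e2> <= <e1', e2'> iff e1 <= e1' and e2 <= e2'.

def leq_tuple(t1, t2):
    return all(a <= b for a, b in zip(t1, t2))

t1 = (frozenset("a"), frozenset("bc"))
t2 = (frozenset("ab"), frozenset("bc"))
t3 = (frozenset("b"), frozenset("bc"))

print(leq_tuple(t1, t2))  # True: both components are ordered
print(leq_tuple(t1, t3))  # False: the first components are incomparable
```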

    Summary of lattice theory

    • poset: a partially ordered set, includes a set of elements S and a partial ordering ⊆
    • lattice: a poset with meet and join for every pair of elements
    • complete lattice: a lattice with meet and join for all subsets of S
    • monotonic functions: x ⊆ y implies f(x) ⊆ f(y)
    • fixed points: x is a fixed point of function f (of type L→L) iff x = f(x).
      If L is a complete lattice and f is monotonic, then f has a greatest and a least fixed point.
      If L has no infinite descending chains then we can compute the greatest fixed point of f via iteration ( f(T), f(f(T)) etc.)

    Kildall's Lattice Framework for Dataflow Analysis

    Recall that our informal definition of a dataflow problem included:

    • a domain D of "dataflow facts",
    • a dataflow fact "init" (the information true at the start of the program),
    • an operator $\sqcap$ (used to combine incoming information from multiple predecessors),
    • for each CFG node n, a dataflow function fn : D → D (that defines the effect of executing n)
    and that our goal is to solve a given instance of the problem by computing "before" and "after" sets for each node of the control-flow graph. A problem is that, with no additional information about the domain D, the operator $\sqcap$, and the dataflow functions fn, we can't say, in general, whether a particular algorithm for computing the before and after sets works correctly (e.g., does the algorithm always halt? does it compute the MOP solution? if not, how does the computed solution relate to the MOP solution?).

    Kildall addressed this issue by putting some additional requirements on D, $\sqcap$, and fn. In particular he required that:

    1. D be a complete lattice L such that for any instance of the dataflow problem, L has no infinite descending chains.
    2. $\sqcap$ be the lattice's meet operator.
    3. All fn be distributive.
    He also required (essentially) that the iterative algorithm initialize n.after (for all nodes n other than the enter node) to the lattice's "top" value. (Kildall's algorithm is slightly different from the iterative algorithm presented here, but computes the same result.)

    Given these properties, Kildall showed that:

    • The iterative algorithm always terminates.
    • The computed solution is the MOP solution.
    It is interesting to note that, while his theorems are correct, the example dataflow problem that he uses (constant propagation) does not satisfy his requirements; in particular, the dataflow functions for constant propagation are not distributive (though they are monotonic). This means that the solution computed by the iterative algorithm for constant propagation will not, in general, be the MOP solution. Below is an example to illustrate this:
             1: enter
                |
                v
             2: if (...)
              /        \
             v          v
         3: a = 2     4: a = 3
             |          |
             v          v
         5: b = 3     6: b = 2
              \        /
               v      v
             7: x = a + b
                  |
                  v
             8: print(x)
    
    The MOP solution for the final print statement includes the pair (x,5), since x is assigned the value 5 on both paths to that statement. However, the greatest solution to the set of equations for this program (the result computed using the iterative algorithm) finds that x is not constant at the print statement. This is because the equations require that n.before be the meet of m.after for all predecessors m; in particular, they require that the "before" set for node 7 (x = a + b) be empty, since the "after" sets of the two predecessors have (a,2), (b,3), and (a,3), (b,2), respectively, and the intersection of those two sets is empty. Given that value for 7.before, the equations require that 7.after (and 8.before) say that x is not constant. We can only discover that x is constant after node 7 if both a and b are constant before node 7.
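The gap between the two solutions can be computed directly for this example. A sketch (facts as dicts var -> const; meet keeps agreeing pairs; f7 models x = a + b as before):

```python
# The diamond example: MOP vs. the equation solution at node 7.

def meet(d1, d2):
    return {v: c for v, c in d1.items() if d2.get(v) == c}

def f7(d):                                       # node 7: x = a + b
    d = {v: c for v, c in d.items() if v != "x"}
    if "a" in d and "b" in d:
        d["x"] = d["a"] + d["b"]
    return d

left = {"a": 2, "b": 3}    # fact after nodes 3 and 5 (left path)
right = {"a": 3, "b": 2}   # fact after nodes 4 and 6 (right path)

mop_at_8 = meet(f7(left), f7(right))   # per-path facts, combined afterwards
eq_at_8 = f7(meet(left, right))        # combine first (7.before), then apply f7

print(mop_at_8)  # {'x': 5}: the MOP solution sees that x = 5
print(eq_at_8)   # {}: the equation solution has already lost a and b
```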

    In 1977, a paper by Kam and Ullman (Acta Informatica 7, 1977) extended Kildall's results to show that, given monotonic dataflow functions:

    • The solutions to the set of equations form a lattice.
    • The solution computed by the iterative algorithm is the greatest solution (using the lattice ordering).
    • If the functions are monotonic but not distributive, then the solution computed by the iterative algorithm may be less than the MOP solution (using the lattice ordering).

    To show that the iterative algorithm computes the greatest solution to the set of equations, we can "transform" the set of equations into a single, monotonic function L → L (for a complete lattice L) as follows:

    Consider the right-hand side of each equation to be a "mini-function". For example, for the two equations:

      n3.before = n1.after meet n2.after
      n3.after = f3( n3.before )
    The two mini-functions, g11 and g12 are:
      g11(a, b) = a meet b
      g12(a) = f3( a )

    Define the function that corresponds to all of the equations to be:

      f( <n1.before,n1.after,n2.before,n2.after ...> ) = <g11(..),g12(...),g21(..), g22(...),...>
    Where the (...)s are replaced with the appropriate arguments to those mini-functions. In other words, function f takes one argument that is a tuple of values. It returns a tuple of values, too. The returned tuple is computed by applying the mini-functions associated with each of the dataflow equations to the appropriate inputs (which are part of the tuple of values that is the argument to function f).

    Note that every fixed point of f is a solution to the set of equations! We want the greatest solution (i.e., the greatest fixed point). To guarantee that this solution exists we need to know that:

    1. the type of f is L→L, where L is a complete lattice
    2. f is monotonic

    To show (1), note that each individual value in the tuple is an element of a complete lattice. (That is required by Kildall's framework.) So since cross product (tupling) preserves completeness, the tuple itself is an element of a complete lattice.

    To show (2), note that the mini-functions that define each n.after value are monotonic (since those are the dataflow functions, and we've required that they be monotonic). It is easy to show that the mini-functions that define each n.before value are monotonic, too.

    For a node n with k predecessors, the equation is:

      n.before = m1.after meet m2.after meet ... meet mk.after
    and the corresponding mini-function is:
      f(a1, a2, ..., ak) = a1 meet a2 meet ... meet ak
    We can prove that these mini-functions are monotonic by induction on k.

    base case k=1

      f(a1) = a1

      We must show that given: a ⊆ a', f(a) ⊆ f(a').

      For this f, f(a) = a, and f(a') = a', so this f is monotonic.

    base case k=2

      f(a1, a2) = a1 meet a2

      We must show that given: a1 ⊆ a1' and a2 ⊆ a2', f(a1, a2) ⊆ f(a1', a2').

      • by the definition of meet we know that:
          (a1 meet a2) ⊆ a1 , and
          (a1 meet a2) ⊆ a2
      • by the transitivity of ⊆ we know that:
          (a1 meet a2) ⊆ a1 ⊆ a1' implies (a1 meet a2) ⊆ a1', and
          (a1 meet a2) ⊆ a2 ⊆ a2' implies (a1 meet a2) ⊆ a2'
      • so (a1 meet a2) is a lower bound for both a1' and for a2'
      • since (a1' meet a2') is (by definition) the greatest lower bound of a1' and a2', it must be that all other lower bounds are less; in particular: (a1 meet a2) must be ⊆ (a1' meet a2').

    Induction Step

    Assume that for all k < n:

      a1 ⊆ a1' and a2 ⊆ a2' and ... and an-1 ⊆ an-1' implies
      f( <a1, ..., an-1> ) ⊆ f( <a1', ..., an-1'> )
    Now we must show the same thing for k = n:
    • f( <a1, a2, ..., an >) = a1 meet a2 meet ... meet an = f( <a1, a2, ..., an-1 >) meet an
    • let x = f( <a1, ..., an-1> )
    • let x' = f( <a1', ..., an-1'> )
    • the induction hypothesis says: x ⊆ x'
      we need to show: x meet an ⊆ x' meet an'
    • this is true by the base case k=2!
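The k = 2 base case can also be confirmed by brute force on a small finite lattice. A sketch over the subsets of {a, b, c}, where meet is intersection:

```python
from itertools import combinations

# Exhaustive check of the k = 2 base case: if a1 <= b1 and a2 <= b2,
# then (a1 meet a2) <= (b1 meet b2), over all subsets of {a, b, c}.

subsets = [frozenset(c) for r in range(4) for c in combinations("abc", r)]

ok = all((a1 & a2) <= (b1 & b2)
         for a1 in subsets for b1 in subsets if a1 <= b1
         for a2 in subsets for b2 in subsets if a2 <= b2)
print(ok)  # True
```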

    Given that all the mini-functions are monotonic, it is easy to show that f (the function that works on the tuples that represent the nodes' before and after sets) is monotonic; i.e., given two tuples:

      t1 = <e1, e2, ..., en>, and
      t2 = <e1', e2', ..., en'>
    such that: t1 ⊆ t2, we must show f(t1) ⊆ f(t2). Recall that, for a cross-product lattice, the ordering is element-wise; thus, t1 ⊆ t2 means: ek ⊆ ek', for all k. We know that all of the mini-functions g are monotonic, so for all k, gk(ek) ⊆ gk(ek'). But since the ordering is element-wise, this is exactly what it means for f to be monotonic!

    We now know:

    • f is a monotonic function of type L→L
    • L is a complete lattice with no infinite descending chains

    Therefore:

    • f has a greatest fixed point
    • It can be computed using f(T), f(f(T)), ...
    This is not quite what the iterative algorithm does, but it is not hard to see that it is equivalent to one that does just this: initialize all n.before and n.after to top, then on each iteration, compute all of the "mini-functions" (i.e., recompute n.before and n.after for all nodes) simultaneously, terminating when there is no change. The actual iterative algorithm presented here is an optimization in that it only recomputes n.before and n.after for a node n when the "after" value of some predecessor has changed.

    SUMMARY

    Given:
    • A CFG,
    • a domain of "dataflow facts" (a complete lattice L),
    • a monotonic dataflow function for each CFG node, and
    • an initial dataflow fact (an element of L)
    the goal of dataflow analysis is to compute a "dataflow fact" (an element of L) for each CFG node. Ideally, we want the MOP (meet over all paths) solution, in which the fact at node n is the combination of the facts induced by all paths to n. However, for CFGs with cycles, it is not possible to compute this solution directly.

    Another approach to solving a dataflow problem is to solve a system of equations that relates the dataflow facts that hold before each node to the facts that hold after the node.

    Kildall showed that if the dataflow functions are distributive, then the (original version of the) iterative algorithm always terminates, and always finds the MOP solution. Kam and Ullman later showed that if the dataflow functions are monotonic then the iterative algorithm always finds the greatest solution to the set of equations. They also showed that if the functions are monotonic but not distributive, then that solution is not always the same as the MOP solution. It is also true that the greatest solution to the system of equations is always an approximation to the MOP solution (i.e., may be lower in the lattice of solutions).