Research Themes and Contributions

I provide a summary of my major research themes and contributions. These are listed in roughly chronological order. When I do not explicitly use the first-person singular, I am summarizing work done with a number of colleagues; follow the links to find references to the others involved.

To describe my research in the broadest strokes: I have applied concepts from proof theory to various problems in computer science. Such applications have usually been synergistic: new results within proof theory have been developed to address the demands of computing, and new computer systems and tools have been designed and built by exploiting our new understanding of proofs.

A proof structure for classical logic: In my PhD thesis, I defined a generalization of Herbrand disjunctions for higher-order classical logic. That generalization, called expansion trees and expansion proofs, can be thought of as a parallel proof structure for classical logic in a style similar to proof nets for linear logic.
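
To recall the first-order situation that expansion trees generalize (a standard textbook illustration, not drawn from the thesis itself): Herbrand's theorem states that a formula ∃x.B, with B quantifier-free, is valid exactly when some finite disjunction of instances of B is tautologous. For example:

    ∃x.(p(x) ⊃ p(f(x)))   is valid, witnessed by the tautology   (p(c) ⊃ p(f(c))) ∨ (p(f(c)) ⊃ p(f(f(c)))).

An expansion tree attaches such substitution terms (here c and f(c)) directly to quantifier occurrences, and it does so also for higher-order quantifiers, to which Herbrand's theorem does not directly extend.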

Unification: I have worked on the following aspects of the unification of simply typed λ-terms (a.k.a. higher-order unification).
• The soundness of skolemization (as a means to simplify quantifier alternation) and unification within higher-order logic.
• The treatment of unification under a mixed prefix (i.e., without skolemization).
• The design and analysis of a subset of higher-order unification, now called higher-order pattern unification, that appears to be the weakest extension of first-order unification that respects α-, β-, and η-conversion. (A small example follows this list.)
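
To illustrate the pattern restriction (a standard illustration, with F a unification variable, a a constant, and x and y bound variables within the scope of F): a problem is in the pattern fragment when every unification variable is applied only to distinct bound variables.

    F x y = g y x        lies in the pattern fragment (F is applied to the distinct bound variables x and y)
                         and has the unique most general unifier   F := λx.λy. g y x.
    F (g a) = g (g a)    lies outside the fragment (F is applied to the non-variable term g a) and has the
                         incomparable unifiers   F := λw. g w   and   F := λw. g (g a).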

Logic programming: Starting in about 1985, I worked with several students and colleagues on a decade-long effort to use proof theory as a foundation for logic programming languages. The following results came out of that work.

λ-tree syntax: λProlog was the first programming language to provide a data structure with language-level support for bindings modulo α-conversion, scoping, and substitution. The term higher-order abstract syntax is often used to describe this approach to syntactic representation in λProlog. Unfortunately, that term has grown to encompass the modeling of bindings as functions within functional programs, an approach that yields a limited and semantically questionable treatment of abstract syntax. As a result, I introduced the term λ-tree syntax to denote the specific treatment of bindings that is available in systems such as λProlog, Isabelle/Pure, and Twelf. An extensive set of examples using λ-tree syntax in specifications and programs is available.
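
For example, the untyped λ-calculus can be given a λ-tree syntax in λProlog as follows (a minimal sketch in standard λProlog syntax; the constructor names tm, app, and abs are conventional rather than fixed by the language):

    kind tm   type.                  % object-level untyped λ-terms
    type app  tm -> tm -> tm.        % application
    type abs  (tm -> tm) -> tm.      % abstraction: its argument is a meta-level binding

    % The self-application λx. x x is written (abs x\ app x x).  α-conversion,
    % scoping, and capture-avoiding substitution come for free from the meta
    % level: if T = (abs R), then (R N) is the body of T with N substituted
    % for the bound variable.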

Operational semantics: I have often turned to the specification of operational semantics as a domain in which to apply logic programming and relational specifications. In particular, λ-tree syntax can make such specifications particularly direct and elegant, and both small-step and big-step styles of specification lend themselves immediately to logic programming. Linear logic can also be used to great effect in the specification of big-step operational semantics. I have written an overview article on this topic.
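
For instance, big-step, call-by-name evaluation of the untyped λ-terms encoded in the previous sketch is the following two-clause λProlog program (the declarations are repeated so that the sketch is self-contained; the predicate name eval is just a convention):

    kind tm   type.                  % as in the previous sketch
    type app  tm -> tm -> tm.
    type abs  (tm -> tm) -> tm.

    type eval tm -> tm -> o.         % eval Term Value

    eval (abs R) (abs R).                               % abstractions are values
    eval (app M N) V :- eval M (abs R), eval (R N) V.   % substitution is meta-level application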

Proof theory for fixed points: Logic programming, at least as developed above, is essentially limited to reachability judgments. One can go beyond such judgments by embracing forms of induction. Since induction is generally used to encode properties of first-order term structures, we have studied the proof theory of the combination of first-order quantification, first-order equality, and induction and co-induction. The resulting proof theory has allowed us to establish a logical duality between finite success and finite failure, as well as a proof-theoretic treatment of simulation and bisimulation. This work provided the basis of the model checking system Bedwyr.
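
To give the flavor of such specifications (a sketch in informal notation, not drawn from any particular paper, writing step P A P' for a labeled transition from P to P' with action A): reachability is naturally a least fixed point, while simulation is a greatest fixed point.

    sim P Q  ≜  ∀A ∀P'. step P A P' ⊃ ∃Q'. step Q A Q' ∧ sim P' Q'     (greatest fixed point)

The induction and co-induction rules of the proof system then support reasoning about such definitions; bisimulation is treated similarly by adding the symmetric clause.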

∇-quantification: When extending the proof theory for fixed points and first-order quantification to allow for a treatment of λ-tree syntax, we introduced the ∇-quantifier to denote generic quantification, since the presence of negation-as-failure forces a separation of generic from universal quantification. Such an extension has made it possible to perform model checking directly on π-calculus expressions.
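
A standard example of the distinction between the two quantifiers (in the logic extended with term equality):

    ⊢ ∇x ∇y. x ≠ y        is provable: generically chosen names are distinct;
    ⊬ ∀x ∀y. x ≠ y        is not provable: the two universally quantified variables may be instantiated by the same term.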

Formalized meta-theory: One reason to design and use logic programming languages based on rich logical principles is that they should support broad avenues for formally reasoning about what they specify. The first tool to support such formalized reasoning about λProlog specifications is the Abella theorem prover. This system employs a two-level logic approach to specification and reasoning. Some of the most elegant formal treatments of the meta-theory of the λ-calculus and the π-calculus can be found in the Abella repository.

Focused proof systems: From our work in the 1980s, it was clear that the sequent calculus, as defined by Gentzen, was too chaotic a setting for direct use in many computer science and proof-theoretic applications. That early work used uniform proofs to give more structure to sequent calculus proofs. With the introduction of linear logic, uniform proofs could be replaced by the more flexible notion of focused proof systems. Such proof systems are now available for classical and intuitionistic logics. In particular, the LJF and LKF proof systems are focused versions of Gentzen's LJ and LK calculi. Furthermore, multifocused proofs can be used to capture parallelism in proofs.
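
In LKF, for example, every propositional connective comes in a positive and a negative version, and the two phases of a focused proof are determined by these polarities (a brief summary only; polarity choices do not affect provability, only the shape of proofs):

    negative connectives (introduced by invertible rules):     ∨⁻, ∧⁻, ∀, and the negative units;
    positive connectives (introduced under focus, where
    choices such as a disjunct or a witness term are made):    ∨⁺, ∧⁺, ∃, and the positive units.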

Foundational proof certificates: Focused proofs are organized into two kinds of phases. One phase consists of applications of invertible inference rules. The other phase consists of a series of (possibly non-invertible) inference rules; it is in this latter phase that the choices are made that eventually lead to a completed proof. The structure of focused proofs allows them to be used as protocols between a proof checker and a proof certificate. My ERC Advanced Grant ProofCert has developed the foundational proof certificate approach to defining the proof semantics of a wide range of forms of proof evidence for classical and intuitionistic logics.
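
The flavor of this protocol can be suggested by a purely illustrative fragment of a proof-checking kernel written in λProlog: before a non-invertible rule is applied, the kernel consults an "expert" predicate whose definition is supplied by a particular certificate format. None of the names below belong to an actual ProofCert kernel; they are hypothetical.

    kind form, cert, choice   type.  % formulas, certificate terms, disjunct selectors
    type or                   form -> form -> form.
    type left, right          choice.

    type check      cert -> form -> o.             % hypothetical kernel predicate
    type or_expert  cert -> cert -> choice -> o.   % supplied by each certificate format

    % Before the non-invertible disjunction rule, the kernel asks the expert
    % which disjunct to pursue and how to continue the certificate.
    check Cert (or A B) :- or_expert Cert Cert' left,  check Cert' A.
    check Cert (or A B) :- or_expert Cert Cert' right, check Cert' B.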