Research Themes and Contributions
I provide below a summary of my primary research themes and contributions, listed in roughly chronological order. When I do not explicitly use the first-person pronoun, I am summarizing work done with several colleagues; follow the links to find references to those involved.
My research has focused on applying proof theory concepts to various computer science problems. Such applications have usually been synergistic: new proof-theoretic results have been developed to address various demands of computing and reasoning. Building computer systems exploiting these new results has led to new proof theory problems to address.
A proof structure for classical logic: In my Ph.D. thesis, I defined a generalization of Herbrand disjunctions for higher-order classical logic. That generalization, called expansion trees and expansion proofs, can be viewed both as a generalization of Herbrand disjunctions and as a parallel proof structure for classical logic.
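To recall the classical notion being generalized, here is a standard (textbook) Herbrand-disjunction example, not specific to the thesis:

```latex
% The valid formula  \exists x.\, (p(x) \supset p(f(x)))  has no single
% witnessing instance, but the two instances  x := c  and  x := f(c)
% together yield a tautologous (Herbrand) disjunction:
\[
  \bigl(p(c) \supset p(f(c))\bigr) \;\vee\; \bigl(p(f(c)) \supset p(f(f(c)))\bigr)
\]
% An expansion tree records exactly this kind of instance information,
% extended to arbitrary (including higher-order) formulas, in a single
% parallel proof structure.
```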
Unification: I have worked on the following aspects of the unification of simply typed λ-terms (also known as higher-order unification).
• The soundness of skolemization (used to simplify quantifier alternation) and unification within higher-order logic.
• The treatment of unification under a mixed prefix (i.e., without skolemization).
• The design and analysis of a subset of higher-order unification, now called higher-order pattern unification, that appears to be the weakest extension of first-order unification that respects α-, β-, and η-conversion.
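The decidable core of the pattern fragment can be sketched in a few lines. The code below is a hypothetical toy illustration (its term constructors and function names are mine, not from any cited system): a metavariable applied to distinct bound variables, equated with a term, has at most one unifier, obtained by abstracting those variables.

```python
from dataclasses import dataclass

# Minimal term language (illustrative names, not a library's API).
@dataclass(frozen=True)
class Var:            # an object-level (bound) variable
    name: str

@dataclass(frozen=True)
class Const:          # a constant, such as g
    name: str

@dataclass(frozen=True)
class Meta:           # a unification (meta) variable, such as F
    name: str

@dataclass(frozen=True)
class App:            # application of a head to arguments
    fn: object
    args: tuple

@dataclass(frozen=True)
class Lam:            # lambda abstraction
    var: str
    body: object

def free_vars(t):
    """Object-level variables occurring free in t."""
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, (Const, Meta)):
        return set()
    if isinstance(t, App):
        s = free_vars(t.fn)
        for a in t.args:
            s |= free_vars(a)
        return s
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.var}

def solve_flex_rigid(lhs, rhs):
    """Solve  F x1 ... xn = rhs  in the pattern fragment: the arguments
    x1..xn must be distinct bound variables, and rhs may mention no other
    object-level variables.  The unique solution is F := λx1...λxn. rhs;
    outside the fragment (or on failure) return None."""
    if not (isinstance(lhs, App) and isinstance(lhs.fn, Meta)):
        return None
    names = [a.name for a in lhs.args if isinstance(a, Var)]
    if len(names) != len(lhs.args) or len(set(names)) != len(names):
        return None                    # not a pattern: args not distinct variables
    if not free_vars(rhs) <= set(names):
        return None                    # rhs escapes the scope of x1..xn
    body = rhs
    for x in reversed(names):
        body = Lam(x, body)
    return (lhs.fn.name, body)

# F x y = g x   has the unique solution   F := λx. λy. g x
solution = solve_flex_rigid(
    App(Meta('F'), (Var('x'), Var('y'))),
    App(Const('g'), (Var('x'),)))
print(solution)
```

The point of the restriction is visible in the code: because the arguments are distinct bound variables, the solution is forced, so no search or branching is needed, unlike full higher-order unification.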
Logic programming: Starting in about 1985, I worked with several students and colleagues on a decade-long effort to use proof theory as a foundation for logic programming languages. The following resulted from that work. (See also my 2020 paper A Survey of the Proof-Theoretic Foundations of Logic Programming.)
A view of logic programming as proof-search and the use of uniform proofs to describe both goal-directed search and backchaining. This work provided an early example of a comprehensive, focused proof system.
A formal treatment of higher-order quantification (at predicate and non-predicate types) in logic programming.
The description of logic programming within (higher-order) intuitionistic logic that allowed logic programming to embrace such abstraction mechanisms as modules, abstract datatypes, bound variables, and higher-order programming.
The design of two linear logic programming languages. Lolli extended λProlog by adding linear implication while Forum extended Lolli by adding the multiplicative disjunction and multiplicative false. These linear logic programming languages greatly extended the expressiveness of Prolog and λProlog.
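The goal-directed reading of logic programming can be sketched for a small propositional fragment of hereditary Harrop formulas. The interpreter below is a hypothetical toy of my own devising (it is not λProlog): right-hand rules decompose the goal first, atoms are proved by backchaining over program clauses, and an implicational goal temporarily grows the program, which is the logical basis for modules and local assumptions.

```python
# Goals: ('true',), ('and', g1, g2), ('or', g1, g2),
#        ('imp', clause, g)  -- assume clause while proving g,
#        ('atom', a)         -- proved by backchaining.
# A program is a list of clauses (head_atom, body_goal).

def prove(prog, goal):
    """Uniform (goal-directed) proof search, propositional case."""
    tag = goal[0]
    if tag == 'true':
        return True
    if tag == 'and':
        return prove(prog, goal[1]) and prove(prog, goal[2])
    if tag == 'or':
        return prove(prog, goal[1]) or prove(prog, goal[2])
    if tag == 'imp':                      # hypothetical goal: extend the program
        return prove(prog + [goal[1]], goal[2])
    if tag == 'atom':                     # backchain on clauses with this head
        return any(head == goal[1] and prove(prog, body)
                   for head, body in prog)
    raise ValueError(goal)

program = [('p', ('true',)),              # p.
           ('q', ('atom', 'p'))]          # q :- p.

# q holds by backchaining, and r holds under the local assumption r.
print(prove(program, ('and', ('atom', 'q'),
                      ('imp', ('r', ('true',)), ('atom', 'r')))))
```

Every step either reduces the goal or backchains on an atom; no left-hand rule is ever applied to the program except through backchaining, which is exactly the discipline that uniform proofs impose on the sequent calculus.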
λ-tree syntax: λProlog was the first programming language to provide a data structure with language support for bindings modulo α-conversion, scoping, and substitution. The term higher-order abstract syntax is often used to describe this approach to syntactic representation in λProlog. Unfortunately, that term has grown to encompass the modeling of bindings as functions within functional programs, an approach that yields a limited and semantically questionable treatment of abstract syntax. As a result, I introduced the term λ-tree syntax to denote the specific treatment of bindings available in systems such as λProlog, Isabelle/Pure, and Twelf. A key feature of λ-tree syntax is its support for the mobility of binders: bound variables are never replaced by free variables but rather by other bound variables. An extensive set of examples using λ-tree syntax in specifications and programs is available.
Linear logic as a logical framework: Linear logic can be used to specify a range of proof systems for classical, intuitionistic, and linear logics. Such specifications come with easily checked necessary conditions for determining whether the specified proof systems satisfy cut elimination. This framework has been extended with subexponentials and has been implemented.
Operational semantics: I have often turned to the specification of operational semantics as a domain in which to apply logic programming and relational specifications. In particular, λ-tree syntax can make such specifications straightforward and elegant, and both small-step and big-step specifications lend themselves immediately to logic programming. Linear logic can also be used to great effect in the specification of big-step operational semantics. I have written an overview article on this topic.
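As a small illustration of the big-step style (an assumed toy example of mine, not drawn from the overview article), each inference rule of an evaluation relation e ⇓ v transcribes directly into one clause of a relational specification; rendered functionally in Python, one clause becomes one branch:

```python
# Big-step rules for a tiny expression language, one branch per rule:
#   -------              e1 ⇓ n1    e2 ⇓ n2
#   n ⇓ n                --------------------
#                        e1 + e2 ⇓ n1 + n2
def evaluate(e):
    """Big-step evaluation: returns the value v such that e ⇓ v."""
    tag = e[0]
    if tag == 'num':                     # n ⇓ n
        return e[1]
    if tag == 'plus':                    # e1 ⇓ n1, e2 ⇓ n2  ⟹  e1+e2 ⇓ n1+n2
        return evaluate(e[1]) + evaluate(e[2])
    if tag == 'ifz':                     # branch on whether the test ⇓ 0
        return evaluate(e[2]) if evaluate(e[1]) == 0 else evaluate(e[3])
    raise ValueError(e)

# ifz (1 + -1) then 42 else 0   ⇓   42
print(evaluate(('ifz', ('plus', ('num', 1), ('num', -1)),
                ('num', 42), ('num', 0))))
```

In a logic programming rendering, the same rules would define a binary relation rather than a function, and λ-tree syntax would handle any binding constructs (such as let or λ) that this toy language omits.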
Proof theory for fixed points: Logic programming, at least as developed above, is limited to establishing what are essentially reachability judgments. One can go beyond such judgments by embracing forms of induction. Since induction is generally used to encode properties of first-order term structures, Baelde and I have studied the proof theory of the triple consisting of first-order quantification, first-order equality, and induction and co-induction. The resulting proof theory has made it possible to establish a logical duality between finite success and finite failure, as well as a proof-theoretic treatment of simulation and bisimulation. This work provided the basis of the model checking system Bedwyr. I have also argued that the proof theory of MALL (multiplicative-additive linear logic) extended with fixed points provides a basis for model checking.
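The model-checking connection can be made concrete with a least-fixed-point computation (a generic illustration of fixed-point iteration, not Bedwyr's actual algorithm): the set of reachable states is the smallest set containing the start state and closed under the transition relation, computed by iterating until nothing new is added.

```python
def reachable(start, step):
    """Least fixed point: the smallest superset of {start} closed under
    step, where step(s) returns the set of successors of state s."""
    seen = {start}
    frontier = {start}
    while frontier:                       # iterate until no new states appear
        frontier = {t for s in frontier for t in step(s)} - seen
        seen |= frontier
    return seen

# States 0..9 with transitions s -> (s+2) mod 10:
# exactly the even states are reachable from 0.
print(sorted(reachable(0, lambda s: {(s + 2) % 10})))
```

Finite failure has an equally concrete reading here: a state is provably unreachable when the iteration terminates without ever producing it, which is the kind of judgment the duality above internalizes proof-theoretically.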
∇-quantification: When extending the proof theory for fixed points and first-order quantification to capture λ-tree syntax, Tiu and I introduced the ∇-quantifier to denote generic quantification: the presence of negation-as-failure forces a separation of generic from universal quantification. Such an extension has made it possible to perform model checking on π-calculus expressions directly. The ∇-quantifier is built into the Abella theorem prover and the Bedwyr model checker.
Formalized meta-theory: One reason to design and use logic programming languages based on rich logical principles is to support broad avenues for formally reasoning about what they specify. The first tool that does such formalized reasoning about λProlog is the Abella theorem prover. This system employs a two-level logic approach to specifications and reasoning. Some of the most elegant formal treatments of meta-theory for the λ-calculus and π-calculus can be found in the Abella repository.
Focused proof systems: From our work in the 1980s, it was clear that the sequent calculus, as defined by Gentzen, was too chaotic to apply directly to many computer science and proof theory problems. Some of our early work used uniform proofs to give more structure to sequent calculus proofs. With the introduction of linear logic, Andreoli showed how uniform proofs could be replaced by the more flexible notion of focused proof systems. I have worked with Liang to develop the LJF and LKF proof systems, which are comprehensive focused versions of Gentzen's LJ and LK calculi. Furthermore, multi-focused proofs can be used to capture parallelism in proofs.
Foundational proof certificates: Focused proofs are organized into two phases: one phase applies only invertible inference rules, while the other applies a series of (possibly non-invertible) inference rules, making the choices that eventually lead to a completed proof. This structure allows focused proofs to serve as protocols between a proof checker and a proof certificate. Based on this notion of protocol, my ERC Advanced Grant ProofCert has funded the development of the foundational proof certificate approach to defining the semantics of a wide range of proof evidence for classical and intuitionistic logics.