Archive for the 'Coq' Category

Hybrid: A Definitional Two-Level Approach to Reasoning with Higher-Order Abstract Syntax

By Amy Felty and Alberto Momigliano, accepted in the Journal of Automated Reasoning 2010, available from Amy Felty’s website:

Combining higher-order abstract syntax and (co)-induction in a logical framework is well known to be problematic. We describe the theory and the practice of a tool called Hybrid, within Isabelle/HOL and Coq, which aims to address many of these difficulties. It allows object logics to be represented using higher-order abstract syntax, and reasoned about using tactical theorem proving and principles of (co)induction. Moreover, it is definitional, which guarantees consistency within a classical type theory. The idea is to have a de Bruijn representation of λ-terms providing a definitional layer that allows the user to represent object languages using higher-order abstract syntax, while offering tools for reasoning about them at the higher level.

In this paper we describe how to use Hybrid in a multi-level reasoning fashion, similar in spirit to other systems such as Twelf and Abella. By explicitly referencing provability in a middle layer called a specification logic, we solve the problem of reasoning by (co)induction in the presence of non-stratifiable hypothetical judgments, which allow very elegant and succinct specifications of object logic inference rules.

We first demonstrate the method on a simple example, formally proving type soundness (subject reduction) for a fragment of a pure functional language, using a minimal intuitionistic logic as the specification logic. We then prove an analogous result for a continuation-machine presentation of the operational semantics of the same language, encoded this time in an ordered linear logic that serves as the specification layer. This example demonstrates the ease with which we can incorporate new specification logics, and also illustrates a significantly more complex object logic whose encoding is elegantly expressed using features of the new specification logic.
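
To give a rough picture of the two-level idea: the definitional base layer is an ordinary first-order (de Bruijn) datatype, on top of which the higher-order interface is built. The sketch below only approximates Hybrid's constructor names and shows the base layer; the construction of the higher-order lambda operator, which is the technical heart of the paper, is only indicated in a comment.

    (* A minimal sketch of a Hybrid-style definitional base layer; the
       names approximate the paper's and nothing here is the tool's
       actual code. *)
    Section HybridSketch.
      Variable con : Set.  (* constants of the object logic, supplied by the user *)

      Inductive expr : Set :=
      | CON : con -> expr           (* object-logic constant *)
      | VAR : nat -> expr           (* free variable *)
      | BND : nat -> expr           (* bound variable, as a de Bruijn index *)
      | APP : expr -> expr -> expr  (* application *)
      | ABS : expr -> expr.         (* abstraction, binding index 0 of its body *)

      (* On top of this layer, Hybrid defines an operator of type
         (expr -> expr) -> expr that turns a Coq function into an ABS
         node, so object-level binders are written in higher-order
         style while the underlying terms remain first-order.  That
         construction, and the specification logics layered above it,
         are the subject of the paper and are not reproduced here. *)
    End HybridSketch.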

LNgen: Tool Support for Locally Nameless Representations

By Brian Aydemir and Stephanie Weirich, preprint, available from Brian Aydemir’s website:

Given the complexity of the metatheoretic reasoning about current programming languages and their type systems, techniques for mechanical formalization and checking of such metatheory have received much recent attention. In previous work, we advocated a combination of locally nameless representation and cofinite quantification as a lightweight style for carrying out such formalizations in the Coq proof assistant. As part of the presentation of that methodology, we described a number of operations associated with variable binding and listed a number of properties, called “infrastructure lemmas”, about those operations that needed to be shown. The proofs of these infrastructure lemmas are straightforward but tedious.
In this work, we present LNgen, a prototype tool for automatically generating statements and proofs of infrastructure lemmas from Ott language specifications. Furthermore, the tool also generates a recursion scheme for defining functions over syntax, which was not available in our previous work. LNgen works in concert with Ott to effectively alleviate much of the tedium of working with locally nameless syntax. For the case of untyped lambda terms, we show that the combined output from the two tools is sound and complete, with LNgen automatically proving many of the key lemmas. We prove the soundness of our representation with respect to a fully concrete representation, and we argue that the representation is complete—that we generate the right set of lemmas—with respect to Gordon and Melham’s “Five Axioms of Alpha-Conversion.”
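
For readers unfamiliar with the style, here is a minimal locally nameless sketch for untyped lambda terms (hypothetical definitions, not LNgen's generated output), showing the kind of operations and cofinitely quantified judgments whose supporting lemmas LNgen produces automatically.

    Require Import Arith List.

    Definition atom := nat.  (* names of free variables *)

    Inductive exp : Set :=
    | bvar : nat  -> exp            (* bound variable: de Bruijn index *)
    | fvar : atom -> exp            (* free variable: a name *)
    | abs  : exp -> exp
    | app  : exp -> exp -> exp.

    (* Replace bound index k in e by u; "opening" a binder's body uses k = 0. *)
    Fixpoint open_rec (k : nat) (u e : exp) : exp :=
      match e with
      | bvar i    => if Nat.eqb k i then u else bvar i
      | fvar x    => fvar x
      | abs e1    => abs (open_rec (S k) u e1)
      | app e1 e2 => app (open_rec k u e1) (open_rec k u e2)
      end.

    Definition open (e u : exp) : exp := open_rec 0 u e.

    (* Local closure, stated with cofinite quantification: the premise of
       lc_abs holds for every name outside some finite list L. *)
    Inductive lc : exp -> Prop :=
    | lc_fvar : forall x, lc (fvar x)
    | lc_abs  : forall (L : list atom) e,
        (forall x, ~ In x L -> lc (open e (fvar x))) ->
        lc (abs e)
    | lc_app  : forall e1 e2, lc e1 -> lc e2 -> lc (app e1 e2).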

Associated Coq code can be found here.

Focusing on binding and computation

By Daniel R. Licata, Noam Zeilberger and Robert Harper, from LICS 2008, available from Robert Harper’s website:

Variable binding is a prevalent feature of the syntax and proof theory of many logical systems. In this paper, we define a programming language that provides intrinsic support for both representing and computing with binding. This language is extracted as the Curry-Howard interpretation of a focused sequent calculus with two kinds of implication, of opposite polarity. The representational arrow extends systems of definitional reflection with the notion of a scoped inference rule, which permits the adequate representation of binding via higher-order abstract syntax. On the other hand, the usual computational arrow classifies recursive functions over such higher-order data. Unlike many previous approaches, both binding and computation can mix freely.

Following Zeilberger [POPL 2008], the computational function space admits a form of open-endedness, in that it is represented by an abstract map from patterns to expressions. As we demonstrate with Coq and Agda implementations, this has the important practical benefit that we can reuse the pattern coverage checking of these tools.
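
The following is only a loose, hypothetical illustration of that last point, not of the paper's focused calculus: if object-level case analysis is represented as a Coq function from the patterns of the scrutinee to result expressions, then Coq's own exhaustiveness checking of matches performs the coverage check.

    Inductive obool : Set := OTrue | OFalse.     (* object-level booleans *)

    Inductive oexp : Set :=
    | OBool : obool -> oexp
    | OCase : oexp -> (obool -> oexp) -> oexp.   (* branches as a meta-level map *)

    (* Writing the branch map as an ordinary Coq function means any
       missing pattern is caught by Coq's coverage checking. *)
    Definition onot (e : oexp) : oexp :=
      OCase e (fun b => match b with
                        | OTrue  => OBool OFalse
                        | OFalse => OBool OTrue
                        end).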

A verified compiler for an impure functional language

By Adam Chlipala, from POPL 2010, available from Adam Chlipala’s website:

We present a verified compiler to an idealized assembly language from a small, untyped functional language with mutable references and exceptions. The compiler is programmed in the Coq proof assistant and has a proof of total correctness with respect to big-step operational semantics for the source and target languages. Compilation is staged and includes standard phases like translation to continuation-passing style and closure conversion, as well as a common subexpression elimination optimization. In this work, our focus has been on discovering and using techniques that make our proofs easy to engineer and maintain. While most programming language work with proof assistants uses very manual proof styles, all of our proofs are implemented as adaptive programs in Coq’s tactic language, making it possible to reuse proofs unchanged as new language features are added.

In this paper, we focus especially on phases of compilation that rearrange the structure of syntax with nested variable binders. That aspect has been a key challenge area in past compiler verification projects, with much more effort expended in the statement and proof of binder-related lemmas than is found in standard pencil-and-paper proofs. We show how to exploit the representation technique of parametric higher-order abstract syntax to avoid the need to prove any of the usual lemmas about binder manipulation, often leading to proofs that are actually shorter than their pencil-and-paper analogues. Our strategy is based on a new approach to encoding operational semantics which delegates all concerns about substitution to the meta language, without using features incompatible with general-purpose type theories like Coq’s logic.
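
To make the representation concrete, here is a minimal parametric HOAS sketch for untyped lambda terms (an illustration only; the paper's source language is far richer): the term type is parameterized by the type of variables, binders are Coq functions over that type, and one small generic function gives capture-avoiding substitution for free.

    Inductive term (V : Type) : Type :=
    | Var : V -> term V
    | App : term V -> term V -> term V
    | Abs : (V -> term V) -> term V.

    Arguments Var {V} _.
    Arguments App {V} _ _.
    Arguments Abs {V} _.

    (* A closed term must make sense for every choice of V. *)
    Definition Term := forall V, term V.

    Definition ident : Term := fun V => Abs (fun x => Var x).

    (* Substitution comes from the meta language: instantiate V with
       term V and collapse the two levels of variables. *)
    Fixpoint flatten {V : Type} (e : term (term V)) : term V :=
      match e with
      | Var e'    => e'
      | App e1 e2 => App (flatten e1) (flatten e2)
      | Abs f     => Abs (fun x => flatten (f (Var x)))
      end.

A term with one free object-level variable is then a polymorphic function E : forall V, V -> term V, and substituting a closed E' : Term into it is just fun V => flatten (E (term V) (E' V)), with no lemmas about renaming or index shifting.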

Program sources are available here (part of the LambdaTamer project), and a talk given at WMM 2009 is available here.

A previous posting of Adam Chlipala’s work on Parametric Higher Order Abstract Syntax can be found here.

Reasoning on an imperative object-based calculus in higher-order abstract syntax

By Alberto Ciaffaglione, Luigi Liquori and Marino Miculan, from MERLIN 2003, available from Luigi Liquori’s website:

We illustrate the benefits of using Natural Deduction in combination with weak Higher-Order Abstract Syntax for formalizing an object-based calculus with objects, cloning, method-update, types with subtyping, and side-effects, in inductive type theories such as the Calculus of Inductive Constructions. This setting suggests a clean and compact formalization of the syntax and semantics of the calculus, with an efficient management of method closures. Using our formalization and the Theory of Contexts, we can prove formally the Subject Reduction Theorem in the proof assistant Coq, with a relatively small overhead.
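
In weak HOAS, binders take Coq functions from a separate, abstract type of variables rather than from the term type itself, which keeps the inductive definition strictly positive. A minimal sketch for untyped lambda terms (not the paper's object calculus) looks like this:

    Section WeakHOAS.
      Variable var : Set.  (* abstract type of object-level variables *)

      Inductive tm : Set :=
      | tvar : var -> tm
      | tapp : tm -> tm -> tm
      | tlam : (var -> tm) -> tm.  (* the binder abstracts over var, not over tm *)
    End WeakHOAS.

    (* Example: the K combinator, written with meta-level binding. *)
    Definition K (var : Set) : tm var :=
      tlam var (fun x => tlam var (fun _ => tvar var x)).

Because the variable type is abstract, such encodings are typically paired with additional reasoning principles, such as the Theory of Contexts used here, to handle freshness and substitution.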

Combining de Bruijn indices and higher-order abstract syntax in Coq

Sorry for the long delay in posting any updates.  This last week, I was invited to give a talk in Sussex on name binding.  In my talk, I was discussing some of the problems with (“strong”) HOAS, specifically negative occurrences in the types of constructors, and the problems with using these in Coq.  An audience member asked where he could read more about this problem, and ways around it.  As I’ve already posted the two papers I suggested, I’ll add a third:

By Venanzio Capretta and Amy P. Felty, from TYPES 2006, available from Amy P. Felty’s website:

The use of higher-order abstract syntax is an important approach for the representation of binding constructs in encodings of languages and logics in a logical framework. Formal meta-reasoning about such object languages is a particular challenge. We present a mechanism for such reasoning, formalized in Coq, inspired by the Hybrid tool in Isabelle. At the base level, we define a de Bruijn representation of terms with basic operations and a reasoning framework. At a higher level, we can represent languages and reason about them using higher-order syntax. We take advantage of Coq’s constructive logic by formulating many definitions as Coq programs. We illustrate the method on two examples: the untyped lambda calculus and quantified propositional logic. For each language, we can define recursion and induction principles that work directly on the higher-order syntax.
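
As a hint of what the "basic operations" at the base level look like, here is a plain de Bruijn datatype with index lifting and substitution (a generic sketch, not the paper's development; the paper's contribution is the higher-order layer and the recursion and induction principles built over such a base).

    Require Import Arith.

    Inductive trm : Set :=
    | ix  : nat -> trm            (* variable, as a de Bruijn index *)
    | ap  : trm -> trm -> trm
    | lam : trm -> trm.

    (* Lift indices >= k by one, for pushing a term under a binder. *)
    Fixpoint lift (k : nat) (t : trm) : trm :=
      match t with
      | ix i     => if Nat.leb k i then ix (S i) else ix i
      | ap t1 t2 => ap (lift k t1) (lift k t2)
      | lam t1   => lam (lift (S k) t1)
      end.

    (* Substitute u for index k in t, renumbering the indices above k. *)
    Fixpoint subst (k : nat) (u t : trm) : trm :=
      match t with
      | ix i =>
          if Nat.eqb i k then u
          else if Nat.ltb k i then ix (pred i) else ix i
      | ap t1 t2 => ap (subst k u t1) (subst k u t2)
      | lam t1   => lam (subst (S k) (lift 0 u) t1)
      end.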

Coq source files are available here.

Abstracting syntax

By Brian Aydemir, Stephan A. Zdancewic and Stephanie Weirich, University of Pennsylvania Technical Report, available from the UPenn Tech Report website:

Binding is a fundamental part of language specification, yet it is both difficult and tedious to get right. In previous work, we argued that an approach based on locally nameless representation and a particular style for defining inductive relations can provide a portable, transparent, lightweight methodology to define the semantics of binding. Although the binding infrastructure required by this approach is straightforward to develop, it leads to duplicated effort and code as the number of binding forms in a language increases.

In this paper, we critically compare a spectrum of approaches that attempt to ameliorate this tedium by unifying the treatment of variables and binding. In particular, we compare our original methodology with two alternative ideas: First, we define variable binding in the object language via variable binding in a reusable library. Second, we present a novel approach that collapses the syntactic categories of the object language together, permitting variables to be shared between them.

Our main contribution is a careful characterization of the benefits and drawbacks of each approach. In particular, we use multiple solutions to the POPLmark challenge in the Coq proof assistant to point out specific consequences with respect to the size of the binding infrastructure, transparency of the definitions, impact on the metatheory of the object language, and adequacy of the object language encoding.
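
To give a flavor of the second alternative, a "collapsed" syntax for a System F-like object language might put types and terms into one inductive definition with a single pair of variable constructors, so one set of binding infrastructure serves both syntactic categories (a hypothetical sketch, not the report's actual definitions):

    Inductive syntax : Set :=
    | var_b : nat -> syntax                 (* bound variable (de Bruijn index) *)
    | var_f : nat -> syntax                 (* free variable (name) *)
    | arrow : syntax -> syntax -> syntax    (* type: T1 -> T2 *)
    | all   : syntax -> syntax              (* type: forall X, T *)
    | abs   : syntax -> syntax -> syntax    (* term: fun (x : T) => e *)
    | app   : syntax -> syntax -> syntax    (* term: e1 e2 *)
    | tabs  : syntax -> syntax              (* term: type abstraction *)
    | tapp  : syntax -> syntax -> syntax.   (* term: type application e [T] *)

Ill-formed combinations (a "type" where a "term" is expected) are then ruled out by the typing and well-formedness judgments rather than by the grammar, which is part of the adequacy trade-off the paper examines.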