Sound and Complete Bidirectional Typechecking
for Higher-Rank Polymorphism and Indexed Types

Joshua Dunfield
University of British Columbia, Vancouver, Canada
[email protected]

Neelakantan R. Krishnaswami
University of Birmingham, Birmingham, England
[email protected]

Draft of 2015/3/2
Abstract

Bidirectional typechecking, in which terms either synthesize a type or are checked against a known type, has become popular for its scalability, its error reporting, and its ease of implementation. Following principles from proof theory, bidirectional typing can be applied to many type constructs. The principles underlying a bidirectional approach to indexed types (generalized algebraic datatypes) are less clear. Building on proof-theoretic treatments of equality, we give a declarative specification of typing based on focalization. This approach permits declarative rules for coverage of pattern matching, as well as support for first-class existential types using a focalized subtyping judgment. We use refinement types to avoid explicitly passing equality proofs in our term syntax, making our calculus close to languages such as Haskell and OCaml. An explicit rule deduces when a type is principal, leading to reliable substitution principles for a rich type system with significant type inference.

We also give a set of algorithmic typing rules, and prove that they are sound and complete with respect to the declarative system. The proof requires a number of technical innovations, including proving soundness and completeness in a mutually recursive fashion.

1. Introduction

Consider an indexed sum type with a numeric index indicating whether the left or the right branch is inhabited, written in Haskell-like notation as follows:

    data Sum : Nat -> * where
      Left  : A -> Sum 0
      Right : B -> Sum (succ n)

We can use this definition to write a projection function that always gives us an element of A when the index is 0:

    left : Sum 0 → A
    left (Left a) = a

This definition omits the clause for the Right branch. The Right branch has index succ(n) for some n, and the type annotation tells us that left's argument has an index of 0. Since there exists no natural number n such that 0 = succ(n), the Right branch cannot occur. Therefore it is safe to omit this case from the pattern match.

This is an entirely reasonable explanation for programmers, but language designers and implementors will have more questions. First, how can we implement such a type system? Clearly we needed some equality reasoning to justify leaving off the Right case, which is not trivial in general. Second, designers of functional languages are accustomed to the benefits of the Curry-Howard correspondence, and expect to see a logical reading of type systems to accompany the operational reading. So what is the logical reading of GADTs?

Since we relied on equality information to eliminate the second clause, it seems reasonable to look to logical accounts of equality. However, one of the ironies of proof theory is that it is possible to formulate equality in (at least) two different ways. The better-known approach is the identity type of Martin-Löf; another approach is due to Schroeder-Heister (1994) and Girard (1992). Both approaches introduce equality via reflexivity:

    ───────────────────
    Γ ⊢ refl : (t = t)

However, they differ in the elimination rule. The Martin-Löf identity type eliminates equalities with an equality coercion J. A simplified (non-dependent) version of this rule can be formulated as:

    Γ ⊢ e : (s = t)    Γ ⊢ t : A(s)
    ────────────────────────────────
    Γ ⊢ J(e, t) : A(t)

Note that this kind of rule does not immediately justify the above definition of left: since J is a coercion, to use it we need to match the Right case and then show it leads to a contradiction.¹

¹ Indeed, the identity type by itself is not enough to write the left function: without universes, MLTT cannot show that a proof of (0 = 1) leads to a contradiction (Smith 1988)!

The elimination rule of Girard and Schroeder-Heister has a rather different flavour. It was originally formulated without proof terms, in a sequent calculus style:

    for all θ. if θ ∈ csu(s, t) then θ(Γ) ⊢ θ(C)
    ─────────────────────────────────────────────
    Γ, (s = t) ⊢ C

Here, we write csu(s, t) for a complete set of unifiers of s and t. So the rule says that we can eliminate an equality s = t by giving a proof of the goal C under each substitution θ that makes the two terms s and t equal.

There are three important features of the Girard/Schroeder-Heister rule, two good and one bad. First, when there are no unifiers, there are no premises: if we assume an inconsistent equation, we can immediately conclude the goal. For example, consider eliminating the equality 0 = 1:

    ────────────────
    Γ, (0 = 1) ⊢ C

Second, this rule is an invertible left rule in the sequent calculus, which is known to correspond to pattern matching (Krishnaswami 2009). These two features line up nicely with our definition of left, where the impossibility of the Right case was indicated by the absence of a pattern clause. So it looks like the equalities used by GADTs correspond to the Girard/Schroeder-Heister equality, not the Martin-Löf identity type.

Alas, the third feature of this rule prevents us from just giving a proof term assignment for first-order logic and calling it a day. It is formulated in terms of unification, and it works by treating the free variables of the two terms as unification variables. But type inference algorithms also use unification, introducing unification variables to stand for unknown types. So we need to understand how to integrate these two uses of unification, or at least how to keep them decently apart, in order to take this logical specification and implement type inference for it.

This problem—formulating indexed types in a logical style, while retaining the ability to do type inference for them—is the subject of this paper.

Contributions. The equivalence of GADTs to the combination of existential types and equality constraints has long been known (Xi et al. 2003). Our fundamental contribution is to reduce GADTs to standard logical ingredients, while retaining the implementability of the type system. We accomplish this by formulating a system of indexed types in a bidirectional style (combining type synthesis with checking against a known type), which is well known to combine practical implementability with theoretical tidiness.

• Our language supports implicit higher-rank polymorphism including existential types. While algorithms for higher-rank universal polymorphism are well known (Peyton Jones et al. 2007; Dunfield and Krishnaswami 2013), our approach to supporting existential types is novel. Our system goes beyond the standard practice of tying existentials to datatype declarations (Läufer and Odersky 1994), in favour of a first-class treatment of implicit existential types. This approach has historically been thought difficult, because the unrestricted combination of universal and existential quantification seems to require mixed-prefix unification (i.e., solving equations under alternating quantifiers). We use the proof-theoretic technique of focusing to give a novel polarized subtyping judgment, which lets us treat alternating quantifiers in a way that retains decidability while maintaining other essential properties of subtyping, such as stability under substitution and transitivity.

• Our language includes equality types in the style of Girard and Schroeder-Heister, but without an explicit introduction form for equality. Instead, we treat equalities as property types, in the style of intersection or refinement types. This means that we do not need to write explicit equality proofs in our syntax, which permits us to more closely model the way equalities are used in OCaml and Haskell.

• Our calculus includes nested pattern matching, which fits neatly in the bidirectional framework, and allows a formal specification of coverage checking with GADTs.

• Our declarative system tracks whether or not a derivation has a principal type. The system includes an unusual "higher-order principality" rule, which says that if only a single type can be synthesized for a term, then that type is principal. While this style of hypothetical reasoning is natural to explain to programmers, it is also extremely non-algorithmic.

• We formulate an algorithmic type system (Section 4) for our declarative calculus, and prove that typechecking is decidable, deterministic (4.5), and sound and complete (Sections 5–6) with respect to the declarative system.

Supplementary material. We submitted two supplemental files: one with rules that we omitted for space reasons, and another (very long) file with detailed proofs and omitted lemma statements.

2. Overview

To orient the reader, we give an overview and rationale of the novelties in our type system, before getting into the details of the typing rules and algorithm. We explain our design choices by continuing with the Sum type definition from the introduction. As is well known (Cheney and Hinze 2003; Xi et al. 2003), this kind of declaration can be desugared into type expressions that use equality and existential types to express the return type constraints; the example in the introduction desugars into something like

    Sum n ≜ A × (n = 0) + ∃m : N. B × (n = succ(m))

While simple, this encoding suffices to illustrate all of the key difficulties in typechecking for GADTs.

Universals, existentials, and type inference. All practical typed functional languages must support some degree of type inference, most critically the inference of type arguments. That is, if we have a function f of type ∀a. a → a, and we want to apply it to the argument 3, then we want to write f 3, and not f [Nat] 3 (as would be the case in pure System F). Even with a single type argument, this is a rather noisy style, and programs using even moderate amounts of polymorphism would rapidly become unreadable.

However, omitting type arguments has significant metatheoretical implications. In particular, it forces us to include subtyping in our typing rules, so that (for instance) the polymorphic type ∀a. a → a can be viewed as a subtype of its instantiations (like Nat → Nat).

For the subtype relation induced by polymorphism, subtype entailment is decidable (under modest restrictions). Matters get more complicated when existential types are also included. As can be seen in the encoding of the Right constructor of the Sum n type, existentials are necessary to encode equality constraints in GADTs. But the naive combination of existential and universal types requires doing unification under a mixed prefix of alternating quantifiers (Miller 1992), which is undecidable. Thus, programming languages have traditionally restricted the use of existential types stringently. They tie existential introduction and elimination to datatype declarations, so that there is always a syntactic marker for when to introduce or eliminate existential types. This permits leaving existentials out of subtyping altogether, at the price of no longer permitting implicit subtyping (such as using λx. x + 1 at type ∃a. a → a).

While this is a practical solution, it increases the distance between surface languages and their type-theoretic cores. Our goal is to give a direct type-theoretic account of the features of our surface languages, avoiding complex elaboration passes.

The key problem in mixed-prefix unification is that the order in which to instantiate quantifiers is unclear. When deciding Γ ⊢ ∀a. A(a) ≤ ∃b. B(b), we can choose an instantiation for a or for b, so that we prove the subtype entailment Γ ⊢ A(t) ≤ ∃b. B(b) or the subtype entailment Γ ⊢ ∀a. A(a) ≤ B(t). An algorithm will introduce a unification variable for a and then for b, or the other way around—and this choice matters! In the first order, b may depend on a, but not vice versa; with the second order, the allowed dependencies are reversed. Accurate dependency tracking amounts to Skolemization, which means we have reduced the problem to higher-order unification.

We adopt an idea from polarized type theory. In the language of polarization, universals are a negative type, and existentials are a positive type. So we introduce two mutually recursive subtype relations: Γ ⊢ A ≤+ B for positive types and Γ ⊢ A ≤− B for negative types. The positive subtype relation only deconstructs existentials, and the negative subtype relation only deconstructs universals. This fixes the order in which quantifiers are instantiated, making the problem decidable (in fact, rather easy).

The price we pay is that fewer subtype entailments are derivable. But the lost subtype entailments are those that rely on "clever" quantifier reversals, and all such entailments can be mimicked by writing identity coercions. So we do not lose fundamental expressivity, but we do gain decidability.
Expressions          e ::= x | () | λx. e | e1 (e2 ·· s) | (e : A) | ⟨e1, e2⟩ | inj1 e | inj2 e | case(e, Π)
Values               v ::= x | () | λx. e | (v : A) | ⟨v1, v2⟩ | inj1 v | inj2 v
Spines               s ::= · | e ·· s
Patterns             ρ ::= x | ⟨ρ1, ρ2⟩ | inj1 ρ | inj2 ρ
Branches             π ::= ρ⃗ ⇒ e
Lists of branches    Π ::= · | π | Π

Figure 1. Source syntax

Universal variables  α, β, γ
Sorts                κ ::= ⋆ | N
Types                A, B, C ::= 1 | A → B | A + B | A × B | α | ∀α : κ. A | ∃α : κ. A | P ⊃ A | A ∧ P
Terms/monotypes      t, τ, σ ::= zero | succ(t) | 1 | α | τ → σ | τ + σ | τ × σ
Propositions         P, Q ::= t = t′
Contexts             Ψ ::= · | Ψ, α : κ | Ψ, x : A p
Polarities           ± ::= + | −
Binary connectives   ⊕ ::= → | + | ×
Principalities       p, q ::= ! | ⧸!   (⧸!, "not principal", sometimes omitted)

Figure 2. Syntax of declarative types and contexts

Equality as a property. The constructors in the datatype declaration above contain no explicit equality proofs: we can construct a value Left a without giving an equality proof that the index is zero. This is the usual convention in Haskell and OCaml, but our encoding pairs a value together with a proof. As before, we would like to model this feature directly, so that our calculus stays close to surface languages, without sacrificing the logical reading of the system.

In this case, the appropriate logical concepts come from the theory of intersection types. A typing judgment such as e : A × B can be viewed as giving instructions on how to construct a value (pair an A with a B). But types can also be viewed as properties, where e : X is read "e has property X". To model GADTs accurately, we treat equations t = t′ as properties, using a property type constructor A ∧ P to model elements of type A satisfying the property (equation) P. (We also introduce P ⊃ A for its adjoint dual.) So our encoding is really:

    Sum n ≜ A ∧ (n = 0) + ∃m : N. B ∧ (n = succ(m))

Then inj1 a inhabits Sum n only if the property n = 0 is true. Handling equality constraints through intersection types means that certain restrictions on typing that are useful for decidability, such as restricting property introduction to values, arise naturally from the semantic point of view—via the value restriction needed for soundly modeling intersection and union types (Davies and Pfenning 2000; Dunfield and Pfenning 2003).

Bidirectionality, pattern matching, and principality. Something that is not, by itself, novel in our approach is our decision to formulate both the declarative and algorithmic systems in a bidirectional style. Bidirectional checking is a popular implementation choice (for systems ranging from dependent types (Coquand 1996; Abel et al. 2008) to OO languages like C# and Scala (Bierman et al. 2007; Odersky et al. 2001)), but also has good proof-theoretic foundations (Watkins et al. 2004), making it useful both for specifying and implementing type systems. Bidirectional approaches make it clear to programmers where annotations are needed (which is good for specification), and can also remove unneeded nondeterminism from typing (which is good for both implementation and proving its correctness).

However, it is worth highlighting that because both bidirectionality and pattern matching arise from focalization, these two features fit together extremely well. In fact, by following the blueprint of focalization-based pattern matching, we can give a coverage-checking algorithm that explains when it is permissible to omit clauses in pattern matching (such as the omission of the Right case from the left function in the introduction).

In the propositional case, the type synthesis judgment of a bidirectional type system generates principal types: if a type can be inferred for a term, that type is unique. This property is lost once quantifiers are introduced into the system, which is why it is not much remarked upon. However, prior work on GADTs, starting with Simonet and Pottier (2007), has emphasized that handling equality constraints is much easier when the type of a scrutinee is principal. Essentially, this ensures that no existential variables can appear in equations, which prevents equation solving from interfering with unification-based type inference. The OutsideIn algorithm takes this consequence as a definition, permitting non-principal types just so long as they do not change the values of equations. However, Vytiniotis et al. (2011) note that while their system is sound, they no longer have a completeness result for their type system.

We use this insight to extend our bidirectional typechecking algorithm to track principality: the judgments we give track whether types are principal, and we use this to give a relatively simple specification for whether or not type annotations are needed. We are able to give a very natural spec to programmers—cases on GADTs must scrutinize terms with principal types, and an inferred type is principal just when it is the only type that can be inferred for that term—which soundly and completely corresponds to the implementation-side constraint: a type is principal when it contains no existential unification variables.
3. Declarative Typing

3.1 Syntax

Expression language. Expressions (Figure 1) are variables x, the unit value (), functions λx. e, applications to spines e1 (e2 ·· s), annotations (e : A), pairs ⟨e1, e2⟩, injections into a sum type injk e, and case expressions case(e, Π), where Π is a list of branches π, which can eliminate pairs and injections; see below. Values v are standard (for a call-by-value semantics).

A spine s is a list of expressions—arguments to a function. Allowing empty spines is convenient in the typing rules, but would be strange in the source syntax, so (in the grammar of expressions e) we require a nonempty spine e2 ·· s.

Patterns ρ consist of pattern variables, pairs, and injections. A branch π is a sequence of patterns ρ⃗ with a branch body e. We formulate pattern clauses as sequences as a technical convenience to specify pattern typing and coverage checking inductively, by letting us deconstruct tuple patterns into a sequence of patterns.

Types. We write types as A, B and C. We have the unit type 1, functions A → B, sums A + B, and products A × B. We have universal and existential types ∀α : κ. A and ∃α : κ. A; these are predicative quantifiers over monotypes (see below). We write α, β, etc. for type variables; these are universal, except when bound within an existential type.

We also have a guarded type P ⊃ A, read "P implies A". This implication corresponds to type A, provided P holds. Its dual is the asserting type A ∧ P, read "A with P", which witnesses the proposition P. In both types, P has no runtime content.

Sorts, terms, monotypes, and propositions. Terms and monotypes t, τ, σ share a grammar but are distinguished by their sorts κ. Natural numbers zero and succ(t) are terms and have sort N. Unit 1 has the sort ⋆ of monotypes. A variable α stands for a term or a monotype, depending on the sort κ annotating its binder. Functions, sums, and products of monotypes are monotypes and have sort ⋆. We tend to prefer t for terms and σ, τ for monotypes. A proposition P or Q is simply an equation t = t′.

Note that terms, which represent runtime-irrelevant information, are distinct from expressions; however, an expression may include type annotations of the form P ⊃ A and A ∧ P, where P contains terms.

Contexts. A declarative context Ψ is an ordered sequence of universal variable declarations α : κ and expression variable typings x : A p, where p denotes whether the type A is principal (Section 3.3). A variable α can be free in a type A only if α was declared to the left: α : ⋆, x : α p is well-formed, but x : α p, α : ⋆ is not.

    type checking                        Ψ ⊢ e ⇐ A p
    type synthesis                       Ψ ⊢ e ⇒ B p
    checking, eq. elim.                  Ψ / P ⊢ e ⇐ C p
    subtyping                            Ψ ⊢ A ≤± B
    spine typing                         Ψ ⊢ s : A p ≫ B q
    principality-recovering spine typing Ψ ⊢ s : A p ≫ B ⌈q⌉
    pattern matching                     Ψ ⊢ Π :: A⃗ ⇐ C p
    match, eq. elim.                     Ψ / P ⊢ Π :: A⃗ ⇐ C p
    coverage                             Ψ ⊢ Π covers A⃗

Figure 3. Dependency structure of the declarative judgments

3.2 Subtyping

We give our two subtyping relations in Figure 4. We treat the universal quantifier as a negative type (since it is a function in System F), and the existential as a positive type (since it is a pair in System F). We have two subtyping rules for each of these connectives, corresponding to the left and right rules for universals and existentials in the sequent calculus.

We treat all other types as having no polarity. The positive and negative subtype judgments are mutually recursive; the ≤−+ rule permits switching the polarity of subtyping from positive to negative when both of the types are non-positive, and conversely for ≤+−. When both types are neither positive nor negative, we require them to be equal (≤Refl±).

In logical terms, functions and guarded types are negative; sums, products and assertion types are positive. We could potentially operate on these types in the negative and positive subtype relations, respectively. Leaving out (for example) function subtyping means that we will have to do some η-expansions to get programs to typecheck; we omit these rules to keep the implementation complexity low. This also serves to illustrate a point of flexibility in the design of a bidirectional type system: we are relatively free to adjust the subtype relation to taste!

Ψ ⊢ A ≤± B   Under context Ψ, type A is a subtype of B, decomposing head connectives of polarity ±

    (≤Refl±)  Ψ ⊢ A type;  nonpos(A);  nonneg(A)   ⟹   Ψ ⊢ A ≤± A
    (≤−+)     nonpos(A);  nonpos(B);  Ψ ⊢ A ≤− B   ⟹   Ψ ⊢ A ≤+ B
    (≤+−)     nonneg(A);  nonneg(B);  Ψ ⊢ A ≤+ B   ⟹   Ψ ⊢ A ≤− B
    (≤∀L)     Ψ ⊢ τ : κ;  Ψ ⊢ [τ/α]A ≤− B          ⟹   Ψ ⊢ ∀α : κ. A ≤− B
    (≤∀R)     Ψ, β : κ ⊢ A ≤− B                    ⟹   Ψ ⊢ A ≤− ∀β : κ. B
    (≤∃L)     Ψ, α : κ ⊢ A ≤+ B                    ⟹   Ψ ⊢ ∃α : κ. A ≤+ B
    (≤∃R)     Ψ ⊢ τ : κ;  Ψ ⊢ A ≤+ [τ/β]B          ⟹   Ψ ⊢ A ≤+ ∃β : κ. B

Figure 4. Subtyping in the declarative system

3.3 Typing judgments

Principality. Our typing judgments carry principalities: A ! means that A is principal, and A ⧸! means A is not principal. Note that a principality is part of a judgment, not part of a type. In the checking judgment Ψ ⊢ e ⇐ A p the type A is input; if p = !, we know that e is not the result of guessing. For example, the e in (e : A) is checked against A !. In the synthesis judgment Ψ ⊢ e ⇒ A p, the type A is output, and p = ! means it is impossible to synthesize any other type, as in Ψ ⊢ (e : A) ⇒ A !. We sometimes omit a principality when it is ⧸! ("not principal"). We write p ⊑ q, read "p is at least as principal as q", for the reflexive closure of ! ⊑ ⧸!.

Spine judgments. The ordinary form of spine judgment, Ψ ⊢ s : A p ≫ C q, says that if arguments s are passed to a function of type A, the function returns type C. For a function e applied to one argument e1, we write e e1 as syntactic sugar for e (e1 ·· ·). Supposing e synthesizes A1 → A2, we apply Decl→Spine, checking e1 against A1 and using DeclEmptySpine to derive Ψ ⊢ · : A2 p ≫ A2 p.

Rule Decl∀Spine does not decompose e ·· s but instantiates a ∀-quantifier. Note that, even if the given type ∀α : κ. A is principal (p = !), the type [τ/α]A in the premise is not principal—we could choose a different τ. In fact, the q in Decl∀Spine is also always ⧸!, because no rule deriving the ordinary spine judgment can recover principality.

The recovery spine judgment Ψ ⊢ s : A p ≫ C ⌈q⌉, however, can restore principality in situations where the choice of τ in Decl∀Spine cannot affect the result type C. If A is principal (p = !) but the ordinary spine judgment produces a non-principal C, we can try to recover principality with DeclSpineRecover.
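The interplay of synthesis, checking, and spine typing can be made concrete with a small executable sketch of our own (not the paper's algorithm) for a fragment with unit, functions, annotations, and predicative ∀. Instantiating a quantifier, as in Decl∀Spine, introduces a unification variable and drops principality; in the spirit of DeclSpineRecover, principality is restored when the result type ends up containing no unsolved existential variables. All names below are illustrative.

```python
import itertools

counter = itertools.count()
# Types: ('Unit',), ('Var', a), ('Evar', n), ('Arr', A, B), ('All', a, A)
# Terms: ('unit',), ('var', x), ('lam', x, e), ('ann', e, A), ('app', e, [args])

def subst(A, a, T):
    if A == ('Var', a):
        return T
    if A[0] == 'Arr':
        return ('Arr', subst(A[1], a, T), subst(A[2], a, T))
    if A[0] == 'All' and A[1] != a:
        return ('All', A[1], subst(A[2], a, T))
    return A

def zonk(A, sol):
    """Apply the current existential-variable solutions throughout A."""
    if A[0] == 'Evar' and A[1] in sol:
        return zonk(sol[A[1]], sol)
    if A[0] == 'Arr':
        return ('Arr', zonk(A[1], sol), zonk(A[2], sol))
    return A

def unify(A, B, sol):
    A, B = zonk(A, sol), zonk(B, sol)
    if A == B:
        return True
    if A[0] == 'Evar':
        sol[A[1]] = B; return True
    if B[0] == 'Evar':
        sol[B[1]] = A; return True
    if A[0] == 'Arr' and B[0] == 'Arr':
        return unify(A[1], B[1], sol) and unify(A[2], B[2], sol)
    return False

def has_evar(A):
    return A[0] == 'Evar' or (A[0] == 'Arr' and (has_evar(A[1]) or has_evar(A[2])))

def synth(env, e, sol):
    """Return (type, principal) for e."""
    if e[0] == 'unit':
        return ('Unit',), True
    if e[0] == 'var':
        return env[e[1]]
    if e[0] == 'ann':                   # annotations synthesize principally
        assert check(env, e[1], e[2], sol)
        return e[2], True
    if e[0] == 'app':
        A, p = synth(env, e[1], sol)
        C, q = spine(env, e[2], A, p, sol)
        C = zonk(C, sol)
        if p and not has_evar(C):       # recovery: only one result was possible
            q = True
        return C, q
    raise TypeError(e)

def check(env, e, A, sol):
    A = zonk(A, sol)
    if A[0] == 'All':                   # go under the quantifier
        return check(env, e, A[2], sol)
    if e[0] == 'lam' and A[0] == 'Arr':
        return check(dict(env, **{e[1]: (A[1], True)}), e[2], A[2], sol)
    B, _ = synth(env, e, sol)           # subsumption collapsed to unification
    return unify(A, B, sol)

def spine(env, args, A, p, sol):
    A = zonk(A, sol)
    if not args:                        # like DeclEmptySpine
        return A, p
    if A[0] == 'All':                   # like Decl-forall-Spine: principality lost
        return spine(env, args, subst(A[2], A[1], ('Evar', next(counter))), False, sol)
    if A[0] == 'Arr':                   # like Decl-arrow-Spine
        assert check(env, args[0], A[1], sol)
        return spine(env, args[1:], A[2], p, sol)
    raise TypeError(A)
```

Applying an annotated identity (λx. x : ∀a. a → a) to () instantiates a with an existential variable, solves it to 1, and then recovers principality for the result type 1.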
Ψ ⊢ e ⇐ A p        Under context Ψ, expression e checks against input type A
Ψ ⊢ e ⇒ A p        Under context Ψ, expression e synthesizes output type A
Ψ ⊢ P true         Under context Ψ, check P
Ψ ⊢ s : A p ≫ C q  Under context Ψ, passing spine s to a function of type A synthesizes type C;
                   in the ⌈q⌉ form, recover principality in q if possible
Ψ / P ⊢ e ⇐ C p    Under context Ψ, incorporate proposition P and check e against C

    (DeclVar)           (x : A p) ∈ Ψ   ⟹   Ψ ⊢ x ⇒ A p
    (DeclSub)           Ψ ⊢ e ⇒ A q;  Ψ ⊢ A ≤pol(B) B   ⟹   Ψ ⊢ e ⇐ B p
    (DeclAnno)          Ψ ⊢ A type;  Ψ ⊢ e ⇐ A !   ⟹   Ψ ⊢ (e : A) ⇒ A !
    (Decl1I)            Ψ ⊢ () ⇐ 1 p
    (Decl∀I)            v chk-I;  Ψ, α : κ ⊢ v ⇐ A p   ⟹   Ψ ⊢ v ⇐ ∀α : κ. A p
    (Decl⊃I)            v chk-I;  Ψ / P ⊢ v ⇐ A !   ⟹   Ψ ⊢ v ⇐ P ⊃ A !
    (Decl∧I)            Ψ ⊢ P true;  Ψ ⊢ e ⇐ A p   ⟹   Ψ ⊢ e ⇐ A ∧ P p
    (Decl→I)            Ψ, x : A p ⊢ e ⇐ B p   ⟹   Ψ ⊢ λx. e ⇐ A → B p
    (Decl×I)            Ψ ⊢ e1 ⇐ A1 p;  Ψ ⊢ e2 ⇐ A2 p   ⟹   Ψ ⊢ ⟨e1, e2⟩ ⇐ A1 × A2 p
    (Decl+Ik)           Ψ ⊢ e ⇐ Ak p   ⟹   Ψ ⊢ injk e ⇐ A1 + A2 p
    (Decl→E)            Ψ ⊢ e ⇒ A p;  Ψ ⊢ s : A p ≫ C ⌈q⌉   ⟹   Ψ ⊢ e s ⇒ C q
    (DeclCase)          Ψ ⊢ e ⇒ A !;  Ψ ⊢ Π :: A ⇐ C p;  Ψ ⊢ Π covers A   ⟹   Ψ ⊢ case(e, Π) ⇐ C p
    (DeclCheckpropEq)   Ψ ⊢ (t = t) true
    (DeclEmptySpine)    Ψ ⊢ · : A p ≫ A p
    (Decl→Spine)        Ψ ⊢ e ⇐ A p;  Ψ ⊢ s : B p ≫ C q   ⟹   Ψ ⊢ e ·· s : A → B p ≫ C q
    (Decl∀Spine)        Ψ ⊢ τ : κ;  Ψ ⊢ e ·· s : [τ/α]A ⧸! ≫ C q   ⟹   Ψ ⊢ e ·· s : ∀α : κ. A p ≫ C q
    (Decl⊃Spine)        Ψ ⊢ P true;  Ψ ⊢ e ·· s : A p ≫ C q   ⟹   Ψ ⊢ e ·· s : P ⊃ A p ≫ C q
    (DeclSpineRecover)  Ψ ⊢ s : A ! ≫ C ⧸!;  for all C′, if Ψ ⊢ s : A ! ≫ C′ ⧸! then C′ = C   ⟹   Ψ ⊢ s : A ! ≫ C ⌈!⌉
    (DeclSpinePass)     Ψ ⊢ s : A p ≫ C q   ⟹   Ψ ⊢ s : A p ≫ C ⌈q⌉
    (DeclCheckUnify)    mgu(σ, τ) = θ;  θ(Ψ) ⊢ θ(e) ⇐ θ(C) p   ⟹   Ψ / (σ = τ) ⊢ e ⇐ C p
    (DeclCheck⊥)        mgu(σ, τ) = ⊥   ⟹   Ψ / (σ = τ) ⊢ e ⇐ C p

Figure 5. Declarative typing
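The two equality-elimination rules at the bottom of Figure 5 hinge on first-order unification of index terms. Since index terms here are built only from zero, succ(t), and variables, a most general unifier either exists or provably does not. A small sketch of our own (function names are illustrative):

```python
# Index terms: ('zero',), ('succ', t), ('uvar', a)

def apply_subst(theta, t):
    if t[0] == 'uvar':
        return apply_subst(theta, theta[t[1]]) if t[1] in theta else t
    if t[0] == 'succ':
        return ('succ', apply_subst(theta, t[1]))
    return t

def occurs(a, t):
    return t == ('uvar', a) or (t[0] == 'succ' and occurs(a, t[1]))

def mgu(s, t, theta=None):
    """Most general unifier of two index terms, or None (the rules' bottom)."""
    theta = dict(theta or {})
    s, t = apply_subst(theta, s), apply_subst(theta, t)
    if s == t:
        return theta
    if s[0] == 'uvar' and not occurs(s[1], t):
        theta[s[1]] = t; return theta
    if t[0] == 'uvar' and not occurs(t[1], s):
        theta[t[1]] = s; return theta
    if s[0] == 'succ' and t[0] == 'succ':
        return mgu(s[1], t[1], theta)
    return None  # clash (zero vs. succ) or occurs-check failure

def nat(n):
    return ('zero',) if n == 0 else ('succ', nat(n - 1))
```

Under Ψ / (σ = τ) ⊢ e ⇐ C p, a checker would call mgu(σ, τ): a substitution triggers the DeclCheckUnify analogue (apply θ everywhere and keep checking), while None triggers the DeclCheck⊥ analogue, making the branch vacuously well-typed. That is precisely the reasoning that let us omit the Right case of left, where the equation is 0 = succ(m).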
Ψ ⊢ Π :: A⃗ ⇐ C p      Under context Ψ, check branches Π with patterns of type A⃗ and bodies of type C
Ψ / P ⊢ Π :: A⃗ ⇐ C p  Under context Ψ, incorporate proposition P while checking branches Π
                      with patterns of type A⃗ and bodies of type C

    (DeclMatchEmpty)  Ψ ⊢ · :: A⃗ ⇐ C p
    (DeclMatchSeq)    Ψ ⊢ π :: A⃗ ⇐ C p;  Ψ ⊢ Π :: A⃗ ⇐ C p   ⟹   Ψ ⊢ π | Π :: A⃗ ⇐ C p
    (DeclMatchBase)   Ψ ⊢ e ⇐ C p   ⟹   Ψ ⊢ (· ⇒ e) :: · ⇐ C p
    (DeclMatchUnit)   Ψ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p   ⟹   Ψ ⊢ (), ρ⃗ ⇒ e :: 1, A⃗ ⇐ C p
    (DeclMatch∃)      Ψ, α : κ ⊢ ρ⃗ ⇒ e :: A, A⃗ ⇐ C p   ⟹   Ψ ⊢ ρ⃗ ⇒ e :: ∃α : κ. A, A⃗ ⇐ C p
    (DeclMatch∧)      Ψ / P ⊢ ρ⃗ ⇒ e :: A, A⃗ ⇐ C p   ⟹   Ψ ⊢ ρ⃗ ⇒ e :: A ∧ P, A⃗ ⇐ C p
    (DeclMatch×)      Ψ ⊢ ρ1, ρ2, ρ⃗ ⇒ e :: A1, A2, A⃗ ⇐ C p   ⟹   Ψ ⊢ ⟨ρ1, ρ2⟩, ρ⃗ ⇒ e :: A1 × A2, A⃗ ⇐ C p
    (DeclMatch+k)     Ψ ⊢ ρ, ρ⃗ ⇒ e :: Ak, A⃗ ⇐ C p   ⟹   Ψ ⊢ injk ρ, ρ⃗ ⇒ e :: A1 + A2, A⃗ ⇐ C p
    (DeclMatchNeg)    A not headed by ∧ or ∃;  Ψ, x : A ! ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p   ⟹   Ψ ⊢ x, ρ⃗ ⇒ e :: A, A⃗ ⇐ C p
    (DeclMatchWild)   A not headed by ∧ or ∃;  Ψ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p   ⟹   Ψ ⊢ _, ρ⃗ ⇒ e :: A, A⃗ ⇐ C p
    (DeclMatch⊥)      mgu(σ, τ) = ⊥   ⟹   Ψ / σ = τ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p
    (DeclMatchUnify)  mgu(σ, τ) = θ;  θ(Ψ) ⊢ θ(ρ⃗ ⇒ e) :: θ(A⃗) ⇐ θ(C) p   ⟹   Ψ / σ = τ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p

Figure 6. Declarative pattern matching
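Figure 6's rules walk a vector of patterns against a vector of types, one head constructor at a time. The left-to-right deconstruction can be sketched as a function, our illustration only, that ignores principality and the ∧/∃ rules and computes the variable bindings under which a branch's body would be checked:

```python
# Patterns: ('var', x), ('wild',), ('unit',), ('pair', p1, p2), ('inj', k, p)
# Types:    ('Unit',), ('Sum', A1, A2), ('Prod', A1, A2), or any other ("negative") type

def branch_bindings(pats, types):
    """Return {x: A} if the pattern vector matches the type vector, else None.
    Mirrors DeclMatchUnit / DeclMatch-prod / DeclMatch+k / DeclMatchNeg / DeclMatchWild."""
    if not pats:
        return {} if not types else None        # DeclMatchBase needs both vectors empty
    p, A = pats[0], types[0]
    rest = lambda ps, ts: branch_bindings(list(ps) + pats[1:], list(ts) + types[1:])
    if p[0] == 'var':                           # DeclMatchNeg: bind x : A
        out = rest([], [])
        return dict(out, **{p[1]: A}) if out is not None else None
    if p[0] == 'wild':                          # DeclMatchWild: discard the type
        return rest([], [])
    if p[0] == 'unit' and A == ('Unit',):       # DeclMatchUnit
        return rest([], [])
    if p[0] == 'pair' and A[0] == 'Prod':       # DeclMatch-prod: extend both sequences
        return rest([p[1], p[2]], [A[1], A[2]])
    if p[0] == 'inj' and A[0] == 'Sum':         # DeclMatch+k: pick the k-th summand
        return rest([p[2]], [A[p[1]]])
    return None                                 # ill-typed pattern
```

For example, matching the pattern ⟨x, inj1 y⟩ against A × (B + C) binds x : A and y : B, exactly as chaining DeclMatch×, DeclMatchNeg, and DeclMatch+1 would.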
Ψ ⊢ Π covers A⃗   Patterns Π cover the types A⃗ in context Ψ

    (DeclCoversEmpty)   Ψ ⊢ (· ⇒ e1) | Π covers ·
    (DeclCoversVar)     Π ⇝var Π′;  Ψ ⊢ Π′ covers A⃗   ⟹   Ψ ⊢ Π covers A, A⃗
    (DeclCovers1)       Π ⇝1 Π′;  Ψ ⊢ Π′ covers A⃗   ⟹   Ψ ⊢ Π covers 1, A⃗
    (DeclCovers×)       Π ⇝× Π′;  Ψ ⊢ Π′ covers A1, A2, A⃗   ⟹   Ψ ⊢ Π covers A1 × A2, A⃗
    (DeclCovers+)       Π ⇝+ ΠL ‖ ΠR;  Ψ ⊢ ΠL covers A1, A⃗;  Ψ ⊢ ΠR covers A2, A⃗   ⟹   Ψ ⊢ Π covers A1 + A2, A⃗
    (DeclCovers∃)       Ψ, α : κ ⊢ Π covers A, A⃗   ⟹   Ψ ⊢ Π covers ∃α : κ. A, A⃗
    (DeclCoversEq)      mgu(t1, t2) = θ;  θ(Ψ) ⊢ θ(Π) covers θ(A′, A⃗)   ⟹   Ψ ⊢ Π covers A′ ∧ (t1 = t2), A⃗
    (DeclCoversEqBot)   mgu(t1, t2) = ⊥   ⟹   Ψ ⊢ Π covers A′ ∧ (t1 = t2), A⃗

Π ⇝× Π′   Expand head pair patterns in Π

    · ⇝× ·
    Π ⇝× Π′   ⟹   ⟨ρ1, ρ2⟩, ρ⃗ ⇒ e | Π  ⇝×  ρ1, ρ2, ρ⃗ ⇒ e | Π′
    ρ ∈ {x, _};  Π ⇝× Π′   ⟹   ρ, ρ⃗ ⇒ e | Π  ⇝×  _, _, ρ⃗ ⇒ e | Π′

Π ⇝+ ΠL ‖ ΠR   Expand head sum patterns in Π into left ΠL and right ΠR sets

    · ⇝+ · ‖ ·
    Π ⇝+ ΠL ‖ ΠR   ⟹   inj1 ρ, ρ⃗ ⇒ e | Π  ⇝+  (ρ, ρ⃗ ⇒ e | ΠL) ‖ ΠR
    Π ⇝+ ΠL ‖ ΠR   ⟹   inj2 ρ, ρ⃗ ⇒ e | Π  ⇝+  ΠL ‖ (ρ, ρ⃗ ⇒ e | ΠR)
    ρ ∈ {x, _};  Π ⇝+ ΠL ‖ ΠR   ⟹   ρ, ρ⃗ ⇒ e | Π  ⇝+  (_, ρ⃗ ⇒ e | ΠL) ‖ (_, ρ⃗ ⇒ e | ΠR)

Π ⇝var Π′   Remove head variable and wildcard patterns from Π

    · ⇝var ·
    ρ ∈ {x, _};  Π ⇝var Π′   ⟹   ρ, ρ⃗ ⇒ e | Π  ⇝var  ρ⃗ ⇒ e | Π′

Π ⇝1 Π′   Remove head variable, wildcard, and unit patterns from Π

    · ⇝1 ·
    ρ ∈ {x, _, ()};  Π ⇝1 Π′   ⟹   ρ, ρ⃗ ⇒ e | Π  ⇝1  ρ⃗ ⇒ e | Π′

Figure 7. Match coverage
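The expansion operations of Figure 7 translate directly into list transformations. The following sketch of our own implements Π ⇝× and Π ⇝+ for branches represented as lists of patterns, together with a covers check for the propositional fragment (no index equations); function names are illustrative.

```python
# A branch is a list of patterns; Pi is a list of branches.
# Patterns: ('var', x), ('wild',), ('unit',), ('pair', p, q), ('inj', k, p)

WILDISH = ('var', 'wild')

def expand_pair(Pi):
    """The Pi ~x~> Pi' operation: expand head pair patterns."""
    out = []
    for br in Pi:
        p, rest = br[0], br[1:]
        if p[0] == 'pair':
            out.append([p[1], p[2]] + rest)
        elif p[0] in WILDISH:           # a variable or wildcard becomes two wildcards
            out.append([('wild',), ('wild',)] + rest)
    return out

def expand_sum(Pi):
    """The Pi ~+~> PiL || PiR operation: split on head injections."""
    left, right = [], []
    for br in Pi:
        p, rest = br[0], br[1:]
        if p[0] == 'inj':
            (left if p[1] == 1 else right).append([p[2]] + rest)
        elif p[0] in WILDISH:           # variables and wildcards go to both sides
            left.append([('wild',)] + rest)
            right.append([('wild',)] + rest)
    return left, right

def strip_head(Pi, allowed):
    """The ~var~> and ~1~> operations: drop an uninformative head pattern."""
    return [br[1:] for br in Pi if br[0][0] in allowed]

def covers(Pi, types):
    if not types:                                    # DeclCoversEmpty
        return any(len(br) == 0 for br in Pi)
    A = types[0]
    if A == ('Unit',):                               # DeclCovers1
        return covers(strip_head(Pi, ('var', 'wild', 'unit')), types[1:])
    if A[0] == 'Prod':                               # DeclCovers-prod
        return covers(expand_pair(Pi), [A[1], A[2]] + types[1:])
    if A[0] == 'Sum':                                # DeclCovers+
        l, r = expand_sum(Pi)
        return covers(l, [A[1]] + types[1:]) and covers(r, [A[2]] + types[1:])
    return covers(strip_head(Pi, WILDISH), types[1:])   # DeclCoversVar
```

On a sum type, the two branches inj1 _ and inj2 _ cover, while inj1 _ alone does not, because the right-hand expansion is the empty list of branches and DeclCoversEmpty then has no empty clause to find.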
The first premise of DeclSpineRecover is Ψ ⊢ s : A ! ≫ C ⧸!; its second premise (really, an infinite set of premises) quantifies over all derivations of Ψ ⊢ s : A ! ≫ C′ ⧸!. If C′ = C in all such derivations, then the ordinary spine rules erred on the side of caution: C is actually principal, so we can set q = ! in the conclusion of DeclSpineRecover. If some C′ ≠ C, then C is certainly not principal, and we must apply DeclSpinePass, which simply transitions from the ordinary judgment to the recovery judgment.

We need to stop and ask: is DeclSpineRecover well-founded? If the second premise quantified over the same judgment form as the conclusion, certainly not; hence our distinction between the ordinary and recovery judgments. But the derivations in the second premise may contain checking derivations (via Decl→Spine), and in turn synthesis derivations, and in turn (via Decl→E) the same recovery judgment! We are saved by the fact that Decl→Spine and Decl→E decompose their subject (the spine s or expression e)—any derivations of a recovery judgment lurking within the second premise of DeclSpineRecover must be for a smaller spine.

Pattern matching. The DeclCase rule checks that the scrutinee has a principal type, and then invokes the two main judgments for pattern matching. The Ψ ⊢ Π :: A⃗ ⇐ C p judgment checks that each branch in the list of branches Π is well-typed, and the Ψ ⊢ Π covers A⃗ judgment does coverage checking for the list of clauses. Both of these judgments take a vector A⃗ of pattern types to simplify the specification of coverage checking.

The Ψ ⊢ Π :: A⃗ ⇐ C p judgment (Figure 6) systematically checks each clause in Π: the DeclMatchEmpty rule succeeds on the empty list, and the DeclMatchSeq rule checks one clause and recurs on the remaining elements.

The remaining rules for sums, units, and products break down patterns left to right, one constructor at a time. Products also extend the pattern and type sequences, with DeclMatch× breaking down a pattern vector headed by a pair pattern ⟨ρ, ρ′⟩, ρ⃗ into ρ, ρ′, ρ⃗ (also turning the type sequence from A × B, C⃗ into A, B, C⃗). Once all the patterns are eliminated, the DeclMatchBase rule says that if the body typechecks, then the clause typechecks. For completeness, the variable and wildcard rules are both restricted so that any top-level existentials and equations are eliminated before discarding the type.

The existential elimination rule DeclMatch∃ unpacks an existential type, and DeclMatch∧ breaks apart a conjunction by eliminating the equality using unification. The DeclMatch⊥ rule says that if the equation is false then the branch always succeeds, because this case is impossible. The DeclMatchUnify rule unifies the two terms of an equation and applies the substitution before continuing to check typing. Together, these two rules implement the Schroeder-Heister equality elimination rule. Because our language of terms has only simple first-order terms, either unification will fail, or there is a most general unifier.

The Ψ ⊢ Π covers A⃗ judgment (Figure 7) checks whether a set of patterns covers all the possible cases.
Universal variables α, β, γ
^ γ
Existential variables α
^ , β,
^
Variables
u ::= α | α
^
Types
A, B, C ::= 1 | α | α
^
| ∀α : κ. A | ∃α : κ. A
|P ⊃A|A∧P
|A→B|A+B|A×B
Propositions
P, Q ::= t = t 0
Binary connectives
⊕ ::= → | + | ×
Terms/monotypes
t, τ, σ ::= zero | succ(t) | 1 | α | α
^
|τ→σ|τ+σ|τ×σ
we systematically deconstruct the sequence of types in the pattern clause, but this time, we need a set of auxiliary operations to expand the patterns. For example, the ×-expansion operation Π ⇝× Π′ takes every branch ⟨p, p′⟩, ρ⃗ ⇒ e and expands it to p, p′, ρ⃗ ⇒ e. To keep the sequence of patterns aligned with the sequence of types, we also expand variables and wildcard patterns into two wildcards: x, ρ⃗ ⇒ e becomes _, _, ρ⃗ ⇒ e. After expanding out all the pairs, DeclCovers× checks coverage by breaking down the pair type.
For sum types, we expand a list of branches into two lists, one for each injection. So the +-expansion Π ⇝+ ΠL ∥ ΠR will send all branches headed by inj1 p into ΠL and all branches headed by inj2 p into ΠR, with variables and wildcards being sent to both sides. Then DeclCovers+ can check the coverage of the left and right branches independently.
As with typing, DeclCovers∃ just unpacks the existential type. Likewise, DeclCoversEqBot and DeclCoversEq handle the two cases arising from equations. If an equation is unsatisfiable, coverage succeeds since there are no possible values of that type. If it is satisfiable, we apply the substitution and continue coverage checking.
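The pair and sum expansions translate directly into code. The following is our own illustrative sketch (not from the paper): branches carry a vector of patterns, products are expanded in place, sums split the branch list in two, and a type vector is covered when every path bottoms out in a nonempty branch list. Variable patterns are treated like wildcards, as in the text.

```python
# Branches are (patterns, body); patterns are ("wild",), ("pair", p, q),
# or ("inj", k, p) with k in {1, 2}. Types: ("unit",), ("prod", A, B),
# ("sum", A, B). A hypothetical encoding, chosen for brevity.

def expand_pairs(branches):
    """The x-expansion: <p, q>, rest => e becomes p, q, rest => e;
    wildcards become two wildcards."""
    out = []
    for pats, body in branches:
        head, rest = pats[0], pats[1:]
        if head[0] == "pair":
            out.append(([head[1], head[2]] + rest, body))
        elif head[0] == "wild":
            out.append(([("wild",), ("wild",)] + rest, body))
        else:
            raise ValueError("pattern does not match a product type")
    return out

def split_sums(branches):
    """The +-expansion: route inj1/inj2 branches left/right;
    wildcards are sent to both sides."""
    left, right = [], []
    for pats, body in branches:
        head, rest = pats[0], pats[1:]
        if head[0] == "inj":
            (left if head[1] == 1 else right).append(([head[2]] + rest, body))
        elif head[0] == "wild":
            left.append(([("wild",)] + rest, body))
            right.append(([("wild",)] + rest, body))
        else:
            raise ValueError("pattern does not match a sum type")
    return left, right

def covers(branches, types):
    """Do `branches` cover the type vector `types`?"""
    if not types:
        return bool(branches)          # some branch matches the empty vector
    head, rest = types[0], types[1:]
    if head[0] == "unit":              # peel the head type off wildcard branches
        peeled = [(p[1:], b) for p, b in branches if p[0][0] == "wild"]
        return covers(peeled, rest)
    if head[0] == "prod":              # like DeclCovers-x
        return covers(expand_pairs(branches), [head[1], head[2]] + rest)
    if head[0] == "sum":               # like DeclCovers-+
        l, r = split_sums(branches)
        return covers(l, [head[1]] + rest) and covers(r, [head[2]] + rest)
    raise ValueError("unsupported type")
```

For the type (1 + 1) × 1, the two branches ⟨inj1 _, _⟩ and ⟨inj2 _, _⟩ cover, but either one alone does not, mirroring how DeclCovers+ demands both injection lists be covered.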
Complete contexts      Ω ::= · | Ω, α : κ | Ω, x : A p
                           | Ω, α̂ : κ = τ | Ω, α = t | Ω, ►u
Possibly-inconsistent contexts
                       ∆⊥ ::= ∆ | ⊥

Figure 8. Syntax of types, contexts, and other objects in the algorithmic system

4. Algorithmic Typing
Our algorithmic system mimics our declarative rules as closely as
possible, with one key difference: whenever the declarative system
would make a guess, we introduce an existential variable into the
context (written with a hat: α̂). As typechecking proceeds, we refine the values of the existential variables to reflect our increasing knowledge. This means that each of the declarative typing judgments has a corresponding algorithmic judgment taking both an input and an output context: the type checking judgment Γ ⊢ e ⇐ A p ⊣ ∆ now takes an input context Γ and yields an output context ∆ reflecting our increased knowledge of what the types have to be. A
judgment Γ −→ ∆, explained in Section 4.4, formalizes the notion
of increasing knowledge.
These judgments are documented in Figure 13, which has a
dependency graph of the algorithmic judgments. Each declarative
judgment has a corresponding algorithmic judgment, but the algorithmic system adds a few more judgments, such as type equivalence checking Γ ⊢ A ≡ B ⊣ ∆ and variable instantiation Γ ⊢ α̂ := t : κ ⊣ ∆. Declaratively, these judgments correspond to uses of reflexivity axioms; algorithmically, they correspond to the process of solving existential variables to equate terms.
We give the algorithmic typing rules in Figure 12, but rules for
most other judgments are in the supplemental appendix.
4.1 Syntax

Contexts   Γ, ∆, Θ ::= · | Γ, u : κ | Γ, x : A p
                     | Γ, α̂ : κ = τ | Γ, α = t | Γ, ►u
[Γ]1              = 1
[Γ]α              = [Γ]τ   when (α = τ) ∈ Γ
                  = α      otherwise
[Γ[α̂ : κ = τ]]α̂   = [Γ]τ
[Γ[α̂ : κ]]α̂       = α̂
[Γ](P ⊃ A)        = ([Γ]P) ⊃ ([Γ]A)
[Γ](A ∧ P)        = ([Γ]A) ∧ ([Γ]P)
[Γ](A ⊕ B)        = ([Γ]A) ⊕ ([Γ]B)
[Γ](∀α : κ. A)    = ∀α : κ. [Γ]A
[Γ](∃α : κ. A)    = ∃α : κ. [Γ]A

Figure 9. Applying a context, as a substitution, to a type
An equation α = τ must appear to the right of the universal
variable’s declaration α : κ.
Complete contexts. A complete algorithmic context, denoted by
Ω, is an algorithmic context with no unsolved existential variable
declarations.
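Over a simplified list encoding of contexts (a hypothetical encoding of ours, where an existential declaration carries its solution or None), completeness is a one-line check:

```python
# Declarations: ("evar", name, solution_or_None), ("uvar", name), etc.
# A context is complete (an Omega) when no existential is unsolved.
def is_complete(ctx):
    return all(d[2] is not None for d in ctx if d[0] == "evar")
```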
Possibly-inconsistent contexts. Assuming an equality can yield
inconsistency: for example, zero = succ(zero). We write ∆⊥ for
either a valid algorithmic context ∆ or inconsistency ⊥.
Expression language. The expression language is the same as in
the declarative system.
Existential variables. The algorithmic system adds existential variables α̂, β̂, γ̂ to types and terms/monotypes (Figure 8). We use the same metavariables, e.g. A, B, C for types. We write u for either a universal variable α or an existential variable α̂.
4.2 Context substitution

An algorithmic context can be viewed as a substitution for its solved existential variables. For example, α̂ = 1, β̂ = α̂ → 1 can be applied as if it were the substitution 1/α̂, (α̂ → 1)/β̂ (applied right to left), or the simultaneous substitution 1/α̂, (1 → 1)/β̂. We write [Γ]A for Γ applied as a substitution to type A; this operation is defined in Figure 9.
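The equations of Figure 9 can be sketched as a short recursive function. This is our own illustration (tuple-encoded types, kinds omitted), not the paper's implementation:

```python
# Types: ("unit",), ("evar", name), ("arrow"/"sum"/"prod", A, B).
# A context is represented as a dict from solved evar names to monotypes.

def apply_ctx(ctx, ty):
    """Apply ctx as a substitution to ty, chasing solved existentials."""
    tag = ty[0]
    if tag == "unit":
        return ty                                 # [G]1 = 1
    if tag == "evar":
        if ty[1] in ctx:
            return apply_ctx(ctx, ctx[ty[1]])     # solved: chase the solution
        return ty                                 # unsolved: left alone
    if tag in ("arrow", "sum", "prod"):           # binary connectives
        return (tag, apply_ctx(ctx, ty[1]), apply_ctx(ctx, ty[2]))
    raise ValueError(f"unknown type form: {tag}")

# The context from the text, a-hat = 1 and b-hat = a-hat -> 1:
ctx = {"a": ("unit",), "b": ("arrow", ("evar", "a"), ("unit",))}
print(apply_ctx(ctx, ("evar", "b")))   # ('arrow', ('unit',), ('unit',))
```

Chasing solutions recursively is what makes the "right to left" and "simultaneous" readings of the substitution agree.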
Applying a complete context to a type A (provided it is well-formed: Ω ⊢ A type) yields a type [Ω]A with no existentials.
Such a type is well-formed under the declarative context obtained
by dropping all the existential declarations and applying Ω to
declarations x : A (to yield x : [Ω]A). We can think of this context
as the result of applying Ω to itself: [Ω]Ω.
More generally, we can apply Ω to any context Γ that it extends.
This operation of context application [Ω]Γ is given in Figure 10.
Contexts. An algorithmic context Γ is a sequence that, like a
declarative context, may contain universal variable declarations
α : κ and expression variable typings x : A p. However, it may
also have:
• unsolved existential variable declarations α̂ : κ (included in the Γ, u : κ production);
• solved existential variable declarations α̂ : κ = τ;
• equations over universal variables α = τ; and
• markers ►u.
[·]·                        = ·
[Ω, x : A p](Γ, x : AΓ p)   = [Ω]Γ, x : [Ω]A p   if [Ω]A = [Ω]AΓ
[Ω, α : κ](Γ, α : κ)        = [Ω]Γ, α : κ
[Ω, ►u](Γ, ►u)              = [Ω]Γ
[Ω, α = t](Γ, α = t′)       = [[Ω]t/α]([Ω]Γ)     if [Ω]t = [Ω]t′
[Ω, α̂ : κ = t]Γ             = [Ω]Γ′              when Γ = (Γ′, α̂ : κ = t′)
                            = [Ω]Γ′              when Γ = (Γ′, α̂ : κ)
                            = [Ω]Γ               otherwise

Figure 10. Applying a complete context Ω to a context

The application [Ω]Γ is defined if and only if Γ −→ Ω (context extension; see Section 4.4), and applying Ω to any such Γ yields the same declarative context [Ω]Ω.
Complete contexts are essential for stating and proving soundness and completeness, but are not explicitly distinguished in any rules.

type checking          Γ ⊢ e ⇐ A p ⊣ ∆
type synthesis         Γ ⊢ e ⇒ B p ⊣ ∆
spine typing           Γ ⊢ s : A p ≫ B q ⊣ ∆
principality-recovering spine typing
                       Γ ⊢ s : A p ≫ B ⌈q⌉ ⊣ ∆
pattern matching       Γ ⊢ Π :: A⃗ ⇐ C p ⊣ ∆
coverage               Γ ⊢ Π covers A⃗
equality elimination   Γ / s ≐ t : κ ⊣ ∆⊥
check equation         Γ ⊢ t1 ≐ t2 : κ ⊣ ∆
check prop.            Γ ⊢ P true ⊣ ∆
equiv. props.          Γ ⊢ P ≡ Q ⊣ ∆
equiv. types           Γ ⊢ A ≡ B ⊣ ∆
subtyping              Γ ⊢ A <:± B ⊣ ∆
instantiation          Γ ⊢ α̂ := t : κ ⊣ ∆

Figure 13. Dependency structure of the algorithmic judgments

4.3 Hole notation
Since we will manipulate contexts not only by appending declarations (as in the declarative system) but by inserting and replacing
declarations in the middle, a notation for contexts with a hole is
useful: Γ = Γ0[Θ] means Γ has the form (ΓL, Θ, ΓR). For example, if Γ = Γ0[β̂] = (α̂, β̂, x : β̂), then Γ0[β̂ = α̂] = (α̂, β̂ = α̂, x : β̂).
Since this notation is concise, we use it even in rules that do not
replace declarations, such as the rules for type well-formedness.
Occasionally, we also need contexts with two ordered holes: Γ = Γ0[Θ1][Θ2], which means Γ has the form (ΓL, Θ1, ΓM, Θ2, ΓR).
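In an implementation, the hole notation corresponds to splitting a list-encoded context at a focused declaration and replacing it. A small sketch of ours, using a hypothetical declaration encoding:

```python
# Declarations: ("evar", name, solution_or_None), ("var", name, type), ...
# The declared name is always at index 1.

def replace_decl(ctx, name, new_decl):
    """Realize Gamma = Gamma0[Theta]: split at `name`'s declaration
    and put `new_decl` in the hole."""
    i = next(i for i, d in enumerate(ctx) if d[1] == name)
    return ctx[:i] + [new_decl] + ctx[i + 1:]

# The example from the text: Gamma0[b-hat] = (a-hat, b-hat, x : b-hat)
# becomes Gamma0[b-hat = a-hat] when b-hat is solved to a-hat.
gamma = [("evar", "a", None), ("evar", "b", None), ("var", "x", ("evar", "b"))]
```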
While the solution of β̂ in Ω is different, in the sense that Ω contains β̂ : ? = 1 while ∆ contains β̂ : ? = α̂, applying Ω to the two solutions gives the same thing: applying Ω to ∆'s solution of β̂ gives [Ω]α̂ = [Ω]1 = 1, while applying Ω to Ω's own solution for β̂ also gives 1, because [Ω]1 = 1.
Extension is quite rigid, however, in two senses. First, if a declaration appears in Γ, it appears in all extensions of Γ. Second, extension preserves order. For example, if β̂ is declared after α̂ in Γ, then β̂ will also be declared after α̂ in every extension of Γ. This holds for every variety of declaration, including equations of universal variables. This rigidity aids in enforcing type variable scoping and dependencies, which are nontrivial in a setting with higher-rank polymorphism.
This combination of rigidity (in demanding that the order of
declarations be preserved) with flexibility (in how existential type
variable solutions are expressed) manages to satisfy scoping and
dependency relations and give enough room to manoeuvre in the
algorithm and metatheory.
4.4 The context extension relation
A context Γ is extended by a context ∆, written Γ −→ ∆, if ∆ has
at least as much information as Γ , while conforming to the same
declarative context—that is, [Ω]Γ = [Ω]∆ for some Ω.
We can also interpret Γ −→ ∆ as saying that Γ is entailed by ∆: all positive information derivable from Γ (say, that existential variable α̂ is in scope) can also be derived from ∆ (which may have more information, say, that α̂ is equal to a particular type).
The rules deriving the context extension judgment (Figure 11) say that the empty context extends the empty context (−→Id); a term variable typing with A′ extends one with A if applying the extending context ∆ to A and A′ yields the same type (−→Var); universal variable declarations must match (−→Uvar); equations on universal variables must match (−→Eqn); scope markers must match (−→Marker); and, existential variables may:
• be unsolved in both contexts (−→Unsolved),
• be solved in both contexts, if applying the extending context ∆ makes the solutions equal (−→Solved),
• get solved by the extending context (−→Solve), or
• be added by the extending context, either without a solution (−→Add) or with a solution (−→AddSolved).

4.5 Determinacy

Our algorithmic judgments have the nice property that, given appropriate inputs, only one set of outputs is derivable. In addition to being nice, we use this property in the proof of soundness, for spine judgments:

Theorem 1 (Determinacy of Typing). Given Γ, s, A, and p such that Γ ⊢ s : A p ≫ C1 q1 ⊣ ∆1 and Γ ⊢ s : A p ≫ C2 q2 ⊣ ∆2, it is the case that C1 = C2 and q1 = q2 and ∆1 = ∆2.
Extension allows solutions to change, if information is preserved or increased. The extension

α̂ : ?, β̂ : ? = α̂ −→ α̂ : ? = 1, β̂ : ? = α̂

directly increases information about α̂, and indirectly increases information about β̂. Perhaps more interestingly, the extension

∆ = (α̂ : ? = 1, β̂ : ? = α̂) −→ (α̂ : ? = 1, β̂ : ? = 1) = Ω

also holds.

5. Soundness

We show that the algorithmic system is sound with respect to the declarative system.

5.1 Equating lemmas

For several auxiliary judgment forms, soundness is a matter of showing that, given two algorithmic terms, their declarative versions are equal. For example, for the instantiation judgment we have:
Γ −→ ∆   (Γ is extended by ∆)

−→Id:        · −→ ·
−→Var:       if Γ −→ ∆ and [∆]A = [∆]A′, then Γ, x : A p −→ ∆, x : A′ p
−→Uvar:      if Γ −→ ∆, then Γ, α : κ −→ ∆, α : κ
−→Eqn:       if Γ −→ ∆ and [∆]t = [∆]t′, then Γ, α = t −→ ∆, α = t′
−→Unsolved:  if Γ −→ ∆, then Γ, α̂ : κ −→ ∆, α̂ : κ
−→Solved:    if Γ −→ ∆ and [∆]t = [∆]t′, then Γ, α̂ : κ = t −→ ∆, α̂ : κ = t′
−→Solve:     if Γ −→ ∆, then Γ, β̂ : κ −→ ∆, β̂ : κ = t
−→Add:       if Γ −→ ∆, then Γ −→ ∆, α̂ : κ
−→AddSolved: if Γ −→ ∆, then Γ −→ ∆, α̂ : κ = t
−→Marker:    if Γ −→ ∆, then Γ, ►u −→ ∆, ►u

Figure 11. Context extension
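To make the rules of Figure 11 concrete, here is a small decision procedure we sketched (our code, not the paper's): it walks both list-encoded contexts from the right, matching declarations rule by rule, and compares types and terms only after applying ∆'s solutions.

```python
def extends(gamma, delta):
    """Decide the extension Gamma --> Delta. Declarations (simplified,
    kinds omitted): ("uvar", a), ("marker", u), ("eqn", a, t),
    ("var", x, A), ("evar", a, solution_or_None)."""
    solutions = {d[1]: d[2] for d in delta if d[0] == "evar" and d[2] is not None}
    def sub(t):  # apply Delta as a substitution, as in Figure 9
        if t[0] == "evar" and t[1] in solutions:
            return sub(solutions[t[1]])
        return (t[0],) + tuple(sub(u) if isinstance(u, tuple) else u
                               for u in t[1:])
    def go(g, d):
        if not d:
            return not g                                     # -->Id
        head, gh = d[-1], (g[-1] if g else None)
        if head[0] == "evar":
            if gh and gh[0] == "evar" and gh[1] == head[1]:
                ok = (gh[2] is None                          # -->Unsolved/-->Solve
                      or (head[2] is not None
                          and sub(gh[2]) == sub(head[2])))   # -->Solved
                return ok and go(g[:-1], d[:-1])
            return go(g, d[:-1])                             # -->Add/-->AddSolved
        if gh is None or gh[0] != head[0] or gh[1] != head[1]:
            return False
        if head[0] in ("eqn", "var") and sub(gh[2]) != sub(head[2]):
            return False                                     # -->Eqn/-->Var
        return go(g[:-1], d[:-1])                            # -->Uvar/-->Marker
    return go(list(gamma), list(delta))
```

On the examples from the text, (α̂, β̂ = α̂) is extended by (α̂ = 1, β̂ = α̂) and by (α̂ = 1, β̂ = 1); swapping the order of two declarations is rejected, reflecting the rigidity discussed above.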
Γ ⊢ e ⇐ A p ⊣ ∆         Under input context Γ, expression e checks against input type A, with output context ∆
Γ ⊢ e ⇒ A p ⊣ ∆         Under input context Γ, expression e synthesizes output type A, with output context ∆
Γ ⊢ s : A p ≫ C q ⊣ ∆   Under input context Γ, passing spine s to a function of type A synthesizes type C
Γ ⊢ s : A p ≫ C ⌈q⌉ ⊣ ∆  As above; in the ⌈q⌉ form, recover principality in q if possible
Var:          (x : A p) ∈ Γ  ⟹  Γ ⊢ x ⇒ [Γ]A p ⊣ Γ
Sub:          e not a case;  Γ ⊢ e ⇒ A q ⊣ Θ;  Θ ⊢ A <:pol(B) B ⊣ ∆  ⟹  Γ ⊢ e ⇐ B p ⊣ ∆
Anno:         Γ ⊢ A ! type;  Γ ⊢ e ⇐ [Γ]A ! ⊣ ∆  ⟹  Γ ⊢ (e : A) ⇒ [∆]A ! ⊣ ∆
1I:           Γ ⊢ () ⇐ 1 p ⊣ Γ
1Iα̂:          Γ[α̂ : ?] ⊢ () ⇐ α̂ ⊣ Γ[α̂ : ? = 1]
∀I:           v chk-I;  Γ, α : κ ⊢ v ⇐ A p ⊣ ∆, α : κ, Θ  ⟹  Γ ⊢ v ⇐ ∀α : κ. A p ⊣ ∆
∀Spine:       Γ, α̂ : κ ⊢ e ·· s : [α̂/α]A ̸! ≫ C q ⊣ ∆  ⟹  Γ ⊢ e ·· s : ∀α : κ. A p ≫ C q ⊣ ∆
∧I:           Γ ⊢ P true ⊣ Θ;  Θ ⊢ e ⇐ [Θ]A p ⊣ ∆  ⟹  Γ ⊢ e ⇐ A ∧ P p ⊣ ∆
⊃I:           v chk-I;  Γ, ►P / P ⊣ Θ;  Θ ⊢ v ⇐ [Θ]A ! ⊣ ∆, ►P, ∆′  ⟹  Γ ⊢ v ⇐ P ⊃ A ! ⊣ ∆
⊃I⊥:          v chk-I;  Γ, ►P / P ⊣ ⊥  ⟹  Γ ⊢ v ⇐ P ⊃ A ! ⊣ Γ
⊃Spine:       Γ ⊢ P true ⊣ Θ;  Θ ⊢ e ·· s : [Θ]A p ≫ C q ⊣ ∆  ⟹  Γ ⊢ e ·· s : P ⊃ A p ≫ C q ⊣ ∆
→I:           Γ, x : A p ⊢ e ⇐ B p ⊣ ∆, x : A p, Θ  ⟹  Γ ⊢ λx. e ⇐ A → B p ⊣ ∆
→Iα̂:          Γ[α̂1 : ?, α̂2 : ?, α̂ : ? = α̂1 → α̂2], x : α̂1 ⊢ e ⇐ α̂2 ⊣ ∆, x : α̂1, ∆′  ⟹  Γ[α̂ : ?] ⊢ λx. e ⇐ α̂ ⊣ ∆
→E:           Γ ⊢ e ⇒ A p ⊣ Θ;  Θ ⊢ s : A p ≫ C ⌈q⌉ ⊣ ∆  ⟹  Γ ⊢ e s ⇒ C q ⊣ ∆
EmptySpine:   Γ ⊢ · : A p ≫ A p ⊣ Γ
→Spine:       Γ ⊢ e ⇐ A p ⊣ Θ;  Θ ⊢ s : [Θ]B p ≫ C q ⊣ ∆  ⟹  Γ ⊢ e ·· s : A → B p ≫ C q ⊣ ∆
α̂Spine:       Γ[α̂2 : ?, α̂1 : ?, α̂ : ? = α̂1 → α̂2] ⊢ e ·· s : (α̂1 → α̂2) ̸! ≫ C ̸! ⊣ ∆  ⟹  Γ[α̂ : ?] ⊢ e ·· s : α̂ ̸! ≫ C ̸! ⊣ ∆
SpineRecover: Γ ⊢ s : A ! ≫ C ̸! ⊣ ∆;  FEV(C) = ∅  ⟹  Γ ⊢ s : A ! ≫ C ⌈!⌉ ⊣ ∆
SpinePass:    Γ ⊢ s : A p ≫ C q ⊣ ∆;  (p = ̸!) or (q = !) or (FEV(C) ≠ ∅)  ⟹  Γ ⊢ s : A p ≫ C ⌈q⌉ ⊣ ∆
+Ik:          Γ ⊢ e ⇐ Ak p ⊣ ∆  ⟹  Γ ⊢ injk e ⇐ A1 + A2 p ⊣ ∆
+Iα̂k:         Γ[α̂1 : ?, α̂2 : ?, α̂ : ? = α̂1 + α̂2] ⊢ e ⇐ α̂k ⊣ ∆  ⟹  Γ[α̂ : ?] ⊢ injk e ⇐ α̂ ⊣ ∆
×I:           Γ ⊢ e1 ⇐ A1 p ⊣ Θ;  Θ ⊢ e2 ⇐ [Θ]A2 p ⊣ ∆  ⟹  Γ ⊢ ⟨e1, e2⟩ ⇐ A1 × A2 p ⊣ ∆
×Iα̂:          Γ[α̂2 : ?, α̂1 : ?, α̂ : ? = α̂1 × α̂2] ⊢ e1 ⇐ α̂1 ⊣ Θ;  Θ ⊢ e2 ⇐ [Θ]α̂2 ⊣ ∆  ⟹  Γ[α̂ : ?] ⊢ ⟨e1, e2⟩ ⇐ α̂ ⊣ ∆
Case:         Γ ⊢ e ⇒ A ! ⊣ Θ;  Θ ⊢ Π :: [Θ]A ⇐ [Θ]C p ⊣ ∆;  ∆ ⊢ Π covers [∆]A  ⟹  Γ ⊢ case(e, Π) ⇐ C p ⊣ ∆

Figure 12. Algorithmic typing
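As a toy illustration of the input/output-context discipline in Figure 12 (our own sketch, not the paper's algorithm: it has no declaration ordering, kinds, quantifiers, spines, or subtyping), checking a term against an unsolved existential solves that variable, in the spirit of 1Iα̂ and →Iα̂:

```python
fresh = iter(range(10**6))          # supply of new existential names

def apply(ctx, ty):                 # ctx: dict of solved existentials
    if ty[0] == "evar" and ty[1] in ctx:
        return apply(ctx, ctx[ty[1]])
    if ty[0] == "arrow":
        return ("arrow", apply(ctx, ty[1]), apply(ctx, ty[2]))
    return ty

def check(ctx, env, e, ty):
    """Check e against ty; return the (possibly enlarged) output context."""
    ty = apply(ctx, ty)
    if ty[0] == "evar":
        if e == ("unit",):                        # like 1I-alpha-hat: solve to 1
            return {**ctx, ty[1]: ("unit",)}
        if e[0] == "lam":                         # like ->I-alpha-hat: solve to a1 -> a2
            a1, a2 = f"e{next(fresh)}", f"e{next(fresh)}"
            ctx = {**ctx, ty[1]: ("arrow", ("evar", a1), ("evar", a2))}
            return check(ctx, {**env, e[1]: ("evar", a1)}, e[2], ("evar", a2))
    if e == ("unit",) and ty == ("unit",):        # like 1I
        return ctx
    if e[0] == "lam" and ty[0] == "arrow":        # like ->I
        return check(ctx, {**env, e[1]: ty[1]}, e[2], ty[2])
    a, ctx = synth(ctx, env, e)                   # like Sub, with plain equality
    a, ty = apply(ctx, a), apply(ctx, ty)         # (no occurs check in this toy)
    if a == ty:
        return ctx
    if ty[0] == "evar":
        return {**ctx, ty[1]: a}
    if a[0] == "evar":
        return {**ctx, a[1]: ty}
    raise TypeError(f"cannot check {e} against {ty}")

def synth(ctx, env, e):
    if e[0] == "var":                             # like Var
        return apply(ctx, env[e[1]]), ctx
    if e[0] == "anno":                            # like Anno
        return e[2], check(ctx, env, e[1], e[2])
    raise TypeError(f"cannot synthesize a type for {e}")
```

Checking λx. x against an unsolved α̂ first solves α̂ := β̂1 → β̂2 and then equates β̂2 with β̂1, mirroring how the algorithmic rules refine existentials in the output context instead of guessing.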
The SpineRecover case is interesting: we do finish by applying DeclSpineRecover, but since DeclSpineRecover contains a premise that quantifies over all declarative derivations of a certain form, we must appeal to completeness! Consequently, soundness and completeness are really two parts of one theorem.
These parts are mutually recursive: later, we'll see that the DeclSpineRecover case of completeness must appeal to soundness (to show that the algorithmic type has no free existential variables).
We cannot induct on the given derivation alone, because the derivations in the "for all" part of DeclSpineRecover are not subderivations. So we need a more involved induction measure that can make the leaps between soundness and completeness: lexicographic order with (1) the size of the subject term, (2) the judgment form, with ordinary spine judgments considered smaller than recovering spine judgments, and (3) the height of the derivation:

⟨ e/s/Π,  ordinary spine judgment < recovering spine judgment,  height(D) ⟩
Lemma (Soundness of Instantiation). If Γ ⊢ α̂ := τ : κ ⊣ ∆ and α̂ ∉ FV([Γ]τ) and [Γ]τ = τ and ∆ −→ Ω, then [Ω]α̂ = [Ω]τ.

We have similar lemmas for term equality (Γ ⊢ σ ≐ t : κ ⊣ ∆), propositional equivalence (Γ ⊢ P ≡ Q ⊣ ∆), and type equivalence (Γ ⊢ A ≡ B ⊣ ∆).
5.2 Elimination lemmas

Our eliminating judgments incorporate assumptions into the context Γ. We show that the algorithmic rules for these judgments just append equations over universal variables:

Lemma (Soundness of Equality Elimination). If [Γ]σ = σ and [Γ]t = t and Γ ⊢ σ : κ and Γ ⊢ t : κ and FEV(σ) ∪ FEV(t) = ∅, then:
(1) If Γ / σ ≐ t : κ ⊣ ∆, then ∆ = (Γ, Θ) where Θ = (α1 = t1, . . . , αn = tn) and, for all Ω such that Γ −→ Ω and all t′ such that Ω ⊢ t′ : κ′, we have [Ω, Θ]t′ = [θ][Ω]t′ where θ = mgu(σ, t).
(2) If Γ / σ ≐ t : κ ⊣ ⊥, then no most general unifier exists.
Proof sketch (SpineRecover case). By i.h., [Ω]Γ ⊢ [Ω]s : [Ω]A ! ≫ [Ω]C q. Our goal is to apply DeclSpineRecover, which requires that we show that for all C′ such that [Ω]Θ ⊢ s : [Ω]A ! ≫ C′ ̸!, we have C′ = [Ω]C. Suppose we have such a C′. By completeness (Theorem 3), Γ ⊢ s : [Γ]A ! ≫ C′′ q ⊣ ∆′′ where ∆′′ −→ Ω′′. We already have (as a subderivation) Γ ⊢ s : A ! ≫ C ̸! ⊣ ∆, so by determinacy, C′′ = C and q = ̸! and ∆′′ = ∆. With the help of lemmas about context application, we can show C′ = [Ω′′]C′′ = [Ω′′]C = [Ω]C.
Using completeness (really, using the i.h.) is justified because our measure considers a non-principality-restoring judgment to be smaller.
5.3 Direct lemmas

The last lemmas for soundness move directly from an algorithmic judgment to the corresponding declarative judgment.

Lemma (Soundness of Checkprop). If Γ ⊢ P true ⊣ ∆ and ∆ −→ Ω, then [Ω]∆ ⊢ [Ω]P true.

Lemma (Soundness of Algorithmic Subtyping). If [Γ]A = A and [Γ]B = B and Γ ⊢ A type and Γ ⊢ B type and ∆ −→ Ω and Γ ⊢ A <:± B ⊣ ∆, then [Ω]∆ ⊢ [Ω]A ≤± [Ω]B.

Lemma (Soundness of Match Coverage). If Γ ⊢ Π covers A⃗ and Γ −→ Ω and Γ ⊢ A⃗ ! types and [Γ]A⃗ = A⃗, then [Ω]Γ ⊢ Π covers [Ω]A⃗.
6. Completeness
We show that the algorithmic system is complete with respect to the declarative system. As with soundness, we need to show completeness of the auxiliary algorithmic judgments. We omit the full statements of these lemmas but, as an example, if [Ω]α̂ = [Ω]τ then Γ ⊢ α̂ := τ : κ ⊣ ∆, under certain conditions (including that α̂ ∉ FV(τ)).
5.4 Soundness of typing
With lemmas for all the auxiliary judgments in hand, we can move
on to the main soundness result. It has six mutually-recursive
parts, one for each of the checking, synthesis, spine, and match
judgments—including the principality-recovering spine judgment
and the assumption-adding match judgment.
Theorem 2 (Soundness of Algorithmic Typing).
Given ∆ −→ Ω:
6.1 Separation
To show completeness, we will need to show that wherever the declarative rule DeclSpineRecover is applied, we can apply the algorithmic rule SpineRecover. More concretely, the semantic notion of principality (that no other type can be given) must entail the syntactic notion that a type has no free existential variables.
The principality-recovering rules are potentially applicable when we start with a principal type A ! but produce C ̸!, with Decl∀Spine changing ! to ̸!. The proof of completeness (Thm. 3) will use the "for all" part of DeclSpineRecover, which quantifies over all types produced by the spine rules under a given declarative context [Ω]Γ. By i.h. we get an algorithmic spine judgment Γ ⊢ s : A′ ! ≫ C′ ̸! ⊣ ∆. Since A′ is principal, any unsolved existentials in C′ must have been introduced within this derivation; they can't be in Γ already. Thus, we might have α̂ : ? ⊢ s : A′ ! ≫ β̂ ̸! ⊣ α̂ : ?, β̂ : ?, where a Decl∀Spine subderivation introduced β̂, but α̂ can't appear in C′. We also can't equate α̂ and β̂ in ∆, which would be morally equivalent to C′ = α̂. Knowing that unsolved existentials in C′ are "new" and independent from those in Γ means we can argue that, if there were an unsolved existential in C′, it would correspond to an unforced choice in a Decl∀Spine subderivation, invalidating the "for all" part of DeclSpineRecover. Formalizing claims like "must have been introduced" requires several definitions.
(i) If Γ ⊢ e ⇐ A p ⊣ ∆ and Γ ⊢ A p type, then [Ω]∆ ⊢ [Ω]e ⇐ [Ω]A p.
(ii) If Γ ⊢ e ⇒ A p ⊣ ∆, then [Ω]∆ ⊢ [Ω]e ⇒ [Ω]A p.
(iii) If Γ ⊢ s : A p ≫ B q ⊣ ∆ and Γ ⊢ A p type, then [Ω]∆ ⊢ [Ω]s : [Ω]A p ≫ [Ω]B q.
(iv) If Γ ⊢ s : A p ≫ B ⌈q⌉ ⊣ ∆ and Γ ⊢ A p type, then [Ω]∆ ⊢ [Ω]s : [Ω]A p ≫ [Ω]B ⌈q⌉.
(v) If Γ ⊢ Π :: A⃗ ⇐ C p ⊣ ∆ and Γ ⊢ A⃗ ! types and [Γ]A⃗ = A⃗ and Γ ⊢ C p type, then [Ω]∆ ⊢ [Ω]Π :: [Ω]A⃗ ⇐ [Ω]C p.
(vi) If Γ / P ⊢ Π :: A⃗ ⇐ C p ⊣ ∆ and Γ ⊢ P prop and FEV(P) = ∅ and [Γ]P = P and Γ ⊢ A⃗ ! types and Γ ⊢ C p type, then [Ω]∆ / [Ω]P ⊢ [Ω]Π :: [Ω]A⃗ ⇐ [Ω]C p.
Much of this proof is simply "turning the crank": applying the induction hypothesis to each premise, yielding derivations of corresponding declarative judgments (with Ω applied to everything in sight), and applying the corresponding declarative rule; for example, in the Sub case we finish the proof by applying DeclSub.
Definition 1 (Separation). An algorithmic context Γ is separable into ΓL ∗ ΓR if (1) Γ = (ΓL, ΓR) and (2) for all (α̂ : κ = τ) ∈ ΓR it is the case that FEV(τ) ⊆ dom(ΓR).

If Γ is separable into ΓL ∗ ΓR, then ΓR is self-contained in the sense that all existential variables declared in ΓR have solutions whose existential variables are themselves declared in ΓR. Every context Γ is separable into · ∗ Γ and into Γ ∗ ·.
Definition 2 (Separation-Preserving Extension). The separated context ΓL ∗ ΓR extends to ∆L ∗ ∆R, written
Proof sketch (DeclSpineRecover case). By i.h., Γ ⊢ s : [Γ]A ! ≫ C′ ̸! ⊣ ∆ where ∆ −→ Ω′ and Ω −→ Ω′ and dom(∆) = dom(Ω′) and C = [Ω′]C′.
To apply SpineRecover, we need to show FEV([∆]C′) = ∅. Suppose, for a contradiction, that FEV([∆]C′) ≠ ∅. Construct a variant of Ω′ called Ω2 that has a different solution for some α̂ ∈ FEV([∆]C′). By soundness (Thm. 2), [Ω2]Γ ⊢ [Ω2]s : [Ω2]A ! ≫ [Ω2]C′ ̸!. Using the separation lemma with the trivial separation Γ = (Γ ∗ ·), we get ∆ = (∆L ∗ ∆R) and (Γ ∗ ·) −→∗ (∆L ∗ ∆R) and FEV(C′) ⊆ dom(∆R). That is, all existentials in C′ were introduced within the derivation of the (algorithmic) spine judgment. Thus, applying Ω2 to things gives the same result as Ω, except for C′, giving
(ΓL ∗ ΓR) −→∗ (∆L ∗ ∆R), if (ΓL, ΓR) −→ (∆L, ∆R) and dom(ΓL) ⊆ dom(∆L) and dom(ΓR) ⊆ dom(∆R).
Separation-preserving extension says that variables from one side of ∗ haven't "jumped" to the other side. Thus, ∆L may add existential variables to ΓL, and ∆R may add existential variables to ΓR, but no variable from ΓL ends up in ∆R and no variable from ΓR ends up in ∆L. It is necessary to write (ΓL ∗ ΓR) −→∗ (∆L ∗ ∆R) rather than (ΓL ∗ ΓR) −→ (∆L ∗ ∆R), because only −→∗ includes the domain conditions. For example, (α̂ ∗ β̂) −→ ((α̂, β̂ = α̂) ∗ ·), but β̂ has jumped to the left of ∗ in the context (α̂, β̂ = α̂) ∗ ·.
We prove many lemmas about separation, but use only one of
them in the subsequent development (in the DeclSpineRecover
case of typing completeness), and then only the part for spines. It
says that if we have a spine whose type A mentions only variables
in ΓR , then the output context ∆ extends Γ and preserves separation,
and the output type C mentions only variables in ∆R :
[Ω]Γ ⊢ [Ω]s : [Ω]A ! ≫ [Ω2]C′ ̸!

Now instantiate the "for all C2" premise with C2 = [Ω2]C′, giving C = [Ω2]C′. But we chose Ω2 to have a different solution for α̂ ∈ FEV(C′), so we have C ≠ [Ω2]C′: contradiction. Therefore FEV([∆]C′) = ∅, so we can apply SpineRecover.
Lemma (Separation, Main). If ΓL ∗ ΓR ⊢ s : A p ≫ C q ⊣ ∆ or ΓL ∗ ΓR ⊢ s : A p ≫ C ⌈q⌉ ⊣ ∆, and ΓL ∗ ΓR ⊢ A p type and FEV(A) ⊆ dom(ΓR), then ∆ = (∆L ∗ ∆R) and (ΓL ∗ ΓR) −→∗ (∆L ∗ ∆R) and FEV(C) ⊆ dom(∆R).
7. Related Work
A staggering amount of work has been done on GADTs and indexed types, and for space reasons we cannot offer a comprehensive survey of the literature. So we compare more deeply to fewer
papers, to communicate our understanding of the design space.
Proof theory and type theory. As described in Section 1, there
are two logical accounts of equality: the identity type of Martin-Löf, and the equality type of Schroeder-Heister (1994) and Girard (1992). The Girard/Schroeder-Heister equality has a more
direct connection to pattern matching, which is why we make
use of it. Coquand (1996) pioneered the study of pattern matching in dependent type theory. One perhaps surprising feature of
Coquand’s pattern-matching syntax is that it is strictly stronger
than Martin-Löf’s eliminators. His rules can derive Axiom K
(uniqueness of identity proofs) as well as the disjointness of constructors. Similarly, constructor disjointness is derivable from the
Girard/Schroeder-Heister equality, because unification fails when
two distinct constructors are compared.
In future work, we hope to study the relation between these two
notions of equality in more depth; richer equational theories (such
as the theory of commutative rings or the βη-theory of the lambda
calculus) do not have decidable unification, but it seems plausible
that there are hybrid approaches which might let us retain some
of the convenience of the G/SH equality rule while retaining the
decidability of Martin-Löf’s J eliminator.
6.2 Completeness of typing

Theorem 3 (Completeness of Algorithmic Typing).
Given Γ −→ Ω such that dom(Γ) = dom(Ω):
(i) If Γ ⊢ A p type and [Ω]Γ ⊢ [Ω]e ⇐ [Ω]A p and p′ ⊑ p, then there exist ∆ and Ω′ such that ∆ −→ Ω′ and dom(∆) = dom(Ω′) and Ω −→ Ω′ and Γ ⊢ e ⇐ [Γ]A p′ ⊣ ∆.
(ii) If Γ ⊢ A p type and [Ω]Γ ⊢ [Ω]e ⇒ A p, then there exist ∆, Ω′, A′, and p′ ⊑ p such that ∆ −→ Ω′ and dom(∆) = dom(Ω′) and Ω −→ Ω′ and Γ ⊢ e ⇒ A′ p′ ⊣ ∆ and A′ = [∆]A′ and A = [Ω′]A′.
(iii) If Γ ⊢ A p type and [Ω]Γ ⊢ [Ω]s : [Ω]A p ≫ B q and p′ ⊑ p, then there exist ∆, Ω′, B′, and q′ ⊑ q such that ∆ −→ Ω′ and dom(∆) = dom(Ω′) and Ω −→ Ω′ and Γ ⊢ s : [Γ]A p′ ≫ B′ q′ ⊣ ∆ and B′ = [∆]B′ and B = [Ω′]B′.
(iv) If Γ ⊢ A p type and [Ω]Γ ⊢ [Ω]s : [Ω]A p ≫ B ⌈q⌉ and p′ ⊑ p, then there exist ∆, Ω′, B′, and q′ ⊑ q such that ∆ −→ Ω′ and dom(∆) = dom(Ω′) and Ω −→ Ω′ and Γ ⊢ s : [Γ]A p′ ≫ B′ ⌈q′⌉ ⊣ ∆ and B′ = [∆]B′ and B = [Ω′]B′.
(v) If Γ ⊢ A⃗ ! types and Γ ⊢ C p type and [Ω]Γ ⊢ [Ω]Π :: [Ω]A⃗ ⇐ [Ω]C p and p′ ⊑ p, then there exist ∆ and Ω′ such that ∆ −→ Ω′ and dom(∆) = dom(Ω′) and Ω −→ Ω′ and Γ ⊢ Π :: [Γ]A⃗ ⇐ [Γ]C p′ ⊣ ∆.
(vi) If Γ ⊢ A⃗ ! types and Γ ⊢ P prop and FEV(P) = ∅ and Γ ⊢ C p type and [Ω]Γ / [Ω]P ⊢ [Ω]Π :: [Ω]A⃗ ⇐ [Ω]C p and p′ ⊑ p, then there exist ∆ and Ω′ such that ∆ −→ Ω′ and dom(∆) = dom(Ω′) and Ω −→ Ω′ and Γ / [Γ]P ⊢ Π :: [Γ]A⃗ ⇐ [Γ]C p′ ⊣ ∆.
Indexed and refinement types. Dependent ML (Xi and Pfenning
1999) indexed programs with propositional constraints, catching
bugs in programs that type-check under the standard ML type discipline but fail to maintain additional invariants tracked by the propositional annotations. DML worked by extracting constraints from
the program and passing them to a constraint solver, a powerful
technique that led to systems such as Stardust (Dunfield 2007) and
liquid types (Rondon et al. 2008).
From phantom types to GADTs. Leijen and Meijer (1999) introduced the term phantom type to describe a technique for programming in ML/Haskell where additional type parameters are
used to constrain when values are well-typed. This idea proved to
have many applications, ranging from foreign function interfaces
(Blume 2001) to encoding Java-style subtyping (Fluet and Pucella
11
2015/3/2
2006). Phantom types allow constructing values with constrained
types, but do not easily permit learning about type equalities by
analyzing them, putting applications such as intensional type analysis (Harper and Morrisett 1995) out of reach. Both Cheney and
Hinze (2003) and Xi et al. (2003) proposed treating equalities as
a first-class concept, giving explicitly-typed calculi for typechecking equality eliminations. In these systems, no algorithm for type
inference was given.
Simonet and Pottier (2007) gave a constraint-based algorithm
for type inference for GADTs. It is this work which first identified the potential intractability of type inference arising from the
interaction of hypothetical constraints and unification variables. To
resolve this issue they introduce the notion of tractable constraints
(i.e., constraints where hypothetical equations never contain existentials), and require placing enough annotations that all constraints
are tractable. In general, this could require annotations on case expressions, so subsequent work focused on relaxing this requirement. Though quite different in technical detail, stratified inference (Pottier and Régis-Gianas 2006) and wobbly types (Peyton
Jones et al. 2006) both work by pushing type information from annotations to case expressions, with stratified type inference literally
moving annotations around, and wobbly types tracking which parts
of a type have no unification variables. Modern GHC uses the OutsideIn algorithm (Vytiniotis et al. 2011), which further relaxes the
constraint: as long as case analysis cannot modify what is known
about an equation, the case analysis is permitted.
In our type system, the checking judgment of the bidirectional
algorithm serves to propagate annotations, and our requirement
that the scrutinee of a case expression be principal ensures that
no equations contain unification variables. This is close in effect
to stratified types, and is less expressive than OutsideIn. This is
a deliberate design choice to keep the declarative specification
simple, rather than an inherent limit of our approach.
To give a specification for the OutsideIn approach, the case
rule in our declarative system would be permitted to scrutinize an
expression if all types that can be synthesized for it have exactly
the same equations, even if they differ in their monotype parts. We
feared that such a spec would be much harder for programmers to
develop an intuition for than simply saying that a scrutinee must
synthesize a unique type. However, the technique we use—higherorder rules with implicational premises like DeclSpineRecover—
should work for this case.
More recently, Garrigue and Rémy (2013) proposed ambivalent
types, which are a way of deciding when it is safe to generalize
the type of a function using GADTs. This idea is orthogonal to
our calculus, simply because we do no generalization at all: every polymorphic function takes an annotation. However, Garrigue
and Rémy (2013) also emphasize the importance of monotonicity,
which says that substitution should be stable under subtyping, that
is, giving a more general type should not cause subtyping to fail.
This condition is satisfied by our bidirectional system.
References

Thierry Coquand. An algorithm for type-checking dependent types. Science of Computer Programming, 26(1–3):167–177, 1996.
Rowan Davies and Frank Pfenning. Intersection types and computational
effects. In ICFP, pages 198–208, 2000.
Joshua Dunfield. Refined typechecking with Stardust. In Programming
Languages meets Programming Verification (PLPV ’07), 2007.
Joshua Dunfield and Neelakantan R. Krishnaswami. Complete and easy
bidirectional typechecking for higher-rank polymorphism. In ICFP,
2013. arXiv:1306.6032 [cs.PL].
Joshua Dunfield and Frank Pfenning. Type assignment for intersections
and unions in call-by-value languages. In Found. Software Science and
Computation Structures (FOSSACS ’03), pages 250–266, 2003.
Matthew Fluet and Riccardo Pucella. Phantom types and subtyping.
arXiv:cs/0403034 [cs.PL], 2006.
Jacques Garrigue and Didier Rémy. Ambivalent types for principal type
inference with GADTs. In APLAS, 2013.
Jean-Yves Girard. A fixpoint theorem in linear logic. Post to Linear Logic mailing list, http://www.seas.upenn.edu/~sweirich/
types/archive/1992/msg00030.html, 1992.
Robert Harper and Greg Morrisett. Compiling polymorphism using intensional type analysis. In POPL, pages 130–141. ACM Press, 1995.
Neelakantan R. Krishnaswami. Focusing on pattern matching. In POPL,
pages 366–378. ACM Press, 2009.
Konstantin Läufer and Martin Odersky. Polymorphic type inference and
abstract data types. ACM Trans. Prog. Lang. Sys., 16(5):1411–1430,
1994.
Daan Leijen and Erik Meijer. Domain specific embedded compilers. In
USENIX Conf. Domain-Specific Languages (DSL ’99), pages 109–122,
1999.
Dale Miller. Unification under a mixed prefix. J. Symbolic Computation,
14(4):321–358, 1992.
Martin Odersky, Matthias Zenger, and Christoph Zenger. Colored local type
inference. In POPL, pages 41–53, 2001.
Simon Peyton Jones, Dimitrios Vytiniotis, Stephanie Weirich, and Geoffrey
Washburn. Simple unification-based type inference for GADTs. In
ICFP, pages 50–61, 2006.
Simon Peyton Jones, Dimitrios Vytiniotis, Stephanie Weirich, and Mark
Shields. Practical type inference for arbitrary-rank types. J. Functional
Programming, 17(1):1–82, 2007.
François Pottier and Yann Régis-Gianas. Stratified type inference for
generalized algebraic data types. In POPL, pages 232–244, 2006.
Patrick Rondon, Ming Kawaguchi, and Ranjit Jhala. Liquid types. In PLDI,
pages 159–169, 2008.
Peter Schroeder-Heister. Definitional reflection and the completion. In
Extensions of Logic Programming, LNCS, pages 333–347. Springer,
1994.
Vincent Simonet and François Pottier. A constraint-based approach to
guarded algebraic data types. ACM Transactions on Programming Languages and Systems (TOPLAS), 29(1):1, 2007.
Jan M. Smith. The independence of Peano's fourth axiom from Martin-Löf's type theory without universes. J. Symbolic Logic, 53(3):840–845, 1988.
Dimitrios Vytiniotis, Simon Peyton Jones, Tom Schrijvers, and Martin
Sulzmann. OutsideIn(X): Modular type inference with local assumptions. J. Functional Programming, 21(4–5):333–412, 2011.
Kevin Watkins, Iliano Cervesato, Frank Pfenning, and David Walker. A
concurrent logical framework: The propositional fragment. In Types for
Proofs and Programs, pages 355–377. Springer LNCS 3085, 2004.
Hongwei Xi and Frank Pfenning. Dependent types in practical programming. In POPL, pages 214–227, 1999.
Hongwei Xi, Chiyan Chen, and Gang Chen. Guarded recursive datatype
constructors. In POPL, pages 224–235, 2003.
References
Andreas Abel, Thierry Coquand, and Peter Dybjer. Verifying a semantic βη-conversion test for Martin-Löf type theory. In Mathematics of
Program Construction (MPC’08), volume 5133 of LNCS, pages 29–56,
2008.
Gavin M. Bierman, Erik Meijer, and Mads Torgersen. Lost in translation:
formalizing proposed extensions to C] . In OOPSLA, 2007.
Matthias Blume. No-longer-foreign: Teaching an ML compiler to speak
C “natively”. Electronic Notes in Theoretical Computer Science, 59(1),
2001.
James Cheney and Ralf Hinze. First-class phantom types. Technical Report
CUCIS TR2003-1901, Cornell University, 2003.
12
2015/3/2
Supplemental material for "Sound and Complete Bidirectional Typechecking for Higher-Rank Polymorphism and Indexed Types": Complete Rules
This file contains rules omitted in the main paper for space reasons.
Lemmas and proofs are in another, much longer, file.
We also list all the judgment forms:
Judgment                        Description                                             Location

Ψ ⊢ t : κ                       Index term/monotype is well-formed                      Figure 14
Ψ ⊢ P prop                      Proposition is well-formed                              Figure 14
Ψ ⊢ A type                      Type is well-formed                                     Figure 14
Ψ ⊢ A⃗ types                     Type vector is well-formed                              Figure 14
Ψ ctx                           Declarative context is well-formed                      Figure 14
Ψ ⊢ A ≤± B                      Declarative subtyping                                   Figure 4
Ψ ⊢ P true                      Declarative truth                                       Figure 5
Ψ ⊢ e ⇐ A p                     Declarative checking                                    Figure 5
Ψ ⊢ e ⇒ A p                     Declarative synthesis                                   Figure 5
Ψ ⊢ s : A p ≫ C q               Declarative spine typing                                Figure 5
Ψ ⊢ s : A p ≫ C ⌈q⌉             Declarative spine typing, recovering principality       Figure 5
Ψ ⊢ Π :: A⃗ ⇐ C p                Declarative pattern matching                            Figure 15
Ψ / P ⊢ Π :: A⃗ ⇐ C p            Declarative proposition assumption                      Figure 15
Ψ ⊢ Π covers A⃗                  Declarative match coverage                              Figure 16

Γ ⊢ τ : κ                       Index term/monotype is well-formed                      Figure 17
Γ ⊢ P prop                      Proposition is well-formed                              Figure 17
Γ ⊢ A type                      Polytype is well-formed                                 Figure 17
Γ ctx                           Algorithmic context is well-formed                      Figure 17
[Γ]A                            Applying a context, as a substitution, to a type        Figure 9
Γ ⊢ P true ⊣ ∆                  Check proposition                                       Figure 18
Γ / P ⊣ ∆⊥                      Assume proposition                                      Figure 18
Γ ⊢ s ≐ t : κ ⊣ ∆               Check equation                                          Figure 19
s # t                           Head constructors clash                                 Figure 20
Γ / s ≐ t : κ ⊣ ∆⊥              Assume/eliminate equation                               Figure 21
Γ ⊢ A <:± B ⊣ ∆                 Algorithmic subtyping                                   Figure 22
Γ / P ⊢ A <: B ⊣ ∆              Assume/eliminate proposition                            Figure 22
Γ ⊢ P ≡ Q ⊣ ∆                   Equivalence of propositions                             Figure 22
Γ ⊢ A ≡ B ⊣ ∆                   Equivalence of types                                    Figure 22
Γ ⊢ α̂ := t : κ ⊣ ∆              Instantiate                                             Figure 23
e chk-I                         Checking intro form                                     Figure 24
Γ ⊢ e ⇐ A p ⊣ ∆                 Algorithmic checking                                    Figure 12
Γ ⊢ e ⇒ A p ⊣ ∆                 Algorithmic synthesis                                   Figure 12
Γ ⊢ s : A p ≫ C q ⊣ ∆           Algorithmic spine typing                                Figure 12
Γ ⊢ s : A p ≫ C ⌈q⌉ ⊣ ∆         Algorithmic spine typing, recovering principality       Figure 12
Γ ⊢ Π :: A⃗ ⇐ C p ⊣ ∆            Algorithmic pattern matching                            Figure 25
Γ / P ⊢ Π :: A⃗ ⇐ C p ⊣ ∆        Algorithmic pattern matching (assumption)               Figure 25
Γ ⊢ Π covers A⃗                  Algorithmic match coverage                              Figure 26
Γ −→ ∆                          Context extension                                       Figure 11
[Ω]Γ                            Apply complete context                                  Figure 10
Ψ ⊢ t : κ   Under context Ψ, term t has sort κ

  UvarSort:   (α : κ) ∈ Ψ  ⟹  Ψ ⊢ α : κ
  UnitSort:   Ψ ⊢ 1 : ⋆
  BinSort:    Ψ ⊢ t1 : ⋆   and   Ψ ⊢ t2 : ⋆  ⟹  Ψ ⊢ t1 ⊕ t2 : ⋆
  ZeroSort:   Ψ ⊢ zero : N
  SuccSort:   Ψ ⊢ t : N  ⟹  Ψ ⊢ succ(t) : N

Ψ ⊢ P prop   Under context Ψ, proposition P is well-formed

  EqDeclProp:   Ψ ⊢ t : N   and   Ψ ⊢ t′ : N  ⟹  Ψ ⊢ t = t′ prop

Ψ ⊢ A type   Under context Ψ, type A is well-formed

  DeclUvarWF:     (α : ⋆) ∈ Ψ  ⟹  Ψ ⊢ α type
  DeclUnitWF:     Ψ ⊢ 1 type
  DeclBinWF:      Ψ ⊢ A type   and   Ψ ⊢ B type   and   ⊕ ∈ {→, ×, +}  ⟹  Ψ ⊢ A ⊕ B type
  DeclAllWF:      Ψ, α : κ ⊢ A type  ⟹  Ψ ⊢ (∀α : κ. A) type
  DeclExistsWF:   Ψ, α : κ ⊢ A type  ⟹  Ψ ⊢ (∃α : κ. A) type
  DeclImpliesWF:  Ψ ⊢ P prop   and   Ψ ⊢ A type  ⟹  Ψ ⊢ P ⊃ A type
  DeclWithWF:     Ψ ⊢ P prop   and   Ψ ⊢ A type  ⟹  Ψ ⊢ A ∧ P type

Ψ ⊢ A⃗ types   Under context Ψ, the types in A⃗ are well-formed

  DeclTypevecWF:   for all A ∈ A⃗, Ψ ⊢ A type  ⟹  Ψ ⊢ A⃗ types

Ψ ctx   Declarative context Ψ is well-formed

  EmptyDeclCtx:   · ctx
  VarDeclCtx:     Ψ ctx   and   α ∉ dom(Ψ)  ⟹  Ψ, α : κ ctx
  HypDeclCtx:     Ψ ctx   and   x ∉ dom(Ψ)   and   Ψ ⊢ A type  ⟹  Ψ, x : A ctx

Figure 14. Sorting; well-formedness of propositions, types, and contexts in the declarative system
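The sorting judgment of Figure 14 is directly syntax-directed, so it corresponds to a simple recursive function. The following is a minimal sketch, not part of the paper's formalization; the constructor tags (`Var`, `Unit`, `Bin`, `Zero`, `Succ`) and the tuple representation of index terms are illustrative choices.

```python
# Sketch of the sorting judgment Ψ ⊢ t : κ (Figure 14).
# Terms are tuples; the context psi maps variable names to sorts.

STAR, NAT = "*", "N"   # the sorts ⋆ and N

def sort_of(psi, t):
    """Return the sort of term t under context psi, or None if ill-sorted."""
    tag = t[0]
    if tag == "Var":                 # UvarSort: (α : κ) ∈ Ψ
        return psi.get(t[1])
    if tag == "Unit":                # UnitSort: 1 : ⋆
        return STAR
    if tag == "Zero":                # ZeroSort: zero : N
        return NAT
    if tag == "Succ":                # SuccSort: succ(t) : N if t : N
        return NAT if sort_of(psi, t[1]) == NAT else None
    if tag == "Bin":                 # BinSort: t1 ⊕ t2 : ⋆ if both are ⋆
        _, op, t1, t2 = t
        ok = sort_of(psi, t1) == STAR and sort_of(psi, t2) == STAR
        return STAR if ok else None
    return None
```

For example, `succ(zero)` has sort N, while `succ(1)` is ill-sorted because the argument of `succ` must have sort N.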
Ψ ⊢ Π :: A⃗ ⇐ C p   Under context Ψ, check branches Π with patterns of type A⃗ and bodies of type C

  DeclMatchEmpty:   Ψ ⊢ · :: A⃗ ⇐ C p
  DeclMatchSeq:     Ψ ⊢ π :: A⃗ ⇐ C p   and   Ψ ⊢ Π :: A⃗ ⇐ C p  ⟹  Ψ ⊢ π | Π :: A⃗ ⇐ C p
  DeclMatchBase:    Ψ ⊢ e ⇐ C p  ⟹  Ψ ⊢ (· ⇒ e) :: · ⇐ C p
  DeclMatchUnit:    Ψ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p  ⟹  Ψ ⊢ (), ρ⃗ ⇒ e :: 1, A⃗ ⇐ C p
  DeclMatch∃:       Ψ, α : κ ⊢ ρ⃗ ⇒ e :: A, A⃗ ⇐ C p  ⟹  Ψ ⊢ ρ⃗ ⇒ e :: (∃α : κ. A), A⃗ ⇐ C p
  DeclMatch×:       Ψ ⊢ ρ1, ρ2, ρ⃗ ⇒ e :: A1, A2, A⃗ ⇐ C p  ⟹  Ψ ⊢ ⟨ρ1, ρ2⟩, ρ⃗ ⇒ e :: A1 × A2, A⃗ ⇐ C p
  DeclMatch+k:      Ψ ⊢ ρ, ρ⃗ ⇒ e :: Ak, A⃗ ⇐ C p  ⟹  Ψ ⊢ injk ρ, ρ⃗ ⇒ e :: A1 + A2, A⃗ ⇐ C p
  DeclMatch∧:       Ψ / P ⊢ ρ⃗ ⇒ e :: A, A⃗ ⇐ C p  ⟹  Ψ ⊢ ρ⃗ ⇒ e :: A ∧ P, A⃗ ⇐ C p
  DeclMatchNeg:     A not headed by ∧ or ∃   and   Ψ, x : A ! ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p  ⟹  Ψ ⊢ x, ρ⃗ ⇒ e :: A, A⃗ ⇐ C p
  DeclMatchWild:    A not headed by ∧ or ∃   and   Ψ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p  ⟹  Ψ ⊢ _, ρ⃗ ⇒ e :: A, A⃗ ⇐ C p

Ψ / P ⊢ Π :: A⃗ ⇐ C p   Under context Ψ, incorporate proposition P while checking branches Π with patterns of type A⃗ and bodies of type C

  DeclMatch⊥:      mgu(σ, τ) = ⊥  ⟹  Ψ / σ = τ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p
  DeclMatchUnify:  mgu(σ, τ) = θ   and   θ(Ψ) ⊢ θ(ρ⃗ ⇒ e) :: θ(A⃗) ⇐ θ(C) p  ⟹  Ψ / σ = τ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p

Figure 15. Declarative pattern matching
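The rules DeclMatch⊥ and DeclMatchUnify hinge on a most general unifier over index terms: an equation with no unifier makes the branch unreachable, while a unifier θ is applied before checking the branch. As a point of reference, here is a sketch of mgu for the index language of sort N (zero, succ, and variables); the tuple encoding and names are illustrative, not the paper's.

```python
# Sketch of most-general unification over index terms of sort N,
# as assumed by DeclMatchUnify / DeclMatch⊥ in Figure 15.

def occurs(x, t):
    """Occurs check: does variable x appear in term t?"""
    return t == ("Var", x) or (t[0] == "Succ" and occurs(x, t[1]))

def apply_subst(theta, t):
    """Apply substitution theta (a dict) to term t."""
    if t[0] == "Var":
        return apply_subst(theta, theta[t[1]]) if t[1] in theta else t
    if t[0] == "Succ":
        return ("Succ", apply_subst(theta, t[1]))
    return t

def mgu(s, t, theta=None):
    """Return a substitution unifying s and t, or None (the paper's ⊥)."""
    theta = dict(theta or {})
    s, t = apply_subst(theta, s), apply_subst(theta, t)
    if s == t:
        return theta
    for a, b in ((s, t), (t, s)):
        if a[0] == "Var":
            if occurs(a[1], b):          # occurs check: no unifier
                return None
            theta[a[1]] = b
            return theta
    if s[0] == "Succ" and t[0] == "Succ":
        return mgu(s[1], t[1], theta)
    return None                          # head clash, e.g. zero vs succ(_)
```

On the introduction's example, `mgu(zero, succ(n))` is ⊥, which is exactly why DeclMatch⊥ lets the Right branch be omitted.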
Ψ ⊢ Π covers A⃗   Patterns Π cover the types A⃗ in context Ψ

  DeclCoversEmpty:   Ψ ⊢ (· ⇒ e1) | Π covers ·
  DeclCovers1:       Π ⇝1 Π′   and   Ψ ⊢ Π′ covers A⃗  ⟹  Ψ ⊢ Π covers 1, A⃗
  DeclCoversVar:     Π ⇝var Π′   and   Ψ ⊢ Π′ covers A⃗  ⟹  Ψ ⊢ Π covers A, A⃗
  DeclCovers×:       Π ⇝× Π′   and   Ψ ⊢ Π′ covers A1, A2, A⃗  ⟹  Ψ ⊢ Π covers A1 × A2, A⃗
  DeclCovers+:       Π ⇝+ ΠL ∥ ΠR   and   Ψ ⊢ ΠL covers A1, A⃗   and   Ψ ⊢ ΠR covers A2, A⃗  ⟹  Ψ ⊢ Π covers A1 + A2, A⃗
  DeclCovers∃:       Ψ, α : κ ⊢ Π covers A, A⃗  ⟹  Ψ ⊢ Π covers ∃α : κ. A, A⃗
  DeclCoversEq:      θ = mgu(t1, t2)   and   θ(Ψ) ⊢ θ(Π) covers θ(A0, A⃗)  ⟹  Ψ ⊢ Π covers A0 ∧ (t1 = t2), A⃗
  DeclCoversEqBot:   mgu(t1, t2) = ⊥  ⟹  Ψ ⊢ Π covers A0 ∧ (t1 = t2), A⃗

Π ⇝× Π′   Expand head pair patterns in Π

  · ⇝× ·
  Π ⇝× Π′  ⟹  ⟨ρ1, ρ2⟩, ρ⃗ ⇒ e | Π  ⇝×  ρ1, ρ2, ρ⃗ ⇒ e | Π′
  ρ ∈ {z, _}   and   Π ⇝× Π′  ⟹  ρ, ρ⃗ ⇒ e | Π  ⇝×  _, _, ρ⃗ ⇒ e | Π′

Π ⇝+ ΠL ∥ ΠR   Expand head sum patterns in Π into left ΠL and right ΠR sets

  · ⇝+ · ∥ ·
  Π ⇝+ ΠL ∥ ΠR  ⟹  inj1 ρ, ρ⃗ ⇒ e | Π  ⇝+  (ρ, ρ⃗ ⇒ e | ΠL) ∥ ΠR
  Π ⇝+ ΠL ∥ ΠR  ⟹  inj2 ρ, ρ⃗ ⇒ e | Π  ⇝+  ΠL ∥ (ρ, ρ⃗ ⇒ e | ΠR)
  ρ ∈ {u, _}   and   Π ⇝+ ΠL ∥ ΠR  ⟹  ρ, ρ⃗ ⇒ e | Π  ⇝+  (_, ρ⃗ ⇒ e | ΠL) ∥ (_, ρ⃗ ⇒ e | ΠR)

Π ⇝var Π′   Remove head variable and wildcard patterns from Π

  · ⇝var ·
  ρ ∈ {u, _}   and   Π ⇝var Π′  ⟹  ρ, ρ⃗ ⇒ e | Π  ⇝var  ρ⃗ ⇒ e | Π′

Π ⇝1 Π′   Remove head variable, wildcard, and unit patterns from Π

  · ⇝1 ·
  ρ ∈ {u, _, ()}   and   Π ⇝1 Π′  ⟹  ρ, ρ⃗ ⇒ e | Π  ⇝1  ρ⃗ ⇒ e | Π′

Figure 16. Match coverage
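The expansion judgments ⇝× and ⇝+ above are straightforward transformations on branch lists. The following is an illustrative sketch (branches as `(pattern_list, body)` pairs, with made-up constructor tags), intended only to make the shape of the transformations concrete.

```python
# Sketch of the pattern expansions behind Figure 16:
# ⇝× expands head pair patterns; ⇝+ splits on head sum patterns.

WILD = ("Wild",)

def expand_pairs(branches):
    """Π ⇝× Π′: a head pair pattern becomes its two components;
    a head variable or wildcard becomes two wildcards."""
    out = []
    for pats, body in branches:
        head, rest = pats[0], pats[1:]
        if head[0] == "Pair":
            out.append(([head[1], head[2]] + rest, body))
        elif head[0] in ("Var", "Wild"):
            out.append(([WILD, WILD] + rest, body))
    return out

def expand_sums(branches):
    """Π ⇝+ ΠL ∥ ΠR: injections go to one side;
    head variables and wildcards go to both sides."""
    left, right = [], []
    for pats, body in branches:
        head, rest = pats[0], pats[1:]
        if head[0] == "Inj1":
            left.append(([head[1]] + rest, body))
        elif head[0] == "Inj2":
            right.append(([head[1]] + rest, body))
        elif head[0] in ("Var", "Wild"):
            left.append(([WILD] + rest, body))
            right.append(([WILD] + rest, body))
    return left, right
```

Coverage checking then recurses: Covers+ requires both `left` and `right` to cover their respective column types.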
Γ ⊢ τ : κ   Under context Γ, term τ has sort κ

  VarSort:         (u : κ) ∈ Γ  ⟹  Γ ⊢ u : κ
  SolvedVarSort:   (α̂ : κ = τ) ∈ Γ  ⟹  Γ ⊢ α̂ : κ
  UnitSort:        Γ ⊢ 1 : ⋆
  BinSort:         Γ ⊢ τ1 : ⋆   and   Γ ⊢ τ2 : ⋆  ⟹  Γ ⊢ τ1 ⊕ τ2 : ⋆
  ZeroSort:        Γ ⊢ zero : N
  SuccSort:        Γ ⊢ t : N  ⟹  Γ ⊢ succ(t) : N

Γ ⊢ P prop   Under context Γ, proposition P is well-formed

  EqProp:   Γ ⊢ t : N   and   Γ ⊢ t′ : N  ⟹  Γ ⊢ t = t′ prop

Γ ⊢ A type   Under context Γ, type A is well-formed

  VarWF:         (u : ⋆) ∈ Γ  ⟹  Γ ⊢ u type
  SolvedVarWF:   (α̂ : ⋆ = τ) ∈ Γ  ⟹  Γ ⊢ α̂ type
  UnitWF:        Γ ⊢ 1 type
  BinWF:         Γ ⊢ A type   and   Γ ⊢ B type   and   ⊕ ∈ {→, ×, +}  ⟹  Γ ⊢ A ⊕ B type
  ForallWF:      Γ, α : κ ⊢ A type  ⟹  Γ ⊢ ∀α : κ. A type
  ExistsWF:      Γ, α : κ ⊢ A type  ⟹  Γ ⊢ ∃α : κ. A type
  ImpliesWF:     Γ ⊢ P prop   and   Γ ⊢ A type  ⟹  Γ ⊢ P ⊃ A type
  WithWF:        Γ ⊢ P prop   and   Γ ⊢ A type  ⟹  Γ ⊢ A ∧ P type

Γ ⊢ A p type   Under context Γ, type A is well-formed and respects principality p

  PrincipalWF:      Γ ⊢ A type   and   FEV([Γ]A) = ∅  ⟹  Γ ⊢ A ! type
  NonPrincipalWF:   Γ ⊢ A type  ⟹  Γ ⊢ A !̸ type

Γ ⊢ A⃗ [p] types   Under context Γ, the types in A⃗ are well-formed [with principality p]

  TypevecWF:            for all A ∈ A⃗, Γ ⊢ A type  ⟹  Γ ⊢ A⃗ types
  PrincipalTypevecWF:   for all A ∈ A⃗, Γ ⊢ A p type  ⟹  Γ ⊢ A⃗ p types

Γ ctx   Algorithmic context Γ is well-formed

  EmptyCtx:    · ctx
  VarCtx:      Γ ctx   and   u ∉ dom(Γ)  ⟹  Γ, u : κ ctx
  HypCtx:      Γ ctx   and   x ∉ dom(Γ)   and   Γ ⊢ A type  ⟹  Γ, x : A !̸ ctx
  Hyp!Ctx:     Γ ctx   and   x ∉ dom(Γ)   and   Γ ⊢ A type   and   FEV([Γ]A) = ∅  ⟹  Γ, x : A ! ctx
  SolvedCtx:   Γ ctx   and   α̂ ∉ dom(Γ)   and   Γ ⊢ t : κ  ⟹  Γ, α̂ : κ = t ctx
  EqnVarCtx:   Γ ctx   and   (α : κ) ∈ Γ   and   (α = −) ∉ Γ   and   Γ ⊢ τ : κ  ⟹  Γ, α = τ ctx
  MarkerCtx:   Γ ctx   and   ▸u ∉ Γ  ⟹  Γ, ▸u ctx

Figure 17. Well-formedness of types and contexts in the algorithmic system
Γ ⊢ P true ⊣ ∆   Under context Γ, check P, with output context ∆

  CheckpropEq:   Γ ⊢ t1 ≐ t2 : N ⊣ ∆  ⟹  Γ ⊢ t1 = t2 true ⊣ ∆

Γ / P ⊣ ∆⊥   Incorporate hypothesis P into Γ, producing ∆ or inconsistency ⊥

  ElimpropEq:   Γ / t1 ≐ t2 : N ⊣ ∆⊥  ⟹  Γ / t1 = t2 ⊣ ∆⊥

Figure 18. Checking and assuming propositions
Γ ⊢ t1 ≐ t2 : κ ⊣ ∆   Check that t1 equals t2, taking Γ to ∆

  CheckeqVar:     Γ ⊢ u ≐ u : κ ⊣ Γ
  CheckeqUnit:    Γ ⊢ 1 ≐ 1 : ⋆ ⊣ Γ
  CheckeqZero:    Γ ⊢ zero ≐ zero : N ⊣ Γ
  CheckeqSucc:    Γ ⊢ t1 ≐ t2 : N ⊣ ∆  ⟹  Γ ⊢ succ(t1) ≐ succ(t2) : N ⊣ ∆
  CheckeqBin:     Γ ⊢ τ1 ≐ τ1′ : ⋆ ⊣ Θ   and   Θ ⊢ [Θ]τ2 ≐ [Θ]τ2′ : ⋆ ⊣ ∆  ⟹  Γ ⊢ τ1 ⊕ τ2 ≐ τ1′ ⊕ τ2′ : ⋆ ⊣ ∆
  CheckeqInstL:   α̂ ∉ FV(t)   and   Γ[α̂ : κ] ⊢ α̂ := t : κ ⊣ ∆  ⟹  Γ[α̂ : κ] ⊢ α̂ ≐ t : κ ⊣ ∆
  CheckeqInstR:   α̂ ∉ FV(t)   and   Γ[α̂ : κ] ⊢ α̂ := t : κ ⊣ ∆  ⟹  Γ[α̂ : κ] ⊢ t ≐ α̂ : κ ⊣ ∆

Figure 19. Checking equations
t1 # t2   t1 and t2 have incompatible head constructors

  zero # succ(t)        succ(t) # zero
  1 # τ1 ⊕ τ2           τ1 ⊕ τ2 # 1
  ⊕1 ≠ ⊕2  ⟹  σ1 ⊕1 τ1 # σ2 ⊕2 τ2

Figure 20. Head constructor clash
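The clash judgment is a purely syntactic test on head constructors. A minimal sketch, with illustrative constructor tags (terms as tuples, binary connectives tagged `Bin` with their operator in the second position):

```python
# Sketch of the head-constructor clash judgment t1 # t2 (Figure 20).

def clash(s, t):
    """True iff s and t have incompatible head constructors."""
    heads = {s[0], t[0]}
    if heads == {"Zero", "Succ"}:            # zero # succ(t), succ(t) # zero
        return True
    if heads == {"Unit", "Bin"}:             # 1 # τ1 ⊕ τ2, τ1 ⊕ τ2 # 1
        return True
    if s[0] == "Bin" and t[0] == "Bin" and s[1] != t[1]:
        return True                          # σ1 ⊕1 τ1 # σ2 ⊕2 τ2 when ⊕1 ≠ ⊕2
    return False
```

Note that `clash` is not the negation of unifiability: `zero` and `zero` neither clash nor need unification, and a variable clashes with nothing.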
Γ / σ ≐ τ : κ ⊣ ∆⊥   Unify σ and τ, taking Γ to ∆, or to inconsistency ⊥

  ElimeqUvarRefl:   Γ / α ≐ α : κ ⊣ Γ
  ElimeqZero:       Γ / zero ≐ zero : N ⊣ Γ
  ElimeqSucc:       Γ / σ ≐ τ : N ⊣ ∆⊥  ⟹  Γ / succ(σ) ≐ succ(τ) : N ⊣ ∆⊥
  ElimeqUvarL:      α ∉ FV(τ)   and   (α = −) ∉ Γ  ⟹  Γ / α ≐ τ : κ ⊣ Γ, α = τ
  ElimeqUvarR:      α ∉ FV(τ)   and   (α = −) ∉ Γ  ⟹  Γ / τ ≐ α : κ ⊣ Γ, α = τ
  ElimeqUvarL⊥:     τ ≠ α   and   α ∈ FV(τ)  ⟹  Γ / α ≐ τ : κ ⊣ ⊥
  ElimeqUvarR⊥:     τ ≠ α   and   α ∈ FV(τ)  ⟹  Γ / τ ≐ α : κ ⊣ ⊥
  ElimeqUnit:       Γ / 1 ≐ 1 : ⋆ ⊣ Γ
  ElimeqBin:        Γ / τ1 ≐ τ1′ : ⋆ ⊣ Θ   and   Θ / [Θ]τ2 ≐ [Θ]τ2′ : ⋆ ⊣ ∆⊥  ⟹  Γ / τ1 ⊕ τ2 ≐ τ1′ ⊕ τ2′ : ⋆ ⊣ ∆⊥
  ElimeqBinBot:     Γ / τ1 ≐ τ1′ : ⋆ ⊣ ⊥  ⟹  Γ / τ1 ⊕ τ2 ≐ τ1′ ⊕ τ2′ : ⋆ ⊣ ⊥
  ElimeqClash:      σ # τ  ⟹  Γ / σ ≐ τ : κ ⊣ ⊥

Figure 21. Eliminating equations
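Operationally, Figure 21 decomposes an assumed equation into universal-variable bindings added to the context, or reports inconsistency. The following sketch covers the sort-N fragment only; it omits the side condition that α must not already be equated in Γ, and its context and term representations are illustrative.

```python
# Sketch of the assume/eliminate judgment Γ / σ ≐ τ : κ ⊣ ∆⊥ (Figure 21),
# restricted to sort N. The context is a list of (variable, term) equations;
# BOT stands for the inconsistent output ⊥.

BOT = "bot"

def occurs_in(x, t):
    return t == ("Var", x) or (t[0] == "Succ" and occurs_in(x, t[1]))

def elimeq(ctx, s, t):
    """Return ctx extended with equations α = τ, or BOT on inconsistency."""
    if s == t:                              # ElimeqUvarRefl / ElimeqZero
        return ctx
    for a, b in ((s, t), (t, s)):
        if a[0] == "Var":
            if occurs_in(a[1], b):          # ElimeqUvarL⊥ / ElimeqUvarR⊥
                return BOT
            return ctx + [(a[1], b)]        # ElimeqUvarL / ElimeqUvarR
    if s[0] == "Succ" and t[0] == "Succ":   # ElimeqSucc
        return elimeq(ctx, s[1], t[1])
    return BOT                              # ElimeqClash: incompatible heads
```

The ⊥ output is what licenses Match⊥ in Figure 25: a branch guarded by an inconsistent equation typechecks vacuously.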
Γ ⊢ A <:± B ⊣ ∆   Under input context Γ, type A is a subtype of B, with output context ∆

  <:Equiv:   A not headed by ∀/∃   and   B not headed by ∀/∃   and   Γ ⊢ A ≡ B ⊣ ∆  ⟹  Γ ⊢ A <:± B ⊣ ∆
  <:∀L:      B not headed by ∀   and   Γ, ▸α̂, α̂ : κ ⊢ [α̂/α]A <:− B ⊣ ∆, ▸α̂, Θ  ⟹  Γ ⊢ ∀α : κ. A <:− B ⊣ ∆
  <:∀R:      Γ, β : κ ⊢ A <:− B ⊣ ∆, β : κ, Θ  ⟹  Γ ⊢ A <:− ∀β : κ. B ⊣ ∆
  <:∃L:      Γ, α : κ ⊢ A <:+ B ⊣ ∆, α : κ, Θ  ⟹  Γ ⊢ ∃α : κ. A <:+ B ⊣ ∆
  <:∃R:      A not headed by ∃   and   Γ, ▸β̂, β̂ : κ ⊢ A <:+ [β̂/β]B ⊣ ∆, ▸β̂, Θ  ⟹  Γ ⊢ A <:+ ∃β : κ. B ⊣ ∆
  <:−+L:     neg(A)   and   nonpos(B)   and   Γ ⊢ A <:− B ⊣ ∆  ⟹  Γ ⊢ A <:+ B ⊣ ∆
  <:−+R:     nonpos(A)   and   neg(B)   and   Γ ⊢ A <:− B ⊣ ∆  ⟹  Γ ⊢ A <:+ B ⊣ ∆
  <:+−L:     pos(A)   and   nonneg(B)   and   Γ ⊢ A <:+ B ⊣ ∆  ⟹  Γ ⊢ A <:− B ⊣ ∆
  <:+−R:     nonneg(A)   and   pos(B)   and   Γ ⊢ A <:+ B ⊣ ∆  ⟹  Γ ⊢ A <:− B ⊣ ∆

Γ ⊢ P ≡ Q ⊣ ∆   Under input context Γ, check that P is equivalent to Q, with output context ∆

  ≡PropEq:   Γ ⊢ t1 ≐ t2 : N ⊣ Θ   and   Θ ⊢ [Θ]t1′ ≐ [Θ]t2′ : N ⊣ ∆  ⟹  Γ ⊢ (t1 = t1′) ≡ (t2 = t2′) ⊣ ∆

Γ ⊢ A ≡ B ⊣ ∆   Under input context Γ, check that A is equivalent to B, with output context ∆

  ≡Var:            Γ ⊢ α ≡ α ⊣ Γ
  ≡Exvar:          Γ ⊢ α̂ ≡ α̂ ⊣ Γ
  ≡Unit:           Γ ⊢ 1 ≡ 1 ⊣ Γ
  ≡⊕:              Γ ⊢ A1 ≡ B1 ⊣ Θ   and   Θ ⊢ [Θ]A2 ≡ [Θ]B2 ⊣ ∆  ⟹  Γ ⊢ A1 ⊕ A2 ≡ B1 ⊕ B2 ⊣ ∆
  ≡∀:              Γ, α : κ ⊢ A ≡ B ⊣ ∆, α : κ, ∆′  ⟹  Γ ⊢ (∀α : κ. A) ≡ (∀α : κ. B) ⊣ ∆
  ≡∃:              Γ, α : κ ⊢ A ≡ B ⊣ ∆, α : κ, ∆′  ⟹  Γ ⊢ (∃α : κ. A) ≡ (∃α : κ. B) ⊣ ∆
  ≡⊃:              Γ ⊢ P ≡ Q ⊣ Θ   and   Θ ⊢ [Θ]A ≡ [Θ]B ⊣ ∆  ⟹  Γ ⊢ (P ⊃ A) ≡ (Q ⊃ B) ⊣ ∆
  ≡∧:              Γ ⊢ P ≡ Q ⊣ Θ   and   Θ ⊢ [Θ]A ≡ [Θ]B ⊣ ∆  ⟹  Γ ⊢ (A ∧ P) ≡ (B ∧ Q) ⊣ ∆
  ≡InstantiateL:   α̂ ∉ FV(τ)   and   Γ[α̂] ⊢ α̂ := τ : ⋆ ⊣ ∆  ⟹  Γ[α̂] ⊢ α̂ ≡ τ ⊣ ∆
  ≡InstantiateR:   α̂ ∉ FV(τ)   and   Γ[α̂] ⊢ α̂ := τ : ⋆ ⊣ ∆  ⟹  Γ[α̂] ⊢ τ ≡ α̂ ⊣ ∆

Figure 22. Algorithmic equivalence and subtyping
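The four mode-switching rules of Figure 22 are driven by simple syntactic polarity tests: a type is positive if headed by ∃ and negative if headed by ∀. A sketch of the tests and the resulting mode dispatch, with illustrative type encodings and a hypothetical helper `next_mode` that is not part of the paper's algorithm:

```python
# Sketch of the polarity tests behind the <:−+L / <:−+R / <:+−L / <:+−R rules.
# Types are tuples; only the head constructor matters here.

def pos(A):     return A[0] == "Exists"   # headed by ∃
def neg(A):     return A[0] == "Forall"   # headed by ∀
def nonpos(A):  return not pos(A)
def nonneg(A):  return not neg(A)

def next_mode(mode, A, B):
    """Polarity for the recursive comparison in A <:mode B, or None if
    no mode-switching rule applies (e.g. <:Equiv or a quantifier rule)."""
    if mode == "+":
        if neg(A) and nonpos(B):   return "-"   # <:−+L
        if nonpos(A) and neg(B):   return "-"   # <:−+R
    if mode == "-":
        if pos(A) and nonneg(B):   return "+"   # <:+−L
        if nonneg(A) and pos(B):   return "+"   # <:+−R
    return None
```

The side conditions are arranged so that at most one switching rule fires for a given pair of heads, keeping the algorithm deterministic.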
Γ ⊢ α̂ := t : κ ⊣ ∆   Under input context Γ, instantiate α̂ such that α̂ = t, with output context ∆

  InstSolve:   Γ0 ⊢ τ : κ  ⟹  Γ0, α̂ : κ, Γ1 ⊢ α̂ := τ : κ ⊣ Γ0, α̂ : κ = τ, Γ1
  InstReach:   β̂ ∈ unsolved(Γ[α̂ : κ][β̂ : κ])  ⟹  Γ[α̂ : κ][β̂ : κ] ⊢ α̂ := β̂ : κ ⊣ Γ[α̂ : κ][β̂ : κ = α̂]
  InstBin:     Γ[α̂2 : ⋆, α̂1 : ⋆, α̂ : ⋆ = α̂1 ⊕ α̂2] ⊢ α̂1 := τ1 : ⋆ ⊣ Θ   and   Θ ⊢ α̂2 := [Θ]τ2 : ⋆ ⊣ ∆  ⟹  Γ[α̂ : ⋆] ⊢ α̂ := τ1 ⊕ τ2 : ⋆ ⊣ ∆
  InstZero:    Γ[α̂ : N] ⊢ α̂ := zero : N ⊣ Γ[α̂ : N = zero]
  InstSucc:    Γ[α̂1 : N, α̂ : N = succ(α̂1)] ⊢ α̂1 := t1 : N ⊣ ∆  ⟹  Γ[α̂ : N] ⊢ α̂ := succ(t1) : N ⊣ ∆

Figure 23. Instantiation
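InstBin and InstSucc "articulate" an existential variable: α̂ is replaced in the ordered context by fresh existentials that stand for the subterms, which are then instantiated in turn. The following loose sketch shows only this articulation over a list-shaped context; it omits InstReach and the well-formedness side condition Γ0 ⊢ τ : κ of InstSolve, and all entry forms and names are illustrative.

```python
# Sketch of existential instantiation Γ ⊢ α̂ := τ : κ ⊣ ∆ (Figure 23).
# The context is an ordered list of (name, solution) pairs, with
# solution None meaning "unsolved".

import itertools
fresh = itertools.count()

def instantiate(ctx, a, tau):
    """Solve existential a to monotype tau, returning the updated context."""
    i = next(idx for idx, entry in enumerate(ctx) if entry[0] == a)
    if tau[0] == "Bin":              # InstBin: articulate a := a1 ⊕ a2
        _, op, t1, t2 = tau
        a1, a2 = f"ex{next(fresh)}", f"ex{next(fresh)}"
        ctx = (ctx[:i]
               + [(a1, None), (a2, None),
                  (a, ("Bin", op, ("Evar", a1), ("Evar", a2)))]
               + ctx[i + 1:])
        ctx = instantiate(ctx, a1, t1)
        return instantiate(ctx, a2, t2)
    if tau[0] == "Succ":             # InstSucc: a := succ(a1)
        a1 = f"ex{next(fresh)}"
        ctx = ctx[:i] + [(a1, None), (a, ("Succ", ("Evar", a1)))] + ctx[i + 1:]
        return instantiate(ctx, a1, tau[1])
    # InstSolve / InstZero (well-formedness check omitted in this sketch)
    return ctx[:i] + [(a, tau)] + ctx[i + 1:]
```

Articulation keeps the context ordered: the fresh variables are inserted to the left of α̂, so α̂'s solution may mention them.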
e chk-I   Expression e is a checked introduction form

  λx. e chk-I      () chk-I      ⟨e1, e2⟩ chk-I      injk e chk-I

Figure 24. "Checking intro form"
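Since chk-I merely inspects the head of the expression, it amounts to a one-line predicate. A sketch with illustrative constructor tags:

```python
# Sketch of the "checking intro form" judgment e chk-I (Figure 24).

def chk_intro(e):
    """True iff e is an introduction form whose type must be checked:
    λx. e, (), ⟨e1, e2⟩, or injk e."""
    return e[0] in ("Lam", "Unit", "Pair", "Inj1", "Inj2")
```

Variables, applications, and annotated terms are not chk-I, so they synthesize rather than check.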
Γ ⊢ Π :: A⃗ ⇐ C p ⊣ ∆   Under context Γ, check branches Π with patterns of type A⃗ and bodies of type C

  MatchEmpty:   Γ ⊢ · :: A⃗ ⇐ C p ⊣ Γ
  MatchSeq:     Γ ⊢ π :: A⃗ ⇐ C p ⊣ Θ   and   Θ ⊢ Π′ :: A⃗ ⇐ C p ⊣ ∆  ⟹  Γ ⊢ π | Π′ :: A⃗ ⇐ C p ⊣ ∆
  MatchBase:    Γ ⊢ e ⇐ C p ⊣ ∆  ⟹  Γ ⊢ (· ⇒ e) :: · ⇐ C p ⊣ ∆
  MatchUnit:    Γ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p ⊣ ∆  ⟹  Γ ⊢ (), ρ⃗ ⇒ e :: 1, A⃗ ⇐ C p ⊣ ∆
  Match∃:       Γ, α : κ ⊢ ρ⃗ ⇒ e :: A, A⃗ ⇐ C p ⊣ ∆, α : κ, Θ  ⟹  Γ ⊢ ρ⃗ ⇒ e :: (∃α : κ. A), A⃗ ⇐ C p ⊣ ∆
  Match∧:       Γ / P ⊢ ρ⃗ ⇒ e :: A, A⃗ ⇐ C p ⊣ ∆  ⟹  Γ ⊢ ρ⃗ ⇒ e :: A ∧ P, A⃗ ⇐ C p ⊣ ∆
  Match×:       Γ ⊢ ρ1, ρ2, ρ⃗ ⇒ e :: A1, A2, A⃗ ⇐ C p ⊣ ∆  ⟹  Γ ⊢ ⟨ρ1, ρ2⟩, ρ⃗ ⇒ e :: A1 × A2, A⃗ ⇐ C p ⊣ ∆
  Match+k:      Γ ⊢ ρ, ρ⃗ ⇒ e :: Ak, A⃗ ⇐ C p ⊣ ∆  ⟹  Γ ⊢ (injk ρ), ρ⃗ ⇒ e :: A1 + A2, A⃗ ⇐ C p ⊣ ∆
  MatchNeg:     A not headed by ∧ or ∃   and   Γ, z : A ! ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p ⊣ ∆, z : A !, ∆′  ⟹  Γ ⊢ z, ρ⃗ ⇒ e :: A, A⃗ ⇐ C p ⊣ ∆
  MatchWild:    A not headed by ∧ or ∃   and   Γ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p ⊣ ∆  ⟹  Γ ⊢ _, ρ⃗ ⇒ e :: A, A⃗ ⇐ C p ⊣ ∆

Γ / P ⊢ Π :: A⃗ ⇐ C p ⊣ ∆   Under context Γ, incorporate proposition P while checking branches Π with patterns of type A⃗ and bodies of type C

  Match⊥:      Γ / σ ≐ τ : κ ⊣ ⊥  ⟹  Γ / σ = τ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p ⊣ Γ
  MatchUnify:  Γ, ▸P / σ ≐ τ : κ ⊣ Θ   and   Θ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p ⊣ ∆, ▸P, ∆′  ⟹  Γ / σ = τ ⊢ ρ⃗ ⇒ e :: A⃗ ⇐ C p ⊣ ∆

Figure 25. Algorithmic pattern matching
Γ ⊢ Π covers A⃗   Under context Γ, patterns Π cover the types A⃗

  CoversEmpty:   Γ ⊢ (· ⇒ e1) | Π covers ·
  Covers1:       Π ⇝1 Π′   and   Γ ⊢ Π′ covers A⃗  ⟹  Γ ⊢ Π covers 1, A⃗
  CoversVar:     Π ⇝var Π′   and   Γ ⊢ Π′ covers A⃗  ⟹  Γ ⊢ Π covers A, A⃗
  Covers×:       Π ⇝× Π′   and   Γ ⊢ Π′ covers A1, A2, A⃗  ⟹  Γ ⊢ Π covers A1 × A2, A⃗
  Covers+:       Π ⇝+ ΠL ∥ ΠR   and   Γ ⊢ ΠL covers A1, A⃗   and   Γ ⊢ ΠR covers A2, A⃗  ⟹  Γ ⊢ Π covers A1 + A2, A⃗
  Covers∃:       Γ, α : κ ⊢ Π covers A, A⃗  ⟹  Γ ⊢ Π covers ∃α : κ. A, A⃗
  CoversEq:      Γ / [Γ]t1 ≐ [Γ]t2 : κ ⊣ ∆   and   ∆ ⊢ [∆]Π covers [∆]A0, [∆]A⃗  ⟹  Γ ⊢ Π covers A0 ∧ (t1 = t2), A⃗
  CoversEqBot:   Γ / [Γ]t1 ≐ [Γ]t2 : κ ⊣ ⊥  ⟹  Γ ⊢ Π covers A0 ∧ (t1 = t2), A⃗

Figure 26. Algorithmic match coverage