
𝝀 Propositions as Types

1. Introduction
2. Church and the theory of computation
3. The theory of proof
4. A glimpse of univalent foundations

Introduction

Linking two fields of inquiry previously thought separate can yield powerful discoveries. Descartes's coordinates, for example, connect geometry with algebra; Planck's quantum theory connects particles with waves. The notion of Propositions as Types, which connects logic to computation, provides such a synthesis. At first glance it looks like a simple coincidence - almost a joke - but it turns out to be remarkably robust, motivating the development of automated proof assistants and programming languages.

Propositions as types is often referred to as the Curry-Howard isomorphism, which describes a correspondence between a given logic and a given programming language. On the surface, it says that for each proposition in the logic there is a corresponding type in the programming language, and vice versa.

Church and the theory of computation

Aristotle and the Stoics founded logic in classical Greece, followed by Ockham and the Scholastics in the Middle Ages. Our interest in the subject stems from the contributions of Boole, De Morgan, Frege, Peirce, Peano, and others in the nineteenth century.

As the 20th century dawned, Whitehead and Russell's Principia Mathematica demonstrated that formal logic could express a large part of mathematics. Inspired by this vision, Hilbert became the leading proponent of formal logic, aiming to put it on a firm foundation. One goal of Hilbert's program was to solve the Entscheidungsproblem (decision problem), that is, to develop an effectively calculable procedure to determine the truth or falsity of any statement. Note that the problem presupposes completeness: that every statement has a proof, either of it or of its negation. Whatever basis there was for Hilbert's optimism was destroyed the day Gödel announced his proof that arithmetic is incomplete.

Fulfilling Hilbert's program did not, at first, require an exact definition of "effectively calculable": it was simply assumed that, presented with a particular procedure, one could tell whether it was calculable. But to demonstrate that the decision problem was undecidable, a formal definition of effectively calculable became necessary. Church introduced the lambda calculus as such a model of computation, alongside Turing machines and recursive functions; it offers a concise notation for functions, including "first-class" functions that may appear as arguments or results of other functions.

It is remarkably compact: the basic constructs of lambda calculus are lambda abstractions and applications. If $t$ is an expression of some sort involving a free variable $x$, then the lambda abstraction $$\lambda x.t$$ is intended to represent the function which takes one input and returns the result of substituting that input for $x$ in $t$. Thus, for instance, $\lambda x.(x+1)$ is the function which adds 1 to its argument. You may well have seen lambda expressions outside of lambda calculus, used to refer to a function without giving it a name. Application, for its part, is conventionally left-associative, which allows functions of multiple variables to be represented in terms of functions of a single variable via "currying": after being applied to the first argument, the term returns a function which, when applied to the next argument, returns a value. For instance, $\lambda x.(\lambda y.x+y)$, when applied to an argument $x$, returns a function which, when applied to an argument $y$, returns $x+y$. Such a term is generally abbreviated $\lambda x\ y.x+y$.
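For readers who like to experiment, here is a minimal sketch of abstraction, application and currying in Lean 4 (the names addOne and add are ours, chosen for illustration):

```lean
-- Lambda abstraction: the function which adds 1 to its argument.
def addOne : Nat → Nat := fun x => x + 1

-- Currying: a "two-argument" function is a function returning a function.
def add : Nat → Nat → Nat := fun x => fun y => x + y

#eval addOne 41     -- 42
-- Application is left-associative: `add 2 3` means `(add 2) 3`,
-- where `add 2` is the partially applied function λ y. 2 + y.
#eval add 2 3       -- 5
#eval (add 2) 3     -- 5
```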

Evaluation and Reduction

Reduction is the process of computing the value of a lambda term. The $\beta$-reduction of a term $(\lambda x.t)(u)$ is defined as the capture-avoiding substitution of $u$ for $x$ in $t$: \[ (\lambda x.t)(u) \rightarrow_{\beta} t[u/x] \] Terms which can be connected by a sequence of $\beta$-reductions are then said to be $\beta$-equivalent.

Now in good situations, lambda calculus reduction is confluent, so that every term has at most one normal form, and two terms are equivalent precisely when they have the same normal form. A term is said to be normalizing if its reduction is also terminating – more precisely, $t$ is said to be weakly-normalizing if there exists a finite sequence of reductions \[ t \rightarrow t_1 \rightarrow \dots \rightarrow t_n \] terminating in a normal form $t_n$.
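As a small illustration, the kernel of a proof assistant performs exactly this kind of reduction; in the Lean sketch below, the second line type-checks precisely because both sides reduce to the same normal form:

```lean
-- (λ x. x + 1) 2 β-reduces to 2 + 1, whose normal form is 3.
#reduce (fun x => x + 1) 2          -- 3

-- β-equivalent terms are treated as definitionally equal, so `rfl` closes the goal.
example : (fun x => x + 1) 2 = 3 := rfl
```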

Different flavours of lambda calculus

In the untyped variant of lambda calculus, every term can be applied to any other term. As an example of the sort of freedom this allows, we can form the term $u = \lambda x.xx$ which applies its argument to itself and then define \[ \omega := u(u) \] as the self-application of $u$. Performing $\beta$-reductions on $\omega$ yields \[ \omega \rightarrow_\beta xx[u/x] = u(u) = \omega \rightarrow_\beta \omega \rightarrow_\beta \dots \] thus giving the classic example of a non-terminating program. You might wonder how such a system is useful. In pure untyped lambda calculus, we can define natural numbers using the Church encoding: the number $n$ is represented by the operation of $n$-fold iteration. \[ 0 := \lambda f.(\lambda x.x) \\ \begin{align*} 1 := \lambda f.(\lambda x.f\ x) && 2 := \lambda f.(\lambda x.f(f\ x)) \end{align*} \] It is then possible to define a successor function, which essentially computes $f^{n+1}(x) = f(f^{n}(x))$: \[ \texttt{succ} := \lambda n.\lambda f.\lambda x.f(n\ f\ x) \] This reflexive property of untyped lambda calculus is reflected in its models: any natural model can be seen as a reflexive object in a cartesian closed category.
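Here is a hedged Lean sketch of the Church encoding (the names Church, zero, succ and toNat are ours; we use a polymorphic type where the untyped calculus uses none):

```lean
-- A Church numeral is an iterator: given f and x, apply f to x n times.
def Church : Type 1 := (α : Type) → (α → α) → α → α

def zero : Church := fun _ _ x => x
def two  : Church := fun _ f x => f (f x)

-- Successor: apply f one more time, i.e. f^(n+1) x = f (f^n x).
def succ (n : Church) : Church := fun α f x => f (n α f x)

-- Read the numeral back by iterating (· + 1) starting from 0.
def toNat (n : Church) : Nat := n Nat (· + 1) 0

#eval toNat (succ (succ two))   -- 4

-- The self-application `fun x => x x` from the untyped calculus is rejected by
-- the typechecker, which is why every well-typed term here normalizes.
```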

Church's initial attempt to encode logical formulas using lambda calculus had to be abandoned due to its inconsistency. The system's failure to prevent a predicate from acting on itself was related to Russell's paradox and, to some extent, to diagonal arguments. Self-application in Russell's logic results in paradox, whereas in Church's untyped lambda calculus it results in non-terminating computations. Simply typed lambda calculus, on the other hand, ensures that every term has a normal form, i.e., that it corresponds to a computation that halts.

The theory of proof

Mathematical deduction can be presented in a variety of ways. In contrast to proof systems that reason from truth in the tradition of Hilbertian axiomatic schemes, the framework of natural deduction describes a specific class of deductive systems that is intended to be close to "natural" deductive reasoning insofar as it is based on the idea of reasoning from assumptions.

Intuitionistic logic

We have this idea that propositions should correspond to types. Therefore, rather than booleans, propositions should be interpreted as sets of values, and implications as functions between the corresponding sets. For instance, if we write $\mathbb{N}$ for a proposition interpreted as the set of natural numbers, the type $\mathbb{N} \implies \mathbb{N}$ would correspond to the set of functions from natural numbers to themselves. We now see that the boolean interpretation is very weak: it only cares about whether sets are empty or not.

The interpretation of implications as sets of functions is still not entirely satisfactory because, given a function of type $\mathbb{N} \rightarrow \mathbb{N}$, there are many ways to implement it.

We thus want to shift from an extensional perspective, where two functions are equal when they have the same values on the same inputs, to an intensional one, where the way the result is computed matters. This means that we should be serious about what a program is, or equivalently a proof, and define it precisely so that we can reason about the proofs of a proposition instead of its mere provability. We can thus distinguish three levels of interpretation:

  1. the boolean level: propositions are interpreted as booleans and we are interested in whether a proposition is provable or not.
  2. the extensional level: propositions are interpreted as sets and we are interested in which functions can be implemented.
  3. the intensional level: we are interested in the proofs themselves.

A system of natural deduction is a deductive system containing a class of judgements generated by some constructor operations, and for which each constructor comes with two relevant classes of rules:

  1. Introduction rules, which allow us to conclude a judgement built using the constructor from simpler judgements.
  2. Elimination rules, which allow us to conclude some other judgements by using a judgement involving the constructor.

For instance, in a system of propositional logic whose judgements are the truth of propositions, one such constructor might be the conjunction connective $\land$. The introduction rule for $\land$ would be \[ \frac{A\ \texttt{true} \qquad B\ \texttt{true}}{A \land B\ \texttt{true}} \] while its elimination rules might be

\[ \begin{align*} \frac{A \land B\ \texttt{true}}{A\ \texttt{true}} &\qquad& \frac{A \land B\ \texttt{true}}{B\ \texttt{true}} \end{align*} \] Similarly, in a system of type theory, the relevant judgements are typing judgements of the form $a:A$, meaning that the term $a$ belongs to the type $A$. In this case, an analogous constructor is the cartesian product type, whose rules have the same shape but keep track of the specific terms involved: \[ \frac{a:A \qquad b:B}{(a, b): A \times B} \] and \[ \begin{align*} \frac{p:A\times B}{\pi_1(p) : A} &\qquad& \frac{p:A\times B}{\pi_2(p):B} \end{align*} \] In particular, when considering natural deduction systems for type theory, there are essentially two other classes of rules:

  1. Formation rules, which declare on which basis a new type can be introduced
  2. Computation rules which constrain the result of combining term introduction with term elimination

For instance, the cartesian product type would come with a formation rule \[ \frac{A\ \texttt{type} \qquad B\ \texttt{type}}{A \times B\ \texttt{type}} \] and computation rules \[ \begin{align*} \frac{a:A \qquad b:B}{\pi_1 (a, b) \rightarrow_\beta a} &\qquad& \frac{a:A \qquad b:B}{\pi_2(a,b) \rightarrow_\beta b} \end{align*} \] A direct consequence of this style of presentation is that any proof can be normalised to one without "detours", in which no concepts other than those contained in the final result appear. Sequent calculus is a reformulation of natural deduction which directly supports this subformula principle; every proof in natural deduction can be converted to a proof in sequent calculus, and conversely, so the two systems are equivalent.
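Rules of this shape are what a proof assistant based on type theory implements. Here is a minimal Lean sketch (variable names are ours) of introduction, elimination and computation for pairs, together with the corresponding rules for conjunction:

```lean
variable {A B : Type} {P Q : Prop}

-- Introduction: from a : A and b : B, form the pair (a, b) : A × B.
example (a : A) (b : B) : A × B := (a, b)

-- Elimination: the projections π₁ and π₂.
example (p : A × B) : A := p.1
example (p : A × B) : B := p.2

-- Computation: π₁ (a, b) reduces to a, so the equation holds by rfl.
example (a : A) (b : B) : (a, b).1 = a := rfl

-- The same pattern on the logical side: conjunction.
example (hp : P) (hq : Q) : P ∧ Q := ⟨hp, hq⟩
example (h : P ∧ Q) : P := h.1
```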

Now we have already mentioned that conjunction $A \land B$ corresponds to the cartesian product $A \times B$, that is, a record with two fields, also known as a pair. A proof of the proposition $A \land B$ consists of a proof of $A$ and a proof of $B$. Similarly, a value of type $A \times B$ consists of a value of type $A$ and a value of type $B$.

It turns out we can find similar correspondences for disjunctions and implications, as the sketch after this list illustrates.
  1. Disjunction $A \lor B$ corresponds to a disjoint sum $A + B$, that is, a variant with two alternatives. A proof of the proposition $A \lor B$ consists of either a proof of $A$ or a proof of $B$, together with an indication of which of the two has been proved. Similarly, a value of type $A + B$ consists of either a value of type $A$ or a value of type $B$, together with an indication of whether this is a left or a right summand.
  2. Implication $A \implies B$ corresponds to the function space $A \rightarrow B$. A proof of the proposition $A \implies B$ consists of a procedure that, given a proof of $A$, yields a proof of $B$. Similarly, a value of type $A \rightarrow B$ consists of a function that, when applied to a value of type $A$, returns a value of type $B$.
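A hedged Lean sketch of these two correspondences (again with variable names of our own choosing):

```lean
variable {A B : Type} {P Q : Prop}

-- Disjoint sum: a value of Sum A B is either a left or a right summand.
example (a : A) : Sum A B := Sum.inl a
-- Disjunction: a proof of P ∨ Q records which side has been proved.
example (hp : P) : P ∨ Q := Or.inl hp

-- To use a sum we must handle both alternatives (case analysis).
example (s : Sum A B) (f : A → B) : B :=
  match s with
  | Sum.inl a => f a
  | Sum.inr b => b

-- Implication as function space: applying a proof of P → Q to a proof of P yields a proof of Q.
example (f : P → Q) (hp : P) : Q := f hp
```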

This reading goes back to the foundations of intuitionistic logic, with the BHK interpretation, and given this intuitionistic notion of proof it hardly seems surprising that intuitionistic natural deduction and lambda calculus should correspond so closely. We can extend the analogy by introducing dependent types corresponding to the predicate quantifiers $\exists$ and $\forall$. With the introduction of dependent types, every proof in predicate logic can be represented by a term of a suitable typed lambda calculus.

Dependent types

The dependent product $\prod_{x:A}B(x)$ for a dependent type $x:A \vdash B(x)\ \texttt{type}$ is the type of dependently typed functions assigning to each $x:A$ an element of $B(x)$.

Said otherwise, with the formation rule: \[ \frac{\Gamma \vdash A\ \texttt{type} \qquad \Gamma,x:A \vdash B(x)\ \texttt{type}}{\Gamma \vdash \displaystyle \prod_{x:A}B(x) :\equiv (x:A) \rightarrow B(x)\ \texttt{type}} \] We can then show how to construct a dependent function of type $\prod_{x:A} B(x)$, i.e., how we prove elements of that fibered proposition, using an introduction rule. In order to construct a dependent function, one has to give a term $b(x): B(x)$ indexed by $x:A$ in context $\Gamma$: \[ \frac{\Gamma,\ x:A \vdash b(x) : B(x)}{\Gamma \vdash \lambda x.b(x) : \displaystyle \prod_{x:A} B(x)} \lambda \] The elimination rule, in turn, provides a way to use dependent functions: a dependent function is used by evaluating it at an argument of the domain type. Thus the elimination rule is: \[ \frac{\Gamma \vdash f : \displaystyle \prod_{x:A} B(x)}{\Gamma,\ x:A \vdash f(x) : B(x)}\texttt{ev} \] We now postulate rules that specify the behaviour of functions. First we have a rule asserting that a function of the form $\lambda x.b(x)$ behaves as expected, i.e., that application is computed by $\beta$-substitution: \[ \frac{\Gamma,\ x:A \vdash b(x) : B(x)}{\Gamma,\ x:A \vdash (\lambda y.b(y))(x) \equiv b(x) : B(x)} \beta \] Second, we postulate that all elements of a $\prod$-type are dependent functions (the $\eta$-rule): \[ \frac{\Gamma \vdash f : \displaystyle \prod_{x:A} B(x)}{\Gamma \vdash \lambda x.f(x) \equiv f :\displaystyle \prod_{x:A} B(x)} \]
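A small Lean sketch of a dependent function (the names Fam and pick are ours): the codomain varies with the argument, so the term inhabits the dependent product $\prod_{b:\texttt{Bool}} \texttt{Fam}(b)$.

```lean
-- A type family indexed by Bool: the codomain depends on the argument.
def Fam : Bool → Type
  | true  => Nat
  | false => Bool

-- Introduction (λ): to build an element of ∏ (b : Bool), Fam b,
-- give, for each b, a term of type Fam b.
def pick : (b : Bool) → Fam b
  | true  => Nat.zero
  | false => Bool.true

-- Elimination (ev): apply the dependent function to an argument of the domain type.
#check pick true        -- pick true : Fam true

-- β-rule: the application computes, so the equation holds definitionally.
example : pick true = Nat.zero := rfl
```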

I don't want to talk too much about dependent types; just keep in mind that the distinction between types and terms disappears, since we can manipulate types like any other data: we can define functions which create types from other types, and so on. In particular, this means that types can express pretty much any proposition, and the language can be used as a proof assistant. One key difference with functional programming languages is that the type system of a proof assistant is so rich that there are no principal types and typability is undecidable. There is thus no full type inference and we have to explicitly provide a type for every function. The more precise the type of a function is, the longer implementing the program will take, but the stronger the guarantees will be.
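For instance, here is a small sketch in Lean (the names Pair, diag and orComm are ours) of a function that builds types from other types, and of a type that expresses a proposition, whose inhabitant is its proof:

```lean
-- A function from types to types: types are manipulated like any other data.
def Pair (A : Type) : Type := A × A

def diag {A : Type} (a : A) : Pair A := (a, a)

-- A type expressing a proposition; the definition below is its proof.
def orComm : (P Q : Prop) → P ∨ Q → Q ∨ P :=
  fun _ _ h => h.elim Or.inr Or.inl
```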

A glimpse of univalent foundations

We mentioned earlier the motivation for intuitionistic logic: changing the intuitive meaning we give to types. Instead of thinking of them as denoting booleans, it is much more satisfactory to consider that they should be interpreted as sets, where it makes sense to consider the various elements of a type.

Namely, the boolean interpretation is too limited because when a type $A$ is not false (i.e. empty), there is only one reason why this could be: $A$ is necessarily true (i.e. the set with one element), and in this case there is only one proof of $A$ (the element of that set). In this sense, the boolean interpretation does not allow for the possibility that a type should admit various proofs. Now if we try to make sense of equality in type theory, we discover that the set-theoretic interpretation of logic suffers from the same limitation. Namely, in a set, when two elements $x$ and $y$ are equal there is only one reason why this could possibly be: because $x$ is the same as $y$.

This suggests changing once again the semantics we give to types and interpreting them not as booleans, not as sets, but as spaces. In this interpretation, proofs of equality correspond to paths, and we can thus conceive models in which there are several distinct paths between two points. Homotopy type theory is dependent type theory seen from this point of view, and was introduced by Voevodsky in the 2000s. Voevodsky discovered that an additional axiom, called univalence, was required for the logic to match the situation in spaces, and homotopy type theory is usually understood with this axiom.

In intensional type theory, we have two notions of equality:

  1. definitional (or judgemental) equality, which holds when two terms reduce to the same normal form; it is checked automatically and is not a proposition inside the theory.
  2. propositional equality, the identity type, whose proofs are ordinary terms that we construct and manipulate.

In most proof assistants, like Agda, there is no syntax for definitional equality, because there is simply no way to distinguish between two definitionally equal terms. That is why you'll usually see propositional equality written as $\equiv$.
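A small Lean illustration (note that Lean writes propositional equality as $=$ rather than $\equiv$): a definitional equality is checked silently by rfl, while a propositional equality that does not hold by computation needs an actual proof term.

```lean
-- n + 0 reduces to n by the computation rules of addition,
-- so this propositional equality is also definitional: rfl suffices.
example (n : Nat) : n + 0 = n := rfl

-- 0 + n does not reduce (addition recurses on its second argument),
-- so this identification requires a real proof, here by an induction lemma.
example (n : Nat) : 0 + n = n := Nat.zero_add n
```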

One interesting property inherited from dependent type theory is that we can iterate the identity type, forming identifications between identifications. Thus we will usually refer to identifications as paths and to iterated identifications as homotopies.
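In Lean-like notation this iteration is immediate (a sketch only: Lean's Prop is proof-irrelevant, so such iterated identity types are trivially inhabited there, whereas in homotopy type theory they can have genuinely distinct elements):

```lean
-- Paths can be inverted and composed, as in a space.
example {A : Type} {x y : A} (p : x = y) : y = x := p.symm
example {A : Type} {x y z : A} (p : x = y) (q : y = z) : x = z := p.trans q

-- Given two identifications p q : x = y, `p = q` is again a type:
-- an identification between identifications.
example {A : Type} {x y : A} (p q : x = y) : Prop := p = q
```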



Formally, a homotopy between a pair of continuous maps $f : X_1 \rightarrow X_2$ and $g : X_1 \rightarrow X_2$ is a continuous function $H : [0,1] \times X_1 \rightarrow X_2$ such that $H(x, 0) = f(x)$ and $H(x, 1) = g(x)$.