### 3 Lecture notes

#### 3.1 Propositional logic

##### 3.1.1 The syllabus

Go over the premise of the course and go over the beats of the course. All of this content is in the syllabus. Have you read the syllabus? You should read the syllabus. If you’re reading this paragraph, you should be reading the syllabus instead.

##### 3.1.2 Philosophical foundations of type theory

The first rule of type theory is you don’t ask what a type is.

In set theory, when we say a statement like 2 \in \mathbb{N}, we're making a statement about \mathbb{N}: namely, that among the things collected in the set \mathbb{N} is the object 2.

In type theory, the definition of \mathbb{N} looks a lot more like a description: we add two rules to our theory, both of which describe what a natural number is.
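One standard way of writing these two rules (in inference-rule notation, which is introduced properly later in these notes; the reading follows below) is:

```latex
\frac{}{\Gamma \vdash \mathrm{zero} : \mathbb{N}}
\qquad
\frac{\Gamma \vdash m : \mathbb{N}}{\Gamma \vdash \mathrm{suc}\ m : \mathbb{N}}
```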

Don’t worry about what these mean yet, but you can read it as "under any context, zero is a natural number", and "if under some context, m is a natural number, then \mathrm{suc\ } m is a natural number".

When we say that 2 : \mathbb{N}, then, we’re saying that \mathrm{suc\ (suc\ zero)} : \mathbb{N}, which makes a statement about 2.

This is useful because in set theory, the intuition is you’ve constructed an infinitely large set of things, and to check membership you just traverse that infinite set. Computers are really bad at doing anything that’s infinite. (Obviously, there are ways of axiomatizing set theory that avoid this. The point is that the philosophy of type theory is more compatible with computer-aided proof.)

##### 3.1.3 Classical propositional logic: semantics

```
prop-expr = var
          | (prop-expr ∧ prop-expr)
          | (prop-expr ∨ prop-expr)
          | (prop-expr → prop-expr)
          | (¬ prop-expr)
```

| A | B | A \to B |
| --- | --- | --- |
| \mathsf{T} | \mathsf{T} | \mathsf{T} |
| \mathsf{T} | \mathsf{F} | \mathsf{F} |
| \mathsf{F} | \mathsf{T} | \mathsf{T} |
| \mathsf{F} | \mathsf{F} | \mathsf{T} |

Definition: A valuation is a function v : \mathsf{Var} \to \{\mathsf{T}, \mathsf{F}\} that assigns meaning to each variable. Given a valuation v and a prop-expr e, we write \semantics{e}_v to mean "the meaning of e under the valuation v".

We define the meaning of \semantics{e}_v recursively for each connective:
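Spelled out, the recursive clauses are the standard Boolean ones (a sketch; they agree with the truth table above):

```latex
\begin{aligned}
\semantics{x}_v &= v(x) \\
\semantics{e_1 \land e_2}_v &= \mathsf{T} \text{ iff } \semantics{e_1}_v = \mathsf{T} \text{ and } \semantics{e_2}_v = \mathsf{T} \\
\semantics{e_1 \lor e_2}_v &= \mathsf{T} \text{ iff } \semantics{e_1}_v = \mathsf{T} \text{ or } \semantics{e_2}_v = \mathsf{T} \\
\semantics{e_1 \to e_2}_v &= \mathsf{T} \text{ iff } \semantics{e_1}_v = \mathsf{F} \text{ or } \semantics{e_2}_v = \mathsf{T} \\
\semantics{\neg e}_v &= \mathsf{T} \text{ iff } \semantics{e}_v = \mathsf{F}
\end{aligned}
```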

Definition: A prop-expr e is a tautology if for all valuations v, \semantics{e}_v = \mathsf{T}.

Definition: e is satisfiable if there is a valuation v such that \semantics{e}_v = \mathsf{T}.

Definition: e is a contradiction if there is no valuation v such that \semantics{e}_v = \mathsf{T}, or in other words, if for all valuations v, \semantics{e}_v = \mathsf{F}.

##### 3.1.4 Natural deduction

In H241 (note: not C241), M384, et cetera, you would have been exposed to Fitch-style natural deduction proofs. We will not be doing those, instead opting for Gentzen-style natural deduction.

Natural deduction systems are comprised of inference rules, which look like this:

Formation: How to form a proposition syntactically. For example, given that A is a proposition and B is a proposition, then A \to B is a proposition. This effectively describes the grammar of our logic.

Introduction: How to construct a proof of a given proposition. For example, given that we have a proof of A and a proof of B, then we can construct a proof of A \land B.

Elimination: How to get things from a proof of a given proposition. For example, given that we have a proof of A \land B, we can get a proof of B.

Computation: We will discuss this later in the course.

Uniqueness: We will discuss this later in the course.

##### 3.1.5 Classical propositional logic: proof system

The formation rules are effectively the same as our grammar for prop-expr in Exercise 1, and will be omitted for now, but, for example, they look like this:
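Following the description in the previous section, one standard rendering of the formation rule for implication is:

```latex
\frac{A\ \mathrm{prop} \qquad B\ \mathrm{prop}}{A \to B\ \mathrm{prop}}\ {\to}_{\mathrm{form}}
```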

Implication is by far the most complex of our connectives, as with our current formulation of inference rules, we have no good way of stating a Gentzen-style rule for \to_\mathrm{intro}. So, we will state elimination first:
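Written as a Gentzen-style rule, implication elimination (modus ponens) has the standard form:

```latex
\frac{A \to B \qquad A}{B}\ {\to}_{\mathrm{elim}}
```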

Implication introduction requires more things in our proof system. Intuitively, A \to B is true if, when we assume A, then we can derive B. We don’t have any notion of "assuming" in natural deduction yet, so we need to add some extra things to our system.

Our introduction rule therefore looks like this:
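In standard Gentzen notation, where the bracketed [A] marks an assumption that gets discharged by the rule:

```latex
\frac{\begin{matrix} [A] \\ \vdots \\ B \end{matrix}}{A \to B}\ {\to}_{\mathrm{intro}}
```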

This can be read as "we assume A, and then derive B" means that "if A, then B".

For conjunction, the inference rules are:
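In their standard form, these are:

```latex
\frac{A \qquad B}{A \land B}\ \land_{\mathrm{intro}}
\qquad
\frac{A \land B}{A}\ \land_{\mathrm{elim},1}
\qquad
\frac{A \land B}{B}\ \land_{\mathrm{elim},2}
```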

For disjunction, the inference rules are:
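In their standard form, with bracketed assumptions discharged as in implication introduction:

```latex
\frac{A}{A \lor B}\ \lor_{\mathrm{intro},1}
\qquad
\frac{B}{A \lor B}\ \lor_{\mathrm{intro},2}
\qquad
\frac{A \lor B \qquad \begin{matrix} [A] \\ \vdots \\ C \end{matrix} \qquad \begin{matrix} [B] \\ \vdots \\ C \end{matrix}}{C}\ \lor_{\mathrm{elim}}
```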

For negation, we define \neg A to be A \to \bot (where that symbol is read as "bottom"). This is the first serious departure from most expositions of propositional logic, so make a note of it.

\bot intentionally does not have an introduction rule, as it represents a contradiction. Note that, with this definition, we can prove the law of non-contradiction: that \neg (A \land \neg A):

We have two rules for dealing with \bot:
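These are ex falso (\bot elimination) and double-negation elimination, the rule \mathsf{DNE} referenced in 3.1.8:

```latex
\frac{\bot}{A}\ \bot_{\mathrm{elim}}
\qquad
\frac{\neg \neg A}{A}\ \mathsf{DNE}
```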

##### 3.1.6 Intuitionistic propositional logic

Last lecture, we presented the proof system for classical propositional logic, in which truth corresponds to the Boolean semantics.

We will not be discussing the semantics of IPL: if you are interested, you should look at Kripke models and realizability. (See LACI, 2.9)

The philosophical foundation for constructive mathematics, the primary topic of this course, is that if something is true, you should be able to hand me a witness to that truth. This is based in the notion of intuitionism, which says, in the loosest sense, that what is true is what you can convince me is true.

If we downplay the notion of objective truth and whether or not our semantics can show if something is a tautology, we can instead reason more about what proofs are.

##### 3.1.7 The BHK interpretation

The Brouwer-Heyting-Kolmogorov (BHK) interpretation is a way of assigning, to each proposition, what counts as a proof of it:

- X \to Y is interpreted as a function which, given a proof of X, produces a proof of Y.
- X \land Y is interpreted as a pair (a, b), where a is a proof of X and b is a proof of Y.
- X \lor Y is either (0, a), where a is a proof of X, or (1, b), where b is a proof of Y.
- \neg X is still defined as X \to \bot.
- \bot has no proof.

As an example, suppose that I wanted to prove (A \land \neg A) \to \bot, otherwise known as the law of non-contradiction. Then, I would want to construct a function from A \land \neg A to \bot.

A \land \neg A is interpreted as a pair of proofs for A and \neg A, and \neg A is interpreted as a function A \to \bot.

So, the function f((a, n)) = n(a) is a witness to (A \land \neg A) \to \bot under the BHK interpretation.

##### 3.1.8 Intuitionistic propositional logic: proof system

In the last lecture, we wrote down a set of rules for classical propositional logic. The set of Gentzen-style rules for IPL are exactly the same, except without the rule \mathsf{DNE}.

Why is this the case? Note that in the BHK interpretation, there is not a clear way to construct a witness to \neg \neg A \to A. The only thing we have is a function, the only way we can eliminate a function is to apply it to an argument, and we have no means to conjure an argument to our function out of thin air.

Tying this proof system to the BHK interpretation is the work of elaboration, a later topic in this course.

#### 3.2 Designing tactics

##### 3.2.1` `Tactics

We now have a proof system for IPL. We would like to write code that actually verifies our proofs. To try and create this correspondence, we define a kind of reusable checker, known as a tactic.

Definition: A context, usually denoted by \Gamma, is an association of names of witnesses to the propositions they prove. So, the empty context represents no assumptions, and the context x : A, y : B represents a witness named x that proves A, and a witness named y that proves B.

Note: This is very similar to the notion of an environment from 311.

Definition: A check tactic, or ChkTactic in code, is a function Context Prop -> Void that, given a context and a proposition, does nothing if it can prove the proposition under the context, and errors if it cannot.

Definition: An infer tactic, or SynTactic in code, is a function Context -> Prop that, given a context, returns the proposition that it proves.

We define tactic combinators, functions that take and return tactics, to represent each rule. This leads us to the design recipe for tactics:

1. Determine your output. As a general rule of thumb, introduction rules turn into check tactics, and elimination rules turn into infer tactics.

2. Solve your unknowns. Determine what information above the line needs to be present to check or infer the proposition that you need.

3. Signature. All the things above the line turn into inputs, with their type depending on step 2, and your output turns into your return type.

4. Design your function. This means writing tests, determining how it fits in with other tactics.

5. Test.

Most of the first few steps will be given to you for assignments 1 and 2, primarily because we do not yet have the machinery to make well-informed decisions on what should be a check tactic versus an infer tactic.

##### 3.2.2 Propositions in Racket

```racket
; A Proposition is one of:
; - (prop-atomic Symbol)
; - (prop-→ Proposition Proposition)
; - (prop-∧ Proposition Proposition)
; - (prop-∨ Proposition Proposition)
; - (prop-⊥)
(struct prop-atomic (name) #:transparent)
(struct prop-→ (assumption consequent) #:transparent)
(struct prop-∧ (left right) #:transparent)
(struct prop-∨ (left right) #:transparent)
(struct prop-⊥ () #:transparent)
```

```racket
; parse : Sexp -> Prop
; Parses an infix S-expression into a proposition.
(define (parse s)
  (match s
    [`(,assumption → ,consequent)
     (prop-→ (parse assumption) (parse consequent))]
    [`(,l ∧ ,r) (prop-∧ (parse l) (parse r))]
    [`(,l ∨ ,r) (prop-∨ (parse l) (parse r))]
    [`(¬ ,v) (prop-→ (parse v) (prop-⊥))]
    ['⊥ (prop-⊥)]
    [(? symbol?) (prop-atomic s)]))

; pp : Prop -> Sexp
; Pretty-prints a proposition.
(define (pp prop)
  (match prop
    [(prop-atomic name) name]
    [(prop-→ assumption consequent) `(,(pp assumption) → ,(pp consequent))]
    [(prop-∧ l r) `(,(pp l) ∧ ,(pp r))]
    [(prop-∨ l r) `(,(pp l) ∨ ,(pp r))]
    [(prop-⊥) '⊥]))
```

##### 3.2.3 Tactics in Racket

First, we provide our data definition and helper function for a context.

```racket
; A Context is a [Listof [Pairof Symbol Prop]]

; extend-context : Context Symbol Prop -> Context
; Adds a witness of the given proposition to the context.
(define (extend-context Γ name prop)
  (cons (cons name prop) Γ))

(module+ test
  (check-equal? (extend-context '() 'x (prop-⊥))
                (list (cons 'x (prop-⊥))))
  (check-equal? (extend-context `((x . ,(prop-⊥))) 'y (prop-atomic 'A))
                (list (cons 'y (prop-atomic 'A)) (cons 'x (prop-⊥)))))
```

Note that this is effectively the same as the data-structural representation of an environment from 311: we define it as an association list instead of a tagged list, however.

```racket
; A ChkTactic is a (chk-tactic Symbol [Context Prop -> Void])
; A SynTactic is a (syn-tactic Symbol [Context -> Prop])
(struct chk-tactic (name f) #:transparent
  #:property prop:procedure (struct-field-index f))
(struct syn-tactic (name f) #:transparent
  #:property prop:procedure (struct-field-index f))

; run-chk : Prop ChkTactic -> Void
; Runs the given check tactic on the input proposition.
(define (run-chk goal tactic)
  (tactic '() goal))

; run-syn : SynTactic -> Prop
; Runs the given synthesis tactic, producing the proposition it witnesses.
(define (run-syn tactic)
  (tactic '()))
```

We define these using prop:procedure for two reasons: it is very advantageous to have error messages reference the name of the input tactic (which is what the name field is for), and it makes our contracts significantly easier to write, since they can reference chk-tactic? rather than a function contract.

The run-chk and run-syn functions are merely wrappers that run the tactics on empty contexts. These are usually used for top-level proofs, as we will see later.

##### 3.2.4 Our first example: variable references

We now design our very first rule, which we need in order to have a base case. All of our tactic combinators take other tactics as inputs, so we need some tactic that does not depend on other tactics.

We have our Context, which maps names of witnesses to their propositions. However, we need a way to turn one of those witnesses into a tactic that can be provided to combinators. So, we start by noting that we have no tactics as inputs, and that we will be returning a SynTactic, as (by the next section) we can turn a SynTactic into a ChkTactic easily, but not the other way around.

```racket
; assumption : Symbol -> SynTactic
; Implements the *variable rule*,
; which, given a witness x of P in the context,
; returns a tactic corresponding to x.
(define/contract (assumption s)
  (symbol? . -> . syn-tactic?)
  (syn-tactic 'assumption (λ (Γ) ...)))
```

```racket
(module+ test
  (check-equal? ((assumption 'x) (extend-context '() 'x (prop-atomic 'A)))
                (prop-atomic 'A))
  (check-equal? ((assumption 'y)
                 (extend-context
                  (extend-context '() 'y (prop-→ (prop-atomic 'B) (prop-atomic 'B)))
                  'x (prop-atomic 'A)))
                (prop-→ (prop-atomic 'B) (prop-atomic 'B)))
  (check-error ((assumption 'x) '())))
```

```racket
; assumption : Symbol -> SynTactic
; Implements the *variable rule*,
; which, given a witness x of P in the context,
; imbues x with being a witness of P.
(define/contract (assumption s)
  (symbol? . -> . syn-tactic?)
  (syn-tactic 'assumption (λ (Γ) (dict-ref Γ s))))
```

##### 3.2.5 Examples: conversion and annotation

Let’s work on some of the most basic rules we need in a system like this. Given a check tactic, to make our types line up with all of our combinators, we need to be able to create an infer tactic, and vice versa.

We use the define/contract form to state our signatures and have them checked at runtime, as it results in errors that yell about what tactic you used and whether or not it was a check/infer tactic, rather than incomprehensible errors about procedure arity.

So, we will implement a tactic combinator chk : SynTactic -> ChkTactic, which takes an infer tactic and produces a check tactic with the same behavior. So, given that we have a witness of P, we can create a witness that checks as P.

Steps 1, 2, and 3 of our design recipe are trivial.

```racket
; chk : SynTactic -> ChkTactic
; Implements the *conversion rule*,
; which, given that t is a witness of the proposition P,
; then t is also able to be checked as a witness of P.
(define/contract (chk tac)
  (syn-tactic? . -> . chk-tactic?)
  (chk-tactic 'chk (λ (Γ goal) ...)))
```

```racket
(module+ test
  (define ctx
    (extend-context
     (extend-context '() 'y (prop-→ (prop-atomic 'B) (prop-atomic 'B)))
     'x (prop-atomic 'A)))
  (check-success ((chk (assumption 'x)) ctx (prop-atomic 'A)))
  (check-success ((chk (assumption 'y)) ctx (prop-→ (prop-atomic 'B) (prop-atomic 'B))))
  (check-error ((chk (assumption 'z)) ctx (prop-⊥)))
  (check-error ((chk (assumption 'x)) ctx (prop-⊥))))
```

So, we begin our game of type tetris to fill in the definition. We know that Γ is a Context, that goal is a Prop, and that tac is a SynTactic that we can apply to a Context to get a Prop.

Therefore, for our tactic, we want to throw an error if the goal isn't what we can check against. And we know what the goal needs to match: the result of tac.

```racket
; chk : SynTactic -> ChkTactic
; Implements the *conversion rule*,
; which, given that t is a witness of the proposition P,
; then t is also able to be checked as a witness of P.
(define/contract (chk tac)
  (syn-tactic? . -> . chk-tactic?)
  (chk-tactic 'chk
              (λ (Γ goal)
                (define prop (tac Γ))
                (assert-prop-equal! prop goal))))
```

To go the other direction and make a function imbue that takes a ChkTactic and returns a SynTactic, note that we have an unknown: we aren’t able to use a ChkTactic without a proposition. So, we add a proposition as an argument, and our signature is imbue : ChkTactic Prop -> SynTactic.

```racket
; imbue : ChkTactic Prop -> SynTactic
; Implements the *annotation rule*,
; which, given that t can check as the proposition P,
; allows t to be imbued with being a witness of P.
(define/contract (imbue tac prop)
  (chk-tactic? any/c . -> . syn-tactic?)
  (syn-tactic 'imbue (λ (Γ) ...)))
```

tac is a ChkTactic, which performs a side effect to determine if it checks as the input proposition. So, we have a Context Γ, and we have a Prop prop. Consequently, (tac Γ prop) will run tac on the context and proposition we have as known variables, and then error if there’s an issue.

Finally, since (tac Γ prop) returns void, and we’re creating a SynTactic, we still need to return a proposition that our new tactic is a witness of. This is merely prop.

```racket
; imbue : ChkTactic Prop -> SynTactic
; Implements the *annotation rule*,
; which, given that t can check as the proposition P,
; allows t to be imbued with being a witness of P.
(define/contract (imbue tac prop)
  (chk-tactic? any/c . -> . syn-tactic?)
  (syn-tactic 'imbue
              (λ (Γ)
                (tac Γ prop)
                prop)))
```

We do not add tests to imbue yet, because we have no good ChkTactics to test with.

##### 3.2.6 Check versus infer and solving unknowns

Let’s think a bit harder about why we have these different tactics. In particular, let’s look at the rule for implication elimination:

So, given that A \to B and A, we can prove B. Under our system, it is not at all straightforward to know how to actually do this. How do we know what A and B are?

This is the purpose of our check/infer system, known in the literature as bidirectional typing. Our check rules correspond to what would be known in type theory as "type checking", and our infer rules correspond to "type inference" or "type synthesis".

The question then turns to how to do our design recipe steps 1-2, of figuring out which things above the line and below the line are check/infer. Let’s step through why modus-ponens : SynTactic ChkTactic -> SynTactic works.

First, we infer the proposition corresponding to the implication. Our input SynTactic gives us the proposition A \to B. So, now that we know A \to B, we know what A is and B is. Consequently, we are free to check our tactic witnessing the assumption as A, so that can be a ChkTactic.

Finally, we also know B, which is our consequent, so we are free to return it, thereby giving us a SynTactic.

So how do we determine exactly what combination to use? We design our systems around the criterion of mode-correctness:

- The premises are mode-correct: for each premise, the input meta-variable is known.

- The conclusion is mode-correct: if we have all the premises, the output of the conclusion is known.

So, for example, the signature modus-ponens : ChkTactic ChkTactic -> ChkTactic is not mode-correct. We know B from the goal of the conclusion, but we have no way of figuring out what A is to be able to construct the proposition A \to B.

The following signatures for modus-ponens, on the other hand, are all mode-correct:

- SynTactic SynTactic -> SynTactic
- SynTactic ChkTactic -> SynTactic
- SynTactic ChkTactic -> ChkTactic
- ChkTactic SynTactic -> ChkTactic

Note that if your system is not mode-correct, it cannot be written in code.

The decision to go with introduction rules as check tactics and elimination rules as infer tactics is common practice, though entirely arbitrary. Any mode-correct rule will technically work.

Note, however, that because imbue returns a SynTactic, the choice of rules changes the annotation characteristic of our language: where, and how often, type annotations must be written for a proof to go through. Some guidelines:

- Avoid asserting equality. If you need to use assert-prop-equal! anywhere, you can probably get away with a ChkTactic rather than a SynTactic. Same goes for calling chk anywhere within your tactic.

- Intro rules are check, elimination rules are infer. This is a pretty good baseline to get a good annotation characteristic. Whenever you see an eliminator, you can mentally note that you probably need an annotation.

#### 3.3 Elaboration

##### 3.3.1 What?

In A1, we have a set of tactics that correspond to each individual rule of our IPL proof system. However, we are still missing the crucial piece that ties our BHK interpretation to each rule. This piece is known as elaboration, and will be the start of our shift from propositions to types. The goal of elaboration is for our tactics to produce not just a yes/no answer as to whether our proof is correct, but a piece of syntax that, under the BHK interpretation, is an object you can hand me.

First, we are computer scientists, so we will be representing "functions" in the BHK interpretation using lambdas. In other words, our old example of \neg (A \land \neg A) being represented as f((a, n)) = n(a) is now represented by \lambda p. (\snd{p}) (\fst{p}).

For the time being, we restrict our focus to the implicational fragment of our proof system. In A2, we will be adding back our connectives.

```racket
; A Syntax is one of:
; A DeBruijn *level*:
; - (syntax-local NaturalNumber)
; A lambda:
; - (syntax-lam Symbol Syntax)
; An application:
; - (syntax-app Syntax Syntax)
(struct syntax-local (n) #:transparent)
(struct syntax-lam (var body) #:transparent)
(struct syntax-app (rator rand) #:transparent)
```

##### 3.3.2 DeBruijn levels

Note that I mention DeBruijn levels here. Traditionally, we define a DeBruijn index, as in 311, as a way of "pointing out" to our lambda. So, for example, the term \lambda x.\, \lambda y.\, x is written with indices as \lambda.\, \lambda.\, 1: each variable counts how many binders it must skip to reach its own binder, from the inside out.

DeBruijn levels are the dual to these: instead of counting inside out, we count outside in. So, \lambda x.\, \lambda y.\, x is written with levels as \lambda.\, \lambda.\, 0, since x refers to the outermost binder, which is numbered 0.

The motivation for this is that we can now not only unambiguously refer to variables as with DeBruijn indices, but we do not need to keep track of the environment size at all: we can simply use list-ref as our environment application.

##### 3.3.3 Elaborating tactics

Our last iteration of tactics has no clear way to return anything beyond a yes/no result. So, we need to update our definition of a tactic.

Definition: A check tactic is now a function Context Prop -> Syntax that, given a context and a proposition, returns the BHK interpretation of the proof it represents if it can prove the proposition under the context, and errors if it cannot.

Definition: An infer tactic is now a function Context -> [PairOf Prop Syntax] that, given a context, returns both the proposition that it proves and the BHK interpretation of the proof of that proposition.

So, we now change our data definitions accordingly. run-chk is now Prop ChkTactic -> Syntax, run-syn is now SynTactic -> [PairOf Prop Syntax], et cetera.

```racket
; context-index-of : Context Symbol -> NaturalNumber
; Returns the index of the symbol in the context.
(define (context-index-of Γ s)
  (match Γ
    ['() (error "not found")]
    [(cons (cons x _) rst)
     (cond [(eq? x s) 0]
           [else (add1 (context-index-of rst s))])]))

(module+ test
  (check-error (context-index-of '() 'x))
  (check-equal? (context-index-of (list (cons 'x (prop-⊥))) 'x) 0)
  (check-equal? (context-index-of (list (cons 'x (prop-⊥)) (cons 'y (prop-atomic 'A))) 'y) 1))
```

So, our old tactics need some adjustment. We start with assumption, which returns a SynTactic, which needs to return both a proposition and a core term.

```racket
; assumption : Symbol -> SynTactic
; Implements the *variable rule*,
; which, given a witness x : P in the context,
; returns a tactic corresponding to x.
(define/contract (assumption s)
  (symbol? . -> . syn-tactic?)
  (syn-tactic 'assumption
              (λ (Γ)
                (cons (dict-ref Γ s)
                      (syntax-local (- (length Γ) (context-index-of Γ s) 1))))))

(module+ test
  (check-equal? ((assumption 'x) (extend-context '() 'x (prop-atomic 'A)))
                (cons (prop-atomic 'A) (syntax-local 0)))
  (check-error (run-syn (assumption 'x))))
```

Note that this assumes that your introduce from A1 uses extend-context in the normal way. (If you haven’t looked at A1 yet, do that!)

```racket
; chk : SynTactic -> ChkTactic
; Implements the *conversion rule*,
; which, given that t is a witness of P,
; then t is able to be checked as a witness of P.
(define/contract (chk tac)
  (syn-tactic? . -> . chk-tactic?)
  (chk-tactic 'chk
              (λ (Γ goal)
                (match-define (cons prop tm) (tac Γ))
                (assert-prop-equal! prop goal)
                tm)))

(module+ test
  (define ctx
    (extend-context
     (extend-context '() 'y (prop-→ (prop-atomic 'B) (prop-atomic 'B)))
     'x (prop-atomic 'A)))
  (check-equal? ((chk (assumption 'x)) ctx (prop-atomic 'A))
                (syntax-local 1))
  (check-equal? ((chk (assumption 'y)) ctx (prop-→ (prop-atomic 'B) (prop-atomic 'B)))
                (syntax-local 0))
  (check-error ((chk (assumption 'x)) ctx (prop-atomic 'B))))
```

```racket
; imbue : ChkTactic Prop -> SynTactic
; Implements the *annotation rule*,
; which, given that t can check as the proposition P,
; gives us a new tactic that is a witness of P.
(define/contract (imbue tac prop)
  (chk-tactic? any/c . -> . syn-tactic?)
  (syn-tactic 'imbue
              (λ (Γ)
                (define tm (tac Γ prop))
                (cons prop tm))))

(module+ test
  (check-equal? ((imbue (chk (assumption 'x)) (prop-atomic 'A)) ctx)
                (cons (prop-atomic 'A) (syntax-local 1)))
  (check-error ((imbue (chk (assumption 'x)) (prop-atomic 'B)) ctx)))
```

Turning the rest of your A1 tactics into elaborating ones is part of the subject of A2.

#### 3.4 The simply-typed lambda calculus

##### 3.4.1 The Curry-Howard correspondence

Surprise! We’re doing type theory in the type theory class. Big reveal.

- Function types correspond to implications.
- Product types/pair types correspond to logical and.
- Sum types correspond to logical or.
- The empty type corresponds to \bot.

Given the way that we represent BHK terms after elaboration, this should come as no surprise.

In addition, this gives the somewhat surprising correspondence that an inhabited type corresponds to an intuitionistic tautology.

However, in type theory, we traditionally have a term associated with a type. We define our concrete syntax to be this kind of term, with the traditional lambda calculus grammar:

```racket
; A ConcreteSyntax is one of:
; - (cs-var Symbol)
; - (cs-lam Symbol ConcreteSyntax)
; - (cs-app ConcreteSyntax ConcreteSyntax)
; - (cs-ann ConcreteSyntax Type)
(struct cs-var (name) #:transparent)
(struct cs-lam (x body) #:transparent)
(struct cs-app (rator rand) #:transparent)
(struct cs-ann (tm tp) #:transparent)
```

##### 3.4.2 Type theory notation

Before we move on to writing a type checker for STLC, we should actually state our type rules.

First, the most glaringly strange thing in our current IPL proof system is the notion of a hypothetical derivation in our intro rule:
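Recall the rule's standard Gentzen form, with the bracketed [A] marking a discharged assumption:

```latex
\frac{\begin{matrix} [A] \\ \vdots \\ B \end{matrix}}{A \to B}\ {\to}_{\mathrm{intro}}
```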

The notion of hypothetical derivation is useful, but currently exists at a meta-theoretic level. We now begin to label our antecedents with names, put them in our context, and then use the symbol \vdash to mean "proves".

In addition, we annotate each bit of our concrete syntax (corresponding to a non-elaborated BHK interpretation) with its type.

As an immediate example, our implication introduction rule becomes:
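Following the reading given just below, the rule is:

```latex
\frac{\Gamma, x : A \vdash y : B}{\Gamma \vdash \lambda x.\, y : A \to B}\ {\to}_{\mathrm{intro}}
```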

Let’s break this down. The line \Gamma, x : A \vdash y : B reads "under a context \Gamma where x has type A, we can prove y has type B". Note the subtle distinction between : on the left hand side of \vdash compared to the right.

We now need to rephrase our other rules in a similar fashion. Implication elimination is straightforward:
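In this notation, implication elimination is the standard typed application rule:

```latex
\frac{\Gamma \vdash f : A \to B \qquad \Gamma \vdash a : A}{\Gamma \vdash f\ a : B}\ {\to}_{\mathrm{elim}}
```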

To finish up the proof system for the implicational fragment of IPL, we need to add one more rule with regards to contexts that made no sense prior. In particular, we need a way to extract things from our context:
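This is the standard variable rule:

```latex
\frac{x : A \in \Gamma}{\Gamma \vdash x : A}\ \mathsf{var}
```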

All of our other rules are more or less trivial to rephrase like this.

As an example, we are now free to actually write a proof tree to show that \vdash \lambda x. \lambda y. x : A \to (B \to A):

We then also wrote a proof tree for \vdash \lambda a. \lambda f. f\ a : A \to ((A \to B) \to B) in class. (If you’re reading this alone, do it yourself!)

##### 3.4.3 Bidirectional typing

We’ve already been doing this for quite some time, but the time has come to formalize our tactics in good old fashioned math notation.

Our \Gamma \vdash x : A notation is relatively good, but does not give us a straightforward path to implementation. Our check tactics, which correspond to the sequent \Gamma \vdash x : A being valid, are written as \Gamma \vdash x \Leftarrow A. Our infer tactics, which correspond to the sequent giving us what it proves, are written as \Gamma \vdash x \Rightarrow A.

If we simply adapt our existing signatures to the three rules for the implicational fragment of STLC above, we get the following rules:
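Matching the tactic signatures we already have (var infers, introduce checks, and app takes an infer and a check and infers), these rules read:

```latex
\frac{x : A \in \Gamma}{\Gamma \vdash x \Rightarrow A}
\qquad
\frac{\Gamma, x : A \vdash y \Leftarrow B}{\Gamma \vdash \lambda x.\, y \Leftarrow A \to B}
\qquad
\frac{\Gamma \vdash f \Rightarrow A \to B \qquad \Gamma \vdash a \Leftarrow A}{\Gamma \vdash f\ a \Rightarrow B}
```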

To sanity check our work, we define a criterion to know when we’re done writing our bidirectional rules:

Definition: A bidirectional type system is complete if whenever \Gamma \vdash e : T, then \Gamma \vdash e’ \Leftarrow T and \Gamma \vdash e’ \Rightarrow T, where e’ is a version of e with any amount of added type annotations.

This system doesn’t seem complete, and it’s not. If we try to construct the proof tree for \vdash \lambda a. \lambda f. f\ a : A \to ((A \to B) \to B), we can’t, because we have no way to turn inference into checking and vice versa. In our tactic system, we wrote chk and imbue for this purpose, so let’s turn them into rules.

chk is relatively easy, and is called the conversion rule:
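In its standard form, the conversion rule says an inferred type can be checked against an equal goal:

```latex
\frac{\Gamma \vdash e \Rightarrow A \qquad A = B}{\Gamma \vdash e \Leftarrow B}\ \mathsf{conv}
```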

Going the other way with imbue is more complicated, and what our cs-ann notation is for. This is yet another override of what colon means.
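In its standard form, the annotation rule lets a checkable term synthesize its annotated type:

```latex
\frac{\Gamma \vdash e \Leftarrow A}{\Gamma \vdash (e : A) \Rightarrow A}\ \mathsf{ann}
```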

We now write the proof tree for \vdash \lambda a. \lambda f. f\ a \Leftarrow A \to ((A \to B) \to B). (Again, do it yourself!)

##### 3.4.4 Checking our concrete syntax

- chk stays the same.
- imbue becomes ann.
- introduce stays the same.
- modus-ponens becomes app.
- assumption becomes var.

We now want to make a type checker (or synthesizer) given a term. We do this with two functions: type-check : ConcreteSyntax -> ChkTactic, and type-infer : ConcreteSyntax -> SynTactic, each of which produces a tactic corresponding to the syntax we have.

In the context of type-check, you can think of this kind of like a curried verifier for a given sequent: to show that \Gamma \vdash e \Leftarrow T, we give type-check e, and it gives us a tactic taking \Gamma and T. The same logic applies for type-infer, which takes e, \Gamma, and gives us T.

```racket
; type-check : ConcreteSyntax -> ChkTactic
; Produces a type checker for the given concrete term.
(define (type-check stx)
  (match stx
    [(cs-var name) ...]
    [(cs-lam var body) ...]
    [(cs-app rator rand) ...]
    [(cs-ann tm tp) ...]))

; type-infer : ConcreteSyntax -> SynTactic
; Produces a type inferrer for the given concrete term.
(define (type-infer stx)
  (match stx
    [(cs-var name) ...]
    [(cs-lam var body) ...]
    [(cs-app rator rand) ...]
    [(cs-ann tm tp) ...]))
```

Let’s start with type-check. Note that the only tactic combinator we have (aside from ann) that returns a ChkTactic is introduce, which takes a ChkTactic as input.

```racket
; type-check : ConcreteSyntax -> ChkTactic
; Produces a type checker for the given concrete term.
(define (type-check stx)
  (match stx
    [(cs-lam var body) (introduce var (type-check body))]
    [_ (chk (type-infer stx))]))
```

```racket
; type-infer : ConcreteSyntax -> SynTactic
; Produces a type inferrer for the given concrete term.
(define (type-infer stx)
  (match stx
    [(cs-var name) (var name)]
    [(cs-lam var body) ...]
    [(cs-app rator rand) (app (type-infer rator) (type-check rand))]
    [(cs-ann tm tp) (ann (type-check tm) tp)]))
```

We have no way to infer the type of a bare lambda, so in the final version we produce a helpful error asking for an annotation instead:

```racket
; type-infer : ConcreteSyntax -> SynTactic
; Produces a type inferrer for the given concrete term.
(define (type-infer stx)
  (match stx
    [(cs-var name) (var name)]
    [(cs-app rator rand) (app (type-infer rator) (type-check rand))]
    [(cs-ann tm tp) (ann (type-check tm) tp)]
    [_ (error (format "the term ~a needs more type annotations!" stx))]))
```

```racket
(run-chk (parse '(A → ((A → B) → B)))
         (type-check
          (cs-lam 'a (cs-lam 'f (cs-app (cs-var 'f) (cs-var 'a))))))
```

#### 3.5 Normalization by evaluation

##### 3.5.1 Deciding equivalence

We now shift topics to deciding if two lambda terms are equal, for the intuitive notion of equality. Some of you are likely screaming "α-equivalence", and this is somewhat correct. Furthermore, since we decided to use DeBruijn levels, α-equivalent terms are already syntactically identical, and we don't have to worry about renaming during substitution.

However, consider terms such as \lambda y.\, ((\lambda x.\, x)\ y) and \lambda y.\, y: they are not α-equivalent, yet any reasonable notion of equality should identify them.

In general, to solve a problem like this, we want to reduce both terms to a common normal form and compare the results syntactically.

So can we just compare the results of evaluation? Let’s give it a shot.

##### 3.5.2 Data definitions for STLC

```racket
; A Syntax is one of:
; A DeBruijn *level*:
; - (syntax-local Number)
; A lambda:
; - (syntax-lam Symbol Syntax)
; A let-binding:
; - (syntax-let Symbol Syntax Syntax)
; An application:
; - (syntax-app Syntax Syntax)
(struct syntax-local (n) #:transparent)
(struct syntax-lam (var body) #:transparent)
(struct syntax-let (var exp body) #:transparent)
(struct syntax-app (rator rand) #:transparent)
```

```racket
; A Closure is a (closure Syntax Environment)
(struct closure (term env) #:transparent)

; A Value is a Closure
```

```racket
; An Environment is a [ListOf [PromiseOf Value]]

; empty-env : -> Environment
(define (empty-env) '())
; extend-env : Environment [PromiseOf Value] -> Environment
(define (extend-env env x) (cons x env))
; apply-env : Environment Number -> [PromiseOf Value]
(define (apply-env env n) (list-ref env n))
; env-size : Environment -> Number
(define (env-size env) (length env))
```
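Because environments hold promises, the right-hand side of a binding is only evaluated the first time it is looked up, and the result is memoized. A small self-contained check (the environment operations re-listed, with a hypothetical binding):

```racket
#lang racket
; The environment operations from above, re-listed so this sketch
; runs on its own.
(define (empty-env) '())
(define (extend-env env x) (cons x env))
(define (apply-env env n) (list-ref env n))

; The right-hand side is not computed until it is looked up:
(define env
  (extend-env (empty-env)
              (delay (begin (displayln "evaluating!") 42))))

(force (apply-env env 0))  ; prints "evaluating!" and returns 42
(force (apply-env env 0))  ; the promise is memoized: returns 42 silently
```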

##### 3.5.3` `Building terms with DeBruijn levels (or: continuation HOAS)

We need a bit of scaffolding to actually write out our terms with DeBruijn levels. The upshot of DeBruijn indices is that we can very easily plug things, something we lose with levels. However, we no longer have to do shifting math.

```racket
; A [TB X] is a [Number -> X]

; run-term-builder : {X} Environment [TB X] -> X
; Produces the built core term, given an environment.
(define (run-term-builder env k) (k (env-size env)))
```

If you’re cheeky, you may think of the continuation monad.

Yes.

```racket
; tb/var : Number -> [TB Syntax]
; Given a DeBruijn level, produce a term builder giving a DeBruijn index in syntax.
(define (tb/var lvl)
  (λ (size) (syntax-local (- size lvl 1))))

(module+ test
  (check-equal? ((tb/var 1) 3) (syntax-local 1))
  (check-equal? ((tb/var 0) 1) (syntax-local 0)))
```

```racket
; tb/bind-var : {X} [Number -> [TB X]] -> [TB X]
; Take the given term-builder-builder, and increment its environment size.
(define (tb/bind-var k)
  (λ (size) ((k size) (+ size 1))))
```

```racket
; tb/scope : {X} [[TB Syntax] -> [TB X]] -> [TB X]
; Scopes out a variable, binding it for the given term-builder-builder.
(define (tb/scope k)
  (tb/bind-var (λ (lvl) (k (tb/var lvl)))))
```

```racket
; tb/let : Symbol [TB Syntax] [[TB Syntax] -> [TB Syntax]] -> [TB Syntax]
; Build a let binding.
(define (tb/let var e body)
  (λ (size) (syntax-let var (e size) ((tb/scope body) size))))

; tb/lam : Symbol [[TB Syntax] -> [TB Syntax]] -> [TB Syntax]
; Build a lambda.
(define (tb/lam var body)
  (λ (size) (syntax-lam var ((tb/scope body) size))))

; tb/app : [TB Syntax] [TB Syntax] -> [TB Syntax]
; Build an application.
(define (tb/app rator rand)
  (λ (size) (syntax-app (rator size) (rand size))))
```

```racket
(define church-zero
  (tb/lam 'f (λ (f) (tb/lam 'x (λ (x) x)))))

(define church-add1
  (tb/lam 'n-1
    (λ (n-1)
      (tb/lam 'f
        (λ (f)
          (tb/lam 'x
            (λ (x) (tb/app f (tb/app (tb/app n-1 f) x)))))))))
```
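As a sanity check, running church-zero through run-term-builder should produce the syntax for λf. λx. x, with the variable x appearing as index 0. Here is a self-contained sketch (the structs and builders from above re-listed so it runs on its own):

```racket
#lang racket
; Structs and term builders from above, re-listed for a standalone check.
(struct syntax-local (n) #:transparent)
(struct syntax-lam (var body) #:transparent)
(struct syntax-app (rator rand) #:transparent)

(define (empty-env) '())
(define (env-size env) (length env))
(define (run-term-builder env k) (k (env-size env)))

(define (tb/var lvl) (λ (size) (syntax-local (- size lvl 1))))
(define (tb/bind-var k) (λ (size) ((k size) (+ size 1))))
(define (tb/scope k) (tb/bind-var (λ (lvl) (k (tb/var lvl)))))
(define (tb/lam var body) (λ (size) (syntax-lam var ((tb/scope body) size))))
(define (tb/app rator rand) (λ (size) (syntax-app (rator size) (rand size))))

(define church-zero (tb/lam 'f (λ (f) (tb/lam 'x (λ (x) x)))))

(run-term-builder (empty-env) church-zero)
; ⇒ (syntax-lam 'f (syntax-lam 'x (syntax-local 0))), i.e. λf. λx. x
```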

##### 3.5.4` `The evaluator

```racket
; evaluate : Syntax Environment -> Value
; Evaluates the expression to a closure.
(define (evaluate exp env)
  (match exp
    [(syntax-local n) (force (apply-env env n))]
    [(syntax-lam _ body) (closure body env)]
    [(syntax-let var exp body)
     (define unfold (delay (evaluate exp env)))
     (evaluate body (extend-env env unfold))]
    [(syntax-app rator rand)
     (evaluate/do-app (evaluate rator env) (evaluate rand env))]))

; evaluate/do-app : Value Value -> Value
; Performs an application of the rator to the rand.
(define (evaluate/do-app rator rand)
  (match-define (closure tm env) rator)
  (evaluate tm (extend-env env (delay/strict rand))))
```
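To see the evaluator in action, here is a self-contained sketch (let-bindings omitted for brevity, and the environment operations inlined as plain list operations) that evaluates (λx. x) (λy. y):

```racket
#lang racket
; Compressed sketch of the evaluator above: no lets, environments as
; plain lists of promises.
(struct syntax-local (n) #:transparent)
(struct syntax-lam (var body) #:transparent)
(struct syntax-app (rator rand) #:transparent)
(struct closure (term env) #:transparent)

(define (evaluate exp env)
  (match exp
    [(syntax-local n) (force (list-ref env n))]
    [(syntax-lam _ body) (closure body env)]
    [(syntax-app rator rand)
     (do-app (evaluate rator env) (evaluate rand env))]))

(define (do-app rator rand)
  (match-define (closure tm env) rator)
  (evaluate tm (cons (delay/strict rand) env)))

; ((λx. x) (λy. y)) evaluates to the closure for λy. y:
(evaluate (syntax-app (syntax-lam 'x (syntax-local 0))
                      (syntax-lam 'y (syntax-local 0)))
          '())
; ⇒ (closure (syntax-local 0) '())
```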

##### 3.5.5` `Normal forms

In lambda calculus, both α-equivalence (consistently renaming bound variables) and β-reduction (substituting an argument for a bound variable), written as (\lambda x.\ e)\ e' \to e[e'/x], preserve the meaning of a term.

Definition: A (β-)normal form is a term that contains no redexes, or reducible expressions: no further β-reduction can be performed.

One obvious way to perform normalization is to recursively search a piece of syntax for redexes, reducing them until none remain. It’s very elegant to describe in a single sentence. In practice, the code is awful to read, and it performs terribly. Let’s not do that and never discuss it again.

```
norm    = neutral
        | (λ (id) norm)

neutral = id
        | (neutral norm)
```

```racket
; A Cut is a (value-cut Head [ListOf Form])
(struct value-cut (head spine) #:transparent)

; A Head is one of:
; - (value-local Number)
; - (value-let Number [PromiseOf Value])
(struct value-local (n) #:transparent)
(struct value-let (n promise) #:transparent)

; A Form is one of:
; - (value-app Value)
(struct value-app (v) #:transparent)

; A Value is one of:
; - Cut
; - (value-lam String Closure)
(struct value-lam (name closure) #:transparent)
```

Our Value represents our normal forms, and our Cut represents our neutrals. In essence, a Cut is a stack of eliminators (the spine) applied to a head, which is a variable; our only eliminator so far is application.
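For instance, the stuck application x y, with x and y represented as levels 0 and 1, is a cut whose head is x and whose spine carries a single application form. A self-contained sketch using the structs above:

```racket
#lang racket
; Structs from above, re-listed so this sketch runs on its own.
(struct value-cut (head spine) #:transparent)
(struct value-local (n) #:transparent)
(struct value-app (v) #:transparent)

; the stuck term (x y), with x at level 0 and y at level 1
(define y (value-cut (value-local 1) '()))
(define x-applied-to-y
  (value-cut (value-local 0) (list (value-app y))))
```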

Our evaluator will then produce one of these. Once we have one, we need to be able to move backwards to get back our Syntax, since our Closure data type still holds an unevaluated body and an environment rather than syntax. So how do we turn a closure, which is stuck, back into a piece of syntax?

To do this, we pull a magic symbolic variable out of our hat, and then apply the closure to it with our usual evaluation. This process is called quotation, but since we’re writing Racket and we can’t call it that, I’ll be calling it reification.

Let’s work an example. To distinguish our syntax and value lambdas, I’ll be writing \lambda_S and \lambda_V. So, let’s try normalizing (\lambda_S x. \lambda_S y. x\ y) (\lambda_S x. x).

We call evaluate with an empty environment:

We now want to perform reification to read it back into a piece of syntax. Let’s suppose we had a magic variable \mathbf{y} we pull out from our hat. Then, to reify:

I’ll also work an example with a let binding in class.

```racket
; cut/local : Number -> Cut
(define (cut/local n)
  (value-cut (value-local n) '()))

; cut/let-bind : Number [PromiseOf Value] -> Cut
(define (cut/let-bind lvl p)
  (value-cut (value-let lvl p) '()))
```

```racket
; evaluate : Syntax Environment -> Value
; Evaluates the expression to a value.
(define (evaluate exp env)
  (match exp
    [(syntax-local n) (force (apply-env env n))]
    [(syntax-lam var body) (value-lam var (closure body env))]
    [(syntax-let var exp body)
     (define unfold
       (cut/let-bind (env-size env) (delay (evaluate exp env))))
     (evaluate body (extend-env env unfold))]
    [(syntax-app rator rand)
     (evaluate/do-app (evaluate rator env) (evaluate rand env))]))
```

```racket
; evaluate/do-app : Value Value -> Value
(define (evaluate/do-app rator rand)
  (match rator
    [(value-lam _ closure) (evaluate/apply-closure closure rand)]
    [(value-cut head spine)
     (push-form rator (value-app rand)
                (λ (rator^) (evaluate/do-app rator^ rand)))]))

; evaluate/apply-closure : Closure Value -> Value
(define (evaluate/apply-closure clo rand)
  (match-define (closure tm env) clo)
  (evaluate tm (extend-env env (delay/strict rand))))

; push-form : Cut Form [Value -> Value] -> Cut
(define (push-form cut form unfold)
  (match-define (value-cut head spine) cut)
  (define new-head
    (match head
      [(value-local n) (value-local n)]
      [(value-let lvl promise)
       (value-let lvl (delay (unfold (force promise))))]))
  ; the spine is stored newest-form-first
  (value-cut new-head (cons form spine)))

; QUOTATION

; reify-bind-var : {X} Number [Number Value -> X] -> X
(define (reify-bind-var size k)
  (k (add1 size) (cut/local size)))

; reify : Number Value -> Syntax
(define (reify size val)
  (match val
    [(value-cut _ _) (reify-cut size val)]
    [(value-lam var closure)
     (syntax-lam var (reify-closure size closure))]))

; reify-cut : Number Cut -> Syntax
(define (reify-cut size cut)
  (match-define (value-cut head spine) cut)
  ; reify the spine oldest-form-first
  (reify-spine size (reify-head size head) (reverse spine)))

; reify-spine : Number Syntax [ListOf Form] -> Syntax
(define (reify-spine size exp spine)
  (match spine
    ['() exp]
    [(cons form rest)
     (reify-spine size (reify-form size exp form) rest)]))

; reify-form : Number Syntax Form -> Syntax
(define (reify-form size exp form)
  (match form
    [(value-app rand) (syntax-app exp (reify size rand))]))

; reify-head : Number Head -> Syntax
(define (reify-head size head)
  (match head
    ; convert the level back into an index
    [(value-local lvl) (syntax-local (- size lvl 1))]
    [(value-let _ unfold) (reify size (force unfold))]))

; reify-closure : Number Closure -> Syntax
(define (reify-closure size closure)
  (reify-bind-var size
    (λ (size^ arg)
      (reify size^ (evaluate/apply-closure closure arg)))))
```
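Putting everything together, here is a compressed, self-contained sketch of the evaluator and reifier (let-bindings and their value-let heads omitted for brevity) that normalizes the earlier example (\lambda_S x.\ \lambda_S y.\ x\ y)\ (\lambda_S x.\ x) down to \lambda_S y.\ y:

```racket
#lang racket
; Compressed NbE sketch: plain lambda terms only (no lets).
(struct syntax-local (n) #:transparent)          ; DeBruijn index
(struct syntax-lam (var body) #:transparent)
(struct syntax-app (rator rand) #:transparent)

(struct closure (term env) #:transparent)
(struct value-cut (head spine) #:transparent)    ; spine stored newest-first
(struct value-local (n) #:transparent)           ; DeBruijn level
(struct value-app (v) #:transparent)
(struct value-lam (name clo) #:transparent)

(define (evaluate exp env)
  (match exp
    [(syntax-local n) (force (list-ref env n))]
    [(syntax-lam var body) (value-lam var (closure body env))]
    [(syntax-app rator rand)
     (do-app (evaluate rator env) (evaluate rand env))]))

(define (do-app rator rand)
  (match rator
    [(value-lam _ clo) (apply-closure clo rand)]
    [(value-cut head spine)
     (value-cut head (cons (value-app rand) spine))]))

(define (apply-closure clo rand)
  (match-define (closure tm env) clo)
  (evaluate tm (cons (delay/strict rand) env)))

(define (reify size val)
  (match val
    [(value-cut head spine)
     (define head-syntax
       (match head
         [(value-local lvl) (syntax-local (- size lvl 1))])) ; level -> index
     (for/fold ([exp head-syntax]) ([form (reverse spine)])
       (match form
         [(value-app rand) (syntax-app exp (reify size rand))]))]
    [(value-lam var clo)
     ; apply the closure to a fresh variable at the next level
     (define arg (value-cut (value-local size) '()))
     (syntax-lam var (reify (add1 size) (apply-closure clo arg)))]))

; (λx. λy. x y) (λx. x)
(define example
  (syntax-app
   (syntax-lam 'x (syntax-lam 'y (syntax-app (syntax-local 1) (syntax-local 0))))
   (syntax-lam 'x (syntax-local 0))))

(reify 0 (evaluate example '()))
; ⇒ (syntax-lam 'y (syntax-local 0)), i.e. λy. y
```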

Yeah.