
3 Lecture notes

    3.1 Propositional logic

    3.2 Designing tactics

    3.3 Elaboration

    3.4 The simply-typed lambda calculus

    3.5 Normalization by evaluation

3.1 Propositional logic

\newcommand{\ir}[3]{\displaystyle\frac{#2}{#3}~{\textstyle #1}} \newcommand{\semantics}[1]{[\![ #1 ]\!]} \newcommand{\fst}[1]{\mathsf{fst\ } #1} \newcommand{\snd}[1]{\mathsf{snd\ } #1}

3.1.1 The syllabus

Go over the premise of the course and go over the beats of the course. All of this content is in the syllabus. Have you read the syllabus? You should read the syllabus. If you’re reading this paragraph, you should be reading the syllabus instead.

3.1.2 Philosophical foundations of type theory

The first rule of type theory is you don’t ask what a type is.

In set theory, when we make a statement like 2 \in \mathbb{N}, we are making a statement about \mathbb{N}: we postulated the existence of this inductive set with infinitely many elements, and however we chose to represent 2 is an element of that inductively defined set.

In type theory, the definition of \mathbb{N} looks a lot more like a description: we add two rules to our theory,

\ir{}{}{\Gamma \vdash \mathrm{zero} : \mathbb{N}} ~~ \ir{}{\Gamma \vdash m : \mathbb{N}}{\Gamma \vdash \mathrm{suc}\ m : \mathbb{N}}

both of which describe what a natural number is.

Don’t worry about what these mean yet, but you can read them as "under any context, zero is a natural number" and "if, under some context, m is a natural number, then \mathrm{suc\ } m is a natural number".

When we say that 2 : \mathbb{N}, then, we’re saying that \mathrm{suc\ (suc\ zero)} : \mathbb{N}, which makes a statement about 2.
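To make the contrast concrete, here is how those two rules might be rendered as a Racket data definition, in the style we use later in these notes (a sketch; the struct names are illustrative):
; A Nat is one of:
; - (nat-zero)       ; "zero is a natural number"
; - (nat-suc Nat)    ; "if m is a natural number, so is suc m"
(struct nat-zero () #:transparent)
(struct nat-suc (pred) #:transparent)

; 2 is then suc (suc zero):
(define two (nat-suc (nat-suc (nat-zero))))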

This is useful because in set theory, the intuition is you’ve constructed an infinitely large set of things, and to check membership you just traverse that infinite set. Computers are really bad at doing anything that’s infinite. (Obviously, there are ways of axiomatizing set theory that avoid this. The point is that the philosophy of type theory is more compatible with computer-aided proof.)

3.1.3 Classical propositional logic: semantics

Definition: Propositional logic is defined by the grammar:
  prop-expr = var
  | (prop-expr → prop-expr)
  | (prop-expr ∧ prop-expr)
  | (prop-expr ∨ prop-expr)
  | (¬ prop-expr)
where var is some set of atomic symbols representing the variables.

Classical propositional logic deals with notions of "true" and "false", and the semantics of it are truth-tables. As an example:

  A   B   A \to B
  T   T   T
  T   F   F
  F   T   T
  F   F   T

where each truth value has a different meaning for each propositional connective. (We went over each connective, and worked some examples.)

Definition: A valuation is a function v : \mathsf{Var} \to \{\mathsf{T}, \mathsf{F}\} that assigns meaning to each variable. Given a valuation v and a prop-expr e, we write \semantics{e}_v to mean "the meaning of e under the valuation v".

We define the meaning of \semantics{e}_v recursively for each connective:

\begin{align*} \semantics{X}_v &= v(X), \mathrm{when\ } X \in \mathsf{Var} \\ \semantics{X \land Y}_v &= \semantics{X}_v \land \semantics{Y}_v \\ \semantics{X \lor Y}_v &= \semantics{X}_v \lor \semantics{Y}_v \\ \semantics{X \to Y}_v &= \semantics{X}_v \to \semantics{Y}_v \\ \semantics{\neg X}_v &= \neg \semantics{X}_v \end{align*}
where each connective on booleans is defined as usual.

Definition: A prop-expr e is a tautology if for all valuations v, \semantics{e}_v = \mathsf{T}.

Definition: e is satisfiable if there is a valuation v such that \semantics{e}_v = \mathsf{T}.

Definition: e is a contradiction if there is no valuation v such that \semantics{e}_v = \mathsf{T}, or in other words, if for all valuations v, \semantics{e}_v = \mathsf{F}.
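Since the semantics is just structural recursion, it is worth seeing in code. A minimal sketch, assuming a prop-expr is represented as an S-expression from the grammar above and a valuation is a [HashOf Symbol Boolean] (the function meaning is illustrative, not part of the assignments):
; meaning : Sexp [HashOf Symbol Boolean] -> Boolean
; Computes the meaning of e under the valuation v,
; by structural recursion on the connectives.
(define (meaning e v)
  (match e
    [`(,x → ,y) (or (not (meaning x v)) (meaning y v))]
    [`(,x ∧ ,y) (and (meaning x v) (meaning y v))]
    [`(,x ∨ ,y) (or (meaning x v) (meaning y v))]
    [`(¬ ,x) (not (meaning x v))]
    [(? symbol?) (hash-ref v e)]))

; A → B is false exactly when A is true and B is false:
(meaning '(A → B) (hash 'A #t 'B #f)) ; => #f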

3.1.4 Natural deduction

In H241 (note: not C241), M384, et cetera, you would have been exposed to Fitch-style natural deduction proofs. We will not be doing those, instead opting for Gentzen-style natural deduction.

Natural deduction systems are built from inference rules, which look like this:

\ir{}{\mathrm{assumption\ 1} ~~~ \mathrm{assumption\ 2} ~~~ \ldots ~~~ \mathrm{assumption\ } n}{\mathrm{consequent}}
where everything above the line is something we already know, and the single thing below the line is something we get as a result.

All of our connectives will have one of four (or five) types of associated rules:
  • Formation: How to form a proposition syntactically. For example, given that A is a proposition and B is a proposition, then A \to B is a proposition. This effectively describes the grammar of our logic.

  • Introduction: How to construct a proof of a given proposition. For example, given that we have a proof of A and a proof of B, then we can construct a proof of A \land B.

  • Elimination: How to get things from a proof of a given proposition. For example, given that we have a proof of A \land B, we can get a proof of B.

  • Computation: We will discuss this later in the course.

  • Uniqueness: We will discuss this later in the course.

3.1.5 Classical propositional logic: proof system

The formation rules are effectively the same as our grammar for prop-expr in Exercise 1, and will be omitted for now, but, for example, they look like this:

\ir{(\land_{\mathrm{form}})}{A \mathrm{\ prop} ~~~ B \mathrm{\ prop}}{A \land B \mathrm{\ prop}} ~~~ \ir{(\bot_{\mathrm{form}})}{}{\bot \mathrm{\ prop}}

Implication is by far the most complex of our connectives, as with our current formulation of inference rules, we have no good way of stating a Gentzen-style rule for \to_\mathrm{intro}. So, we will state elimination first:

\ir{(\to_{\mathrm{elim}})}{A \to B ~~~ A}{B}
This is often called "modus ponens", and it says that if A implies B, and also A, then we have B.

Implication introduction requires more things in our proof system. Intuitively, A \to B is true if, when we assume A, then we can derive B. We don’t have any notion of "assuming" in natural deduction yet, so we need to add some extra things to our system.

Our introduction rule therefore looks like this:

\ir{(\to_{\mathrm{intro}})} {\begin{matrix} \ir{}{}{A} \\ \vdots \\ B \end{matrix}} {A \to B}
where the \vdots represents a hypothetical derivation. We will be revisiting this very soon.

This can be read as "we assume A, and then derive B" means that "if A, then B".

For conjunction, the inference rules are:

\ir{(\land_{\mathrm{intro}})}{A ~~~ B}{A \land B} ~~~ \ir{(\land_{\mathrm{elim}0})}{A \land B}{A} ~~~ \ir{(\land_{\mathrm{elim}1})}{A \land B}{B}
which are relatively straightforward to reason about: if both A and B, then A \land B, and if A \land B, then both A and B.

For disjunction, the inference rules are:

\ir{(\lor_{\mathrm{intro}0})}{A}{A \lor B} ~~~ \ir{(\lor_{\mathrm{intro}1})}{B}{A \lor B} ~~~ \ir{(\lor_{\mathrm{elim}})}{A \lor B ~~~ A \to C ~~~ B \to C}{C}
which require a bit more justification. If A, then obviously A \lor B, and likewise if B, then A \lor B. For elimination, a proof of A \lor B is a proof of at least one of A or B; so, if C follows from A and C also follows from B, then C follows in either case.

For negation, we define \neg A to be A \to \bot (where that symbol is read as "bottom"). This is the first serious departure from most expositions of propositional logic, so make a note of it.

\bot intentionally does not have an introduction rule, as it represents a contradiction. Note that, with this definition, we can prove the law of non-contradiction: that \neg (A \land \neg A):

\ir{(\to_{\mathrm{elim}})} {\ir{(\land_{\mathrm{elim}1})}{A \land (A \to \bot)}{A \to \bot} ~~~ \ir{(\land_{\mathrm{elim}0})}{A \land (A \to \bot)}{A}} {\bot}
(noting that I implicitly use implication introduction for simplicity.)

We have two rules for dealing with \bot:

\ir{(\bot_{\mathrm{elim}})}{\bot}{A} ~~~ \ir{(\mathsf{DNE})}{\neg \neg A}{A}
referred to as the principle of explosion/ex falso, and double-negation elimination, respectively. In essence, the first states that if we can prove \bot, we can prove anything. Double-negation elimination should seem intuitively sensible under the Boolean interpretation.

3.1.6 Intuitionistic propositional logic

We exposited the proof system for classical propositional logic, in which the truth values correspond to the Boolean semantics, last lecture.

We will not be discussing the semantics of IPL: if you are interested, you should look at Kripke models and realizability. (See LACI, 2.9)

The philosophical foundation for constructive mathematics, the primary topic of this course, is that if something is true, you should be able to hand me a witness to that truth. This is based in the notion of intuitionism, which says, in the loosest sense, that what is true is what you can convince me is true.

If we downplay the notion of objective truth and whether or not our semantics can show if something is a tautology, we can instead reason more about what proofs are.

3.1.7 The BHK interpretation

The Brouwer-Heyting-Kolmogorov (BHK) interpretation is a way of specifying what a proof is — as in, the BHK interpretation of a formula is the kind of object you can hand to me to discern whether the formula is true or not.

Definition: We define the BHK interpretation of a formula inductively on connectives:
  • X \to Y is interpreted as a function which, given a proof of X, produces a proof of Y.

  • X \land Y is interpreted as a pair (a, b), where a is a proof of X and b is a proof of Y.

  • X \lor Y is interpreted as either (0, a) where a is a proof of X, or (1, b) where b is a proof of Y.

  • \neg X is still defined as X \to \bot.

  • \bot has no proof.

As an example, suppose that I wanted to prove (A \land \neg A) \to \bot, otherwise known as the law of non-contradiction. Then, I would want to construct a function from A \land \neg A to \bot.

A \land \neg A is interpreted as a pair of proofs for A and \neg A, and \neg A is interpreted as a function A \to \bot.

So, the function f((a, n)) = n(a) is a witness to (A \land \neg A) \to \bot under the BHK interpretation.
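If we render BHK witnesses as Racket values, with pairs for \land and functions for \to, this witness can be written down directly (a sketch; the proof "types" in the comment are informal):
; non-contradiction : [PairOf ProofOfA [ProofOfA -> ProofOfBottom]]
;                     -> ProofOfBottom
; The second component of the pair refutes the first.
(define (non-contradiction p)
  ((cdr p) (car p)))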

3.1.8 Intuitionistic propositional logic: proof system

In the last lecture, we wrote down a set of rules for classical propositional logic. The set of Gentzen-style rules for IPL are exactly the same, except without the rule \mathsf{DNE}.

Why is this the case? Note that in the BHK interpretation, there is not a clear way to construct a witness to \neg \neg A \to A. The only thing we have is a function, the only way we can eliminate a function is to apply it to an argument, and we have no means to conjure an argument to our function out of thin air.

Tying this proof system to the BHK interpretation is the work of elaboration, a later topic in this course.

3.2 Designing tactics


3.2.1 Tactics

We now have a proof system for IPL. We would like to write code that actually verifies our proofs. To realize this correspondence in code, we define a kind of reusable checker, known as a tactic.

Definition: A context, usually denoted by \Gamma, is an association of names of witnesses to the propositions they prove. So, the empty context represents no assumptions, and the context x : A, y : B represents a witness named x that proves A, and a witness named y that proves B.

Note: This is very similar to the notion of an environment from 311.

Definition: A check tactic, or ChkTactic in code, is a function Context Prop -> Void that, given a context and a proposition, does nothing if it can prove the proposition under the context, and errors if it cannot.

Definition: An infer tactic, or SynTactic in code, is a function Context -> Prop that, given a context, returns the proposition that it proves.

We define tactic combinators, functions that take and return tactics, to represent each rule. This leads us to the design recipe for tactics:

Given an inference rule:
  1. Determine your output. As a general rule of thumb, introduction rules turn into check tactics, and elimination rules turn into infer tactics.

  2. Solve your unknowns. Determine what information above the line needs to be present to check or infer the proposition that you need.

  3. Signature. All the things above the line turn into inputs, with their type depending on step 2, and your output turns into your return type.

  4. Design your function. This means writing tests, determining how it fits in with other tactics.

  5. Test.

Most of the first few steps will be given to you for assignments 1 and 2, primarily because we do not yet have the machinery to make well-informed decisions on what should be a check tactic versus an infer tactic.

3.2.2 Propositions in Racket

As we move from paper to code, we have to turn our grammars into data definitions. So:
; A Proposition is one of:
; - (prop-atomic Symbol)
; - (prop-→ Proposition Proposition)
; - (prop-∧ Proposition Proposition)
; - (prop-∨ Proposition Proposition)
; - (prop-⊥)
(struct prop-atomic (name) #:transparent)
(struct prop-→ (assumption consequent) #:transparent)
(struct prop-∧ (left right) #:transparent)
(struct prop-∨ (left right) #:transparent)
(struct prop-⊥ () #:transparent)
where we omit \neg because it is superfluous in the presence of \bot: \neg A is just A \to \bot.

It is tedious to write out these, and they do not pretty-print well. So, we define two helper functions that take S-expressions to propositions and back:
; parse : Sexp -> Prop
; Parses an infix S-expression into a proposition.
(define (parse s)
  (match s
    [`(,assumption → ,consequent) (prop-→ (parse assumption)
                                          (parse consequent))]
    [`(,l ∧ ,r) (prop-∧ (parse l) (parse r))]
    [`(,l ∨ ,r) (prop-∨ (parse l) (parse r))]
    [`(¬ ,v) (prop-→ (parse v) (prop-⊥))]
    ['⊥ (prop-⊥)]
    [(? symbol?) (prop-atomic s)]))
 
; pp : Prop -> Sexp
; Pretty-prints a proposition.
(define (pp prop)
  (match prop
    [(prop-atomic name) name]
    [(prop-→ assumption consequent) `(,(pp assumption) → ,(pp consequent))]
    [(prop-∧ l r) `(,(pp l) ∧ ,(pp r))]
    [(prop-∨ l r) `(,(pp l) ∨ ,(pp r))]
    [(prop-⊥) '⊥]))
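As a quick sanity check, parse and pp are (roughly) inverses of one another:
(parse '(¬ A))              ; => (prop-→ (prop-atomic 'A) (prop-⊥))
(pp (parse '(A ∧ (B ∨ C)))) ; => '(A ∧ (B ∨ C))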

3.2.3 Tactics in Racket

First, we provide our data definition and helper function for a context.

; A Context is a [Listof [Pairof Symbol Prop]]
 
; extend-context : Context Symbol Prop -> Context
; Adds a witness of the given proposition to the context.
(define (extend-context Γ name prop)
  (cons (cons name prop) Γ))
 
(module+ test
  (check-equal? (extend-context '() 'x (prop-⊥)) (list (cons 'x (prop-⊥))))
  (check-equal? (extend-context `((x . ,(prop-⊥))) 'y (prop-atomic 'A))
                (list (cons 'y (prop-atomic 'A)) (cons 'x (prop-⊥)))))

Note that this is effectively the same as the data-structural representation of an environment from 311: we define it as an association list instead of a tagged list, however.

We then define tactics not solely as their underlying function, but as a structure that can be called as a function:
; A ChkTactic is a (chk-tactic Symbol [Context Prop -> Void])
; A SynTactic is a (syn-tactic Symbol [Context -> Prop])
(struct chk-tactic (name f)
  #:transparent
  #:property prop:procedure (struct-field-index f))
(struct syn-tactic (name f)
  #:transparent
  #:property prop:procedure (struct-field-index f))
 
; run-chk : Prop ChkTactic -> Void
; Runs the given check tactic on the input proposition.
(define (run-chk goal tactic)
  (tactic '() goal))
 
; run-syn : SynTactic -> Prop
; Runs the given synthesis tactic, producing the proposition it witnesses.
(define (run-syn tactic)
  (tactic '()))

We define these using prop:procedure for two reasons: it is very advantageous to have error messages reference the name of the input tactic (which is what the name field is for), and it makes our contracts significantly easier to write, as they can reference chk-tactic? rather than a function contract.

The run-chk and run-syn functions are merely wrappers that run the tactics on empty contexts. These are usually used for top-level proofs, as we will see later.

3.2.4 Our first example: variable references

We now design our very first rule, which we need as a base case. All of our tactic combinators take other tactics as inputs, but we need some tactic that does not depend on other tactics.

We have our Context, which maps names of witnesses to their propositions. However, we need a way to turn one of those witnesses into a tactic that can be provided to combinators. So, we start by noting that we have no tactics as inputs, and that we will be returning a SynTactic, as (by the next section) we can turn a SynTactic into a ChkTactic easily, but not the other way around.

So, the signature of our function is assumption : Symbol -> SynTactic. We now write our purpose statement and template:
; assumption : Symbol -> SynTactic
; Implements the *variable rule*,
; which, given a witness x of P in the context,
; returns a tactic corresponding to x.
(define/contract (assumption s)
  (symbol? . -> . syn-tactic?)
  (syn-tactic
   'assumption
   (λ (Γ)
     ...)))

Then, we write some tests. Given a context where x : B, we want to return the proposition B. So, we write tests that codify that:
(module+ test
  (check-equal? ((assumption 'x) (extend-context '() 'x (prop-atomic 'A)))
                (prop-atomic 'A))
  (check-equal? ((assumption 'y)
                 (extend-context (extend-context '()
                                                 'y
                                                 (prop-→ (prop-atomic 'B)
                                                         (prop-atomic 'B)))
                                 'x (prop-atomic 'A)))
                (prop-→ (prop-atomic 'B) (prop-atomic 'B)))
  (check-error ((assumption 'x) '())))

We then use dict-ref to fill in our function body, which does what it says on the tin:
; assumption : Symbol -> SynTactic
; Implements the *variable rule*,
; which, given a witness x of P in the context,
; imbues x with being a witness of P.
(define/contract (assumption s)
  (symbol? . -> . syn-tactic?)
  (syn-tactic
   'assumption
   (λ (Γ)
     (dict-ref Γ s))))
and all our tests pass.

3.2.5 Examples: conversion and annotation

Let’s work on some of the most basic rules we need in a system like this. Given a check tactic, to make our types line up with all of our combinators, we need to be able to create an infer tactic, and vice versa.

We use the define/contract form to state our signatures and have them checked at runtime, as it results in errors that yell about what tactic you used and whether or not it was a check/infer tactic, rather than incomprehensible errors about procedure arity.

So, we will implement a tactic combinator chk : SynTactic -> ChkTactic, which takes an infer tactic and produces a check tactic with the same behavior. So, given that we have a witness of P, we can create a witness that checks as P.

Steps 1, 2, and 3 of our design recipe are trivial.

As for step 4, we begin by writing our purpose statement, contract, and template:
; chk : SynTactic -> ChkTactic
; Implements the *conversion rule*,
; which, given that t is a witness of the proposition P,
; then t is also able to be checked as a witness of P.
(define/contract (chk tac)
  (syn-tactic? . -> . chk-tactic?)
  (chk-tactic
   'chk
   (λ (Γ goal)
     ...)))

We now need to write some tests. The only base tactic we have is assumption, so we simply turn it into a ChkTactic using our combinator and see what happens:
(module+ test
  (define ctx (extend-context
               (extend-context '()
                               'y
                               (prop-→ (prop-atomic 'B)
                                       (prop-atomic 'B)))
               'x (prop-atomic 'A)))
  (check-success ((chk (assumption 'x)) ctx (prop-atomic 'A)))
  (check-success ((chk (assumption 'y)) ctx (prop-→ (prop-atomic 'B)
                                                    (prop-atomic 'B))))
  (check-error ((chk (assumption 'z)) ctx (prop-⊥)))
  (check-error ((chk (assumption 'x)) ctx (prop-⊥))))

So, we begin our game of type tetris to fill in the definition. We know that Γ is a Context, that goal is a Prop, and that tac is a SynTactic that we can apply to a Context to get a Prop.

Therefore, for our tactic, we want to throw an error if we can’t check as the input. But we know what the input needs to be: the result of tac.

So:
; chk : SynTactic -> ChkTactic
; Implements the *conversion rule*,
; which, given that t is a witness of the proposition P,
; then t is also able to be checked as a witness of P.
(define/contract (chk tac)
  (syn-tactic? . -> . chk-tactic?)
  (chk-tactic
   'chk
   (λ (Γ goal)
     (define prop (tac Γ))
     (assert-prop-equal! prop goal))))
where we use the wishlist method to get assert-prop-equal! : Prop Prop -> Void, which does what it says on the tin. You will implement this function in assignment 1.

To go the other direction and make a function imbue that takes a ChkTactic and returns a SynTactic, note that we have an unknown: we aren’t able to use a ChkTactic without a proposition. So, we add a proposition as an argument, and our signature is imbue : ChkTactic Prop -> SynTactic.

So, purpose statement, contract, template:
; imbue : ChkTactic Prop -> SynTactic
; Implements the *annotation rule*,
; which, given that t can check as the proposition P,
; allows t to be imbued with being a witness of P.
(define/contract (imbue tac prop)
  (chk-tactic? any/c . -> . syn-tactic?)
  (syn-tactic
   'imbue
   (λ (Γ)
     ...)))
We write any/c for brevity’s sake.

tac is a ChkTactic, which performs a side effect to determine if it checks as the input proposition. So, we have a Context Γ, and we have a Prop prop. Consequently, (tac Γ prop) will run tac on the context and proposition we have as known variables, and then error if there’s an issue.

Finally, since (tac Γ prop) returns void, and we’re creating a SynTactic, we still need to return a proposition that our new tactic is a witness of. This is merely prop.

So, our final code is:
; imbue : ChkTactic Prop -> SynTactic
; Implements the *annotation rule*,
; which, given that t can check as the proposition P,
; allows t to be imbued with being a witness of P.
(define/contract (imbue tac prop)
  (chk-tactic? any/c . -> . syn-tactic?)
  (syn-tactic
   'imbue
   (λ (Γ)
     (tac Γ prop)
     prop)))

We do not add tests to imbue yet, because we have no good ChkTactics to test with.

3.2.6 Check versus infer and solving unknowns

Let’s think a bit harder about why we have these different tactics. In particular, let’s look at the rule for implication elimination:

\ir{(\to_{\mathrm{elim}})}{A \to B ~~~ A}{B}

So, given that A \to B and A, we can prove B. Under our system, it is not at all straightforward to know how to actually do this. How do we know what A and B are?

This is the purpose of our check/infer system, known in the literature as bidirectional typing. Our check rules correspond to what would be known in type theory as "type checking", and our infer rules correspond to "type inference" or "type synthesis".

The question then turns to how to do our design recipe steps 1-2, of figuring out which things above the line and below the line are check/infer. Let’s step through why modus-ponens : SynTactic ChkTactic -> SynTactic works.

First, we infer the proposition corresponding to the implication. Our input SynTactic gives us the proposition A \to B. So, now that we know A \to B, we know what A is and B is. Consequently, we are free to check our tactic witnessing the assumption as A, so that can be a ChkTactic.

Finally, we also know B, which is our consequent, so we are free to return it, thereby giving us a SynTactic.
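Following that reasoning, here is one way modus-ponens could look for the pre-elaboration tactics of A1 (a sketch, not the assignment solution; the error message is illustrative):
; modus-ponens : SynTactic ChkTactic -> SynTactic
; Implements *implication elimination*: given a witness of A → B
; and something that checks as A, produces a witness of B.
(define/contract (modus-ponens imp arg)
  (syn-tactic? chk-tactic? . -> . syn-tactic?)
  (syn-tactic
   'modus-ponens
   (λ (Γ)
     (match (imp Γ)
       [(prop-→ A B)
        (arg Γ A) ; errors unless arg checks as A
        B]
       [prop (error 'modus-ponens "not an implication: ~a" (pp prop))]))))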

So how do we determine exactly what combination to use? We design our systems around the criterion of mode-correctness:

Definition: A bidirectional judgment is mode-correct if:
  • the premises are mode-correct: for each premise, the input meta-variable is known.

  • the conclusion is mode-correct: if we have all the premises, the output of the conclusion is known.

So, for example, the signature modus-ponens : ChkTactic ChkTactic -> ChkTactic is not mode-correct. We know B from the goal of the conclusion, but we have no way of figuring out what A is to be able to construct the proposition A \to B.

The mode-correct ways to construct modus-ponens are:
  • SynTactic SynTactic -> SynTactic

  • SynTactic ChkTactic -> SynTactic

  • SynTactic ChkTactic -> ChkTactic

  • ChkTactic SynTactic -> ChkTactic

where the fourth is less obvious: before we can check the implication, we must first synthesize the proposition of the argument, and only then can we construct A \to B from it and the goal.

Note that if your system is not mode-correct, it cannot be written in code.

The decision to go with introduction rules as check tactics and elimination rules as infer tactics is common practice, though entirely arbitrary. Any mode-correct rule will technically work.

Note, however, that because imbue returns a SynTactic, the different rules change the annotation characteristic of our language — when we eventually make the jump from propositions to types and then to concrete syntax, where we put our SynTactics vs ChkTactics will change where we have to annotate our terms.

For this reason, we have two somewhat arbitrary rules we use to narrow down which mode-correct rules work:
  • Avoid asserting equality. If you need to use assert-prop-equal! anywhere, you can probably get away with a ChkTactic rather than a SynTactic. Same goes for calling chk anywhere within your tactic.

  • Intro rules are check, elimination rules are infer. This is a pretty good baseline to get a good annotation characteristic. Whenever you see an eliminator, you can mentally note that you probably need an annotation.

3.3 Elaboration


3.3.1 What?

In A1, we have a set of tactics that correspond to each individual rule of our IPL proof system. However, we are still missing the crucial piece that ties our BHK interpretation to each rule. This piece is known as elaboration, and will be the start of our shift from propositions to types. The goal of elaboration is for our tactics to produce not just a yes/no answer as to whether our proof is correct, but a piece of syntax that, under the BHK interpretation, is an object you can hand me.

First, we are computer scientists, so we will be representing "functions" in the BHK interpretation using lambdas. In other words, our old example of \neg (A \land \neg A) being represented as f((a, n)) = n(a) is now represented by \lambda p. (\snd{p}) (\fst{p}).

For the time being, we restrict our focus to the implicational fragment of our proof system. In A2, we will be adding back our connectives.

Definition: A Syntax is defined as:
  Syntax = (syntax-local natural?)
  | (syntax-lam symbol? Syntax)
  | (syntax-app Syntax Syntax)
and represented with the Racket definition:
; A Syntax is one of:
; A DeBruijn *level*:
; - (syntax-local NaturalNumber)
; A lambda:
; - (syntax-lam Symbol Syntax)
; An application:
; - (syntax-app Syntax Syntax)
(struct syntax-local (n) #:transparent)
(struct syntax-lam (var body) #:transparent)
(struct syntax-app (rator rand) #:transparent)
These are also referred to as core terms.

3.3.2 DeBruijn levels

Note that I mention DeBruijn levels here. Traditionally, we define a DeBruijn index, as in 311, as a way of "pointing outward" to the lambda that binds a variable. So, for example, the term

\lambda x. \lambda y. x \mathrm{\ \ becomes\ \ } \lambda \lambda\ 1
noting that we start indexing from zero, or
\lambda z. (\lambda y. y\ (\lambda x. x)) (\lambda x. z\ x) \mathrm{\ \ becomes \ \ } \lambda\ (\lambda\ 0\ (\lambda\ 0)) (\lambda\ 1\ 0)

DeBruijn levels are the dual to these: instead of counting inside out, we count outside in. So:

\lambda x. \lambda y. x \mathrm{\ \ becomes\ \ } \lambda \lambda\ 0
\lambda z. (\lambda y. y\ (\lambda x. x)) (\lambda x. z\ x) \mathrm{\ \ becomes \ \ } \lambda\ (\lambda\ 1\ (\lambda\ 2)) (\lambda\ 0\ 1)

The motivation for this is that we can now not only unambiguously refer to variables as with DeBruijn indices, but we do not need to keep track of the environment size at all: we can simply use list-ref as our environment application.
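Concretely, the two numberings are mirror images of each other: given the number of enclosing binders, the same arithmetic converts either one into the other. A small sketch (we will see this exact formula again in assumption and in the term builders of 3.5.3):
; index->level : Number Number -> Number
; level->index : Number Number -> Number
; Both directions use the same formula: size - n - 1.
(define (index->level size index) (- size index 1))
(define (level->index size level) (- size level 1))

; With two binders, as in λx. λy. x, index 1 is level 0:
(index->level 2 1) ; => 0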

3.3.3 Elaborating tactics

Our last iteration of tactics has no clear way to return anything beyond a yes/no result. So, we need to update our definition of a tactic.

Definition: A check tactic is now a function Context Prop -> Syntax that, given a context and a proposition, returns the BHK interpretation of the proof it represents if it can prove the proposition under the context, and errors if it cannot.

Definition: A infer tactic is now a function Context -> [PairOf Prop Syntax] that, given a context, returns both the proposition that it proves and the BHK interpretation of the proof of that proposition.

So, we now change our data definitions accordingly. run-chk is now Prop ChkTactic -> Syntax, run-syn is now SynTactic -> [PairOf Prop Syntax], et cetera.

We define a helper to DeBruijnize the variables in our context:
; context-index-of : Context Symbol -> NaturalNumber
; Returns the index of the symbol in the context.
(define (context-index-of Γ s)
  (match Γ
    ['() (error "not found")]
    [(cons (cons x _) rst)
     (cond [(eq? x s) 0]
           [else (add1 (context-index-of rst s))])]))
 
(module+ test
  (check-error (context-index-of '() 'x))
  (check-equal? (context-index-of (list (cons 'x (prop-⊥))) 'x) 0)
  (check-equal? (context-index-of (list (cons 'x (prop-⊥))
                                        (cons 'y (prop-atomic 'A)))
                                  'y)
                1))
which simply does structural recursion on the context. This returns a DeBruijn index; the shift to levels will make more sense when we switch to higher-order abstract syntax for our tactics.

So, our old tactics need some adjustment. We start with assumption, which returns a SynTactic, which needs to return both a proposition and a core term.

Our core term will be a syntax-local, which stores the DeBruijn level, which is equal to the length of the context minus the DeBruijn index minus one. So:
; assumption : Symbol -> SynTactic
; Implements the *variable rule*,
; which, given a witness x : P in the context,
; returns a tactic corresponding to x.
(define/contract (assumption s)
  (symbol? . -> . syn-tactic?)
  (syn-tactic
   'assumption
   (λ (Γ)
     (cons (dict-ref Γ s)
           (syntax-local (- (length Γ) (context-index-of Γ s) 1))))))
 
(module+ test
  (check-equal? ((assumption 'x) (extend-context '() 'x (prop-atomic 'A)))
                (cons (prop-atomic 'A) (syntax-local 0)))
  (check-error (run-syn (assumption 'x))))

Note that this assumes that your introduce from A1 uses extend-context in the normal way. (If you haven’t looked at A1 yet, do that!)

chk is a relatively routine change. We use match-define to extract the proposition and core term from the input SynTactic, then return the term it produces, as adding chks should not impact the BHK term:
; chk : SynTactic -> ChkTactic
; Implements the *conversion rule*,
; which, given that t is a witness of P,
; then t is able to be checked as a witness of P.
(define/contract (chk tac)
  (syn-tactic? . -> . chk-tactic?)
  (chk-tactic
   'chk
   (λ (Γ goal)
     (match-define (cons prop tm) (tac Γ))
     (assert-prop-equal! prop goal)
     tm)))
 
(module+ test
  (define ctx (extend-context
               (extend-context '()
                               'y
                               (prop-→ (prop-atomic 'B) (prop-atomic 'B)))
               'x (prop-atomic 'A)))
  (check-equal? ((chk (assumption 'x)) ctx (prop-atomic 'A))
                (syntax-local 1))
  (check-equal? ((chk (assumption 'y)) ctx (prop-→ (prop-atomic 'B)
                                                   (prop-atomic 'B)))
                (syntax-local 0))
  (check-error ((chk (assumption 'x)) ctx (prop-atomic 'B))))

The same principle applies for imbue, in which we get the term generated by the input ChkTactic and spit it back out:
; imbue : ChkTactic Prop -> SynTactic
; Implements the *annotation rule*,
; which, given that t can check as the proposition P,
; gives us a new tactic that is a witness of P.
(define/contract (imbue tac prop)
  (chk-tactic? any/c . -> . syn-tactic?)
  (syn-tactic
   'imbue
   (λ (Γ)
     (define tm (tac Γ prop))
     (cons prop tm))))
 
(module+ test
  (check-equal? ((imbue (chk (assumption 'x)) (prop-atomic 'A)) ctx)
                (cons (prop-atomic 'A) (syntax-local 1)))
  (check-error ((imbue (chk (assumption 'x)) (prop-atomic 'B)) ctx)))

Turning the rest of your A1 tactics into elaborating ones is part of the subject of A2.

3.4 The simply-typed lambda calculus

\newcommand{\ir}[3]{\displaystyle\frac{#2}{#3}~{\textstyle #1}} \newcommand{\semantics}[1]{[\![ #1 ]\!]} \newcommand{\fst}[1]{\mathsf{fst\ } #1} \newcommand{\snd}[1]{\mathsf{snd\ } #1}

3.4.1 The Curry-Howard correspondence

Surprise! We’re doing type theory in the type theory class. Big reveal.

In particular, every instance of the word "proposition" in this course can be replaced with "type". The following propositions can be interpreted as the following types in a conventional simply typed lambda calculus:
  • Function types correspond to implications.

  • Product types/pair types correspond to logical and.

  • Sum types correspond to logical or.

  • The empty type corresponds to \bot.

In addition, our natural deduction proof system for IPL corresponds directly to the type system for the simply typed lambda calculus.

Given the way that we represent BHK terms after elaboration, this should serve as no surprise.

In addition, this gives the somewhat surprising correspondence that an inhabited type corresponds to an intuitionistic tautology.

However, in type theory, we traditionally have a term associated with a type. We define our concrete syntax to be this kind of term, with the traditional lambda calculus grammar:

e ::= x \mid \lambda x. e \mid e\ e \mid e : \tau
and we give grammar to our types as well:
\tau ::= A \mid \tau \to \tau

We also give Racket data definitions for these, calling our terms before elaboration ConcreteSyntax and after elaboration merely Syntax:
; A ConcreteSyntax is one of:
; - (cs-var Symbol)
; - (cs-lam Symbol ConcreteSyntax)
; - (cs-app ConcreteSyntax ConcreteSyntax)
; - (cs-ann ConcreteSyntax Type)
(struct cs-var (name) #:transparent)
(struct cs-lam (x body) #:transparent)
(struct cs-app (rator rand) #:transparent)
(struct cs-ann (tm tp) #:transparent)
noting that we have changed our original Prop data definition to Type. Note also that we have the cs-ann node in our concrete syntax, but not in our core terms!
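These notes don't restate the renamed data definition, but restricted to the implicational fragment it would plausibly look like this (a sketch; the struct and field names are assumptions):
; A Type is one of:
; - (type-atomic Symbol)
; - (type-→ Type Type)
(struct type-atomic (name) #:transparent)
(struct type-→ (argument result) #:transparent)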

3.4.2 Type theory notation

Before we move on to writing a type checker for STLC, we should actually state our type rules.

First, the most glaringly strange thing in our current IPL proof system is the notion of a hypothetical derivation in our intro rule:

\ir{(\to_{\mathrm{intro}})} {\begin{matrix} \ir{}{}{A} \\ \vdots \\ B \end{matrix}} {A \to B}

The notion of hypothetical derivation is useful, but currently exists at a meta-theoretic level. We now begin to label our antecedents with names, put them in our context, and then use the symbol \vdash to mean "proves".

In addition, we annotate each bit of our concrete syntax (corresponding to a non-elaborated BHK interpretation) with its type.

As an immediate example, our implication introduction rule becomes:

\ir{(\to_\mathrm{intro})} {\Gamma, x : A \vdash y : B} {\Gamma \vdash \lambda x. y : A \to B}

Let’s break this down. The line \Gamma, x : A \vdash y : B reads "under a context \Gamma where x has type A, we can prove y has type B". Note the subtle distinction between : on the left hand side of \vdash compared to the right.

We now need to rephrase our other rules in a similar fashion. Implication elimination is straightforward:

\ir{(\to_\mathrm{elim})} {\Gamma \vdash f : A \to B ~~~ \Gamma \vdash a : A} {\Gamma \vdash f\ a : B}

To finish up the proof system for the implicational fragment of IPL, we need to add one more rule with regards to contexts that made no sense prior. In particular, we need a way to extract things from our context:

\ir{(\mathrm{Var})} {x : A \in \Gamma} {\Gamma \vdash x : A}
noting that \in is effectively set-theoretic inclusion. Since contexts are finite, this is fine.

All of our other rules are more or less trivial to rephrase like this.

As an example, we are now free to actually write a proof tree to show that \vdash \lambda x. \lambda y. x : A \to (B \to A):

\ir{(\to_\mathrm{intro})} {\ir{(\to_\mathrm{intro})} {\ir{(\mathrm{Var})} {x : A \in \Gamma, x : A, y : B} {\Gamma, x : A, y : B \vdash x : A}} {\Gamma, x : A \vdash \lambda y. x : B \to A}} {\Gamma \vdash \lambda x. \lambda y. x : A \to (B \to A)}

We then also wrote a proof tree for \vdash \lambda a. \lambda f. f\ a : A \to ((A \to B) \to B) in class. (If you’re reading this alone, do it yourself!)

3.4.3 Bidirectional typing

We’ve already been doing this for quite some time, but the time has come to formalize our tactics in good old fashioned math notation.

Our \Gamma \vdash x : A notation is relatively good, but does not give us a straightforward path to implementation. Our check tactics, which correspond to checking that the sequent \Gamma \vdash x : A is valid, are written as \Gamma \vdash x \Leftarrow A. Our infer tactics, which correspond to the sequent giving us what it proves, are written as \Gamma \vdash x \Rightarrow A.

If we simply adapt our existing signatures to the three rules for the implicational fragment of STLC above, we get the following rules:

\ir{(\to_\mathrm{intro})} {\Gamma, x : A \vdash y \Leftarrow B} {\Gamma \vdash \lambda x. y \Leftarrow A \to B}
\ir{(\to_\mathrm{elim})} {\Gamma \vdash f \Rightarrow A \to B ~~~ \Gamma \vdash a \Leftarrow A} {\Gamma \vdash f\ a \Rightarrow B}
\ir{(\mathrm{Var})} {x : A \in \Gamma} {\Gamma \vdash x \Rightarrow A}

To sanity check our work, we define a criterion to know when we’re done writing our bidirectional rules:

Definition: A bidirectional type system is complete if whenever \Gamma \vdash e : T, then \Gamma \vdash e' \Leftarrow T and \Gamma \vdash e' \Rightarrow T, where e' is a version of e with any amount of added type annotations.

This system doesn’t seem complete, and it’s not. If we try to construct the proof tree for \vdash \lambda a. \lambda f. f\ a : A \to ((A \to B) \to B), we can’t, because we have no way to turn inference into checking and vice versa. In our tactic system, we wrote chk and imbue for this purpose, so let’s turn them into rules.

chk is relatively easy, and is called the conversion rule:

\ir{(\mathrm{Conv})} {\Gamma \vdash x \Rightarrow A' ~~~ A \equiv A'} {\Gamma \vdash x \Leftarrow A}
where A \equiv A' means prop=? for now. For brevity in examples, we may use this version of the rule:
\ir{(\mathrm{Conv})} {\Gamma \vdash x \Rightarrow A} {\Gamma \vdash x \Leftarrow A}
which means the same thing, except the definitional equality is implied by A being the same thing on both the top and the bottom.

Going the other way with imbue is more complicated, and what our cs-ann notation is for. This is yet another override of what colon means.

\ir{(\mathrm{Ann})} {\Gamma \vdash x \Leftarrow A} {\Gamma \vdash (x : A) \Rightarrow A}

We now write the proof tree for \vdash \lambda a. \lambda f. f\ a \Leftarrow A \to ((A \to B) \to B). (Again, do it yourself!)

3.4.4 Checking our concrete syntax

Now that we have a notion of our concrete terms and how to check them, it’s time to compile our terms to tactics. We first start by renaming our tactics:
  • chk stays the same.

  • imbue becomes ann.

  • introduce stays the same.

  • modus-ponens becomes app.

  • assumption becomes var.

We now want to make a type checker (or synthesizer) given a term. We do this with two functions: type-check : ConcreteSyntax -> ChkTactic, and type-infer : ConcreteSyntax -> SynTactic, each of which produces a tactic corresponding to the syntax we have.

In the context of type-check, you can think of this kind of like a curried verifier for a given sequent: to show that \Gamma \vdash e \Leftarrow T, we give type-check e, and it gives us a tactic taking \Gamma and T. The same logic applies for type-infer, which takes e, \Gamma, and gives us T.

So, let’s write out our purpose statements and templates for these functions:
; type-check : ConcreteSyntax -> ChkTactic
; Produces a type checker for the given concrete term.
(define (type-check stx)
  (match stx
    [(cs-var name) ...]
    [(cs-lam var body) ...]
    [(cs-app rator rand) ...]
    [(cs-ann tm tp) ...]))
 
; type-infer : ConcreteSyntax -> SynTactic
; Produces a type inferrer for the given concrete term.
(define (type-infer stx)
  (match stx
    [(cs-var name) ...]
    [(cs-lam var body) ...]
    [(cs-app rator rand) ...]
    [(cs-ann tm tp) ...]))

Let’s start with type-check. Note that the only tactic combinator we have (aside from chk itself) that returns a ChkTactic is introduce, which takes a ChkTactic as input.

All the other pieces of syntax and their corresponding tactics return a SynTactic. We can use chk to turn those into ChkTactics. So, we fill it in, adding a fall-through for all other pieces of syntax:
; type-check : ConcreteSyntax -> ChkTactic
; Produces a type checker for the given concrete term.
(define (type-check stx)
  (match stx
    [(cs-lam var body) (introduce var (type-check body))]
    [_ (chk (type-infer stx))]))

As for type-infer, we simply fill in the natural recursion (making all the types line up) for all the syntaxes corresponding to combinators returning SynTactics:
; type-infer : ConcreteSyntax -> SynTactic
; Produces a type inferrer for the given concrete term.
(define (type-infer stx)
  (match stx
    [(cs-var name) (var name)]
    [(cs-lam var body) ...]
    [(cs-app rator rand) (app (type-infer rator) (type-check rand))]
    [(cs-ann tm tp) (ann (type-check tm) tp)]))
However, introduce returns a ChkTactic, and we can only turn a ChkTactic into a SynTactic if we know what proposition it checks as. Consequently, we would need a type annotation, so we can’t actually infer any type, and we throw an error:
; type-infer : ConcreteSyntax -> SynTactic
; Produces a type inferrer for the given concrete term.
(define (type-infer stx)
  (match stx
    [(cs-var name) (var name)]
    [(cs-app rator rand) (app (type-infer rator) (type-check rand))]
    [(cs-ann tm tp) (ann (type-check tm) tp)]
    [_ (error (format "the term ~a needs more type annotations!" stx))]))

And we now have a fully functional type checker! Assuming your A1 is done, try out:
(run-chk
 (parse '(A → ((A → B) → B)))
 (type-check (cs-lam 'a (cs-lam 'f (cs-app (cs-var 'f) (cs-var 'a))))))
which should succeed (even if you haven’t finished elaboration).
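Conversely, a bare lambda on its own cannot be inferred, which we can also check (a usage sketch; the exact outputs depend on how far through elaboration you are):
(run-syn (type-infer (cs-lam 'x (cs-var 'x))))
; => error: the term ... needs more type annotations!

(run-syn (type-infer (cs-ann (cs-lam 'x (cs-var 'x))
                             (parse '(A → A)))))
; => the type A → A (and, with elaboration, its core term)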

3.5 Normalization by evaluation


3.5.1 Deciding equivalence

We now shift topics to deciding if two lambda terms are equal, for the intuitive notion of equality. Some of you are likely screaming "α-equivalence", and this is somewhat correct. Furthermore, since we decided to use DeBruijn levels, α-equivalent terms are already literally identical, and we don’t have to worry about renaming during substitution.

However, consider the following:

(\lambda x. 2) 3 \stackrel{?}{\equiv} 2
Well, this is intuitively true, right? We can perform a β-reduction to make them equivalent. But these terms aren’t alpha-equivalent.

In general, to solve a problem like this, even in the case of

(\lambda x. y) (\lambda x. x) \stackrel{?}{\equiv} y
we need to know a lot about the lambda calculus: we need to know whether or not y contains x among its free variables, for one; we need to know its behavior when passed \lambda x. x as an argument...

So can we just compare the results of evaluation? Let’s give it a shot.

3.5.2 Data definitions for STLC

First, we define a few of our important data definitions. We add let to our syntax:
; A Syntax is one of:
; A DeBruijn *level*:
; - (syntax-local Number)
; A lambda:
; - (syntax-lam Symbol Syntax)
; A let-binding:
; - (syntax-let Symbol Syntax Syntax)
; An application:
; - (syntax-app Syntax Syntax)
(struct syntax-local (n) #:transparent)
(struct syntax-lam (var body) #:transparent)
(struct syntax-let (var exp body) #:transparent)
(struct syntax-app (rator rand) #:transparent)
Since this is just simply-typed lambda calculus with no constants, our return value is going to be a closure:
; A Closure is a (closure Syntax Environment)
(struct closure (term env) #:transparent)
 
; A Value is a Closure
Finally, we need to actually define our environment, which is lazy for surprise reasons that will help us later:
; An Environment is a [ListOf [PromiseOf Value]]
 
; empty-env : -> Environment
(define (empty-env)
  '())
 
; extend-env : Environment [PromiseOf Value] -> Environment
(define (extend-env env x)
  (cons x env))
 
; apply-env : Environment Number -> [PromiseOf Value]
(define (apply-env env n)
  (list-ref env n))
 
; env-size : Environment -> Number
(define (env-size env)
  (length env))
as well as provide the usual 311-esque functions for environment usage.

3.5.3 Building terms with DeBruijn levels (or: continuation HOAS)

We need a bit of scaffolding to actually write out our terms with DeBruijn levels. The upshot of DeBruijn indices is that we can very easily plug terms in for variables, something we lose with levels. However, we no longer have to do shifting math.

We define a term builder, which takes an environment size and gives us a corresponding term with DeBruijn levels. To build terms, we will invoke the term builder to get a Syntax.
; A [TB X] is a [Number -> X]
 
; run-term-builder : {X} Environment [TB X] -> X
; Produces the built core term, given an environment.
(define (run-term-builder env k)
  (k (env-size env)))

If you’re cheeky, you may think of the continuation monad.

Yes.

Moving on, we first define a builder which is our base case for building these terms: given a DeBruijn index, it shifts it to a level. (Note that we don’t have to carry around indices: we’ll see this in a bit.)
; tb/var : Number -> [TB Syntax]
; Given a DeBruijn index, produce a term builder giving a DeBruijn level in syntax.
(define (tb/var lvl)
  (λ (size)
    (syntax-local (- size lvl 1))))
 
(module+ test
  (check-equal? ((tb/var 1) 3) (syntax-local 1))
  (check-equal? ((tb/var 0) 1) (syntax-local 0)))

We then provide some more helpers that continue building the continuation, but increment the environment size (thereby binding a variable):
; tb/bind-var : {X} [Number -> [TB X]] -> [TB X]
; Take the given term-builder-builder, and increment its environment size.
(define (tb/bind-var k)
  (λ (size)
    ((k size) (+ size 1))))
and one that scopes out a variable, binding it for a given body (noting that we represent the body as taking a term-builder and returning a term-builder):
; tb/scope : {X} [[TB Syntax] -> [TB X]] -> [TB X]
; Scopes out a variable, binding it for the given term-builder-builder.
(define (tb/scope k)
  (tb/bind-var (λ (lvl) (k (tb/var lvl)))))

Finally, we provide builders for let-bindings, lambdas, and applications. The former two take functions from term builders to term builders, where the input to the function represents the bound variable:
; tb/let : Symbol [TB Syntax] [[TB Syntax] -> [TB Syntax]] -> [TB Syntax]
; Build a let binding.
(define (tb/let var e body)
  (λ (size)
    (syntax-let var (e size) ((tb/scope body) size))))
 
; tb/lam : Symbol [[TB Syntax] -> [TB Syntax]] -> [TB Syntax]
; Build a lambda.
(define (tb/lam var body)
  (λ (size)
    (syntax-lam var ((tb/scope body) size))))
 
; tb/app : [TB Syntax] [TB Syntax] -> [TB Syntax]
; Build an application.
(define (tb/app rator rand)
  (λ (size)
    (syntax-app (rator size) (rand size))))

And voilà, we have magic HOAS:
(define church-zero
  (tb/lam 'f
          (λ (f)
            (tb/lam 'x
                    (λ (x)
                      x)))))
(define church-add1
  (tb/lam 'n-1
          (λ (n-1)
            (tb/lam 'f
                    (λ (f)
                      (tb/lam 'x
                              (λ (x)
                                (tb/app f (tb/app (tb/app n-1 f) x)))))))))
both of which evaluate to their representation with DeBruijn levels, and both of which cannot be illegally scoped because of Racket’s semantics.

3.5.4 The evaluator

This should be pretty straightforward by now:
; evaluate : Syntax Environment -> Value
; Evaluates the expression to a closure.
(define (evaluate exp env)
  (match exp
    [(syntax-local n) (force (apply-env env n))]
    [(syntax-lam _ body) (closure body env)]
    [(syntax-let var exp body)
     (define unfold (delay (evaluate exp env)))
     (evaluate body (extend-env env unfold))]
    [(syntax-app rator rand) (evaluate/do-app (evaluate rator env)
                                              (evaluate rand env))]))
 
; evaluate/do-app : Value Value -> Value
; Performs an application of the rator to the rand.
(define (evaluate/do-app rator rand)
  (match-define (closure tm env) rator)
  (evaluate tm (extend-env env (delay/strict rand))))
The only primary difference is the delay, et al. These are promises, and produce a lazy value that can be forced.
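If promises are new to you, this is the entire interface we rely on, from racket/promise (a quick refresher sketch):
(require racket/promise)

; delay suspends a computation; force runs it at most once.
(define p (delay (begin (displayln "running!") 42)))
(force p) ; prints "running!", => 42
(force p) ; => 42, without printing again: the result is cached

; delay/strict eagerly wraps a value we already have as a promise,
; which is why evaluate/do-app uses it on the rand.
(force (delay/strict 42)) ; => 42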

3.5.5 Normal forms

In lambda calculus, both α-equivalence (consistently renaming bound variables) and β-reduction, written as:

(\lambda x. e_1)\ e_2 \equiv e_1[e_2/x]
comprise a way of determining what terms are equal.

Definition: A (β-)normal form is a form that has no redexes, or reducible expressions: so, no further substitution can be done.

One obvious way to perform normalization is to recursively search a piece of syntax for redexes, and reduce them when possible. It’s very elegant to think about when described in a single sentence. In practice, it is awful, and the code is awful to read, and it performs terribly. Let’s not do that and never discuss it again.

Instead, we can turn our closures back into lambdas, by a process called quotation (or reification). To do this, we carefully limit our Value grammar to only capture normal forms (omitting let-bindings for the time being):
  norm = neutral
  | (λ (id) norm)
  neutral = id
  | (neutral norm)
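For example, x\ (\lambda y. y) fits this grammar: it is a neutral, namely a variable applied to a normal form. By contrast, (\lambda x. x)\ y does not fit: an application whose head is a lambda is exactly a redex.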

We then construct data definitions for this, in a somewhat roundabout fashion. (Again, ignore let-bindings for now).
; A Cut is a (value-cut Head [ListOf Form])
(struct value-cut (head spine) #:transparent)
 
; A Head is one of:
; - (value-local Number)
; - (value-let Number [PromiseOf Value])
(struct value-local (n) #:transparent)
(struct value-let (n promise) #:transparent)
 
; A Form is one of:
; - (value-app Value)
(struct value-app (v) #:transparent)
 
; A Value is one of:
; - Cut
; - (value-lam String Closure)
(struct value-lam (name closure) #:transparent)

Our Value represents our normal form, and our Cut represents our neutral. In essence, our Cut is a stack of eliminators applied to our head, which is a variable, and our only eliminator is application.

Our evaluator will then produce one of these. Once we have one, we need to be able to move backwards to get back our Syntax; our Closure data type won’t do, since it is a suspended body paired with an environment, not syntax. So how do we turn a closure, which is stuck, back into a piece of syntax?

To do this, we pull a magic symbolic variable out of our hat, and then apply the closure to it with our usual evaluation. This process is called quotation, but since we’re writing Racket, where quote already means something, I’ll be calling it reification.

Let’s work an example. To distinguish our syntax and value lambdas, I’ll be writing \lambda_S and \lambda_V. So, let’s try normalizing (\lambda_S x. \lambda_S y. x\ y) (\lambda_S x. x).

We call evaluate with an empty environment:

\begin{align*} \mathrm{eval\ } (\lambda_S x. \lambda_S y. x\ y) (\lambda_S x. x)\ [] &= \mathrm{apply\ } (\mathrm{eval\ } (\lambda_S x. \lambda_S y. x\ y)\ [])\ (\mathrm{eval\ } (\lambda_S x. x)\ []) \\ &= \mathrm{apply\ } (\lambda_V x. (\mathrm{closure\ } (\lambda_S y. x\ y)\ [])) (\lambda_V x. (\mathrm{closure\ } x\ [])) \\ &= \mathrm{eval\ } (\lambda_S y. x\ y)\ [x = (\lambda_V x. (\mathrm{closure\ } x\ []))] \\ &= \lambda_V y. (\mathrm{closure\ } (x\ y) [x = (\lambda_V x. (\mathrm{closure\ } x\ []))]) \end{align*}
and, strictly speaking, this follows our normal form grammar.

We now want to perform reification to read it back into a piece of syntax. Let’s suppose we had a magic variable \mathbf{y} we pull out from our hat. Then, to reify:

\begin{align*} \mathrm{reify\ } \lambda_V y. (\mathrm{closure\ } (x\ y)\ [x = (\lambda_V x. (\mathrm{closure\ } x\ []))]) &= \lambda_S y. \mathrm{reify\ } \mathrm{apply\ } (\lambda_V y. (\mathrm{closure\ } (x\ y)\ [x = (\lambda_V x. (\mathrm{closure\ } x\ []))]))\ \mathbf{y} \\ &= \lambda_S y. \mathrm{reify\ } \mathrm{eval\ } (x\ y)\ [x = (\lambda_V x. (\mathrm{closure\ } x\ [])), y = \mathbf{y}] \\ &= \ldots \\ &= \lambda_S y. \mathrm{reify\ } \mathrm{apply\ } (\lambda_V x. (\mathrm{closure\ } x\ []))\ \mathbf{y} \\ &= \lambda_S y. \mathrm{reify\ } \mathbf{y} \\ &= \lambda_S y. \mathbf{y} \end{align*}

I’ll also work an example with a let binding in class.

So, let’s change our evaluator to produce these. First, we write a couple of helpers that produce the corresponding cuts:
; cut/local : Number -> Cut
(define (cut/local n)
  (value-cut (value-local n) '()))
 
; cut/let-bind : Number [PromiseOf Value] -> Cut
(define (cut/let-bind lvl p)
  (value-cut (value-let lvl p) '()))

and then rewrite our evaluator:
; evaluate : Syntax Environment -> Value
; Evaluates the expression to a value.
(define (evaluate exp env)
  (match exp
    [(syntax-local n) (force (apply-env env n))]
    [(syntax-lam var body) (value-lam var (closure body env))]
    [(syntax-let var exp body)
     (define unfold (cut/let-bind (env-size env) (delay (evaluate exp env))))
     (evaluate body (extend-env env (delay/strict unfold)))]
    [(syntax-app rator rand) (evaluate/do-app (evaluate rator env)
                                              (evaluate rand env))]))

I’m just going to put the rest of the code here, and talk over it in lecture.
; evaluate/do-app : Value Value -> Value
(define (evaluate/do-app rator rand)
  (match rator
    [(value-lam _ closure) (evaluate/apply-closure closure rand)]
    [(value-cut head spine)
     (push-form rator
                (value-app rand)
                (λ (rator^)
                  (evaluate/do-app rator^ rand)))]))
 
; evaluate/apply-closure : Closure Value -> Value
(define (evaluate/apply-closure clo rand)
  (match-define (closure tm env) clo)
  (evaluate tm (extend-env env (delay/strict rand))))
 
; push-form : Cut Form [Value -> Value] -> Cut
(define (push-form cut form unfold)
  (match-define (value-cut head spine) cut)
  (define new-head
    (match head
      [(value-local n) (value-local n)]
      [(value-let lvl promise)
       (value-let lvl (delay (unfold (force promise))))]))
  (value-cut new-head (cons form spine)))
 
; QUOTATION
 
; reify-bind-var : {X} PositiveInteger [PositiveInteger Value -> X] -> X
(define (reify-bind-var size k)
  (k (add1 size) (cut/local size)))
 
; reify : PositiveInteger Value -> Syntax
(define (reify size val)
  (match val
    [(value-cut _ _) (reify-cut size val)]
    [(value-lam var closure) (syntax-lam var (reify-closure size closure))]))
 
; reify-cut : PositiveInteger Cut -> Syntax
(define (reify-cut size cut)
  (match-define (value-cut head spine) cut)
  (reify-spine size (reify-head size head) spine))
 
; reify-spine : PositiveInteger Syntax [ListOf Form] -> Syntax
(define (reify-spine size exp spine)
  (match spine
    ['() exp]
    [(cons form rest)
     (reify-spine size (reify-form size exp form) rest)]))
 
; reify-form : PositiveInteger Syntax Form -> Syntax
(define (reify-form size exp form)
  (match form
    [(value-app rand) (syntax-app exp (reify size rand))]))
 
; reify-head : PositiveInteger Head -> Syntax
(define (reify-head size head)
  (match head
    [(value-local lvl) (syntax-local lvl)]
    [(value-let _ unfold) (reify size (force unfold))]))
 
; reify-closure : PositiveInteger Closure -> Syntax
(define (reify-closure size closure)
  (reify-bind-var size
                  (lambda (size^ arg)
                    (reify size^ (evaluate/apply-closure closure arg)))))
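To tie the pieces together, a top-level driver would plausibly look like this (a sketch; these notes never name such a function):
; normalize : Syntax Environment -> Syntax
; Evaluates the expression, then reads the resulting value back
; into syntax at the current environment size.
(define (normalize exp env)
  (reify (env-size env) (evaluate exp env)))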

Yeah.