### 4 Assignments

#### 4.1 Assignment 1: Intuitionistic propositional logic

This assignment will be assigned on January 17, 2023, and will be due on January 31, 2023. Corrections will be accepted until February 14, 2023.

The purpose of this assignment is to create a rudimentary proof assistant for IPL, looking similar to our existing proof-trees except bottom-up instead of top-down.

The starting code is available here, or on GitHub Classroom if you are not auditing the course.

All of your functions should come with a good number of tests. We provide the macros check-success (which makes sure that a procedure does not error, but does not check its output) and check-error (which makes sure that a procedure errors).

Exercise 1: Design a function prop=? : Prop Prop -> Boolean that determines if two Props are equal. Do not use equal?; instead, use match or match*.
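As a sketch of the shape such a definition might take (the constructor names here are hypothetical stand-ins for the structures in the starter code):

```racket
;; Hypothetical Prop structures -- use the actual ones from the starter code.
(struct prop-atom (name) #:transparent)        ; A, B, ...
(struct prop-imp (dom rng) #:transparent)      ; P -> Q
(struct prop-and (left right) #:transparent)   ; P /\ Q
(struct prop-or (left right) #:transparent)    ; P \/ Q
(struct prop-bot () #:transparent)             ; absurdity

;; prop=? : Prop Prop -> Boolean
;; Structural equality, recurring on both Props at once with match*.
(define (prop=? p q)
  (match* (p q)
    [((prop-atom x) (prop-atom y)) (symbol=? x y)]
    [((prop-imp a b) (prop-imp c d)) (and (prop=? a c) (prop=? b d))]
    [((prop-and a b) (prop-and c d)) (and (prop=? a c) (prop=? b d))]
    [((prop-or a b) (prop-or c d)) (and (prop=? a c) (prop=? b d))]
    [((prop-bot) (prop-bot)) #t]
    [(_ _) #f]))
```

The catch-all final clause is what makes differently-shaped Props compare unequal without enumerating every mismatched pair.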

Exercise 2: Design a tactic combinator modus-ponens : SynTactic ChkTactic -> SynTactic, which implements →-elimination:

Hint: Match on the result of the P \to Q tactic, and throw-proof-error! if it’s wrong. If the so-called witness of P \to Q ends up being a witness of something else, we can’t do anything!

Exercise 3: Design a tactic combinator conjoin : ChkTactic ChkTactic -> ChkTactic, which implements ∧-introduction:

Hint: Match on the goal, and throw an error if it’s wrong. If someone tries to make conjoin produce something that’s not a conjunction, we can’t do anything!

Exercise 4: Design tactic combinators fst, snd : SynTactic -> SynTactic, which implement both forms of ∧-elimination respectively:

Exercise 5: Design a tactic combinator introduce : Symbol ChkTactic -> ChkTactic which implements →-introduction (no Gentzen-style rule!). Given a name for a witness of P and a tactic which, given a witness of P in the context, can produce a witness checking as Q, we should get a witness checking as P \to Q.

Hint: Remember implementing lambda in your representation-independent 311-style interpreter? You will need to extend the context similarly.

Exercise 6: Use your tactics and run-chk to prove the proposition A \to (A \land A). How does your proof compare to your proof on paper?
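For a sense of what such a proof script might look like, here is a hypothetical sketch; every name below is an assumption about your starter code (in particular, how Props are constructed, how run-chk is invoked, and how a SynTactic is coerced to a ChkTactic):

```racket
;; Hypothetical proof script for A -> (A /\ A).
;; Assumes run-chk takes a goal Prop and a ChkTactic, assumption produces a
;; SynTactic from a name, and chk coerces SynTactic to ChkTactic.
(run-chk (prop-imp (prop-atom 'A)
                   (prop-and (prop-atom 'A) (prop-atom 'A)))
         (introduce 'x
                    (conjoin (chk (assumption 'x))
                             (chk (assumption 'x)))))
```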

Exercise 7: Use your tactics and run-chk to prove the proposition (A \to B) \to ((B \to \bot) \to (A \to \bot)), or in other words, (A \to B) \to (\neg B \to \neg A). How does your proof compare to your proof on paper? Why can we do this, despite not having written any tactics to handle \bot?

Exercise 8: Design a tactic combinator explode : SynTactic -> ChkTactic, which implements ex falso:

Hint: Use void.

Exercise 9: Use your tactics and run-chk to prove the proposition (A \land (A \to \bot)) \to B, or in other words, (A \land \neg A) \to B. How does your proof compare to your proof on paper?

Exercise 10: Design tactic combinators left-weaken, right-weaken : ChkTactic -> ChkTactic which implement both forms of ∨-introduction, respectively:

Exercise 11: Design a tactic combinator cases : ChkTactic SynTactic SynTactic -> SynTactic, which implements ∨-elimination:

Hint: Use match*, or assert-prop-equal!. Note that the pattern (list a a) matches (list 2 2), but not (list 2 3).
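For instance, since Racket's match compares repeated pattern variables with equal?:

```racket
;; A pattern variable may repeat; the clause only matches when all
;; occurrences are equal?.
(match (list 2 2)
  [(list a a) 'same]
  [_ 'different])
;; => 'same

(match (list 2 3)
  [(list a a) 'same]
  [_ 'different])
;; => 'different
```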

Exercise 12: Use your tactics and run-chk to prove the proposition (A \to B) \to ((A \lor C) \to (B \lor C)). How does your proof compare to your proof on paper?

Hint: You will need to use imbue.

Exercise 13: Use your tactics and run-chk to prove the proposition \neg \neg (A \lor \neg A) (otherwise known as \neg \neg \mathsf{LEM}, the double negation of the law of excluded middle). How does your proof compare to your proof on paper?

The challenge exercises are optional and not required for anyone, but may be instructive or useful for future assignments.

Challenge exercise 1: Design a tactic combinator dne : SynTactic -> SynTactic which, given a witness of \neg \neg A, gives you a witness of A.

Challenge exercise 2: Design a tactic unleash-hole! : ChkTactic which, instead of checking anything, prints:

the goal proposition,

and the current set of assumptions.

#### 4.2 Assignment 2: Simply typed lambda calculus

This assignment will be assigned on January 31, 2023, and will be due on February 6, 2023. Corrections will be accepted until February 20, 2023.

The purpose of this assignment is to create an elaborating type checker for the simply-typed lambda calculus by extending our proof checker for propositional logic. This change is relatively short, but requires lots of mechanical work.

The starting code is available here.

The term builder is not necessary to do this assignment, and requires significant extension if you wish to use it.

Exercise 1: s/prop/type/g. Now that we’re doing type checking, take your A1 code and rename the functions and data definitions consistently as follows:

throw-proof-error! becomes throw-type-error!

Proposition/Prop becomes Type

The Proposition structures change according to the starter code

prop=? becomes type=?

introduce becomes intro

imbue becomes ann

assumption becomes var

modus-ponens becomes app

conjoin becomes cons^

left-weaken becomes inl

right-weaken becomes inr

cases becomes +-elim

explode becomes ⊥-elim

Exercise 2: Building our syntax. For every one of our old ChkTactics, return its corresponding syntax. For every one of our old SynTactics, return a pair of the type the term gives as well as the corresponding syntax. Again, this is detailed in the data definition.

match-define may be useful to extract things from the result of SynTactics in one line.

Also note that we have a syntax-ann node, which was not present in our lecture notes.

Remember to write plenty of tests. Your old check-success invocations will need to be changed to check-equal?, as we are no longer returning (void).

Exercise 3: Term checking. Design functions type-check : ConcreteSyntax -> ChkTactic and type-infer : ConcreteSyntax -> SynTactic which construct the tactic corresponding to their input term.

To write tests for these, you will have to invoke run-chk and run-syn on the results of these functions, and check those results.

Also note that each piece of syntax corresponds to exactly one tactic, and some will appear only in type-check. Read off your signatures to know what you return. The only place where chk should be called is at the very end of type-check, and nowhere else.

Exercise 4: n-ary application. As an example of what elaboration is capable of, design tactic combinators intros : [ListOf Symbol] ChkTactic -> ChkTactic and apps : SynTactic [ListOf ChkTactic] -> SynTactic which, respectively, do n-ary lambda introduction and n-ary application. They should elaborate to unary lambda/apply.

These types of tactic combinators, known as tacticals, can and should call intro and app, respectively.

Then, change type-check and type-infer to call these rather than intro/app.

```racket
; A ConcreteSyntax is one of:
; ...
; - (cs-lam [ListOf Symbol] ConcreteSyntax)
; - (cs-app ConcreteSyntax [ListOf ConcreteSyntax])
; ...
```
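Both tacticals can be written as folds over their list argument. A sketch, assuming intro and app behave as described above:

```racket
;; intros : [ListOf Symbol] ChkTactic -> ChkTactic
;; Elaborates to nested unary lambdas:
;; (intros '(x y) t) behaves like (intro 'x (intro 'y t)).
(define (intros names body-tac)
  (foldr intro body-tac names))

;; apps : SynTactic [ListOf ChkTactic] -> SynTactic
;; Elaborates to nested unary applications, associating to the left.
(define (apps fn-tac arg-tacs)
  (foldl (lambda (arg acc) (app acc arg)) fn-tac arg-tacs))
```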

Exercise 5: Tying it all up. Take the proofs from Exercise 6, Exercise 7, Exercise 9, Exercise 12, and Exercise 13 of Assignment 1, and turn them into concrete syntax terms.

The challenge exercises are optional and not required for anyone, but may be instructive or useful for future assignments.

Challenge exercise 1: Update your unleash-hole! : ChkTactic to be unleash-hole : [Maybe SynTactic] -> ChkTactic which, if given a SynTactic as input, runs that SynTactic and prints the type it proves, without checking whether it matches what is required. Use the syntax-hole node as the piece of syntax to return.

Then, add it to type-check, using the cs-hole node.

As an example, checking the term (lambda (x) (! x)) (where ! represents a hole) should say that the hole has type A, even if we want to check it as A \to B.

#### 4.3 Assignment 3: Normalization by evaluation

This assignment will be assigned on February 7, 2023, and will be due on February 17, 2023. Corrections will be accepted until March 3, 2023.

The purpose of this assignment is to extend your work for A2, and normalize the core terms that come out of your elaborating type checker. This change requires a lot of mechanical work, and also requires computation and possibly uniqueness rules.

In lecture, we presented the following rules for function types:

In the rest of this, we will be extending our normalizer to handle computation and uniqueness rules for each of our types in A2. We start with our A2 code, and add the normalizer in the starter code, available here.

Exercise 0: Products (done in class). We add a computation rule to our system for products, but no uniqueness rule. (We could add one, but we choose not to.)

Extend evaluate to handle syntax-cons, producing the new value-cons node.

Then, to support syntax-fst and syntax-snd, design functions do-fst : Value -> Value and do-snd : Value -> Value that account for all relevant Values (so, Cuts and value-cons). When encountering a neutral, push value-fst and value-snd onto the spine.
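With toy stand-ins for the Value representation (every structure name here is an assumption; the real ones, including the type information that must be threaded through cuts, come from the starter code), do-fst might look like:

```racket
;; Toy stand-ins for the starter code's Value representation.
(struct value-cons (car cdr) #:transparent)
(struct cut (head spine) #:transparent)  ; a neutral: a head plus pending eliminations
(struct value-fst () #:transparent)

;; do-fst : Value -> Value
(define (do-fst v)
  (match v
    [(value-cons a _) a]            ; computation rule: fst (cons a d) = a
    [(cut hd spine)                 ; stuck on a neutral: push the projection
     (cut hd (cons (value-fst) spine))]))
```

do-snd is symmetric, projecting the other component and pushing value-snd instead.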

Then, extend reify to account for it as well, modifying reify-form and reify. Make sure to preserve type information throughout your cuts.

Exercise 1: Motives. Note that our Syntax and Value data definitions have changed slightly from those in class. In particular, syntax-+-elim now has a motive field, which is a type. It represents the type we are eliminating into: for example, if the result of syntax-+-elim is C and you’re eliminating an A + B, then the motive argument is C and the function types would be A \to C and B \to C.

Don’t change your ConcreteSyntax data definition: instead, in the tactics handling +-elim and ⊥-elim, add the motive as an argument during elaboration. So, if the tactic returns a C, put that C in the resultant syntax as well.

Exercise 2: Sums. Here are the computation rules (again, no uniqueness) for sums:

Again, extend evaluate and reify to handle syntax-inl and syntax-inr. Then, design a function do-+-elim : Value Value Value -> Value accounting for all relevant Values. When encountering a neutral, push value-+-elim onto the spine.

Note that value-+-elim takes four arguments: the types for f and g in the above rules, and f, g themselves. This is because in reify-form, we need to have the types for f and g to recursively call reify, as we did for application.

Exercise 3: Bottom. We have no computation or uniqueness rule for bottom. However, it still needs to be added to evaluate and reify.

Design a function do-⊥-elim : Type Value -> Value that performs bottom elimination. Note that since we have no concrete values of type ⊥, all your scrutinees will be cuts.

Again, value-⊥-elim needs a field for the type information, just as value-+-elim did.

Then, extend evaluate and reify.

Exercise 4: Tying the knot. Modify run-chk and run-syn to produce a normal form of the elaborated syntax, rather than the elaborated syntax.

#### 4.4 Assignment 4: Dependent types

This assignment will be assigned on February 21, 2023, and will be due on March 6, 2023. Corrections will be accepted until March 20, 2023.

The purpose of this assignment is to extend your work on STLC, and add support for \Pi-types, resulting in a first brush at Martin-Löf type theory. This requires integrating the type checker from A2 with the normalizer written in A3.

This assignment has many moving parts, and it will become easy to confuse variables. Within the tactic engine, I recommend naming all your tactic variables X-tac, all your syntax variables X-stx, and all your value variables just X (for some sensible X).

The starter code is available here.

Exercise 0: Remove the code that normalizes your terms from run-chk and run-syn. You don’t need that anymore.

Exercise 1: We’re reviewing your application. We no longer have a clear-cut function type without closures inside of it. Consequently, preserving the type in do-app will need to change.

Recall the (non-bidirectional) elimination rule for \Pi:

Modify do-app to handle this, updating the type you match on in the cut case. Then, use apply-closure to get B.

Finally, modify the reify case that used to handle type-arrow, and update it to use value-Π. Again, you need to use apply-closure to get B for your recursive call.

Exercise 2: Arrows? What arrows? With the removal of a basic arrow type, we also need to change every place that used to use type-arrow to use value-Π. But value-Π takes a closure, which takes a Syntax, and we have a Value!

```racket
; A Closure is one of:
; - (closure Syntax Environment)
; - (h-closure [Value -> Value])
(struct closure (term env) #:transparent)
(struct h-closure (fn) #:transparent)
```

Update apply-closure to handle this notion of closure. Everywhere where you used to use type-arrow, update it to use value-Π with this new notion of closure. Again, remember that A \to B := \Pi_{(\_ : A)} B.
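A sketch of the updated apply-closure, assuming evaluate and an environment-extending helper (here called extend-env) from your existing normalizer:

```racket
;; apply-closure : Closure Value -> Value
(define (apply-closure clo arg)
  (match clo
    [(closure term env)              ; syntactic closure: defer to the evaluator
     (evaluate (extend-env env arg) term)]
    [(h-closure fn)                  ; host-language closure: just call it
     (fn arg)]))
```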

Exercise 3: Type-type-type. Add a new case to reify, matching on value-Type. Then, handle every constructor: \Pi, \times, +, \bot, \mathbb{N}, \mathrm{Type}, and cuts.

Reification for every case except \Pi should be relatively straightforward. You will need to use the provided helper function fresh-cut to reify the closure representing B in your \Pi case.

Sanity check: At this point, you should be able to call normalize on closed, well-typed Syntaxes. You should not yet be able to type-check these terms, except by hand to generate test cases for normalize and the functions playing into it.

Exercise 4: Type tactics. Everywhere you used to take a motive argument, you now need to take a ChkTactic that checks as (value-Type). Do this.

Whenever you need to turn that tactic's result from a Syntax into a Value, in order to return it from a SynTactic or use it in another ChkTactic, use eval-with-context.

Exercise 5: Baking a pi. In class, we showed the well-formedness rules for every type but \Pi.

The (bidirectional) well-formedness rule for \Pi is more complicated:

Design a function Π-form : Symbol ChkTactic ChkTactic -> SynTactic that performs this typing judgment. You will need to use eval-with-context in order to get A to add to the context.

Exercise 6: Eating your pi. In intro and app, we used to match on type-arrow, and we now need to match on value-Π, which then gives us B as a closure.

Per the last exercise, generate a fresh cut with fresh-with-context, and use it to generate B. You will need to extend the context in intro.

Sanity check: You should now be able to type-check and normalize the identity function.

Exercise 7: Trust the less natural natural recursion. Your starter code contains the ability to handle recℕ. Recall the rules for indℕ:

Update your data definitions to rename recℕ to indℕ.

Extend the tactic for recℕ to handle indℕ, and rename it to indℕ. You will have to use the argument to your h-closure to construct the type of step.

Then, extend do-recℕ to become do-indℕ, noting that step now takes two arguments.

The following exercises are not required for anyone.

Challenge exercise 1: We’re not providing much assistance, huh? We have a type checker, but not really a proof assistant. When we did A3, we ripped the holes out of our system.

Add holes back into your system, and integrate them with the normalizer.

Hint: Modify the data definition for Head.

#### 4.5 Assignment 5: Dependenter types

This assignment will be assigned on April 11, 2023, and will be due at the latest possible opportunity.

The purpose of this assignment is to extend your work on A4 to support sigma types and identity.

Update evaluate and reify to handle your new data definitions.

Create a tactic combinator for each new form, where each premise (thing above the line) is a tactic argument.

Update type-check and type-infer to handle your new ConcreteSyntax data definitions.

Test.

The starter code is available on GitHub Classroom, with the link on Canvas.

This assignment is impossible without a completed A4.

Exercise 1: The cooler sum. We have +-elim in our old code. Using the rules below, extend this implementation to implement ind+.

This should be relatively routine at this point. Make sure to update every reference to +-elim, and ensure you’re checking with the new dependent motive and type for f and g.

Note that the definition of form-ind+ has changed a bit. Namely, we save the type of the scrut in the scrut-tp field, which provides enough information to reconstruct the type. (The same applies for the scrut-tp field of form-ind≡ later.)

Exercise 2: Just what we needed: more Greek. Using the rules below as a reference, modify ×-form to become Σ-form, and cons^, fst, and snd to handle sigma types. This will look very similar to the implementation of Π types.

Hint: You will have to infer the type of scrut before you can check the type of mot.

Hint: In snd, you will need to get fst of your pair. To do this, you’ll make a call that looks like (eval-with-context (syntax-fst ...) ...).

Exercise 3: A missing rule. Update reify to handle eta for pairs, as given in the rule \Sigma_\eta below.

This will look similar to your implementation of eta for functions, in which you'll generate a cut of type A, apply B-clo to it, and call do-fst and do-snd. There will be no matching on val after you determine that tp is a dependent pair type.

Exercise 4: Identity crisis. Implement the rules below for identity types. Make sure to update all moving components: so, normalization, tactics, etc.

Hint: You will have to infer the type of scrut before you can check the type of mot.

Hint: You will have an annoying nest of do-apps in your ind≡ tactic combinator.

To help test, all of the interesting things we've proven in class (up until HoTT) should now be provable. Proofs of m + 0 = m and 0 + m = m are good tests.

Final type rule reference:

Recall that \Leftarrow represents a check tactic, and \Rightarrow represents an infer tactic.

I like to remember this by thinking of how the arrow points into or out of the term: for \Rightarrow, the type is given to us by the term, which is why it points out; with \Leftarrow, the type points into the term, as we have to ensure the term has that type.

Bidirectional core:

Type-in-type (inconsistent):

Pi types:

Syntactic sugar: A \to B is \Pi_{(\_ : A)} B.

Sigma types:

Syntactic sugar: A \times B is \Sigma_{(\_ : A)} B.

Identity:

Sum types:

Bottom type:

Syntactic sugar: \neg A is A \to \bot. (Not part of our concrete syntax.)

Naturals:

Not included: formation rules (part of the grammar), rules for Tarski universes, and "obvious" reductions (not β or η).