Normalization by Evaluation Four Ways: Reconstructing NbE Designs from First Principles
1 The Syntax
2 A Semantics
3 Digression: Refining the Type of the Syntax
4 The Normalizer
4.1 Choice {1} The Intensional Residualizing Semantics
4.1.1 Choice {1,1} The Intensional Residualizing Readback Semantics
4.1.2 Choice {1,2} The Intensional Residualizing Reify Semantics
4.1.3 Digression: Refining reify’s Type: Normal or Expression
4.2 Choice {2} The Extensional (Meta-circular) Semantics
5 A Secret Third Thing: The Extensional Residualizing Semantics
6 Full Implementation

Normalization by Evaluation Four Ways: Reconstructing NbE Designs from First Principles

William J. Bowman <wjb@williamjbowman.com>

I’ve been trying to learn normalization by evaluation (NbE) the past week, and have been getting hung up on different presentations. Why do some implementations look so different from others: some have reify and reflect, while some have only read-back (which appears to be reify)? Why do some have explicit type-annotated injection from normal to neutrals?

In this tutorial, I’ll rederive various designs for NbE from first principles. We’ll start with an evaluator for the simply typed λ-calculus, and from it derive some normalizer. But, whenever practical, we’ll stop and question a design decision, and explore how this gives us a slightly different normalizer.

1 The Syntax

I’ll start with a definition of the syntax

Here, when I say "syntax", I mean both the abstract syntax trees used to build expressions and the equational theory over those abstract syntax trees.

of simply typed λ-calculus. I define the syntax in the style of How to Design Programs data definitions. We start with the definition of the syntax of Expressions.

; An Expression is one of:
; - a Base
; - a Variable
; - `(lambda (,x) ,e) where x is a Variable and e is an Expression
; - `(,e1 ,e2) where e1 and e2 are Expression

This describes the syntax of λ-calculus terms, represented as quasiquote’d lists. For example, the following is an Expression: `(lambda (s) (lambda (x) (s x))).

The definition is incomplete. We need the following definitions of Base and Variable

; A Base is a struct (base v), where v is arbitrary
(struct base (v))
(define Base? base?)
 
; A Variable is a symbol?
(define Variable? symbol?)

Finally, I give the equations

I do not specify α-equivalence explicitly, although I do actually consider terms up to α-equivalence, as is standard in programming language work.

I want this syntax to satisfy:
; Two Expressions are equal up to (the congruence closure of) β-equivalence
; `((lambda (,x) ,e1) ,e2) = (subst e1 x e2)
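The post never defines subst, so here is a hedged sketch of what it might look like: a capture-avoiding substitution that always freshens binders with gensym. The name subst and the struct base follow the definitions in this post; the implementation strategy is my own assumption.

```racket
#lang racket
;; Sketch: substitute e2 for free occurrences of Variable x in e1.
;; Binders are unconditionally renamed with gensym, so (since we consider
;; terms up to α-equivalence) capture can never occur.
(struct base (v) #:transparent)

(define (subst e1 x e2)
  (match e1
    [(? base?) e1]
    [(? symbol? y) (if (eq? y x) e2 e1)]
    [`(lambda (,y) ,body)
     ;; Freshen the binder before substituting under it.
     (define y* (gensym y))
     `(lambda (,y*) ,(subst (subst body y y*) x e2))]
    [`(,f ,a) `(,(subst f x e2) ,(subst a x e2))]))
```

For example, (subst '(f x) 'x 'y) produces '(f y), while substituting under a binder freshens it first, so (subst '(lambda (y) x) 'x 'y) yields something α-equivalent to (lambda (y*) y) with no capture.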

2 A Semantics

Next, we define the semantics for this syntax. That is, we define a definitional interpreter for these expressions: a meta-circular evaluator for this language.

First, we define the domain of Values into which we interpret Expressions:
; A Value is one of:
; - a (base v)
; - a procedure? that, when applied to a Value, produces a Value

We interpret a Base? as itself, and λ functions as semantic (meta-language) functions.

To define this semantics, we need to define Environments, that map syntactic Variables to Values:
; An Environment is a finite map from Variables to Values
; i.e. a dictionary from Variables to Values
(define initial-env '())

; Environment -> Expression -> Value
(define (eval ρ e)
  (match e
    [(? Base?) e]
    [(? Variable?) (dict-ref ρ e)]
    [`(lambda (,x) ,e)
     (lambda (v)
       (eval (dict-set ρ x v) e))]
    [`(,e1 ,e2)
     ((eval ρ e1)
      (eval ρ e2))]))

This semantics is sound with respect to the syntax. Here’s a little example:

Examples:
> (require "impl/meta-circular-semantics.rkt")
> church-zero

'(lambda (s) (lambda (z) z))

> church-add1

'(lambda (n) (lambda (s) (lambda (z) (s ((n s) z)))))

> church-+

'(lambda (x) (lambda (y) (lambda (s) (lambda (z) ((x s) ((y s) z))))))

> (define semantic-church-two
    (eval initial-env
          ; Church 1+1
          `((((lambda (church+)
                (lambda (church-zero)
                  (lambda (church-add1)
                    ((church+
                      (church-add1
                       church-zero))
                     (church-add1
                      church-zero)))))
              ,church-+)
             ,church-zero)
            ,church-add1)))
> semantic-church-two

#<procedure:...cular-semantics.rkt:24:5>

> ((semantic-church-two add1) 0)

2

The semantics does give us values of terms we can reason about. In the above example, I perform addition on Church numerals in the syntax, and I get a semantic value that acts like meta-language natural numbers.

But, how do I know this semantics is right? For one, I’d like to know the semantics is sound with respect to our syntax: two β-equivalent things result in the same value:

Examples:
> (define f1 (eval initial-env `(lambda (x) ((lambda (y) y) x))))
> (define f2 (eval initial-env `(lambda (x) x)))
> (equal? f1 f2)

#f

> (equal? (f1 'x) (f2 'x))

#t

If I had a better meta-language, I might be able to prove that rather than test it. Maybe I’ll redo this in Agda later.

But I’d also like to know that this semantics is complete: two equal values are in fact β-equivalent.

I might also want to be able to get back the syntax of a value. If the semantics is sound, then that value ought to be equal to its β-normal form, so getting back to its syntax ought to give me a normal form, i.e., going from syntax, to value, to syntax should be normalization.

Turns out, these are essentially the same question. If I can go from syntax, to value, to syntax, and what I get is β-equivalent to what I started with, then my sound semantics is also complete.

3 Digression: Refining the Type of the Syntax

The above definitions leave a lot to be desired. They claim, for example, that 'x is an Expression, which is clearly nonsense if eval is meant to be a semantics (a total function). We could, and it might be helpful to, enrich the definition as follows to describe well-bound terms.
; A Context is a finite set of Variables
 
; It supports the following operations:
; ∈ : Variable -> Context -> Set
; _::_ : Variable -> Context -> Context
 
; An (Expression (Γ : Context)) is one of:
; - a Base
; - a Variable, x, s.t. x ∈ Γ
; - `(lambda (,x) ,e) where x is a Variable and e is an (Expression (x :: Γ))
; - `(,e1 ,e2) where e1 and e2 are (Expression Γ)
 
; (ρ : Environment) -> (Expression (dom ρ)) -> Value
(define (eval ρ e) ...)

That’s better; now we know eval never has to deal with unbound variables. But it still might have to deal with ill-typed terms: what if I ask for the semantics of `(,(base 5) ,(base 5))? That’s nonsense.

So let’s add the syntax of types.
; A Type is one of:
; - 'Base
; - `(-> ,A ,B) where A and B are Types
 
; A Context is a finite map of Variables to Types
 
; An (Expression (A : Type) (Γ : Context)) is one of:
; - a Base, where A is 'Base
; - a Variable, x, s.t. (dict-ref Γ x) = A
; - `(lambda (,x) ,e) where A is `(-> ,A ,B), x is a Variable and e is an (Expression B (dict-set Γ x A))
; - `(,e1 ,e2) where e1 is an (Expression `(-> ,A ,B) Γ) and e2 is an (Expression A Γ)
 
; A (Value A) is one of:
; - a (base v), where A = 'Base
; - when A = `(-> ,A ,B), a procedure? that, when applied to a (Value A) produces a (Value B)
 
; (ρ : Environment) -> (Expression A (dom ρ)) -> (Value A)
(define (eval ρ e) ...)

Phew; now it’s a semantics.

But that’s a lot of detail that mostly doesn’t contribute to understanding the normalizers, so I’m going to stick with the simplified definition.

4 The Normalizer

Before our digression, we wanted to know if we can go from values back to syntax. There is good reason to do this, both practical (to get a normalizer) and theoretical (to prove completeness).

So let’s try:
; Value -> Expression
(define (reify-1 v)
  (match v
    [(? Base?) v]
    [(? procedure?)
     ; Then v is a procedure of type Value to Value
     (define x (gensym 'x))
     `(lambda (,x)
        ; x is not a Value, but a Variable (Expression)
        ,(reify-1 (v x)))]))

Well. This code almost works, but it’s not well typed. Here’s what it does: if the value is a Base?, we just return that, since it’s an expression. If the value is a procedure? (a semantic function), we want to construct a syntactic λ whose body is the result of reifying the value you get after applying the semantic function to the fresh syntactic variable.

But that application is not well typed, since the semantic function takes a Value, but a Variable is not a Value.

So, we extend the definition of Value:
; A Value is one of:
; - a (base v)
; - a procedure? that, when applied to a Value, produces a Value
; - a Variable?

But now we break an invariant in our evaluator: semantic application expected a procedure?, but it might get a Variable. This is easier to see in the indexed definition of Value:

; A (Value A) is one of:
; - a (base v), where A = 'Base
; - when A = `(-> ,A ,B), a procedure? that, when applied to a (Value A) produces a (Value B)
; - a Variable

Now we have a choice:

Do we:
  • Change the evaluator, so that application of a semantic function is defined when applying a non procedure? Value?

  • Or, change reify to convert a non-procedure? Value into a procedure??

We could also view this through the definition of Value: (1) do we allow a Variable to be a (Value `(-> ,A ,B)), or (2) only allow a Variable to be a (Value 'Base)?

4.1 Choice {1} The Intensional Residualizing Semantics

In this choice, we allow a Variable to be a value of any type. In fact, that will not be enough, as we’ll see shortly. This is a change to the semantics, i.e. the evaluator, and the resulting semantics is called the intensional residualizing semantics.

Sam Lindley’s slides on NbE use this terminology: https://homepages.inf.ed.ac.uk/slindley/nbe/nbe-cambridge2016.pdf.

Intensional here refers to the theory only having β-equivalence, but not η-equivalence. Residualizing refers to the semantics of values being either true values (a procedure) or residual values (variables, or stuck terms).

We update the evaluator as follows:
; Environment -> Expression -> Value
(define (eval-1 ρ e)
  (match e
    [(? Base?) e]
    [(? Variable?) (dict-ref ρ e e)]
    [`(lambda (,x) ,e)
     (lambda (v)
       (eval-1 (dict-set ρ x v) e))]
    [`(,e1 ,e2)
     (do-app-1
      (eval-1 ρ e1)
      (eval-1 ρ e2))]))
 
; Value -> Value -> Value
(define (do-app-1 v1 v2)
  (match v1
    [(? procedure?)
     (v1 v2)]
    [_
     ; In this case, we cannot do application, but build a stuck term, i.e., a neutral term.
     `(,v1 ???)]))

Notice that since we might apply a Variable in the application case, we must extend Value further to allow stuck expressions other than Variables to be considered Values. In general, we extend Value to include any Neutral term. A Neutral is either a Variable or an elimination form (function application, in our case) whose target operand is a Neutral.

But, we have another choice to make:

Do we:
  • Allow the non-target operands of an elimination form to be Values?

  • Require the non-target operands of an elimination form to be Expressions?

If we pick option one, do-app will be simpler and unconnected with our normalizer. If we pick option two, do-app must call into the reifier to transform the v2 into an Expression.

4.1.1 Choice {1,1} The Intensional Residualizing Readback Semantics

In this semantics, we inject all terms, including neutral terms, into the domain of Values, then separately read back any Value (including neutral values) into an Expression. I call this the readback semantics, since the process of getting Values back to Expressions has this one-way loop producing syntax, reminiscent of (but in the opposite direction of) the reader.

; A Value is one of:
; - a base?
; - a procedure? from Value to Value
; - a NeutralValue
 
; A NeutralValue is one of:
; - a Variable?
; - `(,e1 ,e2) where e1 is NeutralValue and e2 is Value

Now we can define reify properly, or as it’s called in this presentation, read-back-value-11. However, since Values can be Neutrals, we also need a second definition, read-back-neutral-11, which takes NeutralValues to Expressions.

Examples:
> (define (eval-11 ρ e)
    (match e
      [(? Base?) e]
      [(? Variable?) (dict-ref ρ e e)]
      [`(lambda (,x) ,e)
       (lambda (v)
         (eval-11 (dict-set ρ x v) e))]
      [`(,e1 ,e2)
       (do-app-11
        (eval-11 ρ e1)
        (eval-11 ρ e2))]))
; Value -> Value -> Value
> (define (do-app-11 v1 v2)
    (match v1
      [(? procedure?)
       (v1 v2)]
      [_
       `(,v1 ,v2)]))
; Value -> Expression
> (define (read-back-value-11 v)
    (match v
      [(? base?)
       v]
      [(? procedure?)
       (define x (gensym 'x))
       `(lambda (,x)
          ,(read-back-value-11 (do-app-11 v x)))]
      [neutral
       (read-back-neutral-11 neutral)]))
; NeutralValue -> Expression
> (define (read-back-neutral-11 n)
    (match n
      [(? Variable?) n]
      [`(,n ,v)
       `(,(read-back-neutral-11 n) ,(read-back-value-11 v))]))

Now, we have a normalizer:

Examples:
> (define f1 (eval-11 initial-env `(lambda (x) ((lambda (y) y) x))))
> (define f2 (eval-11 initial-env `(lambda (x) x)))
> (read-back-value-11 f1)

'(lambda (x4861) x4861)

> (read-back-value-11 f2)

'(lambda (x4862) x4862)

> (define semantic-church-two
    (eval-11 initial-env
           `((((lambda (church+)
                 (lambda (church-zero)
                   (lambda (church-add1)
                     ((church+
                       (church-add1
                        church-zero))
                      (church-add1
                       church-zero)))))
               ,church-+)
              ,church-zero)
             ,church-add1)))
> (read-back-value-11 semantic-church-two)

'(lambda (x4863) (lambda (x4864) (x4863 (x4863 x4864))))

> (read-back-value-11
   (eval-11 initial-env
          `(lambda (f)
             (f (lambda (x) x)))))

'(lambda (x4865) (x4865 (lambda (x4866) x4866)))

Observe that this semantics does not produce a term in η-normal form, only β-normal form. The following two programs aren’t necessarily equal; we would need to know the types to figure that out.

Examples:
> (read-back-value-11
    (eval-11 initial-env
           `(lambda (hof)
              (lambda (f)
                (hof f)))))

'(lambda (x4867) (lambda (x4868) (x4867 x4868)))

> (read-back-value-11
   (eval-11 initial-env
          `(lambda (hof)
             (lambda (f)
               (lambda (x)
                 ((hof f) x))))))

'(lambda (x4869) (lambda (x4870) (lambda (x4871) ((x4869 x4870) x4871))))

In a sufficiently rich meta language, we could prove this semantics sound and complete.
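Tying this section together, the readback design can be packaged as a single normalize function. The following is a self-contained sketch: eval*, do-app, read-back-value, and read-back-neutral mirror the eval-11 family above, and the name normalize (and the starred helpers) are my own glue, not names from this post.

```racket
#lang racket
;; Intensional residualizing readback semantics, packaged as a normalizer.
(struct base (v) #:transparent)

;; Environment -> Expression -> Value
(define (eval* ρ e)
  (match e
    [(? base?) e]
    [(? symbol?) (dict-ref ρ e e)]               ; unbound variables stand for themselves
    [`(lambda (,x) ,body) (lambda (v) (eval* (dict-set ρ x v) body))]
    [`(,e1 ,e2) (do-app (eval* ρ e1) (eval* ρ e2))]))

;; Value -> Value -> Value
(define (do-app v1 v2)
  (if (procedure? v1) (v1 v2) `(,v1 ,v2)))       ; stuck: build a NeutralValue

;; Value -> Expression
(define (read-back-value v)
  (match v
    [(? base?) v]
    [(? procedure?)
     (define x (gensym 'x))
     `(lambda (,x) ,(read-back-value (do-app v x)))]
    [n (read-back-neutral n)]))

;; NeutralValue -> Expression
(define (read-back-neutral n)
  (match n
    [(? symbol?) n]
    [`(,n ,v) `(,(read-back-neutral n) ,(read-back-value v))]))

;; Expression -> Expression: evaluate, then read back.
(define (normalize e) (read-back-value (eval* '() e)))
```

For example, (normalize '(lambda (x) ((lambda (y) y) x))) produces a term α-equivalent to (lambda (x) x).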

4.1.2 Choice {1,2} The Intensional Residualizing Reify Semantics

Instead of embedding Values in Neutrals, we could eagerly transform a Value back into an Expression when building a Neutral. This makes the evaluator and the normalizer mutually recursive, but means all Neutrals are by construction Expressions, which makes reasoning a little easier.

The type definitions are as follows:

; A Value is one of:
; - a Base
; - a procedure? from Value to Value
; - a Neutral
 
; A Neutral is one of:
; - a Variable?
; - `(,e1 ,e2) where e1 is Neutral and e2 is Normal
 
; A Normal is one of:
; - a Base
; - a `(lambda (,x) ,e) where x is a Variable and e is a Normal
; - a Neutral

Then, in do-app, we call into reify.

Examples:
; Value -> Value -> Value
> (define (do-app-12 v1 v2)
    (match v1
      [(? procedure?)
       (v1 v2)]
      [_
       `(,v1 ,(reify-12 v2))]))
; Value -> Expression
> (define (reify-12 v)
    (match v
      [(? base?)
       v]
      [(? procedure?)
       (define x (gensym 'x))
       `(lambda (,x)
          ,(reify-12 (do-app-12 v x)))]
      [neutral
       neutral]))

The evaluator has identical structure but uses the modified definition of do-app (and modified type definitions).

Examples:
; Environment -> Expression -> Value
> (define (eval-12 ρ e)
    (match e
      [(? Base?) e]
      [(? Variable?) (dict-ref ρ e e)]
      [`(lambda (,x) ,e)
       (lambda (v)
         (eval-12 (dict-set ρ x v) e))]
      [`(,e1 ,e2)
       (do-app-12
        (eval-12 ρ e1)
        (eval-12 ρ e2))]))

The major difference is the result of a term with a λ function as an argument, which we can observe in semantic application of a neutral to function values:

Examples:
> (do-app-11 `f (eval-11 initial-env `(lambda (x) x)))

'(f #<procedure>)

> (do-app-12 `f (eval-12 initial-env `(lambda (x) x)))

'(f (lambda (x4881) x4881))

4.1.3 Digression: Refining reify’s Type: Normal or Expression

We actually could have arrived at Choice {1,2} The Intensional Residualizing Reify Semantics through another route, had we introduced a third choice: whether or not to refine the type of reify when we started defining it.

Had we decided to refine the type of reify to guarantee it produces not arbitrary Expressions, but only Normal Expressions, we would probably have ended up with an equivalent NbE design to Choice {1,2} The Intensional Residualizing Reify Semantics.

Observe that all the "Expression"s produced by reify-12 are Normal. Had we started by designing that refined type, we may not have been tempted to define NeutralValue at all, since we already had a perfectly good Neutral.

4.2 Choice {2} The Extensional (Meta-circular) Semantics

Back-tracking all the way to the top of our choice tree, we could have decided to change reify-1 to transform a Variable into a Value before trying to apply the semantic function to the Variable.

That is, we might have tried to introduce reflect-2, with the following types
; Value -> Expression
(define (reify-2 v)
  (match v
    [(? Base?) v]
    [(? procedure?)
     ; Then v is a procedure of type Value to Value
     (define x (gensym 'x))
     `(lambda (,x)
        ; x is not a Value, but a Variable (Expression)
        ,(reify-2 (v (reflect-2 x))))]))
 
; Variable -> Value
(define (reflect-2 n) ...)

To do this, we need to know something about the variable: its type. Is it a function, or a base? If it’s a base, then we can just return it, since a Base is also a Value. Otherwise, we need to construct a semantic function.

First we modify both reify and reflect to take a type. We use the same syntax of types, Type, introduced in Digression: Refining the Type of the Syntax. Then, we define reflect recursively on the type argument.

Here’s our definition of reflect-2:

Examples:
; Type -> Neutral -> Value
> (define (reflect-2 A n)
    (match A
      ['Base n]
      [`(-> ,A ,B)
       (lambda (v)
         (reflect-2 B `(,n ,(reify-2 A v))))]))

We had to generalize the type so that instead of taking a Variable, it takes a Neutral Expression.

Intuitively, when we have a Variable at function type, we want to build a semantic function that will, given a Value of type A, produce a Value of type B. To build the right function, we need to "apply" n. Except we can’t apply it; it’s a variable. So we build a neutral application of that variable. But that means we must convert the value v to a Normal Expression, so we reify it. By generalizing reflect-2, we can continue reflecting that Neutral to a Value.

This requires the following modified definition of reify-2, which now must return a Normal. We must also update it to be recursive on the type, rather than the term.

Examples:
; Type -> Value -> Normal
> (define (reify-2 A v)
    (match A
      ['Base v]
      [`(-> ,A ,B)
       (define x (gensym 'x))
       `(lambda (,x)
          ,(reify-2 B (v (reflect-2 A x))))]))

This also enables the following modified definition of Value:
; A (Value A) is one of:
; - a (base v), where A = 'Base
; - a Neutral, when A = 'Base
; - when A = `(-> ,A ,B), a procedure? that, when applied to a (Value A) produces a (Value B)

That is, a Neutral is a Value only at type 'Base. That is the only situation where a Neutral term must be returned from reflect-2.

Finally, we don’t have to modify eval at all—this definition works just fine with our existing meta-circular semantics.

Examples:
> (define f1 (eval initial-env `(lambda (x) ((lambda (y) y) x))))
> (define f2 (eval initial-env `(lambda (x) x)))
> (reify-2 '(-> Base Base) f1)

'(lambda (x5476) x5476)

> (reify-2 '(-> Base Base) f2)

'(lambda (x5477) x5477)
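Composing the unmodified meta-circular eval with the type-directed reify-2 and reflect-2 gives a type-directed normalizer. Here is a self-contained sketch; the definitions mirror those above, and the name normalize-2 (and the starred eval*) are my own, not names from this post.

```racket
#lang racket
;; Extensional (meta-circular) NbE: eval is unchanged; reify/reflect are
;; recursive on the Type.
(struct base (v) #:transparent)

;; Environment -> Expression -> Value  (the original meta-circular eval)
(define (eval* ρ e)
  (match e
    [(? base?) e]
    [(? symbol?) (dict-ref ρ e)]
    [`(lambda (,x) ,body) (lambda (v) (eval* (dict-set ρ x v) body))]
    [`(,e1 ,e2) ((eval* ρ e1) (eval* ρ e2))]))

;; Type -> Neutral -> Value
(define (reflect A n)
  (match A
    ['Base n]
    [`(-> ,A ,B) (lambda (v) (reflect B `(,n ,(reify A v))))]))

;; Type -> Value -> Normal
(define (reify A v)
  (match A
    ['Base v]
    [`(-> ,A ,B)
     (define x (gensym 'x))
     `(lambda (,x) ,(reify B (v (reflect A x))))]))

;; Type -> Expression -> Normal
(define (normalize-2 A e) (reify A (eval* '() e)))
```

For instance, (normalize-2 '(-> (-> Base Base) (-> Base Base)) '(lambda (f) f)) produces an η-expanded normal form α-equivalent to (lambda (f) (lambda (x) (f x))).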

This design decision is a very nice one. It forces us into a good design, refining the type of reify to insist it produces a Normal. It completely reuses the existing semantics, rather than requiring the semantics to be modified. It leads to a single syntactic definition of Neutral, rather than raising questions about whether it should include exotic subterms from the semantic domain.

It also gives us an extensional theory: it models η-equality in addition to β-equality. This means we should modify our syntax to include η, or we don’t have completeness: the following two terms have equal values, but are not β-equivalent.

Examples:
> (reify-2
   '(-> (-> (-> Base Base) (-> Base Base))
        (-> (-> Base Base)
            (-> Base Base)))
   (eval initial-env
            `(lambda (hof)
               (lambda (f)
                 (hof f)))))

'(lambda (x5478)
   (lambda (x5479)
     (lambda (x5481) ((x5478 (lambda (x5480) (x5479 x5480))) x5481))))

> (reify-2
   '(-> (-> (-> Base Base) (-> Base Base))
        (-> (-> Base Base)
            (-> Base Base)))
   (eval initial-env
         `(lambda (hof)
            (lambda (f)
              (lambda (x)
                ((hof f) x))))))

'(lambda (x5482)
   (lambda (x5483)
     (lambda (x5484) ((x5482 (lambda (x5485) (x5483 x5485))) x5484))))

5 A Secret Third Thing: The Extensional Residualizing Semantics

Implicit in our starting semantics was another choice: to use a meta-circular semantics, or not. The meta-circular semantics is quite natural as a denotational semantics for a simple syntax, or as a quick and efficient interpreter for a simple language. However, we might have wanted to reason about semantic functions inductively. Or perhaps we wanted to enrich our semantic functions with more structure. Or perhaps we wanted to implement our interpreter with explicit closures.

In that case, we would first start with the following definition of Value:

Examples:
; A Value is one of:
; - a Base
; - a (closure Environment Variable Expression)
> (struct closure (env param body))

Then we’d define our evaluator as follows:

Examples:
; Environment -> Expression -> Value
> (define (eval-c ρ e)
    (match e
      [(? Base?) e]
      [(? Variable?)
       (dict-ref ρ e)]
      [`(lambda (,x) ,body)
       (closure ρ x body)]
      [`(,e1 ,e2)
       (do-app-c
        (eval-c ρ e1)
        (eval-c ρ e2))]))
; Value -> Value -> Value
> (define (do-app-c v1 v2)
    (match v1
      [(closure env param body)
       (eval-c (dict-set env param v2) body)]))
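To see the closure semantics in action, here is a small self-contained sketch (repeating the definitions above so it runs standalone): a λ evaluates to a closure struct, and application goes through do-app-c, which re-enters the evaluator with the closure's saved environment.

```racket
#lang racket
;; Closure-based evaluator: λs become closure structs instead of
;; meta-language procedures.
(struct base (v) #:transparent)
(struct closure (env param body) #:transparent)

;; Environment -> Expression -> Value
(define (eval-c ρ e)
  (match e
    [(? base?) e]
    [(? symbol?) (dict-ref ρ e)]
    [`(lambda (,x) ,body) (closure ρ x body)]
    [`(,e1 ,e2) (do-app-c (eval-c ρ e1) (eval-c ρ e2))]))

;; Value -> Value -> Value
(define (do-app-c v1 v2)
  (match v1
    [(closure env param body)
     ;; Extend the saved environment, then evaluate the saved body.
     (eval-c (dict-set env param v2) body)]))

;; The identity closure applied to a base value returns that value.
(do-app-c (eval-c '() '(lambda (x) x)) (base 5)) ; evaluates to (base 5)
```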

From this definition, how do we build the normalizer?

We might try to start with the definitions of reify and reflect, but we get stuck trying to define reflect:
; Type -> Value -> Normal
(define (reify A v)
  (match A
    ['Base v]
    [`(-> ,A ,B)
     (define x (gensym 'x))
     (reify B (do-app-c v (reflect A x)))]))
 
; Type -> Neutral -> Normal
(define (reflect A n)
  (match A
    ['Base n]
    [`(-> ,A ,B)
     (define x (gensym 'x))
     (reflect B (closure '??? x `(,n ,(reify A '???))))]))

Previously, reflect built a semantic function. In the body of that semantic function, the call to reify was delayed until a real semantic Value was produced. Now, however, we’re trying to build a closure. We don’t have a semantic value, so what do we pass to reify? What environment do we use for the closure? How do we reflect a closure, which is not a neutral?

We can’t define reflect, so let’s see if we can change where we use reflect. This call in reify is at fault: (reify B (do-app-c v (reflect A x))). We cannot reflect that variable if it’s at a function type. So here’s an idea: we modify do-app to work on Neutrals. We’re sort of forced into the design choice of Choice {1,1} The Intensional Residualizing Readback Semantics, although with one additional complication: since reify now requires a type, we’ll need to remember the type of Neutral values.

Here’s the new definition of Value
; A Value is one of:
; - a Base
; - (closure Environment Variable Expression)
; - `(reflect ,Type ,Neutral)

We make reflect into a syntactic construct, which embeds a Neutral with its type into the domain of semantic values.

We’ll need to modify our evaluator:

Examples:
; Environment -> Expression -> Value
> (define (eval-3 ρ e)
    (match e
      [(? Base?) e]
      [(? Variable?)
       (dict-ref ρ e)]
      [`(lambda (,x) ,body)
       (closure ρ x body)]
      [`(,e1 ,e2)
       (do-app-3
        (eval-3 ρ e1)
        (eval-3 ρ e2))]))
; Value -> Value -> Value
> (define (do-app-3 v1 v2)
    (match v1
      [(closure env param body)
       (eval-3 (dict-set env param v2) body)]
      [`(reflect (-> ,A ,B) ,neu)
       `(reflect ,B (,neu ,(reify-3 A v2)))]))

In do-app-3, if we see a Neutral, we build the neutral term; all neutral terms, as values, are wrapped in this syntactic version of reflect. To create a neutral application, we must reify the argument value into a Normal. We essentially inline the definition of reflect at function type.

We could have once more debated whether to have that call to reify, or instead have exotic NeutralValues. The latter choice would lead to an extensional version of Choice {1,1} The Intensional Residualizing Readback Semantics.

Finally, we define only reify (which might be called read-back):

Examples:
; Type -> Value -> Normal
> (define (reify-3 A v)
    (match A
      ['Base
       (match v
         [`(reflect Base ,neu)
          neu]
         [v v])]
      [`(-> ,A ,B)
       (define x (gensym 'x))
       `(lambda (,x)
          ,(reify-3 B (do-app-3 v `(reflect ,A ,x))))]))

We inline the base case of the definition of reflect, to unwrap any remaining neutrals. (Try leaving out that case and running the following example to understand it better.)

Example:
> (reify-3
   '(-> (-> (-> Base Base) (-> Base Base))
        (-> (-> Base Base)
            (-> Base Base)))
   (eval-3 initial-env
           `(lambda (hof)
              (lambda (f)
                (lambda (x)
                  ((hof f) x))))))

'(lambda (x5507)
   (lambda (x5508)
     (lambda (x5509) ((x5507 (lambda (x5510) (x5508 x5510))) x5509))))
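As with the earlier designs, the extensional residualizing semantics can be packaged as one normalizer. Here is a self-contained sketch; eval*, do-app*, and reify* mirror eval-3, do-app-3, and reify-3 above, and the name normalize-3 (plus the starred helpers) is my own glue, not a name from this post.

```racket
#lang racket
;; Extensional residualizing NbE: closures for λs, and a syntactic
;; `(reflect ,Type ,Neutral) wrapper embedding neutrals into Values.
(struct base (v) #:transparent)
(struct closure (env param body) #:transparent)

;; Environment -> Expression -> Value
(define (eval* ρ e)
  (match e
    [(? base?) e]
    [(? symbol?) (dict-ref ρ e)]
    [`(lambda (,x) ,body) (closure ρ x body)]
    [`(,e1 ,e2) (do-app* (eval* ρ e1) (eval* ρ e2))]))

;; Value -> Value -> Value
(define (do-app* v1 v2)
  (match v1
    [(closure env param body) (eval* (dict-set env param v2) body)]
    [`(reflect (-> ,A ,B) ,neu)
     ;; Stuck application: reify the argument and grow the neutral.
     `(reflect ,B (,neu ,(reify* A v2)))]))

;; Type -> Value -> Normal
(define (reify* A v)
  (match A
    ['Base (match v [`(reflect Base ,neu) neu] [v v])]
    [`(-> ,A ,B)
     (define x (gensym 'x))
     `(lambda (,x) ,(reify* B (do-app* v `(reflect ,A ,x))))]))

;; Type -> Expression -> Normal
(define (normalize-3 A e) (reify* A (eval* '() e)))
```

For example, (normalize-3 '(-> Base Base) '(lambda (x) ((lambda (y) y) x))) produces a term α-equivalent to (lambda (x) x).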

6 Full Implementation

The full implementation can be found: