Ideation
The approach was simple, and I thank @graphomath and @mariari for listening to my ramblings regarding it.
The approach was basically: look at QTT and try to map its concepts onto the already-existing RM concepts. If a concept already exists: great, try to interpret it literally. If not, think of ways of incorporating it.
All the images used in this post are accessible from an excalidraw here.
The Model
There are of course proper methods for defining what a model of a TT is; here I will be mostly handwaving, hoping that it will get people interested in this direction.
Core TT
So what do we need to model in order to make sense of what a type theory is? Well, the central notion of a type theory is a judgement. There are practically two sorts of judgements: whether a type is well-formed in a certain context, and whether a term of a certain type is well-formed in a certain context.
This makes it clear that what we need to define is:
- what does it mean to be a type in RM
- what does it mean to be a term in RM
- how to make judgements in RM
Let us assume we are in a non-dependently typed environment. Then, more or less, what you need in order to formulate what a type is are its
- introduction rules
- elimination rules
- syntactical name
Introduction rules tell you when that type can be introduced (by introducing one of its terms), elimination rules when a term of said type can be used. A name is just that: a name, similar to how we can name booleans Bool or bool or bOoL.
This I would say maps fairly easily to the concepts of
- Resource logic when we create a resource
- Resource logic when we consume a resource
- The resource label
I don’t think this particular connection is controversial for us; we have worked with it for quite a while.
And all of this indeed seems to correspond to the notion of an RM kind:
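To make the correspondence concrete, here is a minimal sketch of such a kind. The names (`Kind`, `logic_on_create`, `logic_on_consume`) are hypothetical, chosen only to mirror the three bullets above: creation logic as the introduction-rule analogue, consumption logic as the elimination-rule analogue, and a label as the syntactical name.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a kind bundles a syntactical name (label) with
# the resource-logic checks that play the role of a type's
# introduction and elimination rules.
@dataclass(frozen=True)
class Kind:
    label: str                            # syntactical name, e.g. "Bool"
    logic_on_create: Callable[..., bool]  # introduction-rule analogue
    logic_on_consume: Callable[..., bool] # elimination-rule analogue

# Example: a trivial kind whose logic always passes.
bool_kind = Kind("Bool", lambda *_: True, lambda *_: True)
```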
Now we come to a notion of a term. A term fundamentally has three constituents:
- a type
- a syntactical name
- dependencies
We already know that we want a correspondence between a term of a type and a resource of a specific kind. We know that the kind corresponds to the type, and we can associate the value with being the syntactical name, although that is not quite how we use it currently.
The hardest thing for us to map here is, I think, the notion of “dependency”. This is something that has been giving us headaches during the controller development process. We do not have a formal notion of dependencies. It is currently split between being application-dependent (something that a resource logic asks for explicitly) and application-independent (balancing resources commonsensically seem to be a dependency, yet they are not seen in resource logics).
So here I propose we actually introduce a separate field called “dependencies”. As dependencies are multiple, and term dependencies are on other terms, it seems natural that the dependencies of a resource are some sort of array of resources (or references to them; we’ll come to that at some point).
All-in-all, at this point we have deduced that we need something equivalent to three fields in the datatype:
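As a sketch, the three fields deduced so far might look as follows. The name `CoreResource` and the field types are my own assumptions for illustration: kind plays the role of the type, value the syntactical name, and dependencies the terms this term is built from.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three fields deduced so far.
@dataclass(frozen=True)
class CoreResource:
    kind: str            # corresponds to the term's type
    value: bytes         # corresponds to the syntactical name
    dependencies: tuple  # other resources (or references to them)

token = CoreResource(kind="Token", value=b"alice", dependencies=())
```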
That is not the end of it, however; as we go further into the needs of the system, we will need to adapt further.
The first thing to note is that if we agree to these descriptions, we sort of agree to the fact that an RM transaction corresponds to some derivation of sequents such as, e.g.:
In this sense the entire history of Anoma executors/controllers (those that run RM algorithms to check state changes and enforce them) can be seen as just an ever expanding derivation tree:
Global IDs/Commitments/Nullifiers
Here we come to the identification of resources. What is the difference between two statements that say true : Bool, i.e. both generate a boolean? Generally nothing; on the “underlying TT” level they are syntactically the same term. However, they are different in computation and use. Here we come to the reason why people bear with De Bruijn indices: we need a way of identifying terms which are syntactically the same but practically different. Moreover, we need to do so “globally”, across several derivations/transactions. Even more, we need the identification to be somehow “hidden” so that creating and consuming the resource can be made hard to link.
That is, we need a way of incorporating the ZCash commitment/nullifier architecture. Here is where we need the resource to have two more fields: nonce and nullifier key (commitment).
These grant that globally two “syntactically same terms” (read as: resources with same kind, value, and dependency fields) are seen as different. The nullifier key is of course needed for nullifier creation.
Note that from this perspective there is a notion of a term and of additional identification/privacy information: practically, a distinction between something like a “core resource” and a “core resource with identification/privacy info.” This idea of a “core resource” is something I heard quite a while ago from e.g. @jonathan; it is not a new concept among research folks trying to grasp the RM.
But for the time being, let us bar specific naming and focus on the function. So now we have expanded the resource datatype to contain:
with the idea that nonce is used to derive the unique ID (commitment/nullifier) connected to the term and nullifierKey to conceal usage information (decouple commitment from nullifier).
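A minimal sketch of this expanded datatype, under the assumption that the commitment is some hash over the resource's fields including the nonce (the exact derivation is not specified here, so `commitment` below is purely illustrative):

```python
import hashlib
from dataclasses import dataclass

# Hypothetical sketch: two otherwise-identical "terms" are globally
# distinguished by their nonces; the nullifier key is kept separate
# to decouple the commitment from the nullifier.
@dataclass(frozen=True)
class Resource:
    kind: str
    value: bytes
    dependencies: tuple
    nonce: bytes          # used to derive the globally unique ID
    nullifier_key: bytes  # conceals usage information

def commitment(r: Resource) -> bytes:
    # Illustrative only: hash the identifying fields plus the nonce.
    h = hashlib.sha256()
    for part in (r.kind.encode(), r.value, r.nonce):
        h.update(part)
    return h.digest()

r1 = Resource("Bool", b"true", (), b"\x01", b"k")
r2 = Resource("Bool", b"true", (), b"\x02", b"k")
# Same "syntactic term", different nonces -> different commitments.
assert commitment(r1) != commitment(r2)
```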
QTT
Finally, we come to the actual QTT part, namely introducing the concept of quantity.
Here I am unsure what would be the best way of introducing this, or specifically what kind of QTT we want. We may either do the usual QTT over the “natural linear-affine” semiring, as I described before, or “unite” two QTTs: affine and linear. More on this distinction can be read here.
However, let us set this particular discussion aside here. My personal opinion is that whatever we introduce below can stand in correspondence to either of the two approaches above.
First things first, we need to now introduce an extra concept to the notion of a resource, quantity:
Note that quantity here will properly speaking be interpreted as quantity of a resource, not of a type. This is an important distinction with the current state of things and the hardest part to reconcile with current application designs.
The difference can be spotted in the semantics. Currently, when we consume 500 quantities of a resource, we usually think of it as “I consume 500 of that resource.” In this model, we think of it as “I consume a resource 500 times.”
Note, however, that in this model quantity is not just a number but an element of our “natural linear-affine” semiring mentioned in the Ideation Setup.
To make things more explicit, let us present it in a way that is similar to the current implementation:
where isEphemeral can be seen as an isLinear flag. That is, isEphemeral = true means that it is a linear quantity, whereas isEphemeral = false means an affine one.
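Extending the earlier sketch, the full set of fields so far might look like this (again, the names are my own assumptions; `is_ephemeral` stands in for the isEphemeral/isLinear flag):

```python
from dataclasses import dataclass

# Hypothetical sketch: quantity as an element of the "natural
# linear-affine" semiring, encoded as a number plus a flag that
# reads as is_linear.
@dataclass(frozen=True)
class QuantifiedResource:
    kind: str
    value: bytes
    dependencies: tuple
    nonce: bytes
    nullifier_key: bytes
    quantity: int
    is_ephemeral: bool  # True = linear quantity, False = affine

# An intent is a linear resource meant to be consumed in the same
# derivation; a token is affine and may persist across derivations.
intent = QuantifiedResource("Intent", b"swap", (), b"\x01", b"k", 1, True)
token  = QuantifiedResource("Token",  b"gold", (), b"\x02", b"k", 500, False)
```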
The idea is hence that in a single transaction - a single derivation step in the sequent tree - linear resources need to be used exactly the number of times declared in said transaction. They are introduced in the runtime of a transaction and consumed in the same runtime. Think intents: you introduce them to be solved in the span of one derivation.
Affine resources - on the other hand - are those that you want to possibly span several derivations. These are persistent resources. These are the resources you may want to create in one derivation but consume way later, such as a proper token.
This has implications for the delta. The computation becomes nuanced:
A resource “being consumed” is not per se defined by the transaction having nullifiers, but by the resource being a dependency of some other resource. Moreover, we now practically need two deltas: one for linear and one for affine resources.
This is probably the hardest thing to reconcile both with the current cryptography tech we use and with application development. The process I describe for computing the delta is fairly convoluted and will require proper thought to make sure it cannot be abused. Moreover, it requires us to understand when deltas have a “positive balance.”
So the idea is that only the resources created actually “influence” the delta. In type theory, being “consumed” roughly corresponds to being passed as a variable to the generation of another term, i.e. being a term dependency.
So given a transaction T in which linear resources R1_Lin,...,Rn_Lin and affine resources R1_Aff,...,Rk_Aff are created, we calculate the delta by summing the quantities of said created resources and subtracting the quantities of the resources listed in their dependencies, each quantity multiplied by a hash of all resource fields other than the nonce field, with the linear and affine deltas computed separately.
As an example, consider two resources:
If they are in the same action, then the delta will be computed:
For linear resources as:
1 * hash(R1.kind, R1.value, R1.nfKey)
For affine as:
– 2 * hash(R2.kind, R2.value, R2.nfKey)
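The computation above can be sketched as follows. This is a toy model under my own assumptions (plain dicts, sha256 as the hash, `field_hash` covering kind, value, and nullifier key only): created resources contribute positively, resources appearing in dependency lists contribute negatively, and the two deltas are kept separate by linearity.

```python
import hashlib

# Illustrative stand-in for hash(kind, value, nfKey): every field
# except the nonce, interpreted as an integer.
def field_hash(kind: str, value: bytes, nf_key: bytes) -> int:
    h = hashlib.sha256(kind.encode() + value + nf_key).digest()
    return int.from_bytes(h, "big")

def deltas(created):
    # created: dicts with kind, value, nf_key, quantity, is_ephemeral,
    # and dependencies as (resource, used_quantity) pairs.
    linear, affine = 0, 0
    for r in created:
        term = r["quantity"] * field_hash(r["kind"], r["value"], r["nf_key"])
        if r["is_ephemeral"]:
            linear += term   # created linear resources add to the linear delta
        else:
            affine += term   # created affine resources add to the affine delta
        for dep, used in r["dependencies"]:
            dterm = used * field_hash(dep["kind"], dep["value"], dep["nf_key"])
            if dep["is_ephemeral"]:
                linear -= dterm  # consumption subtracts, per linearity
            else:
                affine -= dterm
    return linear, affine

# Mirror of the example: R1 linear (quantity 1) is created, with
# affine R2 (quantity 2) in its dependencies, used fully.
r2 = {"kind": "Token", "value": b"g", "nf_key": b"k", "quantity": 2,
      "is_ephemeral": False, "dependencies": []}
r1 = {"kind": "Intent", "value": b"s", "nf_key": b"k", "quantity": 1,
      "is_ephemeral": True, "dependencies": [(r2, 2)]}
lin, aff = deltas([r1])
```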
The evident question is “how do we ensure that the resource logics of the things listed in dependencies pass?” This can be resolved by passing the action tree root to the compliance units and ensuring that all the dependencies are consumed inside the action.
Note, however, that even if we consume R2 as part of R1’s dependencies, we may want to use it only once. So the dependency list needs to contain how many times we use the resource as well, and the dependencies become something like a list [(resource, usedQuantity)] akin to:
with the delta being computed appropriately, subtracting only one instead of two.
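A minimal sketch of such a dependency entry, assuming the dict-based resources from before: the pair records the consumed resource together with how many of its quantity are actually used, so consuming R2 while using it once subtracts 1 * hash(...) rather than 2 * hash(...).

```python
# Hypothetical sketch: a dependency pairs the consumed resource with
# its used quantity, which may be smaller than the resource's own.
r2 = {"kind": "Token", "value": b"g", "nf_key": b"k", "quantity": 2,
      "is_ephemeral": False, "dependencies": []}

dependencies = [(r2, 1)]  # R2 has quantity 2, but is used only once

resource, used_quantity = dependencies[0]
assert used_quantity <= resource["quantity"]
```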
In terms of which transactions we then deem able to influence state change: it is those transactions which produce a balanced linear delta and a positive affine delta.
This, I think, is enough abstract ideation. From here we can move towards concrete implementation details: