Ideation Setup
So we start by trying to deal with problems 1-3 listed above.
Problems 1 and 2 seem solvable by recognizing the RM as a model of some mathematical theory. The better studied the chosen theory, the better: it will already have many proven properties which may map onto the RM. And the easier it is to recognize the RM as a model of said theory, the more that contributes to point 2.
So here I take for granted that what we ultimately want is to understand the mathematical theory behind the RM.
Point 3 - given our current endeavors regarding object-compilation - would then encourage us to pursue a mathematical theory that has as much computational content as possible.
This brought me to
Type Theory
Type theory evidently satisfies both of the criteria. Moreover, if I recall correctly, in our RM designs we have been inspired by these topics already:
- we borrow the term[1] “resource” from linear logic/TT
- “quantity” was - at least as I was told - inspired by Quantitative Type Theory (QTT)
Here it is important to say a bit about QTT. Quantitative type theory is - roughly - a type theory where terms carry some “computational” information about how many times said term can be used in a derivation.
In case anyone is interested, a paper that I found insightful is Counting on QTT.
The nice example that paper uses is:
$$1\, x : \text{Bool},\ 1\, y : \text{Nat},\ 0\, z : \text{Nat} \vdash \text{if } x \text{ then } y + 1 \text{ else } y + 2 : \text{Nat}$$
which corresponds to the fact that we can use the boolean x exactly once, the natural number y exactly once, and z never inside the derivation (or runtime, whichever term better feeds your intuition). Basically, we provide exact information about how many times a specific variable can be used during the execution of some program.
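To make this usage-counting reading concrete, here is a minimal sketch in plain Python (my own illustration, not any official QTT implementation) that counts variable occurrences in the example term and checks them against the declared annotations. Combining the two branches of an `if` by taking the per-variable maximum is an assumption of this sketch: only one branch runs, and in the example both branches use y exactly once.

```python
from collections import Counter

# Terms: ("var", name) | ("lit", n) | ("add", t1, t2) | ("if", cond, then_t, else_t)
def usages(term):
    """Count how many times each variable is used when the term runs once."""
    kind = term[0]
    if kind == "var":
        return Counter({term[1]: 1})
    if kind == "lit":
        return Counter()
    if kind == "add":
        return usages(term[1]) + usages(term[2])
    if kind == "if":
        # Only one branch executes, so branch usages are combined with a
        # per-variable max (Counter's `|`) - an assumption of this sketch.
        return usages(term[1]) + (usages(term[2]) | usages(term[3]))
    raise ValueError(f"unknown term: {kind}")

# if x then y + 1 else y + 2
term = ("if", ("var", "x"),
        ("add", ("var", "y"), ("lit", 1)),
        ("add", ("var", "y"), ("lit", 2)))

# 1 x, 1 y, 0 z: x and y are used exactly once, z never.
assert dict(usages(term)) == {"x": 1, "y": 1}
```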
Quantitative type theory is usually presented over some semiring. The case that interests us is the semiring Nat. Another semiring we may become interested in is
$$\{(n, \mathit{type}) \colon \text{Nat} \times \{\text{Linear}, \text{Affine}\}\}$$
where a variable annotated by (n, Linear) is interpreted as usable exactly n times, while one annotated by (n, Affine) is usable at most n times in a single runtime/derivation. Summation/multiplication is given componentwise: the usual operations over Nat in the first component, and Linear + Linear = Linear, Affine + Affine = Affine, Linear + Affine = Affine in the second.
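A small sketch of this second semiring (names and representation are my own, purely illustrative): pairs of a Nat and a usage mode, combined componentwise. The text above only fixes how modes combine under addition; treating multiplication of modes the same way (Affine absorbs Linear) is an assumption of this sketch.

```python
LINEAR, AFFINE = "Linear", "Affine"

def combine_mode(a, b):
    # Affine absorbs: any Affine operand makes the result Affine.
    return LINEAR if a == LINEAR and b == LINEAR else AFFINE

def add(u, v):
    (m, s), (n, t) = u, v
    return (m + n, combine_mode(s, t))

def mul(u, v):
    (m, s), (n, t) = u, v
    # Counts multiply as in Nat; the mode combination here is an assumption.
    return (m * n, combine_mode(s, t))

# (2, Linear): usable exactly twice; (1, Affine): usable at most once.
assert add((2, LINEAR), (1, AFFINE)) == (3, AFFINE)
assert mul((2, LINEAR), (3, LINEAR)) == (6, LINEAR)
```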
Looking at this, some parallels with the RM as it is currently designed pop out: resources as terms, kinds as types, a single transaction as a derivation/runtime, delta as tracking the use of variables, etc…
Sadly, however, the analogy breaks down quite quickly. The last time we discussed this, we uniformly agreed that the current RM design diverges from the analogy for the simple reason that delta tracks the usage of types, not of terms. In QTT what is annotated are terms - as in resources - and not types - as in kinds. Delta, however, ensures that the kinds are balanced, not the terms. To quote @cwgoes :
Nevertheless, I tried to return to the idea of researching what an RM corresponding to QTT would look like, because I honestly do not think there is much left to do to make it work. Some correspondences were always already present, and it's up to us to make them precise.
And if we do, then I think this will resolve problems 1-2 instantly. People have been talking about type theories in some form for more than a century now, and QTT has also seen quite a few developments.
Problem 3 would then be resolved by the Curry-Howard isomorphism. Well, of course it's not that simple, but it would nevertheless be fundamentally easier to get a computational interpretation using a framework that has historically been associated with computational interpretations.
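As a one-line reminder of what Curry-Howard buys us (a sketch, purely illustrative): the same term is both a program - the K combinator - and a proof of the tautology A → B → A.

```lean
-- Under Curry-Howard, the K combinator is a proof of A → B → A:
theorem k (A B : Prop) : A → B → A :=
  fun a _ => a
```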
The Resource Machine
However, before diving into the changes and correspondences this research proposes for the resource machine design, it is good to establish some common ground.
The amazing thing about the RM design as it is now is that it works. Moreover, it works great! It is a working architecture for supporting arbitrary shielded applications. This is an amazing feat as-is. Changing the design non-conservatively - i.e. not preserving some of the core features - is pointless and counterproductive.
Hence I want to list here some of the properties I take for granted the design must have. That is, if a proposal fails to preserve any of these properties, it should be deemed a failure:
- Intent-support
- Compositionality
- Privacy
I will not elaborate on the exact details of these terms, but hopefully they will make sense to those who have worked with the resource machine. In practical terms, we want to support solving and shielded state transitions.
Finally, as for who gets to decide whether these properties are preserved, it would be the cryptography team, namely @vveiln and @xuyang. If the cryptography team - the core designers of the current architecture - is not convinced by an alternative design, that is a very clear sign that something isn't working. In this sense, I think it makes sense to make the cryptography team the final instance for any ideation on this topic.
Given all that, we can come to the ideation itself:
pun unintended ↩︎