Semantics of non-methods in the object system

This post contains some thoughts on the relationship between object behaviour and resource behaviour, and in particular on the semantics of non-method calls.

An example situation in the RM realm

Let’s consider the following situation: imagine we have an action with a bunch of resources. We go through all resources in the action and prove their logics one by one. The logic of resource R only approves the transition if there is a corresponding resource Q in the same action. The question is: can it happen that two resources Q and Q' of the same kind are in the same action, and only one of them is used to verify the constraints of multiple resources, instead of each corresponding to its designated counterpart?

It shouldn’t happen, but right now it isn’t clearly specified how to prevent it. So far it has only been done “manually”.

  • Often it isn’t possible because this is wrong on the application logic level, like in the example of the counter.
    • If a specific resource Q is intended to be the counterparty for a certain constraint about R, we might want to have an additional logic that ensures the correspondence (if we don’t want to encode the relationship between R and Q in either R or Q explicitly).
  • If the resources don’t influence each other, they can be in different actions.
  • The third case would be that some resources should be in the same action and they have the same kind, so the ordering question arises. I cannot think of an example where this is needed, but I cannot guarantee it is never the case.
    • In this case, though, we don’t really care about the order as long as the same resource is not assumed to correspond to multiple consumed ones. So the question reduces to: how do we prevent a resolution in which the same resource corresponds to multiple others (and who performs this)? Can the user cheat and create wrong bindings when creating the proofs?
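To make the ambiguity concrete, here is a minimal sketch (in Python, with hypothetical `Resource` and logic structures, not actual RM code) of how a purely existential check lets a single Q satisfy the constraints of several resources:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    name: str
    kind: str

# Hypothetical logic for kind "R": approve only if some resource of
# kind "Q" is present in the same action. The check is existential,
# so it says nothing about WHICH Q corresponds to WHICH R.
def logic_R(action: list) -> bool:
    return any(r.kind == "Q" for r in action)

action = [
    Resource("r1", "R"),
    Resource("r2", "R"),
    Resource("q", "Q"),   # a single Q...
]

# ...satisfies the constraint of both r1 and r2, even though the
# application may have intended one designated Q per R.
assert logic_R(action)  # passes when proving r1's logic
assert logic_R(action)  # passes again when proving r2's logic
```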

How it relates to non-method calls

First, by non-method calls here we mean the functions that do not directly compile into resource logics. This differentiation is helpful on the RM level, even though it might be meaningless on the object language level.

In some sense, if two objects are in the same non-method, these values are related and share some scope, so it might make sense to compile them into a single action. But then we might end up in the situation described in the example above. On the other hand, languages allow us to call functions from other functions as many times as we want, and obviously the higher-level ones cannot be actions if the lower-level ones are. So saying that non-method functions always compile to actions is wrong.

The result of this post is a bunch of questions to answer:

  • what should compile to actions and what should compile to transactions?
  • how to determine what should go in the same action and what can / should be in different actions?
  • which high-level application properties signal that something has to be unique (like the counter), that different kinds need different labels, or that an extra logic is required to ensure some explicit correspondence?
7 Likes

I will recapitulate here the concrete (but artificial) example of mutual counter increment, in the context of the compilation strategy @mariari proposed.

Consider [edited]:

mutualIncrement (c1 : Counter) (c2 : Counter) := 
  let v1 := value c1;
  let v2 := value c2;
  c1.incrementBy(v2);
  c2.incrementBy(v1);

Disregarding the artificiality of it, suppose we want this function compiled to a single transaction with a single action. The consumed resources are [c1, c2], created resources are [c1', c2'], and they satisfy value c1' = value c1 + value c2 and value c2' = value c2 + value c1. The resources c1', c2' have the same kind.

The problem is that the RLs for the method calls of c1.incrementBy and c2.incrementBy need to know which created resources (objects) correspond to which method call. Jeremy’s idea was to store indicators in app data that would signify that c1' is the created resource for c1.incrementBy and c2' is the created resource for c2.incrementBy. So in the app data of each (consumed) resource R in the action, we store a list of indices into the list of created resources, which indicate which created resources should be considered by the RL of R.

The problem I see is that this allows for maliciously circumventing the interface of counter objects. I’d like to understand whether my reasoning below is correct, or otherwise what I misunderstand exactly.

Nobody can tamper with the app data you submit (as far as I understand), but the problem seems to be that somebody else can use this mechanism themselves to submit a transaction that would manipulate counter objects in a way that circumvents their intended interface.

Concretely, suppose someone submits a transaction with consumed resources [c1, c2] and created resources [c1', c] such that value c1' = value c1 + value c2 and value c = 0. Assume additionally that it so happens that value c1 = value c2 > 0. In app data for both c1 and c2 we indicate that c1' is the relevant created resource. The RLs for c1 and c2 will succeed because c1' is the correct increment of c1 (and of c2) by value c2 = value c1. The transaction balances: we’re consuming two resources and creating two resources of the same kind.

So we can create c with some arbitrary value in a way that is not allowed by the interface. The point is that we can prepare a transaction to consume two objects and create an unrelated object with arbitrary (legal) state in a way not intended by the object interface. Essentially, we’ve reset one of the counters to 0. This seems wrong if the interface of counter objects doesn’t allow counter destruction or decrement. In general, it seems one could use a similar trick to do an arbitrary state change, e.g., if counters had associated “colors” that were required to be preserved, one could arbitrarily change a counter’s color.
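A minimal sketch of this attack, using hypothetical Python stand-ins for resources, resource logics, and the balance check (the real RM balance is a homomorphic sum over kinds and quantities, simplified here to per-kind counts):

```python
from dataclasses import dataclass
from collections import Counter as Bag  # multiset for the balance sketch

@dataclass(frozen=True)
class CounterRes:
    kind: str
    value: int

def increment_rl(consumed, created, app_data_indices, delta) -> bool:
    # Hypothetical RL for incrementBy: every created resource that the
    # app data points at must have the same kind and this counter's
    # value plus delta. The RL only sees its OWN app data indices.
    return all(created[i].kind == consumed.kind
               and created[i].value == consumed.value + delta
               for i in app_data_indices)

def balances(consumed, created) -> bool:
    # Simplified balance check: per-kind multisets of consumed and
    # created resources must match.
    return Bag(c.kind for c in consumed) == Bag(c.kind for c in created)

c1 = CounterRes("CTR", 3)
c2 = CounterRes("CTR", 3)   # value c1 = value c2 > 0
c1p = CounterRes("CTR", 6)  # value c1 + value c2
c = CounterRes("CTR", 0)    # "reset" counter, created for free

created = [c1p, c]
# Malicious app data: BOTH consumed counters point at c1p (index 0).
assert increment_rl(c1, created, [0], delta=c2.value)  # c1's RL passes
assert increment_rl(c2, created, [0], delta=c1.value)  # c2's RL passes
assert balances([c1, c2], created)                     # balance passes
# Yet `c` was created with an arbitrary value (0), circumventing the
# counter interface.
```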

Is this a real problem, or am I misunderstanding something? Upon closer consideration, I don’t understand @mariari’s assurances as to why this is not possible / a problem.

I was thinking for a while that this can be solved by:

  • instead of storing a single indicator in a resource’s app data, store a map from each resource to a set of indices into the created resources list,
  • check in every RL that the sets of indices define a disjoint partition of the entire list of created resources.

But this doesn’t seem to work because (with the current RM data structures at least) the RL has access only to the app data of the self resource, so it cannot ensure the consistency (equality) of the partitions (the maps provided in the app data) among different RLs for different resources. So one could do the same trick by providing different partitions for different resources.
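A small sketch of why the purely local check is insufficient, assuming hypothetical per-resource partition maps: each map individually passes the disjoint-partition check, but nothing forces the maps seen by different RLs to agree:

```python
# Each RL sees only its own resource's app data, so nothing forces the
# partition maps provided for different resources to be consistent.
# Hypothetical partitions over created-resource indices [0, 1]:
partition_seen_by_c1 = {"c1": {0}, "c2": {1}}  # a valid disjoint partition
partition_seen_by_c2 = {"c1": {1}, "c2": {0}}  # also valid, but different!

def is_disjoint_partition(partition, n_created) -> bool:
    # Check that the index sets are pairwise disjoint and together
    # cover the whole created-resources list.
    seen = set()
    for indices in partition.values():
        if seen & indices:
            return False
        seen |= indices
    return seen == set(range(n_created))

# Each RL's local check succeeds...
assert is_disjoint_partition(partition_seen_by_c1, 2)
assert is_disjoint_partition(partition_seen_by_c2, 2)
# ...but the two partitions disagree, so the earlier trick still works.
assert partition_seen_by_c1 != partition_seen_by_c2
```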

1 Like

Yes, the real issue in this strategy is argument reuse, i.e., pinning objects to be the argument twice over in actions you compose yourself.

The problem you noted in the call was about violating a single object’s invariants, which always runs its check. This was the sense in which I meant it was fine, since you were worried about that detail (I could have misunderstood you in the call though). I believe I noted on the call that there could be argument double use from the user pinning it in the action, which is an issue that you can’t really guarantee against here without more check scaffolding like the map structure and then a general resource check.

Thanks for the reply. This makes things clearer for me. So this is indeed a problem without extra checks concerning app data across different RLs, which I don’t know how to implement in the current RM model.

I was concerned about any possible attacks that might result from putting indicators in app data, not only limited to fixed supply or related properties (though this might’ve been the example we discussed most, don’t remember exactly now). I didn’t have any specific examples at the time.

Thanks for this post. I think that this broad question is related to other discussions that we’ve had previously about action structure, most recently here, here, and perhaps even going back to the original proposal for actions.

The action concept and structure originally resulted from practical attempts to implement applications. At the time, we didn’t have a clear idea of what high-level behavior we wanted from the resource machine, much less the plan of compiling an object system to it. We now have a better understanding of the desired high-level semantics, and I think we can leverage this understanding to fix more precisely what interface we want the resource machine to have (which might, in my view, eventually necessitate some changes, perhaps e.g. to the “action” concept and structure).

In particular, when thinking about higher-level object systems, the relevant concepts in play are “objects” and “messages”. Object definitions in the higher-level semantics will then fix constraints as to what sets of objects and messages constitute valid transactions, where we can understand these sets as organized into (a) which objects were involved in the transaction and (b) which messages were sent and received by those objects (in order).

Let’s take the example from @Lukasz with two counters, the syntax of which I found a bit ambiguous at first. The semantics which I understand you to mean (per the invariants) is:

  1. Query the value of the first counter, store it (value c1)
  2. Query the value of the second counter (value c2)
  3. Increment the first counter by the stored value of the second counter
  4. Increment the second counter by the stored value of the first counter

Note that this is different from a program with the more sequential semantics which I think the current syntax loosely implies:

  1. Query the value of the second counter
  2. Increment the first counter by that value
  3. Query the value of the first counter
  4. Increment the second counter by that value

In this latter semantics, the second counter would be incremented by its own original value plus the value of the first counter, since the first counter is modified before we query it.

I would suggest a syntax for your desired semantics that looks a bit more like:

mutualIncrement (c1 : Counter) (c2 : Counter) := 
  v1 := value c1;
  v2 := value c2;
  c1.incrementBy(v2);
  c2.incrementBy(v1);

Or, alternatively, explicitly discard the new objects so that it’s clear that the second query queries the old version:

mutualIncrement (c1 : Counter) (c2 : Counter) := 
  _ := c1.incrementBy(value c2);
  _ := c2.incrementBy(value c1);

That slight nit aside, I would illustrate this program in diagram form as:

Here we have a “ProgramObject” (the implicit context of the code snippet) which:

  1. Queries the value of the first counter (“getValue”)
  2. Queries the value of the second counter (“getValue”)
  3. Receives the value “4” from the first counter (“value(4)”)
  4. Receives the value “3” from the second counter (“value(3)”)
  5. Increments the first counter by “3” (“incrementBy(3)”)
  6. Increments the second counter by “4” (“incrementBy(4)”)

Now, let’s generalize this to use variables instead of specific values:

There are two free variables here (just the actual values of the counters in question). Supposing that we just had sets of objects and messages, what would we need to check in order to ensure that a transaction is valid (matches this pattern)? In general, just that:

  1. The messages sent and received by each object are correct method calls and responses.
  2. The messages sent and received by different objects correspond to one another.

In general, the former checks are non-linear – we don’t really care what happens to data “inside an object” – while the latter checks are linear – messages must be sent and received exactly once.
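The linear part of this check can be sketched as multiset equality over message events, using hypothetical (sender, receiver, message) triples for the counter example above:

```python
from collections import Counter

# Hypothetical transaction trace: (sender, receiver, message) triples,
# recorded once from the sending side and once from the receiving side.
sent = [
    ("Program", "c1", "getValue"),
    ("Program", "c2", "getValue"),
    ("c1", "Program", "value(4)"),
    ("c2", "Program", "value(3)"),
    ("Program", "c1", "incrementBy(3)"),
    ("Program", "c2", "incrementBy(4)"),
]
received = list(sent)  # in a valid transaction, the multisets coincide

def linear_check(sent, received) -> bool:
    # Linear: every message is sent exactly once and received exactly
    # once, i.e., the two event multisets are equal.
    return Counter(sent) == Counter(received)

assert linear_check(sent, received)
# The non-linear checks (per object: are these valid method calls and
# responses?) would run inside each object, independently of this.
```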

Unfortunately I’m running out of time to write, so I’ll leave this here for now, and hopefully continue tomorrow – but in the meantime curious if this provides anything useful.

2 Likes

Sorry, the example pseudocode was indeed unclear and confusing. I edited it to better reflect the intended semantics.

In general, up to now we’ve been mostly trying to fit some kind of object system on top of existing RM semantics. The specific example illustrates a detailed point: it reveals an issue with a specific compilation strategy to a single-action transaction in the current RM model.

Indeed, there seem to exist some limitations / friction points with the current RM model. In particular, “better” cross-action checks would be useful. If the notion of the Resource Machine is actually more flexible and up for modification, appropriate changes to it may make some things easier.

Our current object model is very “static” (all relevant methods / object relationships need to be known statically at compilation time), but this seems to enable automatic translation of “ordinary” method code into appropriate RL constraints that cannot be “gamed”. It seems that any kind of “dynamic” dispatch (e.g., where the method called is not fully known statically at the call site, or we want to be able to do some runtime modification, or we don’t want the class to know about all possible intents) is hard without shifting the burden of ensuring object constraint consistency onto the user. We have this kind of problem even with the restricted “static” intents we implemented: the user somehow needs to ensure, by extra checks, some correspondence between the fields of provided and received objects. I think that’s what @mariari meant by it being easy to shoot oneself in the foot, and in a “dynamic” model this problem seems to occur earlier. Since the list of methods is then not known up-front (which in our “static” model implicitly defines the object interface constraints), the user would need to explicitly provide some kind of constraints that define permissible object state / behaviour, which would need to be respected by subclasses / mixins / dynamically added methods, and it’s easy to mis-specify those constraints.

1 Like

Just to get it down on (digital) paper: I’ve been noodling around with one potential concept for how to make object compilation more straightforward / safer, and potentially alleviate the concern around whether a given resource is used to satisfy “too many” constraints. For now I will give this concept the temporary name of “per-object actions”.

Note that I haven’t thought about performance yet here, and we’ll need to investigate this - I’m just starting by trying to think about the architecture.

Per-object actions

Functionally, objects are supposed to have a “scope” of visibility: an object sees messages sent to it, messages sent from it, its own state (potentially multiple versions), and nothing else. The general idea here is to represent the scope that a single object is supposed to “see” in the course of a transaction, namely (a) the previous state (version), (b) the next state (version), (c) messages sent, and (d) messages received, in the scope of a single action.

Conceptually, at least, it seems to me like we should be able to achieve this roughly by:

  1. Representing all messages as ephemeral resources, where creating a message-resource represents sending a message and consuming a message-resource represents receiving a message.
  2. For each object involved in a transaction, putting together an action with resources representing that object’s old (pre-transaction) and new (post-transaction) state versions, (created) message-resources representing any messages sent by that object, and (consumed) message-resources representing any messages received by that object.
  3. The resource logic associated with an object’s state should then just check that the transition is valid, given the messages and other object state version(s) in the action (there should be at most two state versions). No other resources should be present in the action.
  4. We can rely on the linearity check to enforce that any ephemeral message-resources created in the transaction (representing a message being sent) are consumed (representing a message being received). Somehow, however, we must check that these messages are sent and received by the right object(s) – so the message-resource must encode the appropriate notion of object-identity and check when being consumed that the object involved in the consumption-action is indeed the object that the sender meant to send the message to.

Voila? Action-scopes correspond to object-scopes, correctness seems straightforward to check, and no miscellaneous “extra resources” need to be allowed or reasoned about. I even think it’s relatively straightforward to represent “stateless” / “virtual” objects (e.g. SimpleKudos) here – we can just represent them with ephemeral resources. I guess we need to allow an unbalanced “top-level incoming message-resource” for the whole transaction, but this seems doable. Probably the primary question to think about here is performance – obviously, representing every message as a resource is gonna get expensive, we might want to batch messages together / performance-optimize specifically ephemeral resources / even batch objects.
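A rough Python sketch of the per-object-action idea, with hypothetical `Msg` and `Action` structures (not actual RM types): each action holds one object's state versions and its messages, the per-action check verifies addressing, and a global linearity check matches sends with receives:

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass(frozen=True)
class Msg:
    sender: str     # object identity of the sender
    recipient: str  # object identity the message is addressed to
    payload: str

@dataclass
class Action:
    obj: str        # the single object this action is "about"
    old_state: int  # consumed state-version resource (simplified to an int)
    new_state: int  # created state-version resource
    sent: list = field(default_factory=list)      # created message-resources
    received: list = field(default_factory=list)  # consumed message-resources

def action_ok(a: Action) -> bool:
    # Every message consumed in this action must actually be addressed
    # to this object -- the object-identity check on receipt.
    return all(m.recipient == a.obj for m in a.received)

def linearity_ok(actions) -> bool:
    # Every message created (sent) somewhere must be consumed
    # (received) exactly once somewhere, and vice versa.
    sends = Counter(m for a in actions for m in a.sent)
    recvs = Counter(m for a in actions for m in a.received)
    return sends == recvs

m1 = Msg("c1", "c2", "incrementBy(3)")
tx = [
    Action("c1", old_state=3, new_state=3, sent=[m1]),
    Action("c2", old_state=4, new_state=7, received=[m1]),
]
assert all(action_ok(a) for a in tx) and linearity_ok(tx)
```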

This could also be extended fairly straightforwardly to “read-only methods” by relaxing the action-exclusive resource usage check and allowing resources to be used in multiple actions if a flag is marked / potentially allowing resources to be read without being consumed if the methods in question are read-only (see relevant prior discussions here).

Thoughts? @vveiln @graphomath @Lukasz especially. Maybe I’m missing something obvious here and this actually doesn’t work…

3 Likes

I’ll try to summarise how this relates to the current GOOSE model, and try to highlight some issues that are maybe a bit orthogonal but seem important from user perspective.

(a) the previous state (version)

This is encoded in consumed persistent resource(s)

(b) the next state (version)

This is encoded in created persistent resource(s)

(c) messages sent, and (d) messages received

Currently, only one “message” is sent and received. It is essentially encoded in the “member logic indicator” in App Data. Message arguments are also stored in App Data.

As for the mutual increment example, it is actually possible to do a version of it with the current model, using functions (goose-lean/Apps/UniversalCounter.lean at 6379097f4597fe74f14787c87de7d26612282e99 · anoma/goose-lean · GitHub). The catch is that the “incrementBy” methods need to be fully known to the function, and compilation just lumps together their code to create a state-transition RL check based on the combination.

  1. Representing all messages as ephemeral resources, where creating a message-resource represents sending a message and consuming a message-resource represents receiving a message.
  2. For each object involved in a transaction, putting together an action with resources representing that object’s old (pre-transaction) and new (post-transaction) state versions, (created) message-resources representing any messages sent by that object, and (consumed) message-resources representing any messages received by that object.

I think this is a good direction to explore, probably better for generalisation than the current approach of essentially encoding messages in App Data. But I think a general concern about exploitability remains: the object (or: the RL of the corresponding resource) needs to know about the messages it can send/receive, and in particular ensure that the logics associated with these messages are as expected. Otherwise, one can game the compilation scheme to create “illegal” object state transitions. This sounds clear, but the question is: how does the object know (enough) about things that can dynamically change (overridden/added methods, new intents)?

  1. The resource logic associated with an object’s state should then just check that the transition is valid, given the messages and other object state version(s) in the action (there should be at most two state versions). No other resources should be present in the action.

I’d like to highlight and re-iterate the point I made in the last paragraph of my previous message, which is maybe orthogonal to this discussion, but seems important from end-user perspective.

The question is how do we “just check” that the transition is valid, i.e., where do we get the specification of state transition validity from? The lazy solution (for compiler writer / language designer) is to let the user specify up-front explicitly all constraints that need to hold for any state transition to be valid. But this seems error-prone for the user.

The advantage of the current “restricted” static GOOSE model is that, because all relationships are known by the translation at compilation time, appropriate state transition validity constraints can just be generated from the code of methods/constructors/destructors/functions that the user wrote, with the user perhaps needing to occasionally add some extra checks about things not in the code. If we don’t know the precise behaviour of all possible methods/messages/intents at compilation time, we need to allow for some more generic “message send/receive” for run-time object extensions, but in a way that restricts the state transitions associated with these (not-yet-fully-specified) messages somehow (otherwise, things can be gamed to create arbitrary state transitions). And where do we get a specification of the restrictions on the yet-unknown messages? It seems we’re forced to ask the user to explicitly specify some object state constraints, because we can’t generate them from the code of methods/intents we don’t have full access to.

  1. We can rely on the linearity check to enforce that any ephemeral message-resources created in the transaction (representing a message being sent) are consumed (representing a message being received). Somehow, however, we must check that these messages are sent and received by the right object(s) – so the message-resource must encode the appropriate notion of object-identity and check when being consumed that the object involved in the consumption-action is indeed the object that the sender meant to send the message to.

Yes, if we go back to my specific example of mutual increment, the whole problem was essentially the lack of a reliable object-identity mechanism other than object kind. I may be wrong, but my understanding was that currently the RM does not provide a way to assign unique object identifiers that is robust against exploitation?

Here’s an excerpt from my old message to Yulia that led to creating this forum thread:

Maybe we need a unique id for every object? But how would these ids be managed and uniqueness enforced (randomness may be good enough)? And what do we do with methods like split that create multiple resources from a single consumed one? Maybe we need some originId field for created objects / resources?

1 Like

Thank you for the comprehensive response!

For messages sent by a particular object, I don’t think we can expect the object to know more than (a) the message data and (b) the intended message recipient. My rough expectation with this proposal was that we would encode these in the kind of the message-carrying resource, which can certainly be checked by the object-resource. Whatever dynamism we have, if a given object does not know the identifier of a potential message recipient, or how to encode a particular method, it simply cannot call the method. Now, we could devise “meta”-methods for lookup, but I think this is best treated as a separate concern – I touched upon the topic in the last section of this post. To touch upon the two specific examples of concern you cite:

  1. Whether a particular method is “overridden” seems to me like a concern purely of the receiving object (not the sender). Now, we might want “reasoning tools” to check that a method which we’re invoking behaves in the way we expect (that’s definitely important), but that seems to me like a more general (and more difficult) topic. Is there a particular benefit to the sender knowing that a method has been overridden?
  2. I think intents can be represented in a way which avoids these problems entirely (see the same post again). If that approach doesn’t help, then perhaps I don’t understand the problem you have in mind clearly yet.

… out of time, will respond further later (or in the call)!

1 Like

I think intents can be represented in a way which avoids these problems entirely

Summary of conversation with Yulia: if there is no “allowed intent” check on resource consumption, then it is possible to create a “malicious” intent that effects an object state transition which circumvents the object’s intended high-level interface. But this is a problem that cannot be solved in general, so we just allow this possibility.

From the perspective of compilation, we only need to ensure that the way intents are implemented doesn’t allow for malicious interactions with other features.

1 Like

To me there are two questions here:

  1. What notion(s) of object identity do we want? [this is a “AVM/Goose design question”]
  2. How do we encode these notion(s) in the RM in an exploit-proof way? [this is a “RM compilation/implementation question”]

I think our answer to the former is still somewhat fuzzy, and it would be helpful to have a very clear answer to it first, since that makes the latter question more precise. Still, I don’t see why any notion of object identity we might choose can’t be encoded in the resource kind. In your specific example, the fundamental issue seems to me to be that we aren’t encoding the object identity in the resource kind – as you say,

The resources c1', c2' have the same kind.

If we want c1’ and c2’ to have different object identities (I think we do), and we want to encode object identities in the resource kind, then shouldn’t c1’ and c2’ have different kinds?

Additionally, looking at the hypothetical situation which you describe:

In addition to the question of the resource kind object identity encoding, I see two issues here:

  1. c1’ is only one created resource, but we’re allowing it to be marked as the successor to both c1 and c2. In my imagination of how we would encode object identity, this would be rejected by (one of) the c1 or c2 resource logics, which would check that the marked successor object version has the same “counter identity” as they do (and c1’ could not have both).
  2. c is allowed to be created without being marked as a successor to any object. Presumably, we only want to allow a new counter to be created when some explicit constructor is invoked and associated checks are passed.
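A minimal sketch of how such a successor check could reject the double claim, assuming a hypothetical `uid` field (an object identity encoded in the label, and hence in the kind):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CounterRes:
    uid: str    # hypothetical object identity, encoded in the label/kind
    value: int

def successor_ok(consumed: CounterRes, marked: CounterRes) -> bool:
    # The RL checks that the marked successor carries the SAME object
    # identity; a single created resource cannot carry two identities.
    return marked.uid == consumed.uid

c1 = CounterRes("uid-1", 3)
c2 = CounterRes("uid-2", 3)
c1p = CounterRes("uid-1", 6)

assert successor_ok(c1, c1p)      # c1's RL accepts c1' as its successor
assert not successor_ok(c2, c1p)  # c2's RL rejects c1': wrong identity
```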

In general, this makes sense, although I would note that – at least with my compilation proposal – we only need to know, for a particular object, what messages it can send or receive; we don’t necessarily need to know the behavior of the other objects it interacts with – this perhaps corresponds to the difference between a class and a class interface (or something like that). Of course, we can only guarantee that a particular set of interacting objects will have particular overall behavioral properties if we know behaviors, not just interfaces, but this is a somewhat distinct question from what we need to know in order to perform compilation, I think.

In order to perform more sophisticated/efficient hierarchical transactional object compilation where not every object/message is represented by a resource, we probably will need to know behaviors, but I think this could be considered in some sense “just an optimization” – albeit a very important one in practice.

I think an interesting hypothetical to consider is that of a “reprogrammable object”, where we have an object (with a persistent identity) that stores its own logic in a field, reads from the field every time we want to check the logic, and perhaps writes to the field (allows itself to be reprogrammed) under certain conditions. In principle, we should be able to support this, and I think this is necessary to allow the sorts of dynamism I suspect @mariari is looking for – but of course, reprogrammable objects must specify under what conditions they allow themselves to be reprogrammed, and any analysis of or guarantees about object behavior would be subject to those conditions.

I imagine that a common pattern in practice might be:

  1. Create a new, reprogrammable object, which you (the developer) have permissions to reprogram.
  2. Assign that object an initial logic (e.g. counter, kudos, etc.).
  3. Put together and gradually test your application (consisting of many objects), where any initial users would have to trust that you (the developer with object-reprogramming permissions) don’t reprogram in malicious ways.
  4. Eventually, when everything is stable, “freeze” the objects which constitute the application, preventing any future reprogramming.
  5. If any sort of application changes are needed in the future, deploy a new application which allows objects from the old application to be consumed and converted into objects of the new application.

I am reasonably confident that my proposal in the other topic avoids this (at least in the sense that no intent would allow an object state transition which circumvents the object’s intended high-level interface – of course, users could still create silly intents which allow their assets to be stolen, etc.).

2 Likes

Still, I don’t see why any notion of object identity we might choose can’t be encoded in the resource kind. In your specific example, the fundamental issue seems to me to be that we aren’t encoding the object identity in the resource kind – as you say,

I might still be confused about the purpose of resource kind in Anoma RM. I understand it as being about, well, the kind of a resource, e.g., Spacebuck, Banana, Dolphin, etc. So there can be many otherwise unrelated objects / resources of the same kind. I was talking about a unique object identifier (UID) - there can be only one object with a given UID. Different objects with the same kind all need to have different UIDs.

The example I gave is very artificial - probably it’s not how one would want to write mutual counter increment. The point was to highlight a compilation issue when we want to write a function operating on two objects of the same kind, and that a specific way of compiling it doesn’t work in general.

I’m not sure if we can just encode UIDs in the kind / label. Then there could be only one object of each kind, which is probably not what we want? Currently, in the GOOSE model there is a 1-1 correspondence between kinds and classes.

1 Like

I am reasonably confident that my proposal in the other topic avoids this (at least in the sense that no intent would allow an object state transition which circumvents the object’s intended high-level interface – of course, users could still create silly intents which allow their assets to be stolen, etc.).

Yes, I think this is true, provided that it’s possible to compile this down to the RM level in a semantics-preserving way (the “require” and “solver object” from the post need more compilation details worked out). The crucial difference seems to be that then intents are restricted to using the object’s official interface, instead of just allowing intent logics to put arbitrary constraints on the resources in the complete transactions. I’m not yet completely sure if it is possible to compile things down in a way that enforces this restriction at each level, but if it is then this would solve the problem.

One thing to notice is that this restriction would be non-trivial. The intent logic can no longer have arbitrary constraints, so the user needs to design the object interface in a general enough manner to allow the implementation of enough intents.

1 Like

I wouldn’t read too much into the English semantics of the word “kind” – functionally, a resource just has three fields which are relevant here: logic (the resource logic), label (some arbitrary data which is part of the kind), and value (some arbitrary data which is not part of the kind). The difference between data being part of the kind or not is whether or not it affects balance in the balance check.

I agree that we don’t want to only allow one object of each class. I guess I was envisioning that classes (or class-associated logic) would just be encoded into the resource logic, and something about object identity could be put into the label (thus affecting kind).
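The structural point can be sketched as follows (this is not the RM's actual kind derivation, just an illustration that kind depends on logic and label but not on value):

```python
import hashlib

def kind(logic_hash: str, label: bytes) -> str:
    # Kind derived from the logic and the label only; the value field
    # does not enter it, so value changes do not affect balance.
    return hashlib.sha256(logic_hash.encode() + label).hexdigest()

counter_logic = "counter-logic-hash"  # hypothetical logic identifier

# Putting an object identity in the label makes two counters
# balance-incompatible: consuming one cannot be offset by creating
# the other, since their kinds differ.
k1 = kind(counter_logic, b"uid-1")
k2 = kind(counter_logic, b"uid-2")
assert k1 != k2

# Same logic and label => same kind, regardless of the value field.
assert kind(counter_logic, b"uid-1") == k1
```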

Yes, that’s right, but I think this is a “feature” as opposed to a “bug” – the developer should first define the actual objects involved in an application (e.g. kudos) and how it is possible to modify them (e.g. transfer), and intents should operate on this basis. Otherwise applications themselves would be unable to preserve behavioral invariants in the presence of arbitrary intents.

1 Like

My impression is that you equate semantic kind (“the kind of a resource”) with the resource kind field. I don’t think we care about semantic kinds that much. Resource kind is rather a tool, not a carrier of meaning. This tool allows us to specify the aspects that we, as application developers, think make two resources not interchangeable.

Depending on our (developer) perspective, two bananas with different countries of origin might be the same kind or different kinds, with the latter allowing us to enforce stronger constraints related to the country of origin.

2 Likes

Yes, that’s right, but I think this is a “feature” as opposed to a “bug”

I agree - the object interface should specify what is possible to do with the objects, and intents should operate on this basis.

1 Like

Notes on object identifiers per quick chat w/@graphomath @vveiln:

  • Object version identifiers are use-once (corresponding to resource commitments).
  • Object identifiers track the identity of an object over time (corresponding to resource kinds).
  • Controllers shall not be required to do anything more than generate locally fresh names.

1 Like

This is interesting to me: so we have gensym per controller? This makes a lot of sense given where unique names can come from; as a general high-level constraint this is good. In practice we might generate symbols in a slightly more complicated way to complement this, depending on how other parts of the design shake out.
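A per-controller gensym could be sketched roughly as follows (hypothetical `Controller` class; names are only locally fresh, and become globally unique once qualified by the controller's own id):

```python
import itertools

class Controller:
    """Hypothetical controller that only guarantees locally fresh names."""

    def __init__(self, controller_id: str):
        self.controller_id = controller_id
        self._counter = itertools.count()

    def gensym(self) -> str:
        # Locally fresh: no two calls on the same controller collide.
        # Qualifying with the controller id makes names globally unique,
        # assuming controller ids themselves are unique.
        return f"{self.controller_id}/{next(self._counter)}"

a, b = Controller("ctrl-A"), Controller("ctrl-B")
names = [a.gensym(), a.gensym(), b.gensym()]
assert len(set(names)) == 3  # all distinct across both controllers
assert names[0] == "ctrl-A/0"
```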

1 Like

Yes, I’m not too familiar with gensym in particular but I think it’s roughly the same concept. Generating symbols in a more complicated way in practice should be fine, as long as this minimal requirement of locally fresh name generation can be fulfilled.