Proof-of-work as a potential distribution allocation mechanism

I am not too familiar with it either. With only half-knowledge of the area, I was associating it with Niklas Luhmann’s Systems Theory, and now I find this passage from a random source[1]:

The difficulty adjustment in Bitcoin strikes me as a good example of a mechanism that keeps the system in its present state.
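
For concreteness, here is a simplified sketch of that feedback mechanism (my own paraphrase, not consensus code; the real rule retargets every 2016 blocks and clamps each adjustment to a factor of 4). If blocks arrive faster than the ten-minute target, difficulty rises, and vice versa, pulling the block rate back toward its set point:

```python
TARGET_BLOCK_TIME = 600  # seconds; Bitcoin aims at one block per ten minutes
RETARGET_WINDOW = 2016   # blocks between difficulty adjustments
MAX_FACTOR = 4.0         # each adjustment is clamped to 4x up or down

def retarget(difficulty: float, actual_window_seconds: float) -> float:
    """Negative feedback: scale difficulty by how far the observed
    block production rate deviates from the target rate."""
    expected = TARGET_BLOCK_TIME * RETARGET_WINDOW
    factor = expected / actual_window_seconds
    factor = max(1 / MAX_FACTOR, min(MAX_FACTOR, factor))  # dampen swings
    return difficulty * factor
```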

The heaviest chain rule (as followed by the Ethereum community) is maybe a bit of a different thing, but it could be considered part of the mechanism that keeps the chain running (cf. self-preservation).
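
As a minimal sketch of what “heaviest chain” means here (my own toy code, not any client’s actual fork-choice implementation): among competing tips, pick the one whose chain embodies the most accumulated work.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    difficulty: float          # work contributed by this block
    parent: Optional["Block"]  # None for the genesis block

def cumulative_difficulty(tip: Block) -> float:
    """Total work on the chain ending at `tip`."""
    total, block = 0.0, tip
    while block is not None:
        total += block.difficulty
        block = block.parent
    return total

def heaviest_chain(tips: list[Block]) -> Block:
    """Heaviest-chain rule: follow the tip backed by the most work."""
    return max(tips, key=cumulative_difficulty)
```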

Right, indeed, I should have spelled out the self-stabilizing mechanism; so let us take the airdrop allocation example from the other thread. Suppose, for the sake of the thought experiment, that it makes sense to treat “the statistical estimate of the correct token allocation, given the data of extant semantic attestations” as a hard problem that is amenable to useful proof of work, e.g., one involving hard inference problems (beware, semi-random link). Then proof-of-useful-work and the “attestation game” may interact with each other in the following way:

  • investing computational power to contribute to the attestation game should increase the chances of getting airdrops
  • whoever obtains more airdrops can invest more computational resources in the attestation game
  • ∞ (the loop feeds back on itself; see the sketch below)
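
To make the sustainability question concrete, here is a toy simulation of that loop (entirely my own sketch; the reward functions are illustrative assumptions, not part of any proposal). Each round, attestation rewards are split according to some function of invested compute, and every participant reinvests their airdrop as more compute:

```python
import math

def simulate(stakes, reward, rounds=50, emission=100.0):
    """Iterate the loop: compute -> attestation work -> airdrop -> more compute.

    stakes:   initial computational resources per participant
    reward:   maps invested compute to airdrop weight (a free design choice)
    emission: tokens airdropped per round, split proportionally to weight
    """
    stakes = list(stakes)
    for _ in range(rounds):
        weights = [reward(s) for s in stakes]
        total_weight = sum(weights)
        # each participant reinvests their whole airdrop as compute
        stakes = [s + emission * w / total_weight
                  for s, w in zip(stakes, weights)]
    total = sum(stakes)
    return [s / total for s in stakes]  # final resource shares

print(simulate([10, 1, 1], reward=lambda s: s))       # linear: shares frozen at 10:1:1
print(simulate([10, 1, 1], reward=lambda s: s ** 2))  # superlinear: winner takes all
print(simulate([10, 1, 1], reward=math.sqrt))         # sublinear: shares equalize
```

The toy model suggests one way to phrase the worry below: with linear (proportional) rewards the loop merely preserves the initial distribution, the Bitcoin-style concentration dynamic appears exactly when returns on compute are superlinear (economies of scale in hardware), and sublinear rewards push toward equality.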

Now, the question is whether this is possible in a sustainable way: we would need to be sure beyond doubt that we are not just re-inventing a fancy version of PoW as used in Bitcoin. One way to avoid this is to make the proof of work also involve proof of human work, but without introducing surveillance …


  1. Emphasis is mine (no emphasis in the original). ↩︎
