The reputation generated by agents in a local DAO, or by the avatar of that agency, is relevant to the governance mechanisms of the local DAO. But even if reputations are non-transferable assets, they can surely be translatable. Assuming there is global value in harnessing these reputation points in wider use cases, I wonder how the interoperability or standardization of these local reputations would take place. An example of locally generated, globally relevant reputation would be the performance of fund managers or rating agencies. If this can be facilitated, it would unlock, I think, DAOstack’s potential for hosting crowd intelligence applications.
I guess that, based on an entity/ecosystem weight ratio, an agent’s reputation could be proportionally translated at will.
E.g., based on consensual and dynamic parameters, the “weight” of DAO “D” is defined as 3% of the entire network. An agent’s reputation could then be translated by a formula (again dynamic and consensual) based on that weight, resulting in either a single value or multiple values, depending on the parameters set.
Does it make sense?
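To make the idea concrete, here is a minimal sketch of that proportional translation. All names and numbers are illustrative assumptions, not part of DAOstack’s actual contracts or API:

```python
# Hypothetical sketch of the weight-based reputation translation described above.
# Function name and parameters are illustrative, not a DAOstack API.

def translate_reputation(local_rep: float,
                         total_local_rep: float,
                         dao_weight: float) -> float:
    """Translate an agent's local reputation into a global score,
    scaled by the DAO's consensually agreed weight in the network."""
    # The agent's share of the DAO's local reputation pool...
    share = local_rep / total_local_rep
    # ...scaled by the DAO's weight in the wider network.
    return share * dao_weight

# Example: DAO "D" carries 3% of the network's weight; an agent holding
# 500 of its 10,000 local reputation points ends up with 0.15% globally.
global_rep = translate_reputation(500, 10_000, 0.03)
print(global_rep)  # 0.0015
```

The formula itself (and the weight) could of course be swapped out by governance, which is the “dynamic and consensual” part.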
Personally, I’d prefer a zero knowledge proof approach because of privacy concerns.
Where, before transacting somehow with Bob, I could see a specific facet of his “global reputation” (like his market prediction ability before sending tokens to his fund) and any possible red flags in unrelated areas (like being reported as racist or sexist too many times), and nothing else Bob wouldn’t want to share.
Or even painstakingly asking permission for each and every parameter Alice would want to see.
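A toy sketch of that facet-level, permissioned disclosure might look like the following. This is only a trusted-lookup stand-in for illustration; a real privacy-preserving system would replace the lookup with actual zero-knowledge proofs, and all the names and data here are hypothetical:

```python
# Illustrative facet-level disclosure. A trusted dictionary stands in for
# what would really be zero-knowledge proofs; all data is made up.

REPUTATION = {
    "bob": {
        "market_prediction": 87,
        "red_flags": ["reported_conduct"],
        "code_review": 62,  # stays private unless Bob consents
    }
}

# Facets each subject has agreed to reveal, per requester.
PERMISSIONS = {
    ("bob", "alice"): {"market_prediction", "red_flags"},
}

def query_facet(subject: str, requester: str, facet: str):
    """Return a facet only if the subject granted this requester access;
    otherwise the requester learns nothing."""
    if facet not in PERMISSIONS.get((subject, requester), set()):
        return None
    return REPUTATION[subject].get(facet)

print(query_facet("bob", "alice", "market_prediction"))  # 87
print(query_facet("bob", "alice", "code_review"))        # None
```

The point is just the access pattern: Alice queries named facets, and Bob’s consent gates each one individually.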
It does make sense when the reputation mechanics are set in coherence with each other.
But I’m guessing that as you increase the number of DAOs under observation, the variation goes up.
Like different jurisdictions essentially. Integrating states could be easier than integrating countries.
Sure, that’s especially troublesome for a unified reputation value. But not so much for a focused set of parameters, like skills, “virtues”, presence, and so on.
Great article! It would be great, as Parrachia says, to study the effects of multifactor reputation systems, and even their structure, on participation, output, and group harmony/satisfaction. For example, comparing weight-based systems versus min-max variations such as “score = highest of all factor values”, which would make every participant feel accurately assessed based on their personal judgement of their contribution, or “score = lowest of all factor values”, which would push people to maximize every metric but would distort their judgement. Both could cause feelings of unfairness:
-“I actually submit the code and others just gain by their presence” or “I just comment and don’t build the systems, but my ideas are used a lot” in the first case
-“I only log in once a day but my contributions are very valuable to the ecosystem” in the second case.
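The three aggregation rules being compared can be sketched in a few lines. The factor names and weights below are hypothetical examples, not anything from the article:

```python
# Sketch of the three aggregation rules discussed above.
# Factor names and weights are made-up examples.

def weighted_score(factors: dict, weights: dict) -> float:
    """Weight-based system: weighted average of factor scores."""
    total = sum(weights.values())
    return sum(factors[k] * weights[k] for k in factors) / total

def max_score(factors: dict) -> float:
    """score = highest of all factor values."""
    return max(factors.values())

def min_score(factors: dict) -> float:
    """score = lowest of all factor values."""
    return min(factors.values())

# A contributor who ships lots of code but rarely comments:
factors = {"code": 90, "comments": 20, "presence": 50}
weights = {"code": 0.5, "comments": 0.3, "presence": 0.2}

print(weighted_score(factors, weights))  # 61.0
print(max_score(factors))  # 90 - everyone judged by their strong suit
print(min_score(factors))  # 20 - pushes maximizing every metric
```

The spread between 90 and 20 for the same contributor is exactly where the fairness complaints in the two quotes above would come from.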