There has been discussion in the community work group thread and the Telegram channel regarding this topic.
Problem:
The Boosting Threshold Score only prices collective attention according to the demand for the DAO’s attention and fails to consider its supply.
The exponential curve depends only on the number of already-boosted proposals: the more proposals in the boosted set, the harder it is to add to the set. But this fails to take into account that a DAO’s bandwidth will vary over time. It also assumes that proposals require an equal amount of time to evaluate on average, thus failing to capture the fact that bandwidth requirements will vary across proposals.
In other words, the BTS equation assumes a perfectly inelastic supply of collective attention, defined as the average capacity for due diligence of a single REP holder.
Since the boosting threshold does not respond to changes in the DAO’s bandwidth or the bandwidth requirements of boosted proposals, there is no way to know if the DAO is optimizing its attention allocation: at any given time the DAO could be over-exerting its collective attention by boosting too many proposals (or a few heavy-duty ones), or under-utilizing it by boosting too few (or many trivial ones).
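For concreteness, here is the demand-only curve in code, a minimal sketch using the S = a * e^(n/b) form that comes up later in this thread (the constants a and b are illustrative placeholders):

```python
import math

def boosting_threshold(n_boosted: int, a: float = 1.0, b: float = 5.0) -> float:
    """Current-style Boosting Threshold Score: S = a * e^(n/b).
    It depends only on demand (the number of already-boosted proposals),
    with no term for the DAO's attention supply."""
    return a * math.exp(n_boosted / b)
```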
Suggestions So Far
To address this issue, I’ve suggested that we add a factor to the BTS equation that allows the curve to shift right or left depending on the DAO’s bandwidth. This would allow the boosting threshold to respond to the supply of a DAO’s collective attention in addition to the demand for it.
One way to gauge bandwidth is with voter participation rates, such as a moving average of the % of total REP staked towards the last 10 boosted proposals.
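To make this concrete, here is a rough sketch of how the participation gauge could feed into the threshold; the names, the target value, and the linear scaling rule are my own assumptions, not a spec:

```python
import math

def participation_rate(rep_staked_last_10: list, total_rep: float) -> float:
    """Moving average of the fraction of total REP staked on each of the
    last 10 boosted proposals (the bandwidth gauge suggested above)."""
    return sum(s / total_rep for s in rep_staked_last_10) / len(rep_staked_last_10)

def bandwidth_adjusted_threshold(n_boosted: int, participation: float,
                                 a: float = 1.0, b: float = 5.0,
                                 target_participation: float = 0.4) -> float:
    """Hypothetical bandwidth factor: participation above the target
    stretches b (lowering the threshold at a given n, since the DAO can
    absorb more boosted proposals); participation below the target
    compresses it. The coupling is illustrative only."""
    factor = max(participation, 1e-6) / target_participation
    return a * math.exp(n_boosted / (b * factor))
```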
@matan has pointed out that voter participation doesn’t necessarily reflect attention supply, since people could abstain on proposals that are outside their area of interest or expertise.
@ezra_w added that people might abstain when they see that a vote is already going in the direction they desired.
Ezra suggested that we use web traffic stats to gauge the supply of collective attention.
My concern here is that it relies on off-chain measures that can be manipulated and require some trusted server/admin to report. Curious to hear workarounds.
Another approach is to target some average “boosted proposal success rate” by adding a difficulty factor that shifts the curve based on the deviation from the target.
Difficulty factor: d = actual_success_rate - targeted_success_rate. When d < 0 the BTS curve shifts to the left, making it easier to pass proposals, and vice versa.
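A sketch of that difficulty adjustment (the coupling constant k is my assumption; the suggestion above only fixes the sign of the shift):

```python
import math

def difficulty_adjusted_threshold(n_boosted: int,
                                  actual_success_rate: float,
                                  targeted_success_rate: float,
                                  a: float = 1.0, b: float = 5.0,
                                  k: float = 5.0) -> float:
    """d < 0 (boosted proposals passing less often than targeted) lowers
    the threshold, making it easier to pass proposals; d > 0 raises it.
    k sets how hard the curve shifts per unit of deviation."""
    d = actual_success_rate - targeted_success_rate  # difficulty factor
    return a * math.exp((n_boosted + k * d) / b)
```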
Matan has pointed out that this might not actually matter since supply of collective attention will vary a lot less than the demand for it.
Re-define boosting altogether to be continuous rather than discrete.
Instead of being boosted or not-boosted, every proposal lies on a spectrum from fully boosted (requires relative majority) to fully unboosted (requires absolute majority). Proposals would be ordered by relative score, i.e. net stake.
The success of a proposal depends on a weighted average of relative and absolute approval rates. If the weighted average ends up being greater than 50%, then the proposal passes.
For example, suppose a proposal’s score (GEN+ minus GEN–) is in the 75th percentile of all active proposals, so the relative rate will have 75% weight and the absolute rate will have 25%. Let’s say that upon close, the proposal has 65% relative approval and 40% absolute approval. The weighted average is
(.75)(.65) + (.25)(.40) = .5875
.5875 is greater than 50%, so it passes.
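Encoding the rule from the example above (taking the percentile directly as the weight, which is my reading of the proposal):

```python
def passes(percentile: float, relative_approval: float,
           absolute_approval: float) -> bool:
    """Continuous pass rule: the proposal's prediction-score percentile
    weights relative approval against absolute approval; the proposal
    passes if the weighted average exceeds 50%."""
    w = percentile  # e.g. 0.75 for the 75th percentile
    return w * relative_approval + (1 - w) * absolute_approval > 0.5

# The worked example: 75th percentile, 65% relative, 40% absolute approval
print(passes(0.75, 0.65, 0.40))  # 0.5875 > 0.5 -> True
```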
There is still the question of how long proposals remain active before the finality condition is checked, but it could work as a dynamic time period that adjusts based on relative score.
This has come up in a number of the topics: the idea of making the boosting system continuous instead of categorical. I think it’s super promising, but 1) it’s probably too ambitious to be included in this next protocol update, and 2) since it’s a big change, we should probably justify why it’s worth it. Is it a big enough improvement over the current system to justify the development costs?
Another option for how passing would work in this system is to relate a proposal’s prediction score to its quorum: if its score is in the 0th percentile, it requires 51% quorum (absolute majority); if its score is in the 100th percentile, it requires 0% quorum (relative majority); something like that. We also need to consider staking here. Staking needs to stop once a telling amount of rep has voted, both to prevent abuse and because staking beyond that point isn’t adding new information. Maybe something like: staking is allowed as long as a proposal’s voting rep score (rep_voting_for - rep_voting_against) is less than 20% of the score it needs to pass, and if the score goes too low, downstaking becomes forbidden to prevent abuse. (Note that for a proposal requiring 0% quorum, i.e. a relative majority, the “score it needs to pass” is 0, so staking would stop immediately.)
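Sketching those two rules; the linear interpolation between the quorum endpoints and the reading of “score it needs to pass” as required quorum times total REP are both my assumptions:

```python
def required_quorum(percentile: float) -> float:
    """0th percentile -> 51% quorum (absolute majority);
    100th percentile -> 0% quorum (relative majority).
    The interpolation in between is assumed linear."""
    return 0.51 * (1.0 - percentile)

def staking_open(rep_for: float, rep_against: float,
                 total_rep: float, percentile: float) -> bool:
    """Staking stays open while the voting rep score is under 20% of the
    score needed to pass. (The post also suggests forbidding downstaking
    once the score goes too low; that guard isn't modeled here.)"""
    score = rep_for - rep_against
    needed = required_quorum(percentile) * total_rep
    # needed = 0 at the 100th percentile, so staking stops immediately
    return score < 0.2 * needed
```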
Regardless of the details, you’d still want some version of the boosting protocol hidden in there; specifically, the part of it that relates collective attention demand (the number of boosted proposals, i.e. proposals that urgently need attention) to collective attention supply (the boosting threshold equation does take a guess at this with its constants, which ideally reflect how many proposals the DAO can possibly pay attention to at once). If the DAO doesn’t account for collective attention supply or estimates it poorly, it might end up making proposals it doesn’t like that much too easy to pass (overestimating collective attention supply), or paralyzing itself by making proposals too difficult to pass (underestimating collective attention supply).
The current dichotomous protocol will always have some issues with this: proposals on the edge of boosting/not-boosting will always either go one way or the other. That means we’ll inevitably end up with some proposals passing undeservedly, or vice versa. In theory, a continuous protocol should solve this: proposals would need a precisely appropriate amount of voting rep to pass. That precision depends on us being able to measure collective attention supply well, but the dichotomous protocol also requires that.
The biggest downside of a continuous protocol is confusing UI, I think. Every proposal will have different passing circumstances, and that seems extremely confusing for voters. One of the big positives of the current protocol is that it’s fairly easy to understand the difference between boosted and non-boosted proposals. Is the advantage worth it?
Totally agree that security is an issue with using web traffic statistics. If someone wanted to screw the DAO’s measurement up, they’d just have to leave the page up 24/7 or something. I imagine there are some things we could do to fortify the measurement, but there might just be too many ways to manipulate it.
The simplest option is probably just to let each DAO measure its own supply of attention: each DAO might set its own a and b in S = a * e^(n/b) (or whatever equivalent supply-demand balancing protocol they use), adjusting things up or down depending on whether they feel overwhelmed or underutilized. If these constants could be easily changed via proposal, could this be the best option?
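A toy version of DAO-governable constants, just to fix ideas (the names here are hypothetical):

```python
import math
from dataclasses import dataclass

@dataclass
class AttentionParams:
    """Per-DAO supply constants for S = a * e^(n/b); a passing
    parameter-change proposal would simply write new values here."""
    a: float = 1.0
    b: float = 5.0

def threshold(p: AttentionParams, n_boosted: int) -> float:
    return p.a * math.exp(n_boosted / p.b)

# An overwhelmed DAO raises a (or lowers b) to make boosting harder;
# an under-utilized DAO does the reverse.
```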
Sorry in advance, Ezra and Ori, if I’m tracking your discussion in the wrong way, because I’m a poor mathematician! I really like both of your approaches to establishing a kind of rolling threshold to get a more accurate correlation between the momentum of collective attention and the boosting/passing of proposals.
One thing is still bugging me, though… What if we are trying to measure the wrong parameter? We are trying to estimate active collective attention: the number of REP holders who vote on Alchemy and the amount of REP they hold. But at the same time, the idea of Holographic Consensus is that any level of attention given to a boosted proposal is appropriate and coherent with the absolute majority, even if it demonstrates the will of only 1% of REP holders. Does that mean the staking stage has nothing to do with calculations based on (a) majority/non-majority and (b) reputation weight? Maybe the staking stage is solely a signalling stage and must not substitute for the voting stage or predetermine the results of voting?
I wonder if (to some extent) the proof of the Holographic Consensus concept is in fact in estimating the voices of REP holders who were fully aware of an ongoing vote but did not vote, irrespective of their reasons. They decided to be silent knowing it would influence the final score of the vote; they consciously let others decide: “passive voices.” If someone wants to appeal the decision made on a proposal, they can raise their voice and submit the opposite proposal. Otherwise, coherence is presumed.
Have we considered at all the price of GEN in evaluating this issue? It seems to me that a higher dollar-denominated value of GEN leads to a positive collective-attention supply shock.