Detecting Bad Proposals


This stems from conversations w/ @liviade and @gh1dra, looking for your thoughts.

The Questions

  1. What are ‘bad’ proposals?
  2. How can we automatically detect proposals that are ‘bad’?
    Implementation details below

My Proposed Answers

Answer 1

Bad proposals are ones that don't uphold the DAO's norms. For example:
a. A proposal requests >10 ETH and the proposer doesn't have a linked social media account.
b. The proposal document doesn't follow a standard template.
c. The proposal was upvoted & boosted by the proposer.
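As a rough illustration, norms like these can be expressed as simple client-side checks over a proposal object. Everything below — the `Proposal` shape and its field names — is hypothetical, not an existing DAOstack API:

```typescript
// Sketch of the three example norms as client-side checks.
// The Proposal shape and field names are hypothetical.
interface Proposal {
  proposer: string;
  requestedEth: number;
  proposerHasLinkedSocial: boolean;
  followsTemplate: boolean;
  boosters: string[]; // addresses that upvoted/boosted the proposal
}

// A norm returns null when satisfied, or a violation message when broken.
type Norm = (p: Proposal) => string | null;

const norms: Norm[] = [
  (p) =>
    p.requestedEth > 10 && !p.proposerHasLinkedSocial
      ? "Requests >10 ETH without a linked social media account"
      : null,
  (p) => (!p.followsTemplate ? "Does not follow the standard template" : null),
  (p) =>
    p.boosters.includes(p.proposer)
      ? "Upvoted & boosted by the proposer"
      : null,
];

// Collect every violated norm for a given proposal.
function detectBadProposal(p: Proposal): string[] {
  return norms.map((n) => n(p)).filter((m): m is string => m !== null);
}
```

The point of the predicate shape is that each norm stays independent, so a DAO can adopt or drop them one at a time.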

All of these norms could be established via proposals. We’re familiar w/ some of these that’ve passed within the Genesis DAO. Now for the fun question…

Answer 2

In order to support all of the cases above, bad proposal detection must be implemented client side rather than within the protocol layer, for technical reasons (gas costs, no outbound/inbound traffic, etc.). While some norms could be enforced within the protocol layer (see 1.c above), I argue that this introduces complexity, which increases vulnerability.

Instead, imagine this: you open up Alchemy, view your DAO's proposals, and there's a little spinner in the bottom right that says "scanning proposals". A few seconds later, a few proposals are annotated w/ red warning signs that show you why each is deemed a "bad" proposal (ex: this proposer is asking for too much reputation).

Implementation Details

The moving parts are:

  1. Client Library: A client side library that could be used w/ any DAOstack DAO dApp. Let’s call this “DAOscan”?
  2. DAO Norms: An on-chain list of norms that the DAO has chosen. For more detail see NOTE below.
  3. DAO Norms Proposal: A new proposal type that, if passed, will add or remove a norm from the DAO's list.

With these parts in place (and a lot of details glossed over to save space :wink: ), we now have access to a client side library that can scan for vulnerabilities that may exist within a proposal based on the DAO's established norms.

NOTE: Each “norm” is really some detection algorithm that DAOscan uses to find vulnerabilities. DAOscan could support a plugin model, which would allow it to dynamically download these algorithms from some on-chain registry. This way, new “norms” could be made available for DAOs to use, and client applications wouldn’t have to update themselves to support this.
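The plugin model in the NOTE could be sketched roughly as follows. The `DAOscan` class name comes from the thread; everything else (the `Detector` signature, `register`, `scan`) is an assumption, and the on-chain registry lookup is stubbed with an in-memory map:

```typescript
// Sketch of DAOscan's plugin model (hypothetical names throughout).
// A real implementation would download detection algorithms from an
// on-chain registry; here registration is stubbed in memory.

type Detector = (proposal: Record<string, unknown>) => string | null;

class DAOscan {
  private detectors = new Map<string, Detector>();

  // Stand-in for dynamically loading a norm's detection algorithm.
  register(normId: string, detector: Detector): void {
    this.detectors.set(normId, detector);
  }

  // Run every registered norm against a proposal and collect warnings.
  scan(proposal: Record<string, unknown>): { normId: string; warning: string }[] {
    const warnings: { normId: string; warning: string }[] = [];
    this.detectors.forEach((detect, normId) => {
      const warning = detect(proposal);
      if (warning !== null) warnings.push({ normId, warning });
    });
    return warnings;
  }
}
```

Because detectors are keyed by an ID, a client app could fetch newly voted-in norms from the registry at runtime without shipping an update.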



I see this as being the optimal soft governance solution.

My issue with trying to implement 100% of this at the protocol level, mainly through global constraints and schemes, is that:

  • Stabilization of the stack is not easy, and every additional scheme adds complexity; by extension, vulnerabilities. Someday we will have an extensive AND secure, highly audited template library of governance schemes. But for now our options are fairly limited – and rightly so, given the high need for security.

  • Interface solutions are a great low-cost, bandwidth-friendly substitute for protocol changes. Let's say a DAO becomes quite fond of limiting proposals to, say, 10 ETH. Seeing this warning in the interface over the course of a year and then deciding to make it permanent via global constraint is a very different decision than doing so off the bat. I'd go as far as to say that before implementing any norm as a governance element, a DAO should test it at the interface level first. This is ultimately more bandwidth-friendly for production as well.

I can see DAOs choosing from a list of norms, sharing norms, and templatizing norms to be used by each other. So from an ecosystemic perspective as well, this library would be extremely valuable.



I think norms sound like a great idea. In addition to detecting "bad" proposals, the norms would be very useful when creating proposals, e.g., getting a warning in the interface: "You are breaking this norm because …"

The combo of an on-chain list of norms that the DAO has chosen and no strict enforcement sounds fantastic. I think it is key that the norms are not enforced at the protocol level. It is important to be able to break the norms in certain situations (if not, they would just be called rules).



Would it be too much to ask for proposers to learn markdown? While this would definitely increase the barrier to entry, I think it would allow us to implement solution 2 much more cleanly / with less overhead.

Or we could provide some sort of interface where a user can paste their proposal from a Google Doc. What are people's thoughts on using a Google Form? This would ensure that key fields are required, and then we can add a few free-form sections.



I was thinking the same thing. Using Google Docs definitely complicates the implementation, although I don't think it'd be impossible. For example, I just found this:



I would say so. I don’t think it’s a stretch to say 90%+ of potential users will never understand markdown. In my opinion, we need to make sure we design the interface in a clever way to get around this while still being able to correctly leverage semantics.



I really don't think Google or other centralized services, or places that make the proposal content dynamic, should be used (this might be another discussion :P).

I think the standard template should be denoted in JSON Schema form, for instance, or some other declarative way, and that the proposal content should be uploaded to IPFS. It will then be trivial to check whether a proposal follows the standard template or not.

The standard template should be proposed and voted on. When it is accepted, the DAO will have a pointer to that template. When creating new proposals, the interface would automatically generate the proposal form from the current, voted-in standard template declaration (JSON Schema form).

I also think this will be much better UX, as the user will not have to use an external service to write the proposal. The user clicks "create proposal", fills in the form, then clicks send/propose.



An issue related to this, currently receiving some attention, is the ability to detect proposals that are not only “bad” but are receiving votes as a result of bribery.



How do you propose this might be done?



Matan is mulling over an idea that involves detecting when a "bad" proposal is receiving significantly more "yes" votes than would be expected, and computing how bad it is from the size of the gap between the amount of reward being requested and the amount the community would be expected to approve in the absence of bribery.
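Purely as an illustration of the "gap" idea (the actual model is Matan's and, per the thread, still being worked out — the function name and inputs here are assumptions), the score might be as simple as:

```typescript
// Illustrative "badness" score: how far the requested reward exceeds
// what the community would be expected to approve absent bribery.
// How expectedApproval is estimated is the hard, open part of the idea.
function badnessScore(requestedReward: number, expectedApproval: number): number {
  return Math.max(0, requestedReward - expectedApproval);
}
```

A proposal scoring well above zero while still attracting a surplus of "yes" votes would then be a candidate for a bribery flag.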



Interesting, would love to hear more details around this idea!
