How to concisely present claim position distribution "healthiness"?

I’ve gone back and forth on the best way to concisely present the “health” of a claim’s position distribution. Essentially, I just want users to be able to, at a glance, decide whether they have a healthy distribution or not.

Healthy:

  • Numerous positions of a variety of sizes
  • Positions made from unrelated accounts
  • Positions being made over the course of time (not brigading)

Unhealthy:

  • Few positions
  • ONLY positions made from related accounts
  • Positions all made around the same time (brigading or one person behind the accounts)

These are just some rough examples; the lists are not exhaustive.

So, how would you calculate the “healthiness” rating of a claim and its countervault and how would you present that finding to the user?

For example, in the Snap I am building I do a quick check on the “has tag - trustworthy” positions:

  1. If a whale owns more than 80% it is tagged as whale dominated
  2. 0 positions = neutral, 1 position = whale dominated, 2 = “concentrated”
  3. 3+ positions we use Gini coefficient with the following values:
  • well-distributed: 0.35, // Below this = green
  • moderate: 0.55, // Below this = yellow
  • concentrated: 0.75, // Below this = orange, above = red
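The tiered check above could be sketched roughly like this (the position representation, labels, and color names are placeholders; the Gini computation uses the standard sorted-index formula):

```typescript
// Sketch of the tiered "trustworthy" position check described above.
// Positions are assumed to be plain share amounts; thresholds match the post:
// whale > 80%, Gini cutoffs at 0.35 / 0.55 / 0.75.

function gini(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const n = sorted.length;
  const total = sorted.reduce((s, v) => s + v, 0);
  if (n === 0 || total === 0) return 0;
  // Sorted-index formula: G = (2 * Σ i*x_i) / (n * Σ x_i) - (n + 1) / n
  let weighted = 0;
  sorted.forEach((v, i) => { weighted += (i + 1) * v; });
  return (2 * weighted) / (n * total) - (n + 1) / n;
}

function ratePositions(positions: number[]): { label: string; color: string } {
  const n = positions.length;
  if (n === 0) return { label: "neutral", color: "gray" };
  const total = positions.reduce((s, v) => s + v, 0);
  // 1. Whale check: one position owning > 80% dominates the distribution
  if (Math.max(...positions) / total > 0.8) {
    return { label: "whale-dominated", color: "red" };
  }
  // 2. Count tiers for very small distributions
  if (n === 1) return { label: "whale-dominated", color: "red" };
  if (n === 2) return { label: "concentrated", color: "orange" };
  // 3. 3+ positions: bucket by Gini coefficient
  const g = gini(positions);
  if (g < 0.35) return { label: "well-distributed", color: "green" };
  if (g < 0.55) return { label: "moderate", color: "yellow" };
  if (g < 0.75) return { label: "concentrated", color: "orange" };
  return { label: "concentrated", color: "red" };
}
```

For example, `ratePositions([10, 10, 10, 10])` comes out well-distributed (Gini 0), while `ratePositions([90, 5, 5])` trips the whale check.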

I had originally planned on just charting the positions with a violin / tornado chart (like positions on an exchange) but there are a few problems with charts:

  • Snaps don’t really allow you to display images very easily
  • Charts can be confusing. What are the x and y axes? Are the y-values cumulative? And so on.

Naturally, users can check all the detailed position data on the portal or a similar Intuition / TRUST explorer web app which means we can take some liberties with how to present the data in a Snap or elsewhere with less screen space available (ahem, like a browser extension).

It would be nice if we could find a “best practice” for this calculation, but I realize that may be an uphill battle… there’s a chance that dapps will come up with their own proprietary algorithms. Still, it’d be nice to be able to provide new developers with a decent way to present position distribution to their users.

What are your guys’ thoughts?


I think a 0–100 score is a good way to present this sort of information (or a tier-based approach like you’re using, with descriptors); the interesting part to me is really computing the score, and potentially showing users how it is computed.
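One way to get from raw signals to a single 0–100 number is a weighted blend of normalized sub-scores (e.g. 1 minus Gini for concentration, an account-relatedness score, a time-spread score). This is only a sketch; the sub-scores and weights here are placeholders, not a proposed standard:

```typescript
// Blend several 0..1 sub-scores into one 0-100 health score.
// Which sub-scores to use and how to weight them is the open question.
function healthScore(subScores: { value: number; weight: number }[]): number {
  const totalWeight = subScores.reduce((s, x) => s + x.weight, 0);
  if (totalWeight === 0) return 0; // no signals -> treat as neutral/zero
  const blended =
    subScores.reduce((s, x) => s + x.value * x.weight, 0) / totalWeight;
  return Math.round(blended * 100); // present as 0-100
}
```

For instance, a concentration sub-score of 0.8 at weight 2 blended with a time-spread sub-score of 0.5 at weight 1 yields 70.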

Some thoughts around presenting the computation / the algorithm behind it:

  1. Application can just have one, default algorithm - like on most platforms. Maybe normies care about how it’s computed, maybe they don’t; that probably requires some A/B testing. If people do care how the score is derived, maybe there’s a tooltip that provides more details.

  2. Application can have a default algorithm, but shows the algorithm (concisely - logo?) - and allows the user to toggle to different algorithms / reality tunnels, to get users to think about the fact that ‘this score is not the truth - it is merely one interpretation of the data, and you can toggle to different interpretations if you’d like.’

For the algorithm itself, it’s a bit difficult for me to say what equates to a ‘good distribution’ - I think it’s a function of how many users are using the platform. If we have 10 users and all 10 are attesting, that’s pretty good. If we have 1B users and 10 users are attesting, that’s probably not so good.

Personally, I actually don’t care about the distribution as much as I care about WHO is attesting. If one entity I really trust is attesting (say, for a smart contract, ConsenSys Diligence is attesting to its security / trustworthiness), then I don’t really need a strong distribution - I just need 1 voice. Because of that, I am not sure how heavily distribution should be weighted. Maybe it’s weighted more heavily in instances where no one you trust, or only people 1+N degrees of separation away from you, are attesting? Lots of nuance here, and an infinite number of ways to interpret the data - I feel the best bet here for arriving at something reasonable quickly is… asking AI haha
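The trust-weighting idea above could be sketched like this. Everything here is hypothetical: the per-degree decay rate is an arbitrary placeholder, and how you’d actually derive degrees of separation from the trust graph is the hard part:

```typescript
// Hypothetical: weight each attestation by the viewer's trust in the
// attester, decaying with degrees of separation (0 degrees = direct trust).
function trustWeight(degreesOfSeparation: number): number {
  return Math.pow(0.5, degreesOfSeparation); // halve per degree; placeholder rate
}

// Sum of trust weights: one directly-trusted attester (weight 1) can
// outweigh many distant anonymous ones.
function trustedSupport(attesters: { degrees: number }[]): number {
  return attesters.reduce((s, a) => s + trustWeight(a.degrees), 0);
}
```

Under this decay, a single 0-degree attester contributes 1.0, while ten attesters at 4 degrees of separation contribute only about 0.63 combined - which matches the intuition that one ConsenSys Diligence-grade voice can matter more than a broad but distant distribution.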