So I’ve been thinking about something and decided to share it with you all.
What if we had something like a “Trust Lens” system, where users select which trust context they’re viewing positions through?
The core problem I keep coming back to is: whose positions should we display by default on a claim? There’s a tension here.
On one hand, showing positions from people in your trusted network makes the most sense philosophically. But early on, the network is sparse. What are the chances a random address I’m interacting with has claims from someone I’ve explicitly trusted? Probably low.
On the other hand, pure statistical aggregates (Gini coefficient, variance, position-distribution charts) give you data, but no signal about whether you should actually weight those opinions.
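For concreteness, the kind of aggregate I mean by “Gini” is just concentration of stake across position holders. A minimal sketch (the stake amounts here are made up, not from any real schema):

```python
def gini(values: list[float]) -> float:
    """Gini coefficient of non-negative stake amounts.

    0.0 = stake spread perfectly evenly; approaches 1.0 as stake
    concentrates in a single holder.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard closed form over the sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

print(gini([25, 25, 25, 25]))  # 0.0 — four equal stakes
print(gini([1, 1, 1, 97]))     # ~0.72 — one whale dominates
```

The point of the tension above: this number tells you the shape of the distribution, but nothing about whether the holders are people you’d listen to.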
Here’s the hybrid approach I’m thinking:
1. Trust Lens selector - let users toggle between contexts like “Security”, “DeFi”, “Social”, “General”. The people I trust for smart contract security opinions aren’t necessarily who I’d trust for music or NFT recommendations.
2. Tiered fallback display - show positions from your trust network first, then fall back to the global aggregates if no one in your network has taken a position on that claim. Either way, be explicit about which lens you’re looking through.
3. Domain-weighted trust spreading - trust edges could carry a different weight in each domain. I might have a high trust weight for someone in “Security” but only a neutral one in “Social”.
4. Implicit network bootstrapping - early adopters ARE interacting with overlapping infrastructure (Multivault, bridges, the dapps we’re all building). We could bootstrap an “implicit network” from on-chain interaction patterns before explicit trust relationships exist.
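To make points 2 and 3 concrete, here’s a rough sketch of how the tiered fallback and per-domain edge weights could compose. Every name here (`TrustGraph`, `lensed_positions`, the edge layout) is hypothetical, just to show the shape of the idea:

```python
from dataclasses import dataclass, field

@dataclass
class Position:
    holder: str   # address of the position holder
    stance: str   # e.g. "for" / "against"
    stake: float  # amount staked on the claim

@dataclass
class TrustGraph:
    # edges[viewer][trustee][domain] -> weight in [0, 1]
    edges: dict = field(default_factory=dict)

    def weight(self, viewer: str, trustee: str, domain: str) -> float:
        return self.edges.get(viewer, {}).get(trustee, {}).get(domain, 0.0)

def lensed_positions(viewer, domain, claim_positions, graph):
    """Tiered fallback: trusted positions first, else the global set.

    Returns (lens_label, positions) so the UI can say which lens is active.
    """
    trusted = [
        (p, graph.weight(viewer, p.holder, domain))
        for p in claim_positions
        if graph.weight(viewer, p.holder, domain) > 0
    ]
    if trusted:
        # Strongest domain-specific trust first
        trusted.sort(key=lambda pw: pw[1], reverse=True)
        return (f"trusted:{domain}", [p for p, _ in trusted])
    # Nobody in the viewer's network has a position on this claim
    return ("global", claim_positions)
```

Usage-wise: if I trust bob 0.9 in “Security” and bob has a position on the claim, I see the “trusted:Security” lens; switch the lens to “Social” where I trust no holders, and the same call falls back to the “global” label with all positions.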
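And for point 4, one cheap heuristic for an implicit edge is overlap between the sets of contracts two addresses have interacted with (Jaccard similarity). Purely illustrative, not a worked-out design, and the contract names are placeholders:

```python
def implicit_trust(contracts_a: set[str], contracts_b: set[str]) -> float:
    """Jaccard overlap of on-chain interaction sets as a weak implicit-trust prior."""
    if not contracts_a or not contracts_b:
        return 0.0
    # |intersection| / |union| of the contracts each address has touched
    return len(contracts_a & contracts_b) / len(contracts_a | contracts_b)

# Two addresses that both touch the same vault + bridge contracts
a = {"multivault", "bridge_x", "dapp_1"}
b = {"multivault", "bridge_x", "dapp_2"}
print(implicit_trust(a, b))  # 0.5 — 2 shared out of 4 distinct contracts
```

You’d obviously want to discount high-traffic contracts (everyone touches the big bridges), but as a bootstrap signal before explicit trust exists, something this simple might be enough.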
The multi-hop trust traversal I’m working on for the grant could potentially support this kind of domain-specific propagation. Happy to explore how it might fit if this direction interests you all.
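To gesture at what domain-specific multi-hop propagation could look like (this is a sketch under my own assumptions, not the grant implementation): indirect trust as the product of edge weights along a path, damped per hop, keeping the best score per node.

```python
from collections import deque

def propagate_trust(graph, source, domain, max_hops=3, decay=0.5):
    """Breadth-first, domain-specific trust propagation.

    graph: {node: {trustee: {domain: weight}}} — hypothetical layout.
    Indirect trust = product of edge weights along the path, multiplied
    by `decay` once per hop; each node keeps the best score seen.
    """
    scores = {source: 1.0}
    frontier = deque([(source, 1.0, 0)])
    while frontier:
        node, score, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for trustee, domains in graph.get(node, {}).items():
            w = domains.get(domain, 0.0)
            if w <= 0:
                continue
            s = score * w * decay
            if s > scores.get(trustee, 0.0):
                scores[trustee] = s
                frontier.append((trustee, s, hops + 1))
    scores.pop(source)  # don't report the viewer's trust in themselves
    return scores
```

The decay factor is doing the “distance matters” work; the per-domain lookup is what keeps a strong “Security” edge from leaking into “Social” scores.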
Curious what you think. Am I overcomplicating it, or is this the kind of granularity that would actually be useful?