Proposal: Domain-specific trust contexts for position display

So I’ve been thinking about something and decided to share with you guys.

What if we have something like a “Trust Lens” system whereby users can select which trust context they’re viewing positions through?

The core problem I keep coming back to is: whose positions should we display by default on a claim? There’s a tension here.

On one hand, showing positions from people in your trusted network makes the most sense philosophically. But early on, the network is sparse. What are the chances a random address I’m interacting with has claims from someone I’ve explicitly trusted? Probably low.

On the other hand, pure statistical aggregates (GINI, variance, position distribution charts) give you data but no signal about whether you should actually weight those opinions.

Here’s the hybrid approach I’m thinking:

1. Trust Lens selector - let users toggle between contexts like “Security”, “DeFi”, “Social”, “General”. The people I trust for smart contract security opinions aren’t necessarily who I’d trust for music or NFT recommendations.

2. Tiered fallback display - show trust-network positions first, then fall back to overall aggregates if no one in your network has taken a position on that claim. Be clear about which lens you’re looking through.

3. Domain-weighted trust spreading - the weights of the trust edges could be different in each domain. I might have a high level of trust for someone in “Security” but only Neutral in “Social”.

4. Implicit network bootstrapping - early adopters ARE interacting with overlapping infrastructure (Multivault, bridges, the dapps we’re all building). We could bootstrap an “implicit network” from on-chain interaction patterns before explicit trust relationships exist.
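To make points 2 and 3 concrete, here’s a minimal sketch of how per-domain edge weights and the tiered fallback could combine. All names, addresses, and weights are hypothetical - this is an illustration of the display logic, not a proposed implementation:

```python
from dataclasses import dataclass

@dataclass
class TrustEdge:
    target: str
    weights: dict  # per-domain trust weights, e.g. {"security": 0.9}

def display_positions(claim_positions, trust_edges, lens):
    """Tiered fallback: trust-network positions first (point 2),
    filtered by the viewer's per-domain edge weights (point 3)."""
    network = {
        e.target: claim_positions[e.target]
        for e in trust_edges
        if e.target in claim_positions and e.weights.get(lens, 0) > 0
    }
    if network:
        return {"tier": f"trusted:{lens}", "positions": network}
    # Network is silent on this claim: fall back to the full aggregate,
    # but label the tier so users know which lens they're looking through.
    return {"tier": "global-aggregate", "positions": claim_positions}

edges = [TrustEdge("0xabc", {"security": 0.9, "social": 0.1}),
         TrustEdge("0xdef", {"defi": 0.7})]
positions = {"0xabc": "for", "0x999": "against"}

print(display_positions(positions, edges, "security"))  # trusted tier
print(display_positions(positions, edges, "defi"))      # falls back
```

The key design point is that the fallback result carries an explicit tier label, so the UI never silently mixes network signal with raw aggregates.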

The multi-hop trust traversal I’m working on for the grant could potentially support this kind of domain-specific propagation. Happy to explore how it might fit if this direction interests you all.
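For a rough sense of how multi-hop traversal could carry domain-specific weights, here’s a sketch of breadth-first propagation with a per-hop decay. The graph shape, the decay constant, and the max-hop cutoff are all illustrative assumptions, not part of the actual grant work:

```python
def propagate_trust(graph, source, domain, decay=0.5, max_hops=3):
    """Breadth-first propagation of domain-specific trust.
    graph: adjacency dict {node: {neighbor: {domain: weight}}}.
    Each hop multiplies the score by the edge's weight *in that domain*
    and a decay factor (both chosen here purely for illustration)."""
    scores = {source: 1.0}
    frontier = [source]
    for _ in range(max_hops):
        next_frontier = []
        for node in frontier:
            for nbr, weights in graph.get(node, {}).items():
                candidate = scores[node] * weights.get(domain, 0.0) * decay
                if candidate > scores.get(nbr, 0.0):
                    scores[nbr] = candidate
                    next_frontier.append(nbr)
        frontier = next_frontier
    return scores

graph = {
    "me":    {"alice": {"security": 0.9, "social": 0.3}},
    "alice": {"bob":   {"security": 0.8}},
}
# "bob" is only reachable through "alice" in the security domain,
# because alice -> bob has no "social" weight at all.
print(propagate_trust(graph, "me", "security"))
print(propagate_trust(graph, "me", "social"))
```

Because the edge weight is looked up per domain, the same underlying graph yields different reachable networks under different lenses, which is exactly the domain-specific propagation described above.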

Curious what you think. Am I overcomplicating it, or is this the kind of granularity that would actually be useful?

2 Likes

I really like where this proposal is going — it’s exciting to see these kinds of conversations emerging, because they’re exactly the ones that need to happen to start taking trust interpretation seriously.

Intuition starts from the premise that there is no single, objective view of trust. Truth is inherently contextual, subjective, and interpretive. The protocol is deliberately designed not to collapse the trust graph into a universal reputation score, but instead to support many simultaneous, sometimes incompatible, yet valid interpretations of the same underlying data.

From that perspective, what you’re calling a Trust Lens selector is effectively a reality tunnel selector.

A good success metric here isn’t whether users agree on displayed positions. It’s whether users can choose which reality tunnel they’re operating in, understand why a position looks the way it does inside that tunnel, and switch tunnels without friction.


Now, an interesting question to me is what would make us trust a reality tunnel itself.

More specifically: what properties should a reality tunnel have for its computation, filtering, and weighting of signals to be considered auditable, predictable, and reproducible?

If reality tunnels are where interpretation happens, then they become first-order trust objects in their own right. That immediately raises questions around transparency of assumptions and the explicitness of weighting functions.

From there, I see two follow-up questions that feel important to explore:

Are there reality tunnels that could be implemented to be (almost) 100% free of assumptions?
For example, tunnels that are purely mechanical or structural, where interpretation emerges only from the raw graph topology and signal distribution, rather than from human-chosen heuristics.

And for reality tunnels that necessarily embed assumptions, how do we make those assumptions contestable and legible?
How does a community raise flags about a tunnel where, say, a particular predicate, identity class, or signal type has outsized influence?
Is this something handled purely through developer iteration (PRs, alternative implementations), or does it make sense at some point to treat reality tunnels themselves as first-class objects — Atoms — so that a broader community (not just devs) can attach context, critique, and preference signals to the assumptions baked into a given tunnel?

I’m curious how others think about drawing that line between interpretation as code, interpretation as community discourse, and interpretation as something that itself becomes part of the trust graph.

1 Like

This framing of reality tunnels as first-order trust objects is exactly the kind of meta-layer thinking that feels necessary here.

On the question of assumption-free tunnels: I’m skeptical they can truly exist, but I think we can get close. Pure graph-topology approaches (PageRank-style, raw signal propagation) still embed assumptions - they just hide them in the structure. What counts as an edge, how decay is performed over hops, whether all links are weighted equally - these are all assumptions, just less obvious ones.

So maybe the better framing isn’t “assumption-free” vs “assumption-heavy” but rather “implicit assumptions” vs “explicit assumptions.” The goal being to push as many assumptions as possible into the explicit category where they become legible and contestable.
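One way to push assumptions into the explicit category is to lift every structural choice out of the code and into named, inspectable data. A hypothetical sketch (all field names and values invented for illustration):

```python
# Hypothetical: every structural choice a "topology-only" tunnel makes,
# lifted into a named parameter instead of being hard-coded in the logic.
TUNNEL_ASSUMPTIONS = {
    "edge_definition": "explicit trust attestation",       # what counts as an edge
    "hop_decay": {"type": "exponential", "factor": 0.5},   # how decay is performed
    "edge_weighting": "uniform",                           # whether all links weigh equally
    "max_hops": 3,
}

def describe(assumptions):
    """Render the assumption set as human-readable lines, so a UI can
    show users why a position looks the way it does in this tunnel."""
    return [f"{key}: {value}" for key, value in sorted(assumptions.items())]

for line in describe(TUNNEL_ASSUMPTIONS):
    print(line)
```

Nothing here removes the assumptions - it just makes them legible, which is the contestability property being argued for.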

This is where treating reality tunnels as Atoms gets interesting. If a tunnel’s weighting function, predicates, and parameters are themselves on-chain objects that can receive positions, you create a feedback loop: the community can signal “this tunnel over-weights X” or “this tunnel’s decay function feels off” without needing to ship code. Devs still iterate on implementations, but the direction of iteration gets informed by legible community signal rather than just GitHub issues.

The tricky part is granularity. Do you make the whole tunnel an Atom? Or decompose it into components (the decay function, the predicate weights, the trust sources) that can each receive independent signal? The latter is more powerful but adds complexity.

Curious whether anyone has thought about what the minimal viable “tunnel spec” would look like - the smallest set of parameters that would need to be explicit for a tunnel to be considered auditable.
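As one possible answer to that last question, here’s a guess at what a minimal tunnel spec could contain, with a trivial completeness check. The field names are entirely hypothetical - the point is only that auditability starts with a known, closed set of required declarations:

```python
# An illustrative guess at a minimal "tunnel spec": the smallest set of
# parameters that would need to be explicit for a tunnel to be auditable.
REQUIRED_FIELDS = {
    "trust_sources",    # where trust edges come from (explicit, implicit, hybrid)
    "edge_predicate",   # which triples/predicates count as trust edges
    "weight_function",  # how edge weights are derived
    "decay_function",   # how influence falls off over hops
    "fallback_policy",  # what to show when the network has no signal
}

def validate_tunnel_spec(spec):
    """Return the missing required fields; an empty set means the spec
    declares enough to be audited against its own assumptions."""
    return REQUIRED_FIELDS - spec.keys()

spec = {
    "trust_sources": "explicit-attestations",
    "edge_predicate": "trusts-for-domain",
    "weight_function": "stake-weighted",
    "decay_function": "exponential(0.5)",
}
print(validate_tunnel_spec(spec))  # fallback_policy is still undeclared
```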

1 Like

Love that we are having these conversations!

There’s an infinite amount to explore in Interpretation / Algorithm / Reality Tunnel land - for anyone who is just entering the conversation, there are some sections of the whitepaper that provide a primer here: https://cdn.prod.website-files.com/65cdf366e68587fd384547f0/66ccda1f1b3bbf2d30c4f522_intuition_whitepaper.pdf

Addressing a few things here:

  1. Publishing Interpretations / Algorithms / Reality Tunnels as Atoms makes a LOT of sense. This should absolutely be done. The next step here would be deciding on a standard structure for these Atoms - is the atomData a URL, which points to somewhere that has the respective codified logic for the Interpretation / Algorithm / Reality Tunnel? For example - is our standard atomData for these Atoms an IPFS CID, wherein we have the logic encoded in X format, so that any system can read and integrate with the respective Interpretation / Algorithm / Reality Tunnel? This feels like it would make the most sense.
  2. With respect to explicit Interpretations / Algorithms / Reality Tunnels - there are a lot of good options here, and as Zet mentioned, the goal for the protocol is to remain unopinionated and allow for an infinite number of these to exist on top - so that any user/developer can choose to ‘view the data however they want’, instead of being locked into a singular mode of ‘truth’. On this front - I think one of the first things we will need is an Interpretation ‘Registry’, an Algorithm ‘Registry’, and a Reality Tunnel ‘Registry’. Each of these can both just be ‘Lists’ / ‘Queries’ in Intuition, that abide by a specific Triple structure. For example, the Interpretation Registry could just be [Interpretation Atom] [has tag] [Interpretation Registry], and the Reality Tunnel Registry can just be [Reality Tunnel Atom] [has tag] [Reality Tunnel Registry] - then any application could easily pull from these registries. These registries themselves could also be interpreted in different ways by different applications - so the Interpretations / Algorithms / Reality Tunnels themselves could be configured based on meta Interpretations / Algorithms / Reality Tunnels! For example - maybe the consensus Interpretation amongst your network is the Interpretation that the application chooses for you, etc.
  3. With respect to developing these Interpretations / Algorithms / Reality Tunnels, I think the best first step is to just start creating them, and then iterate. Again, there is no ‘correct’ answer when it comes to these types of things - different Interpretations / Algorithms / Reality Tunnels will likely be preferred by different people/platforms, they will continuously evolve over time, etc. And, so, my vote would be to just get some reasonably-logical implementations in place, and see what sticks, and what doesn’t - and how we can improve things from there, etc. All of the approaches mentioned already make logical sense and seem like worthwhile explorations.
  4. Totally agree on the ‘explicitness’ piece - the goal we should always have is to make the underlying logic as explicit as possible, so people know WHY they are seeing what they are seeing - as opposed to them being fed content through a black box reality tunnel masquerading as ‘truth’, that does not necessarily have the user’s best interests in mind.
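The registry-as-list idea in point 2 is simple enough to sketch directly. Treating the graph as a flat set of triples, pulling a registry is just a filter on the [Atom] [has tag] [Registry] structure. Atom identifiers here are made up:

```python
# Hypothetical triple store: each entry is (subject, predicate, object).
triples = [
    ("TunnelAtom:42", "has tag", "Reality Tunnel Registry"),
    ("InterpAtom:7",  "has tag", "Interpretation Registry"),
    ("TunnelAtom:99", "has tag", "Reality Tunnel Registry"),
]

def registry_members(triples, registry):
    """Pull every Atom tagged into a registry via the
    [Atom] [has tag] [Registry] triple structure described above."""
    return [s for s, p, o in triples
            if p == "has tag" and o == registry]

print(registry_members(triples, "Reality Tunnel Registry"))
# → ['TunnelAtom:42', 'TunnelAtom:99']
```

Because membership is just triples, the meta-interpretation idea falls out for free: an application can apply its own filter (e.g. only Atoms the user’s network has signaled on) before treating an entry as a usable tunnel.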

Just some initial musings from my end - hope we can keep this conversation going! It is a very important one!

2 Likes

The registry structure using triples makes sense - clean and native to how Intuition already works.

One question on the atomData format: if we go the IPFS CID route pointing to codified logic, what format would that logic be encoded in? Are we thinking something like a JSON schema defining parameters and weights, or actual executable code (WASM, JS), or something more declarative?

The tradeoff I see: JSON schema is more portable and auditable but less expressive. Executable code is more powerful but harder to verify and trust. Maybe there’s a middle ground - a constrained DSL specifically for defining interpretation logic?
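For the declarative end of that spectrum, here’s what a purely data-driven tunnel definition might look like - everything is inert data, so any client can audit it without executing untrusted logic. The format and field names are purely hypothetical:

```python
import json

# Purely hypothetical: a declarative, non-executable tunnel definition,
# i.e. the "middle ground" between a JSON schema and arbitrary code.
tunnel_def = {
    "name": "security-lens-v1",
    "edge_predicate": "trusts-for-security",
    "decay": {"type": "exponential", "factor": 0.5},
    "fallback": "global-aggregate",
}

# Canonical serialization (sorted keys) so the same definition always
# produces the same bytes - and thus the same IPFS CID - regardless
# of who publishes it.
canonical = json.dumps(tunnel_def, sort_keys=True)
print(canonical)
```

Canonical serialization matters for the atomData-as-CID idea: if two publishers serialize the same definition differently, they get different CIDs for what is semantically the same tunnel.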

Just thinking out loud. Actually curious if there’s prior art here we should be looking at.

Executable code unlocks a whole world of possibilities… I think that path is worth exploring, at least…