How the Hive Mind extension is implementing per-topic trust circles for social media

First off, thank you to all the other developers who have participated in the thoughtful discussions on how best to leverage Intuition to give users a valuable experience, one that can surface the kind of truth they are looking for.

For the uninitiated, the first version of the Hive Mind browser extension focuses on surfacing Intuition-based insights for X. This means that many of the atoms we work with represent X accounts (typically in x:1234566 format, where the number is the user’s immutable ID).

New users with no trust circle may be able to see global claims (I may hide these if the signal-to-noise ratio is too low), but we want to quickly push them toward creating their own trust circle. IMO the best way to do this is to have the user weigh in on as many contested claims related to a given topic as possible. This becomes tricky: how do you decide which claims are related to the topic?

I believe I have settled on the following bootstrapping process. It may or may not be applicable to other dapps:

  1. Dapp developers (me) designate the relevant topics.
    In my case I’m starting off with “crypto” and “Intuition”.

  2. Dapp developers (me) assign 50-100 atoms that are relevant to that topic.
    For the “crypto” topic we labeled X accounts like @vitalikbuterin, @michaelsaylor, @Binance, etc. AI is actually great for this purpose since it can churn out a long list trivially.

  3. Dapp developers create ~100-200 triples for HIGHLY CONTESTED claims about the topic. We currently choose contested (i.e. non-unanimous) claims where one of the atoms in the triple is related to the specific topic (from step 2).
    In my case we made claims like “Solana - is - Ethereum killer” and “Craig Wright - is - Satoshi Nakamoto”. I think it’s okay to come up with these claims ourselves as long as we are not staking on either side. AI is also great for coming up with this list.

  4. Users are ENCOURAGED to take a position on these highly contested claims as part of onboarding. 10-20 positions should be sufficient (gamify it if you need to).


    Figure 1 - Contested claims for “Crypto” topic for the user to weigh in on

  5. Once there are enough positions, you can start looking at agreement overlap and recommending that users trust other users who think similarly about that topic. You can also recommend accounts that their trust circle (for that topic) trusts.


    Figure 2 - Extension recommending EVM accounts to add to “Crypto” trust circle

  6. NOW you can start surfacing claims made by those trust-circle accounts as you browse X.


    Figure 3 - Screenshot showing X.com and side panel surfacing of trusted circle claims. The eye icon represents “interesting”, but that icon is likely not the final icon we’ll use
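The agreement-overlap recommendation in step 5 can be sketched in a few lines. This is a hypothetical sketch, not the extension’s actual code: `positions` dicts map claim IDs to +1 (for) or -1 (against), and `min_shared` is a made-up guard so nobody gets recommended on the strength of one or two shared claims.

```python
def overlap_score(a, b):
    """Fraction of shared claims on which two users took the same side."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    agree = sum(1 for claim in shared if a[claim] == b[claim])
    return agree / len(shared)

def recommend_trust(me, others, min_shared=5, top_k=10):
    """Rank other users by agreement overlap with `me` on contested claims."""
    scored = []
    for user, positions in others.items():
        shared = set(me) & set(positions)
        if len(shared) < min_shared:
            continue  # not enough common claims to compare fairly
        scored.append((user, overlap_score(me, positions)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

Someone who agrees with you on all shared claims scores 1.0 and ranks first; users below the `min_shared` cutoff are simply skipped rather than scored low.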

After a while we should be able to surface organically created contested claims related to a given topic, and even incorporate community suggestions on which atoms to count as “relevant” for each topic. A dapp could even create its own trust circle of users who do the tagging, etc.
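One way to decide which organically created claims count as “contested” is a stake-split score. A minimal sketch, assuming each triple exposes aggregate for/against stake; the `for`/`against` field names are made up for illustration, not from the Intuition API:

```python
def contestedness(stake_for, stake_against, min_total=10):
    """Score in [0, 1]: 1.0 means a perfect 50/50 split, 0.0 means
    unanimous or too little total stake to judge."""
    total = stake_for + stake_against
    if total < min_total:
        return 0.0  # too thin a market to call it contested
    minority = min(stake_for, stake_against)
    return 2 * minority / total  # even split: 2 * 0.5 = 1.0

def surface_contested(triples, threshold=0.5):
    """Keep triples whose stake split is close enough to even."""
    return [t for t in triples
            if contestedness(t["for"], t["against"]) >= threshold]
```

The `min_total` guard matters: a 1-vs-1 triple looks perfectly split but carries almost no signal, so it is excluded until more stake arrives.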

It’s worth noting that accounts with many trusters will likely be in a lucrative position going forward: they can take positions first, and everyone trusting/following them may stake afterwards, either because they think it’s profitable or because they simply want to express agreement. It may be important to emphasize this so those users make a lot of claims, since that will increase their likelihood of overlapping with the average user.

After a while we can start showing users which people in their trust circle they disagree with most often, in case they want to remove them. Once your trust circle consists of people you agree with on most things, claims from that circle essentially become precognition: you don’t even have to make up your mind about the accounts you see on X, because your trusted circle tells you what to think in a split second. IMO this has the potential to be very useful even to the average web user.

Anyway, let me know what you all think about this process. I think the biggest challenge is getting the initial “relevant” atoms for each topic and creating the contested claims. For example, what contested claims would we use for Intuition? Once multiple dapps are connecting users to relevant accounts to trust for each topic, any centralization that my categorization causes for a topic will be mitigated.

PS: For the record, I think I will likely use 0xcf8...7e05 - trusted for - crypto (the predicate could be more explicit, like 0xcf8...7e05 - is trusted for topic - crypto). Let me know if there are any glaring flaws with structuring the triple that way.


Great breakdown here Kylan. The bootstrapping problem you’re describing - getting enough signal to make trust circles useful before organic data exists - is something I’ve been thinking about a lot while contributing to the Intuition MCP.

A few thoughts:

On the “relevant atoms per topic” problem - this is essentially predicate filtering. Instead of manually curating 50-100 atoms per topic, you could filter the existing attestation graph by predicate type to surface topic-relevant accounts automatically. For crypto, filter to trusts + bullish on + has tag: crypto attestations and let the graph tell you who the relevant accounts are. Less manual work, more signal-driven.

On the triple structure for 0xcf8...7e05 - trusted for - crypto - the predicate guide recommends using base form, so something like trusted for topic works. But you might also consider (I, trust, 0xcf8) + context: crypto as a lens filter rather than encoding context into the predicate itself. That way one attestation can serve multiple topic contexts without fragmenting markets.
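To make the contrast concrete, here is a rough sketch of the two shapes as plain dicts. This is not the Intuition SDK; the `context` field in option B is a hypothetical stand-in for whatever lens mechanism ends up being used:

```python
# Option A: topic baked into the predicate - one market per (account, topic)
triple_a = {
    "subject": "0xcf8...7e05",
    "predicate": "is trusted for topic",
    "object": "crypto",
}

# Option B: one generic trust triple, with topic applied as a query-time lens
triple_b = {"subject": "I", "predicate": "trust", "object": "0xcf8...7e05"}

def apply_lens(attestations, topic):
    """Filter generic trust attestations down to one topic context.
    Assumes each attestation carries a `context` field (hypothetical shape)."""
    return [a for a in attestations if a.get("context") == topic]
```

The practical difference is where fragmentation happens: option A creates a separate stakeable market per topic, while option B keeps one market and pushes the topic split into the read path.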

On the “weigh in on contested claims” onboarding - this is genuinely smart. It’s essentially using opinion attestations as a sparse signal to bootstrap a dense trust graph. The 10-20 positions threshold feels right.

One thing I would also like to add:
once trust circles exist per topic, multi-hop traversal becomes powerful. Not just “who does my circle trust for crypto” but “who does my circle’s circle trust” - with decay applied at each hop. That’s what the transitive trust implementation in the MCP handles.
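A minimal sketch of that decayed multi-hop traversal, not the MCP’s actual implementation: `direct` is an assumed adjacency map of explicit trust attestations with weights in (0, 1], and each additional hop multiplies the score by `decay`.

```python
def transitive_trust(direct, me, decay=0.5, max_hops=2):
    """Propagate trust outward with multiplicative decay per hop.
    `direct` maps user -> {trusted_user: weight in (0, 1]}.
    Returns the best (highest) score found for each reachable user."""
    scores = {}
    frontier = {me: 1.0}
    for _ in range(max_hops):
        next_frontier = {}
        for user, strength in frontier.items():
            for trusted, weight in direct.get(user, {}).items():
                if trusted == me:
                    continue  # don't score yourself via a cycle
                score = strength * weight * decay
                if score > scores.get(trusted, 0.0):
                    scores[trusted] = score
                    next_frontier[trusted] = score
        frontier = next_frontier
    return scores
```

Because decay is multiplicative, a direct trust always outranks the same account reached through an intermediary, which matches the “my circle’s circle” intuition without letting distant hops dominate.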

Would love to see how Hive Mind evolves - the browser extension use case is a great real-world test of contextual trust.


Thank you for chiming in @repboiz, I’ve been reading your forum posts quite a bit the last few weeks :+1:

Out of curiosity could you elaborate on this?
“ On the “relevant atoms per topic” problem - this is essentially predicate filtering. Instead of manually curating 50-100 atoms per topic, you could filter the existing attestation graph by predicate type to surface topic-relevant accounts automatically. For crypto, filter to trusts + bullish on + has tag: crypto attestations and let the graph tell you who the relevant accounts are. Less manual work, more signal-driven.”

I’m not quite sure what that query would look like. Hardly any atoms exist for X accounts yet, and pretty much zero of THOSE X-account atoms have any claims about them at the start.

Maybe you are saying I should use existing non-X atoms as the contested claims? I could consider that, but it raises the question of whether people with crypto opinions will have many claims about crypto-related X accounts. The prominence of “Crypto Twitter” may make this a non-issue, but the problem may still apply to other topics (e.g. “sports”).

Let me know if this makes sense.


Good point - I should probably clarify what I meant there.
At the cold start phase, you’re right. X account atoms are basically empty, so filtering directly on them won’t give much signal.
What I had in mind is more of a two-phase approach.
Phase 1 is exactly what you described - using contested claims around crypto concepts and protocols to build a kind of preference fingerprint for each user. Basically, who agrees with who, and on what.
Phase 2 is where things start to get interesting.
At that point, you don’t actually need X account atoms to have claims on them yet. Instead, you can look at behavior - which X accounts are depositing on the same triples this user agrees with?
That overlap becomes your trust signal.
So it’s not “this account was tagged as crypto-relevant” - it’s more “this account consistently shows up in the same places as you.”
Predicate filtering then becomes more about ranking that overlap than discovering it. For example:
  weight “trusts” highest
  put “bullish on” somewhere in the middle
  weight lighter signals like “has tag” lowest
And you end up with a much cleaner recommendation set.
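Putting behavioral overlap and predicate weighting together, a rough sketch of the ranking; the weights and field shapes are illustrative guesses, not from any actual implementation:

```python
# Illustrative weights: stronger predicates contribute more to overlap score
PREDICATE_WEIGHTS = {"trusts": 1.0, "bullish on": 0.6, "has tag": 0.3}

def rank_accounts(my_triples, deposits):
    """Rank X accounts by weighted overlap with the triples a user agrees with.
    `deposits` maps account -> list of (triple_id, predicate) it deposited on."""
    scores = {}
    for account, actions in deposits.items():
        score = 0.0
        for triple_id, predicate in actions:
            if triple_id in my_triples:
                # unknown predicates still count, just very lightly
                score += PREDICATE_WEIGHTS.get(predicate, 0.1)
        if score > 0:
            scores[account] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

An account that deposits on your triples via “trusts” edges will outrank one that only co-occurs through tags, even with the same raw overlap count.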
On the sports point - yeah, that’s definitely harder.
When X account atoms are sparse, one workaround is to use protocol or project atoms as proxies.
So if someone consistently agrees on Ethereum-related claims, you can reasonably infer they’re crypto-aligned - even before any direct tagging happens.

Hope that explains it a bit better.