Webs of trust, specific tastes, a more discerning, open, peaceful kind of social network system

Abstract

A set of guiding design principles and a partial design for a social tool that uses webs of trust to identify the curators of content tags in a way that is decentralized, scalable, manageable, and in many cases subjective. I believe this would make content tags vastly more useful than they previously have been, giving rise to a robust, human-centric way of discovering information and having productive conversations about it.

A concise, complete explanation of what webs of trust are and all of the reasons they work

A web of trust is a network of users endorsing other users.

If you tell a web of trust who you trust, you can then travel along and find out who they trust, and so on as far as you wish to go, and that will give you a large set of people who you can trust to some extent, by transitivity.
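That traversal can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical data model where each user's endorsements are stored as a simple adjacency mapping; none of these names come from any real implementation.

```python
from collections import deque

def trusted_set(endorsements, root, max_depth=3):
    """Collect everyone reachable from `root` via endorsements,
    recording the shortest endorsement distance to each person."""
    distances = {root: 0}
    queue = deque([root])
    while queue:
        user = queue.popleft()
        if distances[user] == max_depth:
            continue  # don't travel further than we're willing to trust
        for endorsed in endorsements.get(user, ()):
            if endorsed not in distances:
                distances[endorsed] = distances[user] + 1
                queue.append(endorsed)
    return distances

web = {"you": ["ana"], "ana": ["bo", "cy"], "bo": ["dee"]}
print(trusted_set(web, "you", max_depth=2))
# {'you': 0, 'ana': 1, 'bo': 2, 'cy': 2} -- 'dee' is beyond our trust horizon
```

The distance recorded for each person is a natural measure of how attenuated your transitive trust in them is.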

Webs of trust can scale up at an exponential rate, as each new user can immediately add more users (better: they can start issuing endorsements in their own network segment even before they're added). This is pretty cool. Wonderfully, despite that, webs of trust can also be pruned and weeded fairly easily: if a few bad endorsements do get made, and the newly empowered bad users start adding more bad users, we will be able to trace the badness back through the endorsement relations to its root causes, and pruning those away will also prune away everyone who came in through them, directly or indirectly (unless those people have received endorsements from other, non-bad people since being added, in which case they're probably fine, and they will remain). Crucially, the pruning and weeding does not need to be done by any central authority. Every user is the center of their own web; they can bring in anyone they like.
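The pruning behavior falls out of the graph structure for free. Here is a self-contained sketch, using the same hypothetical adjacency-mapping data model as an assumption: cutting a bad user out of the graph automatically drops everyone reachable only through them, while people with independent endorsements survive.

```python
def reachable(endorsements, root):
    """Everyone reachable from `root` by following endorsements."""
    seen, stack = {root}, [root]
    while stack:
        for endorsed in endorsements.get(stack.pop(), ()):
            if endorsed not in seen:
                seen.add(endorsed)
                stack.append(endorsed)
    return seen

def prune(endorsements, root, bad_users):
    """Recompute the web as if the bad users had never been endorsed."""
    filtered = {u: [e for e in es if e not in bad_users]
                for u, es in endorsements.items() if u not in bad_users}
    return reachable(filtered, root)

web = {"you": ["ana", "mallory"],
       "mallory": ["spam1", "spam2"],
       "ana": ["spam2"]}  # ana independently endorsed spam2
print(prune(web, "you", {"mallory"}))
# spam1 falls out along with mallory; spam2 stays, because ana vouches for them
```

Note that because every user recomputes reachability from their own root, each user can do this pruning unilaterally, with no coordination.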

Webs of trust are useful for tracking qualities of users which come with the ability to recognize the presence of that quality in others. Most personal qualities are self-recognizing, in this way, to some extent. A person who has it, faced with another person, can usually figure out whether they have it too.

Examples of such qualities include good taste, responsibility, or not a spambot.

Some non-examples would be is a spambot (spambots are mainly about spamming and are not very interested in identifying each other), or is a fool. A web of trust would not help with keeping track of these qualities, but, again, most qualities people talk about aren't like these. If you do find yourself in need of a web that tracks a non-self-recognizing quality, consider just making a web that tracks its negation: not a spambot or no fool would work pretty well.

You might notice that some of the given examples of self-recognizing qualities have rather subjective meanings. Not everyone will agree about what good taste or responsibility means. Though using fully explicit definitions of things is preferable where possible, sometimes it isn't possible (who would ever try to formally define taste?). Another neat thing about webs of trust is that they will still often work pretty well in those cases! If people disagree about the nuances of a quality, they will often end up organizing into separate webs of trust that agree within themselves. Webs of trust are compatible with subjectivity.

That makes webs of trust suitable for moderating a truly global platform. At no point does a central authority have to decide for everyone else what any of the webs are about. If two groups disagree about what sorts of things should be posted in a fundamental tag like respectful discourse or safe content, they don't have to interact! The web of trust is so powerful as a moderation technology that they can wholly split their webs and keep using the same tags in completely different ways without stepping on each other.

Some noteworthy systems that use webs of trust

The prototypical example of webs of trust seems to have been the process of establishing real identity in PGP signature networks.

A friend, Alexander Cobleigh, is implementing a subjective moderation system for the P2P chat protocol Cabal, which you can read about here

Webs of trust are also being used to measure social adjacency in various distributed systems

Core principle: Users should not be asked to reduce themselves to a single brand

A web of trust can be used to exclude spammers, sybils, annoying people, rude people, bad people, or people with bad taste. However, if one web of trust were used to cover all of those meanings and purposes at once, I imagine the results would be pretty inhumane: people would commit chilling, cowardly omissions of self to avoid any risk of being perceived as rude, lest the web put them in the same icy hell as spammers.

Twitter is kind of like that, and I think it exhibits a lot of the problems we should expect that to have. On Twitter, you have one face, you get one tube, and the people who follow you have to be alright with everything you put in that tube. If you ever want to post a type of content that some of your followers explicitly never want to see, you have nowhere to put it. Brand is totalizing. Everyone has to compress themselves down to one legible brand before the network can thrive.

Users should be encouraged to have more than one side to them. The situation could be helped if Twitter were more encouraging of the use of alt accounts. In a way, the system I'm about to propose is a way of streamlining that mode of use.

As it currently stands, we can conceptualize Twitter as a kind of thick, slow web of trust for the overly broad content category of good tweet. This web's quality is not truly self-recognizing; the endorsements do not represent a transitive relation, and they do not conduct very far: if you travel just a few steps along through your follows-of-follows, you will find mostly people you wouldn't want to follow. Only shitposts and the most general-interest news propagate well; everything else propagates depressingly incompletely. There is no strong agreement in most networks about what is good to post, and where there is no strong agreement, there is no truth about what is good to post. Nothing is good to post. We must simply log off.

What if, instead, we had many webs of trust that discuss and define the many different dimensions of interest people can have, which users could choose to participate in or not? Most of these webs would have meanings specific enough that content could be automatically propagated fairly far through them, with confidence that everyone in them would be interested in most of it. Some of these webs might be nebulous or subjective in meaning; those would have lower recommended automatic propagation constants, and they would work too.
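One simple way to picture a propagation constant (the name and the numbers here are illustrative assumptions, not part of any spec): an article tagged by someone n endorsement hops away reaches you with weight constant**n, and is auto-shown only while that weight stays above some threshold. Specific tags get high constants and travel far; nebulous ones get low constants and travel a hop or two.

```python
def hops_reached(propagation_constant, threshold=0.05):
    """How many endorsement hops a tagging travels before its
    weight (constant ** hops) falls below the auto-show threshold."""
    hops, weight = 0, 1.0
    while weight * propagation_constant >= threshold:
        weight *= propagation_constant
        hops += 1
    return hops

print(hops_reached(0.8))   # a specific tag: travels 13 hops with these numbers
print(hops_reached(0.3))   # a nebulous, subjective tag: only 2 hops
```

Geometric decay like this is just one candidate; the point is only that each web can publish its own recommended constant, and clients remain free to override it.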

It is important that users are not required to present as a category of information, and it is important that categories of information can grow larger than any one curator. A person should not be a brand, and a brand should not be a person.

Concept

The system consists mostly of these four types of thing: articles, the pieces of content being shared; presences, the identities users act through; tags, which presences apply to articles; and webs of trust, in which presences endorse each other as trustworthy users of particular tags.

To reiterate: presences apply tags to articles to organize them, filter them, and to alert interested parties of them. The webs of trust in which presences are organized speed and shape the propagation of updates about what articles have been tagged recently, and guide queries over the presences most worth visiting.

That's pretty much it. The rest of the document will give you a clearer picture of how many things those primitives will enable.

Some tags will have simple, objective meanings. music, for instance. A tag like this would be affiliated with the uses basic tags correctly web (meaning that basically everyone would be able to use it); it would be useful for confirming that an article is music, but it might not be an especially useful tag to most people for finding or promoting attention-worthy examples of music. Here's where things get interesting:

Consider a tag called good music. Its meaning would, of course, need to be subjective, and webs of trust can support that! You would find a good music taste web, find someone you align with, and trust them along the good music taste dimension. You'll get their recommendations, and if they haven't recommended anything today, you'll see the recommendations of the people they trust, and so on; it will immediately function as a music recommendation system that you and your music friends have complete control over. You would wake up every morning and have your client essentially run a query like "time:today tag:good music from:my(good music taste) min_similarity:0.04", and it would all be great stuff. Or, if it's not, you can rearrange your endorsements and move towards a web where it is.
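Evaluating that kind of query is not complicated. Here is a sketch, where the data model (taggings as records, your web as a mapping from presence to trust distance) is a hypothetical assumption of mine, not part of the design; the query language fields `time`, `tag`, and `from` map onto the three filters below.

```python
from datetime import date

def feed(taggings, trusted, tag, today):
    """Today's taggings of `tag` by trusted presences, nearest endorsers first."""
    hits = [t for t in taggings
            if t["tag"] == tag and t["date"] == today and t["by"] in trusted]
    return sorted(hits, key=lambda t: trusted[t["by"]])  # closer trust first

trusted = {"ana": 1, "bo": 2}  # distances from your `good music taste` web
taggings = [
    {"by": "ana", "tag": "good music", "date": date(2020, 9, 1), "article": "song-a"},
    {"by": "bo",  "tag": "good music", "date": date(2020, 9, 1), "article": "song-b"},
    {"by": "eve", "tag": "good music", "date": date(2020, 9, 1), "article": "spam"},
]
for t in feed(taggings, trusted, "good music", date(2020, 9, 1)):
    print(t["article"])
# song-a, then song-b; eve's tagging is ignored, she's not in your web
```

A real client would also apply something like the min_similarity cutoff from the query, but the core move is the same: filter the world's taggings through your own web.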

The crucial advantage this has over other recommender systems that use user similarity is that it is fully transparent, accountable, and controllable. You can see how it works, you know where the recommendations are coming from, and you can fix it yourself when there is too much bad or not enough good being recommended. It is not a black-box algorithm. You can trust it with a lot more, because it consists of people, who you can see.

A taxonomy of very good webs of trust that should arise under a healthy culture of usage

UX

I wish I could present a clear and complete design, but that will take some more consideration. For now, I'll just throw some thoughts out.

Getting this made

This is not going to happen at all if you leave it to me alone.

If you come to believe that a system like this would be good, reach out, and we'll get organized, and then maybe it will happen.

I should mention up-front that I'm not very interested in doing it in a for-profit way. Systems like this should be managed by non-profits that are constitutionally obligated to steward responsibly over any global political discourses they might come to host. They should not be designed around enriching their founders, nor around self-preservation.

(That said, of course, a good thing must fight to grow faster than the bad things that are growing now.)



For a bit of additional writing about webs of trust and part of an idea for making them efficient to query, see Using neighborhood approximation to make trust queries more efficient

mako yass

September 2020