@ Daniel Wigton
2023-07-19 21:56:09

All social media companies quickly run into the impossible problem of content moderation. I have often seen commentary suggesting that Twitter, Facebook, etc. need to make some particular adjustment to their content moderation policies: clearer rules, more transparency about moderation actions taken, erring on the side of permissiveness, community moderation, time-outs instead of bans, and so on. My thesis, however, is that all such endeavors are doomed to failure, because maintaining signal-to-noise over a single channel cannot scale.
If a platform wants to remain relevant to the majority of people, it needs to maintain signal. Filtering is a fundamental feature of communication: even in-person conversations between individuals break down without self-filtering to stay on topic and maintain constructive relations. Long-range delivery systems have a second-order problem that makes the issue even worse.
All delivery-by-default communication systems succumb to spam. This is without exception. Mail, phone, fax, email, text, forums, etc. have all fallen to spam and now require herculean efforts to maintain signal through extensive filtering. The main offenders are non-human entities that face a low barrier to entry into the network. Once in, delivery-by-default allows them to swamp out competing signals for user attention, and rewards them for doing so.
In combating this problem, most platform operators do seem to act in good faith while implementing filters, and they are even fairly successful. But they end up with the same problem as economic central planners and Maxwell's demon: to do the job perfectly, they would require more information than the system contains. This means the system is guaranteed to fail some users. As a successful platform scales, its filtering needs outrun its active users by some f(n) = n^k where k is greater than 1. This follows from the network effect: the information lives in the edges of the graph rather than in its nodes, and a graph's potential edges grow quadratically while its nodes grow only linearly.
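A back-of-the-envelope version of that counting argument, under the simplifying assumption (mine, not a measured law) that filtering load tracks pairwise connections rather than accounts:

```latex
\[
\text{potential edges on } n \text{ nodes} \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\in\; \Theta(n^2),
\qquad
\frac{\text{edges}}{\text{nodes}} \;=\; \frac{n-1}{2} \;\in\; \Theta(n).
\]
```

So even if each user produces a constant amount of content, the pairwise contexts a moderator must understand grow with the square of the user count, which matches k approaching 2 in the worst case.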
For networks like Facebook and Twitter, that impossible threshold was passed many orders of magnitude ago. What could be maintained on a small mailing list with the occasional admonition now requires tens of thousands of swamped moderators who still can't begin to keep up. Those moderators have to make value judgements about content generated by people they don't know or understand. Even the best-intentioned moderators will make mistakes of bias. Thus the proportion of unhappy users will also scale non-linearly with size.
The impossible problem takes a second step in the presence of civil authority. A filter being necessary means that a filter exists, and a civil authority, like other motivated actors, will not be able to leave that filter alone without at least attempting to tip the scales. Again, they are generally well-meaning. But what they consider noise may not actually be noise.
Well-meaning is probably a stretch, given that levers of power tend to attract the attention of people who like that sort of thing, but I like to assume the best. I tend to think there were some early signs of Jack stressing out under the load of bearing this problem. He didn't want it, but there just wasn't any way out of it. You must maintain signal or die!
But there is light at the end of the tunnel. The problem only exists with delivery-by-default where reach is universal (a problem for another post). With distributed systems there is still a filter, but it is no longer any one entity's problem. Once the social graph is complete, deny-by-default can work wonders, since each node can decide what to allow, as in the sketch below.
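A minimal toy model of that deny-by-default idea; the Node class, allowlist, and receive logic here are illustrative assumptions of mine, not Nostr's actual protocol:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    body: str

@dataclass
class Node:
    """A deny-by-default participant: messages are dropped unless the
    sender is already in this node's slice of the social graph."""
    name: str
    allowlist: set = field(default_factory=set)
    inbox: list = field(default_factory=list)

    def follow(self, sender: str) -> None:
        self.allowlist.add(sender)

    def receive(self, msg: Message) -> None:
        # No central moderator makes this call; each node applies
        # its own filter at the edge of the network.
        if msg.sender in self.allowlist:
            self.inbox.append(msg)

alice = Node("alice")
alice.follow("bob")
alice.receive(Message("bob", "hello"))        # delivered
alice.receive(Message("spammer", "buy now"))  # silently dropped
print([m.body for m in alice.inbox])          # ['hello']
```

Under this model the spam economics invert: a spammer can broadcast all it wants, but nothing lands until a recipient opts in, so swamping out competing signals stops paying.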
I don't know what the final filtering mechanism will look like, but Nostr is a lovely chance to experiment.