@ Daniel Wigton
2023-07-20 19:50:48

In my previous article I argued that it is impossible to have an effective filter in a delivery-by-default, universal-reach messaging protocol. This might seem over-specific because, after various filters, nothing is truly universal or delivery-by-default:

* Mail is limited by a cost filter, and phone calls by call screening services.
* Faxes are limited by the fact that we all got sick of them and don't use them any more.
* Email dealt with it by pushing everyone into a few siloed services that have the data and scale to implement somewhat effective spam detection.
* Social media employs thousands of content moderators and carefully crafted algorithms.
Most of these are, however, delivery-by-default and universal in their protocol design. Ostensibly I could dial any number in the world and talk to anyone else in possession of a phone. If I have an email address I can reach anyone else. I can @ anyone on social media. All these protocols make an initial promise, based on how they fundamentally work, that anyone can reach anyone else.
This promise of reach then has to be broken to accomplish filtering. Looking at limiting cases, it becomes clear that these protocol designs were all critically flawed to begin with. The flaw is that, even in a spam-free case, universal reach is impossible. For a galaxy-wide civilization with billions of trillions of people, any one individual wouldn't be able to even request all the information on a topic. The problem is more than just bandwidth; it is latency.
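To make the limiting case concrete, here is a rough back-of-envelope sketch. The population and posting figures are my own illustrative assumptions, not numbers from the argument itself:

```python
# Back-of-envelope numbers for a hypothetical galaxy-wide network.
# Every figure here is an illustrative assumption, not a measurement.

GALAXY_DIAMETER_LY = 100_000      # rough diameter of the Milky Way, in light-years
POPULATION = 1e21                 # "billions of trillions" of people
POST_BYTES_PER_DAY = 1_000        # assume each person posts about 1 kB per day

# Latency: even at light speed, a query to the far side of the galaxy
# cannot come back in less than twice the diameter, in years.
round_trip_years = 2 * GALAXY_DIAMETER_LY
print(f"Best-case round trip: {round_trip_years:,} years")

# Bandwidth: the firehose if every post had to be delivered to a single reader.
total_bytes_per_day = POPULATION * POST_BYTES_PER_DAY
print(f"Firehose volume: {total_bytes_per_day:.1e} bytes/day "
      f"(~{total_bytes_per_day / 86_400:.1e} bytes/s)")
```

Even with ideal transport, the round trip alone rules out "requesting all the information on a topic"; the firehose volume only makes it worse.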
That is without spam and trolling. Add spam and trolls, and the whole thing crashes at a rate proportional to the attention available on the protocol.
The answer is to break the promise of universal reach in theory so we can regain it in practice. By this I mean filtering should happen as soon as possible, to keep the entire network from having to pass an unwanted message. This can be accomplished if all messages are presumed unwanted until proven otherwise. Ideally messaging happens along lines of contacts, where contacts have verified and signed each other's keys. That is annoying and unworkable, however; instead, it should be possible to simply design for trust. Each account can determine a level of certainty about an identity based on network knowledge, and allow or deny based on a per-instance threshold.
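As a sketch of what "design for trust" could look like, assuming a hypothetical per-contact trust rating and a record of who vouches for whom (none of these names come from an existing protocol):

```python
from dataclasses import dataclass

@dataclass
class Contact:
    pubkey: str
    trust: float   # 0.0 = unknown key, 1.0 = key verified and signed in person

def identity_certainty(sender: str,
                       contacts: dict[str, Contact],
                       vouches: dict[str, set[str]]) -> float:
    """Estimate how certain we are of `sender`'s identity using only what
    our own network already knows."""
    if sender in contacts:                       # direct contact: use our own rating
        return contacts[sender].trust
    # Otherwise take the strongest vouch from a direct contact who knows them.
    vouchers = vouches.get(sender, set()) & contacts.keys()
    if not vouchers:
        return 0.0                               # nobody we trust knows this key
    return 0.5 * max(contacts[v].trust for v in vouchers)  # discount second-hand trust

def accept_message(sender: str, threshold: float,
                   contacts: dict[str, Contact],
                   vouches: dict[str, set[str]]) -> bool:
    """Presume the message unwanted unless the sender clears our threshold."""
    return identity_certainty(sender, contacts, vouches) >= threshold
```

The threshold is what makes filtering per-instance: the same account might accept a public reply at 0.2 but demand 0.9 before a direct message reaches its inbox.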
While requiring all messages to originate "in network" does break the promise of the internet as an open frontier, the reality is that, like the six degrees of Kevin Bacon, everyone you want to reach will be in your network if it is scaled out to a reasonable level. This puts a bottleneck on bot farms trying to gain entry into the network. They will be able to fool some people into connecting with them, but those users can then lose the trust of their contacts for introductions, algorithmically limiting the bots' reach.
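One way to picture that "reasonable level" of scaling, and the penalty for handing out bad introductions, again with invented names and constants:

```python
from collections import deque

DECAY = 0.5       # assumed attenuation of trust per introduction hop
MAX_HOPS = 6      # "six degrees": beyond this, treat reach as zero

def reach_score(me: str, target: str,
                follows: dict[str, set[str]],
                intro_weight: dict[str, float]) -> float:
    """Search outward from `me`, multiplying in each introducer's weight and a
    per-hop decay. Returns 0.0 if `target` is unreachable within MAX_HOPS."""
    best = {me: 1.0}
    frontier = deque([(me, 1.0, 0)])
    while frontier:
        node, score, hops = frontier.popleft()
        if hops == MAX_HOPS:
            continue
        for nxt in follows.get(node, set()):
            nxt_score = score * DECAY * intro_weight.get(node, 1.0)
            if nxt_score > best.get(nxt, 0.0):   # only keep improvements
                best[nxt] = nxt_score
                frontier.append((nxt, nxt_score, hops + 1))
    return best.get(target, 0.0)

def penalize_introducer(introducer: str, intro_weight: dict[str, float]) -> None:
    """A contact whose introductions turn out to be spam loses introduction
    weight, shrinking everything reachable only through them."""
    intro_weight[introducer] = intro_weight.get(introducer, 1.0) * 0.5
```

Because every hop multiplies in both a decay and the introducer's weight, a bot farm that burns its introducers quickly finds all of its paths into the network discounted toward zero.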
With Nostr I expect this form of filtering to arise organically as each relay and client makes decisions about what to pass on. This is why I like Nostr. It isn't exactly what I want, but that doesn't matter; I am probably wrong, and it can morph into whatever it needs to be.
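On Nostr, a minimal version of this could live in a relay or client as an admission check over the kind-3 contact lists that already exist (NIP-02); the event fields below are standard, but the surrounding integration and the trusted set are assumed:

```python
def extract_follows(contact_list_event: dict) -> set[str]:
    """Pull followed pubkeys out of a kind-3 contact list's 'p' tags."""
    return {tag[1] for tag in contact_list_event.get("tags", [])
            if len(tag) >= 2 and tag[0] == "p"}

def allow_event(event: dict, trusted_pubkeys: set[str]) -> bool:
    """Presume unwanted: pass an event along only if its author's pubkey is
    already inside the set this relay or client has chosen to trust."""
    return event.get("pubkey") in trusted_pubkeys
```

A relay operator or client could seed the trusted set from their own follows, widen it a few hops out, and then tighten or loosen it freely, all without changing the protocol itself.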