Nuance on the Internet is Dead
sam g | May 23
When land was surveyed as the United States expanded west, it was surveyed in squares. As a result, many early cities were laid out in grid formations. That initial decision, some 200 years ago, still influences the way we build cities today. Decisions like these have long-term effects and consequences, eventually becoming so ingrained in our society that they no longer feel like decisions. That's the secret of design: every single thing in existence was once a conscious decision. Often, when we think through how to build things, we take our own agency out of the question. We can choose to account for near-term or long-term problems. Much like the decision to survey in squares, to make the pixel a square, or to make Twitter 140 characters (at first), these decisions have knock-on effects.
I think the easiest target is to say, "well, if you limit how much people can say, you limit how much nuance they can convey." That's not really true or fair. Ernest Hemingway, sometimes credited with writing "For sale: baby shoes, never worn.", illustrates the opposite. An even better example is Luis Felipe Lomelí's "El emigrante", which reads in full: "Did you forget something? - I wish." (Spanish: ¿Olvida usted algo? -¡Ojalá!) So if you can convey a lot with a little, what is the blocker? Well, first of all, not everyone is an excellent writer. That, of course, is surmountable. Algorithms could simply dredge up the best takes and serve them to end users. That, of course, is also *not* how it works. You see, algorithms aren't built to serve up the best content. Algorithms are built to serve up the *best-spreading* content. As a result, you see what is easiest to see. If you like what you see, you see more of it. The more you see, the more the algorithm learns what to serve you. The second half of that loop is how apps optimize your usage of them. Say you get a notification every time someone likes one of your tweets. Statistically, the tweets with the least nuance and the strongest call-to-action are going to earn you the most notifications. And these notifications feel good, because they are designed to feel good. What type of behavior does that encourage?
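That feedback loop can be sketched in a few lines of toy code. To be clear, this is an illustration of the loop as described above, not any real platform's ranking system; every name and number here is invented:

```python
# Toy model of an engagement-driven feed: posts are ranked by how well
# they spread, and every like teaches the ranker to show more of the same.
# Nuance never enters the score.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    nuance: float      # 0 = hot take, 1 = careful long-form (unused by the ranker!)
    share_rate: float  # how readily viewers reshare this post

def rank_feed(posts, affinity):
    # Score = raw spread, boosted by this user's past engagement.
    return sorted(
        posts,
        key=lambda p: p.share_rate * (1 + affinity.get(p.text, 0)),
        reverse=True,
    )

def engage(affinity, post):
    # Each like/share reinforces the post for future rankings.
    affinity[post.text] = affinity.get(post.text, 0) + 1

posts = [
    Post("Share if you agree!", nuance=0.1, share_rate=0.9),
    Post("A 2,000-word analysis", nuance=0.9, share_rate=0.2),
]
affinity = {}
feed = rank_feed(posts, affinity)
engage(affinity, feed[0])  # the low-nuance post tops the feed and gets reinforced
```

Run the loop a few more times and the gap only widens: the hot take compounds its lead with every like, which is the whole point.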
Now we have a bunch of platforms that reward putting gas in the engine. Millions of calls-to-action, which reward people for sharing or liking them. "Share if you agree", followed by a statement that only a psychopath wouldn't agree with. We've created this world intentionally, because we have designed it to feel good. As a byproduct, we've also eliminated the room for nuance in this space. People can share several inaccurate calls-to-action in the time it takes to finish reading one thorough take on a subject. That means people are incentivized to create content in the former mold. We've designed nuance out of the system, because nuance doesn't sell. You can't send a mob mixed signals, but mixed signals are what make up life.
So where do we go from here? The entrenched nature of this design means that many people likely believe there is nothing we can do to change it. Some new social networks, like Dispo, have attempted to change the status quo, with mixed results. It isn't easy. Part of the intentional design in Dispo is that it takes a day to 'develop' your content. That deliberate pause is both a strong hook to return users to the app and a way to disincentivize rapid, less nuanced content. Others, like Mirror, offer cash incentives in the form of cryptocurrency for long-form content, along with voting rights over who is allocated future supply. Both are attempting to rewrite the book on nuance, but neither is seeing the same adoption (so far) as more established names in the space. Adoption, or the ability of an app to spread, is both the first and last worry of any new entrant into the field. Designing for adoption probably means using some of the tricks of call-to-action content in the packaging of the products themselves. For most, that seems to mean a beta sign-up list. I have my doubts about the long-term ability of these lists to convert users, as the gap between initial sign-up and release leaves people confused about what they signed up for in the first place.
For now, it is a minimally publicized problem. Sure, there was some initial outrage at Facebook and other information fiduciaries in 2016, but it quickly dissipated. There is no doubt that these platforms will continue to contribute to large-scale issues. The problem is that, at their current scale and given their competitors, we've collectively gone past the point of no return. Any attempt at regulation would minimize damage for a while, but behavior would eventually regress to the mean. Broader regulation might help, but it would also hamstring strong institutions built on the values that make the United States a great nation. It could also be disastrous for the broader American public, who could no longer benefit from these platforms. So now we're at an impasse. On the one hand, the lack of nuance is a real issue that has caused, and will continue to cause, real damage. On the other, we've designed our systems in such a way that any change will be a long and painful process. You can't unscramble an egg, but you can make something new with it.