Mutually Assured

Prompted by a few things last night (this alarming article detailing the online fury and abuse one journalist had been receiving, this tweet by David Baddiel, still mulling over my thoughts from last week, and a brief exchange with m’learned colleague Dan Rebellato), I wanted to quickly jot down an idea. It’s not my idea, it’s pretty fundamental to the way the internet works, but it strikes me that it may well be an idea whose time has come…

If you really wanted to solve the problem of online abuse (and in turn maybe salve some of the fury in society that seems to have been unleashed), what are your hurdles?

  1. Education - the great accessibility of the internet has seen a lot of people leap into it, but we’re all still learning the nuances and the subtleties. Some people simply don’t get how public their posts are, and the intricacies of law over whether the internet counts as publication or speech are not yet entirely resolved. (Oh yeah, and also many people are fiery idiots, or led by fiery idiots, or whatever, but I’m pretty sure that pre-dates the internet, so that’s less the issue at hand here.) The other big issue is increasing people’s awareness of how easily our social media is manipulated by ‘astroturfers’ and the like. It surprises me, when I explain the term, how many people have never considered the possibility.
  2. Money - moderation needs people, and people cost money. A lot of money, because the cost of people never declines with scale or innovation. Internet publishers and social media companies love algorithmic solutions for exactly this reason, but parsing language is one of the hardest AI tasks out there.
  3. Will - the internet is built on a model in which hits mean money, through advertising. Social media is vitally dependent on user numbers, and nothing brings in users faster than controversy. And nothing builds controversy quicker than anger.

OK, some quick possible solutions to these three then:

  1. Instant Karma - users reporting abuse and abusive users is vital, but the feedback should be immediate. I’m no fan of the panopticon necessarily, but a pop-up on a social media site informing a user that they have been reported and their activity is being monitored, whether or not there’s any actual follow-through, will go a long way to cooling the more casual abuser incognisant of the human being at the other end of their trolling. The more a user pushes through the flags and accumulates more of them, the more the big eye of, uh, nice internet Sauron should swing towards them.
  2. Mutuality - the problem with (1), of course, is the chaos of users reporting each other out of malevolence rather than hurt. You need to tweak it so that over-reporting leads to a cool-down lockout or similar, or even exactly the same sanctions as abusive behaviour. You can also start to compare who is genuinely picking up more abuse flags than they dole out, and vice versa, and I reckon you should begin to be able to pick out the bad-faith actors mathematically (see the sketch after this list). Like I said, not my great idea; it’s been part of the internet since there was an internet. But if you can create a system of mutual benefit you will be laughing - the benefit being all the good stuff the hive mind offers, rather than wading constantly through the filth of collective rage. Zoetrope All-Story used to run a system a few years back whereby you received feedback on your writing from your peers, but only on condition that you gave properly considered feedback to others. I’m not sure how successful it was - the reality of the literary world, and of the actual world, is that power is asymmetric, so you would naturally prefer the feedback of those who might actually publish you. But that asymmetry might well be part of the problem with social media. Either way, with follower counts that asymmetry is more measurable, and can factor into our little content-agnostic algorithm.
  3. OK, the problem with (2), however, is that lock-out is a weak sanction when a troll can run off, create a little Twitter egg within a minute, and be good to go all over again. Twitter does need to look seriously at its sign-up process and its barriers to entry. The more you can raise the entry costs for trolls and astroturfers, the better the experience for genuine users, surely?
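To make that content-agnostic idea a bit more concrete, here’s a minimal sketch in Python of how (1) and (2) might fit together. Everything in it - the names, the thresholds, the follower weighting - is my hypothetical illustration, not any platform’s actual system:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    followers: int = 0
    flags_received: int = 0  # abuse reports filed against this user
    flags_given: int = 0     # abuse reports this user has filed

# Hypothetical thresholds, purely for illustration.
REPORT_LIMIT = 10  # reports allowed before the mutuality check kicks in
SKEW_RATIO = 3.0   # how lopsided given-vs-received flags can get

def file_report(reporter: User, target: User) -> str:
    """Record a report, notify the target immediately (Instant Karma),
    and cool down reporters whose flag-giving far outstrips the flags
    they attract (Mutuality)."""
    reporter.flags_given += 1
    target.flags_received += 1
    if (reporter.flags_given > REPORT_LIMIT and
            reporter.flags_given > SKEW_RATIO * max(reporter.flags_received, 1)):
        # Over-reporting gets the same medicine as abusive behaviour.
        return "cool-down: your reporting is temporarily locked"
    return "reported: the user has been notified and is being monitored"

def suspicion_score(user: User) -> float:
    """Content-agnostic score: flags attracted per flag given, damped
    for big accounts, since raw flag counts scale with audience size."""
    audience_weight = 1 + user.followers / 10_000  # hypothetical damping
    return user.flags_received / (max(user.flags_given, 1) * audience_weight)
```

The point of the follower weighting is the asymmetry mentioned above: an account with a million followers will attract flags at a rate a small account never could, so the raw counts need normalising before anyone gets the eye of nice internet Sauron swung towards them.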

That said, it is alarming how much of the abuse seems to happen on Facebook, a platform that in theory militates against anonymity. Many of the abusers don’t even seem to care how visible they are. This is the bigger problem - I suspect the bigger companies just haven’t seen the financial downside to social incoherence yet. It might take a class-action lawsuit, or, sadly, something really awful and undeniable happening. It might well be what’s happening right now. Alternatively, some canny legislation could explicitly extend a duty to internet publishers to protect their employees from abuse. Which gets us to good old Isaiah Berlin’s ‘freedom to’ v. ‘freedom from’, which I think is going to become increasingly pertinent… Either way, I think a reckoning is coming…
