I was discussing a Stratechery article on Mark Zuckerberg’s choices regarding moderation/censorship or lack thereof with my friend Chris. I thought that the argument I was making was worth setting down in a public channel. Here’s a lightly edited version of my thoughts:
Legally, Facebook and Twitter can do whatever they want with regard to moderation of posts; they are private companies. Ethically, with regard to this specific situation of moderating/censoring Trump’s posts: Trump is an angry customer with a lot of power. There’s not going to be a good ethical answer for either of these companies until they are broken up into smaller pieces that lack the cultural or political power to be considered the gateway to the world/the public commons AND to attract the president of the United States as an angry customer.
The very long version: At a base level, Thompson is right. Zuckerberg is declining to use enormous political power. Twitter is using their political power. But what I think is lost in all of this is that these are companies. They are private in the sense that they are not governmental (even though they are publicly traded). Companies make decisions all the time that are political. Ravelry literally banned talking about Trump or his administration on their platform. That is totally their prerogative. They are a private company. No one’s free speech rights were inhibited–the government has nothing to do with it and has nothing to say about whether or not talk about Trump should be allowed to be banned on Ravelry.
That’s the most egregious example I can think of, but you can trickle it down to removing comment sections on posts. Removing a comment section is a political act, insofar as it cuts off speech that the company did not like (whether that speech was specifically about American politics or not). But, to that extent, companies are allowed to have politics. They are private for exactly this reason: they can make their own politics.
Twitter and Facebook are being treated differently. We are giving them outsized scrutiny over actions that are perfectly legal (and potentially even ethical; I get to that in a bit). These actions would be perfectly allowable (even reasonable, perhaps even justifiable) on a smaller platform. But these two are singled out for more scrutiny. We can and perhaps even should deal with Facebook and Twitter differently, but once we do, we need to make explicit why we are dealing with them differently.
Often people argue we should treat them differently because they have larger user bases, or because the user bases use them for what are deemed overtly political/free speech reasons, or (often by implication, sometimes explicitly) because Trump uses them.
1. Size. Size is fine, but it’s actually not size that we’re dealing with: LinkedIn is much larger than Twitter, by several hundred million users. You can argue that Facebook, as by far the largest, is in its own regulatory class; I’m fine with that. But then you have to exclude Twitter or include LinkedIn.
2. Speech. You can say Facebook and Twitter are treated differently because of what they are perceived to do, but that’s not accurate either. Gab and Mastodon do very similar things to Twitter and are not in the conversation.
3. Trump. Trump uses the two platforms under consideration and does not use other platforms that are not under consideration. There is no caveat here.
It also could be a joint consideration (you must be this big and do these certain types of things, which would knock out LinkedIn and leave only Twitter and Facebook). I doubt this. I would argue it’s really about how platforms respond to President Trump.
Yet I think that there are lots of strands working in Thompson (and other people’s) arguments, and we need to acknowledge some presuppositions: namely, that Facebook and Twitter do not get to be treated like normal companies for some reason or reasons. Because if we were dealing with them like normal companies (even normal companies with perceived free speech problems/rules), none of this would happen.
As a result of all of that logic, I’m willing to grant the argument that Twitter is using their private-company political power against Trump for American-political reasons, and that is not a good thing for them to do. However, there are a lot of companies (Ravelry, Mastodon, et al.) that are anti-Trump, and it still doesn’t matter to Trump.
So I’m willing to go a step further and say that this only matters right now because Trump is basically a mad customer who has unusually large power to make life difficult for the company he is mad at. (Otherwise he would have written his social media order when Ravelry directly called him a white supremacist; he did not, and I think it’s because he doesn’t use Ravelry.) So, ultimately, this particular moderation/censorship situation is about Trump. And insofar as it is being used as an inroad to the larger issue of content moderation, I am not sure that dealing with content moderation concerning the average person and dealing with content moderation concerning President Trump are in the same category.
(As Thompson implied, Twitter is kicking a hornet’s nest to try to get hornets on Facebook, but they’re getting the hornets on themselves instead; I think this situation has little to do with whether the average person can post white supremacist slogans or not. It does have to do with whether the president can post white supremacist slogans, because it is egregious when someone in a high, if not the highest, profile position is doing it; that implies that the platform is having problems all the way down the chain. I grant that argument, because we know that the platforms are having problems down the chain. But Twitter is treating Trump differently, because their treatment of Trump hasn’t trickled down to other types of posts as far as I am aware; they’ve said that this is their solution for politicians, but their concern seems focused on one particular politician.)
(Second Parenthetical: I think that a company like Pinterest or LinkedIn is just as “political” in the sense I mentioned in the first paragraph–Pinterest doesn’t allow anti-vaccination content on its platform, a hugely political move–but both are under far less scrutiny because they are not deemed as important as FB/Twitter. And I think they are deemed less important because the majority of their user bases don’t use the platform for overtly political posting. And Trump doesn’t use them.)
Beyond the question of what this is really about, Twitter and Facebook can, legally, do whatever they want. Ethically, I think that they can do whatever they want! If they want to flag content, fine! If they don’t want to, fine! I am a big supporter of decentralization and free speech on the internet, so I would naturally run toward “more speech,” but then you have to get into the muckety business of being ethically responsible for political threats and such running through your platform (not legally responsible, at least until Section 230 is modified).
If you take that stuff off or flag it, some people are going to say you took it off for political reasons: one person’s threat of political violence is another person’s sarcasm. Once you’re in that discussion, you then have to discuss legal concepts such as “credible” vs. “non-credible” threats and “incitement”–and it gets ever more complex from there. So neither of these two platforms is going to have a good time with its approach.
Neither of them is going to have a good time once it has grown so large that the president of the United States is an angry customer. I don’t think there is a good approach for either of them short of breaking them up into smaller pieces. Breaking them up would make being on these platforms less attractive and/or harder, preventing one or two platforms from becoming the “gateway” platform to “the world” / the public commons.