Curation as a Way to Change Social Media Structures for the Better 

by Stephen Carradini, latest edition May 28, 2021

Social media platforms are structurally unsound. They are based on principles that are not strong enough to govern platforms. Platforms are, as a result, being turned to all sorts of evil ends with limited checks. Thus, it’s encouraging to hear more and more people besides Jaron Lanier beginning to question the structure of social media platforms. Instead of criticizing the structure of various platforms (which is still an important thing to do at the moment, at least until we reach the tipping point where new laws start to go into force), Kalev Leetaru asked an engaging question: What if social media platforms were more like libraries? This goes beyond analyzing our current structure and points towards imagining new structures, which is something I’m very high on: I recently spent a year talking with Chris Krycho on the Winning Slowly podcast about how to envision new, positive, non-utopian, non-technocratic futures. So I’m very excited by thought experiments right now, and Leetaru’s is particularly interesting.

The crux of his argument is:

They would emphasize listening over speaking. Community over the individual. Local over global. They would also clearly delineate fact from fiction, opinion from evidence, using interface mechanisms to help users separate these much as libraries have long done.

-Kalev Leetaru, “What If Social Media Platforms Were More Like Libraries?”

Leetaru emphasizes listening over speaking as the critical difference between libraries and social media. I agree that this is a major difference, but I think there are two larger differences between the realm of the library and the realm of social media: curation and values. Librarians curate the library based on their values. They choose what to put in the library, what to keep out of the library, and what to cull from the library once it has outlived its usefulness. On the far opposite end of the spectrum, almost all social media platforms have hidden behind Section 230 and denied that they have any responsibility to curate anything regarding their platforms. Further, they often deny that they have substantive values at all–even though their design choices betray their values. (Almost all, because Pinterest is a counterexample; I will return to them below.) If you are looking for things that distinguish social media from libraries, it certainly seems like curation and values would be two major points of difference–maybe even more than talking and listening. First, I’ll mention curation, and then I’ll get to values in a bit.

How Curation and Values Intertwine

I began writing this essay at a moment when many people were talking a lot about curation, although that moment has since passed. I think it is healthy to think about curation when considering an overpopulated information space. (Every information space seems overpopulated these days.) But before I assess how curation works in overpopulated information spaces, I want to pause and explain what the skill of curation was and partially still is. I also want to note that curation and moderation are different. Curation is an additive skill: boosting what is positive and should be enjoyed, essentially lifting something higher up in the pile of things to consider. Moderation is a removal skill: removing things that need to be eliminated, essentially taking something out of the discourse. Both activities require expertise on a subject. Moderation is an even more honorable job than curation in that moderators take on the dirty work of eliminating bad things for those who don’t want to see them. (God bless all content moderators.) But in this essay, I’m thinking about curation.

Curation is a skill: it is the process of humans with a deep knowledge of the subject pointing other humans towards unique, interesting, exemplary, or otherwise notable forms of the subject. Like any other skill, it is developed over time into expertise. The whole original concept behind “influencers” was a form of curation: you trust my aggregate expertise in some way, and thus you can trust my specific expertise regarding this recommendation. (Obviously, money quickly got heavily involved and transformed this interpersonal expectation into a business enterprise; the conception I laid out there is now a heavily idealized form of what influencer marketing actually is.) At its base, companies are paying influencers because followers trust a form of curation skills that influencers provide. If influencers weren’t perceived as experts at something (beauty, humor, tech, being a parent, niche concerns), they wouldn’t be followed.

Once you move out of “influencer” and into the realm of professional topical journalist (i.e. music journalist, gaming journalist, arts journalist), the argument gets even easier: the expertise is clearer, the recommendations are more formal, the money is more often divorced from the content. (“More often”, because money is indirectly involved in everything in our society, even if the writer is not paid to promote a product.) People follow the work of journalists and influencers because people trust journalists/influencers with expertise to do the work of curation that the average person does not have time or interest in doing. Curation expertise is needed, and we have created societal ways of filling that need.

Curation is necessary. Helpfully, many people are willing to be curators. Unhelpfully, curation has rarely been a very lucrative job except at the extreme highest strata of whatever field is being curated. People want to be curators because being paid to be an expert on pop music confers not just meet’n’greets with artists and (low) pay but social status in certain circles. Curators are listened to. In an overpopulated information space, attention is king; being a curator of some note means that people will actually pay attention to you. Curation can be fun and the perks can be cool on their own, but these direct outcomes can arrive with a secondary effect of status that is (to some) more interesting than the act of curating or the perks.

The mostly-offscreen power of curation, beyond the ego boost of getting someone to consume something on your recommendation and the status that confers, is that curation transfers values. Curation is a way of transmitting values regarding the subject–people who curate necessarily curate with an eye towards what they-as-experts think is worth paying attention to. Any choice about what is worth paying attention to requires criteria by which the decision is made, and those decisions are value judgments. Those values can be aesthetic, utilitarian, economic, or moral–and often all of them at once. Curation tells people what is good and has a set of values detailing what the good is. And if everyone follows what a curator thinks is good, then the curator’s sense of what is good is given a higher profile. Perhaps more people will make things that are like that original good thing. This would be enjoyable for the curator: more things that I like, in the manner that I like them! It is also enjoyable, however, for the people following the curator, as the followers now get more things that they like in the manner that they like them. It is a virtuous loop, right up until the time that the curator starts recommending things that the audience does not think are good–whether through their own self-indulgence, the audience moving on to other things, or a million other reasons. (Headline: People are fickle.) Curation is not just the function of a necessary gatekeeper exercising discriminating subject expertise; it is about developing a conscious or subconscious theory of the good in relation to the subject and applying that theory to what should be curated. That’s what curation is; or at least, what it used to be.

For an example of how this values-driven curation works, listen to this algorithmically-generated list of 260 versions of the hymn “O Come, O Come, Emmanuel” and tell me which one is your favorite and why. The reasons that you come up with for why something is good in relation to other things that aren’t as good or aren’t good at all—even things that are drawn from the same source material—show a development of aesthetic values in your listening. If you can identify what you like but not why you like it, then you have some set of values but not the expert language to draw on to explain why. (If you like all of them equally, then you have very wide, non-discriminating tastes, which is its own type of value.) Traditionally, expertise is conferred as the result of formal or informal education, as the materials from which to develop expertise and curation were often historically only available through specialized training. (Access to objects of and texts about art history, music history, architectural history, and more were not readily available for autodidacts in many places even into the 20th century; in places throughout the world, this is still true.)

However: the emergence of the internet and social media made the basic materials upon which to curate values available to all, as the internet made art history, music history, architectural history, and more available for autodidacts in a variety of ways. After access, it’s the next step to start developing your own values, figuring out reasons you like things. You’re figuring out what is the good–regardless of whether you have outside expert, teacher, or historical opinion on what is the good. There is value in deciding your own version of the good in an artistic realm; the good need not be locked in ivy-covered towers to be accessed only by experts. Individual appreciation of the good in an art form is beautiful. But good art yearns to be shared and experienced together. A habit of sharing what is good with people turns one-off recommendations into a habit of curation. The more that the receiver enjoys the curated work and bends their ear to it repeatedly, the more that the values the curator used to pick what is good become shared by the listener. Curation is a community act regarding shared community values.

While I have used art recommendations as an example of curation, we live in a world of superabundant communication and information. Thus, curation can easily be applied to information about any topic, not just art. Yet, drawing on the prior argument, curating information suggests a need for shared community values (or the ability to develop shared community values) before “curation” can truly occur. Curation, then, requires an at-least-partial shared vision of the good in some sort of (thin or thick) community. People are involved in many thin and thick communities at once, so participation in various types of communities creates overlapping–sometimes complementary, sometimes contradictory–perceptions of the good. As we establish our ideas of the good, we then come in contact with other people who agree or disagree with our vision of the good. Those who disagree posit their own idea of the good, and then things happen.

In our current moment: things happen on social media. This competition of goods and thus lack of shared good in social media has resulted in Russians manipulating American elections, Twitter mobs, shaming, livestreamed violence, and a whole bevy of other concerns. When millions or billions of people are left to their own ends to determine the good, curate their own experience of life for others, and try to convince others of the goodness of their project in platforms where the platform owners have decided to abdicate the responsibility to curate from any set of stated values at all, there is a lot of room for things to go horribly. And go horribly they have, which is why Leetaru is imagining: what if a library, though?

A Brief History of Curation

But before I explain how we curate social media into a library-esque space, it’s useful to show what types of curation people have already explored on the internet. (Many different types of curation experiments are not mentioned here; I have chosen some prominent examples to move the history along instead of cataloging all ways of curating.) Curation has been around since the advent of multiple choices; in a space where a non-expert has too many choices to make about what to engage with, relying on an expert to point the way is a common way forward. (Some people just decide to engage with everything, but not all of us are polymaths.) This meant de facto that curation was the province of experts, with that expertise achieved through rigorous, often-arcane channels: degrees, certifications, expert peer recognition, etc. This expertise allowed access to channels where curation could be exercised: theoretically anyone could form opinions of the good if they had access to materials on which to exercise curatorial insight, but not everyone had access to the channels of publication to turn their recommendations into a prolonged habit of curation for a community.

This monopoly on channels of publication changed with the advent of the internet in the ’60s-’80s. A larger number of people could publish their curation, had they the technical savvy or university connections to do so. This accelerated with the advent of the World Wide Web: one of the earliest types of blogs on the Web was a list of places you could go on the internet that were cool. Once the web had gotten too big to keep up with in one big index, early blogs made lists of things that were cool so that less-motivated internet denizens could spend more time looking at cool stuff and less time looking at crap. This is basically normative curation: “I have looked at all the stuff and am now an expert; trust my expert opinions and you will profit in more cool and less lame website viewing”. The only difference is that you didn’t need accreditation of any kind to be an expert on the topic of the internet. Anyone with sufficient technical skill, time, and interest could be an expert. This type of blogging continues to this day. I’m a part of it, over at Independent Clauses! I expect this type of blogging to go on forever. 

However, I expect it to go on as it does: in relative isolation. Blogs have always been hard to link together: the visual and conceptual clunkiness of blogrolls and blogrings attest to this. Sharing posts is not as speedy as it could be: you have to write a second blog post to talk about the first blog post, and you’d have to make it worth your time and the reader’s time, so it should be moderately long, and then sometimes you just never get around to writing it at all for want of the extra length. I’m living proof of this: this essay has taken years to write, when I could have just tweeted out “yo but curation and values are just as important if not more; nothing is curated on social media but everything is chosen via values in a library.” Curation is nothing if not old and difficult. The open web is incredibly important and incredibly fragile, and I hold it in highest esteem; it is, perhaps as a merit, something that requires quite a bit of effort to use and quite a bit of sacrifice to maintain. It is not the focus of this essay, except to say that curation has been hard for a long time.

So then Web 2.0 came along and people were interested in collaborating, connecting, creating, and generally doing stuff on platforms. Instead of isolated islands of blogs, we got platforms where many people could talk together, work together, argue together, et al. This is where we are roughly now: Facebook, Twitter, Pinterest, Instagram, et al. And those social media channels widely expanded the number of available channels of publication while greatly decreasing the technical skill needed to publish. Furthermore, this proliferation of channels allowed anyone with a non-censored, non-Facebook Basics internet connection to have access to the same tools that lead to the channels that used to be the province of experts. For one example: how many tweets will people see on cable news or hear on NPR this week?

Beyond that, social media makes everyone who can get to platforms a potential curator because the group of people handing out the title of expert is massively expanded: no longer are experts only those with advanced degrees and the approval of their professional peers. People can vote with their tweets, deciding that this person on Twitter is an expert on a topic and following their curation in response. Would-be curators can hang their shingle with no credentials; if the content they curate is good and works like curation for the person seeking curation, that would-be curator and their vision of the good has now been approved as a curator. From there it’s only one step to monetizing that curation, and then you have the aforementioned influencers. There’s no need to go through the vetting process of becoming an expert, with all the attendant education in values that expert credentialing often entails; you no longer need to be taught the values of good postmodernist art in an art school to critique contemporary art. You no longer need to have years of practice in listening to music and determining what sonic values each piece espouses and why, and whether those values are more or less prominent in each piece, and which values are the most appealing.

An offshoot of this form of collaborative curation is Wikipedia; Wikipedia is one long collaboratively curated space where information is debated, contested, accepted, rejected, modified, and deliberated. There are significant problems with this model–the oft-agonistic model of collaborative curation here can become raucous, ideological, and unpleasant; technical rules are almost as critical as curatorial judgment in determining the content, which can become the tail wagging the dog; and these two aspects (among others) can lead to a lack of diversity in opinion and in the identity of Wikipedia editors. But in general, this is about as good as collaborative curation gets—and it’s got some serious problems. Notably, almost no one has been arguing that this curation model would be a good model for social media at large, except Wikipedia itself.

Wikipedia has recently doubled down on open curation with an emergent social media platform called WT:Social, creating what some have called an anti-Facebook social media platform. The idea is that anyone can edit anyone’s social media post or even delete it if they so desire; this is an extreme form of curation that goes beyond anything our contemporary social media platforms enact. It removes the deliberation aspect and gives everyone a hammer to hit nails with. I do not expect that this form of curation will have much longevity, because it seems to be inherently impossible to maintain and unusually conducive to groupthink. Giving literally everyone the ability to remove things they don’t like (whether or not anyone else dislikes it) democratizes curation to the point that it’s almost just entirely destruction. Moving curation to the extreme end of everyone having equal power to curate what others see by literally deleting things that people don’t want other people to see is a dead end that turns positive curation into moderation.

Because I see the old form of individual online curation as too isolated from community (no matter how much I love blogrolls), the contemporary form of collaborative curation as successful but hampered by problems, and the extreme form of egalitarian curation as a dead end, there needs to be a new way of curating that responds to the concerns of platforms and curation. There needs to be a new way of curating that can make social media more like a library and less chaotic. And it’s not going to be any of the models of historic expertise, historic blog curator, contemporary influencer, contemporary collaborative curation, or extreme egalitarian curation. It will take bits of all of those things and some new things. But with all those things stuffed into it, it will still on its surface look like the old curatorial model: top-down curation from a (relatively) small number of people.

A Way Forward

Curation cannot be easily distributed, nor should it be. While some experiments in curation have happened on the internet and more are underway, I would argue that all efforts at developing curation online will almost inevitably recreate the top-down structure of historic and current curation structures. (This is true even in collaboration: 77% of Wikipedia is written by 1% of its editors.) To create a library out of social media, you would need to have top-down curation. But it would have to be a new form of expert curation.

A small number of experts invested in a topic and publishing through widely-used platforms sounds like a revanchist move, a desperate grasp for traditional experts to reassert their power over things. But the top-down experts that curate on a platform need not be traditional experts with credentials; nor do they need to be solo types publishing in their own blogs. Instead, they can be identified and their viewpoints promoted in ways that are commensurate with what each platform is. This leads to my argument: social media platforms must be curated by their creators in collaboration with the platform’s users on the basis of the platform’s values. To that end, there are three necessary types of curation required for social media platforms to become more like libraries: ethical curation of values, technical curation, and social curation.

Ethical Curation: Values 

Ethical curation of values is incredibly important for any sort of curatorial effort. The values that I mentioned previously, which are baked into every act of curation, need to be made explicit both by people curating and platforms that are hosting. People curating material should be clear about the things that they view as the good in that subject; bloggers do this all the time, and social media personalities are not that far off from such meta-analysis in their posts. This is especially true of those who are doing product reviews: “does it work” is an ethical stance upon which to judge something. Given that many people adhere to things and ideas that don’t work or that intentionally only work with great difficulty, functionality is not a uniform stance upon which to base a thing’s value, much less ease of use after a thing functions. It’s not hard to imagine that people could develop clear, articulate stances about the good they are promoting and curating. Being more forthcoming with this information would be a first step towards reforming curation toward a more calm, library-like state. But disclosures do not an ecosystem make. (We have reduced some disclosures to emojis placed in Twitter bios; as an information-transmitting device, it has become extremely effective and actually probably made things more rancorous on Twitter.) While users stating values is necessary, it is not sufficient; what approaches sufficiency in value-setting is the platform taking responsibility for itself and stating its values. (Even closer approaching sufficiency would be transparent moderation practices based on clearly stated values added to user disclosures and platform values.)

Platforms must own up to stances. The first major platform that was proactive in stating any divisive stance was Pinterest: Pinterest decided to block anti-vaccination memes and images from search on its service and replace them with information from “authoritative” sources. Pinterest doesn’t believe in anti-vaccination campaigns and they do believe in WHO statistics. That’s a stated ethic with a curation stance to go with it: instead of seeing pins on certain topics, Pinterest has curated information for its users to see. They have taken curatorial power out of the hands of users and placed it in their own hands for this specific topic. It is hugely novel for a social media platform to do this of its own accord, and highly commendable just for the fact that they took a stance. Many would not like this stance, as it seems to run afoul of the general principle of free speech (but not the legally binding version of free speech). Thus, the stance has a chance to lose them users. This is a decision with teeth! It’s hard to make those. Their single stance is highly incomplete as a full set of ethics; but in this one point, their ethic is concrete and actionable.

While this essay is about curation and not moderation, I would be remiss if I did not mention the Twitter Ban. Twitter, Facebook, and other platforms created a moderation controversy when they removed then-President Donald Trump from their platforms in the aftermath of the events of January 6th. While many took this as a sign that the social media platforms have liberal values in censoring a Republican president, many others felt that this was an emergency stance of last resort and not a positive statement of values. Given that it is unclear specifically how the posts leading up to January 6th were different from the posts made throughout then-President Trump’s whole presidency, it is fair to state that this removal by the platforms was a statement of values. The American federal regulatory regime prefers to wait for a disaster in an industry before regulating that industry; it is justified to wonder whether American social media platforms that have been given license by the federal government (through Section 230) to be de-facto governments have adopted the same stance about their own industry. It is also fair to wonder what sort of disaster they are waiting for if the events of January 6 were not enough to produce positive, clear, affirmative statements of values that are enforced equally across the platforms. Moderation and curation are both content issues based on values: a failure of moderation is likely to have a failure of curation attached to it, as a failure of values regarding what to remove is the flip side of the coin from a failure of values in what should be rightly posted.

More platforms need to be clear with what their stances are about types of things that can be curated and the ways those can be curated on the platform. It makes sense that Pinterest would be the first service to take a clear curatorial stance on ethical grounds, because it is itself a curation space; it saw that anti-vaccination memes and images were being curated within its bounds, and it said, “no, that is not ok.” Now anti-vaccination curators must go somewhere else to curate that content. Again, this is a commendable thing to do, not just for the people who like this idea on its merits (people who don’t want their child to get measles from an unvaccinated child see removing anti-vaccination content as a positive value) but on principle: they have said what type of space they want to be, and what types of ideas they want to curate (and allow curation of).

If this sounds like terms of service, it’s not quite that. It’s more than terms of service, which are largely legal jargon and craven responsibility-shirking maneuvers. While Pinterest’s removal of anti-vaccination content was a removal policy that theoretically could be dealt with in terms of service, they instead included an additive aspect by replacing the content they did not like with content they did like. Instead of just writing, “you cannot speak of this topic” in a terms of service, Pinterest curated the type of content they wanted to have in their platform and thus explained in a minor way the type of platform they want to be. Additive statements like this are harder to make and harder to sustain than subtractive statements: the history of niche social media platforms (Ello, Mastodon, et al.) is littered with the complexities of creating platforms that have strong additive stances one way or another. It is hard to set out additive values, partially because people who disagree with those will then leave–and social media companies (any companies, really) don’t want to do things that make people leave. However, some companies have found that having a smaller group of people who care a lot about the platform or product will result in a group that is more easily organized around additive values that the platform has set out.

Setting out additive values and curating a vision for the content that will be in a platform is hard. Any organization with articulated values will have a smaller audience than that same organization without articulated values–people posting on an organization’s platform may disagree with the organization on some things but would never know it if those values were not explicitly stated. If an organization’s pre-existing values are stated up front, it is much easier to know much earlier whether or not this platform would be a platform whose values you would be interested in. It would require followers to do a sort of opt-in function up front instead of an opt-out function later when the values of the curator (be they organization or individual) and the follower conflict. It would make for slower growth with stronger bases of support; this method prioritizes slow growth to a stable point instead of meteoric rises and falls. These slower-growing curators with stronger bases of support would ultimately have smaller followings than the biggest platforms and influencers that exist today. (To some extent this concept already exists, in terms like “nanoinfluencer” and “microinfluencer.”) So this is very much not the way to build the world’s largest social media platform or become the world’s most popular influencer–but it is a way to build social media that looks more like a library.

Once a platform or influencer has stated its values publicly, the goal would be to attract people with those same values. “But wait,” you may think, “This would just make more filter bubbles! People already self-select who they want to see on social media!” It is true that people already self-select on social media, especially Twitter. However, part of that self-selection is built through “dunking on” people who believe differently than the self-selected in-group and using it to build support for the in-group’s points of view. Twitter has very few, if any, ways to police people dunking on other people–it could be argued that the core functionality of Twitter is dunking on people. It could also be argued that it is not in Twitter’s values or self-interest to stop people from dunking on other people. Twitter benefits from people insulting each other violently on Twitter, in that more attention is brought to Twitter, and then Twitter can show more ads and make more money. So Twitter has a self-selection problem exacerbated by a dunking problem because the values of Twitter tacitly encourage anything that will get more profit for the company (including dunking on people). Twitter has also said they are the “Free Speech Wing of the Free Speech Party”; if this is their main value, then the untrammeled row that is a Twitter conversation is actually an ideal form of Twitter. Their values, whether profit or free speech, are totally fine with self-selecting, dunking, and other problematic habits.

If one were concerned that filter bubbles are a problem, then any future social media could be set up to avoid filter bubbles and/or avoid dunking. The problem of bad outcomes from bad values in existing social media does not mean that the answer is fewer values—the answer is better outcomes from better values. In values-forward platforms, you can say “one of our values is no dunking” and then do things that police dunking. You can do things to pop filter bubbles. You can do whatever you want on a platform that is driven by whatever values you want.
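To make the idea of “state a value, then do things that police it” a little more concrete, here is a minimal sketch of what a values-forward moderation check could look like in code. Everything here is hypothetical: the `Policy` and `Verdict` names, the keyword heuristic, and the example “no dunking” value are all invented for illustration, and no real platform works this way. The point is not the (deliberately crude) filter, but that every removal decision traces back to an explicitly stated value rather than an ad-hoc judgment call.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Policy:
    """A stated platform value paired with an enforcement signal."""
    value: str            # the publicly stated value, e.g. "no dunking"
    keywords: List[str]   # crude signal phrases; a stand-in for real detection

@dataclass
class Verdict:
    """The outcome of a check, traceable to a specific stated value."""
    allowed: bool
    violated_value: Optional[str] = None

def check_post(text: str, policies: List[Policy]) -> Verdict:
    """Return whether a post is allowed and, if not, which stated
    value it violated. A real system would use far better signals
    than keyword matching; the traceability is what matters."""
    lowered = text.lower()
    for policy in policies:
        if any(phrase in lowered for phrase in policy.keywords):
            return Verdict(allowed=False, violated_value=policy.value)
    return Verdict(allowed=True)

# A values-forward platform publishes its policies up front,
# so users can opt in knowing exactly what the platform stands for.
PLATFORM_VALUES = [
    Policy(value="no dunking", keywords=["imagine thinking", "ratio'd"]),
]
```

A post that trips a policy comes back with the value it violated attached (`check_post("imagine thinking that's a good take", PLATFORM_VALUES)` returns a disallowed verdict citing “no dunking”), which is the opt-in-up-front mechanism described above: the rule and its rationale are visible before anyone joins, rather than discovered at the moment of removal.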

Even if you want to build a platform that is solely dedicated to one type of person—a social media platform that would be susceptible to groupthink and filter bubbles de facto–you can do that; the values that a platform or organization espouses can be additive or subtractive and thus have positive or negative outcomes no matter how homogeneous or heterogeneous a group of people is. For instance, I think of all the many dating sites that cater to a specific type of person looking for another type of person; hobby communities, interested in a specific hobby or hobbies; or increasingly, vanishingly specific subreddits. Any of these can have values that promote peaceable interactions that look more like a library, and any of these can have values that increase strife and conflict for people.

Of course, the question comes down to what organizations think are additive values and subtractive values. Ravelry is a hobby community that banned support of the then-President of the United States because it wanted to create a space that rejected certain values it associated with the then-president, and instead supported different values it felt the then-president did not support. Values will necessarily cause conflict, perhaps even much conflict: libraries themselves cause conflict over their values, whether by boycotting book publishers that work against the library’s values or in the never-ending arguments over “controversial” books being included in or excluded from a library. So the goal of values-based platforms isn’t the absence of conflict, but principled spaces that, if they do have conflict, have conflict for values-based reasons.

And although Ravelry’s actions are provocative and controversial, they are the sort of values-forward work that I’m calling for. Calling for values-based platforms will almost certainly result in more platforms that people don’t like and more that they do, instead of a small number of platforms that contain enormous numbers of commendable and reprehensible things. Part of the controversy Ravelry encountered was due to the fact that its stance came after people were already on the platform. Not every social media platform will have the luxury of starting after this post and putting its values first; some platforms will have to work their stances into their platforms over time. This will be a difficult transition for people who held views opposite to the platform’s newly stated ones but had assumed they were on the same side. It will also be hard for those who knew their views were opposite to the platform’s but were fine with that, because those views didn’t matter to the daily life of the platform they were in.

So if encouraging values-based platforms is a way of creating more and more specific communities based on sets of values, is promoting the idea of values-forward platforms just a way of further carving up the universe into binaries or categories? It definitely could be! However, it could also be used in many different ways. It could be used to hold open spaces for topics: platforms could build in civility, as determined through a set of practices decided upon by the platform, as a value of the platform. I mentioned earlier the additive and subtractive aspects of values; here, the additive value (civility) will have to include some subtractive aspects. Additively: here is what civility is, and it will be applauded. Subtractively: here is what civility is not, and it will be decried and removed. Subtractive values–what will be removed–will always be easier to enforce than additive values. But the big change here would be that platforms set out these values on their own instead of being forced into them by government.
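To make the additive/subtractive distinction concrete, here is a minimal sketch in Python. The behavior categories are entirely invented for illustration; no real platform's policy is implied, and a real system would need far richer signals than labeled behaviors.

```python
# Hypothetical sketch: expressing a platform value ("civility") as an
# additive side (behavior to applaud) and a subtractive side (behavior
# to decry and remove). All category names below are invented.

CIVILITY = {
    "additive": {          # what civility is: applaud and promote these
        "asks_clarifying_questions",
        "cites_sources",
        "acknowledges_disagreement_respectfully",
    },
    "subtractive": {       # what civility is not: decry and remove these
        "personal_insults",
        "dogpiling",
        "bad_faith_quoting",
    },
}

def classify(behaviors: set[str]) -> str:
    """Subtractive rules are easier to enforce, so check them first."""
    if behaviors & CIVILITY["subtractive"]:
        return "remove"
    if behaviors & CIVILITY["additive"]:
        return "applaud"
    return "neutral"

print(classify({"cites_sources"}))     # applaud
print(classify({"personal_insults"}))  # remove
```

Note that the sketch mirrors the essay's point: the subtractive check is a single set intersection, while deciding what counts as an additive behavior in the first place is the genuinely hard, human part.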

There is a fairly damning record of social media businesses choosing not to self-regulate in any meaningful way (indeed, that record is how we got into this mess in the first place), and I agree that there is more than a little idealism thrown into this thought experiment. However, there’s more to this argument than the idealism that comes from the ideal space of a thought experiment. There’s raw pragmatism: given the slowly growing appetite for regulating social media (as evidenced by sustained interest in Congressional hearings and trial-balloon bills, if nothing else yet), it would be wise for platforms to get out in front of the regulations. Set your values now! Platforms can avoid being forced into congressional oversight and litigation and having their names dragged through news cycle after news cycle. Yes, there’s a cynical argument that waiting for regulation that may in effect be toothless, or at least insufficient, is economically and pragmatically smarter than setting your own rules and seeing what happens. The costs are known up front if you do the work of setting out values and weathering the tumult of losing users who don’t align with your values of curation, while the costs of waiting are variable and may even be favorable (or at least less expensive than losing significant numbers of people from your platform).

But this cynical argument misses a critical point about the value of values. Even with the potential loss of members, the members a platform does have will most likely be more strongly connected to it if they connect with its values. That in itself is a strong benefit. Platforms that choose to use targeted advertising would find this a boon. I do not think targeted advertising is healthy at all for the long-term existence of a social media platform, so I do not advise that people run their platforms on targeted advertising, but it is an inescapable part of the social media world at this exact moment that I would be remiss to elide. Platforms that forgo ads in favor of subscription models or more exotic methods of revenue would benefit, perhaps even more greatly, from having users with high affinity for their platform; strong connections to a platform mean that people are more likely to resubscribe.

This strong affinity to the platform is particularly valuable if the community is not built on politically polarized ends, not least because we haven’t found many ways to build moderate, civil communities online. (See comment sections, Twitter mobs, and vaguebooking for examples.) If a platform sets up rules that promote tolerance, diverse viewpoints, civil disagreement, and careful thinking, then people who value those things may want to be a part of the platform and mutually affirm one another. Are there many such people, on or off the internet, who would like to be a part of this type of platform? Who knows? The social media platforms we have now are built for extremes–Web 2.0 built an internet that isn’t good for much else (in relation to communication). Putting values first, and making those values ones of civility, could bring people out of the woodwork and result in different types of interactions on the internet.

Will these platforms and interactions be the majority? Certainly not while Facebook has two billion users and my theoretical future values-first, civility-oriented platforms have zero. But just because a values-first platform won’t take Facebook’s throne doesn’t mean it isn’t worthwhile, especially if it is tightly connected to the affinities of its member audience. If you curate the values of your platform so that people know what to expect from you, attract people who are interested in those values, and add a subscription or exotic revenue source to make it sustainable, then you have a platform that is much healthier over the long term–one that looks more like a local library than a giant shouting match.

One can look to physical libraries for how this works in practice. Values-driven platforms are similar to a library in that libraries have expectations that they set out for people. Some libraries are very quiet spaces for reading. Some are noisy places for people to work, collaborate, chat, and recreate. Some have spaces that are both. All have rules about what books will be in what sections of the library, as well as what books need to be decommissioned. They have rules about how to procure books, how to handle challenges to the books in the library from the community, and how to treat people who borrow the books (late fees or not, etc.). All of those rules are variable depending on the library and the library system. When I go to a new library, I assume it will be a quiet one out of deference–but sometimes it’s really not a quiet library at all! These rules are set out for people, often in charming posters on the walls. The values are set in advance, library to library; sure, there are some things that the government requires of them, but libraries often (not always!) have an element of freedom in how they set up their work. Those who would take that power away from libraries have my ire.

Technical Curation: Design Choices 

The next step in curating for social media to be more like a library is technical curation. Some of the things I’m mentioning require technical aspects to be put into place, although many don’t. In this way, tech companies are already doing this curation: they make design choices about their platforms, and those technical choices reflect their values. Some may argue that this is hardly curation, but I would argue that tech products can, in one sense, be understood as curated sets of choices put in front of the user. (It may be true that all the choices add up to more than the sum of their parts, but that merely makes the curation of what goes in even more important for the final product.) Facebook was actually on the leading edge of privacy concerns early in its existence, but it dropped most of its technical privacy elements when Wall Street demanded it make money. (See 27:00 of So, Bob’s “No Place To Hide: Mistakes Were Made” podcast episode for an interview with former ACLU lawyer and Facebook employee Tim Sparapani on this point.) Instead of holding to privacy, Facebook rushed to advertising, because its values put making money over privacy. Technical choices reflect values. Many people have said this better than I have, so I’ll leave it to them to press the point: no design decision is value-neutral.

So in a values-driven platform, the values that the platform states upfront or grafts in later must be reflected in the technical choices. This can be everything from banning certain types of ads or ad targeting (or just all ads), to immediate takedowns on flagged words (a crude but occasionally effective method), to complex moderation tools that let moderators make decisions on difficult topics, to making the banhammer readily accessible, to replacing content (as Pinterest did), to bunches of other things. Whatever the values put forward are, the technical aspects of the platform on the backend must match them.
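As a rough illustration of what a values-to-backend mapping might look like, here is a hypothetical sketch in Python. The flag lists, action names, and routing logic are all invented for this example; they are not drawn from any actual platform, and real moderation systems are vastly more sophisticated than word matching.

```python
# Hypothetical sketch: a backend policy object that turns a platform's
# stated values into enforcement actions. "remove" models the crude
# flagged-word takedown; "escalate" models routing hard cases to human
# moderators. All rule contents are placeholders.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class ValuesPolicy:
    flagged_words: set[str] = field(default_factory=set)      # immediate takedown
    escalation_topics: set[str] = field(default_factory=set)  # send to human mods

    def review(self, post: Post) -> str:
        words = {w.strip(".,!?").lower() for w in post.text.split()}
        if words & self.flagged_words:
            return "remove"    # crude but occasionally effective
        if words & self.escalation_topics:
            return "escalate"  # difficult topics get human judgment
        return "allow"

policy = ValuesPolicy(
    flagged_words={"slur1", "slur2"},          # placeholder stand-ins
    escalation_topics={"election", "vaccine"},
)
print(policy.review(Post("a", "Thoughts on the election?")))  # escalate
```

The design point the sketch is meant to surface: the policy object is data, not code, so a platform could publish it alongside its values statement and let users compare the two.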

But even more critically: the design choices on the front end must match the stated values of the platform. If, for instance, slow thinking is the goal, then things like instantaneous sharing, instantaneous responses, and algorithmic sorting based on speed of sharing should be eliminated. If enlightening conversation from diverse conversants is the goal, then tools that allow people to interact without abuse (hiding users, hiding comments, hiding whole conversations–all of which I, probably controversially, consider baseline user-friendliness) should be balanced by methods that promote conversation: guidelines for how to speak, guidelines for what to speak on, and, in short, examples of ideal behavior on the platform. How do you find ideal examples of behavior? Platforms should take their values, find (curate!) people who are employing them particularly well, and promote their work/conversation/ideas/content/whatever as ideal. In any case, the values that platforms put in place conceptually can be driven through to users by technical means of promotion within the platform. I would like platforms to do this through human moderation, but I am sure that platforms would prefer some sort of technical means. Nevertheless, these are all ways that curation works at a technical level on the front and back ends; optimal technical curation would require stating up front the values that are technically baked in. They’re already in there; they’re just opaque at the moment. For platforms that already exist, let’s make those values explicit; for new platforms, let’s start from the beginning with the values clear and add the technical work on top of them.

Platforms like Mastodon and Gab have already done this: stating their goals for the platform up front and letting come who will come. Now, those two platforms have largely slotted into political poles. While I mentioned that this is a thing you can definitely do, I am not super-thrilled that it has happened. I would prefer that we develop more platforms dedicated to civil, moderate, cross-boundary communication; but the rise of values-based platforms is in itself a way forward to more values-based platforms, many of which I will like and many of which I will not. But certainly a group of people who agree with each other’s stated values at a basic level is more like a place where people go, hang out, and learn than Twitter is right now–although what they learn is incredibly important and varies drastically between Gab and Mastodon. Values are important!

Technical curation in platforms is the library equivalent of administrators and librarians choosing how many book racks to have, how many computers, how much quiet space and loud space, and how many makerspaces and technology zones. These decisions bear on the content of the racks and makerspaces, but no content can come into the racks or makerspaces without the racks and makerspaces having been planned out technically. Where administrators and librarians don’t have power to change their own format, that is a disappointment I would like to correct. But in general, all the libraries I have been in have been subtly-to-massively different in their technical construction. Choosing these structures is an act of curation for what the library will be, what will happen in it, and how people can use it.

Social Curation: Platform Governance

So by starting with values, looping through the technical backend choices, and emerging at the frontend technical choices, we’ve returned to an idea of people using the platform as curators. I mentioned earlier that platforms could promote users whose work closely fits with the values of the platform as a way of showing examples of what the platform wants to be about—in other words, they would curate the best curators. However, the ideal curator on a platform from the platform’s perspective is not the only perspective that matters when choosing an ideal curator of the platform to promote. Curation is a highly social act, and the people of the platform should have a say in how the curation of curators works. Having a say in who the curators are (where the ideal could be innovative work, thoughtful takes, funny insights, interesting notes, even-handed political commentary, or anything else) requires regular users having a say in curation itself and in the life of the platform.

So it’s healthy for the platform to pick curators who both fit with the values of the platform and are of interest to the users of the platform; this puts the values of the platform, the technical processes of choosing people who fit those values, and the social response to those values and technical choices in a virtuous pattern. Ideally the results would be curators who create content/conversation that the community supports and that the platform can defend as fitting with its values. Those people would be examples of how the community works and would serve to reinforce what the values look like in practice. Then the community could continue to grow with people who are interested in that sort of content/conversation, and the virtuous loop would continue to loop. If things started to go south somewhere, technical or social interventions from the platform or the community would be needed to address problematic content/conversation that doesn’t adhere to the values of the platform; this gets into the concerns of the noble moderators who fall outside the scope of this essay. (Truly, be they noble.)

There are lots of ways this social/platform interaction in choosing exemplar content curators can happen. The ideas start to move back into the land of the technical: upvotes without downvotes, a la Facebook; upvotes with downvotes, a la Reddit/Imgur; a nominations box of sorts; year-end votes; and so on. All of these have downsides, some of them obvious. Some of these may be unethical in many or even all contexts! But these are ideas of how people could be brought to the fore technically. Things could also be done socially: moderators would note who is leading the conversation/content creation positively, and being responded to positively by the community, in line with the values of the platform, then suggest them for promotion. Moderators on the forums I visited as a teenager did this all the time. It was part of the function of being a moderator: finding more moderators, intentionally or not.
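One way the vote-plus-moderator idea could be sketched in Python: the signal names and weights below are entirely hypothetical, but they illustrate how community votes, moderator nominations, and values violations might be combined so that raw popularity alone cannot outweigh the platform's stated values.

```python
# Hypothetical sketch: scoring candidate "exemplar curators" by mixing
# community signals (votes) with platform judgment (moderator nominations,
# values violations). The weights are invented; a real platform would
# tune them against its own values.

def curator_score(upvotes: int, downvotes: int,
                  mod_nominations: int, values_violations: int) -> int:
    community = upvotes - downvotes       # Reddit-style net votes
    social = mod_nominations * 10         # human moderator judgment weighs heavily
    penalty = values_violations * 100     # violating stated values costs the most
    return community + social - penalty

candidates = {
    "alice": curator_score(upvotes=120, downvotes=10,
                           mod_nominations=3, values_violations=0),
    "bob":   curator_score(upvotes=400, downvotes=50,
                           mod_nominations=0, values_violations=4),
}
promoted = max(candidates, key=candidates.get)
print(promoted)  # prints "alice"
```

Here bob has far more raw upvotes, but his values violations sink his score; the weighting itself is the platform's values statement expressed numerically.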

So there are tons of ways of going about this, but ultimately, the people who are most socially involved in the platform should be given a say in who is promoted to exemplary curator. The last thing I’ll say on moderation in this essay on curation is that users should also have a say in who is not allowed to speak, what topics are not allowed to be spoken on, and in what ways those topics may be spoken of. But people can be fickle and unreliable on this front (to say the least), and platforms need to exercise oversight. (That “platforms should exercise oversight” is a controversial statement is painful.) The platform should make sure that if a user/curator is going off the rails of the platform’s values, there are ways to help that curator get back on the rails or to stop promoting them so often. If totally necessary, there should be upfront, pre-existing ways to ban a user from the platform; banning is, however, a highly social act that can have many repercussions, including the loss of more users, so it should be wielded very carefully. Have I mentioned how much respect I have for moderators yet?

In this aspect, the oversight of a platform with the support of users is similar to the normal running of a library. Librarians are caretakers of the space, often overseers of what books are put in the library, and initiators of activities in the library. If something is going awry in the library, the librarians handle it, whether directly or by appealing to other people (such as security guards in a physical altercation or the community in a legal/political situation). The support that people give the library is tacit: passive support through taxes and through agreeing to participate within the rules. The librarians have a lot more control than the users do in a library, but the library wouldn’t exist without the users; their continued use of the library is another form of tacit support. So while I’ve articulated a hybrid structure where the platform and users share more oversight of the content and form of the work on the platform, I’ve actually articulated an expansion of the oversight model libraries use. The library is a highly benevolent institution that often has little input from its users. Part of the reason it works is that the librarians are curating effectively–not just the books, but the activities, the technical setup of the spaces, and the values of the library. They fit with the community, and the community supports them. (Sometimes the community rejects the values of the library, and then it seems that there’s a disconnect between what is being curated and what people expect to see curated.)

So libraries actually have far fewer “loud voices” speaking, in the Twitter sense (librarians only have inside voices, obviously), than I have suggested for the platforms, and I don’t think that this totally top-down model would be ideal for a platform. It gets too close to Zuckerberg putting his values in place and then the “community” of two billion people ignoring them almost entirely–although that has more to do with scale than values. So I support some hybrid structure of governing, if at the very least so that we don’t end up with the YouTube problem, where the content creators who provide tons of highly desirable content for the platform feel under-appreciated, burned out, arbitrarily treated, and severely undervalued. You don’t want your loud voices to turn on you, even if you are a benevolent leader.

Now, the whole “let’s let the loud voices speak” thing has gone badly in lots of different ways throughout history, so I would like to point out that a raw populist approach, even with platform oversight of those populist choices about who can be a curator, may not be the healthiest in every community. It may be fine for some. I think in all cases it would have to be mixed with the ideas of those actually running the platform, to make sure we don’t end up in Twitter 2.0, where the biggest cannons get the most shots with no regard for who or what they are shooting at. Hopefully the values would at least curtail the “who or what they are shooting at” bit; but the point of curation is that someone gets the loudest voice. The goal of creating values-based platforms is to have clearer ideas of what those loud voices will be saying before they start saying things. Nothing is perfect, and everyone has bad days that go off script, but in general, I would argue that a platform with values, one that points out ideal upholders of those values, would go off-script less than a Twitter that doesn’t.

Caveats and Conclusions

For those who feel that all voices should be equal as a point of order, this whole post on curation isn’t going to work for you. In an information glut, someone who sorts the information effectively, in line with the values of the platform and supported socially by its users, should have a louder voice. If we don’t choose who has the loudest voice and why (with checks and balances included), then we will end up with Twitter 2.0. The problem of totally democratic social media is that loud voices have been amplified on platforms that feature few-to-no content values, technical aspects that point toward profit above all else, and social structures with no checks on hateful/violent/illegal speech. That looks nothing like a library at all.

To have different outcomes for social media platforms at all, we must envision what they would look like with different models in place that govern them. To that end, a library could be a good model, but not the only model–imagining what we would have to do to make social media more like a worker cooperative, friend group, church, or school would all be productive exercises. I encourage people to write these as well; I would be very excited to read them. In imagining what we would have to do to make social media more like a library, I think that Leetaru isn’t wrong that listening more and speaking less would help. But who is then speaking if most people are listening?

In one model, it could be curators (or people who do curation, depending on how you want to think about it) who are speaking. And in an attempt to build platforms that have the best outcomes for curation, there need to be values set out in advance for platforms and for creators, so that they can find each other. The platforms need to have those values built into the technical aspects of the platform, so that they have curated whatever values they want to instill through the platform at a backend and frontend level. The platforms must use oversight (composed of technical and human actions) based on those values as well as social input from the platform’s users to determine who is driving the conversation. Even the best instantiation of this would imperfectly point toward an ideal: a carefully curated information space with people who agree to temporarily be a part of the space on the space’s own terms and values due to wanting the content and activities included in the space. Much like a library.


Thanks to Chris Krycho for helpful comments on content and form at several points along the way. Thank you to Alan Jacobs for inspiring the original form of this post with his blogging.