Escalating political confrontations with the tech sector—and with Facebook in particular—have dominated public discussion over the last couple of years: allegations of foreign interference, fake news, privacy scandals, and Mark Zuckerberg’s testimony before the U.S. Congress. In the midst of such a fever pitch, it’s easy to forget that the social media network has a long history of backlash over the policies and strategies it has used to achieve global dominance.
The first of many controversies started when Facebook created the news feed in 2006. The sudden introduction of an entirely new way of seeing posts angered users: information that had technically been public but was hard to find could now be discovered effortlessly, raising privacy worries. Protest groups organized against the change.
The great irony is that the protests against the news feed were themselves organized using the news feed. Earlier in its life, Facebook was enamored with Marshall McLuhan’s “the medium is the message” tagline. Taken quite literally, the tagline means that the medium people use to communicate constitutes a stronger signal than the words they say. At that time, this principle gave a clear direction: Facebook ignored what users were saying and examined what they were doing. And in fact, they were using the new feature a lot. The news feed stayed.
The next big problem was social games. In 2010, the number of daily active users playing Zynga’s FarmVille peaked at 34.5 million. The game rewarded people for two things: inviting others to play and paying real-world cash for digital goods. Facebook still allowed third parties to create notifications during that period, so both the news feed and notifications were polluted by game invites.
Facebook made the hard but correct decision to disallow third-party app notifications and throttle the games to improve user experience. It did this despite losing an important profit stream: Facebook’s 30% cut of in-game revenue.
In 2012, Zynga and Facebook changed their “relationship status” to allow each other to work with other companies, and games and gaming revenue eventually faded from the platform. Facebook made it clear that user experience trumped a semi-lucrative partnership with a third party. Zynga’s stock plummeted from $15 to $3. It has stayed there since.
The next problem came with clickbait. In 2014, websites like Upworthy spent countless hours testing and perfecting content headlines for maximum shareability. Large portions of the news feed were taken over by such articles, since these publishers were playing the game of “engagement”—effectively racking up likes and gaining visibility, while simultaneously alienating a sizable portion of the user base.
You won’t believe what happened next!
Facebook acted to reduce clickbait through algorithmic and user interface changes that made it easier to see what posted articles were actually about. The platform started measuring the quality of articles not just by the number of clicks, but also by the time spent reading and by whether readers went on to comment and share.
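As a rough illustration of what blending these signals might look like, here is a minimal sketch of a composite quality score. The signal names and weights are hypothetical inventions for illustration; Facebook’s actual ranking formula has never been published in this form.

```python
# A minimal sketch of a composite article-quality score that blends clicks
# with post-click signals. All weights are invented for illustration.

def article_quality_score(clicks: int, avg_read_seconds: float,
                          comments: int, shares: int) -> float:
    """Score an article so that pieces people actually read and discuss
    outrank pieces that merely attract clicks."""
    click_signal = 0.01 * clicks                     # clicks alone are easy to game
    dwell_signal = 1.0 * avg_read_seconds            # time spent reading
    response_signal = 2.0 * comments + 3.0 * shares  # deliberate engagement
    return click_signal + dwell_signal + response_signal

# Clickbait: many clicks, but readers bounce immediately.
print(article_quality_score(clicks=10_000, avg_read_seconds=5, comments=3, shares=10))
# Substantive piece: fewer clicks, but read, discussed, and shared.
print(article_quality_score(clicks=2_000, avg_read_seconds=90, comments=120, shares=300))
```

Under any weighting along these lines, the substantive piece outscores the clickbait even with a fifth of the clicks, which is the whole point of measuring beyond the click.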
Some formats of clickbait died down, but the fight wasn’t over.
In 2016, Facebook noticed another problem on its site—people weren’t sharing as much as they used to. Brands and firms were better at playing the news feed game and so were again taking over people’s feeds.
This time, there were two changes. One was a phrasal and keyword system to detect clickbait articles from the title alone, like a spam filter. The second was simply surfacing more posts from friends and family and fewer from publishers.
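In the spirit of the spam-filter comparison, a sketch of what a phrasal detector might look like. The phrase list and threshold here are invented for illustration; the real feature set was never published.

```python
# A spam-filter-style clickbait detector working from the headline alone.
# The phrase list and threshold are invented; a real system would use a
# far larger, learned vocabulary.

CLICKBAIT_PHRASES = [
    "you won't believe",
    "what happened next",
    "this one trick",
    "will blow your mind",
    "doctors hate",
]

def clickbait_score(headline: str) -> float:
    """Return the fraction of known clickbait phrases found in the headline."""
    text = headline.lower()
    hits = sum(phrase in text for phrase in CLICKBAIT_PHRASES)
    return hits / len(CLICKBAIT_PHRASES)

def is_clickbait(headline: str, threshold: float = 0.2) -> bool:
    return clickbait_score(headline) >= threshold

print(is_clickbait("You Won't Believe What Happened Next!"))  # True
print(is_clickbait("Senate Passes Budget Resolution"))        # False
```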
There have been other controversies. First, in 2014, Facebook ran A/B tests on the emotional effects of exposure to positive versus negative news. A/B tests are a routine part of operating a large, modern website, but this one sparked alarm, ironically intensified by media headlines about emotional manipulation. Second, in 2015 and 2016, Facebook decided it needed to connect people who were not yet on the Internet, so it offered free but partial internet access in India, enabling connection to Facebook and a handful of other sites. Following widespread debate, India rejected the offer on net neutrality grounds. And third, the 2016 election intensified discussions about the infamous algorithmic filter bubbles and fake news, hypothesized as factors in increasing polarization in America.
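On the A/B testing point in particular, the routine mechanics are worth seeing, because they really are mundane. Below is a generic sketch of deterministic variant assignment, the industry-standard pattern rather than Facebook’s internal tooling; the experiment name is made up.

```python
# Generic A/B bucketing: hash (experiment, user) to a stable variant.
# The experiment name is hypothetical.

import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "treatment")) -> str:
    """Deterministically assign a user to a variant. The same user always
    gets the same variant for a given experiment, and different
    experiments bucket users independently."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return variants[int.from_bytes(digest[:8], "big") % len(variants)]

print(assign_variant("user-42", "feed_sentiment_study"))
```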
In this history, there are recurring patterns. Facebook is no stranger to outrage and criticism, and it is adept at ignoring both. Further, Facebook’s news feed algorithm must be constantly changed to protect users from third parties gaming “engagement” metrics and producing less-than-desirable content.
This brings us to the present.
Since the 2016 election, there have been some distinct political Schelling points in the rhetoric against Facebook’s dominance of the social media landscape. Privacy has played a significant role, starting with developers’ use of the API to harvest as much data as possible—better known as the Cambridge Analytica scandal. The problem of “fake news” and “hate speech” has also sparked controversy, which Facebook is addressing with a mix of highly publicized AI tools and an army of old-fashioned human censors. Finally, there is the question of social media’s impact on users’ mental health.
It’s worth pondering this history, since it shows that Facebook possesses a consistent mindset: the implied corollary to “move fast and break things” is “we’ll just fix it later.”
Facebook has addressed many criticisms with marginal tweaks to the algorithm, and a lot of other criticism has simply been ignored, since it never amounted to anything. On the other hand, Facebook has taken a stand on a number of occasions against third parties (game developers, clickbait publishers, and brands) in an attempt to protect the interests of users.
However, even these bigger stands could not address the systemic problem underlying all the others: the news feed optimization function itself. Moreover, the debate did not remain the sole purview of policy critics and tech media. Facebook’s increasingly prominent role in shaping public discourse, combined with pointed criticism of how it managed that role, was highlighted during Mark Zuckerberg’s congressional testimony in early 2018.
The event was reminiscent of Bill Gates’ testimony in the 1990s. Despite the negative optics of a booster chair and YouTube livestream comments (“throw water on him, see if he rusts”), Zuckerberg came away looking better than when he started, at least in technical circles. He made his questioners look foolish and uneducated about basic tech industry strategy. This was summed up in his response to Senator Orrin Hatch’s inquiry into how Facebook could do business without a user fee: “Senator, we run ads.”
Despite the large number of technologically illiterate questions, there were several extremely good ones that deserved follow-up. In particular, Senators Ted Cruz, Ben Sasse, and Maggie Hassan probed Zuckerberg on Facebook’s impact on mental health and the extent to which the company sought to maximize user participation. Zuckerberg’s answers focused on a single claim: those who used the site actively, for connecting with friends and the like, experienced the benefits of social connection, while those using it passively to consume content did not.
Unfortunately, neither Hassan nor Sasse followed up to ask whether Facebook’s site and UI changes optimize for consuming content passively or actively.
The hearing revealed an interesting foundation for Facebook’s philosophy: hiding behind consumer choice and “giving people control” as a defense against charges of causing mental health problems and other issues. If you parse Zuckerberg’s message, he is effectively admitting that Facebook does cause mental health problems, but only if users passively consume content. This shifts blame onto the users, many of whom probably have no idea of the negative mental health effects of such usage, and probably lack good information about whether they are, in fact, using the site this way.
Tool Analogies Are Inadequate For Monopoly Social Media
Philosophically, the company’s position in response to criticism moved from “let’s build tools” to “let’s make sure those tools are used for good.” But a key assumption remained: the model of a “static user,” unchanged by the website itself. A user who “chooses” whether to scroll through the website for a while, a user who “chooses” not to learn the data controls, a user who “chooses” to passively consume content, thus bringing mental health problems on themselves.
This model is simply inadequate when dealing with advanced technology that can be highly optimized to influence the user’s behavior.
Tools are a good thing for the user—even if designed by powerful companies that don’t share the user’s interests—when the user rationally and freely chooses to use the tool for their own ends, and the tool does not itself encode harmful intent. This is the basis of gains from trade.
But this assumption is weakened or broken in the case of monopoly social media platforms that deal in the fine details of the psychological effects of user interface design. Users aren’t necessarily rational, especially in the micro-decisions they make on social media in the face of the company’s user interface optimizations. This is an immensely asymmetric power relationship. Moreover, social media is increasingly not a choice, especially for young people who are still growing into the fullness of rational adulthood.
With the optimized user interface in particular, we can no longer operate on the assumption that a user will remain unchanged after coming into contact with a particular technology. Nor can we take for granted that the flow of control is always from the user, rather than from the tool. The tool itself encodes the willful intention of the user interface team.
The fundamental contradiction that perhaps underlies much of the hatred for Facebook is that on the one hand it claims merely to be a tool, and on the other, it aggressively attempts to influence people to use it as much as possible. The familiar analogy, “it’s like a hammer—it all depends on how you use it,” only works if you assume the hammer is whispering in your ear that you should refinance your house for the third time to build yet another lawn gazebo.
The combination of monopolistic social influence, a highly optimized user interface, and the user’s exploitable micro-decisions puts Facebook in a highly asymmetric power relationship with the user—but without matching responsibility for the user’s well-being. Add the misalignment of incentives, and it would be no surprise to see social media users exploited at the cost of their mental health and general well-being.
Moreover, there have been a number of studies about Facebook causing mental health problems. Several former executives have also come out publicly with some fairly harsh critiques.
While there have been some suggestions that those studies are not methodologically sound, the weight of evidence still seems to be on the side of Facebook causing mental health problems. Facebook admitted that “passive consumption” causes problems, but did not say what fraction of people are passive consumers (if it were small, they presumably would have mentioned it).
The connection to mental health in particular is more intuitive if you consider what it takes to entice the user to spend more time scrolling through the feed looking at sponsored content, and to put in more personal information for the ad-targeting algorithms. This inducement may come at the cost—and by the mechanism—of addiction, social anxiety, Pavlovian training to compulsively check notifications, positive feedback for engagement, and the general installation of not-quite-true beliefs in the user. All of these things could be disruptive to users’ mental health.
A little bit of creativity can invent a whole raft of these “dark UI patterns,” and it would take serious discipline on the part of the company to optimize “engagement” without using them. This process doesn’t even have to be deliberate on the part of UI designers; it could come from just blindly following the metrics.
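To see how dark patterns can emerge from metric-following alone, consider a toy experiment loop. Everything here is invented: the UI variants, the payoff numbers, the metric. The point is that a standard epsilon-greedy optimizer converges on whichever variant maximizes “engagement,” dark pattern or not, without anyone deliberately choosing it.

```python
# A toy epsilon-greedy loop "blindly following the metrics." The variants
# and payoffs are invented; suppose the guilt-trip badge happens to
# maximize session time.

import random

VARIANTS = ["calm_daily_digest", "hourly_notifications", "red_badge_guilt_trip"]
BASE_MINUTES = {"calm_daily_digest": 12, "hourly_notifications": 18,
                "red_badge_guilt_trip": 25}

def observed_session_minutes(variant: str) -> float:
    """Stand-in for measuring one user's session length under a variant."""
    return random.gauss(BASE_MINUTES[variant], 4)

totals = {v: 0.0 for v in VARIANTS}
counts = {v: 1 for v in VARIANTS}  # start at 1 to avoid division by zero

for _ in range(10_000):
    if random.random() < 0.1:
        variant = random.choice(VARIANTS)  # explore occasionally
    else:
        variant = max(VARIANTS, key=lambda v: totals[v] / counts[v])  # exploit
    totals[variant] += observed_session_minutes(variant)
    counts[variant] += 1

# The loop converges on the dark pattern; no designer ever picked it.
print({v: counts[v] for v in VARIANTS})
```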
In response to pressure, Facebook also began in 2018 to make changes aimed at somewhat reducing the amount of time people spend on the site. “Time well spent” is the new tagline. Socially and politically, this may be a good thing for Facebook in the long run, and therefore a sound decision.
But over-optimization for time spent on social media, or even “engagement,” which leads to many of the same problems, is itself a symptom of broader structural issues in the nature of currently centralized social media.
Social Media Monopolies Have Immense Power
The fundamental nature of social media as such, abstracted away from any particular product, company, or business model, is the provision of tools for social interaction. Humans are a social and political species, and we naturally engage in all manner of social behavior. We are also a tool-using species, accelerating our natural capabilities with technological means. Social media is just the latest step in our long history of combining these two very natural, very human tendencies.
Social media, like previous social tools such as language, writing, mail, books, printing, radio, and so on, will lead to a reorganization of society as it changes the landscape of economic and political feasibility. This is not fundamentally problematic, but it does require careful attention.
What is more fundamentally problematic is the current structure of social media companies as monopoly platforms.
It’s easy to see how we got monopolies like Facebook: internet-based social media technologies can or even must be built as network effect platforms, which are in turn much easier to monetize, especially in the current ad-funded VC-startup development paradigm. The ability to both monetize and iterate on design means that platforms with a network effect monopoly business model receive the most sustainable and high quality development, and beat out amateur, decentralized efforts.
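The usual back-of-the-envelope for this dynamic is Metcalfe’s law, a rough and much-debated heuristic (not something the argument above depends on) that values a network by its possible pairwise connections:

```latex
% Metcalfe's law: network value scales with possible pairwise connections.
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \sim n^2
```

Under any such superlinear scaling, an incumbent with ten times the users offers on the order of a hundred times the connection value, which is why challengers cannot win on features alone.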
Interestingly, the legacy decentralized technologies that proliferated before the commercialization of the internet, like email and HTTP, seem much stronger and more entrenched in their positions than the centralized monopoly platforms. Network effects explain part of this, but it also speaks to the staying power of decentralized technologies, especially as fundamental infrastructure, despite their disadvantage in the commercialization gold rush.
Monopoly social media companies, unlike these decentralized protocols, end up with very fine-grained and active control over nearly every aspect of their platforms and the behavior of their users. They can decide:
- Who is in and who is out of the network effect. This can be significant, as many people rely on these platforms for their livelihood or social life.
- Who can say what, and to whom, with public posts being subject to algorithmic throttling and censorship.
- The inherent structure of communication, with changes like threading, 280-character limits, news feed timelines, emoji redesigns, and so on.
- What information users are able or likely to see.
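What does holding all of these knobs at once look like? A toy sketch, with every knob invented for illustration:

```python
# Toy model of centralized feed control. Every knob below is invented;
# the point is that one actor holds all of them simultaneously.

BANNED_AUTHORS = {"dissident_42"}       # who is in and who is out
TOPIC_THROTTLES = {"protest": 0.1}      # who can say what, and to whom
MAX_POST_LENGTH = 280                   # the structure of communication

def render_feed(posts: list) -> list:
    scored = []
    for post in posts:
        if post["author"] in BANNED_AUTHORS:
            continue                                   # exclusion
        text = post["text"][:MAX_POST_LENGTH]          # structural constraint
        weight = post["engagement"]
        for topic, factor in TOPIC_THROTTLES.items():
            if topic in text.lower():
                weight *= factor                       # algorithmic throttling
        scored.append((weight, text))
    scored.sort(key=lambda pair: pair[0], reverse=True)  # what users see
    return [text for _, text in scored]

feed = render_feed([
    {"author": "alice", "text": "Lunch photos", "engagement": 3.0},
    {"author": "bob", "text": "Join the protest downtown", "engagement": 9.0},
    {"author": "dissident_42", "text": "Banned speech", "engagement": 50.0},
])
print(feed)  # ['Lunch photos', 'Join the protest downtown']
```

Each line of this toy corresponds to one bullet above; the significance is that, on a monopoly platform, a single organization controls them all at once.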
At the strongest extent of network effect monopoly, which is surely the long-term investor hope for companies like Facebook, this is a level of social power otherwise reserved for governments and religions. This isn’t necessarily problematic if the use of this power is well-regulated for predictability and socially beneficial ends, but it is still an awful lot of power. This power is an important part of why network effect monopoly platforms have been winning the internet commercialization gold rush: power is useful for winning in the market, among other things—but it has much more wide-reaching implications.
In particular, besides the asymmetric power relationship with the user that may lead to exploitation and mental health problems, the position of these companies is not just market power; it is a tool of immense political power.
Social Media Companies Are Political Organizations
This brings us to another hot-button issue mentioned in the Facebook hearing: the question of censorship and the political orientation of Silicon Valley in general.
An obvious and major conflict here is over defining the grounds for censorship, such as hate speech. The unclear and fluctuating boundaries of hate speech, especially where it shades into bullying or threats of violence, leave a lot of leeway for enforcers to interpret such codes however they wish. What hate speech even means, or whether it ought to be censored at all, is by no means an apolitical question.
Facebook’s problem here is at least partially political, so we need to examine the political dimensions of this situation.
This political dimension is not merely a question of elections. The political dimension is the socially contentious shaping of society, “us” and “them,” conflicting views of morality, the interests of groups of people, the culture war, and the domain of power. It’s often claimed that censorship or social change is not political because it is about morality, or because it is not directly about the state, but this just serves to obfuscate the point. Clear analysis is impossible until we recognize the expansive domain of the political, and examine the role that political power plays in these phenomena.
In much of the predominant establishment discourse, the corporation has been regarded as a fundamentally economic organization, providing apolitical goods to apolitical consumers according to an apolitical profit motive.
But social media companies challenge this assumption. The goods they provide carry a strong political and social payload; all the little details—emojis, fake news, hate speech filters, privacy choices, and the like—come with political and cultural values baked in. Further, the user is political as well; one of people’s favorite things to do with social media is organize political action, spread political propaganda, have political arguments, and mob political enemies. So, the political orientation of the corporation becomes highly relevant.
Though a corporation, considered politically, is usually not constructed or operated for an explicitly political purpose, it is still a hierarchical organization of decision-makers with some political orientation. Such organizations are well-funded, competent, and potentially powerful. In the case of big social media companies, a significant fraction of the workforce is hired essentially to make political decisions full-time. How else can we interpret the undeniably political dimension of things like the “trust and safety” and hate speech review processes?
Social media companies thus become powerful political organizations not just in potentiality, but in actuality. This has been explicit since the outsized role Facebook and Twitter played in the Arab Spring uprisings. Their decisions to ban or not ban various politically charged movements, accounts, and topics around the world have been a constant source of low-level controversy. For Americans, this all became much more real with the 2016 election, the Cambridge Analytica scandal, the fake news and hate speech scares, and the now-familiar pattern of some kinds of political content, some extreme and some relatively innocuous, being throttled or leading to account suspensions.
Since social media companies have more direct political power than other industries, the politics of their staff become more important. Politically motivated actors inside and outside these companies then turn more concerted attention towards those political tendencies. This leads to a politicization of corporate culture.
We see this playing out with high-profile incidents and politically-tinged cultural changes in nearly every major Silicon Valley social media and social infrastructure company. Besides the internal struggles, the press and public are also paying special attention, scrutinizing the corporate culture of these companies for anything politically problematic.
So, the tech industry has become much more politicized as factions have begun to notice and act on the fact that control of these platforms is a tool of immense power.
The fundamental structure of a social media monopoly inherently puts it in the political hot seat. There is no easy culture change or regulation that can patch Facebook’s political problem, because Facebook has built a new power structure, which can and will be used for political ends by whoever controls it. Neither can these companies punt the question by refusing to act. Lines between acceptable and unacceptable will be drawn. Cultural assumptions will be encoded—for example, in the “gun” emoji or in emojis that depict families. These decisions will be contentious and political in effect, even if not made with political intent.
Future Social Media Should Be Decentralized
The fate and implications of this new structure of power are still unclear, but we can make some predictions.
The obvious possibility, the bull case for Facebook and others, is that social media continues on approximately the current centralized and politically-managed path. We could imagine that centralized social media in the 21st century becomes similar to centralized broadcast media in the 20th: a relatively controlled ecosystem, closely integrated with dominant power structures, where the narratives that make it through the various filters to broadcast “virality” are those either friendly or harmless to the prevailing politics of the respectable classes. Filter bubbles would be popped, fake news defeated, social norms progressed, masses surveilled, democracy saved, and the elite would adapt and learn to use this new power.
This is the usual path for new dimensions of political power. They spring into being through the entrepreneurial action of great pioneers and begin exerting partial control over society. This leads existing elites and challenging factions to struggle for control and for the integration of these new dimensions of power into the political order. This usually entails reshaping the new power structure to be more compatible with the established order. In the past in America, this has sometimes taken the form of trust busting, as with the “robber baron” industrial infrastructure monopolies of the 19th century. Eventually, the dust settles and the new powers are comfortably integrated into the establishment order.
But with centralized social media power, this process faces unique hurdles. First of all, the immense political power of Facebook and others is out of step with the structures and narratives of the rest of the American republic. America may have the world’s best salesmen and propagandists, but legitimizing and integrating the centralized power of social media mass surveillance and social engineering would be a tough sell. Narratives aside, even structurally, it’s hard to imagine nimble centralized social media companies integrating into a careful and lawful elite coalition which governs responsibly. China is trying, but their model is very different, and its sustainability is questionable.
This is all predictable, as rising powers always need structural adjustment to integrate into the established system. The obvious way this structural adjustment could occur is for social media companies to be regulated with “common carrier”-like legislation that attempts to prohibit certain types of discrimination, guarantee some notion of neutrality in the algorithmic filtering, and otherwise de-escalate most forms of politicization.
Common carrier type regulation may work, but there are challenges that make the political power of social media companies distinct from, for example, airlines not being allowed to discriminate. The current conflicts around these platforms already take place over the ambiguity of what counts as “neutral,” what counts as enabling foreign interference, what is polite decorum, and what is and isn’t unacceptable hate speech. Regulating how these things are handled simply transforms a difficult problem for Facebook into a difficult problem for the American political and legal establishment. Power is not neutral, and it is very difficult to make it so by fiat.
Further, Facebook depends on being able to constantly adjust the algorithms to control these problems. Its political power cannot easily be decoupled from its market position and business model. The regulatory transformation needed to integrate centralized social media empires into the governing elite’s toolbox may just hobble the company.
So Facebook is in a double bind: due to the power it has created, it now needs to integrate with the prevailing power structure and political elite—but the regulatory cost would potentially kill the advantage of the company.
Even technically and economically, there are bumps on the horizon. It’s one thing for an institution to provide stable, long-term supplies of steel, rail or air service, or even phone or internet service. It’s quite another to imagine that these complex social media machines could retain their effectiveness over the long term as they become institutionalized. Competitors will emerge, and more sustainable infrastructure projects with different structural characteristics will move in to settle the space mapped out by this generation’s social media pioneers.
There are some forms of power that just can’t be integrated into the lawful power structure of a responsibly governing elite coalition. Centralized social media may be one of these. As such, we might expect even more aggressive approaches from the establishment that don’t just incidentally harm centralized social media companies, but deliberately break them up.
We can also imagine a future resurgence of decentralized social media infrastructure that out-competes or outlives the centralized monopolies without directly creating new structures of political power.
The aim of decentralization is to have minimal or even zero trusted third parties. Trusted third parties, especially one big one with a lot of complexity, are vulnerabilities to outside meddling. One of Facebook’s major weaknesses, from both a user and a social point of view, is that it can at any time decide, or be compelled, to kick you off its private social network, change the algorithms, show you more ads, leak your data, throttle or boost someone’s virality, change the emojis, or die off as a result of becoming incompetent or uncool.
Decentralized social media infrastructure would work by means of user-controlled software communicating over shared protocols. These would be supported by less complex—and thus more sustainable—utility service providers. We don’t have to imagine too hard what this might look like: the folks over at Urbit have already done most of the hard work of building a decentralized social platform. Key characteristics include users owning their own data and identities, running user-controlled software, and autonomously building all manner of private and public social infrastructure.
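To make “users owning their own data and identities” concrete, here is a minimal sketch of platform-independent authorship using a user-held signing key. This is a generic illustration assuming the Python `cryptography` package, not a description of Urbit’s actual identity system.

```python
# Platform-independent identity: a user-held key signs posts, and any
# host, client, or peer can verify authorship with no central authority.
# Generic illustration only; Urbit's actual identity mechanism differs.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

identity_key = Ed25519PrivateKey.generate()  # the user keeps this key; it IS the identity
public_identity = identity_key.public_key()  # shareable with anyone

def sign_post(text: str) -> dict:
    payload = json.dumps({"text": text}).encode()
    return {"payload": payload, "signature": identity_key.sign(payload)}

def verify_post(post: dict, author_public_key) -> bool:
    """Anyone can run this check; no platform's permission is required."""
    try:
        author_public_key.verify(post["signature"], post["payload"])
        return True
    except InvalidSignature:
        return False

post = sign_post("Hello from infrastructure I control.")
print(verify_post(post, public_identity))  # True: authorship is portable
```

Because the key never belongs to a platform, a user can move hosts without losing their identity or their audience’s ability to verify them.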
Whether decentralized options will actually win remains to be seen, but the implications of success are interesting: we would have social media, in the sense of advanced internet-based tools for social interaction, without either the asymmetric market power or the political power of the current companies:
Without a centralized organization that owns the users and the entire ecosystem of tools, there’s no actor with either the capability or the incentive to be as controlling as current centralized social media companies. Programs, services, networks, and sites would or could be built by smaller actors with less power, and less ability to exclude and control users.
In the case of Urbit in particular, users own their own data and identities, and can migrate if their service providers become abusive. All the basics of identity, authentication, and communication are built in at the system level, which is not controlled by a central organization. There will be social networks with power over their members, but there will be many of them, and communities will easily be able to build their own networks that they trust. There are simply far fewer ways for a massive central organization to hold power over users.
Urbit is just an illustrative possibility, but other decentralized social infrastructures will have similar properties.
It’s hard to know the effects of something that has not yet been built, but decentralization could produce a much stronger and healthier society overall, with our social infrastructure built and maintained in an organic decentralized way, instead of being controlled by a few central organizations. Such an ecosystem would also be much more resistant to institutional entropy.
Which way we go is itself going to become a politicized question, assuming decentralized social infrastructure can muster enough vitality to pose a threat to the position of the monopolies. Some powerful political coalition will have its power base in the control of Silicon Valley social media companies, and some other coalition will presumably be inconvenienced by that.
Facebook seems willing to sincerely address the issues plaguing current social media—both the well-being of users and the political issues—going so far as to propose regulation and to preemptively self-regulate. But no such measure can make the enormous political, market, and psychological power of a centralized social media monopoly go away. It’s not just a loophole in the law or a cultural problem at any particular company; it’s the fundamental nature of centralized social media as such. We can either bite the bullet on reshaping our social order and expectations to integrate the power of centralized social media, or we can build decentralized social infrastructure that won’t have these problems. Either way, the choice of which social media we design and use is a fundamentally political act.