A Proposal for Facebook to Be a Better Platform for Society

Technology is neither purely good nor evil. At Loup, we remind ourselves of that often, especially when discussing social media, which seems to be drifting along the spectrum from good toward evil.

If Facebook is truly committed to being a platform for social good, it has to give users more power over the content they’re exposed to in the first place, particularly over political content. This piece is an exploration of why and how Facebook and the rest of social media can do just that.

It’s easy to lambaste Facebook for bad data privacy policies and for enabling fake news, but I find the political discord and incivility it creates far more concerning for the health of humanity. Political polarization is at a peak, due in large part to social media. Fake news is one component of the broader issue, but spend five minutes on Facebook or Twitter and you’ll see that we don’t need fake news to be uncivil. Discord arises just as easily from arguments over the interpretation of real news, or over candidates and viewpoints in general.

When it comes to politics, it’s difficult for us to avoid succumbing to tribal instinct and devolving into emotional attacks on social media. Unfortunately, that flaw in human nature is good for Facebook.

Truths About Social Media

Before we explore how Facebook could give users more control over content, there are a few truths about social media worth establishing. Really, these are truths about human behavior, which social media merely amplifies.

People are most likely to react to online content that provokes emotion, particularly anger

Reaction is engagement on social media, so social media benefits meaningfully from content that generates negative emotion. When someone is angry, they’re more likely to act. That is the physiological purpose of anger: to prepare us to fight. Thus, it shouldn’t be surprising that we fight when we disagree with something online. The structure of the internet probably even amplifies our instinct to fight: because we engage via screens rather than in person, there are no immediate physical consequences.

History tells us, and the present frequently reminds us, that crowds lend themselves to madness. Facebook consists of an endless number of crowds on the verge of madness, 24/7. Even worse, Facebook eggs the crowds on to drive engagement, which leads to revenue. They’ve developed hyper-efficient systems that recognize what content people are engaging with and are likely to engage with. These systems, whether intentionally or not, are not well tuned to assess the social quality of content, nor do they offer us control over what we want to see. The latter part is the easiest to change.

Social media represents an extension of who we are — it is our identity projected online

These online identities are woefully incomplete representations of who any of us really are, yet they’re treated as manuals to our lives. You can’t tell much about the true character of a person from a bunch of tweets and Facebook posts. Machines might be able to tell a lot from them, but humans can’t, even though we all think we can. The same person who likes the candidate you don’t like may volunteer time at an animal shelter, coach their kid’s sports teams, and help their elderly neighbor.

As we devolve into emotional argument online (very little of it can be called debate), anger drives us to attack one another personally based on these narrow views of our character. This draws tribal lines of warfare between groups with similar identities and encourages a vicious cycle: we reinforce the narrow view of who we are online by clinging to it more strongly as we defend it.

In reality, human beings have a lot in common even across party lines. We care for our family and friends, we want to see our community and country thrive, and we want the freedom to pursue what makes us happy.

Facebook’s stated mission to “bring the world closer together” suggests the company believes we have a lot in common too, but its platform is not enabling that. The world would be a better place if Facebook acted on its mission rather than just talking about it.

Any attempt to fix political toxicity must respect the paramount importance of free speech

It would be a failure if Facebook exerted more control over speech than it already does. Facebook’s algorithms decide what speech we see. Since Facebook benefits from more engagement, we tend to see speech that has engaged other users and is likely to engage us. As established above, anger engages most strongly, so that’s what we see.

Giving users control over content maximizes individual freedom and maintains the sanctity of free speech. Everyone should have a voice to opine, but no one should be forced to listen to it.

If social media companies cede some control to users over the content they see, it will likely reduce engagement and thus revenue

Social media companies have responsibilities to their shareholders and employees to generate profit, but it’s becoming more apparent that they also have a responsibility to society given the impact these platforms have had on it. The balance between these constituents is not something to be taken lightly. 

We generally subscribe to Friedman’s contention that the social responsibility of business is to increase its profits, but as with many extreme positions, the better answer leans toward the middle. Most rational observers would agree that dumping pollution into the water supply to avoid disposal costs is an unacceptable externality of the pursuit of profit. Facebook’s platform pollutes our political environment in the same way, passing the cost to civilization while reaping the benefit.

Proposal

Facebook, Twitter, and Instagram should give users the ability to filter political content from their news feeds. This could be implemented with two core product additions: the ability to turn off comments on political content, and the ability to turn off political content entirely.

Turn Off Comments for Political Content

Comments seem to be the largest source of toxicity on social media. No matter your political leanings, have you ever read a comment thread on a piece of political content and not walked away angry?

Comments are the lowest-hanging fruit in the quest to solve the political problem. Facebook should give users the ability to turn off comments for all politically flagged content. Currently, only the person who creates a post on Facebook can turn comments off, so the function already exists; it just needs to be extended to the consumer.

Anger is an addictive emotion that feeds on itself, so simply offering the option to turn off comments may not break its spell. If Facebook truly wants to bring the world closer together, it should set comments on political content to off by default and give users the option to turn them on if they really want them.
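To make the idea concrete, here is a minimal sketch in Python of how a per-user, default-off setting could gate comment visibility on politically flagged posts. All of the names here are hypothetical; this illustrates the product logic, not Facebook’s actual systems.

```python
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    # Hypothetical setting: comments on political posts are hidden
    # by default, and the user must opt in to see them.
    show_political_comments: bool = False

@dataclass
class Post:
    text: str
    comments: list[str] = field(default_factory=list)
    is_political: bool = False  # set by a content-flagging system

def visible_comments(post: Post, prefs: UserPreferences) -> list[str]:
    """Return the comments this user should see on this post."""
    if post.is_political and not prefs.show_political_comments:
        return []  # comments stay hidden unless the user opted in
    return post.comments

# Example: under the default setting, a political post shows no comments.
post = Post("Candidate X is ruining the country!", ["angry reply"], is_political=True)
assert visible_comments(post, UserPreferences()) == []
assert visible_comments(post, UserPreferences(show_political_comments=True)) == ["angry reply"]
```

The key design point is that nothing is removed from the platform; the speech still exists, but each consumer decides whether to see it.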

Turn Off Political Content Entirely

With the prior option, users would still see political content in their feed, but with no comments. The second option is to allow users to say: Don’t show me political content, period.

For both solutions, Facebook would need a system to broadly flag political content, which, if it doesn’t exist already, should be relatively easy for Facebook to build. Given our imperatives about free speech, the challenging part will be assuring the public that the software is non-partisan. To solve this, there should be external oversight of the software to verify its neutrality: a third-party board with partisan representation from both the left and the right could audit the software regularly.
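Continuing the sketch above (again with hypothetical names, and a deliberately simplified notion of “neutrality”), the feed filter itself is trivial once posts carry a political flag, and one audit the board could run is checking that the flagger fires at similar rates on left- and right-leaning test content:

```python
from dataclasses import dataclass

@dataclass
class FlaggedPost:
    text: str
    is_political: bool  # output of the political-content classifier
    leaning: str = ""   # "left" or "right", labeled on audit test posts only

def filter_feed(posts: list[FlaggedPost], hide_political: bool) -> list[FlaggedPost]:
    """Drop politically flagged posts entirely for users who opt out."""
    if hide_political:
        return [p for p in posts if not p.is_political]
    return posts

def flags_look_neutral(test_posts: list[FlaggedPost], tolerance: float = 0.05) -> bool:
    """One simple neutrality check an oversight board might run: the
    classifier should flag left- and right-leaning test posts at similar
    rates. Assumes the audit set includes posts labeled for both sides."""
    def flag_rate(side: str) -> float:
        sample = [p for p in test_posts if p.leaning == side]
        return sum(p.is_political for p in sample) / len(sample)
    return abs(flag_rate("left") - flag_rate("right")) <= tolerance
```

A rate-parity check like this is obviously not a complete definition of neutrality, but it illustrates that the audit can be mechanical and repeatable rather than a matter of trust.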

Conclusion

Offering users more control over the content they see could not only make Facebook better for society but also solve some of the company’s current problems. For example, Facebook could target fake news by giving users the option to only see news content from verified sources. The same third-party board that audits flagging software could audit a living list of verified news sources.

Facebook has said they don’t want to be arbiters of truth. We shouldn’t want them to be either, and they don’t have to be. Giving more control to users allows us to be arbiters of what we want to see in the first place, which is the way it should have always been.

Disclaimer: We actively write about the themes in which we invest or may invest: virtual reality, augmented reality, artificial intelligence, and robotics. From time to time, we may write about companies that are in our portfolio. As managers of the portfolio, we may earn carried interest, management fees or other compensation from such portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making any investment decisions and provided solely for informational purposes. We hold no obligation to update any of our projections and the content on this site should not be relied upon. We express no warranties about any estimates or opinions we make.
