Posted March 4, 2022

The Facebook whistleblower’s testimony, explained

Associate Professor of Media and Communication Jan Fernback discusses the Facebook whistleblower’s testimony and its implications for the company.

Photography By: 
Betsy Manning

Since Facebook exploded onto the social media scene in the early aughts, it has seen exponential growth. The company has gone from a small startup run out of a college dorm room by Mark Zuckerberg, Facebook’s chief executive, to a major conglomerate that is often criticized for privacy violations, political manipulation and mass surveillance. 

Recently, former Facebook product manager turned whistleblower Frances Haugen testified before Congress about the company’s harms, alleging that it places profits above its users’ well-being.

We spoke with Jan Fernback, an associate professor in the Klein College of Media and Communication, about the Facebook whistleblower’s testimony and its implications for the company. 

Temple Now: What are the most serious allegations against Facebook made by the whistleblower, in your opinion?
Jan Fernback: The allegations against Facebook are all serious, because collectively they represent a model of poor citizenship. Facebook has been exemplifying that model of poor citizenship for years as evidenced by its continued prioritization of profits over public safety, its record of numerous privacy violations, its attempt to monetize children’s attention without concern for their mental health or safety, its refusal to punish bad actors publishing false or dangerous content, and its pitiful gestures toward self-regulation. Some of the allegations made by the whistleblower, Frances Haugen, are serious in legal terms because they involve false statements made to shareholders—and Facebook is a publicly traded company. 

The Securities and Exchange Commission (SEC) has the power to impose fines on Facebook for misleading investors, and it may open an investigation leading to a lawsuit in which Facebook executives would be accused of intent to mislead or falsify information to shareholders. As of March 1, 2022, the SEC has not opened an investigation. In the end, the whistleblower’s allegations center on Facebook’s inadequate management of its artificial intelligence (AI) systems, which prioritize and amplify polarizing or damaging content. And Facebook has consistently downplayed harms that its own internal documents show it knows about; that is a violation of public trust.

TN: All companies in capitalist systems prioritize profits. What makes these allegations so concerning? 
JF: The charges, if true, mostly serve to paint a picture of Facebook as a company so focused on greed that it will run roughshod over any civic or social impediments to its profitability. But that doesn’t lessen the power of the allegations to inform the public about the reach of social media companies in our society. We already know about the role of social media in spreading fake news, in spewing hate and vitriol, and in revealing the fragility of some of our most basic assumptions about democracy. 

Other social media companies such as Twitter have worked to stem fake news and to suspend accounts of bad actors. The leaked documents show that Facebook has taken only token steps to regulate itself and that those attempts at self-regulation have had negligible results. The leaked documents make allegations that should concern any citizen of a capitalistic democracy. When powerful media companies like Facebook hold such a tight grip on the public imagination, the result is a threat to some of a society’s most cherished institutions. When Facebook algorithms prioritize toxic content—whether it harms the self-image of young girls or foments conspiracy theories—they reveal the power of the algorithm to shape public opinion and to undermine the truth. The harms extend to individuals, other businesses, society in general, and democracy.

From an antitrust perspective, these allegations are concerning because they demonstrate the stranglehold a few large companies have on almost all aspects of life in the United States. Mark Zuckerberg’s exhortation to Congress in 2018 to “please regulate us” rings hollow because he knows that, barring a catastrophic incident that can be tied directly to Facebook, the company is unlikely to face any meaningful federal regulation. As we live more and more of our lives through social media, we must reflect on how these companies became the arbiters of truth in our culture.  

TN: Why can’t social media be regulated the way other media is? 
JF: The First Amendment to the Constitution recognized the important role the media play in the functioning of a democracy, and so they were given protection from censorship so that American democracy could flourish. But those protections have limits, including the prohibition against printing or broadcasting libelous or threatening content, or content that incites violence. For traditional media (print, broadcast, cable, or any internet site with moderated content, e.g., Slate), this means that the media outlet itself can be sued for printing libelous content or sending out threats because the outlet has vetted the information and is publishing it as factual to the best of its ability. These media are protected by the First Amendment because they serve as trustees of the public good in a democracy. 

There is, of course, some regulation of social media companies, just as there is of other businesses in the U.S., such as fair business practices or SEC requirements. But social media companies are private companies that have been classified as “information services” and are thus not subjected to the same regulations imposed on broadcast or print. The Federal Communications Commission, which regulates broadcast media, is not permitted to regulate information services like Facebook. Social media are subject to some regulation by the Federal Trade Commission (FTC), which can impose fines on social media companies for violations of privacy, for example. But social media companies are not just platforms for the exchange of unvetted information—they are shielded by Section 230 of the Communications Decency Act, enacted as part of the Telecommunications Act of 1996, which protects media companies from liability when their users post unprotected speech such as threats, incitement to violence or libel. This has opened a wave of lawsuits against individuals rather than media companies, which, in years past, were the only parties that had to worry about libel suits.  
 
Haugen is calling for Congress to set up an external regulatory agency that would oversee Facebook’s algorithmic functioning. Such an agency would use AI to evaluate Facebook’s algorithms, ostensibly helping the company tweak them to deemphasize dangerous content. The House has even proposed a new bill, the Justice Against Malicious Algorithms Act of 2021, to carve certain social media content, specifically “personalized recommendations,” out of Section 230’s protections.  

TN: So, then is the problem solved?
JF: In my opinion, such algorithmic regulation is short-sighted, and here’s where the First Amendment comes into play. If this type of regulation succeeds, social media companies could then be sued over the types of content they promote—yet algorithmic recommendations are opinions, and opinions are protected by the First Amendment. Courts have interpreted the First Amendment in ways that enshrine it, making it almost sacred. But why not think about ways to limit the distribution of messages on Facebook and other social media so that they aren’t available globally and thus have less reach than they currently enjoy? Such limited distribution could work to de-amplify dangerous content as identified by algorithmic-monitoring AI technologies.   

TN: What changes do you foresee or should we look out for? 
JF: Unless there is more public outcry for regulation of Facebook and other social media, it is unlikely to happen. Congress has failed to pass the numerous federal data privacy protection bills it has considered (with the exception of the Children’s Online Privacy Protection Act). When the FTC fined Facebook $5 billion in 2019 for data privacy violations, the public barely noticed, no reforms resulted, and Zuckerberg continued undeterred. Reformulating Section 230 is controversial and difficult, even if it were to focus solely on controlling algorithms. Unless the public is willing to accept some fairly serious modifications to Facebook and other social media, such as increased content moderation or limitations on algorithmic recommendations, the chances of meaningful change or accountability seem slim. 

The term “regulation” has been demonized in public discourse by partisan actors, thus lessening even further any chance for social media transformation via regulation. We should, nonetheless, watch out for any action by the SEC, which is the most likely avenue for change on an industrywide scale. 

- Christine Nolthenius
 
