My parents have warned me countless times about the dangers of social media. I had to beg them to let me download Instagram and Snapchat, even (embarrassingly) creating PowerPoints to demonstrate my responsibility. Now, however, I regret it. Earlier this year, Frances Haugen, a former Facebook (now Meta) employee, leaked hundreds of internal documents alleging that Meta prioritizes profit margins over user wellbeing. This revelation, although disappointing, is not surprising. What it brings to the conversation, however, is the question of whether Meta has breached human rights and, if so, what to do about it.
On October 5, 2021, Frances Haugen testified before the United States Senate Committee on Commerce, Science, and Transportation. She began her testimony with the assertion that social media misuse has become “one of the most urgent threats” facing not only the United States but “nations across the globe.” Coming from a woman with intimate knowledge of the inner workings of the largest social media platform on earth, this statement is deeply concerning. What is it about social media that gives it such power over entire nations? And, given the sweeping implications of her claim, what corroborates it?
According to its 2020 fourth-quarter and full-year press release, updated December 31, 2020, Facebook alone has 2.80 billion monthly active users. Pew Research Center reports that 77% of people ages 30-49 use Facebook. Instagram, which boasts 1.39 billion users, is dominated by people aged 18-29. Many of these users go online daily. Combined, the platforms hold heavy influence over all realms of society, and with that influence comes immense responsibility.
Unfortunately, even with such responsibility, Facebook has a history of facilitating concrete human rights abuses. In August 2017, the Myanmar military instigated a genocide against the country’s Rohingya Muslim population, which included mass rape, murder, and the burning of entire villages. Following this tragedy, a Reuters investigation discovered “more than 1,000 examples of hate speech on Facebook, including calling Rohingya and other Muslims dogs, maggots, and rapists, suggesting they be fed to pigs, and urging they be shot or exterminated.” This violence began on Facebook and spread to real-world crime and genocide. Facebook had not shared this content with the UN and, although they deleted the posts, admitted to being “too slow to prevent misinformation and hate.”
Another instance of human rights abuse surfaced in 2019, when Apple nearly removed Facebook and Instagram from the App Store because the platforms were being used as tools for the buying and selling of maids in the Middle East. The working conditions of these maids, women caught in dire financial straits who had moved to the Middle East to provide for their families back home, often violated international human rights and labour protection laws. Facebook’s internal documents state that these women are frequently “locked in their homes, starved, forced to extend their contracts indefinitely, unpaid, and repeatedly sold to other employers without their consent.” Meta has known about these issues since 2018 and has admitted to “under-enforcing on confirmed abusive activity.”
In both of these instances, Meta was aware of the hatred and abuse occurring on their platforms and, in both cases, admitted to a lack of action. Have they not learned? How many genocides must be incited on Facebook’s platform before they act quickly? Their complicity in these occurrences alone, and there are more, qualifies the platform as a major actor in human rights abuse.
Clearly, Meta is complicit in instances of human rights abuse. However, I must ask: what is the alternative? Unfortunately, the obvious counteraction to these offences, censorship, is itself an abuse of human rights. Last May, for example, the hashtag #AlAqsa, referring to the Al-Aqsa Mosque in Jerusalem, was banned on Instagram across the Middle East. Facebook had apparently “mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade.” Mistakes such as this stifle political free speech.
People must be able to share ideas and information, and Facebook’s platforms have become one of the main forums for doing so. How would Facebook determine what to censor? Inevitably, they would stifle the flow of free thought and ideas on their platform, which is itself an infringement of human rights. With billions of users, is it feasible for Meta to review every post on their site? Or is that simply their implied responsibility? Is Meta responsible for the actions of its users when those actions are encouraged on their platform? Whose fault is it that these issues persist online?
Frances Haugen believes in our ability to fix these problems. In her testimony, she stated that the issue of social media is “solvable”: “a safer, more enjoyable social media is possible.” What she proposes is an extensive revamping of governmental oversight of social media companies like Facebook and Twitter. Here, however, the tension between national security and personal privacy arises. Many people are, understandably, concerned about granting the government access to personal accounts and private data. The government cannot be allowed to become omnipotent, and Facebook’s security features, although scrutinized, are one line of defence against governmental intrusion into individual lives. So what should be done?
Nancy C. Lee, in her essay on Lamentations and Polemic, comments that the “greatest lament, in light of yet another human disaster,” is the fact that the world “has proven itself time and again mostly incapable of preventing or stopping such evil.” Can social media, instead of perpetuating hatred, become a tool for the world to finally prevent these evils from occurring? How do we, as individuals, either contribute to or weaken the culture of death and hatred on social media platforms?
Feature Image by dole777