Information Technology and Ethics/Social Media Content and Targeting
Legal and Ethical Issues of Social Media Content and Targeting
Social media serves as a double-edged sword, offering unprecedented ways to connect and share information while also posing significant legal and ethical challenges. These platforms not only shape public discourse through the content they display but also raise questions about privacy, manipulation, and fairness due to their content targeting practices. The algorithms that underlie these processes can amplify certain voices or suppress others, impacting everything from individual mental health to democratic processes. As such, the intersection of social media content and targeting encompasses a broad spectrum of legal and ethical issues, including freedom of speech, censorship, and the influence on elections and political beliefs. The ethical implications of social networking are complex and multifaceted. According to Shannon Vallor, they can be categorized into three broad areas:[1]
- Direct impacts of social networking activities themselves.
- Indirect impacts stemming from the business models that enable these platforms.
- Structural implications that reflect the role of social networking as a transformative sociopolitical and cultural force.
Social Media Content
Social media content encompasses a wide array of outputs, from user-generated posts and shared news articles to sponsored content and algorithmically determined feeds. The selection and presentation of this content can significantly influence public opinion and societal norms, making it a critical area of ethical scrutiny.
Social Media Targeting
Social media targeting is the practice of delivering content to users based on an analysis of demographic, behavioral, and psychological data. This practice allows platforms to serve seemingly relevant content to each user but also poses serious ethical questions regarding privacy, autonomy, and the potential reinforcement of societal divisions and biases.
Freedom of Speech and Social Media Content
Freedom of speech is a cornerstone of democratic societies, enshrined in the First Amendment of the U.S. Constitution, which asserts that "Congress shall make no law...abridging the freedom of speech, or of the press..." However, this right is primarily protected from government infringement and does not apply to private entities, including social media companies, which can define and enforce their own rules regarding acceptable language and content.
Social Media Platforms as Arbiters of Free Speech
Social media platforms serve as both a boon for free expression and a potential venue for censorship. These platforms enable individuals to share their views widely and mobilize for various causes. Yet, they also have the power to suppress speech they deem inappropriate, whether for violating community standards or for being legally contentious in certain jurisdictions.
As private entities, social media companies often make intricate decisions about the content they allow. This includes decisions to permit certain types of speech from specific users—like heads of state—while blocking similar expressions from others, potentially flagging them as hate speech or terrorist content. This selective enforcement has raised concerns about the consistency and fairness of social media policies.
"This power that social media companies wield over speech online, and therefore over public discourse more broadly, is being recognized as a new form of governance. It is uniquely powerful because the norms favored by social media companies can be enforced directly through the architecture of social media platforms. There are no consultations, appeals, or avenues for challenge. There is little scope for users to reject a norm enforced in this manner. While a blatantly illegitimate norm may result in uproar, choices made by social media companies to favor existing local norms that violate international human rights norms are common enough."[2]
For more information on the regulation of content by social media companies, see the discussions by Kay Mathiesen, who characterizes censorship as limiting access to content either by deterring the speaker or the receiver from engaging in speech.[3]
Legal and Ethical Considerations
The legal frameworks that govern freedom of expression on social media vary significantly across countries, which can impact how speech is regulated on these platforms. In more restrictive regimes, social media companies might be compelled to comply with local laws that demand the removal of content that could be considered lawful in other contexts. The ethical challenge of balancing protection from harm against the right to free speech creates a complex landscape for content moderation.
Globally, the influence of social media on freedom of expression is profound and multifaceted. Companies must navigate not only diverse legal landscapes but also broad public expectations and international human rights norms. The power wielded by these platforms can sometimes align with local norms that may infringe on universally recognized rights, thus raising questions about the role of social media as a new form of governance without traditional checks and balances.[4]
Critics argue that the architectures of social media platforms enforce norms directly through their design, leaving little room for debate or appeal. This unilateral approach to governance has sparked debates about the legitimacy of such power, especially when it might suppress voices advocating for social or political change.
Restrictions to Speech and Content on Social Media
Social media platforms, as private enterprises, have the authority to set their own rules about what constitutes acceptable content on their networks. This control is essential not only for maintaining the quality of interactions within these platforms but also for complying with legal standards and protecting users from harm.
Several areas of speech are particularly controversial and subject to restriction on social media, including hate speech, disinformation, propaganda, and speech that can cause harm to others.
Hate Speech
Hate speech on social media often targets specific racial, ethnic, or other demographic groups and can incite violence or discrimination against them. For instance, organizations like the Ku Klux Klan have used social media to spread offensive content about various groups, significantly increasing the reach and impact of their hateful messages. The Southern Poverty Law Center reports a high number of active hate and anti-government groups in the U.S., illustrating the scale of this issue.[5]
Disinformation and Propaganda
The spread of false information or disinformation on social media is a major concern, especially given its potential to influence public opinion and election outcomes. Studies have shown that false stories reach more people and spread faster than the truth, often due to sensational or controversial content that captures user interest.[6]
Social media platforms have also been exploited to disseminate propaganda by various actors, including foreign governments. During the 2016 U.S. presidential election, there were documented cases of such activities intended to sway public opinion or create discord.[6]
Calls are often made, particularly by political leaders, for social media platforms to take down so-called "fake news," but in almost all cases, lying is classified as protected speech under the First Amendment of the U.S. Constitution.
Misinformation
Misinformation refers to false or inaccurate information that is spread regardless of intent to mislead. Unlike disinformation, which is deliberately deceptive, misinformation can be spread by individuals who believe the information to be true or who do not intend to cause harm. Misinformation can cover a wide range of content, from simple factual errors to more complex misunderstandings or misrepresentations of data.
Misinformation often spreads through social media, news outlets, or word of mouth, and it can travel especially quickly given the viral nature of online sharing. Its effects can be widespread, influencing public opinion, affecting decisions, and potentially leading to social or political consequences.
The COVID-19 pandemic has been a fertile ground for the spread of misinformation, affecting public understanding and response to health measures, vaccines, and the virus itself. Misinformation surrounding various aspects of the pandemic, such as the efficacy of masks, the safety of vaccines, and the nature of the virus, has led to varied and sometimes contradictory public responses. One particularly damaging rumor was the unfounded claim that COVID-19 vaccines cause infertility in both men and women. This specific piece of misinformation created vaccine hesitancy, significantly impacting public health efforts to combat the virus. Despite being debunked by reputable sources including the American College of Obstetricians and Gynecologists, the American Society for Reproductive Medicine, and the Society for Maternal-Fetal Medicine, the initial rumors had already sown deep seeds of doubt.[7]
Speech That Can Cause Harm to Others
Certain types of content on social media, such as doxing or swatting, can directly lead to physical harm. This category also includes speech that may incite violent acts or provide information on committing harmful activities. The responsibility of social media platforms to mitigate the spread of such harmful content is a significant ethical concern.[8]
Defamation
Defamation on social media can damage individuals' reputations through the spread of false information. Legal measures often require platforms to take action against defamatory content to protect the affected parties. This is a critical area where the freedom of speech intersects with the right to protection from slanderous or libelous statements.[9]
Algorithms and Content Delivery
Social media has become incredibly prevalent in modern society, delivering incalculable volumes of content to users’ phone and computer screens. A common topic of discussion regarding social media platforms is the ominous and vague “Algorithm” that dictates user interaction and what content is popular. This “Algorithm” has its roots in the idea of a computer algorithm, broadly defined as “a step-by-step procedure for solving a problem or accomplishing some end.”[10] Essentially, an algorithm is a method that is used to solve a problem.
Social media platforms use algorithms to build “content-based recommendation systems” that decide what content users can and cannot see based on a profile of each user’s interests.[11] This profile is created from numerous data points, which are used to gauge the user’s interest in the content displayed to them so that similar content can be served next. This is all in an effort to keep the user engaged with the social media platform.
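The exact recommendation models platforms use are proprietary, but the basic idea can be illustrated with a minimal sketch: represent both the user’s inferred interests and each candidate post as weighted topic vectors, then rank posts by similarity. The topic names, weights, and data structures below are hypothetical and chosen only for illustration, not taken from any platform’s actual system.

```python
from math import sqrt

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse topic-weight vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_feed(user_profile: dict, posts: list) -> list:
    """Order candidate posts by similarity to the user's interest profile."""
    return sorted(posts, key=lambda p: cosine_similarity(user_profile, p["topics"]), reverse=True)

# Hypothetical data: topic weights inferred from a user's past clicks and likes.
profile = {"fitness": 0.8, "cooking": 0.5, "politics": 0.1}
posts = [
    {"id": 1, "topics": {"fitness": 0.9, "cooking": 0.2}},
    {"id": 2, "topics": {"politics": 0.7, "news": 0.6}},
]
print([p["id"] for p in rank_feed(profile, posts)])  # post 1 ranks first
```

In a real system the profile would be updated continuously from clicks, watch time, and other engagement signals, and the ranking would blend many more factors than a single similarity score.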
Incentive for User Engagement
Social media platforms want to keep users engaged with their content so that they can serve them ads, and they also use algorithms to determine which users should receive which ads and when. Advertisers then pay the social media platforms for displaying their ads to target audiences, granting the platforms a constant source of revenue.[12]
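How an individual ad is chosen is likewise proprietary, but a common industry pattern is to rank candidate ads by expected revenue: the advertiser’s bid weighted by a model’s prediction of how likely this particular user is to engage. The sketch below assumes that general pattern; the ad names, bids, and click-probability functions are hypothetical placeholders, not any platform’s real logic.

```python
def pick_ad(user_features: dict, candidate_ads: list) -> dict:
    """Choose the ad with the highest expected revenue (bid x predicted click probability)."""
    def expected_revenue(ad):
        # predicted_ctr would normally come from a learned model scoring this
        # user's features against the ad; here it is a supplied placeholder.
        return ad["bid"] * ad["predicted_ctr"](user_features)
    return max(candidate_ads, key=expected_revenue)

# Hypothetical candidates: advertisers bid per click, and a model estimates engagement.
ads = [
    {"name": "running-shoes", "bid": 0.40,
     "predicted_ctr": lambda u: 0.05 if u.get("interest") == "fitness" else 0.01},
    {"name": "news-subscription", "bid": 1.20,
     "predicted_ctr": lambda u: 0.01},
]
print(pick_ad({"interest": "fitness"}, ads)["name"])  # running-shoes: 0.40*0.05 > 1.20*0.01
```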
Some consider using algorithms to target users with ads unethical, believing that it will inevitably target those most vulnerable.[13] This is only one point of contention, but much of the discourse surrounding social media platforms is entwined with controversies that have arisen from their use of algorithms.
Clickbait and Journalistic Integrity
With the boom of social media, many news organizations found themselves needing to adapt. These organizations now use social media platforms to distribute their content to audiences.[14] This switch has led news organizations to relinquish control over distribution, becoming reliant on algorithms to circulate their content.[14] News organizations and content creators alike know that not receiving enough user interaction will hurt their futures on these platforms.[13] This has led many of them to engage in a practice known as “clickbait”, defined as “something … designed to make readers want to click on a hyperlink especially when the link leads to content of dubious value or interest.”[15] Many news organizations have also traded away “traditional journalistic conceptions of newsworthiness and journalistic autonomy”[14] in favor of content that increases user engagement and algorithmic viability.
Targeted Content
To keep users engaged, social media platforms serve up “targeted content”: content selected for a user by an algorithm because it predicts the user will engage with it. This content is targeted based on data points such as the target’s career, wealth, and education information, among others.[16] Critics of targeted content have pointed out that this targeting method is predatory, allowing for the targeting of extremely niche groups of people who may be most vulnerable to the ads being served to them. Critics also point out that social media companies must harvest and process large swaths of user data to reach this granular level of targeting, which is typically justified on the basis of informed consent.[17] The concern is that users are not well informed of the fact that they are signing away some of their privacy expectations by using these platforms, thereby nullifying the basis of informed consent.
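At its simplest, the audience selection that critics describe can be thought of as filtering user profiles against a set of attribute criteria. The sketch below uses hypothetical profile attributes (career, income band, education) purely to show how narrow such segments can become once several attributes are combined.

```python
def build_audience(users: list, criteria: dict) -> list:
    """Select users whose profile attributes match every targeting criterion."""
    return [
        u for u in users
        if all(u.get(attr) in allowed for attr, allowed in criteria.items())
    ]

# Hypothetical profile attributes of the kind critics describe: career, income, education.
users = [
    {"id": "a", "career": "student", "income": "low", "education": "high school"},
    {"id": "b", "career": "engineer", "income": "high", "education": "college"},
]
niche = {"career": {"student"}, "income": {"low"}}
print([u["id"] for u in build_audience(users, niche)])  # ['a'] -- a very narrow segment
```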
Critics also point out that content targeting often leads to addiction to these social media platforms. This claim draws on six “properties of addiction criteria: salience, mood modification, tolerance, withdrawal symptoms, conflict and relapse”[18] that excessive use of social media has fostered in users. More on social media addiction can be found in the dedicated section of this book.
Content Suppression
There is another side to social media platforms’ use of algorithmic targeting and promotion of certain content: content suppression. Just as algorithms promote engagement with certain kinds of content, they also suppress other kinds. Critics point out that the creation of algorithms is not a neutral process, and the biases of the creators or of society as a whole can influence the selection of content to promote and suppress.[19] Content creators have claimed, for example, that their content was suppressed on platforms like TikTok for posting Black Lives Matter material.[20]
Subversion of Content Targeting: Influence on Elections and Political Processes
Recent key examples highlight a multitude of ethical and legal concerns associated with content targeting on social media. Notably, instances like the involvement of Cambridge Analytica in the 2016 U.S. Presidential Election and the Brexit Referendum demonstrate how social media can be exploited to manipulate public opinion and influence political outcomes. These cases shed light on the powerful effects of targeted content strategies and the profound implications they hold for democracy.
Cambridge Analytica and the 2016 U.S. Presidential Election
Cambridge Analytica, a British consulting firm and a subsidiary of Strategic Communications Laboratories (SCL), gained notoriety for its significant role in political events during the mid-to-late 2010s, ultimately closing in May 2018. The firm's controversial actions stemmed from acquiring the private information of over 50 million Facebook users without authorization. This breach enabled the construction of detailed user profiles, which were then exploited to influence U.S. politics, notably during the 2016 Presidential Election and the United Kingdom’s Brexit Vote.[21]
The operation began in 2014 when Alexander Nix, the chief executive of Cambridge Analytica, proposed using psychological profiling to affect voters' behaviors. These strategies were employed to sway public opinion in favor of conservative candidates in local elections, with funding from figures such as Steve Bannon.[22] The insufficient data initially available to the firm led to hiring Aleksandr Kogan, a Russian-American academic, to develop an app that harvested data not only from users but also from their Facebook friends. This massive data collection was facilitated by Facebook's permissive data usage policies at the time.[23]
Targeted advertising, fundraising appeals, and strategic planning of campaign activities, such as deciding where Donald Trump should visit to maximize support, were all based on these profiles. Simultaneously, tactics to demobilize Democratic voters and intensify right-wing sentiments were employed, showcasing the dual use of targeted content to both mobilize and suppress voter turnout.[24]
Brexit Referendum
Across the Atlantic, similar profiling techniques were used to influence the Brexit vote. Connections were discovered between the Leave.EU campaign and Cambridge Analytica through a Canadian firm known as AggregateIQ, which was linked to various political campaign groups advocating for the UK to leave the European Union. In the crucial final days of the campaign, voters identified as persuadable were inundated with over a billion targeted advertisements, a strategy pivotal in securing the narrow margin needed to pass the referendum.
These events have prompted significant changes in how social media platforms manage data and have ignited a broader discussion about the need for stringent oversight of content targeting practices to safeguard democratic processes.
Censorship and Content Suppression
Censorship on social media can be nuanced and multifaceted, generally manifesting in two primary forms: censorship by suppression and censorship by deterrence. Each method has its implications and is employed under different contexts, often stirring debate over the balance between free speech and regulatory needs.
Censorship by Suppression
Censorship by suppression involves prohibiting objectionable material from being published, displayed, or circulated. In the United States, this form of censorship is often equated with "prior restraint," a concept generally considered unconstitutional unless it meets a high standard of justification, typically only upheld in cases of national security or public safety.
Social media platforms sometimes engage in practices that could be considered censorship by suppression when they delete or block access to certain types of content. This might include automated algorithmic suppression of content that mentions specific topics deemed sensitive or controversial. While platforms argue that this is necessary to maintain community standards, critics often view it as a form of censorship that restricts free expression.[25]
Copyright Strikes as a Form of Suppression
The issue of intellectual property rights in the context of social media highlights another form of suppression. Copyright strikes are used by platforms to enforce intellectual property laws automatically, often without thorough investigation. This practice can lead to the suppression of content, even if it falls under fair use provisions.[26]
Censorship by Deterrence
Censorship by deterrence does not outright block or forbid the publication of material. Instead, it relies on the threat of legal consequences, such as arrest, prosecution, or heavy fines, to discourage the creation and distribution of objectionable content. This form of censorship can be particularly chilling, as it targets both the publishers of the content and those who might access it, fostering a climate of fear and self-censorship.
One of the critical issues with both forms of censorship is the difficulty in distinguishing between publishers (those who create and post content online) and platforms (those who host content published by others). In theory, platforms are protected from liability for user-generated content by Section 230 of the Communications Decency Act, a key piece of internet legislation that allows online services to host user-generated content without being liable for its content under most circumstances.[27]
Legal Framework Governing Social Media Content
The legal landscape of social media is heavily influenced by Section 230 of the Communications Decency Act (CDA), enacted in 1996. This legislative framework provides platforms with broad immunity, protecting them from lawsuits resulting from user-generated content. Section 230 is pivotal as it allows platforms to moderate material without facing legal repercussions, thereby promoting innovation and free online communication.[28]
Section 230 of the Communications Decency Act: Challenges and Criticisms
Section 230 shields social networking sites from lawsuits related to user-posted information, enabling them to control content without being held responsible for the information they disseminate. However, this provision has faced criticism for its role in facilitating the spread of harmful content while limiting platforms' accountability, despite its intentions to foster free speech and innovation.[29]
Proliferation of Harmful Content
Critics argue that the protection afforded by Section 230 has led social networking companies to prioritize user interaction and growth over stringent content moderation. This has allowed platforms to avoid doing enough to halt the spread of harmful content, such as hate speech, false information, and cyberbullying. The lack of legal penalties for hosting such content enables bad actors to exploit these platforms, spreading dangerous materials that can harm communities and individuals.[30]
Degradation of Responsibility
The legal immunity granted to social networking sites under Section 230 is said to undermine accountability and discourage victims from seeking legal recourse for harassment or defamation experienced online. If platforms face no potential legal repercussions, they may not be motivated to proactively remove harmful content or provide adequate support to those affected.[31]
Evolving Legal Interpretations and Future Directions
The debate over Section 230 continues to evolve as stakeholders from various sectors call for reforms that balance the benefits of online free speech against the need for greater accountability. Legal scholars and policymakers are increasingly examining how laws can adapt to the complexities of content management on social media platforms, suggesting that a more nuanced approach may be necessary. This involves considering the potential for algorithmic regulation and the proportional responsibility of platforms regarding online speech.[32]
Content Moderation
Social media platforms are tasked with the critical responsibility of moderating content to curb the proliferation of harmful information. This duty involves removing posts that propagate hate speech or incite violence and suspending users who breach platform policies. The scope and efficacy of content moderation can be swayed by various factors, including political influences, cultural norms, and economic incentives.
Content moderation refers to the process of screening user-generated content on digital platforms to determine its appropriateness. This encompasses evaluating text, images, and videos to ensure they adhere to the platform's guidelines. Given the immense volume of content uploaded daily, content moderation is indispensable for maintaining a safe online environment. Content moderators, the individuals at the forefront of this operation, often face significant psychological challenges due to the nature of the content they review, including exposure to violent or disturbing images and texts.[33] [34]
Recent legal cases highlight these challenges, with Facebook settling a lawsuit for $52 million with moderators over the trauma incurred from their job duties.[35] Similar legal challenges are faced by other platforms like TikTok, emphasizing the severe impact of this work on mental health.[36]
Ethical Issues with Content Moderation: Workers' Rights
Moderators are tasked with filtering a range of undesirable content, from spam and copyright infringement to severe violations like hate speech and graphic violence. The distress associated with continuous exposure to such content is profound, affecting moderators' mental health long after their roles end. This is true regardless of whether moderators are employed directly by the platforms or through third-party contractors. However, those employed in-house often benefit from better compensation, work conditions, and access to mental health resources compared to their outsourced counterparts.[32]
The Role of Artificial Intelligence in Content Moderation
Large platforms like Facebook employ artificial intelligence (AI) systems to detect a majority of toxic content. Mark Zuckerberg, CEO of Facebook, reported that AI systems were responsible for removing over 95% of hate speech and nearly all content related to terrorism on the platform.[37] Despite these advances, the sheer volume of harmful content that still requires human moderation is overwhelming. AI, while efficient and capable of processing content in multiple languages, often lacks the subtlety needed to understand context or the nuances of human language, particularly in complex cases like memes where text, image, and implied meaning must all be considered.[38] Ethically, AI-based moderation may prove promising in addressing the mental toll on human moderators, but it could also exacerbate existing algorithmic and societal biases.
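A simplified view of how such a pipeline might route content, assuming a classifier that outputs a toxicity score between 0 and 1 (the thresholds and labels here are illustrative, not any platform’s actual policy):

```python
def moderate(post_text: str, toxicity_score: float) -> str:
    """Route a post based on a model's toxicity score (thresholds are illustrative)."""
    if toxicity_score >= 0.95:
        return "remove"          # high-confidence violations removed automatically
    if toxicity_score >= 0.60:
        return "human_review"    # ambiguous cases (satire, memes, reclaimed slurs) need context
    return "allow"

# A real system would call a trained classifier; here the score is supplied directly.
print(moderate("example post", 0.97))     # remove
print(moderate("borderline joke", 0.70))  # human_review
```

The middle band is precisely where the contextual judgment AI struggles with gets pushed back onto human moderators, which is why automation has not eliminated the mental toll described above.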
Social Media Targeting and Mental Health
Social media platforms, with their sophisticated design features such as algorithms and infinite scroll, exploit psychological principles to foster habitual use, sometimes leading to addiction. These designs are not benign; they have significant impacts on mental health, influencing user behavior and societal interactions in profound ways.
Social networking sites are crafted to exploit variable reward systems, a concept rooted in the behaviorist psychology of B.F. Skinner. Interactions like likes, comments, and shares provide unpredictable yet frequent rewards, compelling users to engage repeatedly in hopes of social validation. This pattern of interaction can stimulate the brain’s reward centers akin to gambling and substance use, leading to compulsive behaviors where users feel an uncontrollable urge to log onto these sites, often at the expense of other activities and responsibilities. The ramifications of this behavioral addiction are evident in reduced productivity, strained relationships, and decreased physical activity.
Infinite Scroll: Never Ending Targeted Content
The infinite scroll feature on social networking sites exemplifies persuasive design, intended to maximize user engagement by leveraging natural human curiosity and the fear of missing out (FOMO). This design often leads users into a state of 'flow,' a deep level of engagement that makes time feel like it is passing unnoticed. While flow can be beneficial in activities like learning or art, on social media, it often results in significant time mismanagement and distraction from fulfilling tasks, including disruptions to sleep patterns which can have serious cognitive and health consequences.[39]
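Mechanically, infinite scroll is usually just cursor-based pagination: the client silently requests the next slice of the ranked feed as the user approaches the bottom of the page, so the user never faces an explicit "next page" decision. A minimal sketch, with the feed contents and page size invented for illustration:

```python
def next_page(feed: list, cursor: int, page_size: int = 10) -> tuple:
    """Return the next slice of ranked posts and a cursor for the following request.

    The client calls this again as the user nears the bottom of the screen,
    so the feed never presents a natural stopping point.
    """
    page = feed[cursor:cursor + page_size]
    new_cursor = cursor + len(page)
    return page, new_cursor

feed = list(range(100))          # stand-in for an already-ranked feed
page, cursor = next_page(feed, 0)
while page:                      # the loop only ends when the ranked feed is exhausted
    page, cursor = next_page(feed, cursor)
```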
Dark Pathways: The Mental Health Consequences of Social Networking Content
The term 'dark pathways' describes the detrimental trajectories users might follow due to excessive social media use. Key mental health issues associated with these pathways include anxiety, depression, and social isolation. The drivers for these outcomes are multifaceted:
- Social Comparison: Users are often presented with curated versions of others' lives, leading to unfavorable comparisons and distorted self-perceptions. This phenomenon is linked to lower self-esteem and body image issues, particularly among adolescents and young adults.[40]
- Cyberbullying and Online Harassment: The anonymity of social platforms can foster aggression and bullying, with victims reporting higher levels of stress and anxiety, and in severe cases, suicidal ideation.[41]
- Information Overload: The vast amounts of information processed during prolonged social media use can overwhelm the brain's processing capacity, impairing decision-making and increasing stress levels.[42]
References
- ↑ Vallor, Shannon (2022). "Social Networking and Ethics". The Stanford Encyclopedia of Philosophy (Fall 2022 ed.).
- ↑ Arun, Chinmayi (2018). "Making Choices: Social Media Platforms and Freedom of Expression Norms". SSRN Electronic Journal. doi:10.2139/ssrn.3411878. ISSN 1556-5068.
- ↑ Mathiesen, Kay (2008). "Censorship and Access to Information". Handbook of Information and Computer Ethics. New York: John Wiley and Sons.
- ↑ Carlsson, Ulla, ed. (2016). Freedom of expression and media in transition: studies and reflections in the digital age. Göteborg: Nordicom. ISBN 978-91-87957-22-2.
- ↑ "Active Hate Groups". Southern Poverty Law Center. 2023. Retrieved April 22, 2024.
{{cite web}}
: CS1 maint: url-status (link) - ↑ a b Vosoughi, Soroush; Roy, Deb; Aral, Sinan (2018-03-09). "The spread of true and false news online". Science. 359 (6380): 1146–1151. doi:10.1126/science.aap9559. ISSN 0036-8075.
- ↑ Abbasi, Jennifer (2022-03-15). "Widespread Misinformation About Infertility Continues to Create COVID-19 Vaccine Hesitancy". JAMA. 327 (11): 1013. doi:10.1001/jama.2022.2404. ISSN 0098-7484.
- ↑ Müller, Karsten; Schwarz, Carlo (2017). "Fanning the Flames of Hate: Social Media and Hate Crime". SSRN Electronic Journal. doi:10.2139/ssrn.3082972. ISSN 1556-5068.
- ↑ Barnes, M (July 17, 2020). "Top 5 Legal Issues in Social Media". Legal Reader. https://www.legalreader.com/top-5-legal-issues-in-social-media/.
- ↑ "Definition of ALGORITHM". www.merriam-webster.com. 2024-04-12. Retrieved 2024-04-22.
- ↑ Pazzani, Michael J.; Billsus, Daniel (2007), Brusilovsky, Peter; Kobsa, Alfred; Nejdl, Wolfgang (eds.), "Content-Based Recommendation Systems", The Adaptive Web: Methods and Strategies of Web Personalization, Berlin, Heidelberg: Springer, pp. 325–341, doi:10.1007/978-3-540-72079-9_10, ISBN 978-3-540-72079-9, retrieved 2024-04-22
- ↑ Li, Szu-Chuang; Chen, Yu-Ching; Chen, Yi-Wen; Huang, Yennun (2022). "Predicting Advertisement Revenue of Social-Media-Driven Content Websites: Toward More Efficient and Sustainable Social Media Posting". Sustainability. 14 (7): 4225. doi:10.3390/su14074225. ISSN 2071-1050.
- ↑ a b Mogaji, Emmanuel; Soetan, Taiwo O.; Kieu, Tai Anh (2021). "The implications of artificial intelligence on the digital marketing of financial services to vulnerable customers". Australasian Marketing Journal. 29 (3): 235–242. doi:10.1016/j.ausmj.2020.05.003. ISSN 1839-3349.
- ↑ a b c Peterson-Salahuddin, Chelsea; Diakopoulos, Nicholas (2020-07-10). "Negotiated Autonomy: The Role of Social Media Algorithms in Editorial Decision Making". Media and Communication. 8 (3): 27–38. doi:10.17645/mac.v8i3.3001. ISSN 2183-2439.
- ↑ "Definition of CLICKBAIT". www.merriam-webster.com. 2024-04-19. Retrieved 2024-04-22.
- ↑ Xia, Chaolun; Guha, Saikat; Muthukrishnan, S. (2016). "Targeting algorithms for online social advertising markets". IEEE: 485–492. doi:10.1109/ASONAM.2016.7752279. ISBN 978-1-5090-2846-7.
- ↑ Custers, Bart; van der Hof, Simone; Schermer, Bart (2014). "Privacy Expectations of Social Media Users: The Role of Informed Consent in Privacy Policies". Policy & Internet. 6 (3): 268–295. doi:10.1002/1944-2866.POI366. ISSN 1944-2866.
- ↑ Mujica, Alejandro L.; Crowell, Charles R.; Villano, Michael A.; Uddin, Khutb M. (2022-02-24). "ADDICTION BY DESIGN: Some Dimensions and Challenges of Excessive Social Media Use". Medical Research Archives. 10 (2). doi:10.18103/mra.v10i2.2677. ISSN 2375-1924.
- ↑ Binns, Reuben; Veale, Michael; Van Kleek, Max; Shadbolt, Nigel (2017). Ciampaglia, Giovanni Luca; Mashhadi, Afra; Yasseri, Taha (eds.). "Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation". Social Informatics. Cham: Springer International Publishing: 405–415. doi:10.1007/978-3-319-67256-4_32. ISBN 978-3-319-67256-4.
- ↑ McCluskey, Megan (July 22, 2020). "These TikTok Creators Say They’re Still Being Suppressed for Posting Black Lives Matter Content". Time. https://time.com/5863350/tiktok-black-creators/.
- ↑ "Cambridge Analytica is shutting down following Facebook scandal". Engadget. 2018-05-02. Retrieved 2024-04-23.
- ↑ Rosenberg, Matthew; Confessore, Nicholas; Cadwalladr, Carole (2018-03-17). "How Trump Consultants Exploited the Facebook Data of Millions". The New York Times. ISSN 0362-4331. https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html.
- ↑ Scheiber, Noam; Isaac, Mike (2019-03-19). "Facebook Halts Ad Targeting Cited in Bias Complaints". The New York Times. ISSN 0362-4331. https://www.nytimes.com/2019/03/19/technology/facebook-discrimination-ads.html.
- ↑ Scheiber, Noam; Isaac, Mike (2019-03-19). "Facebook Halts Ad Targeting Cited in Bias Complaints". The New York Times. ISSN 0362-4331. https://www.nytimes.com/2019/03/19/technology/facebook-discrimination-ads.html.
- ↑ "Discussions on Digital Rights and Freedom of Expression Online". Internet Governance Forum. Retrieved April 22, 2024.
{{cite web}}
: CS1 maint: url-status (link) - ↑ Maiello, Alfred (2011). "Social Media – An Overview of Legal Issues Businesses Face". MBM Law. Retrieved April 22, 2024.
{{cite web}}
: CS1 maint: url-status (link) - ↑ "Section 230: Key Legal Cases". Electronic Frontier Foundation. Retrieved April 22, 2024.
{{cite web}}
: CS1 maint: url-status (link) - ↑ "Section 230". Electronic Frontier Foundation. Retrieved 2024-04-23.
- ↑ Morrison, Sara (2020-05-28). "Section 230, the internet law that's under threat, explained". Vox. Retrieved 2024-04-23.
- ↑ Citron, Danielle; Wittes, Benjamin (2017-11-01). "The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity". Fordham Law Review. 86 (2): 401.
- ↑ Huang, Tzu- Chiang (2022-09-01). "Private Censorship, Disinformation and the First Amendment: Rethinking Online Platforms Regulation in the Era of a Global Pandemic". Michigan Technology Law Review. 29 (1): 137–170. doi:10.36645/mtlr.29.1.private. ISSN 2688-4941.
- ↑ a b "Governing Online Speech: From 'Posts-As-Trumps' to Proportionality and Probability". Columbia Law Review. Retrieved 2024-04-23.
- ↑ anujayaraman (2022-03-03). "The Ethics of Content Moderation: Who Protects the Protectors?". Innodata Inc. Retrieved 2024-04-23.
- ↑ "The Responsibilities of Social Media Platforms and Users | Public Engagement". publicengagement.umich.edu. Retrieved 2024-04-23.
- ↑ Satariano, Adam; Isaac, Mike (2021-08-31). "The Silent Partner Cleaning Up Facebook for $500 Million a Year". The New York Times. ISSN 0362-4331. https://www.nytimes.com/2021/08/31/technology/facebook-accenture-content-moderation.html.
- ↑ "The Social Responsibility of Social Media Platforms". The Regulatory Review. 2021-12-21. Retrieved 2024-04-23.
- ↑ "Mark Zuckerberg said content moderation requires 'nuances' that consider the intent behind a post, but also highlighted Facebook's reliance on AI to do that job". Business Insider. Retrieved 2024-04-23.
- ↑ Persily, Nathaniel; Tucker, Joshua A., eds. (2020). Social media and democracy: the state of the field, prospects for reform. SSRC anxieties of democracy. Cambridge, United Kingdom ; New York, NY: Cambridge University Press. ISBN 978-1-108-89096-0.
- ↑ Collins, Grant (2020-12-11). "Why the infinite scroll is so addictive". Medium. Retrieved 2024-04-23.
- ↑ Sadagheyani, Hassan Ebrahimpour; Tatari, Farin (2021-02-23). "Investigating the role of social media on mental health". Mental Health and Social Inclusion. 25 (1): 41–51. doi:10.1108/MHSI-06-2020-0039. ISSN 2042-8308.
- ↑ Pater, Jessica A.; Kim, Moon K.; Mynatt, Elizabeth D.; Fiesler, Casey (2016-11-13). "Characterizations of Online Harassment: Comparing Policies Across Social Media Platforms". ACM: 369–374. doi:10.1145/2957276.2957297. ISBN 978-1-4503-4276-6.
- ↑ Pater, Jessica A.; Kim, Moon K.; Mynatt, Elizabeth D.; Fiesler, Casey (2016-11-13). "Characterizations of Online Harassment: Comparing Policies Across Social Media Platforms". ACM: 369–374. doi:10.1145/2957276.2957297. ISBN 978-1-4503-4276-6.