Lentis/Content Moderation
Introduction
This chapter examines how content is moderated. Content moderation is the practice of monitoring user-generated submissions and applying a predetermined set of guidelines to determine whether the content (typically a post) is permissible.[1]
Categorization by Purpose
Laws and Morality
Content moderation can serve to maintain a clean network environment. Web search engines such as Google and Bing implicitly conduct content moderation: websites hosting illegal content such as slave auctions, smuggling, and drug trading are excluded from search results and removed from public view, relegating such material to the so-called "Dark Web".
Section 230 of the Communications Decency Act dictates the legality of certain instances of content moderation. It states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider," meaning that a social media platform or website host cannot be held legally responsible for content posted by its users. It further states that "No provider or user of an interactive computer service shall be held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."[2] Together, these provisions mean that online hosts cannot be held legally responsible for illegal or unwanted content posted to their platforms, nor can they be prosecuted for removing such content in good faith.
Platforms are not protected by Section 230 if they edit or curate illegal content. Federal criminal violations, intellectual property violations, and violations of electronic communications privacy laws also fall outside Section 230's protections.
There are many techniques for moderating explicit content. One example is language filtering: many chat rooms include a "chat filter" that replaces socially offensive words with asterisks or other symbols. Even though this cannot completely stop verbal abuse, it helps maintain a clean environment. Another example is video censorship: beyond age restrictions, videos are often modified to remove certain content from an audience. In Japanese anime, for instance, scenes containing blood or nudity may be covered with mosaic tiling or dots.
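A chat filter of the kind described above can be as simple as a word-level substitution. The following is a minimal sketch in Python; the blocklist terms are placeholders, not any platform's actual word list.

```python
import re

# Hypothetical blocklist; real chat filters use much larger, curated lists.
BLOCKLIST = {"jerk", "loser"}

def mask_offensive(message: str) -> str:
    """Replace blocklisted words with asterisks, preserving word length."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKLIST else word
    return re.sub(r"[A-Za-z]+", mask, message)

print(mask_offensive("Don't be a jerk"))  # -> "Don't be a ****"
```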
National Security
Information about classified military secrets is prohibited from public exposure. If a picture revealing a US military installation is made public, federal authorities may quickly remove it and pursue the person responsible. Facetious posts may be tolerated but are still monitored; for example, rumors long claimed that Area 51 contained alien technology, though no proof ever surfaced. Detailed discussion of sensitive technologies such as quantum encryption, gene-targeting viruses, and nuclear reaction control is also monitored.
Lawmakers are worried about a lack of moderation on TikTok, as the platform does little to combat misinformation, which spreads quickly, especially among younger users. Per TikTok's privacy agreement, the app collects "information when [users] create an account and use the Platform" and "information you share with [TikTok] from third-party social network providers, and technical and behavioral information about [users'] use of the Platform."[3] The Chinese company's lack of moderation and widespread collection of data have raised concerns for U.S. national security.[4]
Political Purpose
Content moderation can also be regulated by the government. By controlling the information the public receives, together with self-efficacy campaigns, it is possible to shape public opinion.
In the 2017 case Knight First Amendment Institute v. Trump, Judge Naomi Reice Buchwald found that it was unconstitutional for President Donald Trump to block Twitter accounts with opposing political beliefs from his personal account, @realDonaldTrump. The decision was upheld on appeal after the government challenged it in 2018, and as of 2020 the dispute between the government and the Knight First Amendment Institute at Columbia University was still pending before the United States Supreme Court.
Categorization by Method
Pre-Moderation
Pre-moderation is a style of content moderation employed by companies that care about their image above all else. Every piece of content is curated and reviewed before release to make sure it does not hurt the brand or cause legal issues.[5] Although pre-moderation is not feasible for platforms that experience a large influx of content, such as social media platforms, it can be useful for company blogs and similar sites.
Post-Moderation
Post-moderation is a type of content moderation in which content goes live as soon as it is submitted but can be reviewed and taken down at any time if it is found to violate a site policy.[5] Post-moderation functions as a blanket policy across most platforms currently in use: most companies reserve the right to remove content that violates their terms and conditions.
Reactive Moderation
editReactive moderation is a type of moderation in which a platform relies on their community in order to review and screen posts. The individuals viewing the content become responsible for determining whether or not the content is appropriate. If the content isn't appropriate, they are tasked with reporting it so that a moderator can view and delete if necessary.[5] This type of moderation is used on most social media sites, as it allows the site to leverage their large community as a solution to the influx of content.
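A minimal sketch of this report-then-review flow is shown below. The report threshold and data structures are illustrative assumptions, not any platform's actual mechanism.

```python
from collections import Counter

reports = Counter()          # post_id -> number of user reports
moderation_queue = set()     # posts awaiting human review

def report_post(post_id: int, threshold: int = 3) -> None:
    """Record a user report; escalate to the moderator queue at the threshold."""
    reports[post_id] += 1
    if reports[post_id] >= threshold:
        moderation_queue.add(post_id)

for _ in range(3):
    report_post(42)
print(moderation_queue)  # {42}
```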
Distributed Moderation
Distributed moderation is similar to reactive moderation in that it entrusts the community with moderating content, but rather than having users report only inappropriate content, users vote on every piece of content submitted.[5] This often produces a form of groupthink, in which the majority effectively decides which content is permissible.
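A voting scheme like this can be reduced to a net score and a visibility threshold, as in the sketch below; the threshold value is an illustrative assumption.

```python
def net_score(upvotes: int, downvotes: int) -> int:
    """Community score: every user vote counts toward the total."""
    return upvotes - downvotes

def is_visible(upvotes: int, downvotes: int, hide_below: int = -5) -> bool:
    """Content stays visible unless voters push its score well below zero."""
    return net_score(upvotes, downvotes) >= hide_below

print(is_visible(12, 3))   # True: the community approves
print(is_visible(1, 10))   # False: the community has voted it down
```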
Automated Moderation
Automated moderation is a type of moderation that relies on automated tools to filter content.[5] These may include word filters, algorithms based on text analysis, and more. Many believe this approach is the future of content moderation. Most sites already use some form of automated moderation in their suite of moderation tools, although the technology is not yet reliable enough to be used on its own.
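The rule-based end of automated moderation can be illustrated with a few simple text checks, as in the sketch below. The rules, phrases, and thresholds are assumptions made for illustration, not any platform's actual configuration.

```python
import re

# Hypothetical spam phrases; real deployments maintain much larger lists.
BLOCKED_PHRASES = {"buy followers", "free crypto"}

def automated_checks(text: str) -> list:
    """Return the names of any simple rules the text trips."""
    flags = []
    lowered = text.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        flags.append("blocked_phrase")
    if len(re.findall(r"https?://", text)) > 3:
        flags.append("too_many_links")
    letters = [c for c in text if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.8:
        flags.append("mostly_caps")
    return flags

print(automated_checks("free crypto giveaway http://a http://b http://c http://d"))
# -> ['blocked_phrase', 'too_many_links']
```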
Status Quo & Social Media
Several large social media companies of similar size and scope employ different forms of content moderation to manage their expansive communities. This section focuses on Facebook/Instagram, Twitter, Reddit, YouTube, Twitch, TikTok, and Snapchat.
Facebook & Instagram
Facebook and Instagram are owned by the same parent company and have similar moderation policies. They mainly employ reactive moderation, in which the community is responsible for flagging and reporting explicit content. The company also makes heavy use of automated moderation, less for removing content than for detecting duplicate accounts.[6] Facebook invests more in content moderation than any other company discussed here and is arguably the platform most successful at removing explicit content. The moderators tasked with cleaning up the posts, however, end up suffering: every day at work they are exposed to the "worst of humanity," and many develop PTSD or depression and cannot continue working as a result.[7] According to Facebook Inc.'s Transparency Report for Q3 2019, Instagram removed 2 million posts compared to Facebook's 35 million.[8]
During the 2020 COVID-19 pandemic, both Facebook and Instagram sharply curtailed the user appeal process in favor of purely automated moderation. Restorations of content after user appeals (cases where Instagram re-uploads a photo it had taken down, at the user's request) fell from 8,100 in Q1 2020 to zero in Q2 2020.[9] Though it is still unknown whether the shift to entirely automated systems is temporary or permanent, policies on both platforms have become much stricter. Political misinformation is still generally allowed on both platforms, although they ban certain widespread disinformation campaigns and label potentially misleading posts as such.[9]
Twitter
Beginning in 2018, Twitter slowly came under the spotlight for content moderation due to the prevalence of politics on the site. The company relies mainly on automated moderation, with less of a focus on removing content and more on the discovery and amplification of content.[10] Recently, Twitter has become more aggressive in flagging and fact-checking misleading posts, and it does not hesitate to flag posts by influential figures, including President Donald Trump.[11] Twitter also proactively surfaces potential content violations for human review rather than waiting for users to report them, part of a push to manage toxic content and harassment and to suspend accounts related to conspiracy groups.
Reddit
Reddit uses a style of content moderation it calls "layered moderation." At its core, this is a combination of distributed and reactive moderation. Users "up-vote" and "down-vote" posts, acting as moderators who curate high-quality information for other users to see. While this is generally seen as a good way to manage content, it can also lead to a "hivemind" mentality, in which everyone on Reddit sees only the content the majority has curated. Users can also report posts for "subreddit" moderators to review manually and escalate or remove if necessary.[12] Reddit additionally employs automated moderation tools, including "AutoModerator," a bot that automates many of the manual tasks subreddit moderators would otherwise perform.[13]
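The rule-matching idea behind a bot like AutoModerator can be sketched as a list of patterns paired with actions, as below. The rule format and patterns here are illustrative assumptions and do not reflect AutoModerator's actual configuration syntax.

```python
import re

# Hypothetical rules: pattern to match in a post, and the action to take.
RULES = [
    {"pattern": r"(?i)\bfree giveaway\b", "action": "remove"},
    {"pattern": r"(?i)\bbuy followers\b", "action": "report"},
]

def apply_rules(post_text: str) -> str:
    """Return the action for the first matching rule, or approve the post."""
    for rule in RULES:
        if re.search(rule["pattern"], post_text):
            return rule["action"]
    return "approve"

print(apply_rules("FREE GIVEAWAY inside!"))     # -> "remove"
print(apply_rules("Daily discussion thread"))   # -> "approve"
```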
Reddit was previously one of the few places on the internet that allowed near-total free speech. More recently, however, it has started banning subreddits that represent hate groups and spread misinformation, targeting groups promoting racism, homophobia, or misogyny.[14] In June 2020 it went a step further by banning /r/The_Donald, a subreddit devoted to discussing and supporting President Donald Trump, along with /r/NoMask, a subreddit opposed to wearing masks during the COVID-19 pandemic.[15]
YouTube
YouTube is unique in employing the most automated tools of any of the platforms mentioned: its algorithms are used not only for recommending videos but also for content moderation.[16] YouTube is also the one company mentioned where people can make a living by uploading content, so one of its main forms of moderation is "demonetization."[17] For offending accounts, YouTube also has a "three-strike" system: after a first warning, channels face a series of progressively harsher penalties until, if nothing changes, the account is banned.[18]
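A strike system of this kind amounts to tracking a per-channel violation count and mapping it onto an escalating penalty ladder. The sketch below uses hypothetical penalty names and does not reproduce YouTube's actual enforcement schedule.

```python
from collections import defaultdict

# Hypothetical penalty ladder; YouTube's real schedule differs.
PENALTIES = ["warning", "1-week upload freeze", "2-week upload freeze", "channel ban"]

strikes = defaultdict(int)   # channel name -> number of recorded violations

def record_violation(channel: str) -> str:
    """Record a violation and return the penalty applied at this level."""
    level = min(strikes[channel], len(PENALTIES) - 1)
    strikes[channel] += 1
    return PENALTIES[level]

for _ in range(4):
    print(record_violation("example_channel"))
# warning -> 1-week upload freeze -> 2-week upload freeze -> channel ban
```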
Twitch
The video game livestreaming platform Twitch, owned by Amazon, employs distributed and automated moderation. Individual channels have moderators responsible for moderating Twitch's chat feature and deleting inappropriate messages in real time. The platform also uses automated systems for copyrighted content, often temporarily suspending streamers who play copyrighted audio in their streams.[19]
Twitch has recently faced backlash from large streamers over very strict moderation policies. Under the Digital Millennium Copyright Act (DMCA), which governs copyrighted content, Twitch has retroactively banned streamers for copyrighted music used in past streams, even streams recorded before the new policy was implemented. This caused big streamers to delete all of their past broadcasts to avoid a ban.[20] Streamers have also been banned when they accidentally viewed inappropriate content through no fault of their own, even if they instantly clicked away from it.[21] This disconnect between Twitch and its most profitable streamers has caused some to move to other livestreaming platforms.
TikTok
TikTok, owned by the China-based company ByteDance, is a relatively new social media platform where people share short videos. TikTok relies primarily on automated moderation and states that less than 1% of the content it removes is related to hate speech and disinformation.[22] However, TikTok has been accused of removing content based on Chinese political sensitivities and of suppressing content on its "For You" page from users it deems "ugly" in order to attract more people to the app. The company has denied both accusations.[23]
Snapchat
Snapchat, a social media app where users share disappearing photos, focuses primarily on post-moderation and distributed moderation. This focus on human-centered content moderation is unusual for such a large social media platform.[24] Snapchat can afford fewer automated systems because most content on the app exists for at most 24 hours before it is deleted. Most interactions between users are private, with the exception of the "Discover" page, where companies and influencers post bite-sized content. Snapchat has also taken a strong stance on racial justice by removing controversial figures, such as Donald Trump, entirely from the Discover page.[25]
Controversies
Politics
editThe Communications Decency Act
Both Republicans and Democrats have expressed concerns regarding Section 230 of the Communications Decency Act. Democrats believe that Section 230 limits the censoring of hate speech and illegal content and that repealing or amending it would force tech companies to remove inappropriate content.[26] Republicans believe that Section 230 allows tech companies to suppress conservative opinions and influence public thought.[27] In May 2020, President Donald Trump issued an executive order on preventing online censorship, stating that "[The United States] cannot allow a limited number of online platforms to hand pick the speech that Americans may access and convey on the internet."[28] No changes to Section 230 have yet been made.
Freedom of Speech
The use of content moderation by social media platforms has raised concerns about its implications for freedom of speech. One reason is the lack of transparency in the rules governing moderation. David Kaye, UN Special Rapporteur on freedom of opinion and expression, called the murkiness of these rules "one of the greatest threats to online free speech today," adding that "companies impose rules they have developed without public input and enforced with little clarity".[29] Users' differing expectations of what content should be removed have only heightened these concerns. One example is the reaction to Facebook's decision not to remove a doctored video of Nancy Pelosi, slowed down to make her appear inebriated. While some were frustrated by Facebook's failure to contain the spread of misinformation, others applauded the company for protecting freedom of speech on the platform.[30]
2016 U.S. Presidential Election
The 2016 U.S. presidential election sparked initial discussion of political content moderation after it was discovered that Russia had leveraged social media to influence the election's outcome. Misinformation campaigns spearheaded by Russia's Internet Research Agency (IRA) emphasized the need for greater controls on user content.[31]
2020 U.S. Presidential Election
The 2020 U.S. presidential election spurred further discussion of content moderation as social media sites gained influence leading up to election day. While social media may have increased civic engagement, especially among young people, misinformation and disinformation also spread. When President Trump posted unsubstantiated claims of election fraud on Twitter, the platform flagged the posts as potential misinformation. Despite these flags, such misinformation was viewed millions of times, blurring the line between facts and false narratives.[32] Another issue is the opaque decision-making behind removing or modifying content, as when Facebook removed "Stop the Steal," a group with over 300,000 members used to organize protests against the election results. Decisions like these culminated in the CEOs of Twitter and Facebook defending their content moderation practices at a congressional hearing after both platforms decided to curb the spread of claims about the son of Democratic presidential candidate Joe Biden.[33]
Human Moderation
Contract Labor
Tech companies predominantly use outsourced contract labor for moderation work. This allows companies to scale their operations globally at the expense of the workers, who are paid much less than salaried employees. At Cognizant, a contractor in Arizona supplying content moderation for Facebook, moderators made $15 an hour, dwarfed by the median Facebook employee salary of $240,000 annually.[7]
Psychological Toll
Moderators manually review some of the most disturbing content on the internet, often without the resiliency training and other services necessary to prepare them.[34] Moderators are also held to high standards, with Facebook setting a target of 95% accuracy on moderation decisions[7], creating a chaotic environment with high turnover as many moderators are unable to maintain this accuracy. Companies try to help moderators cope with "wellness time," meant to allow traumatized workers to take a break; at Cognizant, employees were allotted only nine minutes of wellness time per day, and the time was monitored to ensure workers used it as intended.[7] The long-term effects of exposure to disturbing content have led former moderators to develop PTSD-like symptoms. One example is Selena Scola, a former moderator for Facebook, who sued the company after developing PTSD, arguing that it lacked proper mental health services and monitoring for its content moderators.[35]
Causes
For content moderators, exposure to various forms of mentally abusive content, often illegal and inappropriate, is part of the daily job.[36] Viewing, filtering, and removing published content on social media platforms costs moderators both time and health. Common forms of harmful content include child pornography, abuse, self-harm, and violence of all kinds. Content moderators monitor such material and decide whether it is appropriate for other users. Because of the demands of the job, moderators are warned of the risks they face, ranging from emotional distress to deeper psychological effects caused by prolonged exposure or by the severity of the content.[37] While many can process milder material with ordinary emotional responses such as anger, sadness, sympathy, or conflict, others are scarred to the point of developing disorders that require complex recoveries.
Symptoms
Most traumatic cases involve stress-related symptoms. Common symptoms among moderators include post-traumatic stress disorder (PTSD), anxiety, and obsessive-compulsive disorder (OCD), as well as recurring sleep conditions such as insomnia and nightmare disorders. While most cases stem from exposure to inappropriate content, some are attributed to the accumulation of stress caused by professional secrecy. Cases are often prolonged because moderators hesitate to seek help, especially through therapy, and because of the nondisclosure agreements that come with the occupation.[38] After evaluation, patients describe being continuously bothered by the content they were required to watch, indicating that therapy alone, or medication on its own, is often insufficient to "cure" moderation-related trauma.
Treatment
Current treatments for disorders caused by content moderation include various forms of psychotherapy, such as cognitive behavioral therapy (CBT) and eye movement desensitization and reprocessing (EMDR). CBT has become one of the most common psychotherapies, as studies have shown it to be more effective than supportive techniques in treating PTSD.[39] EMDR is a rarer psychotherapy in which patients revisit disturbing material in brief intervals while focusing on external stimuli such as audio stimulation.[40] Some studies argue that the effects of EMDR are limited and that the eye movements may not be necessary for recovery from psychological disorders.[41] The US Department of Veterans Affairs, on the other hand, considers CBT and EMDR the most effective treatments for PTSD.[42] Another form of treatment is medication, such as selective serotonin re-uptake inhibitors (SSRIs), which block the re-uptake of serotonin, thereby increasing serotonin levels in individuals with depression and PTSD.[43] Side effects of SSRIs include agitation, anxiousness, indigestion, and dizziness.
With developing research, promising new treatments have emerged, such as MDMA-assisted therapy, ketamine infusions, and stellate ganglion block (SGB) injections.[44] These treatments remain limited by available resources, and more information on their effectiveness and side effects is needed before they can be established as standard treatments.
Case Study: Meta
Since the 2016 US presidential election, there has been pressure on Meta, a technology and social media company, to improve content moderation due to the high volume of "fake news" circulating on its platforms, such as Instagram and Facebook.[45] With this came an increase in Meta's content moderation, which in turn increased the psychological toll on both Meta employees and outsourced workers. In 2018, former Facebook content moderator Selena Scola sued Meta after developing PTSD from the disturbing content she moderated. Meta settled for a total of $52 million covering current and past content moderators.[46] As part of the settlement, Meta agreed to increase and improve therapy sessions and to roll out new content moderation tools, such as audio muting and black-and-white screens. Meta also agreed to inform employees how to report violations under several changes to workplace standards: it now requires moderators to undergo initial screening for emotional resiliency, advertises its support services to increase employee awareness, and encourages reports of any violation of Meta workplace standards by vendors. Even after these settlement agreements, Meta content moderators still believe the company lacks sufficient mental health support and resources, and Meta came under renewed pressure when it entered another lawsuit in 2022.[47]
Case Study: TikTok
Following the $52 million Meta settlement in 2020, the social media company TikTok came under pressure for similar problems with its content moderation practices. In March 2022, Ashley Velez and Reece Young, two contractors who had previously worked for TikTok as content moderators, filed a class-action lawsuit against TikTok and its parent company, ByteDance. Velez and Young alleged that they were subjected to graphic and objectionable content on a daily basis and suffered immense stress and psychological harm as a result. They also claimed that TikTok exacerbated these problems by imposing harsh productivity standards and quotas and was negligent in equipping moderators with appropriate tools to cope with the burdens of their work. In their lawsuit, Velez and Young sought financial compensation as well as the creation of a medical monitoring fund to help diagnose and treat moderators' mental health conditions.[48]

Beyond the mental toll on moderators, other criticisms have been leveled at TikTok, including the validity of its moderator training process. In interviews with Forbes, several former moderators stated that, as part of their training, they were given access to a shared spreadsheet called the "Daily Required Reading" (DRR), which contained hundreds of images of nude children. They argued that the document's availability to many employees at TikTok and Teleperformance, their contracting company, was alarming and that the sensitive data was being grossly mishandled. Interviewed moderators said the use of real-life examples during training was inappropriate and should be replaced with other teaching tools.[49] Both TikTok and Teleperformance declined to comment on the existence of the DRR, but moderators who left as recently as July 2022 claimed it was still in use. Because repeated exposure to graphic imagery can cause debilitating psychological trauma, critics argue that TikTok and other social media companies must provide better support to the content moderators who work long hours to keep the average user experience clean.
Case Study: Hong Kong
The 2019 Hong Kong protests began as citizens peacefully marching against an extradition bill but later turned violent. The protests were reported and interpreted very differently in different places, leading to different reactions to the events. Content moderation played a significant role in shaping these differences.
In mainland China, the protests were reported as a "rebellion" and an "insurgence with conspiracy",[50][51] while in the United States, ABC referred to them as "pro-democracy" protests[52] and a fight for freedom. CNN reported that some NBA fans supported the protests[53], which resembles a social norm campaign. There were also reports of Hong Kong police abuse[54], and some people in America called for action to help the protesters.[55]
However, certain viewpoints were hidden from the United States public. Facebook and Twitter were reported to be shaping the story through content moderation and deleted nearly a thousand Chinese accounts.[56] The removed accounts had posted anti-protest opinions, and the platforms claimed they were associated with the Chinese government.[57] Even though content moderation is not the primary reason some Americans strongly favor the protests, it nonetheless affects public opinion.
Automated Versus Human Moderation
While companies publicize their basic moderation principles, the balance between automated and human moderation is rarely discussed. Algorithmic decisions are driven largely by commercial incentives, creating transparency and accountability problems. These problems reflect the absence of a global standard for companies whose platforms host political discourse between users. Some argue for an open political debate to help determine the norms of acceptable online political communication.[58]
Future
The future of content moderation will include an increased focus on using AI and machine learning to automate moderation processes. Artificial neural networks and deep-learning technology have already helped automate tasks such as speech recognition, image classification, and natural language processing, lessening the burden on human moderators.[59] These applications of AI can make more precise moderation decisions than human moderators, but they are only as effective as the extent of their training. Currently there are not enough labeled examples of content to train AI models adequately.[59] This lack of data leaves AI models easily confused when content is presented in ways that differ from their training data. Current AI systems are also unable to comprehend the context and intent that may be crucial to deciding whether to remove a post. This can be seen in the gap between Facebook's automated detection of nudity and of hate speech, which are accurately detected 96% and 38% of the time, respectively.[60] Because of these limitations, a mix of automated and human moderation will likely remain the norm for some time.
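The machine learning approach described here typically means training a text classifier on labeled examples of policy violations. The sketch below uses scikit-learn with a handful of made-up examples standing in for the large labeled datasets the paragraph notes are still scarce; it is an illustration of the technique, not any platform's actual model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = violates policy, 0 = acceptable.
texts = [
    "I will hurt you if you post again",
    "you people do not deserve to live",
    "great video, thanks for sharing",
    "I disagree with your take, but fair point",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# New posts are scored and the higher-probability label is predicted.
print(model.predict(["thanks, this was really helpful"]))  # likely [0]
print(model.predict(["I will hurt you"]))                  # likely [1]
```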
If Section 230 of the Communications Decency Act is repealed, tech platforms could be held responsible for content posted by users and would therefore need to censor anything that could lead to legal issues. This would require intensive moderation and might lead many websites to get rid of user-generated content altogether.
Psychological Toll: Improvements in Content Moderation
As social media continues to grow into a large part of global media, the psychological toll on moderators is a rising concern, driven by their increasing exposure to inappropriate content. Prolonged viewing of disturbing content has led a growing number of moderators to develop PTSD and other psychological disorders. With treatment options that only diminish symptoms and a lack of mental health resources from employers, content moderators remain at high risk of developing these disorders. Social media companies such as Meta and TikTok have been pressured and sued over poor working conditions for content moderators. The resulting settlements have produced some progress in improving mental health resources, but they are only a start in the right direction.
These issues accentuate the immediate need for workable steps toward a solution. Proposed measures include raising awareness of the serious risks associated with the position (through improved nondisclosure agreements and recruitment screening) and decreasing the exhausting loads imposed on moderators.[46] The latter can be addressed by implementing more effective detection algorithms that flag inappropriate content automatically, reducing the volume and severity of content moderators must review. Ideally, severe and traumatic content would be filtered out by software using indicators such as tags, text, audio, and images in the media, leaving moderators with only the milder items flagged by other users as debatable and requiring human judgment.
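A minimal sketch of this triage idea follows: software removes what it can classify as severe with high confidence, routes borderline items to human review, and publishes the rest. The severity function and thresholds are illustrative assumptions, not a production filter.

```python
from typing import Callable, List, Tuple

def triage(
    posts: List[str],
    severity: Callable[[str], float],     # 0.0 (benign) to 1.0 (severe)
    auto_remove_at: float = 0.9,
    human_review_at: float = 0.4,
) -> Tuple[List[str], List[str], List[str]]:
    """Split posts into auto-removed, human-review, and published buckets."""
    auto_removed, human_review, published = [], [], []
    for post in posts:
        score = severity(post)
        if score >= auto_remove_at:
            auto_removed.append(post)      # never reaches a human moderator
        elif score >= human_review_at:
            human_review.append(post)      # milder, debatable cases only
        else:
            published.append(post)
    return auto_removed, human_review, published

# Toy severity function (keyword count standing in for a real text/image/audio model).
toy_severity = lambda text: min(1.0, text.lower().count("graphic") * 0.5)
print(triage(["graphic graphic content", "one graphic scene", "cat video"], toy_severity))
```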
Conclusions
Some generalizable lessons can be taken from the case of content moderation. One is how transparency affects user trust. The lack of transparency in moderation guidelines and enforcement is deeply frustrating for users and leads them to reach their own conclusions about why their posts are taken down, such as suspecting bias or assuming a false positive. Greater transparency would alleviate this problem, which is why many are calling on tech companies to adopt guidelines such as the Santa Clara Principles to make the moderation process more transparent. Others can also learn from tech companies' use of contract labor: for a hazardous job such as content moderation, low wages and insufficient benefits place a large financial burden on workers who develop mental health conditions from their time as moderators.
Chapter Extension
Extensions to this casebook chapter could explore in more detail the AI and machine learning technologies currently in use, the presence of bias in the moderation process, and how the phenomenon of fake news will change the moderation process.
References
- ↑ Content Moderation[1]
- ↑ Section 230 of the Communications Decency Act[2]
- ↑ TikTok Privacy Policy[3]
- ↑ Unpacking TikTok, Mobile Apps and National Security Risks[4]
- ↑ a b c d e Six Types of Content Moderation You Need to Know About[5]
- ↑ How does Facebook moderate its extreme content[6]
- ↑ a b c d The Secret Lives of Facebook Moderators in America[7]
- ↑ Facebook has released Instagram content moderation data for the first time[8]
- ↑ a b Facebooks Most Recent Transparency Report Demonstrates Pitfalls of Automated Content Moderation[9]
- ↑ Twitter shares content moderation plans, highlights contrast with Facebook[10]
- ↑ The Complex Debate Over Silicon Valley’s Embrace of Content Moderation[11]
- ↑ Reddit Security Report -- October 30, 2019[12]
- ↑ Full AutoModerator Documentation[13]
- ↑ Reddit is Finally Facing its Legacy of Racism[14]
- ↑ Reddit Bans r/The_Donald and r/ChapoTrapHouse as Part of a Major Expansion of its Rules[15]
- ↑ YouTube Doesn't Know Where Its Own Line Is[16]
- ↑ The Yellow $: a comprehensive history of demonetization and YouTube’s war with creators[17]
- ↑ Community Guidelines Strike Basics[18]
- ↑ Content Moderation at Scale Especially Doesn't Work when you Hide All the Rules[19]
- ↑ Twitch Apologizes for Recent DMCA Takedowns, but has no Real Solutions Yet[20]
- ↑ Twitch's Continuous Struggle With Moderation Shines a Light on Platform's Faults[21]
- ↑ TikTok Reveals Content Moderation Stats Amid Growing Global Pressure[22]
- ↑ Invisible Censorship: TikTok told Moderators to Suppress Posts by "Ugly" People and the Poor to Attract New Users[23]
- ↑ Snapchat Emphasizes Human Content Moderation in App Redesign[24]
- ↑ Content Moderation Issues are Taking Center Stage in The Presidential Election Campaign[25]
- ↑ Protecting Americans from Dangerous Algorithms Act[26]
- ↑ Senator Hawley Introduces Legislation to Amend Section 230 Immunity for Big Tech Companies[27]
- ↑ Executive Order on Preventing Online Censorship[28]
- ↑ UN Expert: Content moderation should not trample free speech[29]
- ↑ The Thorny Problem of Content Moderation and Bias[30]
- ↑ BBC. (2018, December 17). Russia 'meddled in all big social media' around US election. BBC News. https://www.bbc.com/news/technology-46590890.
- ↑ Hinckle, M., & Moore, H. (2020, November 3). Social Media's Impact on the 2020 Presidential Election: The Good, the Bad, and the Ugly. https://research.umd.edu/news/news_story.php?id=13541.
- ↑ Bose, N., & Bartz, D. (2020, November 17). 'More power than traditional media': Facebook, Twitter policies attacked. Reuters. https://www.reuters.com/article/usa-tech-senate/more-power-than-traditional-media-facebook-twitter-policies-attacked-idUSKBN27X186.
- ↑ Underpaid and overburdened: the life of a Facebook moderator[31]
- ↑ Content Moderator Sues Facebook, Says Job Gave Her PTSD[32]
- ↑ Brown, R. (2020, May 11). What is Social Media Content Moderation and how Moderation Companies use various Techniques to Moderate Contents?. Becoming Human.https://becominghuman.ai/what-is-social-media-content-moderation-and-how-moderation-companies-use-various-techniques-to-a0e38bb81162
- ↑ Crossfield, J. (n.d.). The Hidden Consequences of Moderating Social Media's Dark Side. Content Marketing Institute.https://contentmarketinginstitute.com/cco-digital/july-2019/social-media-moderators-stress/
- ↑ Benjelloun, Roukaya & Otheman, Yassine. (2020). Psychological distress in a social media content moderator: A case report. Archives of Psychiatry and Mental Health, 4. 073-075. 10.29328/journal.apmh.1001024
- ↑ Mendes, D. D., Mello, M. F., Ventura, P., De Medeiros Passarela, C., & De Jesus Mari, J. (2008). A Systematic Review on the Effectiveness of Cognitive Behavioral Therapy for Posttraumatic Stress Disorder. The International Journal of Psychiatry in Medicine, 38(3), 241–259.https://doi.org/10.2190/PM.38.3.b
- ↑ EMDR International Association. (n.d.). About EMDR Therapy.https://www.emdria.org/about-emdr-therapy/
- ↑ Lohr, J. M., Tolin, D. F., Lilienfeld, S. O. (1998). Efficacy of eye movement desensitization and reprocessing: Implications for behavior therapy. Behavior Therapy, 29(1).123-256.https://doi.org/10.1016/S0005-7894(98)80035-X
- ↑ U.S. Department of Veteran Affairs. (n.d.). PTSD: National Center for PTSD.https://www.ptsd.va.gov/professional/treat/txessentials/overview_therapy.asp
- ↑ Chu, A., Wadhwa R. (2022, May 8). Selective Serotonin Reuptake Inhibitors. National Library of Medicine.https://www.ncbi.nlm.nih.gov/books/NBK554406/
- ↑ Jain, S. (2021, July 1). The Latest in PTSD Treatment. Psychology Today. https://www.psychologytoday.com/us/blog/the-aftermath-trauma/202107/the-latest-in-ptsd-treatment
- ↑ Solon, O. (2016, Nov 10). Facebook's failure: did fake news and polarized politics get Trump elected?. The Guardian.https://www.theguardian.com/technology/2016/nov/10/facebook-fake-news-election-conspiracy-theories
- ↑ a b Newton, C. (2020, May 12). Facebook will pay $52 million in settlement with moderators who developed PTSD on the job. The Verge. https://www.theverge.com/2020/5/12/21255870/facebook-content-moderator-settlement-scola-ptsd-mental-health
- ↑ Wong, Q. (2022, May 10). Facebook Parent Meta Sued in Kenya by Former Content Moderator. CNET. https://www.cnet.com/news/social-media/facebook-parent-meta-sued-in-kenya-by-former-content-moderator/
- ↑ Allyn, B. (2022, Mar 24). Former TikTok moderators sue over emotional toll of 'extremely disturbing' videos. NPR. https://www.npr.org/2022/03/24/1088343332/tiktok-lawsuit-content-moderators
- ↑ Levine, A. S. (2022, Aug 4). TikTok Moderators Are Being Trained Using Graphic Images of Child Sexual Abuse. Forbes. https://www.forbes.com/sites/alexandralevine/2022/08/04/tiktok-is-storing-uncensored-images-of-child-sexual-abuse-and-using-them-to-train-moderators/?sh=72c19eee5acb
- ↑ Truth about US behind HK Protest[33]
- ↑ Reinforcement Has Arrived in HK against the Rebellion[34]
- ↑ Hong Kong pro-democracy protests continue[35]
- ↑ NBA fans protest China with pro-Hong Kong T-shirt giveaway in Los Angeles[36]
- ↑ Hong Kong Police Crack Down on Student Protesters[37]
- ↑ Protect the rights of people in Hong Kong[38]
- ↑ Twitter and Facebook bans Chinese accounts amidst Hong Kong protests[39]
- ↑ Hong Kong protests: Twitter and Facebook remove Chinese accounts[40]
- ↑ Reich, R., & Schaake, M. (2020). Election 2020: Content Moderation and Accountability. Stanford, CA: Stanford University.
- ↑ a b Human Help Wanted: Why AI Is Terrible at Content Moderation[41]
- ↑ The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People[42]