Professionalism/Thorlaug Agustsdottir: Free Speech and Abuse on Social Media


In December 2012, an Icelandic woman named Thorlaug Agustsdottir came across a Facebook page titled "Men are better than women" ("Karlar eru betri en konur"), and after an argument with a user whom she described as a "troll," she soon saw her own face on the page. Her profile photo had been doctored to make it look as though she had been beaten, with the words "Women are like grass, they need to be beaten/cut regularly" ("Konur eru eins og gras, það þarf að slá þær reglulega") pasted on top of it.[1] Below the image was the caption "The moral duty of every man" ("Siðferðisleg skylda hvers karlmanns"). As the world's biggest social network, Facebook holds a contentious place in the ongoing debate about the role social media plays in how information is spread around the world today.[2]


Background

The Page

The public Facebook group contained a large amount of hate speech and misogynistic comments, with images promoting both male superiority and violence toward women. One of the images, according to Agustsdottir, "was of a young woman naked chained to pipes or an oven in what looked like a concrete basement, all bruised and bloody. She looked with a horrible broken look at whoever was taking the pic of her curled up naked." The About section of the page went into further detail about its ideals, claiming that women are "stupid and inferior" and have "[never] invented nor discovered anything, except original sin and sandwiches." The description went on to state that it had been scientifically proven that women have smaller brains and should therefore be used as sex toys.[3]

Many people commented on this content, and the perpetrators often responded in classic troll-like fashion. Agustsdottir engaged with them, using humor to make her point, exchanging memes and trying to appeal to their sensibilities. In retaliation, on December 28, 2012, the trolls took Agustsdottir's profile picture and doctored it to make it look as though she had been beaten, with black eyes and a bloody nose, under a caption in Icelandic: "Women are like grass, they need to be whacked regularly". The picture was immediately reported to Facebook by Agustsdottir and scores of other people, but Facebook moderators rejected all of the reports, stating that the doctored profile picture did not violate Facebook's Terms of Service.

Reporting the Incident

After the incident, Agustsdottir reported the image to Facebook, tagging it as "graphic content." A few hours later she received a notice that the image did not meet the criteria for removal. After reporting the image several more times over a period of more than 24 hours, citing "copyright material," "harassment," and "graphic content" as reasons for removal, Agustsdottir continued to receive the same message that the picture did not violate Facebook's Terms of Service (ToS). She claims to know of at least thirty other people who reported the page to Facebook, and she has posted several screenshots confirming Facebook's refusal to remove the image.[4]

Agustsdottir initially reported the incident to the Icelandic police and provided information she had obtained by doxxing the male teenage perpetrators behind the page. She decided not to press charges, but reported the incident to Icelandic Child Protective Services after receiving information on the perpetrator's mental health status, and because she felt Facebook was the guilty party that had broken the promises made to end users like herself in its Terms of Service.

Facebook's Response

“We take our Statement of Rights and Responsibilities very seriously and react quickly to remove reported content that violates our policies. In general, attempts at humor, even disgusting and distasteful ones, do not violate our policies. When real threats or statements of hate are made, however, we will remove them. We encourage people to report anything they feel violates our policies using the report links located throughout the site.”

Facebook's moderation team faces the problem of deciding whether a given piece of content is offensive and inappropriate.[5] Its first response in this case favored freedom of speech: content that offends some people may be comical to others, and Facebook originally held that it was not its place to decide which was which. A representative told the BBC, "It is very important to point out that what one person finds offensive another can find entertaining - just as telling a rude joke won't get you thrown out of your local pub, it won't get you thrown off of Facebook."

After Wired published an article about the incident,[6] a Facebook spokesman reversed the company's stance on imagery that promoted violence toward women, stating that a photo it had previously deemed acceptable for the social networking site "should have been taken down when it was reported to us and we apologize for the mistake."

Agustsdottir herself, however, never received a personal reply from Facebook about its staff's repeated refusals to take down a picture that was clearly a doctored version of her profile photo; the only messages she received through Facebook's report system stated that her claims had been rejected, so she continued to ask publicly for answers.

She wrote an open letter to Facebook and appeared on Danish news broadcasts, syndicated across Europe and South America, demanding a response from Facebook as to why staff did not take down a picture that violated several conditions set forth in the site's Terms of Service. By this point the story was being picked up by media around the world, although it received notably little coverage from US news outlets.

Two weeks later Agustsdottir finally received contact information for Facebook Scandinavia and obtained a personal apology from its representative, although she received no explanation of what had actually happened in-house or of what procedures Facebook uses to determine what violates its policies.

Moderation Difficulties with User-Generated Content

Many websites, including Facebook, strive to provide the best possible environment for their users so that people keep coming back to the site. They want visitors to feel safe and not be exposed to inappropriate content. On May 3, 2017, Facebook announced a plan to add 3,000 more people to its operations team to screen for harmful videos and other posts and remove them more quickly. Mark Zuckerberg said that this would be in addition to the 4,500 people already working in this capacity. It is still not clear whether these would be full-time employees or contractors.[7]

How Content is Monitored

Most of Facebook's moderation is done by recent college graduates. These workers sort through inappropriate and disturbing posts one by one. Moderators are in charge of interpreting Facebook's policies and applying them to content shared on the website. The policies are presented to each user upon creating an account and are always available through Facebook's Community Standards page. In the case of bullying or harassment, the policy supports the removal of "pages that identify and shame private individuals", "Images altered to degrade private individuals", "Photos or videos of physical bullying posted to shame the victim", "Sharing personal information to blackmail or harass people", and "Repeatedly targeting other people with unwanted friend requests or messages." Similar rules are in place for criminal activity, sexual violence, and more. Moderators are tasked with interpreting posts and deciding whether they fall into an acceptable or a restricted category. This does not always produce a result that satisfies users, because everything is open to interpretation.[8][9]

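
The interpretive gap described above can be illustrated with a minimal, purely hypothetical sketch in Python. It is not Facebook's actual tooling; the policy categories, keywords, and function name are invented for this example. It shows why a naive, rule-based pass over reported posts cannot replace the judgment calls moderators make:

```python
# Hypothetical, simplified sketch of a rule-based triage step for user reports.
# Illustrative only: Facebook's real moderation pipeline is not public, and the
# categories and keywords below are assumptions made up for this example.

POLICY_KEYWORDS = {
    "bullying_or_harassment": ["shame", "blackmail", "doctored", "beaten"],
    "graphic_violence": ["blood", "bruised", "decapitat", "shot"],
    "hate_speech": ["inferior", "sex toys"],
}


def triage_report(post_text: str, report_reason: str) -> list[str]:
    """Return the policy categories a reported post appears to match."""
    text = post_text.lower()
    matched = [
        category
        for category, keywords in POLICY_KEYWORDS.items()
        if any(word in text for word in keywords)
    ]
    # A keyword pass cannot tell that an image is a doctored photo of a real
    # person, or whether a caption is "humor" or a threat; anything it misses
    # falls back to the kind of human judgment call described above.
    return matched or ["needs_human_review:" + report_reason]


if __name__ == "__main__":
    # The caption from the Agustsdottir case slips straight past these rules.
    print(triage_report(
        "Women are like grass, they need to be whacked regularly",
        "harassment",
    ))
```

Run on the caption from the Agustsdottir incident, the keyword rules match nothing and the report simply falls back to human review, which is exactly where the disagreement over "humor" versus harassment arises.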

Mental Health of Monitors

Another serious difficulty with monitoring content is the mental health effects experienced by moderators. Many moderators experience symptoms similar to those of post-traumatic stress disorder (PTSD), and most quit within three to six months of starting. The problem with Facebook's efforts to police questionable content is that the technology is insufficient: artificial intelligence is not yet up to the task, so Facebook is still trying to figure out how to make moderation work at scale. Some scanning software has been developed, but the technology is not ready for the large-scale use Facebook would require. Guardware, for example, produced a flash drive that could scan for inappropriate images, but numerous false positives were reported. Reliable monitoring technology would also ease Facebook's growing need for content moderators, saving it from employing 3,000 more people.

Legal Concerns

The ability for services to accept user-generated content opens up a number of legal concerns: depending on local laws, the operator of these services may be liable for the actions of its users. In the United States, the "Section 230" exemption of the Communications Decency Act states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This effectively provides general immunity for websites that host user-generated content that is defamatory, deceptive, or otherwise harmful, even if the operator knows that the third-party content is harmful and refuses to take it down. An exception to the general rule is if a website promises to take down content and then fails to do so. Facebook has come under scrutiny under this exception.[10]

Similar Cases

Facebook has made changes throughout its lifespan, from its user interface, to adding and removing features, to policy changes. The user-generated content of some user pages, public pages, and groups has been criticized for promoting or dwelling upon controversial and often divisive topics.

String of Suicides on Facebook Live

On April 26, 2017, a 49-year-old Alabama man live-streamed his suicide on Facebook because he was distraught over a break-up. James M. Jeffrey of Robertsdale was in the middle of a broadcast when he suddenly took his gun and shot himself in the head. The sheriff's office said that Jeffrey's suicide was viewed more than 1,000 times before Facebook finally removed it. Jeffrey was one of a string of people to use Facebook's live streaming to film a suicide; a Thai man, for example, filmed himself killing his 11-month-old daughter in two video clips posted on Facebook before committing suicide.[11]

Shooting of Robert Godwin

 
On April 16, 2017, 74-year-old Robert Godwin Sr. was shot and killed while walking on a sidewalk in the Glenville neighborhood of Cleveland, Ohio. The suspect, Steve Stephens, posted a cell phone video of the shooting to his Facebook account, leading many media outlets, both during the manhunt and afterward, to dub Stephens the "Facebook killer." The graphic video of Godwin's murder remained publicly accessible on Stephens's Facebook account for more than two hours. The delay renewed criticism of Facebook over its moderation of offensive content, in particular public posts of video and other content related to violent crimes.

"We have a lot more to do here. We're reminded of this this week by the tragedy in Cleveland. Our hearts go out to the family and friends of Robert Godwin Sr., and we have a lot more work and we will keep doing all we can to prevent tragedies like this from happening."

Facebook continues to struggle to moderate content as new features such as live streaming are adopted.[12]

Violent Content from Terrorist Organizations

In early 2013, Facebook was criticized for allowing users to upload and share videos containing violent content (e.g., footage of people being decapitated by terrorists). Facebook originally banned such material from appearing on the site. Around the same time, criticism of Facebook's stance on breastfeeding photos began to arise. In October 2013, Facebook stated that it would continue its ban on breastfeeding photos but would now allow clips of extreme violence, claiming that users should be able to watch and condemn, but not celebrate or embrace, such acts. The move was criticized, with concern expressed about the potential for long-term psychological damage to viewers, particularly younger Facebook users. Facebook later reversed its decision once again and stated that such material must in the future carry a warning message.[13]

Failure to Remove Sexualised Images of Children

In 2017, the BBC found Facebook groups in which users were discussing and swapping what appeared to be child abuse material. The BBC reported dozens of the photos to Facebook, but only 20% were removed. After the BBC requested an interview, Facebook asked it to provide examples of the images; when the BBC did so, Facebook immediately cancelled the meeting and involved the authorities, stating, "It is against the law for anyone to distribute images of child exploitation."

The BBC tested Facebook's claims by using the report button on 100 posts that appeared to break the guidelines; only 18 of the reported posts were removed. Facebook's guidelines also forbid convicted sex offenders from having accounts, but the BBC found five convicted paedophiles with profiles. All were reported, but none were removed.

As a result, Ann Longfield, the Children's Commissioner for England, said she was disturbed and disappointed. The National Society for the Prevention of Cruelty to Children (NSPCC) also voiced concern:

"Facebook's failure to remove illegal content from its website is appalling and violates the agreements they have in place to protect children. It also raises the question of what content they consider to be inappropriate and dangerous to children."

Facebook later stated that it had reviewed and removed all of the reported items.[14]

Conclusion

Mark Zuckerberg stated that Facebook is already working with local community groups and law enforcement to try to help those who post about harming themselves or others on the social networking site, and that he plans to make it simpler to do so in the future. As it stands, regular users need to flag something as inappropriate before a moderator will view it and decide whether it should be removed and whether further action is needed. In the future, Facebook intends to develop automated tools able to detect what is going on in a video and potentially remove an offensive post before a human user or moderator ever has to see it. But this approach can also backfire. In 2016, Facebook fired the human editors who ran the Trending news section of its homepage, believing it could replace the contracted workers with an AI system. Within days of the transition, fake news found its way into the Trending section, suggesting that AI moderation is not yet a viable option.

In the meantime, Facebook offers resources to help users moderate their own experience, such as blocking particular words, profanity-blocking settings, and the ability to block other users. Users can also choose who can view and post on their timelines. These personal moderation options do not solve the overall problem of removing inappropriate content, but they can help users restrict what they see and what appears on their own accounts.

References

  1. Hudson, L. (2013, January 04). Facebook’s Questionable Policy on Violent Content Toward Women. Wired. Retrieved from https://www.wired.com/2013/01/facebook-violence-women/
  2. Reynisson, S. (2013, January 01). „Það þarf að slá þær reglulega“. DV. Retrieved from http://www.dv.is/frettir/2013/1/1/thetta-er-langt-yfir-strikid/
  3. Ritstjorn (2012, December 12). Hat­urs­ár­óð­ur rek­inn gegn kon­um á ís­lenskr­i Fac­e­bo­ok-síðu. DV. Retrieved from http://www.dv.is/frettir/2012/12/12/hatursarodur-rekinn-gegn-konum-islenskri-facebook-sidu/
  4. Reynisson, S. (2013, January 01). „Það þarf að slá þær reglulega" (archived copy). DV. Retrieved from https://web.archive.org/web/20130106111655/https://www.dv.is/frettir/2013/1/1/thetta-er-langt-yfir-strikid/
  5. Engel, K. (2015, April 15). Moderating Facebook: The Dark Side of Social Networking. Who is Hosting This. Retrieved from http://www.whoishostingthis.com/blog/2015/04/15/moderating-facebook/
  6. Hudson, L. (2013, January 04). Facebook's Questionable Policy on Violent Content Toward Women. Wired. Retrieved from https://www.wired.com/2013/01/facebook-violence-women/
  7. Facebook. (2017). Community Standards. Retrieved from https://www.facebook.com/communitystandards
  8. Shaw, K. (2007, July 25). USB device uses image analysis to scan for inappropriate images. Network World. Retrieved from http://www.networkworld.com/article/2293058/lan-wan/usb-device-uses-image-analysis-to-scan-for-inappropriate-images.html
  9. Chen, A. (2014, October 23). The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed. Wired. Retrieved from https://www.wired.com/2014/10/content-moderation/
  10. Doran, A. (2016, August 9). The Test of Time: Section 230 of the Communications Decency Act Turns 20. Media Law Monitor. Retrieved from http://www.medialawmonitor.com/2016/08/the-test-of-time-%E2%80%A8section-230-of-the-communications-decency-act-turns-20/
  11. Specker, L. (2017, April 26). Baldwin County man committed suicide on Facebook Live, sheriff's office reports. AL.com. Retrieved from http://www.al.com/news/index.ssf/2017/04/sheriffs_office_baldwin_county.html
  12. Heisig, E. (2017, April 16). Facebook shooting victim's son says Cleveland man was father of 10, grandfather of 14. Cleveland.com. Retrieved from http://www.cleveland.com/crime/index.ssf/2017/04/facebook_live_shooting_victims.html
  13. Goel, V. (2015, March 16). Facebook Clarifies Rules on What It Bans and Why. The New York Times. Retrieved from https://bits.blogs.nytimes.com/2015/03/16/facebook-explains-what-it-bans-and-why/
  14. Crawford, A. (2017, March 07). Facebook failed to remove sexualised images of children. BBC News. Retrieved from http://www.bbc.com/news/technology-39187929