Lentis/Fake Users


Since the advent of major online consumer review platforms, the Internet has seen a boom in "fake users": fake accounts that imitate real users on internet platforms to generate artificial traffic and popularity for personal gain. While individuals are typically hired to write fake reviews for small businesses, fake-user schemes can also involve bots and click farms that generate views, likes, posts, ad clicks, and shares on websites. More recently, fake users have been used to manipulate social media analytics used to predict events such as movie box-office revenues, stock market fluctuations, and electoral results.[1]

Legality of Fake Reviews

Consumers are protected at the national level from astroturfing, the practice of posting fake reviews to create a false impression of grassroots support, by the Federal Trade Commission (FTC).[2] The FTC's mission is to "rid the marketplace of unfair and deceptive marketing."[3] According to the FTC, paid reviewers cannot publish false claims, online endorsements cannot be purposely misleading, and any connection between a company and a reviewer must be disclosed.[3] Consequences of paying for fake reviews include hefty fines and even the shutdown of the business responsible.[3] In 2013, 19 small businesses were fined over $350,000 collectively after being found guilty of writing fake reviews on platforms such as Yelp and TripAdvisor.[2]

Online review platforms such as Yelp and Amazon are also cracking down on fake reviews. In 2012, Yelp began flagging offending businesses using Consumer Alerts: businesses that have attempted to buy positive reviews are flagged with a "clear warning on the front of the offending business' Yelp page, and link to relevant evidence."[4] Although the FTC and online review platforms both work to discourage astroturfing, small businesses continue to use this deceptive tactic in hopes of building a stronger reputation; for many of them, the economic and social incentives outweigh the expected cost of being caught. This raises the question: would negative social incentives, in addition to existing financial penalties, more effectively reduce fake reviews online?


Small Businesses

Small and independent businesses that face intense competition are much more likely to take advantage of fake reviews on platforms such as Yelp or Craigslist. Researchers have found that fake reviews and other deceptive practices strongly influence businesses' sales. Limited advertising budgets push these businesses toward fake customer feedback, in hopes of attaining "peer-to-peer" or viral marketing, until they can afford legitimate advertising.[5] The smaller the company, the more likely it is to engage in review fraud; once its reputation builds, the company tends to hold itself to a higher standard.

Being caught using fake users damages a business's reputation and incurs legal costs. Lifestyle Lift, a cosmetic surgery company, ordered its employees to write positive reviews of its face-lift procedure on various websites; it later agreed to pay $300,000 in penalties to settle the resulting case.[6]

One way small businesses solicit fake reviews is by posting Craigslist ads asking individuals to write positive Yelp reviews about their companies, with payment ranging anywhere from $25 to $495. One such ad, posted in the "writing/editing" jobs section, asked individuals to write "well-written, 5-star reviews."[7] These ads are often anonymous and stress the importance of secrecy to protect the business' identity.

Large Internet Platforms

Large platforms such as Yelp, Facebook, Twitter, and Amazon all host large numbers of fake users. To maintain widespread usage, these companies place great value on preserving their reputations and do so by stopping fake users where they can. In response to foreign interference in the 2016 US presidential election, platforms have put more resources into fact-checking. Facebook released a statement outlining a plan to combat fake accounts spreading misinformation, promising to hire 10,000 people for ad review and to improve its AI to detect more content violations and fake accounts.[8]

The challenge for large platforms is to limit fake users without restricting real users from sharing information. When asked about stopping foreigners from meddling in US social issues, Facebook's stance was that the right to speak out on global issues that cross borders is an important principle.[9] Some organizations depend on the ability to communicate and advertise their views across the world; while Facebook might not agree with those positions, it believes in the right to share them.


Information Asymmetry

Fake users are designed to deceive and manipulate customers, or the regular users of a website, creating information asymmetry: consumers have less accurate information about products, about which businesses are respected, and about which topics actually command attention on social media. Online retail has been growing steadily, and about 80% of customers report reading product reviews before making purchases, making it easy to influence decisions with reviews.[10] Consumers rely heavily on reviews as unbiased sources when deciding which products to buy.[11] When businesses pay individuals to write fake reviews, that trust is seriously breached and the information asymmetry persists.

A reported two-thirds of Americans rely on social media for news.[12] This amplifies the influence that misleading stories generated by fake users can have on the public; combined with targeted advertising, such stories can be used to manipulate specific groups of consumers.

Case Studies


Reddit

Reddit, deemed the "front page of the internet," is a prime example of a startup that used fake users to build its own reputation. According to co-founder Steve Huffman, Reddit populated the site with content from numerous fake accounts in its early days. By doing this, Reddit was able to shape the site the way its founders wanted, and as the real user base grew, the fake accounts faded away.[13] Huffman and fellow co-founder Alexis Ohanian filled the site with high-quality content and articles they would want to read themselves, which "set the tone" for the site.[14]

This deceptive marketing tactic ultimately helped Reddit acquire 35 million real users.[14] Had Huffman and Ohanian decided against creating fake accounts, Reddit would likely not have become as successful as it has; after all, who would create an account on a website with no content or other people to interact with? Reddit's success suggests that employing fake users is not inherently immoral: Huffman and Ohanian had a vision for their website and saw fake accounts as the most efficient way to realize it.

This illustrates how young startups can effectively use fake reviews, or fake users, to build their reputations, then hold themselves to a higher standard once they grow into sizable companies. If a small team of co-founders could accomplish this, how do we know that anything online is real?[13]

Yelp Reviews

Yelp is an online review website where potential customers can see ratings and reviews of businesses before visiting them. Yelp has struggled with fake users ever since it was founded in 2004, as many reviews on the site were "fluff" written to make businesses look better. A 2015 study found that almost 20% of all reviews were fake, and users began to realize that what they were seeing was not always accurate. Yelp has since implemented new algorithms and protocols to detect fake reviews, and the site's reputation has improved.[4]

Before Yelp intervened, Yelp reviews were effectively a business of their own: small businesses seeking to boost their ratings and reputations posted Craigslist ads soliciting good reviews, and prolific Yelp reviewers scoured Craigslist for these ads, earning as much as $25 per review according to one example ad.[7]
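Yelp does not disclose how its detection algorithms work, but a toy heuristic, using entirely invented signals and thresholds, illustrates the kinds of features (extreme ratings, superlative-heavy wording, duplicated text) such automated filters might weigh:

```python
# Toy illustration only: real platforms' detection systems are proprietary
# and far more sophisticated. All signals and thresholds here are invented.
SUPERLATIVES = {"best", "amazing", "perfect", "incredible", "awesome"}

def suspicion_score(text, rating, prior_reviews):
    """Return a crude 0-3 suspicion score for a single review."""
    words = text.lower().split()
    score = 0
    if rating in (1, 5):            # extreme ratings are common in paid reviews
        score += 1
    if words and sum(w in SUPERLATIVES for w in words) / len(words) > 0.2:
        score += 1                  # unusually superlative-heavy language
    if text in prior_reviews:       # exact duplicate of an earlier review
        score += 1
    return score

fake = "best pizza ever amazing perfect incredible"
print(suspicion_score(fake, 5, [fake]))                      # 3 (highly suspicious)
print(suspicion_score("Decent food, slow service.", 3, []))  # 0
```

Real systems combine many more signals, such as reviewer account age, IP addresses, and posting patterns, but the principle of scoring reviews against statistical red flags is the same.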

2016 United States Presidential Election

The alleged Russian interference in the 2016 US presidential election sparked national controversy and prompted a strong response to fake users by social media platforms. According to Facebook, the Internet Research Agency, a Russian company reportedly tied to the Kremlin, created hundreds of accounts and posted tens of thousands of pieces of divisive content in the period leading up to August 2017.[15] Some consider the influence of posts from fake social media accounts a deciding factor in the election's outcome.

Impact and Social Implications

Violation of Trust

The power of fake users stems from the subconscious trust users place in their internet "peers." Internet users assume their peers have nothing to gain from lying and so value their feedback, not knowing that much of it is paid for by companies. The average customer must use reasoning to judge the legitimacy of online reviews, asking: "Do the reviews for this business seem real?" or "Can this post really have this many likes from real people?" The same erosion applies to social media sites, where likes and follower counts lose their meaning when anyone can buy the appearance of popularity.

Monitoring and Social Conflicts

Regulating fake users raises the larger issues of privacy and free speech, two hallmark values of the internet. Fake reviewers, and even the businesses soliciting fake reviews, generally want to conceal their identities; public disclosure could result in harassment and a damaged reputation.

Preventing bots and spammers while preserving user anonymity is a challenging problem. Researchers can develop techniques to weed out AI-generated text, but bots can in turn be trained to become more sophisticated. If bots can evade advanced detection algorithms, how could someone scrolling through a feed be expected to know what is genuine and what is generated?
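One detection technique that does not depend on knowing a user's identity is near-duplicate text analysis: coordinated bot campaigns often reuse a template with small edits. A minimal sketch (the example posts are invented) compares posts by the Jaccard similarity of their word shingles:

```python
def shingles(text, k=3):
    """Set of overlapping k-word windows ("shingles") from text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity (intersection over union); 1.0 means identical sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Template reuse keeps pairwise similarity suspiciously high between bot posts,
# while unrelated human posts share almost no shingles.
post1 = "repeal net neutrality now it hurts innovation and small business"
post2 = "repeal net neutrality now it hurts innovation and local business"
post3 = "great hiking weather this weekend in the mountains"

print(jaccard(shingles(post1), shingles(post2)))  # high (near-duplicate)
print(jaccard(shingles(post1), shingles(post3)))  # 0.0 (unrelated)
```

Flagging clusters of highly similar posts catches naive copy-paste campaigns, but, as noted above, more sophisticated bots can paraphrase each post enough to defeat this kind of check.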

Natural language processing has made it possible to auto-generate entire news stories that humans cannot distinguish from real ones. Posts also cannot be censored purely by content, since removing disagreeable views runs against the values of free speech; only in extreme cases, such as malicious and targeted abuse, do social media platforms censor content.[9]

Growing Influences

The issue of fake users has significant consequences that are not restricted to the cyber realm. Russia has been accused of interfering in the 2016 US presidential election, and its interference did not stop with the hacking and leaking of Democratic emails used to undermine Clinton's legitimacy; Russia turned Facebook and Twitter into engines of deception and propaganda.[16] Accounts created by a Russian company linked to the Kremlin were used to buy $100,000 in ads pushing divisive issues. On Twitter, thousands of fake accounts, many of them bots, spread anti-Clinton messages, and on Election Day a group of Twitter bots sent out the hashtag #WarAgainstDemocrats over 1,700 times.[16] More recently, Russian bots have posted on social media in support of the net neutrality repeal, an issue with widespread economic repercussions as large ISPs stand to profit from the repeal. The FCC stated that over 7.5 million of the 22 million comments it received were "bogus pro-neutrality comments."[17]


  1. Metaxas, P. T., & Mustafaraj, E. (2012). Social Media and the Elections. Science, 338(6106), 472–473. https://doi.org/10.1126/SCIENCE.1230456
  2. Kent, K. (2014). The Legal Risks of Writing Positive Fake Reviews: Don’t Astroturf Your Online Business Reputation. ReviewTrackers. https://www.reviewtrackers.com/legal-risks-writing-positive-fake-astroturf-online-business-reputation/
  3. Warner, D. (2015). The Legal Lowdown on Fake or Paid Reviews. TechCo. https://tech.co/legal-lowdown-fake-paid-reviews-2015-04
  4. Eater. (2016). Yelp Goes Undercover to Crack Down on Fake Reviews. https://www.eater.com/2016/5/3/11578978/yelp-fake-reviews
  5. Harvard Business Review. (2013). Research: Underdog Businesses Are More Likely to Post Fake Yelp Reviews. https://hbr.org/2013/08/research-underdog-businesses-a
  6. Miller, C. (2009). Cosmetic Surgery Company Settles Case of Faked Reviews. Nytimes.com. http://www.nytimes.com/2009/07/15/technology/internet/15lift.html
  7. Willett, M. (2013). A Craigslist Ad Is Offering $25 For Fake Yelp Reviews Of NYC Restaurants. Business Insider. http://www.businessinsider.com/craigslist-ad-for-fake-yelp-reviews-2013-5
  8. Facebook.com. (2017). What is our action plan against foreign interference? | Facebook Help Centre | Facebook. https://www.facebook.com/help/1991443604424859
  9. Schrage, E. (2017). Hard Questions: Russian Ads Delivered to Congress. https://newsroom.fb.com/news/2017/10/hard-questions-russian-ads-delivered-to-congress/
  10. V12data.com. (2017). 97% Say Customer Reviews Influence Their Purchase Decision | V12Data. http://www.v12data.com/blog/97-say-customer-reviews-influence-their-purchase-decision/
  11. Malbon, J. (2013). Taking Fake Online Consumer Reviews Seriously. https://link.springer.com/content/pdf/10.1007%2Fs10603-012-9216-7.pdf
  12. Shearer, E. and Gottfried, J. (2017). News Use Across Social Media Platforms 2017. Pew Research Center's Journalism Project. http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/
  13. Motherboard. (2012). How Reddit Got Huge: Tons of Fake Accounts. https://motherboard.vice.com/en_us/article/z4444w/how-reddit-got-huge-tons-of-fake-accounts--2
  14. Dot, T. (2012). How Reddit Was Built With an Army of Fake Accounts. Mashable. http://mashable.com/2012/06/19/reddit-built-with-fake-accounts/#WMdm9qccOsqX
  15. Wakabayashi, M. (2017). Russian Influence Reached 126 Million Through Facebook Alone. Nytimes.com. https://www.nytimes.com/2017/10/30/technology/facebook-google-russia.html
  16. Shane, S. (2017, September 7). The Fake Americans Russia Created to Influence the Election. https://www.nytimes.com/2017/09/07/us/politics/russia-facebook-twitter-election.html
  17. Jaeger, M. (2017, November 22). Russian Bots Target FCC in Attempt to get Net Neutrality Repealed. https://nypost.com/2017/11/22/russian-bots-target-fcc-in-attempt-to-get-net-neutrality-repealed/