Lentis/Deepfakes

Introduction

A deepfake is a fabricated image, video, or audio recording that is meant to look and sound real. Deepfakes are created using animation, facial recognition, and machine learning technologies.

Background

History

Computer-based facial animation technologies have existed since the early 1970s, but they lacked the realism and convenience of the technologies used to make modern deepfakes.[1]

The Video Rewrite research project in 1997 was a key advancement towards modern deepfake technologies. Video Rewrite used computer vision, facial animation, and existing footage to “create automatically new video of a person mouthing words that she did not speak in the original footage.” [2] This synthesized multiple technologies into a single automated process, paving the way for deepfake generating applications.

Tracking facial features in an image or video is a crucial aspect of modern deepfake technology. A key step came in 2001, when researchers published the Active Appearance Model, a technique for fitting a statistical model of facial shape and appearance to an existing image.[3][4]

The later 2000s and early 2010s brought incremental advancements in computer processing power, machine learning, facial recognition, and animation.[5][6][7]

In 2016, researchers at Stanford presented a novel approach for “real-time facial reenactment of a monocular video sequence (e.g. Youtube video).”[8] This technique superimposed an actor’s mouth onto a target video. Unlike older methods, it ran in real time, a landmark in the evolution of deepfake technology. A year later, researchers at the University of Washington created a program that realistically and efficiently altered the video and audio of a target video, generating a 66-second deepfake in around 45 minutes on personal computer hardware.[9] The speed and quality of this approach, along with its ability to run on common hardware, allowed for an explosion of high-quality deepfakes generated by the general public.

As deepfake technology improved, users of the subreddit r/deepfakes began posting original deepfakes, and the FakeApp application provided a “desktop tool for creating deepfakes.”[10]

Modern Deepfakes

Modern deepfakes are made by training a neural network on existing images or videos of a face. Once the network learns how the face looks across different expressions and settings, it can generate its own deepfakes of the face.[11] If past trends continue, deepfake quality will likely improve to the point where humans cannot distinguish real footage from generated footage. This could lead to serious misinformation and create problems for celebrities and public figures, who are often the subjects of deepfakes.
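One common face-swap design, popularized by the early open-source deepfake tools, trains a single shared encoder together with a separate decoder per identity; the swap is performed by decoding one person's encoding with the other person's decoder. The sketch below is a toy illustration of that idea only, using tiny linear layers and random vectors standing in for aligned face crops; real systems use deep convolutional networks and large face datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned, flattened face crops of two people.
faces_a = rng.normal(size=(64, 16))  # "person A"
faces_b = rng.normal(size=(64, 16))  # "person B"

dim, latent = 16, 4
enc = rng.normal(scale=0.1, size=(dim, latent))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(latent, dim))  # decoder for A
dec_b = rng.normal(scale=0.1, size=(latent, dim))  # decoder for B

def step(x, enc, dec, lr=0.05):
    """One gradient-descent step on the reconstruction loss ||x@enc@dec - x||^2."""
    z = x @ enc
    err = z @ dec - x                    # reconstruction error
    g_dec = z.T @ err / len(x)
    g_enc = x.T @ (err @ dec.T) / len(x)
    dec -= lr * g_dec                    # in-place updates mutate the
    enc -= lr * g_enc                    # caller's arrays
    return float((err ** 2).mean())

# Interleaved training: the encoder sees both faces, each decoder only one.
losses_a = []
for _ in range(800):
    losses_a.append(step(faces_a, enc, dec_a))
    step(faces_b, enc, dec_b)

# The "swap": encode A's faces, then decode with B's decoder.
fake = faces_a @ enc @ dec_b
```

Because the encoder must represent both identities while each decoder only ever reconstructs one, the decoder learns to render its own person from whatever pose and expression the shared encoding captures, which is what makes the swap work.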

Business Adoption

Deepfakes are used for video translation and stock photo generation. Canny AI calls its service "Video Dialogue Replacement" and says it does not create deepfakes, but it offers to translate source videos and modify faces so that speakers appear to be speaking the new language.[12] An international business could use Canny AI to deepfake an executive giving a speech in different languages instead of using subtitles. Canny AI also offers its service for parodies of celebrities if one can prove consent. Generated.photos sells artificially generated portraits for trial software, avatars, game design, and advertising. It also offers an anonymization service that generates a new face for a profile photo that "will remind you of your skin color, age, gender, [and] hair length."[13] To avoid data privacy issues, all photos are taken by the company instead of scraped from the internet.[14]

Concerns

Deepfakes can be abused to create images and videos of people without their consent. For example, dead actors have been deepfaked by Disney and YouTube channels. [15] Game developers using Generated.photos found that "synthetic photos were especially helpful when they needed to portray less than admirable characters." [16] Outside of games, deepfakes may be used to deliver unpopular, controversial, or dangerous information. For example, a company could create a fake executive to take the fall for a failed business plan, or a political campaign could create fake videos of the opposing side.

Both Generated.photos and Canny AI have terms and conditions meant to prevent users from engaging in illegal or malicious activities. However, the source code for deepfake programs is freely available. [17] [18] Free tiers of cloud platforms like Google Colab allow users to create deepfakes from their browser. [19] With both the quality and accessibility of deepfakes increasing, it's likely that they'll soon become mainstream tools for impersonation and dissemination of false information.

Identity Theft

Facial recognition and voice synthesis allow bad actors to pretend to be people they are not.[20] Fake voices may be used in phone calls to steal victims' private information, such as usernames or passwords. Alternatively, bad actors may impersonate the victims themselves, calling banks or other services as the victim to obtain private information or transfer money from the victims' accounts to their own. Current deepfake technology makes voice synthesis attacks feasible today with little voice data required.[21] Notable examples of deepfake scams include a CEO being scammed out of $243,000[22] and a widow giving $287,928 to someone she believed to be a Navy admiral.[23] Still, the effectiveness of deepfake identity theft is questionable, since in many cases it is easier and more believable for criminals to use their own voice to impersonate someone else.

Extortion

Criminals use deepfakes to blackmail victims, mainly female celebrities, for payment or information. Criminals may blackmail victims into sending them money by threatening to release pornographic videos featuring their faces even though the victim was never involved. No easy solutions to this type of blackmail exist.

Fake News

Although it has not yet become a widespread issue, the biggest societal threat we found concerning deepfakes is their potential impact on the news. False statements attributed to politicians or experts through deepfakes may be interpreted as real. In 2018, Buzzfeed released a video showing a fabricated version of former President Barack Obama, voiced by comedian Jordan Peele, denigrating Donald Trump.[24] The video made clear that it was fake and was made to raise awareness of the implications of deepfakes for the media. In April 2020, the Belgian branch of Extinction Rebellion, a global climate advocacy group,[25] released a video of the Belgian Prime Minister relating environmental issues to the COVID-19 crisis, which many believed to be real before the group revealed it was fake.[26]

Worried about how deepfakes might affect the 2020 presidential election, Hany Farid, a professor at Dartmouth College, created software to detect deepfakes with 95% accuracy.[27] This kind of technology will be a necessity in combating fake news, but it still might not stop people from believing what they see.
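One simple detection heuristic reported in the press exploits the fact that early face-swap models, trained mostly on open-eyed photos, produced subjects who rarely blink. The sketch below illustrates that idea under the assumption that a per-frame eye-aspect-ratio (EAR) signal has already been extracted by a facial-landmark tracker; the function names and thresholds here are illustrative, not taken from any published detector.

```python
def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks as distinct dips of the eye aspect ratio below a threshold.

    `ear_series` is a per-frame eye-aspect-ratio signal; how it is computed
    (e.g. from facial landmarks) is outside the scope of this sketch.
    """
    blinks, eyes_closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not eyes_closed:
            blinks += 1          # an open -> closed transition starts a blink
            eyes_closed = True
        elif ear >= closed_thresh:
            eyes_closed = False
    return blinks

def looks_suspicious(ear_series, fps=30.0, min_blinks_per_min=4.0):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(ear_series) / (fps * 60.0)
    return count_blinks(ear_series) / minutes < min_blinks_per_min

# A synthetic one-minute clip at 30 fps with only two brief blinks.
clip = [0.3] * 1800
clip[300:305] = [0.1] * 5
clip[1200:1205] = [0.1] * 5
```

Heuristics like this illustrate why detection is an arms race: once a tell such as missing blinks becomes known, generators are retrained to eliminate it.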

Government Response

In June 2019, Chairman Schiff of the House Permanent Select Committee on Intelligence characterized deepfakes as "hav[ing] the capacity to disrupt entire campaigns, including that for the presidency" because "not only may fake videos be passed off as real, but real information can be passed off as fake."[28] The Defense Advanced Research Projects Agency (DARPA) has created the Media Forensics (MediFor) platform to automatically detect and characterize manipulated images and videos.[29][30] Earlier evaluation efforts focused on manual editing techniques such as pasting images onto each other, and GAN-assisted manipulations were added to the test data in 2019.[31]

The United States Senate passed the Deepfake Report Act of 2019 in October 2019 "[t]o require the Secretary of Homeland Security to publish an annual report on the use of deepfake technology, and for other purposes". [32] It was also added as an amendment to the National Defense Authorization Act (NDAA) for Fiscal Year 2021 in July 2020. [33] As of November 2020, the House has not passed the stand-alone bill or the amended NDAA. After the NDAA amendment, Senator Rob Portman, who sponsored both the original bill and the amendment, commented that "AI-based threats, such as deepfakes, have become an increasing threat to our democracy" and that "we must address the challenge and grapple with important questions related to civil liberties and privacy." [34]

Whether MediFor and other government efforts will succeed against modern deepfakes is uncertain. Older media manipulation relied on modifying source material, but newer techniques can generate images and videos from scratch. Researchers are also constantly improving their programs. For example, the first version of the face-generating program StyleGAN produced blurry spots in the background and misaligned teeth, motivating its researchers to create the improved StyleGAN2.[35] The code is available on GitHub.[18]

Positive Deepfakes

Below are a few examples of the positive impacts of deepfakes.

Medicine

Deepfakes can be used to create synthetic images of brain MRIs.[36] These synthetic images may be used to study the formation of Alzheimer's disease and brain tumors.

Unique Voices

Voice synthesis technologies such as vocalid.ai[37] provide users with AI-generated voices. This gives those who cannot speak the opportunity to talk with a unique synthetic voice.

Translation

Deepfakes can be used to hear a speaker in several different languages without the need for translators or subtitles. A video released in 2019 used deepfake technology to show David Beckham speaking in nine different languages to raise awareness of malaria.[38]

Crime Solving

In 2018, cellphone videos, security camera footage, and autopsy reports were used to digitally recreate a crime scene and help solve the murder of three protesters in Kiev.[39]

Conclusion

This casebook focused mainly on the background of deepfakes and some of their applications. Future researchers will want to investigate how to spot deepfakes, [40] [41] how to make deepfakes, [42] and the advancement of synthesized audio. [43] Further investigation could be done on lawsuits concerning deepfakes and the legal system's response. [44] With how quickly deepfakes are progressing, some of the information in this casebook may become obsolete. Almost all aspects of this casebook should be reinvestigated in the future to see what's changed.

References

  1. Facial Animation: Past, Present, and Future. http://web.cs.ucla.edu/~dt/papers/siggraph97-panel/siggraph97-panel.pdf
  2. Video Rewrite: Driving Visual Speech with Audio. http://chris.bregler.com/videorewrite/
  3. Active Appearance Models. https://people.eecs.berkeley.edu/~efros/courses/AP06/Papers/cootes-pami-01.pdf
  4. Active Appearance Model(AAM). http://ice.dlut.edu.cn/lu/AAM.html
  5. Technological Progress. https://ourworldindata.org/technological-progress
  6. A History of Machine Learning. https://www.import.io/post/history-of-deep-learning/
  7. NIST Evaluation Shows Advance in Face Recognition Software's Capabilities. https://www.nist.gov/news-events/news/2018/11/nist-evaluation-shows-advance-face-recognition-softwares-capabilities
  8. Face2Face: Real-time Face Capture and Reenactment of RGB Videos. http://www.graphics.stanford.edu/~niessner/thies2016face.html
  9. A Short History of Deepfakes. https://medium.com/@songda/a-short-history-of-deepfakes-604ac7be6016
  10. Deepfakes. https://knowyourmeme.com/memes/cultures/deepfakes#fn2
  11. What are Deepfakes and How Are They Created?
  12. Canny AI https://www.cannyai.com
  13. Generated.photos https://generated.photos
  14. Generated.photos FAQ https://generated.photos/faq
  15. Derpfakes Carrie Fisher https://www.youtube.com/watch?v=1chnCgya32o
  16. Generated.photos Use Cases https://generated.photos/use-cases
  17. Awesome Deepfakes Github https://github.com/aerophile/awesome-deepfakes
  18. StyleGAN2 Github https://github.com/NVlabs/stylegan2
  19. Google Colab FAQ https://research.google.com/colaboratory/faq.html
  20. Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios https://carnegieendowment.org/2020/07/08/deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237
  21. Clone a Voice in Five Seconds With This AI Toolbox https://syncedreview.com/2019/09/03/clone-a-voice-in-five-seconds-with-this-ai-toolbox/
  22. A Voice Deepfake Was Used To Scam A CEO Out Of $243,000 https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=25eecd072241
  23. Scammer used deepfake video to impersonate U.S. Admiral on Skype chat and swindle nearly $300,000 out of a California widow https://www.dailymail.co.uk/news/article-8875299/Scammer-uses-deepfake-video-swindle-nearly-300-000-California-widow.html
  24. You Won’t Believe What Obama Says In This Video! https://www.youtube.com/watch?v=cQ54GDm1eL0
  25. Extinction Rebellion https://www.extinctionrebellion.be/en/
  26. Extinction Rebellion takes over deepfakes https://journalism.design/deepfakes/extinction-rebellion-sempare-des-deepfakes/
  27. The fight to stay ahead of deepfake videos before the 2020 US election https://www.cnn.com/2019/06/12/tech/deepfake-2020-detection/index.html
  28. Hearing: National Security Challenges of Artificial Intelligence, Manipulated Media, and Deepfakes https://intelligence.house.gov/calendar/eventsingle.aspx?EventID=653
  29. DARPA Media Forensics (MediFor) https://www.darpa.mil/program/media-forensics
  30. MediFor Github Repository https://github.com/mediaforensics/medifor
  31. Media Forensics Challenge Evaluation Overview https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=930628
  32. Deepfake Report Act of 2019 https://www.congress.gov/bill/116th-congress/senate-bill/2065
  33. S.Amdt. 1891 — 116th Congress (2019-2020) https://www.congress.gov/amendment/116th-congress/senate-amendment/1891
  34. Portman Deepfake Press Release https://www.portman.senate.gov/newsroom/press-releases/senate-passes-portman-schatz-amendment-assess-address-rising-threat
  35. StyleGAN2 paper describing improvements over StyleGAN https://arxiv.org/abs/1912.04958
  36. Medical Image Synthesis for Data Augmentation and Anonymization using Generative Adversarial Networks https://arxiv.org/abs/1807.10225
  37. Vocalid.ai https://vocalid.ai/
  38. David Beckham speaks nine languages to launch Malaria Must Die Voice Petition https://www.youtube.com/watch?v=QiiSAvKJIHo
  39. Who Killed the Kiev Protesters? A 3-D Model Holds the Clues https://www.nytimes.com/2018/05/30/magazine/ukraine-protest-video.html
  40. Deepfake Videos: How To Detect Them? https://www.ibtimes.com/deepfake-videos-how-detect-them-2712765
  41. The best defense against deepfake AI might be . . . blinking https://www.fastcompany.com/90230076/the-best-defense-against-deepfakes-ai-might-be-blinking
  42. What Are Deepfakes and How Are They Created? https://spectrum.ieee.org/tech-talk/computing/software/what-are-deepfakes-how-are-they-created
  43. Can We Believe Our Ears? Experts Say To Heed Caution As Audio Deep Fake Technology Advances https://www.wbur.org/hereandnow/2020/09/28/deep-fake-video-audio
  44. Courts and lawyers struggle with growing prevalence of deepfakes https://www.abajournal.com/web/article/courts-and-lawyers-struggle-with-growing-prevalence-of-deepfakes