Professionalism/The Rise and Fall of the Advanced Technology External Advisory Council

Factors Leading to the Creation of the Advanced Technology External Advisory Council

Bias in AI

Bias in AI is the tendency of AI models to perpetuate human biases, leading to unfair decisions. This manifests in multiple ways, including interaction bias, latent bias, and selection bias. Interaction bias is picked up from users interacting with the algorithm.[1] Latent bias stems from unintended correlations made by AI algorithms; for example, an AI may correlate the term "doctor" with men because of metadata in the stock images used in training.[1] Selection bias results from over-representation of a population in training data.[1] Many AI systems have displayed bias; well-known examples include COMPAS, PredPol, Amazon's biased recruiting tool, and various facial recognition software.
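
One way selection bias can be caught before training is a simple audit of group representation in the dataset. The following Python sketch is purely illustrative; the groups, counts, and population shares are hypothetical:

```python
# Minimal sketch of a selection-bias audit: compare each group's share
# of a (hypothetical) training set against a reference population.
from collections import Counter

def representation_gap(training_labels, population_shares):
    """Return each group's share of the training data minus its share
    of the reference population (positive = over-represented)."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

# Hypothetical data: the training set skews heavily toward group "A".
training_labels = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
population_shares = {"A": 0.60, "B": 0.25, "C": 0.15}

for group, gap in representation_gap(training_labels, population_shares).items():
    print(f"{group}: {gap:+.2%}")  # A: +20.00%, B: -10.00%, C: -10.00%
```

A large positive gap for one group is precisely the over-representation that selection bias describes; a real audit would use demographic attributes appropriate to the task at hand.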

Google's Focus on AI Bias

Google is familiar with concerns surrounding AI bias, having faced criticism in 2015 when its Google Photos image-recognition algorithm tagged images of black people as "gorillas".[2] Since this incident, Google has openly declared bias to be its greatest concern with AI. In 2017, Google's head of machine learning, John Giannandrea, said: "The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased," and emphasized the importance of being "transparent about the training data that we are using, and looking for hidden biases in it."[3] These comments provide background to the concerns about AI bias that were a focal point of the ATEAC's creation.

Internal Pressure

Project Maven

Project Maven was a Pentagon project intended to use computer vision, an application of machine learning, to classify objects in drone footage. The project was controversial among Google employees, who feared their work would contribute to the increased lethality of military weaponry. While Google defended its work as "non-offensive",[4] its involvement with the project led to a dozen resignations[5] and an employee protest. In a letter signed by 3,100 Google employees, the signatories stated: "We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."[4] These demands had a clear impact on the company's subsequent release of its AI principles, which specifically stated that its AI would not be used in weaponry.

Project Dragonfly

On August 1, 2018, The Intercept reported that Google was working on a secret project to provide the Chinese government with a censored version of the company's search engine. The project, code-named Dragonfly, would comply with Chinese government censors, filtering blocked websites and blacklisting sensitive queries so that no results were shown.[6] Many within the company were surprised by the news; only a few hundred of Google's 88,000 employees had any knowledge of the project. The whistleblower who provided information to The Intercept expressed their ethical concerns: "I'm against large companies and governments collaborating in the oppression of their people, and feel like transparency around what's being done is in the public interest," adding that they feared "what is done in China will become a template for many other nations."[6] Responding to the report, Google CEO Sundar Pichai defended the project, saying it was in an "exploratory" stage.[7] Pichai's defense did little to quell criticism, and a group of 1,400 employees sent a letter to company executives seeking more transparency.[8] In the letter, the employees stated: "currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment."[8] They also pointed out the ineffectiveness of the company's recently implemented AI principles and demanded "an ethics review that includes rank and file employee representatives."[8]

Google's AI Principles

On June 7, 2018, CEO Sundar Pichai announced seven principles to guide the company in its future AI-related endeavors.[9] He promised that rather than "theoretical concepts," these principles would be "concrete standards that will actively govern our research and product development and will impact our business decisions." Likely in response to the company's previous AI-related controversies, the seven principles stated that AI should:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Furthermore, the announcement specifically denounced certain applications of AI, promising that the company would not "design or deploy AI" in these areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

The Formation and Downfall of the Council

Google Announces the Creation of an External Advisory Council

On March 26, 2019, Senior Vice President of Global Affairs Kent Walker announced Google's new initiative to help guide the company in its AI decisions and policies: the Advanced Technology External Advisory Council, or ATEAC.[10] He stated that the council's goal was to complement the AI Principles announced the previous June and to "consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work." The council was to hold four meetings over the course of 2019 and featured many experts in relevant fields, including Alessandro Acquisti, a Professor of Information Technology and Public Policy at Carnegie Mellon University; De Kai, a Professor of Computer Science and Engineering at the Hong Kong University of Science and Technology; and Luciano Floridi, a Professor of Philosophy and Ethics of Information at the University of Oxford.

Public Backlash

Kay Coles James

There was an immediate public outcry over one council appointee in particular: Kay Coles James, president of the conservative think tank The Heritage Foundation. James purportedly held anti-immigration, anti-LGBTQ, and anti-environmental views in both a professional and personal capacity. The public called on Google to defend its decision, made without any transparency or explanation, to place her on the council. The company was accused of pandering to politicians by including her rather than seeking the most qualified possible members.[11]

Alessandro Acquisti's Resignation

On March 30, four days after the council was announced, Alessandro Acquisti announced on Twitter that he would be declining his invitation to the council.[12] He noted only that he didn't "believe this is the right forum for me to engage in this important work"; it can reasonably be assumed that his change of heart was a response to the public controversy over the council's members.

Googlers Against Transphobia and Hate

The general public was not alone in its outrage. Within Google, many employees spoke out against the choice, denouncing their employer's actions. By April 1, thousands of Google employees and dozens of academic, civil society, and industry supporters had signed a petition calling on Google to remove Kay Coles James from the ATEAC.[13] According to the petition, rather than introducing diversity as Google claimed, James's appointment "endorses and elevates her views." The petitioners pointed out that those most marginalized are already most at risk from AI technologies, and that James's appointment further exacerbates this injustice.

The Dissolution of the Council

On April 4, 2019, just nine days after its formation, Google announced the dissolution of the council, stating that "It's become clear that in the current environment, ATEAC can't function as we wanted."[10] Despite this sudden setback, the company maintained that it would continue to act responsibly in its work.

Remarks from James

After the dissolution, James publicly condemned the petitioners in a Washington Post op-ed.[14] She accused the employees of making false allegations against her. In particular, she characterized the claims that she was anti-LGBTQ, anti-immigrant, and "a bigot" as character assassination: unjustified claims serving only to remove her conservative views from the public discourse. This is a curious claim for her to make, considering that the petition cites these accusations with primary statements from her own Twitter account.[15][16]

The conflict between James and the petitioners is rooted in opposing stances on the paradox of tolerance. James argues that intolerance of any party is always wrong, while the petitioners act on the idea that intolerance of intolerance can aid the creation of a tolerant society. James's stance, however, is somewhat hypocritical, as many of her publicly held policy positions are intolerant of LGBTQ people[15] and immigrants.[16]

Future of AI Ethics at Google

Google's alternative methods for holding itself to its AI principles were partially outlined in a June 2019 blog post,[17] and consist of internal education on the responsible application of AI, internal tools for analyzing AI models, a review process, and communication with external stakeholders. This last function was ostensibly the domain of the ATEAC; with its dissolution, the consulted stakeholders are now left entirely unnamed, resulting in even less transparency.

The Advanced Technology Review Council

Also undisclosed is the nature of another internal council at Google: the similarly named Advanced Technology Review Council. Unlike the ATEAC, this council is not public, but its known members are high-level executives, among them Jacquelline Fuller, a Vice President of Google, and Kent Walker, the chief legal officer and author of the ATEAC announcement.[18] This council represents the highest layer of the review process Google has established to maintain ethical behavior.[19]

However, multiple employees reported that the agenda of this board remained unclear; these employees asked to remain anonymous for fear of retaliation.[18] This highlights an apparent conflict of interest: if AI ethics and corporate profits come into conflict, can the same group of executives who greenlit projects like Maven and Dragonfly be trusted to make the right decision?

Conclusions

Integrity

The backlash to James's appointment is easily understood through the lens of integrity. At the onset of this ethics crisis, the AI principles represented a new normative stance for Google; above all else, they represented the company's values. James clearly contradicted those values, for instance by reinforcing unfair bias against immigrants.[16] Appointing her did not merely add a "diversity of opinions"; it legitimized and empowered her beliefs. Regardless of the correctness of those beliefs, her appointment betrayed Google's integrity and, in doing so, undermined its employees' trust.

Transparency

Further undermining trust was the lack of transparency throughout this crisis. There was no description of how the council was chosen, the powers it would have, or the mechanisms for assessing its effectiveness. Without these, the council, and thus the company, had no accountability to its employees, one of the key goals laid out by Google. Since then, Google's other approaches seem to have been more transparent and better received, incorporating employees into the internal education programs and having them run many levels of the review system. However, the top-level council, with its secretive nature and lack of accountability to employees, may cause a similar crisis in the future should it overturn employees' decisions on a contentious issue of the scale of Maven or Dragonfly.

References

  1. Google explains how artificial intelligence becomes biased against women and minorities
  2. Google's solution to accidental algorithmic racism: ban gorillas
  3. Forget Killer Robots—Bias Is the Real AI Danger
  4. 'The Business of War': Google Employees Protest Work for the Pentagon
  5. Google Employees Resign in Protest Against Pentagon Contract
  6. Google Plans To Launch Censored Search Engine In China, Leaked Documents Reveal
  7. The employee backlash over Google's censored search engine for China, explained
  8. Google Staff Tell Bosses China Censorship Is "Moral and Ethical" Crisis
  9. Pichai, Sundar (2018), AI at Google: our principles, The Keyword, Google
  10. Walker, Kent (2019), An external advisory council to help advance the responsible development of AI, The Keyword, Google
  11. Statt, Nick (2019), Google creates external advisory board to monitor it for unethical AI use, The Verge
  12. Alessandro Acquisti's Twitter
  13. Anonymous (2019), Googlers Against Transphobia and Hate, Medium
  14. James, Kay Coles (2019), I wanted to help Google make AI more responsible. Instead I was treated with hostility., The Washington Post
  15. James, Kay Coles (2019), https://twitter.com/KayColesJames/status/1100488434500685824
  16. James, Kay Coles (2019), https://twitter.com/KayColesJames/status/1108365238779498497
  17. Dean, Jeff; Walker, Kent (2019), Responsible AI: Putting our principles into action, Google
  18. Bergen, Mark; Brustein, Joshua (2019), The Google AI Ethics Board With Actual Power Is Still Around, Bloomberg
  19. Walker, Kent (2019), Google AI Principles updates, six months in, Google