Issues in Interdisciplinarity 2018-19/Subjective and Objective Truth in AI

Objective Truth in AI

Artificial intelligence (AI) is often thought to make objective decisions easier. Here, objectivity refers to conclusions based on critical thinking and scientific evidence, where the conclusion is indisputable and there is only one true answer[1]. Built from formulas and algorithms, AI can process vast amounts of data to come to a conclusion that is significantly more accurate, and therefore more objective, than a human can achieve[2].

An example of this is machine learning applied to the task of identifying subjects in pictures. Though simple for humans, an AI system needs repeated training on massive amounts of data to tell the difference between drinks, or between a table and a stool. Neural networks begin with one or more inputs, such as a picture, and process them into one or more outputs, such as whether the picture shows wine or beer. The network consists of many ‘neurons’ grouped into layers, where each layer interacts with the next through weighted connections – each neuron carries a value, which is multiplied by a weight and passed on to the neurons in the subsequent layer.[3] Bias terms, analogous to the statistical bias of an estimator, E(θ̂) − θ[4], can be coded into the neural network and passed through the layers. As a result, inputs can be propagated through the whole network and the machine learns to make predictions and draw conclusions that are as accurate as possible. This continual training and testing can reach decisions for extremely complex problems[5].
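The layered, weighted structure described above can be sketched in a few lines of code. This is an illustrative toy, not a real classifier: the "picture" is reduced to a two-number feature vector, and all weights, biases and feature meanings are invented for the example.

```python
# Toy forward pass through a two-layer neural network that labels a
# "picture" (here just a feature vector) as wine or beer.
# All weights, biases and feature meanings are invented for illustration.

def weighted_sum(weights, values):
    """One neuron's weighted sum over its inputs."""
    return sum(w * v for w, v in zip(weights, values))

def relu(x):
    """Simple activation: pass positive values, zero out negatives."""
    return max(0.0, x)

def forward(features):
    # Hidden layer: each neuron multiplies the inputs by its weights,
    # adds a bias term, and applies an activation function.
    hidden_weights = [[0.8, -0.2], [-0.5, 0.9]]
    hidden_biases = [0.1, -0.1]
    hidden = [relu(weighted_sum(w, features) + b)
              for w, b in zip(hidden_weights, hidden_biases)]

    # Output layer: one score per class ("wine", "beer").
    out_weights = [[1.0, -0.7], [-0.6, 1.2]]
    out_biases = [0.0, 0.0]
    scores = [weighted_sum(w, hidden) + b
              for w, b in zip(out_weights, out_biases)]
    return "wine" if scores[0] > scores[1] else "beer"

print(forward([0.9, 0.2]))  # dark, low-foam picture -> wine
print(forward([0.1, 0.8]))  # pale, foamy picture -> beer
```

Training would consist of adjusting these weights and biases until the outputs match labelled examples; here they are fixed by hand to keep the propagation of values through the layers visible.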

In Accenture's teach-and-test framework for AI[6], the continual connectivity and data processing described above can be tracked, and the decisions or conclusions an AI system reaches can be questioned. The AI can even be coded to justify the decisions it reaches[7]. This can provide peace of mind that the AI is achieving human-centred, unbiased and fair conclusions – objectivity.
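One way an AI system can be "coded to justify" its conclusions is to report, alongside each decision, the contribution each input made to it. The following is a hedged sketch of that idea using an invented linear scorer; the feature names and weights are hypothetical, not drawn from Accenture's framework.

```python
# Sketch: a toy scorer that returns both a decision and a per-feature
# audit trail, so the conclusion can be questioned after the fact.
# Feature names and weights are invented for illustration.

def score_with_justification(features, weights):
    # Each feature's contribution is its value times its weight.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    decision = "approve" if total > 0 else "reject"
    return decision, contributions

weights = {"experience_years": 0.5, "missed_payments": -1.2}
decision, why = score_with_justification(
    {"experience_years": 4, "missed_payments": 1}, weights)
print(decision)  # approve (4*0.5 - 1.2 = 0.8 > 0)
print(why)       # per-feature breakdown of the score
```

Real systems are far more complex, but the principle is the same: a decision that arrives with its reasoning attached can be audited, while a bare output cannot.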

Subjective Truth in AI

It is often argued, however, that the supposedly objective decisions made by AI end up becoming subjective because the data sets being used are biased[8][9]. Here, subjectivity refers to a belief based on personal opinions, experiences and feelings rather than on scientific evidence[10]. As human beings we all have our own biases, and no one can be truly objective[11]. Since we create both the AI itself and the data it processes, it follows that AI is never going to be truly objective.

Gender and ethnicity biases are often unconsciously built into algorithms. A notable example of this is AI facial recognition software identifying black women as men[12][13]. It is suggested that this stems from the unconscious bias of computer scientists and engineers, the majority of whom are white and male[13]. Similarly, when searching for pictures on Google, the word ‘CEO’ will bring up pictures of men and the word ‘helper’ will bring up pictures of women[14]. This is based on biased data sets about what a CEO looks like. Most CEOs are indeed men, but this reflects historical patriarchal ideas that are generally considered wrong[15].

As AI becomes increasingly prominent in everyday life – self-driving cars, Google Home devices, advertising, and many other applications – ethics needs to be considered. Ethics can be defined as a means of tackling questions of morality[16], but it is interpreted differently according to one's opinions, beliefs and perspectives; as a result, trying to create ethical AI is likely to cause many problems, especially when its decisions are coupled with potentially biased data[17].

Interdisciplinary Approach to AI

From a mathematical, objective point of view, AI provides computing and decision-making power that humans will never be able to match on their own, giving greater insight into complex problems. From a subjective, ethical and philosophical standpoint, AI will never be truly objective[18], and we are likely to run into significant problems where AI ‘gets it wrong’ – such as the 2010 Flash Crash – in its pursuit of ‘the truth’ or a logical conclusion[19][20].

As an example, AI could be used in recruitment to eradicate unconscious bias in hiring[21]. However, if a machine learning algorithm were used, data about gender, race, disability and so on could lead the AI to decide to hire white, straight, able-bodied men – who, according to biased data, are the least risky and therefore most cost-effective choice of employee[22]. AI could easily pick up our own biases and amplify them[20]. And because machine learning happens internally, the system is a black box: we put data in and get data out, and without auditing the results we could be completely unaware of which data points the AI was using to inform its decisions[22].
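The hiring scenario above can be made concrete with a deliberately crude sketch. The records and the majority-vote "learner" below are invented for illustration; real recruitment systems are far more sophisticated, but the failure mode is the same: a model trained on skewed historical outcomes simply reproduces the skew.

```python
# Sketch: a naive "learner" trained on biased historical hiring records.
# The data and the majority-vote rule are invented for illustration.
from collections import Counter

# Historical outcomes skewed toward one group (biased training data).
records = ([("male", "hired")] * 8 + [("male", "rejected")] * 2
           + [("female", "hired")] * 2 + [("female", "rejected")] * 8)

def train(records):
    """'Learn' the majority outcome for each group in the data."""
    outcomes = {}
    for group, result in records:
        outcomes.setdefault(group, Counter())[result] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(records)
print(model)  # {'male': 'hired', 'female': 'rejected'}
```

The "model" has learned nothing about candidates' abilities; it has only encoded the historical bias, and if its decisions feed back into future training data, that bias is amplified rather than corrected.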

AI struggles to be truly objective when presented with problems that have ethical questions tied to them[23]. However, evaluating AI from an interdisciplinary perspective ensures that careful thought is given to the effects of AI and the decisions it has to make. Computer science and electronic engineering clearly play a huge role in creating the technology, but philosophy and the social sciences, such as anthropology, economics and psychology, are needed in the development of AI to ensure we produce systems that ‘think’ about the wider effects of their conclusions, making AI both useful and safe for humans to use in the future.


  1. Mulder, D. H, Objectivity [Internet]. Sonoma State University, California: Internet Encyclopedia of Philosophy; [updated 2004 Sept 9; cited 2018 Dec 9]. Available from:
  2. ICO, Big data, artificial intelligence, machine learning and data protection [Internet]. Cheshire, UK: Information Commissioner's Office; [updated 2017 May 17; cited 2018 Dec 9]. Available from:
  3. Marr, B., What Are Artificial Neural Networks – A Simple Explanation For Absolutely Anyone [Internet]. Forbes; [updated 2018 Sept 24; cited 2018 Dec 9]. Available from:
  4. Estimation, bias, and mean squared error [Internet]. Cambridge, UK: Statistical Laboratory; [updated 2018; cited 2018 Dec 7], pp.2. Available at:
  5. Luger, G. F., 'Foundations for Connectionists Networks'. In: Artificial Intelligence: Structures and Strategies for Complex Problem Solving. Essex: Pearson Education Limited; 2005. p. 455
  6. Bennink, J., Accenture Launches New Artificial Intelligence Testing Services [Internet]. Chicago: Accenture; [updated 2018 Feb 20; cited 2018 Dec 9]. Available from:
  7. Cathelat, B., 'How much should we let AI decide for us?' In: Brigitte Lasry, B. and Kobayashi, H., UNESCO and Netexplo. Human Decisions Thoughts on AI. Paris, France: UNESCO Publishing; 2018. p. 132-138. Available from:
  8. Vanian J., Unmasking AI's bias problem [Internet]. New York: Fortune; [updated 2018 Jun 25; cited 2018 Dec 2], Available from:
  9. Srinivasan, R., 'The Ethical Dilemmas of Artificial Intelligence' In: Brigitte Lasry, B. and Kobayashi, H., UNESCO and Netexplo. Human Decisions Thoughts on AI. Paris, France: UNESCO Publishing; 2018. p.107. Available from:
  10. Francescotti, R., Subjectivity [Internet]. Abingdon; Routledge Encyclopedia of Philosophy; [updated 2017 April 24; cited 2018 Dec 9]. Available from:
  11. Naughton, J., Don't worry about AI going bad – the minds behind it are the danger [Internet]. London: The Guardian; [updated 2018 Feb 25; cited 2018 Dec 4]. Available from:
  12. Lohr, S., Facial Recognition Is Accurate, if You’re a White Guy [Internet]. New York: The New York Times; [updated 2018 Feb 9; cited 2018 Dec 3]. Available from:
  13. Buolamwini, J. and Gebru, T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. JMLR. 2018. [cited 2018 Dec 3] 81:1–15. Available from:
  14. Devlin, H., AI programs exhibit racial and gender biases, research reveals [Internet]. London: The Guardian; [updated 2017 Apr 13; cited 2018 Dec 4]. Available from:
  15. Johnson, A. G., 'What is this thing called patriarchy?' In: The Gender Knot. Philadelphia, PA: Temple University Press; 1997. p. 6
  16. Mulder, D. H, Ethics [Internet]. Sonoma State University, California: Internet Encyclopedia of Philosophy; [updated 2004 Sept 9; cited 2018 Dec 9]. Available from:
  17. Bostrom, N. and Yudkowsky, E. 'The ethics of artificial intelligence'. In: The Cambridge Handbook of Artificial Intelligence. Cambridge: Cambridge University Press; 2011. p. 316-334. Available from: [cited 2018 Dec 9]
  18. Moor, J. H., 'The Nature, Importance, and Difficulty of Machine Ethics'. In: Anderson, M. and Anderson, S. L. Machine Ethics. New York: Cambridge University Press; 2011. p. 13. Available from: [cited 2018 Dec 3]
  19. Jøsang, A., Artificial Reasoning with Subjective Logic [Internet]. Trondheim, Norway: Norwegian University of Science and Technology; [updated 1997; cited 2018 Dec 3]. Available from:
  20. Newman, D., Your Artificial Intelligence Is Not Bias-Free [Internet]. Jersey City, New Jersey: Forbes; [updated 2017 Sept 12; cited 2018 Dec 3]. Available from:
  21. Lee, A. J., Unconscious Bias Theory in Employment Discrimination Litigation, Harvard Civil Rights-Civil Liberties Law Review. 2005. [cited 2018 Dec 3]; 40(2): 481-504. Available from:
  22. Tufekci, Z., Machine intelligence makes human morals more important [Internet]. Banff, Canada: TEDSummit; [updated 2016 Jun; cited 2018 Dec 3]. Available from:
  23. Polonski, V., The Hard Problem of AI Ethics – Three Guidelines for Building Morality Into Machines [Internet], Paris, France: The Forum Network, hosted by the OECD; [updated 2018 Feb 28; cited 2018 Dec 3]. Available from: