Information Technology and Ethics/Privacy and Artificial Intelligence
Privacy is one of the major concerns in the development and use of data resources in the field of artificial intelligence.
Privacy and AI: Internet of Things
Overview
The Internet of Things (IoT) is a concept that has been around for quite a while, but in recent years it has become significant and widespread. The origins of the term date back more than fifteen years and are attributed to the Auto-ID Labs' work at the Massachusetts Institute of Technology (MIT) on networked radio-frequency identification (RFID) infrastructures. Today, IoT devices are present in just about every sector and industry, and current predictions expect IoT to reach 75 billion devices by 2025.
IoT has been growing at a rapid pace, evolving much faster than the associated security and privacy standards for its devices. Because many people are unaware that some of the tools they use daily are IoT devices, and oblivious to the amount of personal and private information continuously transmitted, user privacy has drawn considerable attention and concern. This technology offers immense potential for empowering citizens, making governments transparent, and broadening information access; but to do so, sensors, including those embedded in mobile devices, collect a variety of data about citizens' lives, which is then aggregated, analyzed, processed, fused, and mined to extract useful information. A variety of measures are being used to improve user privacy, including VPNs, Transport Layer Security (TLS), and DNS security extensions (DNSSEC). Nevertheless, anxieties about privacy in the IoT age have led many to consider privacy and the Internet of Things inherently incompatible. For example, the American Civil Liberties Union (ACLU) expressed concern regarding the ability of IoT to erode people's control over their own lives, stating, "There's simply no way to forecast how these immense powers – disproportionately accumulating in the hands of corporations seeking financial advantage and governments craving ever more control – will be used. Chances are big data and the Internet of Things will make it harder for us to control our own lives, as we grow increasingly transparent to powerful corporations and government institutions that are becoming more opaque to us".
There have been legislative attempts to protect privacy, such as the Privacy Act of 1974; however, the protections offered by such legislation are not sufficient for modern times. IoT devices create new challenges and gray areas that bypass government laws in ways that could not have been imagined decades ago. The British Government attempted to respond to rising concerns about privacy and smart technology by stating that it would follow formal Privacy by Design principles when implementing its smart metering program, which would replace traditional power meters with smart power meters that can track and manage energy usage more accurately; however, the British Computer Society has expressed doubt that these principles were truly implemented. In addition, the Dutch Parliament rejected a similar smart metering program on privacy grounds, although the program was later revised and passed.
Privacy and AI: Manufacturing and Supply Chain Management
AI has been beneficial to supply chain management in the industrial sector, and these tools have helped optimize asset inventory, raw material requirements, energy consumption, and staffing. In addition, AI is a significant contributor to Industry 4.0, the current automation and data exchange trend in manufacturing technologies, which will produce smart industries. These smart systems depend on big data, cloud computing, and machine learning to function. However, the more data involved, the bigger the security concerns, and leakages or privacy breaches can cause severe business consequences.
There are various ways manufacturers already use AI to improve efficiency, quality, and supply chain management, and while promising, these plans are not free of concerns. Supply chain data contains facility locations, vendor organization details, and personal employee details. AI-facilitated predictive quality and yield tools provide continuous analysis using algorithms trained to understand each production process and can help reveal the hidden causes of loss in manufacturing production. Unfortunately, if the details of these causes are leaked, and the causes were mistakes or oversights, the business's reputation could be damaged. Predictive maintenance systems rely on AI, machine learning, and mathematical algorithms to predict an impending failure and alert employees to perform targeted maintenance procedures to prevent it. Securing these operations is crucial, as a compromise of the access control mechanism may alter maintenance schedules, cause the loss of smart components, or, at worst, cost human lives through equipment malfunction. Radio-frequency identification (RFID) systems have been used effectively in the manufacturing industry to track and identify parts. Recent research has improved RFID network planning (RNP) using artificial neural network (ANN) models, computational AI algorithms, and mathematical models of RNP to develop an AI paradigm that optimizes RFID network coverage. However, this technology comes with certain risks: if the location and asset details processed by these algorithms are compromised, the results may be disastrous for nuclear and scientific research plants that handle harmful and critical substances. Continuous data transmission and processing of device readings, turbine temperatures, and processing cycles, combined with AI, make it convenient and cost-effective to automate and streamline specific manufacturing divisions. Nevertheless, a compromise of crucial details of the production elements can lead to a breach of intellectual property.
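The predictive-maintenance idea described above can be illustrated with a minimal sketch: flag sensor readings that deviate sharply from their recent baseline. This is a hypothetical, simplified stand-in (a moving-average threshold check) for the trained machine-learning models such systems actually use; the function name and data are invented for illustration.

```python
# Minimal predictive-maintenance sketch: flag readings that deviate
# sharply from the trailing baseline. Hypothetical simplification of
# the ML-based systems described in the text.
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the preceding `window` values."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

# Turbine temperatures (degrees C); the sudden spike suggests a fault.
temps = [70.1, 70.3, 69.9, 70.2, 70.0, 70.1, 70.2, 85.4, 70.1]
print(flag_anomalies(temps))  # → [7]
```

Even in this toy form, the privacy stakes are visible: whoever can read or tamper with the readings and alert schedule can infer, or disrupt, plant operations.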
Privacy and AI: Healthcare
Overview
The use of Artificial Intelligence (AI) in healthcare has increased due to advances in technology. AI in the healthcare system has the potential to push human performance further. However, the safety, fairness, and accountability of building an ethical AI are paramount for applying AI in the medical field. As AI is integrated into the medical field, several risks and challenges need to be addressed, ranging from the protection of patient data privacy and injuries to patients from AI to questions of diagnostic authority between AI and physician. The solutions to these problems are complex and may require lawmakers to pass policies that address the concerns raised by AI; others may require re-educating physicians to accommodate AI systems.
The increase in AI development concerns policymakers because privacy laws and regulations are old and outdated; they do not account for AI capabilities in healthcare or the security risk AI poses to health information. AI requires a large amount of data to perform its functions, and that data is vulnerable to breaches and to hackers actively looking to exploit any vulnerability. Data collection raises questions about the privacy and consent of patients whose information is used for research, development, and commercial purposes, as well as the issue of intellectual property ownership of the data generated by AI systems. Additionally, there is concern about the interaction between clinicians and AI. In the event of a disagreement between the AI and the physician's diagnosis, which choice will be made? Will one have the authority to overrule the other? Will AI systems replace physicians entirely? These are some of the issues policymakers need to address quickly to avoid repeated violations of patient rights and privacy.
Bias and Inequality
Artificial intelligence is prone to bias, which can be introduced at different stages of development. One source is how the AI system is tuned to specific configurations and how the algorithm is programmed to produce particular results while ignoring other variables. Another is the use of existing data already tainted with bias. The way data is interpreted and evaluated may itself be influenced by bias, which can impact the effectiveness of AI and its functions. Governments may address these issues by offering guidance for data sharing between the public and private sectors so that datasets better capture the whole population. Governments should also hold companies that use AI systems accountable through comprehensive regulation and policy that address these biases.
Privacy and AI: Facial Recognition
As the digital era progresses, facial recognition has become prevalent in organizational security, law enforcement, and commercial use. Information technology professionals are left responsible for enforcing ethical practices so that the technology increases quality of life for the general population instead of merely expanding surveillance capability for the public and private sectors. Considering that there are valid uses for facial recognition in law enforcement, it is reasonable to argue in favor of its use where it improves quality of life in a societal context. But for this to occur, a regulated standard operating procedure across the public and private sectors is needed to prevent facial recognition technology from infringing upon citizens' rights.
Facial recognition accuracy improves when machine learning is used to recognize patterns in training datasets, meaning the validity of facial recognition depends on the data used to develop it. This naturally leads to an evaluation of the ethical compass of the companies that develop facial recognition technology as a means of profit. Depending on the dataset used, there is a risk of biased facial recognition producing inconsistent output across different ethnicities, races, and age groups. Another risk factor is the trustworthiness of the server performing biometric processing. With all this taken into consideration, the development of this technology is inevitable, and risk mitigation is preferable to restricting its use altogether.
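The dataset-dependence risk described above is commonly surfaced with a per-group accuracy audit: if recognition accuracy differs sharply between demographic groups, the training data is likely unrepresentative. The sketch below is a hypothetical minimal audit; the group labels and results are invented for illustration, and real audits compare richer metrics such as false-match and false-non-match rates.

```python
# Hypothetical bias audit: compare recognition accuracy per
# demographic group. Group names and results are illustrative only.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, correct) pairs.
    Returns {group: fraction of correct recognitions}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

results = [("group_a", True), ("group_a", True), ("group_a", True),
           ("group_a", False), ("group_b", True), ("group_b", False),
           ("group_b", False), ("group_b", False)]
rates = accuracy_by_group(results)
print(rates)  # a large gap between groups signals unrepresentative data
```

A gap like the one in this toy data (0.75 versus 0.25) is exactly the inconsistent output across groups that the text warns about.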
The ethical guidelines of the IT profession support exploring new technology as a means to improve quality of life, but at the same time they require industry intervention to ensure that the implementation of facial recognition does not infringe on the general population's privacy rights. As increased data privacy regulation pushes private organizations to forgo data collection for commercial purposes, the same could be suggested for facial recognition. However, such regulation becomes challenging and ineffective in public settings where individuals are subject to video surveillance by law enforcement, or when entering private commercial property with proprietary policies. For these unavoidable scenarios, the most ethical approach is to mitigate risk factors with the following objectives, prioritizing quality of life over data monetization:
- Reducing the biases in facial recognition output
- Legally enforcing secure data-handling practices for organizations processing biometric data
- Regulating surveillance so that it does not infringe upon the privacy rights of law-abiding individuals not suspected of a crime
Privacy and AI: Autonomous Vehicles
Society is quickly progressing toward a future run entirely on renewable energy, and one key focus of progressive countries is to reduce the carbon footprint by eliminating combustion automobiles and replacing them with fully electric ones. These electric vehicles offer the unique ability to reduce emissions and utilize their advanced artificial intelligence capabilities to automate the driving experience and make roadways safer. However, for the systems to function as designed, an extensive amount of information must be collected using the vehicle’s sensors, including data about the surroundings and other vehicles, route history, conversations between individuals, and cellphone communication when connected through the phone system. As these vehicles require continuous data collection and transfer, data privacy and protection prioritization are paramount to prevent identity and financial fraud. The following section discusses the privacy-invasive capabilities of autonomous vehicles and the potential privacy risks associated with autonomous vehicle data collection.
Owner and Passenger Information
The most crucial data collected, especially when coupled with other information presented, is the unique identifiers of the owner and any passengers, which can be linked to a real-world identity. The vehicle would likely need to save this information to authenticate individuals for authorized use and numerous customizable safety, comfort, and entertainment settings. Identities can be distinguished with a high degree of certainty based on selected preferences and other information collected while in use, such as conversations between individuals and cellphone communication when connected through the phone system.
Location data associated with vehicle occupants is routinely collected and used for route planning, recalling previous destinations, identifying points of interest, and gathering real-time traffic status. Aggregation, the process by which the vehicle combines individual and diverse pieces of location data to predict future destinations and movements, can reveal details that occupants may prefer to keep private. For example, inspecting patterns in routes and timestamps may reveal where occupants live and work and which locations they frequently visit. From a personal perspective, this can expose an individual's familial, political, professional, religious, and sexual associations, while also raising safety concerns about stalkers and abusers. From a business perspective, this location data can reveal shopping preferences and spending habits, providing valuable marketing insight to advertisers, much as browser cookies do. The implications of this practice range from serving customized advertising through the infotainment system to routing a vehicle along a sponsored route that exposes the occupants to specific companies or destinations based on the data collected.
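How little it takes to infer home and work from timestamped location pings can be shown with a toy sketch. The heuristic (night-time pings cluster at home, midday pings at work), the place names, and the data are all hypothetical; real trajectory analysis is far more sophisticated.

```python
# Toy illustration of location-pattern inference: guess likely
# "home" and "work" from timestamped pings. Data is hypothetical.
from collections import Counter

def infer_home_work(pings):
    """pings: list of (hour_of_day, place). Assumes night pings
    (22:00-06:00) cluster at home, midday pings (09:00-17:00) at work."""
    night = Counter(p for h, p in pings if h >= 22 or h < 6)
    midday = Counter(p for h, p in pings if 9 <= h <= 17)
    home = night.most_common(1)[0][0] if night else None
    work = midday.most_common(1)[0][0] if midday else None
    return home, work

pings = [(23, "Oak St"), (2, "Oak St"), (5, "Oak St"),
         (10, "Main Plaza"), (14, "Main Plaza"), (16, "Cafe"),
         (12, "Main Plaza")]
print(infer_home_work(pings))  # → ('Oak St', 'Main Plaza')
```

Seven data points suffice here; a vehicle logging continuously collects orders of magnitude more, which is why route-and-timestamp data is so revealing.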
Autonomous vehicles learn and operate using sensors that continuously absorb and store data about their surroundings, including other vehicles and pedestrians, creating potential privacy concerns. These are people who have not consented to any data collection or use and are often unaware of this technology's privacy impact on their lives. Whereas other privacy-invasive technologies have allowed users to opt in, autonomous vehicle systems cannot give every pedestrian and driver they encounter notice and choice. There are fears that these sensors could be used covertly by governments to log the physical movements of every person within view of a vehicle, making it possible to find anyone anywhere and raising concerns that political dissidents could be targeted. This issue is complicated by data protection laws such as the GDPR, specifically the 'right to be forgotten', which cannot be honored in these circumstances. Another concern for individuals purchasing these vehicles is how insurance agencies will use the data collected by the system. Insurance companies argue that gathering specific information about individual drivers, their driving habits, and their real-time situational awareness provides accurate risk analysis. However, some worry that the data could be used in a pay-as-you-drive model where premiums are determined by driving performance.
The European Union is leading the way on privacy legislation with the GDPR, which aims to strengthen data protection across the EU and restrict data transfer outside its borders. Notably, the GDPR requires data protection to be incorporated from the beginning, at the design stage, an approach that has come to be known as 'privacy by design.' By embedding this priority into an autonomous vehicle system, manufacturers can protect the privacy of individuals by extension of protecting the system's own data. Eight design strategies have been proposed to achieve privacy by design:
- minimize data collected using select-as-you-collect, anonymization, and pseudonymization design patterns.
- hide data using encryption both in transit and at rest, as well as obscuring network traffic.
- separate personal data as much as feasible through distributed approaches.
- aggregate data to process at the highest level and with the least possible detail using the k-anonymity family of techniques or differential privacy.
- inform the subjects of the system transparently by having adequate interfaces and detecting potential privacy breaches.
- provide control to users over data using techniques such as user-centric identity management and end-to-end encryption.
- enforce privacy policies through appropriate access control mechanisms.
- demonstrate compliance with privacy policies through logging and auditing.
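The first two strategies above, minimize and hide, can be sketched in a few lines: pseudonymize the direct identifier with a salted hash and coarsen the GPS coordinates before a record leaves the vehicle. The record format, field names, and salt are hypothetical; production systems would use keyed hashing with managed secrets and formal anonymization techniques such as k-anonymity or differential privacy, as the list notes.

```python
# Sketch of the "minimize" and "hide" strategies: pseudonymize the
# owner ID and coarsen the location. Record format is hypothetical.
import hashlib

def pseudonymize(record, salt):
    """Replace the direct identifier with a salted hash and round
    GPS coordinates to two decimals (~1 km) so exact addresses
    are not retained."""
    token = hashlib.sha256((salt + record["owner_id"]).encode()).hexdigest()[:16]
    return {
        "owner": token,                  # pseudonym, not the real identity
        "lat": round(record["lat"], 2),  # coarsened location
        "lon": round(record["lon"], 2),
        # raw route history and timestamps are dropped entirely
    }

trip = {"owner_id": "alice@example.com", "lat": 41.87811, "lon": -87.62980}
safe = pseudonymize(trip, salt="per-deployment-secret")
print(safe)
```

Note that pseudonymization alone is reversible by anyone holding the salt, which is why the later strategies (separate, enforce, demonstrate) matter as much as this first transformation.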
Privacy Risks in Data Acquisition
With the extensive use of various data collection facilities, intelligent systems can not only identify individuals through fingerprints, heartbeats, and other physiological characteristics, but also automatically adjust lighting, room temperature, music, and even sleep schedules according to different people's behavioral preferences, and can judge whether a body is healthy from exercise patterns, eating habits, and changes in physical signs. The use of these smart technologies means that intelligent systems hold a significant amount of personal information and may even know users better than they know themselves. Used correctly, these data can improve the quality of human life; but if private information is used illegally for commercial purposes, privacy violations result.
Privacy Risks in Cloud Computing
Because cloud computing is easy to use and provides a service model based on shared resource pools, many companies and government organizations have begun to store data in the cloud. Once private information is stored in the cloud, it is exposed to various threats and attacks. Because artificial intelligence systems have high computing power requirements, cloud computing has become the primary architecture for many artificial intelligence applications. Therefore, when developing such smart applications, cloud privacy protection is also a problem that must be considered.
Privacy Risks in Knowledge Extraction
Extracting knowledge from data is a primary function of artificial intelligence, and knowledge extraction tools have become more and more powerful. Many seemingly unrelated pieces of data can be integrated to identify behavioral, and even personality, characteristics. For example, by combining website browsing history, chat content, shopping records, and other data, one can outline a person's behavioral trajectory, analyze personal preferences and habits, and further predict a consumer's potential needs. Companies can then provide consumers with relevant information, products, or services in advance. However, these personalized customization processes are accompanied by the discovery and exposure of personal privacy. How to regulate privacy protection is a problem that must be considered alongside the technology's applications.
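The aggregation risk just described, individually bland records combining into a revealing profile, can be sketched as follows. The data sources, keywords, and prediction logic are invented for illustration; real profiling systems use far richer features and learned models rather than simple keyword counts.

```python
# Toy illustration of knowledge extraction: merge keywords from
# separate, individually harmless data sources into one interest
# profile. All data here is hypothetical.
from collections import Counter

def build_profile(browsing, chats, purchases):
    """Merge keyword occurrences from separate sources into a single
    interest profile and pick the dominant interest as a prediction."""
    interests = Counter()
    for source in (browsing, chats, purchases):
        interests.update(source)
    top_interest, _ = interests.most_common(1)[0]
    return interests, top_interest

browsing = ["running shoes", "marathon training", "marathon training",
            "headphones"]
chats = ["marathon training", "race registration"]
purchases = ["running shoes", "energy gels"]
profile, prediction = build_profile(browsing, chats, purchases)
print(prediction)  # → marathon training
```

No single source reveals much, but the merged profile identifies a hobby, a likely upcoming event, and products to advertise, which is precisely the privacy exposure the paragraph describes.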
References
Artificial Intelligence—The Driving Force of Industry 4.0. (n.d.). Seebo. Retrieved April 29, 2021, from https://www.seebo.com/industrial-ai/
Azizi, A. (2019). Applications of Artificial Intelligence Techniques in Industry 4.0. Springer.
Bryson, J., & Winfield, A. (2017). Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems. Computer, 50(5), 116–119. https://doi.org/10.1109/MC.2017.154
Bloom, C., Tan, J., Ramjohn, J., & Bauer, L. (2017). Self-driving cars and data collection: Privacy perceptions of networked autonomous vehicles. In Thirteenth Symposium on Usable Privacy and Security (SOUPS 2017) (pp. 357–375). Santa Clara, CA.
Erkin, Z., Franz, M., Guajardo, J., Katzenbeisser, S., Lagendijk, I., & Toft, T. (2009). Privacy-Preserving Face Recognition. Springer. https://link.springer.com/chapter/10.1007/978-3-642-03168-7_14
Froomkin, A. M., Kerr, I. R., & Pineau, J. (2018). When AI outperform doctors: The dangers of a tort-induced over-reliance on machine learning and what (not) to do about it. SSRN Electronic Journal, 1–67. https://doi.org/10.2139/ssrn.3114347
Danezis, G., Domingo-Ferrer, J., Hansen, M., Hoepman, J., Metayer, D., Tirtea, R., & Schiffner, S. (2014). Privacy and data protection by design - from policy to engineering. ENISA.
Gomes De Andrade, N. N., Martin, A., & Monteleone, S. (2013, June). "All the better to see you with, my dear": Facial recognition and privacy in online social networks. IEEE Xplore. https://ieeexplore.ieee.org/abstract/document/6461872
Horwitz, L. (2019, July 19). The future of IoT miniguide: The burgeoning IoT market continues. Cisco. Retrieved May 3, 2021, from https://www.cisco.com/c/en/us/solutions/internet-of-things/future-of-iot.html
Internet of things. (2021, May 3). In Wikipedia. https://en.wikipedia.org/wiki/Internet_of_things
Lloyd, K. (2018). Bias Amplification in Artificial Intelligence Systems. arXiv, abs/1809.07842.
Maple, C. (2017). Security and privacy in the internet of things. Journal of Cyber Policy, 2(2), 155–184.
Price, W. N., II. (2019, February 25). Artificial Intelligence in the Medical System: Four Roles for Potential Transformation. 21 Yale J.L. & Tech. Spec. Iss. 122 (2019); 18 Yale J. Health Pol'y L. & Ethics Spec. Iss. 122 (2019); U of Michigan Public Law Research Paper No. 631. Available at SSRN: https://ssrn.com/abstract=3341692
Such, J. M. (2017). Privacy and Autonomous Systems. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17 (pp. 4761–4767). https://doi.org/10.24963/ijcai.2017/663
Wortmann, F., & Flüchter, K. (2015). Internet of Things. Business & Information Systems Engineering, 57, 221–224. https://doi.org/10.1007/s12599-015-0383-3
Ziegeldorf, J. H. (2013). Privacy in the Internet of Things: threats and challenges. Security and Communication Networks, 7(12), 2728–2742. Retrieved April 29, 2021, from https://link.springer.com/content/pdf/10.1007/978-981-13-2640-0.pdf. https://doi.org/10.1007/978-981-13-2640-0