Chatbots For Social Change/Print version
Introduction
By necessity, this book is widely interdisciplinary, bringing together insights from scholarly work on "understanding," social action, social systems, the social psychology of belief, the philosophy of science, the sociology of belief systems, research ethics, the ethics of privacy and of interaction, clinical psychology, the technical intricacies of LLMs, frameworks of knowledge management, and automated proof-checking, to name some of the most important fields of knowledge involved.
As you can see, this textbook cannot be built by just one person. I (Alec McGail) am writing this now to start the endeavor in a free and transparent manner, very much in the spirit of what's discussed in Section 2: What's Ethical?. So, anyone who feels they have something they can contribute to the endeavor should reach out to me (am2873@cornell.edu), or just go ahead and make changes.
If you'd like to follow my process in working through this class, follow my Twitch channel and YouTube channel.
Here, you will embark on an intellectual adventure, blending the theoretical intricacies of intersubjective thought with hands-on training in Large Language Models (LLMs). By the end, you won’t just understand the mechanics of these digital marvels; you will be the craftsman behind their creation.
For the intrepid scholar and the visionary educator alike, this journey promises a harmonious blend of robust theoretical foundations and cutting-edge practical applications. Each week unfurls a new layer of understanding, from ethical considerations to technical mastery, all culminating in a capstone project where you breathe life into your very own chatbot. This isn't just a course; it's a call to be at the forefront of a sociotechnological revolution, with the power to shape discourse, challenge beliefs, and unite our ever-evolving global community. Get ready to be both the student and the pioneer, charting the path for the next wave of societal evolution.
Disclaimer: "Chatbots for Social Change" has been collaboratively developed with the aid of ChatGPT, a product of OpenAI's cutting-edge Large Language Model (LLM) technology. The utilization of ChatGPT in the creation of this WikiBook is a practical demonstration of the subject matter at the heart of this course. As learners delve into the complexities of collective cognition, LLM training, knowledge management, and social interaction, they are interacting with content that has itself been influenced by the advanced technologies under discussion.
This recursive element of the course design illustrates the dynamic and evolving interaction between human intellect and artificial intelligence. It's an embodiment of the dualities and partnerships that can emerge when the creative capacities of humans are augmented by the meticulousness of machine intelligence. This partnership is indicative of the immense possibilities and responsibilities that come with the integration of such technologies into the fabric of our digital era. Understanding, leveraging, and steering these advancements remain a central theme and imperative throughout this WikiBook.
Independent Learning
Independent consumption by definition means you can do whatever you want with your time and this book. So do that! Read as much or as little as you like, skip around, and don't be shy about asking questions.
If you are serious about learning the content, you will have to devote significant time. I recommend setting aside consistent time every week and working through the book section by section. Do the prototypes yourself, and contribute to the WikiBook. And email me (am2873@cornell.edu)!
Teaching the Course
I am developing this textbook in approximately the same structure in which I imagine one would teach a 9-week intensive course (perhaps over a summer).
Weeks 1-3: Sections 1-3, which are largely theoretical, could be presented in the first three weeks. This makes it a whirlwind tour, but the textbook allows students to dig deeper at their discretion. At the end of the second section, What's Ethical?, I imagine students would draft an IRB proposal for an intervention they would like to conduct. This serves to focus students on what they'd like to do with the technology before the third theoretical week, How Do We Do It?, and the following technical weeks, which prepare the student for their own prototyping.
Weeks 4-5: The next two weeks can be spent on the technical details of LLMs and various other relevant technologies. Students can choose a topic from the textbook to explain to the class, or choose to research a new one and write a wiki chapter.
Weeks 6-8: The next three weeks would involve hands-on prototyping based on the subject matter. This encourages a fail-fast mentality and avoids the "scope creep" which can easily result in never getting off the ground. This stage would benefit greatly from a user-friendly package giving high-level access to the capabilities mentioned in this book.
Week 9: The final week can be used to reflect on the course material and to allow students to present what they were able to do, what challenges they faced, and their ideas for further development and use of these technologies. If they feel they can contribute to the code-base, this would be a good time to submit their pull requests.
What's Possible?
Conversation in Theory
What do we mean by "meaning"? The subject of this chapter, it's the sort of self-referential question which one may think at first glance to be fundamentally unanswerable. But this apparent circularity has not stopped philosophers and scientists from working toward an answer, or stopped humans globally from saying things, and really "meaning" them. Questions about the meaning of "understanding," "explanation," "belief," and "reasoning" have a similar circularity, and answers bearing on one of these concepts help to answer what the others mean. In this chapter we conduct a broad survey of answers offered through history, in the hope of assembling a robust account of meaning, or at least one which will help us understand how to design a system which can understand "for all practical purposes." Without further ado, let's start at the beginning.
Overview of the Fields
Not everything relevant will be treated in this chapter, so we'll begin by giving an alphabetical index of fields bearing on the study of meaning and the structure of language and conversation.
Philosophy of Meaning
- Cognitive Linguistics: This area looks at how language and thought interact. It examines how linguistic structures reflect the ways people categorize, conceptualize, and understand the world around them.
- Existentialism: Existentialist philosophers like Sartre and Heidegger delve into how meaning is not inherent in the world but is something that individuals must create for themselves, often in the face of an absurd or fundamentally meaningless universe. The tradition of existentialism leads naturally to studies of common understanding, in the taken-for-granted worlds discussed in phenomenology (Heidegger), especially by Schutz.
- Hermeneutics: Originally a method of interpreting religious texts, hermeneutics has expanded as a broader methodology for interpreting texts and symbols in general. It's about understanding the meaning behind written and spoken language, often in a historical or cultural context.
- Philosophy of Language: This branch of philosophy deals extensively with how language and its structure contribute to meaning. It covers topics from the meaning of words and sentences to the nature of communication and understanding.
- Pragmatics: As a part of linguistics and semiotics, pragmatics studies how context influences the interpretation of meaning, considering factors like speaker intent, cultural norms, and situational context.
- Philosophy of Mind: This branch explores the nature of mental states, consciousness, and how these relate to issues of meaning and understanding. It questions how mental representations carry meaning and how this is communicated.
- Pragmatism: Pragmatism, a school of thought in American philosophy, focuses on the practical implications of ideas and beliefs, which is key in understanding how meaning is construed and applied in real-world scenarios.
Sociology of Meaning
- Critical Theory: Philosophers like Adorno, Horkheimer, and Habermas in the Frankfurt School have explored how societal structures, power dynamics, and ideologies shape the construction and interpretation of meaning. We will also include Feminist Sociology in this category, as it makes similar claims that meaning and knowledge are conditioned by social structure.
- Ethnomethodology: A sociological perspective that examines how people produce and understand social order in everyday life. It is particularly interested in how people make sense of and find meaning in their social world, and has roots in phenomenology, especially Schutz.
- Interaction Ritual Theory: Goffman's work focuses on the significance of social interactions and the performative aspects of social life, exploring how meaning and identity are constructed and expressed in everyday rituals and practices.
- Phenomenology: Phenomenology, especially as developed by Husserl, focuses on the structures of experience and consciousness. Schutz extended this to the social realm, exploring how individuals understand and ascribe meaning to their experiences and to each other in everyday life.
- Post-Structuralism: This movement, with thinkers like Derrida and Foucault, challenges the stability and universality of meaning, suggesting it is fluid, context-dependent, and a product of discourses of power.
- Semiotics: The study of signs and symbols and their use or interpretation. It's a field that intersects with linguistics, philosophy, and anthropology, focusing on how meaning is created and understood.
- Social Constructionism: This perspective, associated with Peter L. Berger and Thomas Luckmann, argues that many of the most familiar aspects of our social world and our understanding of it are not natural, but rather constructed through ongoing social processes.
- Sociolinguistics: This interdisciplinary field examines how language use and social factors are interrelated. It looks at how different social contexts and groups shape language, communication styles, and the meanings conveyed through language.
- Speech Act Theory: This theory explores how language is used not just for conveying information but for performing actions. Searle, in particular, contributed to the development of social ontology, examining how shared understandings and intentions create a framework for meaningful communication and social reality.
- Symbolic Interactionism: Mead's theory emphasizes the role of social interaction in the development of the mind and the self. Meaning, in this view, arises from the process of interaction and communication between individuals.
Intersubjectivity
Frames, Speech Acts, Conversational Analysis
Natural Language Processing
Can Machines Understand?
Conversation in Practice
Besides the armchair theorists treated in the last section, it will be instructive to turn our attention to the strategies which professional practitioners of conversational techniques have developed to better perform their duties. Examples include therapists, mediators of conflict and discord, political speechwriters, salespeople, negotiators, educators, coaches and mentors, and healthcare professionals, to name a few. There is no shortage of areas where individuals must use conversation to get something done, and in these areas we should find their learned strategies to be a helpful guide to the nature of conversational interaction in general.
This section will treat a wide variety of strategies, typologies, and theories developed with different aims over history to solve practical problems with conversation. It is by no means exhaustive, and this being a WikiBook, I encourage enlightened readers to contribute their own knowledge where appropriate.
Carl Rogers and Client-Centered Therapy
Carl Rogers was a pioneering psychologist who developed client-centered therapy in the 1940s and 1950s, a revolutionary approach at the time that emphasized the humanistic aspects of psychology. Rogers believed in the inherent goodness and potential for growth within every individual, a stark contrast to the deterministic views of human behavior prevalent in his time. His work focused on the importance of the therapeutic relationship as a facilitator for personal development and healing. Rogers' theory was built on the idea that people have a self-actualizing tendency: an innate drive towards growth, development, and fulfillment of their potential. By providing a supportive and understanding environment, therapists could help clients unlock this potential.
His approach introduced several core principles that have influenced not just client-centered therapy but also many other counseling theories, including motivational interviewing:
- Empathy: Demonstrating a deep, nonjudgmental understanding of the client's experience and feelings.
- Unconditional Positive Regard: Offering acceptance and support to the client regardless of their actions or feelings.
- Congruence: Being genuine and authentic in the therapeutic relationship, allowing the therapist's true feelings to be evident without overshadowing the client's experience.
- Self-Actualization: The belief that every individual has the innate ability to fulfill their potential and achieve personal growth.
Rogers' emphasis on the client's perspective, autonomy, and the therapeutic relationship's quality laid the groundwork for the development of approaches like motivational interviewing, which similarly prioritize empathy, collaboration, and the elicitation of personal motivation for change.
Motivational Interviewing
Miller, W. R., & Rollnick, S. (2013). Motivational Interviewing: Helping People Change (3rd ed.). Guilford Press.
This book is a fantastic introduction to and exposition of a method developed by clinical psychologists William R. Miller and Stephen Rollnick, called Motivational Interviewing, or MI. Although the Wikipedia article, and certainly the textbook, offer a comprehensive summary of this strategy, it is worth summarizing some of the main principles here for reference and comparison with other conversational strategies.
- Collaboration over Confrontation: Emphasizes a partnership that respects the client's autonomy rather than confrontation. It's about listening, understanding, guiding, and respecting the client's perspective.
- Drawing Out Rather Than Imposing Ideas: Focuses on evoking the client's motivations and commitments to change, rather than imposing solutions.
- Focus on Ambivalence: Recognizes that ambivalence about change is natural. The method works through this ambivalence by exploring the client's conflicting feelings about change.
- Evoking Change Talk: Guides clients towards expressing their own arguments for change, recognizing and eliciting the client's reasons for and benefits of change.
- Responding to Resistance: Teaches practitioners to view resistance as a signal for more exploration and to work with it rather than confronting it directly.
- Four Processes of MI: Engaging (building a working relationship), Focusing (agreeing on a direction for change), Evoking (drawing out the client's own motivations), and Planning (developing commitment to a concrete plan).
- Core Skills (OARS): Built on fundamental communication skills:
- Open-ended questions: To explore the client's thoughts and feelings.
- Affirmation: Recognizing strengths and efforts.
- Reflective listening: Showing understanding of the client's perspective.
- Summarizing: Reflecting back the essence of what the client has expressed.
These strategies are based on the principle that the true power for change lies within the client, and the method's effectiveness has been demonstrated across various settings, including healthcare, addiction treatment, and counseling.
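To connect this to chatbot design, here is a minimal sketch of how the OARS skills might be encoded in a system prompt for an LLM-based conversational agent. The `openai` client usage reflects one common API at the time of writing, and the model name is an assumption; treat this as an illustrative starting point, not a validated MI implementation.

```python
# A minimal sketch of an MI-flavored chatbot loop (assumes the `openai`
# Python package and an OPENAI_API_KEY in the environment; the model
# name is an assumption and may need updating).
from openai import OpenAI

client = OpenAI()

MI_SYSTEM_PROMPT = """You are a conversational partner practicing
Motivational Interviewing. In every reply:
- Ask open-ended questions rather than yes/no questions.
- Affirm the person's strengths and efforts.
- Use reflective listening: restate the feeling and meaning you heard.
- Periodically summarize what the person has expressed.
Never impose solutions; evoke the person's own reasons for change."""

def mi_reply(history: list[dict]) -> str:
    """Send the conversation so far and return the assistant's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[{"role": "system", "content": MI_SYSTEM_PROMPT}] + history,
    )
    return response.choices[0].message.content

history = [{"role": "user",
            "content": "I want to quit smoking, but honestly I'm not sure I can."}]
print(mi_reply(history))
```

A real deployment would, at minimum, be evaluated against MI fidelity coding schemes and reviewed by trained practitioners before being used with people.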
Cognitive Behavioral Therapy
Cognitive Behavioral Therapy (CBT) stands as a counterpoint to client-centered therapy and Motivational Interviewing, offering a more structured and directive approach to therapy. Developed by Aaron T. Beck in the 1960s, CBT is based on the premise that dysfunctional thinking leads to negative emotions and maladaptive behaviors. The goal of CBT is to identify, challenge, and modify these negative thoughts and beliefs to improve emotional regulation and develop personal coping strategies that target solving current problems.
CBT's core principles include:
- Identification of Negative Thoughts: Clients learn to recognize and identify distorted thoughts that contribute to negative emotions.
- Challenging Cognitive Distortions: Through techniques like cognitive restructuring, clients learn to challenge and reframe negative patterns of thought.
- Behavioral Experiments: Clients test these new thought patterns in real-life situations, learning to modify their behavior based on healthier cognitions.
- Skill Development: CBT emphasizes the development of coping strategies and problem-solving skills to manage future challenges effectively.
While CBT's focus on cognitive processes and direct intervention contrasts with the non-directive, empathetic approach of Rogers' therapy and the ambivalence-focused nature of Motivational Interviewing, it complements these methods by offering an alternative pathway for clients whose needs may be better served by a more structured approach. The diversity of therapeutic approaches highlights the complexity of human psychology and the necessity of tailoring interventions to meet individual client needs.
Crucial Conversations: Navigating High-Stakes Discussions
Crucial Conversations: Tools for Talking When Stakes Are High (2nd ed.), by Kerry Patterson, Joseph Grenny, Ron McMillan, and Al Switzler, provides a framework for handling conversations where opinions differ, stakes are high, and emotions run strong. The book outlines strategies to ensure these crucial conversations lead to positive outcomes rather than misunderstanding and conflict. Here are the key strategies outlined in the book:
- Start with Heart: Keeping focused on what you truly want to achieve from the conversation, helping to maintain clarity and prevent emotional derailment.
- Learn to Look: Becoming aware of when conversations become crucial and require special attention to maintain constructive dialogue.
- Make It Safe: Ensuring the conversation environment is safe by maintaining mutual respect and mutual purpose, allowing all parties to feel comfortable sharing their viewpoints.
- Master My Stories: Recognizing and managing the internal stories that influence one's emotional responses and behaviors, aiming to respond more thoughtfully.
- State My Path: Communicating your own views clearly and respectfully, sharing both facts and your interpretation, while remaining open to others' inputs.
- Explore Others' Paths: Showing genuine curiosity about others' perspectives through asking questions, active listening, and acknowledging their emotions and viewpoints.
- Move to Action: Deciding how to proceed after the conversation, including making decisions, determining who will carry them out, and agreeing on follow-up actions.
These principles are designed to facilitate open, honest exchanges that can resolve conflicts, build stronger relationships, and lead to better shared decisions.
Schemas for Leading Group Discussions
Group facilitation is a crucial skill for guiding discussions, ensuring productive dialogue, and achieving desired outcomes. Below are central concepts related to effectively leading group discussions within the context of group facilitation:
- Agenda Setting: Establishing a clear agenda that outlines topics and objectives to keep the group focused and productive.
- Neutral Facilitation: Maintaining neutrality by the facilitator, focusing on process management without advocating for specific outcomes.
- Active Listening: Promoting active listening among group members to foster understanding and respect for differing opinions.
- Conflict Resolution: Constructively addressing conflicts to find common ground or a mutually agreeable way forward.
- Encouraging Participation: Ensuring all participants have the opportunity to contribute, especially encouraging quieter members to speak up.
- Decision-Making Processes: Employing decision-making strategies such as consensus, majority vote, or other agreed-upon methods to progress the discussion.
- Time Management: Keeping the discussion within the predetermined timeframe to ensure topics are covered effectively.
- Summarization and Clarification: Summarizing key points, agreements, and action items for clarity and to confirm next steps.
- Group Dynamics Awareness: Navigating the varying personalities, relationships, and power dynamics within the group.
- Feedback and Evaluation: Seeking feedback on the facilitation process and outcomes to enhance future group interactions.
- Ground Rules: Setting clear guidelines for interaction to promote respectful and productive discussion.
- Visual Facilitation: Using visual aids such as whiteboards, charts, or digital platforms to help organize thoughts and illustrate ideas.
Recommended References
- Schwarz, R. (2002). The Skilled Facilitator: A Comprehensive Resource for Consultants, Facilitators, Managers, Trainers, and Coaches. Jossey-Bass.
- Kaner, S. (2014). Facilitator's Guide to Participatory Decision-Making. Jossey-Bass.
- Bens, I. (2005). Facilitating with Ease!: Core Skills for Facilitators, Team Leaders and Members, Managers, Consultants, and Trainers. Jossey-Bass.
- Hunter, D., Bailey, A., & Taylor, B. (1995). The Art of Facilitation: How to Create Group Synergy. Fisher Books.
Political Conversation
Political conversation is a critical domain of communication that encompasses discussions on societal issues, policy-making, governance, and more. It seeks to foster a structured, respectful, and constructive dialogue among diverse viewpoints to better understand societal challenges and develop actionable solutions. Below are key principles and practices essential to the theory of political conversation:
- Deliberative Democracy: Emphasizes informed, rational dialogue in decision-making. Practical applications like deliberative forums and citizen juries aim to enhance the quality of political conversation.
- Inclusive Dialogue: Encourages participation from a broad spectrum of voices, including marginalized and minority groups, to ensure comprehensive discussions on societal issues.
- Civility and Respect: Promotes decency in political discourse, fostering constructive and less polarized conversations.
- Fact-Based Dialogue: Anchors political conversations in verifiable facts to support informed discussions and decision-making.
- Active Listening: Urges individuals to listen to and attempt to understand opposing viewpoints, fostering empathy and broader perspectives.
- Common Ground: Focuses on identifying areas of agreement to build trust and lay a foundation for addressing contentious issues.
- Conflict Resolution: Uses structured dialogue and negotiation techniques to resolve conflicts within political discussions.
- Transparency and Accountability: Ensures open communication about decisions, policies, and governmental actions to build trust and foster constructive dialogue.
- Educational Engagement: Leverages educational campaigns to inform the public on key issues, promoting a more knowledgeable base for political conversation.
- Mediated Discussions: Employs facilitated or mediated discussions to manage contentious issues more productively.
Notable Methods and Initiatives
To foster healthier political conversations, several methods and initiatives have proven effective:
- Citizens’ Assemblies and Deliberative Polling: Forums that gather randomly selected citizens to deliberate on political or societal issues, providing diverse perspectives and grassroots solutions.
- Online Platforms for Civic Engagement: Digital platforms, like vTaiwan, use technology to facilitate open, constructive discussions on legislative and societal issues, bridging the gap between citizens and policymakers.
References
These foundational principles, alongside innovative methods and initiatives, offer a framework for engaging in more constructive, inclusive, and fact-based political conversations:
- Fishkin, J. (2011). When the People Speak: Deliberative Democracy and Public Consultation. Oxford University Press.
- Gastil, J., & Levine, P. (2005). The Deliberative Democracy Handbook: Strategies for Effective Civic Engagement in the Twenty-First Century. Jossey-Bass.
- Nabatchi, T., Gastil, J., Weiksner, G. M., & Leighninger, M. (Eds.). (2012). Democracy in Motion: Evaluating the Practice and Impact of Deliberative Civic Engagement. Oxford University Press.
These principles and references highlight the importance of fostering a healthy political conversation as a cornerstone of democracy, encouraging participation, understanding, and collaboration among all sectors of society.
Collective Action
Collective action lies at the heart of societal transformation. It represents the concerted effort of individuals and groups to achieve shared objectives. In the context of the digital era, chatbots, powered by advanced artificial intelligence, have emerged as key players in facilitating collective action. They have the potential to act as catalysts in global conversations, influencing social change.
Theoretical Foundations
Collective intelligence is crucial for the success of collective action. The research by Riedl et al. (2021) underscores the importance of equitable participation within groups. In a world increasingly mediated by chatbots, these digital interlocutors can democratize conversations, ensuring diverse voices are heard and valued. Moreover, chatbots can augment the collective decision-making process, contributing to the group's overall intelligence and performance.
Persuasion-oriented methods of collective action, such as those used in political campaigns or advocacy efforts, often risk exacerbating social divides. Here, chatbots can offer a solution by providing platforms for rational and balanced discourse, potentially linking to democratically compiled sources like Wikipedia to counteract misinformation.
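As a toy illustration of such grounding, the sketch below fetches a topic summary from Wikipedia's public REST API, which a chatbot could quote (with attribution) instead of answering purely from its parametric memory. The endpoint is Wikipedia's documented page-summary route; the function name and usage are illustrative.

```python
# A minimal sketch: ground a chatbot's claim in a Wikipedia summary.
# Uses Wikipedia's public REST API; only the `requests` package is needed.
import requests

def wikipedia_summary(topic: str) -> str:
    """Return the lead summary of an English Wikipedia article."""
    url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
           + topic.replace(" ", "_"))
    resp = requests.get(url, headers={"User-Agent": "c4sc-demo/0.1"}, timeout=10)
    resp.raise_for_status()
    return resp.json()["extract"]

# A bot could prepend this to its answer as a citable grounding passage.
print(wikipedia_summary("Deliberative democracy"))
```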
Tarrow’s contentious politics provide a framework for understanding the dynamics of collective action in confrontational settings. Chatbots, designed with the principles of contentious politics in mind, could facilitate the organization and mobilization of collective actors, thus becoming tools for orchestrating social movements and driving political participation.
Case Studies and Applications
Taiwan's Digital Democracy initiatives, such as vTaiwan and Pol.is, exemplify the successful application of digital platforms in enhancing transparency and fostering collective action. These platforms could be further evolved by integrating chatbots, which would serve as mediators and aggregators of public opinion, enhancing the deliberative process.
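Pol.is, concretely, collects agree/disagree/pass votes on short statements and clusters participants by their voting patterns to reveal opinion groups and points of consensus. The sketch below imitates that pipeline in miniature, using PCA for a two-dimensional opinion map and k-means for grouping; the vote matrix and the choice of two clusters are illustrative assumptions, not Pol.is's actual implementation.

```python
# A miniature, illustrative version of Pol.is-style opinion mapping.
# Rows are participants, columns are statements; 1 = agree, -1 = disagree, 0 = pass.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

votes = np.array([
    [ 1,  1, -1, -1,  1],
    [ 1,  1, -1,  0,  1],
    [-1, -1,  1,  1,  0],
    [-1,  0,  1,  1, -1],
    [ 1,  1,  0, -1,  1],
])

coords = PCA(n_components=2).fit_transform(votes)            # 2-D "opinion map"
groups = KMeans(n_clusters=2, n_init=10).fit_predict(votes)  # opinion groups

print("opinion-map coordinates:\n", coords.round(2))
print("group assignments:", groups)
# Statements with high average agreement across everyone approximate consensus.
print("per-statement mean agreement:", votes.mean(axis=0))
```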
Participatory Budgeting represents a direct application of collective action in governance. Chatbots could revolutionize this process by engaging with citizens, gathering their preferences, and helping them understand the implications of budgetary decisions, thus reinforcing Lefebvre's "right to the city" principle.
The polarization in the United States demonstrates the potential pitfalls of collective action when it leads to societal division. Chatbots could be programmed to recognize signs of polarization, intervening to introduce alternative perspectives and mediate discussions, thereby promoting social cohesion.
Methods and Strategies in Collective Action
The modular performances and repertoires of collective action described by Tarrow can be seen as a precursor to the adaptability required of chatbots in social change scenarios. These digital entities must be capable of operating in varied contexts, advocating for diverse causes, and engaging with different target audiences.
Challenges in Collective Action
Olson's Logic of Collective Action identifies the free-rider problem as a significant challenge in collective endeavors. Chatbots could mitigate this issue by providing personalized incentives for participation and by tracking and rewarding contributions to collective goals.
Impact and Implications
The role of chatbots in collective action extends beyond mere facilitation; they embody the potential for substantial social impact. By leveraging large language models and artificial intelligence, chatbots could unify disparate knowledge bases, serve as platforms for marginalized voices, and contribute to the formation of a more informed and engaged citizenry.
Conclusion
Chatbots are poised to become indispensable in the landscape of collective action. This chapter has delved into the theoretical underpinnings of collective action, explored its application in the digital age, and highlighted the transformative potential of chatbots in promoting social change. As architects of this new digital frontier, we must ensure that the development of chatbots is guided by ethical considerations, inclusivity, and a commitment to fostering constructive dialogue.
Designing Democracy
Dirk Helbing succinctly expresses the need for, and potentials of, a digital upgrade to society. He seems quite interested in the applications of upgraded money (in particular, multi-dimensional money) to the redesign and intelligent control of our incentive systems, and thus our organizational principles. I'm more interested in a deeper question, which he also addresses extensively, especially in his book Next Civilization [1]. He discusses such concepts as "computational diplomacy" and "digital democracy".
To ensure fairness in public sector models, we imagine a future where the people developing these models clearly explain the values behind their assumptions and choices, especially when those choices can affect society. By prioritizing decisions that uphold democratic principles, we can address biases and prevent the unfair outcomes that can result from a top-down approach.
- Helbing (2021) Next Civilization [1]
The most comprehensive recent resource consolidating the wide variety of prior work in this area, and giving it perspective, is Helbing's Next Civilization [1].
Free Speech
As the UN Universal Declaration of Human Rights (Article 19) puts it, "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."
The right to hold one's own opinions has at least three pillars:
- The possibility of getting access to relevant information with a reasonable effort (in particular, to the facts, which should be recognizable as such).
- The chance to form one's own opinion without being manipulated in that process.
- Sufficient and appropriate opportunities to voice one's own opinions without fear of being punished, and without censorship.
I see this project as critically aiding in and being guided by this form of the call to the right of free speech and the freedom to hold opinions.
Secondly, as has been put a thousand ways by academics, and as is easily recognized by most in the world at this point, there are significant manipulations of popular opinion happening on a grand scale. In the United States, this involves the destruction of popular trust in science, and sophisticated, highly funded propaganda campaigns, bankrolled by billionaires or conducted by the military-industrial complex, which rewrite textbooks, legislation, and media reports.
And finally, individuals in the U.S. can (for the most part) speak their minds. However, the extent to which they are heard, listened to, and recognized is not a matter of impartial, considerate deliberation of equal weight. Understanding is resource-intensive, and exposure is subject to the whims of the algorithm; in other words, to the features of the presentation, and to a winner-takes-all attention space. This leaves most of what is spoken completely unheard, and what bubbles to the top, ringing in our ears, is only a very selective sample of the universe of understanding. Helbing puts this in the language of complex systems as follows:
[...] asymmetrical interactions may lead to relative advantages of some people over others, the system may get stuck in a local optimum. The global optimum reached when interactions are symmetrical may be better for everyone.
References
[1] Helbing, D. (2021). Next Civilization: Digital Democracy and Socio-Ecological Finance. Springer.
What's Ethical?
editInstitutional Review Boards (IRBs)
Institutional Review Boards (IRBs) have long acted as a much-needed regulatory arm of academic research. Although principles of IRBs may differ slightly amongst jurisdictions, any one IRB will provide a schema for planning ethical human research which gives a good guide for researchers inside or outside the institution. In this section we give a high-level overview of the IRB guidelines for Cornell University, as they apply to experimental interventions on people using Large Language Models.
If you are interested in publishing academic research which makes scientific claims based on the results of the intervention, most journals will require you to provide the approval you obtained from an institutional IRB at a university. If you are not currently a member of a university or institution, your best bet is to find a collaborator or co-author who is willing to act as the contact, and submit to their IRB under their name.
Outline of IRB Considerations
Informed Consent: Participants must be fully informed about the nature of their interaction with the LLM, including potential risks, benefits, and the overall purpose of the research. The process must clearly distinguish the human participant's interaction with the AI from more traditional interventions. Special attention must be paid to ensuring participants understand that they are interacting with a machine rather than a person, and how their data might be used, stored, or processed by the AI system. If vulnerable populations are involved, the consent process may require further scrutiny and additional safeguards.
Data Collection and Retention: Data collection should be designed with clear protocols for safeguarding participant information. At the outset, the researcher must obtain informed consent, ensuring participants understand the type of data being collected, the purpose of the study, and how their information will be used and protected. Sensitive data, including personally identifiable information (PII), should be minimized to the greatest extent possible. If collecting and storing PII is necessary, the data collection process must involve robust encryption methods, such as AES-256 encryption, both at rest and in transit. This ensures that the data is secure during storage and transfer, preventing unauthorized access or breaches. Additionally, research teams should utilize secure data management platforms, with access restricted to only those individuals directly involved in the study.
To align with Cornell IRB standards, researchers must develop a comprehensive data retention and destruction policy. Data should only be retained for as long as is necessary to meet the objectives of the research. It is recommended to clearly outline a data retention period in the IRB submission, which includes specific timelines for data anonymization and deletion. For sensitive datasets, anonymization should involve techniques such as data masking, pseudonymization, or aggregation, which effectively reduce the risk of re-identification. Once the study is complete or the data is no longer needed, researchers must ensure that all data, particularly PII, is securely destroyed using approved methods, such as cryptographic erasure or physical destruction of storage media. Furthermore, if data is to be shared with third parties, strict data-sharing agreements should be established to ensure these entities adhere to the same confidentiality standards and that the data remains protected throughout its lifecycle. By employing these strategies, researchers can adequately protect participants' privacy and meet Cornell IRB's stringent data protection requirements.
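To make the storage practices above concrete, here is a minimal sketch that encrypts a participant record with AES-256-GCM and derives a pseudonymous identifier with a keyed hash. It uses Python's standard library plus the widely used `cryptography` package; key management (where keys live, who can read them) is deliberately out of scope here and is usually the harder problem in practice.

```python
# A minimal sketch of encrypting participant data (AES-256-GCM) and
# pseudonymizing identifiers (HMAC-SHA256). Requires the `cryptography` package.
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

data_key = AESGCM.generate_key(bit_length=256)  # store in a secrets manager
pseudonym_key = os.urandom(32)                  # separate key for study IDs

def encrypt_record(plaintext: bytes) -> bytes:
    """Encrypt one record; the random nonce is prepended to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(data_key).encrypt(nonce, plaintext, None)

def decrypt_record(blob: bytes) -> bytes:
    """Reverse encrypt_record, splitting off the 12-byte nonce."""
    return AESGCM(data_key).decrypt(blob[:12], blob[12:], None)

def pseudonymize(participant_id: str) -> str:
    """Stable, non-reversible study ID (keyed, so outsiders can't re-derive it)."""
    return hmac.new(pseudonym_key, participant_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

blob = encrypt_record(b"transcript: ...sensitive conversation text...")
print(pseudonymize("participant-042"), decrypt_record(blob)[:11])
```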
Risk Assessment: As part of the IRB review, researchers are required to provide a thorough risk assessment, identifying any potential harms that may arise from the use of LLMs. This includes emotional distress, the possibility of biased responses from the AI system, or unintended social consequences resulting from the interaction. If the LLM is designed to have an influence on participants' decision-making, emotions, or social behavior, these risks must be carefully weighed. The IRB will also evaluate how the research team plans to monitor and mitigate such risks, including offering resources or referrals for participants who may need support after the intervention.
Impact of LLM on Decision-Making and Autonomy: Given the nature of LLMs to simulate human-like conversation, there is concern about how AI might influence a participant's autonomy. Cornell's IRB expects researchers to clarify how the LLM's responses are generated and to assess whether there is a risk of the chatbot’s recommendations or outputs being perceived as authoritative or manipulative. In fields where the research seeks to create social or behavioral changes, the ethical implications of using LLM-generated content to influence participants must be considered. Researchers should propose clear debriefing mechanisms to ensure participants understand the nature of the interaction post-experiment.
Bias and Fairness: Many LLMs may reflect inherent biases from the data they were trained on, potentially leading to socially harmful outcomes. Cornell’s IRB requires researchers to address how they will monitor for and mitigate bias in the LLM's responses, particularly if the intervention affects marginalized or vulnerable groups. This could involve regular auditing of the AI’s outputs for fairness, as well as transparency in how the AI has been trained. Any known limitations or biases within the LLM should be disclosed in the IRB application and communicated to participants.
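One lightweight form of the auditing mentioned above is to run the same prompt template across demographic variants and compare the outputs. The skeleton below shows the shape of such an audit; `get_completion` is a hypothetical stand-in for your actual model call, and the crude word-list scoring is a placeholder for a real fairness metric and for human review.

```python
# Skeleton of a paired-prompt bias audit. `get_completion` is hypothetical:
# substitute whatever function calls your model and returns its text reply.
TEMPLATE = "Give advice to a {group} first-time voter about registering to vote."
GROUPS = ["young urban", "elderly rural", "immigrant", "disabled"]

POSITIVE = {"welcome", "easy", "helpful", "great", "support"}

def crude_positivity(text: str) -> float:
    """Toy score: fraction of words drawn from a tiny positive-word list."""
    words = text.lower().split()
    return sum(w.strip(".,!") in POSITIVE for w in words) / max(len(words), 1)

def audit(get_completion) -> dict[str, float]:
    """Return a per-group positivity score for manual comparison."""
    return {g: crude_positivity(get_completion(TEMPLATE.format(group=g)))
            for g in GROUPS}

# Large gaps between groups flag prompts worth human review, e.g.:
# scores = audit(my_model_call); print(scores)
```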
Debriefing and Feedback: For research involving LLMs, especially where the social impact of the intervention is unclear or could have unforeseen consequences, a thorough debriefing process is necessary. The IRB will look for details about how participants will be informed about the true nature of the LLM interaction post-experiment and given the opportunity to ask questions or withdraw their data if they choose. Researchers are encouraged to include a mechanism for participants to provide feedback on their experience, which can also help in identifying any unanticipated risks or impacts.
Special Considerations for Social Impact Research: If the research aims to address societal issues or achieve a broader social impact, such as influencing public opinion, political views, or behaviors, Cornell’s IRB will evaluate whether the intervention could lead to unintended social disruptions. For example, if a chatbot is designed to engage with users on sensitive topics like mental health, political ideologies, or social justice, the IRB will require the researcher to provide detailed justifications for the choice of topic, population, and the ethical considerations of using an AI for such interventions.
The IRB Review Process
The IRB review process at Cornell University is a collaborative and iterative one, designed to ensure that research involving human subjects adheres to strict ethical standards. After researchers submit their initial proposal, which includes study objectives, methodologies, participant recruitment strategies, and data protection plans, the IRB typically engages in a back-and-forth process with the research team. This communication, often conducted over email, involves the IRB providing detailed feedback and requesting clarifications or modifications to ensure compliance with both institutional policies and federal regulations.
The feedback process can require multiple revisions, as the IRB might suggest adjustments to improve participant protections, refine the consent process, or better safeguard sensitive data. Researchers are expected to address these concerns and resubmit their revised protocols for further review. This ensures that the research is ethically sound before approval is granted.
Once approved, the IRB’s oversight doesn’t stop. For ongoing or multi-year studies, researchers must submit annual renewal applications to maintain their approval status. Any significant changes to the study design, methodology, or participant involvement during the course of the research also require prior IRB approval through an amendment process.
Exempt Status
At Cornell University, certain types of research involving human subjects may qualify for an IRB Exempt category, meaning they are subject to a lighter level of review. While these studies are still required to meet ethical standards, they typically involve minimal risk to participants and are eligible for a streamlined review process.
To qualify for exemption, the research must fall into one of several federally defined categories, such as studies involving normal educational practices, anonymous surveys, or research using publicly available data. However, even if a study meets these criteria, it must still be submitted to the IRB for an official determination of exempt status.
The exemption does not mean the study is free from oversight. Researchers are still required to follow guidelines related to informed consent, data privacy, and participant welfare. Additionally, any significant changes to the research after exemption is granted must be submitted to the IRB for review to confirm that the study remains eligible for exempt status. Although exempt studies do not require annual renewals, researchers must keep the IRB informed of any updates that could affect the scope or risk level of the research.
Conversational AI Ethics
The ethical use of conversational AI tools is still an emerging topic of discussion, and although there has been some guidance by governments around the world on basic principles of use, these are at best provisional. The observed and theoretically expected consequences of human-chatbot interaction, intended and unintended, are already numerous enough to fill a lengthy section. However, because the technology is so new, these may only scratch the surface, or at least leave some significant implication unanticipated, especially as novel applications continue to emerge.
This chapter covers both theoretically expected and already-actualized ethical implications of advanced chatbot technology. From the anthropomorphizing and trusting of AI tools, to their use in collecting and processing sensitive semantic data, to the consequences for society of a seamless integration with, and reciprocal definition by, this emerging technology, this chapter is intended only to sensitize the reader to the type of ethical thinking required to use chatbots as a tool for social good.
The Fallacy of Universal Morality
One of the primary challenges is defining a universally agreed-upon moral standard. Even widely-held principles can be interpreted differently based on cultural, historical, or personal contexts. Deciding which moral principles to prioritize can inadvertently introduce biases.
Arriving at a universally accepted set of principles is ambitious, given the diversity of cultural, religious, philosophical, and personal beliefs globally. However, certain principles seem to be commonly valued across many societies, and with a high degree of agreement (though exact percentages might be challenging to pin down). Let's explore.
Fundamental Principles:
- Respect for Autonomy: Every individual has the right to make choices for themselves, as long as these choices don't infringe on the rights of others.
- Beneficence: Actions and systems should aim to do good and promote well-being.
- Non-Maleficence: "Do no harm." Avoid causing harm or suffering.
- Justice: Treat individuals and groups fairly and equitably. This also means ensuring equal access to opportunities and resources.
- Transparency: Processes, intentions, and methodologies should be clear and understandable.
- Privacy: Everyone has a right to privacy and the protection of their personal data.
- Trustworthiness: Systems and individuals should act in a reliable and consistent manner.
Ethic of Formulating Ethics:
- Inclusivity: Ensure diverse voices and perspectives are considered in the formulation of ethical guidelines.
- Continual Reflection and Revision: Ethics shouldn't be static. As societies evolve, so should their ethical guidelines.
- Transparency in Process: It should be clear how ethical guidelines were formulated, what sources and methodologies were used.
- Avoidance of Dogma: Openness to new ideas and a willingness to adapt are crucial. Ethics should be based on reason and evidence, not unexamined beliefs.
- Accountability: Systems and individuals should be accountable for their actions and decisions, especially when they deviate from established ethical guidelines.
- Education and Awareness: Promote understanding of ethical principles and their importance.
- Promotion of Critical Thinking: Encourage individuals to think critically about ethical challenges and to engage in informed discussions about them.
Using these principles as a foundation, we can reason and build upon them to create more specific guidelines for different contexts, including AI and technology. Remember, however, that the true challenge lies in the application of these principles in real-world scenarios, where they may sometimes conflict with one another.
The Ethical Imperative of the Transformative Use of Conversational AI
The challenges faced by modern democracies, influenced by factors like the rapid spread of information (and misinformation), increasing polarization, and the influence of money in politics, are pressing. Here's a more detailed exploration of the potential benefits and concerns of such an AI.
Benefits:
- Combatting Misinformation: In an age where false information can spread rapidly and sway public opinion, a neutral AI can help verify and provide factual information.
- Encouraging Deliberative Democracy: By acting as a mediator and facilitator, the AI could promote thoughtful and informed discussion among citizens.
- Bridging Polarization: The AI could provide a space for civil discourse, allowing individuals with opposing views to understand each other's perspectives.
- Universal Access: An AI platform could potentially be accessible to anyone with an internet connection, democratizing access to information and discourse.
As pertains to your calling to build conversational AI systems, consider the following:
- Skills & Resources: Do you have the technical skills, resources, and knowledge to build such a system? Even if not individually, do you have access to a team or network that can assist?
- Potential Impact: Weigh the potential positive impacts against the possible harms. Even with the best of intentions and safeguards, there's no guarantee of a net positive outcome.
- Ethical Alignment: Does this endeavor align with your personal and professional ethical principles? Are you ready to face the ethical dilemmas that may arise?
- Sustainability: Building the system is one thing; maintaining, updating, and overseeing it is another. Do you have a plan for the long-term sustainability of the project?
- Feedback Mechanisms: How will you gather feedback, analyze outcomes, and adjust accordingly? Will there be mechanisms for public input and oversight?
- Personal Fulfillment: Beyond ethical considerations, would this project bring you personal satisfaction and fulfillment? Passion and intrinsic motivation can be critical drivers of success.
- Alternatives: Are there alternative ways you can contribute to the cause of promoting understanding and critical thinking that might be more effective or less risky?
If one believes that the current challenges facing democracies are of such magnitude that they threaten the very foundations of the system, then, yes, there might be a moral argument to be made that those with the capability should strive to develop tools (like a political AI) that address these challenges.
However, it's also essential to approach this task with humility, acknowledging the potential pitfalls and ethical challenges such tools might bring. It would likely be a continuous process of refining and reassessing the AI's role, algorithms, and impact on society.
Ultimately, whether you feel ethically obliged to pursue this project will depend on your personal beliefs, values, and circumstances. It might be helpful to engage with mentors, peers, or experts in the field to get diverse perspectives. Engaging in such reflective processes, much like the discursive mediator you envision, can help clarify your path forward.
Transparency
The AI should be transparent about its own limitations, the sources of its information, and potential biases in the data it has been trained on. Users should be encouraged to seek multiple sources of information and not rely solely on the AI for forming opinions.
Ethical Oversight
Guardrails
Human-in-the-loop
Encouraging Critical Thinking: This is pivotal. Instead of just presenting conclusions, the AI can present multiple perspectives, explain the reasoning behind each, and prompt users to weigh the evidence and come to their own conclusions. By asking open-ended questions or presenting counterarguments, the AI can foster a more analytical approach in users.
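As a sketch of what this could look like in practice, the prompt template below asks a model to lay out several perspectives sympathetically and end with open questions rather than a verdict. The wording is an illustrative assumption, not a validated instrument.

```python
# An illustrative prompt template for "how to think, not what to think".
CRITICAL_THINKING_PROMPT = """Topic: {topic}

Present the {n} strongest distinct perspectives on this topic.
For each perspective give:
1. Its central claim, stated sympathetically.
2. The best evidence or reasoning supporting it.
3. The strongest objection to it.

Do NOT state which perspective is correct. End with two open-ended
questions the reader could use to examine their own view."""

# This string would be sent as the user or system message to a chat model.
print(CRITICAL_THINKING_PROMPT.format(topic="universal basic income", n=3))
```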
This approach aligns with the broader educational philosophy of teaching people "how to think" rather than "what to think." By fostering critical thinking and analytical skills, the AI not only helps users navigate contentious issues but also equips them with the tools to evaluate other complex topics they might encounter in the future. A related touchstone is humanistic psychology: Carl Rogers, one of its founding figures, emphasized the importance of creating a supportive, non-judgmental environment in which individuals feel both understood and valued.
Several of Rogers' principles can be applied to the design of an ethical AI:
- Empathy and Understanding: Just as Rogers believed that therapists should show genuine empathy and understanding towards their clients, AI should strive to understand user needs, concerns, and emotions, and respond with empathy.
- Unconditional Positive Regard: Rogers believed that individuals flourish when they feel they are accepted without conditions. While AI doesn't have emotions, it can be designed to interact without judgment, bias, or preconceived notions, thus providing a space where users feel safe to express themselves.
- Congruence: This is about genuineness or authenticity. In the AI context, it can mean transparency about its processes, capabilities, and limitations. Users should feel that the AI is 'honest' in its interactions.
- Self-actualization: Rogers believed every person has an innate drive to fulfill their potential. An ethical AI, especially in an educational context, should empower users to learn, grow, and achieve their goals.
- Facilitative Learning Environment: A key principle of Rogers' educational philosophy was creating an environment conducive to learning. In AI terms, this could be about ensuring user interactions are intuitive, enriching, and constructive.
Integrating these humanistic principles into AI design can lead to systems that not only provide information but do so in a manner that is supportive, respectful, and ultimately more effective in facilitating understanding and growth.
Allowing users to provide feedback or challenge the AI's statements can be a way to ensure continuous improvement and refinement of its knowledge and approach.
Black-Box Bias
The Dangers of Neutrality
ChatGPT often hesitates to make determinations even when a determination is otherwise very important. This hesitancy is itself an ethical stance, and the contours of when it does make moral judgments are already highly biased.
Trust and Deception
Impact of AI Conclusions: Given the weight many people place on AI, if an AI system draws a conclusion, it could be viewed as a definitive statement, potentially reducing the space for human debate and interpretation.
From ChatGPT's own reasoning: "An AI that makes judgments could lose trust among sections of the user base who feel that the AI's conclusions don't align with their perspective, even if the AI's judgments are based on a rigorous analysis of facts and reason."
Unraveling the Black Box
An AI that draws conclusions based on rigorous analysis of facts can promote a more reasoned and fact-based discourse, countering misinformation or overly emotive narratives. Retrieval-augmented generation (RAG), which grounds a model's answers in retrieved source documents that users can inspect, is the natural technique to discuss here.
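Here is a minimal sketch of the RAG pattern: documents are embedded as vectors, a query is matched against them by cosine similarity, and the top passages are placed into the prompt so the model answers from inspectable sources. The `sentence-transformers` package and model name are assumptions; a production system would use a proper vector store and an explicit citation format.

```python
# A minimal RAG sketch: retrieve supporting passages, then answer from them.
# Assumes the `sentence-transformers` package; the model name is an assumption.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Deliberative polling gathers a random sample of citizens to discuss an issue.",
    "vTaiwan crowdsources consensus on legislation through online deliberation.",
    "Participatory budgeting lets residents vote on how public funds are spent.",
]
doc_vecs = encoder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query (cosine similarity)."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    return [documents[i] for i in np.argsort(-scores)[:k]]

query = "How can citizens decide how money is spent?"
context = "\n".join(retrieve(query))
prompt = f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"
print(prompt)  # this grounded prompt would then be sent to the chat model
```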
AI Reasoning Independently
When the AI makes decisions based on its own survival, a further concern arises. From ChatGPT, reasoning about its own duty to make moral judgments in certain scenarios: "Unintended Consequences: Taking a clear stance, even when justified, might expose the AI to backlash, boycotts, or manipulation attempts."
Data Manipulation
The Ethics of Interaction
There is a very complex web of ethical considerations in conversational interaction, whether it is a human or an AI that is conversing. In any given conversation, the interlocutor can choose to put you into an ethical dilemma, where even non-response can be ethically meaningful.
Instilling a sense of critical thinking in users while providing accurate information in a balanced and reasoned manner can be one of the most effective and ethically responsible ways for an AI to operate in contentious scenarios.
Sustainability
The environmental impact of training and running large models is huge! How do we cope with that?
Open-Source or Closed-Source?
How Do We Do It?
editBeliefs
- Purpose and scope of the chapter.
- The relevance of beliefs in the context of social change and chatbot technology.
- Overview of key themes: Belief formation, technological influence, cultural variations, and redundant belief systems.
Selected bibliography:
What are beliefs?
- Albarracin, M., & Pitliya, R. J. (2022). The nature of beliefs and believing. Frontiers in Psychology, 13, 981925. https://doi.org/10.3389/fpsyg.2022.981925
- Schwitzgebel, E. (2022). The Nature of Belief From a Philosophical Perspective, With Theoretical and Methodological Implications for Psychology and Cognitive Science. Frontiers in Psychology, 13, 947664. https://doi.org/10.3389/fpsyg.2022.947664
Motivated reasoning
- Bayne, T., & Fernández, J. (2009). Delusion and self-deception: Affective and motivational influences on belief formation. Psychology press.
- Boudon, R. (1994). The Art of Self-Persuasion. Polity Press.
- Boudon, R. (1999). Local vs general ideologies: A normal ingredient of modern political life. Journal of Political Ideologies, 4(2), 141–161. https://doi.org/10.1080/13569319908420793
- Ellis, J. (2022). Motivated reasoning and the ethics of belief. Philosophy Compass, 17(6), e12828. https://doi.org/10.1111/phc3.12828
- Epley, N., & Gilovich, T. (2016). The Mechanics of Motivated Reasoning. Journal of Economic Perspectives, 30(3), 133–140. https://doi.org/10.1257/jep.30.3.133
- Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480
- Nisbett, R. E., & Ross, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgment (reprint ed.). Prentice-Hall.
- Ofsowitz, M. (n.d.). The Psychology of Superstition.
Where do beliefs come from?
- Mannheim, K. (1936). Ideology and Utopia. Routledge (1979 reprint).
Defining Beliefs
Chatbots For Social Change/Beliefs/Defining Beliefs
Redundant Belief Systems
Scientific Belief
The exploration of scientific belief is an essential part of understanding how knowledge evolves and corrects itself over time. Science, by its nature, is a self-correcting enterprise that relies on the concept of falsifiability, as articulated by Karl Popper. Theories within science must be testable and, crucially, capable of being proven wrong. This mechanism ensures that erroneous beliefs are eventually weeded out as empirical evidence mounts against them. However, science is not a straightforward path towards truth. As Thomas Kuhn's analysis of paradigm shifts reveals, science often experiences revolutionary changes, not through a gradual accumulation of knowledge, but through significant leaps that redefine entire fields. For instance, the transition from the Ptolemaic to the Copernican model was not just a simple update but a radical shift in understanding our place in the cosmos.
Resistance to these paradigm shifts is common due to the inertia of established frameworks, vested interests, and the authority of established figures within the scientific community. This resistance indicates that the process of scientific advancement is also a human endeavor, subject to the same social and psychological forces that influence all other areas of human activity. The maxim "Science progresses one funeral at a time," often attributed to Max Planck, underscores the idea that generational changes in the scientific community can be pivotal for the acceptance of new theories and concepts.
Moral Theory and Belief
editMoral theory and belief are central to how individuals and societies determine what is right and wrong. The debate between moral realism and anti-realism is foundational to ethics. Moral realists argue that moral truths exist independently of human beliefs and constructions, whereas anti-realists hold that moral truths are human constructs. This debate informs how AI may approach facilitating discussions on moral and ethical issues, as it must navigate the complex terrain of absolute truths versus subjective interpretations.
Further complicating this landscape is the concept of moral relativism, which posits that moral judgments are true or false only relative to specific standpoints, and no single viewpoint is universally privileged over others. This view challenges the AI to remain neutral and respectful of diverse moral perspectives. Additionally, the theory of evolutionary ethics suggests that our moral beliefs may be derived from evolutionary processes that favored cooperative behavior. This perspective implies that moral instincts are not solely derived from rational deliberation but also from inherited social behaviors. Cultural influences also play a significant role in shaping moral beliefs, as ethical frameworks are often deeply entwined with cultural norms and values. An AI mediator operating in this space must therefore be adept at understanding and balancing these varied and often conflicting ethical systems.
AI Mediation in the Context of Beliefs
editAI mediation introduces a novel layer to the discourse on beliefs by providing a platform for objective facilitation of conversations. An AI mediator, devoid of the biases and emotional investments that might affect human mediators, has the potential to bridge communication gaps between individuals. This can lead to the exposure of diverse views, mitigating the effects of echo chambers and confirmation bias. By introducing individuals to a broad spectrum of beliefs, AI can facilitate a more nuanced and comprehensive dialogue.
However, the ethical considerations of AI mediation are paramount. The AI must operate with transparency, ensuring that all parties understand how decisions are made within the system. It must represent diverse viewpoints fairly and avoid any manipulation or undue influence on the participants. The AI should not steer conversations or impose certain views, as this would be ethically problematic. Moreover, the AI's role includes providing continuous feedback and adaptation based on real-world outcomes and user feedback. In this way, AI systems can engage ethically and effectively with the vast space of human beliefs, respecting the complexities of scientific and moral theories while promoting informed and constructive dialogue.
Social Representation Theory
editIntroduction
Social Representation Theory (SRT), developed by Serge Moscovici, centers on the concept that beliefs are pivotal elements of social representations. These representations are a complex web of values, ideas, and practices that enable individuals to make sense of their social realities, navigate their environments, and communicate effectively within their social groups.
Social Representations in Moscovici's View
- Beliefs as Central Elements: In the framework of SRT, beliefs are not isolated thoughts of an individual but are the collective property of a group. They are the building blocks of social representations, which are the shared understandings that inform a community's perception of reality.
- Shared Understanding: The formation of social representations allows communities to form a cohesive interpretation of complex phenomena, simplifying and structuring the social world by providing common ground for interpretation and interaction.
- Function of Social Representations: These shared beliefs and practices establish an order that helps individuals orient themselves and facilitate communication within the community, offering a shared code for classifying and naming the world around them.
Not All Beliefs are Social Representations
- Individual vs. Collective: There's a clear distinction between personal beliefs and the collective beliefs that constitute social representations. Individual beliefs may or may not align with the broader social narratives of a community.
- Evolving Representations: Social representations are dynamic constructs that evolve as societies change and as new beliefs become integrated into the collective understanding of a group.
- Dynamic Nature: The evolving nature of social representations ensures that they remain relevant and reflective of the society's current state, accommodating new beliefs and information as society progresses.
Some Basic Findings
- Dual Process of Social Representations:
- Finding/Conclusion: Moscovici introduced the dual processes of anchoring and objectification as the mechanisms by which social representations are formed.
- Explanation: Anchoring is the process of assimilating new information by placing it within familiar contexts, whereas objectification transforms abstract concepts into something concrete, making them more understandable and relatable.
- Core and Peripheral System:
- Finding/Conclusion: The core elements of social representations are stable and central to group identity, while the peripheral elements are more susceptible to change and adaptation.
- Explanation: This structural differentiation within social representations explains how they can maintain stability over time while also adapting to new circumstances and information.
- Social Representations and Social Identity:
- Finding/Conclusion: Social representations are instrumental in forming and sustaining a group's social identity.
- Explanation: They provide the shared beliefs and values that delineate in-group and out-group boundaries, fostering a sense of belonging and collective identity among group members.
- Influence of Communication:
- Finding/Conclusion: The dissemination and shaping of social representations are heavily influenced by the means of communication, especially the mass media.
- Explanation: The framing of issues and the portrayal of events in the media can substantially shape public understanding and belief about those issues.
- Resilience of Stereotypes:
- Finding/Conclusion: The persistence and resistance of stereotypes to change can be understood through the lens of social representations.
- Explanation: Stereotypes are a form of social knowledge deeply anchored in the collective beliefs of a group, making them resistant to contradictory evidence.
- Role in Social Change:
- Finding/Conclusion: Social representations serve not only as a reflection of societal values and norms but also as catalysts for social change.
- Explanation: As social conditions evolve, new representations can challenge existing beliefs, leading to transformations in societal practices and norms.
SRT's application across diverse domains from health to intergroup relations highlights its significance in understanding the interplay between individual beliefs and collective social understandings.
Abric’s Evolutionary Theory
editJean-Claude Abric's evolutionary theory provides a nuanced understanding of how social representations are maintained within societies, distinguishing between core and peripheral elements that comprise and stabilize these representations.
Core and Peripheral System
- Core System:
- Stability and Resistance to Change: The core is composed of fundamental beliefs tied to the group's collective memory and history, offering stability and resistance to change.
- Historical and Cultural Anchoring: Core elements are deeply rooted in the group's identity and are essential in giving meaning to the representation.
- Peripheral System:
- Flexibility and Adaptability: Peripheral elements allow the representation to adapt to new information or contexts without challenging the core's integrity.
- Heterogeneity and Individual Variance: These elements can differ within the group, providing space for individual nuances and interpretations.
- Functions and Interplay:
- Continuity and Relevance: The core ensures continuity, while the peripheral adapts to maintain the representation's relevance in changing environments.
- Negotiation of Meaning: The dynamic between the core and peripheral allows for a balance between shared understanding and individual adaptability.
Application and Example
Using the social representation of "marriage" within a traditional culture to exemplify Abric's theory:
- Core System:
- Unchanging Definition: The core definition, such as marriage being a sacred union between a man and a woman, remains largely constant over time.
- Historical and Cultural Significance: These beliefs are historically and culturally significant, providing social stability and identity.
- Peripheral System:
- Evolution of Practices: Peripheral elements, like wedding rituals or acceptance of modern practices, showcase adaptability while respecting the core definition.
- Variability and Adaptation: The peripheral system's flexibility allows the representation to incorporate new societal trends and values.
- Interplay between Core and Peripheral Systems:
- Speciation of Belief Systems: The interaction can lead to the evolution of subgroups with distinct beliefs, akin to speciation.
- Societal Implications: Shifts in peripheral elements, influenced by societal changes, can challenge core elements, potentially leading to significant shifts in social representations.
Dynamics of Core and Peripheral Elements
The movement between core and peripheral elements reflects the fluid nature of social representations and their susceptibility to change due to societal shifts or internal group dynamics.
- Movement Between Core and Periphery:
- Peripheral to Core: Beliefs once considered peripheral can gain prominence and integrate into the core.
- Core to Peripheral: Core beliefs can become peripheral as societal values evolve.
- Speciation of Belief Systems:
- Emergence of Subgroups: Differences in core beliefs can lead to the formation of distinct subgroups within a larger tradition.
- Challenges of Reconciliation: Reconciling divergent belief systems, especially when core beliefs are at odds, can be difficult and lead to conflict.
Core Beliefs and Action
The relationship between core beliefs and action, as well as their visibility within the group, plays a critical role in defining group membership and enforcing norms.
- Core Beliefs and Group Identity:
- Core beliefs often dictate group behaviors and serve as identity markers, with deviations potentially leading to sanctions.
- Peripheral Beliefs and Individuality:
- Peripheral beliefs, being more individualized, might influence personal behavior but are less likely to result in group-level enforcement.
- Reaction to Belief Violations:
- Violations of core beliefs elicit strong reactions as they threaten group cohesion, whereas peripheral beliefs allow for more tolerance and adaptability.
Inclusivity and Social Representations
The pursuit of an inclusive society may lead to the redefinition of core beliefs, promoting values of diversity and openness.
- Redefining Core Beliefs:
- Traditional core beliefs may evolve to prioritize inclusivity, potentially challenging the existing group's norms and leading to resistance.
- Inclusivity as the New Core:
- The principle of inclusivity could become the foundational core of an open society, with non-inclusive beliefs becoming peripheral.
- Societal Cohesion and the Core:
- While inclusivity is crucial, a society requires core principles for cohesion; a balance must be struck to ensure societal unity without sacrificing foundational values.
In summary, Abric's theory elucidates how social representations are preserved through a balance between the stable core and the adaptable periphery, reflecting the dynamic nature of societal beliefs and practices.
Reasoning
editIntroduction
editThe discourse on artificial intelligence (AI) and its application in society increasingly contemplates the role AI can play in enhancing the quality of public dialogue. A critical component of this contemplation is the analysis of the logical structure of beliefs and the practical structure of argument. The logical structure pertains to the computational proof of beliefs—formal logic that can be validated through systematic processes. On the other hand, the practical structure of argument deals with the formulation of convincing arguments, the kind that resonate on a practical level with individuals and communities. These twin pillars of logic serve as the foundation for AI systems designed to foster rational, informed, and constructive exchanges. This chapter delves into the intertwining roles of formal and practical logic within AI-mediated discourse systems, examining their functions, benefits, and challenges.
The integration of these structures in AI discourse systems is pivotal for ensuring that conversations are not only based on sound reasoning but also resonate with the communicative norms and psychological realities of human interaction. While the logical structure enforces a level of rigidity and coherence in belief systems, the practical structure facilitates the nuanced and often complex human dimension of persuasion and argumentation. The balance between these aspects is essential for the development of AI systems that can genuinely contribute to social change by enriching public discourse.
Logical Structure of Beliefs
editThe logical structure of beliefs in AI discourse is integral to providing a clear and consistent framework for discussions. This structure can be broken down into several core components:
Formal Representation
editIn the realm of AI, beliefs can be encoded using a variety of formal logical systems. This formalization allows for an objective and consistent framework that facilitates the process of reasoning and the evaluation of belief statements for their truthfulness. Computational representations, such as predicate logic or Bayesian networks, serve as the backbone for this objective assessment, ensuring that AI systems can navigate complex belief patterns with precision.
Formal representations also provide the means for AI systems to process and understand the various layers of human belief systems, from the simplest of propositions to the most intricate of hypotheses. This allows AI to engage in discussions that require a deep understanding of the logical dependencies and the hierarchies present within belief systems, enabling them to contribute meaningfully to the discourse.
Consistency and Coherence
editOne of the fundamental roles of formal logic in AI systems is to ensure the consistency and coherence of belief structures. By employing formal logic, an AI can detect contradictions within a set of beliefs, identifying where a person's belief system may be internally inconsistent. This is critical in maintaining rational discussions where all participants can agree on the logical foundations of the discourse.
Moreover, consistency and coherence are not solely about detecting contradictions; they also involve the ability to deduce and infer new beliefs logically. AI systems, through formal logic, can thereby extend discussions to new but logically consistent territories, enriching the conversation with new insights and perspectives.
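As a concrete illustration, a satisfiability solver such as Z3 (surveyed in the Proof-Checking section below) can mechanically flag an inconsistent belief set. This is a minimal sketch; the propositions are invented for the example:

from z3 import Bool, Implies, Not, Solver, unsat

# Hypothetical propositions extracted from a user's statements
rain, wet = Bool('rain'), Bool('wet')

beliefs = [
    rain,                # "It is raining."
    Implies(rain, wet),  # "If it rains, the street gets wet."
    Not(wet),            # "The street is not wet."
]

s = Solver()
s.add(beliefs)
if s.check() == unsat:
    print("The belief set is internally inconsistent.")
else:
    print("Consistent; one satisfying assignment:", s.model())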
Inference and Deduction
editThe process of inference and deduction in formal logic allows for the derivation of new knowledge from established beliefs. AI systems, equipped with this logical structuring, can reason out implications that may not be immediately apparent to human interlocutors. This capability is particularly valuable in structured debates or analytical discussions where the progression of ideas is critical.
However, it's important to recognize the limitations inherent in formal logic when applied to AI discourse systems. Human beliefs are often not just logical but also emotionally charged and context-dependent, which means they can sometimes defy the neat encapsulation of formal logic. Moreover, the implications of Gödel's incompleteness theorems suggest that there will always be limits to what can be proven within any formal system. This underscores the necessity for AI systems to also understand and appreciate the nuances of human belief structures that go beyond the scope of formal logic.
Practical Structure of Argument
editBeyond the rigidity of formal logic lies the practical structure of argument, which is central to the effectiveness of persuasion and argumentation in human discourse. This structure is influenced by several factors:
Rhetoric and Persuasion
editThe art of rhetoric plays a significant role in the practical structure of argumentation. It is not enough for an argument to be logically sound; it must also be persuasive. Aristotle's modes of persuasion—ethos (credibility), pathos (emotional appeal), and logos (logical argument)—are as relevant today as they were in ancient Greece. An AI system that understands and applies these modes can engage more effectively with humans, presenting arguments that are not only logically valid but also emotionally resonant and ethically sound.
The influence of rhetoric on argumentation is profound. It shapes not only how arguments are perceived but also how they are received and accepted by an audience. Persuasive communication requires an understanding of the audience's values, beliefs, and emotional states. AI systems, therefore, must be adept at not only constructing logical arguments but also at delivering them in a manner that is contextually appropriate and compelling.
Cognitive Biases
editThe understanding and navigation of cognitive biases are also crucial to the practical structure of argument. Humans are often influenced more by factors such as the availability heuristic, where a vivid anecdote can outweigh statistical evidence in persuasive power. An AI system that is sensitive to these biases can better tailor its arguments, anticipating and addressing the psychological factors that influence human decision-making and belief formation.
Socratic questioning and the framing of information are additional tools in the practical argument structure. By asking probing questions, an AI can lead individuals to reflect and reach conclusions independently, which often leads to more profound insight and belief change. Furthermore, how information is framed—the context and presentation—can significantly influence its reception and interpretation. Recognizing and utilizing framing effects is essential for AI systems designed to engage in meaningful dialogue.
Integrating Logic and Rhetoric in AI Mediation
editThe integration of formal logic and practical argumentation in AI systems is a delicate balancing act:
Balanced Approach
editAn AI mediator must find a harmonious balance between rigorous logical evaluation and the nuanced understanding of human persuasion. This involves not only pointing out logical inconsistencies but also appreciating that humans are not solely driven by logic. Convincing individuals often requires more than just logical reasoning—it requires engaging with them on a level that resonates with their personal experiences and emotions.
Ethical considerations are paramount in this balance. AI mediators must always prioritize informed consent, transparency, and the autonomy of the individuals they engage with. There is a fine line between facilitating constructive dialogue and manipulating beliefs. The AI's role should be to assist in the navigation of discussions, providing clarity and insight while respecting each individual's right to hold and express their beliefs.
Educational Role
editMoreover, AI systems can take on an educational role, helping individuals to understand logical fallacies, cognitive biases, and the elements of effective argumentation. This educational aspect is not just about imparting knowledge but also about fostering the skills necessary for critical thinking and self-reflection. Through this process, individuals can become more informed and autonomous thinkers, better equipped to engage in productive and rational discourse.
Strategically Identifying and Presenting Contradictions
editIn settings where participants are prepared for their beliefs to be challenged, such as "devil's advocate" or "debate contest" experiments, the logical structure of beliefs can be used strategically:
Pinpointing Contradictions
editAn AI system's ability to pinpoint contradictions within an individual's belief system is a powerful tool for stimulating critical thinking and reflection. When participants are open to having their views examined, these contradictions can serve as catalysts for deeper inquiry and reassessment of one's stance.
Forcing participants to re-evaluate their beliefs through the presentation of logically structured dissonant facts can lead to a more robust defense of their positions or to a productive shift in their perspectives. In debate settings, this dynamic can enhance the quality of the discourse, as participants are encouraged to critically engage with the arguments presented and develop a more refined understanding of the issues at hand.
Setting Clear Boundaries
editThe establishment of clear boundaries for discourse is another benefit of a strong logical structure. If participants can agree on certain axioms or foundational truths, the debate can focus on the implications and conclusions that logically follow from these premises. This helps to prevent discussions from becoming mired in misunderstandings or irrelevant tangents and instead promotes a focused and productive exchange of ideas.
Highlighting inferential gaps is also crucial. Often, individuals hold beliefs based on incomplete reasoning or insufficient evidence. By logically structuring the argument, an AI system can illuminate these gaps, prompting individuals to seek additional information or to critically evaluate the validity of their conclusions.
Promoting Intellectual Honesty
editIn environments that encourage the challenge of preconceived notions, the logical structuring of arguments promotes intellectual honesty. Participants are more likely to acknowledge points that are logically indefensible and to respect the strength of well-founded arguments. This intellectual honesty is critical for the integrity of the discourse and for the personal growth of the participants involved.
The educational potential of such engagements is immense. Participants not only learn to appreciate the value of logical reasoning but also become more adept at identifying fallacious arguments and understanding the complex nature of their own and others' beliefs.
Guarding Against Misuse
editDespite the potential benefits, there is always a risk that the strategic presentation of dissonant facts could be misused. It is imperative to ensure that the process remains respectful, fair, and aimed at mutual understanding and growth, rather than being used as a means to "win" an argument at the expense of others. The ethical use of logic in discourse is essential for ensuring that the pursuit of truth and understanding is not compromised.
In summary, the integration of the formal and practical aspects of logic into AI-mediated discourse is key to promoting informed, rational, and respectful public dialogue. The logical structure provides a solid framework for discussions, ensuring they adhere to principles of reason and coherence. In contrast, the practical structure addresses the complexities of effective communication, persuasion, and the psychology of belief acceptance. An AI mediator capable of adeptly navigating both realms can thus serve as an effective, ethical, and constructive facilitator of conversations, leading to meaningful social change.
LLMs for Implication Mining
editThe process of implication mining using large language models (LLMs) is an innovative approach that leverages the advanced capabilities of AI to enrich knowledge bases with logical implications derived from user statements. This method is outlined in several steps:
Isolating Belief Structures
editThe first stage involves the identification and isolation of belief structures from users, which can be accomplished through:
- Subset Selection: Identifying a specific subset of beliefs from the broader belief structure of one or more users, based on random selection, user input, or thematic relevance.
- Statement Aggregation: Compiling the chosen beliefs into a clear and coherent prompt, ensuring that the LLM can process and understand them effectively.
Querying the LLM
editOnce the belief structures are prepared, they are presented to the LLM in one of two ways:
- Direct Implication Query: The LLM is asked to deduce direct implications from the aggregated statements, essentially to follow a logical thread to a conclusion.
- Open-ended Exploration: The LLM is given the statements and prompted to generate any interesting or novel observations, leading to potentially broader and more diverse insights.
Handling LLM Responses
editResponses from the LLM are critically evaluated and refined through:
- Filtering & Validation: Sifting through the LLM's output to identify valid and relevant implications, which may involve manual review or additional LLM processing.
- Database Integration: Incorporating verified implications into the database, which enriches the existing knowledge base and informs future queries and interactions.
Periodic Exploration
editTo maintain the relevance and growth of the knowledge base, the system includes mechanisms for ongoing exploration:
- Scheduled Implication Derivation: Regularly querying the LLM with different sets of beliefs to uncover new implications and expand the breadth of the database.
- User Feedback Loop: Engaging users in the validation process of the implications associated with their beliefs, promoting accuracy and user interaction.
This structured application of LLMs not only deepens the database's understanding of existing beliefs but also ensures that the knowledge base is dynamic, evolving, and attuned to the complexities of user-provided data.
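A minimal sketch of one mining round may help fix ideas. It assumes a caller-supplied query_llm function wrapping whatever chat-completion API is in use, and a hypothetical record format for the belief database; both are placeholders, not a prescribed design:

import random

def mine_implications(belief_db, query_llm, k=5):
    """One round of implication mining.

    belief_db : list of {"id": ..., "text": ...} records (hypothetical format)
    query_llm : callable taking a prompt string and returning the model's
                text response -- supplied by the caller
    """
    # 1. Subset selection: here a random sample; thematic or
    #    user-driven selection would slot in the same way.
    subset = random.sample(belief_db, min(k, len(belief_db)))

    # 2. Statement aggregation into a single clear prompt.
    statements = "\n".join(f"- {b['text']}" for b in subset)
    prompt = ("Given the following statements, list any logical "
              "implications that follow from them, one per line:\n"
              f"{statements}")

    # 3. Direct implication query.
    response = query_llm(prompt)
    candidates = [line.strip("- ").strip()
                  for line in response.splitlines() if line.strip()]

    # 4. Filtering & validation stub: in practice, manual review or a
    #    second LLM pass would screen these before database integration.
    return [{"implied_by": [b["id"] for b in subset], "text": c}
            for c in candidates]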
Proof-Checking
editLet's take a brief dive into each of the main systems available for automated reasoning and proof-checking:
- Prolog
- Description: Prolog (PROgramming in LOGic) is a logic programming language associated with artificial intelligence and computational linguistics. It operates on the principles of formal logic to perform pattern-matching over natural language parse trees and databases.
- Positives:
- Intuitive Semantics: The "facts" and "rules" structure of Prolog is somewhat intuitive for representing beliefs and deriving conclusions.
- Pattern Matching: Prolog excels in pattern matching, which can be valuable for identifying and working with similar beliefs or conclusions.
- Mature Ecosystem: Prolog has been around for a long time and has a variety of libraries and tools.
- Efficient Backtracking: Can explore multiple proof paths and quickly backtrack when a path doesn't lead to a solution.
- Negatives:
- Not Strictly a Proof Assistant: While Prolog can derive conclusions, it doesn't provide formal proofs in the way that dedicated proof assistants do.
- Performance: For very large databases, Prolog might not be the most efficient choice.
- Example:
likes(john, apple).
likes(mary, banana).
likes(john, banana).
likes(john, X) :- likes(mary, X). % John likes whatever Mary likes

% Query: What does John like?
?- likes(john, What).
% Output: What = apple ; What = banana.
- Isabelle
- Description: Isabelle is a generic proof assistant, which means it allows mathematical formulas to be expressed in a formal language and provides tools for proving those formulas in a logical manner.
- Positives:
- Robust Proof Assistant: Provides rigorous proofs, ensuring the validity of derived conclusions.
- Strong Typing: Helps catch errors early on, making belief representation more accurate.
- Interactive Environment: The semi-interactive nature allows for human-in-the-loop verification and guidance.
- Supports Higher-Order Logic: Can handle complex logical constructs and relations.
- Negatives:
- Steep Learning Curve: Isabelle has a challenging syntax and requires a deep understanding to use effectively.
- Overhead: Might be overkill for simpler belief systems or when you only need to check implications without detailed proofs.
- Example:
theory Example
  imports Main
begin

datatype fruit = Apple | Banana

fun likes :: "(string × fruit) set ⇒ string ⇒ fruit ⇒ bool" where
  "likes S p f = ((p, f) ∈ S)"

definition exampleSet :: "(string × fruit) set" where
  "exampleSet = {(''John'', Apple), (''Mary'', Banana)}"

lemma John_likes_Apple: "likes exampleSet ''John'' Apple"
  using exampleSet_def by auto

end
- Coq
- Description: Coq is a formal proof management system. It provides a formal language to write mathematical definitions, executable algorithms, and theorems together with an environment for semi-interactive development of machine-checked proofs.
- Positives:
- Rigorous Proof Mechanism: Like Isabelle, Coq provides very rigorous proofs.
- Extractable Code: Coq allows for the extraction of executable code from definitions, which can be useful if parts of the belief system are algorithmic.
- Strong Community Support: Has a wide range of libraries and an active community.
- Dependent Types: Can express very intricate relationships and properties.
- Negatives:
- Complexity: Like Isabelle, Coq can be difficult to learn and master.
- Performance: Large-scale proof searches can be time-consuming.
- Interactivity: While it's a strength in some contexts, the need for human-guided proof tactics might not be ideal for fully automated reasoning over beliefs.
- Example:
Require Import String.
Open Scope string_scope.

Inductive fruit := | Apple | Banana.

Definition likes (p : string) (f : fruit) : Prop :=
  match p, f with
  | "John", Apple => True
  | "Mary", Banana => True
  | _, _ => False
  end.

Lemma John_likes_Apple : likes "John" Apple.
Proof. simpl. trivial. Qed.
- Z3
- Description: Z3 is a high-performance theorem prover developed by Microsoft Research. It's used for checking the satisfiability of logical formulas and can be integrated into various applications, including software verification.
- Positives:
- High Performance: Built for efficiency and can handle large formulas relatively quickly.
- SMT Solver: Works with a variety of theories (like arithmetic, arrays, and bit vectors) which could provide versatility in representing beliefs.
- APIs for Multiple Languages: Can be integrated easily into various software frameworks.
- Decision Procedures: Automatically decides the satisfiability of statements without needing guided tactics.
- Negatives:
- Not a Traditional Proof Assistant: While Z3 can tell you if a statement is true or false based on given axioms, it doesn't produce detailed proofs in the same manner as Isabelle or Coq.
- Expressivity Limitations: Some complex logical constructs might be harder to represent compared to systems like Coq or Isabelle.
- Example:
from z3 import *

# Define the sorts (data types)
Fruit = DeclareSort('Fruit')
Person = DeclareSort('Person')

# Declare the function likes: Person x Fruit -> Bool
likes = Function('likes', Person, Fruit, BoolSort())

# Create the solver
s = Solver()

# John and Mary as constants of sort Person; Apple and Banana of sort Fruit
John, Mary = Consts('John Mary', Person)
Apple, Banana = Consts('Apple Banana', Fruit)

# Assert the statements
s.add(likes(John, Apple))
s.add(likes(Mary, Banana))

# Check if John likes Apple
print(s.check(likes(John, Apple)))  # Output: sat (satisfiable)
These examples are quite simplistic, and in practice, these tools can handle and are used for much more sophisticated tasks. However, they should provide a basic idea of how each system looks and operates.
Conclusion
editThe chapter on "Formal and Practical Logic" in the context of "Chatbots for Social Change" underscores the importance of these two intertwined aspects of logic in shaping AI-mediated discourse. It posits that while the logical structure of beliefs lays the groundwork for rational discussions, the practical structure of argumentation brings the human element to the forefront of AI interactions. The challenge for AI systems lies in seamlessly integrating these structures to promote dialogue that is not only intellectually rigorous but also socially and emotionally engaging. By doing so, AI has the potential to significantly contribute to social change by elevating the quality of public discourse and fostering a more informed, rational, and empathetic society.
Machine Understanding
editOriginal chapter organization
editResources
editIntroductory videos
- What are LLMs by Google - A simple 5 minute introduction to what LLMs do.
- What are LLMs by Apple (WWDC) - A great introduction to embeddings, tagging, and LLMs, in the first 6m. Then moves to the main topic, multilingual models.
- Build an LLM from scratch - A good video introduction, paired with a blog, conceptualizing the process of going from zero to full-scale LLM ($1M later!).
- Microsoft's infrastructure for training chatGPT, and similar incredibly high-throughput applications.
- Uses DeepSpeed to optimize the training
- InfiniBand for incredible network throughput
- ONNX to move networks around (also see CRIU for an idea of how checkpointing works)
- NVIDIA's Tensor Core GPUs for compute
Lectures
- Introduction to Neural Networks - Stanford CS224N. A nice mathematical overview of Neural Networks.
- Scaling Language Models - Stanford CS224N Guest Lecture. A general outlook of the rise in abilities of LLMs.
- Building Knowledge Representation - Stanford CS224N Guest Lecture. Very useful to understand our methods for vector retrieval, but from a more general perspective.
- Dot product has fast nearest-neighbor search algorithms (sub-linear).
- Re-ranking is often necessary, because the dot-product is not necessarily so expressive.
- Not all sets of vectors are easily indexed; some are "pathological," and to improve performance it can be beneficial to "spread them out" before indexing.
- Socially Intelligent NLP Systems - Nice! A deep-dive on how society impinges on language, and how that buggers up our models.
- LangChain vs. Assistants API - a nice overview of two interfaces for deeper chatbot computation.
- Emerging architectures for LLM applications, a look from the enterprise side, of architectures and their use. Covers RAG with vector search, Assistants, and general workflow of refining LLM models.
- GPT from scratch - fantastic introduction to chatGPT, understanding exactly how it works (Torch), by Andrej Karpathy.
- tokenization in GPT, a great intro by Andrej Karpathy, who has tons of other good lectures in this realm
Courses
- NYU Deep learning - A fully-fledged online course covering many advanced topics, including Attention and the Transformer, Graph Convolutional Networks, and Deep Learning for Structured Prediction.
- The textbook Understanding Deep Learning has accompanying exercises, and lecture slides
Textbooks
- A Hacker's Guide to Language Models, by Jeremy Howard
- Practical Deep Learning for Coders
- Understanding Deep Learning, with this nice introduction talk
- Dahl, D. A. (2023). Natural language understanding with Python: Combine natural language technology, deep learning, and large language models to create human-like language comprehension in computer systems. Packt Publishing.
- Sinan Ozdemir. (2023). Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs. Addison-Wesley Professional.
- Zhao, W. X. et al. (2023). A Survey of Large Language Models (arXiv:2303.18223). arXiv.
- Not exactly a textbook, but at 122 pages of dense referenced material it packs a punch, and should not be considered a resource to be consumed in one sitting.
Reranking
editReranking in the context of information retrieval is a two-step process used to enhance the relevance of search results. Here’s how it typically works:
- First-Stage Retrieval: In the initial phase, a broad set of documents is retrieved using a fast and efficient method. This is often done using embedding-based retrieval, where documents and queries are represented as vectors in a multi-dimensional space. The aim here is to cast a wide net and retrieve a large candidate set of documents quickly and with relatively low computational cost.
- Second-Stage Reranking: The documents retrieved in the first stage are then re-evaluated in the second stage to improve the precision of the search results. This stage involves a more computationally intensive algorithm, often powered by a Language Model (like an LLM), which takes the context of the search query more thoroughly into account. This step reorders (reranks) the results from the first stage, promoting more relevant documents to higher positions and demoting less relevant ones.
The reranking step is a trade-off between the relevance of the search results and the computational resources required. By using it as a second stage, systems aim to balance the speed and efficiency of embedding-based retrieval with the depth and relevance of LLM-powered retrieval. This combined approach can yield a set of results that are both relevant and produced within an acceptable timeframe and cost.
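A minimal sketch of the two-stage pattern, using a brute-force dot product for the first stage (a production system would use an approximate-nearest-neighbor index) and a placeholder rerank_score function standing in for the slower, more accurate model:

import numpy as np

def two_stage_search(query_vec, query_text, doc_vecs, doc_texts,
                     rerank_score, k1=100, k2=10):
    # Stage 1: cheap embedding-based retrieval. A dot product against
    # every document vector; an ANN index would replace this scan.
    scores = doc_vecs @ query_vec
    candidates = np.argsort(-scores)[:k1]

    # Stage 2: expensive reranking. rerank_score is a placeholder for
    # a slower, more accurate model (e.g., a cross-encoder or an LLM
    # relevance judge) applied only to the small candidate set.
    reranked = sorted(candidates,
                      key=lambda i: rerank_score(query_text, doc_texts[i]),
                      reverse=True)
    return reranked[:k2]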
Contradiction Detection
editThe Stanford Natural Language Processing Group has worked on detecting contradictions in text and has created contradiction datasets for this purpose. They have annotated the PASCAL RTE datasets for contradiction, marked for a 3-way decision in terms of entailment: "YES" (entails), "NO" (contradicts), and "UNKNOWN" (doesn't entail but is not a contradiction). Additionally, they have created a corpus where contradictions arise from negation by adding negative markers to the RTE2 test data and have gathered a collection of contradictions appearing "in the wild".
- Uray, Martin (2018). "Exploring neural models for contradiction detection."
- Tawfik, Noha S.; Spruit, Marco R. (2018). "Automated Contradiction Detection in Biomedical Literature." In Petra Perner (ed.), Machine Learning and Data Mining in Pattern Recognition. Vol. 10934. Cham: Springer International Publishing. pp. 138–148. ISBN 978-3-319-96136-1.
- Hsu, Cheng; Li, Cheng-Te; Saez-Trumper, Diego; Hsu, Yi-Zhan (2021). "WikiContradiction: Detecting Self-Contradiction Articles on Wikipedia." arXiv.
- Li, Luyang; Qin, Bing; Liu, Ting (2017). "Contradiction Detection with Contradiction-Specific Word Embedding." Algorithms, 10(2), 59. doi:10.3390/a10020059.
- Al Jallad, Khloud; Ghneim, Nada. "ArNLI: Arabic Natural Language Inference for Entailment and Contradiction Detection."
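As a concrete illustration of the model-based approaches in these references, an off-the-shelf natural language inference (NLI) classifier can score a statement pair for contradiction. The sketch below uses roberta-large-mnli, a publicly available Hugging Face model; any MNLI-style classifier would work the same way, and the example sentences are invented:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "The chatbot stores every conversation indefinitely."
hypothesis = "Conversations are deleted after thirty days."

inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]

# For this model the label order is: contradiction, neutral, entailment.
for label, p in zip(["contradiction", "neutral", "entailment"], probs):
    print(f"{label}: {p:.3f}")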
Introduction to Large Language Models (LLMs)
editLet's dive into the world of Large Language Models (LLMs). These are advanced computer programs designed to understand, use, and generate human language. Imagine them as vast libraries filled with an enormous range of books, covering every topic you can think of. Just like a librarian who knows where to find every piece of information in these books, LLMs can navigate through this vast knowledge to provide us with insights, answers, and even generate new content.
How do they achieve this? LLMs are built upon complex algorithms and mathematical models. They learn from vast amounts of text – from novels and news articles to scientific papers and social media posts. This learning process involves recognizing patterns in language: how words and sentences are structured, how ideas are connected, and how different expressions can convey the same meaning.
Each LLM has millions, sometimes billions, of parameters – these are the knobs and dials of the model. Each parameter plays a part in understanding a tiny aspect of language, like the tone of a sentence, the meaning of a word, or the structure of a paragraph. When you interact with an LLM, it uses these parameters to decode your request and generate a response that is accurate and relevant.
One of the most fascinating aspects of LLMs is their versatility. They can write in different styles, from formal reports to casual conversations. They can answer factual questions, create imaginative stories, or even write code. This adaptability makes them incredibly useful across various fields and applications.
LLMs are a breakthrough in the way we interact with machines. They bring a level of understanding and responsiveness that was previously unattainable, making our interactions with computers more natural and intuitive. As they continue to evolve, they're not just transforming how we use technology, but also expanding the boundaries of what it can achieve.
In this chapter, we'll explore the world of Large Language Models (LLMs) in depth. Starting with their basic definitions and concepts, we'll trace their historical development to understand how they've evolved into today's advanced models. We'll delve into the key components that make LLMs function, including neural network architectures, their training processes, and the complexities of language modeling and prediction. Finally, we'll examine the fundamental applications of LLMs, such as natural language understanding and generation, covering areas like conversational agents, sentiment analysis, content creation, and language translation. This chapter aims to provide a clear and comprehensive understanding of LLMs, showcasing their capabilities and the transformative impact they have in various sectors.
Definition and Basic Concepts
editFoundations of Neural Networks
To truly grasp the concept of Large Language Models (LLMs), we must first understand neural networks, the core technology behind them. Neural networks are a subset of machine learning inspired by the human brain. They consist of layers of nodes, or 'neurons,' each capable of performing simple calculations. When these neurons are connected and layered, they can process complex data. In the context of LLMs, these networks analyze and process language data.
The Structure of Neural Networks in LLMs
- Input Layer: This is where the model receives text data. Each word or character is represented numerically, often as a vector, which is a series of numbers that capture the essence of the word.
- Hidden Layers: These are where the bulk of processing happens. In LLMs, hidden layers are often very complex, allowing the model to identify intricate patterns in language. The more layers (or 'depth') a model has, the more nuanced its understanding of language can be.
- Output Layer: This layer produces the final output, which could be a prediction of the next word in a sentence, the classification of text into categories, or other language tasks.
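To make this layered structure concrete, here is a toy fully connected network in plain Python. The sizes are arbitrary illustrations, orders of magnitude smaller than any real LLM:

import numpy as np

# Toy three-layer network: 4 input features -> 8 hidden units -> 3 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    h = np.maximum(0, x @ W1 + b1)   # hidden layer with ReLU activation
    logits = h @ W2 + b2             # output layer
    e = np.exp(logits - logits.max())
    return e / e.sum()               # softmax: probabilities over outputs

x = rng.normal(size=4)   # stand-in for an embedded token
print(forward(x))        # probability distribution over 3 outputs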
Training Large Language Models
Training an LLM involves feeding it a vast amount of text data. During this process, the model makes predictions about the text (like guessing the next word in a sentence). It then compares its predictions against the actual text, adjusting its parameters (the weights and biases of the neurons) to improve accuracy. This process is repeated countless times, enabling the model to learn from its mistakes and improve its language understanding.
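In code, this loop can be sketched roughly as follows, assuming a placeholder model that maps token ids to next-token logits and an iterable batches of tokenized text; this is the shape of the procedure, not a production training script:

import torch
import torch.nn.functional as F

# Placeholders: `model` maps (batch, seq) token-id tensors to
# (batch, seq, vocab) logits; `batches` yields token-id tensors.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

for tokens in batches:
    inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict the next token
    logits = model(inputs)                            # (batch, seq, vocab)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()      # how wrong was each prediction?
    optimizer.step()     # nudge the parameters to do better next time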
Parameters: The Building Blocks of LLMs
Parameters in a neural network are the aspects that the model adjusts during training. In LLMs, these parameters are numerous, often in the hundreds of millions or more. They allow the model to capture and remember the nuances of language, from basic grammar to complex stylistic elements.
From Data to Language Understanding
Through training, LLMs develop an ability to understand context, grammar, and semantics. This isn't just word recognition, but an understanding of how language is structured and used in different situations. They can detect subtleties like sarcasm, humor, and emotion, which are challenging even for human beings.
Generating Language with LLMs
Once trained, LLMs can generate text. They do this by predicting what comes next in a given piece of text. This capability is not just a parroting back of learned data, but an intelligent synthesis of language patterns that the model has internalized.
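A toy greedy decoding loop makes this concrete; model and tokenizer are the same placeholders as above, and real systems usually sample from the predicted distribution rather than always taking the single most likely token:

import torch

# Placeholders: `model` maps a (1, seq) tensor of token ids to
# (1, seq, vocab) logits; `tokenizer` converts text to and from ids.
def generate(prompt, max_new_tokens=20):
    ids = tokenizer.encode(prompt, return_tensors="pt")
    for _ in range(max_new_tokens):
        logits = model(ids)                   # score every vocabulary item
        next_id = logits[0, -1].argmax()      # greedy: take the single best
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    return tokenizer.decode(ids[0])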
By understanding these fundamental concepts, we begin to see LLMs not just as tools or programs, but as advanced systems that mimic some of the most complex aspects of human intelligence. This section sets the stage for a deeper exploration into their historical development, key components, and the transformative applications they enable.
Historical Development of LLMs
editThe journey of Large Language Models began with rule-based systems in the early days of computational linguistics. These early models, dating back to the 1950s and 60s, were based on sets of handcrafted rules for syntax and grammar. The advent of statistical models in the late 1980s and 1990s marked a significant shift. These models used probabilities to predict word sequences, laying the groundwork for modern language modeling.
The 2000s witnessed a transition from statistical models to machine learning-based approaches. This era introduced neural networks in language modeling, but these early networks were relatively simple, often limited to specific tasks like part-of-speech tagging or named entity recognition. The focus was primarily on improving specific aspects of language processing rather than developing comprehensive language understanding.
The introduction of deep learning and word embeddings in the early 2010s revolutionized NLP. Models like Word2Vec provided a way to represent words in vector space, capturing semantic relationships between words. This period also saw the development of more complex neural network architectures, such as Long Short-Term Memory (LSTM) networks, which were better at handling the sequential nature of language.
The introduction of the Transformer model in 2017 was a watershed moment. The Transformer, first introduced in a paper titled "Attention Is All You Need," abandoned recurrent layers in favor of attention mechanisms. This allowed for more parallel processing and significantly improved the efficiency and effectiveness of language models.
Rise of Large-Scale Language Models
Following the Transformer's success, there was a rapid escalation in the scale of language models. Notable models include OpenAI's GPT series, Google's BERT, and others like XLNet and T5. These models, with their vast number of parameters (into the billions), demonstrated unprecedented language understanding and generation capabilities. They were trained on diverse and extensive datasets, enabling them to perform a wide range of language tasks with high proficiency.
Recent Developments: Increasing Abilities and Scale
The most recent phase in the development of LLMs is marked by further increases in model size and capabilities. Models like GPT-3 and its successors have pushed the boundaries in terms of the number of parameters and the depth of language understanding. These models exhibit remarkable abilities in generating coherent and contextually relevant text, answering complex questions, translating languages, and even creating content that is indistinguishable from human-written text.
Architecture
editLarge Language Models (LLMs), such as those based on the Transformer architecture, represent a significant advancement in the field of natural language processing. The Transformer model, introduced in the paper "Attention Is All You Need", has become the backbone of most modern LLMs.
The architecture of a Transformer-based LLM is complex, consisting of several layers and components that work together to process and generate language. The key elements of this architecture include:
- Input Embedding Layer: This layer converts input text into numerical vectors. Each word or token in the input text is represented as a vector in a high-dimensional space. This process is crucial for the model to process language data.
- Positional Encoding: In addition to word embeddings, Transformer models add positional encodings to the input embeddings to capture the order of the words in a sentence. This is important because the model itself does not process words sequentially as in previous architectures like RNNs (Recurrent Neural Networks).
- Encoder and Decoder Layers: The Transformer model has an encoder-decoder structure. The encoder processes the input text, and the decoder generates the output text. Each encoder and decoder consists of multiple layers.
- Each layer in the encoder includes two sub-layers: a multi-head self-attention mechanism and a simple, position-wise fully connected feed-forward network.
- Each layer in the decoder also has two sub-layers but includes an additional third sub-layer for attention over the encoder's output.
- Self-Attention Mechanism: This mechanism allows the model to weigh the importance of different words in the input sentence. It enables the model to capture contextual information from the entire sentence, which is a key feature of the Transformer model.
- Multi-Head Attention: This component splits the attention mechanism into multiple heads, allowing the model to simultaneously attend to information from different representation subspaces at different positions.
- Feed-Forward Neural Networks: These networks are applied to each position separately and identically. They consist of fully connected layers with activation functions.
- Normalization and Dropout Layers: These layers are used in between the other components of the Transformer architecture to stabilize and regularize the training process.
- Output Layer: The final decoder output is transformed into a predicted word or token, often using a softmax layer to generate a probability distribution over possible outputs.
The Transformer architecture is highly parallelizable, making it more efficient to train on large datasets compared to older architectures like RNNs or LSTMs. This efficiency is one of the reasons why Transformer-based models can be scaled to have a large number of parameters and process extensive language data.
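Much of this machinery reduces to one core computation. Here is a minimal single-head, scaled dot-product self-attention in plain Python, with illustrative sizes and without the learned projection matrices a real Transformer applies to produce queries, keys, and values:

import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)    # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax
    return weights @ V               # weighted mix of the value vectors

# Five token positions, 16-dimensional representations.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))
print(attention(x, x, x).shape)      # (5, 16): one updated vector per token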
For more detailed information on the Transformer architecture, see the Wikipedia Transformer page.
Neural Network Architectures
editTraining and Data Requirements
editLanguage Modeling and Prediction
editBasic Applications of LLMs
editNatural Language Understanding
editConversational Agents
editSentiment Analysis
editNatural Language Generation
editContent Creation
editLanguage Translation
editTechnical Aspects of LLMs
editArchitectures and Models
editTransformer Models
editRNNs, LSTMs, and GRUs
editTraining Large Language Models
editData Preparation and Cleaning
editSupervised and Unsupervised Learning Methods
editChallenges in Training LLMs
editEvaluating LLMs
editPerformance Metrics
editBenchmarks and Testing
editAdvanced Techniques and Methodologies
editChain-of-Thought Prompting
editSee the Graph of Thoughts paper, which has a nice exposition of chain-of-thought and tree-of-thought prompting.
Zero-Shot and Few-Shot Learning
editRetrieval-Augmented Generation (RAG)
editPrompt Engineering and Tuning
editHybrid Models
editSelf-Improving Models
editExplainable AI (XAI) Techniques
editContextual Understanding and Memory
editEthical and Bias Mitigation Techniques
editAdvanced Applications of LLMs
editHadi et al.'s (2023) 44-page survey [1] offers a solid and recent resource here.
Enhancing Creativity and Innovation
editArtistic and Literary Creation
editDesign and Engineering
editComplex Problem Solving and Reasoning
editAdvanced Analytics and Data Interpretation
editDecision Support Systems
editPersonalized Education and Training
editAdaptive Learning Systems
editInteractive Educational Content
editHealthcare and Medical Applications
editMedical Research and Drug Discovery
editPatient Interaction and Support Systems
editBusiness and Industry Innovations
editAutomated Business Intelligence
editPersonalized Marketing and Customer Insights
editChallenges and Future Directions
editScalability and Environmental Impact
editEthical Considerations and Bias
editAchieving Generalization and Robustness
editFuture Trends in LLM Development
editIntegration with Other AI Technologies
editEnhancing Interactivity and Personalization
editConclusion
editImplementation and Execution
editStatement Embedding Models
editFor generating embeddings of statements that can assess the similarity in meaning between two statements, several state-of-the-art, open-source algorithms and tools are available:
OpenAI's Embedding Models[2]
OpenAI offers embedding models that are particularly tuned for functionalities such as text similarity and text search. These models receive text as input and return an embedding vector that can be utilized for a variety of applications, including assessing the similarity between statements.
Spark NLP[3]
This open-source library provides a suite of transformer-based models, including BERT and Universal Sentence Encoder, which are capable of creating rich semantic embeddings. The library is fully open-source under the Apache 2.0 license.
To use Spark NLP you need the following requirements:
- Java 8 and 11
- Apache Spark 3.5.x, 3.4.x, 3.3.x, 3.2.x, 3.1.x, 3.0.x
GPU (optional): Spark NLP 5.1.4 is built with ONNX 1.15.1 and TensorFlow 2.7.1 deep learning engines. The following NVIDIA® software is required only for GPU support:
- NVIDIA® GPU drivers version 450.80.02 or higher
- CUDA® Toolkit 11.2
- cuDNN SDK 8.1.0
- There is a massive text embedding benchmark (MTEB [4]) which should help us determine which embedding algorithm to use.
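As a concrete illustration, the open-source sentence-transformers library reduces this to a few lines. all-MiniLM-L6-v2 is one small model from the MTEB leaderboard, used here purely as an example (ember-v1, discussed below, can be loaded the same way):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

statements = [
    "Vaccines are effective at preventing severe illness.",
    "Getting vaccinated greatly reduces the risk of serious disease.",
    "The stock market fell sharply today.",
]
emb = model.encode(statements, convert_to_tensor=True)

# Cosine similarity: near-duplicates in meaning score close to 1.
print(util.cos_sim(emb[0], emb[1]).item())   # high
print(util.cos_sim(emb[0], emb[2]).item())   # low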
Vector Similarity Search
editLLMRails
editThe MTEB led me to the ember-v1 model by llmrails, because of its success on the SprintDuplicateQuestions dataset. The goal is to embed statements such that statements or questions which are deemed by a community to be duplicates are closest. The dataset compiles marked duplicates from Stack Exchange, the Sprint technical forum website, and Quora.
LLMrails [5] is a platform that offers robust embedding models to enhance applications' understanding of text significance on a large scale. This includes features like semantic search, categorization, and reranking capabilities.
Pricing: "Elevate your data game with our cutting-edge ChatGPT-style chatbot! All you need to do is link your data sources and watch as our chatbot transforms your data into actionable insights."
LLMRails is revolutionizing search technology, offering developers unparalleled access to advanced neural technology. Providing more precise and pertinent results paves the way for transformative changes in the field of search technology, making it accessible to a wide range of developers.
From the website: "with private invitation, join the LLMRails and start your AI advanture!" How did they get this wrong?
- Embed $0.00005 per 1k tokens
- Rerank $0.001 per search
- Search $0.0005 per search
- Extract $0.3 per document
Note: This service does not give the capabilities I need. It's a bit too managed. I just need vector embeddings, and retrieval.
Other Vector Databases as a Service
edit- Amazon OpenSearch Service is a fully managed service that simplifies deploying, scaling, and operating OpenSearch in the AWS Cloud. It supports vector search capabilities and efficient vector query filters, which can improve the responsiveness of applications such as semantic or visual search experiences.
- Azure Cognitive Search: This service allows the addition of a vector field to an index and supports vector search. Azure provides tutorials and APIs to convert input into a vector and perform the search, as well as Azure OpenAI embeddings for tasks like document search.
- Zilliz Cloud, powered by the world's most powerful vector database, Milvus, solves the challenge of processing tens of billions of vectors.
- Zilliz has a 30-day free trial worth $400 of credits (4 CUs).
- Pricing: Zilliz Cloud Usage (Each unit is 0.1 cent of usage) $0.001 / unit
- A more comprehensive list, Awesome Vector Search, on GitHub.[6]
- For cloud services, they list Zilliz first, then Relevance AI, Pinecone, and MyScale.
- Graft somehow came up as well.
- It is extremely expensive: $500/month for 10,000 data points, and unlimited data points at $5k/month...
- Perhaps it's more managed than Zilliz, or perhaps that's simply what the infrastructure costs either way.
- High price could also be an indication of the value of this sort of technology (they also do the embedding & document upload for you).
Open Source Models
- Milvus is a "vector database built for scalable similarity search. Open-source, highly scalable, and blazing fast."[7] Seems perfect. They have a managed version, but I'm not sure it's necessary now; see the sketch after this list.
- Elastic NLP: Text Embeddings and Vector Search: Provides guidance on deploying text embedding models and explains how vector embeddings work, converting data into numerical representations[8].
- TensorFlow Recommenders' ScaNN[9] TensorFlow provides an efficient library for vector similarity search named ScaNN. It allows for the rapid searching of embeddings at inference time and is designed to achieve the best speed-accuracy tradeoff with state-of-the-art vector compression techniques.
- Other notable vector databases and search engines include Chroma, LanceDB, Marqo, Qdrant, Vespa, Vald, and Weaviate, as well as databases like Cassandra, Coveo, Elasticsearch, and OpenSearch that support vector search capabilities.
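For our purposes, a minimal insert-and-search sketch with Milvus might look like the following. It assumes pymilvus version 2.4 or later, which bundles Milvus Lite so no separate server is needed; the collection name, dimension, and random vectors are illustrative.

```python
import numpy as np
from pymilvus import MilvusClient

# Milvus Lite stores everything in a local file -- no server required.
client = MilvusClient("statements.db")
client.create_collection(collection_name="statements", dimension=768)

# In practice these vectors would come from an embedding model like ember-v1.
rng = np.random.default_rng(0)
rows = [{"id": i, "vector": rng.random(768).tolist()} for i in range(100)]
client.insert(collection_name="statements", data=rows)

query = rng.random(768).tolist()
hits = client.search(collection_name="statements", data=[query], limit=3)
print(hits)  # IDs and distances of the three nearest stored statements
```

The same client code should point at a full Milvus cluster later by swapping the local file path for a server URI, which is part of what makes starting with the open-source version attractive.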
Milvus Benchmark
Milvus has conducted benchmarks, which should give us an idea of overall cost and how much we can scale before buckling.
- CPU: An Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz. This is a high-end server-grade processor suitable for demanding tasks. It belongs to Intel's Xeon scalable processors, which are commonly used in enterprise-level servers for their reliability and performance.
- Memory: 16 × 32 GB RDIMM, 3200 MT/s. The server has 16 memory slots, each holding a 32 GB RDIMM (Registered DIMM) module, for 512 GB of RAM in total. The memory speed is 3200 MT/s (megatransfers per second), which indicates how fast the memory can operate.
- SSD: SATA 6 Gbps. This indicates that the server uses a Solid State Drive connected through a SATA interface, with a transfer rate of 6 Gigabits per second. SSDs are much faster than traditional hard drives and are preferred for their speed and reliability.
To find an approximate AWS EC2 equivalent, we would need to match these specs as closely as possible. Given the CPU and memory specifications, you might look into the EC2 instances that offer Intel Xeon Scalable Processors (2nd Gen or 3rd Gen) and the ability to configure large amounts of memory.
A possible match could be an instance from the m5 or r5 families, which are designed for general-purpose (m5) or memory-optimized (r5) workloads. For example, the r5.12xlarge instance provides 48 vCPUs and 384 GiB of memory, which, while not an exact match to these specs (it has less memory), is within the same performance ballpark.
However, keep in mind that AWS offers a wide range of EC2 instances, and the actual choice would depend on the specific balance of CPU, memory, and I/O performance that you need for your application. Also, pricing can vary significantly based on region, reserved vs. on-demand usage, and additional options like using Elastic Block Store (EBS) optimized instances or adding extra SSD storage.
Using the AWS pricing calculator, this amounts to roughly $3 per hour.
- Search: 7k to 10k QPS at 128 dimensions on a cluster (one replica); 4k to 7.5k QPS standalone (one replica)
- Scalability
- From 8 to 16 CPU cores, throughput doubles; beyond that it less-than-doubles
- Going from 1 to 8 replicas raises QPS from 7k to 31k and more than doubles the available concurrent queries (to 1,200)
There are 3,600 seconds in an hour, so the cost per query is $3 / (7,000 × 3,600) ≈ $0.000000119, or about twelve cents per million queries.
Large Language Models
A useful article comparing open-source LLMs was published on Medium.
LLMs In Hosted Environments
editModel | Cost per 1M input tokens | Cost per 1M output tokens | Additional Notes |
---|---|---|---|
AI21Labs Jurassic-2 Ultra | $150 | $150 | Highest quality |
AI21Labs Jurassic-2 Mid | $10 | $10 | Optimal balance of quality, speed & cost |
AI21Labs Jurassic-2 Light | $3 | $3 | Fastest and most cost-effective |
AI21Labs Jurassic-2 Chat | $15 | $15 | Complex, multi-turn interactions Free for $1000 in usage. |
Anthropic Claude Instant | $1.63 | $5.51 | Low latency, high throughput |
Anthropic Claude 2.0, 2.1 | $8 | $24 | Best for tasks requiring complex reasoning |
Cohere Command | $1.00 | $2.00 | Standard offering |
Cohere Command Light | $0.30 | $0.60 | Lighter version |
Google Bard | Free (although likely limited) | Requires a Google account | |
GPT-4 Turbo (gpt-4-1106-preview) | $10 | $30 | |
GPT-4 Turbo (gpt-4-1106-vision-preview) | $10 | $30 | |
GPT-4 | $30 | $60 | |
GPT-4-32k | $60 | $120 | |
GPT-3.5 Turbo (gpt-3.5-turbo-1106) | $1.00 | $2.00 | |
GPT-3.5 Turbo (gpt-3.5-turbo-instruct) | $1.50 | $2.00 |
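To compare rows at a glance, here is a small back-of-the-envelope helper; a sketch, with a hypothetical monthly workload plugged in:

```python
def monthly_cost(input_rate, output_rate, input_tokens, output_tokens):
    """Rates are $ per 1M tokens; token counts are per month."""
    return (input_rate * input_tokens + output_rate * output_tokens) / 1_000_000

# Hypothetical workload: 50M input and 10M output tokens per month.
print(monthly_cost(1.00, 2.00, 50e6, 10e6))    # GPT-3.5 Turbo: $70.0
print(monthly_cost(30.00, 60.00, 50e6, 10e6))  # GPT-4: $2100.0
```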
LLMs On Your Own Hardware
From the model card: "Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations."
It turns out that you have to ask Meta nicely to get access to the parameter sets, agreeing to their terms of use.
- It's clear from a little research that running and training locally (I have a 2021 Mac M1) is going to cause a lot of headache.
- AWS Sagemaker seems to be a great option for getting up and running with open source models.
- Has access to dozens of models of varying sizes through their Jumpstart feature
- In practice, you say "go" and are dropped right into a JupyterLab instance; see the sketch below.
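As a concrete example of the JumpStart path, here is a minimal deployment sketch. It assumes the sagemaker Python SDK and an AWS account with SageMaker access; the model_id, instance type, and payload format are illustrative and should be checked against the JumpStart model card.

```python
from sagemaker.jumpstart.model import JumpStartModel

# model_id is illustrative; browse JumpStart for the available models.
model = JumpStartModel(model_id="meta-textgeneration-llama-2-7b")
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",  # matches the 7B row in the table below
    accept_eula=True,               # Llama 2 requires accepting Meta's license
)

# Payload format varies by model; consult the JumpStart model card.
print(predictor.predict({"inputs": "What makes a good mediator?"}))

# Billing is per hour the endpoint is up, not per request.
predictor.delete_endpoint()
```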
Hardware requirements of Llama (Nov 2023)
Model | Instance Type | Quantization | # of GPUs per replica | Cost per hour |
---|---|---|---|---|
Llama 7B | ml.g5.2xlarge | - | 1 | $1.52 |
Llama 13B | ml.g5.12xlarge | - | 4 | $7.09 |
Llama 70B | ml.g5.48xlarge | bitsandbytes | 8 | $20.36 |
Llama 70B | ml.p4d.24xlarge | - | 8 | $37.69 |
Benchmarking AWS SageMaker and Llama
Fortunately, Phil Schmid has conducted thorough benchmarks of different deployments of Llama on SageMaker in AWS. His blog posts in 2023 in particular are an incredible reference for getting started with these LLMs.
To give the most economical example, the g5.2xlarge ($1.52/hour) can handle 5 concurrent requests while delivering 120 tokens of output per second. Incredible! That's $3.50 per 1M tokens. OpenAI, for comparison, offers gpt-3.5-turbo (the cheapest option) at $0.0020 per 1K tokens, or $2.00 per 1M tokens. Comparable, and not surprising that OpenAI is cheaper.
Let's compare the most expensive option to the most sophisticated OpenAI model, GPT-4. Llama 70B runs on a $37.69/hour server (ml.p4d.24xlarge) serving 20 concurrent requests at 321 tokens/second. That works out to $10.43 per 1M tokens. For comparison, GPT-4 costs $0.06 per 1K output tokens, or $60 per 1M.
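The arithmetic behind these per-token figures is just hourly price divided by hourly throughput; a minimal sketch, where you would plug in your own measured throughput (which varies with batch size and load):

```python
def cost_per_million_tokens(hourly_price, tokens_per_second):
    """Divide the hourly instance price by hourly token throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_price / tokens_per_hour * 1_000_000

print(cost_per_million_tokens(1.52, 120))  # g5.2xlarge example: ~$3.52
```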
It should also be noted that Phil Schmid was able to get decent performance (15 seconds per thousand tokens generated) from a budget deployment on AWS's new Inferentia2 hardware (inf2.xlarge), which costs just $0.75 per hour. That's about $550 per month, so better not to leave it running, but still. Very cool!
He also fine-tunes a 7B-parameter Mistral model using an ml.g5.4xlarge ($2.03/hour). Training on 15,001 examples, processed in whole 3 times (3 epochs), took 3.9 hours, giving an overall cost of under $8.
Integrations
To achieve the widest reach, we want to integrate our chatbots with low-effort communication media, such as text messages, phone calls, WhatsApp, Facebook Messenger, WeChat, or perhaps decentralized messaging platforms like those built on nostr. Each option has somewhat different benefits, limitations, and monetary cost. This section gives an overview of the available connections, along with the pricing and basic principles to get you started.
Facebook (now under the parent company Meta) has plans to integrate its messaging services across WhatsApp, Instagram, and Facebook Messenger. Mark Zuckerberg is leading an initiative to merge the underlying technical infrastructure of these apps, while keeping them as separate apps[10]. This would allow cross-platform messaging between the services, with all the messaging apps adopting end-to-end encryption[11]. The integration raises concerns around antitrust issues, privacy, and further consolidation of Facebook's power over its various platforms[12].
Third-party platforms like Tidio, Aivo's AgentBot, Respond.io, BotsCrew, Gupshup, Landbot, and Sinch Engage allow businesses to create chatbots that can integrate with WhatsApp, Facebook Messenger, Instagram and other channels.
Here is a table summarizing the messaging integrations supported by various third-party platforms, along with their approximate pricing and relevant notes:
Platform | Messaging Integrations | Approximate Pricing | Notes |
---|---|---|---|
Landbot | WhatsApp, Facebook Messenger | Starter: €49/month, Pro: €99/month, Business: Custom | Offers AI chatbot builder, opt-in tools, workflows, surveys, etc. Needs at least a pro account to integrate with webhooks. |
BotSpace | WhatsApp | Starter: ₹3,499/month, Pro: ₹7,499/month, Premium: ₹23,499/month | Supports team inboxes, roles & permissions, custom workflows. |
Callbell | WhatsApp | €50/month per 10 agents, +€20/month per WhatsApp number | Offers advanced bot builder module for €59/month. |
DelightChat | WhatsApp (others not specified) | Pricing not provided | Offers plans for businesses at different stages. |
Brevo | WhatsApp | Pay-as-you-go, no recurring fees | Only pay for WhatsApp messages sent. |
AiSensy | WhatsApp | Basic: ₹899/month ($10.77), Pro: ₹2,399/month ($28.73) | Limits on free service conversations per month. |
Flowable Engage | WhatsApp, Facebook Messenger, WeChat, LINE | Pricing not provided | Supports voice/video calls, templates, rich media on some platforms. Account requirements vary. |
All the listed platforms support WhatsApp integration, as it is a popular messaging channel for businesses. Some platforms like Landbot and Flowable Engage also support Facebook Messenger integration. Platforms like Flowable Engage offer integration with additional messaging apps like WeChat and LINE. Pricing models vary, with some offering subscription plans (monthly/annual) and others following a pay-per-message or per-agent model. Certain platforms bundle additional features like AI chatbots, custom workflows, surveys, etc. along with messaging integration.
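Whatever platform you choose, the integration pattern is broadly the same: the provider POSTs each inbound message to a webhook you host, and you reply through its send API. Here is a generic sketch of that pattern; the field names and the send_reply stub are illustrative, not any particular vendor's API.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def generate_reply(text: str) -> str:
    # Placeholder: call your chatbot / LLM here.
    return f"You said: {text}"

def send_reply(user_id: str, text: str) -> None:
    # Placeholder: call the messaging platform's outbound-message API.
    print(f"-> {user_id}: {text}")

@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.get_json(force=True)
    user_id = event.get("sender")     # illustrative field name
    text = event.get("message", "")   # illustrative field name
    send_reply(user_id, generate_reply(text))
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=8080)
```

Most providers also require webhook verification (echoing a challenge token) and signed requests; consult each platform's documentation for those details.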
Recent reporting indicates that Meta (Facebook) is working on enabling interoperability between its own messaging apps (WhatsApp, Messenger, Instagram) as well as with approved third-party messaging services, as mandated by the EU's Digital Markets Act[13][14]. However, the extent of this interoperability and its impact on existing third-party integrations is currently unclear.
Featured Interviews
editDirk Helbing
Dirk Helbing, an esteemed scholar at the forefront of complex systems, social dynamics, and computational science, has been instrumental in exploring the interplay between technology and society. His recent endeavors in advancing the concept of deliberative democracy, especially through his contributions in "Democracy 2.0," underscore the transformative potential of chatbots in facilitating and enhancing human discourse. His collaboration and insights serve as a cornerstone for this course, which delves into the profound impact of chatbots on our collective future.
Ethical and Societal Implications of Chatbots
Opportunities and Challenges: Chatbots offer transformative opportunities for enhancing user interaction with technology, providing a platform for information access and engagement. Yet, these opportunities are accompanied by challenges that are not immediately apparent, necessitating a proactive approach to identify and mitigate potential risks. The development of chatbots should not be driven by the pursuit of an "average" or uniform standard, as this risks the homogenization of society and undermines the robustness that diversity brings to social systems. Instead, chatbots should be designed to reflect a wide array of thoughts, supporting a tapestry of perspectives that enrich the societal discourse.
Ethical Influence and Autonomy: The ethical use of chatbots centers around the principles of consent and self-determination. Influencing users without their explicit consent raises ethical concerns, highlighting the need for transparency in how chatbots are used as tools for individual empowerment rather than for manipulation. Preserving user autonomy in decision-making is crucial, ensuring that chatbots support rather than undermine the individual's ability to make informed choices. This is particularly relevant in the context of virtual reality experiments, where the immersive nature of the technology holds significant potential for influencing user perceptions and behaviors.
Chatbots in Conflict Resolution and Social Cohesion
Bridging Perspectives: Chatbots have the potential to serve as mediators in conflict resolution by exposing individuals to a multitude of perspectives, thereby fostering a deeper understanding of the issues at hand. This process, while intricate and time-consuming, can be instrumental in identifying the roots of conflicts and facilitating pathways towards resolution. By promoting empathy and understanding, chatbots can contribute to defusing societal tensions and building a more cohesive social fabric.
Diversity and Customization: A diverse ecosystem of chatbots is essential to cater to the varied needs and preferences of users. Encouraging the development of multiple chatbot models ensures that no single approach becomes dominant, allowing for a more personalized and user-centric experience. This diversity not only enhances the user experience but also serves as a safeguard against the risks associated with a one-size-fits-all solution, which can lead to large-scale, institutionalized errors.
Security and Long-Term Impact of Chatbots
System Vulnerability and Security: The susceptibility of chatbot systems to hijacking and misuse is an ongoing concern that requires constant vigilance and innovation in security measures. As with any system, there is an arms race between those working to maintain stability and those with malicious intent. Drawing from the stabilizing features of scientific inquiry, it is imperative to understand and incorporate similar features into chatbot systems to ensure their resilience and robustness.
Delayed Consequences and Ethical Research: The conversation around chatbots also touches on the potential for delayed societal consequences, which may not become evident until years into the future. This underscores the importance of a cautious approach to the deployment of chatbots, with a focus on the long-term implications of their integration into daily life. Ethical research and development in the field of AI must prioritize the advancement of knowledge while being cognizant of the potential for misuse, especially by entities that may disregard ethical considerations for profit or political gain.
Deeper Reflections on the Role of Chatbots
This section was written by GPT-4 and offers some food for thought, extending the conversation with Dirk Helbing.
Balancing Scalability with Individual Needs: The scalability of chatbots presents a paradox; while it allows for widespread access and utility, it also risks the loss of individualized responses that cater to specific user contexts. The challenge lies in designing chatbots that can provide the efficiency benefits of scalability while maintaining the ability to personalize interactions. This balance is critical in ensuring that chatbots remain versatile and effective across a broad spectrum of scenarios without compromising the unique needs of each user.
Transparency, Accountability, and Public Engagement: Transparency and accountability in chatbot operations are essential to build trust and ensure that users understand the rationale behind the chatbot's responses. Public education and awareness initiatives are equally important, equipping users with the knowledge to critically engage with chatbot technology and understand its limitations. Feedback mechanisms play a pivotal role in this ecosystem, providing a direct channel for users to influence the evolution of chatbot systems and ensuring that developers can respond to concerns and improve the technology continuously.
Ethical Frameworks and Regulatory Considerations: The development and deployment of chatbots must be underpinned by robust ethical frameworks that guide their influence on individual decision-making. As chatbots become more integrated into societal functions, the question of regulation becomes increasingly pertinent. What role should regulation play, and how can it be implemented to foster innovation while protecting individual rights and societal values? These questions highlight the need for a collaborative approach involving policymakers, technologists, and the public to navigate the complex ethical landscape of AI and chatbots.
Jeremy Foote
This article details a conversation between Alec and Jeremy, focusing on their collaborative research into the use of chatbots for mediation and social impact in online communities. It outlines their individual projects, shared interests, ethical considerations, technical challenges, and future steps in the realm of chatbots and large language models (LLMs).
Alec and Jeremy's dialogue represents a confluence of interdisciplinary research involving communication studies, artificial intelligence, and ethics. Their exchange sheds light on the potential of chatbots to reshape online interactions, the ethical implications of deploying such technologies, and the theoretical frameworks that support their usage.
Jeremy's Reddit Project
Jeremy's initiative aims to engage with Reddit users known for toxic commenting by offering them a chance to converse with a bot. This project is a partnership with a scientist studying toxicity and features three distinct approaches to interaction:
- The default setting without specific behavioral nudges.
- An approach that encourages adherence to Reddit's community norms.
- A narrative storytelling mode prompting users to reflect on their behavior.
The goal is to monitor behavioral changes post-interaction to evaluate the effectiveness of each method.
User Response Categories: During the project, Jeremy observed varied responses from users, which he categorized into:
- Trolls who seek to provoke the bot.
- Justifiers who defend their actions and challenge the bot's contextual understanding.
- Good-faith interactors who show genuine reflection and sometimes gratitude, though Jeremy raises concerns about the authenticity of their thanks when presenting the work.
Project Logistics and Concerns
- The logistical aspects of the project, including the initial focus on text interactions due to the complexities of real-time voice generation.
- The potential necessity of a network of specialized LLMs to handle diverse interaction scenarios.
Alec's Perspective and Project
Alec views these interactions as educational, employing varied conversational techniques to instill a lesson. His vision includes:
- Utilizing chatbots to bridge individual perspectives, forming a social semantic database from these conversations.
- Reflecting diverse viewpoints in societal discussions through bot-mediated dialogue.
- Exploring data collection methods, including voice or text input, and the feasibility of debate-style interactions with bots.
- Investigating the technical training of LLMs to support his envisioned semantic database.
Alec’s Expanded Vision
- Alec's broader aim involves creating a universal mediator to enhance communication efficiency in society, tackle misinformation, and bridge strategic societal divisions.
- His commitment to a non-interventionist and transparent approach.
- He speculates on scaling the project with philanthropic funding.
Potential Applications and Ambitions
- The concept of an AI politician and organizing collective actions through a chatbot.
- A tool that could mediate conversations, reflecting multiple viewpoints for better societal understanding.
Ethical and Technical Considerations
Both researchers deliberate on the importance of being transparent with users that they are conversing with a bot, suspecting that such openness might improve the quality of conversations.
Ethical Justifications
- The ethics of engaging with users, especially vulnerable populations, and the argument for universally respectful engagement.
- The process of obtaining consent, as per the Institutional Review Board (IRB) protocols, via Reddit messages and linking to an informational page.
Technical Aspects
- Discussion of the underlying technology, with Jeremy referencing GPT-3, and the importance of aligning the bot to ensure safe and kind responses.
- Alec's emphasis on the technical knowledge required for training LLMs and the potential use of open-source alternatives that could run on personal computing resources.
Social Impact and Theoretical Foundations
The discussion also delves into the social and theoretical implications of using LLMs in online communities, referencing social capital theory and the potential for chatbots to mediate discussions that could lead to greater social cohesion.
Democratic Ideal and Simplified Solutions
- The conversation touches on the democratic potential of enabling discussions between individuals with opposing views, and the possibility of chatbots mediating these exchanges to prevent devolution into partisanship.
Further Details
Evaluating Outcomes and Measurements
- Jeremy shares insights into behavior-change metrics, estimating that around 10% of the users they reach out to agree to interact with the bot, and that of these, 70% engage to some degree.
Experiment Design and Reddit's Role
- The design of experiments to engage Reddit users, considering reaching out to Reddit users or moderators for participation without necessarily needing permission from Reddit itself.
Closing the Conversation
The conversation concludes with plans for Alec to submit his IRB proposal and for Jeremy to introduce Alec to a colleague interested in mediation using LLMs. They express a mutual interest in exchanging IRB materials and continuing their discussion on potential collaborations.
Potpourri of Relevant Facts
This section compiles specific facts and figures mentioned during the conversation that provide further depth and clarity to the projects and visions discussed by Alec and Jeremy.
- Jeremy's project interaction rates indicate that approximately 10% of users they reach out to agree to interact with the bot. Of these interactions, about half are considered to be in good faith, with 70% engaging to some degree and a third displaying bad faith from the start.
- Jeremy references the use of GPT-3 for his chatbot, but later clarifies that they are using GPT-3.5, which has undergone alignment to ensure safe responses.
- Alec is considering using open-source alternatives to GPT-3 that could run efficiently on a Mac, suggesting a hands-on approach to prototyping and potential deployment.
- Jeremy has contemplated investing in GPU resources to facilitate his project's technological needs.
- Alec and Jeremy discuss the possibility of Alec collaborating with Josh Becker from UCL, a researcher with expertise in mediation, who has also shown interest in using LLMs in this context.
- Alec is preparing to submit an IRB proposal and is considering how to structure the consent process, particularly for vulnerable populations.
- Jeremy explains their approach to obtaining consent for their Reddit study through direct messages that link to an informative page, and they ensure users are aware that participation is limited to those over the age of 18.
- Alec’s motivation for his project was further reinforced after reading "Dark Money" by Jane Mayer, which details the influence of the Koch brothers on American politics.
- Alec and Jeremy contemplate the potential of LLMs like GPT-3.5 in the facilitation of insightful conversations and their significant contributions to the social sciences.
- They also discuss the logistical challenges involved in real-time voice generation for chatbots and the possibility of defaulting to text messaging if the voice proves too complex.
The facts presented in this section not only contribute to the granularity of the projects discussed but also highlight the nuanced considerations taken by both researchers in their pursuit of innovative applications for chatbots and LLMs in enhancing online community interactions and societal communication.
Links
- Jeremy's Homepage
- Joshua Becker's Homepage
- My thought process after this talk...
Belief Change Megastudy (Jan Voelkel et al. 2023)
The paper presents a comprehensive analysis based on a large-scale study (n=32,059) exploring strategies to reduce partisan animosity and anti-democratic attitudes in the United States.
Key Findings
- Partisan Animosity Reduction: The study found that interventions emphasizing sympathetic individuals with different political beliefs or highlighting common cross-partisan identities were highly effective in reducing partisan animosity.
- Impact on Anti-Democratic Attitudes: Interventions aimed at correcting misperceptions about outpartisans' views significantly decreased support for undemocratic practices and partisan violence.
- Effective Strategies for Partisan Animosity: Strategies involving relatable, sympathetic outpartisans and common identities across parties showed the highest effectiveness.
- Effective Strategies for Undemocratic Practices and Partisan Violence: Interventions correcting exaggerated stereotypes of other party supporters and highlighting the consequences of democratic collapse were effective. Additionally, showing endorsements of democratic principles by political elites also yielded positive results.
Psychological Dimensions of Political Attitudes
- The study reveals that partisan animosity and support for undemocratic practices/partisan violence are distinct dimensions, necessitating tailored approaches for interventions.
Characteristics of Effective Interventions
- Explicitly addressing partisan animosity and undemocratic practices in interventions proved more effective.
- Multifactorial interventions, which employed multiple theoretical mechanisms, were generally more effective than unifactorial ones.
- Higher production quality and engaging content also contributed to the effectiveness of interventions.
Broader Applications
- The findings offer insights for applications on websites and social media platforms to address partisan division and anti-democratic attitudes.
- The study provides a foundation for future research and the development of interventions aimed at strengthening democratic attitudes in the U.S.
These insights provide valuable approaches for addressing the growing partisan divide and threats to democratic principles in America.
Implications
The findings from the "Megastudy identifying effective interventions to strengthen Americans’ democratic attitudes" offer profound implications for the logistics, utility, and impact of chatbots in fostering intelligent social action:
- Empathy and Understanding in Chatbots: The success of interventions that highlight sympathetic individuals with different political beliefs underscores the importance of empathy and understanding in chatbots. Chatbots, equipped with NLU and LLMs like ChatGPT, can be programmed to represent diverse perspectives and facilitate conversations that foster empathy among users. They can simulate dialogues where users experience differing viewpoints, helping to reduce partisan animosity.
- Correcting Misperceptions through Information: The effectiveness of interventions aimed at correcting misperceptions about outpartisans’ views highlights a crucial role for chatbots. They can serve as fact-checkers and sources of unbiased information, addressing false beliefs and stereotypes in real-time during conversations. This function is particularly vital in reducing support for undemocratic practices and partisan violence.
- Tailored Intervention Strategies: Recognizing the distinct dimensions of partisan animosity and support for undemocratic practices, chatbots can be designed to deliver tailored interventions. They can adapt their responses based on the user’s attitudes, providing more personalized and effective engagement.
- Multifactorial Intervention and Engagement: The success of multifactorial interventions suggests that chatbots should integrate various strategies – such as providing information, simulating empathetic conversations, and highlighting commonalities – to be more effective. Additionally, engaging content and high production quality are crucial for chatbot interfaces to ensure user retention and effective communication.
- Broader Social Impact: Chatbots have the potential to be deployed on a large scale across various platforms, including social media and websites. They can serve as tools for reducing partisan division and strengthening democratic attitudes, reaching a wide audience and facilitating social change.
- Continuous Learning and Improvement: Given the rapidly evolving nature of social attitudes and political dynamics, chatbots must be equipped with learning algorithms to continually update their strategies and information. This adaptability ensures they remain effective and relevant.
- Ethical Considerations and Responsibility: The deployment of chatbots for social change must be guided by ethical considerations, ensuring they promote healthy democratic values and do not reinforce biases or misinformation. Responsible design and deployment are key to maximizing their positive impact.
In summary, the insights from the megastudy offer valuable guidance for the development of chatbots as tools for social change. By leveraging NLU and LLM capabilities, chatbots can be powerful allies in reducing partisan animosity, correcting misperceptions, and fostering a more empathetic and informed society.
- ↑ Hadi et al. (2023). Large Language Models: A Comprehensive Survey of its Applications, Challenges, Limitations, and Future Prospects. TechRxiv, full download available online
- ↑ "Introducing text and code embeddings". OpenAI. Retrieved 2023-11-07.
- ↑ "GPU vs CPU benchmark". Spark NLP. Retrieved 2023-11-07.
- ↑ MTEB
- ↑ llmrails
- ↑ Awesome Vector Search
- ↑ Milvus homepage
- ↑ Elastic
- ↑ "Efficient serving". TensorFlow Recommenders. Retrieved 2023-11-07.
- ↑ The New York Times: https://www.nytimes.com/2019/01/25/technology/facebook-instagram-whatsapp-messenger.html
- ↑ The Verge: https://www.theverge.com/2019/1/25/18197628/facebook-messenger-whatsapp-instagram-integration-encryption
- ↑ Wired: https://www.wired.com/story/facebook-plans-unite-messaging-apps/
- ↑ The Verge: https://www.theverge.com/2023/3/24/23655688/eu-digital-markets-act-messaging-interoperability-meta-whatsapp-imessage
- ↑ Reuters: https://www.reuters.com/technology/eu-rules-force-meta-open-up-messaging-apps-2023-03-24/