Wikibooks:Artificial Intelligence
This page contains a draft proposal for a Wikibooks policy or guideline. Discuss changes to this draft at the discussion page. Through consensus, this draft could become an official Wikibooks policy or guideline.
The following draft policy outlines the Wikibooks community's perspective on the use of artificial intelligence-generated content on this site.
Text generation
Large language models (LLMs), often referred to as "AI chatbots" or simply "AI", can be beneficial. However, like human-generated text, machine-generated text can contain errors or flaws, or be entirely useless. In particular, asking a language model to write a book or an essay can produce complete fabrications, including fictitious references. The output may be biased, libel living people, infringe on copyrights, or simply be of poor quality. As such, LLMs may not be used to generate original material or ideas at Wikibooks, and the sources they cite should not be blindly trusted.
Because they are simply language models, LLMs may only be used to assist with editing, adhering to the following guidelines:
- You may use LLMs only as writing advisors, for example by asking how to improve a paragraph stylistically, and even this use should be minimized or avoided. Be aware that the "choices" made by an LLM may change the meaning of the content (which is prohibited, as described above) and/or make the writing worse. Exercise due diligence and common sense when deciding whether to incorporate an LLM's suggestions. Understand also that other editors may disagree with the changes and may expect justification beyond the fact that an LLM suggested them. If asked, you must be able to explain and justify every change made with the help of an LLM.
- You are ultimately responsible for the final product resulting from your use of an LLM. LLMs should not be used for tasks with which the editor does not have substantial familiarity, and their outputs should be rigorously scrutinized for both quality and compliance with all applicable policies. Editors who are not fully aware of the risks associated with LLMs and who are not able to overcome the limitations of these tools may not use them. Repeated improper use of LLMs may result in suspension of editing privileges.
- You must document your use of an LLM and its purpose in the edit summary and on the talk page (see disclosure section below).
Translation
LLMs may not be used for translation of content. Please see Wikibooks:Content translation for further information.
Media
Most AI tools can create media, particularly images, from prompts. If you are interested in uploading such media, be aware of the relevant licensing policies at our sister project Wikimedia Commons or of our local policy on images, depending on where you are uploading it.
Required disclosure
All content made with the help of an LLM must be explicitly marked as such in both the edit summary and on the page's discussion page. The following information must be provided:
- The date of generation/addition
- The tool and tool version used (e.g. Gemini, ChatGPT, Midjourney)
- The prompt(s) fed into the tool (e.g. "suggest how to improve this text")
This applies to every instance of using AI content: if you create new prompts and incorporate their output into a page multiple times, each instance must be documented, including on the talk page. Any user who bases content on artificial intelligence suggestions without including this explicit notice is subject to warnings and subsequent blocking.
Detection and enforcement
As of this policy's creation, there are no reliable, high-quality tools capable of detecting AI-generated material. Instead, editors will have to watch for the usual quality issues in contributed content, such as:
- Illogical or meaningless sentences
- Word changes that inappropriately change the meaning of a sentence
- Citations or sources that do not match a claim
- Low-quality or flawed images
Copyright violation detectors (e.g. Earwig's Copyvio Detector) can help identify text copied verbatim from online sources.
Because generative AI use is difficult to identify definitively, editors who introduce the above issues without obvious AI use should first be referred to other applicable policies, such as Wikibooks:Copyrights. It is also best to engage in good-faith discussion first to determine and resolve the cause of the problematic content. If good-faith discussion and guidance fail, or if a blatant violation of this policy is found, problematic editors may be subject to warnings and subsequent editing restrictions.
Policy updates
Because the field of widely accessible "AI" and generative models is still young, this policy may need to change over time to best serve the project. When needed, updates should be proposed and discussed on this policy's talk page.