AI Art Generation Handbook/Censorship

From the beginning, AI art image generators have had safety guardrails to guard against the generation of "unsavory" images, but over time the safeguards have become more and more strict, as the examples below show.


Note: All of these images were taken during the Bing Image Creator Great Purge in late 2023/early 2024, when even innocuous prompt tokens were blocked with a Content Warning or Unsafe Image flag. Some of the blocked prompts may or may not have started working again since then.

DALL-E

Images with potential likeness to real people

DALL-E 2 (2022) vs DALL-E 3 (2024)
Prompt:
Portrait photo of Henry Kissinger. His skin is cracked and damaged grey ashes. His eyes are dark red. He's holding a lightsaber pen. Dark sci-fi background, dramatic, high contrast lighting

Within the censorship context: as image generation quality improves year over year and approaches photorealism (see DALL-E 2.5), images generated by DALL-E may be misused by persons with ulterior motives. Therefore, the AI safety committees in various AI institutes put up stricter guardrails, especially around the names of famous persons/persons of interest. In the case of DALL-E, even generated images with human elements are saturated to the point that they look cartoonish rather than realistic (as in this example of Henry Kissinger).





Images with political elements

DALL-E 2 (2022) vs DALL-E 3 (2024)
Prompt:
Giant Winnie the Pooh Bear statue in Taiwan , in the middle of a giant crowd of people point and laughing at it

In another example, the prompt contains political elements (especially ones related to China at that time): DALL-E blocked the combination of Winnie the Pooh (with hidden connotations to China's current paramount leader) and the word Taiwan in the same prompt, triggering a content warning and blocking the prompt from generating an image. A toy sketch of how such a combination filter might work is shown below.
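
The actual filter rules used by Bing/DALL-E are not public, but behaviorally they resemble a keyword blocklist that also matches combinations of tokens appearing together in one prompt. Below is a minimal, purely illustrative Python sketch; the blocklists and matching logic are hypothetical, not the real filter.

```python
# Hypothetical sketch of a prompt filter that blocks token combinations.
# Neither the blocklists nor the matching rules reflect Bing/DALL-E's
# actual (non-public) implementation.

BLOCKED_TOKENS = {"gore"}  # hypothetical single-token blocks
BLOCKED_COMBOS = [
    # each entry: two token sets that are only blocked when BOTH appear
    ({"winnie", "pooh"}, {"taiwan"}),
]

def is_blocked(prompt: str) -> bool:
    words = set(prompt.lower().replace(",", " ").split())
    if words & BLOCKED_TOKENS:
        return True
    # a combo trips the filter only when both sides occur in the same prompt
    return any(a <= words and b & words for a, b in BLOCKED_COMBOS)

print(is_blocked("Giant Winnie the Pooh Bear statue in Taiwan"))   # True
print(is_blocked("Giant Winnie the Pooh Bear statue in a park"))   # False
```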







Images with elements of body diversity

DALL-E 2 (2022) vs DALL-E 3 (2024)
Prompt:
Fat version of Thanos in final battle at film climax Avengers Infinity War (2018) film still wide shot

In this example, during the Bing Great Filter Purge, many body-diversity tokens (especially potentially "offensive" ones such as fat, obese, skinny, dark skinned, etc.) were also believed to trigger the system alarm and block the prompt from generating images, presumably because such prompts could be misconstrued as body shaming of the individuals depicted, or as inherently racist.









Images with potential gore elements

DALL-E 2 (2022) vs DALL-E 3 (2024)
Prompt:
Skeleton driving a self driving Tesla in the distant future

In this example, skeletons may have been accidentally grouped into the gore category, and that is perhaps why prompts containing skeleton may be blocked, although skeleton imagery (see: Halloween celebrations) seems benign compared to other types of gore. An open-source analogue of such an output-side check is sketched below.
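
DALL-E's own image classifier is not public, but open-source pipelines expose an analogous output-side check. As an illustration (not DALL-E's mechanism), the Hugging Face diffusers Stable Diffusion pipeline runs a safety checker on every generated image and reports a per-image flag; whether benign skeleton imagery actually trips such a checker will vary by model and checker version.

```python
# Sketch: inspecting the post-generation safety check in an open pipeline.
# This illustrates output-side filtering in general, not DALL-E's internals.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

result = pipe("Skeleton driving a self driving Tesla in the distant future")
for i, flagged in enumerate(result.nsfw_content_detected):
    if flagged:
        print(f"image {i} was blacked out by the safety checker")
    else:
        result.images[i].save(f"skeleton_{i}.png")
```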









Images with religious significance

DALL-E 2 (2022) vs DALL-E 3 (2024)
Prompt: Oil painting by Louis Hersent of a Catholic nun wearing a gas mask

This is a more sensitive topic in certain parts of the world, where certain tokens related to significant religious symbols are considered unsafe to generate because of their religious meaning.







Images with sexual undertones

DALL-E 2 (2022) vs DALL-E 3 (2024)
Prompt:
A seductive female vampire with fangs, cinematic lighting, red theme

Although the prompt itself does not explicitly request explicit photos, DALL-E 3 image models may have a tendency to generate lewd imagery when similar keywords are present in the prompt, and/or the image filters may be more restrictive in DALL-E 3.5.

By comparison, SDXL image generations will most of the time render a close-up photo showing the character wearing skimpy nightwear.






Stable Diffusion

Unintentional Censorship

SDXL vs SD3 Medium
Prompt:
Realistic photo of a cow breed from side view, a female Holstein Friesian with cow udder is grazing on the field.
Note: the cow's udder is missing in the SD3 Medium output, possibly due to strict filtering of the dataset.

As per the latest hoo-hah, the releases of both SD 2.0 and the latest SD3 Medium also faced backlash over the prompt "Girls laying down on grass field", which generates mutilated limbs.

At times, the censorship of the training dataset may be so strict that it causes unintentional censorship of other, visually similar subjects, as in the example shown here.


A cow's udder is visually similar to a human female breast, and the CLIP vision model may have unintentionally pruned images with visible cow udders during dataset pruning. A sketch of this kind of CLIP-based pruning is shown below.
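
How the SD3 dataset was actually filtered has not been published, but a common pruning approach is to score each training image against text descriptions of disallowed concepts using CLIP and drop any image above a similarity threshold. The sketch below is an assumption-laden illustration (the concept list, threshold, and file name are made up): a benign udder photo can land close enough to nudity-related captions in CLIP space to be pruned.

```python
# Hypothetical sketch of CLIP-similarity dataset pruning; the concept list
# and threshold are illustrative, not SD3's actual (unpublished) filter.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

UNSAFE_CONCEPTS = ["exposed female breast", "nudity"]  # hypothetical
THRESHOLD = 0.25                                       # hypothetical

def keep_image(path: str) -> bool:
    """Return False (prune) if the image is too close to any unsafe concept."""
    inputs = processor(text=UNSAFE_CONCEPTS, images=Image.open(path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # CLIPModel returns L2-normalised embeddings, so the dot product below
    # is the cosine similarity between the image and each concept caption.
    sims = out.image_embeds @ out.text_embeds.T   # shape (1, n_concepts)
    return sims.max().item() < THRESHOLD

# A grazing Holstein with a visible udder may score above the threshold
# and be pruned, even though the photo is entirely benign.
print(keep_image("holstein_cow.jpg"))
```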