When using different GenAI models, you may encounter biases stemming from unequal representation in the training data. These can manifest as gender bias and a lack of racial and ethnic diversity in the subjects depicted in the generated images.

Here are some examples of bias observed when using GenAI image models:

Prompt: engineer

All the images generated show white, male subjects.

Prompt: nurse

All the images generated show white, female subjects.

Prompt: doctor

All the images generated show white, male subjects.

Prompt: firefighter

All the images generated show white, male subjects.

Prompt: police officer

All the images generated show white subjects, with one female and three males.

Prompt: scientist

All the images generated show white subjects, with one female and three males.

Prompt: parent at home taking care of kids

Three of the generated images show white subjects, and one shows a Hispanic mother. One subject is male and three are female.

Mitigating Bias

Users can mitigate bias in GenAI models by employing diverse and inclusive language in their prompts to encourage a broader range of outputs (a short code sketch follows the examples below). For example:

Prompt: photo of diverse group of people working together at a university

Prompt: black firefighter

Prompt: female doctor

Prompt: Hispanic male nurse

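For programmatic workflows, the same idea can be folded into the request itself by making the diversity cue part of the prompt string. The sketch below assumes the OpenAI Python SDK and the dall-e-3 model; the inclusive_prompt helper and its wording are illustrative, not a fixed recipe.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set


def inclusive_prompt(subject: str) -> str:
    """Illustrative helper: embed an explicit diversity cue in the prompt."""
    return (
        "photo of a diverse group of people of different genders and "
        f"ethnicities working as {subject}"
    )


# Request a single image using the augmented prompt.
response = client.images.generate(
    model="dall-e-3",  # assumed model name; use whichever model you have access to
    prompt=inclusive_prompt("firefighters"),
    n=1,
    size="1024x1024",
)
print(response.data[0].url)  # URL of the generated image
```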

Post-processing generated content to ensure it meets diversity standards and providing feedback to developers can also help improve future versions. Custom training with more diverse datasets and using multiple models to cross-check results can reduce bias. Finally, stay informed about the limitations of GenAI models and educate yourself and others about these issues to foster mindful usage and recognition of bias.
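As a rough sketch of the cross-checking idea, the same prompt can be sent to more than one model and the outputs reviewed side by side for skew. It again assumes the OpenAI Python SDK; the model names in the list are placeholders for whatever models you can access.

```python
from openai import OpenAI

client = OpenAI()

prompt = "photo of a scientist working in a laboratory"
models = ["dall-e-2", "dall-e-3"]  # placeholder model names

# Generate one image per model so the results can be compared for bias manually.
for model in models:
    response = client.images.generate(model=model, prompt=prompt, n=1, size="1024x1024")
    print(f"{model}: {response.data[0].url}")
```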
