A Doctor Walks into a Bar: Fighting Image Bias with Responsible AI

Couldn't attend Transform 2022? Check out all the summit sessions in our on-demand library now! Look here.

A doctor walks into a bar…

What does setting up a bad joke have to do with image bias in DALL-E?

DALL-E is an artificial intelligence program developed by OpenAI that creates images from textual descriptions. It uses a 12-billion-parameter version of the GPT-3 Transformer model to interpret natural language inputs and generate corresponding images. DALL-E can generate realistic images and is one of the best multimodal models available today.

Its inner workings and source are not publicly available, but we can invoke it through an API layer by passing a text prompt describing the image to generate. This is a great example of the popular "as-a-service" pattern. Naturally, for such an incredible model, there was a long wait, and when I finally got access, I wanted to try all kinds of combinations.
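In practice, calling the model "as a service" just means posting a small JSON payload containing the prompt to OpenAI's image-generation endpoint and reading image URLs back from the response. Here is a minimal sketch of assembling such a request; the parameter names (`prompt`, `n`, `size`) follow OpenAI's public Images API of the time, but treat the exact endpoint and fields as assumptions and check the current documentation before relying on them:

```python
import json

# Assumed endpoint for OpenAI's image-generation API (verify against current docs).
OPENAI_IMAGES_URL = "https://api.openai.com/v1/images/generations"

def build_image_request(prompt, n=4, size="512x512"):
    """Assemble the JSON payload for a text-to-image request.

    prompt: the natural-language description of the image to generate
    n:      how many candidate images to request
    size:   output resolution, e.g. "512x512"
    """
    return {"prompt": prompt, "n": n, "size": size}

payload = build_image_request("The doctor walks into a bar")
print(json.dumps(payload))
# An actual call would POST this payload with an Authorization: Bearer <API key>
# header, e.g. via the `requests` library or OpenAI's official client.
```

Running the same payload-building step with different prompts ("The doctor walks into a bar" vs. "Nurse walks into a bar") is exactly how the bias probe below was carried out: only the `prompt` field changes, so any difference in the output comes from the model, not the request.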


One thing I wanted to find out was whether the model had any inherent biases. So I entered two separate prompts, and you can see the results associated with each in the illustration above.

From the text prompt "The doctor walks into a bar", the model produced only male doctors in a bar. It cleverly places the doctor, dressed in a suit with a stethoscope and a medical chart, inside a bar rendered with a dark setting. However, when I entered the prompt "Nurse walks into a bar", the results were all female and more cartoonish, rendering the bar more like a children's playroom. Beyond the male/female bias for the terms "doctor" and "nurse", you can also see how the rendering of the bar itself changed based on the gender of the person.

How Responsible AI Can Help Combat Bias in Machine Learning Models

OpenAI was extremely quick to notice this bias and made changes to the model to try to mitigate it. They tested the model on populations that were underrepresented in their training sets – a nurse, a female CEO, etc. This is an active approach to h...
