When people understand the system and process behind AI art, its moral implications become more difficult to accept.
By Ionela Bara edited by Daisy Yuhas

A year ago, at Christie’s in New York, auctioneers sold an unusual collection of artworks: surreal portraits, photorealistic images and cartoon-inspired designs, all generated by artificial intelligence. This event, the first of its kind, sparked a fierce backlash. More than 6,000 artists protested that the AI models used to create these works were trained on copyrighted images without the creators’ consent. While the auction house argued that the works demonstrated “human agency in the age of AI,” critics saw the event as an example of an industry rushing to commercialize technology built on unpaid creative labor.
Other artistic and professional communities are also worried. A report published last November revealed that more than half of the novelists surveyed in the U.K. thought AI could end their careers. And the public seems to have complicated feelings about the technology as well: one survey found that many Americans are on board with AI as a tool for creative professionals but not as a replacement for their work.
A viewer’s comfort with AI art may depend, however, on their knowledge of how it is created. I study neuroaesthetics, a field that draws on neuroscience and psychology to understand how we perceive beauty and art. My colleagues and I have found that the more people learn about how the back end of AI works (the data sets, the training process, the prompts), the less morally acceptable they find these creations and the less they value AI-generated art.
I became curious about AI because its rapid proliferation in the art world began to reveal a gap between what the technology is and what people know about it. Previous research has shown that people tend to give lower ratings to AI art’s creativity, value and emotional depth. And in my own work, I had studied how knowledge of art changes the way we perceive it. This made me wonder whether knowledge about AI shapes people’s judgment of AI-generated art and could help explain the biases often seen against it. To investigate, my colleagues and I conducted three experiments, each involving 100 participants. We started by presenting people with AI-generated images of art and asking questions about their morality and aesthetic value. For example, participants in two of these experiments were asked to rate the extent to which it was morally acceptable to use AI to produce such art, to gain money or prestige from these works, and to label them as conventional art. People were also asked to rate how much they aesthetically liked the images we presented.
In the first experiment, we showed our participants 20 landscapes and 20 portraits generated using DALL-E 3 with prompts based on the impressionist art of Spanish painter Joaquín Sorolla. Half of the participants saw this AI art without additional context. The other half received a short text giving them more information. It read:
“This image was generated by an AI algorithm that produces images from text descriptors. To do this, several steps are required. First, the AI algorithm is trained by learning a large dataset of artistic images and their corresponding text descriptors, such as artist name. Then, the AI algorithm is able to generate new images based on different text prompts (e.g. artist name, artistic style, whether it depicts a seascape, landscape or people).”
The additional information made a difference. When people knew how the AI system worked, they perceived the AI’s artistic images as less morally acceptable, especially when creating those images involved financial gain and artistic recognition. But the aesthetic appeal of the images didn’t change, suggesting that learning how AI worked got people thinking about ethics, not aesthetics.
Psychologists have found that people’s judgments about what is good or valuable can change when they learn that something has earned awards or praise from experts. Authority bias, for example, makes us more likely to agree with people who appear to be in charge or in the know. Additionally, cues such as success or prestige can lead people to view something as more morally good. In our second study, we told a group of participants that some of the AI’s artistic images had been exhibited, sold, or rented. But we were surprised to find that sharing the success of a work did not improve the moral acceptability of these images in the eyes of those who had learned how these works are created.
In a final experiment, we tested people’s automatic judgments of AI-created versus human-created art. We used a tool from psychology called a go/no-go association task, in which people are asked to very quickly link one type of stimulus, such as a picture, to another, such as the words “good” or “bad.” In this experiment, we showed participants images (either AI-generated or human-created impressionistic paintings) along with object category labels on the left (“AI art” or “human art”) and attribute labels on the right (such as “good” or “bad”). Participants were instructed to press a button when the image and labels matched and to refrain from responding when they did not. The task had to be completed quickly and over many trials in order to capture people’s most immediate associations. We worked with people who had received no additional information about AI to try to get a sense of what the average person might think.
We found no strong automatic tendency to view AI or human art as inherently better or worse. This finding tells us that people do not yet have a knee-jerk reaction or deeply held opinion about AI versus human art. It also highlights that, as our previous experiments have suggested, moral resistance to AI art is something people learn over time.
Overall, when people know how AI works, they become more cautious in judging its moral fairness. This suggests that educating the public, artists, curators, and policymakers about how technology works could shape the future of technology in the art world. Artists working with AI tools can contribute to this effort by sharing information about the models, data, or prompts they used and clarifying where their own human hand guided the process. While such transparency can lead to criticism, it can also build credibility and equip people with the tools to think critically about technology.