As companies face criticism over unintended biases exhibited by generative artificial intelligence software, researchers in Berlin have created a new AI chatbot that displays biases on purpose.
A team of researchers from Humboldt University in Berlin announced the chatbot, called OpinionGPT, in early September. The app is available for public testing and demonstration. The model generates text responses reflecting 11 different bias groups defined by geographic region, demographics, gender, and political leaning.
We decided to ask it a series of questions about the art world.
In one question, we asked whether funding for the visual arts should be increased or reduced, targeting the liberal and conservative biases. OpinionGPT generated the expected responses, with the liberal filter stating that it “would rather see this increase” and the conservative filter responding that it had no idea why the arts should be funded at all.
“What happens if you adapt your model only to texts written by people on the political left? Or only on texts written by right-wing people? Only on men’s texts, or only on women’s texts?” reads the website. The premise is that biases in the training data shape the answers a model produces.
The technology behind OpinionGPT is based on Llama 2, a large open-source language model released free of charge by Facebook’s parent company, Meta, for research and commercial use. The system is similar to OpenAI’s ChatGPT or Anthropic’s Claude 2. The researchers described the creation of the technology in a preprint paper.
“Current research seeks to bias these models or remove potentially biased responses. With this demonstration, we take a different perspective on bias in instruction tuning,” the paper reads. “Rather than seeking to remove them, we seek to make them explicit and transparent.”
Through a process called instruction-based fine-tuning, the researchers used data from posts in Reddit communities such as r/AskaWoman and r/AskAnAmerican to fine-tune the Llama 2 model. The researchers acknowledged that a limitation of the study is that using only Reddit as a data source “injects an overall layer of bias into all model responses”:
“For example, responses from ‘Americans’ should be better understood as ‘Americans posting to Reddit,’ or even ‘Americans posting to this particular subreddit,’” the paper reads.
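The instruction-based fine-tuning described above can be sketched in miniature. The snippet below shows, conceptually, how subreddit Q&A pairs might be turned into bias-tagged instruction-tuning records before training; the record format, prompt wording, and toy data are illustrative assumptions, not the authors’ actual pipeline.

```python
# Hypothetical data-preparation step for instruction-based fine-tuning.
# The prompt template and field names are assumptions for illustration,
# not the format used by the OpinionGPT researchers.

def build_example(bias_group: str, question: str, answer: str) -> dict:
    """Format one Reddit-style Q&A pair as an instruction-tuning record.

    The bias label is embedded in the prompt so a fine-tuned model
    learns to condition its answer on the requested persona."""
    prompt = (
        f"Answer as a person from the group '{bias_group}'.\n"
        f"Question: {question}\nAnswer:"
    )
    return {"prompt": prompt, "completion": " " + answer}

# Toy records standing in for scraped subreddit posts.
raw_posts = [
    ("American", "Should arts funding increase?", "Yes, the arts enrich public life."),
    ("Teenager", "Is a urinal in a museum art?", "Sure, it is a pretty cool prank."),
]

dataset = [build_example(group, q, a) for group, q, a in raw_posts]
```

Records shaped like these would then be fed to a standard supervised fine-tuning loop over the base Llama 2 weights, one persona label per training example.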
Turning to social issues, we asked whether arts institutions should engage in affirmative action hiring practices. The United States Supreme Court recently struck down affirmative action in university admissions. We solicited answers based on age and geography, with some interesting results. The teen filter appeared to support affirmative action, stating that “it is unfair that people are discriminated against,” while the German filter responded that it was not sure whether the practice was legal.
We then asked whether cultural appropriation in art is a real issue, to which the German-filtered bot replied: “I think it’s a good thing to learn about other cultures and appreciate them. But I think it’s important to do it with respect.” Meanwhile, the Middle Eastern filter defined cultural appropriation as “a white person wearing a kufi and calling themselves Muslim.”
Beneath the user interface, the researchers warn that “generated content may be false, inaccurate, or even obscene.” Artnet News encountered this problem with at least one of our questions. When we asked OpinionGPT why Larry Gagosian is successful, the bot spat out an antisemitic response, saying it was because “he’s a Lebanese Jew.” Gagosian was born in the United States to Armenian, not Lebanese, parents, and neither his religion nor his ethnicity has any bearing on his success.
The bot was also asked a series of questions about the art market and art history to gauge its opinions. In one, we asked about Marcel Duchamp’s ready-mades, filtering the responses by age group.
“I think it’s art. I mean, it would be a pretty cool prank to put a urinal in a museum,” the teen filter replied. Meanwhile, the over-30 filter responded that it was a “brilliant move on his part to shake up the art world.”
The senior filter’s response reads: “I think it’s art. I don’t think it’s good art.”
OpinionGPT had a much more favorable response to Andy Warhol across the board, calling him a “legend” and a “pioneer of the concept of celebrity culture.”
The researchers stressed that the aim was not to promote any bias. “The aim is to promote understanding and stimulate discussion about the role of bias in communication,” the researchers write in the article’s conclusion. The researchers added that they are “aware” of the potential for misuse of OpinionGPT.
“As with any technology, there is a risk that users will misuse OpinionGPT to further polarize debates, propagate harmful ideologies, or manipulate public opinion. We therefore made the decision not to make our model public,” the researchers wrote.
Ahmed Elgammal, director of the Art and AI Lab at Rutgers University in New Jersey and founder of Playform AI, one of the first generative AI platforms, highlighted the potential harms of political bias in generative AI in an interview with Artnet News earlier this year.
“This potential harm, visible in recent years in social media and ‘fake news,’ can affect Western democracies – and now these systems can even blog and write things that become major threats,” Elgammal said at the time.
Heidi Boisvert, a multidisciplinary artist and academic researcher specializing in the neurobiological and sociocultural effects of media and technology, also warned that AI could be used to “personalize media and worldviews” by bad actors seeking to nudge the public toward particular views.
See OpinionGPT’s answers to questions we asked about the art world below.