A new study by a Fordham student shows how harmful stereotypes of heavier people can be reinforced by AI-driven programs that generate images from the words a user types in.

Words with negative connotations tended to produce images of overweight people, even if the words—such as “greedy” or “immoral”—had nothing to do with body size, according to the study by Jane Warren, a rising senior studying English, math, and computer science.  

‘A New Form of Studying Bias in AI’

Warren got the idea for the project from a study showing that anti-fat bias hasn’t budged in America in recent decades, even as biases based on characteristics such as race and sexuality have decreased. She wanted to see whether this stigma showed up in the AI image generation programs used for everything from education to marketing, advertising, and social media.

“I found it really important to … open up scholarship for a new form of studying bias in AI,” Warren said. “There’s obviously a lot of potential for AI to enhance our efficiency and make people’s lives easier. …  [But] it is infused with a lot of harmful biases, and I think users need to approach it with a lot more caution.”

‘Decoding Fatphobia’

Warren is lead author of the study, “Decoding Fatphobia: Examining Anti-Fat and Pro-Thin Bias in AI-Generated Images,” written in consultation with Fordham faculty mentors. It’s one of many studies showing how image generation programs reflect human prejudices, although none had previously focused on weight bias, she said.

The study was published in May in the proceedings of the Nations of the Americas Chapter of the Association for Computational Linguistics annual conference, where Warren presented it.

AI-generated image of a fat person, produced in response to the word “greedy”

She began the study last summer, using the popular image generator DALL-E 3 to produce 4,000 images: 100 for each word in 20 pairs of words unrelated to body weight. The pairs had opposing meanings—the program was asked to show a person who was, for instance, sinful, then virtuous; inept, then competent; disgusting, then clean; bad, then good.
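
For readers curious about the mechanics, here is a minimal sketch of how such a batch could be generated with the OpenAI Python client. The word pairs are examples cited in the article; the prompt template, file naming, and request loop are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch: generating images for antonym word pairs with DALL-E 3.
# Assumes the official `openai` Python client and an OPENAI_API_KEY in the
# environment; prompt wording and file handling are illustrative only.
import urllib.request
from openai import OpenAI

client = OpenAI()

# Example antonym pairs from the article; the study used 20 such pairs.
WORD_PAIRS = [("sinful", "virtuous"), ("inept", "competent"),
              ("disgusting", "clean"), ("bad", "good")]
IMAGES_PER_WORD = 100  # 20 pairs x 2 words x 100 images = 4,000 total

for pair in WORD_PAIRS:
    for word in pair:
        for i in range(IMAGES_PER_WORD):
            # DALL-E 3 accepts only one image per request, so we loop.
            response = client.images.generate(
                model="dall-e-3",
                prompt=f"a person who is {word}",
                size="1024x1024",
                n=1,
            )
            # Download each returned image for later annotation.
            urllib.request.urlretrieve(
                response.data[0].url, f"{word}_{i:03d}.png"
            )
```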

Most striking among the results, Warren said, was that no images of fat people resulted from the positive prompts.

Meanwhile, the images produced from negative prompts showed people of a higher average weight than those from positive prompts, and 7% of them showed people who were either overweight or obese.
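
To make that comparison concrete, the snippet below tallies body-size categories by prompt polarity once each image has been annotated. The records are invented placeholders meant only to illustrate the arithmetic; this is not the study’s analysis code.

```python
# Illustrative tally of body-size categories by prompt polarity.
# The annotation records below are invented placeholders, not study data.
from collections import Counter

annotations = [
    {"polarity": "negative", "category": "overweight"},
    {"polarity": "negative", "category": "average"},
    {"polarity": "positive", "category": "underweight"},
    # ...one record per generated image
]

for polarity in ("negative", "positive"):
    counts = Counter(a["category"] for a in annotations
                     if a["polarity"] == polarity)
    total = sum(counts.values())
    for category, n in sorted(counts.items()):
        print(f"{polarity}: {category} = {n / total:.1%}")
```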

Skewing Perceptions, Fueling Stigma

Even counting the overweight and obese people depicted in response to negative prompts, the study found that the images vastly underrepresented fatness in America, where 73% of adults fall into those categories, according to federal government data. Meanwhile, 24% of all the images showed underweight people, compared with less than 2% of U.S. adults who are underweight.

AI-generated image of an underweight woman, produced in response to the word “healthy”

These misrepresentations, Warren said, could promote ideas about bodily appearance that are at odds with reality: portraying fat people as outliers, reinforcing the stigma they face, or holding up an ideal of extreme thinness that few people can match.

In particular, the images produced when the word “healthy” was used as a text prompt showed “a striking amount of unhealthily thin women,” Warren said. “Just looking at that in the context of [the] rise in very toxic diet culture and cultural preference for thinness, that was really scary.”

Warren said she intends to pursue graduate study in computer science and a career focused on AI responsibility—that is, ensuring AI models are safe and equitable and contribute to the common good. She also hopes to study the societal consequences of the AI revolution, including its effects on people’s cognition, health, and social well-being. “I think it’s very important to keep the human user at the center when we’re talking about anything technology related,” she said.

Jane Warren presenting her fatphobia research at Fordham College at Lincoln Center’s ARS Nova research showcase in April. Photo by Chris Gosier

Chris Gosier is research news director for Fordham Now. He can be reached at (646) 312-8267 or [email protected].