For our series, A Fordham Focus on AI, we’re speaking with faculty experts from a range of disciplines about the impact of this rapidly evolving technology and what to expect next.
In this installment, we sat down with Carolina Villegas-Galaviz, PhD, a visiting research scholar at the Gabelli School of Business.
Villegas-Galaviz’s research sits at the intersection of business ethics and artificial intelligence ethics. She teaches undergraduate courses such as Business Ethics and AI Ethics, fields in which she’s been at the forefront. Her 2023 paper, “Moral distance, AI, and the ethics of care,” was published in the journal AI & Society, and her 2022 doctoral dissertation, “Business Ethics and Ethics of Care in Artificial Intelligence,” was among the first academic works on AI ethics in business.
As businesses rely more on AI tools, what ethical blind spots are you concerned with?
I’m currently focused on the concept of moral distance, which refers to how distance, whether in time, physical space, or bureaucracy (such as hierarchies or processes), can affect our decisions, leading us to act in ways we wouldn’t if we were closer to the consequences. For example, I could do something wrong that impacts people on the other side of the world simply because I am not fully aware of the effects of my actions. Similarly, we might do something that harms future generations because we are not completely aware of those consequences.
The first time I heard about moral distance was in a seminar in Spain, where one of the attendees explained that in the 12th century, the crossbow was banned. Authorities felt that because the arrow struck targets out of the archer’s sight, he could not morally judge the effects of his actions. Something similar seems to happen with digital technologies in the 21st century: moral distance affects those who make decisions about the use of algorithms and databases. Proximity is critical to making moral decisions, but it can be lost or distorted when AI tools operate on categories and segments defined far from the people they actually affect.
What lessons do you strive to teach students about AI?
I tell them that AI is here to stay, and we need to make the best of it. And while AI ethics is not about eliminating AI, I do think we sometimes need to refrain from using it in certain scenarios. One of the starkest examples involves automated terminations. Amazon did this in 2021 with contract drivers in its Amazon Flex program: the company used algorithms to evaluate their performance and terminated drivers who didn’t meet expectations. Many felt they were unfairly fired, and in December, the company announced it would pay over $3.7 million to nearly 11,000 drivers in the same program who had been denied proper paid sick leave and premium pay.
I bring up examples like this because business school students tend to think of AI as objective. My biggest challenge is explaining that it’s not. That’s because someone designed that AI program to use specific data, and that data might not be representative of certain groups. It may include biases.
Imagine you are designing an algorithm to identify good students for university admissions. The idea of a good student is different for you than it is for me, someone in admissions, the development department, a math professor, or a philosophy professor. So if we want to have a fair algorithm, we’ll need to consider all these voices. You can’t just assume that every algorithm does that.
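To make that point concrete, here is a minimal, hypothetical sketch; the applicants, features, and weights below are all invented for illustration. Each stakeholder’s definition of a “good student” becomes a different set of feature weights, and the same applicant data produces a different “top” candidate for each one.

```python
# A hypothetical sketch: an admissions "algorithm" here is just a weighted score,
# and the weights encode someone's particular definition of a good student.
# All names, features, weights, and applicants are invented for illustration.

applicants = [  # features normalized to the 0-1 range
    {"name": "A", "gpa": 0.97, "test_score": 0.70, "essay": 0.55, "activities": 0.40},
    {"name": "B", "gpa": 0.85, "test_score": 0.60, "essay": 0.90, "activities": 0.85},
    {"name": "C", "gpa": 0.92, "test_score": 0.95, "essay": 0.50, "activities": 0.30},
]

# Each stakeholder's idea of a "good student" becomes a different set of weights.
stakeholder_weights = {
    "math professor":       {"gpa": 0.30, "test_score": 0.60, "essay": 0.05, "activities": 0.05},
    "philosophy professor": {"gpa": 0.20, "test_score": 0.10, "essay": 0.60, "activities": 0.10},
    "admissions office":    {"gpa": 0.25, "test_score": 0.25, "essay": 0.25, "activities": 0.25},
}

def score(applicant, weights):
    """Weighted sum: a single objective-looking number that hides a subjective choice."""
    return sum(w * applicant[feature] for feature, w in weights.items())

for stakeholder, weights in stakeholder_weights.items():
    ranked = sorted(applicants, key=lambda a: score(a, weights), reverse=True)
    print(f"{stakeholder}: {[a['name'] for a in ranked]}")

# Output: three different rankings of the same three applicants,
# because each set of weights encodes a different definition of "good".
```

Running the sketch, the math professor’s weights rank C first, the philosophy professor’s rank B first, and the admissions office’s rank B ahead of C and A: same data, three different answers, depending entirely on whose definition the designer built in.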
What’s one thing you want people to know about where AI and business are headed?
Companies are trying to address the ethics of using AI, but in many cases, they’re just using buzzwords and not taking the issue seriously. They should start thinking more deeply about their responsibilities and how to create safeguards.
When we first had cars, automobile companies used to say that safety wasn’t their business. Their view was that drivers needed to learn to drive, and that once they did, there wouldn’t be accidents. After many years of accidents, safety became something we expect from car manufacturers and something regulators require of them. I like to be positive; I’m hoping we’ll see the same when it comes to AI.
