As part of a new series, A Fordham Focus on AI, we’re talking with Fordham experts from a range of disciplines about the technology that seems to be affecting everything, everywhere, all at once. 

In our first installment, we sat down with the Gabelli School’s Navid Asgari, PhD. Asgari holds the Grose Family Endowed Chair in Business and teaches classes such as Generative AI for Managers and Navigating AI Disruption. His research focuses on how AI is affecting teams in the workplace and organizational structures in the health care and pharmaceutical industries. In his work, he uses AI tools to conduct statistical analysis.

What are you investigating through your research into teams and AI?

In software development, there is evidence that AI is helping individuals, but my hypothesis is that it hurts coordination between individuals on teams. When you're working in a group, you need to know what you don't know and what others don't know so that you can adjust your expectations and interaction patterns. When AI comes along, those boundaries are blurred, in what I call an epistemic disturbance. When that disturbance happens, coordination within teams is going to suffer, at least in the short term.

What are some of the concerns you have about AI being used in the health care industry?

If you’re a doctor working in a hospital, you can save a lot of time using AI for transcriptions. But what would you do with the extra time you gained? Would you spend it on current patients? Before, you would see a patient for seven or eight minutes; now you can see them for 15 minutes. That’s one way of using that saved time. The other way is to see more patients. You used to see 20 patients; now you see 30. 

Whichever choice you make, you must design the right structure for how doctors are incentivized. That requires active leadership from a manager who does not just use AI like a plug-and-play tool. Technological innovation is important, but what I call organizational innovation is also important.

Are people placing too much faith in AI at work?

I think people are beginning to realize that AI has limits. AI is fantastic if the job is next-word prediction, but most human tasks, particularly when it comes to writing, involve some sort of reasoning and causal analysis. AI cannot do that.

Sometimes AI does something like write an essay, which makes it seem like it "thinks." It doesn't think at all. There are neuroscientists who believe that AI is less intelligent than a cat. So why does it look smart? Well, its responses are not always the same, which gives you the illusion of intelligence. The other reason is that many of the inferences it produces can be generated through simple auto-completion, next-word prediction.

What is one thing you want people to know about AI?

We often think about our work in terms of tasks, not jobs. Think about a programmer writing code—that’s a task. But a programmer working in an organization does a variety of tasks that are related to one another in a very complex manner—that’s the essence of a job. 

You can replace every one of the individual tasks that a person does with some sort of AI, but putting them together into a job is a completely different challenge. When we’re doing our job, we’re more than just the sum of the tasks that we do.

Learn more about AI for the greater good at Fordham.


Patrick Verel is a news producer for Fordham Now. He can be reached at [email protected] or (212) 636-7790.