Mary Strain ’92 is challenging the idea that Big Tech is only for engineers. A former Fordham history major who spent a decade teaching middle schoolers in the Bronx, Strain now leads the AI strategy team for U.S. education at Amazon Web Services. Her mission? Help institutions leverage tech to drive equity and access for more students.
She sat down with Fordham Magazine to discuss why a Jesuit education is ideal for the AI age and why a chatbot should never be the “person” in the room.
How did your experiences at Fordham and teaching in the Bronx lead you to the intersection of technology and learning?
At Fordham, there was this Jesuit ethos around learning how to learn. When I tell people what I do now, they’re often shocked that I have absolutely no formal technical credentials. Everything I’ve learned about technology, I’ve learned through work.
When I graduated, I wanted to make an impact—to do something good. So, I taught middle school in the Bronx for 10 years. Technology was a complete mystery to my students. It was something they consumed rather than something they felt empowered to steward. The liberal arts gave me the curiosity and then the mission to work with those children and to try to create opportunities for people through technology.
People need to know how the sausage is made
How do you define responsible AI, and what does it look like in practice?
It’s paying attention to the outcomes: What are the potential unintended consequences? And continuing to center and elevate the human experience.
I am an optimist in the sense that I think AI gives us the opportunity to leapfrog and exponentially improve our lives and our world, but we need transparency. People need to know how the sausage is made—what data is being used and who’s in the room when that happens.
Don’t put a chatbot on a terrible process
What do you think is the biggest challenge to overcome for generative AI to truly benefit the education sector?
Fundamentally, we need to be thoughtful about not anthropomorphizing AI. It’s not another person. We can’t give it too much credence; we still need to have a human engaged. We don’t want to let it do the thinking for us.
You sort of have to say, “Don’t put a chatbot on a terrible process.” We have an opportunity to reengineer how organizations work. On a larger level, we need a global campaign to help people understand the potential benefits and risks associated with the technology.
‘Boring AI’ can help create access and agency
What is the most transformational work you’re doing right now to improve access and equity?
The most transformational work that I’m doing with a lot of institutions is not in the classroom. It’s in the financial aid process. I call it “boring AI”: looking at the operational processes that are incredible points of friction for students and families.
We’re automating transcript processing, so a process that would have taken weeks now takes less than a day. We also created a tracking system so students can actually see where their financial aid package is in the process. This reduced calls to the office and gave students a sense of agency over a process that had been entirely opaque and arcane—and a huge barrier, especially for students who are first-gen or don’t have the resources. We can create access and agency by rethinking these institutional processes.
Use AI tools to ‘amplify your own excellence, not your mediocrity’
What career advice do you have for students preparing for a world fundamentally changed by AI?
Number one, learn how to learn—and show that you have stretched yourself beyond the traditional pathways.
Critical thinking, problem-solving, and the ability to write and communicate are huge. People say, “Oh, ChatGPT will write essays for you.” But it won’t have conversations for you with a customer or a leader. AI literacy and data literacy are essential, but you should use these tools to amplify your own excellence, not your mediocrity.
This interview has been edited for length and clarity.
Learn more about AI for the Greater Good at Fordham.
