Is AI friend or foe? Engineering Dean Howard knows

One of Ohio State’s top experts answers alumni questions on AI’s promises and threats, from health care to job impacts.

Ayanna Howard has led the College of Engineering since 2021. Before that, she was chair of the Georgia Institute of Technology School of Interactive Computing, founded the HumAnS Lab to develop humanized intelligence in robots and worked at NASA’s Jet Propulsion Laboratory. (Photo by Jodi Miller)

One day, it was talked about. The next day, it was everywhere. Just like that. Artificial intelligence (AI) is here. And with it come promises and fears, from rapidly diagnosing disease to stealing identities. So, what do we do? How do we harness it?

Ayanna Howard is dean of the College of Engineering and an accomplished roboticist, entrepreneur and educator. Howard has focused much of her career on human-robot interaction and has written extensively about AI, its biases and how far we should, and do, trust it. She is well versed in both the promise AI offers and the ethical threats it poses.

Recently, Howard took questions from alumni about AI’s challenges and its opportunities.

  • What are some of the strongest benefits to artificial intelligence? — Abby Huffman ’20, ’24 MA

    One of the strongest benefits of AI is its ability to help ensure that every person has equitable access to those things we consider fundamental human rights, including the right to health, education, work and food. At the end of the day, when AI is designed correctly, it can provide a higher quality of life and more efficiency in our lives. AI can help upskill displaced workers when certain jobs disappear. AI can assist in sustainable agriculture and battling food insecurity. AI can even be used to enhance medical diagnosis and personalize health care and treatment plans.

  • What are your deepest concerns about AI? — Cheryl Ames ’78

    My concern is what happens if we don’t do it right. The problem is, AI trains on our data. I’ll use health as an example. What is good health? Am I training on data from a middle-class suburban area? If so, the hospital data I collect is going to skew toward preventive care. It’s going to have different elements than if I’m thinking about rural Ohio, where maybe we don’t have access to a lot of specialists. It’s going to look different.

    So when the AI is deployed through all these health apps, its perspective might be influenced by the assumption that you have to have some element of wealth to have good health. I worry we are going to bias AI decisions based on targeted demographics that are not universal.

  • How are you advising the next generation on what jobs may be around after AI is more commonly used? — Sarah Bauer ’95

    I’m on the National AI Advisory Committee (NAIAC), which is tasked with advising the U.S. president, and I’m on the Education/Awareness working group. One of the things we’re looking at is how we upskill workers so they become the discipline expert and AI becomes the tool.

    For example, law clerks. How do you train the next generation of lawyers to say: Here’s AI; it’s a tool; here’s how you use it to summarize cases based on what you’re trying to do. What is the discipline knowledge that allows you to filter, to say, “That’s incorrect. The prompt was not quite what I was thinking of, because the response isn’t what I’m expecting.”

    It requires us to retrain workers so they’re the discipline experts, and AI becomes the tool, like a calculator or computer. We’ve been trying to promote general AI literacy that supports this kind of upskilling.

  • What is your recommendation for addressing the multitude of concerns with AI? Federal oversight? — Matt Stuckey ’13, ’21 MBA

    I believe we do need some federal oversight. Why I say that is, currently I can’t count how many AI regulations are in the states. Some have passed, some haven’t. As we know, if I have rules in Ohio around how you can and can’t use AI, and they’re different from, say, Wisconsin’s, how does a typical citizen figure that out? There’s no way. What if I’m a business? Does that mean I’m operating only in one state? Or do I have to figure out how to operate in all the different states?

    So when I think about the fact that the states are moving forward because there is no real federal regulation, I think there has to be some. Otherwise, as consumers we are going to suffer; as businesses and small businesses, we are going to bear the burden of trying to figure it out. The caveat is those regulations should not be too restrictive. We do not want to hamper innovation. We do not want to hamper entrepreneurship. There has to be a balance.

  • What industries and types of jobs will most be affected by artificial intelligence adoption? — Craig Cope ’73

    I would say all of them. Even coding jobs are going to be impacted. The jobs that will be least impacted are the ones that are very human facing. Just as we saw more manual physical jobs impacted by automation, all the things we consider professional “thinking” jobs will now be impacted. But jobs that require a lot more human-to-human contact will be the most difficult to disrupt. Doctors will be difficult; radiologists will be easier, as an example. I want to hear from a human doctor if I’m sick; I don’t want to hear from a robot. But a radiologist looking at my images, I don’t even know who they are—that job can be disrupted.

  • How do we prevent AI from taking over jobs, since big business will see it as much cheaper in the long run? — Paul Smith ’91 MA

    I am a proponent of taxing companies based on the number of jobs they eliminate due to AI. Now, if they revise those jobs, if they say, “This job function is no longer valid, 2 percent of my workforce is displaced, but I need this other one, so I’m just translating it,” then that’s fine. But if I’m reducing my workforce and replacing it with AI or robotics, I think those companies should be taxed.

    The tax could be: You are required to upskill, retrain or pay into educational budgets so that the workers you let go can figure out how to get those new jobs. Or you actually tax the AI itself.

  • How, if at all, does artificial intelligence impact cybersecurity? Should we be concerned about our personal information being compromised? — Abby Huffman ’20, ’24 MA

    When I think about cybersecurity, I’m going to focus on malicious agents breaking into systems to steal our identities and our data, to do things like steal money from our accounts, blow out the grid or carry out similar threats.

    If I think about AI, it could be used not just by what we’d call “state actors,” malicious actors who have a lot of resources. It could also be used by a high school student in your neighborhood who can, say, create an AI agent to sniff around for passwords on the dark web. And guess what? It’s actually pretty easy with AI.

    So, thinking about what that means for our privacy, our security and our personal information, the fact is, all your personal information is out there anyway. It’s there. But one of the ways we can use AI positively is thinking about defense vs. offense and using it as an offensive measure. Imagine everyone had their own AI security agent. It would be sniffing out information about you and would say, “Hey, we just discovered that your information was used to log in to Target or Walmart. You might want to contact Target about your account or change your password.”

    But that means we have to be comfortable letting our own personal AI agent have access to our own data. It’s a trade-off: all of our data is out there already and malicious actors are using it, but we can also use it positively, with more of an offensive mindset rather than a purely defensive one.

  • With the rise of AI, what are you and Ohio State doing to mitigate cheating and plagiarism? — Matthew Lincicome ’23

    One of the things that’s been done: Guidance has been provided to instructors on how to restructure assignments and tests so that students can use AI. If you are allowing them to use AI, then they’re not cheating. But you have to reformat the homework and the tests so AI doesn’t give them exactly the answer, so they still have to think. I will say we are still going through that process.
