
The Hilltop

INVESTIGATIVE

Howard Community Divided on AI Usage in Academia 

Artificial intelligence has changed how some approach learning in higher education — some are intrigued but others are hesitant about the evolving technology.

At Howard, community members have different opinions on how to approach the usage of artificial intelligence. (Graphic by Eva-Sychell Mitchell/The Hilltop)

In the past five years, artificial intelligence (AI) has captured widespread attention as it blurs the line between what is imaginable and what is real.

Howard University student Dinobi Nwosu has had to stop herself and family members when they mistook AI-generated content for something that actually happened.

Once, her dad sent her a video of an animal that he believed was real. She had to tell him the video was actually AI-generated.

“I feel like [Generation X] is more susceptible to manipulation with AI videos, pictures and media,” she said.

Nwosu also recalled a few times when she herself mistook AI for a real person. One of those times, she watched a video of a girl going through the steps of her hair routine. What appeared real at first took her by surprise when she opened the comment section, which revealed the video was actually AI-generated.

“I was like wow, it’s getting scarily realistic,” Nwosu said.

A survey conducted by The Hilltop found that of a sample of about 140 readers, 53 percent use an AI chatbot at least once a week.

While some weave it into their classrooms, others use it to help with daily tasks and some find ways to avoid it at all costs. Either way, AI is on the rise.

AI research dates back to the 1950s, but the form of AI most people think of today began gaining major traction in the past five years, with the introduction of large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Gemini.

During the 1950s, Alan Turing, whose research laid the foundation for the invention of AI, questioned whether machines had the ability to think. In its early days, AI was mostly used for proving mathematical theorems or playing games like chess.

Now, when most people think of AI, they think of LLMs or other online tools marketed as convenient for everyday use, such as Apple’s Siri, released in 2011.

One of the most popular AI platforms, ChatGPT, averages 2.5 billion prompts each day. That works out to more than 28,000 prompts per second.
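The per-second figure follows from simple arithmetic, sketched below using the article’s reported daily average as the starting point:

```python
# Convert ChatGPT's reported daily prompt volume to a per-second rate.
prompts_per_day = 2.5e9          # reported average daily prompts
seconds_per_day = 24 * 60 * 60   # 86,400 seconds in a day

prompts_per_second = prompts_per_day / seconds_per_day
print(round(prompts_per_second))  # roughly 28,935 prompts per second
```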

At Howard, some professors are learning how to weave AI into their curricula, while others are learning how to cut it out.

Gracie Lawson-Borders, dean emerita and professor in the Cathy Hughes School of Communications, said she is actively trying to figure out how to manage AI in academic spaces.

Her students are allowed to use AI, but according to her syllabus, only if they make clear how and where the tool was used.

“[AI] is not going away. I’m open to AI because I want to be a part of the conversation with students. It doesn’t matter whether you’re in journalism, communications or in health sciences, it matters how you manage it,” said Lawson-Borders.

Other professors disagree. Jennifer Williams, an associate professor in Howard University’s department of literature and writing, said AI should not be used in academic spaces.

“AI has changed learning…[and] what some students don’t realize is that you have to master the writing process before you can use tools to assist you. If you don’t have mastery of [your] writing you can’t identify what I can identify when reading over something,” she said.

Williams said she is concerned about students losing their creativity by using AI on homework assignments and classwork. Because of this, she has shifted the majority of her curriculum to handwritten work in an attempt to minimize LLMs’ influence on her students’ work.

“The way that AI scrapes content is dangerous and kind of unethical,” said Williams. “My biggest concern is that students will lose their own voices or won’t find it.”

Knyla Vib, a junior nursing major from Queens, New York, said she has noticed more people using AI daily on their assignments.


“I feel like some people use AI to do their work for them and not as a helping mechanism,” Vib said. “Last semester in one of my classes, one of my teachers didn’t accept any submissions on one of our assignments because he noticed that so many people used AI.” 

When LLMs are used, they often recycle information their users feed them.

AI systems and LLMs also pull data from across the internet. Sometimes, the responses on platforms including ChatGPT may include hallucinations.

An AI hallucination is when false or misleading information is generated and presented as true. 

Senior research scientist at the Howard University Institute for Human-Centered AI, Saurav Aryal, sees AI having long-term effects in academic spaces, especially for students.

“As humans, I am of the belief that we tend to choose the most convenient thing to do at a given moment. The most convenient thing to do for anyone whether it be an instructor or student is to rely on AI to do a lot of things that we feel are modern-day tedious,” he said. 

Desta Haileselassie Hagos, the university’s artificial intelligence and machine learning technical lead manager, agrees. He said there are three main ways AI impacts the Black community and academic spaces: teaching, research and policy. 

“AI can expand access to education and health care tools…if training is done thoughtfully,” he said. “…[But] there are cases of bias…[for example,] healthcare algorithms have sometimes underestimated the needs of Black patients because of the flawed design choices.”

According to a study published by the Massachusetts Institute of Technology, AI sometimes produces misinformation that looks accurate because it was trained on biased or inaccurate content. Researchers also found that AI can produce content with social and cultural biases, resulting in misinformation.

Howard partnered with Google Research to enhance AI’s speech recognition capabilities. The partnership focuses on African American English and linguistics.

The project, called “Project Elevate Black Voices,” has given researchers the opportunity to broaden the range of dialects AI systems can recognize and to improve their speech recognition overall.

Although Nwosu, a senior biology major from Los Angeles, California, said she has concerns about AI’s influence in her personal life, she doesn’t have the same hesitancy when it comes to academic or professional situations as she studies to become an OB-GYN.

“I doubt that AI will be able to take over that role because it’s so patient care heavy,” she said.

Interim President Wayne Frederick offered a different perspective on the future of AI in medical fields. He said that although “it’s complicated,” he would like to see AI further integrated into medical teaching.

“We need to use more of it. …I’m very concerned that it’s not heavily in the medical curriculum…when you practice medicine today there’s just no way to avoid it. We have to be careful about graduating students in many fields who are going to go into a workplace that’s using it, but we haven’t necessarily trained them on it, so I want to see us using it,” Frederick said.

Some medical institutions have integrated AI into their day-to-day care, and at Howard’s AI in Healthcare Center, researchers and clinicians are implementing their own practices to help close the gap between the use of AI and public health.

Although he would like AI to be integrated into teaching, Frederick said its usage should ultimately be determined by faculty, not administration.

“It’s not something that I think the administration should be weighing in on or influencing. What we should do is make sure the tools are available, but just like with everything else in the academic realm, we should let the faculty really guide us,” Frederick said.

Maya Bryant, a junior marketing major from Los Angeles, California, embraces AI due to its rise in popularity over recent years. 

“I remember being so freaked out about [AI] and the idea that you can take someone’s face or image and make it seem like they’re saying something different. My first impression of it was definitely daunting and scary,” said Bryant. 


However, with AI becoming more popular, especially in her prospective field of marketing, her views on AI and LLMs took a turn. 

In Career Counseling, a mandatory course for all students in the School of Business, Bryant said “a lot of the speakers would talk about how if you don’t use AI, you’re going to be behind initially in your career.”

Researchers from the National Center for Biotechnology Information found that while the convenience AI provides can improve mental health outcomes, some people may experience increased stress due to fear of administrative and instructional jobs being taken over.

Maya Crivelli, a junior honors English and philosophy major from Springfield, Massachusetts, actively avoids using AI.

“I’ve tried using AI in the past and every time I’ve tried it, it would just get under my skin. It didn’t really explain things in a way that was conducive to my understanding. It wasn’t offering useful or correct information a lot of the time,” Crivelli said.

Crivelli is concerned about AI’s impact on the environment and on Black communities. 

Generative AI is powered by data centers, large buildings filled with hundreds of computers. Keeping those computers running requires massive amounts of electricity, along with water and air conditioning for cooling.

“I can’t rationalize as an African-American person spending a lot of time at my [HBCU] having an AI model do my thinking for me while it’s poisoning African American impoverished communities around the U.S. That makes my skin crawl. It feels dystopian,” Crivelli said. 

Frederick believes it’s important that the Black community gets up to speed on AI in order to better understand and take part in the conversation about its harms.

“What my concern is if we don’t educate and get our population up to speed in it we’re not gonna participate in that discussion about the exploitation,” Frederick said. 

Frederick said that even with the uncertainty that comes with AI usage, it is still used around the world.

“Regardless of what our concerns are, we’re using it. And we’ve been using it. …I see it in medicine, I see it in business, if you don’t get involved in learning about it sooner then you are going to get harmed,” Frederick said. 

Copy edited by Daryl Thomas Jr.
