
Artificial Intelligence (AI) is rapidly reshaping every industry and profession. In courtrooms, AI is drafting persuasive legal briefs. In hospitals, it’s reading stroke patients’ brain scans with accuracy that outpaces doctors. In finance, it’s helping uncover money laundering and other forms of fraud.
With technology advancing at such a rapid pace, Generation Z and college students must approach AI with critical awareness of its limitations, risks and ethical implications.
Howard University political science professor Norman Sandridge brings an interdisciplinary perspective to conversations about AI.
He began his academic career in physics and mathematics before transitioning into classics, earning advanced degrees in Latin and ancient Greek. This dual background enables him to study AI not only as a technological tool but also in terms of its broader impact on education, creativity and leadership. Sandridge is deeply concerned when students outsource their own intelligence to chatbots.
“AI seems like it’s very good at addressing our needs, but my worry is that AI will erode much of our intellectual, social, creative and leadership potential and being autonomous as human beings,” Sandridge said. “I fear [AI] could erode a generation’s ability to develop expertise in any field through consistently outsourcing our intellect to a ‘cerebral prosthetic.’”
The ‘cerebral prosthetic’ Sandridge refers to is the use of AI as a crutch: a tool that regurgitates whatever answer the student is looking for and reduces human creativity to an artificial transaction. Sandridge can imagine only a “very limited way” in which AI might improve students’ writing rather than replace their ideas with its own.
While AI is reshaping nearly every field of study, not everyone who speaks or acts with confidence or unwavering optimism about AI is reliable.
Individuals with financial or ideological interests in promoting AI could appropriately be called “techno-optimists,” people who believe that every technological advance eventually improves human life. As of now, it is unclear what role human learning, or anything human at all, such as imagination, creativity or morality, would play in a world dominated by AI.
“If I were a young person, I would be thinking, ‘Okay, what are going to be my competencies? What am I going to be good at as I make my way through high school and through college? What can I do that AI can’t?’” Sandridge said.
AI is mostly good at automating the tasks Gen Z students find tedious and time-consuming, like drafting essays, creating study guides or pulling key points from long readings. Yet these tasks play a crucial role in the creative process, and in particular in the struggle Sandridge calls “the tyranny of the blank page.”
Take, for example, the assignment of writing an essay. The first draft, which seems the most daunting task, is not just about getting words down on a page; it’s about wrestling with ideas, making mistakes and finding new insights along the way. AI shortcuts this process, and students who instinctively lean on these tools don’t notice how many of their intellectual breakthroughs are lost when large language models (LLMs) do their critical thinking for them. These technological advancements are turning Gen Z into “a generation of editors” instead of creative writers and critical thinkers.
The greatest writers in history became masters of their craft without AI tools; their growth came from studying other great writers, engaging in dialogue with one another, writing extensively and revising relentlessly. The challenge now, especially for Gen Z, is to determine whether AI can genuinely make someone a better writer or whether it risks undermining the very process that produces great artistry.
Everyone should be worried that human intellect is taking a backseat to expediency in an efficiency-obsessed educational and intellectual culture. Gen Z and techno-optimists no longer lionize intellectual struggle and creative breakthroughs, and the fundamental message of AI chatbots like ChatGPT, Claude 3 and Gemini is that their users are derivative and replaceable.
While some people envision AI writing Pulitzer-caliber books, Oscar-caliber screenplays and paradigm-shifting, Nobel-caliber scientific discoveries with little or no human involvement, there is no real consensus on the future of AI, which differs significantly from most prior forms of revolutionary technological innovation.
“There’s a certain point at which, when we use technology, the technology is also using us,” Sandridge said. “We’re basically the lab rats who are being asked to test this out for the AI companies… It’s not even being studied in a controlled, safe way.”
The unfettered growth of AI is not inevitable, and despite the uncertainties surrounding it, experts can vividly picture many possible AI disruptions. For instance, elections may be compromised by AI-enabled hacking, AI may be used to monitor and censor citizens more thoroughly, and it might quickly and easily capture compromising information on vast swathes of the population.
AI may be used to spread misinformation and disinformation even more rapidly and more compellingly than it is already spread on social media, and AI itself may ‘hallucinate’ facts that mislead people.
AI stakeholders, individuals with a large amount of wealth or people who believe in racist, white-supremacist rhetoric may program AI chatbots to distort history and current events. Elon Musk is already doing this with his chatbot, Grok, which recently spouted hateful and antisemitic language.
While AI may be akin to “a country of geniuses in a datacenter,” its advancements are happening incrementally. Students, professors, universities and society as a whole need to act wisely, with a clear understanding of the potential dangers of artificial superintelligence.
Many colleges and universities, including Howard, are still in the early stages of developing clear policies on AI use. While some frameworks exist, they remain incomplete, largely because the technology is evolving faster than policy can keep up.
One of the biggest challenges is that young people have been experimenting with AI tools in ways administrators and faculty never anticipated, making it difficult to account for every possible AI scenario. Professor Sandridge will take part in crafting upcoming AI policies within the political science department.
“I really don’t see the student-professor relationship as antagonistic. I don’t think it’s our job to catch students cheating using AI. I think students have a real invested interest not to use AI for their own intellectual and social development,” Sandridge said.
In the next couple of decades, technology will be able to do things that would have seemed like magic to our grandparents. The dawn of the ‘Intelligence Age’ is a momentous development in human history, with complex, high-stakes challenges that current and future generations, politicians and techno-optimists need to rise to.
Copy edited by Daryl R. Thomas Jr.
