Last week, Sam Altman, co-founder and CEO of OpenAI, the company behind ChatGPT, visited Howard University for a conversation with President Dr. Ben Vinson III and Dr. William Southerland, director of Howard’s Center for Computational Biology and Bioinformatics. The fireside chat, hosted in the Armour J. Blackburn University Center ballroom, examined the future of artificial intelligence (AI) and the role diversity should play in it.
The discussion focused on the rapid advancement of AI technologies, how integrating different communities into the development of AI can make it ethical and user-friendly, and how students are advantageously positioned amid the wave of new technology.
ChatGPT, the AI software developed by OpenAI, allows users to ask questions and chat with a bot that can imitate human conversation, according to TechTarget. Beyond conversation, the software can compose essays, create art based on prompts and make lists for daily tasks such as chores and shopping.
As AI-powered software continues to gain popularity and evolve, many people have questions about what daily life will look like with AI woven into it. Another pivotal question is who holds the authority to determine the changes AI can make in society.
Altman acknowledged the rapid pace of AI technology and emphasized a collective approach. He said that for AI to work productively, it requires the partnership and perspectives of everyone in society.
“One of the reasons I want to talk to people outside Silicon Valley and different backgrounds and universities is so that this technology can bring people together in an inclusive way so that this technology can be extended to everyone,” Altman said.
According to Knowledge at Wharton, a business journal from the University of Pennsylvania, a lack of diversity in development can mean that the technology created only works for the people who helped develop it, closing out entire subgroups from potentially life-changing technology. The journal states that through early inclusion of people of color in the coding process, bias in the code can be reduced.
Altman said that ChatGPT and OpenAI went through reinforcement learning from human feedback, in which human reviewers evaluate the model’s outputs to mitigate bias within the system. He also acknowledged the responsibility of those writing the code to examine their own bias to eliminate biased outcomes in AI.
“We ask, ‘Who decides what the behavior of these systems should be, especially when it impacts the whole world?’ How do you ensure that marginalized voices are heard and that users giving input are taking into account a broader picture of the world?” he said.
Dr. Amy Yeboah Quarkume, the graduate director of Howard’s Center for Applied Data Science and Analytics, works to help people understand how to best use AI technology and protect themselves in the digital landscape.
“Our community was not put in the position to trust technology, most of the time because we are not creating it,” she said, referring to the Black community. “With that history, I think it becomes important to look at what Howard is doing with science programs to give students the ability to learn and be empowered because there is potential to harness AI for research.”
During the Q&A section, community members voiced concerns about prioritizing ethics in AI amid its rapid progression. Many had questions about the mechanisms in place to ensure responsible practices and maintain privacy protection, with a particular focus on copyright infringement.
“We have networks of trust, and though there is going to be a lot of AI content, hopefully, people will understand not to take it too seriously,” Altman said.
Autumn Coleman, a graduating senior journalism major from Raleigh, North Carolina, who attended the chat, shared their thoughts on the effects of AI on the Black community.
“I think that Black people having access to AI at this stage allows them to be considered in conversations of making the technology accessible to all, but I’m also not sure if AI will have any large benefits for the community as a whole,” Coleman said. “Altman says that they won’t use certain information to feed their algorithms, but what about the information of individuals who don’t have as much influence to protect themselves and their work?”
As for the future of ChatGPT and AI, Altman said that one day he hopes that the technology will write code, create projects and develop entire presentations for users.
“We are going from a world where technology is limited and expensive to abundant and cheap,” Altman said. He said he envisions a future where cognitive labor is free-flowing and everyone can afford cognitive services.
As ChatGPT and AI continue to evolve, inclusivity and responsible practices will be necessary for a future where technology benefits everyone.
Copy edited by D’ara Campbell