Brian K. Smith (Peter Julian)
The promise and peril of artificial intelligence, chatbot technology in particular, has been a hot topic since last fall with OpenAI's launch of ChatGPT (Generative Pre-trained Transformer), a chatbot that impressed many with its ability to generate detailed and human-like text, although critics noted its uneven factual accuracy. Journalists, artists, ethicists, academics, and public advocates raised concerns about how ChatGPT could negatively affect education, disrupt entire industries, and be used to sow political and social chaos.
By January, ChatGPT reached more than 100 million monthly users, a faster adoption rate than that of Instagram and TikTok. On March 14, OpenAI released GPT-4, an upgrade of the version used in ChatGPT. Microsoft and Google also have introduced their own chatbots.
In the following Q&A, Brian K. Smith, the Honorable David S. Nelson Chair and associate dean for research at the Lynch School of Education and Human Development, talks about AI and ChatGPT's potential, for better or worse. Smith's research interests include computer-based learning environments, human-computer interaction, and computer science education. He has also worked in artificial intelligence throughout his career.
OpenAI CEO Sam Altman met with Washington, D.C., lawmakers earlier this year to clarify misconceptions about ChatGPT by explaining its uses and limitations, but some legislators believe that the new technology warrants a dedicated regulatory agency. Is that wise?
Whether it's government, industry, academia, or some combination, people need to think about the societal implications of any technology. As many suggest, those implications could be bad, but they could also be positive. For example, much progress has been made using machine learning in breast cancer analysis. It'd be great to incentivize and celebrate these positive applications while continuing to look for and minimize possible biases and adverse effects. In the short term, that might be a regulatory body. In the long term, we should educate future technologists to think as deeply about the societal impacts of their innovations as they do about technical knowledge.
Researchers warn that large language models like the one used by ChatGPT could be used by disinformation campaigns to more easily spread propaganda, and that as models become more accessible, easier to scale, and capable of composing more credible and persuasive text, they will be very effective for future influence operations. Is the danger legitimate? What could be done to mitigate the threat of the tool's weaponization in the wrong hands?
There are and will always be bad actors in the world, and they'll use whatever they can to do bad things. Will some bad people use ChatGPT to spread misinformation, write convincing phishing emails, etc.? Without a doubt. But I think we know a lot about how bad actors work with existing tools, and that knowledge goes a long way. We focus on the bad getting worse, but the good also gets better with new technologies.
In a survey of 1,000 college students, the online magazine Intelligent found that nearly 60 percent used the chatbot on more than half of all their assignments, and 30 percent of them used ChatGPT on written assignments. Some universities worry about ChatGPT's impact on student work and assessments (it passed graduate-level exams at the University of Minnesota and Penn's Wharton School of Business), but they are declining to bar the chatbot, instead advising professors to set their own policies. What should colleges consider when it comes to ChatGPT?
Writing is a huge part of how students are assessed in education, so it's not surprising that there's concern about a program that generates reasonable essays, computer programs, language translations, etc. But ChatGPT is a technology that offers an opportunity to rethink what and how students learn, much like calculators, spell-checkers, Wikipedia, and similar tools. Changing education is challenging, so how do we do it? Boston College's Center for Teaching Excellence created an excellent resource that provides strategies for using it to teach and to minimize cheating. Other universities are investigating similar ways to work with ChatGPT rather than trying to ban its use. The key is getting educators to start thinking together as a community to develop pedagogies that situate ChatGPT and other tools as intellectual partners rather than stuff to cheat with (it's not called "CheatGPT").
What do you mean when you talk about "tools as intellectual partners"?
People started talking about intelligence amplification or augmentation in the 1950s. The basic idea is that machines can assist us with cognitive tasks that would otherwise be difficult to perform alone. A calculator is a good example: It lets us offload things like computing square roots and multiplying big numbers by hand so we can focus on higher-level problem solving. You can imagine something similar with ChatGPT. I can prompt it to create a sample syllabus, party invitation, or a Q and A for the Chronicle and then iterate on the initial text to make it read in my voice and style and correct any errors it made along the way. ChatGPT is like a partner helping me brainstorm and improve ideas in this scenario.
By the way, I didn't use it for this Q and A.
In a TIME magazine article, proponents of generative AI said it will "reorient the way we work, unlock creativity and scientific discoveries, allow humanity to achieve previously unimaginable feats, and boost the global economy by over $15 trillion by 2030." But the article also raised multiple concerns, not the least of which is the existential risk posed by AI companies creating Artificial General Intelligence (AGI), a tool that "thinks and learns more efficiently than humans," potentially without human guidance or intervention. How can we guarantee that AIs are aligned with human values?
OpenAI did a lot of work creating "guardrails" to keep ChatGPT from spouting lots of crazy things. Unfortunately, that's become politicized, with some saying ChatGPT is "woke" because it might avoid talking about certain people and ideas. But ChatGPT and similar language systems are trained on billions of documents written by humans. Suppose those programs produce language that goes against human values. That'd be because people have expressed and will continue to express horrible things that oppose human values. We can't blame a computer for learning our bad habits; humans need to stop war, violence, discrimination, etc. Don't hate the chatbot, hate the game.
TIME cautioned that the big technology companies that will eventually control AIs would likely become not only the world's richest corporations by charging whatever they want for commercial use, but could potentially morph into "geopolitical actors" that rival nation-states. Are these fears realistic? If so, what measures might be implemented to curb these developments?
This one's out of my league; I'm afraid I don't know anything about how AI might be used to create the Federal Kingdom of Microsoft or the Amazon Republic. It's an interesting scenario, but I'm hoping those companies might help us use AI to solve the significant challenges we face as a society. It won't do much good for Google to take over a continent when it floods due to climate events. I look to our students, past, present, and future, to help with this. Hopefully, they'll become the leaders of organizations that use AI for good rather than technological empire building.
Phil Gloudemans | University Communications | April 2023