It’s the year 2033. You’ve got your feet up. You’ve taken a break from your AI-assisted microchip and stepped out of your immersive pod. You can still see ads through your lenses – carefully undetectable yet deliciously delectable for advertisers. It’s time to work, so you hold up your device (once called a laptop or smartphone) and check your prompt. You lazily whisper a few words and voilà, it’s done. They used to say ‘all in a day’s work’, but it took you mere seconds. You wonder: where and how did this all start?
Years from now, who knows how we’ll be learning and working. Advancements in AI and the rebranding of Facebook to Meta will all be in the history books, and our hegemonic societal structures will have to evolve alongside the technology. Let’s take what’s happened recently as an example.
ChatGPT was launched as a prototype on November 30, 2022, and quickly drew attention for its detailed, articulate answers across many domains. GPT stands for ‘Generative Pre-trained Transformer’; it was created to imitate conversation, but, as all the futuristic sci-fi novels have warned us, it has become more capable than that. I asked ChatGPT where it gets its information from, and it said from ‘books, articles and websites which enables it to generate text that is similar in style and content to the text it was trained on.’ As can be seen in the screenshot below, its knowledge is currently limited to information published up to 2021. You can try it out here.
It takes about two seconds to log in and write a prompt, and if you’re not happy with the answer, you can regenerate a new response. There have been cases where the responses ChatGPT provides are incorrect, which can spread misinformation. The risks are easily identifiable: people may not challenge the information presented to them, and public information online may become even more skewed by incorrect data, thanks to lazy writing or reading and a lack of fact-checking and deep research. But the better the questions, the better the answers.
What’s interesting (or horrifying for educators) is that ChatGPT appears to do the work expected of students – particularly those tasked with essay-writing – in seconds, and in a format potentially undetected by plagiarism tools.
Traditional modes of teaching were already challenged by the Covid-19 pandemic in 2020, when the world went online for about a year. Lectures and exams were held virtually, and in some cases it was already hard to determine whether students were cheating. The risk is that with AI software like ChatGPT, students can generate their assignments at the click of a button and snatch their certificates on the way out. ChatGPT has been tested as both a student and a grader. It recently passed a Wharton MBA exam, and on the SATs it scored an average of 1020 out of 1600, corresponding to an estimated IQ of 83. Not only can it pass exams, it can also grade itself and critique its own work.
To combat this, students such as Edward Tian have built tools like GPTZero, which can check whether a piece of work was generated by ChatGPT. It isn’t 100% foolproof, but it has helped lecturers judge the authenticity of the work presented to them. So far, universities have relied on applications like Turnitin to determine the authenticity of a student’s work, yet a horde of students recently claimed to have used ChatGPT successfully without being flagged for plagiarism by Turnitin. Citing cases like this, Jordan Peterson recently hypothesized that a third of universities will go bankrupt in the next five years. A less dystopian view is that some institutions are in trouble because of a lack of investment in properly trained, knowledgeable teaching staff and updated assessment systems.
The solution may be to integrate AI software like ChatGPT into lesson plans and to challenge students to use these tools to build on existing knowledge and curricula. Lecturers can also use it as a brainstorming aid, producing assessment questions or generating fresh questions on a topic that has been exhausted.
Professor Ethan Mollick at Wharton has created an AI policy for working hand in hand with this disruptive technology.
NYC’s Department of Education, by contrast, banned the use of ChatGPT, stating that “it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”
Rather than obliterating disruptive AI technologies in education altogether, I think we should use this moment to challenge traditional teaching models that have become outdated. In a post-Covid climate, we have already adapted our lifestyles to working and studying online, so why should this be any different?
Noah Giansiracusa, a professor of maths and data science, thinks that AI applications like ChatGPT, combined with deepfake audio, may fuel further misinformation and disinformation – exactly what education is supposed to address.
Perhaps things may get worse before they get better. In the short term, I believe banning AI from campus is a non-starter. The internet as a whole disrupted education: with the likes of Google, Wikipedia and online library systems, classic research methods became redundant. Since the technology is new, it would make sense for universities and educational institutions worldwide to create policies to adapt to it, just as Professor Mollick is suggesting. For now, it is more prudent to work alongside the technology and incorporate it into existing systems than to ignore or fight it – as long as its use is ethical and there is no risk of harm. It is that risk which should be of immediate concern.
In the meantime, many academics are being encouraged to experiment, and not panic. Media outlets are also exploring what ChatGPT means for their core content development activities.