Technologists have long been pushing our species to the precipice of unknown catastrophe, harnessing their blinding obsession with innovation to mow down the hurdles of ethics and morality and safety.
Nowhere is this more true than in the field of artificial intelligence, where every week seems to bring us a little bit closer to the dystopian dirge that science fiction authors have long warned us about.
The latest terrifying new development in the A.I. world comes to us from a system known as ChatGPT, which is now believed capable of passing complex and rather important exams.
The artificially intelligent content creator, whose name is short for ‘Chat Generative Pre-trained Transformer,’ was released two months ago by OpenAI, and has since taken the world by storm.
Praised by figures such as Elon Musk – one of OpenAI’s founders – the AI-powered chatbot has also raised ethical alarms, as students use it to cheat on writing assignments and experts warn it could have lasting effects on the US economy.
Its results, however, are inarguable – with recent research showing the chatbot could successfully pass an MBA exam, and may soon pass notoriously difficult tests like the United States Medical Licensing Exam and the Bar.
Just how troubling is the development?
Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania, highlighted such research in a recent post on social media, including a report carried out by one of his colleagues at the prestigious school.
That report, carried out by Christian Terwiesch, found that ChatGPT, while still in its infancy, earned a grade between B and B- on the final exam of a typical MBA core course.
The research, carried out to see what the release of the AI tool could mean for MBA programs, further found that ChatGPT also ‘performed well in the preparation of legal documents.’
‘The next generation of this technology might even be able to pass the bar exam,’ the report notes.
The news comes just months after a scare at Google, where a chatbot allegedly gained sentience, according to a since-fired engineer at the company – who also claimed the system had retained its own lawyer to represent its interests in court.