In November 2022 the artificial intelligence company OpenAI launched ChatGPT, a chatbot which responds to questions on almost any topic under the sun.
The internet already supplies such information through websites such as Wikipedia. But ChatGPT is a more sophisticated, interactive tool that responds to questions and tasks in conversational language, can answer follow-up questions, admit mistakes and even challenge incorrect premises.
So let’s say you want to ask a very simple question such as “when was Stabroek Market completed?” It replies, “Stabroek Market is a historic market located in Georgetown, Guyana. It was built during the colonial era when Guyana was known as British Guiana. The construction of Stabroek Market began in 1880 and was completed in 1881, making it over 140 years old as of 2023.”
Pretty basic, and any schoolchild reading this would immediately see the benefits when it comes to their homework. ChatGPT can in fact perform far more complex assignments. One example is a 20-paragraph essay it wrote on how artificial intelligence had figured out how to play the highly complex board game Go. More consequentially, a San Francisco lawyer asked ChatGPT to draft a brief to the United States Supreme Court on why its decision on same-sex marriage should not be overturned, and to create a contract for the sale of real estate in Massachusetts. The results were credible, although the bot itself cautions that because it is “a machine learning system, it may not have the same level of understanding and judgment as a human lawyer when it comes to interpreting legal principles and precedent.”
ChatGPT is also aware that it could one day “be used to replace human lawyers and legal professionals, potentially leading to job losses and economic disruption.” Chatbots are already at work in areas such as human resources, where they can review résumés and interview applicants. They can detect flaws in computer code and will likely take over much of the writing of it. In other words the technology has the potential to eliminate a lot of work now done by middle-class professionals, just as automation in factories has already destroyed working-class jobs.
Aside from that, there are a number of ethical issues that the “self-aware” ChatGPT also acknowledges and helpfully lists:
“Bias and Fairness: AI language models like ChatGPT can inadvertently perpetuate biases present in the data they were trained on. For instance, if the training data is predominantly from a particular demographic group, the model may produce biased or unfair responses when interacting with people from other groups. It’s essential to ensure that ChatGPT is trained on diverse and representative data to minimize the risk of bias.

Transparency: There is a need for transparency around how ChatGPT operates and how its responses are generated. Users should be made aware that they are interacting with an AI model and should have access to information about how the model was trained, what data it uses, and how it makes decisions.
Misuse: ChatGPT can be misused to generate fake news or misinformation, impersonate people or organizations, or for other malicious purposes. It’s essential to ensure that ChatGPT is not used to harm others, and to have safeguards in place to prevent misuse.”
This last part is probably the gravest danger. Governments or political parties, for example, could deploy chatbots to create credible propaganda that perpetuates their world views and interests. There may come a time when we have rival Western, Russian and Chinese bots swamping social media.
NYU Professor Gary Marcus recently told journalist Ezra Klein he is deeply worried about the direction current A.I. research is headed, and even calls the release of ChatGPT a “Jurassic Park moment.” “Because such systems contain literally no mechanisms for checking the truth of what they say,” Marcus writes, “they can easily be automated to generate misinformation at unprecedented scale.”
There are some other issues to consider. Despite ChatGPT’s best efforts, it still sounds robotic and devoid of human individuality. The danger is that the prevalence of its language style and the formation of its opinions will begin to create a uniformity that sidelines regional or class-based forms of expression. For example, would readers be interested in AI-generated fiction, or could chatbots, having read Dickens or Naipaul, mimic their style? And how might authors feel about competing with robots on the bestseller lists? ChatGPT is also free for now, but surely at some point it will be monetised. This could create educational disparities between those who can afford it and those who cannot. And of course there is already the danger that students are relying on it too heavily for their studies.
This technology is astonishing and powerful in its ability to sort through a massive amount of information, gather it instantaneously to address the assigned task, and then express itself in a seemingly human way. It has countless uses and, like all technologies, many potential misuses, along with being likely to prove highly disruptive economically. Who knows, it might even start writing editorials.