By Emma Regan, Jordan Maxwell Ridgway, Laura Ingate and Frankie Harnett
The looming threat of AI technology has hovered on the horizon of the publishing world for decades, and it has finally exploded onto the scene in the form of ChatGPT. A UBS study reported ChatGPT as the fastest-growing app ever, reaching 100 million active users within two months of its global launch, and it has revolutionised the day-to-day use of AI technology. The highly advanced chatbot can answer users’ questions, formulate emails, essays and CVs, and even write code.
As such, it represents a highly useful tool for many areas of life, from education to customer service to language translation. However, ChatGPT’s unusual ability to mimic human conversation by learning from previous interactions now poses a significant threat to jobs within many industries, including journalism and publishing.
Although ChatGPT only launched in November 2022, the artificial intelligence programme has already caused various issues inside and outside the publishing world. Not only has it been the talk of the town since its arrival, but many researchers and journalists are already pushing back against the technological tidal wave that is ChatGPT.
While shortening the time it takes to write scientific research might seem ideal, the end result can leave you with more work in the long run. ChatGPT is still a beta programme and makes plenty of errors. If academics come to rely on artificial intelligence too heavily, they may not notice mistakes in their text or figures and could publish a paper that is factually wrong. Holden Thorp, the Editor-in-Chief of the US journal Science, had to announce an updated editorial policy banning the use of ChatGPT after it was credited as a co-author on several published papers.
The use of ChatGPT will continue to expand as AI-generated content becomes faster and cheaper to create, with little effort required to fix its mistakes. Human writers will have to compete against an AI’s work and potentially lose, as companies look to make more money while paying out less.
There is speculation that ChatGPT could impact careers in journalism and publishing, with growing concern that the AI tool will be able to write content and produce articles after being given simple instructions. The media and entertainment company BuzzFeed has already been using AI to power its popular personality quizzes, meaning each quiz-taker can receive one of an effectively infinite number of results, uniquely tailored to their personality. Even The New York Times wrote an opinion piece on ChatGPT and then created a tool, an AI valentine generator, to see whether ChatGPT could show emotional intelligence. Since the launch of ChatGPT, there has certainly been a marked uptick in companies and newsrooms testing the AI tool.
But does this mean that ChatGPT will replace journalists, writers and publishers? Not anytime soon. Despite technological advances, ChatGPT is not foolproof. A news agency recently gave ChatGPT the job of creating a news story about a mugging. The AI tool was given the basic information needed to write the story and, at first glance, the article seemed passable, if not impressive.
On closer inspection, however, ChatGPT had made quite a number of mistakes: it got the name and age of the victim wrong, along with the location of the crime; it said the perpetrator was at large when they were in jail; it described the perpetrator as unidentified when their name and age were known; and it fabricated quotes. For now, it would seem illogical to let ChatGPT produce newsworthy content while there is potential for it to generate “fake news.”
The rise of AI filtering its way into everyday use, such as the news content we all read, is a topic of vital discussion. Being able to rely on accurately cited sources has always been important to the integrity of journalism and publishing, but this rings true now more than ever as public trust in news sources erodes. UK publishers are hopeful that the Digital Markets Unit will form regulations for AI-written news. Established in recent years, the Digital Markets Unit began as an online watchdog seeking to form a code of conduct for developing digital innovations; its formation and the level of legislative powers it should possess have been a subject of debate.
From the printing press through to computers and the internet, each age of publishing has brought greater access to information for a growing number of readers, and with it the need for new laws and protections. The introduction of AI technology creates an urgent need for updated laws so that copyright, intellectual property and facts can be protected and reliability secured.