The academic world is in a frenzy about an artificial intelligence system called ChatGPT, which can write papers, reports and responses on demand.
The GPT in its name stands for Generative Pre-trained Transformer, and the chatbot was launched on 30 November last year by OpenAI, an AI research laboratory.
It has dazzled with its articulate answers and dismayed with its lack of factual accuracy, which will come as no surprise, given that the founders of OpenAI include the new overlord of misinformation, Elon Musk.
I asked ChatGPT to write a column about bots writing columns. This is what it told me:
“While the use of bots for writing may seem like a novel idea, it is actually not a new concept. For years, news organizations have been using automated systems to generate basic news stories, such as stock market updates and sports scores. However, with the development of more advanced language models, bots are now able to write more complex articles and columns.
“One of the main benefits of using bots for writing is the ability to quickly generate large amounts of content. This is particularly useful for news organizations that need to produce a large number of articles on a daily basis. Additionally, bots can also be programmed to write on specific topics, which can be useful for specialized publications.
“However, there are also concerns about the use of bots for writing. One of the main concerns is the potential for the spread of misinformation. Since bots can generate content based on a set of pre-programmed rules, they may not always be able to accurately distinguish between fact and fiction. Additionally, there are concerns about job loss as more content is generated by bots.
“While it offers many benefits, such as the ability to quickly generate large amounts of content, it also raises important ethical and societal questions that need to be addressed.”
Naturally, I then asked it to tell us the drawbacks of bots writing columns. It took six seconds to produce this:
“There are several drawbacks to using bots to write columns, including:
“Lack of creativity: Bots can only write based on the information they’ve been trained on, so they may not be able to come up with new and original ideas.
“Lack of understanding of context: Bots may not fully understand the nuances and subtleties of language and may not be able to pick up on sarcasm or irony.
“Lack of accountability: It may be difficult to hold a bot accountable for the content it produces, as there may not be a clear author or editor to take responsibility.
“Lack of human touch: Bots may not be able to capture the human perspective and emotions that make columns relatable and interesting to readers.
“Lack of diversity: If a bot is trained on a specific dataset, it may only be able to write in a certain style, which can limit the diversity of content in columns.
“Lack of accuracy: There are chances of misinformation, if the bot’s training data is not accurate, as the bot cannot fact-check or verify information like humans.”
That all sounds pretty damning, so I asked it to outline the benefits of bots writing columns. It obliged, but was clearly not happy:
“Bots writing columns can have several benefits, such as: Consistency and reliability in terms of tone and style; Ability to generate content quickly and at scale; Reduced costs associated with hiring human writers; Potentially increased accuracy and objectivity in reporting data-driven topics; Ability to work 24/7, which can be useful for news outlets that need to update their content frequently.
“However, it’s also important to note that bots lack the human touch and creativity that comes with human writing and may not be able to fully capture the nuances and complexity of certain topics.”
I couldn’t have put it better. Or maybe, according to the bot, I could have?
* Arthur Goldstuck is founder of World Wide Worx and editor-in-chief of Gadget.co.za. Follow him on Twitter on @art2gee. This column first appeared in the Sunday Times.