Will ChatGPT improve software development?
The AI-powered chatbot can put together poems and articles. But software? Not so much, writes Nick Durrant, CEO of Bluegrass Digital.
If you haven’t heard of ChatGPT yet, where have you been? In recent weeks, we’ve seen this pretty extraordinary AI-powered chatbot putting together poems, journal articles and even resignation letters in a matter of seconds. This powerful bot captured the world’s attention quickly, gaining 1 million users within just a few days of its November 2022 release.
The brainchild of OpenAI (yes, that's the same company behind the AI art generator DALL·E, which made waves in 2022 for its ability to create realistic images and art from a natural language description), ChatGPT is designed to provide natural, dialogue-like responses to queries.
While this is nothing new, what makes this large language model different is that it can understand context and detail. But this isn't the biggest differentiator. OpenAI has ensured that its offering stands out from what other big tech brands like Google and Meta have developed by allowing the general public to experiment with it directly, a move that has certainly driven much of the hype around the platform.
ChatGPT and software development
Unsurprisingly, something like this has massive implications for almost every industry. And equally unsurprisingly, it's receiving some mixed reviews. For proponents, speed is a major benefit. This viral AI-enabled bot has the potential to make everything faster by streamlining responses to customer queries and transforming how we search for things online. It's essentially taking the search out of research.
For software developers, this could simplify daily tasks by making it easier to process, write and debug code. ChatGPT can also identify security vulnerabilities. Today, developers will often run a quick Google search for a solution to an issue. But then they have to sift through the results to find the most appropriate answer.
Using ChatGPT, they will instantly be provided with the best-ranked answer to their question, making the process of fixing a problem much quicker. But it shouldn't be relied upon as an "expert", especially by people with limited coding knowledge who might come to over-rely on the code that ChatGPT produces without realising that the code isn't optimal. Similarly, if a developer wants to write highly complex code, the kind of code you'd need for something like a banking application, ChatGPT will fall short.
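To make the debugging workflow concrete, here is a hypothetical exchange of the kind described above (illustrative code, not actual ChatGPT output): a developer pastes a subtly buggy Python function into the chatbot and gets back a corrected version with an explanation.

```python
# Hypothetical example of a bug a developer might paste into ChatGPT:
# the default list is created once, at function-definition time, so every
# call without an explicit `tags` argument shares the same list.
def add_tag_buggy(tag, tags=[]):
    tags.append(tag)
    return tags

# The kind of fix a chatbot would typically suggest: use None as a
# sentinel and create a fresh list inside the function body.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

if __name__ == "__main__":
    # The buggy version leaks state between unrelated calls...
    print(add_tag_buggy("a"))  # ['a']
    print(add_tag_buggy("b"))  # ['a', 'b'] - surprise!
    # ...while the fixed version behaves as expected.
    print(add_tag_fixed("a"))  # ['a']
    print(add_tag_fixed("b"))  # ['b']
```

Spotting this class of bug is exactly the sort of lookup a developer would otherwise do via a Google search and a trawl through forum answers.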
In this way, it can become an asset that developers use to automate and speed up simpler tasks – like writing working code – so that they have more time to focus on complex application architecture and cybersecurity.
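The "simpler tasks" mentioned above are typically small, well-specified helpers. A minimal sketch of the sort of routine function a developer might delegate to the bot and then review by hand (the function name and behaviour here are illustrative, not taken from any real ChatGPT session):

```python
# Hypothetical example of boilerplate a developer might ask ChatGPT to
# draft: splitting a list into fixed-size chunks.
def chunk(items, size):
    """Split items into consecutive sublists of at most `size` elements."""
    if size < 1:
        raise ValueError("size must be at least 1")
    return [items[i:i + size] for i in range(0, len(items), size)]

if __name__ == "__main__":
    print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

Delegating throwaway code like this is where the time savings come from, freeing the developer for the architecture and security work no chatbot can do.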
But there are some downsides to the technology. Some analysts worry ChatGPT could power sophisticated malware, phishing attacks and online harassment, or that hackers could use the technology to develop their own AI models. Sure, any malicious code provided by the model is only as good as the questions asked of it, but the bot could open up opportunities for less savvy hackers to up their game.
While there is no doubt that an innovation like ChatGPT will bring about some big changes, we shouldn’t be worried that the “robots” are going to take over just yet. When it comes to software development, it’s important to remember that a developer’s job is about so much more than simply writing code. Programmers have to transform ideas into solutions and create something greater than the sum of its parts. They also create solutions with the needs and challenges of a specific business in mind.
According to experts, tools like this certainly have massive potential, but they can't do everything and, yes, they do sometimes give wrong answers. Ask ChatGPT to write about a very niche topic or a recent world event – post-2021 – and you may find that the model gets confused.
This could be limiting if you’re employing it to write code using a fairly new programming language. Some have even described it as a “parlour trick” and “a distraction”. In addition, industry insiders warn that there is an intrinsic difference between the way humans produce language and the way that large language models do it.