Artificial Intelligence
Signpost: The Emperor’s new algorithm
According to HPE, only a third of organisations have an AI strategy, even though most talk about deploying it, writes ARTHUR GOLDSTUCK.
John Carter, vice president for server product, quality, and technical pursuit at HPE, put it bluntly at the recent HPE Innovation Day in Johannesburg: “Only 33% of customers have an AI strategy today, but I guarantee you that 100% of customers talk about deploying. Having a strategy is the first step to being successful.”
That statistic, however, is not the most startling aspect of AI deployment.
“There’s no other project in your company today that you would go deploy without thinking through a strategy first, and AI is no different. Stepping back and having those conversations is step one.”
A strategy means choosing well-defined tasks, having the right data, knowing when accuracy is critical, and recognising when a human must remain in the loop.
“Start with a known business problem,” Carter advised. “AI then becomes a tool in your toolbelt to solve that problem.”
The long road from intent to deployment – on average more than seven months, according to HPE – is not due to the technology being difficult to implement, but rather because the underlying data is often a mess.
“The biggest time that we see customers typically take to get ready for an AI deployment is actually cleaning and structuring the data that has to go into the models. So start thinking early. Start thinking now about what those steps are to prepare for an AI project in your data centres.”
While the emperor’s new clothes of the moment are large language models (LLMs) like ChatGPT, most businesses don’t need to build such behemoths.
“Almost none of you have a real use case to go build your own LLM,” said Carter. “It doesn’t make any financial sense. It doesn’t make any sense with your resources, and it really won’t bring you a better outcome.”
Instead, Carter recommended more tailored models: “Small language models are probably one of the most useful concepts to come away with today. You can get away with maybe 10 billion parameters, a tenth of the size of a large language model, and still keep 90-plus percent of the performance for most use cases.”
These models, which can be tuned to specific data and tasks, operate with a fraction of the computing power and data required by LLMs.
This marks a turning point for AI deployment in markets like South Africa, where resource constraints are often the barrier. By shifting focus from all-powerful, general-purpose models to narrower, task-driven ones, companies can embrace a practical form of AI that doesn’t bankrupt their budgets or their teams.
In highly regulated industries like finance and insurance, where data is already structured and security is paramount, early wins have come in areas like fraud detection and personalised mobile banking.
Healthcare has found value in diagnostic models that perform more accurately than doctors when interpreting structured data. Still, these systems work best with a “doctor in the loop”, underscoring AI’s role as an assistant, not a replacement.
In most use cases, the common denominator is not flashy AI demonstrations, but measurable business outcomes.
HPE has implemented its own chat assistant, ChatHPE, to give employees access to company-specific knowledge while keeping proprietary data out of public models. It is also using AI to compress the nine-to-twelve-week process of server configuration down to just three days.
The real magic lies not in what AI can do, but in knowing what a business needs it to do – and doing just that.
* Arthur Goldstuck is CEO of World Wide Worx and editor-in-chief of Gadget.co.za. Follow him on Bluesky at @art2gee.bsky.social.
