Artificial Intelligence
Cisco Live: The biggest risk in AI
If you’re not using AI, you are already falling behind, but trust must be built in, Cisco VP Akshay Bhargava tells ARTHUR GOLDSTUCK.
When customers tell Akshay Bhargava that it is too risky to use AI, the Cisco vice president has a simple answer: it is too risky NOT to use AI.
As VP of product management for AI, software and platform at the global networking giant, Bhargava’s job includes making AI trustworthy, secure, observable and reliable. Even if AI were not constantly shifting the goalposts, that would be a big ask. Now the challenge is multiplying.
“AI is changing dramatically and the person who has this type of role in a company has gone through massive change in three dimensions,” he told Gadget during the Cisco Live Europe conference in Amsterdam last week.
“The first dimension is that the time to act on AI is now, and it’s a harsh reality. If you’re not using AI, you are falling behind. It’s that blunt: you may say, oh, it’s risky, but it’s more risky if we do not use AI.
“The second thing that’s changed is that we have to embrace platform thinking, especially when it comes to AI best practice. You cannot build a best-practice system or architecture if you do not do it in a platform way – that is where you get tremendous scale and benefits.
“The third thing that’s changed drastically is the customer expectation. Ten years ago, every customer said such things were about balance – where you want to be on the pendulum between security and speed. It’s completely changed. You need both. So it’s: I’ve got to get AI, but I can’t take on just any AI. The requirement is: how do I get trusted AI? You need a partner, you need technology, you need frameworks.”
Naturally, Bhargava believes Cisco is the only company that can offer all those things. But he makes a compelling case.
“Fortune 1000 companies are coming to Cisco aggressively and saying, ‘Hey, we need help to make our apps, our models, our agents trustworthy, so that we have confidence to let them act autonomously and independently.’ They are saying, ‘We don’t have that trust right now, because we don’t have the visibility; we don’t understand the supply chain; we don’t have validation; we don’t have guardrails.’”
They do, however, have one essential element: a sense of urgency. But they need to pair that with the right partners and products to avoid massive losses in both time and profit. This is all the more critical because many early design decisions in AI platforms will be extremely difficult to unwind a few years later.

“I’m working with a lot of customers right now that want to build trustworthy AI. The challenge is that they have not used AI in the right way to build their product, and adding on trust after the fact is expensive. If you have a model or agent where 80% of attacks get through, there’s a lot of processing overhead, because you need to keep blocking things. You keep getting bad outputs, so you need to block them.
“The result is that the user experience is not so good. The right thing to do, when you are building the product, is to use AI in an approach called spec-driven development. You define the specs very clearly, so that the AI cannot hallucinate, so that the AI builds good system architecture, so that the AI self-corrects, so that the AI does proper testing of itself.
“If you do that upfront, you can build the AI in a secure way. Then, when you get to running the AI, the guardrails and other things work more seamlessly and it’s not a problem for the guardrails to block things. Then you have a better user experience of a product that is secure, reliable and much more trustworthy.”
Which brings us right back to the question of risk. Bhargava says it is a misconception that we must wait for AI to be secure enough before we start using it. He shares an anecdote from an executive symposium Cisco hosted on the sidelines of the conference.
The chief security officer of one of the biggest retail groups in the USA was asked what advice he would give people who hadn’t started on the AI journey yet. Instead of advice, he shared this analogy:
“One day I was at a bar and I was drinking until late at night. When it was time to go home, I said, oh, should I drive my car? But I’m drunk. So what I did was trust AI. I called an autonomous vehicle. I got a Waymo, the Waymo took me home safely, and I carried on working the next day.
“And I told my team this lesson: sometimes we think that AI is not safe enough, but actually it made sure I could be at work today. If we’re not using AI, it is like driving home drunk.”
“What we need is to use AI, but not any kind of AI,” was Bhargava’s own lesson. “We need trustworthy AI. If we’re not using that kind of AI, then we are falling behind. It’s like we are getting more and more drunk and keep trying to drive. That is the power of the moment right now.”
* Arthur Goldstuck is CEO of World Wide Worx, editor-in-chief of Gadget.co.za, and author of The Hitchhiker’s Guide to AI – The African Edge.
