
Gadget of the Week: AI agents are here – now for agent sprawl

What is it?

The concept of the AI agent has emerged from the shadows, both because it has gone mainstream and because it threatens to get out of control.

AI agents are apps that act as intelligent, autonomous assistants, and they usually start as small fixes: someone automates a report, another team sets up a workflow, a manager asks for updates to arrive without being chased, a homeowner schedules reminders for the whole family for a year ahead.

Over time, those agents multiply across departments, systems and activities, working continuously in the background. Their creators lose track of how many exist, what they touch, and who remains responsible when something goes wrong.

I use the term agent sprawl to describe what follows. It grows out of sensible decisions made close to the task. Agents get deployed to watch information, pull data together, send messages, prepare reports, or trigger actions automatically. Once created, they keep working without being asked again. Each one usually makes sense on its own. The problem comes with accumulation, as agents spread across platforms, vendors, teams and even families, carrying permissions and accessing data or diaries long after their original purpose has faded.
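
To make the accumulation concrete, here is a minimal sketch, in Python, of the kind of inventory that governing agent sprawl implies. Every name and field below is a hypothetical illustration, not any vendor's actual schema or API; the point is simply that each agent needs an owner, a purpose, a record of what it can touch and a review date, so that the stale ones can be found.

# Hypothetical agent inventory: illustrative names and fields only,
# not any vendor's actual schema or API.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRecord:
    name: str             # what the agent is called
    owner: str            # who remains accountable for it
    purpose: str          # the task it was created to handle
    permissions: list     # data and systems it can touch
    last_reviewed: date   # when a human last confirmed it is still needed

# A handful of agents, each sensible on its own.
inventory = [
    AgentRecord("report-bot", "finance", "weekly sales report",
                ["sales-db"], date(2025, 3, 1)),
    AgentRecord("reminder-bot", "household", "family calendar reminders",
                ["shared-calendar"], date(2024, 6, 15)),
]

# Sprawl shows up as agents nobody has reviewed recently.
STALE_AFTER_DAYS = 180
for agent in inventory:
    age = (date.today() - agent.last_reviewed).days
    if age > STALE_AFTER_DAYS:
        print(f"Review {agent.name}: owned by {agent.owner}, "
              f"still holds {agent.permissions}, unreviewed for {age} days")

Even two entries make the pattern visible: the question is never whether one agent is useful, but whether anyone is still checking the full list.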

Last week, during the Johannesburg leg of the Microsoft AI Tour at the Sandton Convention Centre, I put the question of agent sprawl to Mark Chaban, corporate vice president for commercial cloud solutions at Microsoft. He was here to deliver a keynote address on the emergence of frontier firms: companies that use intelligent technology like AI agents to enhance productivity.

He instantly agreed.

“What happens when you have so many agents is that you lose track and you lose control. That’s when you start having agents go rogue, operating in the background, a lot like excess code. That is what happens when these agents are not deployed leveraging responsible AI guardrails, of fairness, of accountability, of transparency, and so on.”

The scale of that challenge is already visible. Chaban pointed to how quickly agent-based work structures are forming inside organisations.

“IDC (International Data Corporation) says that by 2028 you’re going to have 1.3-billion agents deployed worldwide. We already see that with customers all across Europe, Middle East and Africa doing that at massive, massive scale. They want to go department by department to give deeper capability, agentic capability, the ability for people to create agents.”

Ease of creation accelerates the process.

“So for example, HR, legal, finance, engineering, I want to give them the ability to create agents. And very quickly, with Copilot Studio, you can take a broken business process that’s frustrating you, and you can quickly create an agent for that.”

At that point, oversight is less about IT governance and more about management, says Chaban.

“The question is, how do you extend the governance that you do today for your human beings?”

In response, many of the leading AI developers have rolled out agent management or orchestration platforms. Microsoft’s version, Agent 365, extends Microsoft 365 security, productivity, and management tools to agentic AI.

“Agent 365 now allows me to say, where is this agent sprawl, regardless of who made the agent and which vendor this agent was produced from, Adobe, ServiceNow, SAP, anybody that you want. It gives you a view of all of your agents from a governance perspective. It tells you if any of these agents are actually going to have a regulatory failure of some kind.

“For example, the EU AI Act imposes a five million euros fine per data leakage.”

At an individual level, the relationship with agents becomes increasingly personal.

“I use seven agents every day. I can’t live without these agents. They triage my email. They help draft complex emails where I need to respond. Most times, I do not go to my inbox. I go to my draft folder because they’re waiting there for me. I look at them to see why it selected this for me, or that requires my attention, and I fine tune it to make sure that it sounds a lot more like me.”

That personalisation deepens over time.

“I think you’re going to have an AI that’s going to be a lot more personal to you. It’s going to understand how you prompt, who you communicate with, and it’s going to start to sound a lot more like you.”

Why does it make a difference?

For most people, encountering agents will be much like their first experience of a website or social media platform: a learning curve at first, then quickly part of the background.

This familiarity is the very reason that agent sprawl develops undetected over time. Rather than representing a failure of AI agents, it is a consequence of success, convenience and speed. The difficulty lies in recognising when helpful delegation turns into unmanaged accumulation, and in applying familiar management disciplines to systems that never stop working.
