Artificial Intelligence
How 2025 will set the scene for the next decade
Don’t groan when you keep hearing about AI, writes JAMES BERGIN, Xero executive GM for technology, research and advocacy.
You can tell we have hit a hype saturation point when it is hard to hear the words artificial intelligence (AI) without also hearing the groan that follows. And, after two years of hype around generative AI in particular, there are clear signals that we’re heading into that situation – the low that tends to follow the high – as the public mood shifts and the gap between expectation and reality becomes more apparent.
This isn’t anything new. History is littered with apparent false starts, where a new technology burns bright and then fizzles out just as quickly.
While predicting the timing and impact of technological advancements is challenging, I find the more difficult task is anticipating which technologies will capture the imagination of the public at large – the ones that get people really excited and build their expectations of what is possible, or even promised – and how people will react, interact and adapt when those expectations are either met or wildly missed.
In some instances, “technology failures” lay the groundwork for better-timed ideas that turn out to be world changing. It may be closer to five or ten years from now, rather than 12 months, before we see the true impact of these advances. With that timescale in mind, and thanks to a convergence of societal changes and new capabilities, next year will set the scene for a dynamic decade of technological advancement.
Here are the key signals that I will be paying close attention to in 2025 as I think about what the future might hold for people in small business, their advisors, and communities around the world.
Agentic AI: From Providing Insights to Taking Action
I know I said the general generative AI saturation point might be approaching, but AI that can take action hasn’t really had its day yet. Technologists and tech enthusiasts have long imagined a world where computers could do things on their own, on our behalf, like our very own personal assistants. Apple’s Knowledge Navigator concept in the 1980s painted a vivid picture of this vision, but the hardware and software of the time couldn’t deliver on it. Now, the world might finally be ready for autonomous agents.
Currently, AI chatbots powered by large language models (LLMs) like ChatGPT are great at generating and summarising information and content based on user prompts. Many businesses have put these capabilities to work in areas like marketing and customer service, where they can answer questions, provide summaries and translate languages.
But so-called ‘agentic AI’ refers to AI agents or assistants that can take action rather than just provide insights, without needing a human to direct every step. As these capabilities emerge and are refined, and the appropriate safeguards are put in place, such agents might be able to operate as “digital employees” of a small business – performing tasks and helping to serve customers – unlocking new scale and opportunity for that business.
I’m monitoring this area because balancing the ethical and legal risks in this space with the promise and the potential will require a significant shift in both technology capability and human behaviour over the next decade. We can already see a form of agentic AI in action with self-driving cars like Waymo’s, which you can hail in San Francisco and Los Angeles right now, and which make decisions about steering, acceleration and braking without human intervention – there is no human driver in these cars. I would not be surprised if, in the business world, we start to see similar ‘autonomous driving’ of fraud prevention and detection processes, support queries, or stock ordering.
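To make the “digital employee” idea a little more concrete, here is a minimal sketch of the kind of control loop an agentic system might run for something like the stock ordering mentioned above. It is illustrative only: the tool names (check_stock, reorder_stock), the inventory figures and the approval threshold are all hypothetical, and a simple rule stands in for the LLM planner a real system would use.

```python
# Minimal sketch of an agentic loop: the agent observes state, chooses an
# action from a small set of allowed tools, and only acts autonomously
# within pre-agreed limits. Names and thresholds are illustrative, not
# any particular product's API.

APPROVAL_LIMIT = 500.00  # orders above this value are queued for a human

# Hypothetical "tools" the agent is allowed to call.
def check_stock(inventory: dict, sku: str) -> int:
    return inventory.get(sku, 0)

def reorder_stock(sku: str, quantity: int, unit_cost: float) -> str:
    total = quantity * unit_cost
    if total > APPROVAL_LIMIT:
        return f"QUEUED for human approval: {quantity} x {sku} (${total:.2f})"
    return f"ORDERED: {quantity} x {sku} (${total:.2f})"

def agent_step(inventory: dict, reorder_points: dict, costs: dict) -> list[str]:
    """One pass of the loop: inspect every SKU and act where stock is low.
    In a real agentic system an LLM would plan these steps; here a simple
    rule stands in for that planner."""
    actions = []
    for sku, minimum in reorder_points.items():
        if check_stock(inventory, sku) < minimum:
            actions.append(reorder_stock(sku, minimum * 2, costs[sku]))
    return actions

if __name__ == "__main__":
    inventory = {"coffee-beans-1kg": 3, "takeaway-cups": 40}
    reorder_points = {"coffee-beans-1kg": 10, "takeaway-cups": 100}
    costs = {"coffee-beans-1kg": 28.50, "takeaway-cups": 0.12}
    for line in agent_step(inventory, reorder_points, costs):
        print(line)
```

The point of the sketch is the guardrail: the agent handles small, routine reorders on its own, while anything above the threshold is surfaced for a person to approve, which is one plausible shape the “appropriate safeguards” mentioned above could take.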
Ambient Computing: From Tiny Screens to Vast Vistas
We are surrounded by devices – on our wrists and fingers, in our pockets, and in our homes and businesses – that have always-on access to the power of the cloud, with even more being introduced. From smart watches to smart speakers, to smart TVs, to smart rings – everything, it seems, is being ‘smartened’ through connection to computing.
This year, for example, the smart glasses and sunglasses Meta makes in partnership with Ray-Ban started to gain some real traction. Suddenly, it seems, we have somewhat smart glasses that people are happy to be seen wearing. Users – or wearers – can take photos, record videos, make calls and interact with Meta’s AI chatbot without pulling out their phone. You can instantly get weather or sports updates, or get answers to questions about what you’re looking at.
It’s that last example in particular that points to the power of ‘ambient computing’ – the ability for wearable devices to respond to the context of their environment, including location, time, objects, and user behaviour in a natural and seamless way. I’m keeping an eye on this area, particularly as battery and display technology continue to improve. How long before everyday glasses or contact lenses combine AI and augmented reality (AR) features and are able to overlay digital information based on what people see in their field of view?
It is interesting to think about how this embedding of computing might change how small businesses operate. Shop owners could walk through their store and, in effect, talk to their products and have their shelves talk back to them – whether through smart glasses that highlight products needing to be restocked or displayed differently, through virtual avatars on smart speakers that answer questions about what has sold particularly well or how pricing should change, or through products that reorder themselves. The change may be gradual, but it could eventually feel as ordinary as checking a price on your smartphone does today.
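As a rough illustration of how “shelves that talk back” might work under the hood, the sketch below combines a wearer’s context (the aisle they are standing in) with stock and sales data to decide what a pair of smart glasses should overlay. Every data structure and threshold here is invented for the example; the point is simply that the device reacts to where the owner is standing rather than waiting to be asked.

```python
# Sketch of an ambient-computing rule: given the shop owner's current
# location in the store, surface only the overlays relevant to the shelf
# in front of them. All data structures here are illustrative.

from dataclasses import dataclass

@dataclass
class ShelfItem:
    name: str
    aisle: str
    stock: int
    weekly_sales: int

def overlays_for_location(aisle: str, items: list[ShelfItem]) -> list[str]:
    """Return the messages an AR display might overlay for one aisle."""
    messages = []
    for item in items:
        if item.aisle != aisle:
            continue  # ambient systems filter by context, not by query
        cover = item.stock / item.weekly_sales if item.weekly_sales else float("inf")
        if cover < 1:
            messages.append(f"{item.name}: restock soon (under a week of cover)")
        elif item.weekly_sales == 0:
            messages.append(f"{item.name}: not selling, consider re-merchandising")
    return messages

if __name__ == "__main__":
    shelf = [
        ShelfItem("Oat milk 1L", aisle="A3", stock=4, weekly_sales=12),
        ShelfItem("Scented candles", aisle="A3", stock=30, weekly_sales=0),
        ShelfItem("Dish soap", aisle="B1", stock=50, weekly_sales=10),
    ]
    for msg in overlays_for_location("A3", shelf):
        print(msg)
```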
Crypto and Blockchain: From Financial Experimentation to Digital Gold
Over the years, I’ve seen various critiques of the practical applications and real-world utility of distributed ledger technologies – particularly cryptocurrencies and blockchain-related technologies. One common criticism, for example, is that the technologies in this space are solutions in search of a problem, offering no real benefit to consumers or businesses who seem satisfied with the current financial system.
Yet, even as the hype fades, the technology continues to progress and is starting to show its utility. There are interesting signals, like central banks around the world continuing to experiment with central bank digital currencies (CBDCs) to make cross-border payments and trade faster and more efficient. China expanded the pilot programme for its Digital Yuan (e-CNY), and The Bahamas became one of the first countries to fully implement a CBDC, the Sand Dollar, for example.
Then we have stablecoins: digital currencies designed to maintain a stable value relative to a traditional currency like the US dollar. These provide a more predictable store of value, and new types are emerging, such as commodity-backed stablecoins, which are ‘pegged’ to the value of a physical commodity such as gold or oil so that investors can verify the value of the underlying asset.
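As a back-of-the-envelope illustration of what “verifying the value of the underlying asset” means in practice, the implied backing of a commodity-backed token is simply the audited reserve multiplied by the commodity’s market price, divided by the tokens in circulation. The figures below are invented purely for the arithmetic.

```python
# Illustrative arithmetic for a gold-backed stablecoin (all numbers invented).
reserve_ounces = 120_000          # audited gold held in reserve
gold_price_usd = 2_400.00         # market price per ounce
tokens_outstanding = 280_000_000  # tokens in circulation

backing_per_token = (reserve_ounces * gold_price_usd) / tokens_outstanding
print(f"Implied backing per token: ${backing_per_token:.4f}")
# If the token trades meaningfully above or below this figure, the peg
# (or the reserve attestation) deserves a closer look.
```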
And this is before we even get to other really interesting non-financial use cases like decentralised digital identity (where you can attest to a claim about your identity, such as being over 18, without having to share sensitive information like your birthdate) – something that is gaining real traction in multiple jurisdictions and which might finally be about to break into the mainstream.
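The exact mechanics of attesting to a claim without sharing the data behind it vary by scheme, but the sketch below shows a simplified, hash-based style of selective disclosure: the issuer commits to salted hashes of each claim, the holder reveals only the claim they choose (here, “over_18”), and the verifier checks it against the issuer’s commitment without ever seeing the birthdate. The signing of the commitment is abstracted away, and this does not follow any particular standard’s wire format.

```python
# Simplified selective-disclosure sketch (not a specific standard).
# The issuer publishes a signed list of salted claim hashes; the holder
# discloses only the claims they want a verifier to see.

import hashlib
import json
import secrets

def commit(claim_name: str, claim_value, salt: str) -> str:
    payload = json.dumps([salt, claim_name, claim_value])
    return hashlib.sha256(payload.encode()).hexdigest()

# 1. Issuer: hash every claim with a fresh salt and sign the digest list
#    (signature omitted here; assume the verifier trusts the digest list).
claims = {"name": "A. Holder", "birthdate": "1990-04-02", "over_18": True}
salts = {k: secrets.token_hex(16) for k in claims}
signed_digests = {commit(k, v, salts[k]) for k, v in claims.items()}

# 2. Holder: disclose only the "over_18" claim plus its salt.
disclosure = ("over_18", True, salts["over_18"])

# 3. Verifier: recompute the hash and check it against the signed list.
#    The birthdate itself is never shared.
name, value, salt = disclosure
is_valid = commit(name, value, salt) in signed_digests
print(f"over_18 attested: {value}, verified against issuer commitment: {is_valid}")
```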
Regardless, I’m continuing to monitor developments in this space because, as the technology continues to evolve, the concept of verifiable ownership of non-replicable digital assets could become the norm for small businesses. They could securely own, manage and sell digital assets such as intellectual property or supply chain data, and the technologies could protect against counterfeit goods or unauthorised use. We’re still a way off from widespread adoption, but the shape of something interesting is starting to form for the decade ahead.
Underestimating the Long Term
It can be tempting around this time of year to focus on the year ahead and the immediate future we are facing. But I think the American futurist Roy Amara said it best when he observed that we often overestimate the short-term impact of technology and underestimate its long-term consequences. As we move into 2025, Amara’s Law is a helpful reminder not to get caught up in the hype of a new technology and what it might do for us in the short term, and not to be too quick to write a technology off as soon as it shows some shortcomings.
It might not be that particular technology’s year, but it could just as easily be its decade.