Enterprise Ecosystems

Is Agentic the next frontier for AI?

As the limitations of generative AI become clearer, agentic AI has emerged as the next trend within the sector – to the extent that the term has been widely adopted, poorly defined, and consequently often misunderstood amidst all the hype.

A recent Gartner report predicted that 70% of agentic AI projects would fail by 2026 – a claim that has been met with pushback, as many AI projects are mislabelled as agentic. So what exactly defines agentic AI? What are its practical applications? And will the technology be abandoned as widely as Gartner expects?

Agentic, not generative

Reece Hayden of ABI Research defines agentic AI as an LLM-powered system that can autonomously perform actions – mostly iteratively – apply reasoning along the way, and achieve an outcome. In other words, it is AI that not only takes prompts, answers questions and offers suggestions, but also takes action on its own to achieve a goal. The standard AI we see today in OpenAI’s ChatGPT and Google’s Gemini is like a recipe book; agentic AI is a chef who takes the order and cooks it.

However, Hayden acknowledges that within this framework there are numerous variants that affect how the process is carried out – notably ‘human in the loop’, in which a human is involved at each step of the agentic process to check results and provide approval, and ‘human on the loop’, in which a human checks only the start and end of the process. Then of course there are fully autonomous cycles – ‘human out of the loop’ – although even these require human oversight to ensure that there are no unworkable hallucinations. The defining aspect is the ability to reason through problems, act iteratively, and perform autonomous actions, rather than entering an endless cycle of prompts and responses.
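
The loop Hayden describes can be sketched in a few lines of code. This is a minimal illustration, not a real framework: the `reason` and `act` functions below are hypothetical stand-ins for an LLM call and a tool call, and the approval callback is where the ‘human in the loop’ variants differ.

```python
from typing import Callable, List

# Hypothetical stand-ins for illustration: a real system would call an LLM
# in reason() and external tools in act().
def reason(goal: str, history: List[str]) -> str:
    return f"step towards: {goal}"

def act(plan: str) -> str:
    return f"done: {plan}"

def run_agent(goal: str, approve: Callable[[str], bool], max_steps: int = 3) -> List[str]:
    """Iterate reason -> act, gating each step on an approval callback.

    'Human in the loop' passes a real reviewer's approval function;
    'human out of the loop' passes one that always returns True.
    """
    history: List[str] = []
    for _ in range(max_steps):
        plan = reason(goal, history)
        if not approve(plan):  # human-in-the-loop checkpoint
            break
        history.append(act(plan))
    return history

# 'Human out of the loop': every step is auto-approved.
results = run_agent("resolve customer ticket", approve=lambda plan: True)

# 'Human in the loop': a reviewer who withholds approval halts the cycle.
checked = run_agent("resolve customer ticket", approve=lambda plan: False)
```

The key design point is that the same reason-act cycle serves all three variants – only the placement (and strictness) of the approval checkpoint changes.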

Agentic applications

For agentic AI, the obvious low-hanging fruit is customer service and HR use cases. The agent carries out reasoning based on a prompt from a human, then provides an answer which is verified and delivered to the client. Hayden notes that agentic AI can have a major impact when applied in this way, speeding up processes and enabling employees to use more data to inform their responses to customers, but suspects that this will be the limit – not from a technology standpoint, but in terms of operational feasibility.

“From my perspective, the main challenge at the moment is not technology. There are issues there, but they're being resolved at enormous speeds. The challenge is regulation, government, governance, risk, willingness, operations…which really will dictate the speed at which not only different applications or use cases can be deployed, but into which verticals, and into which situations.”

Further down the line, Hayden anticipates agentic workflows powering use cases such as optimisation, client engagement, personalised retail experiences, and end-to-end supply chain optimisation, but doubts this will happen within the next five to ten years given the operational and regulatory issues. If a manufacturer is agnostic to its supply chain, and the entire supply chain can shift overnight based on demand, supply could theoretically match demand exactly. Agentic AI could enable this possibility: it could ingest all the information from the demand side, change the processes, and match output to demand. Hayden acknowledges this use case is blue-sky thinking but argues that it is a feasible application.

Hayden notes that regulation is tightening up around agentic AI, making it even more of a challenge to operationalise. With ‘human in the loop’ AI there is accountability if something goes wrong, but it is hard to blame an AI. This is a significant issue with agentic AI, as the transition from generative to agentic is about removing humans from operations; humans are always involved in generative AI, but with agentic, the operational side will become very complex and verticalised. “There's a lot of reasons why POCs fail, but I think one of the challenges is that the real value with these is humans out of the loop, but the feasibility of doing that, even in the long term, is unlikely.”

On the knife edge

Tom Cox, Founder & CEO of virtual sales agent 15Gifts, is sceptical of Gartner’s pessimism about agentic AI, noting that not only has agentic AI not been around long enough to warrant such a negative prognosis, but that the term is frequently used inaccurately. ‘AI’ has for years been used to mislabel fairly straightforward rule-based logic, but agentic AI makes its own decisions and fulfils tasks independently: taking instructions, examining and gathering options, matching them against user requirements, then returning these to the user for a final decision. Unlike generative AI, it is not instructed or prompted at every step – it is self-learning and self-improving.

“It's the knife edge of AI. There's very little agentic AI in reality out there anywhere, but obviously everyone's jumped on the term, and it's become a broader term now for generative AI. If you're doing good generative AI tasks [with] AI which handles human language, people are tending to use agentic in that term. But true agentic AI, which is making its own decisions and is fulfilling tasks, is pretty limited in its examples.”

The rush to adopt AI – agentic or otherwise – has upended the usual discipline and rigour around prioritisation and budget spend, says Cox. Companies are pumping money into the technology through fear of losing competitive ground, or because their shareholders want to see traction in AI. This spending is fear-driven rather than outcome-driven – the focus is less on the business case, and therefore the return, and more on the perceived need to invest in AI, which requires a team as well as licences and partnerships.

“What's happening is that telcos and other companies have these ballooning AI teams, and every telco now is filling seats of all these AI people [who] are desperately trying to prove their worth, so they are finding initiatives where they could have the biggest impact. Obviously, some will succeed, but a lot will fail because they don't have enough data in a way that's structured… to be used for AI. The reality and cost and complexity of using AI - particularly agentic - to solve a problem, is immense.”

Quality control

If agents are built for particular problems, they need to be optimised and updated, which makes cost of ownership a significant issue: if agentic AI is used across different projects, the costs balloon. Cox expects telcos and other firms to start cutting AI budgets – and staff – as they begin to realise that the ROI is not what they hoped, and that there are hard limits on the accuracy agentic AI can offer. An outcome-based approach is desirable, but difficult with AI because the technology’s capabilities are often unknown, so there is a temptation to acquire it before figuring out which problems it can solve.

Cox argues it is important to be clear about the quality of data and its limitations before building anything: “AI is rubbish in, rubbish out - it's as simple as that. If you haven't got really good, unique, high quality, well-structured data, you will have a rubbish product.” However, the eagerness to invest in AI has upended business and prioritisation discipline: companies hypothesise that their data can power an AI solution to a problem and invest heavily. If it then seems the solution won’t deliver a return, they pull it and move on to the next one.

Cox notes that one way operators can address this risk is by using specialist vendors for particular problem spaces, particularly in emerging markets. By working with operators’ data sets and deploying specific, targeted AI solutions, specialist vendors can help alleviate the risks associated with cost of ownership and ROI – particularly if they can work on a pay-per-performance basis, as operators only pay if the solution works, meaning there is no upfront cost risk. Cox expects operators in emerging markets to adopt this approach given their concerns about costs, building towards an agentic AI stack where the front-end experience is controlled by the operator, which orchestrates between specialist agents – some in-house, some from vendors – to create its own unique architecture and experience. This puts operators in control without requiring them to own and manage everything – in many ways it is an ideal model for both operators and AI specialists.
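
The stack Cox describes is essentially an orchestrator pattern: an operator-controlled front end that routes each request to a specialist agent. The sketch below is a hypothetical illustration of that architecture – the agent names, routing keys, and `Orchestrator` class are assumptions for the example, not a real product API.

```python
from typing import Callable, Dict

# Illustrative specialist agents: in practice these would be in-house or
# vendor-supplied agentic services, not one-line functions.
billing_agent = lambda query: f"billing agent handled: {query}"
retention_agent = lambda query: f"vendor retention agent handled: {query}"

class Orchestrator:
    """Operator-owned front end that routes requests to specialist agents."""

    def __init__(self) -> None:
        self.agents: Dict[str, Callable[[str], str]] = {}

    def register(self, intent: str, agent: Callable[[str], str]) -> None:
        # The operator decides which agents participate in its stack.
        self.agents[intent] = agent

    def handle(self, intent: str, query: str) -> str:
        # The front end stays in control: unknown intents fall back to a
        # human rather than being guessed at by an agent.
        agent = self.agents.get(intent)
        if agent is None:
            return "escalate to human"
        return agent(query)

telco = Orchestrator()
telco.register("billing", billing_agent)
telco.register("retention", retention_agent)
answer = telco.handle("billing", "why is my bill higher?")
```

The design choice here mirrors the article’s point: the operator owns the routing layer and the customer experience, while individual agents can be swapped in from vendors without ceding control of the whole stack.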

This level of control will help to unlock the value of agentic AI. While Gartner’s 70% estimate carries some weight given the number of AI initiatives that are launched and then shelved, Hayden argues it is more a matter of taking a pragmatic approach to how and where the technology is implemented. Trialling services in practical applications, and spending to ensure they are actually used by customers, will help the technology achieve its potential.


