From costing companies billions to becoming more autonomous, here’s what experts expect from agentic AI this year.
Billions in investment and a concerted focus on the tech over the past few years have led to artificial intelligence completely transforming how major global industries work. Now, investors are finally expecting to see some returns.
SiliconRepublic.com covered major breaking stories in the tech sector this past year, and unsurprisingly, many had to do with AI.
The year saw a still-unresolved back-and-forth between the US and China over Nvidia’s AI chips; new, more complex AI models built faster than ever before; and funding rounds that made OpenAI one of the richest private companies globally.
With all eyes on the tech, here’s what experts believe you can expect from agentic AI in 2026.
Investors want measurable ROI
Investors will no longer be satisfied with AI’s potential future capabilities – they want measurable returns on investment (ROI), says Jiahao Sun, the CEO of Flock.ie, a platform that allows users to build, train and deploy AI models in a decentralised manner. AI investment is entering its “show me the money” era, he says.
This isn’t to say that investment in AI will pause – take, for example, OpenAI’s most recent $100m acquisition of AI health-tech Torch – but that investors will begin prioritising critical areas that promise clear returns.
These could include agentic AI platforms that enable multi-agent orchestration; AI-native infrastructures built for scale, security, and interoperability; data modernisation tools that unlock the full potential of unstructured data; and AI observability and safety tools that monitor, govern, and refine agent behaviour in real time, explains Neeraj Abhyankar, the VP of Data and AI at R Systems.
“Enterprises that take these investments seriously will build AI systems that are not only powerful, but trustworthy, scalable and sustainable.”
Expect business acquisitions to continue
Companies such as Meta and Apple will acquire innovation rather than build it, comments Max Sinclair, the founder and CEO of AI visibility start-up Azoma.
“Single-purpose tools will be absorbed into unified AI platforms. The era of juggling ten different AI products is ending and the race to offer a complete, integrated experience will intensify,” he adds.
Meanwhile, some experts say that the EU’s AI Act will – for better or for worse – prohibit European firms from experimenting with high-risk use cases for AI. This, in turn, will make European companies more reliant on their US counterparts.
“The European Union is making a concerted effort to become more competitive and reduce its dependence on global tech infrastructure,” said Dane Anderson, the senior vice-president of international research and product at Forrester. “However, the reality is that ongoing volatility and operational constraints will compel European businesses to pursue more pragmatic strategies in both the long and short term.”
Growing agentic AI traffic will create security blind spots
Experts across the board agree that AI is set to take centre stage in this era of cybersecurity, and security teams need to evolve to keep up.
Melissa Ruzzi, the director of AI at AppOmni, predicts that AI security risks will grow even more this year, stemming from excessive permissions granted to AI and a lack of instructions about how it should choose and use tools, potentially leading to data breaches.
“This will come from increased pressure from users expecting AI agents to become more powerful, and organisations under pressure to develop and release agents to production as fast as possible.
“And it will be especially true for AI agents running in SaaS environments, where sensitive data is likely already present and misconfigurations may already pose a risk.”
Agentic AI acts on behalf of human users and is, as a result, more prone to becoming a security blind spot. “These agents will make purchases, create and manage accounts, and engage directly with various platforms, generating a new level of automated service-to-service traffic that few security teams can detect or validate,” explains Ethan Smith, the co-founder of Spur Intelligence.
Smith says that existing methods of detecting malicious AI activity will not hold up as agentic AI traffic grows.
True AGI not here yet
Tech giants worldwide are gearing up for artificial general intelligence (AGI) – advanced AI systems that are expected to be “smarter” than humans. But AppOmni’s Ruzzi believes that might not be achieved before the next decade. Instead, the next generation of generative AI (GenAI) could be dubbed AGI, she says – “which would then force the market to create a new acronym for the true AGI”.
The big risk in AGI is similar to that in GenAI: the focus on functionality clouds proper cybersecurity due diligence.
By trying to make AI as powerful as it can be, organisations may misconfigure settings, leading to over-permissioning and data exposure. They may also grant too much power to a single AI, creating a major single point of failure.
Robotics to grow, backed by AI
Announcing a new family of open-source AI models for more advanced reasoning-based autonomous vehicles, Nvidia CEO Jensen Huang said, “The ChatGPT moment for physical AI is here…robotaxis are among the first to benefit”.
Kalpak Shah, the head of tech, internet and platforms at R Systems, goes a step further. He says that the physical-digital convergence will continue, with IoT, edge, digital twins, AR and AI supercharging robotics and operational workflows.
The International Federation of Robotics, meanwhile, notes that “robots that use artificial intelligence to work independently are becoming more common”.
Even SoftBank, which sold all of its Nvidia stock just months ago, acquired ABB Robotics as a “major step forward into ‘physical AI’”.
Firms to hire AI governance heads
Forrester predicts that around 60pc of Fortune 100 companies will appoint a head of AI governance this year, as a result of growing and, at times, fragmented legislation governing the tech across the EU and the US. Sony, Bank of America and UBS have already done so.
This comes after tech execs already faced mounting pressure in 2025 from unmet AI expectations, budget cuts and economic instability.
Specialised AI-based roles will grow
Last year, in an interview with SiliconRepublic.com, Forrester expert Craig Le Clair said that AI ‘worker agents’ – AI with a “job description” – can only be expected in 2027. This year, AI transformation in the workforce will hinge on specialised roles.
R Systems’ Abhyankar says that these specialised roles will see human workers collaborating with AI that has evolved from a tool into autonomous agents acting as digital co-workers with defined responsibilities and KPIs.
“In 2026, expect to see more AI integration architects who will be essential in embedding agentic workflows into enterprise systems,” he says. “Prompt engineers and large language model ops specialists will continue to emerge to fine-tune GenAI models for precision, performance, and reliability.”
Meanwhile, Shah explains, “In 2026, we’ll see knowledge workers become ‘agent managers’.”
“Today, developers already run agents to fix bugs, code, or implement new features, so I anticipate this expanding to other disciplines, such as law, finance, and consulting.”