Nitesh Bansal discusses the growing popularity of AI agents and how they will result in a necessary change in data policy.
As explained by Nitesh Bansal, the CEO and managing director of digital product engineering company R Systems, AI agents are autonomous models with the ability to learn, perform tasks and make decisions, without the need for constant human intervention. They combine machine learning, natural language processing and reasoning to automate tasks, analyse data and optimise workflows.
“Unlike traditional automation, agentic AI adapts dynamically, enabling proactive problem-solving and multi-agent collaboration through high-level cognitive functions like thinking, reasoning and remembering, like a human mind.”
For companies, particularly those operating within the STEM sphere, agentic AI, by virtue of its ability to automate mundane and routine tasks, is becoming crucial to furthering research and innovation. As noted by Bansal, in areas such as the life sciences, AI agents can streamline clinical trials, accelerate drug discovery and bring life-changing therapies to market sooner.
Through personalised learning platforms, AI agents are also democratising access to STEM education and the tools needed to work effectively in that space. This enables anyone, whether a student, a professional or a tech enthusiast, to teach themselves the skills needed for a role in an industry under near-constant reinvention.
If you build it they will come
When it comes to deploying and using workplace AI agents, there are many challenges, from a lack of skill among staff and poor retention, to limited data quality and a weak understanding of the technology’s true potential company-wide. But for Bansal, it is the complexity of integration and the growing infrastructural demands that are plaguing the industry.
Citing research from a survey of over 1,000 enterprise technology leaders and practitioners, he noted that 42pc of respondents required eight or more data connections for successful AI agent deployment. This need for high computational power and low-latency networks often determines whether a deployment succeeds, and it can put significant pressure on available resources.
Additional research supports this: “While some companies have robust infrastructure, many face gaps. A recent study found that only 22pc of organisations have architecture ready for AI workloads without modifications, and 86pc of enterprises require upgrades to their existing tech stack in order to deploy AI agents.
“It’s important that enterprises consider their need for scalable, cloud-based solutions and access to advanced computing resources. Without them, I anticipate that many organisations will either face delays in deployment or run into issues if they don’t have a robust plan for upgrading their infrastructure in place.”
To build infrastructure strong enough to support the full capability of an organisation’s AI agents, companies can invest in a few key areas. These include high-quality data pipelines for collecting, cleaning and preparing information. Robust storage solutions and scalable computing resources are also necessary, as is the ability to integrate existing systems for widespread compatibility.
Workforce training and a deep understanding of ethical governance will underpin the entire system. According to Bansal, for AI agents to be free of bias and misuse, there must be clear policies on data, privacy and security.
Policing policy
For this to happen, he believes organisations must treat data policy as an ongoing project with clear milestones but no final end point. Due to the often private nature of the information processed by AI agents, companies should continually update and advance their data policies in line with changing regulations and improved safety methods.
“There are laws, such as GDPR and CCPA, that require robust data governance frameworks and ensure privacy and security. In order for organisations to effectively address their data policies, they must first fully assess and plan for these policy updates.
“This includes conducting a comprehensive data audit to understand their current data landscape, focusing on data sources, management practices and deployment across the business. This audit will identify gaps and areas needing improvement. They should also implement a risk-based approach when developing and deploying AI, assessing whether AI is necessary for specific contexts and identifying potential security threats.”
The continued advancement of AI in the workplace has created new opportunities for the individual, as well as the organisation. In fact, entirely new careers, such as AI trainers, prompt engineers and ethical AI auditors, have emerged as popular and exciting new avenues for professionals and companies to explore.
But it also means there are more opportunities for maliciously minded actors to infiltrate and exploit infrastructure weaknesses, especially in organisations that don’t fully comprehend the steps it takes to safely install, use and maintain agentic AI technologies.
For Bansal, now more than ever, companies need to ensure that the human element is as skilled and clued-in as the non-human elements, so the two sides can collaborate, forming a resilient and strong unit.