What once felt like a distant move towards agentic AI is now becoming a reality.
According to a study by Deloitte, 52% of companies are interested in using GenAI for automation, and one in four leaders (26%) say their organisations are already exploring it to a large extent.
Agentic AI refers to systems capable of independently executing tasks and making decisions aligned with the user’s intent. They interpret surroundings, process information and accomplish specific objectives, continuously learning and improving through advanced algorithms and machine learning. Their ability to streamline workflows and accelerate decision-making makes them a powerful force for productivity and a competitive advantage for businesses.
As these agents handle more complex business functions, their transformative impact is only just beginning. But as these systems evolve, they bring with them emerging security risks that many organisations are still unprepared to address.
Although AI agents are not yet widespread across enterprises, adoption is accelerating as organisations recognise the significant value they offer. Employees across all levels and their associated identities – from business users to IT professionals to developers, and even to the devices and applications they use – will soon start interacting with resources and services through AI-powered agents. AI agents are set to become embedded in everything from operating systems and browsers to platforms and widely-used tools like Microsoft Teams. As adoption continues to grow, businesses will either build their own agents or rely on agent-as-a-service solutions offered by SaaS providers.
As employees collaborate with AI-driven agents, their productivity could skyrocket. These agents will not only streamline workflows but also redefine how tasks are delegated and executed, effectively transforming users into managers of their own virtual teams. This shift will reshape traditional roles, making work more dynamic, efficient, and aligned with strategic goals.
As agentic AI integrates seamlessly into business operations and simplifies a growing range of tasks, organisations will find that avoiding its adoption is no longer a viable option. The key challenge, then, is understanding and mitigating the security implications.
The invisible autonomy of shadow AI agents
One of the biggest security challenges is the rise of shadow AI agents, in other words, AI-powered tools that have been deployed without the knowledge of IT and security teams. These agents can be introduced by individual employees, often bypassing standard security processes.
Because these agents can function independently and without supervision, they can introduce risks in unforeseen areas, creating blind spots for security teams. Without proper oversight, they can become a significant security vulnerability with the potential to expose sensitive company data or provide entry points for malicious actors.
How developers are becoming the new R&D and operations teams
The role of developers is also evolving. No longer just coders, they are now key players in research, development, and operations. Generative AI has already enhanced developer productivity, but now this is going one step further. Developers will soon manage the entire application lifecycle, from coding and integration to QA, deployment and troubleshooting – all autonomously.
With this increased autonomy comes increased privilege. If a developer’s identity is compromised, the risk escalates dramatically, making it one of the most valuable, but also vulnerable identities in the enterprise. Securing these identities must therefore be a top priority to prevent attackers from exploiting AI-powered development environments.
Assessing the risks and implications of human-in-the-loop systems
As organisations integrate agentic AI, humans will continue to play a critical role in oversight and governance. These ‘human-in-the-loop’ processes are essential for ensuring that AI agents operate as intended, validating their actions, and approving exceptions and requests from agents. Human input will also shape the future behaviour of these self-learning AI agents.
However, malicious actors may target these individuals to infiltrate the architecture, escalate privileges and gain unauthorised access to systems and data. Balancing the necessity of human oversight with strong security measures will therefore be essential to minimising risk.
Organising millions of AI agents
One of the biggest hurdles for enterprises is the sheer scale of AI agent deployment. Machine identities already outnumber human identities by up to 45-to-1, and 76% of security leaders expect the number of machine identities in their organisation to increase by as much as 150% over the next year. Meanwhile, NVIDIA’s Jensen Huang predicts that 50,000 humans could manage 100 million AI agents per department, meaning the ratio could soar to over 2,000-to-1.
To maintain security, best practice will involve dividing tasks among multiple specialised AI agents, each with defined roles and responsibilities to help mitigate risk and maximise efficiency.
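As a rough illustration of this role-scoping idea, the sketch below (a hypothetical example, not any specific vendor's implementation) registers each specialised agent with an explicit, narrow set of permitted actions, so a request outside an agent's defined role is refused rather than silently executed:

```python
from dataclasses import dataclass

# Hypothetical sketch: each specialised agent carries a narrow,
# explicit set of allowed actions (least privilege by design).
@dataclass(frozen=True)
class ScopedAgent:
    name: str
    role: str
    allowed_actions: frozenset

    def can(self, action: str) -> bool:
        return action in self.allowed_actions

class AgentRegistry:
    """Tracks registered agents and enforces their role scopes on dispatch."""
    def __init__(self):
        self._agents = {}

    def register(self, agent: ScopedAgent) -> None:
        self._agents[agent.name] = agent

    def dispatch(self, agent_name: str, action: str) -> str:
        agent = self._agents[agent_name]
        if not agent.can(action):
            # A task outside the agent's role is rejected, not delegated.
            raise PermissionError(f"{agent_name} may not perform {action}")
        return f"{agent_name} executed {action}"

registry = AgentRegistry()
registry.register(ScopedAgent("invoice-bot", "finance",
                              frozenset({"read_invoice", "flag_anomaly"})))
registry.register(ScopedAgent("deploy-bot", "devops",
                              frozenset({"run_pipeline"})))
```

Splitting responsibilities this way means a compromised or misbehaving agent can only act within its own narrow scope, limiting the blast radius of any single failure.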
Securing the future of agentic AI
Successful large-scale deployment of agentic AI requires a focus on safety, compliance, and building trust in these systems. In practice, that means complete visibility, strong authentication, least-privilege and just-in-time access, and thorough session audits that link every action directly to a user identity – for human and machine identities alike.
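To make those controls concrete, here is a minimal sketch (hypothetical names and logic, not a production design) of a just-in-time access broker: it issues short-lived, scope-limited tokens and keeps an append-only audit trail tying every grant, action, and denial back to the requesting identity:

```python
import time
import uuid

# Hypothetical sketch: short-lived, least-privilege grants plus an
# append-only audit log linking every action to an identity.
class JITAccessBroker:
    def __init__(self):
        self._grants = {}    # token -> (identity, scope, expiry timestamp)
        self.audit_log = []  # append-only record of grants, actions, denials

    def request_access(self, identity: str, scope: str,
                       ttl_seconds: int = 300) -> str:
        """Issue a token limited to one scope, expiring after ttl_seconds."""
        token = str(uuid.uuid4())
        self._grants[token] = (identity, scope, time.time() + ttl_seconds)
        self.audit_log.append(
            {"event": "grant", "identity": identity, "scope": scope})
        return token

    def perform(self, token: str, action: str) -> bool:
        """Allow the action only if the token is live and in scope."""
        grant = self._grants.get(token)
        if grant is None:
            return False
        identity, scope, expiry = grant
        if time.time() > expiry or not action.startswith(scope):
            self.audit_log.append(
                {"event": "denied", "identity": identity, "action": action})
            return False
        self.audit_log.append(
            {"event": "action", "identity": identity, "action": action})
        return True
```

Because every event in the log carries the originating identity, a session audit can reconstruct exactly which agent (or human) did what, and when any out-of-scope attempt was refused.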
It’s crucial to acknowledge that, in light of the rapid advancements in this field, many of the challenges we face today were not anticipated only a few months ago. While not exhaustive, the examples highlighted above reveal the dramatic shifts and potential risks associated with the widespread adoption of agentic AI.
With agentic AI reshaping enterprise operations, organisations must stay alert and proactive. As new challenges emerge, one thing is certain: agentic AI is here to stay. The real question is not whether enterprises will adopt it, but how they will secure it.
About the Author
Yuval Moss is Vice President of solutions for Global Strategic Partners at CyberArk. CyberArk is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity – human or machine – across business applications, distributed workforces, hybrid cloud workloads, and the DevOps lifecycle. The world’s leading organizations trust CyberArk to help secure their most critical assets.