Over the past two years, the rapid expansion of generative AI (GenAI) has driven widespread innovation and a global surge in demand from enterprises.
However, this drive for swift advancement has heightened risks, as the urgency to progress often results in security compromises.
With malicious actors increasingly exploiting GenAI to scale their operations, attacks are becoming more frequent and damaging than ever. Protecting GenAI-powered enterprise applications requires implementing key security controls around the infrastructure that holds and processes vast amounts of sensitive enterprise data. Ensuring these measures are in place is crucial for organisations to deploy such applications with confidence.
AI Agents are multiplying
GenAI has rapidly evolved from content creation tools to the 2025 phenomenon of autonomous agents capable of making decisions and taking actions. While not yet widely used in production, these AI agents are expected to see rapid adoption due to their benefits. However, this shift introduces security challenges, particularly in managing machine identities (AI agents) that may behave unpredictably. Enterprises will need to secure AI agents at scale, potentially overseeing thousands or even millions at once.
Key considerations include authenticating AI agents, managing and restricting their access and controlling their lifecycle to prevent rogue agents from retaining unnecessary permissions. It’s also crucial to ensure AI agents carry out their intended functions within enterprise systems. As this technology advances, best practices for secure integration will emerge. However, securing the backend infrastructure of GenAI implementations will be essential to running AI agents on a robust and protected platform.
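The considerations above — authenticating agents, restricting access, and controlling lifecycle — can be sketched in code. The following is a minimal illustrative sketch (all class and method names are hypothetical, not a real product API): a registry that issues short-lived, scoped credentials to AI agents so that permissions expire automatically and retiring an agent revokes everything it held.

```python
import secrets
import time

class AgentCredentialRegistry:
    """Illustrative sketch: short-lived, scoped credentials for AI agents."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._active = {}  # token -> (agent_id, scopes, expiry)

    def issue(self, agent_id, scopes):
        # Each credential is random, scoped, and expires automatically,
        # so no agent retains standing permissions indefinitely.
        token = secrets.token_urlsafe(32)
        self._active[token] = (agent_id, frozenset(scopes), time.time() + self.ttl)
        return token

    def authorize(self, token, scope):
        entry = self._active.get(token)
        if entry is None:
            return False
        agent_id, scopes, expiry = entry
        if time.time() > expiry:  # expired credentials are purged on use
            del self._active[token]
            return False
        return scope in scopes

    def revoke_agent(self, agent_id):
        # Lifecycle control: retiring an agent removes all of its tokens,
        # preventing a rogue agent from keeping unnecessary permissions.
        self._active = {t: e for t, e in self._active.items() if e[0] != agent_id}
```

In practice this role would be played by an enterprise identity platform rather than application code; the sketch only shows the pattern of expiring, revocable, scoped machine credentials.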
Addressing Evolving Security Concerns
As with any emerging technology, GenAI must be secured as it becomes mainstream, and identity security practices must evolve in tandem to address new challenges. GenAI introduces unique security concerns that require continuous adaptation, such as protecting against prompt injection attacks, which can expose sensitive data or trigger unintended actions.
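To make the prompt injection risk concrete, here is a deliberately simple pre-screen that rejects suspicious input before it reaches a model. This is only an illustration of the idea, not a complete defence — real mitigations are layered (input and output filtering, privilege separation, output validation) — and the patterns below are hypothetical examples.

```python
import re

# Naive example patterns for common injection phrasings; a real deployment
# would use layered, continuously updated defences rather than a static list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal (your|the) system prompt",
    r"disregard .* (rules|guidelines)",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)
```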
However, it’s important to remember that GenAI-powered applications rely on underlying systems and databases. Without securing this core infrastructure, enterprise applications become vulnerable to serious attacks, such as data leaks, poisoning, model manipulation, or service disruption.
Many identities – human or machine – that have access to critical infrastructure are prime targets for attackers. Identity-related breaches are a leading cyberattack vector, so identifying, managing, and securing these identities is vital. Fortunately, securing these identities aligns with established best practices for protecting other environments, especially cloud infrastructure, where most GenAI components are deployed.
Key Elements of Enterprise GenAI-powered Applications
When developing GenAI-powered applications, several critical components must be considered. Application interfaces, such as APIs, act as gateways for users and applications to interact with GenAI systems, making their security essential to prevent unauthorised access and ensure only legitimate requests are processed.
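One way to ensure only legitimate requests reach a GenAI interface is to require a message signature at the API layer. The sketch below is a minimal illustration assuming a shared secret between caller and gateway (the function name and scheme are illustrative, not a specific vendor's API):

```python
import hashlib
import hmac

def verify_request(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Check that the request body was signed with the shared secret."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when comparing signatures
    return hmac.compare_digest(expected, signature_hex)
```

A gateway would reject any request whose signature fails this check before the payload ever reaches the model, which is one building block of the access control described above.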
Additionally, learning models and large language models (LLMs) analyse vast amounts of data to identify patterns and make predictions, with most enterprises relying on leading LLMs from providers like OpenAI, Google, and Meta. While these models are trained on public data, enterprises must further refine them with proprietary data to gain a competitive advantage. However, while leveraging internal data is key to developing unique GenAI applications, protecting sensitive information from leaks or loss is a top priority. Finally, deployment environments, whether on-premises or in the cloud, must be secured with stringent identity security measures to ensure the safe operation of AI applications.
Enforcing Stronger Identity Security
Implementing strong identity security measures is essential to mitigate risks and protect the integrity of GenAI applications. Many identities have high levels of access to critical infrastructure and, if compromised, could provide attackers with multiple entry points. It is important to emphasise that privileged users include not just IT and cloud teams but also business users, data scientists, developers and DevOps engineers.
A compromised developer identity, for instance, could grant access to sensitive code, cloud functions, and enterprise data. Additionally, the GenAI backbone relies heavily on machine identities to manage resources and enforce security. As machine identities often outnumber human ones, securing them is crucial. Adopting a Zero Trust approach is vital, extending security controls beyond basic authentication and role-based access to minimise potential attack surfaces.
To enhance identity security across all types of identities, several key controls should be implemented:

- Enforce strong adaptive multi-factor authentication (MFA) for all user access to prevent unauthorised entry.
- Secure access to credentials, keys, certificates, and secrets, whether used by humans, backend applications, or scripts: audit their use, rotate them regularly, and ensure that API keys or tokens that cannot be rotated automatically are not permanently assigned.
- Expose only the minimum necessary systems and services.
- Implement zero standing privileges (ZSP), so users hold no permanent access rights and can only assume specific roles when required. Where ZSP is not feasible, apply least privilege access to minimise the attack surface if a user is compromised.
- Isolate and audit sessions for all users accessing the GenAI backend infrastructure.
- Centrally monitor user behaviour for forensics, audits, and compliance, logging and tracking any changes, to maintain a secure and well-governed AI environment.
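The zero standing privileges control mentioned above can be sketched as a just-in-time access broker: a user holds no permanent rights, a role is granted only for a bounded window, and checks fail automatically once the window closes. All names below are illustrative, not a real product API.

```python
import time

class JitAccessBroker:
    """Illustrative ZSP sketch: roles are granted just-in-time and expire."""

    def __init__(self):
        self._grants = {}  # (user, role) -> expiry timestamp

    def request_role(self, user, role, duration_s=900):
        # In a real system this step would sit behind approval workflows
        # and policy checks; here we simply record a bounded grant.
        self._grants[(user, role)] = time.time() + duration_s

    def has_role(self, user, role):
        expiry = self._grants.get((user, role))
        if expiry is None or time.time() > expiry:
            # Expired grants are removed, so standing access never accrues.
            self._grants.pop((user, role), None)
            return False
        return True
```

The key property is that access is a time-bounded event rather than a permanent attribute of the identity, which is what distinguishes ZSP from traditional role assignment.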
Upholding Security Without Sacrificing Usability in GenAI Projects
When devising your strategy for enforcing security and privilege controls, it's important to note that GenAI-related projects will likely draw considerable attention across the organisation. In these cases, both development teams and broader corporate initiatives may perceive security controls as potential obstacles. The challenge is further amplified by the need to safeguard a diversified group of identities, each requiring distinct levels of access and using various tools and interfaces. Therefore, it's essential that the security controls implemented are scalable and sympathetic to users' experience and expectations, without impeding productivity or performance.
About the Author
Yuval Moss is Vice President of Solutions for Global Strategic Partners at CyberArk. CyberArk is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity – human or machine – across business applications, distributed workforces, hybrid cloud workloads, and the DevOps lifecycle. The world’s leading organizations trust CyberArk to help secure their most critical assets. For over 25 years, CyberArk has led the market in securing enterprises against cyber attacks that take cover behind insider privileges and attack critical enterprise assets. Today, only CyberArk delivers a new category of targeted security solutions that help leaders stop reacting to cyber threats and get ahead of them, preventing attack escalation before irreparable business harm is done. At a time when auditors and regulators recognize that privileged accounts are the fast track for cyber attacks and demand stronger protection, CyberArk’s security solutions master high-stakes compliance and audit requirements while arming businesses to protect what matters most.