Emerging Risks for Enterprises: Where AI Risk Converges With Identity
The International AI Safety Report identifies a wide range of emerging risks spanning misuse, malfunctions, and systemic impacts. For enterprises, these risks become tangible when AI systems interact with core control layers such as identity platforms, access rights, privileged accounts, automated workflows, and trust relationships. In many cases, identity is either the primary attack vector or the amplification layer: weak identity governance allows incidents to escalate, spread laterally, and cause broader impact.
Understanding how these risk categories translate into enterprise identity exposure is critical for security leaders.
Risks From Malicious Use
AI-generated content and criminal activity
The report highlights that general-purpose AI systems can generate highly realistic text, audio, images, and video. These capabilities are already being used for fraud and other forms of criminal activity, particularly through impersonation and deception.
Organizations themselves are direct targets of this misuse. Attackers can use AI-generated content to pass themselves off as a trusted person, a legitimate employee, or an authorized decision-maker. Accessible AI tools lower the barrier to creating harmful content at scale, and generation quality has improved faster than many detection capabilities.
The implication is that proof of identity can no longer rely on content or appearance. Voice, video, and writing style are becoming unreliable signals. Enterprise controls need to prioritize strong, phishing-resistant authentication, out-of-band verification for high-risk actions, and fraud-resistant processes for approvals and changes.
Influence and manipulation
AI-generated content can influence beliefs and behavior, including through persuasion and impersonation. While real-world impact is still being assessed, studies indicate that AI systems can be highly persuasive in certain contexts.
For identity and security leaders, the primary link is how manipulation translates into operational compromise: influenced users are more likely to approve access, share credentials, bypass process, or trust a malicious actor. Manipulation also intersects with identity proofing and helpdesk workflows. When attackers can produce personalized, convincing narratives at scale, they can pressure human gatekeepers and exploit weaknesses in identity verification processes.
Cyberattacks
The report presents strong evidence that criminal groups and state-sponsored actors actively use AI systems in cyber operations. AI systems can assist with multiple steps in cyberattacks, including identifying vulnerabilities and generating malicious code. In controlled settings, AI agents have achieved high performance in vulnerability discovery, and the report notes real-world incidents involving semi-autonomous cyber capabilities where AI handled a large share of technical work and humans intervened at critical decision points.
Importantly for organizations, the report notes that fully autonomous end-to-end attacks have not been reported, but that capabilities are improving unevenly and automation is increasing.
The identity connection is central. Most cyberattacks ultimately depend on gaining and expanding access through credentials, tokens, privileged accounts, and persistent footholds. As AI improves reconnaissance, vulnerability discovery, and social engineering, it can accelerate the path to credential theft and privilege escalation. It can also increase the volume and quality of targeted phishing, deepfake-assisted vishing, and other identity-based intrusion techniques.
This strengthens the case for treating identity as a primary containment layer: Zero Trust access controls, continuous authentication, strong PAM, least privilege, segmentation, and identity threat detection and response become key to limiting blast radius when attacks occur.
Risks From Malfunctions
Reliability challenges
The report emphasizes that AI system failures can cause real harm. Documented issues include hallucinations (producing false or fabricated information) and flawed code generation. These failures can cause physical or psychological harm and expose users and organizations to reputational damage, financial loss, or legal liability.
A key factor increasing these risks is the rise of AI agents. Unlike earlier AI systems that primarily generated content for humans to review, agents can plan, use tools, interact with enterprise systems, and execute actions with limited human intervention.
Reliability therefore matters directly for identity security. When AI systems can take actions, they require identities, permissions, and governance controls. If such systems operate with broad permissions or high levels of authority, failures can quickly translate into operational incidents.
For organizations, this points to a pragmatic control strategy: treat AI agents and AI-enabled workflows as you would treat privileged automation.
If the system can take actions, it needs a constrained identity with scoped permissions, strong auditability, approval gates for high-impact actions, and monitoring for anomalous behavior. Reliability is not only a model issue. It is also an access and governance issue.
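The "privileged automation" pattern above can be sketched as a policy wrapper around agent actions: every action is checked against the agent's scoped permissions, high-impact actions are held for human approval, and every decision is logged. This is a minimal illustration, not a reference implementation; the `AgentPolicy` class, `execute` function, and action names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Scoped permissions and approval gates for one agent identity."""
    allowed_actions: set            # actions this agent identity is scoped to
    approval_required: set          # high-impact actions gated behind a human
    audit_log: list = field(default_factory=list)  # full audit trail

def execute(policy: AgentPolicy, agent_id: str, action: str, approver=None) -> str:
    """Run an agent action only if it is in scope; gate high-impact actions."""
    if action not in policy.allowed_actions:
        policy.audit_log.append((agent_id, action, "denied: out of scope"))
        return "denied"
    if action in policy.approval_required and approver is None:
        policy.audit_log.append((agent_id, action, "held: awaiting approval"))
        return "pending"
    policy.audit_log.append((agent_id, action, f"executed (approver={approver})"))
    return "executed"
```

In use, an agent scoped to `{"read_tickets", "reset_password"}` with `reset_password` flagged as approval-required can read tickets autonomously, but a password reset returns `"pending"` until a named approver is supplied, and anything out of scope is denied and logged.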
Loss of control
The report discusses loss-of-control scenarios in which one or more AI systems operate outside anyone’s control, with regaining control being extremely costly or impossible. It notes wide expert disagreement about likelihood but highlights potential severity. The report also describes capabilities relevant to such scenarios, including agentic planning, deception, situational awareness, oversight evasion, persuasion, and autonomous replication.
For organizations, the immediate, actionable relevance is how deployment environments shape risk. The report highlights factors such as criticality of the environment, access to resources, and permissions granted. These are concrete design choices enterprises make when deploying AI systems and agents.
Even without extreme scenarios, the report’s analysis supports a practical governance principle: the most consequential risks increase when AI systems are deployed with broad permissions in critical environments. Identity and access management therefore becomes a key risk lever.
Systemic Risks
Labor market impacts
AI can automate or augment tasks across many roles, with uneven adoption and mixed employment effects.
For organizations, the identity impact lies in how work shifts to systems and agents. As non-human actors take on tasks, machine and service identities increase rapidly and often outnumber human identities by large margins, making governance, ownership, and lifecycle management significantly more complex.
Risks to human autonomy
AI systems can influence decision-making and contribute to automation bias, where users over-rely on system outputs.
In organizations, this translates into over-trust in AI-driven recommendations within security workflows and access decisions. Clear accountability, human oversight where needed, and strong governance controls remain essential.
Risk Management: What Organizations Should Take From the Report
After outlining the major risks, the report also examines how these risks can be managed. It discusses a range of approaches used by AI developers, policymakers, and organizations to identify, assess, and reduce risks from general‑purpose AI systems. These include technical safeguards, access restrictions, monitoring of system behavior, and broader efforts to make organizations and societies more resilient to AI‑related incidents.
A key theme is that safeguards built into AI systems alone are not sufficient. Once AI tools are deployed inside enterprise environments and connected to real systems and workflows, the responsibility for safe use shifts to the organizations operating them.
This is where organizational guardrails become critical. Guardrails define the boundaries within which AI systems can be used safely: who is allowed to access powerful AI capabilities, what systems AI agents can interact with, what permissions they receive, and how their actions are monitored.
Many of the risk management practices discussed in the report therefore translate into governance decisions that organizations themselves must make when deploying AI.
Guardrails Relevant to Identity and Access
Three risk management themes from the report map directly to identity security.
Defense-in-depth. Single safeguards are imperfect. Layered controls are needed across development, deployment, and monitoring. In enterprise settings, identity security is a core layer in that model.
Access control and user vetting. Access restrictions and monitoring are described as key mitigation approaches, especially for high-risk capabilities. Organizations should mirror this internally: restrict access to powerful AI tools, gate sensitive workflows, and apply additional controls for high-impact actions.
Monitoring and incident readiness. The limits of detection and the importance of monitoring user interactions and system behavior are highlighted. For organizations, that translates into logging, observability, and response playbooks for AI-enabled incidents, including identity-centric attacks such as deepfake fraud, credential theft, and agent hijacking.
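To make the monitoring point concrete, one simple building block is rate-based anomaly flagging per identity: record each sensitive action and flag an identity that performs an unusual burst within a time window. The sketch below assumes a plain sliding-window threshold; `IdentityActivityMonitor` is a hypothetical name, and real identity threat detection correlates far more signals (device posture, location, token anomalies).

```python
from collections import deque

class IdentityActivityMonitor:
    """Flag identities whose rate of sensitive actions exceeds a threshold."""

    def __init__(self, max_events: int, window_seconds: float):
        self.max_events = max_events          # allowed events per window
        self.window = window_seconds          # sliding-window length
        self.events = {}                      # identity -> deque of timestamps

    def record(self, identity: str, timestamp: float) -> bool:
        """Log one sensitive action; return True if the burst is anomalous."""
        q = self.events.setdefault(identity, deque())
        q.append(timestamp)
        # Evict events that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_events
```

The design choice here is that the monitor is identity-centric rather than system-centric: the same logic applies to a human account, a service account, or an AI agent, which matches the report's framing of identity as the common control layer.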
Practical Identity Actions
For identity and security leaders, translating AI guardrails into practice often starts with a few high‑impact priorities:
- Treat AI agents as enterprise identities. Any AI system that can take actions in enterprise environments should have its own identity, tightly scoped permissions, and full auditability. Agents should never operate with shared or uncontrolled credentials.
- Apply least privilege to automation and AI workflows. AI systems frequently interact with APIs, cloud platforms, and internal tools. Restricting privileges and isolating workloads helps prevent failures or compromise from spreading across systems.
- Strengthen identity assurance for humans. As AI improves phishing, impersonation, and deepfake attacks, organizations need phishing‑resistant MFA, strong identity proofing, and robust verification processes for sensitive approvals and support requests.
- Establish governance for non‑human identities. AI agents, service accounts, and machine identities are expanding rapidly and often outnumber human users. Clear ownership, lifecycle management, and visibility are essential to prevent them from becoming unmanaged security blind spots.
- Integrate identity into AI guardrails. Identity platforms should be part of the guardrail layer that governs AI usage: controlling access to AI tools, monitoring privileged activity, and enabling rapid response when AI‑enabled attacks occur.
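The least-privilege and non-human-identity points above often come down to short-lived, scope-limited credentials. A minimal sketch of that pattern follows, assuming a simple HMAC-signed token; in practice organizations would use a platform token service (e.g. OAuth 2.0 client credentials or workload identity) rather than hand-rolled signing, and `SECRET`, `issue_token`, and `check` are hypothetical names.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # placeholder only; use a managed key service

def issue_token(identity: str, scopes: list, ttl_seconds: int, now=None) -> str:
    """Mint a short-lived, scope-limited token for a non-human identity."""
    now = time.time() if now is None else now
    payload = {"sub": identity, "scopes": scopes, "exp": now + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def check(token: str, required_scope: str, now=None) -> bool:
    """Accept the token only if authentic, unexpired, and in scope."""
    now = time.time() if now is None else now
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(base64.urlsafe_b64decode(body))
    return payload["exp"] > now and required_scope in payload["scopes"]
```

The point of the sketch is structural: because the credential expires quickly and names its scopes explicitly, a leaked or hijacked agent credential has a bounded blast radius, which is exactly the containment property the report's deployment-context analysis argues for.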
The broader message is that deployment context determines risk. Identity and access management is one of the most direct ways organizations can shape that context.
Conclusion
The International AI Safety Report 2026 provides a shared scientific assessment of what the most capable general-purpose AI systems can do, what emerging risks they create, and how those risks can be managed. Across misuse, malfunctions, and systemic impacts, many organization-relevant risks converge on identity: impersonation, credential theft, privilege escalation, over-permissioned agents, and over-trusted automation.
For identity and security leaders, the takeaway is pragmatic. AI risk will not be solved by a single control or a single stakeholder. But organizations can materially reduce exposure and blast radius through strong identity assurance, least privilege, governance of non-human identities, and continuous monitoring.
In the age of AI agents and synthetic reality, identity is not just a security domain. It is the control plane for safe adoption.
This is where strategic identity expertise becomes critical.
How iC Consult Supports Secure and Responsible AI Adoption
iC Consult is the world’s largest independent provider of identity security services and has long been at the forefront of innovation in identity and access management. As organizations begin integrating AI into enterprise environments, identity becomes a critical foundation for using these technologies safely.
We help organizations leverage AI for identity security while embedding the right guardrails from the start. Through strategic workshops and advisory engagements, we work with customers to explore practical AI use cases, define governance models, and design secure architectures for AI‑enabled environments.
The goal is simple: enable organizations to innovate with AI while maintaining accountability, resilience, and control.
Learn more at https://ic-consult.com/en/solutions/generative-ai/ or contact us at https://ic-consult.com/en/contact/ to get started.


