Navigating the Rising Threat of Unmanaged Artificial Intelligence

Artificial intelligence (AI) is rapidly transforming how organizations operate, delivering unprecedented efficiency, automation, and insight. Yet alongside these opportunities, new and often unseen risks are quietly emerging.

As employees and various departments increasingly adopt AI-enabled tools, many organizations are gradually losing control over how and where AI technologies are being deployed.

Within this evolution, a troubling phenomenon has surfaced — “Shadow AI”: the unauthorized or unsupervised use of artificial intelligence tools without the knowledge or approval of IT or security teams.

This silent expansion of AI activity has become a critical challenge for Chief Information Security Officers (CISOs) across industries.

According to Delinea’s latest report, “AI in Identity Security 2025: The Need for a New Strategy,” about 44% of organizations using AI admit that departments within their company deploy AI tools independently of the IT or security divisions.

Another 44% report instances of unauthorized use of generative AI (GenAI) by employees, often without visibility or governance.

The sections below outline three key risks stemming from the growth of Shadow AI, along with essential strategies CISOs can use to mitigate them.


1. Gaps in AI Governance and Policy Implementation

While 89% of organizations claim to have some level of policy or control mechanism to restrict or monitor AI access to sensitive data, the effectiveness and scope of these safeguards vary drastically.

Only 52% of global enterprises have comprehensive controls in place, while smaller organizations lag even further behind.

Without strong governance frameworks and clear visibility into AI activities, organizations face an elevated risk of data breaches, regulatory non-compliance, and exposure of confidential information.

For instance, having an “Authorized Use Policy” for AI tools should be a basic expectation, yet only 57% of organizations have established such a policy.

Moreover, other vital controls remain inconsistent:

  • AI model and agent access control — 55%
  • Logging and auditing of AI activities — 55%
  • Identity management for AI entities — 48%

Without these foundational elements, CISOs are effectively operating blind within their organizational AI ecosystems.
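To make the second and third controls above concrete, here is a minimal sketch, in Python, of a gateway that checks an allow-list before forwarding a request to an AI tool and writes a structured audit record for every attempt. The department names, tool identifiers, and the AIGateway class are hypothetical illustrations, not anything prescribed by the Delinea report.

    import datetime
    import json
    from typing import Any, Callable

    # Hypothetical allow-list mapping departments to the AI tools they may invoke.
    # In practice this would be derived from the organization's authorized-use policy.
    AUTHORIZED_TOOLS = {
        "marketing": {"text-summarizer"},
        "engineering": {"code-assistant", "text-summarizer"},
    }

    class AIGateway:
        """Minimal access-control and audit layer placed in front of AI tool calls."""

        def __init__(self, audit_path: str = "ai_audit.log") -> None:
            self.audit_path = audit_path

        def call(self, user: str, department: str, tool: str,
                 prompt: str, handler: Callable[[str], Any]) -> Any:
            allowed = tool in AUTHORIZED_TOOLS.get(department, set())
            self._audit(user, department, tool, allowed)
            if not allowed:
                raise PermissionError(f"'{tool}' is not authorized for {department}")
            return handler(prompt)  # forward to the actual AI tool or API client

        def _audit(self, user: str, department: str, tool: str, allowed: bool) -> None:
            record = {
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "user": user,
                "department": department,
                "tool": tool,
                "allowed": allowed,
            }
            with open(self.audit_path, "a", encoding="utf-8") as f:
                f.write(json.dumps(record) + "\n")

Routing every AI call through a single choke point like this is what makes the logging and auditing control meaningful: denied attempts are recorded alongside approved ones, giving security teams evidence of Shadow AI activity rather than silence.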


2. Emerging Challenges in the Era of Agentic AI

With the rise of Agentic AI — autonomous systems capable of decision-making and execution without direct human input — the cybersecurity landscape is becoming increasingly complex.

As these agent-based AIs gain deeper access to critical data and systems, the risks of exploitation, misconfiguration, or malicious automation multiply.

Weaknesses in digital identity security amplify these vulnerabilities.

Security teams must therefore modernize their strategic approach and treat machine identities with the same rigor applied to human users — through authentication, authorization, and continuous monitoring.

Only by aligning human and machine identity governance can organizations responsibly harness the benefits of self-directed AI systems while maintaining security integrity.
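As an illustration of that principle, the sketch below treats an AI agent as a first-class machine identity: it authenticates with a short-lived credential, its permissions are expressed as explicit scopes, and every request is checked against both. The MachineIdentity structure, the scope names, and the 15-minute credential lifetime are assumptions made for the example, not a prescribed implementation.

    import secrets
    import time
    from dataclasses import dataclass, field

    @dataclass
    class MachineIdentity:
        """A registered AI agent, treated as a first-class principal."""
        agent_id: str
        scopes: set                    # actions the agent is authorized to perform
        token: str = field(default_factory=lambda: secrets.token_urlsafe(32))
        expires_at: float = field(default_factory=lambda: time.time() + 900)  # short-lived credential (15 min)

    def authorize(identity: MachineIdentity, presented_token: str, scope: str) -> bool:
        """Authenticate the agent's credential, check expiry, then check the requested scope."""
        if not secrets.compare_digest(identity.token, presented_token):
            return False               # authentication failure
        if time.time() > identity.expires_at:
            return False               # expired credential must be re-issued, not reused
        return scope in identity.scopes  # authorization decision

    # Example: an autonomous reporting agent may read records but not delete them.
    agent = MachineIdentity("report-bot", scopes={"records:read"})
    assert authorize(agent, agent.token, "records:read")
    assert not authorize(agent, agent.token, "records:delete")

Continuous monitoring then builds on the same foundation: because every request carries an agent identity, its activity can be logged and reviewed with the same rigor applied to a human user.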

3. Overconfidence in Machine Identity Security

Despite the lack of comprehensive AI governance, many organizations display overconfidence in their ability to secure machine identities.

Delinea’s research shows that while 93% of organizations believe their machine identity security is strong, 82% rely only on basic lifecycle management and just 58% have automated or comprehensive security controls in place.

Additionally, merely 61% report full visibility into all machine identities within their networks.

This gap between confidence and reality creates hidden exposure points, allowing unmonitored machine entities within AI systems to become entryways for cyberattacks or data leakage.
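One practical way to surface that gap is to reconcile the machine identities an organization has registered against the identities that actually appear in activity logs, flagging anything unregistered as potentially unmanaged. The short sketch below assumes such a register and log already exist; the identity names and log entries are purely illustrative.

    # Reconcile the machine identities the organization has registered against the
    # identities actually observed in activity logs. All data here is illustrative.
    registered_identities = {"report-bot", "ci-runner", "backup-agent"}

    observed_in_logs = [
        {"identity": "report-bot", "action": "records:read"},
        {"identity": "ci-runner", "action": "deploy"},
        {"identity": "gpt-helper-7", "action": "records:export"},  # never registered
    ]

    unmanaged = {entry["identity"] for entry in observed_in_logs} - registered_identities
    if unmanaged:
        print(f"Unmanaged machine identities detected: {sorted(unmanaged)}")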


Strengthening Identity Security Against Shadow AI

To mitigate the risks associated with unmanaged AI, organizations must begin with stronger transparency and governance.

This includes defining clear policies for authorized AI usage, enforcing strict access controls, maintaining detailed logging and auditing of AI interactions, and institutionalizing identity management for AI entities.

By treating AI systems as independent digital identities — subject to the same authentication, authorization, and oversight standards as human users — enterprises can leverage the power of AI without compromising trust or compliance.

As Agentic AI continues to expand, identity security strategies must evolve in parallel — incorporating granular access control, advanced monitoring, deeper auditing, and investment in dynamic identity management platforms that can keep pace with the speed of AI innovation.

Ultimately, CISOs and other technology leaders, such as CTOs, must adopt a proactive, adaptive, and collaborative approach, which includes:

  • Continuous collaboration with departments experimenting with AI tools,
  • Staying informed about emerging threats and vulnerabilities,
  • Building a robust AI governance framework that balances innovation with risk management and compliance.

Organizations that can evolve their identity security strategies alongside AI advancements will not only harness AI's transformative potential but also safeguard their data, integrity, and operational trust against the growing shadow of unmanaged AI.

Source: MedadPress
www.medadpress.ir