OpenAI just launched GPT-5.4-Cyber, a “cyber-permissive” variant of its flagship GPT-5.4 model built specifically for defensive cybersecurity work. The release comes exactly one week after Anthropic unveiled its own cybersecurity-focused model, Mythos, signaling that the race to build AI-powered security tools has become the newest battleground between the two leading AI companies.
But where Anthropic restricted Mythos to roughly 40 organizations, OpenAI is taking a deliberately broader approach. Here is what GPT-5.4-Cyber does, who gets access, and why the different strategies matter.
What Makes GPT-5.4-Cyber Different
GPT-5.4-Cyber is a fine-tuned variant of GPT-5.4 that deliberately lowers the safety refusal boundaries that prevent standard AI models from engaging with cybersecurity content. In practical terms, this means the model can analyze malware samples, reverse-engineer compiled binaries, discuss vulnerability exploitation techniques in defensive contexts, and assist with penetration testing work that standard models would refuse to touch.
The key new capability is binary reverse engineering. Security professionals can feed compiled software into the model without needing access to source code, and GPT-5.4-Cyber will analyze it for malware, vulnerabilities, and security weaknesses. This is a significant step because much of the real-world threat landscape involves proprietary or obfuscated binaries where source code is unavailable.
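To ground what "analyzing a compiled binary" involves, here is a minimal, illustrative sketch of the kind of pre-processing an analyst typically does before handing an unknown sample to any tool, AI or otherwise: fingerprint it with a hash and pull out printable strings. This is standard malware-triage practice, not OpenAI's API; the sample bytes are fabricated for the example.

```python
import hashlib
import re

def fingerprint(data: bytes) -> str:
    """SHA-256 hash, the standard way to identify a sample across threat-intel feeds."""
    return hashlib.sha256(data).hexdigest()

def printable_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Extract runs of printable ASCII, the classic first look at an unknown binary."""
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# Fabricated sample: an ELF-like header followed by suspicious embedded strings
sample = b"\x7fELF\x00\x00GET /beacon HTTP/1.1\x00evil.example.com\x00"
print(fingerprint(sample)[:12])
print(printable_strings(sample))  # ['GET /beacon HTTP/1.1', 'evil.example.com']
```

Strings and hashes like these are what an analyst would pair with a model's deeper structural analysis; the model's value lies in the reasoning steps this sketch deliberately leaves out.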
OpenAI describes the model as “cyber-permissive,” a term that differentiates it from both the safety-restricted standard GPT-5.4 and Anthropic’s completely locked-down Mythos. The idea is that GPT-5.4-Cyber will assist with defensive tasks that standard models are too cautious to handle, while still maintaining safeguards against offensive misuse.
Trusted Access for Cyber: Tiered Verification
Access to GPT-5.4-Cyber runs through OpenAI’s Trusted Access for Cyber program, which launched in February 2026 alongside a $10 million cybersecurity grant program. The program has now been expanded with tiered verification levels:
Tier 1 (Standard verification): Individual security professionals can verify their identity at chatgpt.com/cyber. This basic level provides access to cybersecurity-enhanced features in the standard ChatGPT interface.
Tier 2 (Organization verification): Security teams and enterprises can request access through their OpenAI representative. This unlocks more advanced capabilities suitable for organizational security operations.
Tier 3 (Full access): The highest tier unlocks GPT-5.4-Cyber’s complete capabilities, including binary reverse engineering. This level is reserved for vetted security vendors, established organizations, and verified researchers.
OpenAI’s goal is to make advanced defensive tools “as widely available as possible while preventing misuse” through automated verification systems rather than manual gatekeeping. This is a deliberate contrast to Anthropic’s approach of restricting Mythos to a small number of hand-selected partners.
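The tier system boils down to identity-based capability gating. The tier names and their associated capabilities below come from OpenAI's announcement; the code itself is a hypothetical sketch of how such gating could be modeled, not OpenAI's implementation.

```python
from enum import IntEnum

class Tier(IntEnum):
    STANDARD = 1      # individual identity verification via chatgpt.com/cyber
    ORGANIZATION = 2  # verified security team or enterprise
    FULL = 3          # vetted vendors, established orgs, verified researchers

# Illustrative mapping: the minimum tier required to unlock each capability
REQUIRED_TIER = {
    "cyber_chat_features": Tier.STANDARD,
    "org_security_ops": Tier.ORGANIZATION,
    "binary_reverse_engineering": Tier.FULL,
}

def is_allowed(user_tier: Tier, capability: str) -> bool:
    """Identity-based gating: a capability unlocks at or above its minimum tier."""
    return user_tier >= REQUIRED_TIER[capability]

print(is_allowed(Tier.STANDARD, "binary_reverse_engineering"))  # False
print(is_allowed(Tier.FULL, "binary_reverse_engineering"))      # True
```

The design point is that the safety boundary moves out of the model weights and into a lookup like `REQUIRED_TIER`: the same model serves every verified user, and what changes per user is which requests the gate lets through.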
Performance Benchmarks
OpenAI shared capture-the-flag cybersecurity benchmark results that show rapid improvement across its model generations:
- GPT-5 (August 2025): 27% CTF benchmark score
- GPT-5.1-Codex-Max (November 2025): 76% CTF benchmark score
- GPT-5.4-Cyber (April 2026): Not yet publicly benchmarked, but OpenAI says it is “planning and evaluating future releases as though each new model could reach High levels of cybersecurity capability” under its Preparedness Framework
The jump from 27% to 76% in just three months demonstrates how quickly AI cybersecurity capabilities are advancing. If the trend continues, we may see models capable of matching or exceeding top human security researchers within the next year.
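For a sense of scale, here is the back-of-envelope arithmetic behind "if the trend continues," using only the two published scores. Two data points do not make a trend, and a naive linear projection mostly shows how quickly the benchmark itself would saturate:

```python
# The two published CTF scores, three months apart (Aug 2025 -> Nov 2025)
aug_2025, nov_2025 = 0.27, 0.76
months_between = 3

monthly_gain = (nov_2025 - aug_2025) / months_between  # ~0.163 per month
# A straight-line projection would hit a perfect score shortly after Nov 2025,
# which says less about future models than about the benchmark's ceiling.
months_to_perfect = (1.0 - nov_2025) / monthly_gain

print(round(monthly_gain, 3))      # 0.163
print(round(months_to_perfect, 1)) # 1.5
```

In other words, the interesting question for GPT-5.4-Cyber is not its score on this saturating benchmark but performance on harder, as-yet-unpublished evaluations.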
Codex Security: 3,000 Vulnerabilities Fixed
OpenAI also highlighted progress from Codex Security, a related product that launched in private beta six months ago and moved to a research preview earlier in 2026. According to OpenAI, Codex Security has contributed to fixes for more than 3,000 critical and high-severity vulnerabilities across the software ecosystem since its broader launch.
The company has also extended its security efforts to the open-source community through Codex for Open Source, a free security scanning tool that has reached more than 1,000 open-source projects to date. This is significant because open-source software forms the foundation of most enterprise technology stacks, and vulnerabilities in open-source components have been responsible for some of the most damaging cyberattacks in recent years.
OpenAI vs Anthropic: Two Strategies for Cybersecurity AI
The GPT-5.4-Cyber and Anthropic Mythos releases represent two fundamentally different approaches to the same problem: how to deploy powerful AI cybersecurity capabilities without enabling offensive misuse.
Anthropic’s approach (Mythos): Extremely restricted access to roughly 40 organizations. Briefings rather than product releases. No public API. Focus on strategic partnerships with government and major financial institutions. The model is treated as a controlled capability, not a product.
OpenAI’s approach (GPT-5.4-Cyber): Tiered access system targeting thousands of individual defenders and hundreds of security teams. Verification-based rather than relationship-based. Available through the ChatGPT interface at the basic tier. Focus on broadening defensive capability across the entire security community.
OpenAI’s broader rollout is a strategic advantage. By putting cyber-permissive AI tools in the hands of thousands of security professionals rather than a few dozen organizations, OpenAI is building network effects in the security community. More users means more feedback, more vulnerability discoveries, and more data to improve the model.
Anthropic’s tighter control may appeal more to government agencies and large enterprises that prioritize security over accessibility, but it limits the model’s practical impact on the broader threat landscape.
The Shift from Blanket Restrictions to Identity-Based Access
Perhaps the most significant aspect of OpenAI’s announcement is not the model itself, but the access framework. The company explicitly stated that it is moving away from blanket capability restrictions toward identity-based access controls.
This represents a philosophical shift in AI safety. Instead of preventing all users from accessing dangerous capabilities because some might misuse them, OpenAI is building infrastructure to verify who users are and what their legitimate use cases are. The model itself is more capable; the safety mechanism is the access control system around it.
This approach has clear advantages for defensive security work. Blanket restrictions on cybersecurity content in standard AI models have frustrated security professionals who need AI assistance for legitimate tasks. By creating a verified pathway, OpenAI is solving a real pain point without simply removing all guardrails.
What This Means for Security Teams
If you work in cybersecurity, GPT-5.4-Cyber represents a tangible upgrade in AI-assisted defensive capabilities. The binary reverse engineering feature alone could save significant time in malware analysis workflows. The tiered access system means that even individual security researchers can get started with enhanced capabilities without enterprise-level verification.
The broader takeaway is that AI cybersecurity tools are transitioning from experimental research projects to operational products. With OpenAI targeting thousands of users and Anthropic partnering with major banks and government agencies, AI-powered security analysis is becoming a standard part of the defensive toolkit.
The competition between OpenAI and Anthropic in this space will likely accelerate capability improvements. Both companies have signaled that more powerful models are coming later this year. For security teams, the question is no longer whether to adopt AI cybersecurity tools, but how quickly they can integrate them into existing workflows.
Related Reading
- Anthropic Mythos and the Pentagon: Why the Most Powerful AI Model Is Caught in a Political Firestorm
- OpenAI Security Alert: Axios Supply Chain Attack Exposed macOS App Signing Certificates
Written by
Gallih
Tech writer and developer with 8+ years of experience building backend systems. I test AI tools so you don't have to waste your time or money. Based in Indonesia, working remotely with international teams since 2019.


