OpenAI releases cybersecurity-specific model GPT-5.4-Cyber: 3,000 high-risk vulnerabilities already patched, surpassing Claude Mythos

OpenAI has released GPT-5.4-Cyber, its first large language model fine-tuned specifically for cybersecurity defense scenarios, while significantly expanding the TAC (Trusted Access for Cyber) program to open access to thousands of verified individual defenders and hundreds of organizational teams responsible for safeguarding critical software.
(Background recap: OpenAI launches GPT-5.2! Aiming to replace professionals, with lower hallucination rates, API cost updates)
(Additional background: Anthropic’s new model “Claude Mythos” exposed as the strongest ever, raising concerns even within their own team about cyber attack capabilities)

The figure of 3,000 high-risk vulnerabilities refers to the issues that OpenAI's Codex Security has helped the cybersecurity community patch since its recent launch. Today, OpenAI announced the official release of GPT-5.4-Cyber, a model fine-tuned specifically for cybersecurity defense work, along with a major expansion of the TAC (Trusted Access for Cyber) program, growing it from small-scale pilots to thousands of verified individual defenders and hundreds of organizations responsible for protecting critical software.

From GPT-5.2 to GPT-5.4-Cyber: A deliberately paved path of security evolution

GPT-5.4-Cyber did not appear out of nowhere; it is the culmination of the cybersecurity groundwork OpenAI has been laying at the model level since 2023.

  • In 2023, OpenAI launched a $10 million Cybersecurity Grant Program and began evaluating the cybersecurity capabilities of its models that same year.
  • In 2025, GPT-5.2 was released with the first dedicated cybersecurity training; GPT-5.3-Codex later enhanced reasoning abilities; and by GPT-5.4, OpenAI had classified the model as "high cybersecurity capability" under its Preparedness Framework.
  • Today's GPT-5.4-Cyber, built on GPT-5.4, further relaxes refusals of legitimate cybersecurity work and unlocks new advanced defense features.

One of the most noteworthy new capabilities is binary reverse engineering: analyzing compiled executables directly, without access to source code, to identify malicious behavior, vulnerabilities, and overall security robustness.
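To make the idea concrete, here is a minimal, hedged sketch of the kind of source-free static triage that binary analysis begins with: reading an ELF executable's header to determine its type and target architecture. This is an illustrative standard-library example, not OpenAI's implementation; the function name and constant tables are my own.

```python
import struct

# Small illustrative subsets of the ELF e_machine and e_type tables.
ELF_MACHINES = {0x03: "x86", 0x3E: "x86-64", 0xB7: "AArch64", 0xF3: "RISC-V"}
ELF_TYPES = {1: "relocatable", 2: "executable", 3: "shared object", 4: "core dump"}

def triage_elf(path: str) -> dict:
    """Report basic facts about a compiled ELF binary from its header alone,
    with no access to source code."""
    with open(path, "rb") as f:
        header = f.read(20)  # e_ident (16 bytes) + e_type + e_machine
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    bits = 64 if header[4] == 2 else 32          # EI_CLASS
    endian = "<" if header[5] == 1 else ">"      # EI_DATA (little/big endian)
    e_type, e_machine = struct.unpack(endian + "HH", header[16:20])
    return {
        "bits": bits,
        "type": ELF_TYPES.get(e_type, f"unknown ({e_type})"),
        "machine": ELF_MACHINES.get(e_machine, f"unknown ({e_machine:#x})"),
    }
```

Real reverse engineering of course goes much further (disassembly, control-flow recovery, behavioral analysis); the point of the sketch is only that everything here is derived from compiled bytes, which is the regime GPT-5.4-Cyber's new capability targets.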

Why “Only for Defenders”

The key design element of this release is not just the model’s capabilities, but who can use it and how.

GPT-5.4-Cyber is a "more permissive" model: it refuses legitimate cybersecurity operations less often, which could also make it more useful to attackers. OpenAI's answer is to gate access through identity verification.

Access to the TAC program is divided into two levels. General individual users can obtain basic access through KYC verification at chatgpt.com/cyber; organizations and research institutions must apply through an OpenAI sales representative for higher trust levels to access the full features of GPT-5.4-Cyber.

OpenAI calls this approach “Democratized Access”: making tools broadly available through clear, objective standards (identity verification, KYC) while preventing abuse.

Meanwhile, OpenAI continues to strengthen defense capabilities through ecosystem development: Codex for Open Source now covers over 1,000 open-source projects, and funds have been contributed to the Linux Foundation as part of a $12.5 million open-source security grant program. Since its recent launch, Codex Security has helped patch over 3,000 high-risk and critical vulnerabilities.
