OpenAI Just Answered Anthropic. This is What GPT-5.4-Cyber Is.


Only a few days after Anthropic released Claude Mythos, OpenAI returned with GPT-5.4-Cyber (The Hacker News): a specialized version of GPT-5.4 designed for defensive cybersecurity. Not a whole new model, but a variant with the guardrails relaxed for those who actually need them off.

The most notable feature is binary reverse engineering: security professionals can analyze compiled software to identify malware and vulnerabilities without access to source code (9to5Mac). That is a big deal when you are a defender trying to tear apart something you did not write and have no documentation for, which describes most malware.

This change is worth noting. OpenAI is shifting away from limiting what models can do and toward ensuring that the most sensitive capabilities are only accessible to the right people (Axios). That is a meaningful change in philosophy. You are not blocked because the question is dangerous. You are blocked because you are not yet vetted.

The program that conducts this vetting is known as Trusted Access for Cyber. It was introduced in February alongside a $10 million cybersecurity grant program, and is currently being extended to thousands of verified individual defenders and hundreds of security teams (SiliconANGLE). The higher your verification tier, the more powerful the version of the model you can access. Individual users verify at chatgpt.com/cyber; enterprises go through their OpenAI rep (SiliconANGLE).

The agentic component of this system, Codex Security, has already helped fix more than 3,000 critical vulnerabilities in its private beta (The Hacker News). That number is hard to contextualize without knowing the scope, but it's not nothing.

Now the Anthropic side. Over the last several weeks, Mythos Preview found thousands of zero-day vulnerabilities across all major operating systems and web browsers, many of them critical, including a 27-year-old bug in OpenBSD (Anthropic). Most of that work was done independently, and Anthropic shared it with about 40 organizations (SiliconANGLE). Controlled. Tight. Deliberate.

OpenAI is going broader. The pitch is giving defenders a head start by opening access to a wider pool of vetted security vendors and researchers instead of keeping it closed to a small group (The Hacker News). Whether that is genuinely more democratizing or just a different form of exclusivity is a fair question.

One notable detail: OpenAI is not yet providing GPT-5.4-Cyber to US government agencies, though the company says it is in discussions (Axios). Depending on how those talks turn out, that may prove to be an interesting footnote.

Some security experts have pushed back on the idea that AI-discovered vulnerabilities are new or easy to exploit. The counterargument is speed. The pace at which these models are finding flaws, not just the flaws themselves, is what worries government officials and business leaders (Axios). Speed alters the math of everything: response times, patch windows, the amount of time defenders have before something is weaponized.

Both companies are now running some version of the same experiment: powerful model, restricted release, gradual expansion, hope the vetting holds. Neither has fully answered whether identity verification is a strong enough gate for something this powerful, and frankly it is not obvious who could.
