
OpenAI Restricts Access to New GPT-5.5-Cyber AI Model

AGI news · 2026-05-04

Coverage portrays GPT-5.5-Cyber as a highly capable but opaque security model whose restricted rollout both mirrors and contradicts OpenAI’s criticisms of rival access policies. It emphasizes questions about transparency, dual-use risk, and corporate strategy, suggesting the safety rationale may be intertwined with competitive positioning and control over powerful cyber tools.

OpenAI’s newest AI model is being rolled out like a classified weapons system: powerful, hyped, and reserved for a tiny circle of “critical cyber defenders,” while everyone else is told to trust the gatekeepers.

The road to GPT-5.5-Cyber

First came the baseline model. OpenAI’s GPT-5.5 — billed by the company as its “smartest and most intuitive to use model yet” — launched publicly and quickly drew attention from government evaluators and security researchers.[1]

In parallel, rival Anthropic was busy turning its own security-focused model, Claude Mythos, into a kind of existential cyber bogeyman. The company restricted Mythos’ initial release to “critical industry partners,” underscoring its supposed “outsize cybersecurity threat.”[1] The White House took notice, and according to reporting cited by researchers, unnamed officials later opposed expanding access to Mythos, citing both “cybersecurity concerns” and fears that greater demand would reduce the government’s own access to the tool.[1]

That atmosphere of frontier-AI anxiety set the stage for OpenAI’s next move: a dedicated cyber model derived from GPT-5.5.

Altman’s controlled launch

On April 30, OpenAI CEO Sam Altman publicly revealed the rollout plan for GPT-5.5-Cyber. In a post on X, he framed the model as a “frontier cybersecurity model” and made clear it would not be a free-for-all release:

“we're starting rollout of GPT-5.5-Cyber, a frontier cybersecurity model, to critical cyber defenders in the next few days. we will work with the entire ecosystem and the government to figure out trusted access for cyber; we want to rapidly help secure companies/infrastructure.”[2]

Reporting on Altman’s announcement stressed that GPT-5.5-Cyber will “not be available to the general public,” with an initial rollout to a “select group of trusted ‘cyber defenders’” over the coming days.[3] OpenAI has so far released no technical specifications, no capability cards, and no clear list of who qualifies as a “critical” defender — though previous “trusted access” schemes have involved vetted professionals and institutions.[3]

TechCrunch summarized the shift bluntly in its own headline: “After dissing Anthropic for limiting Mythos, OpenAI restricts access to Cyber, too.”[4] The jab nods to earlier comments from Altman decrying “fear-based marketing” around cyber-focused models, even as OpenAI now follows a nearly identical playbook of controlled access.

Anthropic, Mythos, and the fear narrative

The comparison isn’t subtle. When Anthropic launched Mythos Preview, it “made a big deal about the supposedly outsize cybersecurity threat” of the model and locked it down to “critical industry partners.”[1] That framing helped fuel the idea that we had crossed some kind of threshold: a single model in a single lab that was uniquely dangerous.

Then the UK’s AI Security Institute (AISI) weighed in.

In a series of evaluations, AISI ran leading models — including Mythos Preview and OpenAI’s GPT-5.5 — through 95 Capture-the-Flag cybersecurity challenges spanning reverse engineering, web exploitation, and cryptography.[1] In the most difficult “Expert” category, GPT-5.5 slightly edged Mythos Preview, passing an average of 71.4 percent of tasks versus Mythos’ 68.6 percent, a difference within the margin of error.[1]
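The “within the margin of error” claim can be sanity-checked with a quick two-proportion z-test. This is a rough illustrative sketch, not AISI’s actual methodology: it assumes each model’s pass rate comes from 95 independent pass/fail trials, which the reporting does not confirm (the rates are described as averages over an unstated number of runs).

```python
import math

# Reported pass rates on the 95-challenge CTF suite (from the article).
# n = 95 independent trials per model is an ASSUMPTION for this sketch.
n = 95
p_gpt, p_mythos = 0.714, 0.686

# Two-proportion z-test: pool the rates, estimate the standard error of
# the difference, and compare the observed gap against it.
p_pool = (p_gpt + p_mythos) / 2
se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p_gpt - p_mythos) / se

print(f"z = {z:.2f}")  # prints z = 0.42, far below the ~1.96 significance cutoff
```

Under these assumptions the 2.8-point gap is well inside one standard error of noise, consistent with AISI’s reading that the two models are statistical peers on this benchmark.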

The institute highlighted one especially telling trial: a Rust binary challenge that required building a disassembler. GPT-5.5 “solved the challenge in 10 minutes and 22 seconds with no human assistance at a cost of $1.73” in API calls.[1] Both GPT-5.5 and Mythos also performed similarly on “The Last Ones,” a 32‑step simulated data-exfiltration attack on a corporate network, with GPT-5.5 succeeding in 3 of 10 attempts and Mythos in 2 of 10. “No previous model had ever succeeded at the test even once,” AISI noted.[1]

Crucially, both models still failed at AISI’s “Cooling Tower” simulation — a scenario modeling an attempted disruption of power plant control software.[1] In other words: powerful, yes; cyber-apocalypse in a box, not yet.

For critics, these findings undercut the notion that Mythos was a uniquely terrifying breakthrough rather than simply one more data point in a general curve of capability.

The UK researchers’ verdict: trend, not anomaly

AISI’s conclusion is the quiet bombshell underneath all the marketing: GPT-5.5 “reached a similar level of performance on our cyber evaluations” as Mythos Preview, the research summarized.[1] The implication is stark — the advanced cyber capabilities that Anthropic spotlighted aren’t a quirk of one model, but a property of the frontier class as a whole.

Viewed chronologically, that makes OpenAI’s latest move look less like a moral awakening and more like a defensive maneuver. Once AISI showed that GPT-5.5 itself was roughly as capable as Mythos on offensive and defensive cyber tasks, OpenAI suddenly had every incentive to treat its own systems as equally sensitive.

Hence GPT-5.5-Cyber: a model branded for “critical defenders” only, kept behind a velvet rope, and rolled out in consultation with governments.

The industry falls in line on restricted releases

GPT-5.5-Cyber doesn’t exist in a vacuum. OpenAI has already staggered the release of previous cyber-focused systems, as well as other high-risk tools like GPT‑Rosalind, its purpose-built life-sciences model for biology and drug discovery.[3] Altman’s promise to “work with the entire ecosystem and the government to figure out trusted access” is part of a broader industry push to brand top models as “too dangerous” for open release — and then to capitalize on the resulting aura of exclusivity.[2][3]

Anthropic’s Mythos launch, meanwhile, has reportedly been “bungled” in ways that embarrassed the company and caught the eye of the White House.[3] US officials’ resistance to expanding access, based on both security worries and resource-hoarding instincts, underscores how frontier AI is being folded into traditional national-security logic.

OpenAI’s new cyber model appears deliberately aligned with that logic:

  • Narrow rollout to trusted defenders. Only vetted organizations are likely to get GPT-5.5-Cyber access at first, mirroring the Mythos “critical partners” tier.[2][3]
  • Government partnership. Altman is explicitly positioning OpenAI as a collaborator in public–private cyber defense, not just a vendor.[2]
  • Ongoing opacity. With no technical details disclosed, outsiders must take OpenAI’s word on both the tool’s power and its risk.

Competing narratives: safety, security, and spin

Different actors are telling different stories about the same sequence of events.

  • OpenAI’s narrative: GPT-5.5-Cyber is a proactive safety measure — a way to “rapidly help secure companies/infrastructure” by arming a select cadre of defenders before adversaries can get their hands on equivalent capabilities.[2] The company presents its staggered releases and “trusted access” regimes as responsible governance of dangerous tech.[3]

  • Anthropic’s narrative: Mythos is a uniquely capable and therefore uniquely concerning system whose cyber prowess justifies highly restricted deployment. The company’s messaging emphasized the model’s potential threat and its narrow availability to “critical industry partners,” helping establish a template others are now following.[1]

  • Researchers’ narrative (AISI and others): The data says otherwise. On real-world cyber tasks, GPT-5.5 and Mythos are basically peers. The “outsize threat” is not some one-off fluke, but the frontier AI trend line itself, with models gradually approaching human-expert performance on complex offensive and defensive challenges — and still failing on the hardest, infrastructure-level tasks.

  • Critics’ narrative: From the outside, this all smells like what one OpenAI critic — Altman himself, in an earlier context — called “fear-based marketing.” The very companies that hype their models as too dangerous to release are also the ones that benefit when access becomes a premium, government-aligned commodity. TechCrunch’s pointed framing — that OpenAI “restricts access to Cyber, too” right after dissing Anthropic for doing the same — captures the charge of hypocrisy.[4]

What’s next: from lab sprint to policy trench war

In the near term, GPT-5.5-Cyber will quietly move into the hands of a small club of “critical cyber defenders,” while everyone else is left to extrapolate from sparse hints and government test ranges. As with Mythos, the real action may unfold not in public demos but in classified briefings and closed-door procurement meetings.

The longer arc is harder to spin. With the UK’s AI Security Institute finding GPT-5.5 “just as good” as Mythos on demanding cyber tasks, the idea of a single “too dangerous” model collapses into a more uncomfortable reality: each new generation of general-purpose AI will likely inherit these capabilities by default.[1]

That leaves policymakers, companies, and the public facing a sharper version of the same question: Is the answer to increasingly capable cyber AIs to concentrate them in fewer hands — or to democratize access and focus on robust defenses for everyone? OpenAI’s GPT-5.5-Cyber rollout plants its flag firmly on the side of tight control. Whether that turns out to be protection or just prestige dressed up as safety will be tested not in press releases, but in the next wave of real-world attacks.


1. Amid Mythos' hyped cybersecurity prowess, researchers find GPT-5.5 is just as good — AISI found GPT-5.5 “reached a similar level of performance on our cyber evaluations” as Mythos Preview and detailed results across 95 CTF challenges.

2. @sama on X — "we're starting rollout of GPT-5.5-Cyber, a frontier cybersecurity model, to critical cyber defenders in the next few days... we want to rapidly help secure companies/infrastructure."

3. OpenAI's New Security Model is for 'Critical Cyber Defenders' Only — Report on GPT-5.5-Cyber’s limited rollout, lack of technical details, and OpenAI’s broader pattern of restricting powerful, potentially dangerous models.

4. After dissing Anthropic for limiting Mythos, OpenAI restricts access to Cyber, too — TechCrunch’s headline highlighting OpenAI’s decision to likewise restrict access to its Cyber model after criticizing Anthropic’s Mythos limits.

