OpenAI released GPT-5.5 on April 23, 2026, six weeks after GPT-5.4 and one week after Anthropic shipped Claude Opus 4.7. The model arrives in two variants -- standard and Pro -- with benchmark scores that lead on math, coding, and agentic tasks, API pricing that doubles that of the previous generation, and a set of 3D generation capabilities that could reshape how creators prototype interactive content.
What Happened
GPT-5.5 is available now in ChatGPT for Plus, Pro, Business, and Enterprise subscribers. The Pro variant is limited to Business and Enterprise tiers. API access is coming "very soon" according to OpenAI, and Codex users from Plus through Go tiers get access with a 400,000-token context window. The full model supports a 1-million-token context.
OpenAI describes the release as "a new class of intelligence for real work," positioning it squarely at enterprise and professional users rather than casual chat. The model handles autonomous, multi-step task execution: writing and debugging code, operating software interfaces, conducting web research, building spreadsheets, and generating documents without requiring step-by-step instructions.
Why It Matters for Creators
The headline capability for creative professionals is native 3D content generation. During pre-release testing under the codename "Spud," GPT-5.5 demonstrated the ability to generate voxel art and 3D animations from text prompts, including a detailed pelican riding a bicycle rendered as voxel art. It built a Minecraft-style simulation with realistic physics and interactive elements, and reconstructed Monica's apartment from Friends as a navigable 3D environment using Three.js.
These are not static renders. The model outputs interactive web-based 3D content that runs in a browser, combining code generation with spatial reasoning. For game designers, this means prototyping interactive environments from a text description. For animators and 3D artists, it opens a path to rapid concept visualization without touching modeling software. For marketers, interactive 3D product demos become a prompt away.
This arrives as OpenAI's video generation tool Sora is shutting down -- the app closes April 26, with the API following in September 2026. GPT-5.5's 3D capabilities partially fill that gap, though they produce interactive 3D environments rather than traditional video. Creators who relied on Sora for pre-visualization may find GPT-5.5's Three.js output more useful for certain workflows, particularly architectural walkthroughs and game prototyping.
Benchmarks and Performance
OpenAI published benchmark comparisons against Anthropic's Claude Opus 4.7 and Google's Gemini 3.1 Pro. The numbers tell a clear story on coding and math, with tighter margins on other tasks:
- Terminal-Bench 2.0 (agentic coding): GPT-5.5 scores 82.7%, versus 69.4% for Claude Opus 4.7 and 68.5% for Gemini 3.1 Pro
- FrontierMath Tier 4 (postdoctoral math): GPT-5.5 standard hits 35.4%, Pro reaches 39.6% -- roughly 1.7x Claude Opus 4.7's 22.9%
- BrowseComp (web research): GPT-5.5 Pro scores 90.1% versus Gemini 3.1 Pro's 85.9%
- OSWorld-Verified (computer use): 78.7% for GPT-5.5, narrowly ahead of Claude Opus 4.7 at 78.0%
- CyberGym (security): 81.8% versus Claude Opus 4.7's 73.1%
- GDPval (knowledge work across 44 occupations): 84.9%, matching or beating human professionals
One notable exception: Claude Opus 4.7 outperforms GPT-5.5 on SWE-Bench Pro (GitHub issue resolution) at 64.3% versus 58.6%. For developers working primarily on open-source codebases, that distinction matters.
The long-context improvement is dramatic. On Graphwalks BFS with 1 million tokens, GPT-5.5 scores 45.4% -- up from GPT-5.4's 9.4%. That nearly fivefold jump means the model can now meaningfully process very large codebases, long documents, and extended conversation histories.
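To see what a Graphwalks-style BFS task actually asks of the model: edges are spelled out in the prompt, and the model must name every node within k hops of a start node. Here is a minimal Python sketch of the ground-truth computation -- the benchmark's exact prompt format and scoring are not published in this article, so the task framing here is an assumption:

```python
from collections import deque

def bfs_within_k_hops(edges, start, k):
    """Return the set of nodes reachable from `start` in at most k hops,
    over a directed graph given as a list of (src, dst) edge pairs."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # don't expand past the hop limit
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

# Toy example: four edges, asking for nodes within 2 hops of "a".
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "e")]
print(sorted(bfs_within_k_hops(edges, "a", 2)))  # prints ['a', 'b', 'c', 'e']
```

The hard part for a model at 1 million tokens is not the algorithm but attending to edges scattered across the entire context, which is why the score roughly tracks long-context retrieval quality.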
Pricing
API pricing doubles across the board compared to GPT-5.4:
- GPT-5.5 standard: $5.00 input / $30.00 output per million tokens (GPT-5.4 was $2.50 / $15.00)
- GPT-5.5 Pro: $30.00 input / $180.00 output per million tokens
- Fast mode: 1.5x token generation speed at 2.5x cost
OpenAI argues that improved token efficiency offsets the price increase -- the model allegedly accomplishes the same tasks with fewer tokens. For ChatGPT subscribers, the price stays the same: Plus at $20/month, Pro at $200/month. The cost increase hits only API users building applications on GPT-5.5; the existing Mini and Nano tiers remain available for cost-sensitive workloads.
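A back-of-the-envelope calculation shows what the efficiency claim has to deliver. At identical token usage, the doubled list prices simply double the bill; GPT-5.5 only breaks even if it finishes the same task in about half the tokens. The task size and efficiency factor below are placeholders, not figures from OpenAI:

```python
def task_cost(input_tokens, output_tokens, price_in, price_out):
    """Dollar cost of one task at the given per-million-token prices."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Hypothetical task: 50K input tokens, 10K output tokens.
cost_54 = task_cost(50_000, 10_000, 2.50, 15.00)  # GPT-5.4 list prices
cost_55 = task_cost(50_000, 10_000, 5.00, 30.00)  # GPT-5.5, same token count

# Placeholder efficiency: GPT-5.5 uses this fraction of GPT-5.4's tokens.
efficiency = 0.5
cost_55_eff = task_cost(50_000 * efficiency, 10_000 * efficiency, 5.00, 30.00)

print(f"GPT-5.4: ${cost_54:.3f}")        # prints GPT-5.4: $0.275
print(f"GPT-5.5: ${cost_55:.3f}")        # prints GPT-5.5: $0.550
print(f"GPT-5.5 @ 50% tokens: ${cost_55_eff:.3f}")  # prints $0.275 -- break-even
```

In other words, the efficiency claim is a testable number: measure your own token usage per task on both models before deciding whether the new pricing is a wash.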
Key Details
- Release date: April 23, 2026
- Variants: GPT-5.5 (standard) and GPT-5.5 Pro (enhanced reasoning)
- Context window: 1 million tokens (400K in Codex)
- ChatGPT access: Plus, Pro, Business, Enterprise (Pro variant: Business/Enterprise only)
- API: Coming "very soon," pending safety review
- 3D generation: Voxel art, interactive Three.js environments, game prototypes from text prompts
- MCP support: Can automatically determine how to use an MCP server without explicit instructions
- Internal use: OpenAI used GPT-5.5 to optimize its own GPU batching, achieving 20%+ speed improvements on Nvidia GB200 and GB300 systems
- Research milestone: Helped discover a new mathematical proof related to Ramsey numbers
What to Do Next
If you are a ChatGPT Plus or Pro subscriber, GPT-5.5 is available now. The 3D generation capabilities are the most immediately relevant feature for creators -- try prompting it to build interactive Three.js scenes, voxel art, or game prototypes to evaluate whether it fits your pre-visualization or prototyping workflow.
For API developers, hold off on migration planning until API access actually opens and rate limits are confirmed. The 2x cost increase is significant, and the token efficiency claims need real-world validation. If you are building creative tools on GPT-5.4, benchmark your specific use cases before upgrading.
The competitive picture is tight. Claude Opus 4.7 wins on code-heavy GitHub tasks, Gemini 3.1 Pro is close on web research, and both are priced lower. GPT-5.5's edge is in math, long-context processing, and the unique 3D generation capabilities that neither competitor currently offers. For creators, that 3D feature alone may justify the switch.