Claude Opus 4.7 Is Here: Sharper Vision, Smarter Agents, Same Price
Anthropic shipped Claude Opus 4.7 on April 16, and the headline is simple: more capable, no price increase. The new model narrowly reclaims the top spot among publicly available frontier models, edging past OpenAI’s GPT-5.4 and Google’s Gemini 3.1 Pro, though the margin is slim enough that calling it a runaway victory would be overstating it.
More interesting than the leaderboard position are the specific upgrades packed into the release, which signal where Anthropic thinks the real competition is heading.
Vision Gets a Major Boost
The most tangible improvement in Opus 4.7 is vision. The model can now process images up to 2,576 pixels on the long edge — more than three times the resolution supported by Claude Opus 4.6. For developers building document processing pipelines, design review tools, or any application where image fidelity matters, that’s a meaningful upgrade that doesn’t require any changes to existing API calls.
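To make the "no changes to existing API calls" point concrete, here is a minimal sketch of a Messages API request body carrying one image. The model ID string is an assumption for illustration; the payload shape shown (base64 image content blocks) follows Anthropic's existing Messages API, and nothing in it changes when the image is higher resolution.

```python
import base64

# Hypothetical model ID for illustration; check Anthropic's docs for the exact string.
MODEL = "claude-opus-4-7"

def build_image_request(image_bytes: bytes, media_type: str, prompt: str) -> dict:
    """Build a Messages API request body with one image and one text block.
    The structure is identical for Opus 4.6 and 4.7; only the maximum
    image resolution the model accepts has grown."""
    return {
        "model": MODEL,
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": media_type,
                        # Base64-encode the raw image bytes for transport.
                        "data": base64.b64encode(image_bytes).decode("ascii"),
                    },
                },
                {"type": "text", "text": prompt},
            ],
        }],
    }

payload = build_image_request(b"\x89PNG\r\n...", "image/png", "Summarize this scanned page.")
```

The same function works unchanged before and after the upgrade, which is the point: higher-fidelity inputs, zero code churn.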
Finer Control Over Reasoning
Anthropic added a new effort level called xhigh — sitting between the existing “high” and “max” settings — giving developers a more granular dial for trading off reasoning depth against latency and cost. For hard problems that don’t need max-effort computation, xhigh offers a middle ground that wasn’t there before.
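As a sketch of how the new dial might be used, the snippet below validates an effort level and attaches it to a request body. The field name "effort" and the set of levels beyond the three named above are assumptions for illustration, not a documented API surface.

```python
# Levels named in the release notes; other levels may exist.
EFFORT_LEVELS = ("high", "xhigh", "max")

def with_effort(body: dict, effort: str) -> dict:
    """Return a copy of a request body with an effort level attached.
    NOTE: the request field name 'effort' is an assumption for this sketch;
    consult the API reference for the real parameter."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {**body, "effort": effort}

request = with_effort(
    {"model": "claude-opus-4-7", "max_tokens": 2048,
     "messages": [{"role": "user", "content": "Plan the migration."}]},
    "xhigh",
)
```

Treating the level as a validated enum rather than a free string catches typos like "x-high" at request-build time instead of at the API boundary.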
Also entering public beta are task budgets: a mechanism that lets developers cap how many tokens an autonomous agent can spend before it stops and checks in. For anyone building agentic workflows — where runaway reasoning loops can quietly burn through budgets — this is a practical safety net that addresses a real operational headache.
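The idea is easy to mimic client-side today. The loop below runs an agent step function until it either finishes or exhausts a token budget, at which point it stops and hands control back to the user. This is a toy sketch of the concept, not the hosted beta feature, which presumably enforces the cap server-side.

```python
from typing import Callable, Tuple

def run_with_budget(step: Callable[[], Tuple[bool, int]],
                    budget_tokens: int) -> Tuple[str, int]:
    """Run an agent loop under a token budget.

    `step` performs one agent iteration and returns (done, tokens_used).
    Returns ("done", spent) on completion, or ("budget_exhausted", spent)
    when the budget is spent, so the caller can check in with the user
    instead of letting a runaway loop burn money.
    """
    spent = 0
    while True:
        done, used = step()
        spent += used
        if done:
            return "done", spent
        if spent >= budget_tokens:
            return "budget_exhausted", spent  # stop and check in

# Toy step that never finishes and burns 300 tokens per iteration.
status, spent = run_with_budget(lambda: (False, 300), budget_tokens=1000)
```

With a 1,000-token budget and 300 tokens per step, the loop halts after the fourth iteration rather than spinning indefinitely, which is exactly the failure mode task budgets are meant to contain.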
A New Tool for Code Review
Inside Claude Code, Anthropic added /ultrareview, a slash command designed to simulate a senior human reviewer. Rather than just catching syntax errors, it’s built to flag subtle design flaws, architectural issues, and logic gaps that standard linting tools miss. Early reports suggest it’s particularly useful on pull requests where a second set of eyes would normally be warranted.
What About Mythos?
Opus 4.7’s release comes with an explicit acknowledgment from Anthropic that it trails the company’s more powerful Claude Mythos model — which remains restricted to roughly 40 vetted enterprise and government partners. Anthropic describes Mythos as capable of autonomously discovering and exploiting zero-day software vulnerabilities, which is precisely why it isn’t available to the general public. Opus 4.7, by contrast, was deliberately trained with reduced cybersecurity capabilities to make it safer for broad deployment.
That two-tier approach — a publicly available model and a more powerful restricted one — is increasingly how frontier AI labs are managing the tension between capability and safety.
Pricing and Availability
Opus 4.7 is available now across Anthropic’s API, Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry, and GitHub Copilot Pro+/Business/Enterprise. Pricing remains unchanged at $5 per million input tokens and $25 per million output tokens — the same as Opus 4.6.
For teams already using Claude Opus 4.6, the upgrade path is straightforward: swap the model string, and the improvements come along for free.
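That upgrade path can be expressed in one line. The model ID strings below are assumptions for illustration; the point is simply that the model field is the only thing that changes in an existing request.

```python
# Hypothetical model IDs for illustration; check the release notes for exact strings.
OLD_MODEL = "claude-opus-4-6"
NEW_MODEL = "claude-opus-4-7"

def upgrade(request: dict) -> dict:
    """Swap the model string; every other field is untouched."""
    if request.get("model") == OLD_MODEL:
        return {**request, "model": NEW_MODEL}
    return request

req = upgrade({"model": "claude-opus-4-6", "max_tokens": 1024, "messages": []})
```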
