CFOtech India - Technology news for CFOs & financial decision-makers

Anthropic launches Claude Opus 4.7 with stronger coding

Fri, 17th Apr 2026

Anthropic has made Claude Opus 4.7 generally available.

The model is rolling out across Claude products, Anthropic's API, Amazon Bedrock, Google Cloud Vertex AI and Microsoft Foundry.

The launch is the latest update to Anthropic's flagship Opus line. Opus 4.7 improves on Opus 4.6 in software engineering and image handling while keeping the same pricing: USD $5 per million input tokens and USD $25 per million output tokens.

Early users were able to delegate more difficult coding work to the model with less supervision. Anthropic also reported stronger performance on long-running tasks, closer adherence to instructions and better internal checking of outputs before answers are returned.

Cyber controls

The release is also part of Anthropic's staged approach to cyber safety, following its decision to keep access to Claude Mythos Preview limited. Opus 4.7 is the first model in that process, with safeguards designed to detect and block requests suggesting prohibited or high-risk cybersecurity use.

Anthropic tested ways to reduce the model's cyber capabilities during training and described Opus 4.7 as less advanced in that area than Mythos Preview. Lessons from deploying those safeguards will inform any broader release of Mythos-class models.

At the same time, security professionals with legitimate cybersecurity needs, including vulnerability research, penetration testing and red-teaming, can apply to a new Cyber Verification Program.

Model changes

Beyond coding work, Opus 4.7 can process higher-resolution images, accepting images up to 2,576 pixels on the long edge. According to Anthropic, that is more than three times the visual input size supported by earlier Claude models.
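As a rough illustration of that 2,576-pixel long-edge limit, an oversized image would need to be scaled down proportionally before being sent. The limit is from the announcement; the helper below is a hypothetical sketch, not part of Anthropic's API:

```python
def fit_long_edge(width: int, height: int, max_long_edge: int = 2576) -> tuple[int, int]:
    """Scale (width, height) down so the longer side is at most max_long_edge,
    preserving aspect ratio. Images already within the limit are unchanged."""
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)

print(fit_long_edge(4000, 3000))  # (2576, 1932)
print(fit_long_edge(1920, 1080))  # (1920, 1080) - already within the limit
```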

The model is also aimed at professional work involving documents, slides and interfaces. Internal testing indicated stronger results for financial analysis, and Anthropic said the model reached a leading score on its Finance Agent evaluation. It also said Opus 4.7 was state-of-the-art on GDPval-AA, a third-party evaluation covering finance, legal work and other forms of economically valuable knowledge work.

Another change is memory. Anthropic said the model is better at using file system-based memory, allowing it to retain notes across longer and multi-session tasks and reducing how much context users need to provide again later.

Prompt and cost impact

Anthropic warned that stronger instruction-following could lead to unexpected results for prompts written for earlier models, because Opus 4.7 interprets directions more literally. It advised users to retune prompts and related systems when moving from Opus 4.6.

The upgrade also changes token usage. An updated tokenizer means the same input may convert into roughly 1.0 to 1.35 times as many tokens, depending on content type. Higher effort settings can also increase output token use, especially in agent-style workflows involving multiple turns.
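Combining the published Opus pricing (USD $5 per million input tokens) with the reported 1.0 to 1.35 multiplier gives a back-of-envelope range for input spend. This is a sketch only: the multiplier is content-dependent, and the figures below are the announcement's, not measured results:

```python
def input_cost_range(old_input_tokens: int,
                     price_per_million: float = 5.0,
                     multiplier_range: tuple[float, float] = (1.0, 1.35)) -> tuple[float, float]:
    """Estimate the USD input-cost range after the tokenizer change.

    old_input_tokens: token count for the same text under the previous tokenizer.
    The text may now tokenize into 1.0-1.35x as many tokens, so input cost
    scales by the same factor.
    """
    lo, hi = multiplier_range
    cost = lambda m: old_input_tokens * m * price_per_million / 1_000_000
    return cost(lo), cost(hi)

# 10 million input tokens under the old tokenizer:
low, high = input_cost_range(10_000_000)
print(f"${low:.2f} to ${high:.2f}")  # $50.00 to $67.50
```

As the article notes, Anthropic recommends measuring the actual difference on live traffic rather than relying on the headline range.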

Users can manage that spend through effort settings, task budgets and shorter prompts. Anthropic said its own tests showed a favourable overall effect on token usage in an internal coding evaluation, but recommended measuring differences on live traffic.

New tools

Anthropic is also introducing an xhigh effort level between high and max, giving users another option to balance reasoning time and latency on harder tasks. In Claude Code, the default effort level has been raised to xhigh for all plans.

On the API side, task budgets are launching in public beta to help developers guide token spending across longer runs. In Claude Code, Anthropic is adding a /ultrareview command that creates a dedicated review session to scan changes and flag bugs and design issues.

Anthropic has also extended auto mode to Max users. Under that setting, Claude can make decisions on a user's behalf during longer tasks with fewer interruptions.

Safety profile

Anthropic said Opus 4.7 has a safety profile similar to Opus 4.6, with low rates of behaviour it classifies as concerning, including deception, sycophancy and cooperation with misuse. The newer model performed better on some measures, such as honesty and resistance to malicious prompt injection attacks, but was modestly weaker on others, including a tendency to provide overly detailed harm-reduction advice on controlled substances.

Its alignment assessment found that the model is "largely well-aligned and trustworthy, though not fully ideal in its behavior".