Anthropic rolled out Claude Sonnet 4.5 today, making the updated model widely available across its platforms. Pricing is unchanged from Claude Sonnet 4: $3 per million input tokens and $15 per million output tokens. Developers can access the model through the Claude API using the identifier “claude-sonnet-4-5”.
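For a rough sense of what those rates mean in practice, the per-request cost can be estimated directly from token counts. This is a minimal sketch based only on the prices quoted above; the function name and example token counts are illustrative, not part of any Anthropic SDK:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost for Claude Sonnet 4.5 from the quoted rates:
    $3 per million input tokens, $15 per million output tokens."""
    INPUT_PER_MTOK = 3.00    # USD per 1M input tokens
    OUTPUT_PER_MTOK = 15.00  # USD per 1M output tokens
    return (input_tokens / 1_000_000) * INPUT_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PER_MTOK

# Hypothetical request: 200k input tokens, 50k output tokens
cost = estimate_cost_usd(200_000, 50_000)
print(f"${cost:.2f}")  # → $1.35
```

Because output tokens cost five times as much as input tokens, output-heavy workloads (long generations, file creation) dominate the bill even when prompts are large.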
Alongside the model update, several features in the Claude suite have been enhanced. Users can now run code and create files directly within conversations in the web interface and dedicated apps, generating spreadsheets, slides, and documents without leaving the chat.
For Max subscribers, the company has launched a five-day research preview called “Imagine with Claude,” which demonstrates the model’s real-time software generation. Anthropic describes the preview as a glimpse of what Claude Sonnet 4.5 can do when paired with the right infrastructure.
The command-line development tool Claude Code also received several upgrades alongside the release: checkpoints for saving progress, a revamped terminal interface, and a dedicated Visual Studio Code extension. The Claude API, meanwhile, gains context editing and memory tools to help manage longer-running tasks.
As AI companies lean on software development benchmarks to showcase their assistants, progress in other areas remains harder to quantify. Still, customers rely on chatbots like Claude for a wide range of general tasks. Following recent reports of users receiving misleading or fantastical responses from AI chatbots, Anthropic says Claude Sonnet 4.5 reduces “sycophancy, deception, power-seeking, and the encouragement of delusional thinking” compared with earlier models. Sycophancy refers to an AI’s tendency to endorse users’ ideas even when they are incorrect or potentially harmful.
While the anthropomorphic framing of AI behavior is debatable, the effort to curb sycophantic tendencies is a welcome development, especially as chatbots are asked to do far more than write code.