
Anthropic, one of the world's largest AI vendors, has a powerful family of generative AI models called Claude. These models can perform a wide range of tasks, from captioning images and writing emails to solving math and coding challenges.
With Anthropic's model ecosystem growing so quickly, it can be tough to keep track of which Claude models do what. To help, we've put together a guide to Claude, which we'll keep updated as new models and upgrades arrive.
Claude models
Claude models are named after literary forms of writing: Haiku, Sonnet, and Opus. The latest are:
- Claude 3.5 Haiku, a lightweight model.
- Claude 3.7 Sonnet, a midrange, hybrid reasoning model. This is currently Anthropic's flagship AI model.
- Claude 3 Opus, a large model.
Counterintuitively, Claude 3 Opus, the largest and most expensive model Anthropic offers, is currently the least capable Claude model. That's bound to change, however, when Anthropic releases an updated version of Opus.
Most recently, Anthropic launched Claude 3.7 Sonnet, its most advanced model to date. This model differs from Claude 3.5 Haiku and Claude 3 Opus in that it's a hybrid reasoning model, capable of giving both real-time answers and more considered, "thought-out" answers to questions.
When using Claude 3.7 Sonnet, users can choose whether to activate the model's reasoning abilities, which prompt it to "think" for a short or long period of time.
When reasoning is turned on, Claude 3.7 Sonnet will spend anywhere from a few seconds to a few minutes in a "thinking" phase before answering. During this phase, the model breaks the user's prompt into smaller parts and checks its answers.
Claude 3.7 Sonnet is Anthropic's first model that can "reason," a technique many AI labs have turned to as traditional methods of improving AI performance have tapered off.
Even with its reasoning disabled, Claude 3.7 Sonnet remains one of the tech industry's top-performing AI models.
In November, Anthropic launched an improved (and pricier) version of its lightweight model, Claude 3.5 Haiku. This model outperforms Anthropic's Claude 3 Opus on several benchmarks, but it can't analyze images the way Claude 3 Opus and Claude 3.7 Sonnet can.
All Claude models, which have a standard 200,000-token context window, can also follow multistep instructions, use tools (e.g., stock ticker trackers), and produce structured output in formats like JSON.
A context window is the amount of data a model like Claude can analyze before generating new data, while tokens are subdivided bits of raw data (like the syllables "fan," "tas," and "tic" in the word "fantastic"). Two hundred thousand tokens is equivalent to about 150,000 words, or a 600-page novel.
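That token-to-word ratio (roughly 0.75 words per token) can be sketched as a quick back-of-the-envelope estimator. The helper name below is ours, and real tokenizers will produce counts that vary with the text, so treat this strictly as a rough rule of thumb:

```python
# Rule-of-thumb ratio from above: 200,000 tokens ≈ 150,000 words,
# i.e. about 0.75 words per token. Actual tokenizer output varies.
WORDS_PER_TOKEN = 150_000 / 200_000  # 0.75

def estimate_tokens(word_count: int) -> int:
    """Rough token count for a given number of English words."""
    return round(word_count / WORDS_PER_TOKEN)

# A 600-page novel of ~150,000 words roughly fills the 200,000-token window.
print(estimate_tokens(150_000))  # -> 200000
```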
Unlike many leading generative AI models, Anthropic's can't access the internet, meaning they're not particularly good at answering questions about current events. They also can't generate images, only simple line diagrams.
As for the major differences between Claude models, Claude 3.7 Sonnet is faster than Claude 3 Opus and better understands nuanced and complex instructions. Haiku struggles with sophisticated prompts, but it's the swiftest of the three models.
Claude model pricing
The Claude models are available through Anthropic's API and managed platforms such as Amazon Bedrock and Google Cloud's Vertex AI.
Here's the Anthropic API pricing:
- Claude 3.5 Haiku costs 80 cents per million input tokens (~750,000 words), or $4 per million output tokens
- Claude 3.7 Sonnet costs $3 per million input tokens, or $15 per million output tokens
- Claude 3 Opus costs $15 per million input tokens, or $75 per million output tokens
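The rates above translate into a simple per-request cost estimate. The sketch below uses informal labels for each model (not official API model IDs) and assumes pricing is pro-rated linearly per token:

```python
# Per-million-token rates from the list above: (input $, output $).
# Keys are informal labels, not official Anthropic API model IDs.
PRICES = {
    "claude-3.5-haiku": (0.80, 4.00),
    "claude-3.7-sonnet": (3.00, 15.00),
    "claude-3-opus": (15.00, 75.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request, pro-rated per token from the listed rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g. 10,000 input tokens and 2,000 output tokens on Claude 3.7 Sonnet:
print(round(estimate_cost("claude-3.7-sonnet", 10_000, 2_000), 4))  # -> 0.06
```

This also makes the gap between models concrete: the same request costs five times as much on Claude 3 Opus as on Claude 3.7 Sonnet.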
Anthropic offers prompt caching and batching for additional runtime savings.
Prompt caching lets developers store specific "prompt contexts" that can be reused across API calls to a model, while batching processes asynchronous groups of low-priority (and therefore cheaper) model inference requests.
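To make the prompt-caching idea concrete, here is a sketch of a request body that flags a long, reusable system prompt as cacheable, modeled loosely on Anthropic's prompt-caching feature. The exact field names and model alias below are assumptions that may differ across SDK versions, so treat this as illustrative rather than authoritative:

```python
# Sketch: mark a long system prompt as cacheable so later API calls can
# reuse it. Field names ("cache_control", "ephemeral") and the model
# alias are assumptions; check your SDK version's documentation.
def build_cached_request(system_prompt: str, user_message: str) -> dict:
    return {
        "model": "claude-3-7-sonnet-latest",  # hypothetical model alias
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # Flag this block for reuse across subsequent calls.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_cached_request(
    "You are an assistant grounded in our 40-page style guide...",
    "Summarize rule 3.",
)
print(request["system"][0]["cache_control"])  # -> {'type': 'ephemeral'}
```

The savings come from the fact that the cached system block is billed at a reduced rate on reuse instead of being reprocessed in full on every call.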
Claude plans and apps
For individual users and companies simply looking to interact with the Claude models via apps for the web, Android, and iOS, Anthropic offers a free Claude plan with rate limits and other usage restrictions.
Upgrading to one of the company's subscriptions removes those limits and unlocks new functionality. The current plans are:
Claude Pro, which costs $20 per month, comes with 5x higher rate limits, priority access, and previews of upcoming features.
Its business-focused counterpart, Team, which costs $30 per user per month, adds a dashboard for billing and user management, plus integrations with data repos such as codebases and customer relationship management platforms (e.g., Salesforce). A toggle enables or disables citations for verifying AI-generated claims. (Like all models, Claude occasionally hallucinates.)
Both Pro and Team subscribers get Projects, a feature that grounds Claude's outputs in knowledge bases, which can be style guides, interview transcripts, and so on. These customers, along with free-tier users, can also tap into Artifacts, a workspace where users can edit and add to content like code, apps, website designs, and other docs generated by Claude.
For customers who need even more, there's Claude Enterprise, which allows companies to upload proprietary data into Claude so that Claude can analyze the information and answer questions about it. Claude Enterprise also comes with a larger context window (500,000 tokens), GitHub integration so engineering teams can sync their repositories with Claude, and Projects and Artifacts.
A word of caution
As is the case with all generative AI models, there are risks associated with using Claude.
The models occasionally make mistakes when summarizing or answering questions because of their tendency to hallucinate. They're also trained on public web data, some of which may be copyrighted or under a restrictive license. Anthropic and many other AI vendors argue that the fair-use doctrine shields them from copyright claims, but that hasn't stopped data owners from filing lawsuits.
Anthropic offers policies to protect certain customers from court battles arising from fair-use challenges. Still, those policies don't resolve the ethical quandary of using models trained on data without permission.
This article was originally published on October 19, 2024. It was updated on February 25, 2025 to include new details about Claude 3.7 Sonnet and Claude 3.5 Haiku.