
Considering the advanced state of OpenAI, what opportunities exist for competitors like Anthropic?

In which areas can Anthropic differentiate itself and find a unique market niche? How can Anthropic capitalize on any limitations or gaps in OpenAI's offerings? What specific industries or applications may be more suitable for Anthropic's AI solutions?

Jan-Erik Asplund

Co-Founder at Sacra

Anthropic is an AI research company founded in 2021 by Dario Amodei (former VP of Research at OpenAI), Daniela Amodei (former VP of Safety and Policy at OpenAI), and nine other former OpenAI employees, including Tom Brown, the lead engineer on GPT-3. Their early business customers include Notion, DuckDuckGo, and Quora.

Notion uses Anthropic to power Notion AI, which can summarize documents, edit existing writing, and generate first drafts of memos and blog posts.

DuckDuckGo uses Anthropic to provide “Instant Answers”—auto-generated answers to user queries.

Quora uses Anthropic to power its Poe chatbot because Claude is more conversational and better at sustaining multi-turn dialogue than ChatGPT.

In March 2023, Anthropic launched its first product available to the public: the chatbot Claude, a competitor to ChatGPT. Claude’s 100K-token context window, versus ChatGPT’s roughly 4K tokens, makes it potentially useful for many use cases across the enterprise.

Despite the advanced state of OpenAI, there are a few different avenues for Anthropic to become a major player in the AI space.

1. Anthropic gives companies optionality across their LLMs

DuckDuckGo’s AI-based search uses both Anthropic and OpenAI under the hood. Scale uses OpenAI, Cohere, Adept, CarperAI, and Stability AI. Quora’s chatbot Poe allows users to choose which model they get an answer from, between options from OpenAI and Anthropic.

Across all of these examples, the pattern is the same: companies don’t want to be dependent on any single LLM provider.

One reason is that using LLMs from multiple providers on the back-end gives companies more bargaining power when negotiating terms and prices with those providers.

Working with multiple LLM companies also means that in the event of a short-term outage or a long-term strategic shift, companies aren’t dependent on just one provider and have a better chance of keeping their product running uninterrupted.
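As a rough sketch of what that multi-provider setup can look like in practice, here is a minimal fallback wrapper in Python. The provider functions are placeholders standing in for whichever vendor SDKs a company actually uses (the names and behavior here are illustrative assumptions, not real API calls):

```python
import logging
from typing import Callable, Sequence

# Placeholder provider functions. In a real system each would wrap the
# vendor's own SDK; the names and simulated behavior here are illustrative only.
def call_anthropic(prompt: str) -> str:
    raise ConnectionError("simulated Anthropic outage")

def call_openai(prompt: str) -> str:
    return f"[openai completion for: {prompt[:40]}...]"

def complete_with_fallback(prompt: str,
                           providers: Sequence[Callable[[str], str]]) -> str:
    """Try each LLM provider in order, falling back to the next on failure."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:  # outage, rate limit, deprecation, etc.
            logging.warning("%s failed: %s", provider.__name__, err)
            last_error = err
    raise RuntimeError("all LLM providers failed") from last_error

if __name__ == "__main__":
    # Prefer Anthropic here; the simulated outage forces a fallback to OpenAI.
    print(complete_with_fallback("Summarize this earnings call transcript.",
                                 [call_anthropic, call_openai]))
```

The abstraction layer is the point: once every provider sits behind the same interface, swapping the preferred vendor or adding a third becomes a configuration change rather than a rewrite.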

This means that even if OpenAI remains the leader in AI, Anthropic would still have a great opportunity as the #2: the Google Cloud to OpenAI’s AWS in a multi-cloud world, and a vital option for companies looking to diversify their AI spend.

2. Anthropic is focused on B2B use cases, OpenAI on B2C

Different AI chatbots have different strengths and weaknesses. Anthropic’s Claude, for example, is more verbose than ChatGPT, more natural conversationally, and a better fit for many B2B use cases. ChatGPT, on the other hand, is better at tasks like generating and reasoning about code, and a better fit for many B2C use cases.

Claude’s 100K token context window means that it is specifically a better fit than ChatGPT for many use cases across the enterprise—something we’ve already seen play out across Notion, DuckDuckGo, and Quora, as well as companies like Robin AI (a legal tech business using Claude to suggest alternative language in briefs) and AssemblyAI (a speech AI company using Claude to summarize and drive Q&A across long audio files). Legal doc review, medical doc review, financial doc review—Claude has applications across industries where large amounts of text and information need to be processed.
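To make the context-window point concrete, here is a minimal routing sketch: pick the smallest model whose window fits the document, and send anything longer to the large-context model. The ~4-characters-per-token heuristic, the window sizes, and the model labels are assumptions for illustration (real systems would use each provider’s tokenizer and exact model names):

```python
# Rough heuristic: ~4 characters per token for English text (assumption).
# Window sizes mirror the figures cited above (100K vs. ~4K tokens); the
# model labels are illustrative, not exact API model names.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOWS = {"claude-100k": 100_000, "gpt-3.5-4k": 4_000}

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def pick_model(document: str, reserved_for_output: int = 1_000) -> str:
    """Route the document to the smallest model whose context window fits it."""
    needed = estimate_tokens(document) + reserved_for_output
    for model, window in sorted(CONTEXT_WINDOWS.items(), key=lambda kv: kv[1]):
        if needed <= window:
            return model
    raise ValueError("document exceeds every available context window; chunk it first")

# A 50-page brief (~100K characters, roughly 25K tokens) routes to the
# 100K-context model; a 300-page contract would exceed even that window
# and need to be chunked first.
```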

Another aspect of Claude that makes it potentially more useful than ChatGPT for professional use cases is that it has been trained specifically to be “more steerable” and to produce predictably non-harmful output.

Claude’s more prescriptive approach means it can be relied on to provide more consistent answers with fewer hallucinations. That tradeoff might make it less useful than ChatGPT for all-purpose consumer applications, for exploring novel information, or for generating new content like code, but it makes Claude more useful for, e.g., basic service and support tasks that involve retrieving information from a knowledge base and synthesizing it for customers.
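That retrieve-then-synthesize pattern is simple enough to sketch. The retrieval step below is a naive keyword-overlap score purely for illustration (a production system would use embeddings and a vector index), and `call_llm` is a stand-in for whichever provider the company has chosen:

```python
from typing import Callable, Sequence

def retrieve(query: str, knowledge_base: Sequence[str], k: int = 3) -> list[str]:
    """Return the k articles sharing the most words with the query (toy scorer)."""
    q = set(query.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def answer_support_question(query: str, knowledge_base: Sequence[str],
                            call_llm: Callable[[str], str]) -> str:
    """Ground the model's answer in retrieved articles to limit hallucination."""
    context = "\n\n".join(retrieve(query, knowledge_base))
    prompt = ("Answer the customer's question using only the articles below. "
              "If the answer is not in them, say so.\n\n"
              f"Articles:\n{context}\n\nQuestion: {query}\nAnswer:")
    return call_llm(prompt)
```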

3. Anthropic serves businesses rather than competing against them

Anthropic’s focus on the business/enterprise use case of building AI chatbots could be powerful not just in terms of what it allows customers to build but in how it allows them to avoid hitching their wagon to OpenAI. 

OpenAI’s hit product is the consumer chatbot ChatGPT, which makes OpenAI potentially competitive with any company building an AI product for consumers.

Since the launch of the GPT-3 API, there’s been a wave of companies building text-based AI products, like the AI writing assistants Jasper and Copy.ai, which built their businesses reselling OpenAI’s GPT-3 output at ~60% gross margin. Then OpenAI released ChatGPT, which lets users paste in text and have it edited via a chat interface, just as they could within Jasper or Copy.ai.

ChatGPT’s consumer success, while a big win for OpenAI, therefore works at cross purposes with its ability to sell API access to businesses.

Anthropic, by not having a consumer-facing product like ChatGPT, avoids this issue.

Instead, they can fully focus on developing a product specifically responsive to the needs of businesses, which might mean higher customization, better integration capabilities, a stronger focus on scalability and reliability, white-labeling, or better data privacy controls.

4. Anthropic has Google’s infrastructure and capital behind it

Google's investment in Anthropic can be seen as a strategic move to stay competitive in AI, especially given Microsoft’s close alliance with OpenAI.

Google invested nearly $400M in Anthropic in late 2022 in exchange for a 10% equity stake, similar to OpenAI’s arrangement with Microsoft, which invested $1B in the AI lab in the summer of 2019 and followed that up with a $10B investment in early 2023.

There’s a world where competition between cloud giants like Microsoft and Google, along with other companies like Amazon and Facebook, creates an oligopolistic market structure in AI, much like the one in cloud itself.

A key reason is the high capital requirement of building and training AI models: it takes millions of dollars, powerful hardware, large datasets, and a large amount of energy, privileging big incumbents with billions of dollars to spend.

However, developing AI is also risky, which is why Microsoft looks to partner with OpenAI, and Google with Anthropic, rather than bet everything on in-house efforts.

Google gets to compete with Microsoft in a kind of proxy war. In return, Anthropic gains access to Google’s vast compute resources and massive distribution network, along with the imprimatur of Google’s brand when closing business development deals, raising money, and recruiting top talent.

Meanwhile, Google buys itself a call option to use Anthropic’s tech to improve its search engine, advertising platform, or cloud services, to hedge against the risk of OpenAI dominating the space, and to keep itself at the forefront of AI research.

---

It’s still early days for the entire AI space. 

It’s unclear whether large, versatile foundation models like GPT-4 (and applications built on top of them, like GitHub’s Copilot) will continue to outperform smaller, domain-specific models like FinBERT (for financial text analysis). Open-source models could emerge to compete head-to-head with Google/Anthropic and Microsoft/OpenAI, and there’s evidence that this is starting to happen with models like Vicuna. Some other technological breakthrough could change the state of play in the market entirely.

So far, OpenAI has a lead over Anthropic: it’s on track to do more than $200M in revenue this year compared to Anthropic’s minimal revenue, it has raised $11.3B, and it has the most pedigreed team and best-known name in the space, with powerful social proof.

That said, Anthropic has its own advantages: it gives companies optionality and it’s focused on B2B use cases. It also has its own pedigree, with a founding team made up of former OpenAI employees and the might of Google behind it.

Despite the advanced state of OpenAI, opportunities exist for Anthropic to build its own multi-billion dollar business in the AI space.