September 25, 2023
Amazon has made a virtue of supporting multiple AI models on AWS, but its massive investment in a single provider indicates that is set to change.
Hyperscaler-enabled AI was a major theme at the recent DTW telecoms trade show. AI requires an exceptional amount of computing grunt, which, along with its potential to augment workloads already running in the public cloud, suggested another major reason for operators to hand over one more piece of their soul to one of the three US giants that dominate this space.
Google owns Bard, so it will presumably steer its cloud customers towards that AI platform, while Microsoft is the dominant investor in OpenAI, which leads to a similar assumption for Azure. AWS (Amazon Web Services), however, recently made a virtue of supporting whichever platform its customers choose, without fear or favour.
So it comes as a bit of a surprise to hear the news that Amazon has pledged to invest up to $4 billion in Anthropic, which owns the Claude large language model (LLM) – a direct competitor to those offered by OpenAI and Google. The Amazon announcement focuses on Anthropic’s resulting commitment to AWS but it’s hard not to conclude the converse also applies.
“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short and long-term, through our deeper collaboration,” said Andy Jassy, Amazon CEO. “Customers are quite excited about Amazon Bedrock, AWS’s new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’s AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”
“We are excited to use AWS’s Trainium chips to develop future foundation models,” said Dario Amodei, co-founder and CEO of Anthropic. “Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers. By significantly expanding our partnership, we can unlock new possibilities for organizations of all sizes, as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”
Anthropic was thought to be valued at only around $5 billion prior to this announcement, but Amazon can still claim a minority stake because its initial investment is apparently a mere $1.25 billion. But why announce the $4 billion figure if you don’t intend to enact it? Investing the full amount would surely make Amazon the majority owner, with an even greater interest in favouring Claude over the other LLMs.
There were a bunch more canned quotes from obliging AWS customers in the press release, but none of them came from the telecoms industry. As Light Reading explores, these LLMs are not currently being trained on telecoms data. Nonetheless, operators will become ever more reliant on one or more of the big public cloud providers if they want to join the generative AI party. Furthermore, it looks increasingly likely that their choice of cloud provider will also dictate which LLM they use.