AI Geekly: Abby Normal
Alexa brain transplant, dueling datacenters, and enterprise AI
Welcome back to the AI Geekly, by Brodie Woods, brought to you by usurper.ai. This week we bring you yet another week of fast-paced AI developments packaged neatly in a 5 minute(ish) read.
TL;DR: 7th Wonder of the World; Alexa’s Increased ~~IQ~~ AI-Q; Anthropic Aims for Enterprise
Hope everyone enjoyed last week with back-to-school and back-to-work (4-day week!) as we bid summer farewell. This week we heard from a growing player in the AI space about the deployment of an absolutely massive AI datacenter, though the founder’s penchant for exaggeration and the practical limitations around building large datacenters quickly (and powering them!) invited many a naysayer to speak out. We have two Anthropic-related stories this week, as the ex-OpenAI-employee-founded company has found its proprietary AI models of increasing interest to enterprise clients. And as far as enterprise clients go, it really doesn’t get any bigger than Amazon, which is looking to inject Anthropic’s AI intelligence into its less-than-intelligent Alexa platform. Were Alexa’s brain to be extracted and placed in a vat on a shelf, the label would no doubt read “Abby Normal”. Read on below!
Colossus vs. The Cloud
A Tale of Two AI Training Strategies
What it is: xAI was in the news this week, as its CEO posted on X that it had recently completed the rapid construction of a massive GPU cluster, dubbed "Colossus," in Tennessee, reportedly networking 100,000 Nvidia H100 GPUs (with a further 50,000 H200s expected in the coming months). While its full operational status remains unclear due to power and networking challenges, Colossus aims to be the world's most powerful AI training cluster at a single location. This approach stands in contrast to that of industry leaders like Google, OpenAI, and Anthropic, who are exploring multi-datacenter training strategies to overcome the physical limitations of single sites as they develop ever-larger AI models.
What it means: xAI's ambition with Colossus showcases a "go big or go home" mentality: concentrating enormous compute power in one place potentially side-steps the complexities of multi-site data synchronization and management. Low latency is critical in the training of LLMs, which is why single-site is generally preferable. Problems arise, however, when a datacenter becomes too large for local power and water sources. The success of Colossus will rely on overcoming unique challenges related to power infrastructure, networking, and environmental regulations. Meanwhile, the multi-datacenter approach, while technically complex, offers greater scalability and flexibility for future AI model development.
Why it matters: As AI model developers put the Scaling Laws for LLMs to the test, throwing ever more compute at ever-larger models, we expect to see a number of creative approaches, and the emerging battle between single-site and multi-site will be an interesting one. Will xAI's audacious single-site strategy prove more effective, or will the industry gravitate towards the distributed, multi-datacenter model? The answer will have a material impact on chip manufacturers like Nvidia and AMD, data center developers, and the telecom companies enabling high-bandwidth connections between geographically dispersed computing clusters. While the two approaches may yield comparable models, at this scale of spend (billions of dollars) seemingly small networking choices compound into material costs, or material savings.
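For intuition on the latency point above, consider a rough, hypothetical back-of-envelope sketch in Python. Every figure here (model size, link speeds, round-trip times) is an assumption chosen for illustration, not a published spec for Colossus or any real cluster, and production training stacks shard and overlap this traffic, so treat the single-site vs. cross-site ratio, rather than the absolute numbers, as the takeaway:

```python
# Hypothetical gradient-synchronization cost per optimizer step.
# All figures are illustrative assumptions, not real cluster specs.

GRAD_BYTES = 100e9 * 2  # assume a 100B-parameter model with fp16 gradients

def sync_time_s(link_bytes_per_s: float, rtt_s: float) -> float:
    """Crude ring all-reduce estimate: ~2x the gradient payload across
    the slowest link, plus one round-trip of latency per step."""
    return 2 * GRAD_BYTES / link_bytes_per_s + rtt_s

# Single site: assume 400 Gb/s InfiniBand links (~50 GB/s), ~10 us RTT
single_site = sync_time_s(50e9, 10e-6)

# Cross-site: assume a 100 Gb/s WAN link (~12.5 GB/s), ~20 ms RTT
cross_site = sync_time_s(12.5e9, 20e-3)

print(f"single-site sync: {single_site:.2f} s/step")  # ~8.00 s/step
print(f"cross-site sync:  {cross_site:.2f} s/step")   # ~32.02 s/step
```

Under these toy assumptions, the wide-area hop quadruples the synchronization bill on every single optimizer step; that is precisely the overhead multi-datacenter training schemes have to engineer around.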
Alexa Finally Gets a Brain
Amazon Outsources Alexa AI Intelligence to Anthropic
What it is: Amazon is set to release a revamped Alexa in October, and this time it's not relying solely on in-house AI. The new paid "Remarkable" version of the voice assistant will be powered primarily by Anthropic's Claude AI model, a move that Amazon insiders say was prompted by the subpar performance of Amazon's own AI software.
What it means: The move is a notable departure for Amazon, a company known for its preference for in-house solutions and tight control over its technology stack (though it does support many different third-party AI models in its Bedrock AI offering). The decision to turn to Anthropic reflects the pressure to keep pace in the rapidly evolving AI landscape, particularly as competitors like Google and OpenAI make advancements with their voice assistants (Apple, for its part, is using a combination of its in-house AI and third-party offerings including OpenAI). Amazon has experienced challenges monetizing Alexa, which has yet to generate significant revenue despite widespread adoption.
Why it matters: The success of the upgraded Alexa will be a key indicator of Amazon's ability to compete in an increasingly crowded AI space. Analysts have criticized the company’s wait-and-see approach to Generative AI, where, much like Apple, it delayed the development and release of its models while it took a beat to see where the market was heading (its Titan models do offer enterprise users a cost-conscious solution for certain applications). The company is betting that users will be willing to pay a monthly fee for a more intelligent, feature-rich voice assistant. This may very well be true: we expect many users are frustrated with the stagnation of home AI assistants over the past decade (Alexa came out in 2014) and would welcome improved capabilities. However, consumer skepticism and the potential for delays or alterations to the rollout, depending on the technology's performance, remain factors to watch.
Claude Learns to Tie a Double Windsor
Anthropic Goes Enterprise
What it is: Anthropic has launched Claude for Enterprise, a new subscription tier designed for businesses seeking to leverage AI while maintaining control over their data and workflows. Key features include an expanded 500,000-token context window, a native GitHub integration, and robust security measures like single sign-on (SSO) and role-based permissions.
What it means: Anthropic is directly targeting the lucrative enterprise market, positioning Claude as a powerful AI assistant capable of handling complex tasks and integrating seamlessly into existing workflows. The emphasis on enterprise-grade security and data privacy addresses concerns around the use of sensitive information in AI applications. Why it still can’t connect to the internet like ChatGPT and Gemini, I can’t tell you, other than an overly cautious stance on Anthropic’s part, though that same caution may well appeal to some similarly wet-blanket clients...
Why it matters: The launch of Claude for Enterprise reflects the growing demand for tailored AI solutions that meet the specific needs of businesses. Anthropic's move could challenge Microsoft's dominance in the enterprise AI space, particularly for companies seeking an alternative to OpenAI-powered solutions. Given the litany of lawsuits and resignations of safety-team staff over the years, OpenAI doesn’t necessarily come off as the stalwart enterprise partner of choice. Early adoption by prominent companies like GitLab and Midjourney suggests that Claude is gaining traction as a collaborative AI tool, at least with other tech companies.
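To make the headline feature concrete, here is a minimal sketch using Anthropic's Python SDK that stuffs an entire (hypothetical) repository into a single prompt, the sort of thing a 500,000-token window permits. The model ID and the my_repo path are placeholders for illustration, and actual Claude for Enterprise limits and integrations may differ:

```python
# Minimal sketch: ask Claude to reason over a whole codebase in one prompt.
# Requires `pip install anthropic` and an ANTHROPIC_API_KEY in the environment.
import pathlib

import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

# Concatenate every Python file in a (hypothetical) repo into one string;
# feasible only because of the enterprise tier's large context window.
repo_text = "\n\n".join(
    f"# file: {path}\n{path.read_text()}"
    for path in pathlib.Path("my_repo").rglob("*.py")
)

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model ID for illustration
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Here is our codebase:\n\n{repo_text}\n\n"
                   "Summarize the architecture and flag any obvious risks.",
    }],
)
print(message.content[0].text)
```

The native GitHub integration presumably handles this kind of plumbing for you; the sketch simply shows why the oversized context window is the feature enterprise users will actually notice.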
Before you go… We have one quick question for you:
If this week's AI Geekly were a stock, would you:
About the Author: Brodie Woods
As CEO of usurper.ai, with over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist at leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.