AI Geekly: Vying for Attention (is All you Need)
AI players fight for the spotlight
Welcome back to the AI Geekly, by Brodie Woods, brought to you by usurper.ai. This week we bring you yet another week of fast-paced AI developments packaged neatly in a 5 minute(ish) read.
TL;DR: AlexAWS is on Fire; OpenAIdvent Calendar
This week many in the AI community were focused on Amazon’s annual re:Invent event, where the company’s top brass brings together AWS developers to show off the latest wares it hopes will both keep current users on its platform and attract new ones. Looking back to last year’s re:Invent, the big announcement was the release of the company’s Q virtual assistant (AWS’ response to MSFT’s GitHub Copilot). While Q may not have made quite the splash Amazon CEO Andy Jassy had wanted, the company has continued to invest in Q’s capabilities as it moves towards a more agentic design (i.e. able to take actions on behalf of users). This year re:Invent was back, and with AWS now headed by Matt Garman, the company is making its GenAI ambitions a clearer priority. Garman & Co. dropped a brand-new family of models that “bob and weave with the best of them”, announced an “ultracluster”, and underscored the value of its partnerships as it did its best to showcase its AI bona fides to the market.
Those who’ve followed the Geekly and the OpenAI story for a while will know well that the ChatGPT maker has a propensity for upstaging competitors’ events with groundbreaking announcements of its own. This week was no different: CEO Sam Altman announced OAI’s 12 Days of Shipmas, during which the company will unveil new products and capabilities over the coming weeks. To kick things off, the company announced a pricey new subscription tier and its highest-performing reasoning model to date. Read on below!
SamA Claus is Coming to Town
OAI’s 12 Days of Shipmas
What it is: OpenAI is gearing up for a busy holiday season, as it announced it will unveil dozens of new features, products, and demos over 12 weekdays this December. The so-called "shipmas" period will include the highly anticipated launch of Sora, its text-to-video AI model, and a new reasoning model. The company kicked things off on Friday, with the introduction of ChatGPT Pro, a $200 per month subscription tier offering unlimited access to OpenAI's most advanced models, including a new "pro mode" for its o1 reasoning model with enhanced performance on complex tasks.
What it means: OpenAI is aggressively pushing to expand its reach and monetize its AI technology. The imminent launch of Sora, after months of anticipation and recent leaks, positions OpenAI to compete directly with Google's Veo video-generation platform (released this week and capable of generating >1-minute 1080p videos). The introduction of ChatGPT Pro, while pricey, aims to capture the high-end user base of researchers, engineers, and other professionals who require access to the most powerful AI capabilities. However, the value proposition of the Pro tier, particularly its "o1 pro mode" feature, remains somewhat unclear, with questions raised about its actual performance gains compared to the standard o1 model. The removal of usage limits across OAI’s models in Pro is likely one of the biggest draws for power users.
Why it matters: OpenAI finds itself in an interesting position. Its popularity is unquestionable: as of last week, ChatGPT has crossed the 300 million weekly active user mark, with users sending over 1 billion messages per day; it is a “big player”, even an incumbent of sorts, in AI. Despite unprecedented fundraising capabilities ($18 Bn to date), it still has nowhere near the balance sheet of the titans with which it competes (Google, Microsoft, Amazon, Meta). The cost of research, model development, and serving up its models to customers on both the ChatGPT client and the business API side consumes tremendous amounts of capital. To survive, the company has to remain ahead (or at least appear to be ahead) and capture the hearts of consumers, businesses, academia, and governments. Most importantly, it has to figure out how to turn the billions invested into profit for itself and its customers. While the introduction of ChatGPT Pro is a step in that direction, considerably larger revenue streams will be needed to reach break-even, let alone begin to generate a profit.
Amazon’s Champagne Super Nova
Series of impressive announcements at AWS re:Invent 2024
What it is: Where some may have felt last year’s re:Invent conference saw AMZN merely dipping its toe in the water of GenAI, this year it went all-in, unveiling a slew of new offerings designed to challenge the dominance of rivals like Google and OpenAI in the generative AI space. The announcements included the "Amazon Nova" family of foundation models, featuring a range of capabilities from text and multimodal understanding to image and video generation. Amazon also highlighted a strategic partnership with Anthropic to build a massive "ultracluster," powered by AWS's proprietary Trainium chips, and a surprising collaboration with Apple, which revealed it is using AWS infrastructure and exploring the use of Trainium for its own AI development. The company also unveiled its next-gen Trainium3 AI chips, its first built on a 3 nm process node, telegraphing a 4x improvement in performance.
What it means: Amazon is aggressively pursuing a multi-pronged strategy to establish itself as a leader in the enterprise AI market. The introduction of the Nova models, with their competitive pricing and focus on enterprise-specific use cases, aims to attract businesses seeking powerful and cost-effective AI solutions. The collaborations with Anthropic and Apple, while involving substantial financial investment, strategically promote the adoption of AWS's Trainium chips, presenting a potential alternative to the ubiquitous Nvidia GPUs that currently dominate the AI hardware landscape. That preference for Nvidia GPUs, rooted in CUDA’s status as the de facto industry-standard platform for AI development and research, remains the greatest obstacle for any company challenging Nvidia’s dominance in AI hardware.
Why it matters: Amazon's focus on delivering practical, enterprise-ready solutions, combined with its strategic investments in hardware and partnerships, has been a tried-and-true approach that looks like it could serve the company well once more. The success of Amazon's strategy hinges on its ability to convince businesses that its Nova models and Trainium chips offer a compelling alternative to existing solutions, while also proving to investors that its growing AI investments will translate into meaningful revenue growth and long-term profitability for AWS. Still a “show me” story (like most GenAI to date), but Amazon is making the right moves.
Before you go… We have one quick question for you:
If this week's AI Geekly were a stock, would you:
About the Author: Brodie Woods
As CEO of usurper.ai and with over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.