AI Geekly: Models, Chips, and Robots, Oh My!
We're not in Kansas anymore...
Welcome back to the AI Geekly, by Brodie Woods, brought to you by usurper.ai. This week we bring you yet another week of fast-paced AI developments packaged neatly in a 5 minute(ish) read.
TL;DR Driving Machines; Chipwrecked; AI Model Battle Royale
This week we have a bit more content for you to chew on. We start with another “AI Done Right” case study looking at how recent advances in Generative AI have accelerated the development of humanoid robots, with material implications for human labor —first movers here are going to benefit from a flywheel effect. Next, we look at recent news in the chips space, digesting insights from Nvidia’s most recent quarter and looking at the potential implications of a major multi-billion-dollar investment in Anthropic by Amazon and a new record in AI inference. Finally, we note some interesting developments on the AI modelling side as researchers compare the capabilities of the top frontier models vying to knock OpenAI off its perch. Read on below!
AI Done Right Case Study: Figure 02 Robot at BMW Plant
AI-enhanced robots improving at accelerating pace
What it is: BMW and humanoid-robot manufacturer Figure provided an update on the progress of the Figure 02 humanoid robots being piloted at the BMW Spartanburg plant. The AI-imbued androids achieved a remarkable 400% efficiency improvement during recent trials, successfully performing high-precision sheet-metal insertion tasks with significantly improved speed and accuracy. Impressively, this milestone in robotic dexterity comes just a few months after the robots were deployed to the factory in August. It brings to mind another impressive AI-enhanced robot launched in August, Astribot's S1, a wheeled humanoid robot that uses its digital gray matter to perform complex household tasks (see the video below).
What it means: While Figure 02's successful integration into a real-world automotive production line showcases the growing potential of humanoid robots to transform manufacturing processes, the speed of its improvement in just three short months underscores the pace of the AI revolution (of which Robotics is an important part). The robot's ability to handle complex, millimeter-precise tasks with increased speed and reliability suggests that humanoid robots could play a significant role in automating traditionally labor-intensive operations. What’s critical to note here is that these are not specialized robots tailor-made for BMW’s production line —the Figure 02 is a multi-purpose humanoid robot demonstrating its ability to be deployed across a wide range of complex and highly specific tasks.
Why it matters: These rapid developments in robotics are largely due to the integration of Generative AI / Large Language Models (LLMs) as a key element of the robots’ software and interaction models. Figure 02's achievements underscore the transformative potential of AI-powered robotics not only in the manufacturing sector, but across all sectors, and indeed society at large. The increased efficiency, precision, and automation capabilities offered by humanoid robots will certainly lead to significant cost savings, improved product quality, and enhanced productivity for companies like BMW and others who adopt this technology. While near-term challenges remain in terms of scalability, cost-effectiveness, and broader industry adoption, the rapid progress demonstrated by Figure 02, alongside other promising developments like Astribot's S1, is a preview of what is to come.
What’s the play?: Forward-thinking enterprise players would be wise to begin experimenting with this technology now. Just as first movers in AI were wise to build early, strong relationships with Nvidia (earning preferential treatment when the crowd caught on and demand soared), innovative management teams should be experimenting with this technology today to gain insights into how to scale and integrate it in the near future, while establishing the relationships to ensure they have the supply they need when the rest of the world catches up.
When The Chips Are Down
Nvidia’s quarter, Amazon’s investment, and Cerebras’ record
What it is: Plenty of developments in the world of AI chips this week. Nvidia announced another blow-out quarter buoyed by insatiable demand for its AI chips (including its upcoming Blackwell cards). Amazon announced another $4 Bn investment in Anthropic, but with a catch! Anthropic must use AWS’ proprietary Trainium chips in lieu of Nvidia’s chips, which makes things a little more ticklish. Finally, Cerebras, a smaller “chip” manufacturer (currently using a 7nm node, for those who care), announced that its wafer-scale AI chips were able to generate code 75x faster than an Nvidia GPU configuration.
What it means: Nvidia's continued dominance in the AI chip market is undeniable (>80%), with its data center sales growing at a staggering pace (112% y/y) and demand for its upcoming Blackwell chips far outstripping supply. The company's expanding software and services business, now generating $1.5 billion annually, also positions it to compete with its own cloud provider customers (that… doesn’t sound like a good thing). Amazon's strategic investment in Anthropic, while providing the AI startup with substantial capital, aims to boost the adoption of its own Trainium chips, a potential challenger to Nvidia’s dominance but one that has yet to achieve widespread adoption (for their part, Anthropic would prefer to use Nvidia chips, given the choice). Cerebras’s remarkable performance benchmark highlights the potential for alternative chip architectures to disrupt the market, particularly in specialized applications where speed and efficiency are paramount (but their limited ability to scale and meet current demand levels reflects the practical reality of the physical limitations preventing a true challenge to Nvidia’s dominance).
Why it matters: Despite frequent developments and the participation of the biggest players in Tech, the AI chip market remains incredibly difficult to disrupt, dominated by a lone player —much as it has been, not only for the past two years (when most began to follow the NVDA story), but going all the way back to the initial release of Nvidia’s CUDA software in 2006 (which unlocked Nvidia GPUs for the matrix multiplication favored in STEM and AI). Nvidia's strong financial performance and technological leadership position it as the current frontrunner; challengers like AWS and Cerebras hope to eke out some market share, but it’s a monumental task. Amazon's strategic bet on Trainium, coupled with Cerebras's focus on speed and specialized workloads, could create a more dynamic and competitive landscape in the long run. For investors, the race for AI chip supremacy presents both opportunities and risks as the market evolves and the demand for specialized AI hardware continues to grow.
What’s the play?: Many businesses will be using APIs, cloud-based services, or third-party solutions, where you don’t have to worry so much about what’s under the hood. Depending on your use case, and assuming you’re working with the right vendor partners, what’s happening on the chips side shouldn’t matter to you too much (though it may flow through to you in terms of cost). For those who get a little more in the weeds, aren’t well served by off-the-shelf offerings, have specific use cases, deal with sensitive data, or operate in highly regulated industries (Finance, Healthcare): you may find yourselves either using highly customized cloud hardware or building your own servers on-premises, in which case the chips will make a difference. There’s no one-size-fits-all approach, but, depending on the application, the market dynamics mentioned above can actually be beneficial —enabling optimization by min-maxing combinations of models and chips, each with varying degrees of price and performance, to achieve an optimal match. Questions? Reach out at usurper.ai!
Model Showdown
OpenAI vs. the world
Anthropic AI vs. OpenAI: A new study from the non-profit Model Evaluation and Threat Research (METR) pitted Anthropic's Claude 3.5 Sonnet against OpenAI's o1-preview in a series of seven complex AI research problems, revealing that Anthropic's model outperformed OpenAI's in five of the seven tests. The METR study highlights the rapid advancements in AI's ability to perform complex research tasks. While both Claude and o1-preview lagged behind human researchers in overall performance, their ability to solve specific problems at a level comparable to the average human researcher is notable.
DeepSeek vs. OpenAI: DeepSeek, an open-source AI research company, has released its own reasoning-focused LLM, R1-Lite-Preview, showing performance comparable to, and in some cases exceeding, OpenAI's o1-preview. DeepSeek's competitive entry into the reasoning-focused LLM space, with its emphasis on transparency and open-source accessibility, could further accelerate the development of advanced AI capabilities.
Why it matters: The race to develop more powerful and capable LLMs is driving innovation across multiple fronts, from AI-assisted research and transparent reasoning to voice interaction and real-world problem-solving. The increasing accessibility of advanced AI tools, through open-source platforms like Hugging Face, is empowering a wider range of users to leverage the transformative potential of AI. While concerns remain about the safety and ethical implications of increasingly sophisticated AI systems, the rapid pace of advancement suggests that AI will continue to play a growing role in shaping our future.
What’s the play?: The battle to be king of the hill in LLMs is a bit of a distraction. While much of the focus is on “America’s Next Top Model”, the gaps between models are often temporary and the advantages short-lived. Businesses should fret less about the best overall model and instead focus on the best model for their use case and price point.
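The “best model for your use case and price point” idea can be sketched in a few lines. This is a minimal, hypothetical illustration —the model names, quality scores, and prices below are invented, not real benchmark or pricing data:

```python
# Hypothetical illustration: pick the cheapest model that clears a quality bar
# for a given use case. All names, scores, and prices here are made up.
models = [
    {"name": "model-a", "quality": 0.92, "price_per_1k_tokens": 0.030},
    {"name": "model-b", "quality": 0.88, "price_per_1k_tokens": 0.004},
    {"name": "model-c", "quality": 0.79, "price_per_1k_tokens": 0.001},
]

def best_model(models, min_quality):
    """Return the cheapest model whose quality meets the use-case threshold."""
    candidates = [m for m in models if m["quality"] >= min_quality]
    if not candidates:
        return None
    return min(candidates, key=lambda m: m["price_per_1k_tokens"])

# A use case that needs 0.85 quality picks model-b: good enough, far cheaper
# than the top-ranked model-a.
print(best_model(models, 0.85)["name"])
```

The point of the sketch: the leaderboard winner (model-a here) is rarely the right default —once a cheaper model clears your quality bar, the “best overall model” premium is wasted spend.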
Before you go… We have one quick question for you:
If this week's AI Geekly were a stock, would you:
About the Author: Brodie Woods
As CEO of usurper.ai and with over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist at leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.