Welcome back to the AI Geekly, by Brodie Woods. Your curated 5-minute-ish read on the latest developments in the rapidly evolving world of AI.
AI Quote of the Week:
“You have this concept of AIs or agents that kind of work on your behalf where you can send them off to do long-running tasks and come back to them later. I think that's going to be a very commonplace thing in the future.”
-Doug Seven, CodeWhisperer General Manager at AWS/Amazon on unsupervised AI Agents.
Questions? Reach out
TL;DR - Exec Summary
OpenAI, Me Oh My; MSFT Forgets Wallet at Home; AI Emperors with No Clothes; Nvidia To Lap Competition; iRobot 2024
OpenAI's revenue spike hints at the GenAI market's potential, though profitability remains veiled. AI players face a reality check as tightening monetary policy stretches balance sheets and investor expectations, raising the prospect of "AI Washing" as firms overstate AI ventures to fit in, overlooking the risks of a "fast follow" approach as capabilities rapidly evolve. Nvidia is probably an AI. How else could they be shortening their chip architecture refresh cycle from two years to one? 2024 could be the year of the robots, with the convergence of AI intelligence and recent advances in robotics.
-Read on for the full story
Making It: Top-line? Sure. Bottom-line? Another Story.
Viable AI business models are still few…
What it is: This week the world received a peek into the financial performance of the world's most popular AI company as OpenAI, makers of ChatGPT, disclosed internally that run-rate revenues have surpassed $1.3 Bn (a 46x increase over 2022's $28 mm).
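The run-rate figure above is an annualization, not a full-year result. A minimal sketch of the arithmetic, where the ~$108 mm monthly figure is an illustrative assumption (only the ~$1.3 Bn run-rate and 2022's $28 mm are from the reporting):

```python
def run_rate(monthly_revenue_mm: float) -> float:
    """Annualize one month of revenue (figures in $ millions)."""
    return monthly_revenue_mm * 12

# Hypothetical monthly revenue that would produce the reported run-rate:
current = run_rate(108.3)      # ~ $1,300 mm, i.e. ~$1.3 Bn annualized
multiple = current / 28        # vs. 2022 full-year revenue of $28 mm (~46x)
print(f"Run-rate: ${current:,.0f} mm, roughly {multiple:.0f}x 2022")
```

Note the caveat built into any run-rate number: it extrapolates the current month forward, so it captures momentum but says nothing about profitability.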
What it means: OpenAI is a bellwether for consumer and enterprise demand for GenAI —market perception swings heavily depending on subscriber growth, usage frequency, and stickiness. The missing datapoint here is earnings/bottom-line. While top-line growth is certainly encouraging, the market demands transparency in pursuit of a profitable business model.
Why it matters: With skyrocketing interest rates over the past year, companies have had to drastically cut costs and prioritize efficiency. For a quick finance refresher, recall that the Fed rate is effectively the "risk-free rate", setting the bar for necessary return on investment —higher rates demand greater efficiency and higher returns, since investors seek reward for the increased risk relative to "risk-free" investments such as treasuries.
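The hurdle-rate logic in that refresher can be sketched in a few lines. The rates below are illustrative assumptions, not current market figures:

```python
def required_return(risk_free: float, risk_premium: float) -> float:
    """Required return on a risky investment: risk-free rate plus a premium."""
    return risk_free + risk_premium

# Same assumed 6% risk premium under two different rate regimes:
low_rate_bar = required_return(0.005, 0.06)   # near-zero rates: ~6.5% hurdle
high_rate_bar = required_return(0.055, 0.06)  # post-hike rates: ~11.5% hurdle
```

When the risk-free leg jumps five points, every project's hurdle jumps with it, which is why higher rates translate directly into pressure for efficiency and profits.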
One more thing: As tighter monetary policy constrains capital availability, tech company multiples come under pressure and earnings become more critical. A good example: the recent story of 32-year-old billionaire Shunsaku Sagami, whose proprietary M&A matchmaking AI not only doubled revenue but delivered a 4x increase in earnings. Note that earnings growing faster than revenues implies material margin expansion —PRECISELY what markets are asking for.
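Why 4x earnings on 2x revenue means margin expansion can be shown with hypothetical base figures (only the 2x and 4x multiples come from the story above):

```python
def margin(revenue: float, earnings: float) -> float:
    """Earnings margin as a fraction of revenue."""
    return earnings / revenue

before = margin(100, 10)           # hypothetical base: 10% margin
after = margin(100 * 2, 10 * 4)    # revenue doubles, earnings quadruple: 20%
# after / before == 2: the margin itself doubles
```

In other words, each incremental revenue dollar is arriving at a higher profit rate than the dollars before it, which is exactly the efficiency signal a high-rate market rewards.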
Breaking It: I Was Told There Would Be Cake
Many AI co’s are coming to terms with harsh realities
What it means: This is a hard business. It's formidable even for incumbents: Microsoft struggles at it; Google's traditional dominance has been challenged. Even start-ups, known to be nimble, quick to adapt, and flush with cash, have been strained in this environment —largely because VC cash has dried up as a result of higher rates.
Why it matters: Management teams are confident long term. They understand the opportunity. They know that "if [they] build it, [customers] will come". It's a little bit like shorting a stock: certainly, you need to have the conviction (which I believe management teams do) —that's the easy part. The challenge in both cases is staying solvent long enough to be proven right.
Faking It: Enterprise AI Imposter Syndrome?
Pinkwashing, Greenwashing, and now, AI Washing?
What it is: There's a third group of companies to call out, separate from the ones that are Making it or Breaking it —the ones who are "Faking it". Whether on the vendor side or the enterprise side, there are dozens of companies telling tall tales about GenAI. These companies typically take current capabilities or products and slap "AI" on them —this risks damaging not only their own reputations, but that of the broader sector.
55% claimed to be currently evaluating or experimenting with GenAI.
79% of CEOs expect GenAI to increase efficiencies and 59% expect it to increase growth.
With such lofty expectations, it begs the question why only 55% (likely an inflated figure) are even evaluating.
Implicitly, 24% of Fortune 500 CEOs (the 79% expecting efficiency gains, less the 55% evaluating) are saying that they are not investing in efficiency, and 4% (59% less 55%) that they are not investing in growth.
1/3 of CEOs suggested that they are currently implementing GenAI to some degree.
Why it matters: This doesn't jibe with what we are seeing from AI vendors. Over the past year or so, SaaS companies have effectively been in a mini-recession. Enterprise customers have been ruthlessly reducing spend. Reducing licenses. Reducing infrastructure (cloud scalability goes both ways). It defies logic that these same companies can simultaneously be spending less on technology overall yet somehow spending more on it when it comes to GenAI, especially given the higher costs novices face at this early stage. This is all a little reminiscent of the liberties some companies took in describing their "Big Data" and "Cloud" strategies.
One more thing: With media and analysts voicing high expectations for GenAI, there may be an upward bias among survey respondents wanting to exaggerate current AI efforts to fit market narratives. Management may be hoping they can pull off the misdirection while continuing to move quite slowly, watching competitors and learning from their mistakes (wannabe fast-followers, if you will). The risk here is that when the AI flywheel starts spinning, it may not be possible for companies who dragged their feet to catch up. This may be the greatest risk of all in our view —The Risk of Being Left Behind.
Nvidia’s Jensen Has a Need. A Need. For Speed.
Architecture update cadence moves to one year from two
What it is: Nvidia CEO Jensen Huang has done it again and patently refuses to stay out of my newsfeed. Nvidia has announced ambitious plans to move from biennial updates to its chip architecture to annual revisions.
What it means: It means that AMD's Lisa Su, Intel's Pat Gelsinger, and everyone else who over the last several months has been burnishing their credentials as the "Nvidia Killer" in AI chips just received a rude awakening. Generally the chip to beat is Nvidia's H100 —effectively the most performant GPU for training demanding modern AI models. With this announcement Nvidia plans to return to the annual architecture releases it maintained in the past. Such a pace would not only keep it a generation ahead of its competitors, but further increase its lead with each year.
Why it matters: There are three critical ingredients to developing AI: Data, Compute, and Algorithms. Bottlenecks tend to emerge when the available supply of any of the three is outstripped by demand. Currently the bottleneck for rapid evolution of GenAI is Compute —there simply are not enough performant AI chips on earth to keep up with the insatiable hunger. Nvidia's announcement promises to directly address the issue at hand, which will ultimately both accelerate AI development and further entrench Nvidia's dominance in the niche.
2024 to be the year of the robot
What it is: With AI sucking all the air out of the room, what about that other disruptive technology that promises either to usher in a utopian era of abundance and relaxation for humans, or to turn into a real-life version of WALL-E? I'm referring, of course, to robots.
What it means: Advances in AI have not happened in a vacuum, nor have they been relegated purely to the digital world. Indeed, readers of the AI Geekly will recall that we have highlighted many cases where recent developments in AI have benefitted robotic applications (see last week, where we discussed DeepMind's new robot instruction model). Very simply, we can think of AI as the brain for these robots. Given the tremendous pace of AI development over the past year, robots have similarly benefitted from, and begun to quickly improve thanks to, these advances.
With over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist at leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.
CodeWhisperer: AI coding companion from Amazon Web Services, providing real-time single-line or full function code suggestions for developers.
GenAI (Generative AI): AI that creates new content through learning from existing data. This technology can be utilized across various domains such as images, text, music and video.
AI Washing: Act of over-stating or falsely claiming use of AI in products or services to fit in with market trends.
Run-rate Revenues: A financial projection of future revenues based on current financial performance, extrapolated over a period (in this case, one year).
Fed Rate (Federal Funds Rate): The interest rate at which banks lend reserves to other depository institutions overnight, impacting all other interest rates.
VC (Venture Capital): Financing that investors provide to startup companies and small businesses that are believed to have long-term growth potential.
SaaS (Software as a Service): A cloud computing service where instead of downloading software on your desktop PC or business network to run and update, you instead access an application via an internet browser.
GPU (Graphics Processing Unit): A specialized electronic circuit designed to accelerate the creation of images in a frame buffer intended for output to a display device, essential in AI for processing large blocks of data.
LLM (Large Language Model): A type of AI model, trained at scale on large text corpora, that learns to predict the next word in a sequence given the words that came before it; fundamental to various natural language processing tasks.
OpenAI: An artificial intelligence research lab known for developing cutting-edge AI models and technologies, including ChatGPT and the GPT-3, GPT-3.5, and GPT-4 models.
Microsoft (MSFT): Engages in the development of AI and GenAI technologies, with products like GitHub Copilot and its OpenAI investment.
GitHub: A subsidiary of Microsoft, it provides hosting for software development and a web-based platform for version control using git.
Alphabet (Google): The parent company of Google, known for its search engine, advertising services, consumer electronics, operating systems, and more.
DeepMind: A British AI company, acquired by Google in 2014, and combined with Google Brain earlier this year. Known for pioneering work in deep learning and AI for diverse applications.
Deepgram: An AI startup specializing in deep learning for automatic speech recognition (ASR).
Nvidia: A leading player in the AI and GenAI space, Nvidia provides advanced platforms and solutions for GenAI applications. Its innovations in accelerated computing and AI software enable the development and deployment of generative AI models across various sectors.
AMD (Advanced Micro Devices): A multinational semiconductor company known for its central processing units (CPUs), graphics processing units (GPUs), and other hardware components.
Intel: A multinational technology company, Intel manufactures hardware and owns the x86 architecture, which is used widely in AI and general computing.
Agility Robotics: A company that develops and manufactures bipedal robots. The intersection of GenAI and robotics at Agility signifies a step towards more advanced and capable robotic systems.
Tesla: An American electric vehicle, clean energy, and now robotics and AI company at the forefront of integrating AI technologies to enhance autonomous driving and energy products.
Doug Seven: Director of Software Development and General Manager for Amazon CodeWhisperer at AWS, Doug Seven leads the development of generative AI tools to improve productivity.
Shunsaku Sagami: A 32-year-old billionaire who utilizes a proprietary AI to target a niche in the Japanese business landscape: retiring business owners with no suitable successor.
Jensen Huang: CEO of Nvidia, known for driving the company's ambitious plans in the AI and GenAI domain, pushing the boundaries of what's possible in terms of chip architecture and AI applications.
Dr. Lisa Su: Chair & CEO of Advanced Micro Devices, pioneer and thought leader in the semiconductor space. Responsible for the turnaround and success of AMD over the past decade.
Pat Gelsinger: CEO of Intel, known for his long-term vision in advancing Intel's technology.