
AI Geekly - Well... That Escalated Quickly...

Board Opens Box on Schrödinger's CEO

Welcome back to the AI Geekly, by Brodie Woods.

Due to a very exciting weekend in the AI space, we’re mixing things up (again!) with a longer-form version. We’ve also updated the note that was set to go out at 5 AM ET this morning with new content from overnight and this morning, denoted by purple text.

AI Quote of the Week as of Friday AM:

“[The lab, the MSFT partnership, ChatGPT] those aren’t really our products. Those are channels into our one single product, which is intelligence, magic intelligence in the sky. I think that’s what we’re about.”

Sam Altman, Former CEO of OpenAI. No wait. Maybe not. Maybe current? Wait, yes, maybe? No. Nevermind.

AI Quote of the Week as of Sunday PM:

“…we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.”

Satya Nadella, CEO of Microsoft and perhaps biggest winner of this weekend’s AI drama

TL;DR - Scientific Progress Goes “Boink”

OpenAI Pauses Plus Sign-Ups (told you it was an iPhone Moment!), Delivers on Determinism, while Board Pauses Brains | Novel Tools Show Us What Real-time Generation Means | A Closer Look at AMD Chips | MSFT Might As Well Make Some Chips If You’re All Doing It

AI News

We Were Trying Really Hard Not to Write About OpenAI This Week…

Not The Half-Day Friday Many were Expecting

If you’re anything like me, on Friday afternoon you were looking forward to a relaxing weekend, the last before Thanksgiving. US Thanksgiving, for you Canadians, or, as I prefer to call it, Thanksgiving II: Electric Boogaloo. Anyway, those thoughts of dancing turkey drumsticks, cranberry sauce, and mashed potatoes were startlingly interrupted by news that Sam Altman, (sad-puppy) face of AI globally and CEO of OpenAI (makers of ChatGPT), had been summarily dismissed by the Board on somewhat specious grounds. Sam also lost his board seat. Co-founder Greg Brockman was removed as chairman and director but told he could remain on as President (he didn’t, quitting in solidarity, as did a number of other OpenAI researchers). Let us take you through the series of events and implications:

The Blog Post Heard Around the World

To say this sent shockwaves through the AI world and the broader tech sector is putting it mildly. People practically fell out of their chairs, a group most likely including Microsoft CEO Satya Nadella, who was completely blindsided, learning of the news a mere minute before it was made public. X (formerly Twitter) lit up with questions, demands, outpourings of support, sarcastic thanks from competitors, and more. The common question was “why?” The board’s announcement stated only that “[Sam] was not consistently candid in his communications with the board.” Many rumors have been published suggesting a rift between Altman and fellow board member and Chief Scientist Ilya Sutskever: Altman’s push to productize and release new capabilities quickly was reportedly at odds with Sutskever’s focus on safety (though more recent sources have denied this was the motive).

“Just kidding. It’s uh, Opposite Day. Actually we’re fired and you and Greg are the board now.”

Open musing by many about the prospect of Sam & co. starting their own rival to OpenAI probably didn’t sit too well with Nadella in light of MSFT’s $13 Bn+ investment in OpenAI. As the story developed, several reliable leaks suggested that, in a complete 180, OpenAI’s board had begun courting Altman’s return to his role as CEO, with a number of changes on the table, potentially including the resignation of the remaining board members. After blowing through a deadline on Saturday, the parties engaged in extended negotiations throughout Sunday, discussed in greater detail below. This odd turn of events is a great reminder of just where we are in the AI lifecycle, despite the speed: we’re at the Bill Gates + Steve Ballmer, Steve Jobs + Steve Wozniak stage, the ‘wild’ sector/mythos-building period.

Board’s lack of experience in public markets is apparent. They were running OpenAI like it’s any other private company. It isn’t.

While there has been a great deal of focus on ‘why’ this happened, it’s equally important to ask ‘how’. Whatever the motives for what appears on the surface to be some type of coup (a poorly executed, tone-deaf, cut-off-your-nose-to-spite-your-face, how-could-anyone-think-that-was-reasonable coup), how could a responsible board not only allow this to happen, but actually instigate it? A quick review of the backgrounds of the board members responsible for the move reveals a handful of academics, policy think-tank members, and the CEO of a private internet company of questionable value (Quora). What we don’t see is a single person with public company board experience. This is a junior board. Their inability to 1) read the room and foresee the blowback such a draconian move would create, and 2) maturely govern the company betrays this naivete. No seasoned public board would fire one CEO and two founder board members, leave a $13 Bn investor and tens of thousands of developers in the lurch, and erode the trust of hundreds of world leaders, researchers, clients, and developers, all over what boils down to a ‘communication breakdown’. More than likely, Microsoft will use its leverage to ensure that there are some grown-ups in the room next time.

The King is Dead, Long Live The King Clippy

After a tense three days of rumors, negotiations, and the quantum superposition of a CEO/Former-CEO/Returning-CEO, the boardroom drama is at an end: Altman and Brockman have joined Microsoft to run a dedicated AI research lab, with many core OpenAI researchers expected to follow. The contrast between ‘Furious’ Microsoft CEO Satya Nadella on Friday and ‘Thrilled’ Satya on Sunday evening must have been a sight to behold. In fact, for MSFT, this may have been the best possible outcome (even vs. the status quo). While it hasn’t acquired OpenAI’s IP, it picked up something better: the public face of AI and the goodwill + mindshare that brings, along with some of the top leadership and researchers in the space, and greater clarity and control over the productization of AI research, further entrenching MSFT’s perceived dominance amongst the enterprise-scale tech players. This was a masterclass in negotiation, board maneuvering, and strategy (by everyone but the OpenAI board). MSFT got 80% of what it wanted from OpenAI for a song, and likely has the opportunity over the coming months to pull what remains of its investment in OpenAI, given the substantial changes. Despite the positive outcome for MSFT, trust in OpenAI’s board and management is likely nil. Indeed, if one of the drivers of the ouster was opposing views on safety, then OpenAI’s newly minted interim CEO, Emmett Shear (fmr. CEO of Twitch), with his public concerns about AI safety, is a good fit. I maintain that management with public markets experience is critical, given what we just witnessed over the weekend. Purposefully or not, the actions of the OpenAI board amount to what in our view constitutes the greatest self-own in the AI space to date.

The Case for Model and Vendor Diversification, or: How I Learned to Stop Worrying and Love Open Source

For the Open Source AI community, these events were a reminder of exactly why open, distributed, democratized, and accessible AI ecosystems are vital: they eliminate key-person risk, and even key-company risk, through a broader community of collaboration. Not only are the models highly performant and the communities friendly and engaging, but from a purely business perspective they offer fallbacks and failsafes when closed vendors experience instability (be it at the infrastructure or corporate level), a hedge against the whims of decisionmakers.

An Intermission of Lighter Fare
Handful of fun AI projects to spark imaginations

What it is: Turning to lighter fare, we have a round-up of some fun AI projects released this week that we think will inspire.

Why it matters: Each of these projects demonstrates how quickly AI technology is developing in its ability to interpret and understand what humans want from it. Compared to the state of prompting just 12 months ago, our ability to communicate with these models has improved at an incredible pace —one we expect to continue to accelerate in coming weeks/months.

AI’s iPhone Moment Feels Pretty Bang-on
We’d be remiss if we left out the ChatGPT Plus Pause and Determinism Switch

What it is: Two more quick bits on OA (we’ll abbreviate it the rest of the way, for everyone’s sanity). This week, following on the incredible popularity of the new features released to ChatGPT —the first viable AI agents for the masses— OA has had to pause new sign-ups. They also introduced a new “determinism switch”, much to the joy of enterprise customers.

What it means: The pause on new sign-ups is effectively the AI version of a line-up around the block for the new iPhone. It’s a win for the product roadmap and continuing evolution of the technology, which management describes as “…channels into our one single product, which is intelligence, magic intelligence in the sky.” As we stated last week, we think this will be how many initially begin to interact with more powerful, performant AI.

Why it matters: OA’s determinism switch is something that enterprise customers (particularly their model risk teams) have been clamoring for. Ideally, a model would reproducibly generate the same outputs from the same inputs, ceteris paribus (all else being equal). Historically, OA’s models have not, variously explained by inherent statistical randomness or by its MoE approach. The lack of consistency is problematic in enterprise settings, especially highly regulated ones like finance and medicine, where sequential generations with the same inputs can produce diametrically opposed answers, a nightmare for model governance. This development will be welcomed by the many corporate clients who have been hamstrung by it.
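To make the determinism idea concrete, here’s a minimal toy sketch (our own illustration, not OA’s actual implementation): pinning the random seed used for token sampling makes identical inputs yield identical outputs, while unseeded sampling can drift between runs.

```python
import random

# Toy vocabulary and next-token probabilities (illustrative values only)
VOCAB = ["approve", "review", "reject"]
WEIGHTS = [0.5, 0.3, 0.2]

def generate(n_tokens, seed=None):
    """Sample n_tokens from the toy distribution.

    Passing a seed pins the RNG, so identical inputs yield
    identical outputs -- the 'determinism switch' behavior
    model-risk teams want.
    """
    rng = random.Random(seed)
    return [rng.choices(VOCAB, weights=WEIGHTS)[0] for _ in range(n_tokens)]

# Same seed, same inputs -> byte-identical generations, run after run
assert generate(8, seed=42) == generate(8, seed=42)
```

In practice the analogous controls in a hosted API are a seed-style parameter plus a temperature setting; the point for regulated customers is that pinned generations become auditable and repeatable.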

Tech News

AMD ABCs
Deep dive into AMD’s newest AI chips debuting next month

What it is: AMD is poised to launch its Instinct MI300X (GPU) and MI300A (GPU+CPU) AI accelerators, marking a significant entry into the datacenter-scale AI hardware market. These accelerators are built on AMD’s CDNA 3 architecture, employing a mix of 5nm and 6nm chiplets. The GPU-only MI300X squares off against Nvidia’s Hopper H100/H200, while the MI300A mirrors Nvidia’s Grace Hopper GH200 superchip (GPU+CPU).

What it means: As we’ve covered before, this launch signifies AMD’s strategic move into high-performance AI computing, challenging Nvidia’s dominance. AMD’s strategy is interesting, and it may win frustrated customers from Nvidia (though AMD’s ROCm software, while feature-complete, is far less widely adopted than Nvidia’s CUDA). The MI300X comes with 192 GB of HBM3 memory, the most RAM we’ve seen to date on an AI accelerator card, surpassing the H200’s 141 GB of HBM3e and Intel’s Gaudi 3’s 144 GB of HBM3. Memory pools are a critical bottleneck in AI training and inference: AMD may not be able to beat Nvidia on raw compute (we’ve seen this play out on the consumer graphics side for years), but perhaps it can outshine Nvidia on memory.
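A quick back-of-envelope shows why that 192 GB matters. The sketch below counts only fp16 weights (2 bytes per parameter) and ignores KV cache and activations, so real-world requirements are higher; the card capacities are the figures cited above.

```python
def weight_memory_gb(params_billion, bytes_per_param=2.0):
    """Approximate weight footprint in GB (fp16 = 2 bytes per parameter)."""
    return params_billion * bytes_per_param

# Per-card memory pools cited above, in GB
CARD_MEMORY_GB = {"MI300X": 192, "H200": 141, "H100": 80}

need = weight_memory_gb(70)  # a 70B-parameter model -> ~140 GB of weights
fits_on_one_card = {card: cap >= need for card, cap in CARD_MEMORY_GB.items()}
# MI300X: True, H200: True (barely), H100: False -- a bigger pool means
# fewer cards (and less cross-GPU traffic) per model served
```

This is exactly the dynamic AMD is betting on: even without a compute lead, fitting a whole model on one card simplifies deployment and cuts interconnect overhead.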

Why it matters: Chip nerds may recall that AMD pioneered the use of HBM memory on GPUs, first shipping it on its Fiji-based cards in 2015. This may feel a bit like home-court advantage to Dr. Lisa Su (CEO) and her team. The introduction of AMD’s new hardware in December gives customers more options when building out their infrastructure. Even if AMD’s hardware falls short, the increased supply should help reduce costs to some extent, at the very least serving as inference hardware and freeing up the preferred Nvidia GPUs for training.

Boarding the Bandwagon: Microsoft Might As Well Make Their Own Chips Too
Announces CPU and GPUs for Azure Datacenters

What it is: Microsoft is finally joining the in-house chip-making party with the Azure Maia AI Accelerator and Azure Cobalt CPU. This move, while significant for Microsoft, follows a well-trodden path by other tech giants, reflecting a late yet necessary step in the AI arms race​​.

What it means: Microsoft’s entry into chip development represents a strategic, albeit delayed, response to the industry's shift towards customized silicon solutions. It's a catch-up play to align with competitors who've already embraced this approach​.

Why it matters: While not groundbreaking, Microsoft’s decision to develop its own chips is crucial for its long-term competitiveness in AI and cloud services. With its OA alliance, it has the model side well in hand, but the hardware and compute side could use improvement. MSFT’s OA investment is paid in large part via compute. Ergo, it makes sense that, rather than rely on a competitor’s or vendor’s product (which just happens to be the most in-demand object on the planet at the moment), vertical integration can at a minimum help to de-risk, and, if done scalably, reduce cost.


About the Author: Brodie Woods

With over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining of the tech stack for a broker dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.

Glossary

Terms:

  • ChatGPT Plus: A subscription service that provides access to OpenAI's GPT-4 model. GPT-4 is a large language model that is capable of generating human-quality text, translating languages, writing different kinds of creative content, and answering your questions in an informative way.

  • Determinism: A property of a model that guarantees that it will always produce the same output for the same input. This is important for applications where reliability is critical, such as finance and medicine.

  • MoE: Mixture of Experts, an approach to model training that uses multiple experts to make predictions. This can improve the accuracy of the model, but it can also make it more complex to train.

  • iPhone Moment: A reference to the long lines of people that formed outside Apple stores on the day that the first iPhone was released. This is being used to describe the sudden popularity of ChatGPT Plus.

  • CDNA 3: AMD's latest architecture for AI accelerators. CDNA 3 is a major leap forward for AMD and is designed to compete with NVIDIA's Hopper architecture.

  • HBM3: A type of high-bandwidth memory that is used in AI accelerators. HBM3 is the latest generation of HBM memory and is designed to provide the high bandwidth that is needed for modern AI applications.

  • ROCm: AMD's software platform for AI accelerators. ROCm is an open-source platform that is designed to be compatible with a wide range of hardware.

  • CUDA: NVIDIA's software platform for AI accelerators. CUDA is a proprietary platform that is designed to be exclusive to NVIDIA hardware.

  • nm Process: A measure of the size of the transistors on a computer chip. The smaller the nm process, the more transistors can fit on the chip, and the more powerful the chip. AMD's new Instinct MI300X and MI300A AI accelerators are built on a mix of 5nm and 6nm process nodes, making them some of the most powerful and efficient AI chips available today.

Entities:

  • OpenAI Board: The board of directors of OpenAI. The board is responsible for overseeing the company's operations and making decisions about its future direction.

  • Microsoft: A technology company that has invested $13 billion in OpenAI. Microsoft is a major partner of OpenAI and is helping to develop and commercialize its technology.

  • AMD: A company that manufactures computer processors and graphics cards. AMD is a major rival of NVIDIA in the AI market.

  • NVIDIA: A company that manufactures computer processors and graphics cards. NVIDIA is the leader in the AI market, but AMD is making significant inroads.

Key People:

  • Sam Altman: Former CEO of OpenAI, a research lab that develops and promotes friendly artificial general intelligence. He is a well-known figure in the AI community and is considered to be one of the leading experts in the field.

  • Greg Brockman: Co-founder of OpenAI. Brockman is the former CTO of Stripe and is a leading expert in artificial intelligence.

  • Ilya Sutskever: Chief Scientist of OpenAI. Sutskever is a former researcher at Google Brain and is a leading expert in deep learning.

  • Steve Jobs: Co-founder and CEO of Apple. Jobs was a visionary leader who transformed the electronics industry.

  • Steve Wozniak: Co-founder of Apple. Wozniak is the engineering genius behind the Apple II, the first successful personal computer.

  • Bill Gates: Co-founder of Microsoft. Gates is one of the richest people in the world and is a major philanthropist.

  • Dr. Lisa Su: President, CEO, and chair of AMD. She is widely recognized for her leadership in transforming AMD into a leading provider of processors and graphics cards.