AI Geekly: Victory Lap

Quite the week for Microsoft and OpenAI...

Welcome back to the AI Geekly, by Brodie Woods, brought to you by usurper.ai. This week we bring you another round of fast-paced AI developments, packaged neatly in a 5-minute(ish) read.

TL;DR SB 1047 DOA; Another Copilot Revamp; OpenAI Stacks Dollars and Features; Liquid Gold

This week we can’t help but feel like Microsoft and OpenAI are doing pretty well for themselves. First off, both companies stand to benefit from Governor Newsom’s decision to veto California Senate Bill 1047, which sought to introduce substantial regulations and restrictions on AI companies building models over a certain size. Microsoft also announced a revamp of its non-Microsoft 365 Copilot properties (i.e. not the Copilot built into the Office suite, but the other Copilot built into Windows 11, Microsoft Edge, and Bing), which has received positive attention despite the confusion created by everything MSFT touches being called “Copilot” these days… OpenAI had a pretty impressive week, closing a $6.6 Bn equity raise valuing the company at $157 Bn (roughly 14x 2025E revenue of $11.6 Bn, which actually seems like a reasonable multiple in context) and releasing Canvas, a new pop-up asset in its ChatGPT interface that displays and interacts with text and code, much like Anthropic’s Artifacts. So, a pretty good week for OpenAI and Microsoft (especially by virtue of MSFT’s investment in OAI). Lastly, we’ll dedicate a little bit of real estate to our Model Corner and take a peek at two new models, from Nvidia and an MIT spin-out, that make the state of the art in open-source AI a little more state-of-the-art-y. Read on below!

*Note: the AI Geekly will be taking a short pause after this week, returning on October 21st!

VETO!
Governor Newsom vetoes controversial SB 1047

Pictured: an actual photo of Governor Newsom vetoing the bill…

What it is: California Governor Gavin Newsom vetoed Senate Bill 1047, which would have imposed the nation's most stringent regulations on the development and deployment of large AI models. The bill, supported by a diverse coalition including Elon Musk, AI safety advocates, and Hollywood figures, aimed to establish legal liability for AI-related harms and mandate safety testing and "kill switch" mechanisms for AI systems over a certain size (as we covered in a previous edition of the Geekly). However, the bill faced strong opposition from major tech companies, including Google and OpenAI, who argued it would stifle innovation and harm the state's burgeoning AI industry. Advocates for open-source AI, including Dr. Andrew Ng and open-source model repository Hugging Face, warned that the bill threatened to severely restrict the community’s ability to release and use open-source models, penalizing the models themselves rather than the act of using them for nefarious purposes.

What it means: While Newsom's veto is certainly a victory for Big Tech, allowing companies to continue developing AI with minimal regulatory oversight (in CA), it also benefits AI startups and smaller open-source players who don’t necessarily have the resources to absorb the regulatory burden. Despite the veto, one thing SB 1047 did accomplish was sparking public debate about the potential risks of AI and the need for thoughtful regulation to mitigate those risks.

But Why?: Explaining the rationale for the veto, the Governor suggested that smaller AI models could pose equal or greater dangers than the larger ones targeted by the bill. It's interesting that he chose to throw the bill out altogether rather than start with regulating larger models. We don’t really buy this excuse… the more likely concern was the threat to the state’s economy. Given the rationale provided, we wouldn’t be surprised to see an SB 1047 v2 that targets models regardless of scale, which means the veto only served to kick the can down the road.

Why it matters: One of the greatest challenges of this generation will be finding a balance between fostering innovation and safeguarding the public. While the bill's defeat may provide a temporary reprieve for tech companies, the pressure for AI regulation is likely to intensify as the technology continues to advance and its societal impact grows. The lack of clear federal guidance on AI leaves the door open for a patchwork of state-level regulations, an unwieldy proposition that will create uncertainty and fragmentation in the AI landscape.

Updating the Other Copilot
Microsoft refreshes its non-Microsoft 365 Copilot

What it is: Microsoft is giving its AI-powered Copilot assistant a major redesign, infusing it with voice and vision capabilities and a more personalized user experience. Led by new Microsoft AI CEO Mustafa Suleyman (formerly of Google DeepMind and Inflection AI), the revamped Copilot draws inspiration from Inflection's Pi AI assistant, emphasizing a warmer, more conversational approach. Key features include a customizable homepage based on conversation history, natural-sounding voice interaction with multiple voice options, and Copilot Vision – allowing the AI to "see" and respond to content on webpages (similar to the Recall feature added to Win 11).

What it means: This redesign reflects Microsoft's ambition to make Copilot a more engaging and integrated AI companion for users across all its platforms. It mirrors Microsoft's recent efforts to upgrade the AI capabilities within its Microsoft 365 suite (Office), where Copilot has been integrated into apps like Word, Excel, and PowerPoint. However, this revamped Copilot exists outside the traditional Microsoft 365 ecosystem, transforming Microsoft's consumer-facing offerings like the Bing search engine, the Edge browser, and the dedicated Copilot app in Windows 11.

Why it matters: Microsoft is betting that new computing modalities (compare with Apple’s and Meta’s embrace of spatial computing) will become a central part of how people interact with technology as AI develops. The new Copilot, with its emphasis on personalization, intuitive voice interaction, and the ability to understand visual information, aims to deliver a more compelling and user-friendly experience than existing AI assistants. So far, OpenAI’s ChatGPT has been the dominant player in defining AI interaction, which remains largely limited to a traditional chatbot interface. We’re interested to see if MSFT’s fresh approach can unlock new, more organic ways of interfacing with AI tools.

Open Aye-ooooo!!
OpenAI closes record-breaking $6.6 Bn raise and releases new functionality

What it is: OpenAI has secured a massive $6.6 billion funding round, valuing the company at a staggering $157 billion — one of the largest private capital raises in Silicon Valley history. At roughly 14x forecast 2025 revenue of $11.6 Bn, the valuation actually isn’t even that outlandish (Nvidia trades at >40x!). This significant influx of capital comes as OpenAI continues to push the envelope in AI development, recently introducing its o1-preview and o1-mini reasoning models (discussed in a prior Geekly) and this week releasing "Canvas," a new collaborative interface for ChatGPT designed to enhance writing and coding projects (reminiscent of Anthropic’s “Artifacts”).
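For the back-of-envelope inclined, the multiple works out as follows (our own arithmetic, using the figures above): $157 Bn valuation ÷ $11.6 Bn of 2025E revenue ≈ 13.5x, which rounds to the roughly 14x cited above.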

What it means: This substantial investment underscores the capital intensity of leading-edge AI development, in the LLM space in particular, where model developers continue to rely on scaling laws to produce larger, more intelligent models, necessitating increasing amounts of compute and data. This dry powder provides the company with ample resources to fuel its ambitious research agenda, scale its operations, and maintain its competitive edge. The company's high cash burn rate, driven by the aforementioned substantial costs of AI development and the rapid growth of its ChatGPT platform, suggests that OpenAI will likely require additional funding rounds in the future to achieve its goal of making advanced AI a widely accessible resource.

Why it matters: With OpenAI's dominance in the generative AI market, the company's ability to continue translating its significant financial backing into commercially viable products and increasingly intelligent models will be closely watched by investors. We still need to see many more needle-moving cases of GenAI investment resulting in significant returns to justify the tremendous investment in the space over the past 2+ years. While the new Canvas tool is helpful in coding and writing workflows, and the o1 family of reasoning models is a step in the right direction, neither is the game-changing technological leap that we’re all holding our breath for.

Pulling up the ladder: One more thing worth mentioning: OpenAI’s funding round asks investors not to back or invest in OAI competitors, specifically Anthropic (founded by ex-OpenAI-ers), Elon Musk’s xAI (Musk was an initial backer of OpenAI), Ilya Sutskever’s Safe Superintelligence (Sutskever is an OpenAI co-founder and its former chief scientist), and AI search engine Perplexity. While there’s no contractual obligation or “teeth” in the agreement to enforce this, it could still limit competitors’ ability to raise money in a capital-burning business that leaves even companies with billion-dollar checks strapped for cash. We’d be curious what the FTC thinks of this.

Model Corner: Nvidia and Liquid AI Up the Ante
Impressive new models advance state of the art

What it is: The already exciting AI model landscape just got a little more interesting. Nvidia, known primarily for its dominant position in AI hardware, has released NVLM-1.0-D-72B, the latest in its family of open-source language models, rivaling the performance of closed-source giants like OpenAI's GPT-4. What’s important here is that the license terms for these models are more open than those of Meta’s Llama model family and include the associated training data (something Meta has been reluctant to provide). This generous move towards open access comes alongside another compelling development: Liquid AI, an MIT spin-out, has introduced its first series of "Liquid Foundation Models" (LFMs). These models challenge conventional AI scaling wisdom by prioritizing efficiency and adaptability over sheer size, a development that could substantially reduce the amount of compute and energy required to train and run language models.

What it means: Nvidia's NVLM-1.0-D-72B model demonstrates exceptional capability in handling both visual and textual information, even surpassing its text-only performance after multimodal training. The company's decision to open-source this powerful model and its training data is a helpful and appreciated contribution to open science. Liquid AI's LFMs, inspired by the human brain, offer a unique approach to AI development. While Liquid AI is open science rather than open source (it publishes its research but doesn’t provide open-weight models or access to training data), that’s still a lot better than a black box.

Smaller is better: Liquid’s LFMs’ ability to learn continuously and integrate new information, while maintaining efficiency and a smaller memory footprint, could open new use cases for AI applications in resource-constrained environments, such as edge computing and mobile devices. For example, with Apple Intelligence, simple tasks are handled by on-device AI using smaller language models, while more complex ones must be sent to cloud servers. If more processing can be done on device, models like Liquid AI’s LFMs could reduce the need to invest in expensive cloud infrastructure. A hypothetical sketch of that routing pattern is shown below.
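To make the on-device-vs-cloud tradeoff concrete, here is a minimal Python sketch of a complexity-based router of the kind described above. The threshold, helper functions, and model placeholders are our own illustrative assumptions; this is not Apple's or Liquid AI's actual implementation.

```python
# Hypothetical sketch: send simple requests to a small on-device model and
# fall back to a larger cloud-hosted model for anything heavier.
# The token budget and both "models" below are illustrative stand-ins only.

ON_DEVICE_TOKEN_BUDGET = 512  # toy heuristic: short prompts stay on device


def estimate_tokens(prompt: str) -> int:
    """Rough token estimate (~4 characters per token)."""
    return max(1, len(prompt) // 4)


def run_on_device(prompt: str) -> str:
    # Placeholder for a small, efficient local model (e.g. an LFM-class model).
    return f"[on-device model] {prompt[:40]}..."


def run_in_cloud(prompt: str) -> str:
    # Placeholder for a larger, server-hosted model.
    return f"[cloud model] {prompt[:40]}..."


def route(prompt: str) -> str:
    """Handle simple requests locally; escalate complex ones to the cloud."""
    if estimate_tokens(prompt) <= ON_DEVICE_TOKEN_BUDGET:
        return run_on_device(prompt)
    return run_in_cloud(prompt)


if __name__ == "__main__":
    print(route("Summarize this text message."))       # stays on device
    print(route("Draft a detailed report... " * 200))  # escalates to cloud
```

The point of the sketch is the economics: the more prompts that clear the on-device budget, the fewer calls hit (and pay for) cloud infrastructure, which is exactly where more capable small models change the calculus.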

Why it matters: These two releases are helpful in pushing the state of the art forward. We would be pleased if Nvidia's open-source strategy encouraged other tech giants to reconsider their closed-source models, making their AI developments available to the broader scientific community. Liquid AI's novel approach could lead to a new generation of AI systems that are more adaptable, efficient, and capable of tackling complex tasks in resource-constrained settings. Developments such as these benefit both societal and enterprise goals by democratizing AI access for the former, and introducing more flexible, lower-cost, on-prem solutions for the latter.


About the Author: Brodie Woods

As CEO of usurper.ai and with over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist at leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.