AI Geekly: Sign Here
New regulations, new deals, and new models
Welcome back to the AI Geekly, by Brodie Woods, brought to you by usurper.ai. This week we bring you another round of fast-paced AI developments, packaged neatly in a 5 minute(ish) read.
TL;DR: AI Regs; Pay to Display; Model Extravaganza
This week we have an eclectic mix of news for you: An update on an important piece of legislation that will impact AI development for years to come, an examination of a new content deal between a prominent AI company and a large publisher, and a summary of a host of new model announcements in both the image and text generation domains. Read on below!
The Californication of AI
California’s Senate Bill 1047 has AI companies in a tizzy
What it is: We’ve been remiss in not covering California Senate Bill 1047, which passed through the Assembly Appropriations Committee and is now headed for a floor vote in the CA State Assembly next week. The bill, introduced by State Senator Scott Wiener, aims to mandate safety testing for developers of large AI models. Many in the AI space are concerned that the bill is “well-intentioned but ill-informed” (a direct quote from Speaker Emerita Nancy Pelosi).
What it means: The bill has sparked significant debate among tech companies, lawmakers, and AI experts:
Anthropic, initially opposed to the bill, worked with Senator Wiener to amend SB 1047. The revised version incorporates some of Anthropic's suggestions.
OpenAI’s Chief Strategy Officer, Jason Kwon, penned an open letter to Wiener expressing concern that the bill would threaten innovation, competition, and national security. OAI joins a chorus of opposition that includes VCs Andreessen Horowitz and Y Combinator, as well as Meta, open-source advocate Hugging Face, and Google.
Senator Wiener, for his part, defends SB 1047 as a "straightforward, common-sense, light-touch bill" that builds on President Biden's executive order. He argues that the bill only requires large AI developers to perform safety tests they have already committed to doing.
Why it matters: The debate surrounding SB 1047 highlights the difficulties of regulating AI, a topic we have covered at length. While it is vital to ensure that AI tools are used for the benefit of society, it is challenging to regulate in a way that doesn’t stifle innovation or hand an advantage to competing nations not bound by the same rules (see the U.S.’s efforts to curb Chinese AI). We find ourselves agreeing with Dr. Andrew Ng’s suggested approach: rather than hamstring the development of AI technology because it could possibly be used to cause harm, we should focus on restricting specific applications. We don’t ban Photoshop because it could be used to reproduce copyrighted works or illegal content; instead, we prosecute when the tool is put to nefarious use. AI should be treated the same way.
Condé Nast Cash
Publisher inks deal with OpenAI to license content
What it is: Condé Nast, the global media company behind publications like Vogue, The New Yorker, GQ, Architectural Digest, Vanity Fair, Bon Appétit and Wired, has entered into a licensing agreement with OpenAI.
What it means: Interestingly, this multi-year partnership sounds a little different from some of the others we have covered. While previous agreements emphasized OAI’s ability to train on the data, the language of this agreement seems to focus on OpenAI being able to surface or display content from Condé Nast. That differs from how the recent deals with the AP, Axel Springer, The Atlantic, Dotdash Meredith, The Financial Times, News Corp, Vox, etc. were worded (although the TIME deal hints at this). Exact financial terms remain undisclosed (a frustrating trend with these deals of late).
Why it matters: This story is another development in the ongoing dance between AI companies and content producers/copyright holders. We’re pleased to see more deals being done where AI companies play nice and come in through the front door. That said, these deals shouldn’t be restricted to companies with multimillion-dollar legal teams and big shiny New York offices. The content creators most harmed by AI companies are the smaller players, and a compensation scheme will need to be developed to remunerate the artists and writers whose creative works have been harvested without regard. It’s a little frustrating that SB 1047, which we just covered above, does nothing to address this egregious theft and disparity in power.
Models Galore
New models from Microsoft, Nvidia, Google, Ideogram, AI21, and Luma AI
Busy week for LLMs: Microsoft released its newest Phi 3.5 models in three flavors (mini, MoE, and vision); Nvidia released its new Mistral-NeMo-Minitron 8B and 4B Instruct on-device small language models; and AI21 Labs stole the show with its Jamba 1.5 series models, 1.5 Mini and 1.5 Large. Both models pair Mamba (state-space model) layers with attention layers in AI21’s hybrid SSM-Transformer architecture, which lets them compete head-to-head with much larger traditional transformer LLMs while offering better speed and efficiency and, most importantly, better effective use of the context window (unlike other models, whose performance degrades as the context window approaches its upper bound, Jamba 1.5 holds up).
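Most of these releases ship with open weights, so the curious can kick the tires locally. Below is a minimal sketch of loading one of the smaller new models with the Hugging Face transformers library; the repo id and generation settings are our assumptions for illustration, not official guidance from the model card.

```python
# Minimal sketch: trying one of this week's small open-weight models with
# Hugging Face transformers. The repo id below is an assumption; check the
# official model card for the exact name and recommended settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-mini-instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit on a single consumer GPU
    device_map="auto",           # spread layers across available GPU(s)/CPU
)                                # (older transformers versions may also need trust_remote_code=True)

# Chat-style prompt, formatted with the model's own chat template
messages = [
    {"role": "user", "content": "In one sentence, why do small language models matter for on-device AI?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=80, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Swapping in a different repo id (for example, one of the Jamba 1.5 checkpoints) follows the same pattern, though the larger models need correspondingly more GPU memory.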
Busy week for Image and Video Gen AI too!: Hot on the heels of the new Flux image generation models (covered in a previous Geekly), Ideogram released Ideogram 2.0, a significant upgrade over its prior model that better handles color palettes, text, and remixing (we re-did the AI Geekly logo with it for this week’s edition!). Google released Imagen 3, its latest image gen model; it doesn’t excel at anything in particular, but that is in keeping with Google’s AI efforts to date (it always seems to be playing catch-up). Luma AI released Dream Machine 1.5, with higher output quality and better prompt adherence.
Takeaway thoughts: We see two trends worth noting in the world of text-based large language models (LLMs). First, we are beginning to see greater diversification in architecture, as demonstrated by Jamba 1.5, which departs from the standard transformer-based design and offers promising room for innovation. Second, it’s notable that whenever a new LLM king is crowned (we would put Claude 3.5 Sonnet at the top in our experience), a wave of models follows in quick succession offering near feature parity at lower cost. Where it previously took months for others to catch up, it now seems to be a matter of weeks. The AI flywheel is spinning faster.
Before you go… We have one quick question for you:
If this week's AI Geekly were a stock, would you:
About the Author: Brodie Woods
As CEO of usurper.ai and with over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.