
AI Geekly: Special Labor Day Edition

Eight AI updates to end the summer

Welcome back to the AI Geekly, by Brodie Woods, brought to you by usurper.ai. This week we bring you another round of fast-paced AI developments, packaged neatly in a 5-minute(ish), longer(ish) read.

This week we have eight, count ‘em, EIGHT stories for you. We figured that with the extra day off and how busy this week has been in AI, we would err on the side of “more is more”. If you have the time, then by all means, read ‘em all. But if you’re pressed, feel free to pick the two or three stories that resonate most (the links in the summary below will take you directly to each story).

In this AI Geekly Special Labor Day Edition we’ll take you through:

  1. A very cool new AI companion that promises to make your Zoom calls easier by creating an AI simulation of you sitting in front of the webcam, talking and looking engaged (while in reality you could be in the kitchen making a smoothie or sitting on a beach sipping a margarita).

  2. Next, we have two Nvidia updates: the company’s Q2 results and some blueprints for AI Agents that should make the technology more accessible and lower barriers to entry.

  3. We’ve also covered the latest news out of OpenAI on Strawberry (formerly Q*), which is being used to help train its next flagship model, Orion. What sets Strawberry apart is its ability to succeed at logical tasks where current AI models falter.

  4. We’ll also look at the latest update from Klarna, a Swedish buy-now-pay-later company that has been one of the first to profitably adopt generative AI in its business. We have… opinions…

  5. From there, we’ll look at some new technology that combines biology with transistors to create incredibly powerful (and, OK, honestly pretty creepy) bio-organic processors.

  6. We’ll also talk through the latest news on California’s Senate Bill 1047 (SB 1047), which passed last week and is headed for Gov. Newsom’s desk, how it may impact popular open-source platforms, and another California bill that may have even deeper implications.

  7. We’ll also look at a new model from Magic AI with a super long context window that can handle up to 10 million lines of code, or 750 books (or one AI Geekly Special Labor Day Edition Newsletter 😉).

  8. Finally, we’ll look at some promising technology out of Google DeepMind, which uses AI to predict the next frame of the popular old-school videogame DOOM rather than rendering it, and we’ll explain what that means and what the implications are.

Saying “Present!” Without Being Present
Lip-synched AI Avatar takes the stress off Zoom calls

[Image caption: Not a real rendering from the site.]

What it is: Pickle is a new AI tool that allows users to participate in video calls using an AI-generated avatar that looks and sounds like them, eliminating the need to be on camera. The avatar mimics the user's facial expressions and movements in real-time, synchronized to their voice, while allowing for customization and control over their virtual appearance.

What it means: Pickle takes Nvidia’s Broadcast feature, which uses AI to simulate perfect eye contact with the webcam, a step further. The technology addresses the growing phenomenon of "Zoom fatigue," where constant on-camera presence can be draining and distracting. With its realistic and customizable avatar, Pickle’s tool reduces the pressure of being constantly "camera-ready" and allows users to focus more on the content of the meeting. We can see how this tool could be a godsend for remote and hybrid work, allowing for more efficient and engaging virtual interactions. Personally, I can never get the lighting in my office juuuust right.

Why it matters: The increased reliance on video conferencing for work and personal interactions post-Covid has highlighted the limitations of traditional video call formats. Pickle’s approach to alleviating these restrictions by offering a more flexible and less self-conscious way to participate in video calls could prove valuable. If the technology delivers on its promises of realism and seamless integration, it could be adopted by a wide range of users, particularly those who find constant on-camera presence tiring or distracting, including some neurodivergent individuals (e.g., ADHD, autism) who struggle with these types of interactions. The potential for increased engagement and productivity makes Pickle a development worth watching.

Nvidia is Still Walking on Sunshine, but Storm Clouds Have Appeared
Record Revenue Tempered by Delays and Bubble Concerns

What it is: Nvidia announced record-breaking Q2 revenue of $30 Bn, up 122% y/y, fueled by (you guessed it!) massive demand for its AI chips. Net income also rose 168%, reaching $16.6 Bn. However, the company confirmed a three-month delay in its upcoming Blackwell chips due to design flaws, pushing back mass production to Q4. Nvidia also launched NIM Agent Blueprints, a new toolset designed to simplify AI agent development for businesses.

What it means: While Nvidia remains perhaps the biggest beneficiary of the emerging AI era, the Blackwell delay raises concerns about future growth and profitability for jittery investors worried the air might be coming out of the balloon (and benefits short sellers eager for a pullback in the S&P’s best-performing stock of 2024). The company’s gross profit margin dipped to 75% due to write-offs related to the chip flaws, indicating potential margin pressure as newer chips arrive. Furthermore, operating expenses rose as Nvidia ramps up spending on research and development for future products.

The big concern: Persistent questions remain about whether the broader AI market is delivering a sufficient return on investment to justify this massive spending surge (or is it a splurge?). Other than a handful of smaller companies who’ve shared some insights on how AI has benefitted their business (see Klarna), FAANG players who have invested heavily in Nvidia’s chips have yet to deliver the returns the market expects for such capital-intensive spending. Nvidia’s NIM Agent Blueprints initiative reflects the company’s recognition of the need to simplify AI development, which is critical to unlocking potential use cases that can generate material value.

Why it matters: Nvidia’s future is tightly linked to the success of AI. The Blackwell delay and rising expenses point to potential challenges ahead. Despite record-breaking revenue, concerns linger about the sustainability of the current AI investment climate. While Nvidia's NIM Agent Blueprints may be helpful in encouraging broader AI adoption, the broader ecosystem needs to begin generating healthy returns soon, before investors lose their patience.

Planting Strawberry, Seeding the Future of AI
Advanced Reasoning Capabilities to Enhance OpenAI Models

What it is: With OpenAI’s Sam Altman posting cryptic clues to his Twitter account and reports that the company is demonstrating the model to the U.S. government, we think things are official enough that we can start reporting on what has been rumored for the past few months: OpenAI is developing a new AI model code-named Strawberry, designed to significantly enhance reasoning capabilities beyond its current models. Strawberry can solve novel math problems, write complex code, and even tackle word puzzles, demonstrating proficiency in logic and language understanding.

What it means: OpenAI aims to incorporate a distilled version of Strawberry into a chatbot as early as this fall. This technology could be a significant step towards addressing the limitations of current chatbots, which often struggle with complex reasoning and generate inaccurate outputs (hallucinations). Strawberry could enhance ChatGPT’s performance, enabling it to handle more complex tasks and provide more accurate responses. The full-scale Strawberry is also reportedly being used to generate synthetic training data for OpenAI's next flagship large language model, Orion, and its future AI agents.
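To make the synthetic-data angle concrete, here is a minimal sketch of what "using a stronger model to generate training data for the next one" can look like in practice. It uses the public OpenAI Python SDK, but the model name, prompts, seed problems, and output format are our own illustrative assumptions; they say nothing about how Strawberry or Orion actually work.

```python
# Hypothetical sketch: a stronger "teacher" model produces worked solutions that
# could later serve as fine-tuning data for a smaller model. Model name, prompts,
# and output format are illustrative assumptions, not OpenAI's actual pipeline.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SEED_PROBLEMS = [
    "If x + 2y = 10 and x - y = 1, solve for x and y.",
    "A fair coin is flipped four times. What is the probability of exactly two heads?",
]

with open("synthetic_reasoning.jsonl", "w") as f:
    for problem in SEED_PROBLEMS:
        resp = client.chat.completions.create(
            model="gpt-4o",  # stand-in for a stronger reasoning model
            messages=[
                {"role": "system",
                 "content": "Solve the problem step by step, then state the final answer."},
                {"role": "user", "content": problem},
            ],
        )
        # Store each (problem, worked solution) pair as one training example.
        f.write(json.dumps({
            "prompt": problem,
            "completion": resp.choices[0].message.content,
        }) + "\n")
```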

Why it matters: OpenAI faces intense competition in the generative AI space, with rivals like Google and Anthropic also advancing reasoning capabilities. Strawberry's successful deployment could reinforce OpenAI's leadership position and pave the way for more advanced applications. Clearly, it hopes that by being transparent with the U.S. federal government it can avoid potential shutdowns. The use of Strawberry-generated synthetic data for training future models is significant insofar as it could alleviate the reliance on real-world data (we suspect OpenAI is approaching the upper bounds of scrapable, usable public data), which has raised privacy and IP concerns. The potential for an AI that can truly reason unlocks interesting use cases for enterprise and society.

Klarna: Buy AI Now, Pay Fire Later
BNPL Giant Embraces Automation to Boost Profits Ahead of IPO

What it is: Swedish buy-now-pay-later giant Klarna continues its aggressive pursuit of AI-driven automation to streamline operations and reduce its workforce (previously covered by the Geekly). The company reports that its AI assistant is already performing the tasks of 700 employees, leading to a significant reduction in customer service resolution times from 11 minutes to just 2 minutes. Klarna's workforce has shrunk from 5,000 to 3,800, primarily through attrition, and CEO Sebastian Siemiatkowski suggests it could eventually reach 2,000. For those keeping track at home, yes, their math is off by some 500 people: headcount is down by 1,200, while the AI assistant supposedly covers the work of only 700…

What it means: Klarna's strategic embrace of AI previews what is expected to be a broader trend of AI transforming industries. While the company's success in leveraging AI for customer service demonstrates its potential to enhance efficiency and reduce labor costs, it also brings to the forefront questions about reduced employment as a result of AI. What we're not seeing here is a high-grading of human employees to more complex tasks; instead, the company is taking the facile approach of simply clipping coupons rather than reinvesting and doubling down, a short-sighted strategy in our minds.

In pursuit of IPO: This focus on replacing humans with AI is directly tied to Klarna's pursuit of profitability as it prepares for an IPO next year. The company's reported 73% year-over-year increase in average revenue per employee underscores the impact of these efforts. As it's not a public company, we don't have access to other metrics, but we suspect there's more to the story. We believe it will be hard for the company to retain and hire talent given the approach it has espoused to date, but that isn't too surprising given the company makes its money off the backs of those who can least afford it. Questionable employment tactics and predatory business practices go hand in hand, after all.

Why it matters: Klarna's case study offers a glimpse into a future where AI plays a central role in reshaping labor markets, and it's not looking pretty. While the company's emphasis on attrition rather than layoffs mitigates immediate job displacement concerns, the long-term implications of widespread AI adoption for employment remain a significant consideration. Klarna's success in leveraging AI to drive profitability will likely encourage other companies to explore similar AI strategies, potentially accelerating this transformation across industries. We hope to see more positive applications that are less of a race to the bottom(feeders). This is one reason the democratization of AI (and open-source AI) is so vital. If the keys are left in the hands of the Klarnas of the world, dystopian visions aren't that far-fetched. Conversely, if AI can be used by the masses to generate monetary value, perhaps the need for Klarna's services will diminish. In the end, it may be Klarna that pays later.

Rent-a-Brain for AI Research
Human brain organoid bioprocessors hit the cloud

What it is: FinalSpark has begun offering paid remote access to its bioprocessors. What are those, you may be asking? Bioprocessors utilize human brain organoids (miniaturized versions of organs produced in vitro) and are touted to be a million times more energy-efficient than traditional digital processors. The design combines hardware, software, and biology, interfacing human tissue masses with electrodes and cameras to create a biological processor. Academic researchers can now access a shared “neuroplatform”, featuring four organoids, for $500 per user per month, with select projects eligible for free access.

What it means: It’s basically the AWS of biological processors: cloud biocomputing. The potential benefits of biocomputing are significant, particularly in addressing the growing energy demands of AI. FinalSpark envisions a future where bioprocessors could drastically reduce energy consumption for tasks like LLM training, offering both economic and environmental advantages. We’re still a ways away from that, so don’t toss your Nvidia H100s in the garbage just yet.

Why it matters: As AI models continue to grow in complexity and energy requirements, biocomputing presents a compelling potential approach to achieving high efficiency. The research conducted on this platform could yield valuable insights into the potential of brain organoids for processing information and contribute to the development of more sustainable and efficient AI systems. It is a little bit off-putting though…

California AI Regulation: From Oversight to Overreach?
SB 1047 sparks debate while AB 3211 raises alarm bells

What it is: California lawmakers recently passed SB 1047, a bill requiring AI developers to anticipate and mitigate potential harms caused by their AI systems; the bill has drawn pushback from major tech companies. Simultaneously, another bill, AB 3211, is progressing through the legislature. This bill focuses on AI image generation, mandating stringent watermarking systems for all AI-generated images and imposing extensive testing and disclosure requirements on model creators.

What it means: SB 1047 represents a significant step towards regulating AI, but its broad language and potentially onerous requirements have drawn criticism. Opponents argue that the bill's vague definition of "critical harms" and the scope of developer liability could stifle innovation, particularly in the open-source AI community. AB 3211, however, raises even greater concerns. The bill's strict watermarking requirements, potentially technologically infeasible at present, could effectively ban many existing AI image generation models and services in California. This raises questions about the bill's true intention and whether it serves as a form of regulatory capture benefiting large tech companies like Microsoft, OpenAI, and Adobe, who support the measure, at the expense of open-source development.

Why it matters: These bills are the first of many US regulations being introduced as part of the struggle to balance AI innovation with safety and ethical concerns. The debate surrounding SB 1047 showcases the challenges of regulating rapidly evolving technology and the potential unintended consequences of overly broad legislation.

A heavy hand: AB 3211’s potential impact on AI image generation, particularly its chilling effect on open-source development, raises serious concerns. The bill's questionable technological feasibility and lack of clear exemptions highlight the need for lawmakers to engage in more nuanced discussions with technical experts, to ensure that legislation effectively addresses the challenges of AI without stifling innovation or unduly favoring large corporations. We concur with Dr. Andrew Ng that the focus should be on regulating misuse of the tool rather than blocking development of the models themselves.

Do You Believe in Magic?
Ultra-long context window model pushes the boundaries of AI reasoning

What it is: Magic AI has unveiled its latest large language model, LTM-2-mini, boasting a context window of 100 million tokens, equivalent to roughly 750 novels or 10 million lines of code. This massive context window allows the model to retain and process vast amounts of information during inference, a significant departure from traditional models with limited context (e.g. GPT-4 has a context length of only 128k). To evaluate these ultra-long context models, Magic AI developed a new benchmark called HashHop, designed to assess the model’s ability to store, retrieve, and reason over extended information sequences.
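To give a flavor of what a HashHop-style evaluation involves, here is a toy Python sketch that builds one such probe: the context is filled with random hash pairs, and the model is asked to follow a chain of hops through them. The sizes, formatting, and single-chain structure below are our own simplifying assumptions for illustration, not Magic AI's actual implementation.

```python
# Toy illustration of a HashHop-style long-context probe: the prompt contains
# many shuffled "A -> B" hash pairs, and the model must chain several hops.
# Chain length, hop count, and formatting are illustrative assumptions only.
import random
import secrets

def make_hashhop_prompt(chain_length: int = 1000, hops: int = 3) -> tuple[str, str]:
    hashes = [secrets.token_hex(8) for _ in range(chain_length + 1)]
    pairs = list(zip(hashes[:-1], hashes[1:]))   # one long chain a -> b -> c -> ...
    random.shuffle(pairs)                        # shuffle so position carries no signal

    start = hashes[0]
    answer = hashes[hops]                        # hash reached after `hops` hops
    context = "\n".join(f"{a} -> {b}" for a, b in pairs)
    question = f"\nStarting from {start}, follow the chain {hops} hops. What is the final hash?"
    return context + question, answer

prompt, expected = make_hashhop_prompt()
print(prompt[:200], "...")
print("expected answer:", expected)
```

A longer context window simply lets the model hold more of these pairs (or, in the real-world analogue, more of a codebase) in view at once while it reasons.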

What it means: Magic AI's LTM-2-mini far exceeds the context length of even Google's Gemini 1.5 Pro (2 million tokens), augmenting the AI's capacity to reason over long sequences of information. In its announcement, the company considers the possibility that this breakthrough could transform how AI models learn, shifting from training-dominant approaches to more dynamic in-context learning during inference. Focusing on software development, Magic AI highlights the potential for code synthesis, where models with access to vast codebases, documentation, and libraries could generate significantly more sophisticated and accurate code. We've experienced these limitations firsthand: today's models cannot hold an entire (larger) codebase, meaning the coding support they can provide is often limited.

Why it matters: Ultra-long context windows could enhance AI capabilities across various domains, from software development to scientific research and creative writing. Magic AI's new HashHop benchmark offers a more rigorous method for evaluating these models, addressing the limitations of existing benchmarks that fail to capture the true complexity of long-context reasoning. The company's partnership with Google Cloud to build a supercomputer powered by NVIDIA GB200 NVL72 also piqued our interest. We expect to hear more from Magic AI in the not-too-distant future.

Can it Run Doom?
Google’s GameNGen uses inference instead of rendering

What it is: Google DeepMind researchers have developed GameNGen, a neural network that can simulate the classic video game Doom in real-time without relying on a traditional game engine. Running on a single Google TPU chip, GameNGen leverages a diffusion model to predict and render each frame, achieving playable gameplay at 20 frames per second. This marks the first time an AI has successfully simulated a complex video game with such fidelity and interactivity using inference instead of rendering. Put simply: it’s not being told what to show, it’s guessing what to show based on its training.
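To illustrate what "inference instead of rendering" means in practice, here is a simplified sketch of the loop: rather than a game engine computing the next frame from game state, a generative model predicts it from recent frames and the player's latest action. The model below is a random stub so the loop runs end to end; it is our own illustration, not DeepMind's code (their system reportedly fine-tunes a Stable Diffusion model for the frame prediction step).

```python
# Simplified illustration of an "inference instead of rendering" loop, in the
# spirit of GameNGen: a generative model predicts the next frame from recent
# frames and the player's action. The model here is a random stub so the loop
# actually runs; the real system uses a conditioned diffusion model.
import numpy as np

FRAME_SHAPE = (240, 320, 3)   # height, width, RGB
HISTORY = 8                   # how many past frames condition the prediction

def predict_next_frame(past_frames: list[np.ndarray], action: str) -> np.ndarray:
    # Stand-in for the generative model: conditioned on (past_frames, action),
    # it would output the next frame. Here it just returns noise.
    return np.random.randint(0, 256, FRAME_SHAPE, dtype=np.uint8)

def game_loop(num_steps: int = 60) -> None:
    frames = [np.zeros(FRAME_SHAPE, dtype=np.uint8)] * HISTORY  # blank start
    for step in range(num_steps):
        action = "move_forward"                     # would come from the player
        next_frame = predict_next_frame(frames[-HISTORY:], action)
        frames.append(next_frame)                   # display(next_frame) at ~20 fps

game_loop()
```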

What it means: We don’t use the term often, but this is a breakthrough. Inference instead of rendering offers a new avenue for game design and development compared to traditional methods. AI-driven game engines like GameNGen could drastically reduce development time and costs by eliminating the need for manually programmed game logic.

Why it matters: AI-powered engines could enable entirely new genres of games with dynamically evolving environments and gameplay, responding to player actions in real-time. GameNGen's ability to simulate complex environments in real-time has potential applications beyond gaming, in fields like autonomous vehicle development, virtual and augmented reality.

Before you go… We have one quick question for you:

If this week's AI Geekly were a stock, would you:


About the Author: Brodie Woods

As CEO of usurper.ai and with over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.