
AI Geekly - This is your Brain on AI

I’ve never seen anything like this in my career

Welcome back to the AI Geekly, by Brodie Woods. Your curated 5-minute-ish read on the latest developments in the rapidly evolving world of AI.

AI Quote of the Week:

“I’ve never seen anything like this in my career, and I’ve been doing AI for 20 years… We saw a window of opportunity that was just completely disruptive, and I think as an organization, we didn’t want to get left behind.”

-Jeff McMillan, Chief Analytics & Data Officer, Morgan Stanley Wealth Management, on the firm’s internal ChatGPT-powered Wealth Advisor chatbot.

Questions? Reach out.

TL;DR - Exec Summary

Recipe for AGI; AI & Robots —the new PB&J; Promptbreeder’s evolution; AMD and OpenAI chip away at Nvidia; Bard teaches Assistant iambic pentameter

In this week’s AI Geekly, OpenAI's Greg Brockman discusses the simple recipe for Artificial General Intelligence (AGI), hinting at cognition-capable AI within a decade.
Meanwhile, Google DeepMind's new RT-X model is breaking barriers in robotics, outperforming specialized models in zero-shot attempts.
DeepMind also unveiled Promptbreeder, an AI that optimizes prompts iteratively and autonomously, accelerating AI self-improvement.
On the chip front, AMD aims to snatch Nvidia's crown with a software-centric strategy, while OpenAI considers a play for vertical integration. Lastly, Google Assistant gets a Bard-infused IQ boost, promising smarter interactions —whether this improves Spotify song request accuracy is an open question, but we are optimistic for a world where we no longer have to listen to “Hauling Oats” and “Taylor Shift”…

-Read on for the full story

AI News

Girls = Sugar+Spice+Everything^Nice
Boys = Snips+Snails+√(Puppy_Dog_Tails)
AGI= Compute+Data+Algos?
ChatGPT co-creator on ingredients to produce AGI

What it is: OpenAI’s Greg Brockman broke down the ingredients for Artificial General Intelligence to a group of undergrads, sharing important insight into how the creators of ChatGPT and the GPT-4 model view the path to, and impacts of, a general AI capable of cognition.

What it means: One of the most connected and influential players in the AI space, Brockman offered various insights, expectations and aspirations throughout the interview: AGI within 10 years, post-scarcity society and high-basic-income (HBI) in lieu of universal-basic-income, and the ultimate technological breakthrough: Artificial Superintelligence, and Singularity.

Why it matters: Roughly a year ago for many (ChatGPT) and 3+ years ago for some (GPT-3), the world changed dramatically. The potential and abilities of new GenAI began to take center stage. Humanity is going through an open public beta of AI, one with considerable risks. While it is tempting to hit pause and give ourselves time to catch up, the world doesn’t work like that (other nations will not wait). For better or worse, change is the one constant. The next decade will be an exciting one.

RT-X Not Just for Graphics Cards Anymore…
New Robot Instruction Model Accelerates AI-Robot Convergence


RT-2-X (55B): one of the biggest models to date performing zero-shot tasks in lab

What it is: Google (DeepMind) released an updated robot instruction model called RT-X (two months after its predecessor RT-2), trained on diverse robotic experimentation data from an assortment of universities and studies.

What it means: Despite training on a variety of materially different robotics papers/data (different formats, approaches, etc.), the new generalized RT-X model outperforms more specialized robot models, lending further support to the thesis of emergent capabilities in generalized models (models doing things they weren’t explicitly trained to do).

Why it matters: AI advances over the past 18 months are quickly becoming translatable to the relatively more stagnant world of robotics. It’s not hard to imagine a future where robots have been imbued with an acceptable degree of intelligence to be useful —powered by multimodal GenAI models including vision, sound, speech, and emergent reasoning.

Getting Better All the Time...
DeepMind Promptbreeder self-improves for better results

What it is: Determined to fill the AI Geekly news pipeline to capacity, Google DeepMind also announced Promptbreeder this week, a self-improvement mechanism for Large Language Models (LLMs). Promptbreeder evolves AI instruction prompts autonomously to better suit specific domains. Notably, it doesn’t only improve underlying prompts, it iterates on the methodology of improvement as well —a novel approach.

What it means: One of DeepMind’s researchers shares a helpful analogy: LLMs are like computers. Think of prompts being executed like programs being run on the “LLM computer”. Thus, it is not surprising to see dramatically different outcomes/outputs from LLMs depending on the prompts used, just as we see stark differences in outputs between programs on a computer (e.g. Excel vs. Photoshop). We can think of prompt fine-tuning as creating an LLM program, and Promptbreeder as an automated LLM program creator.
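For the geekier readers, the evolutionary loop can be sketched in a few lines of Python. This is a toy illustration only, not DeepMind’s implementation: the `score` and `mutate` functions below are placeholders (Promptbreeder actually uses an LLM both to mutate prompts and to mutate the mutation instructions themselves).

```python
import random

def score(prompt: str) -> float:
    """Placeholder fitness function: reward prompts that ask for careful,
    step-by-step work. Promptbreeder would instead score a prompt by how
    well an LLM performs on a task when given it."""
    return sum(kw in prompt.lower() for kw in ("step", "explain", "check"))

def mutate(prompt: str, mutation_prompt: str) -> str:
    """Placeholder mutation: Promptbreeder asks an LLM to rewrite the prompt
    according to mutation_prompt; here we simply append a random hint."""
    hints = ["Show each step.", "Explain your reasoning.", "Check your answer."]
    return prompt + " " + random.choice(hints)

def evolve(seed: str, generations: int = 10, pop_size: int = 8) -> str:
    """Evolve a population of prompts: mutate everyone, keep the fittest."""
    population = [seed] * pop_size
    mutation_prompt = "Rephrase the instruction to be clearer."
    for _ in range(generations):
        children = [mutate(p, mutation_prompt) for p in population]
        # Truncation selection: keep the top pop_size of parents + children.
        population = sorted(population + children, key=score, reverse=True)[:pop_size]
    return population[0]

best = evolve("Solve the math problem.")
print(best)
```

Because the seed prompt always survives into the next generation unless something better displaces it, the best prompt’s score can only go up over time —the same monotonic-improvement property that makes self-referential approaches like this so interesting.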

Why it matters: It took nearly 100 years from the creation of computers for truly automated coding to arrive in late 2022. It has taken only six years since the invention of the Transformer architecture underlying LLMs for humans to automate the “coding” (prompt creation) of an emerging AI technology whose possibilities we are only beginning to understand. When we talk about the rate of change accelerating, things like this —and the pace of the RT-X evolution in the story above— are clear examples.

Tech News

Family Feud —”Survey saaaaaaays…. SOFTWARE”
AMD’s AI hopes depend on ROCm

Game of Thrones vibes as distant relatives Lisa Su (AMD CEO) and Jensen Huang (Nvidia CEO) vie for AI chip supremacy

What it is: AMD CEO Dr. Lisa Su discussed her plan to win AI market share from dominant rival Nvidia (which commands an estimated 80-95% share) with a focus on software. Not only is there a natural opportunity to exploit current AI GPU scarcity (particularly with regard to Nvidia’s best AI chips), but Dr. Su sees particular opportunity on the inference side (running AI, not training it).

What it means: There simply are not enough modern, performant AI-capable GPUs available for purchase, or in cloud offerings, to meet current AI-driven demand. Part of Nvidia’s success has come from the walled garden of its CUDA software, which requires Nvidia GPUs —historically it has been the software of choice for massively parallel applications like AI training. AMD’s own ROCm software recently achieved feature parity with CUDA, making it a serious contender if it can get community buy-in (ROCm historically has a reputation for bugginess and poor support).

Why it matters: We’ve seen a few chip designers announce new inference-focused products. Training is still the more intensive task, but these could reduce the burden that inference workloads currently place on training-capable GPUs. While inference hardware may be easier to build and can score quick wins (it need not be as performant), focusing there effectively cedes first place in training to Nvidia.

Much Ado About Music-Volume
Google’s Bard performs with Google Assistant

The Bard’s famous Hamlet, reimagined with AI

What it is: In a natural synergistic move, Google is injecting its Bard AI package into Google Assistant. As is in-vogue with many AI tools these days, it is being released in drips and drabs as an “early experiment” to testers before being rolled-out publicly.

What it means: Users of Google’s Assistant on mobile devices will benefit from a smarter Assistant, imbued with the “reasoning”, generation, and contextual fluency that Bard brings. Vanilla Google Assistant has been showing its age —now seven years old (but weaker in many cognitive facets than comparable seven-year-olds)— and the bar has been raised for AI agent interactions. Siri finds itself in a similar spot. We’re not even going to mention Samsung’s “Bixby”, or MSFT’s Cortana, which has essentially been erased from Windows 11 and replaced by Copilot.

Why it matters: GenAI for its own sake isn’t sustainable; it needs to make a tangible impact to matter. At the AI Geekly, we take a critical eye to AI developments overall. In a consumer setting, these tools need to both “wow” and materially improve quality of life to survive. In a business setting, AI must generate meaningful financial impact to companies’ bottom lines in order to be sustained and invested in. We have seen very few deployed, in-production GenAI use cases where the math works. This is still very much in the “show me” stage of AI’s development.

Since We Haven’t Mentioned AI Chips in 3 Seconds…
Chips! Chips! Chips! — OpenAI looks to roll its own

What it is: OpenAI, the company behind ChatGPT, is weighing the merits of crafting its own AI chips. Amidst an ultra-tight chip market (Nvidia’s H100 GPUs may literally be the most in-demand object on earth at the moment), the company is exploring possible partnerships or M&A to achieve a degree of autonomy/vertical integration.

What it means: OpenAI’s probing mirrors a broader tech industry trend: a pivot toward in-house chip design (as Apple, AWS, and Google have done), a strategic move to reduce reliance on external players. This could cut down OpenAI's hefty operating costs and supply much-needed hardware for training —both currently tethered to the vagaries of external chip market dynamics.

Why it matters: The news isn’t surprising. Vertical integration, particularly with AI chip scarcity, makes intuitive sense. While the move does risk dividing the attention of management, ultimately, we believe convergence of software and hardware to be inevitable.

What everyone’s missing: Nobody seems to be drawing a distinction between chip design and chip fabrication. Very few companies that design their own chips in-house actually manufacture them. Manufacturing is handled exclusively by a handful of companies (TSMC, Samsung, etc.) with fabrication backlogs booked for years! There’s no fast track. The only opportunity is if OpenAI partners with, or acquires, a company already in the queue.

Before you go… We have one quick question for you:

Is the AI Geekly Too High Level?


About the Author: Brodie Woods

With over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.

Glossary

Terms:

  • Artificial General Intelligence (AGI): highly autonomous systems capable of outperforming humans in most economically valuable work. Unlike Narrow AI, AGI has generalized intelligence, allowing it to perform any intellectual task that a human being can do.

  • Artificial Superintelligence: A form of AI that surpasses human intelligence across all practical fields, from ordinary problem-solving to creative innovation. It's speculated that ASI could outperform the best human brains in practically every field.

  • Singularity: a hypothetical point in the future when artificial intelligence surpasses humans, leading to unpredictable and potentially rapid advancements. It suggests that such an AI could improve itself at an exponential rate, transforming society.

  • Zero-shot Learning: involves training a model to handle tasks it has never seen during training. It allows AI to make predictions or categorizations in scenarios where no explicit examples were provided beforehand.

  • RT-X: a robot instruction model developed by Google DeepMind, designed to perform unseen tasks in academic labs, bridging the gap between AI advancements and robotics.

  • Promptbreeder: A mechanism by DeepMind aimed at evolving AI prompts autonomously to better suit specific domains. It not only improves underlying prompts but also iterates on the methodology of improvement.

  • ROCm (Radeon Open Compute): An open-source software platform from AMD designed to provide a framework for GPU-accelerated computing. ROCm facilitates the development of applications that leverage the computational power of AMD GPUs, aiming to make GPU computing accessible and easy to use for developers.

  • CUDA (Compute Unified Device Architecture): A parallel computing platform and interface created by NVIDIA. CUDA allows developers to use NVIDIA GPUs for general-purpose processing, particularly useful in AI.

  • Inference (in AI): Inference refers to the process of using a trained machine learning model to make predictions or decisions. It's the phase where the trained model is deployed and starts taking in new data to provide outputs.

  • Vertical Integration: In business, vertical integration refers to a strategy where a company owns or controls its suppliers, distributors, or retail locations to control its value or supply chain. Vertical integration can help a company reduce costs and improve efficiency.

  • Bard: An AI package developed by Google for consumer applications; Google’s consumer-facing answer to ChatGPT and Bing Chat. Recently integrated into Google’s productivity suite.

  • GenAI (Generative AI): AI that creates new content through learning from existing data. This technology can be utilized across various domains such as images, text, music and video.

Entities:

  • OpenAI: A Microsoft-backed AI research lab responsible for the popular ChatGPT as well as the underlying GPT-3.5 and GPT-4 LLMs and DALL-E image generator.

  • Google DeepMind: A British AI company, acquired by Google in 2014, and combined with Google Brain earlier this year. Known for pioneering work in deep learning and AI for diverse applications.

  • AMD: Advanced Micro Devices, an American multinational semiconductor company that develops computer processors and related technologies for business and consumer markets.

  • Nvidia: An American multinational technology company, known for its Graphics Processing Unit (GPU) products, which are particularly well-suited for AI applications.

Key People:

  • Greg Brockman: President, Chairman and Co-founder of OpenAI, a significant player in the AI space.

  • Dr. Lisa Su: Chair & CEO of Advanced Micro Devices, pioneer and thought leader in the semiconductors space. Responsible for the turnaround and success of AMD over the past decade.