Welcome back to the AI Geekly, by Brodie Woods.
We’re going to keep it really short this week.
It’s Thanksgiving in the U.S.
The tryptophan narcolepsy is kicking in; every time I blink, 30 minutes passes.
Every time I blink… Sam Altman negotiates his job.
Every time I blink… another AI achieves General Intelligence.
Every time I blink… I hear another unsubstantiated rumor about the OpenAI board’s questionable decision-making.
For everyone who has been keeping up with the news cycle on the OpenAI drama, it’s been a week-long fever dream of possible theories, hearsay, and nonsense.
We’re going to let the dust settle on this one. In the absence of data, analysis is impossible. You can’t use A to solve for B, C, and D (especially if that A is followed by “ltman”).
AI News - Neural Nuggets
Technically, the Technical Details DO Matter…
Judge Tosses Much of Sarah Silverman’s Legal Challenge
What it is: U.S. District Judge Vince Chhabria tossed the majority of Sarah Silverman’s lawsuit earlier this week, citing a very practical reality. In the judge’s words: "There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs' books… To prevail on a theory that LLaMA's outputs constitute derivative infringement, the plaintiffs would indeed need to allege and ultimately prove that the outputs 'incorporate in some form a portion of' the plaintiffs' books.”
What it means: Suits under U.S. copyright law hinge upon substantial similarity. Anyone who has tried to use Meta’s LLaMA model, or any other available LLM, to reproduce works would come to the same conclusion. That just isn’t how these models work: not only won’t they reproduce Silverman’s books, they can’t. The “predict-the-next-token” core (glorified auto-complete, for laypeople) simply cannot reproduce works at scale like this, just as tires made from rubber cannot be made to reproduce the rubber trees from which their raw material came.
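To make the “glorified auto-complete” point concrete, here is a minimal toy sketch of next-token prediction. It uses a simple bigram frequency table (real LLMs use learned neural weights, not lookup tables; the corpus and names here are invented for illustration), but it captures the key idea: the model stores statistics about its training text and samples one token at a time, rather than retrieving stored copies of the works themselves.

```python
import random
from collections import defaultdict, Counter

# Toy training corpus (illustrative only).
corpus = "the cat sat on the mat and the cat slept".split()

# "Training": count how often each word follows each other word.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def next_token(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = table.get(word)
    if not candidates:
        return None  # word never appeared with a successor in training
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "Inference": generate token by token. Each step predicts only the next
# token from context; nothing here pages through a stored copy of the text.
out = ["the"]
for _ in range(5):
    nxt = next_token(out[-1])
    if nxt is None:
        break
    out.append(nxt)
print(" ".join(out))
```

Scale the table up to billions of learned parameters and you have the flavor of an LLM: statistical tendencies, not a library of verbatim texts.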
Why it matters: Silverman and other IP owners have their hearts in the right place, but they need better representation and a better strategy. Ready-Fire!-Aim isn’t going to cut it. The technical details matter. While we are not fans of government over-regulation when it comes to AI, there may be a need for legislation to protect creators whose IP is ultimately used to train these advanced models (whether LLMs as in this case, diffusion models as in related lawsuits, or anything else).
One more thing: Alternatively, and perhaps preferably, we like the proactive approach some are taking: avoid the issue entirely by engaging creators directly on IP, up front. Unity Technologies (makers of the Unity game engine powering many of today’s most popular games) recently announced Muse, its proprietary AI model trained exclusively on owned and/or licensed content, avoiding the PR and ethics mess that comes from treating copyrighted works like the Pirate Bay.
Tech News - Nevitably Nvidia
“We’re in the AI chip business, and cousin, business is a-boomin’”
Nvidia’s Q3 Results beat Street estimates
What it is: NVDA generated Q3 revenue of $18.1 Bn, ahead of Street estimates of $16.2 Bn and up 34% q/q. While earnings also beat expectations, that’s not actually why we follow the name… hah. We just care about the AI read-throughs. For that, we look at its Data Center revenue, which was $14.5 Bn —a 279% increase y/y and 41% q/q.
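For readers who like to sanity-check growth figures, here is our quick back-of-the-envelope math on the Data Center numbers above (the implied prior-period revenues are our own arithmetic from the reported growth rates, not company-disclosed figures):

```python
# Back-of-the-envelope check on Nvidia's reported Data Center growth.
dc_q3 = 14.5        # Q3 Data Center revenue, $Bn (reported)
yoy_growth = 2.79   # +279% year over year (reported)
qoq_growth = 0.41   # +41% quarter over quarter (reported)

# current = prior * (1 + growth), so prior = current / (1 + growth)
dc_year_ago = dc_q3 / (1 + yoy_growth)   # implied revenue a year earlier
dc_prev_q = dc_q3 / (1 + qoq_growth)     # implied revenue last quarter

print(f"Implied Data Center revenue a year ago: ${dc_year_ago:.1f} Bn")
print(f"Implied Data Center revenue last quarter: ${dc_prev_q:.1f} Bn")
```

Roughly $3.8 Bn a year ago to $14.5 Bn today; that is the shape of the AI hardware demand curve in one line item.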
What it means: It means they’re killing it. The quarter demonstrates exactly what we have been hearing everywhere: there is insatiable demand for Nvidia hardware stemming from the emerging AI Age and the heavy compute workloads that rely on Nvidia’s world-leading tech. Honestly, that’s not hyperbole. That’s just the story.
Why it matters: We highlight Nvidia frequently here at the AI Geekly, and for specific reasons. Not only are Nvidia AI chips/cards the most in-demand pieces of hardware on the planet, but the company consistently outperforms. CEO Jensen Huang and his team took serious bets in steering its hardware in an AI-focused direction well before the space had really matured.
Credit where it is due: In fact, Nvidia is largely responsible for the AI we know today, due to those highly-ridiculed (and ultimately prescient) bets on RT cores and Tensor cores in its 2018 Turing-based RTX 2000 series cards. This, combined with the company’s active, aggressive work to get its GPUs into the hands of AI researchers, was a major contributor to the expansion and acceleration of the modern (developing) AI ecosystem.
With over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist at leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.
Glossary
General Intelligence (AI): A level of artificial intelligence where a machine can understand, learn, and apply its intelligence to a wide range of problems, similar to human cognitive abilities.
Derivative Infringement: In copyright law, it's a type of infringement where a new work is created from an existing one without permission.
Predict-the-next-token: A core mechanism in AI language models where the system predicts the next part of a sequence based on the context provided.
AI Chips/Cards: Specialized hardware designed to efficiently process AI-related tasks, such as data processing and machine learning.
RT Cores and Tensor Cores: Specialized processing cores in Nvidia’s graphics cards; RT cores accelerate realistic lighting via ray tracing, while Tensor cores accelerate the matrix math underlying AI workloads.
Y/Y and Q/Q: Year-over-year and quarter-over-quarter, respectively; financial terms used to compare a company’s performance across different time periods.
OpenAI: An AI research and deployment company known for developing advanced AI models and advocating for ethical AI usage.
Meta’s LLaMA Model: Refers to a large language model developed by Meta (formerly Facebook), used for various AI applications like natural language processing.
Unity Technologies: A video game software development company known for its Unity game engine, widely used in game development and now exploring AI applications.
Nvidia: A technology company best known for its graphics processing units (GPUs) for gaming and professional markets, and more recently, for its role in AI and deep learning.
Muse: A proprietary AI model developed by Unity Technologies, trained exclusively on owned or licensed content.
U.S. District Court: A federal court in the United States where many legal cases, including copyright and intellectual property disputes, are adjudicated.
Sam Altman: Part-time(?) CEO of OpenAI, a research lab that develops and promotes friendly artificial general intelligence. He is a well-known figure in the AI community and widely considered one of the field’s leading voices.
Sarah Silverman: An American comedian and actress, mentioned in the context of a legal challenge related to AI and copyright law.
Vince Chhabria: A U.S. District Judge mentioned in the context of ruling on a legal case involving AI and copyright issues.
Jensen Huang: CEO of Nvidia, recognized for leading the company's significant advancements in AI and GPU technology.