Onward and Upward

AI landscape continues to mature

Welcome back to the AI Geekly, by Brodie Woods, brought to you by usurper.ai. This week we bring you yet another week of fast-paced AI developments packaged neatly in a 5 minute(ish) read.

TL;DR BBQs; Amii is a good friend to AI; PC++; You gotta keep ‘em regulated; Jensen’s moon mission; Backfilling the training

This Memorial Day long weekend (sorry Canada, you already had yours) we have a jam-packed note as we bring you highlights from the Upper Bound AI conference we attended, put on by the venerable Alberta Machine Intelligence Institute (Amii), a must-attend for those in industry. Next, we dissect Microsoft’s latest announcement: a new AI PC dubbed Copilot+ with dedicated neural processing hardware developed by Qualcomm, which they claim is the first chip with dedicated AI hardware (it’s not; AAPL has had them in its M-series silicon for years). We then briefly discuss the emergence of the self-appointed “AI Gang” as US politicians position themselves to begin regulating the nascent AI industry, and take a look at Nvidia’s quarter, which knocked it out of the park (again, again), before closing out with a look at a recently announced content deal signed between OpenAI and News Corp.

You’ve Got a Friend in Amii
Conference puts Alberta AI prowess on display

What it is: Upper Bound is an annual AI conference held in Edmonton, Alberta, organized by the Alberta Machine Intelligence Institute (Amii). The 2024 event, which took place from May 21 to 24, featured over 150 sessions and 200 speakers, and attracted more than 5,000 attendees from around the world. The conference included a mix of presentations, master classes, workshops, networking events, and a major party, all designed to foster learning, discussion, and collaboration in the field of artificial intelligence.

What it means: Upper Bound serves as a significant platform for the global AI community, bringing together researchers, business leaders, students, and enthusiasts to explore the latest advancements and applications in AI. The event highlights the transformative power of AI across various industries, including agriculture, law, healthcare, and government services, among others. Featuring prominent speakers such as Richard S. Sutton (a pioneer of reinforcement learning), UT Austin’s Amy Zhang, and CIFAR AI Chair Patrick Pilarski, along with business leaders including Vancouver International Airport CIO Gerri Sinclair and Clio CEO Jack Newton, the conference underscored the importance of embracing AI across a variety of industries to get ahead in a competitive and rapidly evolving technological landscape. We were impressed with speakers who reminded the audience of the human aspect, like DDB’s Howard Poon, whose panel often touched on the interaction between creatives and AI.

Why it matters: This year’s Upper Bound conference emphasized the impact and power of AI, showcasing its potential to drive innovation and solve complex real-world problems. The event also introduced the Executive AI Summit, a new program tailored for corporate leaders, highlighting the increasing need for strategic AI adoption at the executive level. There, Amii CEO Cam Linke led several key sessions on the future of AI, workforce implications, and AI’s role in various sectors. As we mentioned in our intro, we think this is a valuable conference for anyone from expert to enthusiast; there’s something for everyone. This was our first year attending and we walked away impressed. We’ll be back next year.

Copilot+ or Copilot-?
Microsoft tries to reinvigorate its hardware business with AI

What it is: Microsoft announced a new category of AI-powered personal computers called Copilot+ PCs, designed to integrate advanced artificial intelligence capabilities directly into the hardware. These PCs, set to debut in mid-June, will be available through Microsoft's Surface line and various manufacturing partners. The Copilot+ PCs feature neural processing units (NPUs) capable of 40 tera operations per second (TOPS), 16GB of RAM, and 256GB of storage, enabling them to handle AI tasks locally rather than relying on cloud-based processing. That is roughly 4x faster than AMD’s mobile chips with integrated NPUs and 3x faster than Apple’s M3 silicon, but just 5 TOPS slower than Intel’s upcoming Lunar Lake chips. We also note that the measly 16GB of RAM (and no mention of VRAM) means Copilot+ PCs will only be able to run smaller, quantized 7B- or 8B-parameter models, as opposed to full-blown 30B or 70B parameter models, which require commensurately more dedicated RAM.
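A rough back-of-the-envelope sketch helps explain that RAM ceiling (this is our own illustrative arithmetic, not Microsoft’s sizing guidance): the memory needed to hold a model’s weights is roughly parameter count times bytes per parameter, plus some runtime overhead for things like the KV cache.

```python
def model_memory_gb(params_billion: float, bits_per_param: int, overhead: float = 1.2) -> float:
    """Rough estimate of memory (GB) needed to hold a model's weights for inference.

    params_billion -- parameter count in billions (e.g. 7, 8, 30, 70)
    bits_per_param -- 16 for fp16, 8 or 4 for common quantization schemes
    overhead       -- fudge factor for KV cache, activations, and runtime (assumed here)
    """
    weight_bytes = params_billion * 1e9 * (bits_per_param / 8)
    return weight_bytes * overhead / 1e9


# A quantized 7B-8B model fits comfortably within a 16GB machine...
print(f"7B  @ 4-bit : ~{model_memory_gb(7, 4):.1f} GB")    # ~4.2 GB
print(f"8B  @ 8-bit : ~{model_memory_gb(8, 8):.1f} GB")    # ~9.6 GB
# ...while larger models at full precision blow well past it.
print(f"30B @ 16-bit: ~{model_memory_gb(30, 16):.1f} GB")  # ~72 GB
print(f"70B @ 16-bit: ~{model_memory_gb(70, 16):.1f} GB")  # ~168 GB
```

The exact numbers vary by runtime and context length, but the order of magnitude is what matters: 16GB of shared system memory caps these machines at small, heavily quantized models.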

What it means: The introduction of Copilot+ PCs ties in nicely with our thesis regarding the move to local/edge compute and away from large, centrally located, high-latency data centers (that said, we were impressed with the low latency of OpenAI’s recent AI assistant demo, which, despite running in cloud datacenters, is able to respond in real time). MSFT’s move emphasizes the growing importance of local AI processing for enhanced performance, privacy, and efficiency. By integrating AI capabilities directly into the hardware, Microsoft aims to provide users with faster, more responsive computing experiences, reducing the latency and potential security risks associated with cloud-based AI. This move also positions Microsoft to compete more effectively with other tech giants like Apple and Google, who are also advancing their AI technologies (with Apple targeting local AI inference and Google continuing to focus on its solid cloud offering).

Why it Matters: Microsoft's Copilot+ PCs are perhaps one of the most interesting developments in an otherwise dormant PC market. It’s been several years since anything “exciting” happened with PCs; the Mac lineup, by contrast, has stayed interesting since the introduction of Apple’s custom M-series silicon and the continued innovation of its subsequent M2 and M3 chips. Depending on how well integrated the new Copilot capabilities are with the new hardware, the launch of these AI-powered PCs could drive some growth in otherwise sleepy PC sales, as businesses and consumers seek to leverage the advanced AI functionalities for productivity and creative tasks.

You Don’t Pick Your Own Nickname
Self-dubbed “AI Gang” of politicians forms as actual AI leaders create safety alliance

What it is: The "AI Gang," a bipartisan group of U.S. senators led by Senate Majority Leader Chuck Schumer, has unveiled a high-level roadmap for AI regulation, emphasizing substantial government investment in AI research and development. The plan calls for $32 billion in annual funding and outlines policy recommendations claimed to address AI's potential risks, such as discrimination, job displacement, and election interference. Concurrently, at the AI Seoul Summit, 16 leading AI companies, including Microsoft, Google, Amazon, and OpenAI, committed to a set of voluntary safety standards aimed at ensuring the responsible development and deployment of AI technologies.

What it means: The AI Gang's roadmap aims to provide a broad framework for future AI legislation, focusing on the potential benefits and risks of AI without imposing immediate stringent regulations. This approach reflects a belief that heavy regulation could stifle innovation, contrasting with the more regulatory-focused stance of the European Union. Meanwhile, the AI safety alliance formed at the Seoul Summit marks a historic agreement among global tech giants to publish safety frameworks and risk thresholds, with commitments to halt the development of AI models if severe risks cannot be mitigated. We question these leaders’ commitment to the democratization of AI if their intention is to keep advanced AI tools solely available to wealthy tech companies and the politically connected.

Why it Matters: The U.S. Senate's roadmap aims to maintain American AI supremacy with an emphasis on investment over regulation, highlighting a significant policy divergence between the U.S. and other regions like the EU, which could impact global AI leadership and innovation dynamics. The industry's voluntary commitments at the Seoul Summit set a precedent for global standards in AI safety, promoting transparency and accountability among leading AI developers; it reminds us of the self-regulatory organizations (SROs) in the finance industry, such as FINRA (the Financial Industry Regulatory Authority), which acts as a first line of defense for ensuring fair markets. Together, these initiatives highlight collaborative efforts among policymakers and industry leaders to harness the benefits of AI while safeguarding against its potential harms. Ultimately, we remain skeptical of their support for open science, given previous public comments regarding the need to limit public access to models.

We’re Seriously Running Out of Superlatives to Describe These Guys
Nvidia does it again with blow-out quarter

What it is: Nvidia reported its fiscal first-quarter results, showcasing record-breaking performance driven by the surging demand for AI infrastructure (surprise, surprise). The company achieved quarterly revenue of $26 billion, marking an 18% increase from the previous quarter and a staggering 262% year-over-year increase. This exceptional performance was primarily fueled by robust demand for NVIDIA's data center products, particularly its Hopper GPU platform, which saw significant adoption among cloud providers and enterprise clients.

What it means: The results underscore NVDA's pivotal role in the ongoing AI revolution, with its data center segment alone generating $22.6 billion in revenue, a 427% increase from the previous year. This growth highlights the widespread adoption of Nvidia's technology in AI and high-performance computing applications, with notable customers including OpenAI, Meta, and Google. The company's financial success has driven a meteoric rise in its stock price, surpassing the $1,000 mark and leading management to announce a 10-for-1 stock split to make shares more accessible to retail investors.

Why it Matters: Nvidia's blow-out quarter is a testament to the transformative impact of AI on the tech industry and beyond. The company's advancements in AI infrastructure are not only driving its own growth but also enabling other industries to harness the power of AI for enhanced productivity and innovation. Combined with the stock split, an increased dividend, and strong forward guidance, NVDA’s quarter was a masterclass in keeping the momentum going, pleasing investors and placating the acerbic investment community with enough tidbits to allay concerns about a slowdown in growth anytime soon.

“Sometimes We Pay For Things”
OpenAI cuts content deal with News Corp.

What it is: OpenAI has entered into a significant content-licensing agreement with News Corp, the media conglomerate that owns The Wall Street Journal, The New York Post, and other major publications. This multi-year deal, valued at over $250 million, grants OpenAI access to both current and archived content from News Corp's extensive library. The partnership aims to enhance OpenAI's AI models, such as ChatGPT, by incorporating high-quality journalistic content into its training data.

What it means: We highly doubt that, to this point, OpenAI has held back from training its models on News Corp content. Indeed, we expect that OpenAI has trained on much of the so-called “open internet,” or the internet that can be easily accessed and browsed, which includes much of News Corp’s content. So really, this is the “beg forgiveness instead of asking permission” approach to training-data licensing. At $250 mm, it’s priced high enough to be worth it for News Corp and not terribly punitive to the well-heeled OpenAI.

Why it Matters: The partnership between OpenAI and News Corp provides a valuable datapoint as content creators and AI companies continue their delicate dance of price discovery. The deal also highlights a shift in the media industry, where traditional publishers are increasingly partnering with AI firms to monetize their content and navigate the challenges posed by generative AI technologies. This deal could pave the way for similar agreements between AI companies and other media organizations, potentially transforming the landscape of news consumption and production. Additionally, it addresses some of the legal and ethical concerns surrounding the use of copyrighted content by AI, offering a model for fair compensation and collaboration between tech firms and traditional media.

Before you go… We have one quick question for you:

If this week's AI Geekly were a stock, would you:


About the Author: Brodie Woods

As CEO of usurper.ai and with over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.