AI Geekly - Making it Rain
All about the Benjamins...
Welcome back to the AI Geekly, by Brodie Woods, bringing you yet another week of fast-paced AI developments packaged neatly in a 5 minute(ish) read.
TL;DR: Canadian Bacon; don’t count your chips, Nvidia
This week we have a shorter note for you again as we work to bring you our AI 101 Primer report. Canada puts its money where its globally recognized talent is, with a major investment pledge eclipsing those of rival nations. We close out with an update on the latest chips from Google and Meta, examining why the most powerful chip isn’t always the best option for production use cases given the high cost of AI inference. Read on below.
That’s a lot of “Loonies”
Canadian government pledges $2Bn to AI
What it is: The Canadian federal government this week announced plans to invest C$2.4 Bn (US$2 Bn) into its own burgeoning AI economy, buttressing the nation’s globally renowned AI talent with desperately needed capital.
What it means: This is a sizeable investment both locally and on the world stage, eclipsing fellow AI powerhouse France’s €1.5 Bn (US$1.6 Bn) despite France being 66% larger by population and a third larger by GDP. The UK, not considered a global AI hub (sorry, Mr. Turing), has pledged a similar amount to France’s but will likely need to do more to foster local AI innovation if it wants to catch up to the leaders of the pack.
Why it matters for Canada: The investment comes at a critical time for Canada’s AI community: as we’ve covered in previous issues, Canada overindexes when it comes to AI talent, producing some of the top AI research globally. What has been missing is the capital, at scale, to backstop these experts and build out a vibrant AI economy in earnest. This can only be done by enabling vertical integration from a financial lifecycle perspective: Canadian entities and institutions investing in Canadian companies, generating outsized earnings, and then reinvesting those earnings (rinse-and-repeat-style) back into the AI economy, as we see in Silicon Valley with its richly developed venture funding ecosystem.
Why it matters for the US: The 2024 budget contains $3 Bn in spending ostensibly related to AI across the US government’s various agencies to develop, test, and procure AI systems. While the dollar value is higher, the impact is likely to be muted by the detrimental muddling effect of bureaucracy. On a relative basis (GDP and population), this is a paltry figure. Direct investment is necessary to foster growth and maintain U.S. AI supremacy (from a geopolitical superpower perspective). Other nations are nipping at the heels of the US (Canada, France, China), and some are investing directly in US companies, capturing IP, as in the case of Saudi Arabia and the UAE. Uncle Sam had better open the checkbook (but, uh, deal with the ballooning national debt too, please).
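To put that relative-basis point in numbers, here’s a quick back-of-the-envelope sketch. The pledge figures are the ones cited above (the UK’s is assumed equal to France’s, per this note), and the population figures are rough 2024 estimates, not official statistics.

```python
# Back-of-the-envelope: AI pledges per capita (USD).
# Pledge figures are from this note; the UK figure assumes
# parity with France as described above. Populations are
# rough 2024 estimates, in millions.

pledges_usd_bn = {"Canada": 2.0, "France": 1.6, "UK": 1.6, "US": 3.0}
population_mm  = {"Canada": 40,  "France": 68,  "UK": 68,  "US": 335}

for country, pledge in pledges_usd_bn.items():
    per_capita = pledge * 1_000 / population_mm[country]  # $Bn -> $/person
    print(f"{country}: ~${per_capita:.0f} per person")

# Canada: ~$50, France: ~$24, UK: ~$24, US: ~$9 per person
```

Even with generous rounding, Washington’s pledge works out to a fraction of Ottawa’s on a per-person basis.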
Chips Ahoy!
Google and Meta announce new AI silicon
What it is: At Google Cloud Next 2024 in Las Vegas this week, Google announced two new chips: the latest iteration of its AI-dedicated Tensor Processing Unit (TPU), the TPU v5p, and its first Arm-based CPU, the Axion. Not content to let Alphabet hog all the attention, Meta (the artist formerly known as Facebook) provided details of its new MTIA chip.
What it means: The list of companies hoping to dethrone Nvidia’s AI chip supremacy grows by the day (these aren’t GOOG and META’s first chips, though). Announcing the performance of their respective chips, both companies compared their silicon to Nvidia’s Hopper chips (H100 and H200), which is all well and good, but Nvidia recently announced its state-of-the-art Blackwell series: the B100 and B200 GPUs, plus the GB200, which pairs Blackwell GPUs with its Grace CPU. Comped vs. the newer Blackwell cards, Google’s and Meta’s chips fall short from a raw performance perspective. Nvidia maintains its lead.
Why it matters: While Nvidia has won the battle, it may have, in this instance, lost the war. There is a common theme, perhaps a sobering fact, shared by many enterprise and medium-sized companies we’ve worked with: when successful pilot projects mature into production use cases, the cost of inference (the cost of using an AI model to generate an output) is often so high that the use case becomes uneconomic. This is a direct result of the cost of Nvidia GPUs being passed along to their customers. GOOG and META, despite having less-performant chips, can offer more affordably priced inference: their vertically integrated model results in a lower cost structure.
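To see why raw performance doesn’t automatically win in production, here’s a minimal sketch of the per-token math. All prices and throughputs below are purely hypothetical placeholders for illustration, not any vendor’s actual figures.

```python
# Minimal sketch of inference economics. All numbers below are
# hypothetical placeholders for illustration, not vendor pricing.

def cost_per_million_tokens(accel_hourly_usd: float, tokens_per_sec: float) -> float:
    """Hardware cost to generate one million output tokens on one accelerator."""
    tokens_per_hour = tokens_per_sec * 3600
    return accel_hourly_usd / tokens_per_hour * 1_000_000

# Hypothetical: a premium GPU rented at $4.00/hr serving 100 tokens/sec
premium = cost_per_million_tokens(4.00, 100)    # ~$11.11 per 1M tokens

# Hypothetical: a slower, vertically integrated chip at $1.50/hr and 60 tokens/sec
in_house = cost_per_million_tokens(1.50, 60)    # ~$6.94 per 1M tokens

print(f"Premium GPU: ${premium:.2f} / 1M tokens")
print(f"In-house:    ${in_house:.2f} / 1M tokens")
```

At these illustrative numbers, the slower chip loses every benchmark but still undercuts the premium GPU on cost per token, and that is exactly the lever owned silicon lets Google and Meta pull.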
That’s it for this week. We’re keeping it short again as we continue to work to bring you our AI 101 Primer: everything you need to know to be dangerous when it comes to the topic of AI.
We think you’ll dig it. Have a great week!
Before you go… We have one quick question for you:
If this week's AI Geekly were a stock, would you:
About the Author: Brodie Woods
With over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist at leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.