AI Geekly - FOMO
Fear Of Missing Out
Welcome back to the AI Geekly, by Brodie Woods. We’ve been travelling in Canada this week, but that doesn’t mean we’ve forgotten about you. We’ve had our chilly (-4°F / -20°C) finger on the pulse of the most important AI developments, neatly summarized for your perusal below.
We’re also inviting readers in the NYC area to attend a special in-person event with friends of the Geekly, AI investment corp Developer Capital (details below).
TL;DR: Where Exactly Are We in AI?; Three AI Musketeers; NYC AI Event
This week, at a high level, we take stock of where we are in the overall artificial intelligence development narrative. We contend that while advances are arriving quickly and models are increasingly capable, we're still at the very beginning of what will be a massive transformational shift for human society. We also take a look at three impressive models and associated accoutrements that not only take a swing at OpenAI and Anthropic but introduce impressive capabilities of their own. Finally, we close out with an invite to an NYC AI event this March 7th with special guest Max McCrea.
Are We Too Late to the Game?
Addressing the FOMO; the time is now
What it is: The most common questions we hear from clients are: “Are we too late to the game?” “Have we already missed the boat?” We want to spend a little time here delving into where exactly we are in the AI Age.
Good news: For corporate, enterprise, and small business players, you're not too late; we're still at the first “at bat” of the first inning, and we have a long way to go (but the pace is picking up). The most recent exuberance around AI has largely focused on the capabilities of Generative AI, which can create new images, new text content, and now video (as discussed in last week’s Geekly).
Why it matters: Why do we say that we're still in the early innings? Because the technology today, despite its promise, has yet to prove its business value broadly. To date, there are limited publicly available data points and case studies demonstrating the true ROI and value of investing in Generative AI applications. Specifically, we need to see more simple, clear-cut cases where, for example, Company A invests $10 million into a GenAI project and realizes a 200 bps increase in margins, a 3% increase in top line, or similar. While we’ve seen pockets of profitable innovation in select businesses, at scale Generative AI remains very much a “show me” story.
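To put rough numbers around that bar, here is a back-of-the-envelope sketch of how such a project would pencil out. The revenue, margin, and spend figures are purely hypothetical, chosen only to illustrate the math, and are not drawn from any disclosed case study:

```python
# Back-of-the-envelope ROI check -- all figures are hypothetical.
revenue = 500_000_000          # Company A annual revenue, in dollars
genai_investment = 10_000_000  # one-time GenAI project spend, in dollars
margin_uplift_bps = 200        # hoped-for margin improvement, in basis points

incremental_income = revenue * (margin_uplift_bps / 10_000)  # 200 bps = 2.00%
payback_years = genai_investment / incremental_income

print(f"Incremental operating income: ${incremental_income:,.0f} per year")
print(f"Simple payback period: {payback_years:.1f} years")
```

On these illustrative numbers, a 200 bps margin uplift on $500 million of revenue is $10 million of incremental operating income per year, meaning the project pays for itself in roughly a year. It is exactly this kind of clean, verifiable arithmetic, disclosed publicly, that the market has yet to see at scale.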
Critical approach: That's not to say we aren't optimistic; rather, we approach this issue from the pragmatic capital markets perspective that permeates the AI Geekly. We continue to evaluate this technology, and the companies pushing it, with a critical eye. Considering the billions invested by major tech players, a substantial, quantifiable amount of value will need to be generated to justify the expense.
Experimentation is crucial: That said, companies need to begin experimenting with this technology immediately. We've seen how quickly AI can evolve and potentially disrupt industries; those companies taking a wait-and-see approach do so at their own peril. Smart management teams should be evaluating these technologies and how they can support their operations; being a leader in applied AI in a particular sector can start an innovation flywheel that competitors may never catch up to. OpenAI is a great example: as we see with Google’s recently upgraded Gemini model, it still fails to outperform GPT-4, released one year ago, and Google may never catch up.
Models and Bottles
Checking out three new AI models
What it is: Three notable new AI models were announced in recent days: Stable Diffusion 3 (image generation), Mistral Large (generalist LLM), and Phind 70B (coding LLM).
What it means: These are three very impressive models. Working backwards, Phind 70B is a powerful coding LLM that not only searches the Internet to assist with development work, but also communicates directly with developers’ code bases: a novel concept. Mistral Large represents the eponymous French AI company Mistral’s most advanced publicly released model yet. While not open source, Mistral Large is intended to go toe-to-toe with OpenAI’s and Anthropic’s respective offerings, GPT-4 and Claude 2. Further turning up the heat, Mistral has introduced Le Chat (it’s not a cat…), its competitor to ChatGPT. Finally, Stable Diffusion 3 presents the most advanced image editing and image generation capability seen to date, resolving prior issues with text/spelling, multi-subject images, and more. Stable Diffusion 3 is an open source model that can be run locally on a PC (read: local inference, a 2024 theme we’ve discussed in prior Geeklies).
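For readers curious what local inference looks like in practice, below is a minimal sketch using Hugging Face's diffusers library. Since Stable Diffusion 3 weights may not yet be publicly downloadable, the checkpoint named here is an earlier public Stable Diffusion release standing in as a placeholder:

```python
# Minimal local-inference sketch with Hugging Face diffusers.
# Uses an earlier public Stable Diffusion checkpoint as a stand-in;
# swap in the Stable Diffusion 3 weights once they are released.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint, not SD3
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs on a local consumer GPU, no cloud API involved

image = pipe("a watercolor skyline of Manhattan at dusk").images[0]
image.save("skyline.png")
```

Once SD3 weights are published, the model ID (and possibly the pipeline class) would change, but the local-inference workflow stays the same: weights on your own disk, generation on your own hardware.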
Why it matters: We're pleased to see three such impressive models released within days of one another. Not only do they meaningfully push the envelope in each of their respective foci, but they also demonstrate the broad vibrancy of the AI community as some of the world’s smartest minds work together creating tools to address the myriad challenges faced by society. We expect 2024 to see so many impressive new models released that the inference frontier (the edge of AI capability) shifts markedly.
High Stakes: Monadical Talk on the Dangers of Monopoly in AI
Let’s learn from our recent past
Join us on March 7th in NYC, as friend of the Geekly, former high-stakes online poker pro, and AI industry veteran Max McCrea examines the economic harm that resulted from the Facebook-Google advertising duopoly, and why the emerging AI industry is at risk of a similar outcome, with two companies set to stack the deck as they go all-in on regulatory capture. Through this lens, he will explore the broader implications for innovation, market competition, and the likely regulatory interventions to address these challenges.
Max is the Founder of Monadical, a software consultancy with 30 engineers, and the CTO of Developer Capital, a pre-seed and seed investment corp. He has been working in the AI/ML field since 2011 and started his software consultancy in 2019 to solve leading-edge tech problems, incorporating the probabilistic thinking and game-tree evaluation learned from a successful poker career into his business. Since its founding, Monadical has delivered 100+ projects, including many utilizing LLMs and diffusion models in production. Max has long been an advocate for the open source community, and at Monadical he has proven you can build compelling technology on open source LLMs.
Details: Join us on March 7th at 5:45 PM at Legends on 33rd (6 W 33rd Street in Manhattan) for the event!
Before you go… We have one quick question for you:
If this week's AI Geekly were a stock, would you:
About the Author: Brodie Woods
With over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist at leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.