AI Geekly: The Catch-up
Three weeks is a long time in the AI world...

Welcome back to the AI Geekly, by Brodie Woods, brought to you by usurper.ai. This week we bring you three weeks' worth of fast-paced AI developments packaged neatly in a 5-minute(ish) read.

TL;DR OpenAI at it again; GOOG, ADBE, xAI model showdown; Europe’s regulatory 180; Chip ARMies
We were out for a couple of weeks working on client projects with tight deadlines, but we're back with another AI Geekly to bring you up to speed on what's been happening in the AI space. The past three weeks saw AI heavyweights doubling down. OpenAI launched a new Deep Research tool and confirmed plans for GPT-4.5 and GPT-5 (streamlining a confusing product lineup). A slew of powerful models debuted — from Google's Gemini 2.0 to Adobe's Firefly for video (trained only on licensed content!) and xAI's Grok-3. Meanwhile, the EU shifted strategy to boost AI innovation (France alone pledged $112B!), and ARM is preparing to compete with its own customers, building chips with Meta lined up as an early customer.
Trailblazers
OpenAI Goes Deep (Research) and Maps Out GPT-5

Deep Research Yields Impressive Outputs: OpenAI rolled out a new ChatGPT capability called “Deep Research.” This agentic tool can autonomously scour hundreds of web sources and compile a comprehensive report on complex queries, basically condensing hours of human research into a few minutes. It’s part of OpenAI’s push to make AI more useful and hands-off for tough analytical tasks. (For context, Google’s Gemini and others have recently added similar research modes, underlining a trend toward AI “analysts.”)
GPT-4.5 (Orion) and GPT-5: On the product front, OpenAI CEO Sam Altman shared a rare roadmap update. GPT-4.5 is confirmed as the next release, internally codenamed “Orion,” and will be OpenAI’s last model without advanced reasoning (chain-of-thought) features. After that comes the big one: GPT-5, which will unify all OpenAI model series and tools into a single platform. OpenAI says it wants to eliminate the confusion of multiple model choices and menus, bringing back a “magic unified intelligence” that just works for everything. What this means is that GPT-5 will dynamically adjust its power based on your subscription level—free users get standard intelligence, while Plus and Pro subscribers get progressively more advanced capabilities. It will integrate voice, canvas, search, deep research, and more into a single system.
Is that the real reason? There’s no exact timeline yet, but the message Sam is projecting is a simpler user experience that packs in more capabilities. By the same token (pun intended), however, this lets OpenAI obfuscate what is happening with the models on the backend. Today, a user can deliberately select a reasoning model for a complex coding problem, theoretically consuming more of OpenAI’s expensive compute to deliver an improved output. In the new GPT-5 veiled-intelligence paradigm, parameters set by OpenAI (likely profit-optimizing) will decide how much compute, and which underlying model, handles the task. The rationale is simple: compute costs money. If OpenAI can resolve (or rebuff and deflect) users’ prompts in a way that reduces compute load, its margins improve.
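To make the economics concrete, here is a minimal sketch of what a cost-aware model router could look like. This is purely illustrative — it is not OpenAI's implementation, and every model name, cost figure, and threshold below is hypothetical:

```python
# Hypothetical sketch of a cost-aware model router, illustrating the
# "veiled intelligence" concern: the provider, not the user, decides
# how much compute a prompt receives. All names and costs are invented.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # provider's compute cost (hypothetical)
    capability: int            # crude quality score, higher is better

CHEAP = Model("gpt-small", 0.0002, 1)
MID = Model("gpt-standard", 0.002, 2)
REASONING = Model("gpt-reasoning", 0.06, 3)

def route(prompt: str, tier: str) -> Model:
    """Pick the cheapest model the provider deems 'good enough'.

    A real router would use a learned difficulty classifier; this
    keyword heuristic merely stands in for one.
    """
    hard = any(k in prompt.lower() for k in ("prove", "debug", "optimize"))
    if tier == "free":
        return MID if hard else CHEAP  # free users capped at the mid model
    if hard:
        return REASONING               # paying user + hard task: spend compute
    return CHEAP                       # easy prompts stay cheap regardless

print(route("Summarize this article", "free").name)    # gpt-small
print(route("Debug this race condition", "pro").name)  # gpt-reasoning
```

The point of the sketch: once routing is internal, the tradeoff between answer quality and compute cost is tuned by the provider's objective, not the user's choice.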
Compute is the reason: OpenAI’s reasoning-centric “o-series” models are already showing what that unified future might deliver. Case in point: OpenAI’s o3 model just hit a major milestone in AI problem-solving. In a recent test, o3 achieved a CodeForces coding competition rating of 2724, placing in the 99.8th percentile of human coders, and earned a gold medal score at the International Olympiad in Informatics. Impressively, o3 managed this without any special hand-crafted tricks — it was trained purely via reinforcement learning, yet it outperformed even a prior model that was fine-tuned specifically for the Olympiad. This breakthrough reinforces a big idea: general-purpose AI reasoning (with enough training and compute) can beat expert systems even in specialized domains. For AI geeks, it’s a glimpse of how far “thinking” models have come — essentially reaching elite human coder level in competitive programming.
Major Model Milestones and Rivalries
New models address IP risks and feature parity

So many models, so little time: It’s been open season for new AI models — everyone is dropping upgrades or new contenders. Here are the highlights:
Google Gemini 2.0 — Google’s most powerful AI model to date has now rolled out broadly. Gemini 2.0 is more capable than its predecessors, with native image and audio generation abilities and built-in tool use. It comes in multiple flavors — a fast “Flash” version (with a 1 million token context window) now generally available via API, and an experimental “Pro” model tuned for coding and complex reasoning. Its thinking models function similarly to peers’ reasoning models.
Adobe Firefly (Video Model) — Adobe jumped into generative AI video in a big way. It unveiled a new Firefly AI video generator (now in beta) that the company touts as the industry’s first “IP-friendly, commercially safe” model. Why the emphasis on IP-safe? This model is trained only on licensed or Adobe-stock content, so businesses can use the AI-generated footage without legal worries about copyright. Adobe integrated the tool into its Creative Cloud (e.g. Premiere Pro), aiming to court professional creators who need AI assistance without the usual IP risk. It’s a smart play: as generative video heats up (OpenAI’s new Sora model, etc.), Adobe is differentiating by saying “ours won’t get you sued.”
xAI Grok-3 — Not to be outdone, xAI has launched Grok-3, the latest version of its chatbot aimed squarely at OpenAI and DeepSeek. Released just days ago, the model topped the Chatbot Arena leaderboard over the weekend. While we have often challenged benchmarks in the space, they do provide at least some measure of performance, and Grok-3 seems to perform well (though candidly it can be difficult to separate out the politics-related vitriol that tends to pollute the evaluation). xAI has ramped up its infrastructure big-time (as previously covered in the Geekly): its new GPU supercluster “Colossus” allowed the company to deploy its foundation model in record time. Analysts note Grok-3 shows solid improvements, though some skeptics say the leap from Grok-2 isn’t huge relative to the massive compute spent.
Old World Chases the New…
Europe pivots on AI Strategy


Shifting from Restriction to Innovation: The EU is changing course on tech regulation to better compete in AI. In a rather symbolic move, Brussels abandoned its long-stalled ePrivacy Regulation reform — a proposal that had aimed to tighten online privacy rules (cookies, messaging, etc.) but was stuck in legislative gridlock for years. The European Commission quietly withdrew the ePrivacy bill this month, signaling that the bloc doesn’t want to add new privacy burdens that could hamper data-driven innovation. In the same vein, the EU has also shelved a proposed AI liability law. The focus is now overtly on competitiveness: EU leaders want to foster more data access and a friendly climate for AI development, worried about falling behind the US and China. It’s a notable philosophical shift: from “regulate all the things” to “let’s not strangle our nascent AI industry in red tape.” Time will tell if this boosts Europe’s AI ecosystem, but the tech sector is welcoming the lighter-touch approach.
Deploying Balance Sheet: Individual member states are stepping up as well. France, for one, just made a jaw-dropping commitment to AI. President Emmanuel Macron announced plans to invest over €109 billion (about $112 billion) in France’s AI ecosystem. This war chest includes contributions from private companies and will fund everything from startups to supercomputers. France even inked a deal to dedicate 1 gigawatt of nuclear power exclusively for AI model training data centers — ensuring the electricity-hungry compute clusters have a stable, domestic power supply. This massive bet — on par with the largest national AI investments anywhere — shows Europe’s second-largest economy is determined not to miss out on the AI revolution. “Vive l’AI!”
ARMs Race
ARM gets into chipmaking

Rolling its own: UK-based ARM Ltd. (the company whose chip designs power nearly all smartphones) is building its own processors for the first time. ARM plans to launch a server CPU in 2025 and has already lined up Meta as an early customer. This move breaks ARM’s traditional licensing-only business model and will put it in direct competition with some of its clients (like Nvidia, which uses ARM cores in its chips). The strategy is backed by ARM’s owner SoftBank — they want a slice of the booming AI chip market rather than just collecting royalties. While this could ultimately shake up the AI chip market, remember that when it comes to hardware, lead times are quite long.
About the Author: Brodie Woods
As CEO of usurper.ai and with over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.