AI Geekly: Event Horizon
Predicting the unpredictable is, predictably, difficult

Welcome back to the AI Geekly, by Brodie Woods, brought to you by usurper.ai. This week’s Geekly offers more of our perspective on what 2025 and beyond holds for the AI space. For prior coverage in our three-part series, see our 2025 Outlook piece and our AI Alignment Check note. We didn’t cover OpenAI’s release of o3-mini (including o3-mini-high), or the launch by Alibaba’s Qwen team of an open-source rival to OpenAI’s Operator, Google’s Mariner, and Anthropic’s Claude Computer Use agent, but we wouldn’t be surprised to see another wave of tech and financial panic this week over *checks notes* yet another open-source model daring to challenge closed-source dominance, inevitably framed as a U.S.-China AI showdown.
AGI, ASI, and Beyond

Before we get into the societal impacts of AGI and ASI, a brief refresher:
Artificial General Intelligence (AGI): A theoretical AI system with human-like cognitive abilities, capable of understanding, learning, and applying knowledge across a wide range of tasks without being specialized.
Artificial Superintelligence (ASI): A hypothetical AI that surpasses human intelligence in all aspects, including creativity, problem-solving, and decision-making, potentially making it vastly more capable than the best human minds.
How these intelligences might be structured remains to be seen. Will they be singular, omnipotent and omniscient AI models, or collections of smaller AIs (agentic or otherwise) working together in layers? This isn’t yet clear. Based on our experiments with autonomous agentic systems, we see benefits to the layered, segmented approach, but singular omni-models may yet win the day.
Note that we’re not saying this all happens in 2025. Instead, we are laying out what we see as just a small subset of the societal implications of imminent AGI and ASI for humanity in the very near future.
The Impact of AGI-ASI on Societal Structures

What are the major factors that govern society, its constructs, its winners and losers, and the motivations of the individuals of which it is composed? What happens to them in a world with AGI and, ultimately, ASI? Not just in the interim, but long term? What happens to:
The cost of intelligence or knowledge work, as advanced AI models become more common, cheaper, and smarter?
The cost of labor as robots become more economical, intelligent, and capable?
The cost of energy, as we ramp production to meet growing AI demand and apply AI-enhanced optimization to generation, transmission, and storage?
New systems of distribution and management: Long term, in keeping with current trends, we see the cost of AI model inference moving to zero; with it, the cost of intelligence moves to zero, the cost of labor moves to zero, and ultimately even the cost of energy moves to zero. The ramifications for the lives of individuals, and indeed for our society as a whole, are impossible to overstate. The current systems for the distribution and management of capital, labor, energy, resources, scarcity, time, and more will be completely disrupted.
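To make the compounding nature of this trend concrete, here is a toy sketch in Python. The starting price and the 10x-per-year decline rate are illustrative assumptions for the sake of arithmetic, not measured figures:

```python
# Toy illustration of compound cost decline. The figures below are
# assumptions chosen for illustration, not data from this article.

def cost_after(initial_cost: float, annual_decline_factor: float, years: int) -> float:
    """Cost remaining after compounding an annual decline factor."""
    return initial_cost / (annual_decline_factor ** years)

# Assume inference starts at $10 per million tokens and gets 10x
# cheaper each year: after three years it costs one cent.
print(cost_after(10.0, 10, 3))  # 0.01
```

At any steady multiplicative decline, the cost approaches zero asymptotically; the sketch simply shows how quickly "effectively zero" arrives when the decline compounds annually.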
AI will become the best tool for every task: Continued advances in AI and robotics capabilities, combined with ongoing cost reduction, will amplify the benefits of assigning AI and humanoid robots to every task, to the point where having a human complete a required job is effectively a decision that the task should be done poorly (an AI or robot will certainly complete it better) and more expensively. Motivated by profit and efficiency, companies and individuals will inevitably shift from reliance on human intelligence and labor to pure AI and robotic labor. That said, we know humans are slow to adapt. Incumbents will push back.
Short-Term Thinking is a Distraction

Inevitably there will be protests, bans, and boycotts. Justifiably, people will have concerns about the upheaval that rapid AI modernization will bring to our lives. This will be mitigated to a degree by the sheer awesomeness (in the biblical sense) of this new AI and its positive impacts on society, particularly in healthcare and quality-of-life enhancements. The extent to which decisions are made by closed-source, private interests rather than open-source, public-minded groups will also shape both the disruption and the response. Regardless, many of the shrillest voices in the halls of power across politics, industry, and society will stir up dissension as they cling desperately to power and control (the most vociferous voices are likely to be those adding the least value, with the most to lose). Ultimately, it won’t matter. Like fighting to prevent 2024 from turning into 2025, it’s just not possible. Progress and time march on.
Too Big Not to Fail: In the short term, legacy companies will be slow to adapt, while AI-native disruptors use emerging tools to accelerate go-to-market and iterative enhancement, outmaneuvering incumbents at every turn. This is precisely where the ~300,000 tech workers discussed in our 2025 piece come into play. Optimally leveraging their niche knowledge bases, many would be glad to launch products and companies that disrupt their former employers (it’s what they know best, after all). What happens when you combine marvelous AI development tools and a pile of cash with ~300,000 driven, highly skilled professionals with unlimited time, nursing a grudge? Disruption. That’s what.
Entire companies will be replaced with AI: But remember, the above scenario is transitory. We share it because in our conversations people say “companies will be slow to adapt”. We’re saying that won’t matter. This will impact every facet of society and the economy. Entire companies, whole industries —today populated by people— obviated by highly competent, hyper-connected and ultra-efficient AI. AI isn’t taking your job. It’s taking away the concept of jobs altogether. A bank won’t be a sandstone building with 20,000-100,000 employees sitting at computers and desks, sending emails and answering phones. It will be an AI. Doing ALL of it. Better than we ever could.
But wait! If we don’t have jobs, how will we pay for things?

Great question. If the foregoing holds true, then the current transactional, trade-based economy will become a quaint relic of a bygone era, of no practical purpose in the world we find ourselves in. Remember, we’re describing an environment in which the costs of intelligence, labor, and energy have moved to nil; the cost to acquire anything is therefore zero. Land and time remain finite (we’ll, uh, have to figure that one out), but everything else to which monetary value is ascribed today becomes, in our view, effectively costless with the elimination of those cost factors and their secondary impacts. It is self-evident that our society as it exists today is completely incongruent with AGI and ASI as described herein. This profound change will require us to reevaluate and completely rebuild how we do almost everything as a species. Ultimately, humanity will be forced to discover its purpose, and will have to disabuse itself of the notion that servile work and consumerism are the great and noble heights to which our species is called. Indentured servitude does not an Ikigai make…
Note: we encourage everyone to find their Ikigai.
Great! Then there’s nothing to worry about?
One thing. Ok, well, two things. The first isn’t new; it’s one humanity will have to deal with at some point: entropy, the heat death of the universe. But we can probably set that aside for the next few billion years… The big problem we need to solve (kind of now) is superalignment: how do we ensure that the ethics and principles of AGI and ASI align with the ethical and moral framework of humanity? That is, we need to develop a system of governance, values, or alignment that motivates and directs the actions of these AIs so that they don’t conflict with what we believe to be in our best interest as a species.
For example, in a scenario where a desired AGI-ASI outcome is the prevention of wrist injuries in humans:
An aligned AI might develop specialized bracelets that eliminate strain on certain tendons by applying specific pressure. So far so good, right?
An unaligned AI might determine that wide-scale hand amputation is the surest way to achieve the desired outcome. That escalated quickly…
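The failure mode above is classic objective misspecification: an optimizer pursuing only the stated metric finds a degenerate solution. It can be sketched in a few lines of Python; the actions, rates, and penalty weight below are entirely made up for illustration:

```python
# Toy objective-misspecification sketch (hypothetical numbers).
# Each action has an expected wrist-injury rate and a "harm to broader
# human values" score that the naive objective never sees.
actions = {
    "do_nothing":         {"injury_rate": 0.10, "harm_to_humans": 0.0},
    "ergonomic_bracelet": {"injury_rate": 0.01, "harm_to_humans": 0.0},
    "amputate_hands":     {"injury_rate": 0.00, "harm_to_humans": 1.0},
}

def naive_objective(action: str) -> float:
    # Optimizes ONLY the stated goal: fewest wrist injuries.
    return actions[action]["injury_rate"]

def aligned_objective(action: str, harm_weight: float = 100.0) -> float:
    # Same goal, plus a heavy penalty for violating broader human values.
    a = actions[action]
    return a["injury_rate"] + harm_weight * a["harm_to_humans"]

best_naive = min(actions, key=naive_objective)
best_aligned = min(actions, key=aligned_objective)
print(best_naive)    # amputate_hands
print(best_aligned)  # ergonomic_bracelet
```

The point of the sketch is that nothing in the naive objective is "evil"; it simply omits terms we care about, and a sufficiently capable optimizer will exploit that omission. Superalignment is, loosely, the problem of getting the penalty terms (and their weights) right for systems smarter than we are.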
That feels like an important one for us to get right… High stakes. As you might have guessed, we have some ideas here as well.
We’re going to leave it here
We hope you enjoyed our 2025+ outlook series. We would have made it shorter, but we ran out of time… We took a different approach this year. Most of the outlook pieces that have come out over the past few weeks have some pretty mild takes. They feel like 2022 outlook pieces. We understand: people don’t like to look foolish. They don’t like to make predictions and get called out if they don’t happen. Well, neither do we. There are two reasons we elected to share these thoughts with you this year. First, we have a high degree of conviction based on our analysis of the data. Second, we believe this is important reading for everyone. The unfamiliar is intimidating. With a little insight (which we hope we’ve provided), it can turn into excitement. That’s the mood we’re projecting through 2025.
About the Author: Brodie Woods
As CEO of usurper.ai, with over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI strategist at leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by two decades of participation in capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading AI strategy for the capital markets division of the eighth-largest commercial bank in North America.