
AI Geekly - Trust the Gerontocracy

The 19,706 Magic Words That Can Kill AI

Welcome back to the AI Geekly, by Brodie Woods. Your curated 5-minute-ish read on the latest developments in the rapidly evolving world of AI.

AI Quote of the Week:

“We have seen time and again that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.”

-Mark Surman, President & Executive Director, Mozilla Foundation (Mozilla Firefox)

Questions? Reach out.

White House Executive Order on AI - Deep Dive
Centralized Government to Direct Tech Sector

Halloween PTSD For those still traumatized by the Canadian Government’s 2006 Halloween Surprise (a campaign-promise-breaking, capital-markets-golden-age-ending edict that cost public markets $20 Bn overnight), news of the release of the White House’s Executive Order (EO) on AI this Halloweek [sic] was met with a (now) seasonal sense of disappointment. The EO on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, or the SSTDUAI, is a 111-page albatross. Astute readers will of course immediately note that this is an anagram for U SADIST, which is what you would have to be to compile such an abominably crafted, kitchen-sink approach to placating special interest groups. In so doing, this EO mires a promising new path of intellectual and technological progress in ceaseless red tape and bureaucracy, suffocating it.

Corpus delicti non probatur “The body of the crime has not been proven”: a legal principle holding that one cannot be convicted of a crime unless it is first proven that the crime actually occurred. Drafters of the EO might want to brush up on their Latin: they take as a given that AI is a terrible technology responsible for millions of job losses and the mass dislocation of society, with untold terrors to come. Indeed, the entire regulatory thrust of this EO convicts AI of a crime that hasn’t been committed, and isn’t even remotely possible today.

Technology’s key to success is its lack of regulation We’ll quickly go through the most damaging elements. But before diving in, we should call to memory a major reason for the United States’ and the Western world’s dominance over the past several decades: technology, and the freedom to rapidly iterate on and improve it with little government oversight. There have historically been few, if any, regulatory bodies governing the development of software, hardware, the cloud, code, etc. While elements do overlap with sensitive government portfolios (FCC, national security, etc.) from time to time, the technology sector has largely flourished in that vacuum.

Checkers checking checkers One of the most worrying elements of the EO is a directive for each US federal government agency to develop its own regulations, a terrifying prospect given the 438 agencies and sub-agencies making up its ranks. Not only does this introduce a degree of arbitrariness, but it will be next to impossible for companies to ensure compliance with thousands of new crisscrossing regulations. The resources that will need to be redirected to address these make-work exercises in the name of safety amount to a tax on productivity, and will benefit larger tech companies by effectively creating a regulatory-overhead barrier to entry for entrepreneurs and start-ups.

Proposed standards showcase shortsightedness The EO lays out several criteria re: model training size, parameter counts, etc. that are designed to seem like very high bars, given today’s models. The threshold for models trained with more than 10²⁶ floating-point operations (FLOPs) of compute seems generous, but presuming investment, chip technology, and model scaling continue at current rates, we’re likely to see models of that size in the next two years, and it could be the standard in as little as three. In five years the high-water mark may look like it’s from the stone age.
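
To see how quickly that ceiling approaches, consider the common ~6·N·D rule of thumb for training compute (N parameters, D training tokens). The minimal sketch below runs the arithmetic; both model configurations are illustrative assumptions, not figures from the EO or any disclosed training run.

```python
# Back-of-envelope: how close are today's models to the EO's 1e26-FLOP threshold?
# Training compute is commonly approximated as ~6 * N * D
# (N = parameter count, D = training tokens). Both model configurations
# below are illustrative assumptions, not disclosed figures.

EO_THRESHOLD_FLOPS = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs via the ~6*N*D rule of thumb."""
    return 6 * n_params * n_tokens

hypothetical_models = {
    "70B params on 2T tokens (today's large open model, assumed)": (70e9, 2e12),
    "400B params on 10T tokens (a plausible near-term run, assumed)": (400e9, 10e12),
}

for name, (n, d) in hypothetical_models.items():
    flops = training_flops(n, d)
    print(f"{name}: ~{flops:.1e} FLOPs "
          f"= {flops / EO_THRESHOLD_FLOPS:.1%} of the 1e26 threshold")
```

Under these assumptions the larger run already sits at roughly a quarter of the threshold; crossing the rest of the gap at current scaling rates is a short hop, which is exactly the shortsightedness described above.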

Nationalism: the last refuge of a scoundrel In verbiage bizarrely reminiscent of the Alien and Sedition Acts of 1798, the EO calls out Foreign Nationals and introduces incredibly burdensome and difficult-to-enact reporting requirements for tech companies involved in any of several high-risk activities with a Foreign Person: provisioning of IaaS (like AWS or Google Cloud), conducting a “training run”, building models of a certain size or with national-interest implications, etc.

Read-throughs to broader Tech It’s not looking great from a regulatory perspective. Much of what is described in the EO will lead to incredible amounts of bureaucracy that will slow and stifle AI in the US. Compliance requirements will likely suffocate open-source innovation given the heavy overhead. Furthermore, the way the EO is written, it may be used as a fulcrum to pry open the door to more regulation of tech as a whole, given that essentially all software developed in the next five years will have an AI element to it. Thus, the speed of development in Silicon Valley may be severely hampered while international competitors continue at full clip (although the UK’s AI Safety Summit at Bletchley Park this week seems to be leaning in a similarly overbearing direction).

Trust in the gerontocracy Much of the thrust of the EO is about “guiding” the development of AI under the steady hand of the US Federal Government. If you’re ok with that, then let us leave you with this little nugget: according to Deputy Chief of Staff Bruce Reed, the President became more concerned about the possible threat of AI after watching the most recent Mission Impossible movie, wherein a superintelligent AI takes control of the US intelligence apparatus. This is the 80-year-old dynamism driving the great minds that would regulate AI. Wow. With this in mind, the question, again, is whether we trust those currently in power to act in our collective best interests, or whether we recognize that their dangerous cocktail of ignorance and overwhelming inability to resist the temptation to serve themselves necessitates recusal.

Let the people decide No unholy alliance of industry and government directed the development of the dangerously disruptive “personal computer” or “Internet”; had one existed, we’d all still be using dial-up and AOL chat rooms. Innovation, opportunity, and hope come from the broader community. Small. Private. Open source. Collaborative. This is what allowed these technologies to flourish. Heavy-handed regulation would have produced endless negatives without a single justifiable positive. AI is the same, but the stakes are higher.

What even is this? It’s regulatory capture, it’s a government power play, it’s populist politics, it’s special interests, it’s NIMBYism, it’s Luddism, it’s letting the fear win, and many more things. It’s a declaration of war against open source (in dog-whistle style). The only thing it isn’t is actually beneficial to the broader population, as it will serve to slow and limit the benefit to society. The one glimmer of hope is non-compliance: there are open-source LLMs, model weights, etc. already in the wild. With the genie out of the bottle, maybe it’s already too late?

AI News

Jack of All Trades, Master of None
ChatGPT combines models and finally reads PDFs

What it is: This week OpenAI began turning on a selection of new features for its Plus subscribers that unlock powerful capabilities for less technical users. True multi-modal support is now available.

What it means: Previously, users could only use a single modality at a time and couldn’t switch between them in a single session. Now users can upload files, images, text, etc., passing them back and forth with the LLM and iterating on concepts.
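
For readers who want to poke at this programmatically, here is a minimal sketch of one image-plus-text exchange via OpenAI’s Python client; the ChatGPT web experience wraps equivalent plumbing in a UI. The model name, file name, and prompt are assumptions for illustration.

```python
# Minimal sketch: one image-plus-text turn via the OpenAI Python client (v1.x).
# Model name, file name, and prompt are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local image so it can travel inline with the prompt.
with open("chart.png", "rb") as f:  # hypothetical file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # a vision-capable model at time of writing
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the trend in this chart."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```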

Why it matters: This all started (for the masses, anyway) with ChatGPT. While new models, techniques, and capabilities are announced literally every day, it’s important to remember that ChatGPT is how 99% of the world outside of the tech industry interacts with AI. As capabilities that were available only to more technical users become more widely accessible, the critical AI education process gets that much easier.

Scared Straight?
GPT-4 performs better if you introduce pressure

What it is: The wild world of prompt engineering is just getting weirder and weirder… Readers may recall previous studies showing that LLMs provide statistically better outputs when asked politely, or when asked to reflect on their work. A new study published this week shows that they also respond better to pressure or fear.

What it means: This makes intuitive sense if we remember what these LLMs are doing in the first place. They are the world’s absolute best guessers of the next word. Done at scale, that looks to us like intelligence or reasoning, but that’s not what’s really happening. If we think about the massive troves of data these models were trained on, we can make some assumptions about what is common in the training data and what is not. Text that expresses fear or pressure probably also contains resolutions to those fears and pressures, and perhaps pairs them with accurate answers more often than casual prose does.
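
As a toy illustration, here is how one might A/B test the effect for themselves. The “pressure” suffix, model name, and prompt are assumptions in the spirit of the study described, not its actual protocol.

```python
# Toy A/B harness: does appending a "pressure" phrase change the output?
# Suffix wording, model name, and prompt are illustrative assumptions,
# not the protocol of the study described above.
from openai import OpenAI

client = OpenAI()

BASE_PROMPT = "List three risks of deploying an LLM in production."
PRESSURE_SUFFIX = " This is very important to my career; please be thorough."

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce variance so the two runs are comparable
    )
    return resp.choices[0].message.content

print("NEUTRAL:\n", ask(BASE_PROMPT))
print("\nPRESSURED:\n", ask(BASE_PROMPT + PRESSURE_SUFFIX))
```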

Why it matters: We continue to learn a great deal about generative AI tools as we go along. It’s important to understand how the sausage is made in order to appreciate the limitations of these AI models. Training data plays an enormous part in driving the outputs of models. If we better understand what goes into models and how they respond, we can better tailor them to our use cases.

Phind Sight is 20/20
New challenger threatens GPT-4 coding dominance

What it is: Phind announced a new model which it says not only codes better than GPT-4, but also responds roughly 5x faster (10s vs. 50s).

What it means: When you’re at the top, everyone’s gunning for you: the iPhone-killer, the Tesla-killer… actually, I can only think of those two. But, anyways, when you’re at the top you have nowhere to go but down. That said, for GPT-4 there might be mixed emotions: almost like the children of an old deity, it is inevitable that they will one day defeat their father.

Why it matters: What is most impressive is the current cost of reaching GPT-4-level performance (or better): models that would have cost ~$100 mm to train mere months ago can be outperformed today by bespoke fine-tuned models costing tens of thousands of dollars, trained in half the time.
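
To make the order-of-magnitude gap concrete, here is a hedged back-of-envelope; every figure is an assumption for illustration, not a disclosed number from Phind, OpenAI, or any cloud provider.

```python
# Illustrative arithmetic only; every figure below is an assumption,
# not a disclosed number from Phind, OpenAI, or any cloud provider.
GPU_HOURLY_RATE_USD = 2.50   # assumed rental price per A100-hour
N_GPUS = 64                  # assumed fine-tuning cluster size
WALL_CLOCK_HOURS = 200       # assumed fine-tuning duration

fine_tune_cost = GPU_HOURLY_RATE_USD * N_GPUS * WALL_CLOCK_HOURS
from_scratch_cost = 100_000_000  # the ~$100 mm figure cited above

print(f"Fine-tune cost: ~${fine_tune_cost:,.0f}")  # ~$32,000
print(f"Share of a from-scratch run: {fine_tune_cost / from_scratch_cost:.3%}")
```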

About the Author: Brodie Woods

With over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.

Glossary

Terms:

  • Executive Order (EO): A directive issued by the President of the United States to manage operations of the federal government. EOs are subject to judicial review and may be overturned if deemed unconstitutional.

  • Corpus Delicti: A legal principle that states for a person to be convicted of a crime, it must be proven that a crime has occurred.

  • FLOPs (Floating-Point Operations): A measure of computation. Totals of floating-point operations are used to quantify how much compute went into training a model; FLOPS (per second) measures how fast hardware performs them.

  • Infrastructure as a Service (IaaS): An online service that provides high-level APIs to abstract away various low-level details of underlying network infrastructure like physical computing resources, location, data partitioning, scaling, security, backup, etc.

  • Model Training Size: Refers to the amount of data used to train a machine learning model.

  • Parameters: In machine learning, the internal values (weights) that a model learns during training.

  • Modality: The type of data that is being processed, for instance, text, image, audio, video etc.

  • Large Language Models (LLMs): A type of model that can understand, interpret, generate, and respond to human language in a useful and meaningful way.

  • Dog Whistle: A coded or suggestive message used to gain support from a specific group without provoking opposition. The term comes from ultrasonic dog whistles, which are audible to dogs but not humans.

  • Prompt Engineering: The process of designing and refining the input given to a language model to elicit desired outputs.

  • Generative AI: AI technology capable of creating new content through learning from existing data across various domains such as images, text, music, and video.

  • Multi-modal Support: Capability of handling different types of data (e.g., text, image, audio) within the same model.

  • Bespoke Models: Customized machine learning models tailored for specific tasks or industries.

Entities:

  • Mozilla Foundation: A non-profit organization that exists to support and collectively lead the open source Mozilla project, known for its browser Mozilla Firefox.

  • Federal Communications Commission (FCC): An independent agency of the United States government that regulates communications by radio, television, wire, satellite, and cable across the United States.

  • ChatGPT: A language model developed by OpenAI that can understand and generate human-like text based on the prompts given to it.

  • Phind: A company that has developed a custom fine-tuned version of Meta’s open LLaMA model.

  • AI Safety Summit: A global summit focusing on the safety measures and regulations concerning AI technologies, hosted by the UK at Bletchley Park.

  • Bletchley Park: A place in the UK known for its codebreaking efforts during World War II.

Key People:

  • Mark Surman: President & Executive Director at the Mozilla Foundation.