AI Geekly - Your Amazon.com order #G3nA1 has shipped

Delivering AI insights

Welcome back to the AI Geekly, by Brodie Woods. We have a few updates this week that we think you will find illuminating. We’re still steering clear of the OpenAI drama, including the rumored Q* model and its purported mathematical and logical prowess.

AWS listens, learns, and delivers for Enterprise; Google DeepMind’s actions speak louder than code; Google and OpenAI Slam the Brakes

This week we’ll focus on two areas of progress in AI, in very different respects. First, a walk through AWS’ major announcements from its re:Invent conference, where it finally showcased in finer detail its expanded cloud offerings targeting GenAI specifically. Second, we’ll look at the needle-moving impact of thoughtfully applying powerful AI tools to difficult problems, courtesy of the researchers at Google DeepMind: real-world AI applications that actually reduce costs or increase revenue. Finally, we’ll talk about two recent product delays by OpenAI and Google, each dealing with the respective company’s Achilles’ heel.

AI News

Amazon’s re:Invent conference puts AWS AI on display

What it is: Amazon Web Services held its annual conference this week in Las Vegas, bringing its best GenAI to entice enterprise clients and the various vendor/partner ecosystems it operates in.

What it means: The major releases that stood out to us were:

  • AWS Q chatbot: (no relation to Q*) AWS’ answer to ChatGPT and Bard has finally arrived. Q is trained on 17 years of “AWS Knowledge” and is targeted at business users (like everything at re:Invent, as AWS isn’t consumer-facing), leveraging their own data assets. Now in public preview, Q is also an agent capable of taking actions and will be integrated into AWS products.

  • Guardrails: necessary restrictions for enterprise clients to manage access/permissioning and appropriateness of responses from GenAI models.

  • Image Generator: FINALLY. AWS has played its hand on image generation with its Titan text-to-image model. Designed as a developer tool for use on AWS, it isn’t tailored to mainstream users like DALL-E, Stable Diffusion or Midjourney. It also includes watermarking technology, which may give some enterprise clients pause, as an embedded watermark may unintentionally reveal information.

  • Graviton Chips: What Geekly would be complete without some chips? AWS announced its Graviton4 processor. While the move from a 5nm to a 4nm process offers only modest on-chip performance improvements, investments in higher memory bandwidth add up to an overall product with 30-45% performance bumps depending on the workload.
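The Guardrails item above is easier to picture with a toy filter. This is a minimal conceptual sketch only, not AWS's actual API: the denied-topic list, the regex, and the function name are all hypothetical.

```python
import re

# Hypothetical sketch of what a "guardrail" layer does conceptually:
# screen a model's draft response against denied topics and redact
# obvious PII before it reaches the user. The topic list and patterns
# below are illustrative assumptions, not AWS's real implementation.

DENIED_TOPICS = {"legal advice", "medical diagnosis"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def apply_guardrails(draft: str) -> str:
    """Block denied topics outright; otherwise scrub email addresses."""
    lowered = draft.lower()
    if any(topic in lowered for topic in DENIED_TOPICS):
        return "Sorry, I can't help with that topic."
    return EMAIL_RE.sub("[REDACTED]", draft)

print(apply_guardrails("Contact alice@example.com for details."))
# prints: Contact [REDACTED] for details.
```

Real enterprise guardrails sit between the model and the user in exactly this position, which is why they matter so much for access control and response appropriateness.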

Why it matters: While Amazon announced its Bedrock GenAI platform quite some time ago, and has been showcasing it at various conferences and workshops, this was the first time Amazon really presented it to the world, making good on its press releases. While it’s still early days, we are impressed with AWS’ offerings and its receptiveness to client needs. Each of Q, Guardrails, Titan t2i and the Graviton chips addresses major enterprise concerns and pain points. They’ve been listening.

‘They’re Not Rocks, They’re MINERALS’
Impressive array of AI solutions to challenging and expensive problems

What it is: In a flurry of announcements, white papers and articles, Google’s DeepMind team has developed a more accurate weather prediction model (GraphCast), demonstrated that AIs can be trained by mimicking humans (like children), discovered 2.2 million new crystals (using GNoME), and deployed a fully automated AI lab that invents, produces, tests, and analyzes new materials without human intervention (A-Lab). Graph Neural Networks are having a much-deserved moment right now: they excel at processing data with inherent interconnected relationships (weather, social networks, molecular structures, etc.), capturing the complex interactions between elements.
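To make the GNN intuition concrete, here is a toy single round of message passing over a small made-up graph. The adjacency matrix, node features, and mean-aggregation rule are illustrative assumptions, not the actual architecture of GraphCast or GNoME.

```python
import numpy as np

# Four nodes (think weather-grid cells or atoms in a crystal),
# with edges recorded in an adjacency matrix. Values are made up.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # one feature per node

def message_pass(A, X):
    """One round of message passing: each node averages its
    neighbours' features together with its own."""
    A_hat = A + np.eye(len(A))             # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True) # neighbour counts
    return (A_hat @ X) / deg               # mean aggregation

H = message_pass(A, X)
print(H.ravel())  # prints: [2.  2.  2.5 3.5]
```

Stacking several such rounds (with learned weights and nonlinearities between them) lets information propagate across the graph, which is how these models capture interactions between distant weather cells or atoms.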

What it means: GraphCast could mean $1Bn in savings per year for NOAA, with a model that can be run for pennies on a desktop instead of on a massive compute cluster; its higher accuracy will no doubt lead to billions more in savings from better inclement-weather prediction. AIs that learn new skills by mimicking humans stand to save billions of dollars in the compute and energy costs of developing AGI versus traditionally time-, power-, data- and dollar-intensive training. GNoME’s 2.2 million crystal discoveries represent the equivalent of 800 years of knowledge, a savings in time and dollars impossible to quantify. This, paired with the automation of laboratories, is the type of singularity-enabling (I hate to say it) technology that shows how quickly the flywheel of progress can begin to spin when the bottlenecking human element is removed from the equation.

Why it matters: In the span of a couple of weeks, Google’s DeepMind researchers have released new research that will save billions in weather predictions, AI development, scientific discoveries and research. These are real, tangible use cases that demonstrate the promise that AI holds. This also demonstrates the pace of advancement. Expect this to continue to accelerate.

Delays and Back-pedaling
OpenAI’s GPT Store and Google’s Gemini model both delayed

What it is: Two hotly anticipated AI tools were delayed: OpenAI’s GPT Store for purchasing customized flavors of ChatGPT (OK, maybe not that hotly anticipated) and Google’s Gemini multimodal LLM. The former was likely delayed due to a recently discovered security hole, an exploit in its Code Interpreter, while the latter is supposedly due to poor multilingual support.

What it means for Google: Perfect is the enemy of good. That was the whole lesson Google was supposed to have learned from ChatGPT in the first place, when Google had developed even more impressive models than GPT-3 in-house but kept them under wraps. Now, with an opportunity to vindicate itself in the eyes of the public and investors, it is instead pumping the brakes and falling into old habits.

Why it matters with OpenAI: As we’ve covered in previous Geeklies (that’s the plural), OpenAI’s new customizable and still frustratingly-named “GPTs” offer real promise for putting powerful AI in the hands of everyone. The problems arise with security issues like this one, along with reliability/access issues, performance issues, and more. This once again makes the case for open-source models, specifically ones that can be run on local hardware (laptops, smartphones, etc.), which mitigates each of the issues identified.

In Sum: Google’s delay is unfortunate because they really need this. Their DeepMind team is absolutely crushing it, as we just covered, but the market needs to see them release a rock-solid model to quell concerns that they can’t match OpenAI (who haven’t been sitting on their hands waiting for Google to release Gemini, BTW). OpenAI’s delay is necessary but, again, shows why you want to run your own models instead of relying on a single, centralized entity. Recent turbulence at OpenAI shows exactly why that reliance can be dangerous.

About the Author: Brodie Woods

With over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist at leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.


Glossary

  • OpenAI: A leading AI research and deployment company known for its advanced AI models like GPT-4.

  • Q*: a rumored AI model by OpenAI, speculated to have advanced capabilities in logical and mathematical reasoning, potentially marking a significant step towards artificial general intelligence (AGI).

  • AWS (Amazon Web Services): A subsidiary of Amazon providing cloud computing platforms and APIs.

  • re:Invent Conference: Amazon Web Services' annual conference showcasing new services and products.

  • GenAI (Generative Artificial Intelligence): AI systems capable of generating text, images, or other media using generative models that learn the patterns and structure of their input training data to generate new data with similar characteristics​​.

  • Google DeepMind: A British AI subsidiary of Alphabet Inc., known for its cutting-edge AI research and applications.

  • Q Chatbot: an AI-powered chatbot for AWS customers, offering conversational assistance and integrated across various AWS platforms and communication tools​​.

  • ChatGPT: An AI chatbot developed by OpenAI, renowned for its conversational capabilities.

  • Bard (Google Bard): conversational AI chatbot developed by Google, using machine learning, natural language processing, and generative AI to understand user prompts and provide text responses, distinct in its ability to access the Internet for information​​.

  • Guardrails: Safety measures or guidelines within AI systems for managing access and appropriateness of responses.

  • Image Generator: AI tools capable of creating images, typically from textual descriptions.

  • Titan Image Generator: a text-to-image generation model that allows users to create and edit images based on textual prompts​​.

  • DALL-E: An AI model by OpenAI known for generating images from text descriptions.

  • Stable Diffusion: a deep learning, text-to-image model that uses diffusion techniques to generate new images or alter existing ones, based on text prompts.

  • Midjourney: a generative AI app/model that creates images from natural language descriptions, allowing users to generate a wide range of art forms from textual prompts​​.

  • Graviton4: the latest processor in the AWS Graviton series, designed for high performance and energy efficiency in a wide range of cloud workloads.

  • Bedrock GenAI Platform (Amazon Bedrock): a fully managed service offering a selection of high-performing foundation models for building generative AI applications, featuring models from leading AI companies via a single API​​.

  • GraphCast: an AI model developed by Google DeepMind for advanced global weather forecasting, recognized for its accuracy in predicting weather conditions and extreme weather events​​.

  • NOAA (National Oceanic and Atmospheric Administration): A U.S. agency focusing on the conditions of the oceans and the atmosphere.

  • AGI (Artificial General Intelligence): AI capable of understanding, learning, and applying knowledge in a wide range of tasks.

  • GNoME (Graph Networks for Materials Exploration): an AI tool developed by Google DeepMind, a graph neural network designed to predict inorganic crystal structures and has discovered over 2 million new materials potentially useful in various technologies​​.

  • A-Lab (Autonomous Laboratory): an autonomous laboratory developed by Google DeepMind, integrating robotics and artificial intelligence to synthesize new inorganic materials, capable of planning experiments, interpreting data, and making decisions to improve synthesis processes without human intervention​​.

  • GPT Store: An app-store-like platform by OpenAI for accessing various customized GPT models.

  • Gemini Multimodal LLM: a large language model (LLM) that works with text and images, offering improvements over Google DeepMind's previous multimodal models, and is similar in capabilities to models like GPT-4​​.

  • Code Interpreter: AI tool that enhances the capabilities of ChatGPT, enabling it to understand, interpret, and generate code in various programming languages, and is used for tasks like data analysis, visualization, and machine learning model building​​.