
AI Geekly - A Funny Thing Happened On The Way To CES

Takeaways and telegraphing from CES 2024

Welcome back to the AI Geekly, by Brodie Woods. We hope your MLK Day long weekend was nice. Unfamiliar terms are defined in the Glossary at the end of the note.

TL;DR CES; Cameras/Sensors; Privacy Through Local; New Chips and Cards; Digital Twinning; Shaking The Shakes

The Consumer Electronics Show (CES) wrapped up on Friday, bringing to a close the annual technology showcase (a cross between the Sharper Image catalog and an Apple product launch). While prior years’ exhibitions have been marred by a focus on fad technologies like 3D TVs and IoT-everything (smart fridges… ugh), this year’s show marked a return to form, introducing some of the foundational technologies, infrastructure, and concepts that will power the AI Age.

In this week’s AI Geekly we’ll look at the hardware announced at CES, how users will interact with it, where it might add value, and where there are specific considerations at play, like privacy or IP. We’ll also take a look at two real-world applications of AI that are already being used in the wild to drive meaningfully better human and business outcomes. Notably, the convergence of AI, robotics, and spatial computing that we mentioned last week manifests here prominently as an overarching theme.

Personal, Portable, and Private
Dedicated consumer hardware puts the power of AI in your pocket/glasses/laptop

Wearables featured prominently at CES, many remixing the feature set of Meta's Ray-Ban partnership (audio + camera) and integrating various LLMs. Others incorporated spatial computing, as in the case of Xreal’s Air 2 Ultra. The Air 2 Ultra updates a line of display-equipped glasses Xreal had already refreshed once, now adding cameras that enable some of the augmented-reality functionality seen in the Apple Vision Pro at a fraction of the cost. Cameras and sensors are the name of the game: they are the critical hardware for smoothing the inherently rough transition between the real and augmented worlds. The Vision Pro has them in spades: 12 cameras, six microphones, and five sensors for monitoring hand gestures and mapping the environment, plus Lidar depth sensors; inside are four infrared cameras and LED lights. Combined with its sophisticated display technology, this makes the Vision Pro the most sophisticated piece of consumer hardware sold today. On the topic of the Vision Pro, while Apple doesn’t participate in CES, it did announce that its coveted $3,500 headset opens for pre-order on January 19, with fulfillment beginning February 2.

“Be vewwwwy qwiet… I’m hunting wabbits”
Intriguing little piece of AI hardware breaks the mold

Coming back to dedicated AI devices, the Rabbit R1 was introduced by, well, Rabbit, selling out 10,000 units almost immediately. Essentially, it takes a custom LLM and shrinks it down to fit on a small handheld device. Users interact with the AI by voice, and it acts as a super-powered AI assistant, able to perform complex tasks on the user’s behalf, like booking flights and interacting with apps. Because the processing is local, privacy is significantly higher than with a cloud-based tool like ChatGPT or Google’s Bard; those services use human feedback and interaction to train their models, which is beneficial at scale but poses privacy and security risks. The R1 is reminiscent of the Humane AI Pin, which launched a few weeks ago. Humane’s pin is considerably more expensive at $699 (vs. the R1’s $199) and, given its cloud-derived capabilities, seems to focus more on the privacy of those the user encounters (e.g., a flashing light to indicate camera recording, which cannot be disabled) than on the privacy of the user themselves.
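To make the local-vs.-cloud distinction concrete, here is a minimal sketch of local inference using the open-source llama-cpp-python library and a quantized model file. This is not the R1’s actual software stack (Rabbit hasn’t published one); it simply illustrates that with local inference the prompt never leaves the device. The model filename is a placeholder.

```python
# Minimal local-inference sketch (assumes: pip install llama-cpp-python,
# plus a quantized GGUF model file on disk; the filename is a placeholder).
from llama_cpp import Llama

# Load the model from local storage; no cloud API is involved.
llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

# The prompt is processed entirely on the user's own hardware.
out = llm(
    "Q: Summarize my options for a morning flight to Vegas.\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```

Nothing in that flow is logged by a third party, which is the entire privacy argument for on-device AI.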

The Subtle Art of Minding Your Own Business
Apple prioritizes user privacy almost above all else

Closing out the privacy theme, AAPL once again deserves credit for its privacy-first design. Recall that not only was it the first to integrate NPUs into its chip designs (facilitating local inference), but it has also been adamant about privacy and local inference since it began discussing its GenAI plans publicly. We really do have to tip the hat to AAPL management. While we have yet to see where the company falls in the open-source debate, its prioritization of the rights of individuals, along with its proactive negotiation with digital-rights owners before using their data to train its LLMs, is a commendable approach, and one a little more aligned with societal values than some of the approaches of peers like Google, OpenAI, etc.

Romulus v. Remus - Round II
AMD and Nvidia duke it out on the CES stage

Nvidia and AMD both trumpeted their respective consumer-focused hardware offerings. AMD pointed to its new 8040-series ultramobile chips with a dedicated NPU design it calls XDNA1, now 60% faster than the prior generation at 16 TOPS (achieved through higher clocks alone; there is no hardware change vs. the 7040). XDNA2, due later this year, will double the 8040s’ NPU performance again. Moore’s Law nods knowingly. AMD also announced the release of its 8000G desktop chips and highlighted the work it has put into its ROCm and Ryzen AI platforms; the former helps developers build AI applications for consumers, while the latter runs those applications locally.
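As a back-of-the-envelope check on those claims (a quick sketch; the 7040-series figure below is implied by AMD’s “60% faster” language rather than quoted directly):

```python
# Implied NPU throughput math from AMD's CES claims.
xdna1_tops = 16.0                  # 8040 series (XDNA1), per AMD
prior_gen_tops = xdna1_tops / 1.6  # implied 7040-series figure (~10 TOPS)
xdna2_tops = xdna1_tops * 2        # XDNA2 is promised to double XDNA1

print(f"7040 (implied): {prior_gen_tops:.0f} TOPS")  # 10 TOPS
print(f"8040 (XDNA1):   {xdna1_tops:.0f} TOPS")      # 16 TOPS
print(f"XDNA2 target:   {xdna2_tops:.0f} TOPS")      # 32 TOPS
```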

Nvidiaman: Enter the Omniverse
Nvidia does a little on the consumer side, but a lot on the corporate

Nvidia doesn’t produce x86 CPUs for consumer applications, so its approach is a bit different from AMD’s (which produces both consumer CPUs and GPUs). Nvidia’s consumer focus is therefore purely on the discrete-graphics side, where it introduced a number of RTX 40 Super Series GPUs that use AI for better gaming experiences and also allow for better local inference. The company also demonstrated its Nvidia ACE platform, which honestly looks terrible in the demos, but will allow game developers to incorporate GenAI into videogame conversations to create bespoke interactions. These sorts of simulated environments and interactions bear a resemblance to another of Nvidia’s thrusts: its Digital Twin initiatives, run through its Omniverse platform (digital assets designed to simulate and replicate real-world objects, environments, and physics) in concert with its Isaac robotics platform. Here too Nvidia employs its GenAI capabilities, quickly converting 2D images to 3D, spinning up simulated manufacturing facilities, and more. Nvidia takes it beyond the theoretical, with its Jetson platform acting as the physical brains / edge-compute interface with the real world (supporting local inference at the edge/agent level). It should be noted that Nvidia’s Digital Twin/AI/robotics solutions are already in use, generating value for companies like Amazon (warehouse logistics robotics).
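Omniverse itself is a sprawling proprietary platform, but the core digital-twin idea is simple enough to sketch: a virtual object mirrors a physical one from streamed telemetry and can be simulated forward independently, so changes are vetted virtually before touching real hardware. The toy class below is purely illustrative (hypothetical names, made-up thermal model), not the Omniverse API.

```python
from dataclasses import dataclass, field

@dataclass
class ConveyorTwin:
    """Toy digital twin of a warehouse conveyor (illustrative only)."""
    belt_speed_mps: float = 0.0   # mirrored from the real sensor feed
    motor_temp_c: float = 20.0
    history: list = field(default_factory=list)

    def sync(self, reading: dict) -> None:
        """Update the twin from a real-world telemetry packet."""
        self.belt_speed_mps = reading["belt_speed_mps"]
        self.motor_temp_c = reading["motor_temp_c"]
        self.history.append(reading)

    def simulate_speed_change(self, new_speed_mps: float) -> float:
        """Estimate motor temperature at a proposed speed, in simulation only."""
        # Hypothetical linear thermal model: +4 C per extra m/s of belt speed.
        return self.motor_temp_c + 4.0 * (new_speed_mps - self.belt_speed_mps)

twin = ConveyorTwin()
twin.sync({"belt_speed_mps": 1.2, "motor_temp_c": 41.0})
print(twin.simulate_speed_change(2.0))  # test a faster setting virtually first
```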

“Please Sir, May I Have Some More VRAM?”
GPU Poor starving for more memory

There is one area where we will critique both Nvidia and AMD: producing GPUs with enough VRAM to support larger consumer AI applications. Those of us in the practitioner/experimenter space, aka the GPU Poor, would love to get our hands on some consumer GPUs with decent compute and, more importantly, enough VRAM to run the more performant modern models. Nvidia’s and AMD’s GPU announcements at CES both fell well short of this bar, with new cards topping out at 16 GB of VRAM, a far cry from the amount needed to run 30B and 70B parameter models without quantizing. We understand their fear of cannibalizing their professional-tier cards, which is fair, but it means high-powered local inference using the best models simply is not possible today. The open-source community has demonstrated impressive creativity in devising ways to run more complex models on simpler hardware while limiting the performance trade-offs (quantizing being the prime example), but it would still be nice if they’d throw us a bone. The GPU Poor are starving. Should AMD expand the availability of its ROCm solution beyond the handful of GPUs it currently supports, and add meaningful VRAM, it could start to pick away at Nvidia’s dominance…
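The arithmetic behind that complaint is straightforward. Here is a rough sketch of the VRAM needed just to hold model weights at different precisions (ignoring the KV cache and activations, which only make things worse):

```python
# Rough VRAM needed just to store model weights (excludes KV cache/activations).
def weights_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * 1e9 * bits_per_param / 8 / 1e9  # bytes -> GB

for params in (30, 70):
    for bits, label in ((16, "fp16"), (8, "int8"), (4, "4-bit")):
        print(f"{params}B @ {label:5}: {weights_gb(params, bits):6.1f} GB")
```

A 16 GB card cannot even hold a 30B model at fp16 (~60 GB of weights); only aggressive 4-bit quantization brings 30B within reach (~15 GB), and a 70B model stays out of range entirely (~35 GB even at 4-bit).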

“Trust me. See? Steady Hand”
Revolutionary new AI tech ELIMINATES tremors

Finally, we end on a bit of a cool note, highlighting a piece of technology that really impressed us at CES this year. Not only is it in use today, measurably improving lives, but it hints at what is possible when synergies are achieved between AI and robotics. Behold the GyroGlove, a brand-new device that counteracts hand tremors in those who suffer from them, enabling a previously unimaginable steady hand. For readers suffering from tremors, we don’t recommend purchasing the current glove. Think of this like the initial Tesla Roadster: priced exorbitantly high ($5k for the glove), this semi-prototype will help refine and reduce the cost of the technology until it becomes generally affordable. Alternatively, we expect a similar open-source solution to be released within the next year or so that offers an 80/20 compromise (80% of the functionality at 20% of the cost), akin to the university-built open-source robots we covered in prior weeks’ Geeklies, which were compared to Tesla’s more expensive closed-source Optimus Gen 2 robot.

Glossary

  • IoT (Internet of Things): Network of physical devices embedded with sensors and software to collect and exchange data. Often added to previously dumb devices like toasters.

  • IP (Intellectual Property): Ownership rights over intangible creations, e.g. trademarks, copyrights, patents.

  • Spatial Computing: Overlaying digital information onto the physical world through AR/VR technology.

  • LLM: Large Language Model, capable of processing and generating large amounts of text. ChatGPT is powered by an LLM.

  • Google Bard: Google’s conversational AI chatbot, powered by an LLM trained on a massive dataset of text and code.

  • Local inference: Running AI computations directly on a device instead of uploading data to the cloud.

  • NPU (Neural Processing Unit): Specialized processor designed specifically to accelerate machine learning and artificial intelligence (AI) tasks.

  • GenAI: Generative AI, capable of creating new content like text, images, or code.

  • TOPS (Tera Operations Per Second): The number of (typically integer) operations a processor can perform per second, in trillions. Commonly used to rate neural processing units (NPUs) and AI accelerators.

  • Moore's Law: Observation that transistor count in integrated circuits doubles roughly every two years.

  • ROCm: Open-source platform for developing software for AMD GPUs.

  • Ryzen AI: AMD’s brand for the NPU hardware and supporting software that run AI workloads locally on its chips.

  • x86: Instruction-set architecture used by most modern PCs and servers. Differs from ARM, the architecture used in mobile devices and Apple silicon, which is known for higher power efficiency.

  • Discrete Graphics: Dedicated graphics processing unit (GPU) separate from the CPU.

  • VRAM: Dedicated memory for a GPU, used for storing graphics data.

  • GPU Poor: Those with a lack of sufficient GPU resources for desired AI tasks.

  • 30B and 70B Parameter Models: LLMs with 30 billion and 70 billion parameters, respectively, indicating their complexity.

  • Quantizing a model: Reducing the number of bits used to represent model parameters, making it smaller and faster.


About the Author: Brodie Woods

With over 18 years of capital markets experience as a publishing equities analyst, an investment banker, a CTO, and an AI Strategist at leading North American banks and boutiques, I bring a unique perspective to the AI Geekly. This viewpoint is informed by participation in two decades of capital-market cycles from the front lines; publication of in-depth research for institutional audiences based on proprietary financial models; execution of hundreds of M&A and financing transactions; leadership roles in planning, implementing, and maintaining the tech stack for a broker-dealer; and, most recently, heading the AI strategy for the Capital Markets division of the eighth-largest commercial bank in North America.