CES 2026 live: Nvidia, Lego, AMD, Amazon, and many others make their big reveals


    • Sean O'Kane

    First look: the Lucid-Nuro-Uber robotaxi

    Image credits: TechCrunch

    We’ve known for a while that Uber, Lucid, and Nuro are collaborating on a robotaxi, but all three companies just revealed the production-bound version here at the show — and I got a sneak peek ahead of the official unveiling. It’s built around the Lucid Gravity, which seems like a really smart choice: it’s extremely spacious inside and makes perfect sense for a high-end robotaxi service.

    I also got a preview of what the rider UI will look like. It’s quite similar to Waymo’s in-vehicle user interface, although Uber says it developed the software itself. All told, I could see this becoming a popular option in the Bay Area when all three companies start offering rides later this year, especially for people who want more legroom than Waymo’s Jaguar I-Pace offers.

    We also tried to find a portmanteau for this collaboration and would love to hear your thoughts. Nubercide? Lubero? Read the full story here.

    Cute robots always steal the show

    Speaking of robotics, Nvidia’s keynote may be over, but we’d be remiss not to mention the adorable robots on stage.

    Amid conversations about advances in robotics and automation, Nvidia CEO Jensen Huang made friends with two very cute R2D2-like robots on Monday. The robots, which made their second appearance on an Nvidia stage, helped illustrate where the company hopes to take robotics with its software and hardware models.

    Huang has previously said he believes humanoid robotics could become a multibillion-dollar industry. The robots were cute but perhaps not designed for speed. “Hurry up,” he said as they waddled slowly across the stage.

    Nvidia wants to be the default platform for general robotics

    Nvidia released a new stack of open robot foundation models, simulation tools, and cutting-edge hardware at CES 2026. And as senior AI reporter Rebecca Bellan notes, the move signals the company’s ambition to become the default platform for general-purpose robotics, just as Android has become the operating system for smartphones.

    Nvidia on Monday revealed details of its comprehensive ecosystem for physical AI, including new open base models — all available on Hugging Face — that allow robots to reason, plan, and adapt across many tasks and diverse environments, going beyond narrow, task-specific robots.

    Read the full story here.

    Nvidia’s new AI system is much easier to cool

    The GPUs that power contemporary AI systems consume a lot of energy and water, which has rightly drawn criticism from environmental groups. But Nvidia’s keynote brought some surprisingly good news on that front: the reduced power consumption of the company’s new architecture means its cooling systems have far less work to do.

    As Jensen Huang said on stage:

    Vera Rubin’s power is twice that of Grace Blackwell. And yet, and this is the miracle, the airflow that goes in is about the same, and, very importantly, the water that goes in is at the same temperature: 45°C in, 45°C out. No water chiller is needed for data centers. We’re basically cooling this supercomputer with hot water.

    What this means in practice remains to be seen, but it could be very good news for the ongoing construction of data centers.

    The new Rubin chip architecture is already in production

    In 2024, Nvidia first announced its next-generation Rubin computing architecture. And now it’s here.

    Nvidia CEO Jensen Huang announced while on stage at CES 2026 on Monday that the powerful chip is in production and is expected to ramp up further in the second half of the year.

    “Vera Rubin is designed to address this fundamental challenge we face: the amount of computation needed for AI is skyrocketing,” Huang told the audience.

    Read Russell Brandom’s full story on this powerful new chip architecture.

    “Open source AI models” are a theme of Nvidia’s keynote

    Nvidia CEO Jensen Huang is interested in open source AI models, and his speech at CES 2026 illustrated this point.

    On stage, Huang announced a number of new open source AI models, signaling the company’s intention to expand its influence in the open model ecosystem. The models are designed for a wide range of automated services and extend Nvidia’s existing open model families, including Nemotron for agentic AI, GR00T for robotics, and Cosmos for physical AI.

    The idea is that these models can be used by both startups and large companies to design new AI applications, tools and products, Huang said.

    Nvidia’s models are already used by a number of high-profile companies, including CrowdStrike, Palantir, CodeRabbit and Fortinet, according to Huang.

    Open source models have “really revolutionized artificial intelligence,” Huang said, noting that “the whole industry is going to be reshaped” by them in the future. He added that 80% of startups build their products on open models and that his company leads the open model ecosystem.

    Alpamayo is not a condiment, but a whole new family of open source AI models

    Nvidia’s keynotes, always led by CEO Jensen Huang, are famous for their dozens of announcements, and the CES 2026 keynote was no different. And yes, the keynote is still happening.

    So let’s catch up on one piece of news: Alpamayo, a new family of open source AI models, simulation tools, and datasets for training physical robots and vehicles, designed to help autonomous vehicles reason through complex driving situations. You can learn more about Alpamayo and what it means here.

    “Not only do we open source the models, but we also open source the data that we use to train those models, because that’s the only way we can really trust how the models are born,” Huang said on stage when talking about Alpamayo.

    Nvidia AV driver software is heading to the US

    Last year, Nvidia launched its first DRIVE AV stack, designed to enable hands-off automated driving, starting with the Mercedes-Benz CLA in Europe. At CES 2026, Nvidia announced the US launch of its DRIVE AV software, along with enhanced driving capabilities coming later this year, including hands-free driving on highways and end-to-end driving in urban areas.

    The plan is to expand its L2++ city driving in 2026, first in the United States and then globally, to other countries in Europe as well as Japan and South Korea. On stage, Nvidia CEO Jensen Huang said the launch would happen with its longtime partner Mercedes-Benz. “Of course, in the case of Mercedes-Benz, we built the whole stack together,” he said.

    Atlas, meet Atlas

    Image credits: Hyundai Motor Group

    Boston Dynamics showed off a prototype of its Atlas humanoid robot at CES 2026 and it walked on stage to show off its moves. But the audience also got a glimpse of the production version of Atlas, shown above.

    Zach Jackowski, vice president and general manager of Atlas at Boston Dynamics, and Aya Durbin, product manager of humanoid applications at Boston Dynamics, shared some of the Atlas specifications. For example, this version has 56 degrees of freedom, mostly with fully rotating joints, and it has human-sized hands with touch sensing in the fingers and palms for dexterous manipulation.

    Atlas also has cameras that give it 360-degree vision, allowing it to tell when people are approaching, and it can lift up to 110 pounds. Atlas is also water resistant for the real-world industrial environments robots must endure, and it can operate at full capability in temperatures between -4°F and 104°F (-20°C and 40°C), according to executives. Stay tuned to learn more about Atlas!

    Oh, and it’s already in production, according to Jackowski, who added that the entire supply for 2026 has already been allocated to Hyundai Motor Group and a “new AI partner.” Stay tuned for more on that.

    The Lego SMART brick is a “smart” way to add technology to children’s toys

    When a manufacturer outside the tech sphere announces something “smart,” it’s usually a poorly executed gadget (see: the Kohler smart toilet). But Lego’s SMART bricks, announced Monday at CES, add new technology to the franchise without spoiling what makes Lego so iconic in the first place.

    The SMART Play system, which does not use screens, includes SMART 2 × 4 bricks, SMART Tag tiles and SMART figurines. SMART bricks and figures can detect nearby SMART Beacons, which are flat 2×2 tiles (without the typical Lego studs on top) that have unique digital identifiers that tell the bricks and figures how to act. The bricks are powered by a patented ASIC chip that is smaller than a single Lego stud. The chip uses near-field magnetic positioning to recognize beacons around it and contains a miniature speaker, an accelerometer and an LED array.

    In the Star Wars sets launching in March, for example, SMART figurines of Luke Skywalker and Darth Vader can simulate a lightsaber duel by interacting with nearby SMART tags and bricks. Or if a SMART brick is attached to an A-wing, it can light up and make noises that bring the starfighter to life. The built-in accelerometer makes these lights and sounds responsive to how you actually play with the A-wing, since the brick can detect when it’s zooming through the air or being knocked down.
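    To make the mechanism concrete, here’s a minimal sketch of the beacon-to-behavior idea described above, assuming a lookup from a beacon’s unique identifier to a light-and-sound reaction, modulated by an accelerometer reading. Lego has not published any API for the SMART system; every name, ID, and threshold here is invented for illustration.

```python
# Hypothetical sketch of a SMART brick's behavior. Lego has published no API;
# all identifiers and values below are made up for illustration.

# Each SMART Beacon carries a unique digital identifier; a nearby brick or
# figurine looks up that ID to decide how to react.
BEACON_BEHAVIORS = {
    "beacon/a-wing": {"led": "engine-glow", "sound": "starfighter-hum"},
    "beacon/duel": {"led": "saber-clash", "sound": "duel-sfx"},
}

def react_to_beacon(beacon_id, accel_magnitude):
    """Return a reaction for a detected beacon, modulated by motion.

    accel_magnitude stands in for the built-in accelerometer reading,
    so "zooming through the air" produces a more intense effect.
    """
    behavior = BEACON_BEHAVIORS.get(beacon_id)
    if behavior is None:
        return None  # unknown beacon: stay quiet
    intensity = "high" if accel_magnitude > 1.5 else "low"
    return {**behavior, "intensity": intensity}

# A brick on an A-wing being swung through the air:
print(react_to_beacon("beacon/a-wing", accel_magnitude=2.0))
```

    The real bricks presumably do something far more involved over near-field magnetic positioning, but the core loop — detect a beacon ID, look up a behavior, scale it by motion — is what the accelerometer-driven play described above amounts to.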

    Image credits: LEGO

    Hyundai’s keynote kicks off with dancing spots

    Image credits: Hyundai Motor Group (screenshot)

    Need I say more? Hyundai Motor Group’s press conference kicked off with a troupe of Spot robotic quadrupeds dancing to “Go!” by Cortis. Shortly after, Merry Frayne, director of Spot product management at Boston Dynamics, took the stage.

    “Now our robots are very talented, dancing like K-pop stars, but they are built for a higher purpose. Cooperate, collaborate, do dangerous work and many other things for us and with us, and all of this doesn’t happen in isolation. It’s all about partnership,” she said.

    Nvidia’s pre-game show is here!

    Hello from Nvidia’s keynote at the Fontainebleau in Las Vegas, where crowds are piling into their seats, eager to see the company’s CEO and founder, Jensen Huang (and, presumably, his signature black leather jacket), take the stage.

    AI and robotics will obviously be major topics today. Nvidia’s pregame is currently underway, where moderator Mark Lipacis, senior managing director at Evercore, is interviewing several people, including Mercedes CEO Ola Källenius and Skild AI CEO Deepak Pathak, about the impact of robotics and automation on their industries. Källenius was quick to offer a compliment, noting at one point: “Working with Nvidia, we know we have the best compute.”

    Curious how to tune in? We’ve got you covered on how to watch the Nvidia keynote.
