It wouldn’t be an Nvidia keynote without a robot on stage, and GTC did not disappoint.

Nvidia CEO Jensen Huang spoke for nearly three hours on Monday during the GTC keynote. Unsurprisingly, it was about how the world’s largest company (by market capitalization) is building the hardware, software and infrastructure needed to continue to dominate the AI sector. Here’s what you need to know.
Our experts attended the event in San Jose, California, and tuned in remotely to bring you the latest news. There were several important takeaways, including a new Vera processor, an AI agent platform called NemoClaw and, yes, even an Olaf robot, thanks to a partnership with Disney. Huang also said he expects $1 trillion in orders for Nvidia’s Blackwell and Vera Rubin systems through 2027, up from previous estimates. But it’s not as bold a claim coming from a company valued at $5 trillion.
Nvidia’s chips are among the most sought-after resources for companies to create and maintain their AI models. Along with the massive level of spending in the tech industry, Nvidia’s skyrocketing valuation has many financial and technology experts worried about an AI “bubble.”
This year will likely be a turning point for AI stalwarts like Nvidia. Tech companies are pouring money into data center construction to meet the demand for AI services and to secure enough energy to fuel their AI ambitions. Environmental and labor concerns are numerous, with very real worries that AI disruption in the workplace will leave many people jobless. Nvidia is the leader in AI chip production and, therefore, the backbone of companies like OpenAI, Google and Anthropic. Everything the company says and does gives us insight into where this complex and ever-evolving industry could be headed.
Are you up to date on Nvidia news? We have you covered.
By Katelyn Chedraoui
Nvidia kicked off its GTC conference yesterday with a keynote from CEO Jensen Huang. Numerous announcements were made during the nearly three-hour opening speech. Our experts have identified the three main takeaways from Nvidia GTC. Here’s what you need to know.
Disney and Nvidia are teaming up to combine AI and robotics, resulting in an adorable Olaf droid that joined Huang on stage. Nvidia is also diving deep into AI agents with a new platform called NemoClaw. Yes, it’s inspired by the viral open-source AI agent OpenClaw – creator Peter Steinberger also made an appearance during the pre-show. Huang also briefly discussed the possibility of data centers in space and the biggest challenge in making them a reality. And Nvidia dropped DLSS 5, an AI-powered upscaling tool for gamers.
Keep scrolling to see our play-by-play updates during the keynote and check out our coverage on Instagram and TikTok.
And that’s it, friends!
By Katelyn Chedraoui
Knowing that it’s probably impossible to top the Disney robots, Huang closed the speech shortly after. The AI-generated music video is certainly something.
OpenClaw’s lobster logo received a cymbal solo in the finale of Nvidia’s clip.
Nvidia/Screenshot by CNET

We got a lot of information during the nearly three-hour keynote, including a robotics partnership with Disney, a new agentic toolkit for developers and a teaser about plans to build data centers in space. Notably absent from today’s speech were the rumored new N1 and N1X chips, but it’s possible we’ll see them released later this year.
Thanks for following us and don’t forget to check back here as we continue to uncover what all of today’s announcements mean for the future.
Nvidia announces NemoClaw reference stack
By Blake Stimac
Nvidia’s NemoClaw allows users to create claws with added layers of privacy and security.
Faith Chihil/CNET

Nvidia announced NemoClaw, a reference stack for the OpenClaw platform for creating autonomous AI agents, or “claws.” NemoClaw uses the Nvidia AI Agent Toolkit to optimize OpenClaw with a single command, installing OpenShell for open models and a sandbox for added privacy and security.
“The OpenClaw ‘event’ cannot be underestimated,” Huang said. “It’s as big a deal as HTML. It’s as big a deal as Linux.”
Nvidia and Disney make an Olaf robot
By Katelyn Chedraoui
This is by far the best part of the speech so far: Nvidia and Disney teaming up to create an Olaf droid. The robot version of Disney’s Frozen snowman made an appearance on stage with Huang.
Nvidia showed off an example of its Olaf droid.
Faith Chihil/CNET

CNET’s Corinne Reichert has all the details on how the project came to be.
“The robotic snowman runs simulations on Nvidia GPUs and is powered by Nvidia chips. Olaf came to life thanks to the Newton Physics Engine, an open source system developed by Nvidia, Google DeepMind and Disney Research that allows high-performance robot simulations to run quickly on GPUs,” reports Corinne.
Check out her complete story, which includes details on the Disney theme parks where you may eventually see Olaf droids walking around.
Nvidia makes a computer for space
By Jon Reed
Nvidia isn’t just putting its chips in almost every data center on the face of the Earth. The company is also making a computer for space. Huang called it Vera Rubin Space-1 and said Nvidia and its partners are already developing it. It’s “very complicated to do,” he said.
The big complication? Just like on the Earth’s surface, it’s a question of how to avoid overheating.
“In space, there’s no conduction, there’s no convection, it’s just radiation,” Huang said. “So we need to find a way to cool these systems in space.”
Nvidia upgrades its Vera Rubin system for agentic AI
By Katelyn Chedraoui
Nvidia’s Vera Rubin data center platform helps AI companies build and deploy their AI tools. Vera Rubin is receiving new updates to help it handle agentic AI, which involves more computationally intensive tasks.
The company introduced a new Vera processor, which it says “delivers results that are twice as efficient and 50% faster than traditional processors.”
Jensen Huang with a Vera Rubin rack, used in data centers.
Faith Chihil/CNET

The system is named for the astronomer whose observations provided key evidence for dark matter. We first saw the Vera Rubin system at CES in January.
An inflection point for AI inference
By Jon Reed
Nvidia CEO Jensen Huang on stage talks about inference.
Nvidia/Screenshot by CNET

Computing demand has increased significantly in recent years thanks to AI, but the main driver is no longer training, or creating, these AI models – it’s operating them, Huang said. This is called inference: when an AI model processes new information and applies what it has learned to do or produce something new. Agentic AI relies heavily on inference because models must constantly adapt to new information, significantly increasing the demand for computing power to handle that load.
“Finally, AI is able to do productive work, and so the point of inference has arrived,” Huang said. “The AI must now think. To think, it must infer. The AI must now do. To do, it must infer. The AI must now read. To read, it must infer.”
“King of Tokens”
By Katelyn Chedraoui
The Nvidia CEO joked that he is now the “king of tokens.”
Faith Chihil/CNET

“Inference is your new workload, tokens are your new product… you want to make sure the architecture is as optimized as possible going forward,” Huang said. “Intelligence will be increased by tokens.”
Tokens are the building blocks of AI, individual units of data that an AI model can process. Huang said Nvidia has the lowest cost per token in the world, making it something of a “king of tokens.”
New AI-Driven DLSS 5 Upscaling for Gaming
By Lori Grunin
AI-driven optimization highlights smaller details and creates more realistic appearances.
Nvidia

Nvidia introduced DLSS 5, an update to its AI-driven gaming upscaling and optimization software. It appears to add an element-based understanding: it relates features to surfaces (like skin) and perhaps objects, analyzing the initial content to make lighting and details more consistent and realistic throughout a game. This contrasts with today’s pixel-by-pixel, frame-by-frame understanding of a scene.
It’s slated to roll out this fall. It’s unclear which GPUs it will support, but it will likely be optimized for the Blackwell architecture of the RTX 50 series.
Nvidia GTC keynote begins
By Katelyn Chedraoui
Nvidia’s GTC keynote is officially underway. Follow here for updates and you can watch the live stream on YouTube.
Jensen Huang opens Nvidia GTC keynote.
Faith Chihil/CNET

Possible new Nvidia chips for Windows computers?
By Katelyn Chedraoui
We’re sure to get some performance updates on Nvidia’s chips during today’s keynote, but could we also get some new ones? The Verge and the Wall Street Journal reported that Nvidia was building two new chips, called N1 and N1X, specifically for Windows computers. These chips would fuel Nvidia’s return to the consumer market, potentially in Dell and Lenovo computers.
Nvidia’s partnership with MediaTek would likely play an important role in the development of these chips. CNET IT expert Matthew Elliott explains that the new silicon would be a “system-on-a-chip,” meaning it would integrate central, graphics and neural processing units. It would be based on the Arm architecture that Apple and Qualcomm use – not the x86 architecture from competitors AMD and Intel that’s found in many Windows computers.
“Unlike high-end gaming and creative laptops with dedicated Nvidia GeForce RTX GPUs, laptops with this new SoC designed by Nvidia and manufactured by MediaTek should be thin, light and durable,” Elliott said.
Why Nvidia’s on-device AI plan matters
By Katelyn Chedraoui
Whether you run your AI on a device or in the cloud is probably not something you think about often. But it should be. There are many benefits to running AI models locally, and Nvidia hardware makes it easy.
CNET editor-in-chief Jon Reed took a close look at Nvidia’s Project G-Assist at CES in January. It’s a chatbot-like interface that runs on your device and allows you to easily adjust your computer’s settings by speaking out loud. It’s also the technology behind the AI chatbot assistants that Nvidia is developing for gamers.
“Complex strategy games like Total War: Pharaoh can be particularly difficult for new players to understand, given that they often come with extensive documentation and complex mechanics that warrant their own encyclopedias,” Reed wrote. “This AI advisor, which runs on the device rather than in the cloud, can answer players’ questions about real events in the game using the context of all that information.”
On-device AI isn’t perfect: for example, it takes up a significant portion of your device’s memory. But it’s part of a growing movement to give AI developers and users a safer, cheaper and faster way to access AI tools without having to rely on AI companies and their data centers. Nvidia chips in hardware like laptops are one way to make it easier to run AI locally, along with the company’s more powerful desktop supercomputers, like the DGX Spark, which individual users and small and medium-sized businesses can use for more compute-intensive tasks.
OpenClaw creator at GTC pre-show
By Katelyn Chedraoui
OpenClaw creator Peter Steinberger participated in the Nvidia GTC pre-show.
Faith Chihil/CNET

If there’s one thing that’s absolutely taken the AI world by storm this year, it’s OpenClaw. This viral, open-source AI agent has wowed fans with its ability to perform tasks independently and manage your entire digital life, if you let it. OpenClaw creator Peter Steinberger appeared in a video interview with Huang, shown on the theater screen during the GTC pre-show. His lobster headband references OpenClaw’s lobster mascot and logo.
OpenAI acquired OpenClaw shortly after it went viral, with Meta picking up its agentic AI social media spin-off, Moltbook.
Live from San Jose!
By Katelyn Chedraoui
CNET and PCMag are ready for the 2026 Nvidia GTC keynote.
Faith Chihil/CNET

CNET social media manager Faith Chihil and Jacqueline Goldblatt of our sister site PCMag are inside the SAP Center waiting for the keynote to begin at 11 a.m. PT.
How to stream the Nvidia GTC keynote
By Katelyn Chedraoui
CEO Jensen Huang will talk today about “the latest advances in AI and accelerated computing.” The Nvidia GTC keynote is expected to begin at 11 a.m. PT (2 p.m. ET, 6 p.m. UK time) on Monday, March 16. You can stream the event on YouTube.