Hands-on: my experience with the Asus Ascent GX10 has been radical, and it's only relevant to those actively engaged in AI development.

Early verdict

While this can be great for developing local AI, using it for anything else can be difficult, and if the direction of the AI industry changes, it could find itself largely redundant. However, Asus has done an excellent engineering job on the GX10.

Benefits

  • Unprecedented AI power
  • 128 GB LPDDR5x RAM
  • 200 Gbps interconnect and 10 GbE LAN

Disadvantages

  • No USB Type-A or USB4 ports

  • Single 2242 M.2 slot for internal storage

  • Other than storage, it’s not internally scalable


Rather than a review, this is a hands-on study in which I explored what the Asus Ascent GX10 offers, providing information that could be essential to those considering purchasing one.

The first important thing to understand about this hardware is that it is not a PC: it's not an Intel, AMD or x86-compatible platform, and it cannot run Windows.

The system can be used directly by connecting a mouse, keyboard, and display, but it’s also intended to be used in headless mode from another system, which might explain why it comes with relatively modest onboard storage.

What this system does not allow for is much expansion, at least internally. What it does include is a special network connection, the Nvidia ConnectX-7 port, which allows another Ascent GX10 node to be stacked on top, doubling both the processing power and the price.


Asus Ascent GX10: Price and availability

  • How much does it cost? From $3,099.99 / £2,799.98
  • When was it released? Available now
  • Where can you get it? At online retailers.

The ASUS Ascent GX10 isn’t available directly from Asus, but it’s easy to find at many online retailers, including Amazon.

For US readers, the price on Amazon.com is $3,099.99 for the 1TB storage SKU (GX10-GG0015BN) and $4,149.99 for the 4TB storage model (GX10-GG0016BN).

Given that a 4TB Gen 5 SSD costs around $500, that’s a remarkable price hike for the extra storage capacity.

For UK readers, on Amazon.co.uk the price for the 1TB model is £3,769, but I found it via online retailer SCAN for a more palatable £2,799.98. SCAN also offers a 2TB option for £3,199.99 and the 4TB model for £3,638.99.

An important detail about this platform is that the hardware inside the GX10 is not exclusive to Asus: the same Nvidia silicon is (in theory) available from a number of brands, and Nvidia sells its own model.

The Nvidia DGX Spark Personal AI supercomputer, as its creator modestly calls it, costs £3,699.98 in the UK, for a system with 128GB of RAM and 4TB of storage.

Acer offers the Veriton AI GN100, which bears an uncanny visual resemblance to the Asus but comes with 4TB of storage, like the Nvidia option. It’s £3,999.99 direct from Acer in the UK, but just $2,999.99 from Acer in the US.

Another choice is the Gigabyte AI TOP ATOM desktop supercomputer, a 4TB storage model that sells for £3,479.99 at SCAN in the UK and can be found on Amazon.com for $3,999.

And the latest model with the same specs as most is the MSI EdgeXpert Desktop AI supercomputer, selling for £3,598.99 at SCAN in the UK and $3,999 at Amazon.com for US customers.

Overall, the prices for all of these products are roughly in the same range, but the Asus in its 1TB configuration is one of the cheaper choices, especially for those in Europe.

Asus Ascent GX10 AI supercomputer

(Image credit: Mark Pickavance)

Asus Ascent GX10: Specifications

Processor: ARM v9.2-A (GB10) with 20 ARM cores (10 Cortex-X925, 10 Cortex-A725)
GPU: Nvidia Blackwell GPU (GB10, integrated)
RAM: 128GB LPDDR5x unified system memory
Storage: 1TB M.2 NVMe PCIe 4.0 SSD
Expansion: N/A
Ports: 3x USB 3.2 Gen 2x2 Type-C (20Gbps, DisplayPort 2.1 Alt Mode), 1x USB 3.2 Gen 2x2 Type-C with PD input (180W EPR, PD 3.1), 1x HDMI 2.1, 1x Nvidia ConnectX-7 SmartNIC
Networking: 10GbE LAN, AW-EM637 Wi-Fi 7 (Gig+), Bluetooth 5.4
Operating system: Nvidia DGX OS (Ubuntu Linux)
Power supply: 48V 5A, 240W
Dimensions: 150 x 150 x 51mm (5.91 x 5.91 x 2.01 inches)
Weight: 1.48kg

Asus Ascent GX10: Design

  • Uber-NUC
  • ConnectX-7 scalability
  • Limited internal access

(Image credit: Mark Pickavance)

Although the GX10 looks like an oversized NUC mini PC, it weighs 1.48kg, heavier than any I’ve encountered before. And that doesn’t include the sizable 240W power supply.

The front is a sleek grille with only the power button for company, and all the ports are on the back. These include four USB-C ports, one of which is required for connecting the power brick, a single 10GbE LAN port, and a single HDMI 2.1 video output.

You can connect multiple monitors using the USB 3.2 Gen 2×2 ports in DP Alt mode, if you have the adapters to convert them to DisplayPort.

What seems slightly odd is that Asus went with three USB 3.2 Gen 2x2s, a standard that was a dead end in USB development, not USB4. And there are no USB Type-A ports, forcing the buyer to use an adapter or hub to connect a mouse and keyboard to this system.

Since mice and keyboards are still primarily USB-A, this is slightly irritating.

But what really makes this system interesting is the inclusion of a ConnectX-7 smart network card alongside the more conventional 10GbE Ethernet port.

The best the 10GbE LAN port can offer is around 840MB/s of data transfer, which is technically slower than the 20Gbps USB-C ports, although that's still quick by networking standards.
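
As a quick sanity check on those figures, here is a trivial sketch of my own (ignoring protocol overhead) of the theoretical ceilings for the three interfaces involved:

```python
# Rough theoretical throughput ceilings, ignoring protocol overhead.
# Real-world figures, like the ~840MB/s quoted above for 10GbE, will be lower.
def gbps_to_gb_per_s(gbps: float) -> float:
    """Convert a link speed in gigabits per second to gigabytes per second."""
    return gbps / 8

print(f"10GbE LAN:        {gbps_to_gb_per_s(10):.2f} GB/s")
print(f"USB 3.2 Gen 2x2:  {gbps_to_gb_per_s(20):.2f} GB/s")
print(f"ConnectX-7 link:  {gbps_to_gb_per_s(200):.2f} GB/s")
```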

ConnectX-7 is a technology developed by Mellanox Technologies, an Israeli-American multinational supplier of computer networking products based on InfiniBand and Ethernet, which Nvidia acquired in 2019.

In this context, ConnectX-7 provides a way to link a second GX10 directly over a 200 Gbps (25 GB/s) InfiniBand network, enabling performance scaling between the two systems.

There are parallels here with the days when Nvidia allowed two GPUs to work in unison over a dedicated SLI interconnect, but ConnectX-7 is a far more sophisticated option: processing and memory can be pooled across the two machines, enabling the management of large-scale models with over 400 billion parameters, double the roughly 200 billion that a single unit can manage.
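
To give a flavour of what pooling two nodes looks like in practice, here is a minimal sketch using PyTorch's distributed package with the NCCL backend, one common way of spanning GPUs across an interconnect like this. I'm assuming the DGX OS stack exposes the ConnectX-7 link to NCCL, and the address, port and rank values below are placeholders, so treat it as an illustration rather than a recipe:

```python
# Minimal two-node all-reduce sketch using PyTorch's distributed package with
# the NCCL backend. Assumptions: PyTorch with CUDA works on both GX10s and
# NCCL can route traffic over the ConnectX-7 link; the address, port and RANK
# environment variable below are placeholders for illustration only.
import os
import torch
import torch.distributed as dist

def main():
    rank = int(os.environ.get("RANK", "0"))  # 0 on the first GX10, 1 on the second
    dist.init_process_group(
        backend="nccl",
        init_method="tcp://192.168.100.1:29500",  # placeholder address of node 0
        world_size=2,
        rank=rank,
    )
    device = torch.device("cuda:0")  # each node has a single integrated Blackwell GPU
    x = torch.ones(1024, 1024, device=device)

    # Sum the tensor across both nodes over the interconnect.
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: value after all-reduce = {x[0, 0].item()}")  # expect 2.0

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```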

(Image credit: Mark Pickavance)

Mellanox also makes InfiniBand switches, but I don't know whether it's possible to connect more than two GX10s through one of them. Realistically, each system is still only capable of 200 Gbps of communication, so adding nodes beyond two could offer diminishing returns. But this technology is used in switched fabrics for enterprise data centers and high-performance computing, and in those scenarios the Mellanox Quantum family of InfiniBand switches supports up to 40 ports operating at HDR 200 Gbps.

It may be that products such as the GX10 are at the forefront of broader use and application of ConnectX technology, as well as a model for easily scalable clusters.

However, the last aspect of the GX10 I examined was a disappointment, and it's the only nod to scalability this system makes beyond adding a second machine.

Underneath the GX10 is a small panel that can be removed to provide access to the M.2 NVMe drive supported by this system.

In our review unit, the slot was occupied by a single 1TB 2242 M.2 PCIe 4.0 drive, although the system can also be bought with 4TB. The lack of room for a 2280 drive is a shock, as it effectively limits maximum internal storage to 4TB.

Conversely, the only other system of this type I've seen, the Acer GN100 AI Mini Workstation, offers no access to its internal storage at all, so perhaps Asus Ascent GX10 owners should be grateful for small mercies.

Asus Ascent GX10: Features

  • 20-core ARM processor
  • Grace Blackwell GB10
  • Comparison of AI platforms

The Nvidia GB10 Grace Blackwell superchip represents a significant advancement in AI hardware, resulting from a collaborative effort between Nvidia and ARM. Its origins lie in the growing demand for specialized computing platforms capable of supporting the rapid development and deployment of artificial intelligence models. Unlike traditional x86 systems, the GB10 is built around the ARM v9.2-A architecture, with a combination of 20 ARM cores, specifically 10 Cortex-X925 cores and 10 Cortex-A725 cores. This design choice reflects a broader industry trend toward ARM-based solutions, which provide improved efficiency and scalability for AI workloads.

The capabilities of the GB10 are simply remarkable. It features a powerful Nvidia Blackwell GPU paired with the ARM processor, delivering up to a petaFLOP of AI performance using FP4 precision. This level of computing power is particularly suited to training and running inference on large language models (LLMs) and diffusion models, which underpin much of today's generative AI. The system is further enhanced by 128GB of LPDDR5x unified memory, ensuring that even the most demanding AI tasks can be processed efficiently.
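
In practice, the kind of workload this memory and FP4 throughput is aimed at looks something like the following sketch, which loads a 4-bit quantized LLM for local inference using Hugging Face transformers and bitsandbytes. I haven't verified that this exact stack runs on the GB10's ARM/Blackwell combination, and the model name is a placeholder, so take it as an outline of the workflow rather than a tested setup:

```python
# Sketch: loading a 4-bit quantized LLM for local inference.
# Assumptions: transformers and bitsandbytes are installed and work on this
# ARM/Blackwell platform (unverified here); the model name is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "example-org/example-70b-model"  # placeholder model identifier

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights to fit in unified memory
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",  # let the library place weights on the GPU
)

prompt = "Explain what unified memory means for local LLM inference."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```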

The GB10’s operating environment is based on Ubuntu Linux, specifically tailored to NVIDIA’s DGX operating system, making it an ideal platform for developers familiar with open source AI tools and workflows.
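
For developers setting one up, a sensible first step on DGX OS is confirming that the Python stack can actually see the integrated Blackwell GPU and its memory. A minimal sketch, assuming PyTorch with CUDA support is installed (it may not be part of the default image):

```python
# Quick environment sanity check: does the Python stack see the Blackwell GPU?
# Assumes PyTorch with CUDA support is installed on the DGX OS image.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU:                {props.name}")
    print(f"Memory visible:     {props.total_memory / 1024**3:.1f} GB")
    print(f"Compute capability: {props.major}.{props.minor}")
else:
    print("No CUDA device visible - check the driver and CUDA toolkit install.")
```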

There is exceptional irony in this choice of operating system, since Nvidia has hardly been a friend to Linux over the past three decades and has actively hindered its attempts to compete more broadly with Microsoft Windows. If anyone doubts my opinion on the relationship between Linux and Nvidia, search for "Linus Torvalds" and "Nvidia". Recently, Linus has been supportive of the company, but much less supportive of Nvidia CEO Jensen Huang. And he's not a fan of the AI industry, which he describes as "90% marketing and 10% reality."

In the future, the evolution of the GB10 and similar superchips will likely be shaped by the ongoing arms race in AI hardware. As models become larger and more complex, the need for even greater memory bandwidth, faster interconnects, and more efficient processing architectures will drive innovation. The modularity offered by technologies such as ConnectX-7 portends a future in which AI systems can be scaled seamlessly by linking multiple nodes, enabling the management of models with hundreds of billions of parameters.

In terms of raw AI performance, the GB10 delivers up to 1 petaFLOP at FP4 precision, which is highly optimized for quantized AI workloads. Although this falls short of the multi-petaFLOP performance of Nvidia's flagship data center chips (such as the Blackwell B200 or GB200), the power efficiency of the GB10 is remarkable. It runs at around 140W TDP, much lower than the 250W or more seen in GPUs like the RTX 5070, but offers significantly more memory (128GB versus 12GB on the 5070). This makes the GB10 particularly suitable for developers and researchers who need to work with large models locally, without the need for an entire server rack.
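
The memory figure is arguably the more important number, and some rough arithmetic shows why: at FP4, each parameter occupies half a byte, so a 200-billion-parameter model needs roughly 100GB for its weights alone, which fits inside 128GB of unified memory, while the same model at FP16 would need around 400GB. A quick sketch of that calculation, ignoring activations, KV cache and framework overhead:

```python
# Back-of-the-envelope weight footprint for a model at different precisions.
# Ignores activations, KV cache and framework overhead, so treat the numbers
# as lower bounds rather than exact requirements.
def weight_footprint_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate size of the model weights alone, in gibibytes."""
    return params_billions * 1e9 * bits_per_param / 8 / 1024**3

for params in (70, 200, 400):
    fp4 = weight_footprint_gb(params, 4)
    fp16 = weight_footprint_gb(params, 16)
    print(f"{params:>3}B parameters: ~{fp4:6.1f} GB at FP4, ~{fp16:6.1f} GB at FP16")
```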

(Image credit: Mark Pickavance)

Although there are other contenders hiding in the shadows, mainly Chinese, the main players in AI hardware are Nvidia, AMD, Google and Apple.

Nvidia's flagship data center products are the Blackwell B200/GB200, delivering up to 20 petaFLOPS of sparse FP4 compute and massive HBM3e memory bandwidth. These are extremely expensive enterprise parts; the GB10, by contrast, is a smaller, more accessible version for desktop and edge use, trading cutting-edge performance for efficiency and compactness.

AMD's line of AI accelerators, the Instinct MI300/MI350 series, is competitive in raw compute and memory bandwidth, with the MI350X offering up to 288GB of HBM3e and solid FP4/FP6 performance. But these don't offer the same flexibility as the GB10, although they are better suited to large-scale inference tasks. The same can be said for Google's TPU v6/v7, a highly efficient technology for large-scale inference that is optimized for Google's cloud and AI services.

Apple's M3/M4/M5 and other edge AI chips are optimized for on-device AI in consumer products, with impressive efficiency and built-in neural engines. However, these chips are not designed for large-scale model training or inference, and their memory and compute capabilities are far below what the GB10 offers for professional AI development.

The NVIDIA GB10 Grace Blackwell superchip stands out as a bridge between consumer AI hardware and data center accelerators. It offers a unique blend of high memory capacity, power efficiency, and local accessibility, making it ideal for developers and researchers who need serious AI functionality without the scale or cost of a full server. While it can’t match the absolute peak performance of the largest data center chips, its unified memory, advanced interconnects, and software support make it a compelling choice for cutting-edge AI work on the desktop.

However, this statement assumes that current AI is a way forward.

Asus Ascent GX10: AI reality check

(Image credit: Mark Pickavance)

Looking at the specs of the Asus Ascent GX10, it’s easy to be impressed by the computing power that Asus, with help from Nvidia, has managed to pack into a small computer and its ability to scale.

However, three practical little pigs live in this AI straw house, and in this story, I am the wolf.

Those researching AI might think I’m referring to the three AI problems that all public implementations face. These are algorithmic biases, lack of transparency (i.e. explainability), and significant ethical/societal risks associated with the spread of misinformation. But this is not the case, because these problems are potentially fixable to some extent.

Instead, I'm talking about the three irreparable problems with current models.

Almost all AI platforms are based on a concept called the Deep Neural Net, and on top of this sit two approaches generally classified as LLMs (Large Language Models) and diffusion models, the latter being the ones that can generate images and videos.

What these two sides of the Deep Neural Net coin show is an approach to problems based on pattern matching, as if the computer were playing a complex version of the children’s card game Snap. Results are influenced by the scale of the data and how quickly routines and hardware platforms find patterns.

Before IBM made computers, it sold card-filing systems, the idea being that they made it quicker to navigate to the information people wanted.

This is a generalization, but these models are simply more sophisticated versions of that idea, because if the pattern they are looking for does not exist in the data, the routine cannot create it through some flash of inspiration.

To make results less random, model designers have tried to specialize their AI constructs to focus on narrower criteria, but the nirvana of AGI (Artificial General Intelligence) is that AI should be generally applicable to almost any problem.

This problem manifests itself in the AI’s responses: when faced with a pattern that the routine cannot match precisely, it simply offers the partial matches it found, which may or may not be related at all.

These "hallucinations," as they are often called, are the result of a choice modelers face between the AI admitting it has no idea what the answer is and it providing an answer that has a remarkably low probability of being correct. Since AI companies don't like the idea of their models admitting they have no idea, hallucinations are deemed preferable.
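
To make that trade-off concrete, here is a toy sketch, entirely my own illustration rather than how any real model is built, of the choice between returning the best weak match and admitting ignorance below a confidence threshold:

```python
# Toy illustration of the answer-vs-abstain trade-off described above.
# The candidate answers and probabilities are invented for the example.
def respond(candidates: dict[str, float], abstain_below: float = 0.5) -> str:
    """Return the most probable candidate, or admit uncertainty below a threshold."""
    best_answer, best_prob = max(candidates.items(), key=lambda item: item[1])
    if best_prob < abstain_below:
        return "I don't know."  # the answer vendors tend to avoid
    return best_answer          # otherwise, the confident-sounding reply

weak_matches = {"Answer A": 0.22, "Answer B": 0.18, "Answer C": 0.15}
print(respond(weak_matches))                     # -> "I don't know."
print(respond(weak_matches, abstain_below=0.1))  # -> "Answer A" (a likely hallucination)
```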

(Image credit: Mark Pickavance)

Perhaps part of the problem here is not the AI, but that users are not trained to verify what the AI produces, an argument that is not entirely wrong.

The next problem is the classic "prompt injection" issue, where you ask a question, then, often depending on the answer, realize you asked the wrong question and head off in a completely different direction. The AI doesn't recognize this pivot, tries to apply the patterns it built for the previous problem to the new one, and gets completely confused.

And the final piglet, where current AI completely breaks down, could be considered original thinking, where what the user wants is a new approach to a problem that hasn’t been documented before. What has defined humans as particularly impressive thinkers is their ability to abstract, and that’s something that current AI doesn’t do, even modestly.

Although prompt injection can probably be solved, the other two problems, generalization and abstraction, are unlikely to be solved by the Deep Neural Net. They require a radically new approach and, ironically, not one that AI itself is likely to deliver.

Some of you reading this will wonder why I've inserted this discussion into a hands-on, but the whole purpose of the Asus Ascent GX10 is to make LLM and diffusion models easier to design and test, and at the moment these have significant limitations.

But importantly, the development of the entire Deep Neural Net direction does not appear to have resolved some of the more problematic issues, suggesting that it may ultimately be a dead end.

This could prove useful for solving many problems, but it is not the AI we are looking for, and the likelihood of it evolving into true artificial intelligence is extremely low.

This is particularly relevant to the Asus Ascent GX10, as it has no practical use beyond creating models, since it’s not a PC.

These aren’t all the issues associated with AI, but these are some of the ones that could have a direct impact on those who purchase the GX10, at one point or another.

Asus Ascent GX10: Early verdict

(Image credit: Mark Pickavance)

It’s exciting to see Asus doing something this radical, showing that it truly believes in a post-Windows, post-PC future where hardware is purely specified for a specific task, and in the case of the Asus Ascent GX10, that’s the development of AI models.

I’ve already touched on the caveats on this topic, so for the purposes of this conclusion, let’s assume that AI is the solid bet some think it is, and not an underperforming dead end like others think it is.

For businesses, the cost of this hardware is low enough that their IT people can experiment with building AI models and assess their value without much financial risk.

The beauty of a system like the GX10 is that it's a fixed, one-off cost, unlike purchasing access to an AI data center cluster, which is an ongoing expense that is likely to become more costly if demand is high. Although the data center may still be needed for larger projects or deployment, the GX10 provides a starting point for any proof of concept.

However, if the AI route is not the one ultimately taken, this machine becomes, above all, a beautifully designed paperweight.


For more compact computing, check out our guide to the best mini PC you can buy.

Mark is an expert in 3D printers, drones and phones. He also covers storage, including SSDs, NAS drives, and portable hard drives. He started writing in 1986 and has contributed to MicroMart, PC Format and 3D World, among others.

What is a hands-on review?

Hands-on reviews are a journalist's first impressions of a piece of equipment, based on time spent with it. That may be only a few moments, or a few hours. The important thing is that we've been able to try it for ourselves and can give you some sense of what it's like to use, even if it's only an embryonic view. For more information, see TechRadar's Review Guarantee.