Quantum News Nexus is a new site from freelance writer and editor Dan O’Shea that covers quantum computing, quantum sensing, quantum networking, quantum-safe security, and more. You can find him on X at @QuantumNewsGuy or by email at doshea14@gmail.com.
Quantum computing and classical computing companies have, for the most part, reached consensus that hybrid quantum-classical computing is the path forward for supercomputing.
And thank goodness for that, as it has probably helped us avoid a lot of messy street fights between engineers and theorists from both sectors. CEOs may still claim one is better than the other, particularly if they are trying to deploy hype and controversy to bump up their stock prices, but most of us are on the same page.
This consensus seemingly has been reached over the last year or so, and there have been important milestones leading up to it. Possibly the earliest was the deployment by Quantum Brilliance of its room-temperature QPU at the Pawsey Supercomputing Research Centre in Australia. Last year, Nvidia announced plans for multiple projects in which classical supercomputers powered by GPUs and CPUs would gain quantum computers or QPUs as roommates, including the pairing of the new ABCI-Q supercomputer in Japan with a QuEra Computing quantum machine. In recent months, IBM co-located one of its Quantum System Two machines with the Fugaku supercomputer, also in Japan. There have been others, but you get the idea.
Now comes the latest hybrid quantum-classical supercomputing project, and it breaks some new ground: Quantum Machines, Arque Systems, and the Jülich Supercomputing Centre (JSC) in Germany (home to Jupiter, the first exascale supercomputer in Europe) recently announced the first deployment of an NVIDIA DGX Quantum-powered quantum computer at a major supercomputing center.
DGX Quantum is the system approach to hybrid quantum-classical computing that Nvidia first announced about two-and-a-half years ago. (Yeah, for those of you thinking Nvidia just started cramming on quantum: no, it has been scheming for a long time.) DGX Quantum offers a model for combining GPUs and QPUs with quantum control electronics, marrying Nvidia’s Grace Hopper Superchip with Quantum Machines’ OPX1000 hybrid quantum-classical controller to support seamless interaction between classical and quantum computing resources.
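For a sense of what programming such a system looks like, here is a minimal sketch in Python using CUDA-Q, the open-source hybrid programming framework Nvidia pairs with DGX Quantum. This is illustrative only, not Arque- or Jülich-specific code; by default the kernel runs on a simulator, standing in for a real QPU backend.

```python
# Minimal CUDA-Q sketch of a hybrid quantum-classical program.
# Illustrative only; the default simulator target stands in for a QPU.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)      # allocate two qubits
    h(qubits[0])                   # put qubit 0 in superposition
    x.ctrl(qubits[0], qubits[1])   # entangle via CNOT
    mz(qubits)                     # measure both qubits

# The classical host (CPU/GPU) launches the kernel and collects results
counts = cudaq.sample(bell, shots_count=1000)
print(counts)  # expect roughly 50/50 between '00' and '11'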
As for the QPU involved, a press release noted: “The system features Arque Systems’ 5-qubit quantum processor, which uses electron shuttling to couple qubits – an approach intended to enable architectures suitable for quantum error correction (QEC), a key requirement for practical quantum computing. To leverage the fast gate speeds of spin qubits, the tightly integrated stack delivers microsecond-scale analog feedback, ensuring classical processing stays within qubit coherence times.”
Sam Stanwyck, product lead for quantum computing at Nvidia, told Quantum News Nexus in a one-on-one interview that in addition to being the first supercomputer site deployment for DGX Quantum, there is another bit of novelty involved: “What’s new and different [in this case] is the tight, low-latency coupling [of the Arque QPU] through the DGX Quantum.”
Nvidia claims that tight integration enables round-trip data transfer with latency under 4 microseconds, a 1000-fold improvement over previous implementations.
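To see why that number matters, here is a back-of-the-envelope calculation. Only the sub-4-microsecond round trip and the 1000-fold improvement come from Nvidia’s claim; the coherence time used below is an assumed, typical order of magnitude for spin qubits, not a figure from the announcement.

```python
# Latency budget sketch. Coherence time is an assumed typical value.
round_trip_us = 4.0                  # claimed GPU<->QPU round-trip latency
previous_us = round_trip_us * 1000   # implied prior latency (~4 ms)
t2_us = 100.0                        # assumed spin-qubit coherence time

print(f"Prior implementations: ~{previous_us / 1000:.0f} ms per round trip")
print(f"Feedback decisions per coherence window now: ~{t2_us / round_trip_us:.0f}")
```

At millisecond latencies, a qubit would decohere long before a classical feedback decision arrived; at a few microseconds, dozens of decisions fit inside a single coherence window, which is what makes real-time control and error correction plausible.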
Of all the QPUs out there, why one from Arque? Well, the company is a spin-off of RWTH Aachen University and Forschungszentrum Jülich, the latter being the research organization that is parent to the Jülich Supercomputing Centre. So, this is a proud papa moment. But there’s more…
Though the Arque QPU has a significantly smaller qubit count than many QPUs on the market, that is typical of spin-qubit systems, and these QPU types are a natural fit for tight integration with GPUs.
“Spin qubits actually present a lot of use cases for tightly-coupled GPU computing,” Stanwyck said. “The way that you control and calibrate spin qubits is very complicated and very computationally intensive, and it’s one of the reasons that, though the fabrication [of spin-qubit-based QPUs] is very scalable, you see five qubit systems instead of 100 qubit systems.”
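To make that concrete, here is a toy version of one such calibration step: fitting a Rabi oscillation to extract the drive amplitude for a pi pulse. A real device requires many fits like this, repeated across qubits and interleaved with drift tracking, which is the computational load Stanwyck is describing. NumPy and SciPy are used purely for illustration; swapping in a GPU array library is the kind of acceleration at stake.

```python
# Toy calibration step: fit a Rabi oscillation to find the pi-pulse amplitude.
import numpy as np
from scipy.optimize import curve_fit

def rabi(amp, contrast, freq, offset):
    # Excited-state probability vs. drive amplitude in a Rabi experiment
    return contrast * np.sin(np.pi * freq * amp) ** 2 + offset

# Synthetic measurement data standing in for real qubit readout
rng = np.random.default_rng(seed=0)
amps = np.linspace(0.0, 1.0, 51)
data = rabi(amps, 0.95, 1.0, 0.02) + 0.02 * rng.standard_normal(amps.size)

popt, _ = curve_fit(rabi, amps, data, p0=[1.0, 1.0, 0.0])
pi_amp = 1.0 / (2.0 * popt[1])  # amplitude that produces a full pi rotation
print(f"Calibrated pi-pulse amplitude: {pi_amp:.3f}")
```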
It turns out JSC has plans to work with multiple QPUs from different firms, such as IQM, but as Stanwyck explained this particular distinction: “You can think of it as a control system and our superchip in a reference architecture, and Arque is the first QPU to use that to connect to a supercomputer, or in this case, an exascale supercomputer at Jülich. So, others, like IQM, can be used together with the supercomputer as well, but the integration there is a software integration for a co-located system, while this is a tight, low-latency, physical hardware integration.”
As hybrid quantum-classical computing advances, many deployments are still likely to co-locate quantum and classical machines rather than integrate them tightly. One current example is the Israeli Quantum Computing Center, which featured the very first deployment of the DGX Quantum system alongside a number of co-located quantum computers from different suppliers.
However, Nvidia is pushing for a different approach for deployments at the supercomputing end of the scale, with the aim that it will improve error correction and control for the quantum computers involved, and allow the quantum machines to serve as accelerators that help classical supercomputers do things that, even with exascale power, they have never been able to do before.
“I do expect to see that for the very long term, where we’re talking about building a co-designed supercomputer that includes quantum, the trend will be more towards this kind of tight integration,” Stanwyck said.
Image: The Nvidia DGX Quantum system with the Grace Hopper Superchip and the Quantum Machines OPX1000 hybrid quantum-classical controller. Source: QM