Quantum computing promises to reshape industries, but progress hinges on solving key challenges. Error correction. Simulation of qubit designs. Circuit compilation optimization. These are among the many bottlenecks that must be overcome to bring quantum hardware into the era of useful applications.
Enter accelerated computing. The parallel processing power of accelerated computing delivers the performance needed to make the quantum computing breakthroughs of today and tomorrow possible.
NVIDIA CUDA-X libraries form the backbone of quantum research. From faster decoding of quantum errors to designing larger systems of qubits, researchers are using GPU-accelerated tools to extend classical computation and bring useful quantum applications closer to reality.
Accelerating Quantum Error Correction Decoders With NVIDIA CUDA-Q QEC and cuDNN
Quantum error correction (QEC) is a key technique for coping with the unavoidable noise in quantum processors. It's how researchers distill thousands of noisy physical qubits into a handful of noiseless, logical ones by decoding data in real time, spotting and correcting errors as they emerge.
Among the most promising approaches to QEC are quantum low-density parity-check (qLDPC) codes, which can mitigate errors with low qubit overhead. But decoding them requires computationally expensive conventional algorithms running at extremely low latency and very high throughput.
The Quantum Software Lab, hosted at the School of Informatics at the University of Edinburgh, used the NVIDIA CUDA-Q QEC library to build a new qLDPC decoding method called AutoDEC, achieving a 2x improvement in speed and accuracy. AutoDEC was developed using CUDA-Q's GPU-accelerated belief propagation with ordered statistics decoding (BP-OSD) functionality, which parallelizes the decoding process, increasing the odds that error correction succeeds.
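To make the idea of syndrome decoding concrete, here is a toy NumPy sketch, not the CUDA-Q QEC API and not AutoDEC: given a parity-check matrix and a measured syndrome, the decoder searches for a low-weight error pattern consistent with that syndrome. Production qLDPC decoders such as BP-OSD solve this for far larger codes and under tight latency budgets, which is where GPU parallelism pays off.

```python
import itertools
import numpy as np

# Toy parity-check matrix for the [7,4] Hamming code, a classical stand-in
# for one set of checks in a CSS-style quantum code.
H = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=np.uint8)

def decode_min_weight(H, syndrome, max_weight=2):
    """Brute-force minimum-weight decoding: return the lowest-weight error
    pattern whose syndrome (H @ e mod 2) matches the measured syndrome."""
    n = H.shape[1]
    for weight in range(max_weight + 1):
        for support in itertools.combinations(range(n), weight):
            e = np.zeros(n, dtype=np.uint8)
            e[list(support)] = 1
            if np.array_equal(H @ e % 2, syndrome):
                return e
    return None  # no correction found within the weight budget

# Simulate a single bit flip on qubit 4 and recover it from the syndrome alone.
error = np.zeros(7, dtype=np.uint8)
error[4] = 1
syndrome = H @ error % 2
correction = decode_min_weight(H, syndrome)
print("syndrome:", syndrome, "correction:", correction)
```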
In a separate collaboration with QuEra, the NVIDIA PhysicsNeMo framework and cuDNN library were used to develop an AI decoder with a transformer architecture. AI methods offer a promising way to scale decoding to the larger-distance codes needed in future quantum computers. These codes improve error correction, but they come with a steep computational cost.
AI models can front-load the computationally intensive parts of the workload by training ahead of time and running more efficient inference at runtime. Using an AI model developed with NVIDIA CUDA-Q, QuEra achieved a 50x increase in decoding speed, along with improved accuracy.
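As a rough illustration of that split between offline training and fast online inference, the hypothetical PyTorch sketch below (not QuEra's decoder and not the PhysicsNeMo workflow) trains a tiny transformer to map syndrome bits to a predicted logical flip, then runs a single no-gradient forward pass as the latency-critical step. All sizes and data are placeholders.

```python
import torch
import torch.nn as nn

N_SYNDROME_BITS = 24
D_MODEL = 32

class SyndromeDecoder(nn.Module):
    """Tiny transformer classifier: syndrome bit string -> flip / no flip."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(2, D_MODEL)                 # embed each syndrome bit
        self.pos = nn.Parameter(torch.zeros(N_SYNDROME_BITS, D_MODEL))
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, 2)

    def forward(self, syndromes):                             # (batch, N_SYNDROME_BITS)
        x = self.embed(syndromes) + self.pos
        x = self.encoder(x).mean(dim=1)                       # pool over syndrome bits
        return self.head(x)

model = SyndromeDecoder()

# Offline training step on random placeholder data; real labels would come
# from circuit-level noise simulations done ahead of time.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
syndromes = torch.randint(0, 2, (256, N_SYNDROME_BITS))
labels = torch.randint(0, 2, (256,))
loss = nn.functional.cross_entropy(model(syndromes), labels)
loss.backward()
opt.step()

# Runtime: inference only, no gradients. This is the latency-critical path.
model.eval()
with torch.no_grad():
    prediction = model(syndromes[:1]).argmax(dim=-1)
```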
Optimizing Quantum Circuit Compilation With cuDF
One way to improve an algorithm that runs even without QEC is to compile it to the highest-quality qubits on a processor. Mapping the qubits of an abstract quantum circuit onto a physical layout of qubits on a chip is tied to an extremely computationally challenging problem known as graph isomorphism.
In collaboration with Q-CTRL and Oxford Quantum Circuits, NVIDIA developed a GPU-accelerated layout selection technique called ∆-Motif, providing up to a 600x speedup in applications, such as quantum compilation, that involve graph isomorphism. To scale this approach, NVIDIA and its collaborators used cuDF, a GPU-accelerated data science library, to perform graph operations and assemble candidate layouts from predefined patterns (aka “motifs”) based on the physical layout of the quantum chip, as sketched below.
These layouts can be constructed efficiently and in parallel by merging motifs, enabling GPU acceleration of graph isomorphism problems for the first time.
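The sketch below is a simplified illustration of that idea, not the ∆-Motif implementation: each motif instance is stored as rows of a cuDF DataFrame keyed by the physical qubits it touches, and a GPU-accelerated join fuses motifs that share a qubit into larger candidate layouts. Column names and data are invented for the example.

```python
import cudf

# Each row records one motif instance and a physical qubit it touches on the chip.
motifs_a = cudf.DataFrame({
    "motif_a": [0, 0, 1],
    "shared_qubit": [2, 3, 3],
})
motifs_b = cudf.DataFrame({
    "motif_b": [7, 8, 8],
    "shared_qubit": [2, 3, 5],
})

# Join on the shared physical qubit: every matching row is one way of fusing an
# A-motif with a B-motif into a larger candidate layout. The join itself runs
# on the GPU, so many candidate merges are evaluated in parallel.
candidates = motifs_a.merge(motifs_b, on="shared_qubit")
print(candidates)
```

Because cuDF mirrors the pandas API, the same table-building logic carries over from a CPU prototype while the merge work moves to the GPU.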
Accelerating High-Fidelity Quantum System Simulation With cuQuantum
Numerical simulation of quantum systems is essential for understanding the physics of quantum devices, and for developing better qubit designs. QuTiP, a widely used open-source toolkit, is a workhorse for understanding the noise sources present in quantum hardware.
A key use case is high-fidelity simulation of open quantum systems, such as modeling superconducting qubits coupled to other components within the quantum processor, like resonators and filters, to accurately predict system behavior.
Through a collaboration with the University of Sherbrooke and Amazon Web Services (AWS), QuTiP was integrated with the NVIDIA cuQuantum software development kit via a new QuTiP plug-in called qutip-cuquantum. AWS provided the GPU-accelerated Amazon Elastic Compute Cloud (Amazon EC2) infrastructure for the simulations. For large systems, researchers observed up to a 4,000x performance boost when studying a transmon qubit coupled to a resonator.
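For readers who want to see what such a model looks like in code, here is a minimal CPU-side QuTiP sketch of a two-level transmon coupled to a lossy resonator; the qutip-cuquantum plug-in is what moves this class of workload onto GPUs, and its activation step is omitted here. All parameter values are illustrative.

```python
import numpy as np
from qutip import destroy, qeye, tensor, basis, mesolve

N = 10                                    # resonator Fock-space truncation
wq, wr, g, kappa = 5.0, 5.0, 0.05, 0.005  # qubit/resonator frequency, coupling, photon loss

a = tensor(destroy(N), qeye(2))           # resonator lowering operator
sm = tensor(qeye(N), destroy(2))          # transmon (two-level) lowering operator

# Jaynes-Cummings Hamiltonian: resonator + qubit + exchange coupling.
H = wr * a.dag() * a + wq * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag())
c_ops = [np.sqrt(kappa) * a]              # open-system channel: photon loss from the resonator

psi0 = tensor(basis(N, 0), basis(2, 1))   # resonator empty, qubit excited
times = np.linspace(0, 200, 400)

# Solve the Lindblad master equation and track qubit excitation and photon number.
result = mesolve(H, psi0, times, c_ops, e_ops=[sm.dag() * sm, a.dag() * a])
# result.expect[0]: qubit excitation; result.expect[1]: resonator photon number
```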
Learn more about the NVIDIA CUDA-Q platform. Read this NVIDIA technical blog for more details on how CUDA-Q powers quantum applications research.
Explore quantum computing sessions at NVIDIA GTC Washington, D.C., running Oct. 27-29.