Open Acceleration Stack: Bringing AI into the Heart of Quantum Computing
March 18 | 2026
Qubits are getting better – but today, they are still not good enough.
And the industry is no longer betting on them alone. Instead, quantum computing is shifting toward hybrid systems, where classical processors, especially GPUs running advanced AI, sit directly in the control loop, stabilizing fragile qubits in real time.
Dr Itamar Sivan, CEO of Quantum Machines, puts it very bluntly. Despite all the promises, decades of research and screaming headlines, quantum bits (qubits) and, by extension, quantum processing units or QPUs – the heart of the quantum computer – are still not good enough to get us to the desired moment of quantum advantage any time soon all on their own. They simply don’t ‘live’ long enough just yet for meaningful computations. If the classical machine takes too long to send the instructions, qubits decohere, and the system collapses into our boring macroscopic reality.
That’s what made the three co-founders of Quantum Machines – Dr Sivan, Dr Yonatan Cohen and Dr Nissim Ofek – think outside the box and get AI to give qubits a helping hand. Together with Nvidia and several other partners, QM has developed a way for any company or lab to integrate advanced AI into a hybrid quantum-classical setup as the experiment is running, cutting the GPU-QPU roundtrip communication to under 4 microseconds and enabling qubits to do the work despite their short lifespans.
Dubbed the Open Acceleration Stack, this architecture is built around QM’s OPX1000 control system and should allow researchers to develop the next generation of quantum computers. While earlier demonstrations showed that GPUs and quantum processors could work together in tightly coupled systems, the Open Acceleration Stack turns that idea into infrastructure that labs can deploy themselves, allowing different classical accelerators to connect through the control system to a wide range of quantum processors.
Watch: It’s a dual path to fault-tolerant quantum computing: better qubits and the systems around them. Watch 2025 Nobel Prize winner Prof. John Martinis share his vision for scalable quantum computing.
More compute in the loop also means better insight into how qubits behave — and how to improve them. “If you want to improve your qubits, you need to understand what’s going on in there, what’s good and what’s bad, and what you could improve,” says Dr Sivan. “That is not easy, and we need all the compute and all the AI we can get to operate our quantum processors, to interpret the data from them and to learn how to improve them.”
Complex? In quantum, that’s an understatement
What once was limited to dreams and hopes – famous, inspiring words from the 1980s like “Nature isn’t classical, dammit,” from American physicist Richard Feynman, diagrams on the blackboard and papers in scientific journals – has morphed into a few dozen ever-more powerful machines that promise to impact our lives unlike any other technology. From helping us discover new drugs and materials faster and more efficiently to optimizing how we transport goods, and ourselves, from point A to point B, tackling various global challenges and maybe even – who knows – finally achieving fusion on Earth. And much more. We can’t wait for these machines to start working alongside the rest of our tech.
Scientists believe that we’ll get there, eventually, so companies, academia and governments are pouring money and resources into the research, refining national quantum computing strategies and deliberating ways to educate our children about this emerging technology that they – hopefully – will benefit from.
The reason quantum bits are still not good enough is that nature – and thus any quantum system – is extremely complex. Mimicking nature isn’t trivial; after all, atoms can be in multiple states at once, but nothing in our daily macro world can. Except for qubits – systems such as photons, atoms or superconducting circuits made of tiny, supercooled wires. The most advanced quantum computers today boast a few hundred qubits. Take one with, say, just a hundred of them. No classical computer on Earth could process the same amount of data as such a machine, because doing so would require more classical transistors than there are atoms in the universe. “We’re here dealing with very, very complex beasts,” chuckles Dr Sivan.
And these beasts keep making errors.
Just as with the code our regular computers execute, bugs are common. We rarely notice them, as computers correct them on the fly. That’s what we want our quantum machines to do, too, but building a fully fault-tolerant quantum computer ain’t easy. Qubits are extremely fragile, and it is only when they are in the delicate quantum state that they can solve the problems we want them to solve. Any disturbance – a temperature fluctuation, even motion next to the machine – and errors creep in, yanking the qubits right back into our classical reality and leading to a wrong result.
To fix the errors, a classical computer needs to constantly monitor the error signals, calculate a fix and send a correction back. If the fix doesn’t arrive within about 10 microseconds, the error spreads and the calculation crashes. Communication speed – or, in science speak, latency – matters greatly.
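The shape of that loop can be sketched in a few lines. This is purely illustrative: `decode`, `read_syndrome` and `apply_correction` are hypothetical stand-ins, and ordinary Python could never actually meet a 10-microsecond deadline – real systems run this logic on dedicated hardware in the control stack.

```python
import time

# Deadline from the article: a correction must arrive within
# roughly 10 microseconds, or the error spreads.
DEADLINE_US = 10.0

def decode(syndrome):
    """Toy decoder: flag every qubit whose error signal is set (illustrative only)."""
    return [i for i, bit in enumerate(syndrome) if bit]

def control_loop(read_syndrome, apply_correction):
    """One iteration of the classical feedback loop around the QPU."""
    start = time.perf_counter()
    syndrome = read_syndrome()        # 1. monitor the error signals
    correction = decode(syndrome)     # 2. calculate a fix
    apply_correction(correction)      # 3. send the correction back
    elapsed_us = (time.perf_counter() - start) * 1e6
    return elapsed_us <= DEADLINE_US  # True only if we beat the deadline
```

The point of the sketch is the structure: measure, decide, act – and the whole cycle is judged against a hard latency budget.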
To address the latency issue, QM and its partners are bringing GPUs directly into the control loop. This reflects a broader industry shift: GPUs are no longer just post-processing tools, but real-time decision engines that sit alongside quantum hardware, analyzing measurements and sending corrections within microseconds.
QM’s real-time classical control hardware, OPX, sits right next to the quantum machine and enables it to talk and work together with its classical counterparts, connecting the QPU to classical processors through hybrid control integrations. The key part of the control hardware is the Pulse Processing Unit, or PPU. It sends data and instructions from the classical computer to the qubits using microwave pulses and reads signals the qubits send back.
In 2023, QM partnered with Nvidia to develop a hybrid computing system that consisted of the OPX/PPU, an OPNIC and Nvidia’s Grace Hopper GPU system. This system, back then called DGX Quantum, enabled classical processors to analyze and send fast feedback to qubits during experiments, and paved the way for Nvidia’s NVQLink, an architecture for GPU–quantum system interaction. With that development, the time for the feedback loop of getting the qubit measurement, communicating it to the GPU and sending updated parameters back to the controller dropped to the lowest-ever figure of less than 4 microseconds.
That was a good result, but Dr Sivan knew that they could do better.
To run a problem on a quantum computer, researchers typically break it into separate chunks – some destined for the GPU, others for the CPU, and the rest for the QPU. The process is slow and not very accurate, often leading to incorrect results. Classical machines tend to take too long to send the data, and the qubits decohere before the data arrives. In a complex problem like drug discovery, the GPU and QPU need to ‘talk’ back and forth thousands of times. If each exchange takes milliseconds, the communication overhead alone can stretch the total runtime into days.
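A back-of-the-envelope calculation shows why per-exchange latency dominates. The numbers below are illustrative assumptions, not measurements from the article – and in practice the penalty compounds far beyond this simple sum, because qubits decohere while waiting and work has to be redone.

```python
def total_overhead_seconds(n_exchanges, latency_us):
    """Accumulated GPU-QPU communication time for a hybrid algorithm."""
    return n_exchanges * latency_us / 1e6

n = 100_000  # assumed number of GPU-QPU roundtrips for one run

slow = total_overhead_seconds(n, 2_000)  # ~2 ms per exchange, loosely coupled
fast = total_overhead_seconds(n, 4)      # <4 us inside the control loop

print(f"loosely coupled overhead: {slow:.0f} s")  # prints 200 s
print(f"in-loop overhead:         {fast:.2f} s")  # prints 0.40 s
```

A ~500x difference per roundtrip is the gap between an experiment that fits inside the qubits’ coherence window and one that does not.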
“Classical computers alone trying to operate quantum computers is like a single-handed puppeteer trying to control the universe,” laughs Dr Sivan.
AI, quantum and classical walk into a bar…
That’s exactly where AI can help.
Until recently, AI was mostly used after experiments, to analyse results. Now it can sit inside the control loop, interpreting measurements and correcting errors in real time. At such speeds, the GPU can act as a real-time decoder, analysing measurements and deciding the next action on the fly.
Instead of the researchers distributing the work between classical and quantum processors, AI would orchestrate everything, learning the quantum process, adjusting it and suggesting improvements – speeding up the overall decision time.
QM has shown that AI can do the job. In 2025, together with Diraq and Nvidia, researchers connected silicon qubits to Nvidia’s GPUs via the DGX Quantum system and OPX1000 controller. They achieved ultra-fast 3.3 microsecond communication between the QPU and the GPU, allowing the GPU to run algorithms, including machine learning, in the control loop.
Still, despite the significance of those milestones, it was not so easy for companies and labs to achieve the hybrid quantum-classical-AI setup. The typical approach has been for a quantum computing vendor to partner with a classical processor provider, or the other way around, and then use AI mainly for post-processing. Clunky, cumbersome, setup-specific and time-consuming. And time is exactly what we want to minimize – after all, we do want to get to our quantum future faster. Maybe even to enjoy it together with our kids.
Open, inclusive and diverse hybrid ecosystem
So QM decided to simplify the path to hybridization, building on the success of the experiments with Nvidia and others.
“We wanted to enable the use of AI for quantum, and to make it easy, fast and accessible to all,” says Dr Sivan. “That’s exactly why we’ve created the Open Acceleration Stack, a framework that allows everyone to work with everyone else. It allows you to integrate all the latest and greatest on both sides, meaning the state-of-the-art in classical processing – GPUs, CPUs, FPGAs that the world has to offer – with any quantum processor the industry has developed. And it is definitely AI that we consider to be the most promising element here, what we believe is going to help us bring this technology to fruition.”
The Open Acceleration Stack enables wide integration between different computing platforms. It offers a qubit-agnostic, open architecture and integration layer where any QPU developer can plug into Nvidia’s (and later other) GPUs and the high-performance computing ecosystem overall, moving the field to a standardized, scalable industry.
In its latest CUDA-Q release, Nvidia introduced a real-time runtime library called cudaq-realtime, designed to enable microsecond-level communication between GPUs and quantum controllers. QM’s Open Acceleration Stack integrates with this architecture through the OPX1000 control system and its OPNIC interface.
“The framework allows you to build servers into which you can plug in all the different processors you like,” says Dr Sivan. “Let’s say you bought an Nvidia Grace Hopper or Grace Blackwell superchip, FPGA boards from AMD and a decoder from Riverlane. You can plug them all into the server. On the hardware level, this is the framework that allows you to connect them.”
And the beauty of it is that you are not limited by the choice of supplier. Users can buy hardware from multiple companies – say, accelerators from Nvidia, AMD, and soon enough, others, says Dr Sivan. The result is a more flexible and collaborative ecosystem, where different technologies can evolve together rather than in isolation.
The bottom line is that progress no longer depends on qubits alone, but on how well control hardware, AI models, software frameworks and accelerators work together to turn fragile quantum systems into usable machines.
Katia Moskvitch
Katia Moskvitch is a science and technology communications leader with deep expertise in quantum computing and advanced research. A former journalist for Nature, Wired, and the BBC, she later led communications for Europe at IBM Research before joining Quantum Machines. She now oversees communications strategy, editorial direction, and storytelling across product, research, and industry initiatives, helping bridge the gap between frontier quantum technologies and broader audiences.