

Among other research, the Global Technology Applied Research (GTAR) Center at JPMorgan Chase is experimenting with quantum algorithms for constrained optimization, applying them to a Natural Language Processing (NLP) task, document summarization, which has potential application points across the firm.
Marco Pistoia, Ph.D., Managing Director, Distinguished Engineer, and Head of Global Technology Applied Research, recently led the research effort around a constrained version of the Quantum Approximate Optimization Algorithm (QAOA) that can extract and summarize the most important information from legal documents and contracts. This work was recently published in Scientific Reports and deemed the “largest demonstration to date of constrained optimization on a gate-based quantum computer.”
JPMorgan Chase was one of the early-access users of the ԹϺ H1-1 system when it was upgraded from 12 qubits with 3 parallel gating zones to 20 qubits with 5 parallel gating zones. The research team at JPMorgan Chase found that the 20-qubit machine returned significantly better results than random guessing, without any error mitigation, despite the circuit depth exceeding 100 two-qubit gates. The circuits used were deeper than any quantum optimization circuits previously executed for any problem. “With 20 qubits, we could summarize bigger documents and the results were excellent,” Pistoia said. “We saw a difference, both in terms of the number of qubits and the quality of qubits.”
JPMorgan Chase has been working with ԹϺ’s quantum hardware since 2020, before the merger that formed the company, and Pistoia has seen the evolution of the machines over time as companies raced to add qubits. “It was clear early on that the number of qubits doesn't matter,” he said. “In the short term, we need computers whose qubits are reliable and give us the results that we expect based on the reference values.”
Jenni Strabley, Sr. Director of Offering Management at ԹϺ, stated, “Quality counts when it comes to quantum computers. We know our users, like JPMC, expect that every time they use our H-Series quantum computers, they get the same, repeatable, high-quality performance. Quality isn’t typically part of the day-to-day conversation around quantum computers, but it needs to be for users like Marco and his team to progress in their research.”
More broadly, the researchers claimed that “this demonstration is a testament to the overall progress of quantum computing hardware. Our successful execution of complex circuits for constrained optimization depended heavily on all-to-all connectivity, as the circuit depth would have significantly increased if the circuit had to be compiled to a nearest-neighbor architecture.”
The objective of the experiment was to produce a condensed text summary by selecting sentences verbatim from the original text. The specific goal was to maximize the centrality and minimize the redundancy of the sentences in the summary, subject to a limit on the number of sentences selected.
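To make that objective concrete, here is a minimal classical sketch of one plausible formulation (an illustration, not necessarily the exact objective function used in the paper): each sentence gets a binary variable, summaries are rewarded for centrality and penalized for pairwise redundancy, and exactly k sentences must be chosen.

```python
import numpy as np
from itertools import combinations

def summary_score(x, centrality, similarity, lam=1.0):
    """Score a 0/1 selection vector x: reward central sentences,
    penalize pairwise redundancy between selected sentences.
    `similarity` is assumed to have a zero diagonal."""
    x = np.asarray(x, dtype=float)
    return centrality @ x - lam * (x @ similarity @ x)

def best_summary_bruteforce(centrality, similarity, k):
    """Exhaustive search over all k-sentence summaries. This classical
    baseline scales combinatorially; the quantum algorithms discussed
    below search the same constrained space."""
    n = len(centrality)
    best_score, best_x = -np.inf, None
    for subset in combinations(range(n), k):
        x = np.zeros(n)
        x[list(subset)] = 1
        score = summary_score(x, centrality, similarity)
        if score > best_score:
            best_score, best_x = score, x
    return best_score, best_x
```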
The JPMorgan Chase researchers used all 20 qubits of the H1-1 and executed circuits with two-qubit gate depths of up to 159 and two-qubit gate counts of up to 765. The team used IBM’s Qiskit for circuit manipulation and noiseless simulation. For the hardware experiments, they optimized the circuits for H1-1’s native gate set. They also ran the quantum circuits on an emulator of the H1-1 device.
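The article doesn’t name the compilation tool, but ԹϺ’s TKET compiler is the usual route to H1-1’s native gate set. A plausible flow for retargeting a Qiskit-built circuit (a sketch assuming the pytket-qiskit and pytket-quantinuum extensions and valid ԹϺ API credentials) might look like this:

```python
from qiskit import QuantumCircuit
from pytket.extensions.qiskit import qiskit_to_tk
from pytket.extensions.quantinuum import QuantinuumBackend

# Build (or import) a circuit with Qiskit, then convert to TKET's format
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()
tk_circ = qiskit_to_tk(qc)

# Rebase and optimize for the H1-1 device family (H1-1E is the emulator)
backend = QuantinuumBackend(device_name="H1-1E")
compiled = backend.get_compiled_circuit(tk_circ, optimisation_level=2)
print(compiled.n_2qb_gates())  # two-qubit gate count after compilation
```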
The JPMorgan Chase research team tested three algorithms: L-VQE, QAOA and XY-QAOA. L-VQE was easy to execute on the hardware, but good parameters for it were difficult to find. For QAOA and XY-QAOA, good parameters were easier to find, but the circuits were more expensive to execute. The XY-QAOA algorithm provided the best results.
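The article doesn’t explain what distinguishes XY-QAOA, but the key idea is that its XY mixer preserves the number of 1s in the bitstring (here, the number of selected sentences), so the constraint is enforced by construction rather than by penalty terms. A minimal Qiskit sketch of a Trotterized ring XY mixer (illustrative, not the paper’s exact circuit):

```python
from qiskit import QuantumCircuit

def xy_mixer_layer(n_qubits: int, beta: float) -> QuantumCircuit:
    """One Trotterized layer of the ring XY mixer, applying
    exp(-i*beta*(XX+YY)) on neighboring pairs. Unlike the standard
    X mixer, it conserves Hamming weight, i.e., the number of
    selected sentences."""
    qc = QuantumCircuit(n_qubits)
    for parity in (0, 1):  # even pairs, then odd pairs, around the ring
        for i in range(parity, n_qubits, 2):
            j = (i + 1) % n_qubits
            qc.rxx(2 * beta, i, j)
            qc.ryy(2 * beta, i, j)
    return qc
```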
Dr. Pistoia noted that constrained optimization problems, such as extractive summarization, are ubiquitous in banking, so finding high-quality solutions to them can positively impact customers across all lines of business. It is also worth noting that the optimization algorithm built for this experiment can be applied in other industries (e.g., transportation), because the underlying algorithm is often the same.
Even with the quality of the results from this extractive summarization work, the NLP algorithm is not ready to roll out just yet. “Quantum computers are not yet that powerful, but we're getting closer,” Pistoia said. “These results demonstrate how algorithm and hardware progress is bringing the prospect of quantum advantage closer, which can be leveraged across many industries.”
ԹϺ, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. ԹϺ’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, ԹϺ leads the quantum computing revolution across continents.
Progress in quantum computing is measured by hardware advances plus the algorithms and quantum error-correction codes that turn quantum systems into useful computational tools.
Thanks to recent hardware advances, researchers are increasingly sharpening their tools to probe the performance of quantum algorithms and understand how they behave in realistic conditions – where stability, system architecture and algorithm design all shape performance.
A new Denmark-based collaboration between the University of Southern Denmark (SDU), ԹϺ, and the Danish e-Infrastructure Consortium (DeiC) will utilize ԹϺ Helios. Researchers at SDU’s Centre for Quantum Mathematics, led by Jørgen Ellegaard Andersen, will use Helios to pursue research into topological quantum computing.
Their work could help explain how and why successful quantum algorithms perform as they do, informing the development of high-performance algorithms suited to emerging quantum systems. They’re exploring the scientific foundations that support future quantum applications across areas including pharmaceuticals, finance, and defense.
“We are thrilled to gain access to ԹϺ’s high-fidelity Helios system. This collaboration gives us a unique opportunity to test the limits of our algorithms and evaluate system performance, while advancing fundamental research and laying the foundation for future applications.”
— Professor Jørgen Ellegaard Andersen, Director of the Centre for Quantum Mathematics at University of Southern Denmark
Topological quantum computing is an area of research that connects quantum computation with deep mathematical structures. It includes the study of error-correcting codes known as surface codes, which encode logical quantum information in the global properties of a system of many physical qubits.
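To make that “global encoding” concrete, here is a small illustration (not the SDU team’s tooling, which isn’t specified here) using the open-source stim package to generate a standard rotated surface-code memory experiment:

```python
import stim

# Distance-3 rotated surface code: one logical qubit encoded in the joint
# state of a patch of physical qubits, measured over 3 stabilizer rounds,
# with mild depolarizing noise after each two-qubit Clifford gate.
circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=3,
    rounds=3,
    after_clifford_depolarization=0.001,
)
print(circuit.num_qubits)  # physical qubits backing a single logical qubit

# Sample detector outcomes; a decoder would use these to infer errors.
sampler = circuit.compile_detector_sampler()
detection_events = sampler.sample(shots=1000)
```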
The research team will explore how these codes behave, and how they may support the development of fault-tolerant quantum algorithms under realistic conditions.
This distinction between theory and practical implementation matters. In theory, topological approaches offer a rich framework for designing algorithms and error-correcting codes. In practice, researchers need to understand how those ideas perform when implemented on real systems, where questions of noise, stability, overhead, and scaling become central. The collaboration will allow the SDU team to investigate these questions directly.
Beyond individual algorithms and codes, the research will also develop tools for benchmarking quantum processors. The goal is to develop new ways to characterize fidelity and stability in regimes that can be difficult to access.
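As a simple illustration of the kind of measurement involved (a generic sketch, not the team’s benchmarking method), a “mirror” test runs a circuit followed by its inverse and checks how often the all-zeros state survives; on a noisy device, the survival rate decays with circuit depth:

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

def mirror_survival(circ: QuantumCircuit, shots: int = 2000) -> float:
    """Run a measurement-free circuit followed by its inverse and report
    how often the all-zeros state survives; a rough proxy for fidelity."""
    mirror = circ.compose(circ.inverse())
    mirror.measure_all()
    counts = AerSimulator().run(mirror, shots=shots).result().get_counts()
    return counts.get("0" * circ.num_qubits, 0) / shots

probe = QuantumCircuit(2)
probe.h(0)
probe.cx(0, 1)
print(mirror_survival(probe))  # ~1.0 on a noiseless simulator
```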
The team will also explore hybrid quantum–classical approaches, including machine-learning techniques assisted by quantum hardware, to study the mathematical structures at the heart of topological quantum computing. This work reflects a broader field of research in which quantum and classical methods are used together, each contributing to parts of a computational problem.
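A hybrid quantum–classical workflow typically looks like a feedback loop: a classical optimizer proposes circuit parameters, a quantum device (or simulator) evaluates them, and the loop repeats. A deliberately minimal sketch (a toy example, not the team’s methods):

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit_aer import AerSimulator

# One-parameter ansatz: rotate a single qubit, then measure.
theta = Parameter("theta")
ansatz = QuantumCircuit(1)
ansatz.ry(theta, 0)
ansatz.measure_all()

simulator = AerSimulator()

def cost(value: float, shots: int = 1000) -> float:
    """Objective evaluated on quantum hardware or a simulator:
    here, the negative probability of measuring |1>."""
    bound = ansatz.assign_parameters({theta: value})
    counts = simulator.run(bound, shots=shots).result().get_counts()
    return -counts.get("1", 0) / shots

# Classical outer loop (a coarse grid search stands in for a real optimizer).
best_theta = min(np.linspace(0, np.pi, 21), key=cost)
print(best_theta)  # ~pi, which maps |0> to |1>
```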
The collaboration reflects the growing role of national quantum infrastructure in supporting research and talent development. Denmark has a long tradition of scientific innovation, and this collaboration is intended to support the country’s continued development in quantum technology.
The initiative is supported by DeiC, which played a central role in securing funding and enabling access to ԹϺ’s systems. DeiC has been assigned a particular role in developing and coordinating quantum infrastructure initiatives for the benefit of universities and industry, operating without its own commercial, sectoral, or geographical interests. This includes securing dedicated access to quantum computers, providing advisory services and supporting the development of new talent in the Danish quantum sector.
“DeiC’s special effort to secure funding and access for this research initiative is rooted in our organization’s role in relation to the Danish Government’s strategy for quantum technology.”
— Henrik Navntoft Sønderskov, Head of Quantum at Danish e-Infrastructure Consortium
This collaboration promises to accelerate the development of practical algorithms. It is grounded in fundamental science – but its focus is practical: discovering and testing mathematical approaches to topological quantum computing that can be implemented, evaluated, and improved on real quantum hardware.
That work requires both theoretical insight and access to a system such as Helios capable of supporting meaningful scientific work.

This month, ԹϺ welcomed its global user community to the first-ever Q-Net Connect, an annual forum designed to spark collaboration, share insights, and accelerate innovation across our full-stack quantum computing platforms. Over two days, users came together not only to learn from one another, but to build the relationships and momentum that we believe will help define the next chapter of quantum computing.
Q-Net Connect 2026 drew over 170 attendees from around the world to Denver, Colorado, including representatives from commercial enterprises and startups, academia and research institutions, and the public sector and non-profits, all of them users of ԹϺ systems.
The program was packed with inspiring keynotes, technical tracks, and customer presentations. Attendees heard from leaders at ԹϺ, as well as our partners at NVIDIA, JPMorgan Chase and BlueQubit; professors from the University of New Mexico, the University of Nottingham and Harvard University; representatives of NIST and national labs including Oak Ridge National Laboratory, Sandia National Laboratories and Los Alamos National Laboratory; and other distinguished guests from across the global quantum ecosystem.
The mission of the ԹϺ Q-Net user community is to create a space for shared learning, collaboration and connection for those who adopt ԹϺ’s hardware, software and middleware platform. At this year’s Q-Net Connect, we recognized four organizations that made notable contributions to championing this mission.
Congratulations, again, and thank you to everyone who contributed to the success of the first Q-Net Connect!
Q-Net offers year‑round support through user access, developer tools, documentation, training, webinars, and events. Members enjoy many exclusive benefits, including being the first to hear about new content, publications and promotional offers.
By joining the community, you will be invited to exclusive gatherings to hear about the latest breakthroughs and connect with industry experts driving quantum innovation. Members also get access to Q‑Net Connect recordings and stay connected for future community updates.

In a follow-up to our recent work with Hiverge using AI to discover algorithms for quantum chemistry, we’ve teamed up with Hiverge, Amazon Web Services (AWS) and NVIDIA to explore using AI to improve algorithms for combinatorial optimization.
With the rapid rise of Large Language Models (LLMs), people have begun asking: what if AI agents could serve as on-demand algorithm factories? Together with Hiverge, an algorithm discovery company, AWS, and NVIDIA, we have been exploring how LLMs can accelerate quantum computing research.
Hiverge – named for Hive, an AI that can develop algorithms – aims to make quantum algorithm design more accessible to researchers by translating high-level problem descriptions, written mostly in natural language, into executable quantum circuits. The Hive takes the researcher’s initial sketch of an algorithm, as well as any special constraints the researcher enumerates, and evolves it into a new algorithm that better meets the researcher’s needs. The output is expressed in a familiar programming language, like Guppy or CUDA-Q, making it particularly easy to implement.
The AI is called a “Hive” because it is a collective of LLM agents, all of which edit the same codebase. In this work, the Hive was made up of LLM powerhouses such as Gemini, ChatGPT, Claude and Llama, accessed in part through AWS’s Amazon Bedrock service. Many models are included because researchers know that diversity is a strength – just like a team of human researchers working in a group, a variety of perspectives often leads to the strongest result.
Once the LLMs are assembled, the Hive calls on them to do the work of writing the desired algorithm; no new training is required. The algorithms are then executed and their ‘fitness’ (how well they solve the problem) is measured. Unfit programs do not survive, while the fittest ones evolve to the next generation. This process repeats, much like the evolutionary process of nature itself.
After evolution, the fittest algorithm is selected by the researchers and tested on other instances of the problem. This is a crucial step as the researchers want to understand how well it can generalize.
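Hiverge hasn’t published its internals, so the loop can only be sketched schematically. In the toy sketch below, `llm_propose_edit` and `fitness` are hypothetical stand-ins for the real components (LLM agents proposing code edits, and simulation-based scoring):

```python
import random

# Hypothetical stand-ins for the real components: an LLM agent proposing
# a code edit, and a fitness function that runs the program and scores it.
def llm_propose_edit(program: str) -> str:
    return program + f"  # edit {random.randint(0, 999)}"

def fitness(program: str) -> float:
    return random.random()

def evolve(seed: str, generations: int = 10, population: int = 20) -> str:
    """Schematic evolutionary loop as described above: propose, score, select."""
    pool = [seed]
    for _ in range(generations):
        # Each agent in the "hive" mutates a surviving candidate program
        candidates = [llm_propose_edit(random.choice(pool))
                      for _ in range(population)]
        # Execute candidates (e.g., in simulation) and rank by fitness
        survivors = sorted(candidates, key=fitness, reverse=True)
        pool = survivors[: max(1, population // 4)]  # fittest quartile survives
    return pool[0]  # the winner is then tested on held-out problem instances
```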
In this most recent work, the joint team explored how AI can assist in the discovery of heuristic quantum optimization algorithms, a class of algorithms aimed at improving efficiency across critical workstreams. These span challenges like optimal power grid dispatch and storage placement, arranging fuel inside nuclear reactors, and molecular design and reaction pathway optimization in drug, material, and chemical discovery. In these settings, better solutions could translate into greater operational efficiency, dramatic cost reductions, and faster innovation.

In other AI approaches, such as reinforcement learning, models are trained to solve a problem, but the resulting “algorithm” is effectively hidden within a neural network. Here, the algorithm is written in Guppy or CUDA-Q (or Python), making it human-interpretable and easier to deploy on new problem instances.
This work leveraged the NVIDIA CUDA-Q platform, running on powerful NVIDIA GPUs made accessible by AWS. Its state-of-the-art accelerated computing was crucial: the research explored highly complex problems, challenges that lie at the edge of classical computing capacity. Before running anything on ԹϺ’s quantum computer, the researchers first used NVIDIA accelerated computing to simulate the quantum algorithms and assess their fitness. Once a promising algorithm was discovered, it could then be deployed on quantum hardware, creating an exciting new approach for scaling quantum algorithm design.
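For readers unfamiliar with CUDA-Q, a minimal Python example of the simulate-first workflow might look like this (a generic sketch, not the project’s code; the “nvidia” target assumes an available NVIDIA GPU, with “qpp-cpu” as a CPU fallback):

```python
import cudaq

cudaq.set_target("nvidia")  # GPU-accelerated statevector simulator

@cudaq.kernel
def ghz(n: int):
    # Prepare an n-qubit GHZ state, a stand-in for a candidate circuit
    qubits = cudaq.qvector(n)
    h(qubits[0])
    for i in range(n - 1):
        x.ctrl(qubits[i], qubits[i + 1])
    mz(qubits)

# Assess the circuit in simulation before any hardware run
counts = cudaq.sample(ghz, 20, shots_count=1000)
print(counts)
```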
More broadly, this work points to one of many ways in which classical compute, AI, and quantum computing are most powerful in symbiosis. AI can be used to improve quantum, as demonstrated here, just as quantum can be used to extend AI. Looking ahead, we envision AI evolving programs that express a combination of algorithmic primitives, much as human mathematicians such as Peter Shor and Lov Grover have done. After all, both humans and AI can learn from each other.