Ryan Babbush

Ryan is the director of the Quantum Algorithm & Applications Team at Google. The team's mandate is to develop new and more efficient quantum algorithms, discover and analyze new applications of quantum computers, build and open-source tools that accelerate quantum algorithms research and compilation, and design algorithmic experiments to execute on existing and future fault-tolerant quantum devices.
Authored Publications
    Visualizing dynamics of charges and strings in (2+1)D lattice gauge theories
    Tyler Cochran
    Bernhard Jobst
    Yuri Lensky
    Gaurav Gyawali
    Norhan Eassa
    Melissa Will
    Aaron Szasz
    Dmitry Abanin
    Rajeev Acharya
    Laleh Beni
    Trond Andersen
    Markus Ansmann
    Frank Arute
    Kunal Arya
    Abe Asfaw
    Juan Atalaya
    Brian Ballard
    Alexandre Bourassa
    Michael Broughton
    David Browne
    Brett Buchea
    Bob Buckley
    Tim Burger
    Nicholas Bushnell
    Anthony Cabrera
    Juan Campero
    Hung-Shen Chang
    Jimmy Chen
    Benjamin Chiaro
    Jahan Claes
    Agnetta Cleland
    Josh Cogan
    Roberto Collins
    Paul Conner
    William Courtney
    Alex Crook
    Ben Curtin
    Sayan Das
    Laura De Lorenzo
    Paul Donohoe
    Ilya Drozdov
    Andrew Dunsworth
    Alec Eickbusch
    Aviv Elbag
    Mahmoud Elzouka
    Vinicius Ferreira
    Ebrahim Forati
    Austin Fowler
    Brooks Foxen
    Suhas Ganjam
    Robert Gasca
    Élie Genois
    William Giang
    Dar Gilboa
    Raja Gosula
    Alejo Grajales Dau
    Dietrich Graumann
    Alex Greene
    Steve Habegger
    Monica Hansen
    Sean Harrington
    Paula Heu
    Oscar Higgott
    Jeremy Hilton
    Robert Huang
    Ashley Huff
    Bill Huggins
    Cody Jones
    Chaitali Joshi
    Pavol Juhas
    Hui Kang
    Amir Karamlou
    Kostyantyn Kechedzhi
    Trupti Khaire
    Bryce Kobrin
    Alexander Korotkov
    Fedor Kostritsa
    John Mark Kreikebaum
    Vlad Kurilovich
    Dave Landhuis
    Tiano Lange-Dei
    Brandon Langley
    Kim Ming Lau
    Justin Ledford
    Kenny Lee
    Loick Le Guevel
    Wing Li
    Alexander Lill
    Will Livingston
    Aditya Locharla
    Daniel Lundahl
    Aaron Lunt
    Sid Madhuk
    Ashley Maloney
    Salvatore Mandra
    Leigh Martin
    Orion Martin
    Cameron Maxfield
    Seneca Meeks
    Anthony Megrant
    Reza Molavi
    Sebastian Molina
    Shirin Montazeri
    Ramis Movassagh
    Charles Neill
    Michael Newman
    Murray Ich Nguyen
    Chia Ni
    Kris Ottosson
    Alex Pizzuto
    Rebecca Potter
    Orion Pritchard
    Ganesh Ramachandran
    Matt Reagor
    David Rhodes
    Gabrielle Roberts
    Kannan Sankaragomathi
    Henry Schurkus
    Mike Shearn
    Aaron Shorter
    Vladimir Shvarts
    Vlad Sivak
    Spencer Small
    Clarke Smith
    Sofia Springer
    George Sterling
    Jordan Suchard
    Alex Sztein
    Doug Thor
    Mert Torunbalci
    Abeer Vaishnav
    Justin Vargas
    Sergey Vdovichev
    Guifre Vidal
    Steven Waltman
    Shannon Wang
    Brayden Ware
    Kristi Wong
    Cheng Xing
    Jamie Yao
    Ping Yeh
    Bicheng Ying
    Juhwan Yoo
    Grayson Young
    Yaxing Zhang
    Ningfeng Zhu
    Yu Chen
    Vadim Smelyanskiy
    Adam Gammon-Smith
    Frank Pollmann
    Michael Knap
    Nature, 642 (2025), 315–320
    Lattice gauge theories (LGTs) can be used to understand a wide range of phenomena, from elementary particle scattering in high-energy physics to effective descriptions of many-body interactions in materials. Studying dynamical properties of emergent phases can be challenging, as it requires solving many-body problems that are generally beyond perturbative limits. Here we investigate the dynamics of local excitations in a LGT using a two-dimensional lattice of superconducting qubits. We first construct a simple variational circuit that prepares low-energy states that have a large overlap with the ground state; then we create charge excitations with local gates and simulate their quantum dynamics by means of a discretized time evolution. As the electric field coupling constant is increased, our measurements show signatures of transitioning from deconfined to confined dynamics. For confined excitations, the electric field induces a tension in the string connecting them. Our method allows us to experimentally image string dynamics in a (2+1)D LGT, from which we uncover two distinct regimes inside the confining phase: for weak confinement, the string fluctuates strongly in the transverse direction, whereas for strong confinement, transverse fluctuations are effectively frozen. We also demonstrate a resonance condition at which dynamical string breaking is facilitated. Our LGT implementation on a quantum processor presents a new set of techniques for investigating emergent excitations and string dynamics.
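    As a point of reference for the "discretized time evolution" step, the sketch below shows a minimal first-order Trotterized evolution of a small spin chain in Cirq. It is illustrative only: the Hamiltonian terms, couplings, and initial layer are placeholder assumptions, not the (2+1)D gauge-theory circuit run in the paper.

        import numpy as np
        import cirq

        qubits = cirq.LineQubit.range(4)

        def trotter_step(dt, J, h):
            # exp(-i J dt Z_i Z_{i+1}) on neighboring pairs (up to global phase).
            for a, b in zip(qubits, qubits[1:]):
                yield cirq.ZZPowGate(exponent=2 * J * dt / np.pi)(a, b)
            # exp(-i h dt X_i) on every site.
            for q in qubits:
                yield cirq.rx(2 * h * dt)(q)

        circuit = cirq.Circuit()
        # Placeholder low-energy layer; the paper uses a variational state-preparation circuit.
        circuit.append(cirq.ry(0.3)(q) for q in qubits)
        # Discretized (Trotterized) time evolution.
        for _ in range(10):
            circuit.append(trotter_step(dt=0.1, J=1.0, h=0.5))
        state = cirq.Simulator().simulate(circuit).final_state_vector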
    This perspective outlines promising pathways and critical obstacles on the road to developing useful quantum computing applications, drawing on insights from the Google Quantum AI team. We propose a five-stage framework for this process, spanning from theoretical explorations of quantum advantage to the practicalities of compilation and resource estimation. For each stage, we discuss key trends, milestones, and inherent scientific and sociological impediments. We argue that two central stages -- identifying concrete problem instances expected to exhibit quantum advantage, and connecting such problems to real-world use cases -- represent essential and currently under-resourced challenges. Throughout, we touch upon related topics, including the promise of generative artificial intelligence for aspects of this research, criteria for compelling demonstrations of quantum advantage, and the future of compilation as we enter the era of early fault-tolerant quantum computing.
    Optimization by Decoded Quantum Interferometry
    Stephen Jordan
    Mary Wootters
    Alexander Schmidhuber
    Robbie King
    Sergei Isakov
    Nature, 646 (2025), 831–836
    Achieving superpolynomial speed-ups for optimization has long been a central goal for quantum algorithms. Here we introduce decoded quantum interferometry (DQI), a quantum algorithm that uses the quantum Fourier transform to reduce optimization problems to decoding problems. When approximating optimal polynomial fits over finite fields, DQI achieves a superpolynomial speed-up over known classical algorithms. The speed-up arises because the algebraic structure of the problem is reflected in the decoding problem, which can be solved efficiently. We then investigate whether this approach can achieve a speed-up for optimization problems that lack an algebraic structure but have sparse clauses. These problems reduce to decoding low-density parity-check codes, for which powerful decoders are known. To test this, we construct a max-XORSAT instance for which DQI finds an approximate optimum substantially faster than general-purpose classical heuristics, such as simulated annealing. Although a tailored classical solver can outperform DQI on this instance, our results establish that combining quantum Fourier transforms with powerful decoding primitives provides a promising new path towards quantum speed-ups for hard optimization problems.
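    One way to see the reduction for max-XORSAT, sketched here in notation of our own that does not appear in the abstract: for constraints $Bx = v$ over $\mathbb{F}_2$, write the objective as $f(x) = \sum_{i=1}^{m} (-1)^{v_i + b_i \cdot x}$, where $b_i$ is the $i$-th row of $B$. Then $H^{\otimes n} \sum_{x} f(x)\,|x\rangle \propto \sum_{i} (-1)^{v_i}\,|b_i\rangle$, since each term $\sum_x (-1)^{b_i \cdot x}|x\rangle$ Hadamard-transforms to $|b_i\rangle$. Roughly speaking, states whose amplitudes are degree-$\ell$ polynomials in $f(x)$ transform to superpositions over sums of up to $\ell$ rows of $B$, and recovering which rows were summed is exactly a decoding task, which DQI delegates to a decoder implemented reversibly.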
    Decoded Quantum Interferometry (DQI) provides a framework for superpolynomial quantum speedups by reducing certain optimization problems to reversible decoding tasks. We apply DQI to the Optimal Polynomial Intersection (OPI) problem, whose dual code is Reed-Solomon (RS). We establish that DQI for OPI is the first known candidate for verifiable quantum advantage with optimal asymptotic speedup: solving instances with classical hardness $O(2^N)$ requires only $\widetilde{O}(N)$ quantum gates, matching the theoretical lower bound. Realizing this speedup requires highly efficient reversible RS decoders. We introduce novel quantum circuits for the Extended Euclidean Algorithm, the decoder's bottleneck. Our techniques, including a new representation for implicit Bézout coefficient access and optimized in-place architectures, reduce the leading-order space complexity to the theoretical minimum of $2nb$ qubits while significantly lowering gate counts. These improvements are broadly applicable, including to Shor's algorithm for the discrete logarithm. We analyze OPI over binary extension fields $GF(2^b)$, assess hardness against new classical attacks, and identify resilient instances. Our resource estimates show that classically intractable OPI instances (requiring $>10^{23}$ classical trials) can be solved with approximately 5.72 million Toffoli gates. This is substantially less than the count required for breaking RSA-2048, positioning DQI as a compelling candidate for practical, verifiable quantum advantage.
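    For readers unfamiliar with the decoder's bottleneck, the classical recurrence being made reversible is the extended Euclidean algorithm; a plain, irreversible integer version is sketched below purely for orientation. The paper's circuits run this over polynomials in $GF(2^b)$ and keep the Bézout coefficients implicit, which this toy version does not attempt.

        def extended_euclid(a, b):
            # Returns (g, s, t) with g = gcd(a, b) and s*a + t*b = g.
            r0, r1 = a, b
            s0, s1 = 1, 0
            t0, t1 = 0, 1
            while r1 != 0:
                q = r0 // r1
                r0, r1 = r1, r0 - q * r1
                s0, s1 = s1, s0 - q * s1
                t0, t1 = t1, t0 - q * t1
            return r0, s0, t0

        g, s, t = extended_euclid(240, 46)   # g = 2 and s*240 + t*46 == 2
        assert g == s * 240 + t * 46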
    Quantum Algorithms for Linear Matrix Equations
    Rolando Somma
    Guang Hao Low
    Dominic Berry
    arXiv:2508.02822 (2025)
    We describe an efficient quantum algorithm for solving the linear matrix equation AX+XB=C, where A, B and C are given complex matrices and X is unknown. This is known as the Sylvester equation, a fundamental equation with applications in control theory and physics. Rather than encoding the solution in a quantum state in a fashion analogous to prior quantum linear algebra solvers, our approach constructs the solution matrix X in a block-encoding, rescaled by some factor. This allows us to obtain certain properties of the entries of X exponentially faster than would be possible from preparing X as a quantum state. The query and gate complexities of the quantum circuit that implements this block-encoding are almost linear in a condition number that depends on A and B, and depend logarithmically on the dimension and inverse error. We show how our quantum circuits can solve BQP-complete problems efficiently, discuss potential applications and extensions of our approach and its connection to the Riccati equation, and comment on open problems.
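    For context, the Sylvester equation has a standard classical reformulation as an $n^2 \times n^2$ linear system via vectorization, which is the kind of structure a linear-algebra-style quantum solver can exploit. The NumPy/SciPy snippet below checks that identity classically; it is not the quantum algorithm itself, and the random test matrices are our own choice.

        import numpy as np
        from scipy.linalg import solve_sylvester

        rng = np.random.default_rng(0)
        n = 4
        A, B, C = (rng.standard_normal((n, n)) for _ in range(3))

        # Classical reference solution of A X + X B = C.
        X = solve_sylvester(A, B, C)

        # Vectorized form: (I kron A + B^T kron I) vec(X) = vec(C),
        # using column-stacking (Fortran order) for vec.
        M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
        x = np.linalg.solve(M, C.reshape(-1, order="F"))
        assert np.allclose(x.reshape((n, n), order="F"), X)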
    Quartic Quantum Speedups for Planted Inference Problems
    Alexander Schmidhuber
    Ryan O'Donnell
    Physical Review X, 15 (2025), pp. 021077
    We describe a quantum algorithm for the Planted Noisy kXOR problem (also known as sparse Learning Parity with Noise) that achieves a nearly quartic (4th power) speedup over the best known classical algorithm while also only using logarithmically many qubits. Our work generalizes and simplifies prior work of Hastings, by building on his quantum algorithm for the Tensor Principal Component Analysis (PCA) problem. We achieve our quantum speedup using a general framework based on the Kikuchi Method (recovering the quartic speedup for Tensor PCA), and we anticipate it will yield similar speedups for further planted inference problems. These speedups rely on the fact that planted inference problems naturally instantiate the Guided Sparse Hamiltonian problem. Since the Planted Noisy kXOR problem has been used as a component of certain cryptographic constructions, our work suggests that some of these are susceptible to super-quadratic quantum attacks.
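    To make "planted noisy kXOR" concrete, a small instance generator is sketched below. The parameterization is our own toy choice; the regime of interest in the paper involves much larger problem sizes and carefully chosen clause densities.

        import numpy as np

        def planted_noisy_kxor(n, m, k, noise, rng):
            # Plant a secret assignment and sample m noisy parity constraints on k variables each.
            secret = rng.integers(0, 2, size=n)
            clauses = [rng.choice(n, size=k, replace=False) for _ in range(m)]
            rhs = np.array([secret[c].sum() % 2 for c in clauses])
            flips = rng.random(m) < noise          # each constraint flipped with probability `noise`
            return clauses, rhs ^ flips, secret

        rng = np.random.default_rng(1)
        clauses, rhs, secret = planted_noisy_kxor(n=30, m=300, k=4, noise=0.1, rng=rng)
        # The planted assignment satisfies roughly a (1 - noise) fraction of the constraints.
        frac = np.mean([(secret[c].sum() % 2) == b for c, b in zip(clauses, rhs)])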
    Rapid Initial-State Preparation for the Quantum Simulation of Strongly Correlated Molecules
    Dominic Berry
    Yu Tong
    Alec White
    Tae In Kim
    Lin Lin
    Seunghoon Lee
    Garnet Chan
    PRX Quantum, 6 (2025), pp. 020327
    Studies on quantum algorithms for ground-state energy estimation often assume perfect ground-state preparation; however, in reality the initial state will have imperfect overlap with the true ground state. Here, we address that problem in two ways: by faster preparation of matrix-product-state (MPS) approximations and by more efficient filtering of the prepared state to find the ground-state energy. We show how to achieve unitary synthesis with a Toffoli complexity about 7 × lower than that in prior work and use that to derive a more efficient MPS-preparation method. For filtering, we present two different approaches: sampling and binary search. For both, we use the theory of window functions to avoid large phase errors and minimize the complexity. We find that the binary-search approach provides better scaling with the overlap at the cost of a larger constant factor, such that it will be preferred for overlaps less than about 0.003. Finally, we estimate the total resources to perform ground-state energy estimation of Fe-S cluster systems, including the FeMo cofactor, by estimating the overlap of different MPS initial states with potential ground states of the FeMo cofactor using an extrapolation procedure. With a modest MPS bond dimension of 4000, our procedure produces an estimate of approximately 0.9 overlap squared with a candidate ground state of the FeMo cofactor, producing a total resource estimate of 7.3 × 10^10 Toffoli gates; neglecting the search over candidates and assuming the accuracy of the extrapolation, this validates prior estimates that have used perfect ground-state overlap. This presents an example of a practical path to prepare states of high overlap in a challenging-to-compute chemical system.
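    The binary-search filtering strategy can be pictured as bisection on a trial energy, with phase estimation playing the role of the comparator. The sketch below uses a stand-in oracle `is_ground_below` and is only meant to convey the control flow, not the paper's window-function construction or its cost analysis.

        def ground_energy_binary_search(is_ground_below, e_min, e_max, resolution):
            # Bisect on a trial energy E; `is_ground_below(E)` stands in for a
            # filtered phase-estimation test of whether the ground energy lies below E.
            lo, hi = e_min, e_max
            while hi - lo > resolution:
                mid = 0.5 * (lo + hi)
                if is_ground_below(mid):
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)

        # Example with a purely classical stand-in oracle for a known ground energy of -3.7:
        estimate = ground_energy_binary_search(lambda e: -3.7 < e, -10.0, 0.0, 1e-3)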
    The solution of linear systems of equations is the basis of many other quantum algorithms, and recent results provided an algorithm with optimal scaling in both the condition number κ and the allowable error ϵ [PRX Quantum 3, 040303 (2022)]. That work was based on the discrete adiabatic theorem, and worked out an explicit constant factor for an upper bound on the complexity. Here we show via numerical testing on random matrices that the constant factor is in practice about 1,200 times smaller than the upper bound found numerically in the previous results. That means that this approach is far more efficient than might naively be expected from the upper bound. In particular, it is over an order of magnitude more efficient than using a randomized approach from [arXiv:2305.11352] that claimed to be more efficient.
    Shadow Hamiltonian Simulation
    Rolando Somma
    Robbie King
    Tom O'Brien
    Nature Communications, 16 (2025), pp. 2690
    Simulating quantum dynamics is one of the most important applications of quantum computers. Traditional approaches for quantum simulation involve preparing the full evolved state of the system and then measuring some physical quantity. Here, we present a different and novel approach to quantum simulation that uses a compressed quantum state that we call the "shadow state". The amplitudes of this shadow state are proportional to the time-dependent expectations of a specific set of operators of interest, and it evolves according to its own Schrödinger equation. This evolution can be simulated on a quantum computer efficiently under broad conditions. Applications of this approach to quantum simulation problems include simulating the dynamics of exponentially large systems of free fermions or free bosons, the latter example recovering a recent algorithm for simulating exponentially many classical harmonic oscillators. These simulations are hard for classical methods and also for traditional quantum approaches, as preparing the full states would require exponential resources. Shadow Hamiltonian simulation can also be extended to simulate expectations of more complex operators such as two-time correlators or Green's functions, and to study the evolution of operators themselves in the Heisenberg picture.
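    A quick way to see why a "shadow state" of expectation values can obey its own Schrödinger equation (our own summary, in notation not taken from the abstract): in the Heisenberg picture $\tfrac{d}{dt}\langle O_j\rangle = i\langle [H, O_j]\rangle$, so if the chosen operator set closes under commutation with $H$, i.e. $[H, O_j] = \sum_k M_{jk} O_k$, then the vector $v_j(t) = \langle O_j(t)\rangle$ satisfies the linear equation $\dot v = i M v$. For quadratic (free-fermion or free-boson) Hamiltonians such a closed set contains only polynomially many operators even though the underlying system is exponentially large, which is what makes the compressed simulation efficient after $v$ is normalized and encoded as a quantum state.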
    The FLuid Allocation of Surface Code Qubits (FLASQ) Cost Model for Early Fault-Tolerant Quantum Algorithms
    Bill Huggins
    Amanda Xu
    Matthew Harrigan
    Christopher Kang
    Guang Hao Low
    Austin Fowler
    arXiv:2511.08508 (2025)
    Holistic resource estimates are essential for guiding the development of fault-tolerant quantum algorithms and the computers they will run on. This is particularly true when we focus on highly-constrained early fault-tolerant devices. Many attempts to optimize algorithms for early fault-tolerance focus on simple metrics, such as the circuit depth or T-count. These metrics fail to capture critical overheads, such as the spacetime cost of Clifford operations and routing, or they miss key optimizations. We propose the FLuid Allocation of Surface code Qubits (FLASQ) cost model, tailored for architectures that use a two-dimensional lattice of qubits to implement the two-dimensional surface code. FLASQ abstracts away the complexity of routing by assuming that ancilla space and time can be fluidly rearranged, allowing for the tractable estimation of spacetime volume while still capturing important details neglected by simpler approaches. At the same time, it enforces constraints imposed by the circuit's measurement depth and the processor's reaction time. We apply FLASQ to analyze the cost of a standard two-dimensional lattice model simulation, finding that modern advances (such as magic state cultivation and the combination of quantum error correction and mitigation) reduce both the time and space required for this task by an order of magnitude compared with previous estimates. We also analyze the Hamming weight phasing approach to synthesizing parallel rotations, revealing that despite its low T-count, the overhead from imposing a 2D layout and from its use of additional ancilla qubits will make it challenging to benefit from in early fault-tolerance. We hope that the FLASQ cost model will help to better align early fault-tolerant algorithmic design with actual hardware realization costs without demanding excessive knowledge of quantum error correction from quantum algorithmists.
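    As a rough illustration of the kind of accounting such a cost model formalizes, the toy estimate below multiplies a logical-qubit footprint by a round count and lower-bounds the wall-clock time by the reaction-limited measurement depth. All formulas and constants here are placeholders for exposition; they are not the FLASQ model itself.

        def toy_surface_code_cost(logical_qubits, t_count, measurement_depth,
                                  code_distance, cycle_time_s, reaction_time_s,
                                  routing_overhead=0.5):
            # Placeholder accounting: one code-distance-deep logical timestep per T layer.
            rounds = t_count * code_distance
            qubits = logical_qubits * (1 + routing_overhead)   # extra ancilla/routing space
            volume = qubits * rounds                           # spacetime volume in qubit-rounds
            # Wall-clock time cannot beat the reaction-limited measurement depth.
            runtime_s = max(rounds * cycle_time_s,
                            measurement_depth * reaction_time_s)
            return volume, runtime_s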