Scientists demonstrate quantum exponential scaling advantage
Scientists demonstrate unconditional exponential quantum scaling advantage using two 127-qubit computers
by University of Southern California
Daniel Lidar, holder of the Viterbi Professorship in Engineering and professor of electrical and computer engineering at the USC Viterbi School of Engineering, has been iterating on quantum error correction, and in a new study with collaborators at USC and Johns Hopkins has demonstrated a quantum exponential scaling advantage using two 127-qubit IBM Quantum Eagle processor-powered quantum computers over the cloud. Says Lidar, who is also a professor of chemistry and physics at the USC Dornsife College of Letters, Arts and Sciences: “The quantum computing community is showing how quantum processors are beginning to outperform their classical counterparts in targeted tasks, and are stepping into a territory classical computing simply can’t reach. Our result shows that today’s quantum computers already firmly lie on the side of a scaling quantum advantage.” He adds that with this new research, “the performance separation cannot be reversed because the exponential speedup we’ve demonstrated is, for the first time, unconditional.” In other words, the quantum performance advantage is becoming increasingly difficult to dispute.
Recently, researchers led by Dr. Daniel Lidar’s team at the University of Southern California (USC) published results in *Physical Review X* demonstrating an exponential algorithmic speedup for a modified version of Simon’s problem on IBM quantum computers. Put simply, the team ran circuits on noisy quantum hardware with up to 126 qubits, showing that as the problem grew in size, the quantum speedup scaled exponentially. An exponential scaling speedup means that as the problem size increases with the addition of variables, the performance gap between quantum and classical keeps growing, to the advantage of the quantum side. In 1994, Daniel Simon developed an algorithm providing theoretical proof that this problem, when run on ideal, noiseless quantum computers, could be solved in very few queries to the quantum oracle, an exponential advantage over the best probabilistic classical algorithms. Lidar and team sought to take the promise of exponential speedup for Simon’s algorithm from paper to machine by demonstrating it on today’s pre-fault-tolerant quantum hardware, ultimately running their experiment on two 127-qubit IBM Quantum Eagle processors, `ibm_brisbane` and `ibm_sherbrooke`.
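To see where the exponential gap in Simon’s problem comes from, the quantum subroutine can be simulated classically: each run of the circuit returns a uniformly random bit string orthogonal (mod 2) to the hidden period `s`, and roughly `n` such runs pin down `s` by linear algebra over GF(2), whereas the best classical strategies need exponentially many oracle queries. The sketch below is illustrative only (function names are ours, not the authors’ code); it replaces the quantum hardware with a sampler that produces the same measurement statistics an ideal circuit would.

```python
import random

def quantum_query(s, n):
    # Simulate one run of Simon's quantum subroutine: each run returns a
    # uniformly random n-bit string y satisfying y . s = 0 (mod 2).
    while True:
        y = random.getrandbits(n)
        if bin(y & s).count("1") % 2 == 0:
            return y

def recover_secret(s, n, max_queries=10_000):
    # Collect y-vectors until they span the (n-1)-dimensional subspace
    # orthogonal to s, then solve for s by elimination over GF(2).
    # Assumes s != 0 (the non-trivial case of Simon's problem).
    basis = {}  # maps pivot-bit position -> reduced row (int bitmask)
    queries = 0
    while len(basis) < n - 1 and queries < max_queries:
        y = quantum_query(s, n)
        queries += 1
        for pivot in sorted(basis, reverse=True):  # reduce by highest pivot first
            if (y >> pivot) & 1:
                y ^= basis[pivot]
        if y:  # linearly independent of what we have so far
            basis[y.bit_length() - 1] = y
    # With n-1 independent constraints, the solution set is exactly {0, s},
    # so the unique nonzero candidate orthogonal to every row is s.
    for cand in range(1, 1 << n):
        if all(bin(cand & row).count("1") % 2 == 0 for row in basis.values()):
            return cand, queries
    raise RuntimeError("secret not recovered within the query budget")
```

Note the asymmetry: the simulated quantum side needs only about `n` queries (each adds one linear constraint), while a classical algorithm querying the oracle directly must search for a collision, which takes on the order of `2^(n/2)` queries.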
# Exponential quantum advantage in processing massive classical data

Broadly applicable quantum advantage, particularly in classical data processing and machine learning, has been a fundamental open problem. In this work, we prove that a small quantum computer of polylogarithmic size can perform large-scale classification and dimension reduction on massive classical data by processing samples on the fly, whereas any classical machine achieving the same prediction performance requires exponentially larger size. Combined with classical shadows, our algorithm circumvents the data loading and readout bottleneck to construct succinct classical models from massive classical data, a task provably impossible for any classical machine that is not exponentially larger than the quantum machine. Together, our results establish machine learning on classical data as a broad and natural domain of quantum advantage and a fundamental test of quantum mechanics at the complexity frontier.
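The classical-shadows subroutine the abstract leans on has a simple core: measure the state in randomly chosen Pauli bases, and average an unbiased “snapshot” estimator to recover expectation values without full tomography. The toy sketch below (our own illustration, not the paper’s algorithm) estimates ⟨Z⟩ for the single-qubit state |0⟩, where the snapshot 3|b⟩⟨b| − I contributes 3b when the Z basis was drawn and 0 for X or Y.

```python
import random

def shadow_estimate_Z(num_shots):
    # Minimal single-qubit classical-shadows sketch: estimate <Z> of |0>
    # from random Pauli-basis measurements. True value is <Z> = 1.
    total = 0.0
    for _ in range(num_shots):
        basis = random.choice("XYZ")
        if basis == "Z":
            b = +1                       # |0> always gives +1 in the Z basis
            total += 3 * b               # Tr(Z * (3|b><b| - I)) = 3b
        else:
            b = random.choice((+1, -1))  # X/Y outcomes are unbiased for |0>
            total += 0                   # X/Y eigenstates carry no Z information
    return total / num_shots
```

The estimator is unbiased ((1/3)·3 = 1) but has variance, so the average converges to 1 at the usual 1/√(shots) rate; the paper’s point is that such succinct classical summaries can be built from massive data streams without loading everything into quantum memory.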
One of the main motivations for work in quantum information science is the prospect of fast quantum algorithms to solve important computational problems [1, 5], and the Mastermind problem can serve as a test-bed for search algorithms, as the title of ref. [37] suggests. Several papers discuss the power of adaptivity in quantum algorithms (see, for example, ref. [48]). It is not difficult to construct a quantum algorithm using only one black-peg query for _k_ = 2, as shown in Supplementary Information [B](https://www.nature.com/articles/s41534-025-01148-0#MOESM1). Thus, _x_(_T_, _s_) can be learned with certainty using _O_(1) black-white-peg queries via the Bernstein-Vazirani algorithm [49].
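The Bernstein-Vazirani algorithm invoked here learns a hidden n-bit string `s` from a dot-product oracle in a single quantum query, where a classical learner needs n single-bit queries f(e_i) = s_i. A small statevector simulation (our own illustrative sketch, not code from the paper) shows why: after the oracle’s phase kickback, the final Hadamard layer interferes all amplitude onto the basis state |s⟩.

```python
def bv_measurement(s, n):
    # Statevector simulation of the Bernstein-Vazirani circuit on n qubits:
    # H^n, then the phase oracle (-1)^(s.x), then H^n, which concentrates
    # all amplitude on the hidden string s -- so one query identifies s.
    N = 1 << n
    # State after H^n and the oracle: amplitude (+/-)1/sqrt(N) on each |x>.
    amps = [(-1) ** bin(s & x).count("1") / N ** 0.5 for x in range(N)]
    # Final Hadamard layer: out[y] = 1 when y == s, and 0 otherwise.
    out = [sum(amps[x] * (-1) ** bin(x & y).count("1") for x in range(N)) / N ** 0.5
           for y in range(N)]
    # Measurement returns the most likely outcome, which is exactly s.
    return max(range(N), key=lambda y: out[y] * out[y])
```

Because `out[s] = Σ_x (1/N)·(−1)^{s·x}·(−1)^{x·s} = 1`, the measurement is deterministic on an ideal machine, which is what makes the _O_(1)-query claim possible.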
# Exponential Quantum Advantage in Processing Massive Classical Data

Researchers from Caltech, Google Quantum AI, MIT, and Oratomic have published a technical paper demonstrating an exponential space advantage for quantum computers in processing classical data. The research, titled *“Exponential quantum advantage in processing massive classical data,”* addresses the “data loading problem”, the historical difficulty of accessing classical data in quantum superposition without the prohibitive memory overhead of Quantum Random Access Memory (QRAM). The study introduces a framework called quantum oracle sketching, which enables a quantum computer to construct coherent queries from streaming classical data samples. The primary result is a rigorous proof that a quantum processor of polylogarithmic size (e.g., approximately 60 logical qubits) can perform large-scale classification and dimensionality reduction on datasets that would require an exponentially larger classical machine to achieve equivalent performance. This work positions classical data processing as a natural domain for quantum utility, providing a verifiable test of quantum mechanics at the complexity frontier.
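The “approximately 60 logical qubits” figure is just the polylogarithmic scaling made concrete: if the quantum machine’s size grows like log2 of the number of data samples N (the simplest polylog case, used here purely as an illustrative assumption), then even astronomically large datasets stay in the regime of a few dozen qubits.

```python
import math

def qubits_needed(num_samples):
    # Illustrative arithmetic only: a machine whose size scales as
    # log2(N) qubits in the number of samples N. A dataset of 2**60
    # (~10**18) samples then needs only about 60 qubits, while the
    # claimed classical lower bound grows exponentially faster.
    return math.ceil(math.log2(num_samples))
```

This is only the scaling intuition, not the paper’s construction; the actual qubit count depends on the polylog’s constants and exponent.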