Researchers at the California Institute of Technology have put forward a theory suggesting that a fully functional quantum computer may require significantly fewer qubits than previously believed, potentially making such a machine deployable before the end of the decade. Working alongside Oratomic, a startup linked to the institution, the team argues that by substantially reducing the errors common in today’s early-stage quantum systems, a working machine could be built with between 10,000 and 20,000 qubits. This marks a dramatic departure from earlier estimates, which held that millions of qubits would be necessary for a quantum computer to operate reliably. A qubit serves as the fundamental unit of a quantum computer, analogous to a bit in a classical computer for encoding binary information.
The theoretical advance centers on a proposed error-correction architecture built on neutral-atom systems, in which individual atoms can be physically repositioned and entangled across large distances using tightly focused laser beams known as optical tweezers. The method allows each logical qubit to be encoded in as few as five physical qubits, compared with roughly one thousand under conventional error-correction schemes. Caltech describes this as a significant leap in efficiency for fault-tolerant quantum computing.
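The scale of the claimed saving can be sketched with simple arithmetic. The snippet below is an illustrative back-of-the-envelope calculation only, assuming the article's round numbers (about 5 physical qubits per logical qubit for the proposed architecture versus roughly 1,000 conventionally); the function names are hypothetical, not part of any published model.

```python
# Illustrative overhead comparison using the ratios quoted in the article.
CONVENTIONAL_RATIO = 1000  # ~physical qubits per logical qubit (conventional codes)
NEUTRAL_ATOM_RATIO = 5     # physical qubits per logical qubit (proposed architecture)

def logical_qubits(physical: int, ratio: int) -> int:
    """Logical qubits obtainable from a given physical-qubit budget."""
    return physical // ratio

def physical_needed(logical: int, ratio: int) -> int:
    """Physical qubits required to encode a given number of logical qubits."""
    return logical * ratio

for budget in (10_000, 20_000):
    n_logical = logical_qubits(budget, NEUTRAL_ATOM_RATIO)
    conventional_cost = physical_needed(n_logical, CONVENTIONAL_RATIO)
    print(f"{budget:>6} physical qubits -> {n_logical} logical qubits "
          f"(~{conventional_cost:,} physical under conventional codes)")
```

On these assumptions, a 10,000- to 20,000-qubit machine would yield 2,000 to 4,000 logical qubits, whereas encoding the same logical count conventionally would take millions of physical qubits, consistent with the earlier estimates the article mentions.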
John Preskill, a theoretical physicist at Caltech, said on Tuesday that the team is developing new architectures for neutral-atom quantum processors that dramatically reduce the resource requirements for fault-tolerant quantum computing. Manuel Endres, a Caltech professor of physics who recently assembled the largest qubit array on record, described the new approach's performance as surprisingly effective, calling it ultra-efficient error correction. Both researchers are involved in the broader effort to advance the underlying science.
Oratomic has stated it will collaborate closely with Caltech’s Advanced Quantum Computing Mission, which focuses on quantum information processing research. The company’s stated goal is to build the world’s first utility-scale fault-tolerant quantum computer. The partnership reflects a broader push to translate theoretical progress into practical hardware development. Caltech’s institutional backing lends additional weight to the initiative’s ambitions.
The announcement comes shortly after Google released a paper claiming that quantum computers could potentially break Bitcoin's cryptographic protections in as little as nine minutes, requiring considerably less computing power than earlier projections suggested. Google's paper urged developers working on blockchain technology to begin transitioning to post-quantum cryptography (PQC) without waiting for concrete threats to materialize. The company has set an internal deadline of 2029 for its own PQC migration, cautioning that quantum computing milestones may arrive sooner than anticipated. The convergence of these developments has intensified attention on the timeline for practical quantum computing and its implications for digital security.
Together, the Caltech and Google findings point to a rapidly shifting landscape in quantum computing research. Reduced qubit requirements could lower the barrier to building machines capable of performing tasks beyond the reach of classical computers. At the same time, the potential for such machines to undermine existing cryptographic systems has prompted calls for urgent preparation across the technology and finance sectors. The pace of progress suggests that questions once considered theoretical are becoming increasingly practical concerns.
Originally reported by CoinTelegraph.
