Quantum computers (QCs), once considered an elusive theoretical concept, are emerging. Today’s quantum devices, however, are still small prototypes, and their supporting infrastructure is in its early stages. Recent industry roadmaps have begun to propose a modular approach to scaling these devices. In this article, we focus on superconducting technology and discuss the factors that favor this approach. We highlight how these factors differ between the quantum and classical settings, and we will see that the bandwidth and fidelity between quantum chips are much closer to those within the chips than we might expect.
Many technology platforms, especially those built with superconducting devices, are starting to show similar trends as researchers search for favorable architectures for QC scaling. As we approach the system thresholds required for near-term quantum optimization, chemistry, and simulation (Fig. 1), much work remains to build QCs at the scale required for quantum error correction, which encodes each logical quantum bit (qubit) into many physical qubits.
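To get a feel for the scale of that encoding overhead, consider a back-of-the-envelope sketch. The formula below is the standard physical-qubit count for the surface code, a widely studied error-correcting code; it is not taken from this article, and the code distances chosen are purely illustrative.

```python
# Illustrative sketch (not from this article): a distance-d surface code
# uses roughly 2*d^2 - 1 physical qubits to encode one logical qubit.
def physical_qubits_per_logical(d: int) -> int:
    """Physical qubits for one distance-d surface-code logical qubit."""
    return 2 * d * d - 1

for d in (3, 5, 11, 25):
    print(f"d={d}: {physical_qubits_per_logical(d)} physical qubits per logical qubit")
```

Even modest code distances multiply the qubit requirement by orders of magnitude, which is why the error-correction milestones in Fig. 1 sit so far above today’s device sizes.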
Fig. 1: Qubits versus time showing qubit thresholds for critical milestones in quantum computing.
Observations from the quantum industry
The size of today’s quantum hardware limits the amount of information that can be stored, and this limited capacity for processing information is a major reason why existing QCs are experimental rather than practical devices. There have been preliminary demonstrations of quantum advantage, but more qubits are needed for quantum computing with impact. Major players in the quantum industry, including IBM, Rigetti, Intel, and many others, are establishing long-term plans to expand quantum research and development. Growing quantum initiatives have produced numerous technology roadmaps that present a common theme for future quantum architectures: modularity.
Fig. 2: Scaling trends shown as manufacturing yield and defect rate versus quantum chip size.
Challenges with building larger quantum systems
Many challenges prevent today’s physical qubits from reaching the scale needed for meaningful quantum computing. An obvious obstacle is that more qubits require a larger host substrate. Unfortunately, as monolithic QCs expand in footprint, they require greater design investment, higher material costs, and more complex verification. Even if these problems were overcome and a sea of qubits were available for processing, the number of on-chip qubits is not the primary characteristic that determines QC computing power. Today’s qubits commonly suffer from high gate errors and limited computation windows that hinder QC performance. For QCs to be useful, new solutions are needed to achieve a higher yield of quality qubits.
Qubit quality problems often trace back to the same source. QC manufacturing is imperfect, and unexpected device variation resulting from processing errors introduces unfavorable QC properties that impair computation. Manufacturing procedures must closely follow targeted QC design specifications for QCs to function as intended. Unfortunately, the tools used to make quantum devices have limited precision at the microscopic scale, making each physical qubit unique in how it deviates from its ideal properties. This manufacturing imprecision, particularly in superconducting devices, prevents the fabrication of perfectly identical qubits and QCs, critically impacting reliability. Furthermore, as chip area increases, manufacturing yields decrease: each QC has a greater chance of containing a manufacturing error, or defect, that severely degrades performance or renders the quantum chip unusable. Yield and defect-rate trends for quantum chips are depicted in Fig. 2.
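The qualitative yield trend in Fig. 2 can be illustrated with the classic Poisson defect model from semiconductor manufacturing: the probability that a die is defect-free falls exponentially with its area. The defect density and die areas below are assumed for illustration, not values from the figure.

```python
import math

def poisson_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Fraction of defect-free chips under a Poisson defect model."""
    return math.exp(-area_cm2 * defects_per_cm2)

d0 = 0.5  # assumed defect density, defects per cm^2
print(f"4 cm^2 monolithic die yield: {poisson_yield(4.0, d0):.2f}")  # ~0.14
print(f"1 cm^2 chiplet yield:        {poisson_yield(1.0, d0):.2f}")  # ~0.61
```

Under this model, quartering the die area more than quadruples the fraction of usable chips, which is the core manufacturing argument for smaller dies.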
The number of qubits per device on today’s QCs is limited because monolithic chip size and total defects on the chip are directly related. Even if a chip contains qubits that meet all of the basic fidelity criteria, the physical differences between today’s qubits inject well-documented intra-chip and inter-chip performance variation at runtime. For quantum computing to be viable, qubits of ever-increasing quality are needed. Furthermore, QC computations must produce consistent results, regardless of the qubits used during the computation.
Lessons learned from classical computing
The scalability challenges associated with building a single system on a chip (SoC), especially those related to device manufacturing, are not new. For example, while classical monolithic electronics benefit from the simplicity of being completely self-contained, larger SoCs come at the expense of reduced manufacturing throughput and higher costs. Furthermore, since monolithic devices must be built with the same processing techniques and materials throughout, they are more rigid in terms of hardware customization and specialization, which limits domain-specific acceleration. Luckily, chiplet-based architectures have been shown to alleviate some of these problems. In a chiplet design, multiple smaller, locally connected chips replace one larger monolithic SoC.
Chiplet architectures are often referred to as multi-chip modules (MCMs), and their advantages include reduced design complexity, lower manufacturing costs, higher efficiency, better fault isolation, and more flexibility for customization than their monolithic counterparts. Classical chiplet-based computing offers many advantages that promote system scaling; however, simplified scaling is not free. Communication bottlenecks, often in the form of latency and errors in the link hardware that unifies the chiplets, must be properly weighed against the reduced design and resource costs. Because of the more expensive links, MCMs sometimes show a performance loss relative to their monolithic counterparts, but the benefits associated with modularity, especially in terms of device throughput, are often overwhelmingly worth the cost. Additionally, proper utilization of hardware through software can enable intelligent program mapping, scheduling, and multithreading that minimizes use of the more expensive link hardware. Because of these modularity-enabled advances in classical computer architecture, QC designers are motivated to pursue modular design for quantum scaling.
Scaling quantum computers via modular design
The demand for quantum computing is increasing as the quantum industry moves toward offering QCs as a core cloud service. Unfortunately, continued scaling of QCs with monolithic architectures will reduce device throughput and performance. Modular architectures based on quantum chiplets present a potential solution, and this observation has driven a surge of proposals for quantum MCMs, Fig. 3. Maximizing the benefits of quantum modularity requires tools capable of driving the modular implementation of multi-chip QCs, providing methodologies to evaluate design-space trade-offs in terms of throughput, chiplet selection for system integration, and interconnect quality. As the quantum industry begins to make modular quantum implementations a reality, these indispensable tools will be needed to model the complex dynamics associated with QC fabrication, assembly, and operation, accelerating the scaling of manufacturable quantum machines.
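One way such a design-space tool might weigh yield against interconnect cost is a toy scoring model like the sketch below. Every parameter (per-qubit area, defect density, per-link penalty) is an assumption invented for illustration, not data from any roadmap; the point is only that yield favors small chiplets, link costs favor large ones, and the optimum sits in between.

```python
import math

def design_score(qubits_per_chiplet: int, total_qubits: int = 1000,
                 area_per_qubit_cm2: float = 0.01, defects_per_cm2: float = 0.5,
                 link_penalty: float = 5.0) -> float:
    """Toy trade-off: expected good qubits (Poisson yield per chiplet)
    minus a fixed cost for each additional chiplet's interconnect."""
    n_chiplets = total_qubits // qubits_per_chiplet
    chiplet_area = qubits_per_chiplet * area_per_qubit_cm2
    chiplet_yield = math.exp(-chiplet_area * defects_per_cm2)
    expected_good = n_chiplets * qubits_per_chiplet * chiplet_yield
    return expected_good - link_penalty * (n_chiplets - 1)

sizes = (16, 25, 50, 100, 250, 500, 1000)
best = max(sizes, key=design_score)
print("best chiplet size under this toy model:", best)  # 25
```

A real tool would replace each term with measured fabrication and interconnect data, but even this sketch shows why neither a monolithic die nor an extreme many-tiny-chiplet design maximizes the score.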
Fig. 3: Quantum chiplets placed inside an MCM. The interconnects connecting the chiplets are shown in yellow.
It is important to note that, when pursuing a modular approach, quantum systems do not suffer from the same link problems as classical systems. In classical MCMs, inter-chip transmissions are often significantly more expensive than intra-chip transmissions in terms of speed and error. Conversely, latency and noise are not excessively high when transmitting quantum information from chip to chip. The reasoning is that superconducting QC chips, which have been experimentally shown to be connected with high fidelity, feature circuit-based qubits that take up far more chip space than classical bit-encoding transistors. The size of the qubit creates an architectural constraint that restricts communication to a qubit’s nearest physical neighbors, illustrated in Fig. 3, and this property prevents inter-chip links from suffering data-bandwidth constraints that are orders of magnitude worse than those on chip. Furthermore, smaller quantum chiplets offer the advantage of maximizing QC yield, and post-fabrication selection for MCM integration, coupled with quantum MCM reconfigurability, lays the groundwork for the discovery of chiplet architectures that vastly outperform their monolithic counterparts in desirable chip properties, such as gate fidelity and application performance.
About the Authors:
Kaitlin N. Smith is Quantum Software Manager at Infleqtion. From 2019 to 2022 she was a CQE/IBM Postdoctoral Scholar with EPiQC at the University of Chicago under the guidance of Prof. Fred Chong. Kaitlin is on the academic job market this year.
Fred Chong is the Seymour Goodman Professor of Computer Architecture at the University of Chicago. He is the Lead Principal Investigator of EPiQC (Enabling Practical-scale Quantum Computation), an NSF Expedition in Computing, and a member of the STAQ project.
Many of the ideas in this blog came from conversations with the rest of the EPiQC team: Ken Brown, Ike Chuang, Diana Franklin, Danielle Harlow, Aram Harrow, Andrew Houck, John Reppy, David Schuster and Peter Shor.
Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author, and do not represent those of ACM SIGARCH or its parent organization, ACM.