Rashi Shrivastava: While still several years away from real-world applications, quantum computers can be used to solve certain complex problems faster and better than the computers that we use today.
Google Researchers Unveil Unique Form of Quantum Teleportation
scitechdaily.com: [Excerpts] Researchers from Google Quantum AI and Stanford University have observed a “measurement-induced phase transition” in a quantum system with up to 70 qubits, marking a breakthrough in understanding the interplay between measurements, interactions, and entanglement in quantum mechanics. The study also revealed a unique form of quantum teleportation, which could pave the way for advancements in quantum computing.
Measurements can dramatically change the behavior of a quantum system. Scientists are investigating this phenomenon to understand its implications for the distribution and organization of data in quantum computers.
Measurement-induced entanglement and teleportation on a noisy quantum processor, by Google Quantum AI and Collaborators. Nature 622, 481–486 (2023). https://doi.org/10.1038/s41586-023-06505-7 (10/18/23) [Transcription] [Abstract] Measurement has a special role in quantum theory: by collapsing the wavefunction, it can enable phenomena such as teleportation and thereby alter the ‘arrow of time’ that constrains unitary evolution. When integrated in many-body dynamics, measurements can lead to emergent patterns of quantum information in space–time that go beyond the established paradigms for characterizing phases, either in or out of equilibrium. For present-day noisy intermediate-scale quantum (NISQ) processors, the experimental realization of such physics can be problematic because of hardware limitations and the stochastic nature of quantum measurement. Here we address these experimental challenges and study measurement-induced quantum information phases on up to 70 superconducting qubits. By leveraging the interchangeability of space and time, we use a duality mapping to avoid mid-circuit measurement and access different manifestations of the underlying phases, from entanglement scaling to measurement-induced teleportation. We obtain finite-sized signatures of a phase transition with a decoding protocol that correlates the experimental measurement with classical simulation data. The phases display remarkably different sensitivity to noise, and we use this disparity to turn an inherent hardware limitation into a useful diagnostic. Our work demonstrates an approach to realizing measurement-induced physics at scales that are at the limits of current NISQ processors.
📌 Notes
⇢ (1) Introduction to Quantum Systems: Quantum systems are similar to classical probability distributions, but they have certain properties that make them unique.
⇢ (2) What is Quantum Computing? Key Concepts & Use Cases: The basic idea of quantum computing is to break through the barriers that limit the speed of existing computers by harnessing the strange, counterintuitive, and powerful physics of subatomic particles.
☑️ #49 Oct 5, 2023
Quantum Dots Focus Of 2023 Nobel Prize In Chemistry
nobelprize.org: [Excerpts] Moungi G. Bawendi, Louis E. Brus and Alexei I. Ekimov were awarded the 2023 Nobel Prize in Chemistry.
The award recognizes the discovery and development of quantum dots. Quantum dots now illuminate computer monitors and television screens based on QLED technology. They also add nuance to the light of some LED lamps, and biochemists and doctors use them to map biological tissue.
The Nobel Prize in Chemistry 2023. NobelPrize.org. Nobel Prize Outreach AB 2023. Fri. 6 Oct 2023. <https://www.nobelprize.org/prizes/chemistry/2023/summary/>
Physicists had long known that in theory size-dependent quantum effects could arise in nanoparticles, but at that time it was almost impossible to sculpt in nanodimensions. Therefore, few people believed that this knowledge would be put to practical use.
Paper: “Efficient tensor network simulation of IBM's largest quantum processors”
arxiv.org: [Abstract] We show how quantum-inspired 2d tensor networks can be used to efficiently and accurately simulate the largest quantum processors from IBM, namely Eagle (127 qubits), Osprey (433 qubits) and Condor (1121 qubits). We simulate the dynamics of a complex quantum many-body system -- specifically, the kicked Ising experiment considered recently by IBM in Nature 618, p. 500-505 (2023) -- using graph-based Projected Entangled Pair States (gPEPS), which was proposed by some of us in PRB 99, 195105 (2019). Our results show that simple tensor updates are already sufficient to achieve very large unprecedented accuracy with remarkably low computational resources for this model. Apart from simulating the original experiment for 127 qubits, we also extend our results to 433 and 1121 qubits, thus setting a benchmark for the newest IBM quantum machines. We also report accurate simulations for infinitely-many qubits. Our results show that gPEPS are a natural tool to efficiently simulate quantum computers with an underlying lattice-based qubit connectivity, such as all quantum processors based on superconducting qubits.
In 2024, IBM plans to offer a tool capable of calculating unbiased observables of circuits with 100 qubits and depth-100 gate operations in a reasonable runtime.
research.ibm.com: [Transcription] [Excerpts] In order to accelerate the timeline to useful quantum computing, we are also announcing a limited number of IBM Quantum Credits that we will award to researchers and developers working on highly innovative projects — those that advance the latest techniques and algorithms and explore their usage in impactful application areas. IBM Quantum Credits allow successful applicants a pre-set amount of access to our utility-scale systems over the cloud.
Preference will be given to projects that are driving cutting-edge advancements for quantum utility and have the potential to scale to large systems in the future — such as the 100x100 system we announced last year, which is planned for release at the end of 2024.
A useful, error-corrected quantum computer is within reach for humanity. There's still a lot of work to do, but we think we might be able to accomplish our goal within the decade. Learn more about what it will take.
What lies beyond classical
Classical computers and quantum computers are used to solve different types of problems. For classical computing, we represent what we think about the world as bits of information in sequences of zeros and ones, and use logic to make decisions. Is there someone at the door? Yes (1) or no (0)? If yes (1), then open the door.
To solve quantum-friendly problems, we built computers that use qubits instead of bits. Qubits represent the world in terms of uncertainty, and we use quantum mechanics to make decisions based on the probability of our qubits being in one state (1) or another (0). These computers enable us to simulate how the world actually works, instead of how we think it works.
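To make the contrast concrete, here is a minimal numerical sketch (plain Python/NumPy, not Google's code) of a single qubit: two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1.

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit is described by two complex
# amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1; measuring it yields
# 0 with probability |alpha|^2 and 1 with probability |beta|^2 (the Born rule).
ket0 = np.array([1.0, 0.0])                      # the definite state "0"
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

state = H @ ket0                                 # equal superposition of 0 and 1
probs = np.abs(state) ** 2
print(probs)                                     # [0.5 0.5]

rng = np.random.default_rng(0)
print(rng.choice([0, 1], size=10, p=probs))      # ten simulated measurements
```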
Our quantum computing journey. Source: Google Quantum AI
This Quantum Computing Startup Is Taking On Google And IBM With A Fresh Technical Approach
forbes.com (by Rashi Shrivastava): [Transcription] [Excerpts] Atlantic Quantum, which spun out of research at MIT, has developed a new type of quantum architecture that’s easier to scale.
Atlantic Quantum, a young startup that’s building quantum computers – machines that are capable of advanced and high-speed information processing – has published new research Monday that shows the architecture of the circuits underlying its quantum computer produces far fewer errors than the industry standard used in quantum computers built by the likes of IBM and Google.
While still several years away from real-world applications, quantum computers can be used to solve certain complex problems faster and better than the computers that we use today: from advanced drug discovery to building lighter batteries to cracking encryption protocols. It also has potential for application in areas like predicting the weather and forecasting changes in the stock market. But that can happen only after overcoming some major hardware roadblocks, says Atlantic Quantum CEO Bharath Kannan, who cofounded the startup out of MIT last year and raised $9 million in seed funding.
A key component that undergirds a quantum computer is a “qubit,” an aluminum superconducting circuit or “electrical switch” built on silicon chips that encodes information. Source: Getty Images via Forbes.com
ionq.com: The West is in a new space race led primarily by the U.S. to develop quantum computers before China or Russia gets there first. In the 1960s, the challenge was to put a person on the moon before Russia, and in the summer of 1969, NASA was the first (and thus far only) to achieve this remarkable milestone. Space exploration is a global effort, with countries working closely together in ways unimaginable here on Earth. Today the U.S. is faced with a similar challenge: to lead the development of quantum computers without stifling global cooperation. The decisions taken in the next five years by government leaders, technology companies, and investment firms will decide whether the U.S. and West lead in this critical technology or are forced to play by rules favoring authoritarian regimes.
University of Chicago joins global partnerships to advance quantum computing
news.uchicago.edu: Building a quantum-centric supercomputer, or hybrid quantum computer powered by 100,000 qubits is a massive challenge—and one that’s never been attempted. But building such a powerful system could bring tangible benefits to society—from identifying molecules for new medicines to designing more efficient sustainable solutions for energy.
Photo by Nancy Wong. Source: University of Chicago. Office of Communications
More than a decade ago, the University of Chicago made quantum technology a focus in establishing the Pritzker School of Molecular Engineering (PME). Chicago has since become a leading global hub for research in quantum technology and home to one of the largest quantum networks in the country.
🙂
☑️ #42 May 7, 2023
Forecasting: Google Quantum AI and IBM Quantum Computing
infer-pub.com: By Integrated Forecasting and Estimates of Risk (INFER), a forecasting program designed to generate valuable signals and early warning about the future of critical science and technology.
1️⃣ Will Google’s Quantum AI lab publish 20 or more publications in 2023?
Source: INFER (INtegrated Forecasting and Estimates of Risk) is a program run by the Applied Research Laboratory for Intelligence and Security at the University of Maryland.
INFER, short for INtegrated Forecasting and Estimates of Risk, is a forecasting program designed to generate valuable signals and early warning about the future of critical science and technology trends and high-risk geopolitical events for U.S. Government policymakers. INFER empowers scientists, researchers, analysts, and hobbyists from inside and outside the U.S. Government to have a direct impact on policy and decision-making. The public portion of INFER is one of multiple forecasting sites to be operated as part of this program.
IBM Unveils 400 Qubit-Plus Quantum Processor and Next-Generation IBM Quantum System Two
newsroom.ibm.com: [Transcription] [Excerpts] Company Outlines Path Towards Quantum-Centric Supercomputing with New Hardware, Software, and System Breakthrough.
NEW YORK, Nov. 9, 2022 /PRNewswire/ -- IBM (NYSE: IBM) today kicked off the IBM Quantum Summit 2022, announcing new breakthrough advancements in quantum hardware and software and outlining its pioneering vision for quantum-centric supercomputing. The annual IBM Quantum Summit showcases the company's broad quantum ecosystem of clients, partners and developers and their continued progress to bring useful quantum computing to the world.
"The new 433 qubit 'Osprey' processor brings us a step closer to the point where quantum computers will be used to tackle previously unsolvable problems," said Dr. Darío Gil, Senior Vice President, IBM and Director of Research. "We are continuously scaling up and advancing our quantum technology across hardware, software and classical integration to meet the biggest challenges of our time, in conjunction with our partners and clients worldwide. This work will prove foundational for the coming era of quantum-centric supercomputing."
coinshares.com: [Transcription] [Excerpts] Recent news of advances in quantum computing are stoking fears that Bitcoin’s wallet structure is vulnerable to exploits, theoretically undermining its security.
Using quantum technologies to exploit the Bitcoin protocol is theoretically possible. However, it is exceptionally difficult to do in practice.
To mitigate against such attacks, a soft fork with a commit–delay–reveal scheme could be implemented.
Due to the widespread use of 128-bit cryptography, quantum computing poses a much greater threat to a substantial proportion of the existing cryptographic infrastructure that ecommerce and banking services rely on for everyday transactions.
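One way to make the 128-bit remark concrete (an illustrative back-of-the-envelope, not a claim from the CoinShares piece): Grover's algorithm searches an unstructured keyspace in roughly the square root of the classical number of trials, effectively halving the bit strength of a symmetric key against an ideal large-scale quantum attacker.

```python
from math import isqrt, log2

# Grover's algorithm needs on the order of sqrt(N) oracle calls to search an
# unstructured space of N keys, so the effective bit strength of a symmetric
# key is roughly halved for an ideal large-scale quantum attacker.
for bits in (128, 256):
    classical_trials = 2 ** bits
    grover_trials = isqrt(classical_trials)      # exactly 2**(bits // 2) here
    print(f"{bits}-bit key: ~2^{log2(classical_trials):.0f} classical trials, "
          f"~2^{log2(grover_trials):.0f} Grover iterations")
```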
IBM Quantum breaks the 100‑qubit processor barrier
research.ibm.com: [Transcription] [Excerpts] Today, IBM Quantum unveiled Eagle, a 127-qubit quantum processor. Eagle is leading quantum computers into a new era — we’ve launched a quantum processor that has pushed us beyond the 100-qubit barrier. We anticipate that, with Eagle, our users will be able to explore uncharted computational territory — and experience a key milestone on the path towards practical quantum computation.
We view Eagle as a step in a technological revolution in the history of computation. As quantum processors scale up, each additional qubit doubles the amount of space complexity — the amount of memory space required to execute algorithms — for a classical computer to reliably simulate quantum circuits. We hope to see quantum computers bring real-world benefits across fields as this increase in space complexity moves us into a realm beyond the abilities of classical computers. While this revolution plays out, we hope to continue sharing our best quantum hardware with the community early and often. This approach allows IBM and our users to work together to understand how best to explore and develop on these systems to achieve quantum advantage as soon as possible.
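A back-of-the-envelope sketch (not from IBM) of why each extra qubit doubles the classical memory needed for brute-force state-vector simulation:

```python
# A full state vector of n qubits holds 2**n complex amplitudes; at double
# precision (16 bytes per amplitude) the memory needed doubles with every qubit.
for n in (27, 53, 100, 127):
    bytes_needed = (2 ** n) * 16
    print(f"{n:>3} qubits: 2^{n} amplitudes, ~{bytes_needed / 2**50:.3g} PiB")
```

Beyond roughly 50 qubits the full state vector no longer fits in any existing supercomputer's memory, which is why approximate approaches such as the tensor-network simulations mentioned above are used instead.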
IBM creates largest ever superconducting quantum computer
🙂
☑️ #38 Oct 27-28, 2021
China Achieves Quantum Computational Advantage in Two Mainstream Technical Routes
Quantum Processor > Qubits: 66 > “Zuchongzhi 2.1”
englishcas.cn: [Transcription] [Excerpts] China Achieves Quantum Computational Advantage in Two Mainstream Technical Routes
A Chinese research team has successfully designed a 66-qubit programmable superconducting quantum computing system named "Zuchongzhi 2.1," significantly enhancing the quantum computational advantage.
This remarkable feat makes China the first country to achieve a quantum computational advantage in two mainstream technical routes -- one via photonics quantum computing technology (Jiuzhang 2.0) and the other via superconducting quantum computing technology, the team noted.
Quantum Prototype > “Jiuzhang 2.0”
Septillion times faster! Chinese scientists have established a quantum computer prototype named "Jiuzhang 2.0" with 113 detected photons, achieving major breakthroughs in quantum computational speedup.
Xinhua Global Service:
http://www.news.cn/english/2021-10/26/c_1310270579.htm
[Transcription] [Excerpts] Significant progress has been made in the physical realizations of quantum computers in the last few years. For instance, the quantum computational advantage or supremacy milestone has been achieved using the quantum processors Sycamore, Jiuzhang, and Zuchongzhi 2.0 successively.
However, quantum supremacy is not a single shot achievement because the power of classical computation advances over time. Some classical algorithm improvements and comments were raised to challenge previous work. Therefore, to ensure a long-term quantum computational advantage, the quantum hardware should be improved to withstand the competition of continuously improved classical algorithms and hardware.
A new quantum supremacy record has been reported with a 66-qubit two-dimensional superconducting quantum processor, called Zuchongzhi 2.1. Compared with the previous work of Zuchongzhi 2.0, the readout performance of Zuchongzhi 2.1 improves significantly to an average fidelity of 97.74%. The more powerful quantum processor enables larger-scale random quantum circuit sampling with a system scale of up to 60 qubits and 24 cycles. The achieved sampling task is approximately six orders of magnitude more difficult than that of Sycamore [Nature 574, 505 (2019)] in classical simulation. Additionally, the time consumption of classically simulating the random circuit sampling experiment using state-of-the-art classical algorithms and supercomputers is extended to tens of thousands of years (approximately 4.8 × 10^4 years). In contrast, Zuchongzhi 2.1 only takes roughly 4.2 hours, thereby significantly enhancing the quantum computational advantage. The cover shows the chip schematic of the Zuchongzhi 2.1 quantum processor (see the article by Qingling Zhu et al. on page 240).
arxiv.org: [Transcription] [Abstract] A universal fault-tolerant quantum computer that can solve efficiently problems such as integer factorization and unstructured database search requires millions of qubits with low error rates and long coherence times. While the experimental advancement towards realizing such devices will potentially take decades of research, noisy intermediate-scale quantum (NISQ) computers already exist. These computers are composed of hundreds of noisy qubits, i.e. qubits that are not error-corrected, and therefore perform imperfect operations in a limited coherence time. In the search for quantum advantage with these devices, algorithms have been proposed for applications in various disciplines spanning physics, machine learning, quantum chemistry and combinatorial optimization. The goal of such algorithms is to leverage the limited available resources to perform classically challenging tasks. In this review, we provide a thorough summary of NISQ computational paradigms and algorithms. We discuss the key structure of these algorithms, their limitations, and advantages. We additionally provide a comprehensive overview of various benchmarking and software tools useful for programming and testing NISQ devices.
NISQ computers can perform tasks with imperfect reliability, but beyond the capability of classical computers.
🙂
☑️ #36 Jul 8, 2021
Celebrating a Decade of Rigetti: An Evolution of Technology and a Quantum Ecosystem
medium.com/rigetti: [Transcription] [Excerpts] On July 8, 2013 Rigetti was founded by Chad Rigetti as the world’s first full-stack, universal pure-play quantum computing company. Three years later, Rigetti’s Fab-1 was commissioned as the first dedicated quantum chip fabrication facility. Built to further accelerate our progress in design and manufacturing capabilities, Fab-1 continues to be a foundational resource for developing state-of-the-art superconducting qubits and enabling innovative chip architecture — from developing the first scalable quantum chip based on our proprietary modular architecture, to our latest fourth generation Ankaa™ system featuring a square lattice, tunable couplers, and 2x faster gate speeds.
Rigetti’s Fab-1 circa 2021. Source: Rigetti & Co, LLC.
quantum.duke.edu: [Transcription] It looks like ECE quantum computing entrepreneurs Chris Monroe and Jungsang Kim had a great time a couple of weeks ago when they got to ring the opening bell at the New York Stock Exchange in honor of their company, IonQ, officially going public! IonQ uses trapped ions as the foundation for its futuristic quantum computers, and is the first publicly traded pure-play quantum computing company. ECE chair Krishnendu Chakrabarty was on the trading floor to wish the team well, and I know we all join him in sharing our congratulations!
[Transcription] [Excerpts] Quantum computers are a revolutionizing technology — they have the potential to transform business, society, and the planet for the better, and IonQ is at the forefront of this revolution.
After over 25 years of academic research, IonQ was founded in 2015 by Chris Monroe and Jungsang Kim with $2 million in seed funding from New Enterprise Associates, a license to core technology from the University of Maryland and Duke University, and the goal of taking trapped ion quantum computing out of the lab and into the market. In the following three years, we raised an additional $20 million from GV, Amazon Web Services, and NEA, and built two of the world’s most accurate quantum computers.
In 2019, we raised another $55 million in a round led by Samsung and Mubadala, and announced partnerships with Microsoft and Amazon Web Services to make our quantum computers available via the cloud.
In 2020 and 2021, we built additional generations of high performance quantum hardware, added Google Cloud Marketplace to our cloud partner roster and announced a series of collaborations and business partnerships with leading academic and commercial institutions.
On October 1st, 2021, IonQ began trading as IONQ on the New York Stock Exchange, making it the world's first public pure-play quantum computing company. We remain hard at work realizing the world-changing potential of quantum computing.
🙂
☑️ #34 Feb 4, 2021
Top 15 Quantum Computing Stocks to Watch in 2021
mlq.ai: [Transcription] [Excerpts] Quantum computers are beginning to transition from research labs to solving real-world problems. In this guide, we look at 15 quantum computing stocks to watch.
With a market capitalization of just $219 million at the time of writing, Quantum Computing Inc (QUBT) is by far the smallest company on this list. That said, it is also the only one that is pure-play in quantum computing. The company is focused on providing software and applications for quantum computers and has partnerships with hardware companies including D-Wave. In addition, the company offers a product called Qatalyst which allows developers to run quantum algorithms on classical computers through their API.
In addition to its cloud offerings, QUBT is also working on solving real-world problems with quantum computers in fields such as optimization, logistics, cybersecurity, and more.
3 Things to Know About IBM’s Development Roadmap to Build an Open Quantum Software Ecosystem
newsroom.ibm.com: [Transcription] [Excerpts] Today IBM unveiled its Quantum Development Roadmap, which showcases the company’s integrated vision and timeline for full-stack quantum development, including hardware, software, and applications.
Last September, IBM shared our roadmap to scale quantum technology, with a clear vision for how to get to the inflection point of 1,000+ qubits by 2023 – and quantum systems powerful enough to explore solutions to challenges impossible on classical machines alone. The development roadmap gives the millions of professional developers more reason and opportunity to explore quantum computing within their industry and expertise – without the need to learn new tools or languages.
cloudblogs.microsoft.com: [Transcription] [Excerpts] Azure Quantum, the world’s first full-stack, public cloud ecosystem for quantum solutions, is now open for business. Developers, researchers, systems integrators, and customers can use it to learn and build solutions based on the latest innovations—using familiar tools in the most trusted public cloud.
The unified Azure Quantum ecosystem will accelerate your R&D with access to diverse quantum software and hardware solutions, a network of leading quantum researchers and developers, a robust resource library, and flexible self-service or tailored development programs for customers and systems integrators.
Access quantum computing and optimization solutions in the cloud. Source: Microsoft Corporation
Forecasting: Will quantum computing “supremacy” be achieved by 2025? [Resolved]
Quantum Supremacy
metaculus.com: [Transcription] [Resolution Criteria] For decades, tech enthusiasts have hoped that breakthroughs in quantum computing would allow us to do wonderful things, such as:
One of the first steps to this bright future is the achievement of what’s known as quantum supremacy—the creation of a quantum computer capable of solving problems that a classic computer cannot.
But skeptics like Philip Ball and mathematician Gil Kalai now argue that large-scale quantum computing projects will always be hampered by irreducible noise in the system.
Kalai recently explained his argument in an interview with Quanta Magazine:“[the results of an experiment performed by Kalai and colleagues] shows that the noise level [in a quantum computer] cannot be reduced, because doing so will contradict an insight from the theory of computing about the power of primitive computational devices. Noisy quantum computers in the small and intermediate scale deliver primitive computational power. They are too primitive to reach “quantum supremacy” — and if quantum supremacy is not possible, then creating quantum error-correcting codes, which is harder, is also impossible.”
Will quantum supremacy be achieved by 2025, rather than proving elusive for years to come if not indefinitely?
The question resolves positively if quantum supremacy has been demonstrated by 2025. Precisely defining quantum supremacy is itself somewhat tricky. We will thus use a working definition that John Preskill, inventor of the term, makes public declarations to the effect that it has been definitively shown.
Forecasting for a complex world: Metaculus offers trustworthy forecasting and modeling infrastructure for forecasters, decision makers, and the public. Source: Metaculus.com
science.org: [Transcription] [Abstract] Quantum computers promise to perform certain tasks that are believed to be intractable to classical computers. Boson sampling is such a task and is considered a strong candidate to demonstrate the quantum computational advantage. We performed Gaussian boson sampling by sending 50 indistinguishable single-mode squeezed states into a 100-mode ultralow-loss interferometer with full connectivity and random matrix—the whole optical setup is phase-locked—and sampling the output using 100 high-efficiency single-photon detectors.
The obtained samples were validated against plausible hypotheses exploiting thermal states, distinguishable photons, and uniform distribution. The photonic quantum computer, Jiuzhang, generates up to 76 output photon clicks, which yields an output state-space dimension of 10^30 and a sampling rate that is faster than using the state-of-the-art simulation strategy and supercomputers by a factor of ~10^14.
The Jiuzhang Photonic Quantum Computer. Source: Global Times
Quantum primacy
This is only the second demonstration of quantum primacy, which is a term that describes the point at which a quantum computer exponentially outspeeds any classical one, effectively doing what would otherwise essentially be computationally impossible.
🙂
☑️ #29 Oct 5, 2020
Paper: “A rigorous and robust quantum speed-up in supervised machine learning”
In 2020, IBM demonstrated a quantum algorithm with a rigorous and robust speedup when tackling certain elements of supervised machine learning.
arxiv.org: [Transcription] [Abstract] Over the past few years several quantum machine learning algorithms were proposed that promise quantum speed-ups over their classical counterparts. Most of these learning algorithms either assume quantum access to data -- making it unclear if quantum speed-ups still exist without making these strong assumptions, or are heuristic in nature with no provable advantage over classical algorithms. In this paper, we establish a rigorous quantum speed-up for supervised classification using a general-purpose quantum learning algorithm that only requires classical access to data. Our quantum classifier is a conventional support vector machine that uses a fault-tolerant quantum computer to estimate a kernel function. Data samples are mapped to a quantum feature space and the kernel entries can be estimated as the transition amplitude of a quantum circuit. We construct a family of datasets and show that no classical learner can classify the data inverse-polynomially better than random guessing, assuming the widely-believed hardness of the discrete logarithm problem. Meanwhile, the quantum classifier achieves high accuracy and is robust against additive errors in the kernel entries that arise from finite sampling statistics.
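A minimal sketch of the pipeline's shape (a hypothetical example, not the paper's code): the classifier itself is an ordinary SVM that consumes a precomputed Gram matrix; only the kernel entries would come from quantum circuit transition amplitudes. Here scikit-learn's `make_moons` toy data and a classical RBF kernel stand in for the quantum-estimated kernel.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

# Placeholder for the quantum kernel: in the paper, each entry K[i, j] is
# estimated on the quantum computer as the transition amplitude of a
# feature-map circuit. An RBF kernel stands in so this sketch runs classically.
def kernel_matrix(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A conventional SVM consumes the (quantum-estimated) Gram matrix directly.
clf = SVC(kernel="precomputed").fit(kernel_matrix(X_tr, X_tr), y_tr)
acc = clf.score(kernel_matrix(X_te, X_tr), y_te)
print(f"test accuracy with precomputed kernel: {acc:.2f}")
```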
CES 2019: prnewswire.com: IBM Unveils World's First Integrated Quantum Computing System for Commercial Use. YORKTOWN HEIGHTS, N.Y., Jan. 8, 2019 /PRNewswire/ -- At the 2019 Consumer Electronics Show (CES), IBM (NYSE: IBM) today unveiled IBM Q System One™, the world's first integrated universal approximate quantum computing system designed for scientific and commercial use. IBM also announced plans to open its first IBM Q Quantum Computation Center for commercial clients in Poughkeepsie, New York in 2019.
[In the summer of 2018 the team assembled the system for mechanical testing at Goppion headquarters in Milan.]
IBM, Map, and Goppion team members with IBM Q System One following the two-week assembly. Source: IBM Corporation
Photo by Ryan Lavine for IBM. Source: Goppion S.p.A.
🙂
☑️ #26 Oct 24, 2019
Useful things
@VitalikButerin: My one-sentence impression of recent quantum supremacy stuff so far is that it is to real quantum computing what hydrogen bombs are to nuclear fusion. Proof that a phenomenon and the capability to extract power from it exist, but still far from directed use toward useful things.
Quantum Supremacy Using a Programmable Superconducting Processor
Quantum Supremacy
blog.research.google: [Transcription] [Excerpts] Physicists have been talking about the power of quantum computing for over 30 years, but the questions have always been: will it ever do something useful and is it worth investing in? For such large-scale endeavors it is good engineering practice to formulate decisive short-term goals that demonstrate whether the designs are going in the right direction. So, we devised an experiment as an important milestone to help answer these questions. This experiment, referred to as a quantum supremacy experiment, provided direction for our team to overcome the many technical challenges inherent in quantum systems engineering to make a computer that is both programmable and powerful. To test the total system performance we selected a sensitive computational benchmark that fails if just a single component of the computer is not good enough.
Today we published the results of this quantum supremacy experiment in the Nature article, “Quantum Supremacy Using a Programmable Superconducting Processor”. We developed a new 54-qubit processor, named “Sycamore”, that is comprised of fast, high-fidelity quantum logic gates, in order to perform the benchmark testing. Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world’s fastest supercomputer 10,000 years to produce a similar output.
Process for demonstrating quantum supremacy. Source: Google LLC
Nature: “Quantum Supremacy” Classical supercomputer outperformed by quantum chip for the first time
🙂
☑️ #24 Oct 23, 2019 1️⃣
Paper: Quantum supremacy using a programmable superconducting processor
Quantum Supremacy
Quantum Processor > Qubits: 54 > “Sycamore”
nature.com: [Transcription] [Abstract] The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
a, Layout of processor, showing a rectangular array of 54 qubits (grey), each connected to its four nearest neighbours with couplers (blue). The inoperable qubit is outlined. b, Photograph of the Sycamore chip. Source: Nature, ISSN 1476-4687 (online)
Hands-On with Google’s Quantum Computer [Transcription] [Excerpt] Google’s quantum computing chip, dubbed Sycamore, achieved its results using exactly 53 qubits. A 54th one on the chip failed. Sycamore’s aim was to randomly produce strings of 1’s and 0’s, one digit for each qubit, producing 2^53 bit strings (that is, about 9.007 quadrillion bit strings). Because of the way the qubits interact with one another, some strings are more likely to emerge than others. Sycamore ran the number generator a million times, then sampled the results to come up with the probability that any given string would appear. The Google team also ran a simpler version of the test on Summit, a supercomputer at Oak Ridge National Laboratory, then extrapolated from those results to verify Sycamore’s output. The new chip performed the task in 200 seconds. The same chore, the researchers estimated, would have taken Summit 10,000 years.
IBM: Leveraging Secondary Storage to Simulate Deep 54-qubit Sycamore Circuits [Transcription] [Abstract] In a recent paper, we showed that secondary storage can extend the range of quantum circuits that can be practically simulated with classical algorithms. Here we refine those techniques and apply them to the simulation of Sycamore circuits with 53 and 54 qubits, with the entanglement pattern ABCDCDAB that has proven difficult to classically simulate with other approaches. Our analysis shows that on the Summit supercomputer at Oak Ridge National Laboratories, such circuits can be simulated with high fidelity to arbitrary depth in a matter of days, outputting all the amplitudes.
Quantum supremacy VS "quantum advantage"
“Because the original meaning of the term ‘quantum supremacy,’ as proposed by [California Institute of Technology theoretical physicist] John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met,” the scientists wrote in a post on the IBM Research Blog. Perhaps, then, Google’s achievement might be better labeled “quantum advantage.”
Often abbreviated to just “quantum supremacy,” the term refers to the use of a quantum computer to solve some well-defined set of problems that would take orders of magnitude longer to solve with any currently known algorithms running on existing classical computers—and not for incidental reasons, but for reasons of asymptotic quantum complexity. The emphasis here is on being as sure as possible that the problem really was solved quantumly and really is classically intractable, and ideally achieving the speedup soon (with the noisy, non-universal QCs of the present or very near future). If the problem is also useful for something, then so much the better, but that’s not at all necessary. The Wright Flyer and the Fermi pile weren’t useful in themselves.
Q6. If quantum supremacy calculations just involve sampling from probability distributions, how do you check that they were done correctly?
Glad you asked! This is the subject of a fair amount of theory that I and others developed over the last decade. I already gave you the short version in my answer to Q3: you check by doing statistics on the samples that the QC returned, to verify that they’re preferentially clustered in the “peaks” of the chaotic probability distribution D_C. One convenient way of doing this, which Google calls the “linear cross-entropy test,” is simply to sum up Pr[C outputs s_i] over all the samples s_1, …, s_k that the QC returned, and then to declare the test a “success” if and only if the sum exceeds some threshold—say, bk/2^n, for some constant b strictly between 1 and 2.
Admittedly, in order to apply this test, you need to calculate the probabilities Pr[C outputs s_i] on your classical computer—and the only known ways to calculate them require brute force and take ~2^n time. Is that a showstopper? No, not if n is 50, and you’re Google and are able to handle numbers like 2^50 (although not 2^1000, which exceeds a googol, har har). By running a huge cluster of classical cores for (say) a month, you can eventually verify the outputs that your QC produced in a few seconds—while also seeing that the QC was many orders of magnitude faster. However, this does mean that sampling-based quantum supremacy experiments are almost specifically designed for ~50-qubit devices like the ones being built right now. Even with 100 qubits, we wouldn’t know how to verify the results using all the classical computing power available on earth.
(Let me stress that this issue is specific to sampling experiments like the ones that are currently being done. If Shor’s algorithm factored a 2000-digit number, it would be easy to check the result by simply multiplying the claimed factors and running a primality test on them. Likewise, if a QC were used to simulate some complicated biomolecule, you could check its results by comparing them to experiment.)
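A small self-contained sketch of the linear cross-entropy test described above (synthetic data, not Google's: a Porter–Thomas-like distribution stands in for the classically computed probabilities of circuit C). Samples drawn from the circuit's own distribution clear the bk/2^n threshold, while uniform samples from a trivial classical spoofer do not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, b = 12, 2000, 1.5          # qubits, number of samples, threshold constant (1 < b < 2)
dim = 2 ** n

# Stand-in for the classically computed output distribution of a random circuit C:
# Porter-Thomas-like probabilities (exponentially distributed weights, normalized).
p = rng.exponential(size=dim)
p /= p.sum()

def xeb_test(samples):
    """Linear cross-entropy test: sum of Pr[C outputs s_i] compared to b*k/2^n."""
    return p[samples].sum() > b * len(samples) / dim

device_like = rng.choice(dim, size=k, p=p)    # samples clustered in the "peaks" of p
uniform_spoof = rng.integers(dim, size=k)     # what a trivial classical strategy returns

print("device-like samples pass:", xeb_test(device_like))    # expected: True
print("uniform samples pass:    ", xeb_test(uniform_spoof))  # expected: False
```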
🙂
☑️ #23 Jun 10, 2019
The Problem with Quantum Computers: Decoherence
blogs.scientificamerican.com [Transcription] [Excerpts] It’s called decoherence—but while a breakthrough solution seems years away, there are ways of getting around it
The promise of quantum computing was first recognized in the 1980s yet remains unfulfilled. Quantum computers are exceedingly difficult to engineer, build and program. As a result, they are crippled by errors in the form of noise, faults and loss of quantum coherence, which is crucial to their operation and yet falls apart before any nontrivial program has a chance to run to completion.
This loss of coherence (called decoherence), caused by vibrations, temperature fluctuations, electromagnetic waves and other interactions with the outside environment, ultimately destroys the exotic quantum properties of the computer. Given the current pervasiveness of decoherence and other errors, contemporary quantum computers are unlikely to return correct answers for programs of even modest execution time.
Quantum device > Qubits: 11 and 79 > “IonQ System1”
physicsworld.com: The first commercial quantum computer that uses trapped ions for quantum bits (qubits) has been launched by the US-based start-up IonQ. The device is unlike other commercial systems, which use qubits made from superconducting circuits. The company is now working with a small number of users to improve the technology.
IonQ Quantum device System1. Source:
The company has performed simple quantum operations on a string of 79 qubits and full quantum computations on 11 qubits. It was announced on 11 December at the “Quantum for Business (Q2B 2018)” conference in Mountain View, California.
In 2018, IBM proved that an ideal quantum computer will always beat classical computers when tackling shallow circuits and, last year, extended this proof to cover shallow circuits in the presence of noise.
science.org: [Transcription] [Abstract] Quantum effects can enhance information-processing capabilities and speed up the solution of certain computational problems. Whether a quantum advantage can be rigorously proven in some setting or demonstrated experimentally using near-term devices is the subject of active debate. We show that parallel quantum algorithms running in a constant time period are strictly more powerful than their classical counterparts; they are provably better at solving certain linear algebra problems associated with binary quadratic forms. Our work gives an unconditional proof of a computational quantum advantage and simultaneously pinpoints its origin: It is a consequence of quantum nonlocality. The proposed quantum algorithm is a suitable candidate for near-future experimental realizations, as it requires only constant-depth quantum circuits with nearest-neighbor gates on a two-dimensional grid of qubits (quantum bits).
An IBM Quantum cryostat used to keep IBM’s 50-qubit quantum computer cold in the IBM Quantum lab in Yorktown Heights, New York. Source: IBM Research Zürich (album on Flickr)
A Bristlecone chip being installed by Research Scientist Marissa Giustina at the Quantum AI Lab in Santa Barbara. Source: Quantum AI Lab.
🙂
☑️ #19 Nov 3, 2017
Paper: “Error Mitigation for Short-Depth Quantum Circuits”
In 2016, IBM developed an error mitigation strategy for short-depth quantum circuits; not quite correcting errors, but handling them without additional qubits.
journals.aps.org: [Transcription] [Abstract] Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson’s deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
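A toy illustration of the first scheme, zero-noise extrapolation (synthetic numbers, not data from the paper): measure the same observable at artificially amplified noise scales and use a Richardson-style polynomial fit to extrapolate back to zero noise. In practice the amplified scales are realized on hardware by stretching pulses or folding gates; here a decaying curve plus shot noise stands in for the measured data.

```python
import numpy as np

rng = np.random.default_rng(1)
scales = np.array([1.0, 2.0, 3.0])                 # noise amplification factors c
# Synthetic stand-in for hardware data: exact value 1.0, decaying with noise.
measured = 1.0 * np.exp(-0.15 * scales) + rng.normal(0, 0.005, size=3)

coeffs = np.polyfit(scales, measured, deg=2)       # low-order fit in the noise scale
zne_estimate = np.polyval(coeffs, 0.0)             # extrapolate to c = 0 (zero noise)

print("raw (c=1)  :", measured[0])
print("ZNE (c->0) :", zne_estimate)                # much closer to the exact value 1.0
```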
Paper: “Quantum attacks on Bitcoin, and how to protect against them”
arxiv.org: [Transcription] [Abstract] The key cryptographic protocols used to secure the internet and financial transactions of today are all susceptible to attack by the development of a sufficiently large quantum computer. One particular area at risk are cryptocurrencies, a market currently worth over 150 billion USD. We investigate the risk of Bitcoin, and other cryptocurrencies, to attacks by quantum computers. We find that the proof-of-work used by Bitcoin is relatively resistant to substantial speedup by quantum computers in the next 10 years, mainly because specialized ASIC miners are extremely fast compared to the estimated clock speed of near-term quantum computers. On the other hand, the elliptic curve signature scheme used by Bitcoin is much more at risk, and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates. We analyze an alternative proof-of-work called Momentum, based on finding collisions in a hash function, that is even more resistant to speedup by a quantum computer. We also review the available post-quantum signature schemes to see which one would best meet the security and efficiency requirements of blockchain applications.
Paper: “Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets”
In 2017, IBM presented the first experimental demonstration of a quantum computer simulating a molecule larger than hydrogen or helium — lithium hydride — using a six-qubit device.
nature.com: [Transcription] [Summary] Scalable quantum simulation. Quantum simulation is currently the most promising application of quantum computers. However, only a few quantum simulations of very small systems have been performed experimentally. Here, researchers from IBM present quantum simulations of larger systems using a variational quantum eigenvalue solver (or eigensolver), a previously suggested method for quantum optimization. They perform quantum chemical calculations of LiH and BeH2 and an energy minimization procedure on a four-qubit Heisenberg model. Their application of the variational quantum eigensolver is hardware-efficient, which means that it is optimized on the given architecture. Noise is a big problem in this implementation, but quantum error correction could eventually help this experimental set-up to yield a quantum simulation of chemically interesting systems on a quantum computer.
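A minimal sketch of the variational idea (a toy 2x2 Hamiltonian, not the LiH or BeH2 problems from the paper): prepare a parameterized state, evaluate its energy, and let a classical optimizer drive the parameter toward the ground state.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy Hamiltonian (2x2 real symmetric matrix). The VQE loop: prepare |psi(theta)>
# with a short parameterized circuit, measure <psi(theta)|H|psi(theta)>, and let
# a classical optimizer pick the next theta.
H = np.array([[-1.0, 0.5],
              [ 0.5,  0.3]])

def energy(theta):
    psi = np.array([np.cos(theta), np.sin(theta)])   # one-parameter ansatz state
    return psi @ H @ psi

res = minimize_scalar(energy, bounds=(0.0, np.pi), method="bounded")
print("VQE estimate :", res.fun)
print("exact ground :", np.linalg.eigvalsh(H)[0])    # the two agree for this toy case
```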
IBM reveals prototype of its first commercial quantum computer processor
newatlas.com: [Transcription] [Excerpt] Research teams looking to crunch massive data sets have had access to IBM's quantum processor through the cloud for about a year, and now the company is upping that power with a new generation of processors. The first, boasting 16 quantum bits (qubits), will increase the processing power available through the cloud, while the second is twice as powerful again and designed as a prototype of a commercial quantum processor.
Google’s “Foxtail” quantum processor. Photo by Erik Lucero. Source: Quantum AI Lab.
🔹Continue reading
🙂
☑️ #14 Apr 29, 2015
Paper: “Demonstration of a quantum error detection code using a square lattice of four superconducting qubits”
In 2015, IBM demonstrated the [[2,0,2]] error-correcting code — one of the earliest experimental demonstrations of quantum error detection.
nature.com: [Transcription] [Abstract] The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
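A tiny numerical illustration of error detection by parity (stabilizer) measurement, using a two-qubit Bell state in NumPy rather than the paper's two-by-two lattice: the encoded state is a +1 eigenstate of both Z⊗Z and X⊗X, and any single-qubit X, Z or Y error flips at least one of those parities. (Here the parities are computed directly as expectation values; in the experiment they are extracted non-destructively via syndrome qubits.)

```python
import numpy as np

# Single-qubit Pauli operators
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

# Encoded two-qubit entangled state (|00> + |11>)/sqrt(2): a +1 eigenstate of
# both parity operators Z⊗Z and X⊗X, which play the role of error syndromes.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
ZZ, XX = np.kron(Z, Z), np.kron(X, X)

def syndromes(state):
    """Parity expectation values; (+1, +1) means no error detected."""
    return (np.real(state.conj() @ ZZ @ state),
            np.real(state.conj() @ XX @ state))

print("clean state  :", syndromes(bell))                  # (1.0, 1.0)
print("X on qubit 0 :", syndromes(np.kron(X, I2) @ bell)) # ZZ parity flips to -1
print("Z on qubit 0 :", syndromes(np.kron(Z, I2) @ bell)) # XX parity flips to -1
```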
quantumfrontiers.com: [Transcription] In May 1994, Artur Ekert visited Caltech to give a seminar about quantum cryptography. Near the end of the talk, Ekert revealed an exciting new development — just weeks earlier, Peter Shor had announced the discovery of an efficient quantum algorithm for finding the prime factors of large composite integers, a problem for which no efficient classical algorithm is known.
Perhaps I’ve embellished the memory over time, but I recall being awestruck by this news. I spent the next month at the Isaac Newton Institute attending a workshop about quantum black holes, and though it was a very good workshop and I had some great discussions, I spent most of my time there secretly trying to understand Shor’s paper, which Ekert had emailed to me. This took some effort, because I knew little about algorithms or computational complexity at that time (even less than I know now), but by the end of the workshop I felt I understood the ideas behind Shor’s algorithm pretty well. I did not yet realize that I was in the midst of a career transition from particle physics to quantum information science.
Source: Isaac Newton Institute for Mathematical Sciences
I had heard before about the idea of quantum computation, but had not been very interested. After Ekert’s disclosure I grasped why the subject is really exciting — with highly controllable quantum systems we should be able to perform surprising and useful tasks that would be impossible in a world governed by classical rather than quantum physics.
Quantum technology has advanced steadily in the 18 years since I heard Ekert’s seminar, but quantum computers that can factor large numbers by running Shor’s algorithm still seem far off. I think we’ll get there eventually, but perhaps it will take a few more decades. In a future post I may say more about why it’s taking so long. Today I would rather emphasize another way to exploit controllable quantum systems.
In parallel with the progress in building hardware for quantum computing, there have been marvelous achievements in controlling systems of ultracold atoms to explore collective phenomena in quantum many-body systems. Eventually, we should be able to use these tools to simulate quantum systems that are too hard to simulate classically, and so gain valuable insights into the behavior of highly correlated quantum matter. But when?
A recent paper by Martin Zwierlein’s group at MIT (arXiv version here) suggests that this may have already happened. They measure, as a function of temperature and chemical potential, the density of a gas of spin-1/2 fermionic Lithium 6 atoms with strong short-range interactions. They compare the measured equation of state to a calculation done using a method for summing perturbation theory (which I had not heard of before) called Bold Diagrammatic Monte Carlo (BDMC), finding excellent agreement. It’s not clear how to do so accurate a calculation using any other currently known technique.
The foundations of BDMC have not been firmly established, in particular because the method rests on unproven assumptions about the convergence of perturbation theory. By validating the method in their experiment, the authors seem to have extracted some nontrivial information about a strongly-coupled quantum many-body system which goes beyond what we know from existing analytic and computational methods. As they put it: “This presents the first — although long anticipated — compelling example of how ultracold atoms can guide new microscopic theories for strongly interacting quantum matter.”
In a recent talk, I proposed using the term “quantum supremacy” to describe super-classical tasks performed using controllable quantum systems. I’m not completely happy with this term, and would be glad if readers could suggest something better.
But more importantly: Is it reasonable to say that the Zwierlein group has achieved quantum supremacy?
Quantum computing and the entanglement frontier [Transcription] [Abstract] Quantum information science explores the frontier of highly complex quantum states, the "entanglement frontier." This study is motivated by the observation (widely believed but unproven) that classical systems cannot simulate highly entangled quantum systems efficiently, and we hope to hasten the day when well controlled quantum systems can perform tasks surpassing what can be done in the classical world. One way to achieve such "quantum supremacy" would be to run an algorithm on a quantum computer which solves a problem with a super-polynomial speedup relative to classical computers, but there may be other ways that can be achieved sooner, such as simulating exotic quantum states of strongly correlated matter. To operate a large scale quantum computer reliably we will need to overcome the debilitating effects of decoherence, which might be done using "standard" quantum hardware protected by quantum error-correcting codes, or by exploiting the nonabelian quantum statistics of anyons realized in solid state systems, or by combining both methods. Only by challenging the entanglement frontier will we learn whether Nature provides extravagant resources far beyond what the classical world would allow. (3/26/12)
🙂
☑️ #12 Nov 3, 2011
Paper: “Protecting superconducting qubits from radiation”
In 2011, IBM released a paper demonstrating how external radiation limits qubit performance, giving the field a new understanding of how to improve these devices.
pubs.aip.org: [Transcription] [Abstract] We characterize a superconducting qubit before and after embedding it along with its package in an absorptive medium. We observe a drastic improvement in the effective qubit temperature and over a tenfold improvement in the relaxation time up to 5.7 μs. Our results suggest the presence of external radiation inside the cryogenic apparatus can be a limiting factor for both qubit initialization and coherence. Calculations support the hypothesis that the relaxation is not limited by direct coupling of thermal photons to the qubit prior to embedding, but by dissipation arising from quasiparticle generation.
spectrum.ieee.org: [Transcription] [Excerpts] Connected quantum dots may form the building blocks of a solid-state quantum computer.
4 August 2004--Researchers have long been trying to develop quantum computers based on the same semiconductor technologies that have so successfully powered conventional computers. Now, after years of exploration, two groups have begun to connect the dots--literally.
Separate groups of researchers at Duke University, in Durham, N.C., and at Harvard University, in Cambridge, Mass., have independently demonstrated how to connect quantum dots to form what may be the building blocks of a solid-state quantum computer.
[Transcription] [Excerpts] The story of the Duke Quantum Center is of longtime partners coming together.
Our origins date to 1995, when National Institute of Standards and Technology (NIST) physicists Christopher Monroe and David Wineland led the demonstration of the first ever quantum logic gate, using trapped atomic ions as qubits (Wineland received the 2012 Nobel Prize in Physics partly based on this work). A few years later, this work caught the attention of Bell Labs physicist-engineers Jungsang Kim and Richart Slusher.
Jungsang Kim (Bell Labs)
In 2004, when Kim joined Duke’s Electrical and Computer Engineering department, he proposed a semiconductor chip and optical systems approach to ion trapping, creating Duke’s first quantum information lab. Shortly after that, Kim and Monroe (then at the University of Maryland), started a collaboration that has intensified to this day.
Charles Bennett and Gilles Brassard employed Wiesner's conjugate coding for distribution of cryptographic keys
Paper (Abstract): This paper appeared originally on pages 175–179 of the Proceedings of the International Conference on Computers, Systems and Signal Processing, which took place in Bangalore (now Bengalūru) in December 1984
More archives from past decades in upcoming newsletters
☑️ #10 November 1982
Paper: "Quantum mechanical hamiltonian models of turing machines" by Paul Benioff
At the Physics of Computation Conference in 1981, Paul Benioff showed that a computer could operate under the laws of quantum mechanics.
ui.adsabs.harvard.edu: [Transcription] [Abstract] Quantum mechanical Hamiltonian models, which represent an arbitrary but finite number of steps of any Turing machine computation, are constructed here on a finite lattice of spin-1/2 systems. Different regions of the lattice correspond to different components of the Turing machine (plus recording system). Successive states of any machine computation are represented in the model by spin configuration states. Both time-independent and time-dependent Hamiltonian models are constructed here. The time-independent models do not dissipate energy or degrade the system state as they evolve. They operate close to the quantum limit in that the total system energy uncertainty/computation speed is close to the limit given by the time-energy uncertainty relation. However, the model evolution is time global and the Hamiltonian is more complex. The time-dependent models do not degrade the system state. Also they are time local and the Hamiltonian is less complex.
Richard Feynman pointed out, for the first time, the possibility of using atoms themselves to simulate quantum properties, instead of using the binary logic of classical computers. (Source: Arnaldo Gunzi)
✧
☑️ #9 Jul 14, 1980
Paper: "Reversible computing” by Tommaso Toffoli
Logic Circuits
“Toffoli Gate” (also CCNOT), introduced by Tommaso Toffoli, is a universal reversible logic gate, which means that any classical reversible circuit can be constructed from Toffoli gates.
*Controlled-controlled-not" (CCNOT) gate:
3-bit inputs and outputs
If the first two bits are set, it flips the third bit.
if the first two bits are both set to 1, it inverts the third bit, otherwise all bits stay the same.
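A minimal sketch of the gate as a classical reversible function on three bits (plain Python, added here for illustration; not from Toffoli's paper):

from itertools import product

def toffoli(a, b, c):
    # Flip the target bit c only when both control bits a and b are 1.
    return a, b, c ^ (a & b)

for bits in product((0, 1), repeat=3):
    out = toffoli(*bits)
    assert toffoli(*out) == bits  # the gate is its own inverse, hence reversible
    print(bits, "->", out)

Fixing the third input to 1 makes the last output bit NAND(a, b), which hints at why the gate is universal for classical computation.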
link.springer.com (pdf): [Transcription] [Abstract] The theory of reversible computing is based on invertible primitives and composition rules that preserve invertibility. With these constraints, one can still satisfactorily deal with both functional and structural aspects of computing processes; at the same time, one attains a closer correspondence between the behavior of abstract computing systems and the microscopic physical laws (which are presumed to be strictly reversible) that underly any concrete implementation of such systems.
According to a physical interpretation, the central result of this paper is that it is ideally possible to build sequential circuits with zero internal power dissipation.
Around the same time, Yuri Manin proposed the idea of a quantum computer.
☑️ #8 1979 - 1980
Paper: "The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines" by Paul Benioff
Benioff described a quantum mechanical model of Turing Machines based on a classical description of reversible Turing machines by Charles H. Bennett (1973) and established quantum computation as a theoretical possibility.
Journal of Statistical Physics, Vol. 22, 1980
Paul Anthony Benioff (1930 - 2022). CC BY 4.0
🙂
☑️ #7 August 1976
Paper: "Quantum Information Theory" by Roman S. Ingarden
After the "Quantum Information Theory" article publication Ingarden is considered one of the founding fathers of the modern theory of quantum information.
sciencdirect.com: [Transcription] [Abstract] A conceptual analysis of the classical information theory of Shannon (1948) shows that this theory cannot be directly generalized to the usual quantum case. The reason is that in the usual quantum mechanics of closed systems there is no general concept of joint and conditional probability. Using, however, the generalized quantum mechanics of open systems (A. Kossakowski 1972) and the generalized concept of observable (“semiobservable”, E.B. Davies and J.T. Lewis 1970) it is possible to construct a quantum information theory being then a straightforward generalization of Shannon's theory.
Paper: "Thermodynamical models of information processing" by R. P. Poplavskii
The limits of the Superposition principle (position and momentum)
The paper showed the computational infeasibility of simulating quantum systems on classical computers.
ufn.ru: [Transcription] [Abstract] [Translated from Russian] The main problem of the thermodynamics of information processes. Introduction. History of the problem. The measurement process and its information characteristics. Thermodynamic limits of the accuracy of physical measurement. Entropy defect in the measurement. The lower limit of irreversibility of the physical measurement process. The energy price of accuracy and of the amount of information. Thermodynamic characteristics of the information transmission process. The energetic meaning of coding. Natural representation of a number. Positional coding methods. Thermodynamic models of information processing. Criteria for the complexity of information processing. Energy criterion of the complexity of information processing. On effective calculation methods. Conclusion. Literature cited.
“The general principle of superposition of quantum mechanics applies to the states [that are theoretically possible without mutual interference or contradiction] ... of any one dynamical system. It requires us to assume that between these states there exist peculiar relationships such that whenever the system is definitely in one state we can consider it as being partly in each of two or more other states. The original state must be regarded as the result of a kind of superposition of the two or more new states, in a way that cannot be conceived on classical ideas. Any state may be considered as the result of a superposition of two or more other states, and indeed in an infinite number of ways. Conversely, any two or more states may be superposed to give a new state...
Paul Adrien Maurice Dirac
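In modern bra-ket notation (added here for orientation, not part of Dirac's quoted text), the principle says that normalized linear combinations of states are again valid states:
\[
  |\psi\rangle = \alpha|A\rangle + \beta|B\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]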
🙂
☑️ #5 Apr 12, 1973
Paper: "Logical Reversibility of Computation" by Charles H. Bennett
In 1973, building on Landauer's work, Bennett showed that general-purpose computation can indeed be performed by a logically and thermodynamically reversible apparatus, such as a chemical Turing machine.
https://researcher.watson.ibm.com/researcher/view_person_subpage.php?id=2861
dna.caltech.edu (pdf): [Transcription] [Abstract] The usual general-purpose computing automaton (e.g., a Turing machine) is logically irreversible: its transition function lacks a single-valued inverse.
Here it is shown that such machines may be made logically reversible at every step, while retaining their simplicity and their ability to do general computations. This result is of great physical interest because it makes plausible the existence of thermodynamically reversible computers which could perform useful computations at useful speed while dissipating considerably less than kT of energy per logical step.
In the first stage of its computation the logically reversible automaton parallels the corresponding irreversible automaton, except that it saves all intermediate results, thereby avoiding the irreversible operation of erasure.
The second stage consists of printing out the desired output.
The third stage then reversibly disposes of all the undesired intermediate results by retracing the steps of the first stage in backward order (a process which is only possible because the first stage has been carried out reversibly), thereby restoring the machine (except for the now-written output tape) to its original condition.
The final machine configuration thus contains the desired output and a reconstructed copy of the input, but no other undesired data. The foregoing results are demonstrated explicitly using a type of three-tape Turing machine.
The biosynthesis of messenger RNA is discussed as a physical example of reversible computation.
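A minimal sketch of the compute-copy-uncompute idea in Python (an illustrative model, not Bennett's three-tape construction; run_reversibly and the step list are assumptions made for this example):

def run_reversibly(x, steps):
    """steps: list of (forward, inverse) pairs, each forward a bijection."""
    # Stage 1: compute forward, saving every intermediate result (no erasure).
    history = [x]
    for fwd, _ in steps:
        history.append(fwd(history[-1]))
    # Stage 2: copy out the desired output to a fresh register.
    output = history[-1]
    # Stage 3: retrace the steps in reverse order to dispose of the history,
    # restoring the machine (apart from the copied output) to its original state.
    state = history[-1]
    for _, inv in reversed(steps):
        state = inv(state)
    assert state == x
    return x, output

# Example: two increments then a doubling; every step has an exact inverse.
steps = [(lambda v: v + 1, lambda v: v - 1),
         (lambda v: v + 1, lambda v: v - 1),
         (lambda v: 2 * v, lambda v: v // 2)]
print(run_reversibly(5, steps))  # -> (5, 14)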
Paper: "Bounds for the quantity of information transmitted by a quantum communication channel"
http://www.mi.ras.ru/~holevo/: Holevo's theorem by Alexander Semyonovich Holevo is an important limiting theorem in the field of quantum computing, interdisciplinary physics and computer science. The theorem establishes the upper limit of the amount of information that can be learned about the quantum state (available information).
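For reference, the standard modern statement of the bound (not a quotation from the 1973 paper): for an ensemble \{p_x, \rho_x\} encoding a classical variable X, the information obtainable from any measurement outcome Y satisfies
\[
  I(X\!:\!Y) \;\le\; \chi \;=\; S\Big(\sum_x p_x \rho_x\Big) - \sum_x p_x\, S(\rho_x),
\]
where S is the von Neumann entropy. In particular, n qubits can yield at most n bits of accessible classical information.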
🙂
☑️ #3 March, 1970
Paper: “The concept of transition in quantum mechanics” by James L. Park
ui.adsabs.harvard.edu: The concept of quantum transition is critically examined from the perspective of the modern quantum theory of measurement. Historically rooted in the famous quantum jump of the Old Quantum Theory, the transition idea survives today in experimental jargon due to (1) the notion of uncontrollable disturbance of a system by measurement operations and (2) the wave-packet reduction hypothesis in several forms. Explicit counterexamples to both (1) and (2) are presented in terms of quantum measurement theory. It is concluded that the idea of transition, or quantum jump, can no longer be rationally comprehended within the framework of contemporary physical theory.
1. HISTORICAL DEVELOPMENT OF THE TRANSITION CONCEPT
Quantum physics has evolved during this century through two stages. The first, often called Old Quantum Theory (OQT), was characterized by a prevailing allegiance to the Newtonian-Maxwellian world view, in the sense that the fundamental concepts of the era of mechanism were steadfastly retained, the old laws of nature being subject to amendment, but not to repeal.
In physics, the no-cloning theorem states that it is impossible to create an independent and identical copy of an arbitrary unknown quantum state, a statement which has profound implications in the field of quantum computing among others. The theorem is an evolution of the 1970 no-go theorem authored by James Park.
*https://en.wikipedia.org/wiki/No-cloning_theorem
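A compact way to see why cloning fails (a textbook linearity argument, not text from Park's paper): suppose a single unitary U copied arbitrary states, U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle. For |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, linearity gives
\[
  U\big(\alpha|0\rangle + \beta|1\rangle\big)|0\rangle
  = \alpha|0\rangle|0\rangle + \beta|1\rangle|1\rangle
  \;\neq\;
  \big(\alpha|0\rangle + \beta|1\rangle\big) \otimes \big(\alpha|0\rangle + \beta|1\rangle\big)
\]
unless \alpha\beta = 0, so no single device can copy every unknown state.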
🙂
☑️ #2 1968 - 1983
Paper: “Conjugate Coding” by Stephen Wiesner
Quantum Money
In 1968, Stephen Wiesner invented conjugate coding, a cryptographic tool considered part of a method for creating fraud-proof banking notes.
dl.acm.org: [Transcription] [Excerpts] This paper treats a class of codes made possible by restrictions on measurement related to the uncertainty principle. Two concrete examples and some general results are given.
The uncertainty principle imposes restrictions on the capacity of certain types of communication channels. This paper will show that in compensation for this "quantum noise", quantum mechanics allows us novel forms of coding without analogue in communication channels adequately described by classical physics.
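A toy, purely classical simulation of the quantum-money intuition (the (bit, basis) representation and function names are illustrative assumptions, not Wiesner's formalism): each note position hides a bit in one of two conjugate bases, and only the issuer, who remembers the bases, can verify every position with certainty; anyone else measuring in a guessed basis gets random answers half the time.

import random

def encode(bit, basis):
    return (bit, basis)            # idealized qubit: value plus preparation basis

def measure(qubit, basis):
    value, prep_basis = qubit
    if basis == prep_basis:
        return value               # matching basis: deterministic outcome
    return random.randint(0, 1)    # conjugate basis: outcome is uniformly random

# A "banknote": a secret list of (bit, basis) pairs known only to the issuer.
secret = [(random.randint(0, 1), random.choice("+x")) for _ in range(8)]
note = [encode(bit, basis) for bit, basis in secret]
print(all(measure(q, basis) == bit for q, (bit, basis) in zip(note, secret)))  # True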
The Very Strange -- And Fascinating -- Ideas Behind IBM's Quantum Computer [Transcription] [Excerpts] It was that ability to collaborate and explore new horizons without limits that drove Bennett’s work (quantum cryptography). For example, his friend Stephen Wiesner came up with the idea of quantum money that, because of the rules of quantum mechanics, would be impossible to counterfeit. It was the first time someone had a concrete plan to use quantum mechanics for informational purposes.
🙂
☑️ #1 Oct 28, 1963
Reversible Turing Machine*
Due to the unitarity of quantum mechanics, quantum circuits are reversible, as long as they do not "collapse" the quantum states on which they operate.
Williams, Colin P. (2011). “Explorations in Quantum Computing”
*Yves Lecerf, author of a fundamental discovery in mathematics about reversible computing, proposed a reversible Turing machine in 1963: “Logique Mathématique: Machines de Turing réversibles” (“Mathematical logic: reversible Turing machines”).
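As a one-line reminder of why unitarity gives reversibility (a textbook fact, stated here for context rather than drawn from Williams' book or Lecerf's paper): every unitary U satisfies U^\dagger U = I, so each gate has an explicit inverse, and a measurement-free circuit is undone by applying the adjoints in reverse order:
\[
  U^{-1} = U^\dagger, \qquad (U_n \cdots U_2 U_1)^{-1} = U_1^\dagger U_2^\dagger \cdots U_n^\dagger .
\]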