
Quantum algorithms versus classical computers

The pursuit of “quantum supremacy” – unequivocal proof that a quantum computer can complete some task faster than any classical computer – has had a paradoxical side effect: it has triggered a boom in quasi-quantum classical algorithms.

It is a popular misconception that the potential and the limits of quantum computing come down to hardware alone. In the digital age we have grown used to measuring progress in clock speeds and storage capacities, and the 50-qubit quantum machines now coming online at Intel, IBM and other companies have fed the prediction that we are approaching quantum supremacy – that nebulous threshold beyond which quantum computers can do things no classical machine can.

But quantum supremacy will not arrive as a single, sweeping victory. It will be a drawn-out series of smaller duels, fought problem by problem between quantum algorithms and classical algorithms. “With quantum computers, progress cannot be measured by speed alone,” says quantum theorist Michael Bremner of the University of Technology Sydney in Australia. “It’s much more about the complexity of the algorithms used.”

Paradoxically, reports of powerful quantum computers are motivating improvements to classical ones – making it that much harder for quantum machines to gain a lead. “When people talk about quantum computers, classical computing is often dismissed as outdated,” explains mathematician and computer scientist Cristian Calude of the University of Auckland in New Zealand. “But that is by no means true. This competition has not yet been decided.”

It is a competition in which the conditions are constantly shifting. “Where the point of quantum supremacy lies depends on how good the best classical algorithms are,” explains theoretical physicist John Preskill of the California Institute of Technology. “As they get better, that point moves too.”

Calculating everything computable in the physical universe

Before the dream of quantum computers took shape in the 1980s, most computer scientists took it for granted that classical computing was all there was. The field’s pioneers had argued convincingly that classical computers – embodied in the mathematical abstraction of the Turing machine – should be able to compute everything that is computable in the physical universe, from elementary arithmetic to stock-market behavior to the collision of black holes.

What classical machines cannot necessarily do is perform all of these computations efficiently. Suppose we want to understand the chemical behavior of a molecule. That behavior depends on the behavior of the molecule’s electrons, which exist in superpositions of many classical states. Worse still, the quantum state of each electron depends on the states of all the others, through the quantum mechanical phenomenon known as entanglement. Classically calculating these entangled states, even for very simple molecules, is a nightmare of exponentially increasing complexity.

A quantum computer, by contrast, can analyze the intertwined fates of the electrons by placing its own quantum bits into superposed and entangled states, which lets it process extraordinary amounts of information. Each added qubit doubles the number of states the system can store simultaneously: two qubits can hold four states, three qubits eight, four qubits 16, and so on.
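
The doubling is easy to check for yourself. Here is a back-of-the-envelope sketch in Python (plain standard library, no quantum toolkit assumed) that also shows where the quadrillion-scale figure below comes from:

```python
# Each added qubit doubles the number of basis states a quantum
# register can hold in superposition: 2 qubits -> 4, 3 -> 8, ...
for n in [2, 3, 4, 10, 50]:
    print(f"{n} qubits -> {2**n:,} states")

# 50 qubits -> 1,125,899,906,842,624 states: the roughly
# 1.125 quadrillion classical bits cited below.
```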

“Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical” (Richard Feynman)

A mere 50 qubits would therefore suffice to model quantum states that would require some 1.125 quadrillion classical bits. For a quantum machine, the classically impracticable simulation of a large quantum mechanical system would be feasible – or so it seemed. “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,” the physicist Richard Feynman famously said in 1981. “And by golly it’s a wonderful problem, because it doesn’t look so easy.” And, of course, it really wasn’t.

Even before anyone started tinkering with quantum hardware, theorists were struggling to develop usable software for it. Feynman and David Deutsch, a physicist at the University of Oxford, showed early on that quantum information can be controlled with mathematical operations drawn from linear algebra: logic gates. Analogous to classical gates, quantum gates manipulate qubits in all possible ways – steering them through a sequence of superpositions and entanglements before their output is finally measured. By stringing gates together, theorists could easily write down quantum algorithms.
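
To make “gates are linear algebra” concrete, here is a minimal sketch in plain numpy – deliberately not a real quantum SDK – that steers two qubits through a superposition, an entangling gate and a final readout of the measurement probabilities:

```python
import numpy as np

# A gate is just a matrix; a circuit is a chain of matrix products
# applied to a state vector of 2**n complex amplitudes.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)   # Hadamard: creates a superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])        # controlled-NOT: creates entanglement

state = np.array([1, 0, 0, 0], dtype=complex)  # two qubits, both |0>
state = np.kron(H, np.eye(2)) @ state          # superpose the first qubit
state = CNOT @ state                           # entangle it with the second

# "Measuring the output": squared amplitudes give the outcome probabilities.
print(np.abs(state) ** 2)   # [0.5 0.  0.  0.5] -- an entangled Bell pair
```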

Conceiving quantum algorithms with a clear computational advantage proved more difficult, however. By the early 2000s mathematicians had put forward a handful of good candidates. Best known is the algorithm proposed in 1994 by Peter Shor, then a young researcher at Bell Laboratories, which lets a quantum computer factor integers exponentially faster than any known classical algorithm – efficiency that would crack many popular encryption schemes. Two years later Shor’s colleague Lov Grover devised an algorithm that significantly speeds up the classically laborious task of searching unsorted databases. “There were plenty of examples suggesting that quantum computers should be superior to conventional ones,” recalls quantum computer scientist Richard Jozsa of the University of Cambridge.

Beautiful quantum processes that only look complicated

But Jozsa and other researchers also came across many examples pointing the other way. “It turned out that many beautiful quantum processes merely look as if they are complicated” – and therefore hard to simulate on classical computers, says Jozsa. “But with clever, sophisticated mathematical methods you can figure out what these processes will do.” With such methods, Jozsa and his colleagues managed to efficiently simulate – or “dequantize,” as Calude would put it – a surprisingly large number of quantum circuits. That applies, for example, to circuits that involve no entanglement, as well as to those that entangle only a limited number of their qubits or use only certain kinds of entangling gates.

So what makes Shor’s algorithm so uniquely powerful? “That is still an open question,” Jozsa admits. “So far we have not been able to work out why some algorithms are easy to simulate classically and others are not. Obviously entanglement is important, but it is not the whole story.” And some experts began to wonder whether many of the supposedly superior quantum algorithms might turn out to be quite ordinary after all.

Until recently, the pursuit of quantum computational power was largely abstract. “We didn’t give much thought to implementing our algorithms, because nobody seriously believed we would have a quantum computer to run them on in the foreseeable future,” says Jozsa. Applying Shor’s algorithm to integers large enough to crack 128-bit encryption would require thousands of qubits – and probably many thousands more for error correction. Experimenters, meanwhile, were still struggling to control a mere handful.

Struggle for the right sample

But in 2011 the situation began to change. That fall, at a conference in Brussels, Preskill speculated that “the day on which well-controlled quantum systems can perform tasks that exceed the capabilities of classical systems” was not far off. The latest laboratory results, he continued, could soon lead to quantum machines with around 100 qubits – and having such machines perform some “super-classical” task was not unthinkable. (The commercial quantum processor from D-Wave Systems comprised 128 qubits at the time and now works with more than 2,000, but it tackles only very specific optimization problems, and many experts doubt it can outperform classical computers.)

“I was simply trying to emphasize that we were getting close – that we could soon reach that milestone in human civilization at which quantum technology becomes the most powerful information technology we have,” Preskill says of his intention. He christened this milestone “quantum supremacy.” The name – and his optimism – stuck. “It took on a momentum I hadn’t anticipated.”

Richard Feynman in 1984 | The physicist Richard Feynman first floated the idea of quantum computers in the 1980s. “By golly it’s a wonderful problem,” he joked, “because it doesn’t look so easy.”

The talk of quantum supremacy captured the growing excitement in the field – over experimental progress, but even more over a series of theoretical breakthroughs. The first of these came in 2004 from IBM physicists Barbara Terhal and David DiVincenzo. In their effort to understand the advantages of quantum machines, the pair had studied a class of simple quantum puzzles known as sampling problems. In time, these problems became the experimenters’ greatest hope: calculations with which they could demonstrate beyond doubt the superiority of quantum machines.

The volatile nature of quanta creates problems

Sampling problems exploit the volatile nature of quantum information. Suppose we apply a sequence of gates to 100 qubits. The circuit turns the qubits into a mathematical monster equivalent to roughly 2^100 classical bits. But the moment we measure the system, its complexity collapses to a string of just 100 bits. The system spits out a single bit string – a single sample – with a probability determined by the precise structure of the circuit.

In a sampling problem, the task is to produce a series of samples that look as if they came from this circuit. It is like tossing a coin to show that, on average, you get 50 percent heads and 50 percent tails – except that here each “toss” yields not a single value but a whole string of values, each of which may be influenced by some (or even all) of the others.

For a well-functioning quantum computer this is a no-brainer – it is simply what the machine does by nature. Classical computers, however, have a much harder time. In the worst case they must calculate the probabilities of all possible output bit strings – all 2^100 of them! – and then draw random samples from that distribution. “People always assumed that was the case,” says Ashley Montanaro, an expert on quantum algorithms at the University of Bristol, especially for very complex quantum circuits.
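
That worst case is easy to sketch at toy scale. The hypothetical numpy snippet below does exactly what the paragraph above describes – it tabulates the probability of every possible output bit string of a small stand-in circuit and then samples from the table, which has 2^n entries and is therefore hopeless long before n = 100:

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 3              # toy size; at 100 qubits the table below
dim = 2 ** n_qubits       # would need 2**100 entries

# Stand-in for a random quantum circuit: a random unitary from a QR decomposition.
X = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(X)

state = np.zeros(dim, dtype=complex)
state[0] = 1.0            # start in |000>
state = U @ state         # "run" the circuit

# Brute-force classical sampling: tabulate ALL output probabilities,
# then draw random samples from the full distribution.
probs = np.abs(state) ** 2
samples = rng.choice(dim, size=5, p=probs)
print([format(int(s), f"0{n_qubits}b") for s in samples])   # five sampled bit strings
```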

Terhal and DiVincenzo showed that even some simple quantum circuits should be hard to sample from by classical means. That set a bar: if experimenters could get a quantum system to spit out such samples, there would be good reason to believe that nothing classical could match it.

What cannot be achieved with a classical computer?

Theorists quickly extended this approach to other kinds of sampling problems. One of the most promising proposals came from Scott Aaronson, a computer scientist then at the Massachusetts Institute of Technology, and his doctoral student Alex Arkhipov. In 2010 they described, on the scientific preprint server arXiv, a quantum machine that sends photons through an optical circuit, which manipulates the light particles quantum mechanically and thereby outputs photons with a specific probability distribution.

Researchers call reproducing this pattern “boson sampling” (photons are bosons). Aaronson and Arkhipov argued that boson sampling would begin to overwhelm classical methods at around 30 photons – a plausible experimental target.

Similarly tempting were computations known as instantaneous quantum polynomial (IQP) circuits. An IQP circuit consists exclusively of gates that commute, meaning they can be applied in any order without changing the result – just as 2 + 5 gives the same answer as 5 + 2. This property makes IQP circuits mathematically pleasant. “We started by examining such circuits because they are easier to analyze,” admits Bremner. But he found they have other advantages too. His work, begun in 2010 and culminating in a 2016 paper with Montanaro and Dan Shepherd, now at the UK’s National Cyber Security Centre, explained why IQP circuits can be extremely powerful: even for physically realistic systems of hundreds – or even just dozens – of qubits, sampling would quickly become a prohibitively difficult problem for classical machines.
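
The commuting property can be seen in miniature. In a real IQP circuit the commuting gates are diagonal gates sandwiched between layers of Hadamards; the toy numpy check below (an illustration of the algebra, not a full IQP circuit) simply verifies that diagonal gates give the same result in either order:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_diagonal_gate(n_qubits):
    """A random phase gate: diagonal, like the gates inside an IQP circuit."""
    phases = rng.uniform(0, 2 * np.pi, size=2 ** n_qubits)
    return np.diag(np.exp(1j * phases))

A = random_diagonal_gate(2)
B = random_diagonal_gate(2)

# Diagonal matrices always commute, so the order of the gates is
# irrelevant -- just as 2 + 5 equals 5 + 2.
print(np.allclose(A @ B, B @ A))   # True
```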

A plan for achieving quantum supremacy

By 2016, boson sampling had still not gotten beyond six photons. Teams at Google and IBM, meanwhile, were already working on chips with nearly 50 qubits. And in August 2016, without much fanfare, Google posted a concept paper laying out a plan for achieving quantum supremacy with machines that could be realized “in the near future.”

“We’re just nibbling at the boundary, not blasting through it” (John Preskill)

The Google team had considered generating samples with an IQP circuit. But a closer look led Bremner and his colleagues to conclude that such a circuit would probably need error correction – and thus extra gates and at least a few hundred additional qubits – to clearly outstrip the best classical algorithms. So the team showed, with arguments similar to those of Aaronson and Bremner, that circuits built from non-commuting gates, though harder to build and analyze than IQP circuits, are also harder for classical machines to simulate. To make the classical computation even more challenging, the team proposed sampling from a randomly chosen circuit, so that no competing classical algorithm could exploit familiar features of the circuit’s structure to predict its behavior.

But nothing stopped the classical algorithms from growing ever more resourceful. In October 2017, for example, an IBM team showed how, with a bit of classical ingenuity, a supercomputer can simulate sampling from random circuits of up to 56 qubits – at least as long as the circuits are not too deep, that is, do not have too many layers of gates. And an even more capable algorithm has since raised the classical limit for boson sampling to around 50 photons.

Two days of computing time versus a tenth of a millisecond

These improvements, however, remain terribly inefficient. The IBM simulation took two days of computing time for a task a quantum computer would be expected to finish in under a tenth of a millisecond. Add a few more qubits – or a little more circuit depth – and quantum computers cross into the realm of supremacy. “Generally speaking, when it comes to emulating highly entangled systems, there has been no classical breakthrough that really changes the game,” says Preskill. “We’re just nibbling at the boundary, not blasting through it.”

That doesn’t mean there will be a clear winner. “People will keep arguing about where the line is,” says Bremner. Imagine this scenario: researchers sample from a 50-qubit circuit of some depth – or a slightly larger circuit with slightly less depth – and declare quantum supremacy.

“The first wave of problems we solve will be ones whose answers we don’t really care about” (Ashley Montanaro)

But the circuit is noisy – the qubits misbehave, or the gates don’t work quite reliably. Then a couple of crack classical theorists promptly come along and simulate the quantum circuit without much effort, because “with noise, things you think are hard become less hard from a classical point of view,” explains Bremner. “That’s what might happen.”

What will certainly happen is that the first “supreme” quantum machines, whenever they arrive, will not be cracking encryption or simulating novel pharmaceutical molecules. “That’s the funny thing about supremacy,” says Montanaro. “The first wave of problems we solve will be ones whose answers we don’t really care about.”

Yet these first victories, however small, will show researchers that they are on the right track – that a new regime of computation is truly possible. After that, nobody can even guess what the next wave of problems solved by quantum computers will be.
