Mathematicians Discover the Perfect Way to Multiply


Four thousand years ago, the Babylonians invented multiplication. Last month, mathematicians perfected it.

On March 18, two researchers described the fastest method ever discovered for multiplying two very large numbers. The paper marks the culmination of a decades-long search to find the most efficient procedure for performing one of the most basic operations in math.

“Everybody thinks basically that the method you learn in school is the best one, but in fact it’s an active area of research,” said Joris van der Hoeven, a mathematician at the French National Center for Scientific Research and one of the co-authors.

The complexity of many computational problems, from calculating new digits of pi to finding large prime numbers, boils down to the speed of multiplication. Van der Hoeven describes their result as a kind of mathematical speed limit for how fast many kinds of problems can be solved.

“In physics you have constants like the speed of light which allow you to describe all kinds of phenomena,” van der Hoeven said. “If you want to know how fast computers can solve certain mathematical problems, then integer multiplication pops up as some kind of basic building brick with respect to which you can express those kinds of speeds.”

Most everyone learns to multiply the same way. We stack two numbers, multiply every digit in the bottom number by every digit in the top number, and do addition at the end. If you’re multiplying two two-digit numbers, you end up performing four smaller multiplications to produce a final product.

The grade school or “carrying” method requires about n² steps, where n is the number of digits of each of the numbers you’re multiplying. So three-digit numbers require nine multiplications, while 100-digit numbers require 10,000 multiplications.
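To make that n² count concrete, here is a small Python sketch (the function and its step counter are ours, purely for illustration) of the carrying method's core double loop:

```python
def schoolbook_multiply(x, y):
    """Long multiplication: every digit of x times every digit of y."""
    dx = [int(d) for d in str(x)[::-1]]  # digits, least significant first
    dy = [int(d) for d in str(y)[::-1]]
    total, count = 0, 0
    for i, a in enumerate(dx):
        for j, b in enumerate(dy):
            total += a * b * 10 ** (i + j)  # shift by place value, then add
            count += 1                      # one single-digit multiplication
    return total, count
```

Multiplying two three-digit numbers this way performs exactly 3 × 3 = 9 single-digit multiplications, matching the count above.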

The carrying method works well for numbers with just a few digits, but it bogs down when we’re multiplying numbers with millions or billions of digits (which is what computers do to accurately calculate pi or as part of the worldwide search for large primes). To multiply two numbers with 1 billion digits requires 1 billion squared, or 10¹⁸, multiplications, which would take a modern computer roughly 30 years.

For millennia it was widely assumed that there was no faster way to multiply. Then in 1960, the 23-year-old Russian mathematician Anatoly Karatsuba took a seminar led by Andrey Kolmogorov, one of the great mathematicians of the 20th century. Kolmogorov asserted that there was no general procedure for doing multiplication that required fewer than n² steps. Karatsuba thought there was, and after a week of searching, he found it.

Karatsuba’s method involves breaking up the digits of a number and recombining them in a novel way that allows you to substitute a small number of additions and subtractions for a large number of multiplications. The method saves time because addition takes only 2n steps, as opposed to n² steps.


“With addition, you do it a year earlier in school because it’s much easier, you can do it in linear time, almost as fast as reading the numbers from right to left,” said Martin Fürer, a mathematician at Pennsylvania State University who in 2007 created what was at the time the fastest multiplication algorithm.

When dealing with large numbers, you can repeat the Karatsuba procedure, splitting the original number into almost as many parts as it has digits. And with each splitting, you replace multiplications that require many steps to compute with additions and subtractions that require far fewer.
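A minimal recursive sketch of Karatsuba's splitting idea (our own illustration, not an optimized implementation): split each factor around the midpoint of its digits, recurse three times instead of four, and recombine the pieces with additions, subtractions and shifts.

```python
def karatsuba(x, y):
    # Base case: single-digit numbers multiply directly.
    if x < 10 or y < 10:
        return x * y
    # Split each number around the midpoint of its digits.
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    # Three recursive multiplications instead of four.
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    # Recombine with shifts (multiplications by powers of 10) and additions.
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```

The trick is in z1: the product of the two sums contains all four cross terms, so subtracting z0 and z2 recovers the middle term without computing its two products separately.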

“You can turn some of the multiplications into additions, and the idea is additions will be faster for computers,” said David Harvey, a mathematician at the University of New South Wales and a co-author on the new paper.

Karatsuba’s method made it possible to multiply numbers using only n^1.58 single-digit multiplications. Then in 1971 Arnold Schönhage and Volker Strassen published a method capable of multiplying large numbers in n × log n × log(log n) multiplicative steps, where log n is the logarithm of n. For two 1-billion-digit numbers, Karatsuba’s method would require about 165 trillion additional steps.
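A quick back-of-the-envelope check of these step counts (we use the exact Karatsuba exponent log₂ 3 ≈ 1.585, so the figure comes out slightly above what the rounded n^1.58 estimate gives):

```python
import math

n = 10 ** 9  # number of digits in each factor

# Schoolbook: every digit times every digit.
schoolbook = n ** 2

# Karatsuba: three recursive products per split gives n^(log2 3).
karatsuba_steps = n ** math.log2(3)

print(f"schoolbook steps: {schoolbook:.2e}")        # 1.00e+18
print(f"karatsuba steps:  {karatsuba_steps:.2e}")   # a few hundred trillion
```

Either way, Karatsuba's count is thousands of times smaller than the schoolbook's 10¹⁸ for billion-digit inputs.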

Joris van der Hoeven, a mathematician at the French National Center for Scientific Research, helped create the fastest-ever method for multiplying very large numbers.
École Polytechnique

Schönhage and Strassen’s method, which is how computers multiply large numbers, had two other important long-term consequences. First, it introduced the use of a technique from the field of signal processing called a fast Fourier transform. The technique has been the basis for every fast multiplication algorithm since.
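As a toy illustration of how a Fourier transform enters (this is not Schönhage and Strassen's actual algorithm, which works over modular rings precisely to avoid the rounding error that limits this floating-point sketch to modest sizes), one can multiply integers by convolving their digit sequences with an FFT:

```python
import cmath

def fft(a, invert=False):
    """Recursive Cooley-Tukey FFT; invert=True gives the unscaled inverse."""
    n = len(a)
    if n == 1:
        return a
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    result = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        result[k] = even[k] + w * odd[k]
        result[k + n // 2] = even[k] - w * odd[k]
    return result

def multiply(x, y):
    # Digit sequences, least significant first, padded to a power of two.
    dx = [int(d) for d in str(x)[::-1]]
    dy = [int(d) for d in str(y)[::-1]]
    size = 1
    while size < len(dx) + len(dy):
        size *= 2
    fa = fft([complex(d) for d in dx + [0] * (size - len(dx))])
    fb = fft([complex(d) for d in dy + [0] * (size - len(dy))])
    # Pointwise products in frequency space = convolution of the digits.
    fc = fft([a * b for a, b in zip(fa, fb)], invert=True)
    coeffs = [round(v.real / size) for v in fc]
    # Carry propagation turns the convolution back into ordinary digits.
    out, carry = 0, 0
    for i, c in enumerate(coeffs):
        carry += c
        out += (carry % 10) * 10 ** i
        carry //= 10
    return out + carry * 10 ** len(coeffs)
```

The payoff is that the convolution, which naively costs n² digit products, is done with a few transforms costing on the order of n × log n operations each.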

Second, in that same paper Schönhage and Strassen conjectured that there should be an even faster algorithm than the one they found—a method that needs only n × log n single-digit operations—and that such an algorithm would be the fastest possible. Their conjecture was based on a hunch that an operation as fundamental as multiplication must have a limit more elegant than n × log n × log(log n).

“It was kind of a general consensus that multiplication is such an important basic operation that, just from an aesthetic point of view, such an important operation requires a nice complexity bound,” Fürer said. “From general experience the mathematics of basic things at the end always turns out to be elegant.”

Schönhage and Strassen’s ungainly n × log n × log(log n) method held on for 36 years. In 2007 Fürer beat it and the floodgates opened. Over the past decade, mathematicians have found successively faster multiplication algorithms, each of which has inched closer to n × log n, without quite reaching it. Then last month, Harvey and van der Hoeven got there.

Their method is a refinement of the major work that came before them. It splits up digits, uses an improved version of the fast Fourier transform, and takes advantage of other advances made over the past forty years. “We use [the fast Fourier transform] in a much more violent way, use it several times instead of a single time, and replace even more multiplications with additions and subtractions,” van der Hoeven said.

Harvey and van der Hoeven’s algorithm proves that multiplication can be done in n × log n steps. However, it doesn’t prove that there’s no faster way to do it. Establishing that this is the best possible approach is much more difficult. At the end of February, a team of computer scientists posted a paper arguing that if another unproven conjecture is also true, this is indeed the fastest way multiplication can be done.

And while the new algorithm is important theoretically, in practice it won’t change much, since it’s only marginally better than the algorithms already being used. “The best we can hope for is we’re three times faster,” van der Hoeven said. “It won’t be spectacular.”

In addition, the landscape of computing has changed. Two decades ago, computers performed addition much faster than multiplication. The speed gap between multiplication and addition has narrowed considerably over the past 20 years to the point where multiplication can be even faster than addition in some chip architectures. With some hardware, “you could actually do addition faster by telling the computer to do a multiplication problem, which is just insane,” Harvey said.

Hardware changes with time, but best-in-class algorithms are eternal. Regardless of what computers look like in the future, Harvey and van der Hoeven’s algorithm will still be the most efficient way to multiply.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation, whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
