So far this week we’ve looked at Question 1 and Question 2 in our holiday-fun suggestions for technodiversions you might like:
- Install a non-mainstream operating system. Because you can.
- Fool around with software you used to love. No one will know.
- Rewrite a well-known algorithm from scratch. Prove you can still code.
Today, it’s time to consider Question 3:
You settle down to rewrite a well-known algorithm from scratch, to prove you can still code. Which do you choose?
Conway’s Game of Life
Let’s start at the beginning…
One good reason to learn about modular exponentiation is that it’s a very handy algorithm in cryptography.
Indeed, modular exponentiation can be used to agree on a secure, secret encryption key with someone else, even if you have to use a public, insecure network for your communication. (Look for Diffie-Hellman-Merkle, also abbreviated to Diffie-Hellman or just DH.)
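As a toy illustration of the DH idea (the parameters below are deliberately tiny and just for show; real implementations use much larger, carefully chosen primes):

```python
P, g = 65537, 3                   # toy public parameters: a prime modulus and a base

a_secret, b_secret = 6789, 4321   # each side picks its own private exponent

A = pow(g, a_secret, P)           # Alice sends this over the insecure network
B = pow(g, b_secret, P)           # Bob sends this

shared_a = pow(B, a_secret, P)    # Alice computes (g^b)^a mod P
shared_b = pow(A, b_secret, P)    # Bob computes (g^a)^b mod P

print(shared_a == shared_b)       # True: both sides arrive at g^(a*b) mod P
```

Both sides end up with the same shared secret, yet neither private exponent ever crossed the network.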
The trick is that modular exponents are easy – or, at least, fairly easy – to calculate, but as good as impossible to reverse.
If you remember your school mathematics, exponentiation is repeated multiplication; the inverse (the operation that gets you back where you started) is a logarithm, or log for short.
For example, 2 to the power 3 is 2×2×2, and works out to be 8 (2^3 = 8, for short); going backwards, we say that the logarithm to the base 2 of 8 is 3 (log_2(8) = 3).
In general, if b^E = Y, then log_b(Y) = E.
(The base is the number at the bottom – the value that gets multiplied by itself over and over – and the exponent is the elevated number above the base – the number of repeated multiplications you need to do.)
Calculating 2^3 in your head is easy, but working out log_2(8) is much trickier.
In fact, you may find it easier to work the other way around and use approximation: keep on multiplying 2 by itself until you hit, or get close to, the answer you’re looking for.
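That trial-multiplication approach can be sketched in a few lines of Python (the function name `log2_by_trial` is ours, just for illustration):

```python
def log2_by_trial(y):
    """Find log2(y) for a power of two by repeated multiplication."""
    power, exponent = 1, 0
    while power < y:
        power *= 2       # keep multiplying 2 by itself...
        exponent += 1    # ...counting how many times we did it
    return exponent      # exact if y is a power of 2

print(log2_by_trial(8))  # prints 3, because 2 x 2 x 2 = 8
```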
In cryptography, modular exponentiation adds another twist: after each repeated multiplication, you divide the result by a specially-chosen prime number known as the modulus, and take the remainder, like this:
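Sketched in Python, the multiply-then-take-the-remainder loop might look like this (a deliberately naive version, one multiply per trip round the loop; the name `modexp_naive` is our own):

```python
def modexp_naive(base, exponent, modulus):
    """Compute base**exponent mod modulus, reducing after every multiply."""
    result = 1
    for _ in range(exponent):
        result = (result * base) % modulus  # multiply, divide, keep the remainder
    return result

print(modexp_naive(5, 3, 7))  # 5*5*5 = 125, and 125 mod 7 = 6
```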
Once you mix the “take the remainder” step into the exponentiation process, it becomes as good as impossible to reverse the calculations algebraically: there’s no formula to compute a modular logarithm, so you pretty much have to try every possible input until you hit on the solution by chance.
In general, if b^X mod P = Y, you can quickly calculate Y, but there is no shortcut by which you can solve the equation backwards for X if you are given Y (along with b and P).
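A quick sketch of the asymmetry, using deliberately small numbers (Python's built-in three-argument `pow()` does the fast forward calculation; the backward direction has nothing better than trying every exponent in turn):

```python
P, b = 65537, 3           # small prime modulus and base (toy sizes only)
X = 12345                 # the "secret" exponent
Y = pow(b, X, P)          # forward direction: quick, via built-in pow()

def discrete_log(b, Y, P):
    """Backward direction: no formula, so try every exponent in turn."""
    value = 1
    for guess in range(P):
        if value == Y:
            return guess
        value = (value * b) % P
    return None

# Recovers an exponent that fits, but only after a brute-force hunt
print(discrete_log(b, Y, P))
```

With a real-world 2048-bit modulus, that brute-force hunt would take longer than the lifetime of the universe.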
How quick is “quick”?
We glibly said above that "you can quickly calculate Y", but just how quick is "quick"?
Let’s ignore the modulus part for now, and just consider the repeated multiplications, given that in cryptographic calculations we aren’t usually multiplying single-digit numbers like 2×2, but dealing with numbers that have hundreds or even thousands of digits.
Most modern computers can only work on 64-bit numbers, and typical IoT devices and smartcards may only be able to do calculations 32 bits or 16 bits at a time.
So we need to break the multiplication down into chunks that the CPU can work with, just as you do in the old-school process of long multiplication.
Long multiplication lets you multiply big numbers such as 745×368 one digit at a time, because:
745 × 368
  = 745 × (3×100 + 6×10 + 8×1)
  = 745×3×100 + 745×6×10 + 745×8×1
  = (7×100 + 4×10 + 5×1) × (3×100) + (7×100 + 4×10 + 5×1) × (6×10) + . . .
  = (7×3 × 100×100 + 4×3 × 10×100 + 5×3 × 1×100) + . . . etc.
Multiplying by 10, 100, 1000 and so on is easy (just add the correct number of zeros onto the end), so long multiplication means you replace a single 3-digit by 3-digit multiply with nine 1-digit by 1-digit multiplies.
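The digit-by-digit process can be sketched directly in Python (grade-school style, counting the single-digit multiplies as we go; `long_multiply` is our own name):

```python
def long_multiply(a, b):
    """Multiply two numbers one decimal digit at a time, school-style."""
    total, single_digit_multiplies = 0, 0
    for i, da in enumerate(reversed(str(a))):      # digits of a, least significant first
        for j, db in enumerate(reversed(str(b))):  # digits of b, least significant first
            total += int(da) * int(db) * 10 ** (i + j)  # shift by adding zeros
            single_digit_multiplies += 1
    return total, single_digit_multiplies

print(long_multiply(745, 368))  # (274160, 9): the right answer, via nine 1-digit multiplies
```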
Here’s how to do long multiplication with pen and paper, if you’ve never seen it before:
This approach is quick enough for numbers that you might call “biggish”, but you bog down when the numbers become huge.
For example, using this algorithm on a 64-bit computer to multiply together two 2048-bit prime numbers means splitting the numbers into 32 chunks of 64 bits each, and therefore needs 32×32 = 1024 multiplies.
If you have a 32-bit CPU, you’ll need to do 64×64 = 4096 multiplies to produce all the intermediate results, and then do all the necessary addition operations to combine them into a multi-precision result.
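The same idea scales up from decimal digits to CPU-sized chunks; here's a sketch using 16-bit chunks in Python (standing in for a 16-bit CPU's multiply instruction), counting the single-chunk multiplies needed:

```python
CHUNK_BITS = 16               # pretend our CPU can only multiply 16 bits at a time
MASK = (1 << CHUNK_BITS) - 1

def chunks(n):
    """Split n into 16-bit chunks, least significant first."""
    out = []
    while n:
        out.append(n & MASK)
        n >>= CHUNK_BITS
    return out or [0]

def chunked_multiply(a, b):
    """Long multiplication over 16-bit chunks instead of decimal digits."""
    total, multiplies = 0, 0
    for i, ca in enumerate(chunks(a)):
        for j, cb in enumerate(chunks(b)):
            total += (ca * cb) << (CHUNK_BITS * (i + j))  # shift into position
            multiplies += 1
    return total, multiplies

a, b = 2**127 - 1, 2**89 - 1          # a 127-bit and an 89-bit number
result, count = chunked_multiply(a, b)
print(result == a * b, count)         # True, via 8 x 6 = 48 chunk multiplies
```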
In general, the complexity goes up as the square of the number of digits, which is OK for small numbers but gets sluggish quickly.
Cutting down the work
Multiplication quickly becomes computationally expensive, given that doubling the lengths of the numbers involved (for example, going from 1024-bit cryptographic keys to 2048-bit keys to stay ahead of crackers) will typically quadruple the workload.
Of course, exponentiation with huge powers means lots of multiplying, so anything we can do to reduce the number of individual multiplies will help enormously.
Handily, when it comes to exponentiation, there’s a shortcut based on the fact that we aren’t multiplying together two arbitrary numbers each time – we’re multiplying by the same number (the base) over and over again.
So, we can repeatedly multiply the result of each previous multiplication with itself, instead of multiplying by the base each time:
And that’s the trick known as exponentiation-by-squaring: after N−1 loops, you reach your base to the power of 2^(N−1), rather than just to the power of N. (Above, after 4 loops we get to 5^16 on the right but only to 5^5 on the left.)
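The two columns described above can be reproduced in a few lines of Python:

```python
base = 5
left = right = base
for loop in range(4):
    left = left * base      # left column: multiply by the base each time
    right = right * right   # right column: square the previous result instead

print(left == 5**5, right == 5**16)  # True True: 4 loops, powers 5 and 16
```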
And with all the powers of 2 up to 2^(N−1), you can represent any number up to 2^0 + 2^1 + … + 2^(N−1), which just happens to be 2^N − 1, so you can represent any exponent up to 2^N − 1, and therefore you can compute your base raised to the power of any such exponent with at most N multiplies.
Actually, you need at most 2N multiplies, because you need N multiplies to do all the squaring, plus up to another N multiplies more to combine the various powers to get the result.
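Putting the squarings and the combining step together gives the classic square-and-multiply loop; here's a minimal Python sketch (checked against Python's built-in three-argument `pow()`, which does the same job):

```python
def modexp(base, exponent, modulus):
    """Modular exponentiation by squaring: O(log exponent) multiplies."""
    result = 1
    square = base % modulus
    while exponent:
        if exponent & 1:                          # this bit of the exponent is set...
            result = (result * square) % modulus  # ...so fold in this power of 2
        square = (square * square) % modulus      # next square: base^(2^k)
        exponent >>= 1
    return result

print(modexp(5, 2047, 65537) == pow(5, 2047, 65537))  # True
```

Each trip round the loop handles one bit of the exponent, which is where the "at most 2N multiplies" figure comes from: one squaring per bit, plus at most one combining multiply per bit.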
But if your exponent is 2048, that means you’ll need at most 2 × log_2(2048) = 22 multiplies to get the job done, instead of naively looping round 2047 times – that’s a workload of 22/2047, or barely 1% of the effort.
Unfortunately, there just wasn’t time in this article to deal with the other two algorithms in today’s quiz question, so we’ll have to ask you to wait for us to cover them some time in the New Year.
In the meantime, why not take our Holiday Fun quiz (and watch out for our New Year’s #sophospuzzle crossword, coming soon to Naked Security)?
5 comments on “Holiday Fun #3: It’s (never) too late to learn long multiplication!”
For those of you who remember how, do this in Assembler. You’ll love yourself! And it’ll be a blast!!
As a bonus combining two objectives, implement it within Emacs using elisp. (At which point I get kicked for noting that there’s already an implementation of Conway’s game via “M-x life” …)
The remainder is not known as the modulus. That really confused me for a while. In the example the modulus is 7.
Aaargh, sorry about that. I inserted the comma-clause about “taking the remainder” in the wrong place.
I also spelled “divide” wrongly in that paragraph, so I was able to fix that at the same time.
Thanks for the comment.