I think the Rivest paper in rivest-factoring.ps.Z is an excellent paper
that everyone interested in the RSA algorithm and how hard it is to break
should read.
There is one change I would recommend for the next version of such a
difficulty-of-factoring paper.
The computer trend has been a doubling in power every 1.5 years for the
same cost. I plotted this 8 years ago and we are right where I predicted
we would be (just dug out the data - see tables below). So I would use
this as "average rate of technological progress". This means that the
"high rate of progress" estimate in this paper is a bit lower than I think
the "average" estimate should be. In the Rivest paper, the algorithmic
assumptions dominate the hardware assumptions, so in practice this has
little impact on the table with size of numbers factorable in the next
25 years.
My old data from 1986:
>Microprocessor Year Dhrystones "VAX-MIPS"
>
>4004 1971 4 ? 0.0025
>8008 1972 7 ? 0.0044
>8080 1974 36 ? 0.022
>6502 1 MHz 1975 37 0.023
>Z80 2 MHz 1976 91 0.057
>68000 4 MHz 1979 692 0.43
>68010 10 MHz 1983 1156 0.72
>80286 5 MHz 1983 1428 0.89
>68020 16 MHz 1985 4504 2.82
>68020 25 MHz 1986 6750 ? 4.22
>80386 16 MHz 1986 5801 3.6
>MIPS 12 MHz 1986 12000 7.5
>
>Assume: MIPS = Dhrystones / 1600.
> This comes from a VAX MIP being the standard MIP, and a
>VAX MIP being about 1600 Dhrystones.
>
> The Dhrystones for the 4004, 8008, and 8080 were estimated
>by comparing the time to add two 8-bit numbers with the
>time for a Z80 and scaling the Z80 Dhrystones.
1994-1971 = 23 years since first CPU (4004)
23/1.5 = 15.33333 number of doublings
2^15.33 = 41285 increase in CPU power
0.0025*41285 = 103.2 expected VAX equivalents on a CPU for 1994
Bingo!!! A 100 MHz Pentium and a 100 MHz PowerPC are each about 100 times a VAX.
Using the double in 1.5 years rule starting from a few other points:
8080 in 1974 --> 227x VAX (8080 chip was $500 in 1974 dollars!)
68010 in 1983 --> 116x VAX (about right)
MIPS in 1986 --> 295x VAX (but MIPS was workstation not PC)
Today there are 200 MHz Alpha PCs ($6,000) that are 200x VAX.
I believe there will be 200 MHz Alphas for only $3,000 by the end
of the year.
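The doubling arithmetic above can be wrapped in a few lines of Python (the function name and packaging are mine, a sketch only):

```python
def extrapolate_mips(start_year, start_mips, target_year, doubling_years=1.5):
    """Project VAX-MIPS forward, assuming power doubles every 1.5 years."""
    doublings = (target_year - start_year) / doubling_years
    return start_mips * 2 ** doublings

# 4004 in 1971 at 0.0025 VAX-MIPS, projected to 1994:
print(round(extrapolate_mips(1971, 0.0025, 1994)))   # about 103
# Other starting points from the table:
print(round(extrapolate_mips(1974, 0.022, 1994)))    # 8080  -> about 227
print(round(extrapolate_mips(1983, 0.72, 1994)))     # 68010 -> about 116
```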
The trend really goes back further. From the IBM 650 in 1956 to
the IBM 370/168 in 1972 IBM mainframes went from 0.001 MIPS to 2.3 MIPS
with the same doubling time of 1.5 years. Probably after the 68000
in 1979 micros had a lower cost/MIP than mainframes, as about that
time mainframes stopped keeping up their doubling rate. So the "best
price/MIP trend" is really a 38-year trend of doubling every 1.5 years.
>System Date MIPS CPUs IBM-MIPS/CPU
>650 9/56 0.001
>7070 9/58 0.022
>1401 10/59 0.0074
>1410 10/62 0.0154
>360/50 4/64 0.178
>360/65 4/65 0.68
>360/85 1/68 2.4
>370/165 6/70 1.89
>370/168 8/72 2.3
>370/168-3 3/75 2.7
>3033 3/77 4.7
>3081-K 10/81 13.5 2 6.75
>3090/200 2/85 28.0 2 14.0
>
>Datamation May 1985 page 90.
>Datamation Feb 1984 page 164.
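The mainframe table gives the same check: from the endpoints named above, the implied doubling time can be computed in a couple of lines (a sketch; ship dates rounded to whole years):

```python
import math

def doubling_time(year0, mips0, year1, mips1):
    """Implied doubling time in years between two machines."""
    doublings = math.log2(mips1 / mips0)
    return (year1 - year0) / doublings

# IBM 650 (1956, 0.001 MIPS) to IBM 370/168 (1972, 2.3 MIPS):
print(round(doubling_time(1956, 0.001, 1972, 2.3), 2))   # about 1.43 years
```

which comes out near the 1.5-year figure.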
Predicting the future is always hard. There are reasons to expect
things to progress faster: current chips are only 2D planes of
transistors, and once they go 3D doubling times should get shorter,
since the number of transistors in a cubic centimeter grows with the
3rd power of the linear density while the number on a square
centimeter grows only with the 2nd power. The doubling time could
stay the same (it is a 23 or 38 year trend). Or the doubling time
could slow down (as chip features get very small it gets harder to
cut the size by 30%). All in all, I think the best bet is that the
doubling in 1.5 years rule will keep on working.
Expect another factor of 1,000 in computing power per dollar in the
next 15 years. I think Z80 chips are under $1 today, and it is not
hard to imagine something of the power of today's 275 Mhz Alpha
(about $1,000) selling for under $1 in 15 years. I also expect to
see CPUs that are 1,000 times faster, but this will not be trivial.
Anyone is welcome to copy this or reference it. Please send me a copy
if you have a table of ship dates for CPUs since 1986.
Many people noticed the performance trend before I did. The 3D comment
is my own.
-------------------------------------------------------------------
Also good to put in such a paper is something like the following from
the RSA FAQ in ftp://rsa.com/faq:
>As for the slowdown caused by increasing the key size (see Question
>2.3), doubling the modulus length would, on average, increase the
>time required for public-key operations (encryption and signature
>verification) by a factor of 4, and increase the time taken by private
>key operations (decryption and signing) by a factor of 8. The reason that
>public-key operations are affected less than private-key operations is that
>the public exponent can remain fixed when the modulus is increased, whereas
>the private exponent increases proportionally. Key generation time would
>increase by a factor of 16 upon doubling the modulus, but this is a
>relatively infrequent operation for most users.
The point is that going from 250 digits to 500 digits makes RSA
encryption take only 4 times as long, while breaking 500 digits takes
far longer.
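The FAQ's factors of 4, 8, and 16 follow from a simple cost model, sketched here in Python (a toy model of operation counts, not real RSA code):

```python
def rsa_op_scaling(size_ratio):
    """Relative slowdown when the modulus grows by size_ratio.

    Toy model: one modular multiplication of k-bit numbers costs ~k^2
    (schoolbook multiplication).  Public ops keep a fixed exponent, so
    total cost ~k^2.  Private ops use an exponent about as long as the
    modulus, so cost ~k^3.  Key generation is roughly ~k^4.
    """
    return {"public": size_ratio ** 2,
            "private": size_ratio ** 3,
            "keygen": size_ratio ** 4}

# Doubling the modulus:
print(rsa_op_scaling(2))   # {'public': 4, 'private': 8, 'keygen': 16}
```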
RSA-110 took 75 MIP-years
RSA-120 took 800 MIP-years
RSA-129 took 5000 MIP-years
Faster computers help encryption far more than code
breaking. Computer progress helps privacy. In 3 years we will
have another factor of 4 in computing power and can double our
key sizes with the same performance, while the factor of 4 only
lets people factor numbers with a few more digits than they can
currently factor. So even if most of the keys in use today could be
broken within 30 years, 3 years from now most keys in use would take
more than 100 years to break. Strong crypto is here.
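To see how lopsided this is, here is a sketch using the heuristic running time of the quadratic sieve (the algorithm used on RSA-129); the function and its digits-based approximation are mine:

```python
import math

def qs_work(digits):
    """Heuristic quadratic-sieve cost L(n) = exp(sqrt(ln n * ln ln n)).

    n is taken as 10**digits; the result is a relative work unit,
    not MIP-years.
    """
    ln_n = digits * math.log(10)
    return math.exp(math.sqrt(ln_n * math.log(ln_n)))

# Doubling from 250 to 500 digits multiplies encryption time by ~4,
# but multiplies the heuristic factoring work by trillions:
print(qs_work(500) / qs_work(250))
```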
Also, RSA is only needed for sending the session keys used by the
private-key cipher. For private-key algorithms, performance does
not need to drop off much at all with key size. For example, Terry
Ritter (ritter@io.com) has a method that uses 4 sets of DES to get
a key size 4 times as large while spending only 1.2 times as much
computation per byte.
The net result is that programs like PGP do not really run much slower
even with 1,300-bit key sizes. If a few constants were changed, PGP
would probably be fine at 5,000 bits or more. Currently PGP uses IDEA
for the session encryption, which cannot scale its key size. But that
should change with time.
While 5,000 bits might seem like a lot, it is only 1/3 of a second
on a 14.4 kbps modem. :-) And in spite of some expectations, modems
keep getting faster.
-------------------------------------------------------------------
Another point raised on sci.crypt is that the factoring algorithms
are limited by memory bandwidth and not CPU speed. While CPU speed
has been going up exponentially, memory bandwidth has not really
been keeping pace. So there is a chance that factoring performance
will not keep pace with CPU speed increases. All of the good
algorithms known so far are memory bandwidth bound; however, future
algorithms might change this.
There are a number of reasons to think that memory bandwidth will
not slow down future computers doing factoring by that much.
CPUs will be getting huge on-chip caches with very high bandwidth
to the CPU. A number of folks today are working on stacking
cache chips on top of a CPU chip to get very high bandwidth
caches with a lot of room. Fast-page-mode accesses on really
large DRAMs (with large pages) should give close to cache speeds
for algorithms walking through memory. New fast DRAM designs have
far higher bandwidth. With more parallelism and caches, memory
systems have kept up with CPUs for 38 years. All in all, I think
memory systems will not slow factoring down by more than a roughly
constant factor relative to full CPU speed.
Memory bandwidth has always been an issue, and the factoring
algorithms do seem to push the problem, but if you need a really safe
key, it is safest to pick the length assuming only a CPU bottleneck.
-------------------------------------------------------------------
Some people in the press and on the net have misquoted something
from the Aug 1977 Scientific American article and claimed that
scientists thought it would take 40 quadrillion years to break RSA-129.
That is NOT what was said. Here is the quote (from 1977):
> Contrast this with the difficulty of finding the two prime factors
> of a 125 or 126-digit number obtained by multiplying two 63-digit
> primes. If the best algorithm known and the fastest of today's
> computers were used, Rivest estimates that the running time required
> would be about 40 quadrillion years!
That time was only with a 1977 computer and 1977 factoring algorithm.
Clearly computers were expected to get faster, algorithms to get
better, and of course more than one computer could be used. They
expected to pay off on the "ciphertext challenge worth $100".
The wording was:
> As a challenge to Scientific American readers the M.I.T. group
> has encoded another message, using the same public algorithm.
RSA has generated a lot more interest in factoring numbers. The
factoring algorithms have gotten much better. Computers have also
gotten a couple thousand times more cost effective. And the Internet
makes using a large number of idle machines possible.
The improvement over the 40 quadrillion years might be broken into
something like:
2,000 hardware speedup (drop in price/mip)
1,000 many machines
20,000,000,000 better algorithms
Paul Leyland (on RSA-129 team) breaks down the improvement of their
actual factoring over the estimate as follows:
>
> 0.1 hardware speedup
> 2,000 many machines
> 20 speed-up of best available algorithm
>10,000,000,000,000 choice of best, as opposed to newest, algorithm.
He says the machines they used were not faster than the 1977 fastest
machines. He says the Rivest estimate was not for the best
known algorithm.
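As a quick sanity check, both breakdowns multiply out to the same total factor (a couple of lines of Python; the numbers are taken straight from the two lists above):

```python
# Both breakdowns of the improvement multiply out to the same total:
net_estimate = 2_000 * 1_000 * 20_000_000_000    # hardware * machines * algorithms
leyland = 0.1 * 2_000 * 20 * 10_000_000_000_000  # Leyland's four factors
print(net_estimate, leyland)   # both equal 4e16, i.e. 40 quadrillion
```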
-------------------------------------------------------------------
--- Vince Cate
vac@cs.cmu.edu
5/94
Master copy of this file:
http://www.offshore.com.ai/security/rate-of-computer-progress