Date:      Tue, 24 Mar 1998 05:43:26 +0300
From:      Anatoly Vorobey <mellon@pobox.com>
To:        freebsd-chat@FreeBSD.ORG
Subject:   Re: time to up your pgp keys to 4096 bits?
Message-ID:  <19980324054326.29283@techunix.technion.ac.il>

You, Jonathan M. Bresler, were spotted writing this on Mon, Mar 23, 1998 at 10:01:42AM -0800:
> > 
> > Just to clear up here: This is about breaking RSA, not 'generation
> > keys'.
> > 
> > And if I have understood correctly, quantum computing makes factoring a
> > trivial exercise due to 'infinite parallelism'.  4096 bits is
> > unlikely to help; _any_ key length is unlikely to help.

More or less; quantum computing makes factoring essentially
polynomial, so whatever key length you use, generating the key
and cracking it take about the same (polynomial) amount of time.
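
To put rough numbers on "essentially polynomial" (my figures, not
from the thread): Shor's quantum factoring algorithm takes on the
order of (log N)^3 operations to factor N, while the best known
classical attack, the number field sieve, takes about

exp(1.9 * (ln N)^(1/3) * (ln ln N)^(2/3))

operations. Going from 1024-bit to 4096-bit keys multiplies the
quantum attacker's work by only a small constant factor.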

> 	achieve....there is no theoretical limit to the degree of 
> 	parallelism, just the practical limit in constructing
> 	the apparatus.
> 
> 	the article, for what it is worth, seems to indicate
> 	that each register consists of ~50 ions.  each would 
> 	require its own "laser-drivers".  each register could
> 	do 10e5 ops/us or 10e11 ops/sec.  so give them 10e12 or
> 	2e40 ops/sec.

No, this (building very small, very fast registers) is not what
quantum computing is about (the news got it wrong, as usual).
Quantum computing promises to evaluate exponentially many
computation routes in polynomial time, a much more significant
gain than any gains, however impressive, in the speed and size
of a single operation; such gains are merely linear.

Very roughly, it works by exploiting nondeterminism inherent
in quantum mechanics. You use particles which can be in two
well-defined states (e.g. polarized photons). According to
quantum mechanics, one such particle is described by a
superposition of these two states with complex coefficients:

alpha |0> + beta |1>

where alpha, beta are complex and |0>, |1> are the two basis
states which we identify with this particular quantum bit
being 0 or 1. When we "look" at the bit, the wave function
collapses, and

with probability |alpha|^2 we see |0>
with probability |beta|^2  we see |1>
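
(A toy illustration of mine, not in the original mail: this
measurement rule is easy to mimic with ordinary random numbers.
A minimal Python sketch, amplitudes as plain complex numbers:)

    import random
    from math import sqrt

    def measure(alpha, beta):
        """Collapse alpha|0> + beta|1> into a classical bit."""
        # |alpha|^2 + |beta|^2 is assumed to be 1
        return 0 if random.random() < abs(alpha) ** 2 else 1

    # alpha = beta = 1/sqrt(2): the "ultimate uncertainty" bit below
    counts = [0, 0]
    for _ in range(100000):
        counts[measure(1 / sqrt(2), 1 / sqrt(2))] += 1
    print(counts)    # roughly [50000, 50000]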

This is nondeterminism. If you have k such particles altogether,
the system in general will be in a superposition of all possible
outcomes of looking at the system, of which there are 2^k
(though most of the coefficients will be 0 in simple cases).
In a sense, we exploit the fact that the system "remembers"
such a very complex superposition (it may involve up to 2^k
terms of the form alpha_i |outcome>, where outcome is a k-bit
string) simply by virtue of being in it.
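
(Another side note of mine: this is exactly why simulating a
quantum system on a classical machine is so painful. A sketch,
assuming numpy:)

    import numpy as np

    # A k-qubit state needs 2^k complex amplitudes, one per outcome.
    k = 20
    state = np.zeros(2 ** k, dtype=complex)
    state[0] = 1.0                  # all k bits certainly zero
    print(state.nbytes)             # 16777216: 16 MB for a mere 20 qubits

The quantum system carries all of that around for free, simply by
being in the superposition.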

Suppose we have an operation (known as the Hadamard gate) that
takes a quantum bit which is certainly zero (were we to look at
it, we would always see |0>, i.e. alpha=1, beta=0 above) and
turns it into the state

1/sqrt(2) |0> + 1/sqrt(2) |1>

This is the ultimate uncertainty bit: when we look at it, we
see |0> with probability 1/2 and |1> with probability 1/2.

Then, if we start from a row of k quantum bits, all certainly
zero, and apply this operation to each of them in turn, we
get a system in which _every_ outcome is possible with
probability exactly 1/2^k if we look at the string:

1/sqrt(2^k) |outcome>  for each of the 2^k possible outcomes.

E.g. for a 2-bit string:

1/2 |00> + 1/2 |01> + 1/2 |10> + 1/2 |11>
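
(Again a sketch of mine, simulating the same construction with
numpy. H below is the operation just described, applied to each
bit in turn:)

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # |0> -> (|0>+|1>)/sqrt(2)

    k = 3
    state = np.zeros(2 ** k, dtype=complex)
    state[0] = 1.0                                  # start from |000>

    for i in range(k):
        # build the full 2^k x 2^k operator acting as H on qubit i only
        op = np.array([[1.0]])
        for j in range(k):
            op = np.kron(op, H if j == i else np.eye(2))
        state = op @ state

    print(np.round(state.real, 4))   # eight equal amplitudes, each 1/sqrt(8)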

Here comes the central trick. If we apply some operation to this
system now (_without_ "looking" at it, i.e. without determining
the outcome), the operation applies at the same time to all
terms in the superposition, and the result is the superposition
of the results for each outcome. In other words, if we compute
f(x) where f is a function on a k-bit string (for example,
computing its parity, or whatever), the result will be
a system with a superposition of f(x) for each possible outcome
x, weighted with the amplitude of x in the original system
(which was 1/sqrt(2^k) for all x's). What we just did is
compute f for exponentially many values in one atomic step!
We can't really find out all of them, however; if we look
at the system now we'll find just one of them, with its
corresponding probability in the superposition.

E.g. when f is parity, the outcome will be a system with
equal probability of 0 and 1, since all outcomes had the
same probability in the original system and exactly half
of the k-bit strings have parity 1.
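
(To make the trick concrete, here is my classical simulation of
applying f to the whole superposition at once. Of course the
classical for-loop pays for all 2^k evaluations, which is precisely
what the quantum machine does not:)

    from math import sqrt

    k = 3
    amp = 1 / sqrt(2 ** k)
    state = {x: amp for x in range(2 ** k)}    # the uniform superposition

    def apply_f(state, f):
        """|x> -> |x, f(x)>: attach f(x) to every term at once."""
        out = {}
        for x, a in state.items():
            out[(x, f(x))] = out.get((x, f(x)), 0) + a
        return out

    parity = lambda x: bin(x).count('1') % 2
    after = apply_f(state, parity)

    # probability of seeing parity 1 if we look at the f-register only
    p1 = sum(abs(a) ** 2 for (x, v), a in after.items() if v == 1)
    print(p1)    # 0.5 (up to float rounding), as argued above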

We can apply essentially any f we can compute on an ordinary
computer, by breaking it down into atomic gates such as XOR and
translating those into the language of quantum operators.

For more complex f's related to the number N < 2^k
which we want to factor (factoring, here we come), the
distribution of values in the system after applying f is not
uniform as in the case of parity, but greatly favors some values.
If we look at the system now, with large probability we'll find
it in a state which will help us find a factor of N. It won't
work every time, but we can repeat the whole thing independently
several times to make the probability of finding a factor of
N so high that it will always work in practice.
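
(The arithmetic of "so high": if a single run finds a factor with
probability p, then t independent runs all fail with probability
(1-p)^t. With, say, p = 1/2:

(1/2)^20 = 1/1048576 < 10^-6

so twenty repetitions already make failure a one-in-a-million event.)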

So we found a way after all to exploit that simultaneous
computation of 2^k values of f inherent in applying f to
our original distribution. We do it by picking f so that
the values we favor and want to discover turn into the
same results under f; in that case their probability
grows very large, so that we're likely to encounter them
by looking at the system after applying f.

Oh my, I must be very bored ;) Hopefully it was at least
interesting to someone. 

-- 
Anatoly Vorobey,
mellon@pobox.com http://pobox.com/~mellon/
"Angels can fly because they take themselves lightly" - G.K.Chesterton





