Name: Cryptanalysis & Computer Architecture – A Historical Survey
Time: Wednesday, June 19, 2013, 3:45 PM - 4:15 PM
Location: CCL - Congress Center Leipzig
Breaks: 4:15 PM - 5:15 PM Coffee Break
Speakers: Ansgar Heuser, German Information Security Agency
Abstract: By well-established cryptographic standards, the only way to check the security of a cipher system is to try every cryptanalytic attack on it one can think of; if they all fail, one is perhaps prepared to take its security for granted. In doing so, we allow the attacker to know all details of the system except the key itself ("Kerckhoffs' principle"); by cryptanalysis we understand the reconstruction of the unknown key from some amount of ciphertext, from a pair of ciphertext and plaintext, or from a similar situation.
But cryptanalysis is not just an intellectual exercise; the efficiency of an attack is decisive: it makes a difference whether an attack takes 24 hours or 24 years! This is where supercomputers, and the suitability of their architecture for cryptanalytic purposes, come into play.
Whereas the classical CRAY-type vector machines proved their strength in well-structured, stereotyped operations (a typical example being the FFT), massively parallel systems do better at performing intricate, e.g. 1-bit, logical operations simultaneously for many different key settings. As the design of cipher systems became stronger and stronger over the last decades (e.g. no longer liable to sophisticated statistics-based attacks), the security problem was more and more reduced to the feasibility of a complete exhaustion of the key space in a certain implementation, or of an efficient search through a set of plausible passwords – tasks ideally suited for massively parallel systems such as clusters, where the single processor doesn't need to be too powerful and the speed of the interconnect can be moderate.
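The cluster-style exhaustion described above can be sketched in a few lines. The following is a minimal toy illustration, not any system discussed in the talk: a single-byte XOR "cipher" (an assumed stand-in for a real cipher whose only remaining weakness is its key-space size) is broken by partitioning the key space among independent workers, which need no communication with one another.

```python
from multiprocessing import Pool

KEY = 0x5A  # the "unknown" key to be recovered (assumed toy value)
KNOWN_PLAINTEXT = b"ATTACK AT DAWN"

def encrypt(plaintext, key):
    # Toy single-byte XOR cipher, purely for illustration.
    return bytes(b ^ key for b in plaintext)

CIPHERTEXT = encrypt(KNOWN_PLAINTEXT, KEY)

def try_keys(key_slice):
    # Each worker exhausts its own slice of the key space independently;
    # because no inter-worker communication is needed, modest processors
    # and a moderate interconnect suffice -- the cluster pattern above.
    for key in key_slice:
        if encrypt(CIPHERTEXT, key) == KNOWN_PLAINTEXT:
            return key
    return None

def brute_force(workers=4):
    # Partition the 8-bit key space into interleaved slices, one per worker.
    slices = [range(i, 256, workers) for i in range(workers)]
    with Pool(workers) as pool:
        hits = [k for k in pool.map(try_keys, slices) if k is not None]
    return hits[0]
```

A real key space is of course vastly larger (2^128 and up), but the structure of the computation – independent slices, no shared state – is exactly what makes it embarrassingly parallel.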
When it comes to modern, so-called asymmetric cipher systems such as RSA or discrete-logarithm-based systems, the security level depends on the efficient solvability of a hard number-theoretic problem such as factoring; there, the typical processes – the sieving and the solution of a gigantic system of linear Boolean equations – require massively parallel systems of powerful processors and a very fast interconnect.
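The idea behind the sieving stage can be glimpsed in its simplest ancestor, Fermat's difference-of-squares method – a hedged sketch, not the algorithm the abstract refers to. Modern sieves (quadratic sieve, number field sieve) generalize the same goal: find x, y with x^2 ≡ y^2 (mod n) by combining many smooth relations, which is the stage that demands massive parallel sieving followed by linear algebra over GF(2).

```python
from math import isqrt

def fermat_factor(n):
    """Factor odd composite n by searching for x with x^2 - n = y^2,
    so that n = (x - y)(x + y). Illustrative only: practical sieves
    construct such congruences of squares far more cleverly."""
    x = isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:
            # Found x^2 - y^2 = n, hence a nontrivial factorization.
            return (x - y, x + y)
        x += 1
```

For example, `fermat_factor(8051)` recovers the factors 83 and 97. Unlike exhaustive key search, the sieve's later linear-algebra stage couples all processors tightly, which is why this workload favors powerful nodes with a very fast interconnect.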