SHA1 collision found

sivmu at web.de
Thu Feb 23 22:50:44 CET 2017


On 23.02.2017 at 20:09, vedaal at nym.hush.com wrote:
> The Openpgp standards group is working on this.

Yes, but who knows how many years it will take until a new standard is accepted...

>
> The link you give for the collision used two PDFs.
> Using a PDF is sort-of 'cheating', and does not extrapolate to being
> 'completely broken'.
>
> Assuming that it is possible to find a pre-image collision, i.e.:
>
> [1] m1.txt has an SHA1 hash of H1
> [2] m2.txt will now have the same SHA1 hash H1
>
> To generate m2.txt, many trial gibberish strings will be appended to
> the plaintext of m2.txt until one is found that gives the same SHA1
> hash as m1.txt.
> BUT
> this will be quite visible in the plaintext of m2.txt, and won't fool
> anyone.
>
> With a PDF, the 'extra gibberish string' is 'hidden'. It is not in the
> actual PDF the receiver reads, only in the meta-data, the appended PDF
> 'Suffix'.

Not sure about you, but I am not able to see the difference between a valid PGP key and "gibberish" ;)
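
For reference, the v4 fingerprint is just the SHA-1 of the public key packet, prefixed with 0x99 and a two-octet length (RFC 4880, section 12.2), so any bytes an attacker can smuggle into the key packet feed straight into the hash. A minimal Python sketch (the packet body below is a placeholder, not a real key):

import hashlib

def v4_fingerprint(packet_body: bytes) -> str:
    # SHA-1 over 0x99 || two-octet length || Public-Key packet body,
    # per RFC 4880, section 12.2.
    prefix = b"\x99" + len(packet_body).to_bytes(2, "big")
    return hashlib.sha1(prefix + packet_body).hexdigest().upper()

# Placeholder bytes, NOT a real key packet -- a real body carries the
# version octet, creation time, algorithm ID and the public-key MPIs,
# all of which go straight into the hash.
example_body = bytes.fromhex("04" + "00" * 68)
print(v4_fingerprint(example_body))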


>
> While this is *do-able* and a good reason to move on to a future
> SHA256 hash, it would not be transferable (at this time, based on the
> PDF collision data) to finding a fingerprint collision for any v4 key.
> vedaal
The question is how many tries it takes until a colliding key is found that is accepted by common PGP implementations when imported, is it not?
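
For a rough sense of scale, going by the figures the shattered.io authors reported (and keeping in mind their result is an identical-prefix collision where the attacker controls both inputs, not a second preimage against an existing key's fingerprint):

# Order-of-magnitude comparison only; the numbers are taken from the
# shattered.io announcement, not measured here.
attack_work   = 2 ** 63.1   # SHA-1 computations reported for the attack
birthday_work = 2 ** 80     # generic brute-force collision bound
print(f"attack:  {attack_work:.2e} SHA-1 computations")
print(f"generic: {birthday_work:.2e} SHA-1 computations")
print(f"the attack is roughly {birthday_work / attack_work:,.0f}x cheaper")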


As I said, if it is as easy as I think it is, providing an option to use different hash algorithms for generating fingerprints would be a nice solution until a new standard is established.
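
Purely as a sketch of what such an option could look like: the function below just swaps the digest in the v4 fingerprint construction. It is not specified by any OpenPGP RFC and no implementation would recognise the result, so it is an illustration of the idea only.

import hashlib

def alt_fingerprint(packet_body: bytes, algo: str = "sha256") -> str:
    # Hypothetical variant: same 0x99 || length || packet-body input as
    # the v4 scheme, but hashed with a caller-chosen algorithm.
    # NOT defined in RFC 4880 -- illustration only.
    prefix = b"\x99" + len(packet_body).to_bytes(2, "big")
    return hashlib.new(algo, prefix + packet_body).hexdigest().upper()

example_body = bytes.fromhex("04" + "00" * 68)   # placeholder, not a real key
print(alt_fingerprint(example_body))             # SHA-256 based value
print(alt_fingerprint(example_body, "sha1"))     # classic v4 value, for comparison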


