randomart is troubling [was: Re: QR code]
hans at guardianproject.info
Thu Jan 19 20:23:43 CET 2012
On Jan 19, 2012, at 1:44 PM, Daniel Kahn Gillmor wrote:
> On 01/19/2012 01:25 PM, Hans-Christoph Steiner wrote:
>> About randomart display of fingerprints, we really like the idea as it
>> is implemented in OpenSSH. Most people will find it much easier to
>> compare little pictures rather than hex strings. Indeed many people
>> will be quite intimidated by the sight of a long hex string in their
>> app. So the idea is to incorporate the randomart image into the
>> fingerprint validation process.
> I'm unconvinced by these arguments. People might feel more comfortable
> "comparing pictures" than "comparing hex strings", but that doesn't say
> anything about the actual collision-resistance of the pictures
> (especially in the context of the heuristic- and idiosyncrasy-ridden
> human visual apparatus).
> Most people would also feel more comfortable comparing shorter strings
> that were pronounceable (e.g. it says "cat dog zebra" -- does yours say
> "cat dog zebra"?); but we don't do that because those shorter strings
> don't have enough entropy to be collision-resistant in the way that we
> need fingerprints to be.
> Can you point me to studies that show actual resistance to malicious
> attack against "randomart" approaches? Fingerprints themselves are
> subject to attacks against common human mental idiosyncrasies;
> my instincts suggest that visual image comparison is at least as "fuzzy"
> (probably more so) than string comparison, even if people find it more
> comfortable.
> We don't do anyone a good service by introducing insecure steps in a
> critical stage of the verification process.
> It would be better to just get the human out of the loop entirely if the
> opaque data is beyond human capacity to deal with (which is the idea
> behind the QR code stuff, aiui).
> Are there good arguments for randomart? I'd like to hear them if there are.
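To make the entropy point above concrete, here is a rough back-of-the-envelope sketch (the numbers are assumptions for illustration, not from this thread: a 2048-word dictionary and a 160-bit v4 fingerprint):

```python
import math

# A full OpenPGP v4 fingerprint is a 160-bit SHA-1 hash.
fingerprint_bits = 160

# A phrase like "cat dog zebra" drawn from a hypothetical 2048-word
# dictionary carries only log2(2048) = 11 bits per word.
dictionary_size = 2048
bits_per_word = math.log2(dictionary_size)
phrase_bits = 3 * bits_per_word

print(f"fingerprint:   {fingerprint_bits} bits")
print(f"3-word phrase: {phrase_bits:.0f} bits")

# Number of words needed to match the fingerprint's entropy:
words_needed = math.ceil(fingerprint_bits / bits_per_word)
print(f"words needed:  {words_needed}")
```

So a short, comfortable phrase is roughly five times too short; matching the fingerprint would take a 15-word phrase, which defeats the readability goal.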
I think you are misunderstanding two key points:
- we are not claiming randomart is tried and true
- we are not replacing standard manual fingerprint verification
The key idea here is that we are adding another view of the same data. The standard manual fingerprint verification procedure is easy to mess up. If a fingerprint is off by one digit, how many users do you think will catch that error? How about off by a few?
Do you have data showing that more people will feel comfortable comparing a string of random words than using any other technique? How about error rates of experts doing standard manual fingerprint verification, or of anyone using any other technique? My point is that I do not think there is much of this data anywhere, so we are all left to intuition and educated guesses. If you do have this data, please share :) Getting these techniques into easily usable forms will lead to this data being generated.
Our current idea is something like this:
- the software is geared towards smartphones with cameras
- the fingerprint verification screen shows the standard hex fingerprint, the QR Code version, and the randomart image
- users scan each other's QR code to get the fingerprint
- they will then validate each other's fingerprints by looking at the QR code, the hex string and the randomart image
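For reference, the randomart view in that list can be produced with a small amount of code. This is a simplified sketch of OpenSSH's "drunken bishop" walk (it omits ssh-keygen's exact frame and header details, and uses SHA-1 of placeholder key material as the input digest):

```python
import hashlib

def randomart(digest, width=17, height=9):
    """Render a digest as an ASCII randomart box, drunken-bishop style."""
    symbols = " .o+=*BOX@%&#/^"
    field = [[0] * width for _ in range(height)]
    x, y = width // 2, height // 2        # the bishop starts in the centre
    start = (x, y)
    for byte in digest:
        for _ in range(4):                # 4 moves per byte, 2 bits each
            dx = 1 if byte & 0x1 else -1  # bit 0: left/right
            dy = 1 if byte & 0x2 else -1  # bit 1: up/down
            x = max(0, min(width - 1, x + dx))
            y = max(0, min(height - 1, y + dy))
            field[y][x] += 1              # count visits to each square
            byte >>= 2
    end = (x, y)
    lines = ["+" + "-" * width + "+"]
    for row_y, row in enumerate(field):
        chars = []
        for col_x, count in enumerate(row):
            if (col_x, row_y) == start:
                chars.append("S")         # start of the walk
            elif (col_x, row_y) == end:
                chars.append("E")         # end of the walk
            else:
                chars.append(symbols[min(count, len(symbols) - 1)])
        lines.append("|" + "".join(chars) + "|")
    lines.append("+" + "-" * width + "+")
    return "\n".join(lines)

print(randomart(hashlib.sha1(b"example key material").digest()))
```

The same digest always yields the same picture, so two phones showing the same key show the same art; a mismatched key almost certainly lands the bishop on a visibly different path.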
We could also throw in the random words technique you mentioned above; can you recommend a library?
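Even without a library, the basic shape of a word readout is small enough to sketch (assumptions: a tiny illustrative word list rather than the real 256-entry PGP word list, and one word per fingerprint byte):

```python
import hashlib

# Hypothetical word list for illustration only; a real scheme needs a
# 256-word list so that every byte value maps to a distinct word.
WORDS = ["cat", "dog", "zebra", "apple", "river", "stone", "cloud", "ember"]

def words_for(digest, count=4):
    """Map the first `count` bytes of a digest to words."""
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:count])

fp = hashlib.sha1(b"example key material").digest()
print(words_for(fp))
```

With only four words drawn from a short list this is nowhere near collision-resistant, which is exactly the entropy objection raised earlier; it would only ever be an additional view alongside the full fingerprint, not a replacement.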
"[T]he greatest purveyor of violence in the world today [is] my own government." - Martin Luther King, Jr.