basic identity mgmt

Doug Barton dougb at dougbarton.email
Sun Jan 17 03:19:32 CET 2016


On 01/15/2016 01:37 PM, Andrew Gallagher wrote:
> On 15/01/16 21:02, Doug Barton wrote:
>
>> On 01/15/2016 12:21 PM, Andrew Gallagher wrote:
>> |  I've
>> | worked on several projects for more than one financial institution,
>> | and airgaps like this are considered barely sufficient for some
>> | important keys. (Of course in such projects the idea of a
>> | certification subkey not on the airgapped machine would be
>> | completely unacceptable...)
>>
>> That's interesting, and you have made me curious ... what's the threat
>> model? And what is that key certifying?
>
> Most relevant example, a system where users can register their
> authorisation keys against a semi-automated authority which signs them
> for trust by a third system. The root key that certifies the automated
> authority keys is offline. Essentially a private root CA.
>
> Now, this example is using x509 rather than pgp,

Right, that's what I suspected. I have set up similar systems myself, 
and I'm very familiar with the security requirements in that area.

X.509 is very different from PGP, although I do understand that in some 
ways the semantics are the same. Most importantly, X.509 is used 
primarily to establish trust relationships between systems, not people. 
The ability of a system to identify itself to another system, with no 
human review involved, is something much more precious, and it deserves 
a higher degree of protection.

OTOH, PGP is designed primarily to establish trust relationships between 
people, with human review of the results an integral part of the process.

I read your example, and there are numerous flaws in your theoretical 
threat model. Let's assume your premise: that someone could root a 
laptop, and by doing so gain the ability to use all of the PGP keys on 
that laptop. (Note: I disagree with this premise, but let's grant it 
for argument's sake.) There is no need to touch the certification key 
at all in order to do the kind of damage you proposed. All the attacker 
needs to do is sign a message that authorizes the nefarious deed. Said 
attacker would also have the ability to decrypt all manner of messages 
and/or data, all of which are likely to be vastly more interesting than 
what you propose.

In fact, I assert with a great deal of confidence that *for PGP*, the 
certification key is the least interesting key of the bunch, and yet 
it's the one that people have created this intricate protection 
mechanism for.
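For reference, the mechanism I mean looks roughly like the following. 
This is only a sketch, in Python driving the stock gpg binary; the key 
ID and file names are placeholders:

    import subprocess

    KEYID = "0xDEADBEEF"   # placeholder key ID

    # On the air-gapped machine: write out only the secret subkeys
    # (signing, encryption, authentication). The certification-capable
    # primary secret key is not included in this export.
    subprocess.run(
        ["gpg", "--armor", "--output", "subkeys.asc",
         "--export-secret-subkeys", KEYID],
        check=True,
    )

    # On the everyday laptop: import that file. The primary secret key
    # stays behind, so "gpg --list-secret-keys" shows it as "sec#"
    # (known but offline).
    subprocess.run(["gpg", "--import", "subkeys.asc"], check=True)
    subprocess.run(["gpg", "--list-secret-keys", KEYID], check=True)

After the import, the laptop can sign, decrypt, and authenticate as 
usual, but it cannot certify anyone else's key.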

Further, I don't see signing as all that interesting either. As has been 
discussed several times on this list, the primary guarantee a signature 
provides is that the message that arrived is the one that was sent. It 
provides no guarantee about who was in control of the key when the 
message was signed, whether the signer was coerced, etc. We can infer 
things about these topics from our knowledge/beliefs about the sender, 
but I can't think of any rational person who would go along with a 
request to "Pay Joe $10,000" just because the message was PGP signed. 
Forget the validity of the key; that kind of request would require 
serious out-of-band (OOB) authentication.
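
To put that concretely, here is a small sketch using Python's standard 
subprocess module to call gpg (the signed file name is made up): a 
clean verification tells you which key made the signature, and nothing 
more.

    import subprocess

    # Verify a (hypothetical) signed request. A zero exit status means
    # the signature is cryptographically good for some key in the
    # keyring; gpg prints "Good signature from ..." on stderr.
    proc = subprocess.run(
        ["gpg", "--verify", "payment-request.txt.asc"],
        capture_output=True, text=True,
    )
    print(proc.returncode)
    print(proc.stderr)

    # Nothing in that output tells you who was at the keyboard, whether
    # they were coerced, or whether the key was used by someone who had
    # compromised the machine. That judgment is out-of-band.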

Glossing over authentication (because there's no real use case for those 
keys yet), that leaves us with encryption, and that's where it's at, my 
friends. But unless you really enjoy making your life harder than it has 
to be, you can't routinely use encryption with an air-gapped key, so I 
remain unconvinced that there is a use case for air-gapping PGP keys. 
But I'm still willing to listen. :)
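
To spell out why routine encryption and an air-gapped key don't mix, 
here is one more rough sketch (again Python calling gpg; the recipient 
and file names are placeholders): encrypting needs only the public key, 
but decrypting, the everyday operation, needs the secret encryption 
subkey on the machine doing the reading.

    import subprocess

    # Encrypting to someone's public key works anywhere that public key
    # is available; no secret material is needed for this step.
    subprocess.run(
        ["gpg", "--encrypt", "--recipient", "alice@example.org",
         "--output", "report.gpg", "report.txt"],
        check=True,
    )

    # Decrypting is the routine operation, and it fails unless the
    # secret encryption subkey is present on this machine. With that
    # subkey air-gapped, gpg reports "No secret key".
    proc = subprocess.run(
        ["gpg", "--decrypt", "--output", "report.out", "report.gpg"],
        capture_output=True, text=True,
    )
    print(proc.returncode, proc.stderr.strip())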

Doug
