Wishlist for next-gen card
ndk.clanbo at gmail.com
Sat Feb 21 19:54:38 CET 2015
On 21/02/2015 12:51, Peter Lebbing wrote:
>> 1 - support for more keys (expired ENC keys, multiple signature keys)
> Yes! This would be a great feature to keep expired encryption keys on a card. I
> personally would have no use for more than 1 signature and 1 authentication key,
> but I don't see a reason why you wouldn't just have a whole bunch of generic key
> slots and only indicate its intended usage on creation/upload so people can do
> that as well.
Yup. But if those are too generic, then it could be way cheaper to just
use a generic PKCS#15 card (I've had some experiences with Aventra ones:
they've got space for many keys and 14 PINs... and are quite cheap!).
> If ECC were supported by the card, you'd need quite a lot less storage to keep
> all these keys.
Yup. But on a device like GnuK space is really not the limiting factor.
>> 2 - different PINs for different keys
> This would partly amount to resurrecting an old feature. IIRC, there were 2 user
> PINs in the v1.1 spec, but the v2 spec pretty much retired the second PIN. Don't
> take my word for it, though, check the spec.
I remember seeing an unused PIN object.
> However, I think the primary key/subkey feature is already covered pretty well
> by simply having two smartcards (it's what I do).
Twice the cost, twice the risk of losing one, twice the management burden...
>> 3 - separate key for NFC auth (with its own optional PIN)
> Sounds like an interesting concept. I don't know how much work it would be to
> have the ISO 7816 parts needed by the OpenPGP card really working for NFC. Do
> you just exchange the lower few communication layers (physical, data, ...) and
> is everything above the same for the subset of ISO 7816 you need? I haven't
> looked at NFC yet.
I started implementing it on MyPGPid. From JavaCards it's quite easy to
differentiate between wired and wireless, since the applet receives the
protocol used to transmit the APDU.
> What I'm hinting at: NFC and cards with contacts are different enough that it
> might warrant handling NFC separately from the rest and hence doesn't need to be
> "integrated into" the process for determining the next cards-with-contacts
> standard and implementation.
There are dual-interface cards, and I think they're not so disjoint,
once you're using secure messaging.
> But NFC authentication through asymmetric crypto sounds interesting.
Way more than EM4100 tags or MIFARE cards.
>> 4 - HOTP PINs for signature/certification keys
> What generates the HOTP then? Do you type a PIN on the HOTP device to get the HOTP?
No need. Just an applet on the phone could do. At least if you aren't
using the same phone to do the crypto.
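An HOTP generator really is small enough for a phone applet. A minimal sketch of RFC 4226 in Python (the shared secret below is the RFC's own test key; the function name and digit count are illustrative choices, not anything the card spec defines):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # low nibble of last byte picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

The card would hold the same secret and counter and simply compare codes, so no PIN entry ever touches the untrusted PC.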
> What I'm guessing you might mean, is that the HOTP device might be more trusted
> than the pinpad of the card reader: the card reader is connected to the PC. The
> HOTP device is simply a standalone device; is air-gapped. So even if the PC is
> compromised, it will not be able to learn your PIN, which you entered on the
> HOTP device.
As I said, no need for a PIN on the HOTP device. The only important
thing is that it's a *different* device, better if air-gapped.
> Is this what you're getting at?
> I don't really see the use. Smartcards protect extraction of the private key;
> they're not equipped to prevent usage of the key material through a compromised
> PC. So what if they can't learn your PIN; they'll just get you to enter it for
> them. I don't see this adding something beyond your point 6, which I'll treat there.
You're authorizing a *single* operation. As you noticed, malware could
be smart enough to fool the user for decryption (where using HOTP would
be foolish: you'd have to continuously generate new codes just to scan
the mail), but signature is another beast. HOTP could be seen as a
stronger 'alwaysauthenticate' flag.
>> 5 - possibility to export private keys to user-certified devices
> I'm not terribly interested in that. Firstly, you would still need a backup of
> the key the data is encrypted to; the chain has to start somewhere. Secondly,
> provided you can have a trusted system for the generation of the keys, you could
> simply generate them on that trusted system, encrypt them to the key you wish to
> encrypt it to, and then store the encrypted data as you see fit.
Unless you're doing something like SmartcardHSM, where each card gets
initialized with a device attestation key (generated on-card) and its
certificate, so you can choose to trust other cards as long as they're
certified. A more user-centric approach would require a "temporary" CA
(no need for long-term storage of its secret key) that the user uses to
certify keys generated on his own cards. Once the cards have been
certified, the CA key can be deleted. In the worst case, the user won't
be able to transfer his keys to other (newer) devices.
> On-card generation is putting a lot of trust in the on-card RNG as well; I put
> more trust in Linux's PRNG on a trusted system. As long as you're generating the
> keys on a PC anyway, you might as well handle all the backup thingies there.
On the other hand I think a TRNG that accumulates entropy under user
control (like GnuK) can be trusted more than an opaque built-in generator.
>> 6 - support for out-of-band authorization (HW)
> It's not watertight. Neither is a canary. I see it as defence in depth. But it
> is clearly a contested subject. And I've never seen any indication that Werner
> believes in this solution; I get the feeling he doesn't. I suppose he's the main
> influence on what goes in the card specification and what doesn't.
Maybe its addition to the security is marginal, but it can be *way* more
practical than having to re-enter a complex PIN every time.
>> 3 to 6 should be under a "policy" object connected to the main key to
>> make it public and let relying parties evaluate how much trust to give.
> This seems to preclude ever allowing the user access to their own private key
> material; otherwise you're moving your trust back again from the smartcard
> attesting this policy to the user attesting this policy, since they can do what
> they want, whatevah! I don't think we should limit the user in handling their
> own keys; it reeks like DRM to me. But if you don't limit the user, there's no
> reason to have the smartcard attest to any attributes, since the card can't
> guard them anyway. Just leave the policy to the user.
Well, on one hand the user could *choose* to use his own CA, thus being
free to do what he likes with the keys. On the other hand he could
*choose* to trust another CA (the one managed by the card producer,
maybe) and in that case he won't be able to access his keys and other
users will know that.
>> 2 - If I have to use my card to login on a possibly untrusted computer
> Hmmm. I would definitely use a different card for this, regardless of all the
> cool protections on the card. Say I have a card with my work identity, and one
> with my private identity. I wouldn't ever stick the private identity in a work
> PC, even if the keys were protected by a different PIN. It only seems to serve
> the purpose of having one piece of plastic less in your wallet; but I already
> need a separate card wallet because of all the cards I have, one more isn't
> going to matter.
Separate cards for separate identities is one thing, but separate cards
for separate roles of a single identity could quickly become too
cumbersome to manage.
For me it wouldn't be a problem to plug my card in a work PC, as long as
I can be "confident enough" that I'm in control of which keys are being used.
>> 4 - since HOTP changes at every use, it makes keyloggers nearly useless
>> and gives a third factor to the auth process (might be combined by
>> simple means -like digit-by-digit addition modulo 10- to the PIN)
> Third factor IMHO implies a different *kind* of factor. You already have
> possession (the OpenPGP card) and knowledge (the PIN). What's your third factor;
You're quite right. Maybe "third factor" is not the correct term. What I
meant was that the user has to have another object, better if it's
something he's used to carrying around and guarding (like the phone).
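The digit-by-digit combination from the quoted point is trivial for both the user and the card. A Python sketch (the function name and the equal-length assumption are mine, purely illustrative):

```python
def combine_mod10(pin: str, hotp_code: str) -> str:
    """Combine a numeric PIN with an HOTP code, digit by digit, modulo 10.

    The user adds each PIN digit to the matching HOTP digit in their head
    (no carries); the card, knowing both values, verifies the result.
    Assumes both strings have the same length, for simplicity.
    """
    assert len(pin) == len(hotp_code)
    return "".join(str((int(a) + int(b)) % 10)
                   for a, b in zip(pin, hotp_code))

print(combine_mod10("1234", "5678"))  # -> 6802
```

A keylogger then only ever sees one-time combined codes, never the bare PIN.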
> Malware can replace the hash of the object being signed... and still display the
> correct file name type and size, etcetera, everything you mention. So I'm not
> sure what you're getting at. You need to present data that is actually *signed*
> for this to be useful in any way.
If that info is embedded in the signature packet, it could add something
to the signature value (if the receiving party sees that the signature is
about a .txt file but the presented object is a .doc, something is
wrong and suspect).
BTW the card could even (randomly?) ask to be passed the whole file
to-be-signed so it can compute the hash on its own. If that differs from
the one presented in the "summary", alert the user. I know card
interfaces are quite slow, but I think it should be allowed anyway
because 1) you can have faster interfaces, if needed (see GnuK) and 2)
it's a user's choice (I never sign ISO files, and even if I did I could
stand that once in a while it takes some hours... if I signed ISOs
every day I'd probably choose NOT to enable such a feature, or just use a
card/token without a display).
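The spot-check idea can be sketched as a host-side model (this is not card firmware; the function names and the check probability are hypothetical choices of mine):

```python
import hashlib
import random

def card_sign(summary_hash: bytes, get_full_data, check_prob: float = 0.1) -> bool:
    """Model of a card that sometimes re-hashes the full input itself.

    With probability check_prob the card asks the host for the whole
    file (via get_full_data), hashes it on its own, and refuses to sign
    if the result disagrees with the hash shown in the summary.
    """
    if random.random() < check_prob:
        if hashlib.sha256(get_full_data()).digest() != summary_hash:
            return False  # host presented a bogus hash: alert the user
    return True  # proceed with signing

data = b"message to sign"
# check_prob=1.0 forces the audit, so the outcome is deterministic here
honest = card_sign(hashlib.sha256(data).digest(), lambda: data, check_prob=1.0)
forged = card_sign(hashlib.sha256(b"other").digest(), lambda: data, check_prob=1.0)
print(honest, forged)  # -> True False
```

Even a low check probability changes the malware's calculus: each forged hash risks detection, like the poker analogy below.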
> It might be that SSH authentication does
> include a "peer ID" in its challenge; I haven't checked. But that is all.
That's the fingerprint ssh shows you. It should be computed from the
complete public key.
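That fingerprint is just a hash over the full key blob. A Python sketch of how OpenSSH-style SHA256 fingerprints are derived (the zero-filled key bytes are a dummy stand-in, not a real key):

```python
import base64
import hashlib
import struct

def ssh_string(b: bytes) -> bytes:
    """SSH wire-format string: 4-byte big-endian length, then the bytes."""
    return struct.pack(">I", len(b)) + b

def sha256_fingerprint(blob: bytes) -> str:
    """OpenSSH-style fingerprint: base64(SHA-256(blob)) with padding stripped."""
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Dummy ed25519-shaped blob: algorithm name plus 32 placeholder key bytes.
blob = ssh_string(b"ssh-ed25519") + ssh_string(bytes(32))
print(sha256_fingerprint(blob))
```

Because the whole blob goes into the hash, no part of the public key can be swapped without changing the fingerprint.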
> What is signed is a hash computed over the data and a few OpenPGP
> (sub)packets. People can't compute hashes. So you need a computer to do it. If
> there's a computer you trust to do this for you, just sign the data there
> already. I only see this work with trust in numbers: compute the hash on many
> computers, and if they all agree, they're probably not all hacked. I say I see
> this work, but I don't, really: I don't believe in this mechanism. And it's a
> tremendous amount of work for the user for one single signature.
See above. In poker terms there's a chance that the card asks to see
your hand. And if you cheated the user is alerted.
> [... half a proofread of this mail later ...]
> Oh ouch. I suddenly realise something about the canary press-to-decrypt button
> (point 6). I've thought of a nasty attack. Maybe it's not such a great canary
> for decryption keys...
I didn't mean it for decryption, just for signature/certification.
> So I access mail A, which is encrypted, and my PC is compromised. The malware
> listens in, and, crucially, secretly saves the session key for mail A! A few
> days later, I again access mail A. Now, I expect to be prompted for my PIN:
> that's how it normally works when I access an encrypted mail. However, the
> malware arranges that a document it is interested in is decrypted instead. And
> since it has saved the session key for mail A, it still presents to me mail A as
> expected. Now I haven't pressed the button any more than I expect to do, but
> still it decrypts other data than I expect it to. I've just helped the malware
> access my encrypted documents, and I'm totally unaware.
Well, currently it could simply cache your PIN and do *everything*. Do
you agree that mounting the depicted attack costs *way* more than a
simple keylogger (about 5€ for one that intercepts the PS/2 connector, IIRC)?
> Detecting false signatures is already more complicated.
> Now I'm really starting to have doubts about the canary button.
Maybe it's just convenience that *eventually* adds some marginal
security. Can't see it doing harm.