rfc: verify-ssh

Daniel Kahn Gillmor dkg at fifthhorseman.net
Wed Feb 1 01:32:37 CET 2012

Hi Nikos--

On 01/31/2012 06:37 PM, Nikos Mavrogiannopoulos wrote:
> This wasn't the intent, but being able to support pinning (which looks
> like a similar idea) would be nice.

yes, indeed.  perhaps verify-ssh.c is a misleading name for this
approach?  What you're enabling here isn't specific to ssh; it can
be used to implement a TOFU (trust on first use) scheme, an out-of-band
key distribution scheme, etc.

> Storing hashes though, I don't think
> it is a good idea. Why would one trust a hash and not the actual public
> key? If he trusts the hash he could use it to verify the public key and
> then store it. The issue with hashes is that they might break (e.g. if
> collisions are found) and replacing stored hashes is a mess, thus I'd
> try to avoid them (are they really required for pinning?).

the idea behind identifying keys by their digest instead of the public
key material is that you can pin your site to two keys (one active, one
that you keep as an offline backup).  If your hardware dies and you lose
access to your active key (or if you worry the old key may have been
compromised), you can re-deploy with your backup immediately, without
waiting for the pin to expire.

until we all move to ECDSA keys, shipping two full 2048-bit RSA public
keys in the header was probably considered a bit too much; hence the use
of the digest.

So, in particular, storing a digest would be useful for a pinning
implementation; it would enable the use of verify_stored_pubkey(),
though obviously find_stored_pubkey() wouldn't be possible.
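To make that concrete, here's a minimal Python sketch of a digest-keyed pin store (illustrative only -- these are not the GnuTLS functions; every name here is hypothetical, and the key bytes are placeholders for DER-encoded public keys). It shows why a backup key can verify even though its key material was never stored, and why the reverse lookup (a find_stored_pubkey analogue) is impossible once only the digest is kept:

```python
import hashlib

# Hypothetical pin store: maps (host, service) -> set of acceptable
# key digests.  Illustrative only; real storage formats will differ.
pins = {}

def pin_key_digest(host, service, pubkey_der):
    """Pin a key by its SHA-256 digest -- the full key need not be kept."""
    digest = hashlib.sha256(pubkey_der).hexdigest()
    pins.setdefault((host, service), set()).add(digest)
    return digest

def verify_stored_pubkey(host, service, pubkey_der):
    """Check a presented key against the stored digests (TOFU-style check)."""
    digest = hashlib.sha256(pubkey_der).hexdigest()
    return digest in pins.get((host, service), set())

# Pin the active key, plus the digest of an offline backup key --
# without ever shipping or storing the backup key material itself.
active = b"active-key-der-bytes"        # placeholder DER bytes
backup_digest = hashlib.sha256(b"backup-key-der-bytes").hexdigest()
pin_key_digest("example.com", "443", active)
pins[("example.com", "443")].add(backup_digest)

# The active key verifies, and so does the backup key when deployed:
assert verify_stored_pubkey("example.com", "443", active)
assert verify_stored_pubkey("example.com", "443", b"backup-key-der-bytes")
# But the digest cannot be inverted back into the public key, so a
# find_stored_pubkey analogue is not possible with this scheme.
```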

> What I am skeptical of pinning is that it allows the server to modify
> the expiration of the pinning at will.

yep, there's a bit of a race condition during a compromise; but it only
affects users connecting to that service.

> If an application wants to store keys that are specific to it, it
> could specify an application string. The service string I currently
> use to specify the port number of the service being connected to.

ah, i got it: "application" is the tool using gnutls, while "service" is
the network service it is connecting to.

in Monkeysphere, we haven't bothered with the "application" distinction
(on the basis that different applications with entirely distinct
requirements will want to use entirely separate key databases from each
other, while most services offered by a host will just use the common
system database), and our equivalent of your "service" is called
"context", which is more flexibly-defined (and human-visible) than just
the port number (e.g. "https" or "ssh" or "e-mail").  This allows the
keyfinding operation to work for more than just network service keys; it
could work for signed or encrypted e-mail messages, IM conversations,
etc. as well.
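A rough sketch of the two keying conventions (illustrative Python; the function and field names are hypothetical, not either project's actual storage format):

```python
def gnutls_style_key(application, host, service):
    # "service" identifies the network service being contacted,
    # e.g. by port number; "application" is the tool using the library.
    return f"{application}|{host}|{service}"

def monkeysphere_style_key(context, identity):
    # "context" is human-visible and not tied to a port, so the same
    # lookup also covers non-network cases like a signed e-mail message.
    return f"{context}|{identity}"

# A network service, keyed both ways:
wget_pin = gnutls_style_key("wget", "www.example.org", "443")
https_pin = monkeysphere_style_key("https", "www.example.org")

# A non-network lookup, only expressible in the context style:
mail_pin = monkeysphere_style_key("e-mail", "dkg@fifthhorseman.net")
```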

What do you think the tradeoffs are between these approaches?

> I was thinking about it, but it seems it is very simple to implement an
> alternative approach with a different backend. Thus I am not really sure
> if a generic approach would be better, or it would provide a complex
> interface that would be hard to use.

I like the general outline of the interface you've defined; my main
concern is the backend storage.  why do you think that a more efficient
backend-storage mechanism would require a different interface?
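For instance (a hedged Python sketch, not a proposal for the actual GnuTLS interface -- all names here are made up), the verification frontend can stay fixed while the storage backend is swapped behind an abstract interface:

```python
from abc import ABC, abstractmethod

class PubkeyStore(ABC):
    """Hypothetical backend interface: the frontend API stays the same
    while the storage mechanism behind it is replaceable."""
    @abstractmethod
    def store(self, host, service, digest): ...
    @abstractmethod
    def lookup(self, host, service): ...

class MemoryStore(PubkeyStore):
    """One possible backend; a flat file or DB backend would implement
    the same two methods without touching the frontend."""
    def __init__(self):
        self.db = {}
    def store(self, host, service, digest):
        self.db.setdefault((host, service), set()).add(digest)
    def lookup(self, host, service):
        return self.db.get((host, service), set())

def verify_stored_digest(backend, host, service, digest):
    # The frontend call is unchanged regardless of the backend plugged in.
    return digest in backend.lookup(host, service)

store = MemoryStore()
store.store("example.com", "443", "abc123")
assert verify_stored_digest(store, "example.com", "443", "abc123")
```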



More information about the Gnutls-devel mailing list