[mod_gnutls-devel] Certificates not always exported to environment

Daniel Kahn Gillmor dkg at fifthhorseman.net
Fri Feb 21 11:35:53 CET 2014


Hi Benny--

On 02/18/2014 03:59 AM, Benny Baumann wrote:

> while skimming through the code I noticed a minor bug (which I could
> not yet trigger due to the lack of a sufficiently large certificate,
> though getting one shouldn't be too hard) which causes mod_gnutls to
> fail to export the server and client certificates when the PEM
> encoding of the certificate is more than 10 KiB.

Hm, I see the concern.  Thanks for bringing this up.

> What happens: If you supply a certificate large enough not to fit into
> the buffer (about 7 KiB, which should be possible to do with a
> well-crafted 16k RSA key with a non-standard exponent, a 40k RSA key
> with the standard exponent, or a CA putting LOTS of extensions in),
> you can cause mod_gnutls to skip exporting this certificate to a
> client process like fcgi. If the client process relies on the
> certificate being present, it will see that a certificate was supplied
> (by means of the other variables in the environment) but won't see the
> actual certificate, and thus might fail authentication or other
> functions relying on the presence of the exported certificate.

fwiw, the buffer is 10KiB, which is quite a lot:

 char cert_buf[10 * 1024];

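For reference, the export presumably goes through something like the
sketch below (the surrounding function, the error handling, and the
"SSL_SERVER_CERT" name are my assumptions for illustration, not a quote
of the mod_gnutls source): gnutls_x509_crt_export() writes the PEM into
that fixed buffer, and fails with GNUTLS_E_SHORT_MEMORY_BUFFER once the
PEM no longer fits, so the variable silently never gets set.

  #include <gnutls/x509.h>
  #include <httpd.h>
  #include <apr_strings.h>

  /* Sketch of a fixed-buffer export; names and error handling are
   * illustrative, not copied from mod_gnutls. */
  static void export_server_cert(request_rec *r, gnutls_x509_crt_t cert)
  {
      char cert_buf[10 * 1024];
      size_t len = sizeof(cert_buf);

      if (gnutls_x509_crt_export(cert, GNUTLS_X509_FMT_PEM,
                                 cert_buf, &len) == GNUTLS_E_SUCCESS) {
          /* The PEM fits: copy it into the request pool and export it. */
          apr_table_setn(r->subprocess_env, "SSL_SERVER_CERT",
                         apr_pstrmemdup(r->pool, cert_buf, len));
      }
      /* If the PEM is larger than the buffer, the export returns
       * GNUTLS_E_SHORT_MEMORY_BUFFER and the variable is never set --
       * the failure Benny describes. */
  }
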
16 Kibit RSA keys are quite large and expensive to use.  There is a
point of diminishing returns with RSA: as the keys get larger, the
resistance to an attacker doesn't scale proportionally.  Note that a
16 Kibit RSA public key by itself (regardless of the certificate around
it) takes up 2 KiB (16384 bits / 8 = 2048 bytes), which doesn't even
fit in a normal TCP packet (the typical MTU is ~1.5 KB).  And the
signature made with that key (for DHE key exchanges) or the encrypted
premaster secret (for RSA-style key exchanges) will be just as large as
the key itself, so we're talking at least several packets just to
transmit these parts of the handshake.  If the CA that issued the cert
also uses a 16 Kibit key, then the cert size roughly doubles (because a
cert contains both a pubkey and a signature), and the intermediate CA's
cert is itself another large contributor.  So the handshakes here could
get very, very large.

As another point of reference: using GnuTLS's certtool, even the
"--sec-param ULTRA" setting only generates 15 Kibit keys.

> What should happen: Use a dynamic buffer which is allocated using the
> size returned by the function exporting the certificate so that the
> buffer is always large enough.

I don't think we want to allow a client with an arbitrarily large
certificate to exhaust the server's RAM.  There needs to be a limit
somewhere, based on reasonable criteria.  What do you think the limit
should be?

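To make the tradeoff concrete, here is a rough sketch of a size-capped
export, assuming the usual GnuTLS convention that a call with a
too-small (here zero-sized) buffer returns GNUTLS_E_SHORT_MEMORY_BUFFER
and reports the required size.  The cap value, the helper name, and the
"SSL_SERVER_CERT" name are placeholders, not a concrete proposal:

  /* (same includes as the sketch above) */

  /* Sketch of a size-capped export from the request pool; the cap and
   * all names are placeholders. */
  #define MAX_CERT_PEM_SIZE (64 * 1024)   /* arbitrary example cap */

  static void export_server_cert_capped(request_rec *r,
                                        gnutls_x509_crt_t cert)
  {
      size_t len = 0;

      /* First call: zero-sized buffer, only to learn the required size. */
      if (gnutls_x509_crt_export(cert, GNUTLS_X509_FMT_PEM, NULL, &len)
              != GNUTLS_E_SHORT_MEMORY_BUFFER
          || len > MAX_CERT_PEM_SIZE)
          return;  /* oversized or unexpected: skip the export */

      /* Second call: allocate exactly what is needed from the request
       * pool, bounded by the cap, so no unbounded RAM use. */
      char *buf = apr_palloc(r->pool, len);
      if (gnutls_x509_crt_export(cert, GNUTLS_X509_FMT_PEM, buf, &len)
              == GNUTLS_E_SUCCESS)
          apr_table_setn(r->subprocess_env, "SSL_SERVER_CERT",
                         apr_pstrmemdup(r->pool, buf, len));
  }

The worst case is bounded by whatever cap we pick, and anything over
the cap is skipped just as it is today, but at least the cutoff becomes
an explicit, documented choice rather than an accident of a stack
buffer.
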
> Implications: The current buffer is allocated on the stack and thus
> might be an attack vector if the GnuTLS routine used were prone to a
> buffer overrun.

Overrunning a dynamic buffer on the heap is also potentially an issue
(though I grant that stack smashing has a richer history); either way,
we have to rely on GnuTLS not to overrun a buffer once we've told it
the buffer's size, I think.

All that said, I'm open to concrete suggestions that don't open the
module to a RAM exhaustion attack; arbitrarily dropping a 10KiB buffer
on the stack definitely seems a little weird.

What do you think?

	--dkg
