[RFC PATCH] enable configurable SECMEM_BUFFER_SIZE
wk at gnupg.org
Wed Nov 22 20:46:10 CET 2017
On Wed, 22 Nov 2017 16:23, matthew.summers at syapse.com said:
> We have a lot of active connections to the gpg-agent using a single
> private key. I outlined our use case in more detail here.
Frankly, I followed the June discussion only briefly. Thanks for
pointing me to it again.
> We are enlarging the secmem_buffer using the patch attached
> previously. We are determining the size using a simple empirical test
I would accept this as a quick workaround but we need a better solution.
> configurable, per the patch. It is perhaps notable that we needed to
> go from ~1mb to 2mb when moving from 2.1.23 to 2.2.0.
I just checked the commits between these releases and I can't see
anything which should affect the allocation pattern. This only shows
how fragile the fixed-memory approach is.
On Wed, 22 Nov 2017 14:40, Amul.Shah at fisglobal.com said:
> very fast. With fis-gtm, we spawn so many processes that exhaust
> mlock()ed space frequently enough for this to be a problem for us.
Okay, that is the same pattern.
> [amul] "secure" memory allocations only use the first mlock()ed memory area. Is
> this what you mean by "standard memory"?
I call this all secure memory. The first chunk is mlock()ed but if we
need to allocate more chunks to satisfy memory requests from Libgcrypt
which do not expect to fail, we can't use mlock anymore. There are
two other properties of secmem:
- A free() wipes out the formerly allocated memory
- Some crypto code in Libgcrypt enables slower but less side-channel
leaking algorithms if the material is stored in secmem.
If possible gpg-agent also does an extra wipememory before a free to
protect against not well-behaving external memory allocators used with
As dkg already mentioned, the protection against swapping out sensitive
data can nowadays be better achieved by using encrypted swap. Given
that there is also no way to protect against suspend/resume, I view the
mlock more of a
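For reference, random-key encrypted swap is commonly set up with a
crypttab entry along these lines (the device name is a placeholder;
exact option spellings vary by distribution):

```
# /etc/crypttab -- swap encrypted with a fresh random key on every boot
swap  /dev/sdXn  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
```

With a random key discarded at shutdown, swapped-out secrets are
unrecoverable across boots, which covers the main threat mlock was
meant to address.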
> [amul] When you say that the agent can cache a secret key, does that mean
> subsequent requests for the same key will be serviced from cache?
Right. However, each running session needs a copy of the key because we
don't have reference counting. This is the problem with too many
concurrent sessions (more exactly: decrypt or sign commands).
> [amul] In our case, we have many active connections to the gpg-agent (via
> libgpgme and gpg) as processes decrypt the secret key that encrypts the
Well, adding a limit to GPGME so that it blocks in case of too many
concurrent gpg operations should be easy to implement. But that does
not solve the problem of several GPGME-using processes running
concurrently. So let's forget this.
> [amul] I agree with your assessment of option #1. I think, however, that there
> is a third option. In its current form, the gpg-agent accepts every request and
> spawns a thread to handle it. There is no limit on the rate at which it accepts
> connections. Other than crudely limiting the thread count, I don't know of a
> good way to slew the agent. Do you have any suggestions? Additionally, I have
> not seen any timeouts in gpg/libassuan when communicating with the gpg-agent.
Limiting the number of concurrent connections to gpg-agent is a useful
feature, but it requires more thought and is unlikely to make it into 2.2
anytime soon. I opened a ticket at https://dev.gnupg.org/T3529.
> [amul] If I were to prepare a patch for option #2, assuming it passes a review and
> conforms to the coding standards, would it be acceptable?
We should try this. Let me implement something for testing.
Error checking all gcry_mpi_foo functions would make the code
unreadable and would require complex, hard-to-test error cleanup.
Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz.
(Thoughts are free. Exceptions are governed by a federal law.)