Follow-up to Crashes with gpg-agent 2.1.18
matthew.summers at syapse.com
Tue Jun 6 07:43:23 CEST 2017
On Mon, Jun 5, 2017 at 3:40 PM, Daniel Kahn Gillmor
<dkg at fifthhorseman.net> wrote:
> On Sun 2017-06-04 22:35:59 -0500, Matthew Summers wrote:
>> It's probably true that for most users, highly parallelized decryption
>> operations are rare. However, it's not difficult to demonstrate use
>> cases where it's important to handle highly parallelized requests to
>> gpg-agent.
> It would really help this conversation to document a few of these
> real-world use cases. I appreciate that your demonstration scripts are
> narrow and targeted because it helps to isolate the problem, but having
> clear real-world cases will help in motivating the fix to actually
> get deployed.
> Where are highly-parallelized requests to gpg-agent likely to happen?
I am delighted to outline a scenario. First some background.
There is an open source config management and remote execution tool
written in Python called Salt. It lets the user store information
formatted as yaml, and it's useful to store secrets in these yaml
files. It's good to encrypt these secrets when storing them,
encryption at rest being good and all that. This yaml can be rendered
for runtime use in the config management system.
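To make that concrete, here is roughly what such a pillar file looks
like using Salt's gpg renderer (the `#!yaml|gpg` shebang is Salt's;
the key name `api_token` and the elided ciphertext are just
illustrative):

```yaml
#!yaml|gpg

# Any value that is an ASCII-armored PGP message gets decrypted by
# gpg-agent at render time; plain values pass through untouched.
api_token: |
  -----BEGIN PGP MESSAGE-----
  ...ciphertext encrypted to the Ops team's keys...
  -----END PGP MESSAGE-----
```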
So, for example, let's say we have secret data encrypted with the
entire Ops team as recipients, like an API token or something larger
like private TLS key material. Further, let's say we need to put this
secret onto 10000 systems in batches of 100.
Salt provides a handy mechanism to make such a thing simple, even via
ssh, if desired. With a few lines of code, our gpg'd secret, and a
small bit of setup, we get to a point where the one-liner
`salt-ssh -E 'myhosts' state.apply update-special-secret` will do the
trick. That is, it will initiate 100 processes that each decrypt the
secret, render out the execution data, ssh to the remote host,
execute, and return relevant information to us, then pick up the next
batch, and so on.
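The fan-out pattern above can be sketched in a few lines of Python.
Everything here is illustrative rather than Salt internals — the file
name `secret.yaml.gpg`, the host names, and the `fan_out()` helper are
assumptions — but each worker process really does ask gpg (and
therefore gpg-agent) to decrypt the same secret concurrently:

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

BATCH = 100

def batches(seq, size):
    """Split a host list into fixed-size batches, like salt-ssh does."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def deploy(host):
    """Decrypt the secret for one host; Salt would then render and ship it."""
    result = subprocess.run(
        ["gpg", "--quiet", "--decrypt", "secret.yaml.gpg"],
        capture_output=True,
    )
    return host, result.returncode

def fan_out(hosts):
    # One pool of BATCH workers per batch: 100 concurrent decryptions
    # hitting the same gpg-agent, which is where its fixed-size secmem
    # pool becomes the bottleneck.
    for batch in batches(hosts, BATCH):
        with ProcessPoolExecutor(max_workers=BATCH) as pool:
            for host, rc in pool.map(deploy, batch):
                print(host, "ok" if rc == 0 else "failed")

# fan_out([f"host{n}" for n in range(10000)])  # ~100 batches of 100
```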
The numbers here, of 100 or 1000, are totally arbitrary, but
sufficiently large to demonstrate the scale. On a high CPU count
system with lots of memory, it would not be terribly crazy to push
these numbers much higher. In practice, with the secmem size set to
64k, we would run into issues with batch sizes of 10.
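For what it's worth, the pressure is easy to provoke without Salt at
all. Something like the following drives enough concurrent
decryptions at one gpg-agent to stress the secmem pool (`secret.gpg`
stands in for any file encrypted to a locally usable key):

```shell
#!/bin/sh
# Hypothetical reproduction of the parallel-decryption load described
# above: each backgrounded gpg talks to the same gpg-agent at once.
for i in $(seq 1 100); do
    gpg --quiet --decrypt --output /dev/null secret.gpg &
done
wait
# Watch the output for "decryption failed" / secure-memory errors.
```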
I hope this is clear and helps. I could probably put together a demo
of this, though I am not sure how to simulate 1000 hosts really nicely
(read: cheaply and easily). Ansible or Puppet can do similar things
too, using GPG to store/decrypt secrets data.
Please feel free to ask questions if what I have outlined above needs
clarification.