me at halfdog.net
Thu May 28 23:39:50 CEST 2020
Peter Lebbing writes:
> On 28/05/2020 06:20, halfdog wrote:
>> Using a signing key per source seems to be impractical here
>> too as it would also require to transfer the whole file beforehand
>> for signature verification.
> What about solving your entire use-case with an explicit two-step approach?
> There's an encrypted, signed OpenPGP file with just a
> cryptographically secure random number. It is signed by a per-source
> signing key and encrypted to the secure machine.
> There's another file with symmetrically encrypted data using
> the tool of preference (could be GnuPG with --symmetric encryption)
> which actually contains the bulk of the data. The "passphrase"
> or encryption key is the cryptographically secure random number
> from the OpenPGP file.
> So every time you would now create a public-key encrypted file
> with the historic data, you explicitly generate a fresh "session
> key". You encrypt the data symmetrically with that key, and
> also public-key encrypt and sign that "session key".
> When you need to recover the historic data, the device sends
> only all the public-key encrypted "session keys" to the secure
> machine. That checks all these are signed by the device in
> the field, and if so, returns all "session keys".
> It's basically a duplication of the OpenPGP session key, and
> indeed, internally you are using a session key to encrypt a
> "session key". So it needs twice the randomness, which might
> be a problem on embedded systems. But it prevents needing packet
> surgery and inspection, instead just using default mechanisms.
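For reference, the proposed flow could be sketched with standard gpg commands. Everything here is a placeholder illustration, not the actual setup: the throwaway keyring, the key "device@example.org" (standing in for the per-source device key and the secure machine's key), and the file names are all assumptions.

```shell
# Throwaway keyring and unprotected demo key (placeholders only)
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key device@example.org default default never

# 1. A fresh cryptographically secure random "session key" per file
head -c 32 /dev/urandom | base64 > session.key

# 2. The bulk data, symmetrically encrypted with that key
printf 'historic data\n' > data.bin
gpg --batch --pinentry-mode loopback --symmetric \
    --passphrase-file session.key --output data.gpg data.bin

# 3. Only the small key file is public-key encrypted and signed;
#    this is all the secure machine ever needs to see
gpg --batch --pinentry-mode loopback --passphrase '' \
    --encrypt --sign --recipient device@example.org \
    --output session.key.gpg session.key
```

For recovery, the device would send only session.key.gpg; after signature verification the secure machine returns the decrypted session key, with which data.gpg can be opened locally.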
You are right. The reason for the gpgsplit/transfer/decrypt scheme
was mainly that the growing data volume made the initial design
with full data transfer problematic. I should have moved to an
intermediate-key design back then already.
I think I will change the encryption of new data according to
your suggestions and keep some old gpgv1 instance around while
there is still old data using the old encryption scheme.
It would even be possible to "upgrade" the old data by extracting
the session keys and re-encrypting them, but I do not think this
would be worth it.
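Just to illustrate what such an upgrade might look like: gpg's --show-session-key and --override-session-key options can pull out the OpenPGP session key of an old file so that only the key, not the bulk data, needs re-wrapping. The keyring, key and file names below are throwaway placeholders, not the real setup.

```shell
# Throwaway keyring, key and "old" encrypted file (placeholders)
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key device@example.org default default never
printf 'old historic data\n' > plain.bin
gpg --batch --encrypt --recipient device@example.org \
    --output old-data.gpg plain.bin

# Extract the OpenPGP session key of the old file (printed on stderr
# as: gpg: session key: '9:HEX...')
gpg --batch --pinentry-mode loopback --decrypt --show-session-key \
    --output /dev/null old-data.gpg 2> show.log
sed -n "s/.*session key: '\([^']*\)'.*/\1/p" show.log > old.sessionkey

# That extracted key could then be re-wrapped under the new scheme;
# whoever holds it can open the old container without the secret key:
gpg --batch --override-session-key "$(cat old.sessionkey)" \
    --decrypt --output recovered.bin old-data.gpg
```

The bulk container stays untouched; only the small session-key file would be re-encrypted and signed per the new scheme.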
PS: Good point about the randomness needed. That should be
considered in general, but in this specific use case (given the
I/O activity for the data to be encrypted) the software RNG
should gather sufficient entropy between invocations once per