Encrypted file-size approximation with multiple recipients
dshaw at jabberwocky.com
Wed Apr 2 06:37:22 CEST 2014
On Apr 1, 2014, at 9:01 PM, Tim Chase <gnupg at tim.thechases.com> wrote:
> I've been trying to find a good explanation on how something like
> gpg -r DEADBEEF -r CAFEBABE -r 8BADFOOD -o output.gpg -e input.txt
> works. The best I've been able to find is this:
> I'm mostly interested in the overhead, so I set up 4 distinct
> homedirs for testing. It looks like each additional recipient adds
> about 271 bytes (though one of them only has an extra 270 bytes), and
> there's a per-file overhead of about 66 or 67 bytes.
> So from my experimentation, the final file-size ends up being
> something like
> input_file_size + 67 + (271 * recipient_count)
> but I'm not sure how much that might change based on conditions I'm
> not taking into consideration (all my test GPG users were just
> gpg1 at example.com, gpg2 at example.com, etc), all with 2048-bit keys.
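The empirical estimate above can be sketched as a small helper. This is only a rule of thumb for the setup described in the question (2048-bit RSA keys, input that doesn't compress); the constants are the measured ones, not anything guaranteed by GPG:

```python
def estimate_gpg_size(input_size, recipient_count):
    """Rough output-size estimate for `gpg -r ... -e` with several recipients.

    Empirical constants from the experiment above: ~67 bytes of fixed
    per-file overhead plus ~271 bytes per recipient (2048-bit RSA keys,
    incompressible input). Real outputs will drift from this.
    """
    return input_size + 67 + 271 * recipient_count

print(estimate_gpg_size(10_000, 3))  # -> 10880
```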
This can change pretty significantly given different key lengths, different algorithms, and, perhaps most significantly, how compressible the original document is (by default GPG compresses data before encryption). An input file of text will compress very differently from an input file that's a jpeg (jpegs are already compressed, and so do not benefit much from a second layer of compression).
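The compression effect is easy to demonstrate with Python's zlib as a stand-in for GPG's pre-encryption compression; here os.urandom plays the role of already-compressed data like a jpeg:

```python
import os
import zlib

# Highly redundant text compresses dramatically...
text = b"the quick brown fox jumps over the lazy dog\n" * 1000

# ...while high-entropy data (a stand-in for a jpeg) barely shrinks at all,
# and can even grow slightly from the compression framing overhead.
already_compressed = os.urandom(len(text))

print(len(text), len(zlib.compress(text)))
print(len(already_compressed), len(zlib.compress(already_compressed)))
```

So for a text input the encrypted output can come out far smaller than the input, which is why no fixed formula survives once you allow arbitrary input data.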
> Is there a more formal formula that can be used to approximate the
> overhead of multi-recipient encryption?
Not really. If you constrain the problem as you have (everyone gets 2048-bit keys, etc), and constrain the input to a particular type of data, you can get a better approximation, but as soon as you open the problem up, the file sizes vary.
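For what it's worth, the observed ~271 bytes per recipient lines up with the size of an OpenPGP Public-Key Encrypted Session Key packet (RFC 4880, tag 1) for a 2048-bit RSA key. The breakdown below is my own back-of-the-envelope accounting, assuming RSA and an old-format packet header with a 2-octet length; it also explains the occasional 270: MPIs drop leading zero octets, so roughly 1 time in 256 the RSA ciphertext encodes one byte shorter:

```python
def pkesk_packet_size(key_bits):
    """Approximate size of a tag-1 (Public-Key Encrypted Session Key)
    packet for an RSA key of `key_bits` bits.

    Assumes an old-format packet header with a 2-octet length field and
    an RSA ciphertext with no stripped leading zero octets.
    """
    mpi = 2 + (key_bits + 7) // 8   # 2-byte MPI bit-count prefix + ciphertext
    body = 1 + 8 + 1 + mpi          # version + key ID + pubkey algorithm + MPI
    header = 1 + 2                  # packet tag byte + 2-octet length
    return header + body

print(pkesk_packet_size(2048))  # -> 271
```

Repeating the experiment with 4096-bit keys would put the per-recipient cost around 527 bytes by the same accounting, which is one concrete way the formula shifts with key length.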