Any way to get smaller key sizes?

Jean-David Beyer jdbeyer at
Wed Sep 28 17:24:30 CEST 2005


Jason Barrett wrote:

> In a nutshell, I'm encrypting data entered via a website and storing it
> in a database for later retrieval and decryption by real-time user
> programs. I don't want to give up the value that the PGP brand adds to
> the product, but I can understand that some of my colleagues have
> concerns about the storage requirements, even though I have done
> worst-case analyses which indicate that the storage we need is available
> at minimal cost.  I'm willing to address those concerns with some
> weakening of the public key security, given that there are other
> mechanisms in place to protect vital data (SSL for one).
I had an occasion to work on a large project that management said had to run on
IBM IMS/DB (the hierarchical model) almost ten years after the relational model
came out. We had lots of problems because the transaction handling provided by
IMS did not match transactions as the users perceived them, so if nothing were
done, concurrent updates would be a problem. There were to be something like
5000 clerks entering transactions from all over the country. I figured out a way
to provide locks for the transactions as the users perceived them (each of which
involved several database transactions). But it required 8 bytes per record.

They argued that the disk space required for the locking procedure was
excessive, and that the I/O cost would cripple the system. I argued that the
indices and the rest of the data in the records were so large that no one would
notice the 8 bytes, and that the locking procedure required only one more I/O
per transaction. They said that would double the I/O cost, but I counted the
I/Os required: they went up from 19 to 20, or about 5%. Management was not
convinced, so I took it all out of the code.

They said the odds against a concurrent update being a problem were a million
to one. I calculated the odds to be far shorter than that, but noted that even
using their number, we would have a problem every week or two. They ignored me,
and the first problem was detected within a few days. By then it was too late
to change anything: hundreds of programs relied on the structure of that
database, and nothing could be changed. They decided to use manual methods to
prevent concurrent updates. Fortunately, I got off that project, so I do not
know how they coordinated 5000 employees scattered all around the country by
manual methods.
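For what it is worth, the arithmetic behind "a problem every week or two" is
easy to sketch. The real transaction volume is long gone, so the figures below
are assumptions (5000 clerks at a hypothetical 30 transactions a day, and a
5-day work week); only the one-in-a-million odds come from the story:

```python
# Hedged sketch: expected concurrent-update collisions under assumed volume.
clerks = 5000
tx_per_clerk_per_day = 30      # assumed figure, not from the original post
p_collision = 1e-6             # management's "million to one" per transaction

tx_per_day = clerks * tx_per_clerk_per_day          # 150,000 transactions/day
expected_per_week = tx_per_day * 5 * p_collision    # assuming a 5-day week
days_to_first = 1 / (tx_per_day * p_collision)      # mean wait for first hit

print(f"{expected_per_week:.2f} expected collisions per week")
print(f"first collision expected after about {days_to_first:.1f} working days")
```

Even with management's optimistic per-transaction odds, the assumed volume
yields a collision roughly every week, which is consistent with one turning up
within the first few days.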

In your case, it may be that the encrypted records will actually be shorter
than the unencrypted ones, since gpg compresses the data before encrypting it.
But as for security, it is difficult to imagine a program enhancement, applied
after sensitive data has been stolen, that will get it back, is it not?
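The compression point is easy to check without gpg at all: GnuPG compresses the
plaintext before encrypting, so redundant database records can shrink
substantially. A rough sketch using Python's zlib as a stand-in for gpg's
internal compression (not gpg itself; the sample record is made up):

```python
import zlib

# Typical database text is highly redundant, so it compresses well.
# The record contents here are purely illustrative.
record = ("name=John Smith;addr=123 Main St;city=Shrewsbury;" * 40).encode()

compressed = zlib.compress(record, level=6)
print(len(record), "->", len(compressed), "bytes before encryption")
```

Encryption then adds a roughly constant per-message overhead (session-key
packet, headers), so a well-compressed record can still end up smaller than the
original plaintext.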

I feel pretty cynical about corporate management. Perhaps there are well
managed corporations, but they must be in the minority.

--
  .~.  Jean-David Beyer          Registered Linux User 85642.
  /V\  PGP-Key: 9A2FC99A         Registered Machine   241939.
 /( )\ Shrewsbury, New Jersey
 ^^-^^ 11:10:00 up 105 days, 5:08, 3 users, load average: 4.46, 4.27, 4.13


More information about the Gnupg-users mailing list