Making a gpg library

tftp tftp at yahoo.com
Tue Oct 24 05:08:22 CEST 2000


--- Michael Still <mikal at stillhq.com> wrote:
> On Mon, 23 Oct 2000, tftp wrote:
>
> > $ time ./gpgrun 1000
> > 5.70user 1.89system 1:32.86elapsed 8%CPU (0avgtext+0avgdata 0maxresident)k
> > 0inputs+0outputs (356255major+91062minor)pagefaults 0swaps

> The problem is we have no timing for a dynamic load of a library and then
> the call of a routine. The best I can come up with on short notice is
> 10,000 (plus a few for setup) calls of a PDF generation routine on a PII
> 300 laptop...
>
> 8.56user 0.27system 0:09.32elapsed 94%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (102major+3257minor)pagefaults 0swaps

Indeed, your code is mostly byte-crunching (very few page faults and
high CPU utilization), while my test was mostly I/O and memory
allocation. That's why I mentioned that the results would change once
gpg is given a real crypto job. To finish the experiment, I replaced
--version with:

gpg --encrypt -r 0x0BABFC68 -o /dev/null complete.tcsh > /dev/null

Here is the result for 1000 passes:

$ time ./gpgrun 1000
318.60user 22.29system 7:47.65elapsed 72%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (418215major+144035minor)pagefaults 0swaps
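
For the record, gpgrun is nothing but a loop around the gpg call; in
C terms it is equivalent to something like this (a sketch for the
timing argument, not the literal script):

#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int i, n = (argc > 1) ? atoi(argv[1]) : 1;

    for (i = 0; i < n; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* the same job as above; ciphertext goes to /dev/null */
            execlp("gpg", "gpg", "--encrypt", "-r", "0x0BABFC68",
                   "-o", "/dev/null", "complete.tcsh", (char *)NULL);
            _exit(127);            /* exec failed */
        }
        waitpid(pid, NULL, 0);     /* strictly one gpg at a time */
    }
    return 0;
}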

Previously we spent 93 seconds on 1000 invocations of gpg that did no
real work. Now we spent 468 seconds to encrypt 1000 files (always the
same file, and to /dev/null, which also matters). So if you encrypt
your files this way, process overhead costs you about 20% (93 s of
468 s), which certainly does matter. I assume here that the
initialization and deallocation of resources that gpg performs even
when called with --version could be done just once, regardless of
what crypto jobs are required, and how many.

You will be reading and writing real files, so you will have to spend
more time on file I/O. An SMP box can help here, though, depending on
the architecture of your software; a sketch follows this paragraph.
In fact, I would not be surprised if your project had to include
hardware and ship as a complete system to survive such data traffic.
Symmetric ciphers lend themselves well to hardware implementation,
and that is where your filesize-dependent time goes; I would consider
a PCI card (or several) for that.
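
For example, a crude way to keep all CPUs busy while still shelling
out to gpg (the worker count and file names below are made up for
illustration):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4   /* roughly one gpg per CPU */

int main(void)
{
    const char *files[] = { "a.dat", "b.dat", "c.dat", "d.dat",
                            "e.dat", "f.dat", "g.dat", "h.dat" };
    int i, nfiles = sizeof(files) / sizeof(files[0]);
    int running = 0;

    for (i = 0; i < nfiles; i++) {
        if (running == NWORKERS) {   /* wait for a free slot */
            wait(NULL);
            running--;
        }
        if (fork() == 0) {
            char out[64];
            snprintf(out, sizeof(out), "%s.gpg", files[i]);
            execlp("gpg", "gpg", "--encrypt", "-r", "0x0BABFC68",
                   "-o", out, files[i], (char *)NULL);
            _exit(127);
        }
        running++;
    }
    while (running-- > 0)            /* drain remaining workers */
        wait(NULL);
    return 0;
}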

> Anyway, perhaps my needs are non-standard and therefore should not be
> supported. My point is more that the design of gpg makes my task harder,
> for no reason that I can fathom...

Sure, most people don't encrypt/decrypt so many files, and gpg can't
be everything for everyone; we have Windows for that :-) Your project
is so enormously computationally intensive that you may be better off
simply linking gpg into your software, much as the PGP SDK allows,
AFAIR. I assume the licensing of the two products is compatible. That
could be done in 15 minutes of lazy hacking: replace main() with your
own code. A further step would be to get rid of the intermediate
files (you probably already know what to process and where to put the
data); the filesystem would save you a lot of cycles if it weren't
used at all :-) Either run main() as a thread, or as a coprocess in a
single-threaded system. I would prefer not to rely on IPC to move
that much data; it may be too expensive.
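
To make the idea concrete, here is a minimal sketch of the linked-in
variant. It assumes gpg's objects are compiled in with main() renamed
to gpg_main(); gpg exposes no such entry point, so the name and the
rename are hypothetical (and whether gpg's internals tolerate running
as a thread is another question):

#include <pthread.h>
#include <stdio.h>

/* assumption: gpg rebuilt with main() renamed to gpg_main() */
extern int gpg_main(int argc, char **argv);

struct job {
    int argc;
    char **argv;
    int status;
};

static void *run_gpg(void *arg)
{
    struct job *j = arg;
    j->status = gpg_main(j->argc, j->argv);
    return NULL;
}

int main(void)
{
    static char *argv[] = { "gpg", "--encrypt", "-r", "0x0BABFC68",
                            "-o", "/dev/null", "complete.tcsh", NULL };
    struct job j = { 7, argv, -1 };
    pthread_t t;

    /* the crypto job runs as a thread; the caller stays responsive */
    pthread_create(&t, NULL, run_gpg, &j);
    pthread_join(&t, NULL);
    printf("gpg_main returned %d\n", j.status);
    return 0;
}

The next step, avoiding intermediate files entirely, means passing
data through memory instead of the file arguments above.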

Cheers,
Dmitri