Making a gpg library

raf raf at comdyn.com.au
Tue Oct 24 22:53:50 CEST 2000


Michael Still wrote:

> On Mon, 23 Oct 2000, tftp wrote:
> 
> > $ time ./gpgrun 1000
> > 5.70user 1.89system 1:32.86elapsed 8%CPU (0avgtext+0avgdata 0maxresident)k
> > 0inputs+0outputs (356255major+91062minor)pagefaults 0swaps
> 
> My case is nine million documents. Sure, I might not get them all done
> in any 24-hour period, but time really starts to matter.
> 
> > #!/usr/bin/perl
> > # spawn a fresh gpg process $ARGV[0] times to measure startup cost
> > for ($n = 0; $n < $ARGV[0]; ++$n) {
> >         system("gpg --version > /dev/null");
> > }
> > 
> > The perl overhead will be somewhat visible on low pass counts but
> > should be irrelevant on longer tests.
> > 
> > Runs       Time, s.    Time increase   Pass cost, s.
> > 1          0:00.26          1            0.26
> > 10         0:00.89          3            0.09
> > 100        0:09.52         36            0.09
> > 1000       1:32.86        358            0.09
> > 
> > As we see, numerous invocations of gpg (each as a new process)
> > cost 90 ms per invocation on an AMD Athlon 450 MHz, Linux 2.2.17.
> > The first pass costs far more because Perl and gpg aren't in the
> > cache yet. This test does no actual crypto work; once that is
> > included, the relative cost of loading gpg becomes even smaller.
> 
> A system() call in perl is as inefficient as (if not more so than) the
> shell option you showed above: it has to fork, exec the shell, and
> _then_ exec gpg. (The list form of system(), sketched after this quote,
> skips the intermediate shell.)
> 
> The problem is that we have no timing for dynamically loading a library
> and then calling a routine from it. The best I can come up with on short
> notice is 10,000 calls (plus a few for setup) of a PDF generation routine
> on a PII 300 laptop...
> 
> 8.56user 0.27system 0:09.32elapsed 94%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (102major+3257minor)pagefaults 0swaps
> 
> The function called does some pretty ugly linked list traversal, so that
> should be taken into account.
> 
> If we take your 1,000-run gpg --version example in perl and multiply it
> by ten, my ugly library timing is still faster -- even though it ran on
> a slower machine.
> 
> Anyway, perhaps my needs are non-standard and therefore should not be
> supported. My point is more that the design of gpg makes my task harder,
> for no reason that I can fathom...
> 
> Mikal
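
an aside on the shell exec mentioned above: perl's system() only
goes via /bin/sh when the command string contains shell
metacharacters, as the "> /dev/null" redirection does. the list
form of system() execs gpg directly, so each pass is one
fork/exec instead of two. a minimal variant of the benchmark
loop, with the redirection moved into perl:

#!/usr/bin/perl
# list form of system() skips the intermediate shell; redirect
# our own stdout instead, which the gpg children then inherit
open(STDOUT, "> /dev/null") or die "can't redirect stdout: $!";
for ($n = 0; $n < $ARGV[0]; ++$n) {
        system("gpg", "--version");
}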

being able to run gpg in a coprocess mode should be a lot easier
to achieve than converting the code to suit a new, well-designed
C API, and it would solve your problem: only one gpg process. any
takers?
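
something like this on the calling side, perhaps. purely a sketch:
gpg has no coprocess mode today, and the --coprocess flag and the
one-line sign/ok protocol below are invented for illustration.

#!/usr/bin/perl
use IPC::Open2;
use IO::Handle;

# start gpg once and keep it alive; every document after the first
# avoids the per-process startup cost measured above.
# NB: --coprocess and the "sign <file>" / "ok <file>" exchange are
# hypothetical -- this shows what a coprocess mode might look like,
# not what gpg currently offers.
$pid = open2(\*RDR, \*WTR, "gpg", "--batch", "--coprocess");
WTR->autoflush(1);              # unbuffer the pipe, or we deadlock

foreach $doc (@ARGV) {
    print WTR "sign $doc\n";    # one request per document
    $reply = <RDR>;             # block until gpg reports back
    warn "gpg failed on $doc\n" unless $reply =~ /^ok/;
}
close(WTR);                     # EOF tells the coprocess to exit
waitpid($pid, 0);

nine million documents would then cost one process start instead
of nine million.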

raf


