Python bindings HOWTO proof reader request

Tobias Mueller muelli at cryptobitch.de
Sat Mar 17 23:24:47 CET 2018


Hi,

On Fri, 2018-03-16 at 03:51 +1100, Ben McGinnes wrote:
> Fair enough, how about adding this paragraph immediately thereafter as
> an explanation or clarification:
yeah, cool. Thanks.
I think it's worth discussing the technical aspects further.

> 
>    This is because the bindings use SWIG to dynamically generate C
>    bindings against =gpgme.h= and =gpgme.h= is generated at compile
>    time when GPGME is built from source.  Thus to include a package in
>    PyPI which actually built correctly would require either statically
>    built libraries for every architecture bundled with it or a full
>    implementation of C for each architecture.
> 
I understand that the system's gpgme.h is needed for generating the
Python bindings. But isn't that very similar to other Python bindings
to (system-wide) libraries on PyPI? What makes gpgme different from
those?

You don't really have to ship a gpgme.h, do you?
Neither for the source distribution (you pick up the system's header)
nor for the binary distribution (the header is not needed at runtime,
is it?).
That's at least how the packages that I know of do it.
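
To illustrate, here is a minimal sketch of how a source distribution
might locate the system's header at build time instead of shipping
it, assuming GPGME's development files are installed and the
gpgme-config helper is on PATH (package and file names here are
hypothetical):

    import subprocess
    from setuptools import setup, Extension

    # Query the installed GPGME for its compiler and linker flags,
    # so the build uses the system's gpgme.h rather than a bundled copy.
    cflags = subprocess.check_output(
        ["gpgme-config", "--cflags"]).decode().split()
    libs = subprocess.check_output(
        ["gpgme-config", "--libs"]).decode().split()

    setup(
        name="example-gpgme-binding",         # hypothetical name
        version="0.1",
        ext_modules=[
            Extension(
                "example_gpgme",
                sources=["example_gpgme.c"],  # hypothetical source
                extra_compile_args=cflags,
                extra_link_args=libs,
            )
        ],
    )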


For binary distribution, you don't have to provide things for each
and every OS on each and every architecture under the sun. If you
provide only a small subset, you've already helped someone install
the library.
That is to say, PyPI is quite happy to accept packages without having
binaries compiled for HURD on s/390 or BSD on a toaster. So there is
no technical limitation in providing only, say, Linux binaries for
x86 and x64.
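
Note that pip only considers wheels whose platform tag matches the
running interpreter, so a wheel uploaded only for Linux on x64 is
simply ignored on other platforms, which then fall back to the source
distribution. A quick way to see which tags your own machine accepts,
assuming the third-party "packaging" library is installed:

    from packaging import tags

    # Print the most specific platform tags pip would accept here,
    # e.g. cp36-cp36m-manylinux1_x86_64 on a Linux/x64 box.
    for tag in list(tags.sys_tags())[:5]:
        print(tag)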

The question then becomes whether it's worth the hassle to provide a
binary package. Of course, for your HURD on s/390 or the BSD on
someone's forgotten MIPS laptop, it's probably not worth the effort,
because it turns out that the consumers of the gpgme bindings
published on PyPI are not equally distributed over the combinations
of OS and architecture, but rather concentrated on Linux on x64.

> By the way, each architecture there would mean *both* software
> (operating system) and hardware.
I guess it depends on what you want to achieve.  If you want to cater
for the HURD user with their s/390 server or the single remaining
person running Solaris on an UltraSPARC, then yes, you'll have
trouble providing that.

I'd argue that if you publish on PyPI you want to make your library
more accessible to people, users and developers alike.  I'd further
argue that to make the gpgme Python bindings more attractive to
adopt, providing binaries for Linux on x64 would help a lot, if only
because that's where close to all consumers of the gpgme bindings
published as a binary package on PyPI are.  The BSD user who installs
everything on their own PCB won't lack the build environment or the
willingness to provide one, so they will be happy to use the source
distribution.  But if you're creating an OpenPGP-based tool and
you're deciding which library to use, then having to have a full
build environment for C code may very well scare you away from
gpgme.  The fact that you can't easily distribute your app, because
your users would have to have a full build environment, is probably
an even stronger argument.

A quick comparison of "gpg" and "build-essential" on Debian's popcon
seems to indicate that it would be much easier to bring gpgme to
users' machines if there were a binary package on PyPI.

I appreciate that this is not a short-term problem, because Ubuntu
stable will be EOLed soon anyway, and there is no other relevant
GNU/Linux distribution which doesn't have a recent enough gpgme along
with Python bindings. But the next distribution not shipping a recent
enough version will come.

Long story short, PEP-513 is quite clear about what it expects, and
while producing such a manylinux wheel is a bit of an effort, the
number of potential consumers might make up for it.
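
For the record, a rough sketch of that workflow, assuming it runs
inside one of the official manylinux build images with GPGME
installed and the auditwheel tool available (the wheel filename below
is hypothetical):

    import subprocess

    # Build a plain linux_x86_64 wheel against the locally installed GPGME.
    subprocess.check_call(["pip", "wheel", ".", "-w", "dist/"])

    # auditwheel copies the non-standard shared libraries (libgpgme,
    # libassuan, libgpg-error) into the wheel, rewrites their rpaths,
    # and retags the result as a manylinux wheel per PEP-513.
    subprocess.check_call([
        "auditwheel", "repair",
        "dist/example_gpgme-0.1-cp36-cp36m-linux_x86_64.whl",  # hypothetical
        "-w", "wheelhouse/",
    ])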


Cheers,
  Tobi


