Python bindings HOWTO proof reader request
ben at adversary.org
Mon Mar 19 04:35:02 CET 2018
On Sat, Mar 17, 2018 at 11:24:47PM +0100, Tobias Mueller wrote:
> On Fri, 2018-03-16 at 03:51 +1100, Ben McGinnes wrote:
>> Fair enough, how about adding this paragraph immediately thereafter as
>> an explanation or clarification:
> yeah, cool. Thanks.
> I think it's worth discussing the technical aspects further.
> I understand that the system's gpgme.h is needed for generating the
> python bindings. But is that not very similar to other python bindings
> to (system wide) libraries on PyPI? What's making gpgme more special
> than those?
Essentially it boils down to the dynamic generation of that header
file when GPGME is compiled in conjunction with the dynamic generation
of the SWIG bindings. If gpgme.h shipped statically with the source
then there'd be no problem, but that isn't the case here.
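To illustrate that coupling, the generated gpgme.h carries the version it was built for in its GPGME_VERSION macro, which is one reason a statically shipped copy would immediately go stale. A minimal sketch (the parsing helper below is hypothetical, not part of the bindings; the macro name is the real one from gpgme.h):

```python
import re

def gpgme_header_version(header_text):
    """Extract the GPGME_VERSION string from the text of gpgme.h.

    Hypothetical helper for illustration: gpgme.h is generated when
    GPGME itself is compiled, so this value differs from system to
    system, and the SWIG bindings must be generated against it.
    """
    m = re.search(r'#define\s+GPGME_VERSION\s+"([^"]+)"', header_text)
    return m.group(1) if m else None

# A snippet shaped like the generated header (version is illustrative).
sample = '#define GPGME_VERSION "1.10.0"\n'
print(gpgme_header_version(sample))  # 1.10.0
```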
> You don't really have to ship a gpgme.h, do you?
Yeah, actually we do since that's what GPGME itself depends upon.
Take my MUA, for instance: though it deals solely with the C code
itself, it does support GPGME natively (and is compiled with GPGME
support). It is explicitly compiled against the version of gpgme.h on
this system and it explicitly calls the features of the version of
GPGME in the specific path I point it to at compile time (which is
usually the current master or pretty close to it).
> Neither for source distribution (you pick the system's header) nor for
> binary distribution (the header is not needed at runtime, is it?).
> That's at least how the packages that I know do it.
You've just hit the nail on the head; these bindings don't behave the
same way as the ones you're used to.
> For binary distribution, you don't have to provide things for each
> and every OS on each and every architecture under the sun. If you
> provide only a small subset, you've helped already someone
> installing the library.
No, but we are already aware of the range of architectures which
actively use GnuPG components and attempt to keep binary distributions
of any type to a minimum as a result.
Although there is another aspect to that, of course, and that is that
with crypto projects there is a general preference to not ship
binaries at all. We want end users to know their installations were
made from the same source code everyone is looking at. We want them
to verify the tarballs of that source code and we want them to then
compile that source code. These are good habits to encourage.
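The checksum half of that habit can be sketched in a few lines (the helper name is made up for illustration; the signature half is done with gpg --verify against the release signing key and isn't reproduced here):

```python
import hashlib

def sha256_matches(data, expected_hex):
    """Check a downloaded tarball's bytes against a published SHA-256 sum."""
    return hashlib.sha256(data).hexdigest() == expected_hex.lower()

# Illustrative only: real releases publish checksums (and OpenPGP
# signatures) alongside the tarball for exactly this comparison.
blob = b"pretend this is gpgme-x.y.z.tar.bz2"
digest = hashlib.sha256(blob).hexdigest()
print(sha256_matches(blob, digest))  # True for an unmodified download
```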
> That is to say, PyPI is quite happy to accept packages without
> having binaries compiled for HURD on s/390 or BSD on a toaster. So
> there is no technical limitation in providing only, say, Linux
> binaries for x86 and x64.
As Jakob already pointed out, it's more accurate to say that those are
the only options available for Linux binaries on PyPI.
> Soon the question becomes whether it's worth the hassle to provide a
> binary package. Of course, for your HURD on s/390 or the BSD on
> someone's forgotten MIPS laptop, it's probably not really worth the
> effort. Because it turns out that the consumers of the gpgme
> bindings published on PyPI are not equally distributed over the
> combinations of OS and architecture, but rather concentrating on
> Linux on x64.
So the question then becomes, is that usage enough to justify a
special binary built just for Linux x86-64 users or not?
Maybe it is? I'm unsure ... since my own usage spans Linux,
FreeBSD and OS X ... with the occasional return to Solaris every now
and then. On the other hand, I generally make a point of compiling my
crypto libraries from source no matter what.
OTOH, I know better than to assume that my use case will be the
standard. If anything, I'm just not convinced that there is going to
be a "standard use case" in this particular field.
>> By the way, each architecture there would mean *both* software
>> (operating system) and hardware.
> I guess it depends on what you want to achieve. If you want to
> cater for the HURD user with their s/390 server or the single
> remaining person running Solaris on an UltraSPARC then yes, you'll
> have trouble providing that.
And yet GnuPG has an extraordinarily wide customer base. Recent posts
to gnupg-devel and gnupg-users alone indicate there are still plenty
of people using Solaris and GPG.
Hell, there are even still people trying to port OpenBSD to every bit
of SPARC and UltraSPARC hardware they can get their hands on (and then
inevitably run into trouble when they encounter the Fujitsu
UltraSPARCs ... but that's quite off-topic).
> I'd argue that if you publish on PyPI you want to make your library
> more accessible to people; users and developers alike.
That's a fair point.
> I'd further argue that, to make it more interesting for people to
> adopt the gpgme Python bindings, providing binaries for Linux on x64
> would help a lot, if only because that's where close to all
> consumers of the gpgme bindings published as binary packages on PyPI
> are.
This too is a very good point.
> The BSD user who installs everything on their own PCB won't
> lack the build environment nor the willingness of providing that, so
> they will be happy to use the source distribution.
Yep. The same actually applies to certain types of OS X users
(generally the ones using MacPorts). Homebrew is a car crash in slow
motion and so far GPGME is not bundled with GPG Suite (formerly known
as GPG Tools).
> But if you're creating an OpenPGP-based tool and you're deciding
> what library to use, then having to have a full build environment
> for C code may very well scare you away from gpgme.
Which has been the state of play for a long time, hence wanting to
bring it to more players through various types of bindings in the
first place.
> The fact that you can't easily distribute your app, because your
> users would have to have a full build environment is probably an
> even heavier argument.
Yeah, there's actually a much longer term goal that's aimed at
addressing this, but it's still in the early stages and requires
more groundwork first.
> A quick comparison of "gpg" and "build-essential" on Debian's popcon
> seems to indicate that it would be much easier to bring gpgme to
> users' machines if there was a binary package on PyPI.
Ah, yes, the Debian ecosystem ... I have a slightly different plan
there, which essentially boils down to convincing dkg to make sure
GPGME ships with future Debian releases in whole with the relevant
language bindings (specifically the Python bindings and the C++/Qt
bindings). That should open things up quite nicely, with the
inevitable flow-on effect for Debian-based distros.
> I appreciate that this is not a short-term problem,
Now *that* is a distinct understatement.
> because Ubuntu stable will just be EOLed soon and that there is no
> other relevant GNU/Linux distribution which doesn't have a recent
> enough gpgme along with Python bindings. But the next distribution
> not shipping a recent enough version will come.
> Long story short,
Too late! :P
> PEP-513 is quite clear about what it expects and while producing
> such a manylinux wheel is a bit of an effort, the number of
> potential consumers might make up for it.
Yes, it might very well do so at that. We'll certainly consider it
and if we do decide to change tack there and start pushing binary
installs to PyPI then I'll update the HOWTO to reflect that. It would
obviously need to remain as accurate as possible (hence giving it a
version number from the outset).
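At its core, PEP 513's compatibility rule is a glibc version floor: a manylinux1 wheel is built against glibc 2.5, so any Linux with glibc 2.5 or newer should be able to run it. A very rough sketch of that rule (the function name is made up; real installers like pip perform more thorough checks than this):

```python
def manylinux1_compatible(glibc_version):
    """Rough PEP 513 check: manylinux1 wheels are built against
    glibc 2.5, so a system with glibc >= 2.5 can use them.

    glibc_version is a (major, minor) tuple.
    """
    return glibc_version >= (2, 5)

print(manylinux1_compatible((2, 27)))  # True on a modern distro
print(manylinux1_compatible((2, 3)))   # False: glibc too old
```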