[PATCH Libgpg-error] gpg-error.m4: support pkg-config
wk at gnupg.org
Wed Oct 17 11:27:41 CEST 2018
On Sat, 13 Oct 2018 00:07, alon.barlev at gmail.com said:
> make was introduced to manage a set of simple rules to avoid rebuilds
> when possible; then it was realized that for large projects it is difficult to
make was also devised in 1976 as a simple form of dependency tracker to
make sure that a changed source file won't go unnoticed.
> autoconf was introduced to generate files from templates based on
> logic as it was difficult to add functional logic into make files.
That more closely describes imake. autoconf implements the GNU strategy
of first testing a system for features and then using only standard-style
macros to make use of system-specific features. This avoided the often
deeply nested ifdef chains you still see in some software.
> automake was introduced to provide a simple method to generate make
> files that actually work with a less error-prone syntax, supporting
Right. And to make sure that the required targets are always available
(e.g. make distcheck).
> These concerns could have been addressed using metadata or scripts, at
> first there was no standard for metadata, so these programs that
This is why autoconf runs tests. The metadata provided by libraries
are actually not tests but hints on how they were configured.
> selection to manage the metadata, having consistent behavior among
> packages, not running anything from sysroot - all is important to
Why should one not run something from the sysroot? POSIX shell scripts
are well suited to be run on all platforms, be it on the build system or
on the final host (after installation or shared with an emulator).
> The gnupg projects already use them all, make, autoconf, automake,
> libtool and pkg-config, so let's at least count them all and
pkg-config only because some external packages provide only pkg-config files.
> When I saw libgpg-error master publishes pkg-config metadata, I was
> very happy, as it does show some new openness, as I know your point of
> view. You also stated that you want pkg-config to be second class
> option, so I introduced the enable_pkg_config flag with default no, to
That is the whole point of avoiding a second build system. People
will soon start to use that alternative system, and as maintainers we run
into all kinds of problems because it is assumed that this is a supported
way of using a library.
> keep backward compatibility. Using the pkg-config resolves issues of
> multilib and cross-compile without need to modify the existing
I still can't see why this is the case. SYSROOT support has been in our
libraries since 2014 and makes it really easy to use a cross-built
library: the foo-config script is run as $SYSROOT/bin/foo-config, where
the make install of the library has stored it.
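As a sketch of that mechanism (all names and paths here are stand-ins, not
the real gpg-error-config): a fake sysroot is created, a stub config script
is installed into it, and the build then runs the script from inside the
sysroot, exactly as a cross-build would.

```shell
#!/bin/sh
# Simulate the SYSROOT lookup with a stub script (hypothetical "foo"
# library; a real build would use e.g. gpg-error-config).
SYSROOT=$(mktemp -d)
mkdir -p "$SYSROOT/bin"
cat > "$SYSROOT/bin/foo-config" <<'EOF'
#!/bin/sh
# Stand-in for the script that `make install` placed in the sysroot.
prefix=/usr/arm-linux-gnueabihf
case "$1" in
  --cflags) echo "-I${prefix}/include" ;;
  --libs)   echo "-L${prefix}/lib -lfoo" ;;
esac
EOF
chmod +x "$SYSROOT/bin/foo-config"

# The build system invokes the script from inside the sysroot:
out=$("$SYSROOT/bin/foo-config" --cflags)
echo "$out"    # -I/usr/arm-linux-gnueabihf/include
rm -rf "$SYSROOT"
```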
> 4. Per each component a config script is being maintained.
Which is as easy as writing a pc file if not easier.
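For comparison, a minimal foo.pc carrying the same information might look
like this (the "foo" library name, version, and paths are placeholders):

```
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: foo
Description: Hypothetical foo library
Version: 1.2.3
Libs: -L${libdir} -lfoo
Cflags: -I${includedir}
```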
> 5. The script is different per project.
Sure, it describes the configuration.
> 6. In libgpg-error master the script is a wrapper on top of pkg-config
> metadata which implies that the pkg-config metadata is formally
That is a technical detail on how the "second class citizen" foo.pc is
> 8. pkg-config and pkg.m4 are already used by the following projects:
> $ find . -name 'pkg.m4' | sort
As well as dozens of other m4 files to test for system features.
> the existing script. I do not see a difference between pkg-config and
> a. it can detect a package based on component version expression
But that is all you get. A script is much more powerful than a
static description language, and the script interpreter is a standard
Unix tool on _all_ Unix platforms. In contrast, pkg-config requires
building and installing an extra tool before you can start.
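To illustrate that a plain script can cover the version-expression case
too, here is a minimal sketch (the "foo" library and its version are
hypothetical; a real foo-config would print its own version):

```shell
#!/bin/sh
# Version check using only POSIX shell tools, no pkg-config required.
foo_config_version() { echo "1.4.2"; }  # stands in for `foo-config --version`

required="1.4.0"
have=$(foo_config_version)
# The numerically smaller version sorts first; if that is the required
# one, the installed version is new enough.
lowest=$(printf '%s\n%s\n' "$required" "$have" \
         | sort -t. -k1,1n -k2,2n -k3,3n | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "foo >= $required found ($have)"
else
  echo "foo too old ($have < $required)" >&2
  exit 1
fi
```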
> c. if a severe ABI breakage is happening project can modify the name
> of the metadata for side-by-side or even install multiple metadata
> each with different setting
Changing the project name to declare an ABI break? There are better
ways. In fact, pkg-config could also support this by tracking not only a
version but also an ABI counter. AFAIK, it does not do that.
>> foo-config) stuff makes a lot of sense. For libraries with a maintained
>> API and ABI a simpler, more portable but also harder to initially create
>> dedicated config file is a cleaner approach.
> I do not understand this argument, I will appreciate an example.
If you look at really old foo-config scripts (they were introduced by
Gtk+) you will see a lot of checks done by those scripts to cope with
the fact that developers did not know how to properly design and
maintain libraries. The foo-config scripts tried to detect that and
warned the developer/user. Given the complexity of those scripts it was
natural to unify that, and we finally got pkg-config. Large projects
like GNOME or KDE put a lot of code in libraries, view them much like
programs, and change them as they like. Those libraries are not
re-usable by other software unless this other software's authors want
to track the development of the umbrella project of a used library. In
contrast, other libraries are written in a way that makes sure other
software can rely on the interface of such a library. For those
libraries it is a negligible effort to maintain even a complicated
foo-config script compared to what work is needed for all the other
> 3. for cross-compile, install these config scripts outside of the
> sysroot, as it makes no sense to execute anything from sysroot of
> other architecture
As explained above, I take another view here. Running config scripts is
portable across platforms.
> 4. for cross-compile, the tool need to be familiar with sysroot
> concept to output paths relative to sysroot
Agreed and implemented. Where do you see the bug in the SYSROOT support
of our foo-configs?
> I only expected to trigger discussion, I did not expect you to just
> merge it as is :)
Thoughts are free. Exceptions are regulated by a federal law.