From x-alina at gmx.net Sun Jun 2 17:46:31 2013
From: x-alina at gmx.net (Alina Friedrichsen)
Date: Sun, 02 Jun 2013 17:46:31 +0200
Subject: zlib dependency
Message-ID: <1370187991.1940.4.camel@s400ca.x-alina.name>

Hi,
the ./configure script of gnupg-2.0.20 doesn't check for the zlib
dependency.

Regards
Alina

From x-alina at gmx.net Sun Jun 2 22:18:34 2013
From: x-alina at gmx.net (Alina Friedrichsen)
Date: Sun, 02 Jun 2013 22:18:34 +0200
Subject: .xsession and .xinitrc
Message-ID: <1370204314.1842.4.camel@s400ca.x-alina.name>

~/.xsession and ~/.xinitrc will not be executed on my system.
Where do you put your gpg-agent start-up code?

From lists at michel-messerschmidt.de Sun Jun 2 22:54:20 2013
From: lists at michel-messerschmidt.de (Michel Messerschmidt)
Date: Sun, 2 Jun 2013 22:54:20 +0200
Subject: .xsession and .xinitrc
In-Reply-To: <1370204314.1842.4.camel@s400ca.x-alina.name>
References: <1370204314.1842.4.camel@s400ca.x-alina.name>
Message-ID: <20130602205420.GA6736@ryu.matrix>

On Sun, Jun 02, 2013 at 10:18:34PM +0200, Alina Friedrichsen wrote:
> ~/.xsession and ~/.xinitrc will not be executed on my system.
> Where do you put your gpg-agent start-up code?

Default location on this Debian 7.0 system is /etc/X11/Xsession.d/90gpg-agent

From x-alina at gmx.net Mon Jun 3 03:05:25 2013
From: x-alina at gmx.net (Alina Friedrichsen)
Date: Mon, 03 Jun 2013 03:05:25 +0200
Subject: .xsession and .xinitrc
In-Reply-To: <20130602205420.GA6736@ryu.matrix>
References: <1370204314.1842.4.camel@s400ca.x-alina.name> <20130602205420.GA6736@ryu.matrix>
Message-ID: <1370221525.1703.11.camel@s400ca.x-alina.name>

On Sunday, 02.06.2013, 22:54 +0200, Michel Messerschmidt wrote:
> On Sun, Jun 02, 2013 at 10:18:34PM +0200, Alina Friedrichsen wrote:
> > ~/.xsession and ~/.xinitrc will not be executed on my system.
> > Where do you put your gpg-agent start-up code?
>
> Default location on this Debian 7.0 system is /etc/X11/Xsession.d/90gpg-agent

I must unfortunately use Fedora, because my touchscreen does not work on
Debian 7.0 and Debian testing. There is no "Xsession.d".

From abbotti at mev.co.uk Mon Jun 3 11:41:30 2013
From: abbotti at mev.co.uk (Ian Abbott)
Date: Mon, 3 Jun 2013 10:41:30 +0100
Subject: .xsession and .xinitrc
In-Reply-To: <1370221525.1703.11.camel@s400ca.x-alina.name>
References: <1370204314.1842.4.camel@s400ca.x-alina.name> <20130602205420.GA6736@ryu.matrix> <1370221525.1703.11.camel@s400ca.x-alina.name>
Message-ID: <51AC64CA.60305@mev.co.uk>

On 2013-06-03 02:05, Alina Friedrichsen wrote:
> On Sunday, 02.06.2013, 22:54 +0200, Michel Messerschmidt wrote:
>> On Sun, Jun 02, 2013 at 10:18:34PM +0200, Alina Friedrichsen wrote:
>>> ~/.xsession and ~/.xinitrc will not be executed on my system.
>>> Where do you put your gpg-agent start-up code?
>>
>> Default location on this Debian 7.0 system is /etc/X11/Xsession.d/90gpg-agent
>
> I must unfortunately use Fedora, because my touchscreen does not work on
> Debian 7.0 and Debian testing. There is no "Xsession.d".

It depends on your desktop environment. If you are using GNOME, try
running "gnome-session-properties" to set your start-up programs.

-- 
-=( Ian Abbott @ MEV Ltd.
E-mail: )=- -=( Tel: +44 (0)161 477 1898 FAX: +44 (0)161 718 3587 )=- From gnupg-devel at spodhuis.org Tue Jun 4 03:04:08 2013 From: gnupg-devel at spodhuis.org (Phil Pennock) Date: Mon, 3 Jun 2013 21:04:08 -0400 Subject: .xsession and .xinitrc In-Reply-To: <1370221525.1703.11.camel@s400ca.x-alina.name> References: <1370204314.1842.4.camel@s400ca.x-alina.name> <20130602205420.GA6736@ryu.matrix> <1370221525.1703.11.camel@s400ca.x-alina.name> Message-ID: <20130604010408.GA57459@redoubt.spodhuis.org> On 2013-06-03 at 03:05 +0200, Alina Friedrichsen wrote: > Am Sonntag, den 02.06.2013, 22:54 +0200 schrieb Michel Messerschmidt: > > On Sun, Jun 02, 2013 at 10:18:34PM +0200, Alina Friedrichsen wrote: > > > ~/.xsession and ~/.xinitrc will not be executed on my system. > > > Where put you your gpg-agent start up code? > > > > Default location on this Debian 7.0 system is /etc/X11/Xsession.d/90gpg-agent > > I must unfortunately use Fedora, because my touchscreen does not work on > Debian 7.0 und Debian testing. There is no"Xsession.d". At least as of Beefy Miracle (the latest I have in a VM), /etc/X11/xinit/xinitrc.d/ is used. Whatever you'd put in Xsession.d/ in Debian/Ubuntu/... you put in xinitrc.d instead. -Phil From abel at guardianproject.info Wed Jun 5 14:42:16 2013 From: abel at guardianproject.info (Abel Luck) Date: Wed, 05 Jun 2013 12:42:16 +0000 Subject: [PATCH] Allow the user to specify AES256 as well as AES128. In-Reply-To: References: <1369157640-13136-1-git-send-email-kylebutt@gmail.com> <1369184890.3501.0.camel@cfw2.gniibe.org> <8761yb9xsr.fsf@vigenere.g10code.de> Message-ID: <51AF3228.5060903@guardianproject.info> Kyle Butt: > On Wed, May 22, 2013 at 2:19 AM, Werner Koch wrote: > >> On Wed, 22 May 2013 03:08, gniibe at fsij.org said: >> >>> Although it is a story in future for me, I could imagine that some >>> people using RSA 4096-bit key now (w/ enough entropy for their pass >>> phrases) would like that, to match its content. >> >> The weakest link we have in the key protection is the passphrase - >> virtually nobody is able to remember a passphrase with 128 bit entropy >> and 256 bit is well out of scope. Now I hear a counterargument of a >> random passphrase which is pasted into the Pinentry. That bears the >> question where to store that one and whether this additional software, >> USB gimmicks etc. don't introduce a much higher risk of passphrase >> compromise. The passphrase tries to mitigate the worst case of a >> compromised box - now if that is the risk, what are the probabilities >> that such a compromise is limited to a one-time disk or memory copy >> attack and not a long lasting active keyboard (or whatever) sniffing >> attack? Does anyone really consider that this probability is less than >> breaking 128 bit AES? >> > > While passphrases are the weakest link in general, This isn't true of > every gnupg user, just most. That's why I left the default as AES-128. > Most gnupg users will never see the option or consider changing it. > 128 bits of entropy isn't that far out of scope, that's 20 printable ascii > characters, and a dedicated user could memorize that. > > I never tried to argue about storing a random passphrase and pasting > it. I don't believe it's relevant to the discussion here. > > >> >>> Well, I have a suggestion that it will be better to use SHA-2 of >>> relevant size, instead of SHA-1. It will be more coherent, then. 
>> >> To satisfy todays crypto policy requirements, it would be better to move >> to a MAC based encryption scheme instead of the ad-hoc OpenPGP encrypted >> SHA-1 integrity checking. >> > > I would be willing to do this work. > >> >> For small devices a different KDF (e.g. scrypt) might be useful, though. >> > > Or this. > This is what I would like to see. It is desperately needed on Android and other mobile devices where users use even weaker than normal passphrases. > >> >> >> I'm not sure if it should be a user option or not. It makes more >>> sense for me, when GnuPG selects appropriate key store scheme >>> automatically. >> >> Yep. Algorithm proliferation does not gain us anything. In fact it >> makes the system more weak. If we want to extend the current key >> protection, a single alternative algorithm with a well defined use case >> is what we should do. >> >> > I'm not sure I understand the argument here. Are you saying that you > agree, and that rather than have it be a flag, it should be dependent > on key size and selected automatically? I'd be happy to send that patch > instead. > > Kyle. > > > > _______________________________________________ > Gnupg-devel mailing list > Gnupg-devel at gnupg.org > http://lists.gnupg.org/mailman/listinfo/gnupg-devel > From abel at guardianproject.info Wed Jun 5 14:30:45 2013 From: abel at guardianproject.info (Abel Luck) Date: Wed, 05 Jun 2013 12:30:45 +0000 Subject: prompt for pin before operations Message-ID: <51AF2F75.2050302@guardianproject.info> Hi, A standard --sign --encrypt on a large file can take some time, especially on an ARM device like most Android phones. Currently, GnuPG is encrypting the data and then invoking pinentry for the sign operation. This is a usability problem on Android, as the user can background the long-running encrypt operation and do other things, and the pin entry prompt sort of gets lost in the background. Is it possible to force GnuPG to prompt for the pin before invoking a potentially long-running operation? This is of course gnupg from master. ~abel From hans at guardianproject.info Thu Jun 6 00:01:58 2013 From: hans at guardianproject.info (Hans-Christoph Steiner) Date: Wed, 05 Jun 2013 18:01:58 -0400 Subject: gpgme function idea: gpgme_signers_len() Message-ID: <51AFB556.2030405@guardianproject.info> I have found that it was necessary to check whether there are any Signers in the GnuPGContext in order to select whether to run gpgme_op_encrypt() vs gpgme_op_encrypt_sign(). This makes it possible to have gnupg-for-java's encrypt() method automatically sign if there are signers present in the GnuPGContext. The implementation would be trivial: int gpgme_signers_len(gpgme_ctx_t ctx) { TRACE2 (DEBUG_CTX, "gpgme_signers_len", ctx, "ctx->signers_len=%i", ctx->signers_len); return ctx->signers_len; } .hc From lrn1986 at gmail.com Thu Jun 6 10:18:06 2013 From: lrn1986 at gmail.com (LRN) Date: Thu, 06 Jun 2013 12:18:06 +0400 Subject: libassuan fails pipeconnect test on W32 Message-ID: <51B045BE.3000304@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 The test runs pipeconnect in client mode, then the client spawns a pipeconnect instance with a --server argument. The server tries to write "OK pleased to meet you" back to the client, but fails, because it's trying to write into CRT file descriptor using WriteFile(). Apparently, there needs to be some code to convert fileno(stdout) into handle that WriteFile() understands. Or A backtrace to WriteFile() is attached. 
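Roughly what I mean (an untested sketch for illustration only, not the actual
libassuan code; the function name is made up):

#ifdef _WIN32
#include <windows.h>
#include <io.h>
#include <stdio.h>

/* Untested sketch: look up the native handle behind the CRT descriptor
   for stdout and hand that to WriteFile() instead of the plain fd.  */
static int
write_all_to_stdout (const void *buffer, size_t size)
{
  HANDLE hd = (HANDLE) _get_osfhandle (fileno (stdout));
  DWORD nwritten;

  if (hd == INVALID_HANDLE_VALUE)
    return -1;
  if (!WriteFile (hd, buffer, (DWORD) size, &nwritten, NULL))
    return -1;
  return (int) nwritten;
}
#endif /* _WIN32 */

The point is only that the descriptor has to be translated to a HANDLE
once, right before the Win32 call.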
I suspect that the code that needs modification is parse_std_file_handles(). It has some W32CE-only code that appears to be doing the right thing (that is, if the spawn function (or whatever calls the spawn function) does pass the fd via command line), i wonder why it's CE-only...That said, converting fileno(stdout) to HANDLE is trivial, so there's no need to use that CE code. Just a few _get_osfhandle() calls with an #ifdef _WIN32. I've also spotted a comment about handle inheritance that piqued my curiosity, but that's off-topic. - -- O< ascii ribbon - stop html email! - www.asciiribbon.org -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (MingW32) iQEcBAEBAgAGBQJRsEW9AAoJEOs4Jb6SI2CwNY4IAIjNz9JDQavjZOaCE9GlFC5A AfDA7Mstdlt/FuL0QS6TrhmHGyFs/FZdUei7MITUn6oj23Ku6E7gMeuvvr2ZpLMe 0cpVitb8vXhkowlucTvhWEOnFVzbjTJE+YIth5dZNYvVb9yLIwNZMURdmkt2NK68 d9Qv+QHTKoEOERuEs+FMP/c9bgDs6NG+hsUPaR98m30gqAtfMntDaBfz/WB6P9SV a8EtzaCAoImtFoCbQooALbN3uvnx5hvCfA2V+Ny390FX7IbZ+P+mBVztxF/sO8oK fZV36+FHBWlD5z++86ZDfymfryuLp3cJ1QIRgCpQLaBfJyEcs+an7APhd0dOBiY= =0028 -----END PGP SIGNATURE----- -------------- next part -------------- #0 __assuan_write (ctx=0x753ed8, fd=0x1, buffer=0x65892156, size=22) at /src/mingw/libassuan-2.1.0-1/libassuan-2.1.0/src/system-w32.c:258 #1 0x658899d9 in _assuan_simple_write (ctx=0x753ed8, buffer=0x65892156, size=22) at /src/mingw/libassuan-2.1.0-1/libassuan-2.1.0/src/assuan-io.c:60 #2 0x6588386e in writen (ctx=ctx at entry=0x753ed8, buffer=buffer at entry=0x65892156 "OK Pleased to meet you", length=length at entry=22) at /src/mingw/libassuan-2.1.0-1/libassuan-2.1.0/src/assuan-buffer.c:48 #3 0x65883eb1 in _assuan_write_line (ctx=ctx at entry=0x753ed8, prefix=prefix at entry=0x0, line=line at entry=0x65892156 "OK Pleased to meet you", len=len at entry=22) at /src/mingw/libassuan-2.1.0-1/libassuan-2.1.0/src/assuan-buffer.c:302 #4 0x6588407a in assuan_write_line (ctx=ctx at entry=0x753ed8, line=line at entry=0x65892156 "OK Pleased to meet you") at /src/mingw/libassuan-2.1.0-1/libassuan-2.1.0/src/assuan-buffer.c:335 #5 0x65886ab6 in assuan_accept (ctx=0x753ed8) at /src/mingw/libassuan-2.1.0-1/libassuan-2.1.0/src/assuan-listen.c:127 #6 0x00408420 in run_server (enable_debug=0) at /src/mingw/libassuan-2.1.0-1/libassuan-2.1.0/tests/pipeconnect.c:157 #7 main (argc=, argv=) at /src/mingw/libassuan-2.1.0-1/libassuan-2.1.0/tests/pipeconnect.c:383 From john.marshall at riverwillow.com.au Mon Jun 10 07:46:10 2013 From: john.marshall at riverwillow.com.au (John Marshall) Date: Mon, 10 Jun 2013 15:46:10 +1000 Subject: [PATCH] 2.0.20 configure cannot find curl-config when --with-libcurl=PATH Message-ID: <20130610054610.GA13326@rwpc15.mby.riverwillow.net.au> During a libcurl-related debugging exercise following an upgrade from 2.0.19 to 2.0.20, I discovered that configure was not finding and using the installed /usr/local/bin/curl-config. I was passing --with-libcurl=/usr/local to configure. config.log was showing failure to discover curl-config but success finding libcurl: configure:9445: checking for /usr/local/bin/curl-config configure:9478: result: no configure:9595: checking whether libcurl is usable configure:9628: cc -o conftest -O -pipe -march=pentium4 -I/usr/local/include -I/usr/local/include -L/usr/local/lib conftest.c -L/usr/local/lib -lcurl >&5 configure:9628: $? 
= 0 configure:9642: result: yes Passing a bare --with-libcurl to configure worked: configure:9486: checking for curl-config configure:9504: found /usr/local/bin/curl-config configure:9516: result: /usr/local/bin/curl-config configure:9527: checking for the version of libcurl configure:9534: result: 7.24.0 configure:9541: checking for libcurl >= version 7.10 configure:9554: result: yes Looking at the configure script, it was passing a fully-qualified pathname for curl-config (instead of just the program name) to AC_PATH_PROG, which will never work. The following patch against the STABLE-BRANCH-2-0 m4/libcurl.m4 yields what I believe to be the intended result: configure:9522: checking for the version of libcurl configure:9529: result: 7.24.0 configure:9536: checking for libcurl >= version 7.10 configure:9549: result: yes diff --git a/m4/libcurl.m4 b/m4/libcurl.m4 index fe9809e..dca20e3 100644 --- a/m4/libcurl.m4 +++ b/m4/libcurl.m4 @@ -75,7 +75,7 @@ AC_DEFUN([LIBCURL_CHECK_CONFIG], if test -d "$_libcurl_with" ; then LIBCURL_CPPFLAGS="-I$withval/include" _libcurl_ldflags="-L$withval/lib" - AC_PATH_PROG([_libcurl_config],["$withval/bin/curl-config"]) + _libcurl_config="$withval/bin/curl-config" else AC_PATH_PROG([_libcurl_config],[curl-config]) fi -- John Marshall -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 196 bytes Desc: not available URL: From john.marshall at riverwillow.com.au Mon Jun 10 12:33:14 2013 From: john.marshall at riverwillow.com.au (John Marshall) Date: Mon, 10 Jun 2013 20:33:14 +1000 Subject: 2.0.20 breaks DNS SRV hkp keyserver access via web proxy server Message-ID: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> The "SNI" changes to gnupg's handling of DNS SRV keyserver records means that a client accessing an hkp keyserver via a web proxy server can no longer contact the selected target keyserver. In 2.0.19, the target host from the SRV list would be selected and the HTTP query would be addressed to that target host domain. In 2.0.20, the target host is selected, its address looked up, a fake record constructed comprising the SRV record owner's domain name (not the SRV target's domain name) and the query is constructed using the SRV record owner's domain as the host part. Although not following the intention of RFC2782, this works fine for a directly-connected client because the IP address of the selected target is used. However, in the case of a client behind a web proxy, the fake (SRV RR owner) domain is used as the hostname in the query and passed to the web proxy server. If the SRV RR owner also has an A or AAAA record, that (rather than whatever address was selected by gnupg) will be used by the proxy server to contact what may or may not be a keyserver. If there is no A record in the SRV RR owner domain, the proxy server returns an error to gnupg. Either way it's broken. I am not a programmer and I don't speak C but I have made a little hack to keyservers/gpgkeys_hkp.c to get us operational again. This is probably not the right thing to do, it obviously breaks the "SNI" changes and I am not suggesting this as a fix but I can't help wondering if the introduction of non-standard handling of SRV records ought to be invoked via an option rather than as default behaviour. Is anyone able to suggest a better/safer patch for us - or better still, a proper solution to this problem that can be used as a patch in downstream ports? 
diff --git a/keyserver/gpgkeys_hkp.c b/keyserver/gpgkeys_hkp.c index f0647d7..5a203b5 100644 --- a/keyserver/gpgkeys_hkp.c +++ b/keyserver/gpgkeys_hkp.c @@ -639,6 +639,9 @@ srv_replace(const char *srvtag, { char *entry,*host; + free(opt->host); + opt->host=srvlist[srvindex].target; + entry=malloc (strlen(opt->host)+1 +strlen(portstr)+1+strlen(ipaddr)+1); Thank you for any help you may be able to provide and for all the work that goes into providing this wonderful product. -- John Marshall -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 196 bytes Desc: not available URL: From gnupg-devel at spodhuis.org Tue Jun 11 01:15:20 2013 From: gnupg-devel at spodhuis.org (Phil Pennock) Date: Mon, 10 Jun 2013 19:15:20 -0400 Subject: 2.0.20 breaks DNS SRV hkp keyserver access via web proxy server In-Reply-To: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> References: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> Message-ID: <20130610231519.GA35308@redoubt.spodhuis.org> On 2013-06-10 at 20:33 +1000, John Marshall wrote: > The "SNI" changes to gnupg's handling of DNS SRV keyserver records means > that a client accessing an hkp keyserver via a web proxy server can no > longer contact the selected target keyserver. The changes were explicitly requested by those of us in the SKS operational community working on pool infrastructure, to make sure that link security can work with pools and in the presence of DNS attacks, but without requiring DNSSEC -- at this point in time, DNSSEC is a worthy goal (and something I'm pushing towards elsewhere) but to mandate it as the only approach to verifiable link security would be washing our hands of a problem. If there is to be validation, the SNI and the Host: header have to match the identity that is validated. In the absence of DNSSEC validation, the only available non-compromisable input is the original domain name. This is what led to the criteria for hkps.pool.sks-keyservers.net outlined at . > However, in the > case of a client behind a web proxy, the fake (SRV RR owner) domain is > used as the hostname in the query and passed to the web proxy server. > If the SRV RR owner also has an A or AAAA record, that (rather than > whatever address was selected by gnupg) will be used by the proxy server > to contact what may or may not be a keyserver. If you use a dedicated port (11373 is spreading a little for hkps) then you can default all handling through to the correct default vhost, and put the port into the SRV record. That way, things that don't understand SRV will send the hostname you are expecting, and things that do understand SRV will connect to the correct port. The other alternative is to list hostnames for known pools/domains that you serve for, so that you're not serving for unknown domains, which is a common operational security stance. So, you then need a certificate for the original domain, or to disable verification. Using the hostname is equivalent to disabling verification, because the hostname is derived insecurely, but the problem can be "hidden" so that folks don't recognise the attack vector. Getting a certificate for the original domain means talking to whomever controls the domain. If it's not your domain and the operator doesn't want to help sort out certs, then you simply don't offer hkps for that domain, only for your own. 
The link above includes instructions for how SKS keyserver operators can get a certificate under a trust anchor available for sks-keyservers.net. There's no sane way to manage this within the traditional PKIX because no Certificate Authority is going to issue multiple certs for the same name to different operators, and reusing keys between independently operated keyservers would be a security joke. > I am not a programmer and I don't speak C but I have made a little hack > to keyservers/gpgkeys_hkp.c to get us operational again. This is Unfortunately, you've moved things back to a state where the client is verifying that the delivered certificate has a hostname that was retrieved over insecure DNS, introducing a security hole back into the client. To safely do what you've done, you'd also have to add DNSSEC verification. Really, you might as well use `--keyserver-options no-check-cert` since that's the security you're really getting, using the patch you provide. > Is anyone able to suggest a better/safer patch for us - or better still, > a proper solution to this problem that can be used as a patch in > downstream ports? The most proper solution is for DNSSEC to be deployed everywhere today. In the absence of that, HKPS support has moved towards verifying the domain. Ultimately, with DNSSEC the way back out of this is with DANE and . For HKPS the solution would then be: * if no DNSSEC validation available, use the 2.0.20 logic * if DNSSEC insecure, use the 2.0.20 logic * if DNSSEC validates as secure, then check for a TLSA record for the members of the SRV pool * if TLSA record found, use the derived hostname for checks * if TLSA record not found, use the 2.0.20 logic This gives as much security as is possible to each client and the operator of the _target_ host can decide for themselves which hostname should be validated, since they're presumably in control of the DNS for the target domain (and can chop out support of pools which rename to their own hostname). Regards, -Phil -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From john.marshall at riverwillow.com.au Tue Jun 11 09:53:42 2013 From: john.marshall at riverwillow.com.au (John Marshall) Date: Tue, 11 Jun 2013 17:53:42 +1000 Subject: 2.0.20 breaks DNS SRV hkp keyserver access via web proxy server In-Reply-To: <20130610231519.GA35308@redoubt.spodhuis.org> References: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> <20130610231519.GA35308@redoubt.spodhuis.org> Message-ID: <20130611075341.GA28277@rwpc15.mby.riverwillow.net.au> Sorry. It hadn't occurred to me that the Subject of my post may have been ambiguous. I am talking about the gnupg client's local web proxy server, not a remote reverse proxy sitting in front of a keyserver. $ gpg --keyserver hkp://example.com \ --keyserver-options 'http-proxy=http://proxy.local.network:3128' \ ... (where the _pgpkey-http._tcp.example.com. label has SRV RR's; and where the example.com. label does not necessarily have any A or AAAA RR's) On Mon, 10 Jun 2013, 19:15 -0400, Phil Pennock wrote: > On 2013-06-10 at 20:33 +1000, John Marshall wrote: > > The "SNI" changes to gnupg's handling of DNS SRV keyserver records means > > that a client accessing an hkp keyserver via a web proxy server can no > > longer contact the selected target keyserver. This is the important bit. 
The gnupg client can no longer reliably (if at all) access the SRV-derived keyserver host via its local proxy. It may happen to contact some other keyserver or the proxy may just return an error (e.g. no address record for that domain), because the domain name being passed to the local proxy server (for the proxy to resolve to an IP address and connect) is the domain name of the SRV record's owner, not the domain name of the SRV target host. > The changes were explicitly requested by those of us in the SKS > operational community working on pool infrastructure, to make sure that > link security can work with pools and in the presence of DNS attacks, > but without requiring DNSSEC -- at this point in time, DNSSEC is a > worthy goal (and something I'm pushing towards elsewhere) but to mandate > it as the only approach to verifiable link security would be washing our > hands of a problem. > > If there is to be validation, the SNI and the Host: header have to match > the identity that is validated. In the absence of DNSSEC validation, > the only available non-compromisable input is the original domain name. > This is what led to the criteria for hkps.pool.sks-keyservers.net > outlined at . Explaining why the change was introduced is helpful and of great interest to me. I really appreciate you taking the time to go into that detail (and the detail below). Obviously the hkps/SNI link-security goals were the focus and the proxy-related hkp(s) breakage was not intentional. > > However, in the > > case of a client behind a web proxy, the fake (SRV RR owner) domain is > > used as the hostname in the query and passed to the web proxy server. > > If the SRV RR owner also has an A or AAAA record, that (rather than > > whatever address was selected by gnupg) will be used by the proxy server > > to contact what may or may not be a keyserver. > > If you use a dedicated port (11373 is spreading a little for hkps) then > you can default all handling through to the correct default vhost, and > put the port into the SRV record. That way, things that don't > understand SRV will send the hostname you are expecting, and things that > do understand SRV will connect to the correct port. > > The other alternative is to list hostnames for known pools/domains that > you serve for, so that you're not serving for unknown domains, which is > a common operational security stance. > > So, you then need a certificate for the original domain, or to disable > verification. Using the hostname is equivalent to disabling > verification, because the hostname is derived insecurely, but the > problem can be "hidden" so that folks don't recognise the attack vector. > > Getting a certificate for the original domain means talking to whomever > controls the domain. If it's not your domain and the operator doesn't > want to help sort out certs, then you simply don't offer hkps for that > domain, only for your own. > > The link above includes instructions for how SKS keyserver operators can > get a certificate under a trust anchor available for sks-keyservers.net. > There's no sane way to manage this within the traditional PKIX because > no Certificate Authority is going to issue multiple certs for the same > name to different operators, and reusing keys between independently > operated keyservers would be a security joke. Thanks for the keyserver reverse-proxy hkps tips - but that wasn't the point of my post. 
> > I am not a programmer and I don't speak C but I have made a little hack > > to keyservers/gpgkeys_hkp.c to get us operational again. This is > > Unfortunately, you've moved things back to a state where the client is > verifying that the delivered certificate has a hostname that was > retrieved over insecure DNS, introducing a security hole back into the > client. It's not a solution, it's a quick and nasty hack by a non-programmer so that clients can talk to an off-LAN hkp keyserver. I'm looking for help to find a proper solution. For example: - a patch to change the default back to constructing the query using the SRV target's domain name UNLESS a --use-sni-srv-hack option is set AND the specified keyserver scheme is hkps://. - even just a patch that restores proper SRV handling for the hkp:// scheme would be appreciated. > To safely do what you've done, you'd also have to add DNSSEC > verification. > > Really, you might as well use `--keyserver-options no-check-cert` since > that's the security you're really getting, using the patch you provide. We do use validating resolvers but we've never actually used hkps. As far as I can see, this hkps/SNI modification provides no benefit at all for hkp - only proxy breakage. > > Is anyone able to suggest a better/safer patch for us - or better still, > > a proper solution to this problem that can be used as a patch in > > downstream ports? > > The most proper solution is for DNSSEC to be deployed everywhere today. > In the absence of that, HKPS support has moved towards verifying the > domain. > > Ultimately, with DNSSEC the way back out of this is with DANE and > . > > For HKPS the solution would then be: > > * if no DNSSEC validation available, use the 2.0.20 logic > * if DNSSEC insecure, use the 2.0.20 logic > * if DNSSEC validates as secure, then check for a TLSA record for the > members of the SRV pool > * if TLSA record found, use the derived hostname for checks > * if TLSA record not found, use the 2.0.20 logic > > This gives as much security as is possible to each client and the > operator of the _target_ host can decide for themselves which hostname > should be validated, since they're presumably in control of the DNS for > the target domain (and can chop out support of pools which rename to > their own hostname). For what it's worth, I think that if the DNSSEC/DANE solution is the correct one, then implementing that is the best way to encourage its adoption. The current hkps/SRV/SNI solution really seems to me like taking a chisel to the round hole in order to try to make the square peg fit. Thanks, Phil, for the pointer to the DANE SRV document. I think that's really exciting stuff. -- John Marshall -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 196 bytes Desc: not available URL: From dshaw at jabberwocky.com Tue Jun 11 14:33:15 2013 From: dshaw at jabberwocky.com (David Shaw) Date: Tue, 11 Jun 2013 08:33:15 -0400 Subject: 2.0.20 breaks DNS SRV hkp keyserver access via web proxy server In-Reply-To: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> References: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> Message-ID: <4E23F628-BDCD-40BC-A5D3-89ED5879A2F8@jabberwocky.com> On Jun 10, 2013, at 6:33 AM, John Marshall wrote: > The "SNI" changes to gnupg's handling of DNS SRV keyserver records means > that a client accessing an hkp keyserver via a web proxy server can no > longer contact the selected target keyserver. 
> > In 2.0.19, the target host from the SRV list would be selected and the > HTTP query would be addressed to that target host domain. In 2.0.20, > the target host is selected, its address looked up, a fake record > constructed comprising the SRV record owner's domain name (not the SRV > target's domain name) and the query is constructed using the SRV > record owner's domain as the host part. Although not following the > intention of RFC2782, this works fine for a directly-connected client > because the IP address of the selected target is used. However, in the > case of a client behind a web proxy, the fake (SRV RR owner) domain is > used as the hostname in the query and passed to the web proxy server. > If the SRV RR owner also has an A or AAAA record, that (rather than > whatever address was selected by gnupg) will be used by the proxy server > to contact what may or may not be a keyserver. If there is no A record > in the SRV RR owner domain, the proxy server returns an error to gnupg. > Either way it's broken. I think I'm following what the problem is here. Can you confirm something : Are you using the libcurl support in GnuPG, or the built-in HTTP support? I suspect you are using libcurl, and I can see how you would get exactly the behavior you describe. Out of curiosity, if you are indeed using libcurl, can you try the built-in HTTP support and see if that works better (i.e. properly) for you? Just build with "./configure --without-libcurl" David From john.marshall at riverwillow.com.au Tue Jun 11 23:25:44 2013 From: john.marshall at riverwillow.com.au (John Marshall) Date: Wed, 12 Jun 2013 07:25:44 +1000 Subject: 2.0.20 breaks DNS SRV hkp keyserver access via web proxy server In-Reply-To: <4E23F628-BDCD-40BC-A5D3-89ED5879A2F8@jabberwocky.com> References: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> <4E23F628-BDCD-40BC-A5D3-89ED5879A2F8@jabberwocky.com> Message-ID: <20130611212544.GA33316@rwpc15.mby.riverwillow.net.au> On Tue, 11 Jun 2013, 08:33 -0400, David Shaw wrote: > On Jun 10, 2013, at 6:33 AM, John Marshall wrote: > > > The "SNI" changes to gnupg's handling of DNS SRV keyserver records means > > that a client accessing an hkp keyserver via a web proxy server can no > > longer contact the selected target keyserver. > > > > In 2.0.19, the target host from the SRV list would be selected and the > > HTTP query would be addressed to that target host domain. In 2.0.20, > > the target host is selected, its address looked up, a fake record > > constructed comprising the SRV record owner's domain name (not the SRV > > target's domain name) and the query is constructed using the SRV > > record owner's domain as the host part. Although not following the > > intention of RFC2782, this works fine for a directly-connected client > > because the IP address of the selected target is used. However, in the > > case of a client behind a web proxy, the fake (SRV RR owner) domain is > > used as the hostname in the query and passed to the web proxy server. > > If the SRV RR owner also has an A or AAAA record, that (rather than > > whatever address was selected by gnupg) will be used by the proxy server > > to contact what may or may not be a keyserver. If there is no A record > > in the SRV RR owner domain, the proxy server returns an error to gnupg. > > Either way it's broken. > > I think I'm following what the problem is here. Can you confirm something : Are you using the libcurl support in GnuPG, or the built-in HTTP support? 
We are using --with-libcurl=/usr/local because we want the SRV support. (see my earlier post on the battle I had getting --with-libcurl=PATH to detect curl-config) > I suspect you are using libcurl, and I can see how you would get exactly the behavior you describe. Out of curiosity, if you are indeed using libcurl, can you try the built-in HTTP support and see if that works better (i.e. properly) for you? Just build with "./configure --without-libcurl" ?? I can't see how --without-libcurl does any SRV processing at all - but I've tested for you anyway (see below) and, no, it doesn't work. We need libcurl because we need the SRV processing and we need it to return the domain name of the selected target to pass to the proxy server. It seems that my problem description is not sufficiently clear. I was hoping there would be no need to post explicit examples but... gnupg 2.0.20 (curl-shim) ======================== rwpc13> gpg --keyserver hkp://au.gnupg.net --keyserver-options 'http-proxy=http://cache1.mby.riverwillow.net.au:3128 debug' --search-keys 0xA29A84A2 gpg: searching for "0xA29A84A2" from hkp server au.gnupg.net gpgkeys: curl version = GnuPG curl-shim gpgkeys: search type is 5, and key is "A29A84A2" * HTTP proxy is "http://cache1.mby.riverwillow.net.au:3128" * HTTP URL is "http://au.gnupg.net:11371/pks/lookup?op=index&options=mr&search=0xA29A84A2" * SRV tag is "pgpkey-http": host and port may be overridden * HTTP auth is "null" * HTTP method is GET gpg: key "0xA29A84A2" not found on keyserver rwpc13> gnupg 2.0.20 (libcurl) ====================== rwpc13> gpg --keyserver hkp://au.gnupg.net --keyserver-options 'http-proxy=http://cache1.mby.riverwillow.net.au:3128 debug' --search-keys 0xA29A84A2 gpg: searching for "0xA29A84A2" from hkp server au.gnupg.net gpgkeys: curl version = libcurl/7.24.0 OpenSSL/0.9.8y zlib/1.2.7 gpgkeys: Faking pgpkey-http SRV from au.gnupg.net to svcs4.riverwillow.net.au:11371 gpgkeys: search type is 5, and key is "A29A84A2" * Added au.gnupg.net:11371:202.125.45.72 to DNS cache * About to connect() to proxy cache1.mby.riverwillow.net.au port 3128 (#0) * Trying 172.25.24.28... * connected * Connected to cache1.mby.riverwillow.net.au (172.25.24.28) port 3128 (#0) > GET http://au.gnupg.net:11371/pks/lookup?op=index&options=mr&search=0xA29A84A2 HTTP/1.1 Accept: */* Proxy-Connection: Keep-Alive Host: au.gnupg.net Pragma: no-cache Cache-Control: no-cache < HTTP/1.1 503 Service Unavailable < Server: squid/3.3.5 < Mime-Version: 1.0 < Date: Tue, 11 Jun 2013 21:02:46 GMT < Content-Type: text/html < Content-Length: 3486 < X-Squid-Error: ERR_DNS_FAIL 0 < Vary: Accept-Language < Content-Language: en < X-Cache: MISS from cache1.mby.riverwillow.net.au < Via: 1.1 cache1.mby.riverwillow.net.au (squid/3.3.5) < Connection: keep-alive < * Connection #0 to host cache1.mby.riverwillow.net.au left intact * Closing connection #0 gpg: key "0xA29A84A2" not found on keyserver rwpc13> gnupg 2.0.19 (libcurl) ====================== rwpc13> gpg --keyserver hkp://au.gnupg.net --keyserver-options 'http-proxy=http://cache1.mby.riverwillow.net.au:3128 debug' --search-keys 0xA29A84A2 gpg: searching for "0xA29A84A2" from hkp server au.gnupg.net gpgkeys: curl version = libcurl/7.24.0 OpenSSL/0.9.8y zlib/1.2.7 gpgkeys: search type is 5, and key is "A29A84A2" * About to connect() to proxy cache1.mby.riverwillow.net.au port 3128 (#0) * Trying 172.25.24.28... 
* connected * Connected to cache1.mby.riverwillow.net.au (172.25.24.28) port 3128 (#0) > GET http://svcs4.riverwillow.net.au:11371/pks/lookup?op=index&options=mr&search=0xA29A84A2 HTTP/1.1 Host: svcs4.riverwillow.net.au:11371 Accept: */* Proxy-Connection: Keep-Alive Pragma: no-cache Cache-Control: no-cache < HTTP/1.1 200 OK < Date: Tue, 11 Jun 2013 21:07:16 GMT < Server: sks_www/1.1.4 < Cache-Control: no-cache < Pragma: no-cache < Expires: 0 < Content-Length: 111 < X-HKP-Results-Count: 1 < Content-Type: text/plain < X-Cache: MISS from svcs4.riverwillow.net.au < X-Cache: MISS from cache1.mby.riverwillow.net.au < Via: 1.1 svcs4.riverwillow.net.au (squid/3.3.5), 1.1 cache1.mby.riverwillow.net.au (squid/3.3.5) < Connection: keep-alive < * Connection #0 to host cache1.mby.riverwillow.net.au left intact * Closing connection #0 (1) John Marshall 1024 bit DSA key A29A84A2, created: 2008-05-03 Keys 1-1 of 1 for "0xA29A84A2". Enter number(s), N)ext, or Q)uit > q rwpc13> Thank you for taking the time to look at this problem report. -- John Marshall -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 196 bytes Desc: not available URL: From dshaw at jabberwocky.com Thu Jun 13 06:13:44 2013 From: dshaw at jabberwocky.com (David Shaw) Date: Thu, 13 Jun 2013 00:13:44 -0400 Subject: 2.0.20 breaks DNS SRV hkp keyserver access via web proxy server In-Reply-To: <20130611212544.GA33316@rwpc15.mby.riverwillow.net.au> References: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> <4E23F628-BDCD-40BC-A5D3-89ED5879A2F8@jabberwocky.com> <20130611212544.GA33316@rwpc15.mby.riverwillow.net.au> Message-ID: <4F6FA1BA-4946-4D95-85C4-01CB1C6BB839@jabberwocky.com> On Jun 11, 2013, at 5:25 PM, John Marshall wrote: > On Tue, 11 Jun 2013, 08:33 -0400, David Shaw wrote: >> On Jun 10, 2013, at 6:33 AM, John Marshall wrote: >> >>> The "SNI" changes to gnupg's handling of DNS SRV keyserver records means >>> that a client accessing an hkp keyserver via a web proxy server can no >>> longer contact the selected target keyserver. >>> >>> In 2.0.19, the target host from the SRV list would be selected and the >>> HTTP query would be addressed to that target host domain. In 2.0.20, >>> the target host is selected, its address looked up, a fake record >>> constructed comprising the SRV record owner's domain name (not the SRV >>> target's domain name) and the query is constructed using the SRV >>> record owner's domain as the host part. Although not following the >>> intention of RFC2782, this works fine for a directly-connected client >>> because the IP address of the selected target is used. However, in the >>> case of a client behind a web proxy, the fake (SRV RR owner) domain is >>> used as the hostname in the query and passed to the web proxy server. >>> If the SRV RR owner also has an A or AAAA record, that (rather than >>> whatever address was selected by gnupg) will be used by the proxy server >>> to contact what may or may not be a keyserver. If there is no A record >>> in the SRV RR owner domain, the proxy server returns an error to gnupg. >>> Either way it's broken. >> >> I think I'm following what the problem is here. Can you confirm something : Are you using the libcurl support in GnuPG, or the built-in HTTP support? > > We are using --with-libcurl=/usr/local because we want the SRV support. 
> (see my earlier post on the battle I had getting --with-libcurl=PATH to > detect curl-config) > >> I suspect you are using libcurl, and I can see how you would get exactly the behavior you describe. Out of curiosity, if you are indeed using libcurl, can you try the built-in HTTP support and see if that works better (i.e. properly) for you? Just build with "./configure --without-libcurl" > > ?? I can't see how --without-libcurl does any SRV processing at all It does. Both --with-libcurl and --without-libcurl have SRV support. For the libcurl version, it's faked from the outside. For the without-libcurl version, the built-in HTTP engine actually supports SRV natively. That it doesn't work on either path is for a different reason on each path. I'm not convinced that it makes sense for a client to resolve the SRV, and then pass the resulting hostname to a proxy. For example, leaving aside SRV, the client does not try and resolve an A record or chase a CNAME, but rather passes the requested resource to the proxy and the proxy does the work translating that to a DNS name, looking up that name, making the connection, etc. Indeed, the client may not even be able to resolve external DNS at all. It's true this worked in a previous version of GnuPG, but this was due to an incorrect implementation in GPG and the way your DNS and proxy is set up. It was basically doing part of the DNS work in the client, and then passing the intermediate result to the proxy to do the rest, sort of like a DNS-level redirect, which as you point out does not follow the intent of RFC2782. One thing you might do is "keyserver-options no-try-dns-srv", but even that won't really help with names like "au.gnupg.net". A SRV-only keyserver name isn't going to work properly with a proxy. David From john.marshall at riverwillow.com.au Thu Jun 13 09:18:50 2013 From: john.marshall at riverwillow.com.au (John Marshall) Date: Thu, 13 Jun 2013 17:18:50 +1000 Subject: 2.0.20 breaks DNS SRV hkp keyserver access via web proxy server In-Reply-To: <4F6FA1BA-4946-4D95-85C4-01CB1C6BB839@jabberwocky.com> References: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> <4E23F628-BDCD-40BC-A5D3-89ED5879A2F8@jabberwocky.com> <20130611212544.GA33316@rwpc15.mby.riverwillow.net.au> <4F6FA1BA-4946-4D95-85C4-01CB1C6BB839@jabberwocky.com> Message-ID: <20130613071850.GE38402@rwpc15.mby.riverwillow.net.au> On Thu, 13 Jun 2013, 00:13 -0400, David Shaw wrote: > I'm not convinced that it makes sense for a client to resolve the SRV, and then pass the resulting hostname to a proxy. For example, leaving aside SRV, the client does not try and resolve an A record or chase a CNAME, but rather passes the requested resource to the proxy and the proxy does the work translating that to a DNS name, looking up that name, making the connection, etc. Indeed, the client may not even be able to resolve external DNS at all. I think you're right. Here am I complaining about 2.0.20 breaking that functionality and it should never have been there in the first place. So why is the gnupg client doing DNS work for hkp(s) in the presence of a configured HTTP proxy server? > It's true this worked in a previous version of GnuPG, but this was due to an incorrect implementation in GPG and the way your DNS and proxy is set up. It was basically doing part of the DNS work in the client, and then passing the intermediate result to the proxy to do the rest, sort of like a DNS-level redirect, which as you point out does not follow the intent of RFC2782. 
Well, the way we were using it worked because 2.0.19 did the SRV processing, selected a keyserver, and passed the SELECTED keyserver's domain name to the proxy for connection. It broke in 2.0.20 because gnupg no longer passes the SELECTED keyserver's domain name to the proxy server. I understand, from Phil's response, the reasons for the change. It seems to me that my problem arose from me failing to realize that gnupg, in this configuration, is an HTTP proxy client and, as such, should not be doing any DNS resolution (SRV or otherwise) at all. I was capitalizing on the fact that it did! > One thing you might do is "keyserver-options no-try-dns-srv", but even that won't really help with names like "au.gnupg.net". A SRV-only keyserver name isn't going to work properly with a proxy. If there was a --no-try-dns-at-all-with-http-proxy option, that would be the one to use - but then, perhaps gnupg should do that all by itself - and, yes, using domain names which point to anything other than address records is the wrong thing to do when using an HTTP proxy. I think I've learned that lesson. BUT.... Couldn't this work (gnupg doing SRV selection) with a SOCKS5 proxy? I can't find SOCKS in the man page or in the source code. Are there any plans for gnupg to support keyserver connection via a SOCKS5 proxy? -- John Marshall -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 196 bytes Desc: not available URL: From john.marshall at riverwillow.com.au Thu Jun 13 12:59:07 2013 From: john.marshall at riverwillow.com.au (John Marshall) Date: Thu, 13 Jun 2013 20:59:07 +1000 Subject: 2.0.20 breaks DNS SRV hkp keyserver access via web proxy server In-Reply-To: <20130613071850.GE38402@rwpc15.mby.riverwillow.net.au> References: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> <4E23F628-BDCD-40BC-A5D3-89ED5879A2F8@jabberwocky.com> <20130611212544.GA33316@rwpc15.mby.riverwillow.net.au> <4F6FA1BA-4946-4D95-85C4-01CB1C6BB839@jabberwocky.com> <20130613071850.GE38402@rwpc15.mby.riverwillow.net.au> Message-ID: <20130613105907.GA41484@rwpc15.mby.riverwillow.net.au> On Thu, 13 Jun 2013, 17:18 +1000, John Marshall wrote: > BUT.... > > Couldn't this work (gnupg doing SRV selection) with a SOCKS5 proxy? I > can't find SOCKS in the man page or in the source code. Are there any > plans for gnupg to support keyserver connection via a SOCKS5 proxy? I've just discovered that libcurl already supports SOCKS5 proxy, and that it works without explicit support in gnupg. It looks like gnupg passes the scheme through to curl along with the proxy server name, so specifying --keyserver-options http-proxy=socks5://local.socks5.proxy:1080 works! That lets gnupg do all the DNS SRV stuff and delegates only the TCP connection to the SOCKS5 proxy. rwpc13> gpg --keyserver hkp://au.gnupg.net --keyserver-options 'http-proxy=socks5://rwsrv04.mby.riverwillow.net.au:1080 debug' --search-keys 0xA29A84A2 gpg: searching for "0xA29A84A2" from hkp server au.gnupg.net gpgkeys: curl version = libcurl/7.24.0 OpenSSL/0.9.8y zlib/1.2.7 gpgkeys: Faking pgpkey-http SRV from au.gnupg.net to keyserver.oeg.com.au:11371 gpgkeys: search type is 5, and key is "A29A84A2" * Added au.gnupg.net:11371:203.33.246.146 to DNS cache * About to connect() to proxy rwsrv04.mby.riverwillow.net.au port 1080 (#0) * Trying 172.25.24.17... 
* connected * Connected to rwsrv04.mby.riverwillow.net.au (172.25.24.17) port 1080 (#0) > GET /pks/lookup?op=index&options=mr&search=0xA29A84A2 HTTP/1.1 Accept: */* Host: au.gnupg.net Pragma: no-cache Cache-Control: no-cache < HTTP/1.1 200 OK < Date: Thu, 13 Jun 2013 10:47:56 GMT < Content-Type: text/plain < Connection: keep-alive < Server: sks_www/1.1.3 < Content-length: 111 < X-HKP-Results-Count: 1 < Via: 1.1 keyserver.oeg.com.au:80 (nginx) < * Connection #0 to host au.gnupg.net left intact * Closing connection #0 (1) John Marshall 1024 bit DSA key A29A84A2, created: 2008-05-03 Keys 1-1 of 1 for "0xA29A84A2". Enter number(s), N)ext, or Q)uit > q rwpc13> -- John Marshall -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 196 bytes Desc: not available URL: From alphazo at gmail.com Thu Jun 13 20:37:19 2013 From: alphazo at gmail.com (Alphazo) Date: Thu, 13 Jun 2013 20:37:19 +0200 Subject: No subject Message-ID: Hello, Just compiled 2.1.0beta3 and got a segfault when using gpg --list-keys gpg: signal Segmentation fault caught ... exiting [1] 1894 segmentation fault (core dumped) gpg --list-keys Could that be linked to a malformed keys in my keyring? How should I report my bug with more details? Thanks Dany -------------- next part -------------- An HTML attachment was scrubbed... URL: From rjh at sixdemonbag.org Thu Jun 13 22:09:57 2013 From: rjh at sixdemonbag.org (Robert J. Hansen) Date: Thu, 13 Jun 2013 13:09:57 -0700 Subject: In-Reply-To: References: Message-ID: <20130613130957.Horde.btkS3sv07jYa8VOU_i3lKg6@mail.monkeyblade.net> > Could that be linked to a malformed keys in my keyring? A segmentation fault is almost always an indicator of a bug in the program itself. Well-written code should not segfault on any input, even malformed input. (This is not a slight on Werner, g10 Code, or anyone involved with GnuPG. 2.1-beta3 is exactly what it claims to be, a beta, and these sorts of bugs are to be expected. This is a bug to be fixed before final release, not something that should cast doubt on the quality of the GnuPG codebase.) Thanks for discovering this bug, and I hope you and Werner are able to get it sorted out! From dshaw at jabberwocky.com Sat Jun 15 16:57:08 2013 From: dshaw at jabberwocky.com (David Shaw) Date: Sat, 15 Jun 2013 10:57:08 -0400 Subject: 2.0.20 breaks DNS SRV hkp keyserver access via web proxy server In-Reply-To: <20130613071850.GE38402@rwpc15.mby.riverwillow.net.au> References: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> <4E23F628-BDCD-40BC-A5D3-89ED5879A2F8@jabberwocky.com> <20130611212544.GA33316@rwpc15.mby.riverwillow.net.au> <4F6FA1BA-4946-4D95-85C4-01CB1C6BB839@jabberwocky.com> <20130613071850.GE38402@rwpc15.mby.riverwillow.net.au> Message-ID: On Jun 13, 2013, at 3:18 AM, John Marshall wrote: > On Thu, 13 Jun 2013, 00:13 -0400, David Shaw wrote: > >> I'm not convinced that it makes sense for a client to resolve the SRV, and then pass the resulting hostname to a proxy. For example, leaving aside SRV, the client does not try and resolve an A record or chase a CNAME, but rather passes the requested resource to the proxy and the proxy does the work translating that to a DNS name, looking up that name, making the connection, etc. Indeed, the client may not even be able to resolve external DNS at all. > > I think you're right. Here am I complaining about 2.0.20 breaking that > functionality and it should never have been there in the first place. 
> So why is the gnupg client doing DNS work for hkp(s) in the presence of > a configured HTTP proxy server? Bug. It shouldn't be. > Couldn't this work (gnupg doing SRV selection) with a SOCKS5 proxy? I > can't find SOCKS in the man page or in the source code. Are there any > plans for gnupg to support keyserver connection via a SOCKS5 proxy? As you discovered, SOCKS5 does work - we get this for free because libcurl supports it. There is a gotcha with all this proxy stuff, however. If you're going over something like TOR, you are effectively "leaking" what queries you are doing because GPG will do the keyserver SRV request through the local DNS before sending the actual keyserver query through TOR. I wonder if the healthiest thing to do here is to just flip SRV to off if any proxy is provided. If the user chooses to turn it back on again, that's up to them, but it should default to off. Comments from people using TOR would be welcome! David From wk at gnupg.org Mon Jun 17 18:07:06 2013 From: wk at gnupg.org (Werner Koch) Date: Mon, 17 Jun 2013 18:07:06 +0200 Subject: In-Reply-To: <20130613130957.Horde.btkS3sv07jYa8VOU_i3lKg6@mail.monkeyblade.net> (Robert J. Hansen's message of "Thu, 13 Jun 2013 13:09:57 -0700") References: <20130613130957.Horde.btkS3sv07jYa8VOU_i3lKg6@mail.monkeyblade.net> Message-ID: <87obb4yb4l.fsf@vigenere.g10code.de> On Thu, 13 Jun 2013 22:09, rjh at sixdemonbag.org said: > Thanks for discovering this bug, and I hope you and Werner are able to > get it sorted out! Hope so too. However the beta3 is 18 month old and we kept on working on the code. Thus it does not make much sense to look at this specific bug. alphazo: If you are not able to build from git and check whether the bug has gone, please have 2 or 3 weeks patience until I can do a new beta. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From alphazo at gmail.com Mon Jun 17 23:45:36 2013 From: alphazo at gmail.com (Alphazo) Date: Mon, 17 Jun 2013 23:45:36 +0200 Subject: In-Reply-To: <87obb4yb4l.fsf@vigenere.g10code.de> References: <20130613130957.Horde.btkS3sv07jYa8VOU_i3lKg6@mail.monkeyblade.net> <87obb4yb4l.fsf@vigenere.g10code.de> Message-ID: I think I'm going to wait few more weeks as I am only able to compile 2.1beta3 and not the latest git version. Crashes around here during the compilation: gcc -DHAVE_CONFIG_H -I. -I.. -I../gl -I../common -I../intl -DLOCALEDIR=\"/usr/share/locale\" -DGNUPG_BINDIR="\"/usr/bin\"" -DGNUPG_LIBEXECDIR="\"/usr/lib/gnupg\"" -DGNUPG_LIBDIR="\"/usr/lib/gnupg\"" -DGNUPG_DATADIR="\"/usr/share/gnupg\"" -DGNUPG_SYSCONFDIR="\"/usr/etc/gnupg\"" -DGNUPG_LOCALSTATEDIR="\"/usr/var\"" -D_FORTIFY_SOURCE=2 -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -O3 -Wall -Wcast-align -Wshadow -Wstrict-prototypes -Wformat -Wno-format-y2k -Wformat-security -W -Wno-sign-compare -Wno-missing-field-initializers -Wdeclaration-after-statement -Wno-pointer-sign -Wpointer-arith -MT minip12.o -MD -MP -MF .deps/minip12.Tpo -c -o minip12.o minip12.c mv -f .deps/minip12.Tpo .deps/minip12.Po gcc -DHAVE_CONFIG_H -I. -I.. 
-I../gl -I../common -I../intl -DLOCALEDIR=\"/usr/share/locale\" -DGNUPG_BINDIR="\"/usr/bin\"" -DGNUPG_LIBEXECDIR="\"/usr/lib/gnupg\"" -DGNUPG_LIBDIR="\"/usr/lib/gnupg\"" -DGNUPG_DATADIR="\"/usr/share/gnupg\"" -DGNUPG_SYSCONFDIR="\"/usr/etc/gnupg\"" -DGNUPG_LOCALSTATEDIR="\"/usr/var\"" -D_FORTIFY_SOURCE=2 -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -O3 -Wall -Wcast-align -Wshadow -Wstrict-prototypes -Wformat -Wno-format-y2k -Wformat-security -W -Wno-sign-compare -Wno-missing-field-initializers -Wdeclaration-after-statement -Wno-pointer-sign -Wpointer-arith -MT qualified.o -MD -MP -MF .deps/qualified.Tpo -c -o qualified.o qualified.c mv -f .deps/qualified.Tpo .deps/qualified.Po gcc -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -O3 -Wall -Wcast-align -Wshadow -Wstrict-prototypes -Wformat -Wno-format-y2k -Wformat-security -W -Wno-sign-compare -Wno-missing-field-initializers -Wdeclaration-after-statement -Wno-pointer-sign -Wpointer-arith -Wl,-O1,--sort-common,--as-needed,-z,relro -o gpgsm gpgsm.o misc.o keydb.o server.o call-agent.o call-dirmngr.o fingerprint.o base64.o certlist.o certdump.o certcheck.o certchain.o keylist.o verify.o sign.o encrypt.o decrypt.o import.o export.o delete.o certreqgen.o certreqgen-ui.o minip12.o qualified.o ../kbx/libkeybox.a ../common/libcommon.a ../gl/libgnu.a ../common/libgpgrl.a -lgcrypt -lgpg-error -lksba -lgpg-error -lassuan -lgpg-error -lgpg-error -lreadline -lz -lbz2 -llber make[2]: Leaving directory `/tmp/yaourt-tmp-alpha/aur-gnupg2-git/src/gnupg-build/sm' Making all in agent make[2]: Entering directory `/tmp/yaourt-tmp-alpha/aur-gnupg2-git/src/gnupg-build/agent' Makefile:1566: *** missing separator (did you mean TAB instead of 8 spaces?). Stop. make[2]: Leaving directory `/tmp/yaourt-tmp-alpha/aur-gnupg2-git/src/gnupg-build/agent' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/tmp/yaourt-tmp-alpha/aur-gnupg2-git/src/gnupg-build' make: *** [all] Error 2 On Mon, Jun 17, 2013 at 6:07 PM, Werner Koch wrote: > On Thu, 13 Jun 2013 22:09, rjh at sixdemonbag.org said: > > > Thanks for discovering this bug, and I hope you and Werner are able to > > get it sorted out! > > Hope so too. > > However the beta3 is 18 month old and we kept on working on the code. > Thus it does not make much sense to look at this specific bug. > > alphazo: If you are not able to build from git and check whether the bug > has gone, please have 2 or 3 weeks patience until I can do a new beta. > > > Salam-Shalom, > > Werner > > -- > Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wich at yuugen.jp Mon Jun 17 23:10:47 2013 From: wich at yuugen.jp (Remko van der Vossen) Date: Tue, 18 Jun 2013 06:10:47 +0900 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 Message-ID: <20130617211047.GR24976@yuugen.jp> Hello, It seems that when gnupg is compiled with -march=native -mtune=native -m64 the message digests calculated by gnupg are incorrect, for instance: > echo test | gpg --print-md sha512 88DBFA0D 846B20F0 07F93185 27B7A753 378C4F6D 575E36EE 220ECADE 124157CC 1FAE5F71 C87CEEAB AB71D609 C7E29BDE 7A03A386 6DC6340A F98F0BDD EF24165A > echo test | sha512sum 0e3e75234abc68f4378a86b3f4b32a19 8ba301845b0cd6e50106e874345700cc 6663a86c1ea125dc5e92be17c98f9a0f 85ca9d5f595db2012f7cc3571945c123 - When gnupg is compiled with -march=k8 -fPIC -DPIC instead everything is fine. 
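For reference, a standalone SHA-512 check outside of gpg could look like the
sketch below. It uses libgcrypt's gcry_md_hash_buffer(); depending on the
GnuPG version, gpg may use its own bundled SHA-512 code rather than
libgcrypt, so this only shows whether the same flags also break libgcrypt on
this box (file name and build flags are just an example):

#include <stdio.h>
#include <gcrypt.h>

/* Illustrative sketch: hash the same input as "echo test" ("test" plus
   a newline, 5 bytes) with libgcrypt's SHA-512 and print it as hex for
   comparison with sha512sum.  Build e.g. with
     gcc -march=native -mtune=native -m64 -O2 sha512check.c -lgcrypt  */
int
main (void)
{
  unsigned char digest[64];
  int i;

  if (!gcry_check_version (GCRYPT_VERSION))
    return 1;
  gcry_md_hash_buffer (GCRY_MD_SHA512, digest, "test\n", 5);
  for (i = 0; i < 64; i++)
    printf ("%02x", digest[i]);
  putchar ('\n');
  return 0;
}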
Note that I did not test other hashing algorithms than sha512. The system is based around an AMD FX6300 running a 3.9.6 SMP kernel, with glibc 2.17. Kind regards, Remko van der Vossen From wk at gnupg.org Tue Jun 18 10:30:57 2013 From: wk at gnupg.org (Werner Koch) Date: Tue, 18 Jun 2013 10:30:57 +0200 Subject: gpgme function idea: gpgme_signers_len() In-Reply-To: <51AFB556.2030405@guardianproject.info> (Hans-Christoph Steiner's message of "Wed, 05 Jun 2013 18:01:58 -0400") References: <51AFB556.2030405@guardianproject.info> Message-ID: <8761xbx1ku.fsf@vigenere.g10code.de> On Thu, 6 Jun 2013 00:01, hans at guardianproject.info said: > I have found that it was necessary to check whether there are any Signers in > the GnuPGContext in order to select whether to run gpgme_op_encrypt() vs > gpgme_op_encrypt_sign(). This makes it possible to have gnupg-for-java's > encrypt() method automatically sign if there are signers present in the > GnuPGContext. Makes sense. I used a different name, though: -- Function: unsigned int gpgme_signers_count (const gpgme_ctx_t CTX) The function `gpgme_signers_count' returns the number of signer keys in the context CTX. Just pushed. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From wk at gnupg.org Tue Jun 18 11:05:39 2013 From: wk at gnupg.org (Werner Koch) Date: Tue, 18 Jun 2013 11:05:39 +0200 Subject: prompt for pin before operations In-Reply-To: <51AF2F75.2050302@guardianproject.info> (Abel Luck's message of "Wed, 05 Jun 2013 12:30:45 +0000") References: <51AF2F75.2050302@guardianproject.info> Message-ID: <871u7zwzz0.fsf@vigenere.g10code.de> On Wed, 5 Jun 2013 14:30, abel at guardianproject.info said: > A standard --sign --encrypt on a large file can take some time, > especially on an ARM device like most Android phones. > > Currently, GnuPG is encrypting the data and then invoking pinentry for > the sign operation. This is a usability problem on Android, as the user I recall that we had a similar bug report in the past. I can't remeber the details and the BTS doesn't reveal anything. > Is it possible to force GnuPG to prompt for the pin before invoking a > potentially long-running operation? With 2.1 the agent handles the signing and it needs the hash of the message to be signed before it can pop up the pinentry. What we need is a prepare signing command for the agent to get the key ready before it received the data to be hashed. This is possible but we would either need to always use this feature or delay it until we have detected that there is a lot of plaintext to be processed. This will be quite some work and would delay 2.1.0 even more. Shall we put it on a wishlist for 2.1.n ? Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From wk at gnupg.org Tue Jun 18 11:23:51 2013 From: wk at gnupg.org (Werner Koch) Date: Tue, 18 Jun 2013 11:23:51 +0200 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <20130617211047.GR24976@yuugen.jp> (Remko van der Vossen's message of "Tue, 18 Jun 2013 06:10:47 +0900") References: <20130617211047.GR24976@yuugen.jp> Message-ID: <87sj0fvkk8.fsf@vigenere.g10code.de> On Mon, 17 Jun 2013 23:10, wich at yuugen.jp said: > It seems that when gnupg is compiled with -march=native -mtune=native Which version of GnuPG and Libgcrypt? Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. 
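To illustrate the use case from the gpgme_signers_count message above (automatically signing whenever signers have been added to the context), a caller could wrap the two encrypt operations as below. This is only a sketch, not code from gnupg-for-java or the gpgme tree, and error handling is left out:

  #include <gpgme.h>

  /* Encrypt IN to OUT for RECIPIENTS, signing as well if any signers
     have been added to CTX.  Flags are left at 0 here; a real caller
     would pass whatever gpgme_encrypt_flags_t values it needs.  */
  static gpgme_error_t
  encrypt_maybe_sign (gpgme_ctx_t ctx, gpgme_key_t recipients[],
                      gpgme_data_t in, gpgme_data_t out)
  {
    if (gpgme_signers_count (ctx) > 0)
      return gpgme_op_encrypt_sign (ctx, recipients, 0, in, out);
    return gpgme_op_encrypt (ctx, recipients, 0, in, out);
  }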
From wk at gnupg.org Tue Jun 18 11:20:39 2013 From: wk at gnupg.org (Werner Koch) Date: Tue, 18 Jun 2013 11:20:39 +0200 Subject: libassuan fails pipeconnect test on W32 In-Reply-To: <51B045BE.3000304@gmail.com> (LRN's message of "Thu, 06 Jun 2013 12:18:06 +0400") References: <51B045BE.3000304@gmail.com> Message-ID: <87wqprvkpk.fsf@vigenere.g10code.de> On Thu, 6 Jun 2013 10:18, lrn1986 at gmail.com said: > but fails, because it's trying to write into CRT file descriptor using > WriteFile(). Apparently, there needs to be some code to convert > fileno(stdout) into handle that WriteFile() understands. Or That is actually a big problem: We have many different classes of handles and file descriptors and try to convey the using one scalar type. We spend many many debugging hours for the simple reason that we were not always sure what object we are looking at. It is a real mess. > I suspect that the code that needs modification is > parse_std_file_handles(). It has some W32CE-only code that appears to be > doing the right thing (that is, if the spawn function (or whatever calls > the spawn function) does pass the fd via command line), i wonder why > it's CE-only...That said, converting fileno(stdout) to HANDLE is CE is quite special because it has no pipes - we use our own pipe emulation driver. Another reason is that I did not want to get a regression in plain W32 while doing the W32CE code. > trivial, so there's no need to use that CE code. Just a few > _get_osfhandle() calls with an #ifdef _WIN32. Back and forth converting them is not easy to maintain. Remember that we also use sockets which are yet another kind of object. For example you may use select with a socket but not with a file handle and vice versa. The only clean solution I see is to give up on scalar type for these objects and instead allocate transfer an object which can keep track of the actual type. That requires quite some changes and wrappers for almost all I/O functions. For a full 64 bit version of GnuPG on Windows this should be done. I am working on a 64 bit version of libassuan, thus I probably need to look into the pipeconnect test, anyway. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From wich at yuugen.jp Tue Jun 18 11:48:11 2013 From: wich at yuugen.jp (Remko van der Vossen) Date: Tue, 18 Jun 2013 18:48:11 +0900 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <87sj0fvkk8.fsf@vigenere.g10code.de> References: <20130617211047.GR24976@yuugen.jp> <87sj0fvkk8.fsf@vigenere.g10code.de> Message-ID: <20130618094811.GS24976@yuugen.jp> On Tue, Jun 18, 2013 at 11:23:51AM +0200, Werner Koch wrote: > On Mon, 17 Jun 2013 23:10, wich at yuugen.jp said: > > > It seems that when gnupg is compiled with -march=native -mtune=native > > Which version of GnuPG and Libgcrypt? Ah, apologies for the ommission. I'm using gnupg release 1.4.13. Regards, Remko van der Vossen From wk at gnupg.org Tue Jun 18 13:56:39 2013 From: wk at gnupg.org (Werner Koch) Date: Tue, 18 Jun 2013 13:56:39 +0200 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <20130618094811.GS24976@yuugen.jp> (Remko van der Vossen's message of "Tue, 18 Jun 2013 18:48:11 +0900") References: <20130617211047.GR24976@yuugen.jp> <87sj0fvkk8.fsf@vigenere.g10code.de> <20130618094811.GS24976@yuugen.jp> Message-ID: <87obb3vdhk.fsf@vigenere.g10code.de> On Tue, 18 Jun 2013 11:48, wich at yuugen.jp said: > Ah, apologies for the ommission. I'm using gnupg release 1.4.13. 
"make check" does not fail? In particular mds.test does a known answer test. Well, that test is pretty limited and could be extended. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From wich at yuugen.jp Tue Jun 18 15:36:50 2013 From: wich at yuugen.jp (Remko van der Vossen) Date: Tue, 18 Jun 2013 22:36:50 +0900 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <87obb3vdhk.fsf@vigenere.g10code.de> References: <20130617211047.GR24976@yuugen.jp> <87sj0fvkk8.fsf@vigenere.g10code.de> <20130618094811.GS24976@yuugen.jp> <87obb3vdhk.fsf@vigenere.g10code.de> Message-ID: <20130618133650.GT24976@yuugen.jp> On Tue, Jun 18, 2013 at 01:56:39PM +0200, Werner Koch wrote: > On Tue, 18 Jun 2013 11:48, wich at yuugen.jp said: > > > Ah, apologies for the ommission. I'm using gnupg release 1.4.13. > > "make check" does not fail? In particular mds.test does a known answer > test. Well, that test is pretty limited and could be extended. The make check fails, 1 of 27 tests (mds) fails. Though of course it shouldn't break in the first place right? Regards, Remko van der Vossen From dshaw at jabberwocky.com Tue Jun 18 15:44:37 2013 From: dshaw at jabberwocky.com (David Shaw) Date: Tue, 18 Jun 2013 09:44:37 -0400 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <20130618133650.GT24976@yuugen.jp> References: <20130617211047.GR24976@yuugen.jp> <87sj0fvkk8.fsf@vigenere.g10code.de> <20130618094811.GS24976@yuugen.jp> <87obb3vdhk.fsf@vigenere.g10code.de> <20130618133650.GT24976@yuugen.jp> Message-ID: On Jun 18, 2013, at 9:36 AM, Remko van der Vossen wrote: > On Tue, Jun 18, 2013 at 01:56:39PM +0200, Werner Koch wrote: >> On Tue, 18 Jun 2013 11:48, wich at yuugen.jp said: >> >>> Ah, apologies for the ommission. I'm using gnupg release 1.4.13. >> >> "make check" does not fail? In particular mds.test does a known answer >> test. Well, that test is pretty limited and could be extended. > > The make check fails, 1 of 27 tests (mds) fails. Interesting. What does "grep CFLAGS Makefile" from your build directory return? Also, what happens if you try and build with -O0 in your CFLAGS? David From wich at yuugen.jp Tue Jun 18 16:24:31 2013 From: wich at yuugen.jp (Remko van der Vossen) Date: Tue, 18 Jun 2013 23:24:31 +0900 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: References: <20130617211047.GR24976@yuugen.jp> <87sj0fvkk8.fsf@vigenere.g10code.de> <20130618094811.GS24976@yuugen.jp> <87obb3vdhk.fsf@vigenere.g10code.de> <20130618133650.GT24976@yuugen.jp> Message-ID: <20130618142431.GU24976@yuugen.jp> On Tue, Jun 18, 2013 at 09:44:37AM -0400, David Shaw wrote: > On Jun 18, 2013, at 9:36 AM, Remko van der Vossen wrote: > > > > On Tue, Jun 18, 2013 at 01:56:39PM +0200, Werner Koch wrote: > >> On Tue, 18 Jun 2013 11:48, wich at yuugen.jp said: > >> > >>> Ah, apologies for the ommission. I'm using gnupg release 1.4.13. > >> > >> "make check" does not fail? In particular mds.test does a known answer > >> test. Well, that test is pretty limited and could be extended. > > > > The make check fails, 1 of 27 tests (mds) fails. > > > Interesting. What does "grep CFLAGS Makefile" from your build directory return? Also, what happens if you try and build with -O0 in your CFLAGS? CFLAGS = -march=native -mtune=native -m64 -pipe -O3 -Wall -Wno-pointer-sign When I compile with -march=native -mtune=native -m64 -pipe -O0 -Wall -Wno-pointer-sign the mds check passes. 
However, when I compile with -march=k8 -fPIC -DPIC -pipe -O3 -Wall -Wno-pointer-sign everything is fine. Running gcc 4.6.3 Has anybody reproduced the failure? Regards, Remko van der Vossen From dshaw at jabberwocky.com Tue Jun 18 16:41:51 2013 From: dshaw at jabberwocky.com (David Shaw) Date: Tue, 18 Jun 2013 10:41:51 -0400 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <20130618142431.GU24976@yuugen.jp> References: <20130617211047.GR24976@yuugen.jp> <87sj0fvkk8.fsf@vigenere.g10code.de> <20130618094811.GS24976@yuugen.jp> <87obb3vdhk.fsf@vigenere.g10code.de> <20130618133650.GT24976@yuugen.jp> <20130618142431.GU24976@yuugen.jp> Message-ID: <07E4F1D5-3E8A-47A2-AE5B-3DF8FB10AA8D@jabberwocky.com> On Jun 18, 2013, at 10:24 AM, Remko van der Vossen wrote: > On Tue, Jun 18, 2013 at 09:44:37AM -0400, David Shaw wrote: >> On Jun 18, 2013, at 9:36 AM, Remko van der Vossen wrote: >> >> >>> On Tue, Jun 18, 2013 at 01:56:39PM +0200, Werner Koch wrote: >>>> On Tue, 18 Jun 2013 11:48, wich at yuugen.jp said: >>>> >>>>> Ah, apologies for the ommission. I'm using gnupg release 1.4.13. >>>> >>>> "make check" does not fail? In particular mds.test does a known answer >>>> test. Well, that test is pretty limited and could be extended. >>> >>> The make check fails, 1 of 27 tests (mds) fails. >> >> >> Interesting. What does "grep CFLAGS Makefile" from your build directory return? Also, what happens if you try and build with -O0 in your CFLAGS? > > CFLAGS = -march=native -mtune=native -m64 -pipe -O3 -Wall -Wno-pointer-sign > > When I compile with -march=native -mtune=native -m64 -pipe -O0 -Wall -Wno-pointer-sign the mds check passes. > > However, when I compile with -march=k8 -fPIC -DPIC -pipe -O3 -Wall -Wno-pointer-sign everything is fine. I've seen something like this once before. That one turned out to be a compiler issue with -O greater than 2, coincidentally also breaking the hashing code (it was TIGER if I recall). I'm not saying your problem is a compiler issue, but out of curiosity, what happens if you compile with -O2 ? David From wich at yuugen.jp Tue Jun 18 17:09:55 2013 From: wich at yuugen.jp (Remko van der Vossen) Date: Wed, 19 Jun 2013 00:09:55 +0900 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <07E4F1D5-3E8A-47A2-AE5B-3DF8FB10AA8D@jabberwocky.com> References: <20130617211047.GR24976@yuugen.jp> <87sj0fvkk8.fsf@vigenere.g10code.de> <20130618094811.GS24976@yuugen.jp> <87obb3vdhk.fsf@vigenere.g10code.de> <20130618133650.GT24976@yuugen.jp> <20130618142431.GU24976@yuugen.jp> <07E4F1D5-3E8A-47A2-AE5B-3DF8FB10AA8D@jabberwocky.com> Message-ID: <20130618150955.GV24976@yuugen.jp> On Tue, Jun 18, 2013 at 10:41:51AM -0400, David Shaw wrote: > On Jun 18, 2013, at 10:24 AM, Remko van der Vossen wrote: > > > On Tue, Jun 18, 2013 at 09:44:37AM -0400, David Shaw wrote: > >> On Jun 18, 2013, at 9:36 AM, Remko van der Vossen wrote: > >> > >>> On Tue, Jun 18, 2013 at 01:56:39PM +0200, Werner Koch wrote: > >>>> On Tue, 18 Jun 2013 11:48, wich at yuugen.jp said: > >>>> > >>>>> Ah, apologies for the ommission. I'm using gnupg release 1.4.13. > >>>> > >>>> "make check" does not fail? In particular mds.test does a known answer > >>>> test. Well, that test is pretty limited and could be extended. > >>> > >>> The make check fails, 1 of 27 tests (mds) fails. > >> > >> > >> Interesting. What does "grep CFLAGS Makefile" from your build directory return? Also, what happens if you try and build with -O0 in your CFLAGS? 
> > > > CFLAGS = -march=native -mtune=native -m64 -pipe -O3 -Wall -Wno-pointer-sign > > > > When I compile with -march=native -mtune=native -m64 -pipe -O0 -Wall -Wno-pointer-sign the mds check passes. > > > > However, when I compile with -march=k8 -fPIC -DPIC -pipe -O3 -Wall -Wno-pointer-sign everything is fine. > > > I've seen something like this once before. That one turned out to be a compiler issue with -O greater than 2, coincidentally also breaking the hashing code (it was TIGER if I recall). I'm not saying your problem is a compiler issue, but out of curiosity, what happens if you compile with -O2 ? Compiling with -O2 is still okay it seems. Regards, Remko van der Vossen From gniibe at fsij.org Wed Jun 19 02:54:48 2013 From: gniibe at fsij.org (NIIBE Yutaka) Date: Wed, 19 Jun 2013 09:54:48 +0900 Subject: [PATCH] pinpad input support improvement Message-ID: <1371603288.3408.1.camel@cfw2.gniibe.org> Hello, This is an improvement of scdaemon for pinpad input. It found that: (1) Pinpad input with pcscd doesn't work well for Gemalto GemPCPinpad SmartCard Reader and Covadis Vega Alpha Reader. (2) Pinpad input with internal ccid driver doesn't work well for Covadis Vega Alpha Reader. For (1), those readers are the one which don't support variable length pinpad input, and we need to identify PINMIN and PINMAX to compose pinpad requests. Recent PC/SC-lite implements FEATURE_GET_TLV_PROPERTIES, and we use that. For (2), Covadis Vega Alpha Reader requires "escape" command at initialization of the reader for GnuPG. This is because the reader has a feature to display "the counts before lockdown" (PIN retry counter, in terms of OpenPGP card), and it is implemented by the reader using empty VERIFY command. Since OpenPGP card doesn't support empty VERIFY command, we need to disable the feature using "escape" command. At the initialization of the reader, the card is not yet powered or it is not inserted yet. We need to ignore errors of "card inactive" or "no card". Thanks to Michel who kindly tested the code. Tested with both readers. Here is a patch. As it is now possible to identify readers by USB vendor/product ID, I will further modify scd/apdu.c to match internal ccid driver. ====================== * scd/apdu.c (struct reader_table_s): New member PINMIN and PINMAX. (pcsc_vendor_specific_init): New. Examine parameters at init. (check_pcsc_pinpad): Set ->minlen and ->maxlen. (pcsc_pinpad_verify, pcsc_pinpad_modify): Use more conservative settings for bNumberMessage and bMsgIndex*. * scd/app-openpgp.c (parse_login_data): Free RELPTR on error. Fix parsing. * scd/ccid-driver.c (ccid_vendor_specific_init): Fix initialization for Covadis Vega Alpha reader. 
diff --git a/scd/apdu.c b/scd/apdu.c index 372932a..9b67eba 100644 --- a/scd/apdu.c +++ b/scd/apdu.c @@ -117,6 +117,8 @@ struct reader_table_s { pcsc_dword_t protocol; pcsc_dword_t verify_ioctl; pcsc_dword_t modify_ioctl; + int pinmin; + int pinmax; #ifdef NEED_PCSC_WRAPPER int req_fd; int rsp_fd; @@ -220,9 +222,19 @@ static char (* DLSTDCALL CT_close) (unsigned short ctn); #define PCSC_W_REMOVED_CARD 0x80100069 #define CM_IOCTL_GET_FEATURE_REQUEST (0x42000000 + 3400) +#define CM_IOCTL_VENDOR_IFD_EXCHANGE (0x42000000 + 1) + #define FEATURE_VERIFY_PIN_DIRECT 0x06 #define FEATURE_MODIFY_PIN_DIRECT 0x07 +#define FEATURE_GET_TLV_PROPERTIES 0x12 + +#define PCSCv2_PART10_PROPERTY_bEntryValidationCondition 2 +#define PCSCv2_PART10_PROPERTY_bTimeOut2 3 +#define PCSCv2_PART10_PROPERTY_bMinPINSize 6 +#define PCSCv2_PART10_PROPERTY_bMaxPINSize 7 +#define PCSCv2_PART10_PROPERTY_wIdVendor 11 +#define PCSCv2_PART10_PROPERTY_wIdProduct 12 /* The PC/SC error is defined as a long as per specs. Due to left shifts bit 31 will get sign extended. We use this mask to fix @@ -424,6 +436,8 @@ new_reader_slot (void) #endif reader_table[reader].pcsc.verify_ioctl = 0; reader_table[reader].pcsc.modify_ioctl = 0; + reader_table[reader].pcsc.pinmin = -1; + reader_table[reader].pcsc.pinmax = -1; return reader; } @@ -1687,6 +1701,115 @@ reset_pcsc_reader (int slot) } +/* Examine reader specific parameters and initialize. This is mostly + for pinpad input. Called at opening the connection to the reader. */ +static int +pcsc_vendor_specific_init (int slot) +{ + unsigned char buf[256]; + pcsc_dword_t len; + int sw; + int vendor = 0; + int product = 0; + pcsc_dword_t get_tlv_ioctl = (pcsc_dword_t)-1; + unsigned char *p; + + len = sizeof (buf); + sw = control_pcsc (slot, CM_IOCTL_GET_FEATURE_REQUEST, NULL, 0, buf, &len); + if (sw) + { + log_error ("pcsc_vendor_specific_init: GET_FEATURE_REQUEST failed: %d\n", + sw); + return SW_NOT_SUPPORTED; + } + else + { + p = buf; + while (p < buf + len) + { + unsigned char code = *p++; + int l = *p++; + unsigned int v = 0; + + if (l == 1) + v = p[0]; + else if (l == 2) + v = ((p[0] << 8) | p[1]); + else if (l == 4) + v = ((p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3]); + + if (code == FEATURE_VERIFY_PIN_DIRECT) + reader_table[slot].pcsc.verify_ioctl = v; + else if (code == FEATURE_MODIFY_PIN_DIRECT) + reader_table[slot].pcsc.modify_ioctl = v; + else if (code == FEATURE_GET_TLV_PROPERTIES) + get_tlv_ioctl = v; + + if (DBG_CARD_IO) + log_debug ("feature: code=%02X, len=%d, v=%02X\n", code, l, v); + + p += l; + } + } + + if (get_tlv_ioctl == (pcsc_dword_t)-1) + return 0; + + len = sizeof (buf); + sw = control_pcsc (slot, get_tlv_ioctl, NULL, 0, buf, &len); + if (sw) + { + log_error ("pcsc_vendor_specific_init: GET_TLV_IOCTL failed: %d\n", sw); + return SW_NOT_SUPPORTED; + } + + p = buf; + while (p < buf + len) + { + unsigned char tag = *p++; + int l = *p++; + unsigned int v = 0; + + /* Umm... here is little endian, while the encoding above is big. 
*/ + if (l == 1) + v = p[0]; + else if (l == 2) + v = ((p[1] << 8) | p[0]); + else if (l == 4) + v = ((p[3] << 24) | (p[2] << 16) | (p[1] << 8) | p[0]); + + if (tag == PCSCv2_PART10_PROPERTY_bMinPINSize) + reader_table[slot].pcsc.pinmin = v; + else if (tag == PCSCv2_PART10_PROPERTY_bMaxPINSize) + reader_table[slot].pcsc.pinmax = v; + else if (tag == PCSCv2_PART10_PROPERTY_wIdVendor) + vendor = v; + else if (tag == PCSCv2_PART10_PROPERTY_wIdProduct) + product = v; + + if (DBG_CARD_IO) + log_debug ("TLV properties: tag=%02X, len=%d, v=%08X\n", tag, l, v); + + p += l; + } + + if (vendor == 0x0982 && product == 0x0008) /* Vega Alpha */ + { + /* + * Please read the comment of ccid_vendor_specific_init in + * ccid-driver.c. + */ + const unsigned char cmd[] = { '\xb5', '\x01', '\x00', '\x03', '\x00' }; + sw = control_pcsc (slot, CM_IOCTL_VENDOR_IFD_EXCHANGE, + cmd, sizeof (cmd), NULL, 0); + if (sw) + return SW_NOT_SUPPORTED; + } + + return 0; +} + + /* Open the PC/SC reader without using the wrapper. Returns -1 on error or a slot number for the reader. */ #ifndef NEED_PCSC_WRAPPER @@ -1785,6 +1908,7 @@ open_pcsc_reader_direct (const char *portstr) reader_table[slot].send_apdu_reader = pcsc_send_apdu; reader_table[slot].dump_status_reader = dump_pcsc_reader_status; + pcsc_vendor_specific_init (slot); dump_reader_status (slot); unlock_slot (slot); return slot; @@ -1989,6 +2113,8 @@ open_pcsc_reader_wrapped (const char *portstr) reader_table[slot].send_apdu_reader = pcsc_send_apdu; reader_table[slot].dump_status_reader = dump_pcsc_reader_status; + pcsc_vendor_specific_init (slot); + /* Read the status so that IS_T0 will be set. */ pcsc_get_status (slot, &dummy_status); @@ -2028,56 +2154,23 @@ open_pcsc_reader (const char *portstr) static int check_pcsc_pinpad (int slot, int command, pininfo_t *pininfo) { - unsigned char buf[256]; - pcsc_dword_t len = 256; - int sw; + int r; - (void)pininfo; /* XXX: Identify reader and set pininfo->fixedlen. 
*/ + pininfo->minlen = reader_table[slot].pcsc.pinmin; + pininfo->maxlen = reader_table[slot].pcsc.pinmax; - check_again: - if (command == ISO7816_VERIFY) - { - if (reader_table[slot].pcsc.verify_ioctl == (pcsc_dword_t)-1) - return SW_NOT_SUPPORTED; - else if (reader_table[slot].pcsc.verify_ioctl != 0) - return 0; /* Success */ - } - else if (command == ISO7816_CHANGE_REFERENCE_DATA) - { - if (reader_table[slot].pcsc.modify_ioctl == (pcsc_dword_t)-1) - return SW_NOT_SUPPORTED; - else if (reader_table[slot].pcsc.modify_ioctl != 0) - return 0; /* Success */ - } + if ((command == ISO7816_VERIFY && reader_table[slot].pcsc.verify_ioctl != 0) + || (command == ISO7816_CHANGE_REFERENCE_DATA + && reader_table[slot].pcsc.modify_ioctl != 0)) + r = 0; /* Success */ else - return SW_NOT_SUPPORTED; - - reader_table[slot].pcsc.verify_ioctl = (pcsc_dword_t)-1; - reader_table[slot].pcsc.modify_ioctl = (pcsc_dword_t)-1; - - sw = control_pcsc (slot, CM_IOCTL_GET_FEATURE_REQUEST, NULL, 0, buf, &len); - if (sw) - return SW_NOT_SUPPORTED; - else - { - unsigned char *p = buf; + r = SW_NOT_SUPPORTED; - while (p < buf + len) - { - unsigned char code = *p++; - - p++; /* Skip length */ - if (code == FEATURE_VERIFY_PIN_DIRECT) - reader_table[slot].pcsc.verify_ioctl - = (p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3]; - else if (code == FEATURE_MODIFY_PIN_DIRECT) - reader_table[slot].pcsc.modify_ioctl - = (p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3]; - p += 4; - } - } + if (DBG_CARD_IO) + log_debug ("check_pcsc_pinpad: command=%02X, r=%d\n", + (unsigned int)command, r); - goto check_again; + return r; } @@ -2124,7 +2217,7 @@ pcsc_pinpad_verify (int slot, int class, int ins, int p0, int p1, pin_verify[7] = 0x02; /* bEntryValidationCondition: Validation key pressed */ if (pininfo->minlen && pininfo->maxlen && pininfo->minlen == pininfo->maxlen) pin_verify[7] |= 0x01; /* Max size reached. */ - pin_verify[8] = 0xff; /* bNumberMessage: Default */ + pin_verify[8] = 0x01; /* bNumberMessage: One message */ pin_verify[9] = 0x09; /* wLangId: 0x0409: US English */ pin_verify[10] = 0x04; /* wLangId: 0x0409: US English */ pin_verify[11] = 0x00; /* bMsgIndex */ @@ -2214,12 +2307,12 @@ pcsc_pinpad_modify (int slot, int class, int ins, int p0, int p1, pin_modify[10] = 0x02; /* bEntryValidationCondition: Validation key pressed */ if (pininfo->minlen && pininfo->maxlen && pininfo->minlen == pininfo->maxlen) pin_modify[10] |= 0x01; /* Max size reached. */ - pin_modify[11] = 0xff; /* bNumberMessage: Default */ - pin_modify[12] = 0x09; /* wLangId: 0x0409: US English */ + pin_modify[11] = 0x03; /* bNumberMessage: Three messages */ + pin_modify[12] = 0x09; /* wLangId: 0x0409: US English */ pin_modify[13] = 0x04; /* wLangId: 0x0409: US English */ pin_modify[14] = 0x00; /* bMsgIndex1 */ - pin_modify[15] = 0x00; /* bMsgIndex2 */ - pin_modify[16] = 0x00; /* bMsgIndex3 */ + pin_modify[15] = 0x01; /* bMsgIndex2 */ + pin_modify[16] = 0x02; /* bMsgIndex3 */ pin_modify[17] = 0x00; /* bTeoPrologue[0] */ pin_modify[18] = 0x00; /* bTeoPrologue[1] */ pin_modify[19] = 2 * pininfo->fixedlen + 0x05; /* bTeoPrologue[2] */ diff --git a/scd/app-openpgp.c b/scd/app-openpgp.c index 4af4e93..9186e18 100644 --- a/scd/app-openpgp.c +++ b/scd/app-openpgp.c @@ -629,7 +629,11 @@ parse_login_data (app_t app) if (*buffer == '\n') break; if (buflen < 2 || buffer[1] != '\x14') - return; /* No control sequences. */ + { + xfree (relptr); + return; /* No control sequences. 
*/ + } + buflen--; buffer++; do @@ -676,14 +680,11 @@ parse_login_data (app_t app) m = strtol (q, &q, 10); } - buffer = q; if (buflen < ((unsigned char *)q - buffer)) - { - buflen = 0; - break; - } - else - buflen -= ((unsigned char *)q - buffer); + break; + + buflen -= ((unsigned char *)q - buffer); + buffer = q; if (buflen && !(*buffer == '\n' || *buffer == '\x18')) goto next; @@ -694,11 +695,11 @@ parse_login_data (app_t app) } } next: - for (; buflen && *buffer != '\x18'; buflen--, buffer++) - if (*buffer == '\n') - buflen = 1; + /* Skip to FS (0x18) or LF (\n). */ + for (; buflen && *buffer != '\x18' && *buffer != '\n'; buflen--) + buffer++; } - while (buflen); + while (buflen && *buffer != '\n'); xfree (relptr); } diff --git a/scd/ccid-driver.c b/scd/ccid-driver.c index 42a219f..b812cb0 100644 --- a/scd/ccid-driver.c +++ b/scd/ccid-driver.c @@ -1537,6 +1537,7 @@ ccid_vendor_specific_init (ccid_driver_t handle) { if (handle->id_vendor == VENDOR_VEGA && handle->id_product == VEGA_ALPHA) { + int r; /* * Vega alpha has a feature to show retry counter on the pinpad * display. But it assumes that the card returns the value of @@ -1545,9 +1546,12 @@ ccid_vendor_specific_init (ccid_driver_t handle) * VERIFY command with empty data. This vendor specific command * sequence is to disable the feature. */ - const unsigned char cmd[] = "\xb5\x01\x00\x03\x00"; + const unsigned char cmd[] = { '\xb5', '\x01', '\x00', '\x03', '\x00' }; - return send_escape_cmd (handle, cmd, sizeof (cmd), NULL, 0, NULL); + r = send_escape_cmd (handle, cmd, sizeof (cmd), NULL, 0, NULL); + if (r != 0 && r != CCID_DRIVER_ERR_CARD_INACTIVE + && r != CCID_DRIVER_ERR_NO_CARD) + return r; } return 0; -- From gniibe at fsij.org Wed Jun 19 05:00:02 2013 From: gniibe at fsij.org (NIIBE Yutaka) Date: Wed, 19 Jun 2013 12:00:02 +0900 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <20130618142431.GU24976@yuugen.jp> References: <20130617211047.GR24976@yuugen.jp> <87sj0fvkk8.fsf@vigenere.g10code.de> <20130618094811.GS24976@yuugen.jp> <87obb3vdhk.fsf@vigenere.g10code.de> <20130618133650.GT24976@yuugen.jp> <20130618142431.GU24976@yuugen.jp> Message-ID: <1371610802.3408.3.camel@cfw2.gniibe.org> On 2013-06-18 at 23:24 +0900, Remko van der Vossen wrote: > CFLAGS = -march=native -mtune=native -m64 -pipe -O3 -Wall -Wno-pointer-sign [...] > Running gcc 4.6.3 > > Has anybody reproduced the failure? I tested on Debian wheezy with gcc-4.6, which is: gcc-4.6 (Debian 4.6.3-14) 4.6.3 I did: $ CC=gcc-4.6 CFLAGS="-g -O3 -m64 -mtune=native -march=native" ./configure $ make Then, generated GPG got "Illegal instruction" at "andn" instruction. It doesn't work. I did again avoiding bmi1 feature: $ CC=gcc-4.6 CFLAGS="-g -O3 -m64 -mtune=native -march=native -mno-bmi" ./configure $ make $ make check Then, it works fine, as well as sha512 for "test\n". The machine is virtual environment, so it could be some bug there. It's: --------------- vendor_id : AuthenticAMD cpu family : 6 model : 2 model name : QEMU Virtual CPU version 1.1.2 stepping : 3 microcode : 0x1000065 --------------- -- From hans at guardianproject.info Wed Jun 19 06:30:48 2013 From: hans at guardianproject.info (Hans-Christoph Steiner) Date: Wed, 19 Jun 2013 00:30:48 -0400 Subject: gpgme-android dies when it needs pinentry Message-ID: <51C133F8.6000502@guardianproject.info> So thanks to Abel's hard work, we have a working pinentry for Android. But now it seems that gpgme has its own issues when it comes to processes that require pinentry. 
We have gpgme working for things that don't require pinentry: import, export, encrypting, listing keys, getting keys by fingerprint, etc. But for things like creating a new key, decrypting, signing, etc. gpgme always crashes. It does not seem to even try to launch pinentry, because we're not seeing and logging from pinentry. Attached in the gpgme log for trying to create a key. I'm stumped, any ideas? .hc -------------- next part -------------- A non-text attachment was scrubbed... Name: gpgme-gen-key.log Type: text/x-log Size: 12517 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 939 bytes Desc: OpenPGP digital signature URL: From john.marshall at riverwillow.com.au Wed Jun 19 09:42:51 2013 From: john.marshall at riverwillow.com.au (John Marshall) Date: Wed, 19 Jun 2013 17:42:51 +1000 Subject: 2.0.20 breaks DNS SRV hkp keyserver access via web proxy server In-Reply-To: References: <20130610103314.GB13326@rwpc15.mby.riverwillow.net.au> <4E23F628-BDCD-40BC-A5D3-89ED5879A2F8@jabberwocky.com> <20130611212544.GA33316@rwpc15.mby.riverwillow.net.au> <4F6FA1BA-4946-4D95-85C4-01CB1C6BB839@jabberwocky.com> <20130613071850.GE38402@rwpc15.mby.riverwillow.net.au> Message-ID: <20130619074251.GC24403@rwpc15.mby.riverwillow.net.au> On Sat, 15 Jun 2013, 10:57 -0400, David Shaw wrote: > On Jun 13, 2013, at 3:18 AM, John Marshall wrote: > > On Thu, 13 Jun 2013, 00:13 -0400, David Shaw wrote: > >> I'm not convinced that it makes sense for a client to resolve the SRV, and then pass the resulting hostname to a proxy. For example, leaving aside SRV, the client does not try and resolve an A record or chase a CNAME, but rather passes the requested resource to the proxy and the proxy does the work translating that to a DNS name, looking up that name, making the connection, etc. Indeed, the client may not even be able to resolve external DNS at all. > > > > I think you're right. Here am I complaining about 2.0.20 breaking that > > functionality and it should never have been there in the first place. > > So why is the gnupg client doing DNS work for hkp(s) in the presence of > > a configured HTTP proxy server? > > Bug. It shouldn't be. > > > Couldn't this work (gnupg doing SRV selection) with a SOCKS5 proxy? I > > can't find SOCKS in the man page or in the source code. Are there any > > plans for gnupg to support keyserver connection via a SOCKS5 proxy? > > As you discovered, SOCKS5 does work - we get this for free because libcurl supports it. There is a gotcha with all this proxy stuff, however. If you're going over something like TOR, you are effectively "leaking" what queries you are doing because GPG will do the keyserver SRV request through the local DNS before sending the actual keyserver query through TOR. > > I wonder if the healthiest thing to do here is to just flip SRV to off if any proxy is provided. If the user chooses to turn it back on again, that's up to them, but it should default to off. s/SRV/DNS/ ...but only for the types of proxy where it makes sense, e.g. proxy scheme http:// <-- proxy should resolve DNS proxy scheme socks5h:// <-- proxy should resolve DNS proxy scheme socks5:// <-- client should resolve DNS [From curl_easy_setopt(3) - under CURLOPT_PROXY] Since 7.21.7, the proxy string may be specified with a protocol:// prefix to specify alternative proxy protocols. 
Use socks4://, socks4a://, socks5:// or socks5h:// (the last one to enable socks5 and asking the proxy to do the resolving, also known as CURLPROXY_SOCKS5_HOSTNAME type) to request the specific SOCKS version to be used. No protocol specified, http:// and all others will be treated as HTTP proxies. > Comments from people using TOR would be welcome! -- John Marshall -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 196 bytes Desc: not available URL: From wk at gnupg.org Wed Jun 19 18:41:44 2013 From: wk at gnupg.org (Werner Koch) Date: Wed, 19 Jun 2013 18:41:44 +0200 Subject: gpgme-android dies when it needs pinentry In-Reply-To: <51C133F8.6000502@guardianproject.info> (Hans-Christoph Steiner's message of "Wed, 19 Jun 2013 00:30:48 -0400") References: <51C133F8.6000502@guardianproject.info> Message-ID: <87zjumrr1z.fsf@vigenere.g10code.de> On Wed, 19 Jun 2013 06:30, hans at guardianproject.info said: > But for things like creating a new key, decrypting, signing, etc. gpgme always > crashes. It does not seem to even try to launch pinentry, because we're not > seeing and logging from pinentry. Attached in the gpgme log for trying to I can't see a gpgme crash for the log. It might be a crash in gpg or gpg-agent. Can you please check? Having gpg-agent and gpg log files would also be useful. > GPGME 2013-06-19 00:22:03 <0x4227> _gpgme_io_read: enter: fd=0x2b, buffer=0x5c8339b0, count=1024 > GPGME 2013-06-19 00:22:03 <0x4227> _gpgme_io_read: leave: result=0 The first read of the status-fd returns 0, i.e. EOF. That should not happen. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From hans at guardianproject.info Thu Jun 20 00:32:25 2013 From: hans at guardianproject.info (Hans-Christoph Steiner) Date: Wed, 19 Jun 2013 18:32:25 -0400 Subject: gpgme-android dies when it needs pinentry In-Reply-To: <87zjumrr1z.fsf@vigenere.g10code.de> References: <51C133F8.6000502@guardianproject.info> <87zjumrr1z.fsf@vigenere.g10code.de> Message-ID: <51C23179.9040305@guardianproject.info> On 06/19/2013 12:41 PM, Werner Koch wrote: > On Wed, 19 Jun 2013 06:30, hans at guardianproject.info said: > >> But for things like creating a new key, decrypting, signing, etc. gpgme always >> crashes. It does not seem to even try to launch pinentry, because we're not >> seeing and logging from pinentry. Attached in the gpgme log for trying to > > I can't see a gpgme crash for the log. It might be a crash in gpg or > gpg-agent. Can you please check? Having gpg-agent and gpg log files > would also be useful. > >> GPGME 2013-06-19 00:22:03 <0x4227> _gpgme_io_read: enter: fd=0x2b, buffer=0x5c8339b0, count=1024 >> GPGME 2013-06-19 00:22:03 <0x4227> _gpgme_io_read: leave: result=0 > > The first read of the status-fd returns 0, i.e. EOF. That should not > happen. > > > Salam-Shalom, > > Werner I just re-ran the gen-key procedure. Looks like another Android doosie... gpg2.log does not get created, gpg-agent.log is 0. I included gpgme.log and the Android logcat starting from when the app started till when it failed on the key create. Those are attached. If you remember with other bits, we had to set LD_LIBRARY_PATH for things to work, and we can't just set it on the app process because of Java restrictions. import, export, and gpgmeOpKeylist() all work, so I guess that means LD_LIBRARY_PATH is correct. I can't think of anything else. 
But gpgme is consistently failing only on the operations that require pinentry to get a passphrase. .hc -------------- next part -------------- A non-text attachment was scrubbed... Name: gen-key-logs.tar.bz2 Type: application/x-bzip Size: 9591 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 939 bytes Desc: OpenPGP digital signature URL: From pgut001 at cs.auckland.ac.nz Thu Jun 20 07:48:47 2013 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Thu, 20 Jun 2013 17:48:47 +1200 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <07E4F1D5-3E8A-47A2-AE5B-3DF8FB10AA8D@jabberwocky.com> Message-ID: David Shaw writes: >I've seen something like this once before. That one turned out to be a >compiler issue with -O greater than 2, coincidentally also breaking the >hashing code (it was TIGER if I recall). I'm not saying your problem is a >compiler issue, but out of curiosity, what happens if you compile with -O2 ? I've found so many optimiser bugs in gcc across different platforms with -O3 (and wasted so much time staring at -S output that bore no resemablance to the original C code) that I switched to -O2 by default in my code. I don't think I've had any problems since. You don't lose much -O with -O2 (and in particular in gcc 4.x they made changes that produce far larger binaries than -O2, with ensuing poor cache localisation properties), so the switch to -O2 was pretty easy and solved a lot of headaches. Peter. From pgut001 at cs.auckland.ac.nz Thu Jun 20 07:51:28 2013 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Thu, 20 Jun 2013 17:51:28 +1200 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <20130618150955.GV24976@yuugen.jp> Message-ID: Remko van der Vossen writes: >Compiling with -O2 is still okay it seems. Yep, see my earlier message. I'm willing to bet that if you do a build with -S at -O3 you'll see that the resulting asm code for wherever the problem is bears little to no resemblance to the C source code. I'd recommend switching to -O2 as a default. (or LLVM :-). Peter. From wk at gnupg.org Thu Jun 20 10:16:41 2013 From: wk at gnupg.org (Werner Koch) Date: Thu, 20 Jun 2013 10:16:41 +0200 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: (Peter Gutmann's message of "Thu, 20 Jun 2013 17:48:47 +1200") References: Message-ID: <87fvwdryc6.fsf@vigenere.g10code.de> On Thu, 20 Jun 2013 07:48, pgut001 at cs.auckland.ac.nz said: > I've had any problems since. You don't lose much -O with -O2 (and in FWIW, the reason I switched to -O3 (if configured with --enable-maintainer-mode) is 2009-10-12 Werner Koch * configure.ac: Use -O3 because newer gcc versions require that for uninitialized variable warnings. Which is a quite helpful warning. Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. 
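The warning in question relies on gcc's data-flow analysis, which older releases only run when optimisation is enabled; a contrived example of the pattern (purely for illustration, not from the GnuPG sources):

  /* Older gcc versions report "'y' may be used uninitialized in this
     function" only when compiled with optimisation, e.g. -Wall -O2 or
     -O3; the exact level needed differs between gcc releases.  */
  int
  maybe_uninitialized (int x)
  {
    int y;

    if (x > 0)
      y = x * 2;
    return y;  /* Undefined when x <= 0.  */
  }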
From john.marshall at riverwillow.com.au Thu Jun 20 10:40:51 2013 From: john.marshall at riverwillow.com.au (John Marshall) Date: Thu, 20 Jun 2013 18:40:51 +1000 Subject: [PATCH] 2.0.20 configure cannot find curl-config when --with-libcurl=PATH In-Reply-To: <20130610054610.GA13326@rwpc15.mby.riverwillow.net.au> References: <20130610054610.GA13326@rwpc15.mby.riverwillow.net.au> Message-ID: <20130620084051.GE4048@rwpc15.mby.riverwillow.net.au> On Mon, 10 Jun 2013, 15:46 +1000, John Marshall wrote: > I discovered that configure was not finding and using > the installed /usr/local/bin/curl-config. I was passing > --with-libcurl=/usr/local to configure. config.log was showing failure > to discover curl-config but success finding libcurl: > Passing a bare --with-libcurl to configure worked: > Looking at the configure script, it was passing a fully-qualified > pathname for curl-config (instead of just the program name) to > AC_PATH_PROG, which will never work. The following patch against the > STABLE-BRANCH-2-0 m4/libcurl.m4 yields what I believe to be the intended > result: No reply to this report after 1.5 weeks so I decided to create a BTS account and submit it there: Issue 1510 Thank you again for producing this wonderful software! -- John Marshall -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 196 bytes Desc: not available URL: From abbotti at mev.co.uk Thu Jun 20 12:01:57 2013 From: abbotti at mev.co.uk (Ian Abbott) Date: Thu, 20 Jun 2013 11:01:57 +0100 Subject: Bug in sha512 digest with -march=native -mtune=native -m64 In-Reply-To: <87fvwdryc6.fsf@vigenere.g10code.de> References: <87fvwdryc6.fsf@vigenere.g10code.de> Message-ID: <51C2D315.7080200@mev.co.uk> On 2013/06/20 09:16 AM, Werner Koch wrote: > On Thu, 20 Jun 2013 07:48, pgut001 at cs.auckland.ac.nz said: > >> I've had any problems since. You don't lose much -O with -O2 (and in > > FWIW, the reason I switched to -O3 (if configured with > --enable-maintainer-mode) is > > 2009-10-12 Werner Koch > > * configure.ac: Use -O3 because newer gcc versions require that > for uninitialized variable warnings. > > Which is a quite helpful warning. Possibly fixed in gcc-4.4. http://gcc.gnu.org/gcc-4.4/changes.html says: Uninitialized warnings do not require enabling optimization anymore, that is, -Wuninitialized can be used together with -O0. Nonetheless, the warnings given by -Wuninitialized will probably be more accurate if optimization is enabled. -- -=( Ian Abbott @ MEV Ltd. E-mail: )=- -=( Tel: +44 (0)161 477 1898 FAX: +44 (0)161 718 3587 )=- From emanuel at intevation.de Mon Jun 24 16:16:32 2013 From: emanuel at intevation.de (Emanuel =?utf-8?q?Sch=C3=BCtze?=) Date: Mon, 24 Jun 2013 16:16:32 +0200 Subject: [Gpg4win-devel] Wiki for GnuPG/Gpg4win In-Reply-To: <201304101238.42331.emanuel.schuetze@intevation.de> References: <201304101238.42331.emanuel.schuetze@intevation.de> Message-ID: <201306241616.36727.emanuel@intevation.de> Hi, we (Intevation) have set up a new public GnuPG (Moin) wiki: http://wiki.gnupg.org/ For wiki login we want to use the registered user accounts from the GnuPG (roundup) issues tracker only: https://bugs.gnupg.org/gnupg/ There are two possibilities which we could implement in Moin: 1. For each login: Moin wiki ask Roundup if user (with given password) exists. 2. Export regularly user names and password hashes from Roundup and copy via rsync to Moin wiki. Moin uses these account data for login. Which solution do you prefer? 
@Werner: ^ Best regards, Emanuel Am Mittwoch 10 April 2013 12:38:42 schrieb Emanuel Sch?tze: > Dear GnuPG and Gpg4win communities, > > we (g10Code and Intevation) offer to create a wiki under wiki.gnupg.org for > better coordination of the development of GnuPG and Gpg4win. > > We would use https for this wiki only for editing (with the CA of > Intevation). > > Any votes for or against this plan? > > Best regards > Emanuel -- Emanuel Sch?tze | ++49-541-33 50 83 - 746 | http://www.intevation.de/ Intevation GmbH, Neuer Graben 17, 49074 Osnabr?ck | AG Osnabr?ck, HR B 18998 Gesch?ftsf?hrer: Frank Koormann, Bernhard Reiter, Dr. Jan-Oliver Wagner -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From wk at gnupg.org Mon Jun 24 18:07:32 2013 From: wk at gnupg.org (Werner Koch) Date: Mon, 24 Jun 2013 18:07:32 +0200 Subject: [Gpg4win-devel] Wiki for GnuPG/Gpg4win In-Reply-To: <201306241616.36727.emanuel@intevation.de> ("Emanuel =?utf-8?Q?Sch=C3=BCtze=22's?= message of "Mon, 24 Jun 2013 16:16:32 +0200") References: <201304101238.42331.emanuel.schuetze@intevation.de> <201306241616.36727.emanuel@intevation.de> Message-ID: <871u7rpk57.fsf@vigenere.g10code.de> On Mon, 24 Jun 2013 16:16, emanuel at intevation.de said: > 1. For each login: Moin wiki ask Roundup if user (with given password) exists. > > 2. Export regularly user names and password hashes from Roundup and copy via > rsync to Moin wiki. Moin uses these account data for login. > > Which solution do you prefer? > @Werner: ^ I don't care too much. The second option seems to be better because it does not introduce a dependency on another machine (bugs.g10code.com) and thus the wiki will continue to work even if the otehr box is unavailable. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From hans at guardianproject.info Tue Jun 25 01:04:32 2013 From: hans at guardianproject.info (Hans of Guardian) Date: Mon, 24 Jun 2013 19:04:32 -0400 Subject: changing SONAME for libs on Android Message-ID: Right now, when building the GnuPG suite for Android, it creates shared library files that are named in the normal GNU/Linux style, i.e. libfoo.so.1.0.3, including the ABI version after .so. When building for Android, it currently inherits this behavior, perhaps because they share the Linux kernel. But unfortunately, this is not the proper Android behavior. While versioned libraries are wonderful things, Android does not use or support them. It might complicate the build process to make the Android build not use this naming scheme, it will greatly simplify the process of using GnuPG in an Android app because we can then distribute the .so files in the official Android way. So I'm curious whether this would take a lot of work to do, and what other repercussions such a change would have. As far as just changing the SONAME in the build process, that should be pretty straightforward. .hc -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PGP.sig Type: application/pgp-signature Size: 881 bytes Desc: This is a digitally signed message part URL: From emanuel at intevation.de Tue Jun 25 09:07:34 2013 From: emanuel at intevation.de (Emanuel =?iso-8859-1?q?Sch=FCtze?=) Date: Tue, 25 Jun 2013 09:07:34 +0200 Subject: [Gpg4win-devel] Wiki for GnuPG/Gpg4win In-Reply-To: <871u7rpk57.fsf@vigenere.g10code.de> References: <201304101238.42331.emanuel.schuetze@intevation.de> <201306241616.36727.emanuel@intevation.de> <871u7rpk57.fsf@vigenere.g10code.de> Message-ID: <201306250907.34624.emanuel@intevation.de> Werner, Am Montag 24 Juni 2013 18:07:32 schrieb Werner Koch: > I don't care too much. ?The second option seems to be better because it > does not introduce a dependency on another machine (bugs.g10code.com) > and thus the wiki will continue to work even if the otehr box is > unavailable. Ok. (2) was also our prefered solution. We will implement it... Emanuel -- Emanuel Sch?tze | ++49-541-33 50 83 - 746 | http://www.intevation.de/ Intevation GmbH, Neuer Graben 17, 49074 Osnabr?ck | AG Osnabr?ck, HR B 18998 Gesch?ftsf?hrer: Frank Koormann, Bernhard Reiter, Dr. Jan-Oliver Wagner -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From wk at gnupg.org Tue Jun 25 09:39:00 2013 From: wk at gnupg.org (Werner Koch) Date: Tue, 25 Jun 2013 09:39:00 +0200 Subject: changing SONAME for libs on Android In-Reply-To: (Hans of Guardian's message of "Mon, 24 Jun 2013 19:04:32 -0400") References: Message-ID: <87bo6uod0r.fsf@vigenere.g10code.de> On Tue, 25 Jun 2013 01:04, hans at guardianproject.info said: > Right now, when building the GnuPG suite for Android, it creates > shared library files that are named in the normal GNU/Linux style, > i.e. libfoo.so.1.0.3, including the ABI version after .so. When > building for Android, it currently inherits this behavior, perhaps > because they share the Linux kernel. But unfortunately, this is not The reason is that libtool selects this versioning scheme. It is based on the several factors but mainly on the cpu-vendor-os triplet. > So I'm curious whether this would take a lot of work to do, and what > other repercussions such a change would have. As far as just changing Because we are using libtool, this may only be fixed in libtool. A quick check of the current libtool (2.4.2) does not reveal any strings "android" or "bionic" thus I assume there is no special code for android there. Autoconf however knows about Android (see config.sub). > the SONAME in the build process, that should be pretty > straightforward. I don't exactly understand the problem. Is it merely that ld.so does not follow symlinks or is ld.so very different from the glibc ld.so ? For the former it should be easy to change the name after installation. For the latter we also need to change libtool because it takes care of installing the libraries. A quick search found these notes on a gstreamer presentation slide: - android's dynamic linker has a hard-coded limit on the number of .so files (shared libraries and/or plugins) you can load in a single process. - Android's linker is limited to 64, 96 and 128 shared libraries Including all plugins we have 262 shared libraries May be it it worth to look closer at this. I consider it important to know the exact reason why there are problems with the SO files before we start hacking on libtool. 
The next version of libtool has major changes and is not anymore a complex shell script. I assume it will take more than a year before a new libtool version is released. For that reason, small changes to the included 2.42 version of libtool are justified (in fact I recently did this for W64 and def file parsing). Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From daniele.athome at gmail.com Wed Jun 26 11:15:56 2013 From: daniele.athome at gmail.com (Daniele Ricci) Date: Wed, 26 Jun 2013 11:15:56 +0200 Subject: Decrypting with ECDH: no secret key Message-ID: Hello, I'm using a beta version so I think this list is more appropriate than gnupg-users. I've been trying 2.1.0 beta3 for elliptic curve encryption and I've just bumped into an issue: gpg: encrypted with 256-bit ECDH key, ID 8BA3201C, created 2013-06-16 "Test ECDSA-ECDH (Key and subkey are 256 bits long) " gpg: decryption failed: No secret key gpg-agent is correctly running and I can export the very same secret key I've been trying to decrypt with. Here is --list-secret-keys output: sec 256E/8C5A3D53 2013-06-16 uid Test ECDSA-ECDH (Key and subkey are 256 bits long) ssb# 256e/8BA3201C 2013-06-16 Signing and verifying do work. I've attached the ciphertext and the key pair I'm using. Cleartext is "This is a simple message.". Please note that I have 3 installations of gnupg (1.4.12, 2.0.20 from Debian and 2.1.0beta3 compiled manually), of course I made sure no other agent was running, environment variables were set correctly for gnupg beta agent. Cheers -- Daniele -------------- next part -------------- -----BEGIN PGP PRIVATE KEY BLOCK----- Version: GnuPG v2.1.0beta3 (GNU/Linux) lKUEUb4GwBMIKoZIzj0DAQcCAwS8p3TFaRAx58qCG63W+UNthXBPSJDnVDPTb/sT iXePaAZ/Gh1GKXTq7k6ab/67MMeVFp/EdySumqdWLtvceFKs/gcDAo11YYCae/K2 1uKGJ/uU4b4QHYnPIsAdYpuo5HIdoAOL/WwduRa8C6vSFrtMJLDqPK3BUpMz3CXN GyMhjuaHKP5MPbBZkIfgUGZO5qvU9+i0UFRlc3QgRUNEU0EtRUNESCAoS2V5IGFu ZCBzdWJrZXkgYXJlIDI1NiBiaXRzIGxvbmcpIDx0ZXN0LmVjZHNhLmVjZGhAZXhh bXBsZS5jb20+iHoEExMIACIFAlG+BsACGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4B AheAAAoJEPfAOVaMWj1Tk7YBAOLz007oltog5sLreMwxayIBpAesEI+QNkuJuA2/ vcMuAP99qi4fhhkmz5w41AE9Wc/VtobU2lZHmmk3clQsVyuFyg== =aBzf -----END PGP PRIVATE KEY BLOCK----- -------------- next part -------------- -----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v2.1.0beta3 (GNU/Linux) mFIEUb4GwBMIKoZIzj0DAQcCAwS8p3TFaRAx58qCG63W+UNthXBPSJDnVDPTb/sT iXePaAZ/Gh1GKXTq7k6ab/67MMeVFp/EdySumqdWLtvceFKstFBUZXN0IEVDRFNB LUVDREggKEtleSBhbmQgc3Via2V5IGFyZSAyNTYgYml0cyBsb25nKSA8dGVzdC5l Y2RzYS5lY2RoQGV4YW1wbGUuY29tPoh6BBMTCAAiBQJRvgbAAhsDBgsJCAcDAgYV CAIJCgsEFgIDAQIeAQIXgAAKCRD3wDlWjFo9U5O2AQDi89NO6JbaIObC63jMMWsi AaQHrBCPkDZLibgNv73DLgD/faouH4YZJs+cONQBPVnP1baG1NpWR5ppN3JULFcr hcq4VgRRvgbAEggqhkjOPQMBBwIDBLtY8Nmfz0zSEa8C1snTOWN+VcT8pXPwgJRy z6kSP4nPt1xj1lPKj5zwPXKWxMkPO9ocqhKdg2mOh6/rc1ObIoMDAQgHiGEEGBMI AAkFAlG+BsACGwwACgkQ98A5VoxaPVN8cgEAj4dMNMNwRSg2ZBWunqUAHqIedVbS dmwmbysD192L3z4A/ReXEa0gtv8OFWjuALD1ovEK8TpDORLUb6IuUb5jUIzY =17nm -----END PGP PUBLIC KEY BLOCK----- -------------- next part -------------- -----BEGIN PGP MESSAGE----- Version: GnuPG v2.1.0beta3 (GNU/Linux) hH4Dp5+FdoujIBwSAgMErx4BSvgXY3irwthgxU8zPoAoR+8rhmxdpwbw6ZJAO2GX azWJ85JNcobHKDeGeUq6wkTFu+g6yG99gIX8J5xJAjBRhyCRcaFgwbdDV4orWTe3 iewiT8qs4BQ23e0c8t+thdKoK4thMsCJy7wSKqY0sJTSVAELroNbCOi2lcO15YmW 6HiuFH7VKWcxPUBjXwf5+Z3uOKEp28tBgNyDrdbr1BbqlgYzIKq/pe9zUbUXfitn vFc6HcGhvmRQreQ+Yw1x3x0HJeoPwg== =HKTh -----END PGP MESSAGE----- From christian at quelltextlich.at Wed Jun 26 13:23:33 2013 From: 
christian at quelltextlich.at (Christian Aistleitner) Date: Wed, 26 Jun 2013 13:23:33 +0200 Subject: DCO for Christian Aistleitner Message-ID: <20130626112332.GA2228@quelltextlich.at> GnuPG Developer's Certificate of Origin. Version 1.0 ===================================================== By making a contribution to the GnuPG project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the free software license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate free software license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same free software license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the free software license(s) involved. Signed-off-by: Christian Aistleitner -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From christian at quelltextlich.at Wed Jun 26 13:30:20 2013 From: christian at quelltextlich.at (Christian Aistleitner) Date: Wed, 26 Jun 2013 13:30:20 +0200 Subject: [PATCH] Correct cherry-pick conflicts in agent's Makefile Message-ID: <20130626113020.GA32108@quelltextlich.at> -- Signed-off-by: Christian Aistleitner --- agent/Makefile.am | 3 --- 1 file changed, 3 deletions(-) diff --git a/agent/Makefile.am b/agent/Makefile.am index 92f8fc0..d7fae8c 100644 --- a/agent/Makefile.am +++ b/agent/Makefile.am @@ -71,9 +71,6 @@ gpg_agent_LDADD = $(commonpth_libs) \ $(resource_objs) gpg_agent_LDFLAGS = $(extra_bin_ldflags) $(gpg_agent_res_ldflags) gpg_agent_DEPENDENCIES = $(gpg_agent_res_deps) - $(LIBGCRYPT_LIBS) $(LIBASSUAN_LIBS) $(PTH_LIBS) \ - $(GPG_ERROR_LIBS) $(LIBINTL) $(NETLIBS) $(LIBICONV) \ - $(resource_objs) gpg_protect_tool_SOURCES = \ protect-tool.c \ -- 1.8.1.5 From hans at guardianproject.info Wed Jun 26 17:02:29 2013 From: hans at guardianproject.info (Hans of Guardian) Date: Wed, 26 Jun 2013 11:02:29 -0400 Subject: DCO for Hans-Christoph Steiner In-Reply-To: <20130626112332.GA2228@quelltextlich.at> References: <20130626112332.GA2228@quelltextlich.at> Message-ID: Christian's email just reminded me that I don't think I sent one of these in: GnuPG Developer's Certificate of Origin. 
Version 1.0 ===================================================== By making a contribution to the GnuPG project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the free software license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate free software license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same free software license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the free software license(s) involved. Signed-off-by: Hans-Christoph Steiner -------------- next part -------------- A non-text attachment was scrubbed... Name: PGP.sig Type: application/pgp-signature Size: 881 bytes Desc: This is a digitally signed message part URL: From hans at guardianproject.info Wed Jun 26 17:15:00 2013 From: hans at guardianproject.info (Hans of Guardian) Date: Wed, 26 Jun 2013 11:15:00 -0400 Subject: changing SONAME for libs on Android In-Reply-To: <87bo6uod0r.fsf@vigenere.g10code.de> References: <87bo6uod0r.fsf@vigenere.g10code.de> Message-ID: On Jun 25, 2013, at 3:39 AM, Werner Koch wrote: > On Tue, 25 Jun 2013 01:04, hans at guardianproject.info said: > >> Right now, when building the GnuPG suite for Android, it creates >> shared library files that are named in the normal GNU/Linux style, >> i.e. libfoo.so.1.0.3, including the ABI version after .so. When >> building for Android, it currently inherits this behavior, perhaps >> because they share the Linux kernel. But unfortunately, this is not > > The reason is that libtool selects this versioning scheme. It is based > on the several factors but mainly on the cpu-vendor-os triplet. > >> So I'm curious whether this would take a lot of work to do, and what >> other repercussions such a change would have. As far as just changing > > Because we are using libtool, this may only be fixed in libtool. A > quick check of the current libtool (2.4.2) does not reveal any strings > "android" or "bionic" thus I assume there is no special code for android > there. Autoconf however knows about Android (see config.sub). > >> the SONAME in the build process, that should be pretty >> straightforward. > > I don't exactly understand the problem. Is it merely that ld.so does > not follow symlinks or is ld.so very different from the glibc ld.so ? > > For the former it should be easy to change the name after installation. > For the latter we also need to change libtool because it takes care of > installing the libraries. The linker in Android is very different, its very stripped down and basic. It only looks in LD_LIBRARY_PATH for paths, it has no hard-coded default lib path and it entirely ignores rpath in a binary. My guess is that it does follow symlinks. The Android APK installer handles installing the .so files, and they are installed in a way that only root has access to that folder. That means even the app itself cannot modify them or add symlinks. 
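One consequence of a linker that only honours LD_LIBRARY_PATH is that an app can instead pre-load its native dependencies itself, by absolute path and in dependency order, before loading the library that needs them. A sketch of that idea follows; the directory used is made up for illustration and this is not how the current Android build works:

  #include <dlfcn.h>
  #include <stdio.h>

  /* Pre-load a shared library by absolute path with RTLD_GLOBAL so
     that a later dlopen() of a library depending on it can resolve
     the dependency from the already-loaded list, without any
     LD_LIBRARY_PATH or rpath support.  The paths are hypothetical.  */
  static int
  preload (const char *path)
  {
    void *handle = dlopen (path, RTLD_NOW | RTLD_GLOBAL);

    if (!handle)
      {
        fprintf (stderr, "dlopen(%s): %s\n", path, dlerror ());
        return -1;
      }
    return 0;
  }

  /* Example (hypothetical app directory):
       preload ("/data/data/my.example.app/app_lib/libgpg-error.so");
       preload ("/data/data/my.example.app/app_lib/libgcrypt.so");
       preload ("/data/data/my.example.app/app_lib/libassuan.so");   */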
We have worked around this by entirely managing the installation of the .so files in the app, then setting LD_LIBRARY_PATH. But this is an approach that is fraught with difficulties and complexity. > A quick search found these notes on a gstreamer presentation slide: > > - android's dynamic linker has a hard-coded limit on the number of .so > files (shared libraries and/or plugins) you can load in a single > process. > > - Android's linker is limited to 64, 96 and 128 shared libraries > Including all plugins we have 262 shared libraries Currently, we are only using about 10 .so files for the app. I guess I'm not aware of all the plugins for GnuPG. > Maybe it is worth looking closer at this. I consider it important to > know the exact reason why there are problems with the SO files before we > start hacking on libtool. > > The next version of libtool has major changes and is not anymore a > complex shell script. I assume it will take more than a year before a > new libtool version is released. For that reason, small changes to the > included 2.4.2 version of libtool are justified (in fact I recently did > this for W64 and def file parsing). Sounds like modifying libtool is the way to do it, and also working to get this stuff incorporated upstream. .hc From wk at gnupg.org Wed Jun 26 17:40:10 2013 From: wk at gnupg.org (Werner Koch) Date: Wed, 26 Jun 2013 17:40:10 +0200 Subject: Decrypting with ECDH: no secret key In-Reply-To: (Daniele Ricci's message of "Wed, 26 Jun 2013 11:15:56 +0200") References: Message-ID: <87wqpgg9t1.fsf@vigenere.g10code.de> On Wed, 26 Jun 2013 11:15, daniele.athome at gmail.com said: > sec 256E/8C5A3D53 2013-06-16 > uid Test ECDSA-ECDH (Key and subkey are 256 bits > long) > ssb# 256e/8BA3201C 2013-06-16 The hash mark after "ssb" indicates that the subkey is off-line. There is only a stub key left. In the case of GnuPG 2.1 this is similar; the gpg-agent does not have access to the actual key below private-keys-v1.d/. If you run gpg2 --with-keygrip -K 8C5A3D53 You will see something like: ssb 1024g/47BE2775 2003-12-31 Keygrip = 7E201E28B6FEB2927B321F443205F4724EBE637E Now to check whether the key is really available do ls ~/.gnupg/private-keys-v1.d/7E201E28B6FEB2927B321F443205F4724EBE637E.key I don't know why it is missing. You may however import it again from an exported copy of that secret key ("gpg2 --import seckey.gpg") Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From alphazo at gmail.com Wed Jun 26 17:50:37 2013 From: alphazo at gmail.com (Alphazo) Date: Wed, 26 Jun 2013 17:50:37 +0200 Subject: Issues compiling git version Message-ID: Hello, I'm trying to compile the git version but it fails at some point. Would someone have an idea on how to fix it ?
mv -f .deps/qualified.Tpo .deps/qualified.Po gcc -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -O3 -Wall -Wcast-align -Wshadow -Wstrict-prototypes -Wformat -Wno-format-y2k -Wformat-security -W -Wno-sign-compare -Wno-missing-field-initializers -Wdeclaration-after-statement -Wno-pointer-sign -Wpointer-arith -Wl,-O1,--sort-common,--as-needed,-z,relro -o gpgsm gpgsm.o misc.o keydb.o server.o call-agent.o call-dirmngr.o fingerprint.o base64.o certlist.o certdump.o certcheck.o certchain.o keylist.o verify.o sign.o encrypt.o decrypt.o import.o export.o delete.o certreqgen.o certreqgen-ui.o minip12.o qualified.o ../kbx/libkeybox.a ../common/libcommon.a ../gl/libgnu.a ../common/libgpgrl.a -lgcrypt -lgpg-error -lksba -lgpg-error -lassuan -lgpg-error -lgpg-error -lreadline -lz -lbz2 -llber make[2]: Leaving directory `/tmp/yaourt-tmp-alpha/aur-gnupg2-git/src/gnupg-build/sm' Making all in agent make[2]: Entering directory `/tmp/yaourt-tmp-alpha/aur-gnupg2-git/src/gnupg-build/agent' Makefile:1566: *** missing separator (did you mean TAB instead of 8 spaces?). Stop. make[2]: Leaving directory `/tmp/yaourt-tmp-alpha/aur-gnupg2-git/src/gnupg-build/agent' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/tmp/yaourt-tmp-alpha/aur-gnupg2-git/src/gnupg-build' make: *** [all] Error 2 Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From wk at gnupg.org Wed Jun 26 17:51:59 2013 From: wk at gnupg.org (Werner Koch) Date: Wed, 26 Jun 2013 17:51:59 +0200 Subject: Issues compiling git version In-Reply-To: (alphazo@gmail.com's message of "Wed, 26 Jun 2013 17:50:37 +0200") References: Message-ID: <87obasg99c.fsf@vigenere.g10code.de> On Wed, 26 Jun 2013 17:50, alphazo at gmail.com said: > I'm trying to compile the git version but it fails at some point. Would > someone have an idea on how to fix it ? Use automake 1.11. It seems that the latest automake introduces some problems. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From daniele.athome at gmail.com Wed Jun 26 18:07:39 2013 From: daniele.athome at gmail.com (Daniele Ricci) Date: Wed, 26 Jun 2013 18:07:39 +0200 Subject: Decrypting with ECDH: no secret key In-Reply-To: <87wqpgg9t1.fsf@vigenere.g10code.de> References: <87wqpgg9t1.fsf@vigenere.g10code.de> Message-ID: I think I found the issue. Key was indeed incomplete. I tried to create a new one from scratch (ECDSA+ECDH) and this time ssb shows without the hash mark. However, if I try to export it, gpg2 outputs the key "correctly", without the subkey, printing this warning: gpg: key F08342D6/0AF4E702: error receiving key from agent: Bad secret key - skipped When re-importing the exported secret key in another keyring, the same issue: hash mark after ssb. gpg-agent was started in a custom environment (--homedir) and GNUPGHOME set accordingly. By the way: I don't know if it's normal, pinentry asked me the key password twice during the export. On Wed, Jun 26, 2013 at 5:40 PM, Werner Koch wrote: > On Wed, 26 Jun 2013 11:15, daniele.athome at gmail.com said: > >> sec 256E/8C5A3D53 2013-06-16 >> uid Test ECDSA-ECDH (Key and subkey are 256 bits >> long) >> ssb# 256e/8BA3201C 2013-06-16 > > The hash mark after "ssb" indicates that the subkey is off-line. There > is only a stub key left. In the case of GnuPG 2.1 this is similar; the > gpg-agent does not have access to the actual key below > private-keys-v1.d/.
> > If you run > > gpg2 --with-keygrip -K 8C5A3D53 > > You will see something like: > > ssb 1024g/47BE2775 2003-12-31 > Keygrip = 7E201E28B6FEB2927B321F443205F4724EBE637E > > Now to check whether the key is really available do > > ls ~/.gnupg/private-keys-v1.d/7E201E28B6FEB2927B321F443205F4724EBE637E.key > > I don't know why it is missing. You may however import it again from an > exported copy of that secret key ("gpg2 --import seckey.gpg") > > > Salam-Shalom, > > Werner > > > -- > Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. > -- Daniele From wk at gnupg.org Wed Jun 26 18:52:32 2013 From: wk at gnupg.org (Werner Koch) Date: Wed, 26 Jun 2013 18:52:32 +0200 Subject: Decrypting with ECDH: no secret key In-Reply-To: (Daniele Ricci's message of "Wed, 26 Jun 2013 18:07:39 +0200") References: <87wqpgg9t1.fsf@vigenere.g10code.de> Message-ID: <87k3lgg6gf.fsf@vigenere.g10code.de> On Wed, 26 Jun 2013 18:07, daniele.athome at gmail.com said: > However, if I try to export it, gpg2 outputs the key "correctly", > without the subkey, printing this warning: > gpg: key F08342D6/0AF4E702: error receiving key from agent: Bad secret > key - skipped Are you using the old beta tarball or a build from master? If the latter it would be good to see the log output from all tools: Add log-file socket:///foo/bar/S.log debug 1024 verbose to gpg-agent.conf and gpg.conf. Then run watchgnupg --time-only --force /foo/bar/S.log | tee gnupg.log on another tty. > When re-importing the exported secret key in another keyring, the same > issue: hash mark after ssb. Yes, because the subkey is missing but the public key is available and has a corresponding private primary key. > gpg-agent was started in a custom environment (--homedir) and > GNUPGHOME set accordingly. An easy way to debug this is to use GNUPGHOME=$(pwd) gpg-agent --daemon /bin/bash in a test directory and use this shell for all tests. > By the way: I don't know if it's normal, pinentry asked me the key > password twice during the export. Should not as long as the primary key and the subkeys all use the same passphrase. Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From christian at quelltextlich.at Wed Jun 26 19:12:36 2013 From: christian at quelltextlich.at (Christian Aistleitner) Date: Wed, 26 Jun 2013 19:12:36 +0200 Subject: Issues compiling git version In-Reply-To: References: Message-ID: <20130626171235.GA14831@quelltextlich.at> Hi, On Wed, Jun 26, 2013 at 05:50:37PM +0200, Alphazo wrote: > Would > someone have an idea on how to fix it ? Use the fix from http://lists.gnupg.org/pipermail/gnupg-devel/2013-June/027745.html > `/tmp/yaourt-tmp-alpha/aur-gnupg2-git/src/gnupg-build/agent' > Makefile:1566: *** missing separator (did you mean TAB instead of 8 > spaces?). Stop. That's just the problem the above fix solves. The Makefile.am is currently broken. Kind regards, Christian -- ---- quelltextlich e.U. ---- \\ ---- Christian Aistleitner ---- Companies' registry: 360296y in Linz Christian Aistleitner Gruendbergstrasze 65a Email: christian at quelltextlich.at 4040 Linz, Austria Phone: +43 732 / 26 95 63 Fax: +43 732 / 26 95 63 Homepage: http://quelltextlich.at/ --------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From christian at quelltextlich.at Wed Jun 26 19:27:07 2013 From: christian at quelltextlich.at (Christian Aistleitner) Date: Wed, 26 Jun 2013 19:27:07 +0200 Subject: Issues compiling git version In-Reply-To: <87obasg99c.fsf@vigenere.g10code.de> References: <87obasg99c.fsf@vigenere.g10code.de> Message-ID: <20130626172707.GB14831@quelltextlich.at> Hi Werner, On Wed, Jun 26, 2013 at 05:51:59PM +0200, Werner Koch wrote: > On Wed, 26 Jun 2013 17:50, alphazo at gmail.com said: > > > I'm trying to compile the git version but it fails at some point. Would > > someone have an idea on how to fix it ? > > Use automake 1.11. It seems that the latest automake introduces some > problems. Rather, the conflicts in cherry-pick 88e24341e57c96e31a25e92e09d67989e64cc1c1 were not resolved cleanly. In agent/Makefile.am the three lines after setting gpg_agent_DEPENDENCIES (lines 74-76) do not make too much sense. Best regards, Christian P.S.: Arguably, one might also remove gpg_agent_LDFLAGS, and gpg_agent_DEPENDENCIES as in the corresponding commit 049b3d9ca0285d15c00c215ac9b533c994196ca4 on STABLE-BRANCH-2-0. However, I do not have a w32 setup to test against and after removing the above three lines, the code already compiled nicely again. -- ---- quelltextlich e.U. ---- \\ ---- Christian Aistleitner ---- Companies' registry: 360296y in Linz Christian Aistleitner Gruendbergstrasze 65a Email: christian at quelltextlich.at 4040 Linz, Austria Phone: +43 732 / 26 95 63 Fax: +43 732 / 26 95 63 Homepage: http://quelltextlich.at/ --------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From alphazo at gmail.com Wed Jun 26 22:55:09 2013 From: alphazo at gmail.com (Alphazo) Date: Wed, 26 Jun 2013 22:55:09 +0200 Subject: Issues compiling git version In-Reply-To: <20130626171235.GA14831@quelltextlich.at> References: <20130626171235.GA14831@quelltextlich.at> Message-ID: Thanks a lot...That did the trick! One last thing, I got the following recommendations from someone on ArchLinux AUR in order to build the package: Replace: ./autogen.sh --force ./configure --enable-maintainer-mode --prefix=/usr --libexecdir=/usr/lib/gnupg #$EXTRAOPTS make by: sed -i 's/AM_CONFIG_HEADER/AC_CONFIG_HEADERS/g' configure.ac sed -i 's/doc, yes/doc, no/g' configure.ac autoreconf -fiv --force LIBS=-llber ./configure --enable-maintainer-mode --prefix=/usr --libexecdir=/usr/lib/gnupg #$EXTRAOPTS make It works but is there a cleaner way to stick to the more official instructions? The official instructions give me : checking that generated files are newer than configure... done checking that generated files are newer than configure... done configure: creating ./config.status config.status: error: cannot find input file: `m4/Makefile.in' Noticed the following during the build: configure.ac:69: warning: AM_INIT_AUTOMAKE: two- and three-arguments forms are deprecated. For more info, see: configure.ac:69: http://www.gnu.org/software/automake/manual/automake.html#Modernize-AM_005fINIT_005fAUTOMAKE-invocation configure.ac:1113: warning: The 'AM_PROG_MKDIR_P' macro is deprecated, and its use is discouraged.
configure.ac:1113: You should use the Autoconf-provided 'AC_PROG_MKDIR_P' macro instead, configure.ac:1113: and use '$(MKDIR_P)' instead of '$(mkdir_p)'in your Makefile.am files. parallel-tests: error: required file 'scripts/test-driver' not found parallel-tests: 'automake --add-missing' can install 'test-driver' kbx/Makefile.am:22: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS') Running autoconf --force ... You may now run: ./configure --sysconfdir=/etc --enable-maintainer-mode --enable-symcryptrun --enable-mailto --enable-gpgtar && make checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes /home/alpha/PERSO/DOCS/arch-packages/gnupg2-git/src/gnupg-build/scripts/missing: Unknown `--is-lightweight' option Try `/home/alpha/PERSO/DOCS/arch-packages/gnupg2-git/src/gnupg-build/scripts/missing --help' for more information configure: WARNING: 'missing' script is too old or missing On Wed, Jun 26, 2013 at 7:12 PM, Christian Aistleitner < christian at quelltextlich.at> wrote: > Hi, > > On Wed, Jun 26, 2013 at 05:50:37PM +0200, Alphazo wrote: > > Would > > someone have an idea on how to fix it ? > > Use the fix from > > http://lists.gnupg.org/pipermail/gnupg-devel/2013-June/027745.html > > > `/tmp/yaourt-tmp-alpha/aur-gnupg2-git/src/gnupg-build/agent' > > Makefile:1566: *** missing separator (did you mean TAB instead of 8 > > spaces?). Stop. > > That's just the problem the above fix solves. > The Makefile.am is currently broken. > > Kind regards, > Christian > > > -- > ---- quelltextlich e.U. ---- \\ ---- Christian Aistleitner ---- > Companies' registry: 360296y in Linz > Christian Aistleitner > Gruendbergstrasze 65a Email: christian at quelltextlich.at > 4040 Linz, Austria Phone: +43 732 / 26 95 63 > Fax: +43 732 / 26 95 63 > Homepage: http://quelltextlich.at/ > --------------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wk at gnupg.org Wed Jun 26 23:29:15 2013 From: wk at gnupg.org (Werner Koch) Date: Wed, 26 Jun 2013 23:29:15 +0200 Subject: Issues compiling git version In-Reply-To: <20130626171235.GA14831@quelltextlich.at> (Christian Aistleitner's message of "Wed, 26 Jun 2013 19:12:36 +0200") References: <20130626171235.GA14831@quelltextlich.at> Message-ID: <87zjucef2s.fsf@vigenere.g10code.de> On Wed, 26 Jun 2013 19:12, christian at quelltextlich.at said: > That's just the problem the above fix solves. > The Makefile.am is currently broken. You are right and I have to apologize for accusing automake. I was pretty sure that I did a complete build before the last push, but obviously I did not. 136f190 does build now. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From wk at gnupg.org Wed Jun 26 23:32:26 2013 From: wk at gnupg.org (Werner Koch) Date: Wed, 26 Jun 2013 23:32:26 +0200 Subject: Issues compiling git version In-Reply-To: <20130626172707.GB14831@quelltextlich.at> (Christian Aistleitner's message of "Wed, 26 Jun 2013 19:27:07 +0200") References: <87obasg99c.fsf@vigenere.g10code.de> <20130626172707.GB14831@quelltextlich.at> Message-ID: <87vc50eexh.fsf@vigenere.g10code.de> On Wed, 26 Jun 2013 19:27, christian at quelltextlich.at said: > P.S.: Arguably, one might also remove gpg_agent_LDFLAGS, and > gpg_agent_DEPENDENCIES as in the corresponding commit > 049b3d9ca0285d15c00c215ac9b533c994196ca4 I am actually not sure whether the removal of dependencies in 2.0 is correct.
We can remove that from master because it is required to build for WindowsCE. I did some cleanups anyway. Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From wk at gnupg.org Thu Jun 27 09:37:44 2013 From: wk at gnupg.org (Werner Koch) Date: Thu, 27 Jun 2013 09:37:44 +0200 Subject: Issues compiling git version In-Reply-To: (alphazo@gmail.com's message of "Wed, 26 Jun 2013 22:55:09 +0200") References: <20130626171235.GA14831@quelltextlich.at> Message-ID: <87li5wdmwn.fsf@vigenere.g10code.de> On Wed, 26 Jun 2013 22:55, alphazo at gmail.com said: > One last thing, I got the following recommendations from someone on > ArchLinux AUR in order to build the package: I just updated some stuff to be prepared for newer automake versions. However, newer automakes will default to parallel tests and that won't work with gnupg. Unfortunately it is not possible to use "serial-tests" with automake 1.11 because it prints an annoying warning. If you want to use automake 1.13 you need to add that option to configure.ac yourself; just grep for serial-tests. > sed -i 's/doc, yes/doc, no/g' configure.ac You better install the transfig package. > autoreconf -fiv --force Please don't do that. From README.GIT: Please don't use autopoint, libtoolize or autoreconf unless you are the current maintainer and want to update the standard configuration files. All those files should be in GIT and only updated manually if the maintainer decides that newer versions are required. The [...] Using "./autogen.sh --force" is the suggested way to build configure. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From alphazo at gmail.com Thu Jun 27 11:00:33 2013 From: alphazo at gmail.com (Alphazo) Date: Thu, 27 Jun 2013 11:00:33 +0200 Subject: Issues compiling git version In-Reply-To: <87li5wdmwn.fsf@vigenere.g10code.de> References: <20130626171235.GA14831@quelltextlich.at> <87li5wdmwn.fsf@vigenere.g10code.de> Message-ID: Hi Werner, I added AUTOMAKE_OPTIONS = serial-tests to configure.ac but it failed (also tried to add it to Makefile.am). I get the following error when using the above option and recommended build command: ./autogen.sh --force ./configure --enable-maintainer-mode --prefix=/usr --libexecdir=/usr/lib/gnupg #$EXTRAOPTS ******* configure: creating ./config.status config.status: error: cannot find input file: `m4/Makefile.in' On Thu, Jun 27, 2013 at 9:37 AM, Werner Koch wrote: > On Wed, 26 Jun 2013 22:55, alphazo at gmail.com said: > > One last thing, I got the following recommendations from someone on > > ArchLinux AUR in order to build the package: > > I just updated some stuff to be prepared for newer automake versions. > However, newer automakes will default to parallel tests and that won't > work with gnupg. Unfortunately it is not possible to use "serial-tests" > with automake 1.11 because it prints an annoying warning. If you want > to use automake 1.13 you need to add that option to configure.ac > yourself; just grep for serial-tests. > > > sed -i 's/doc, yes/doc, no/g' configure.ac > > You better install the transfig package. > > > autoreconf -fiv --force > > Please don't do that. From README.GIT: > > Please don't use autopoint, libtoolize or autoreconf unless you are > the current maintainer and want to update the standard configuration > files. All those files should be in GIT and only updated manually > if the maintainer decides that newer versions are required. The > [...] 
> > Using "./autogen.sh --force" is the suggested way to build configure. > > > > Salam-Shalom, > > Werner > > -- > Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wk at gnupg.org Thu Jun 27 12:30:00 2013 From: wk at gnupg.org (Werner Koch) Date: Thu, 27 Jun 2013 12:30:00 +0200 Subject: Issues compiling git version In-Reply-To: (alphazo@gmail.com's message of "Thu, 27 Jun 2013 11:00:33 +0200") References: <20130626171235.GA14831@quelltextlich.at> <87li5wdmwn.fsf@vigenere.g10code.de> Message-ID: <87d2r7ethz.fsf@vigenere.g10code.de> On Thu, 27 Jun 2013 11:00, alphazo at gmail.com said: > config.status: error: cannot find input file: `m4/Makefile.in' m4 is clearly listed in SUBDIRS of the top Makefile. Thus automake should have created it. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From alphazo at gmail.com Thu Jun 27 12:50:35 2013 From: alphazo at gmail.com (Alphazo) Date: Thu, 27 Jun 2013 12:50:35 +0200 Subject: Issues compiling git version In-Reply-To: <87d2r7ethz.fsf@vigenere.g10code.de> References: <20130626171235.GA14831@quelltextlich.at> <87li5wdmwn.fsf@vigenere.g10code.de> <87d2r7ethz.fsf@vigenere.g10code.de> Message-ID: Just to check, I installed automake 1.11 and was able to compile using: AUTOMAKE_SUFFIX="-1.11" ./autogen.sh --force but it fails when reverting to ./autogen.sh --force and adding AUTOMAKE_OPTIONS = serial-tests to configure.ac Is this the right way to specify the option? On Thu, Jun 27, 2013 at 12:30 PM, Werner Koch wrote: > On Thu, 27 Jun 2013 11:00, alphazo at gmail.com said: > > > config.status: error: cannot find input file: `m4/Makefile.in' > > m4 is clearly listed in SUBDIRS of the top Makefile. Thus automake > should have created it. > > > Salam-Shalom, > > Werner > > -- > Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniele.athome at gmail.com Sat Jun 29 12:26:18 2013 From: daniele.athome at gmail.com (Daniele Ricci) Date: Sat, 29 Jun 2013 12:26:18 +0200 Subject: Decrypting with ECDH: no secret key In-Reply-To: <87k3lgg6gf.fsf@vigenere.g10code.de> References: <87wqpgg9t1.fsf@vigenere.g10code.de> <87k3lgg6gf.fsf@vigenere.g10code.de> Message-ID: Hi, I was using the beta3 tarball. Shouldn't I? Anyway just to be sure I just cloned gnupg from git and tried the same tests. Same error :-( Log is attached. If you need more tests/data (e.g. a trace of some kind) just tell me. On Wed, Jun 26, 2013 at 6:52 PM, Werner Koch wrote: > On Wed, 26 Jun 2013 18:07, daniele.athome at gmail.com said: > >> However, if I try to export it, gpg2 outputs the key "correctly", >> without the subkey, printing this warning: >> gpg: key F08342D6/0AF4E702: error receiving key from agent: Bad secret >> key - skipped > > Are you using the old beta tarball or a build from master? If the > latter it would be good to see the log output from all tools: Add > > log-file socket:///foo/bar/S.log > debug 1024 > verbose > > to gpg-agent.conf and gpg.conf. Then run > > watchgnupg --time-only --force /foo/bar/S.log | tee gnupg.log > > on another tty. > >> When re-importing the exported secret key in another keyring, the same >> issue: hash mark after ssb. > > Yes, because the subkey is missing but the public key is available and > has a corresponding private primary key. > >> gpg-agent was started in a custom environment (--homedir) and >> GNUPGHOME set accordingly.
> > An easy way to debug this is to use > > GNUPGHOME=$(pwd) gpg-agent --daemon /bin/bash > > in a test directory and use this shell for all tests. > >> By the way: I don't know if it's normal, pinentry asked me the key >> password twice during the export. > > Should not as long as the primary key and the subkeys all use the same > passphrase. > > > Shalom-Salam, > > Werner > > -- > Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. > -- Daniele -------------- next part -------------- A non-text attachment was scrubbed... Name: gnupg.log Type: application/octet-stream Size: 15728 bytes Desc: not available URL:
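For anyone reproducing the logging setup Werner describes in the quoted message, the individual steps can be collected into a short shell session. This is only a sketch; /tmp/gpgtest is an example location standing in for the /foo/bar placeholder used in the thread:

# Create an isolated GnuPG home for testing.
mkdir -p /tmp/gpgtest && chmod 700 /tmp/gpgtest && cd /tmp/gpgtest

# Logging options from the thread, written into both config files.
cat > gpg-agent.conf <<EOF
log-file socket:///tmp/gpgtest/S.log
debug 1024
verbose
EOF
cp gpg-agent.conf gpg.conf

# In a second terminal, collect the log output:
#   watchgnupg --time-only --force /tmp/gpgtest/S.log | tee gnupg.log

# Start an agent bound to this home directory and run all tests
# from the shell it spawns.
GNUPGHOME=/tmp/gpgtest gpg-agent --daemon /bin/bash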