From matthew.summers at syapse.com Thu Jun 1 17:53:55 2017
From: matthew.summers at syapse.com (Matthew Summers)
Date: Thu, 1 Jun 2017 10:53:55 -0500
Subject: Follow-up to Crashes with gpg-agent 2.1.18
In-Reply-To:
References: <87k26savh5.fsf@fifthhorseman.net> <87bmrqblrh.fsf@iwagami.gniibe.org>
Message-ID:

With the latest release I still experience errors with parallel decryption (10 in parallel) using a 4096 bit RSA key.

```
GPG fails with: gpg: decryption failed: No secret key
```

GPG-Agent displays the following error msgs (log level guru) for ~15 - 25% of the decrypt operations.

```
DBG: rsa_decrypt => Cannot allocate memory
decryption failed: Cannot allocate memory
command 'PKDECRYPT' failed: Cannot allocate memory
DBG: chan_19 -> ERR 16810070 Cannot allocate memory
```

If I push the parallelism up to 50, I can basically destroy the agent: 98% of decrypt operations fail, and it crashes in a similar manner as we see here: https://lists.gnutls.org/pipermail/gnupg-users/2017-March/057940.html

gpg-agent[2191] command 'PKDECRYPT' failed: Cannot allocate memory
gpg-agent[2191] DBG: chan_15 -> ERR 16810070 Cannot allocate memory
gpg-agent[2191] DBG: chan_15 <- [eof]
gpg-agent[2191] DBG: rsa_decrypt res: [out of core]
gpg-agent[2191] Ohhhh jeeee: ... this is a bug (sexp.c:1433:do_vsexp_sscan)

At the time of the error there is at least 10GB of free RAM available. I have also increased the ulimits to the following (basically unlimited for everything).

data seg size       unlimited
max locked memory   unlimited
max memory size     unlimited
stack size          unlimited
virtual memory      unlimited

gpg (GnuPG) 2.1.21
libgcrypt 1.7.6
npth 1.4

It may be notable that when using a smartcard (yubikey4 or nitrokey), it appears that all decrypt operations are serialized and we see no failures. None.
We are testing in the following manner (default-recipient is a local 4096 rsa key)

```
echo test | gpg -aer -o gpg.asc
yes gpg.asc | head -n 50 | xargs -n 1 -P 50 gpg -qd
```

We change the number 50 in both places to alter the level of parallelism.

Any help here would be greatly appreciated.

Kind Regards,
Matt Summers

From gniibe at fsij.org Fri Jun 2 01:31:31 2017
From: gniibe at fsij.org (NIIBE Yutaka)
Date: Fri, 02 Jun 2017 08:31:31 +0900
Subject: Follow-up to Crashes with gpg-agent 2.1.18
In-Reply-To:
References: <87k26savh5.fsf@fifthhorseman.net> <87bmrqblrh.fsf@iwagami.gniibe.org>
Message-ID: <878tlbi8to.fsf@iwagami.gniibe.org>

Hello,

Thank you for keeping at this issue. We are not ignoring the problem; I am working on a similar bug report:

    https://dev.gnupg.org/T3027

As for the original crash report of this thread, I think that it is the same thing. I have been trying to fix the crash itself, and I fixed several bugs in gpg-agent/npth/libgcrypt. I believe that I identified the cause of the crash, and proposed a particular fix for libgcrypt (in D431). So, I think that I understand it to some extent.

The crash itself will be gone with the next releases of nPth (for macOS) and libgcrypt. For the "Cannot allocate memory" error, we need another approach like throttling, I think. I mean, we need to regulate the rate of accepting requests.

Matthew Summers wrote:
> ```
> GPG fails with: gpg: decryption failed: No secret key
> ```
>
> GPG-Agent displays the following error msgs (log level guru) for ~15 -
> 25% of the decrypt operations.
>
> ```
> DBG: rsa_decrypt => Cannot allocate memory
> decryption failed: Cannot allocate memory
> command 'PKDECRYPT' failed: Cannot allocate memory
> DBG: chan_19 -> ERR 16810070 Cannot allocate memory
> ```

With the current implementation, this is (somehow) expected for many requests (in the situation with no smartcard).

> At the time of the error there is at least 10GB of free RAM available.

The limitation comes from the fact that we only have 32KB or 64KB of secure memory; the region is mlock(2)-ed to avoid data transfer to swap storage... even if we have multiple gigabytes of memory.

> It may be notable that when using a smartcard (yubikey4 or nitrokey),
> it appears that all decrypt operations are serialized and we see no
> failures. None.

Yes. Perhaps it would be good for libgcrypt to have an API showing an application (in this case gpg-agent) how much secure memory is used (and how much is left), so that the application can throttle accepting requests. Or, more simply, we could introduce a limitation to gpg-agent, say, only accepting some fixed number of simultaneous connections.
--

From wk at gnupg.org Fri Jun 2 10:31:31 2017
From: wk at gnupg.org (Werner Koch)
Date: Fri, 02 Jun 2017 10:31:31 +0200
Subject: [Announce] Libgcrypt 1.7.7 released
Message-ID: <87o9u67puk.fsf@wheatstone.g10code.de>

Hi!

The GnuPG Project is pleased to announce the availability of Libgcrypt version 1.7.7. This is a bug fix release.

Libgcrypt is a general purpose library of cryptographic building blocks. It is originally based on code used by GnuPG. It does not provide any implementation of OpenPGP or other protocols. Thorough understanding of applied cryptography is required to use Libgcrypt.

Noteworthy changes in version 1.7.7 (2017-06-02) [C21/A1/R7]
===================================

 * Bug fixes:

   - Fix possible timing attack on EdDSA session key.

   - Fix long standing bug in secure memory implementation which could
     lead to a segv on free. [bug#3027]

Download
========

Source code is hosted at the GnuPG FTP server and its mirrors as listed at https://gnupg.org/download/mirrors.html . On the primary server the source tarball and its digital signature are:

 ftp://ftp.gnupg.org/gcrypt/libgcrypt/libgcrypt-1.7.7.tar.bz2 (2794k)
 ftp://ftp.gnupg.org/gcrypt/libgcrypt/libgcrypt-1.7.7.tar.bz2.sig

That file is bzip2 compressed.
A gzip compressed version is here:

 ftp://ftp.gnupg.org/gcrypt/libgcrypt/libgcrypt-1.7.7.tar.gz (3290k)
 ftp://ftp.gnupg.org/gcrypt/libgcrypt/libgcrypt-1.7.7.tar.gz.sig

The same files are also available via HTTP:

 https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.7.7.tar.bz2
 https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.7.7.tar.bz2.sig
 https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.7.7.tar.gz
 https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.7.7.tar.gz.sig

In order to check that the version of Libgcrypt you downloaded is an original and unmodified file please follow the instructions found at . In short, you may use one of the following methods:

 - Check the supplied OpenPGP signature.  For example to check the
   signature of the file libgcrypt-1.7.7.tar.bz2 you would use this
   command:

     gpg --verify libgcrypt-1.7.7.tar.bz2.sig libgcrypt-1.7.7.tar.bz2

   This checks whether the signature file matches the source file.
   You should see a message indicating that the signature is good and
   made by one or more of the release signing keys.  Make sure that
   this is a valid key, either by matching the shown fingerprint
   against a trustworthy list of valid release signing keys or by
   checking that the key has been signed by trustworthy other keys.
   See the end of this mail for information on the signing keys.

 - If you are not able to use an existing version of GnuPG, you have
   to verify the SHA-1 checksum.  On Unix systems the command to do
   this is either "sha1sum" or "shasum".  Assuming you downloaded the
   file libgcrypt-1.7.7.tar.bz2, you run the command like this:

     sha1sum libgcrypt-1.7.7.tar.bz2

   and check that the output matches the first line from this list:

     ea4ae1a4dba51f15095319419d7b42a0bf160384  libgcrypt-1.7.7.tar.bz2
     fc5b02758dc47fde70a155a05efe9d92a1778f8d  libgcrypt-1.7.7.tar.gz

   You should also verify that the checksums above are authentic by
   matching them with copies of this announcement.  Those copies can
   be found at other mailing lists, web sites, and search engines.
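For scripted downloads, the same SHA-1 comparison can be done programmatically. A minimal Python sketch (not part of the official instructions; the expected digest is the one quoted above for the bz2 tarball):

```python
import hashlib

def sha1_matches(path, expected_hex):
    """Compute the SHA-1 of the file at `path` in chunks and compare it
    against the expected hex digest from the release announcement."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()

# Example, run in the directory holding the downloaded tarball:
# sha1_matches("libgcrypt-1.7.7.tar.bz2",
#              "ea4ae1a4dba51f15095319419d7b42a0bf160384")
```

As the announcement notes, this only detects corruption or tampering if the expected digest itself comes from a trustworthy copy of this mail; the OpenPGP signature check above is the stronger method.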
Copying
=======

Libgcrypt is distributed under the terms of the GNU Lesser General Public License (LGPLv2.1+). The helper programs as well as the documentation are distributed under the terms of the GNU General Public License (GPLv2+). The file LICENSES has notices about contributions that require that these additional notices are distributed.

Support
=======

For help on developing with Libgcrypt you should read the included manual and optionally ask on the gcrypt-devel mailing list [1].

A listing with commercial support offers for Libgcrypt and related software is available at the GnuPG web site [2].

If you are a developer and you need a certain feature for your project, please do not hesitate to bring it to the gcrypt-devel mailing list for discussion.

Maintenance and development of Libgcrypt is mostly financed by donations; see . We currently employ 4 full-time developers, one part-timer, and one contractor to work on GnuPG and closely related software like Libgcrypt.

Thanks
======

We would like to thank all the people who helped with this release, be it testing, coding, translating, suggesting, auditing, administering the servers, spreading the word, and answering questions on the mailing lists. Also many thanks to all our donors [3].

For the GnuPG hackers,

 Werner

[1] https://lists.gnupg.org/mailman/listinfo/gcrypt-devel
[2] https://www.gnupg.org/service.html
[3] https://gnupg.org/donate/kudos.html

p.s. This is an announcement only mailing list. Please send replies only to the gcrypt-devel 'at' gnupg.org mailing list.

p.p.s List of Release Signing Keys:

To guarantee that a downloaded GnuPG version has not been tampered with by malicious entities we provide signature files for all tarballs and binary versions. The keys are also signed by the long term keys of their respective owners.
Current releases are signed by one or more of these five keys: 2048R/4F25E3B6 2011-01-12 [expires: 2019-12-31] Key fingerprint = D869 2123 C406 5DEA 5E0F 3AB5 249B 39D2 4F25 E3B6 Werner Koch (dist sig) rsa2048/E0856959 2014-10-29 [expires: 2019-12-31] Key fingerprint = 46CC 7308 65BB 5C78 EBAB ADCF 0437 6F3E E085 6959 David Shaw (GnuPG Release Signing Key) rsa2048/33BD3F06 2014-10-29 [expires: 2016-10-28] Key fingerprint = 031E C253 6E58 0D8E A286 A9F2 2071 B08A 33BD 3F06 NIIBE Yutaka (GnuPG Release Key) rsa2048/7EFD60D9 2014-10-19 [expires: 2020-12-31] Key fingerprint = D238 EA65 D64C 67ED 4C30 73F2 8A86 1B1C 7EFD 60D9 Werner Koch (Release Signing Key) rsa3072/4B092E28 2017-03-17 [expires: 2027-03-15] Key fingerprint = 5B80 C575 4298 F0CB 55D8 ED6A BCEF 7E29 4B09 2E28 Andre Heinecke (Release Signing Key) You may retrieve these keys from a keyserver using this command gpg --keyserver hkp://keys.gnupg.net --recv-keys \ 249B39D24F25E3B6 04376F3EE0856959 \ 2071B08A33BD3F06 8A861B1C7EFD60D9 BCEF7E294B092E28 The keys are also available at https://gnupg.org/signature_key.html and in any recently released GnuPG tarball in the file g10/distsigkey.gpg . Note that this mail has been signed by a different key. -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 194 bytes Desc: not available URL: -------------- next part -------------- _______________________________________________ Gnupg-announce mailing list Gnupg-announce at gnupg.org http://lists.gnupg.org/mailman/listinfo/gnupg-announce From look at my.amazin.horse Fri Jun 2 14:07:02 2017 From: look at my.amazin.horse (Vincent Breitmoser) Date: Fri, 2 Jun 2017 14:07:02 +0200 Subject: OpenPGP Secret Key Transfer Message-ID: <20170602120702.2l2qqav4teu7b77x@calamity> Hi there, one of the long time unsolved problems we had with OpenKeychain was a good way to transfer secret keys between devices, particularly between Desktop and Mobile. We finally came up with a concept based on qr-code authenticated TLS-PSK via local network, which I implemented this week: https://github.com/open-keychain/open-keychain/pull/2117 You can see it in action here: http://valodim.stratum0.net/transfer_active.gif http://valodim.stratum0.net/transfer_passive.gif The use of TLS-PSK ensures that data on the wire has no value, except for participants on the local network who have access to the PSK, and only during the time of exchange. The usefulness of this mechanism is of course limited until it is supported on more platforms, which is why I approached Andre about this and we discussed the idea together with Werner earlier this week. Andre asked me to write a short spec and post it here, to collect feedback: https://pad.stratum0.org/p/openpgp-skt I went over this with dkg and worked out some warts, and he also seemed interested in writing a standalone client. Special thanks to Oliver Wiese and his students at FU Berlin, who got this ball rolling! 
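To make the concept concrete, a rough Python sketch of the key-exchange setup: the sending device generates a fresh PSK and packs it, together with its listening address, into the QR code; the scanning device parses the payload and uses the PSK to authenticate the TLS connection. The `OPENPGPSKT:` payload format and all function names here are made up for illustration; the actual wire format is the one defined in the linked spec.

```python
import secrets

def make_session_psk(nbytes=16):
    """Generate a fresh random pre-shared key for one transfer session."""
    return secrets.token_hex(nbytes)

def make_qr_payload(host, port, psk_hex):
    """Pack the listening address and the PSK into the string that is
    rendered as a QR code (hypothetical format, for illustration only)."""
    return "OPENPGPSKT:{}:{}/{}".format(host, port, psk_hex)

def parse_qr_payload(payload):
    """Inverse of make_qr_payload, run by the device scanning the code."""
    assert payload.startswith("OPENPGPSKT:")
    addr, psk_hex = payload[len("OPENPGPSKT:"):].rsplit("/", 1)
    host, port = addr.rsplit(":", 1)
    return host, int(port), psk_hex
```

The security property described above falls out of this shape: the PSK never travels over the network, only over the visual QR channel, so a passive observer of the wire sees nothing usable.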
- V

From wk at gnupg.org Fri Jun 2 16:27:28 2017
From: wk at gnupg.org (Werner Koch)
Date: Fri, 02 Jun 2017 16:27:28 +0200
Subject: OpenPGP Secret Key Transfer
In-Reply-To: <20170602120702.2l2qqav4teu7b77x@calamity> (Vincent Breitmoser's message of "Fri, 2 Jun 2017 14:07:02 +0200")
References: <20170602120702.2l2qqav4teu7b77x@calamity>
Message-ID: <87r2z25usv.fsf@wheatstone.g10code.de>

On Fri, 2 Jun 2017 14:07, look at my.amazin.horse said:

> Desktop and Mobile. We finally came up with a concept based on qr-code
> authenticated TLS-PSK via local network, which I implemented this week:

I briefly looked at the specs, which made things more clear to me after our phone conference.

One immediate problem I see is the use of an arbitrary TCP port. A common use case for moving keys between devices is meetings. There you often have the corporate network and a separate guest network which are physically local but, from the topology, different networks. Thus the FW rules won't allow data to pass between them. To a large extent you have this problem with all peer-to-peer protocols on the Internet (meaning connected network segments).

Thus I would suggest to use a dedicated near-field protocol like Bluetooth. Or piggyback your protocol on another protocol which is known to interconnect devices without problems: VoIP or maybe Signal. Right, that is more effort on the software side but it avoids lots of practical problems.

Shalom-Salam,

   Werner

--
Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 227 bytes
Desc: not available
URL: 

From look at my.amazin.horse Fri Jun 2 17:14:18 2017
From: look at my.amazin.horse (Vincent Breitmoser)
Date: Fri, 2 Jun 2017 17:14:18 +0200
Subject: OpenPGP Secret Key Transfer
In-Reply-To: <87r2z25usv.fsf@wheatstone.g10code.de>
References: <20170602120702.2l2qqav4teu7b77x@calamity> <87r2z25usv.fsf@wheatstone.g10code.de>
Message-ID: <20170602151418.rjdc3uqupjcc6pow@calamity>

> A common use case for moving keys between devices is meetings.

Moving secret keys between devices at a meeting, is this a common use case? Can you elaborate?

> Thus I would suggest to use a dedicated near-field protocol like
> Bluetooth.

Even between Android devices, the experiences I've had as a user with apps that transfer data via bluetooth have been horrible. Throwing linux and windows into that mix, I can't imagine this approach leading to useful results within reasonable effort.

> Right, that is more effort on the software side but it avoids lots of
> practical problems.

WebRTC is a candidate that I have looked into; it is basically ICE for transport and DTLS for crypto. My conclusion was that we should stick with simplicity for now, and we can always make the transport layer more complex later if we notice that this leads to problems in practice.

- V

From wk at gnupg.org Fri Jun 2 17:54:46 2017
From: wk at gnupg.org (Werner Koch)
Date: Fri, 02 Jun 2017 17:54:46 +0200
Subject: OpenPGP Secret Key Transfer
In-Reply-To: <20170602151418.rjdc3uqupjcc6pow@calamity> (Vincent Breitmoser's message of "Fri, 2 Jun 2017 17:14:18 +0200")
References: <20170602120702.2l2qqav4teu7b77x@calamity> <87r2z25usv.fsf@wheatstone.g10code.de> <20170602151418.rjdc3uqupjcc6pow@calamity>
Message-ID: <87bmq62xmh.fsf@wheatstone.g10code.de>

On Fri, 2 Jun 2017 17:14, look at my.amazin.horse said:

> Moving secret keys between devices at a meeting, is this a common use
> case? Can you elaborate?

Project related (sub)keys. Not very common today, but I hope in the future this will be standard practice.

> Even between Android devices, the experiences I've had as a user with
> apps that transfer data via bluetooth have been horrible. Throwing linux
> and windows into that mix, I can't imagine this approach leading to
> useful results within reasonable effort.

I have always used BT to transfer contacts and calendars, between cell phones and desktops. In these cloudy times it might not be en vogue anymore - don't know.

> stick with simplicity now, and can always make the transport layer more
> complex later when we notice that this leads to problems in practice.

I agree that using a URL allows switching to a different transport protocol. So, following good old Internet practice, it is okay to first test the code and then work on the specs.

Salam-Shalom,

   Werner

--
Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 227 bytes
Desc: not available
URL: 

From dkg at fifthhorseman.net Sat Jun 3 00:36:53 2017
From: dkg at fifthhorseman.net (Daniel Kahn Gillmor)
Date: Fri, 02 Jun 2017 18:36:53 -0400
Subject: Follow-up to Crashes with gpg-agent 2.1.18
In-Reply-To: <878tlbi8to.fsf@iwagami.gniibe.org>
References: <87k26savh5.fsf@fifthhorseman.net> <87bmrqblrh.fsf@iwagami.gniibe.org> <878tlbi8to.fsf@iwagami.gniibe.org>
Message-ID: <87o9u6f24a.fsf@fifthhorseman.net>

On Fri 2017-06-02 08:31:31 +0900, NIIBE Yutaka wrote:
>> At the time of the error there is at least 10GB of free RAM available.
>
> The limitation comes from the fact that we only have 32KB or 64KB of
> secure memory; the region is mlock(2)-ed to avoid data transfer to
> swap storage... even if we have multiple gigabytes of memory.

is this a reasonable conclusion today? shouldn't swap be encrypted on machines where it's a risk?
not to be leakage-nihilist or anything, but hibernation itself can't be prevented at the application level, right? So are these "out of core" crashes just gratuitous outages?

> Perhaps it would be good for libgcrypt to have an API showing an
> application (in this case gpg-agent) how much secure memory is used
> (and how much is left), so that the application can throttle accepting
> requests.
>
> Or, more simply, we could introduce a limitation to gpg-agent, say,
> only accepting some fixed number of simultaneous connections.

what would gpg-agent do when the limit is reached? it would be better if a thread could perform a blocking call when asking for more core, so that the thread requesting the limited resource was the thing blocked, and other threads could go about their business.

    --dkg

From gniibe at fsij.org Mon Jun 5 02:33:10 2017
From: gniibe at fsij.org (NIIBE Yutaka)
Date: Mon, 05 Jun 2017 09:33:10 +0900
Subject: Follow-up to Crashes with gpg-agent 2.1.18
In-Reply-To: <87o9u6f24a.fsf@fifthhorseman.net>
References: <87k26savh5.fsf@fifthhorseman.net> <87bmrqblrh.fsf@iwagami.gniibe.org> <878tlbi8to.fsf@iwagami.gniibe.org> <87o9u6f24a.fsf@fifthhorseman.net>
Message-ID: <8760gb5l4p.fsf@iwagami.gniibe.org>

Daniel Kahn Gillmor wrote:
> On Fri 2017-06-02 08:31:31 +0900, NIIBE Yutaka wrote:
>
>>> At the time of the error there is at least 10GB of free RAM available.
>>
>> The limitation comes from the fact that we only have 32KB or 64KB of
>> secure memory; the region is mlock(2)-ed to avoid data transfer to
>> swap storage... even if we have multiple gigabytes of memory.
>
> is this a reasonable conclusion today? shouldn't swap be encrypted on
> machines where it's a risk? not to be leakage-nihilist or anything, but
> hibernation itself can't be prevented at the application level, right?
> So are these "out of core" crashes just gratuitous outages?

I described the current implementation, and why the error occurs.

> what would gpg-agent do when the limit is reached? it would be better
> if a thread could perform a blocking call when asking for more core, so
> that the thread requesting the limited resource was the thing blocked,
> and other threads could go about their business.

I think that simpler is better here, to avoid deadlocks: I get half, you get half; when both need more, both wait forever.

In my opinion, the current implementation of crypto with MPI could be improved. Currently, it allocates memory during its computation. It would be good if it could allocate all the memory needed for the computation of a particular crypto method in advance, atomically (or block).
--

From matthew.summers at syapse.com Mon Jun 5 05:35:59 2017
From: matthew.summers at syapse.com (Matthew Summers)
Date: Sun, 4 Jun 2017 22:35:59 -0500
Subject: Follow-up to Crashes with gpg-agent 2.1.18
In-Reply-To: <878tlbi8to.fsf@iwagami.gniibe.org>
References: <87k26savh5.fsf@fifthhorseman.net> <87bmrqblrh.fsf@iwagami.gniibe.org> <878tlbi8to.fsf@iwagami.gniibe.org>
Message-ID:

On Thu, Jun 1, 2017 at 6:31 PM, NIIBE Yutaka wrote:
> The limitation comes from the fact that we only have 32KB or 64KB of
> secure memory; the region is mlock(2)-ed to avoid data transfer to
> swap storage... even if we have multiple gigabytes of memory.

Hi, huge thanks for the effort here! I can see a definite improvement in stability with the newly released libs.

I have a question for you: this mlock(2)'d region, could it be made a run-time setting, or at least changed at compile time via configure? I would be keenly interested to know whether allocating a larger mlock(2)'d region would address the problem I have. I understand that it's often needed and desirable to restrict memory regions to smaller sizes.

It's probably true that for most users, highly parallelized decryption operations are rare. However, it's not difficult to demonstrate use cases where it's important to handle highly parallelized requests to gpg-agent.
At present, if I make too many calls too fast to gpg-agent, I can crash it very easily.

Maybe I can figure out how to set the AC_DEFINE HAVE_BROKEN_MLOCK seen in acinclude.m4. Presumably this disables the secure region. I am already building with `--enable-large-secmem`, so I assume it's at 64KB now.

Anyway, thanks again!
Matt

From matthew.summers at syapse.com Mon Jun 5 05:41:53 2017
From: matthew.summers at syapse.com (Matthew Summers)
Date: Sun, 4 Jun 2017 22:41:53 -0500
Subject: Follow-up to Crashes with gpg-agent 2.1.18
In-Reply-To:
References: <87k26savh5.fsf@fifthhorseman.net> <87bmrqblrh.fsf@iwagami.gniibe.org> <878tlbi8to.fsf@iwagami.gniibe.org>
Message-ID:

On Sun, Jun 4, 2017 at 10:35 PM, Matthew Summers wrote:
> `--enable-large-secmem`, so I assume it's at 64KB now.

Ha, I wonder if this will take care of things

diff --git a/configure b/configure
index 61a111e..96850dd 100755
--- a/configure
+++ b/configure
@@ -5536,7 +5536,7 @@ fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $large_secmem" >&5
 $as_echo "$large_secmem" >&6; }
 if test "$large_secmem" = yes ; then
-  SECMEM_BUFFER_SIZE=65536
+  SECMEM_BUFFER_SIZE=262144
 else
   SECMEM_BUFFER_SIZE=32768
 fi

From matthew.summers at syapse.com Mon Jun 5 06:00:32 2017
From: matthew.summers at syapse.com (Matthew Summers)
Date: Sun, 4 Jun 2017 23:00:32 -0500
Subject: Follow-up to Crashes with gpg-agent 2.1.18
In-Reply-To:
References: <87k26savh5.fsf@fifthhorseman.net> <87bmrqblrh.fsf@iwagami.gniibe.org> <878tlbi8to.fsf@iwagami.gniibe.org>
Message-ID:

On Sun, Jun 4, 2017 at 10:41 PM, Matthew Summers wrote:
> On Sun, Jun 4, 2017 at 10:35 PM, Matthew Summers
> wrote:
>> `--enable-large-secmem`, so I assume it's at 64KB now.
>
> Ha, I wonder if this will take care of things
>
> diff --git a/configure b/configure
> index 61a111e..96850dd 100755
> --- a/configure
> +++ b/configure
> @@ -5536,7 +5536,7 @@ fi
>  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $large_secmem" >&5
>  $as_echo "$large_secmem" >&6; }
>  if test "$large_secmem" = yes ; then
> -  SECMEM_BUFFER_SIZE=65536
> +  SECMEM_BUFFER_SIZE=262144
>  else
>    SECMEM_BUFFER_SIZE=32768
>  fi

Using SECMEM_BUFFER_SIZE=1048576 I can run 100 parallel decrypt operations again without crashes! (i.e. yes gpg.txt.asc | head -n 50 | xargs -n 1 -P 50 gpg -qd)

From gniibe at fsij.org Mon Jun 5 09:46:29 2017
From: gniibe at fsij.org (NIIBE Yutaka)
Date: Mon, 05 Jun 2017 16:46:29 +0900
Subject: Fwd: card_status - change-request to update allways
In-Reply-To:
References: <558E1C26-A190-48FE-ACFB-A050A7FC812E@gmail.com>
Message-ID: <87zidmc1wq.fsf@fsij.org>

Myonium wrote:
> Any chance to get this change pushed into the next build?
> ----------------------snip-------------------------
> diff --git a/g10/card-util.c b/g10/card-util.c
> index 78cd52b..950b76f 100644
> --- a/g10/card-util.c
> +++ b/g10/card-util.c
> @@ -376,7 +376,7 @@ current_card_status (ctrl_t ctrl, estream_t fp,
>    if (serialno && serialnobuflen)
>      *serialno = 0;
>
> -  rc = agent_scd_learn (&info, 0);
> +  rc = agent_scd_learn (&info, 1);
>    if (rc)
>      {
>        if (opt.with_colons)
> ----------------------snip-------------------------
FYI, we have a ticket:

    https://dev.gnupg.org/T2898

under this parent ticket:

    https://dev.gnupg.org/T2291
--

From dkg at fifthhorseman.net Sun Jun 4 23:04:59 2017
From: dkg at fifthhorseman.net (Daniel Kahn Gillmor)
Date: Sun, 04 Jun 2017 17:04:59 -0400
Subject: OpenPGP Secret Key Transfer
In-Reply-To: <87bmq62xmh.fsf@wheatstone.g10code.de>
References: <20170602120702.2l2qqav4teu7b77x@calamity> <87r2z25usv.fsf@wheatstone.g10code.de> <20170602151418.rjdc3uqupjcc6pow@calamity> <87bmq62xmh.fsf@wheatstone.g10code.de>
Message-ID: <87efuzea6c.fsf@fifthhorseman.net>

On Fri 2017-06-02 17:54:46 +0200, Werner Koch wrote:
> On Fri, 2 Jun 2017 17:14, look at my.amazin.horse said:
>
>> Moving secret keys between devices at a meeting, is this a common use
>> case? Can you elaborate?
>
> Project related (sub)keys. Not very common today, but I hope in the
> future this will be standard practice.

This idea is neat; but it sounds speculative and only useful to a certain subset of people (not everyone is involved with projects that use split or shared keys). So it doesn't seem like an objection that should block a plan to improve user experience for more common practice (many more people will have two devices than will work with project-related (sub)keys), so i lean in the direction of pursuing Vincent's original approach for now.

    --dkg
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 832 bytes
Desc: not available
URL: 

From myonium at gmail.com Mon Jun 5 17:57:46 2017
From: myonium at gmail.com (Myonium)
Date: Mon, 5 Jun 2017 17:57:46 +0200
Subject: card_status - change-request to update allways
In-Reply-To: <87zidmc1wq.fsf@fsij.org>
References: <558E1C26-A190-48FE-ACFB-A050A7FC812E@gmail.com> <87zidmc1wq.fsf@fsij.org>
Message-ID: <5915BC49-8809-4A01-87BE-9E93508DAE7A@gmail.com>

Thank you very much for notifying me. Yes, that's exactly what I'm trying to address.

Do you think this patch addresses the problem appropriately? Is there anything I could help with or contribute to get this implemented?

Thanks,
Ben

> On Jun 5, 2017, at 09:46, NIIBE Yutaka wrote:
>
> Myonium wrote:
>> Any chance to get this change pushed into the next build?
>> ----------------------snip-------------------------
>> diff --git a/g10/card-util.c b/g10/card-util.c
>> index 78cd52b..950b76f 100644
>> --- a/g10/card-util.c
>> +++ b/g10/card-util.c
>> @@ -376,7 +376,7 @@ current_card_status (ctrl_t ctrl, estream_t fp,
>>    if (serialno && serialnobuflen)
>>      *serialno = 0;
>>
>> -  rc = agent_scd_learn (&info, 0);
>> +  rc = agent_scd_learn (&info, 1);
>>    if (rc)
>>      {
>>        if (opt.with_colons)
>> ----------------------snip-------------------------
>
> FYI, we have a ticket:
>
>    https://dev.gnupg.org/T2898
>
> under this parent ticket:
>
>    https://dev.gnupg.org/T2291
> --

From guilhem at fripost.org Mon Jun 5 16:12:15 2017
From: guilhem at fripost.org (Guilhem Moulin)
Date: Mon, 5 Jun 2017 16:12:15 +0200
Subject: OpenPGP Secret Key Transfer
In-Reply-To: <87efuzea6c.fsf@fifthhorseman.net>
References: <20170602120702.2l2qqav4teu7b77x@calamity> <87r2z25usv.fsf@wheatstone.g10code.de> <20170602151418.rjdc3uqupjcc6pow@calamity> <87bmq62xmh.fsf@wheatstone.g10code.de> <87efuzea6c.fsf@fifthhorseman.net>
Message-ID: <20170605141215.6djyml6rhzw4jwze@localhost.localdomain>

Hi there,

On Sun, 04 Jun 2017 at 17:04:59 -0400, Daniel Kahn Gillmor wrote:
> On Fri 2017-06-02 17:54:46 +0200, Werner Koch wrote:
>> On Fri, 2 Jun 2017 17:14, look at my.amazin.horse said:
>>
>>> Moving secret keys between devices at a meeting, is this a common use
>>> case? Can you elaborate?
>>
>> Project related (sub)keys. Not very common today, but I hope in the
>> future this will be standard practice.
>
> This idea is neat; but it sounds speculative and only useful to a
> certain subset of people (not everyone is involved with projects that
> use split or shared keys).
For signature verification I think we would need some mechanism to tell GnuPG to limit the scope of this or that subkey. FWIW I brought that up to gnupg-devel in autumn 2015, and proposed to use certification notation to limit subkey scopes: https://lists.gnupg.org/pipermail/gnupg-devel/2015-November/030576.html (I wish I could limit the scope of the signing subkey I use for Debian packages for instance, and take it offline. ;-) Cheers, -- Guilhem. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dkg at fifthhorseman.net Mon Jun 5 22:40:54 2017 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Mon, 05 Jun 2017 16:40:54 -0400 Subject: Follow-up to Crashes with gpg-agent 2.1.18 In-Reply-To: References: <87k26savh5.fsf@fifthhorseman.net> <87bmrqblrh.fsf@iwagami.gniibe.org> <878tlbi8to.fsf@iwagami.gniibe.org> Message-ID: <87poeicgmh.fsf@fifthhorseman.net> On Sun 2017-06-04 22:35:59 -0500, Matthew Summers wrote: > It's probably true that for most users, highly parallelized decryption > operations are rare. However, it's not difficult to demonstrate use > cases where it's important to handle highly parallelized requests to > gpg-agent. It would really help this conversation to document a few of these real-world use cases. I appreciate that your demonstration scripts are narrow and targeted because it helps to isolate the problem, but having clear real-world cases will help in motivating the fix to actually get deployed. Where are highly-parallelized requests to gpg-agent likely to happen? --dkg -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From dkg at fifthhorseman.net Mon Jun 5 22:34:57 2017 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Mon, 05 Jun 2017 16:34:57 -0400 Subject: limiting scope of signing subkeys [was: Re: OpenPGP Secret Key Transfer] In-Reply-To: <20170605141215.6djyml6rhzw4jwze@localhost.localdomain> References: <20170602120702.2l2qqav4teu7b77x@calamity> <87r2z25usv.fsf@wheatstone.g10code.de> <20170602151418.rjdc3uqupjcc6pow@calamity> <87bmq62xmh.fsf@wheatstone.g10code.de> <87efuzea6c.fsf@fifthhorseman.net> <20170605141215.6djyml6rhzw4jwze@localhost.localdomain> Message-ID: <87shjecgwe.fsf@fifthhorseman.net> On Mon 2017-06-05 16:12:15 +0200, Guilhem Moulin wrote: > For signature verification I think we would need some mechanism to tell > GnuPG to limit the scope of this or that subkey. FWIW I brought that up > to gnupg-devel in autumn 2015, and proposed to use certification > notation to limit subkey scopes: > > https://lists.gnupg.org/pipermail/gnupg-devel/2015-November/030576.html > > (I wish I could limit the scope of the signing subkey I use for Debian > packages for instance, and take it offline. ;-) I think this is an entirely different feature request than the one that Werner was talking about, which is why i've changed the Subject: line here. If i understand you correctly, i think you want more than just limiting the scope of your debian package signing subkey -- i think you want to limit the scope of your e-mail signing subkey so that it will not be considered acceptable for signing debian packages. is that right? to make a new subkey and mark it as capable only for package signing is just a matter of making up a new notation and marking it with *no* capabilities otherwise (or to mark it as just package-signing). But to get you the security constraints you want, you'll need to mark the *other* subkeys as incapable of signing packages. 
I'm happy to talk this over further -- the other approach would be to have a capability that means "signing software" as distinct from "signing messages" or "certifying identities". From an API perspective, i'm not sure how you'd phase that in on top of the existing signature verification API. How should GnuPG learn that the thing you're verifying is in the "software" domain as opposed to the "e-mail message" domain? --dkg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From wk at gnupg.org Tue Jun 6 00:03:07 2017 From: wk at gnupg.org (Werner Koch) Date: Tue, 06 Jun 2017 00:03:07 +0200 Subject: [Announce] GnuPG Funding Campaign Launched Message-ID: <871sqyytwk.fsf@wheatstone.g10code.de> Independent Encryption Software, GnuPG, Needs Financial Support D?sseldorf, Germany --- Tuesday, June 6, 2017. The GnuPG Project today announced the launch of a funding campaign to further support and improve its leading mail and data encryption software, GnuPG. The campaign aims to secure 15000 Euro per month in recurring donations from individual donors to finance the development of their free software. Donations can be made at the newly reworked website: Activists, journalists, lawyers, and many others rely on GnuPG to protect their communication. The software guards emails, files, and programs from government and criminal snooping and spying on Windows, Mac, and Linux. And, more than two-thirds of the servers running the Internet rely on GnuPG to verify the integrity of system updates. Ongoing government spying revelations have shown how little of our information is really safe. GnuPG is one of the few tools that can offer real protection, free of commercial interests. Edward Snowden used it to encrypt his communications with journalists. 
Many institutions use GnuPG because by using an open standard they can be sure that they will always be able to access their data. The 6 person development team is currently financed from a successful campaign in early 2015, regular donations from the Linux Foundation, Stripe, Facebook, and a few paid development projects. To ensure long-term stability the new campaign focuses on recurring donations and not one-time donations. Says lead developer Werner Koch: “We want to continue our work in the long term. But, we want to do so in such a way that our first loyalty is unambiguously to the general public. This means making sure that a majority of our funding comes from individual donors, and not corporations.” To highlight GnuPG's role in protecting data, user stories from 26 organizations including activist groups, news organizations, lawyers, and companies from all over the world have been collected. Their testimonials are presented in daily changing videos on the campaign site. About GNU Privacy Guard (GnuPG) Since 1997, GnuPG has allowed individuals and companies to encrypt and sign data and communication using the well-established and highly interoperable OpenPGP standard. It comes with state of the art cryptography and features a versatile key management system. GnuPG, also known as GPG or sometimes incorrectly as PGP, can be used standalone, but has all features needed for easy integration with other software. It is used as the core cryptography engine of a wealth of other applications: For example Thunderbird with Enigmail, Gpg4win, and GPGTools. Most operating systems use GnuPG's signing ability to protect system updates against malicious attempts to introduce backdoors. GnuPG is available free of charge, and comes with all source code to allow anyone to audit the software. About g10 Code GmbH g10 Code GmbH is the privately owned legal entity behind the GnuPG Project.
They employ all developers and keep all profits for the development of GnuPG and related free software. ### Media contacts: Neal H. Walfield, Werner Koch Email: media at gnupg.org Phone: +49-2104-4938797 (during European business hours) Twitter: @gnupg OpenPGP: 370C 0FC3 1293 B339 61C1 FC20 B367 1B93 6BA8 BCB2 _______________________________________________ Gnupg-announce mailing list Gnupg-announce at gnupg.org http://lists.gnupg.org/mailman/listinfo/gnupg-announce From matthew.summers at syapse.com Tue Jun 6 07:43:23 2017 From: matthew.summers at syapse.com (Matthew Summers) Date: Tue, 6 Jun 2017 00:43:23 -0500 Subject: Follow-up to Crashes with gpg-agent 2.1.18 In-Reply-To: <87poeicgmh.fsf@fifthhorseman.net> References: <87k26savh5.fsf@fifthhorseman.net> <87bmrqblrh.fsf@iwagami.gniibe.org> <878tlbi8to.fsf@iwagami.gniibe.org> <87poeicgmh.fsf@fifthhorseman.net> Message-ID: On Mon, Jun 5, 2017 at 3:40 PM, Daniel Kahn Gillmor wrote: > On Sun 2017-06-04 22:35:59 -0500, Matthew Summers wrote: >> It's probably true that for most users, highly parallelized decryption >> operations are rare. However, it's not difficult to demonstrate use >> cases where it's important to handle highly parallelized requests to >> gpg-agent. > > It would really help this conversation to document a few of these > real-world use cases. I appreciate that your demonstration scripts are > narrow and targeted because it helps to isolate the problem, but having > clear real-world cases will help in motivating the fix to actually > get deployed. > > Where are highly-parallelized requests to gpg-agent likely to happen? I am delighted to outline a scenario. First some background. There is an open source config management and remote execution tool written in Python called Salt. It lets the user store information formatted as yaml. It's useful to store secrets in these yaml files. It's good to encrypt these secrets [1] when storing them, encryption at rest being good and all that.
This yaml can be rendered for runtime use in the config management system. So, for example, let's say we have secret data encrypted with the entire Ops team as recipients, like an API token or something larger like private TLS key material. Further, let's say we need to put this secret onto 10000 systems in batches of 100. Salt provides a handy mechanism to make such a thing simple, via ssh even [2], if desired. With a few lines of code, our gpg'd secret, and a small bit of setup, we get to a point where the one-liner `salt-ssh -E 'myhosts' state.apply update-special-secret` will do the trick. That is, it will initiate 100 processes that each decrypt the secret, render out the execution data, ssh to the remote host, execute, and return relevant information to us, then pick up the next batch, and so on. The numbers here, of 100 or 1000, are totally arbitrary, but sufficiently large to demonstrate the scale. On a high CPU count system with lots of memory, it would not be terribly crazy to push these numbers much higher. In practice, with the secmem size set to 64k, we would run into issues with batch sizes of 10. I hope this is clear and helps. I could probably put together a demo of this, though I am not sure how to simulate 1000 hosts really nicely (read cheaply and easily). Ansible or Puppet can do similar things too, using GPG to store/decrypt secrets data. Please feel free to ask questions if what I have outlined above needs further explanation. Thanks again! Matt S.
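[Editor's sketch] Until gpg-agent grows request throttling, a batch job like the one above can defend itself by retrying transiently failing decrypts. A minimal sketch of a generic retry wrapper; the retry logic is the point here, and in practice it would wrap the `gpg -qd` call from the reproducer inside a small helper script driven by `xargs -P`:

```shell
#!/bin/sh
# Retry a command up to N times with a short pause, so a transient
# "Cannot allocate memory" from a busy gpg-agent does not fail the
# whole batch.
retry() {
    _max=$1; shift
    _n=0
    until "$@"; do
        _n=$((_n + 1))
        [ "$_n" -ge "$_max" ] && return 1
        sleep 1   # crude fixed backoff between attempts
    done
}

# e.g. inside a one-line helper script used with: xargs -n 1 -P 10 ./helper.sh
#   retry 3 gpg -qd "$1"
```

This only papers over the agent-side bug, but it keeps a deployment run from aborting on a handful of transient failures.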
[1] https://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.gpg.html [2] https://docs.saltstack.com/en/latest/topics/ssh/#running-salt-ssh-as-non-root-user [3] https://docs.saltstack.com/en/latest/topics/pillar/#the-pillar-get-function From guilhem at fripost.org Tue Jun 6 21:23:04 2017 From: guilhem at fripost.org (Guilhem Moulin) Date: Tue, 6 Jun 2017 21:23:04 +0200 Subject: limiting scope of signing subkeys In-Reply-To: <87shjecgwe.fsf@fifthhorseman.net> References: <20170602120702.2l2qqav4teu7b77x@calamity> <87r2z25usv.fsf@wheatstone.g10code.de> <20170602151418.rjdc3uqupjcc6pow@calamity> <87bmq62xmh.fsf@wheatstone.g10code.de> <87efuzea6c.fsf@fifthhorseman.net> <20170605141215.6djyml6rhzw4jwze@localhost.localdomain> <87shjecgwe.fsf@fifthhorseman.net> Message-ID: <20170606192304.s7kd5lurqjazqxjw@localhost.localdomain> On Mon, 05 Jun 2017 at 16:34:57 -0400, Daniel Kahn Gillmor wrote: > On Mon 2017-06-05 16:12:15 +0200, Guilhem Moulin wrote: >> For signature verification I think we would need some mechanism to tell >> GnuPG to limit the scope of this or that subkey. FWIW I brought that up >> to gnupg-devel in autumn 2015, and proposed to use certification >> notation to limit subkey scopes: >> >> https://lists.gnupg.org/pipermail/gnupg-devel/2015-November/030576.html >> >> (I wish I could limit the scope of the signing subkey I use for Debian >> packages for instance, and take it offline. ;-) > > I think this is an entirely different feature request than the one that > Werner was talking about, which is why i've changed the Subject: line > here. Oops yeah, apologies for hijacking the thread :-/ > If i understand you correctly, i think you want more than just limiting > the scope of your debian package signing subkey -- i think you want to > limit the scope of your e-mail signing subkey so that it will not be > considered acceptable for signing debian packages. is that right? That's correct indeed. 
> to make a new subkey and mark it as capable only for package signing is > just a matter of making up a new notation and marking it with *no* > capabilities otherwise (or to mark it as just package-signing). But to > get you the security constraints you want, you'll need to mark the > *other* subkeys as incapable of signing packages. > > I'm happy to talk this over further -- the other approach would be to > have a capability that means "signing software" as distinct from > "signing messages" or "certifying identities". I recall you and I discussed that on #debian-keyring a while ago (probably around the time I sent that mail to gnupg-devel) :-P Adding another capability sounds neat, but IMHO that won't scale if other folks want to limit the scope of their signing subkeys to other domains / types of data. > From an API perspective, i'm not sure how you'd phase that in on top > of the existing signature verification API. How should GnuPG learn > that the thing you're verifying is in the "software" domain as opposed > to the "e-mail message" domain? As I envisioned it, that new option “--assert-notation=” should make gpg and gpgv reject signatures made with a signing (sub)key that is lacking the given notation. With (yet :-/) another flag, the program would relax the behavior to accept the signature when *none* of the *non-revoked* signing (sub)keys have the given notation. Then of course one would need to pass these options to all signature verifiers; but when signature verification is done centrally like in the case of Debian it sounds feasible, right? -- Guilhem. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From look at my.amazin.horse Wed Jun 7 15:55:31 2017 From: look at my.amazin.horse (Vincent Breitmoser) Date: Wed, 07 Jun 2017 15:55:31 +0200 Subject: limiting scope of signing subkeys In-Reply-To: <20170606192304.s7kd5lurqjazqxjw@localhost.localdomain> References: <20170602120702.2l2qqav4teu7b77x@calamity> <87r2z25usv.fsf@wheatstone.g10code.de> <20170602151418.rjdc3uqupjcc6pow@calamity> <87bmq62xmh.fsf@wheatstone.g10code.de> <87efuzea6c.fsf@fifthhorseman.net> <20170605141215.6djyml6rhzw4jwze@localhost.localdomain> <87shjecgwe.fsf@fifthhorseman.net> <20170606192304.s7kd5lurqjazqxjw@localhost.localdomain> Message-ID: <9F40A60D-5603-4BA1-B90D-6B4AA384438E@my.amazin.horse> > With (yet :-/) another flag, the program would > relax the behavior to accept the signature when *none* of the > *non-revoked* signing (sub)keys have the given notation. Careful there: if the key is obtained via an untrusted channel, subkeys may be suppressed and this won't be caught by the usual fingerprint checks. This becomes relevant here, since the properties of one subkey depend on the presence of another.
- V (sent from K-9 Mail) From dkg at fifthhorseman.net Wed Jun 7 16:13:51 2017 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Wed, 07 Jun 2017 10:13:51 -0400 Subject: limiting scope of signing subkeys In-Reply-To: <20170606192304.s7kd5lurqjazqxjw@localhost.localdomain> References: <20170602120702.2l2qqav4teu7b77x@calamity> <87r2z25usv.fsf@wheatstone.g10code.de> <20170602151418.rjdc3uqupjcc6pow@calamity> <87bmq62xmh.fsf@wheatstone.g10code.de> <87efuzea6c.fsf@fifthhorseman.net> <20170605141215.6djyml6rhzw4jwze@localhost.localdomain> <87shjecgwe.fsf@fifthhorseman.net> <20170606192304.s7kd5lurqjazqxjw@localhost.localdomain> Message-ID: <87efuvc2cg.fsf@fifthhorseman.net> On Tue 2017-06-06 21:23:04 +0200, Guilhem Moulin wrote: > I recall you and I discussed that on #debian-keyring a while ago > (probably around the time I sent that mail to gnupg-devel) :-P Adding > another capability sounds neat, but IMHO that won't scale if other folks > want to limit the scope of their signing subkeys to other domains / > types of data. How about adding a non-critical notation "signing-scope" to the subkey binding signature (or to the self-sig, if the primary key is marked as signing-capable) which is a comma-separated list of domains? we could enumerate a few different domains and people could add them as they wanted:

 * email
 * software

then you'd add a new parameter to GnuPG's --verify-options "signing-scope=foo", and it would accept signatures only from:

 * signing-capable (sub)keys without the signing-scope notation
 * signing-capable (sub)keys with the signing-scope notation with "foo"
   in the list.

and signatures from any other key would be rejected. Then people who want to constrain their keys can just issue new subkey-binding signatures as needed. wdyt? --dkg -------------- next part -------------- A non-text attachment was scrubbed...
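[Editor's sketch] The acceptance rule dkg proposes is small enough to state as code. A sketch of the predicate as a plain shell function, under the assumption that the verifier has already extracted the (hypothetical, not-yet-implemented) "signing-scope" notation value from the subkey binding signature:

```shell
#!/bin/sh
# scope_ok SCOPES REQUIRED -- succeed when a signature should be accepted:
# either the (sub)key carries no signing-scope notation at all (empty
# SCOPES), or the comma-separated list contains the required domain.
scope_ok() {
    scopes=$1
    required=$2
    [ -z "$scopes" ] && return 0       # no notation: unconstrained key, accept
    case ",$scopes," in
        *",$required,"*) return 0 ;;   # required domain is in the list
        *)               return 1 ;;   # notation present, domain not listed
    esac
}
```

The nice property, visible in the first branch, is the upgrade path: keys that never adopt the notation keep working everywhere, while constrained keys are rejected outside their declared domains.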
Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From guilhem at fripost.org Wed Jun 7 18:44:55 2017 From: guilhem at fripost.org (Guilhem Moulin) Date: Wed, 7 Jun 2017 18:44:55 +0200 Subject: limiting scope of signing subkeys In-Reply-To: <9F40A60D-5603-4BA1-B90D-6B4AA384438E@my.amazin.horse> References: <20170602120702.2l2qqav4teu7b77x@calamity> <87r2z25usv.fsf@wheatstone.g10code.de> <20170602151418.rjdc3uqupjcc6pow@calamity> <87bmq62xmh.fsf@wheatstone.g10code.de> <87efuzea6c.fsf@fifthhorseman.net> <20170605141215.6djyml6rhzw4jwze@localhost.localdomain> <87shjecgwe.fsf@fifthhorseman.net> <20170606192304.s7kd5lurqjazqxjw@localhost.localdomain> <9F40A60D-5603-4BA1-B90D-6B4AA384438E@my.amazin.horse> Message-ID: <20170607164455.f7magdtgrkvszopv@localhost.localdomain> On Wed, 07 Jun 2017 at 15:55:31 +0200, Vincent Breitmoser wrote: >> With (yet :-/) another flag, the program would >>relax the behavior to accept the signature when *none* of the >>*non-revoked* signing (sub)keys have the given notation. > > Careful there: if the key is obtained via an untrusted channel, > subkeys may be suppressed and this won't be caught by the usual > fingerprint checks. This becomes relevant here, since the properties > of one subkey depend on the presence of other. Isn't that the same for subkey rotation via revocation + creation? A MiTM could strip away the revocation subpacket and the new subkey; gpg(1) would then accept signatures made by old subkey (until it expires), right? -- Guilhem. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From guilhem at fripost.org Wed Jun 7 19:03:35 2017 From: guilhem at fripost.org (Guilhem Moulin) Date: Wed, 7 Jun 2017 19:03:35 +0200 Subject: limiting scope of signing subkeys In-Reply-To: <87efuvc2cg.fsf@fifthhorseman.net> References: <20170602120702.2l2qqav4teu7b77x@calamity> <87r2z25usv.fsf@wheatstone.g10code.de> <20170602151418.rjdc3uqupjcc6pow@calamity> <87bmq62xmh.fsf@wheatstone.g10code.de> <87efuzea6c.fsf@fifthhorseman.net> <20170605141215.6djyml6rhzw4jwze@localhost.localdomain> <87shjecgwe.fsf@fifthhorseman.net> <20170606192304.s7kd5lurqjazqxjw@localhost.localdomain> <87efuvc2cg.fsf@fifthhorseman.net> Message-ID: <20170607170335.xn2hydeb775ine4b@localhost.localdomain> On Wed, 07 Jun 2017 at 10:13:51 -0400, Daniel Kahn Gillmor wrote: > then you'd add a new parameter to GnuPG's --verify-options > "signing-scope=foo", and it would accept signatures only from: > > * signing-capable (sub)keys without the signing-scope notation > * signing-capable (sub)keys with the signing-scope notation with "foo" > in the list. > > and signatures from any other key would be rejected. > > Then people who want to constrain their keys can just issue new > subkey-binding signatures as needed. > > wdyt? I like this! :-) Compared to the previous proposal this verification logic sounds a lot less error-prone for verifiers, while keeping an easy “upgrade path” for users willing to limit the scope of their signing (sub)keys. (I would for instance add another (annotated) subkey binding signature to the subkey used to sign this email, in order to limit its scope to the “email” domain. And generate another signing subkey to use, e.g., for code signing.) Thanks for the idea! -- Guilhem. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From wk at gnupg.org Thu Jun 8 18:28:18 2017 From: wk at gnupg.org (Werner Koch) Date: Thu, 08 Jun 2017 18:28:18 +0200 Subject: Non-blocking connect for dirmngr Message-ID: <87mv9io34t.fsf@wheatstone.g10code.de> Hi! When using --auto-key-retrieve or --auto-key-locate to automagically retrieve keys from the Web Key Directory (WKD) or keyservers it often happens that a server does not respond timely. Keyservers may be down and dirmngr would then select another keyserver. However, it may take several minutes until the connect call returns an error. Annoying. Even more annoying are WKD queries to servers which don't support this service and - worse - don't run a web server at all at the expected address. For example one of our core hackers has an address at iki.fi. Now when dirmngr wants to look up an address it tries https://iki.fi/foo/bar and hangs (plain http redirects to www.iki.fi and works). After it times out the code tries the next server listed for that address, until that one times out as well. Finally after 3 times the default timeout you get an error message back. That can be 15 minutes or more. Clearly not acceptable. The obvious solution to this is to use a lower timeout. However, Unix has no easy way to do this because connect(2) has no timeout parameter and the way it can be done used to be non-portable: You switch the socket into non-blocking mode, call connect, and then select on the socket. Now this works, but according to Stevens, systems use slightly different semantics to tell you the outcome of the operation. This is unfortunate but let's assume it works with today's systems without too much trouble. I implemented that in master and there are now default timeouts of 15 seconds for regular operations and 2 seconds for "unimportant" operations (looking up a key for verification). Works nicely on my Linux box but I have not yet tested on any other system.
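[Editor's sketch] Once this lands, those defaults can presumably be tuned per installation; a dirmngr.conf fragment, with option names assumed from the new --connect-timeout / --connect-quick-timeout options Werner documents in his postscript:

```
# dirmngr.conf -- sketch only; values shown are the stated defaults
connect-timeout 15
connect-quick-timeout 2
```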
There is code for Windows which builds but it has not yet been tested. I'd like to ask those of you who are using master on non-Debian/Linux boxes to try it out. For example put

  debug ipc,dns,network
  verbose
  log-file socket://

into dirmngr.conf, fire up watchgnupg

  watchgnupg --time-only --force $(gpgconf --list-dirs socketdir)/S.log

and in another term/screen run gpg-connect-agent --dirmngr as a test shell. Entering for example

  WKD_GET --quick --submission-address -- foo at iki.fi

should give you a Connection Refused after just a few seconds. Or does it not on your system? Shalom-Salam, Werner P.S. The new options we have are:

  --connect-timeout n
  --connect-quick-timeout n

Set the timeout for HTTP and generic TCP connection attempts to N seconds. The value set with the quick variant is used when the --quick option has been given to certain Assuan commands. The quick value is capped at the value of the regular connect timeout. The default values are 15 and 2 seconds. Note that the timeout values are for each connection attempt; the connection code will attempt to connect to all addresses listed for a server. -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From dkg at fifthhorseman.net Thu Jun 8 21:55:43 2017 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Thu, 08 Jun 2017 15:55:43 -0400 Subject: Non-blocking connect for dirmngr In-Reply-To: <87mv9io34t.fsf@wheatstone.g10code.de> References: <87mv9io34t.fsf@wheatstone.g10code.de> Message-ID: <87tw3qp83k.fsf@fifthhorseman.net> On Thu 2017-06-08 18:28:18 +0200, Werner Koch wrote: > When using --auto-key-retrieve or --auto-key-locate to automagically > retrieve keys from the Web Key Directory (WKD) or keyservers it often > happens that a server does not respond timely.
> > Keyservers may be down and dirmngr would then select another keyserver. > However, it may take several minutes until the connect call returns an > error. Annoying. thanks for working on fixing this, Werner! It's definitely important if GnuPG is going to be making these queries. > Even more annoying are WKD queries to servers which don't support this > service and - worse - don't run a web server at all at the expected > address. For example one of our core hackers has an address at iki.fi. > Now when dirmngr want to lookup an address it tries > https://iki.fi/foo/bar and hangs (plain http redirects to www.iki.fo and > works). After it times out the code tries the next server listed for > that address, until that one times out as well. Finally after 3 times > the default timeout you get an error message back. That can be 15 > minutes or more. Clearly not acceptable. Another approach could be "happy eyeballs" -- if the name resolves to 2 IP addresses, connect to both of them concurrently and take the first connection that completes. You wouldn't want to do this if there were a hundred IP addresses, but there's probably a reasonable middle-ground that gets you both resiliency when one host is down, and avoids flooding the network. perhaps up to 4 outstanding concurrent connections? And as they fail, if there are as-yet-untried addresses, they could launch them. that'd allow you to keep a reasonable timeout so that sluggish servers have a chance to respond, while not having responsive servers get stuck behind sluggish servers if they exist. --dkg -------------- next part -------------- A non-text attachment was scrubbed...
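[Editor's sketch] The racing idea in dkg's mail can be modeled without any networking at all. In this toy, `sleep` stands in for connect() latency and a marker file for an established connection; dirmngr itself would race non-blocking sockets inside one process rather than spawning subshells:

```shell
#!/bin/sh
# Toy model of "happy eyeballs": start a few connection attempts
# concurrently, then take the first one that reports success.
dir=$(mktemp -d)
( sleep 3; : > "$dir/slow" )  &   # working server, high latency
( sleep 1 )                   &   # fast server that refuses (no marker)
( sleep 2; : > "$dir/quick" ) &   # working server, lower latency

winner=
t=0
while [ -z "$winner" ] && [ "$t" -lt 10 ]; do
    sleep 1
    t=$((t + 1))
    for f in "$dir"/*; do
        if [ -e "$f" ]; then winner=${f##*/}; break; fi
    done
done
echo "first usable connection: $winner"
wait          # reap the remaining attempts
rm -rf "$dir"
```

The per-attempt timeout survives (the loop bound), but a slow attempt no longer blocks a fast one, which is the property dkg is after.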
Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From dgouttegattat at incenp.org Fri Jun 9 12:17:32 2017 From: dgouttegattat at incenp.org (Damien Goutte-Gattat) Date: Fri, 9 Jun 2017 12:17:32 +0200 Subject: Non-blocking connect for dirmngr In-Reply-To: <87mv9io34t.fsf@wheatstone.g10code.de> References: <87mv9io34t.fsf@wheatstone.g10code.de> Message-ID: Hi, On 06/08/2017 06:28 PM, Werner Koch wrote: > should give you a Connection Refused after just a few seconds. Or does > it not on your system? Connection Refused after 4 seconds here: 4 - 19:50:58 dirmngr[25707.5]: DBG: dns: resolve_dns_name(iki.fi): Success 4 - 19:51:02 dirmngr[25707.5]: can't connect to 'iki.fi': Connection refused The box is a Slackware64 14.2, running linux-4.10.13 with glibc-2.23. Damien -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From fgunbin at fastmail.fm Fri Jun 9 21:02:05 2017 From: fgunbin at fastmail.fm (Filipp Gunbin) Date: Fri, 09 Jun 2017 22:02:05 +0300 Subject: Non-blocking connect for dirmngr In-Reply-To: <87mv9io34t.fsf@wheatstone.g10code.de> (Werner Koch's message of "Thu, 08 Jun 2017 18:28:18 +0200") References: <87mv9io34t.fsf@wheatstone.g10code.de> Message-ID: Hi, I'm on macOS 10.12.5, built gnupg master today and getting 20 sec timeout before getting error. 
This is what watchgnupg outputs: [client at fd 4 connected (local)] 4 - 21:56:32 dirmngr[91488]: listening on socket '/Users/fgunbin/.gnupg/S.dirmngr' 4 - 21:56:33 dirmngr[91489.0]: permanently loaded certificates: 49 4 - 21:56:33 dirmngr[91489.0]: runtime cached certificates: 0 4 - 21:56:33 dirmngr[91489.0]: trusted certificates: 49 (48,0,0,1) 4 - 21:56:33 dirmngr[91489.6]: handler for fd 6 started 4 - 21:56:33 dirmngr[91489.6]: DBG: chan_6 -> # Home: /Users/fgunbin/.gnupg 4 - 21:56:33 dirmngr[91489.6]: DBG: chan_6 -> # Config: /Users/fgunbin/.gnupg/dirmngr.conf 4 - 21:56:33 dirmngr[91489.6]: DBG: chan_6 -> OK Dirmngr 2.1.22-beta44 at your service 4 - 21:56:33 dirmngr[91489.6]: connection from process -1 (501:20) 4 - 21:56:37 dirmngr[91489.6]: DBG: chan_6 <- WKD_GET --quick --submission-address -- foo at iki.fi 4 - 21:56:37 dirmngr[91489.6]: DBG: dns: fallback resolution order, files then DNS 4 - 21:56:37 dirmngr[91489.6]: DBG: dns: libdns initialized 4 - 21:56:57 dirmngr[91489.6]: DBG: dns: getsrv(_openpgpkey._tcp.iki.fi): Server indicated a failure 4 - 21:56:57 dirmngr[91489.6]: command 'WKD_GET' failed: Server indicated a failure 4 - 21:56:57 dirmngr[91489.6]: DBG: chan_6 -> ERR 219 Server indicated a failure Filipp From patrick at enigmail.net Sat Jun 10 09:13:49 2017 From: patrick at enigmail.net (Patrick Brunschwig) Date: Sat, 10 Jun 2017 09:13:49 +0200 Subject: Non-blocking connect for dirmngr In-Reply-To: References: <87mv9io34t.fsf@wheatstone.g10code.de> Message-ID: <373a7ba8-6937-e033-ab04-b7d95808546a@enigmail.net> On 09.06.17 21:02, Filipp Gunbin wrote: > Hi, I'm on macOS 10.12.5, built gnupg master today and getting 20 sec > timeout before getting error. 
> > This is what watchgnupg outputs: > > [client at fd 4 connected (local)] > 4 - 21:56:32 dirmngr[91488]: listening on socket '/Users/fgunbin/.gnupg/S.dirmngr' > 4 - 21:56:33 dirmngr[91489.0]: permanently loaded certificates: 49 > 4 - 21:56:33 dirmngr[91489.0]: runtime cached certificates: 0 > 4 - 21:56:33 dirmngr[91489.0]: trusted certificates: 49 (48,0,0,1) > 4 - 21:56:33 dirmngr[91489.6]: handler for fd 6 started > 4 - 21:56:33 dirmngr[91489.6]: DBG: chan_6 -> # Home: /Users/fgunbin/.gnupg > 4 - 21:56:33 dirmngr[91489.6]: DBG: chan_6 -> # Config: /Users/fgunbin/.gnupg/dirmngr.conf > 4 - 21:56:33 dirmngr[91489.6]: DBG: chan_6 -> OK Dirmngr 2.1.22-beta44 at your service > 4 - 21:56:33 dirmngr[91489.6]: connection from process -1 (501:20) > 4 - 21:56:37 dirmngr[91489.6]: DBG: chan_6 <- WKD_GET --quick --submission-address -- foo at iki.fi > 4 - 21:56:37 dirmngr[91489.6]: DBG: dns: fallback resolution order, files then DNS > 4 - 21:56:37 dirmngr[91489.6]: DBG: dns: libdns initialized > 4 - 21:56:57 dirmngr[91489.6]: DBG: dns: getsrv(_openpgpkey._tcp.iki.fi): Server indicated a failure > 4 - 21:56:57 dirmngr[91489.6]: command 'WKD_GET' failed: Server indicated a failure > 4 - 21:56:57 dirmngr[91489.6]: DBG: chan_6 -> ERR 219 Server indicated a failure I got a timeout after 4 seconds on my macOS 10.12.5 -Patrick From dkg at fifthhorseman.net Sun Jun 11 04:03:18 2017 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Sun, 11 Jun 2017 04:03:18 +0200 Subject: GPGME: const-ifying cbs argument for gpgme_data_new_from_cbs() Message-ID: <8760g35li1.fsf@fifthhorseman.net> Hey all-- i'm playing with gpgme using callbacks, and i noticed that gpgme_data_new_from_cbs takes its cbs argument (the struct of callback function pointers) as non-const. is there any reason that it plans to modify this object or is the lack of constness just an oversight? What is the expected lifespan of the cbs callback table? 
Can it be freed (or fall off the stack) before the gpgme_data_t is released? Should these expected constraints be documented? I'm up for improving the documentation if someone wants to explain the expectations here. Regards, --dkg From dkg at fifthhorseman.net Tue Jun 13 06:47:12 2017 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Tue, 13 Jun 2017 00:47:12 -0400 Subject: gpg-agent self-termination when private-keys-v1.d goes away Message-ID: <87k24gpk8f.fsf@fifthhorseman.net> hey folks-- gpg-agent currently knows to terminate itself when its socket is unlinked from the filesystem. to work around the sun_path length constraint, though, we're often putting the sockets in the /run/user/$(id -u) directory. This means that the workflow of:

  export GNUPGHOME=$(mktemp -d)
  # do some experiments
  rm -rf $GNUPGHOME

Actually leaves any associated gpg-agents running in the background. Do this as a part of an automated test suite, and you could easily end up with hundreds of agents or more that are still active. I propose to add private-keys-v1.d/ to the gpg-agent's inotify watchlist, and to have the agent terminate if it notices that directory being deleted as well. I think that will fix the concern described above on GNU/Linux at least. any objections or concerns about this approach? --dkg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From dkg at fifthhorseman.net Tue Jun 13 05:45:10 2017 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Mon, 12 Jun 2017 23:45:10 -0400 Subject: gpgme_op_import_keys() -- unclear documentation, problematic behavior Message-ID: <87mv9cpn3t.fsf@fifthhorseman.net> hey all-- the documentation for gpgme_op_import_keys() (in "info gpgme") describes this function:

  -- Function: gpgme_error_t gpgme_op_import_keys (gpgme_ctx_t CTX,
       gpgme_key_t *KEYS)

And it says: The function ‘gpgme_op_import_keys’
adds the keys described by the ‘NULL’ terminated array KEYS to the key ring of the crypto engine used by CTX. This function is the general interface to move a key from one crypto engine to another as long as they are compatible. In particular it is used to actually import and make keys permanent which have been retrieved from an external source (i.e. using ‘GPGME_KEYLIST_MODE_EXTERN’). (1) Only keys of the currently selected protocol of CTX are considered for import. Other keys specified by the KEYS are ignored. As of now all considered keys must have been retrieved using the same method, that is the used key listing mode must be identical. Even reading the footnote (which says "(1) Thus it is a replacement for the usual workaround of exporting and then importing a key to make an X.509 key permanent."), it's hard to tell what any of this means. It looks like it means i am encouraged to take a key listing from one gpgme context (e.g. a gpgme_key_t object extracted from gpgme_get_keylist_next(ctx0) or gpgme_get_key(ctx0)) and feed it into another (e.g. gpgme_op_import_keys(ctx1)). But in practice, looking at src/engine-gpg.c, if i use the gpgme_op_import_keys() form (instead of the keydata form), the backend actually uses --recv-keys on the importing context. This doesn't work at all if the keys are not on the public keyservers, or if the local host is offline. And even when keys are on the public keyservers and the local host is online, in the case where the two contexts may have specialized knowledge of the OpenPGP certificate (e.g. non-published certifications, freshly-generated subkeys, etc) it has particularly strange failure cases -- it'll result in different OpenPGP certificates held by the two contexts. Additionally, using the keyservers for this represents a metadata leakage, without any warning to the user that such a thing is planned. Finally, its final paragraph says:
if the import was completed successfully, ‘GPG_ERR_INV_VALUE’ if KEYDATA if CTX or KEYDATA is not a valid pointer, ‘GPG_ERR_CONFLICT’ if the key listing mode does not match, and ‘GPG_ERR_NO_DATA’ if no keys are considered for export. The mention of KEYDATA seems like it might be a copy/paste issue, since there is no KEYDATA in the function signature. Can anybody clarify these concerns? Have i misunderstood things, or should i try to use the tool differently? --dkg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From justus at g10code.com Tue Jun 13 12:45:51 2017 From: justus at g10code.com (Justus Winter) Date: Tue, 13 Jun 2017 12:45:51 +0200 Subject: gpg-agent self-termination when private-keys-v1.d goes away In-Reply-To: <87k24gpk8f.fsf@fifthhorseman.net> References: <87k24gpk8f.fsf@fifthhorseman.net> Message-ID: <87bmpskvxc.fsf@thinkbox.jade-hamburg.de> Daniel Kahn Gillmor writes: > I propose to add private-keys-v1.d/ to the gpg-agent's inotify > watchlist, and to have the agent terminate if it notices that directory > being deleted as well. Why not watch the $GNUPGHOME itself? > I think that will fix the concern described above on GNU/Linux at > least. For the other platforms, we can periodically stat the directory in the thread that manually watches the sockets. > any objections or concerns about this approach? I'm concerned that we are still cleaning up after the socketdir "fix". Cheers, Justus -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From wk at gnupg.org Tue Jun 13 15:07:00 2017 From: wk at gnupg.org (Werner Koch) Date: Tue, 13 Jun 2017 15:07:00 +0200 Subject: gpg-agent self-termination when private-keys-v1.d goes away In-Reply-To: <87bmpskvxc.fsf@thinkbox.jade-hamburg.de> (Justus Winter's message of "Tue, 13 Jun 2017 12:45:51 +0200") References: <87k24gpk8f.fsf@fifthhorseman.net> <87bmpskvxc.fsf@thinkbox.jade-hamburg.de> Message-ID: <87ink0f34b.fsf@wheatstone.g10code.de> On Tue, 13 Jun 2017 12:45, justus at g10code.com said: > For the other platforms, we can periodically stat the directory in the > thread that manually watches the sockets. Which I think is anyway the better solution than the non-portable inotify. But we also need to consider the implications of using stat in contrast to fstat; the directory entry might have already gone while gpg-agent still has a connection and is working fine on the inode. Anyway, nothing we should change for 2.2. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From wk at gnupg.org Tue Jun 13 15:14:12 2017 From: wk at gnupg.org (Werner Koch) Date: Tue, 13 Jun 2017 15:14:12 +0200 Subject: Non-blocking connect for dirmngr In-Reply-To: (Damien Goutte-Gattat's message of "Fri, 9 Jun 2017 12:17:32 +0200") References: <87mv9io34t.fsf@wheatstone.g10code.de> Message-ID: <87a85cf2sb.fsf@wheatstone.g10code.de> On Fri, 9 Jun 2017 12:17, dgouttegattat at incenp.org said: > Connection Refused after 4 seconds here: Okay, so as expected it works on Linux ;-). Thanks. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From wk at gnupg.org Tue Jun 13 15:13:35 2017 From: wk at gnupg.org (Werner Koch) Date: Tue, 13 Jun 2017 15:13:35 +0200 Subject: Non-blocking connect for dirmngr In-Reply-To: (Filipp Gunbin's message of "Fri, 09 Jun 2017 22:02:05 +0300") References: <87mv9io34t.fsf@wheatstone.g10code.de> Message-ID: <87efuof2tc.fsf@wheatstone.g10code.de> On Fri, 9 Jun 2017 21:02, fgunbin at fastmail.fm said: > 4 - 21:56:33 dirmngr[91489.6]: connection from process -1 (501:20) > 4 - 21:56:37 dirmngr[91489.6]: DBG: chan_6 <- WKD_GET --quick --submission-address -- foo at iki.fi I wonder why it takes 4 seconds for gpg to send the command after the connection has already been set up. > 4 - 21:56:37 dirmngr[91489.6]: DBG: dns: libdns initialized > 4 - 21:56:57 dirmngr[91489.6]: DBG: dns: getsrv(_openpgpkey._tcp.iki.fi): Server indicated a failure Here we have a 20 second delay but it comes from the resolver - there has been no connection attempt. I noticed that Patrick has the same problem. Is your connection to the resolver IPv4 or IPv6? (Justus fixed a v6 bug today). Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From wk at gnupg.org Tue Jun 13 15:19:46 2017 From: wk at gnupg.org (Werner Koch) Date: Tue, 13 Jun 2017 15:19:46 +0200 Subject: Non-blocking connect for dirmngr In-Reply-To: <87tw3qp83k.fsf@fifthhorseman.net> (Daniel Kahn Gillmor's message of "Thu, 08 Jun 2017 15:55:43 -0400") References: <87mv9io34t.fsf@wheatstone.g10code.de> <87tw3qp83k.fsf@fifthhorseman.net> Message-ID: <8760g0f2j1.fsf@wheatstone.g10code.de> On Thu, 8 Jun 2017 21:55, dkg at fifthhorseman.net said: > thanks for working on fixing this, Werner! 
It's definitely important if > GnuPG is going to be making these queries. For whatever reasons, bugs affecting me get a high priority ;-) > Another approach could be "happy eyeballs" -- if the name resolves to 2 > IP addresses, connect to both of them concurrently and take the first > connection that completes. You wouldn't want to do this if there was a That would make us a bad netizen. A more acceptable approach would be to start a second connection attempt after one or two seconds. In that case we can assume that there is a problem with the first address. Also randomly selecting from 3 addresses could make sense. Anyway, such an optimization requires a larger rework and we need to get 2.2 out of the door before we do such things. Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From fgunbin at fastmail.fm Wed Jun 14 20:54:51 2017 From: fgunbin at fastmail.fm (Filipp Gunbin) Date: Wed, 14 Jun 2017 21:54:51 +0300 Subject: Non-blocking connect for dirmngr In-Reply-To: <87efuof2tc.fsf@wheatstone.g10code.de> (Werner Koch's message of "Tue, 13 Jun 2017 15:13:35 +0200") References: <87mv9io34t.fsf@wheatstone.g10code.de> <87efuof2tc.fsf@wheatstone.g10code.de> Message-ID: On 13/06/2017 15:13 +0200, Werner Koch wrote: > On Fri, 9 Jun 2017 21:02, fgunbin at fastmail.fm said: >> 4 - 21:56:33 dirmngr[91489.6]: connection from process -1 (501:20) >> 4 - 21:56:37 dirmngr[91489.6]: DBG: chan_6 <- WKD_GET --quick --submission-address -- foo at iki.fi > > I wonder why it takes 4 seconds for gpg to send the command after the > connection has already been set up. It's just me who sent the WKD_GET command 4 seconds after the connection. 
>> 4 - 21:56:37 dirmngr[91489.6]: DBG: dns: libdns initialized >> 4 - 21:56:57 dirmngr[91489.6]: DBG: dns: getsrv(_openpgpkey._tcp.iki.fi): Server indicated a failure > > Here we have a 20 second delay but it comes from the resolver - there > has been no connection attempt. I noticed that Patrick has the same > problem. Is your connection to the resolver IPv4 or IPv6? (Justus > fixed a v6 bug today). I'm getting the same thing on the current master. How can I check v4/v6 connection type? Filipp -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From justus at g10code.com Mon Jun 19 15:52:23 2017 From: justus at g10code.com (Justus Winter) Date: Mon, 19 Jun 2017 15:52:23 +0200 Subject: gpgme_op_import_keys() -- unclear documentation, problematic behavior In-Reply-To: <87mv9cpn3t.fsf@fifthhorseman.net> References: <87mv9cpn3t.fsf@fifthhorseman.net> Message-ID: <87o9tk3x0o.fsf@europa.jade-hamburg.de> Daniel Kahn Gillmor writes: > But in practice, looking at src/engine-gpg.c, if i use the > gpgme_op_import_keys() form (instead of the keydata form), the backend > actually uses --recv-keys on the importing context. This doesn't work > at all if the keys are not on the public keyservers, or if the local > host is offline. > > And even when keys are on the public keyservers and the local host is > online, in the case where the two contexts may have specialized > knowledge of the OpenPGP certificate (e.g. non-published certifications, > freshly-generated subkeys, etc) it has particularly strange failure > cases -- it'll result in different OpenPGP certificates held by the two > contexts. > > Additionally, using the keyservers for this represents a metadata > leakage, without any warning to the user that such a thing is planned. > > [...] > > Can anybody clarify these concerns? Have i misunderstood things, or > should i try to use the tool differently? 
Your analysis is right, this is bad. We should at least fix the documentation. Cheers, Justus -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From liste at secarica.ro Mon Jun 19 19:54:52 2017 From: liste at secarica.ro (Cristian =?UTF-8?Q?Secar=C4=83?=) Date: Mon, 19 Jun 2017 20:54:52 +0300 Subject: backup key saved from GPA has some wrong text encoding in its header Message-ID: <20170619205416.000036de@secarica.ro> (resent of a resent message) From: Cristian Secară To: gpa-dev at gnupg.org Date: Mon, 19 Jun 2017 15:38:52 +0300 Subject: backup key saved from GPA has some wrong text encoding in its header When saving a backup copy of a key using GPA, the header from the saved text contains some wrongly encoded Unicode characters. This does not affect the key itself; it is rather a cosmetic bug, but still, it is a bug. I first observed this bug by using GPA in my language, Romanian (I am using my own translation since version 0.7.x), but I was not sure if the error was due to my translation or not, so I switched to German language; there was no difference, however. See this screenshot http://www.secarica.ro/misc/gpa_bad_utf8_backup_key_de.png Cristi -- Cristian Secară http://www.secărică.ro From steve at gpgtools.org Tue Jun 20 10:16:16 2017 From: steve at gpgtools.org (steve (GPGTools)) Date: Tue, 20 Jun 2017 10:16:16 +0200 Subject: gpg 2.1.21 In-Reply-To: <821F8390-790F-44A0-B8AF-BA2ABFCDFCA6@gpgtools.org> References: <821F8390-790F-44A0-B8AF-BA2ABFCDFCA6@gpgtools.org> Message-ID: <3743430C-687D-471D-A369-91961E6C6A48@gpgtools.org> Just FYI this is now tracked as https://dev.gnupg.org/T3212 > On 19.05.2017 at 14:42, steve (GPGTools) wrote: > > Dear all, > > we are internally testing gpg 2.1 and ran into an issue with pinentry, when using gpg 2.1.21. 
> > When generating a new subkey for an existing key and canceling the pinentry dialog, a secondary pinentry dialog shows up. > > The bug in our ticket system can be found here: > https://gpgtools.lighthouseapp.com/projects/66001/tickets/693 > There's a screencast included showing what's going on. > > Are you able to reproduce the problem of the secondary pinentry dialog? > > Kind regards, > steve > _______________________________________________ > Gnupg-devel mailing list > Gnupg-devel at gnupg.org > http://lists.gnupg.org/mailman/listinfo/gnupg-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From dkg at fifthhorseman.net Wed Jun 21 00:41:58 2017 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Tue, 20 Jun 2017 18:41:58 -0400 Subject: gpgme_op_import_keys() -- unclear documentation, problematic behavior In-Reply-To: <87o9tk3x0o.fsf@europa.jade-hamburg.de> References: <87mv9cpn3t.fsf@fifthhorseman.net> <87o9tk3x0o.fsf@europa.jade-hamburg.de> Message-ID: <87efuejn7t.fsf@fifthhorseman.net> On Mon 2017-06-19 15:52:23 +0200, Justus Winter wrote: > Your analysis is right, this is bad. We should at least fix the > documentation. thanks for the confirmation, Justus. I've just opened https://dev.gnupg.org/T3215 to keep track of this. 
--dkg From Amul.Shah at fisglobal.com Tue Jun 27 16:18:46 2017 From: Amul.Shah at fisglobal.com (Shah, Amul) Date: Tue, 27 Jun 2017 14:18:46 +0000 Subject: Follow-up to Crashes with gpg-agent 2.1.18 In-Reply-To: References: <87k26savh5.fsf@fifthhorseman.net> <87bmrqblrh.fsf@iwagami.gniibe.org> <878tlbi8to.fsf@iwagami.gniibe.org> <87poeicgmh.fsf@fifthhorseman.net> Message-ID: -----Original Message----- From: Gnupg-devel [mailto:gnupg-devel-bounces at gnupg.org] On Behalf Of Matthew Summers Sent: Tuesday, June 06, 2017 1:43 AM >On Mon, Jun 5, 2017 at 3:40 PM, Daniel Kahn Gillmor wrote: >> On Sun 2017-06-04 22:35:59 -0500, Matthew Summers wrote: >>> It's probably true that for most users, highly parallelized >>> decryption operations are rare. However, it's not difficult to >>> demonstrate use cases where it's important to handle highly >>> parallelized requests to gpg-agent. >> >> It would really help this conversation to document a few of these >> real-world use cases. I appreciate that your demonstration scripts >> are narrow and targeted because it helps to isolate the problem, but >> having clear real-world cases will help in motivating the fix to >> actually get deployed. >> >> Where are highly-parallelized requests to gpg-agent likely to happen? > [snip Matt's response and thanks to him for his investigation] Certainly! Sorry for the late reply. GT.M (http://fis-gtm.com/) is a high performance cooperatively managed database engine with ACID transaction support and MUMPS language compiler and runtime. GT.M leverages GnuPG for both encryption and encryption key management (thank you for the tools and your hard work). When I say that GT.M is a cooperatively managed database, I mean that GT.M processes cooperatively manage a shared memory cache. There is no primary daemon that starts and stops the database. Each process calls out to GnuPG for access to encrypted resources. 
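That many-processes, one-agent access pattern can be sketched in a few lines. This is a hedged illustration only: `fake_decrypt` is a hypothetical stand-in for spawning `gpg -d` (which is what actually talks to the shared gpg-agent), and `MAX_IN_FLIGHT` is an assumed client-side cap on concurrency along the lines of the throttling gniibe mentioned earlier in the thread, not anything GnuPG provides:

```python
import threading

# Hypothetical stand-in for a worker's call out to GnuPG; the real
# processes would each run `gpg -d`, hitting the one shared gpg-agent.
def fake_decrypt(ciphertext):
    return ciphertext.replace("enc:", "", 1)

# Assumed tuning knob: bound the number of in-flight requests client-side.
MAX_IN_FLIGHT = 8
gate = threading.BoundedSemaphore(MAX_IN_FLIGHT)
results = []
results_lock = threading.Lock()

def worker(i):
    with gate:  # at most MAX_IN_FLIGHT "decrypts" run concurrently
        plaintext = fake_decrypt("enc:msg%d" % i)
    with results_lock:
        results.append(plaintext)

# 50 workers at once, mirroring the parallel test earlier in the thread.
threads = [threading.Thread(target=worker, args=(i,)) for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 50
```

With the semaphore in place the burst of 50 requests reaches the "agent" at most 8 at a time; dropping the gate reproduces the unbounded burst that triggers the allocation failures described below.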
During testing with GnuPG 2.1.18 and above, when our automated tests started more than 20 processes at once, we would face intermittent inexplicable failures in acquiring encryption keys. The example that I provided uses hundreds of processes to illustrate the bug that we see with fewer processes. At database startup, our customers typically start somewhere between 10 and hundreds of processes. One customer goes up into the tens of thousands by the time they reach a steady state. These are real world use cases (http://www.fisglobal.com/solutions/banking-and-wealth/services/database-engine/sample-users). Please let me know if you have any questions or need more information from me. Regards, Amul PS: Please excuse the abominable reformatting by Outlook. The information contained in this message is proprietary and/or confidential. If you are not the intended recipient, please: (i) delete the message and all copies; (ii) do not disclose, distribute or use the message in any manner; and (iii) notify the sender immediately. In addition, please be aware that any message addressed to our domain is subject to archiving and review by persons other than the intended recipient. Thank you. From wk at gnupg.org Thu Jun 29 09:28:20 2017 From: wk at gnupg.org (Werner Koch) Date: Thu, 29 Jun 2017 09:28:20 +0200 Subject: [Announce] Libgcrypt 1.7.8 released to fix CVE-2017-7526 Message-ID: <87r2y35k2z.fsf@wheatstone.g10code.de> Hi! The GnuPG Project is pleased to announce the availability of Libgcrypt version 1.7.8. This release fixes a local side-channel attack. Libgcrypt is a general purpose library of cryptographic building blocks. It is originally based on code used by GnuPG. It does not provide any implementation of OpenPGP or other protocols. Thorough understanding of applied cryptography is required to use Libgcrypt. 
Noteworthy changes in version 1.7.8 (2017-06-29) [C21/A1/R8] =================================== * Bug fixes: - Mitigate a flush+reload side-channel attack on RSA secret keys dubbed "Sliding right into disaster". For details see . [CVE-2017-7526] Note that this side-channel attack requires that the attacker can run arbitrary software on the hardware where the private RSA key is used. Allowing execute access to a box with private keys should be considered as a game over condition, anyway. Thus in practice there are easier ways to access the private keys than to mount this side-channel attack. However, on boxes with virtual machines this attack may be used by one VM to steal private keys from another VM. Download ======== Source code is hosted at the GnuPG FTP server and its mirrors as listed at . On the primary server the source tarball and its digital signature are: ftp://ftp.gnupg.org/gcrypt/libgcrypt/libgcrypt-1.7.8.tar.bz2 (2830k) ftp://ftp.gnupg.org/gcrypt/libgcrypt/libgcrypt-1.7.8.tar.bz2.sig That file is bzip2 compressed. A gzip compressed version is here: ftp://ftp.gnupg.org/gcrypt/libgcrypt/libgcrypt-1.7.8.tar.gz (3398k) ftp://ftp.gnupg.org/gcrypt/libgcrypt/libgcrypt-1.7.8.tar.gz.sig The same files are also available via HTTP: https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.7.8.tar.bz2 https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.7.8.tar.bz2.sig https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.7.8.tar.gz https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.7.8.tar.gz.sig In order to check that the version of Libgcrypt you downloaded is an original and unmodified file please follow the instructions found at . In short, you may use one of the following methods: - Check the supplied OpenPGP signature. For example to check the signature of the file libgcrypt-1.7.8.tar.bz2 you would use this command: gpg --verify libgcrypt-1.7.8.tar.bz2.sig libgcrypt-1.7.8.tar.bz2 This checks whether the signature file matches the source file. 
You should see a message indicating that the signature is good and made by one or more of the release signing keys. Make sure that this is a valid key, either by matching the shown fingerprint against a trustworthy list of valid release signing keys or by checking that the key has been signed by trustworthy other keys. See the end of this mail for information on the signing keys. - If you are not able to use an existing version of GnuPG, you have to verify the SHA-1 checksum. On Unix systems the command to do this is either "sha1sum" or "shasum". Assuming you downloaded the file libgcrypt-1.7.8.tar.bz2, you run the command like this: sha1sum libgcrypt-1.7.8.tar.bz2 and check that the output matches the first line of this list: 65a4a495aa858483e66868199eaa8238572ca6cd libgcrypt-1.7.8.tar.bz2 b1290e278170c638955de430699a425c2121750b libgcrypt-1.7.8.tar.gz You should also verify that the checksums above are authentic by matching them with copies of this announcement. Those copies can be found at other mailing lists, web sites, and search engines. Copying ======= Libgcrypt is distributed under the terms of the GNU Lesser General Public License (LGPLv2.1+). The helper programs as well as the documentation are distributed under the terms of the GNU General Public License (GPLv2+). The file LICENSES has notices about contributions that require that these additional notices are distributed. Support ======= For help on developing with Libgcrypt you should read the included manual and optionally ask on the gcrypt-devel mailing list [1]. A listing with commercial support offers for Libgcrypt and related software is available at the GnuPG web site [2]. If you are a developer and need a certain feature for your project, please do not hesitate to bring it to the gcrypt-devel mailing list for discussion. Maintenance and development of Libgcrypt is mostly financed by donations; see . 
We currently employ 4 full-time developers, one part-timer, and one contractor to work on GnuPG and closely related software like Libgcrypt. Thanks ====== We like to thank all the people who helped with this release, be it testing, coding, translating, suggesting, auditing, administering the servers, spreading the word, and answering questions on the mailing lists. Also many thanks to all our donors [3]. Happy hacking, The GnuPG Team [1] https://lists.gnupg.org/mailman/listinfo/gcrypt-devel [2] https://www.gnupg.org/service.html [3] https://gnupg.org/donate/kudos.html p.s. This is an announcement only mailing list. Please send replies only to the gcrypt-devel 'at' gnupg.org mailing list. p.p.s List of Release Signing Keys: To guarantee that a downloaded GnuPG version has not been tampered by malicious entities we provide signature files for all tarballs and binary versions. The keys are also signed by the long term keys of their respective owners. Current releases are signed by one or more of these five keys: 2048R/4F25E3B6 2011-01-12 [expires: 2019-12-31] Key fingerprint = D869 2123 C406 5DEA 5E0F 3AB5 249B 39D2 4F25 E3B6 Werner Koch (dist sig) rsa2048/E0856959 2014-10-29 [expires: 2019-12-31] Key fingerprint = 46CC 7308 65BB 5C78 EBAB ADCF 0437 6F3E E085 6959 David Shaw (GnuPG Release Signing Key) rsa2048/33BD3F06 2014-10-29 [expires: 2016-10-28] Key fingerprint = 031E C253 6E58 0D8E A286 A9F2 2071 B08A 33BD 3F06 NIIBE Yutaka (GnuPG Release Key) rsa2048/7EFD60D9 2014-10-19 [expires: 2020-12-31] Key fingerprint = D238 EA65 D64C 67ED 4C30 73F2 8A86 1B1C 7EFD 60D9 Werner Koch (Release Signing Key) rsa3072/4B092E28 2017-03-17 [expires: 2027-03-15] Key fingerprint = 5B80 C575 4298 F0CB 55D8 ED6A BCEF 7E29 4B09 2E28 Andre Heinecke (Release Signing Key) You may retrieve these keys from a keyserver using this command gpg --keyserver hkp://keys.gnupg.net --recv-keys \ 249B39D24F25E3B6 04376F3EE0856959 \ 2071B08A33BD3F06 8A861B1C7EFD60D9 BCEF7E294B092E28 The keys are also 
available at https://gnupg.org/signature_key.html and in any recently released GnuPG tarball in the file g10/distsigkey.gpg . Note that this mail has been signed by a different key. -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 194 bytes Desc: not available URL: -------------- next part -------------- _______________________________________________ Gnupg-announce mailing list Gnupg-announce at gnupg.org http://lists.gnupg.org/mailman/listinfo/gnupg-announce From cjo at redhat.com Thu Jun 29 20:32:14 2017 From: cjo at redhat.com (CJ Oster) Date: Thu, 29 Jun 2017 13:32:14 -0500 Subject: Certificate of Origin Message-ID: <20170629133214.2443c6c3@elguapo> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 GnuPG Developer's Certificate of Origin. Version 1.0 ===================================================== By making a contribution to the GnuPG project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the free software license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate free software license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same free software license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. 
(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the free software license(s) involved. Signed-off-by: CJ Oster - -- CJ Oster - Senior Consultant Red Hat Consulting -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBCAAGBQJZVUeuAAoJEHfsUI/H0BE20hAP/Rb1Ss683CVVOevke30W4kQu xGCt/3vFewtREuy7DKDa6lk2V8nsfjoJJi88EG0pX3aSWLGc7TKIuAGFgOqwVPee g5M/vQ6xIbdBorGyMlAAEXSN3gi6t+Wjel0a7JKqcaLlLTsIoDmBlCKtsZQjE8gf j0eIEk7NDy4Pt4afp6OjvO+3n9TVP2YDsZUNU3CGJLnTzJjpuzViMxlrIlELDkl7 Rw7bhon3cSTYBTAtFUqgIHKugwTrGDIkT3CpADNHTGBNhARYZ8p9Lv2B1ATNKb8P Ocn/8561lm3RaoSUfASx3IOIHx6WAQVgBQ1CeLuguT6mJsDX9QxF4f56vbSIailK 1d814fmwNLjc6tXLvaOL5Aidvi0B+44RHTyBqm2oYH4CZMHRcplIBzmyyPnmG1ox zdeUVtwh2uCEtc+x87MWUkg8+3TAr/KbIJCCHEupzhgubkEOwHSeem97talAWeBG +CY7ExIdrh9501XKn7yA+rLLDhS9ml+MylYSZpD6ZfAgUYn4hbafyuzPTCauSHl8 i/jrZu10Urx9wBKAwUrDfqj9MaUT1A9e4TwRqPEgyVFR7H9S3iidZG6yYikxJO69 RyWPhiLp+i2LX9Lg7/b/W8C7WI+l4G2m8hrKqVvCqJSAjvJDiF1znU+HTwapVsmM Kih+hL6q9u4x6QXxLdm+ =Lbax -----END PGP SIGNATURE-----