From dkg at fifthhorseman.net Wed Apr 1 17:27:35 2009 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Wed, 01 Apr 2009 11:27:35 -0400 Subject: HKP keyservers over SSL In-Reply-To: <20090331215338.GA32245@redoubt.spodhuis.org> References: <8DB37A9C-106C-419D-AFD7-707ABE1A5226@jabberwocky.com> <20090331215338.GA32245@redoubt.spodhuis.org> Message-ID: <49D387E7.1090401@fifthhorseman.net> On 03/31/2009 05:53 PM, Phil Pennock wrote: > I see from the URL which you posted to sks-devel: > http://cvs.gnupg.org/cgi-bin/viewcvs.cgi/branches/STABLE-BRANCH-1-4/keyserver/gpgkeys_hkp.c?root=GnuPG&rev=4924&r1=4878&r2=4924 > that you're using: > curl_easy_setopt(curl,CURLOPT_SSL_VERIFYPEER,(long)opt->flags.check_cert); > > How is this expected to interact with keyserver pools? Should every > server know every pool which $random_people construct and include it in > subjectAltName? How about certificate validation for that case? I think the simple answer for this question is: * if you're connecting to a pool run by diverse groups, do not use HKP-over-TLS. Put another way: * HKP-over-TLS only really makes sense when connecting to a known, identifiable keyserver. In a way, this makes sense: if you're using a secure connection to a trusted keyserver, you by definition do not want to use a pool, because you don't know who will be in that pool. If you really do want to use HKP-over-TLS against a pool, your suggestion of SNI is one option: > I strongly suspect that the only workable way forward here, for those > who want to have verifiable certificates, is to have the client support > Server Name Indication, SNI, per RFC 4366. But it's not the only option. The other option would be to use OpenPGP certificates for authentication (RFC 5081), which allow multiple User IDs (which in this case means multiple hostnames) in a certificate. They also allow multiple issuers per hostname certification. 
So, for example, the host maintainer can add a User ID matching the actual name, and also a User ID matching the name of any pool in which it participates. The maintainer of any particular pool could also certify the pool-specific User IDs for the hosts which she knows to be participating. If we're talking about doing more work to support a TLS extension, I'd rather see RFC 5081 be adopted (using reasonable certificates) than 4366 (ways to use multiple unreasonable certificates). However, i think that the arguments for using HKPS tend to suggest that a user would want to use a single known keyserver (or at least a pool maintained by the same administrator who can give them all the same host keys or get them all independently certified). --dkg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 890 bytes Desc: OpenPGP digital signature URL: From jpaulo.melo at gmail.com Wed Apr 1 21:11:41 2009 From: jpaulo.melo at gmail.com (Joao Paulo Fernandes) Date: Wed, 1 Apr 2009 16:11:41 -0300 Subject: Using GPG exported key Message-ID: <1a27245d0904011211s79b6ec23va71036b23906c57d@mail.gmail.com> Hi everybody, I'm exporting my gpg key from my unix server gpg --armor --output "key.txt" --export myserver I import the key.txt on another computer gpg --import key.txt I set ultimate trust on the myserver key with --edit-key, but I still get this error when I try to encrypt files with the key: gpg -r myserver --output "test.gpg" -e mytext.txt gpg: myserver: skipped: public key not found gpg: mytext.txt: encryption failed: public key not found I've seen this is a common error people hit, but no solution. Could somebody help me? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From John at Mozilla-Enigmail.org Wed Apr 1 21:38:19 2009 From: John at Mozilla-Enigmail.org (John Clizbe) Date: Wed, 01 Apr 2009 14:38:19 -0500 Subject: Using GPG exported key In-Reply-To: <1a27245d0904011211s79b6ec23va71036b23906c57d@mail.gmail.com> References: <1a27245d0904011211s79b6ec23va71036b23906c57d@mail.gmail.com> Message-ID: <49D3C2AB.5020401@Mozilla-Enigmail.org> Mistakenly sent to GnuPG-Users the first time. Joao Paulo Fernandes wrote: > > Hi everyboby, > > > > Im exporting my gpg key from my unix server > > gpg --armor --output "key.txt" --export myserver > > > > i import the key.txt in other computer > > gpg --import key.txt > > > > i trust ultimately the myserver key with --edit-key > > but i still get the error when i try to encrypt files with the key: > > gpg -r myserver --output "test.gpg" -e mytext.txt > > > > gpg: myserver: skipped: public key not found > > gpg: mytext.txt: encryption failed: public key not found > > > > I saw this is a common error people have but no solution. > > Somebody could help me ? gpg --export-secret-keys 0xdecafbad > secret.asc gpg --export 0xdecafbad > public.asc On other machine: gpg --import secret.asc gpg --import public.asc FWIW, the GnuPG keyring files {pubring,secring,trustdb}.gpg are binary compatible. I find it's just as convenient to copy or import entire keyrings. -- John P. Clizbe Inet:John (a) Mozilla-Enigmail.org You can't spell fiasco without SCO. hkp://keyserver.gingerbear.net or mailto:pgp-public-keys at gingerbear.net?subject=HELP Q:"Just how do the residents of Haiku, Hawai'i hold conversations?" A:"An odd melody / island voices on the winds / surplus of vowels" -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 678 bytes Desc: OpenPGP digital signature URL: From gnupg-devel at spodhuis.org Thu Apr 2 00:04:31 2009 From: gnupg-devel at spodhuis.org (Phil Pennock) Date: Wed, 1 Apr 2009 15:04:31 -0700 Subject: HKP keyservers over SSL In-Reply-To: <49D387E7.1090401@fifthhorseman.net> References: <8DB37A9C-106C-419D-AFD7-707ABE1A5226@jabberwocky.com> <20090331215338.GA32245@redoubt.spodhuis.org> <49D387E7.1090401@fifthhorseman.net> Message-ID: <20090401220431.GA56208@redoubt.spodhuis.org> On 2009-04-01 at 11:27 -0400, Daniel Kahn Gillmor wrote: > Put another way: > > * HKP-over-TLS only really makes sense when connecting to a known, > identifiable keyserver. My understanding is that part of the rationale for HKP-over-TLS is simply to defeat traffic snooping and to talk to a set of keyservers somewhat trusted to respect privacy, and that total privacy is not a major driver. I can well see people wanting to talk to keys.gnupg.net and just to have some reasonable confidence that the communications are private, ie that they're talking to the keys.gnupg.net pool so no MitM to sniff. So a private CA supported by the client works here. The pool maintainer can add certs. The only issue is revocation, which is an issue that exists whether the server certification is done via host PGP identities or via x509 certs -- ie, the same issues exist for both and the client verification logic needs to exist for both, so the complexity is equivalent. > If we're talking about doing more work to support a TLS extension, I'd > rather see RFC 5081 be adopted (using reasonable certificates) than 4366 > (ways to use multiple unreasonable certificates). First, note that curl natively and automatically supports SNI if the SSL library supports it, AFAICT. NSS might be an exception. If GnuTLS is used, SNI support is there; and if OpenSSL is recent enough and built with support for SNI, then curl will automatically use it. 
Yesterday I wrote patches for some common tools to support using SNI as a client -- it's extremely simple: check the hostname isn't an IP address, then issue one call against the handle to add some metadata that will be used later on. So the SNI support requirement for GnuPG is to make sure that non-Curl shims support this (trivial) and to document that hkps verification to pools is dependent upon the backend -- GnuTLS always works, OpenSSL depends upon the install (version and build options) -- again trivial. Second, where does this "unreasonable" spring from? If the CA is the pool operator (so any pool operator is their own CA, with no cost to enter the "market" besides perhaps getting the tool vendors (GnuPG) to include their CA cert when shipping), then there are no artificial barriers to participation, no artificial revenue streams, and things work simply using existing, tested tools and libraries. > However, i think that the arguments for using HKPS tend to suggest that > a user would want to use a single known keyserver (or at least a pool > maintained by the same administrator who can give them all the same host > keys or get them all independently certified). Perhaps some will. And they'll pay for access to a pool of resilient servers because it's worthwhile to them. Heck, that's where client SSL verification comes in. As I understand matters (as a keyserver operator), it's rare for any organisation to run more than one or two keyservers. Most pools are just collections of keyservers run by someone else. HKPS lets you talk to those pools without your local network traffic showing, in the clear, who you want to talk to. I use a laptop. I use wifi. For work access, I have a VPN. For private stuff, I have SSH, SMTP-Submission-STARTTLS, IMAP/SSL and IMAP/STARTTLS, HTTPS to private websites, etc. 
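[Editor's note: the client-side rule Phil describes above (skip SNI when the "hostname" is actually an IP literal, since SNI carries DNS names only) amounts to a check like this. The function name is mine, not taken from the patches he mentions:]

```c
#include <assert.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Sketch of the client-side SNI eligibility check: SNI carries a DNS
 * name, so it should not be sent when the "hostname" is an IPv4 or
 * IPv6 literal.  Helper name is illustrative, not from any real patch. */
int sni_applicable(const char *host)
{
    unsigned char buf[sizeof(struct in6_addr)];

    if (inet_pton(AF_INET, host, buf) == 1)
        return 0;               /* IPv4 literal: no SNI */
    if (inet_pton(AF_INET6, host, buf) == 1)
        return 0;               /* IPv6 literal: no SNI */
    return host[0] != '\0';     /* non-empty DNS name: send SNI */
}
```
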
If I compose email and retrieve keys for the mail, at the moment the key retrieval is the only part which leaks, over the coffee-shop wifi, anything to do with who I'm communicating with. The IMAP and SMTP all go to fixed servers. I want to be able to have that be private too, whether it's at a cafe/diner or at a conference filled with people giving demonstrations of wifi traffic interception. So a design which supports remote endpoint verification to have privacy with MitM protection and which lets the pool maintainers be different from the keyserver operators is a win; if the design can offer more than that, great, but if it can't offer that much, it loses. With SNI, each keyserver can be part of various pools for various constituencies affiliated with the keyserver operator without any one acting to inhibit membership in another pool (at a technical level; conflicting hypothetical maintenance certification standards could reintroduce that but then the keyserver operator can choose, in a *free* market, which pools matter most to them and if they, eg, want to just run two keyservers). -Phil -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From dkg at fifthhorseman.net Thu Apr 2 00:42:26 2009 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Wed, 01 Apr 2009 18:42:26 -0400 Subject: HKP keyservers over SSL In-Reply-To: <20090401220431.GA56208@redoubt.spodhuis.org> References: <8DB37A9C-106C-419D-AFD7-707ABE1A5226@jabberwocky.com> <20090331215338.GA32245@redoubt.spodhuis.org> <49D387E7.1090401@fifthhorseman.net> <20090401220431.GA56208@redoubt.spodhuis.org> Message-ID: <49D3EDD2.1070400@fifthhorseman.net> On 04/01/2009 06:04 PM, Phil Pennock wrote: > I can well see people wanting to talk to keys.gnupg.net and just to have > some reasonable confidence that the communications are private, ie that > they're talking to the keys.gnupg.net pool so no MitM to sniff. So a > private CA supported by the client works here. The pool maintainer can > add certs. Yes, i understand (and think GnuPG should support) this use case. > So the SNI support requirement for GnuPG is to make sure that non-Curl > shims support this (trivial) and to document that hkps verification to > pools is dependent upon the backend -- GnuTLS always works, OpenSSL > depends upon the install (version and build options) -- again trivial. I didn't know SNI implementation was this close. I agree, it sounds like a win, and a worthwhile thing to do. > Second, where does this "unreasonable" spring from? Sorry, i have serious concerns over the X.509 specification and the way that it's generally deployed [0]. But the setup you describe (with the keyserver pool operator acting as the CA directly) is not an unreasonable use of X.509. I shouldn't have let my general X.509 grumpiness cloud my reasoning ;) > As I understand matters, as a keyserver operator, it's rare for any > organisation to run more than one or two keyservers. 
Right, i was thinking of two keyservers in a private pool, acting as a redundant arrangement, much like common configurations of krb5kdc, slapd, etc. > If I compose email and > retrieve keys for the mail, at the moment the key retrieval is the only > part which leaks, over the coffee-shop wifi, anything to do with who I'm > communicating with. Yup. I wasn't trying to argue against SNI, just saying that if it was significant work to be done, it seemed to me like RFC 5081 support would be more worthwhile. I don't think they're in direct conflict other than time of implementation, and it sounds like SNI would be simpler to do. Thanks for the clarification, --dkg [0] http://lair.fifthhorseman.net/~dkg/tls-centralization -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 890 bytes Desc: OpenPGP digital signature URL: From gnupg-devel at spodhuis.org Thu Apr 2 01:56:38 2009 From: gnupg-devel at spodhuis.org (Phil Pennock) Date: Wed, 1 Apr 2009 16:56:38 -0700 Subject: HKP keyservers over SSL In-Reply-To: <49D3EDD2.1070400@fifthhorseman.net> References: <8DB37A9C-106C-419D-AFD7-707ABE1A5226@jabberwocky.com> <20090331215338.GA32245@redoubt.spodhuis.org> <49D387E7.1090401@fifthhorseman.net> <20090401220431.GA56208@redoubt.spodhuis.org> <49D3EDD2.1070400@fifthhorseman.net> Message-ID: <20090401235638.GA87437@redoubt.spodhuis.org> On 2009-04-01 at 18:42 -0400, Daniel Kahn Gillmor wrote: > Sorry, i have serious concerns over the X.509 specification and the way > that it's generally deployed [0]. But the setup you describe (with the > keyserver pool operator acting as the CA directly) is not an > unreasonable use of X.509. I shouldn't have let my general X.509 > grumpiness cloud my reasoning ;) I have serious concerns there too and am normally the one who is grumpy on the topic of the so-called PKI, so no worries. 
-Phil, who installs CA certs after receiving them PGP-signed -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From dshaw at jabberwocky.com Thu Apr 2 04:42:36 2009 From: dshaw at jabberwocky.com (David Shaw) Date: Wed, 1 Apr 2009 22:42:36 -0400 Subject: HKP keyservers over SSL In-Reply-To: <20090331215338.GA32245@redoubt.spodhuis.org> References: <8DB37A9C-106C-419D-AFD7-707ABE1A5226@jabberwocky.com> <20090331215338.GA32245@redoubt.spodhuis.org> Message-ID: <78B4295E-EA47-48A7-BA93-B47248FF1BEA@jabberwocky.com> On Mar 31, 2009, at 5:53 PM, Phil Pennock wrote: > On 2009-03-12 at 09:10 -0400, David Shaw wrote: >> A few weeks ago, I added support for SSL to the HKP keyserver handler >> (gpg(2)keys_hkp) to help test some new keyserver work that is going >> on. (Though "Added" is a bit of a strong term - it's really just 4-5 >> lines of code to tell libcurl to accept SSL.) Anyway, Werner pointed >> out that we may want to do something more than that. After all, >> gpgsm >> manipulates X.509 certificates for lunch. >> >> So, let's talk about it a bit: How can we do something smart here, >> design-wise? It would be nice to also support client auth, and not >> just the standard server validation SSL test. > > David, some questions/suggestions if I may? > > I see from the URL which you posted to sks-devel: > http://cvs.gnupg.org/cgi-bin/viewcvs.cgi/branches/STABLE-BRANCH-1-4/keyserver/gpgkeys_hkp.c?root=GnuPG&rev=4924&r1=4878&r2=4924 > that you're using: > curl_easy_setopt(curl,CURLOPT_SSL_VERIFYPEER,(long)opt->flags.check_cert); > > How is this expected to interact with keyserver pools? Should every > server know every pool which $random_people construct and include it > in > subjectAltName? How about certificate validation for that case? It's an interesting question. To a certain degree, this isn't GnuPG's problem. 
If the server presents a certificate where neither the common name nor the subject alternate name matches the name used to connect, the connection should (and will) fail. Curl handles all that for us automatically. Managing those certificates is, as you point out, fairly ugly from the server operator's point of view... > If instead we can establish that from day 1, hkps clients *will* use > SNI > then we can just leave it to the PGP keyservers to develop nice > admin interfaces for managing the collections of certs which a given > keyserver will use. I'm not against this, but there are a few practical caveats. Libcurl added SNI support around a year ago in 7.18.1 (assuming of course that the backend crypto library supports it). A year isn't very long at all, so there are loads of libcurl installations that don't yet have the proper version of libcurl (and/or openssl, etc). On those systems, we (meaning libcurl, really) can only do common name and subject alternate name checks. The three systems I just checked had libcurl 7.16.3 (no SNI), 7.19.4 (SNI), and 7.15.5 (no SNI). That does not bode well for quick adoption. By all means, have the servers provide the right info though. The clients will eventually catch up. This doesn't require any code change to GPG - if libcurl supports SNI, then GPG does too. If GPG doesn't support SNI, and the server presents the wrong certificate, the request will fail. No real harm, since that is what would have happened anyway if the server didn't support SNI. Having said all that, I'm not sure if all this peer validation isn't a bit of overkill. My main desire for hkps is that the data on the wire is encrypted to avoid casual snooping, and you don't need any peer validation for that. 
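[Editor's note: David's numbers suggest a crude heuristic a build or runtime check could use: treat libcurl older than 7.18.1 as SNI-less. A sketch, with a helper name of my own; as David notes elsewhere in the thread, even a new enough libcurl may still lack SNI if its SSL backend does not support it:]

```c
#include <assert.h>
#include <stdio.h>

/* Crude heuristic based on the versions above: libcurl gained SNI in
 * 7.18.1, so treat anything older as SNI-less.  Illustrative helper,
 * not GnuPG code; it cannot detect an SNI-less SSL backend. */
int curl_version_at_least_7_18_1(const char *version)
{
    int major = 0, minor = 0, patch = 0;

    if (sscanf(version, "%d.%d.%d", &major, &minor, &patch) < 2)
        return 0;               /* unparseable: assume no SNI */
    if (major != 7)
        return major > 7;
    if (minor != 18)
        return minor > 18;
    return patch >= 1;
}
```
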
Anyway, I might feel differently if it required a major code commitment in GPG, but as things stand now, if the keyserver operators want to band together and make sure their servers have a particular cert setup and make the proper cainfo data available for clients who want to use it, why not? David From dshaw at jabberwocky.com Thu Apr 2 04:51:18 2009 From: dshaw at jabberwocky.com (David Shaw) Date: Wed, 1 Apr 2009 22:51:18 -0400 Subject: hkps port Message-ID: After some pondering about the proper port for hkps, I think that 443 makes the most sense (in other words, use the same port number as https). The reality is that there was never a particular reason why regular hkp needed to be on port 11371. The protocol is really http, and may as well have lived on the proper http port. I don't see a reason to repeat this for hkps, so in the interest of simplicity, 443 seems to be the best choice. David From gnupg-devel at spodhuis.org Thu Apr 2 06:24:04 2009 From: gnupg-devel at spodhuis.org (Phil Pennock) Date: Wed, 1 Apr 2009 21:24:04 -0700 Subject: hkps port In-Reply-To: References: Message-ID: <20090402042404.GB85242@redoubt.spodhuis.org> On 2009-04-01 at 22:51 -0400, David Shaw wrote: > After some pondering about the proper port for hkps, I think that 443 > makes the most sense (in other words, use the same port number as > https). The reality is that there was never a particular reason why > regular hkp needed to be on port 11371. The protocol is really http, > and may as well have lived on the proper http port. I don't see a > reason to repeat this for hkps, so in the interest of simplicity, 443 > seems to be the best choice. I have a web-server running providing http/https. I have a keyserver running on 11371 and would like to have TLS keyserver support. 
The nearest I could do in this case is to disable SSL support in whichever keyserver I run, configure Apache to proxy for that vhost and find some way to turn off the normal logging, since I don't query-log HKP retrievals. This means that the set-up which respects user-privacy is more work to support (given default configurations). Since HKP isn't directly for human consumption, does the need to tunnel everything over the same two ports really stand? If you're set on 443, how about using SRV records for hkps, always, and only have 443 be the fallback port in the absence of SRV records? _hkps._tcp.spodhuis.org. IN SRV 10 10 11372 sks.spodhuis.org. That would let me specify hkps://spodhuis.org/ and split the ports as I need? Thanks, -Phil -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From gnupg-devel at spodhuis.org Thu Apr 2 06:15:15 2009 From: gnupg-devel at spodhuis.org (Phil Pennock) Date: Wed, 1 Apr 2009 21:15:15 -0700 Subject: HKP keyservers over SSL In-Reply-To: <78B4295E-EA47-48A7-BA93-B47248FF1BEA@jabberwocky.com> References: <8DB37A9C-106C-419D-AFD7-707ABE1A5226@jabberwocky.com> <20090331215338.GA32245@redoubt.spodhuis.org> <78B4295E-EA47-48A7-BA93-B47248FF1BEA@jabberwocky.com> Message-ID: <20090402041515.GA85242@redoubt.spodhuis.org> On 2009-04-01 at 22:42 -0400, David Shaw wrote: > I'm not against this, but there are a few practical caveats. Libcurl > added SNI support around a year ago in 7.18.1 (assuming of course > that the backend crypto library supports it). A year isn't very long > at all, so there are loads of libcurl installations that don't yet > have the proper version of libcurl (and/or openssl, etc). On those > systems, we (meaning libcurl, really) can only do common name and > subject alternate name checks. The three systems I just checked had > libcurl 7.16.3 (no SNI), 7.19.4 (SNI), and 7.15.5 (no SNI). 
That does > not bode well for quick adoption. hkps support is not out there now. If hkps support means, in practice, SNI support, then operators can rely upon this. If it doesn't, we're just perpetuating the continued poor state of affairs. How about some notes on this in the README's "BUILD INSTRUCTIONS" stating pretty much what you just said and strongly recommending this? Ideally with ./configure CAPS WARNINGS if hkps support is requested but this support is not present? (I'm not requesting work of others and doing nothing myself; I worked on better SNI support in a variety of tools yesterday and am planning on extending the Perl/Python patches today/tomorrow to include server-side support). > Having said all that, I'm not sure if all this peer validation isn't a > bit of overkill. My main desire for hkps is that the data on the wire > is encrypted to avoid casual snooping, and you don't need any peer > validation for that. When some hotel or wifi AP has played funny buggers with DNS because they don't understand that Internet != Web, it's useful to have tools gracefully report the problem. I like it when tools are able to report that the problems are because it couldn't actually get an unmolested connection to the server, rather than something else being wrong. So if verification is a simple enough hack, I'm all for it. > Anyway, I might feel differently if it required a major code > commitment in GPG, but as things stand now, if the keyserver operators > want to band together and make sure their servers have a particular > cert setup and make the proper cainfo data available for clients who > want to use it, why not? Okay, once SKS 1.1.1 is out with IPv6 support, my next patching effort there is likely to involve TLS. Thanks, -Phil -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From gordian.klein at gmx.de Thu Apr 2 09:53:54 2009 From: gordian.klein at gmx.de (Gordian Klein) Date: Thu, 02 Apr 2009 09:53:54 +0200 Subject: problem with Decypher and Openpgp card In-Reply-To: <49D24A56.4020507@gmx.de> References: <49D24A56.4020507@gmx.de> Message-ID: <49D46F12.90303@gmx.de> Hello, just for the record: it turned out the error meant that the cryptogram was wrong. Regards, Gordian Gordian Klein wrote: > Hello, > > I have a problem with the decypher command from the OpenPGP Card. > I verify CHV2 and then send my decypher cmd, but the card always returns > SW_USE_CONDITIONS = 0x6985. > What does this error mean? I couldn't find anything with Google. > > Thank you for any hint. > Regards, > Gordian > > _______________________________________________ > Gnupg-devel mailing list > Gnupg-devel at gnupg.org > http://lists.gnupg.org/mailman/listinfo/gnupg-devel > From wk at gnupg.org Thu Apr 2 11:33:13 2009 From: wk at gnupg.org (Werner Koch) Date: Thu, 02 Apr 2009 11:33:13 +0200 Subject: hkps port In-Reply-To: (David Shaw's message of "Wed, 1 Apr 2009 22:51:18 -0400") References: Message-ID: <87tz57flie.fsf@wheatstone.g10code.de> On Thu, 2 Apr 2009 04:51, dshaw at jabberwocky.com said: > After some pondering about the proper port for hkps, I think that 443 > makes the most sense (in other words, use the same port number as > https). The reality is that there was never a particular reason why > regular hkp needed to be on port 11371. The protocol is really http, That is true for HTTP but not for HTTPS. Using 443 for hkps usually requires the use of a separate IP address - something I would really like to avoid. Another port also allows the use of a different server certificate. Shalom-Salam, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz. 
From dshaw at jabberwocky.com Thu Apr 2 13:26:27 2009 From: dshaw at jabberwocky.com (David Shaw) Date: Thu, 2 Apr 2009 07:26:27 -0400 Subject: hkps port In-Reply-To: <87tz57flie.fsf@wheatstone.g10code.de> References: <87tz57flie.fsf@wheatstone.g10code.de> Message-ID: On Apr 2, 2009, at 5:33 AM, Werner Koch wrote: > On Thu, 2 Apr 2009 04:51, dshaw at jabberwocky.com said: > >> After some pondering about the proper port for hkps, I think that 443 >> makes the most sense (in other words, use the same port number as >> https). The reality is that there was never a particular reason why >> regular hkp needed to be on port 11371. The protocol is really http, > > That is true for HTTP but not for HTTPS. Using 443 for hkps usually > requires the use of a separate IP adress - something I really like to > avoid. Another port also allow the use of a differentserver > certificate. I know. It's a messy situation, and there is no really good answer for all the problems. Some sites can *only* connect over 443 because of firewalling rules. I'm rather liking Phil's SRV suggestion at the moment. If nothing else, it lets server operators pick what they want for whatever internal reasons they want (firewall support, certificate issues, IP issues), and just tell the client where to find things. David From dshaw at jabberwocky.com Thu Apr 2 13:35:27 2009 From: dshaw at jabberwocky.com (David Shaw) Date: Thu, 2 Apr 2009 07:35:27 -0400 Subject: hkps port In-Reply-To: <20090402042404.GB85242@redoubt.spodhuis.org> References: <20090402042404.GB85242@redoubt.spodhuis.org> Message-ID: <9980072E-377B-458F-95C5-01365601FD10@jabberwocky.com> On Apr 2, 2009, at 12:24 AM, Phil Pennock wrote: > On 2009-04-01 at 22:51 -0400, David Shaw wrote: >> After some pondering about the proper port for hkps, I think that 443 >> makes the most sense (in other words, use the same port number as >> https). The reality is that there was never a particular reason why >> regular hkp needed to be on port 11371. 
The protocol is really http, >> and may as well have lived on the proper http port. I don't see a >> reason to repeat this for hkps, so in the interest of simplicity, 443 >> seems to be the best choice. > > I have a web-server running providing http/https. I have a keyserver > running on 11371 and would like to have TLS keyserver support. > > The nearest I could do in this case is to disable SSL support in > whichever keyserver I run, configure Apache to proxy for that vhost > and > find some way to turn off the normal logging, since I don't query-log > HKP retrievals. This means that the set-up which respects user-privacy > is more work to support (given default configurations). > > Since HKP isn't directly for human consumption, does the need to > tunnel > everything over the same two ports really stand? > > If you're set on 443, how about using SRV records for hkps, always, > and > only have 443 be the fallback port in the absence of SRV records? > > _hkps._tcp.spodhuis.org. IN SRV 10 10 11372 sks.spodhuis.org. I'm certainly not set on anything, yet. I'm just proposing. This is one of those unfortunate problems where there is no one solution that makes everyone happy. We may have to be satisfied with least-unhappy. ;) In terms of SRV, I'm all for it. The catch is that libcurl doesn't support them (which actually makes sense, however unfortunate it is for this immediate problem - SRV was often proposed, but never formally standardized for HTTP). No reason we can't support it in hkp/hkps, of course. I already have some code to support SRV for the non-curl HTTP handler in GPG, and it would be reasonably simple to port this over to dereference (for example) hkps://spodhuis.org into https://sks.spodhuis.org:11372. We could thus guarantee that hkps supports SRV from day one. 
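[Editor's note: the dereference step David describes (SRV target and port become the https endpoint) is mechanically simple; here is a sketch under the assumption that the DNS SRV lookup itself has already been done. The helper name is my own:]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the dereference step: given the target host and port from
 * an SRV record such as
 *     _hkps._tcp.spodhuis.org. IN SRV 10 10 11372 sks.spodhuis.org.
 * build the https URL the client would actually contact.  Real code
 * would first perform the DNS SRV lookup and honour priority/weight.
 * Returns 1 on success, 0 if the buffer is too small. */
int hkps_url_from_srv(char *out, size_t outlen,
                      const char *target, unsigned port)
{
    int n = snprintf(out, outlen, "https://%s:%u/", target, port);
    return n > 0 && (size_t)n < outlen;
}
```
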
David From dshaw at jabberwocky.com Thu Apr 2 13:45:37 2009 From: dshaw at jabberwocky.com (David Shaw) Date: Thu, 2 Apr 2009 07:45:37 -0400 Subject: HKP keyservers over SSL In-Reply-To: <20090402041515.GA85242@redoubt.spodhuis.org> References: <8DB37A9C-106C-419D-AFD7-707ABE1A5226@jabberwocky.com> <20090331215338.GA32245@redoubt.spodhuis.org> <78B4295E-EA47-48A7-BA93-B47248FF1BEA@jabberwocky.com> <20090402041515.GA85242@redoubt.spodhuis.org> Message-ID: On Apr 2, 2009, at 12:15 AM, Phil Pennock wrote: > On 2009-04-01 at 22:42 -0400, David Shaw wrote: >> I'm not against this, but there are a few practical caveats. Libcurl >> added SNI support around a year ago in 7.18.1 (assuming of course >> that the backend crypto library supports it). A year isn't very long >> at all, so there are loads of libcurl installations that don't yet >> have the proper version of libcurl (and/or openssl, etc). On those >> systems, we (meaning libcurl, really) can only do common name and >> subject alternate name checks. The three systems I just checked had >> libcurl 7.16.3 (no SNI), 7.19.4 (SNI), and 7.15.5 (no SNI). That >> does >> not bode well for quick adoption. > > hkps support is not out there now. If hkps support means, in > practice, > SNI support, then operators can rely upon this. If it doesn't, we're > just perpetuating the continued poor state of affairs. I quite agree. It's just that SNI support in libcurl and friends does not yet exist in sufficient percentage of the "market". We can't make the hkps==SNI guarantee. We can strongly suggest it, and we will eventually get there once the newer libraries percolate through the world, but it's not a guarantee we can make today. > How about some notes on this in the README's "BUILD INSTRUCTIONS" > stating pretty much what you just said and strongly recommending this? > Ideally with ./configure CAPS WARNINGS if hkps support is requested > but > this support is not present? 
I'm okay with this, except there isn't a good way to tell this at compile (or even run) time. SNI in curl is hidden fairly deep under the covers. Even if I do the inadvisable thing and warn for any version of curl older than 7.18.1, that doesn't really give a reliable answer, as curl might be linked against a SSL library that doesn't support it. >> Having said all that, I'm not sure if all this peer validation >> isn't a >> bit of overkill. My main desire for hkps is that the data on the >> wire >> is encrypted to avoid casual snooping, and you don't need any peer >> validation for that. > > When some hotel or wifi AP has played funny buggers with DNS because > they don't understand that Internet != Web, it's useful to have tools > gracefully report the problem. I like it when tools are able to > report > that the problems are because it couldn't actually get an unmolested > connection to the server, rather than something else being wrong. > So if > verification is a simple enough hack, I'm all for it. Oh, don't get me wrong: verification exists in the code today. It's even on by default (another example of a least-unhappy choice). I'm just pointing out that it's not my main desire. If it meets someone else's desire, that's great, though. I'm not writing this for me personally. :) David From bernhard at intevation.de Thu Apr 2 14:28:20 2009 From: bernhard at intevation.de (Bernhard Reiter) Date: Thu, 2 Apr 2009 14:28:20 +0200 Subject: gpgsm: Cert trouble GPG_ERR_NO_VALUE for GTE CyberTrust Global Root Message-ID: <200904021428.20569.bernhard@intevation.de> Something is wrong with CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US,serial#: 01A5 http://www.telesec.de/service/GTE-CyberTrust-Global-Root.der gpgsm 2.0.11 can import, but not use it. The GPG_ERR_NO_VALUE is not conclusive. What is going on? 
See the following session: rm -r dot.gnupg/ mkdir dot.gnupg LANGUAGE=C GPGHOME=dot.gnupg gpgsm --no-common-certs-import --list-keys gpgsm: keybox `/home/bernhard/tmp/dot.gnupg/pubring.kbx' created LANGUAGE=C GPGHOME=dot.gnupg gpgsm --list-keys LANGUAGE=C GPGHOME=dot.gnupg gpgsm --import GTE-CyberTrust-Global-Root.der gpgsm: total number processed: 1 gpgsm: imported: 1 LANGUAGE=C GPGHOME=dot.gnupg gpgsm --list-keys /home/bernhard/tmp/dot.gnupg/pubring.kbx ---------------------------------------- ID: 0x367EF474 S/N: 01A5 Issuer: /CN=GTE CyberTrust Global Root/OU=GTE CyberTrust Solutions, Inc./O=GTE Corporation/C=US Subject: /CN=GTE CyberTrust Global Root/OU=GTE CyberTrust Solutions, Inc./O=GTE Corporation/C=US validity: 1998-08-13 00:29:00 through 2018-08-13 23:59:00 key type: 1024 bit RSA chain length: none fingerprint: 97:81:79:50:D8:1C:96:70:CC:34:D8:09:CF:79:44:31:36:7E:F4:74 LANGUAGE=C GPGHOME=dot.gnupg gpgsm -e -r 97:81:79:50:D8:1C:96:70:CC:34:D8:09:CF:79:44:31:36:7E:F4:74 gpgsm: can't encrypt to `97:81:79:50:D8:1C:96:70:CC:34:D8:09:CF:79:44:31:36:7E:F4:74': No value LANGUAGE=C GPGHOME=dot.gnupg gpgsm -vvv --debug-all -e -r 97:81:79:50:D8:1C:96:70:CC:34:D8:09:CF:79:44:31:36:7E:F4:74 gpgsm: NOTE: no default option file `X/dot.gnupg/gpgsm.conf' gpgsm: no key usage specified - assuming all usages gpgsm: DBG: BEGIN Certificate `target': gpgsm: DBG: serial: 01A5 gpgsm: DBG: notBefore: 1998-08-13 00:29:00 gpgsm: DBG: notAfter: 2018-08-13 23:59:00 gpgsm: DBG: issuer: CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US gpgsm: DBG: subject: CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US gpgsm: DBG: hash algo: 1.2.840.113549.1.1.4 gpgsm: DBG: SHA1 Fingerprint: 97:81:79:50:D8:1C:96:70:CC:34:D8:09:CF:79:44:31:36:7E:F4:74 gpgsm: DBG: END Certificate gpgsm: DBG: connection to agent established gpgsm: DBG: signature value: 28 37 3A 73 69 67 2D 76 61 6C 28 33 3A 72 73 61 28 31 3A 73 31 32 38 3A 6D EB 
1B 09 E9 5E D9 51 DB 67 22 61 A4 2A 3C 48 77 E3 A0 7C A6 DE 73 A2 14 03 85 3D FB AB 0E 30 C5 83 16 33 81 13 08 9E 7B 34 4E DF 40 C8 74 D7 B9 7D DC F4 76 55 7D 9B 63 54 18 E9 F0 EA F3 5C B1 D9 8B 42 1E B9 C0 95 4E BA FA D5 E2 7C F5 68 61 BF 8E EC 05 97 5F 5B B0 D7 A3 85 34 C4 24 A7 0D 0F 95 93 EF CB 94 D8 9E 1F 9D 5C 85 6D C7 AA AE 4F 1F 22 B5 CD 95 AD BA A7 CC F9 AB 0B 7A 7F 29 29 28 34 3A 68 61 73 68 33 3A 6D 64 35 29 29 gpgsm: DBG: encoded hash: 00 01 FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF 00 30 20 30 0C 06 08 2A 86 48 86 F7 0D 02 05 05 00 04 10 E1 B3 4A 19 37 4F C7 10 C6 16 67 B8 2E 8F 1C 2C DBG: pubkey_verify: algo=1 pkey:: 950FA0B6F0509CE87AC788CDDD170E2EB094D01B3D0EF694C08A94C706C89097C8B8641A7A7E6C3C53E1372873607FB29753079F53F96D5894D2AF8D6D886780E6EDB295CF7231CAA51C72BA5C02E76442E7F9A92CD63A0DAC8D42AA240139E69C3F0185570D588745F8D385AA936926857048803F1215C779B41F052F3B6299 pkey:: 10001 sig:: 6DEB1B09E95ED951DB672261A42A3C4877E3A07CA6DE73A21403853DFBAB0E30C58316338113089E7B344EDF40C874D7B97DDCF476557D9B635418E9F0EAF35CB1D98B421EB9C0954EBAFAD5E27CF56861BF8EEC05975F5BB0D7A38534C424A70D0F9593EFCB94D89E1F9D5C856DC7AAAE4F1F22B5CD95ADBAA7CCF9AB0B7A7F hash:: 1FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF003020300C06082A864886F70D020505000410E1B34A19374FC710C61667B82E8F1C2C gpgsm: DBG: gcry_pk_verify: Success gpgsm: validation model used: shell gpgsm: can't encrypt to `97:81:79:50:D8:1C:96:70:CC:34:D8:09:CF:79:44:31:36:7E:F4:74': No value random usage: poolsize=600 mixed=0 polls=0/0 added=0/0 outmix=0 getlvl1=0/0 getlvl2=0/0 secmem usage: 0/16384 bytes in 0 blocks -- Managing Director - Owner: 
www.intevation.net (Free Software Company) Germany Coordinator: fsfeurope.org. Coordinator: www.Kolab-Konsortium.com. Intevation GmbH, Osnabrück, DE; Amtsgericht Osnabrück, HRB 18998 Geschäftsführer Frank Koormann, Bernhard Reiter, Dr. Jan-Oliver Wagner From wk at gnupg.org Thu Apr 2 14:43:14 2009 From: wk at gnupg.org (Werner Koch) Date: Thu, 02 Apr 2009 14:43:14 +0200 Subject: hkps port In-Reply-To: (David Shaw's message of "Thu, 2 Apr 2009 07:26:27 -0400") References: <87tz57flie.fsf@wheatstone.g10code.de> Message-ID: <87ocvffcpp.fsf@wheatstone.g10code.de> On Thu, 2 Apr 2009 13:26, dshaw at jabberwocky.com said: > for all the problems. Some sites can *only* connect over 443 because > of firewalling rules. I'm rather liking Phil's SRV suggestion at the We have port 80 keyservers as well but they are not the default. These keyservers exist because of the firewall problems. What about round-robin DNS names: we could put the port 443 keyservers into http-keys.gnupg.net - they are used by people with firewall problems and thus we can be quite sure that those firewalls will also allow port 443. I think this is a less surprising way than maintaining another list of https-keys.gnupg.net with the hkps servers which can't be bound to port 443. I have no problems with the SRV record suggestion, either. Salam-Shalom, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz. From dshaw at jabberwocky.com Thu Apr 2 14:56:48 2009 From: dshaw at jabberwocky.com (David Shaw) Date: Thu, 2 Apr 2009 08:56:48 -0400 Subject: hkps port In-Reply-To: <87ocvffcpp.fsf@wheatstone.g10code.de> References: <87tz57flie.fsf@wheatstone.g10code.de> <87ocvffcpp.fsf@wheatstone.g10code.de> Message-ID: On Apr 2, 2009, at 8:43 AM, Werner Koch wrote: > On Thu, 2 Apr 2009 13:26, dshaw at jabberwocky.com said: > >> for all the problems. Some sites can *only* connect over 443 because >> of firewalling rules.
I'm rather liking Phil's SRV suggestion at the > > We have port 80 keyservers as well but they are not the default. > These > keyservers exists because of the firewall problems. Yes. > What about round robin DNS names: We could put the port 443 keyservers > into http-keys.gnupg.net - they are used by people with firewall > problems and thus we can be quite sure that those firewalls will also > allow port 443. I think this is a less surprising way than to > maintain > another list of https-keys.gnupg.net with the hkps servers which can't > be bound to port 443. I think this is a good idea and should be the standard practice. > I have no problems with the SRV record suggestion, either. Ideally, curl would support SRV internally. It can do a better job than we can do as a wrapper from outside, as it can properly walk the list of returned servers until one answers. The best we can do is do a SRV lookup, run the selection algorithm, and then hope that the best choice is actually running. Still, it is better than nothing. If I had more spare time, I'd just write SRV for curl and donate it to them. David From dkg at fifthhorseman.net Thu Apr 2 16:45:28 2009 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Thu, 02 Apr 2009 10:45:28 -0400 Subject: keyserver pools using SRV records for HKP and HKPS [Was: Re: hkps port] In-Reply-To: References: <87tz57flie.fsf@wheatstone.g10code.de> <87ocvffcpp.fsf@wheatstone.g10code.de> Message-ID: <49D4CF88.3000403@fifthhorseman.net> On 04/02/2009 08:56 AM, David Shaw wrote: > Ideally, curl would support SRV internally. It can do a better job than > we can do as a wrapper from outside, as it can properly walk the list of > returned servers until one answers. The best we can do is do a SRV > lookup, run the selection algorithm, and then hope that the best choice > is actually running. Still, it is better than nothing. If I had more > spare time, I'd just write SRV for curl and donate it to them. 
I agree that using SRV records is a good idea for HKP and HKPS. And David's suggestion here is structurally the right way to go, even if we're not there yet. But i also note that i don't see any keyserver pools publishing their pool as SRV records at the moment -- only A records. If we're going to say that we're making a least-unhappy choice (which is bound to make some operators unhappy), and that SRV records will be the mitigating factor, we should probably clearly encourage keyserver pool operators to publish their pool as SRV records directly in addition to A records. Or are they already doing this, and i'm just querying the wrong way? dig -t SRV _hkp._tcp.pool.sks-keyservers.net dig -t SRV _hkp._tcp.keys.gnupg.net both return empty sets. --dkg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 890 bytes Desc: OpenPGP digital signature URL: From gnupg-devel at spodhuis.org Thu Apr 2 17:07:02 2009 From: gnupg-devel at spodhuis.org (Phil Pennock) Date: Thu, 2 Apr 2009 08:07:02 -0700 Subject: keyserver pools using SRV records for HKP and HKPS [Was: Re: hkps port] In-Reply-To: <49D4CF88.3000403@fifthhorseman.net> References: <87tz57flie.fsf@wheatstone.g10code.de> <87ocvffcpp.fsf@wheatstone.g10code.de> <49D4CF88.3000403@fifthhorseman.net> Message-ID: <20090402150701.GA98321@redoubt.spodhuis.org> On 2009-04-02 at 10:45 -0400, Daniel Kahn Gillmor wrote: > On 04/02/2009 08:56 AM, David Shaw wrote: > > Ideally, curl would support SRV internally. It can do a better job than > > we can do as a wrapper from outside, as it can properly walk the list of > > returned servers until one answers. The best we can do is do a SRV > > lookup, run the selection algorithm, and then hope that the best choice > > is actually running. Still, it is better than nothing. If I had more > > spare time, I'd just write SRV for curl and donate it to them. 
> > I agree that using SRV records is a good idea for HKP and HKPS. And > David's suggestion here is structurally the right way to go, even if > we're not there yet. > > But i also note that i don't see any keyserver pools publishing their > pool as SRV records at the moment -- only A records. If we're going to > say that we're making a least-unhappy choice (which is bound to make > some operators unhappy), and that SRV records will be the mitigating > factor, we should probably clearly encourage keyserver pool operators to > publish their pool as SRV records directly in addition to A records. > > Or are they already doing this, and i'm just querying the wrong way? > > dig -t SRV _hkp._tcp.pool.sks-keyservers.net > dig -t SRV _hkp._tcp.keys.gnupg.net > > both return empty sets. Well, there's already a well-established port, 11371, so there's been no need to, really; sure, port numbers can be published but there's no point if most clients won't actually use the port number. The issue was what to do with hkps. At the moment, hkps *can* be done with a reverse proxy in front of the HKP service providing the SSL stuff (and probably query-logging too). So it's plausible to develop a client which can be tested in use. Developing native SSL support in the servers could do with client support to test against. And since David has kindly written this for us, we just need to wait for the next release to test. While I don't run any public pools I do know what's involved, as for self-education I maintain a pool name (under a deliberately long DNS name to discourage use); really, exporting hostnames as well as IPs would be trivial and once hkps support is in SKS, including the port on the stats page and including that in the results will also be trivial. Of course, I don't issue SSL certs so probably shouldn't publish hkps records. ;-) -Phil -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From dshaw at jabberwocky.com Fri Apr 3 00:54:58 2009 From: dshaw at jabberwocky.com (David Shaw) Date: Thu, 2 Apr 2009 18:54:58 -0400 Subject: keyserver pools using SRV records for HKP and HKPS [Was: Re: hkps port] In-Reply-To: <49D4CF88.3000403@fifthhorseman.net> References: <87tz57flie.fsf@wheatstone.g10code.de> <87ocvffcpp.fsf@wheatstone.g10code.de> <49D4CF88.3000403@fifthhorseman.net> Message-ID: On Apr 2, 2009, at 10:45 AM, Daniel Kahn Gillmor wrote: > On 04/02/2009 08:56 AM, David Shaw wrote: >> Ideally, curl would support SRV internally. It can do a better job >> than >> we can do as a wrapper from outside, as it can properly walk the >> list of >> returned servers until one answers. The best we can do is do a SRV >> lookup, run the selection algorithm, and then hope that the best >> choice >> is actually running. Still, it is better than nothing. If I had >> more >> spare time, I'd just write SRV for curl and donate it to them. > > I agree that using SRV records is a good idea for HKP and HKPS. And > David's suggestion here is structurally the right way to go, even if > we're not there yet. > > But i also note that i don't see any keyserver pools publishing their > pool as SRV records at the moment -- only A records. If we're going > to > say that we're making a least-unhappy choice (which is bound to make > some operators unhappy), and that SRV records will be the mitigating > factor, we should probably clearly encourage keyserver pool > operators to > publish their pool as SRV records directly in addition to A records. > > Or are they already doing this, and i'm just querying the wrong way? > > dig -t SRV _hkp._tcp.pool.sks-keyservers.net > dig -t SRV _hkp._tcp.keys.gnupg.net Some pools have SRV already and some don't. Try "_hkp._tcp.subkeys.pgp.net". 
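A pool operator publishing such records would add zone lines on the order of `_hkp._tcp IN SRV 10 60 11371 keys1.example.org.` (hypothetical host name). On the client side, the selection David describes earlier in the thread -- a SRV lookup, then the RFC 2782 ordering, then walking the list until a server answers -- might be sketched roughly like this; the records below are invented examples, not real pool data:

```python
# Sketch of RFC 2782 ordering for SRV records: lowest priority first,
# weighted-random choice within each priority group. A caller would then
# try the resulting servers in order until one answers.
import random
from typing import List, Tuple

# (priority, weight, port, target)
SrvRecord = Tuple[int, int, int, str]

def order_srv(records: List[SrvRecord], rng: random.Random) -> List[SrvRecord]:
    ordered: List[SrvRecord] = []
    for prio in sorted({r[0] for r in records}):
        group = [r for r in records if r[0] == prio]
        while group:
            total = sum(r[1] for r in group) or 1
            pick = rng.randint(0, total - 1)
            running = 0
            for r in group:
                running += r[1]
                # Weighted pick; the "is group[-1]" guard handles all-zero weights.
                if pick < running or r is group[-1]:
                    ordered.append(r)
                    group.remove(r)
                    break
    return ordered
```

A real client would feed this the answer section of a `_hkp._tcp.<pool>` SRV query; curl doing this internally (as David suggests) would additionally retry the remaining entries on connection failure.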
David From zbiggy at o2.pl Fri Apr 3 00:38:25 2009 From: zbiggy at o2.pl (Zbigniew Luszpinski) Date: Fri, 3 Apr 2009 00:38:25 +0200 Subject: dirmngr 1.0.2: fetch-crl causes memory protection fault Message-ID: <200904030038.25422.zbiggy@o2.pl> Hello, I cannot import CRL files because dirmngr terminates with a memory protection fault: dirmngr --fetch-crl http://crl.verisign.com/pca2-g2.crl dirmngr[834]: permanently loaded certificates: 0 dirmngr[834]: runtime cached certificates: 0 Memory protection fault dirmngr --fetch-crl http://onsitecrl.verisign.com/OnSitePublic/LatestCRL.crl dirmngr[881]: error opening `/root/.gnupg/dirmngr_ldapservers.conf': No such file or directory dirmngr[881]: permanently loaded certificates: 0 dirmngr[881]: runtime cached certificates: 0 Memory protection fault have a nice day, Zbigniew 'zbiggy' Luszpinski From wk at gnupg.org Fri Apr 3 10:29:08 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 10:29:08 +0200 Subject: dirmngr 1.0.2: fetch-crl causes memory protection fault In-Reply-To: <200904030038.25422.zbiggy@o2.pl> (Zbigniew Luszpinski's message of "Fri, 3 Apr 2009 00:38:25 +0200") References: <200904030038.25422.zbiggy@o2.pl> Message-ID: <87iqlmf8dn.fsf@wheatstone.g10code.de> On Fri, 3 Apr 2009 00:38, zbiggy at o2.pl said: > dirmngr --fetch-crl http://crl.verisign.com/pca2-g2.crl > dirmngr[834]: permanently loaded certificates: 0 > dirmngr[834]: runtime cached certificates: 0 > Memory protection fault Oops. You are the first one to try it for more than half a year. Fixed in SVN; a new release is planned in about 2 weeks. Patch against 1.0.2 attached. Thanks, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: dirmngr-090403.diff URL: From bernhard at intevation.de Fri Apr 3 13:21:27 2009 From: bernhard at intevation.de (Bernhard Reiter) Date: Fri, 3 Apr 2009 13:21:27 +0200 Subject: gpgsm: Cert trouble GPG_ERR_NO_VALUE for GTE CyberTrust Global Root In-Reply-To: <200904021428.20569.bernhard@intevation.de> References: <200904021428.20569.bernhard@intevation.de> Message-ID: <200904031321.27504.bernhard@intevation.de> Am Donnerstag, 2. April 2009 14:28:20 schrieb Bernhard Reiter: > Something is wrong with > CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE > Corporation,C=US,serial#: 01A5 > http://www.telesec.de/service/GTE-CyberTrust-Global-Root.der > > gpgsm 2.0.11 can import, but not use it. > The GPG_ERR_NO_VALUE is not conclusive. > What is going on? Looks like some check in gpg-agent rejects that certificate. gpg-agent[4432.8] DBG: <- ISTRUSTED 97817950D81C9670CC34D809CF794431367EF474 gpg-agent[4432.8] DBG: -> ERR 67108962 Nicht vertrauenswürdig gpgsm: can't encrypt to `GTE': No value Some user reported that "relax" for the trustlist.txt might help; I am going to test this next. The documentation for what relax does is already a bit fuzzy on this, and so is the status message. "Not trusted, failing check" would be much better. Shall I open an issue for this already? Note that there are a lot of SMIME users with this certificate around, though it certainly is not the best certificate someone could get. > See the following session: > > rm -r dot.gnupg/ > mkdir dot.gnupg > LANGUAGE=C GPGHOME=dot.gnupg gpgsm --no-common-certs-import --list-keys Typo in the script: it should have been GNUPGHOME of course (I did set it in the environment before; this is why my typo still gave me the right behaviour... sorry for the potential confusion.) Bernhard -- Managing Director - Owner: www.intevation.net (Free Software Company) Germany Coordinator: fsfeurope.org. Coordinator: www.Kolab-Konsortium.com.
Intevation GmbH, Osnabrück, DE; Amtsgericht Osnabrück, HRB 18998 Geschäftsführer Frank Koormann, Bernhard Reiter, Dr. Jan-Oliver Wagner From wk at gnupg.org Fri Apr 3 14:03:21 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 14:03:21 +0200 Subject: Blackberry integration In-Reply-To: <4255c2570903250845j14f2976eq4725b26eba05579e@mail.gmail.com> (aoz.syn@gmail.com's message of "Wed, 25 Mar 2009 09:45:48 -0600") References: <4255c2570903250845j14f2976eq4725b26eba05579e@mail.gmail.com> Message-ID: <873acqdjw6.fsf@wheatstone.g10code.de> On Wed, 25 Mar 2009 16:45, aoz.syn at gmail.com said: > The PGP offering also requires a minimum seat purchase, driving the > cost way up for individual users. I, too, would love to see > Blackberry and WM6 support, but fear those may be down the road a > distance or even on myself. Not that I know. In fact I once started with a project to write a simple OpenPGP implementation for use on small devices. Lacking any project support, this is in deep sleep mode. Shalom-Salam, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz. From wk at gnupg.org Fri Apr 3 14:06:11 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 14:06:11 +0200 Subject: Using GPG exported key In-Reply-To: <49D3C2AB.5020401@Mozilla-Enigmail.org> (John Clizbe's message of "Wed, 01 Apr 2009 14:38:19 -0500") References: <1a27245d0904011211s79b6ec23va71036b23906c57d@mail.gmail.com> <49D3C2AB.5020401@Mozilla-Enigmail.org> Message-ID: <87y6uic570.fsf@wheatstone.g10code.de> On Wed, 1 Apr 2009 21:38, John at Mozilla-Enigmail.org said: > FWIW, the GnuPG keyring files {pubring,secring,trustdb}.gpg are binary > compatible. I find it's just as convenient to copy or import entire > keyrings. Just for the record, here is my usual remark: That works only with the same GnuPG versions. Future GnuPG versions and some older versions may/will use a different format. Salam-Shalom, Werner -- Die Gedanken sind frei.
Auschnahme regelt ein Bundeschgesetz. From wk at gnupg.org Fri Apr 3 14:23:17 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 14:23:17 +0200 Subject: gpgsm: Cert trouble GPG_ERR_NO_VALUE for GTE CyberTrust Global Root In-Reply-To: <200904031321.27504.bernhard@intevation.de> (Bernhard Reiter's message of "Fri, 3 Apr 2009 13:21:27 +0200") References: <200904021428.20569.bernhard@intevation.de> <200904031321.27504.bernhard@intevation.de> Message-ID: <87prfuc4ei.fsf@wheatstone.g10code.de> On Fri, 3 Apr 2009 13:21, bernhard at intevation.de said: > Looks like some check in gpg-agent rejects that certificate. > gpg-agent[4432.8] DBG: <- ISTRUSTED 97817950D81C9670CC34D809CF794431367EF474 > gpg-agent[4432.8] DBG: -> ERR 67108962 Nicht vertrauenswürdig That merely means that the certificate is not in the trustlist.txt. > Some user reported that "relax" for the trustlist.txt might help, I am going > to test this next. Already the documentation for what relax does is a bit > fuzzy on this and so is the status message. "Not trusted, failing check" This is on purpose, so that we can disable one or the other check without being required to stick to it with the next release of GnuPG. The certificate is broken and relax is just a workaround. Salam-Shalom, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz.
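For readers following the thread: the trustlist.txt Werner refers to lives in the GnuPG home directory and is read by gpg-agent. A sketch of an entry with the relax flag follows -- the fingerprint is the one from Bernhard's log; verify the exact syntax against the documentation of your GnuPG version before relying on it:

```
# ~/.gnupg/trustlist.txt (sketch only; check your gpg-agent docs)
# Fingerprint of the GTE CyberTrust root from this thread, marked
# trusted for S/MIME (flag S), with "relax" to loosen some checks:
97817950D81C9670CC34D809CF794431367EF474 S relax
```

gpg-agent must re-read the file (e.g. after a restart or reload) for the change to take effect.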
From wk at gnupg.org Fri Apr 3 14:20:10 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 14:20:10 +0200 Subject: gpgsm: Cert trouble GPG_ERR_NO_VALUE for GTE CyberTrust Global Root In-Reply-To: <200904021428.20569.bernhard@intevation.de> (Bernhard Reiter's message of "Thu, 2 Apr 2009 14:28:20 +0200") References: <200904021428.20569.bernhard@intevation.de> Message-ID: <87tz56c4jp.fsf@wheatstone.g10code.de> On Thu, 2 Apr 2009 14:28, bernhard at intevation.de said: > Something is wrong with > CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE > Corporation,C=US,serial#: 01A5 > http://www.telesec.de/service/GTE-CyberTrust-Global-Root.der > > gpgsm 2.0.11 can import, but not use it. > The GPG_ERR_NO_VALUE is not conclusive. > What is going on? Here are the reasons for this error code: GPG_ERR_NO_VALUE No value GNUPG: - A timestamp value is expected but there is none. KSBA: - A timestamp value is expected but there is none. - A certificate is missing a required property. - A CMS object is missing a required property. - Converting a Distinguished Name to an RFC2253 string failed. I doubt that this will help you. I'll check the certificate. Shalom-Salam, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz.
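As an aside, numeric gpg-error values such as the ERR 67108962 in Bernhard's gpg-agent log combine an error source and an error code. A rough decode, assuming the bit layout used by libgpg-error's gpg-error.h (source in the high bits, code in the low 16 bits -- double-check the header for the authoritative mapping of numbers to names):

```python
# Split a gpg-error value into (source, code). The layout is an assumption
# based on gpg-error.h: 7 bits of source starting at bit 24, 16 bits of code.
def split_gpg_error(err: int) -> tuple:
    source = (err >> 24) & 0x7F   # e.g. 4 should be the gpg-agent source
    code = err & 0xFFFF           # e.g. 98 matches the "not trusted" log text
    return source, code

# The ERR 67108962 seen earlier in this thread:
print(split_gpg_error(67108962))   # -> (4, 98)
```

This is why the same code number ("No value", "Not trusted", ...) can show up with different numeric values depending on which component reported it.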
From wk at gnupg.org Fri Apr 3 14:34:26 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 14:34:26 +0200 Subject: gpgsm: Cert trouble GPG_ERR_NO_VALUE for GTE CyberTrust Global Root In-Reply-To: <200904021428.20569.bernhard@intevation.de> (Bernhard Reiter's message of "Thu, 2 Apr 2009 14:28:20 +0200") References: <200904021428.20569.bernhard@intevation.de> Message-ID: <87ljqhdigd.fsf@wheatstone.g10code.de> On Thu, 2 Apr 2009 14:28, bernhard at intevation.de said: > Something is wrong with > CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE > Corporation,C=US,serial#: 01A5 > http://www.telesec.de/service/GTE-CyberTrust-Global-Root.der In libksba/tests you find a useful tool for such cases: $ ./cert-basic GTE-CyberTrust-Global-Root.der Certificate in `GTE-CyberTrust-Global-Root.der': serial....: (#01A5# ) issuer....: `CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US' subject...: `CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US' notBefore.: 1998-08-13 00:29:00 notAfter..: 2018-08-13 23:59:00 hash algo.: 1.2.840.113549.1.1.4 (md5withRSAEncryption) cert-basic.c:285: enumerating extensions failed: No value SubjectKeyIdentifier: none AuthorityKeyIdentifier: none cert-basic.c:343: ksba_cert_is_ca failed: No value KeyUsage: Not specified ExtKeyUsages: none CertificatePolicies: none cert-basic.c:453: ksba_cert_get_crl_dist_point failed: No value cert-basic.c:472: ksba_cert_get_authority_info_access failed: No value cert-basic.c:491: ksba_cert_get_subject_info_access failed: No value "ksba_cert_is_ca failed" is the problem with that certificate. It is a root certificate but it does not say so in its signedAttributes. Hmmm, there are no signed attributes at all. BTW, I consider this a feature of GnuPG: Would you really trust a CA which issues a root certificate valid for 20 years? That was even ridiculous back in 1998.
The use of MD5 was kind of justified 11 years ago. Don't spend any more time on this; you would be better off using plaintext than GTE Cybertrust "secured" encryption. Shalom-Salam, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz. From wk at gnupg.org Fri Apr 3 14:38:54 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 14:38:54 +0200 Subject: Cross-compiling libgcrypt fails In-Reply-To: <75A244ADD1BE9147A96165D2217F3350063B0BC7@BUDMLVEM07.e2k.ad.ge.com> (Jussi Eronen's message of "Tue, 24 Mar 2009 08:14:18 +0100") References: <75A244ADD1BE9147A96165D2217F3350063B0BC7@BUDMLVEM07.e2k.ad.ge.com> Message-ID: <87hc15di8x.fsf@wheatstone.g10code.de> On Tue, 24 Mar 2009 08:14, jussi.eronen at med.ge.com said: > ../src/.libs/libgcrypt.so: undefined reference to `_gcry_mpih_sub_n' > ... > > I am building for powerpc 603e target with toolchain gcc-4.2.4, glibc-2.6.1 and binutils-2.19 on Fedora8. Sorry, I have no test platform for this. It used to work, thus either the toolchain changed or we have a regression. As a workaround you may use ./configure --disable-asm which might be what the distributions are doing for power32. Salam-Shalom, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz. From wk at gnupg.org Fri Apr 3 14:46:32 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 14:46:32 +0200 Subject: HKP client certificates (was: HKP keyservers over SSL) In-Reply-To: <20090323175651.GA74822@jabberwocky.com> (David Shaw's message of "Mon, 23 Mar 2009 13:56:51 -0400") References: <49B95F0A.4060309@fifthhorseman.net> <20090312210856.GC1295@jabberwocky.com> <49B97FAF.7080500@fifthhorseman.net> <20090312222036.GE1295@jabberwocky.com> <87tz5x5g1z.fsf@wheatstone.g10code.de> <20090323175651.GA74822@jabberwocky.com> Message-ID: <87d4btdhw7.fsf_-_@wheatstone.g10code.de> On Mon, 23 Mar 2009 18:56, dshaw at jabberwocky.com said: > communications, rather than the client to server communications.
The > catch, of course, is that given how the keyserver gossip protocol > works, a given keyserver pool must be willing to exclude everyone who > does not similarly use client certs. You will end up with the usual trust problem. Why should a server trust a user certificate? Well, it would allow us to actually implement the No-modify keyserver preference we have been setting on new keys for ages. But how shall this work for revocations? A user without access to his secret key still needs a way to upload revocations. PKIs used beyond a closed user group just don't work. Shalom-Salam, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz. From wk at gnupg.org Fri Apr 3 15:09:55 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 15:09:55 +0200 Subject: HKP keyservers over SSL In-Reply-To: <49C97828.2070808@fifthhorseman.net> (Daniel Kahn Gillmor's message of "Tue, 24 Mar 2009 20:17:44 -0400") References: <49B95F0A.4060309@fifthhorseman.net> <20090312210856.GC1295@jabberwocky.com> <49B97FAF.7080500@fifthhorseman.net> <20090312222036.GE1295@jabberwocky.com> <87tz5x5g1z.fsf@wheatstone.g10code.de> <49C97828.2070808@fifthhorseman.net> Message-ID: <8763hldgt8.fsf@wheatstone.g10code.de> On Wed, 25 Mar 2009 01:17, dkg at fifthhorseman.net said: >> 3. Use a list of server certificate fingerprints and compare against >> them. For example in the DNS which is secure enough for our threat >> model. Recall that the servers can still track key requests. > > I don't think i understand this option. Why is the DNS sufficiently > secure here? The idea is that we maintain a list of server certificate hashes and at connection time we compare against that list; thus there is no need to discuss the so-called benefits of a root certificate. My threat model is a casual snooping attack and thus I consider DNS secure enough; of course it depends on what you call a casual attack. That list could also be maintained on a website or signed by a few trustworthy keys.
However, that would immediately introduce a PKI again. > ;) I think a GNUNet service that focuses on distributing key material > would be a great thing to have, but i don't see it replacing HKP any > time soon, with all the HKP clients that exist. Me too. We have to live with HKP. Just look at how hard it is to convince people not to use the old broken PGP keyservers. We have been trying for years to abolish the use of them, but without much success. > queries, users can opt to use tor or not without needing to change gpg, > no? Can't gpg users already use tor for keyserver lookups in fact? (i > haven't tried it myself). Sure, you can. Salam-Shalom, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz. From wk at gnupg.org Fri Apr 3 15:19:45 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 15:19:45 +0200 Subject: HKP keyservers over SSL In-Reply-To: <49D387E7.1090401@fifthhorseman.net> (Daniel Kahn Gillmor's message of "Wed, 01 Apr 2009 11:27:35 -0400") References: <8DB37A9C-106C-419D-AFD7-707ABE1A5226@jabberwocky.com> <20090331215338.GA32245@redoubt.spodhuis.org> <49D387E7.1090401@fifthhorseman.net> Message-ID: <871vs9dgcu.fsf@wheatstone.g10code.de> On Wed, 1 Apr 2009 17:27, dkg at fifthhorseman.net said: > However, i think that the arguments for using HKPS tend to suggest that > a user would want to use a single known keyserver (or at least a pool > maintained by the same administrator who can give them all the same host > keys or get them all independently certified). I agree here. Thus we do not need to care about pooled keyservers because you won't be able to trust them. The trust you have in a certain keyserver was the original reason you asked for TLS support. Shalom-Salam, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz.
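The certificate-hash list Werner proposes earlier in the thread (compare the presented server certificate against a maintained list at connection time, rather than chaining to a root) can be sketched in a few lines. hashlib and hmac are standard library; the pin values and the DER input here are invented placeholders, not real keyserver data:

```python
# Sketch of certificate pinning by fingerprint: hash the DER-encoded
# certificate the server presented and check it against a pinned set.
import hashlib
import hmac

def cert_is_pinned(der_bytes: bytes, pinned_sha1_hex: set) -> bool:
    """Return True if the SHA-1 fingerprint of der_bytes is in the pin set."""
    fpr = hashlib.sha1(der_bytes).hexdigest()
    # hmac.compare_digest avoids leaking the match position via timing.
    return any(hmac.compare_digest(fpr, pin) for pin in pinned_sha1_hex)
```

A client would obtain the pin set out of band (Werner suggests DNS, a website, or a signed list) and the DER bytes from its TLS layer; whether SHA-1 fingerprints remain adequate is a separate question from the 2009 context.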
From dshaw at jabberwocky.com Fri Apr 3 15:26:05 2009 From: dshaw at jabberwocky.com (David Shaw) Date: Fri, 3 Apr 2009 09:26:05 -0400 Subject: HKP client certificates (was: HKP keyservers over SSL) In-Reply-To: <87d4btdhw7.fsf_-_@wheatstone.g10code.de> References: <49B95F0A.4060309@fifthhorseman.net> <20090312210856.GC1295@jabberwocky.com> <49B97FAF.7080500@fifthhorseman.net> <20090312222036.GE1295@jabberwocky.com> <87tz5x5g1z.fsf@wheatstone.g10code.de> <20090323175651.GA74822@jabberwocky.com> <87d4btdhw7.fsf_-_@wheatstone.g10code.de> Message-ID: <52B2DC78-BD40-4015-A24A-5260DA432F6C@jabberwocky.com> On Apr 3, 2009, at 8:46 AM, Werner Koch wrote: > On Mon, 23 Mar 2009 18:56, dshaw at jabberwocky.com said: > >> communications, rather than the client to server communications. The >> catch, of course, is that given how the keyserver gossip protocol >> works, a given keyserver pool must be willing to exclude everyone who >> does not similarly use client certs. > > You will end up with the usual trust problem. Why should a server > trust > a user certificate? Well, it would allow to actually implement the > No-modify keyserver preference we set on new keys for ages. But how > shall this work for revocations? A user without access to his secret > key still needs a way to upload revocations. PKIs used beyond a > closed > user group just don't work. I'm referring to the server-to-server communications, rather than the client (GPG) to server communications. I.e., the SKS "gossip" protocol it uses to exchange keys internally. I can see reasons why server A might want to authenticate server B before it allows it to contribute to the shared keyring. The catch is as I stated: you need to be willing to exclude every server that doesn't use keys (and even every server that gossips with those that don't), which is a nonstarter.
David From bernhard at intevation.de Fri Apr 3 15:43:29 2009 From: bernhard at intevation.de (Bernhard Reiter) Date: Fri, 3 Apr 2009 15:43:29 +0200 Subject: gpgsm: Cert trouble GPG_ERR_NO_VALUE for GTE CyberTrust Global Root In-Reply-To: <87tz56c4jp.fsf@wheatstone.g10code.de> References: <200904021428.20569.bernhard@intevation.de> <87tz56c4jp.fsf@wheatstone.g10code.de> Message-ID: <200904031543.30020.bernhard@intevation.de> Am Freitag, 3. April 2009 14:20:10 schrieb Werner Koch: > On Thu, 2 Apr 2009 14:28, bernhard at intevation.de said: > > Something is wrong with > > CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE > > Corporation,C=US,serial#: 01A5 > > http://www.telesec.de/service/GTE-CyberTrust-Global-Root.der > > > > gpgsm 2.0.11 can import, but not use it. > > The GPG_ERR_NO_VALUE is not conclusive. > > What is going on? > > Here are the reasons for this error code: > > GPG_ERR_NO_VALUE No value > > GNUPG: - A timestamp value is expected but there is none. > KSBA: - A timestamp value is expected but there is none. > - A certificate is missing a required property. > - A CMS object is missing a required property. > - Converting a Distinguished Name to an RFC2253 string failed. > > I doubt that this will help you. I'll check the certificate. Thanks for the responses; yes, they do help me. The certificate came out of a real use case and ate up time from real users. The message "No value" is not enough for them, nor for their supporting administrators, to get the idea that the certificate is to blame (as compared to other settings, e.g. validation and so on). So even if the result were as simple as "Certificate failing basic consistency checks", this would be very helpful. Then they could look up the documentation, which could potentially read like: The CMS implementation does a number of basic consistency checks before using a certificate.
For 2.0.11 these checks are, for instance, about a) certificate length > X years, b) no use of MD5, c) ... The exact check parameters are subject to change with each version; if your certificate fails, you should consult an expert. If you are the expert, check the source code or use ./cert-basic from libksba/tests.

Bernhard
--
Managing Director - Owner: www.intevation.net (Free Software Company) Germany
Coordinator: fsfeurope.org. Coordinator: www.Kolab-Konsortium.com.
Intevation GmbH, Osnabrück, DE; Amtsgericht Osnabrück, HRB 18998
Geschäftsführer Frank Koormann, Bernhard Reiter, Dr. Jan-Oliver Wagner

From wk at gnupg.org Fri Apr 3 16:03:04 2009
From: wk at gnupg.org (Werner Koch)
Date: Fri, 03 Apr 2009 16:03:04 +0200
Subject: Bug#519333: gnupg: Please include support for encrypted keyserver queries [PATCH]
In-Reply-To: <49C7B800.6050001@fifthhorseman.net> (Daniel Kahn Gillmor's message of "Mon, 23 Mar 2009 12:25:36 -0400")
References: <20090311221142.16436.77877.reportbug@pond.riseup.net> <87skl8rph8.fsf@mid.deneb.enyo.de> <49C3F19B.10400@fifthhorseman.net> <87tz5k1r0q.fsf@wheatstone.g10code.de> <49C799E3.6020707@fifthhorseman.net> <877i2gtfth.fsf@wheatstone.g10code.de> <49C7B800.6050001@fifthhorseman.net>
Message-ID: <87r609bzs7.fsf@wheatstone.g10code.de>

On Mon, 23 Mar 2009 17:25, dkg at fifthhorseman.net said:
> Actually, i do run one of the public keyservers. Even if i didn't,
> there are some keyservers run by organizations which i believe have my
> interests in mind more than others. For example, i might prefer an
> organization who commits to the following behaviors:

Okay, so a reason for using TLS would be a private or company keyserver. I can see that, and the wish to use TLS instead of a VPN.

> I agree that it should be impossible for a malicious keyserver to forge
> new signatures. However, a malicious keyserver could non-detectably
> strip signatures (e.g.
removing revocation certificates, or certificates

Which happened often enough in the past due to buggy keyserver code, and we can't be 100 percent sure that this won't occur again in the future.

> But not all keyservers have enough guaranteed traffic, and not everyone
> wants to (or can afford to) saturate the network with filler queries.

Anonymity comes at a price.

> Furthermore, tor introduces an additional communications delay and a
> layer of fragility to keyserver queries. Have you ever run "gpg
> --refresh-keys" on a keyring with several hundred entries?

Yes, this takes long even without TLS.

> My threat model is a motivated attacker looking to glean information
> about the relationships of interest to a particular entity based on
> their keyserver queries. Since many keyservers are on fairly public
> network segments (in colocation facilities, for example), the
> opportunity for sniffing traffic at the very least is high for an
> attacker with moderate resources.

A big local keyring will be helpful. The only problem is that gpg is too slow for this. For refreshing keys you would randomly select some keys to refresh in addition to those you really need to refresh.

> servers and clients using SSH and TLS). Regular connections to a
> keyserver to check for updated keys, etc, can leak a significant amount
> of information about (for example) who is authorized to access a given
> service. A large organization using OpenPGP for this type of

The question is how to set up a reliable revocation service. I don't think that keyservers are the proper solution. Keyservers have a similar problem as CRLs do. OCSP has other problems - it is all a mess. We'd better not mix this up with the TLS thing; that might be useful even without a reliable revocation service.

> Could you point me toward some examples or something i should read up on
> Without functional revocation certificates, the OpenPGP infrastructure
> is significantly weaker than i'd like it to be.

We learned in the past that some keyservers garbled the OpenPGP keys. We have some fixes in gpg to remove invalid packets, and those packets might be revocation certificates. It does not happen often, but keyservers employ similar code and we don't know what people could do with a targeted attack on the keyservers and thus eventually on gpg. Right, that would be a DoS - people will in turn not refresh their keys to avoid the trouble of waiting to send out the encrypted mail.

> Could you propose a different mechanism that you feel is superior? I'm
> happy to evaluate alternatives, as i quite like the public keyserver
> network as i understand it (though i'm concerned by your unsettling
> remark above about the ease of corruption of public keyservers).

The data on the keyservers will technically not get corrupted, but a DoS would add so many valid packets to a keyserver that the net effect of such a DoS is the same as corrupted data. Consider a well-known key with a couple of thousand bogus signature packets, and the time it takes for the keyservers - and far worse, for all clients - to filter them out.

> I hope moving this to gnupg-devel is the right place. It may also be
> relevant to ietf-openpgp if you have serious qualms about the utility
> of revocation certificates in general. It may also be of interest to

OpenPGP provides the message format but no operational advice on how to set up a PKI or a revocation service. Thus this is out of scope for the WG. However, we have used the WG list for such discussions in the past because all the OpenPGP people should be on this list, which is not the case for gnupg-devel or sks-devel. The format is not the problem; the reliable service is the problem.

Salam-Shalom,

Werner

ps. Sorry for the late reply, I was a bit too busy with other projects the last weeks.

--
Die Gedanken sind frei.
Auschnahme regelt ein Bundeschgesetz. From wk at gnupg.org Fri Apr 3 16:18:17 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 16:18:17 +0200 Subject: HKP client certificates In-Reply-To: <52B2DC78-BD40-4015-A24A-5260DA432F6C@jabberwocky.com> (David Shaw's message of "Fri, 3 Apr 2009 09:26:05 -0400") References: <49B95F0A.4060309@fifthhorseman.net> <20090312210856.GC1295@jabberwocky.com> <49B97FAF.7080500@fifthhorseman.net> <20090312222036.GE1295@jabberwocky.com> <87tz5x5g1z.fsf@wheatstone.g10code.de> <20090323175651.GA74822@jabberwocky.com> <87d4btdhw7.fsf_-_@wheatstone.g10code.de> <52B2DC78-BD40-4015-A24A-5260DA432F6C@jabberwocky.com> Message-ID: <87ab6xbz2u.fsf@wheatstone.g10code.de> On Fri, 3 Apr 2009 15:26, dshaw at jabberwocky.com said: > I'm referring to the server to server communications, rather than the > client (GPG) to server communications. I.e. the SKS "gossip" protocol Okay, that is a different thing of course. Salam-Shalom, Werner -- Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz. From wk at gnupg.org Fri Apr 3 16:17:18 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 03 Apr 2009 16:17:18 +0200 Subject: gpgsm: Cert trouble GPG_ERR_NO_VALUE for GTE CyberTrust Global Root In-Reply-To: <200904031543.30020.bernhard@intevation.de> (Bernhard Reiter's message of "Fri, 3 Apr 2009 15:43:29 +0200") References: <200904021428.20569.bernhard@intevation.de> <87tz56c4jp.fsf@wheatstone.g10code.de> <200904031543.30020.bernhard@intevation.de> Message-ID: <87eiw9bz4h.fsf@wheatstone.g10code.de> On Fri, 3 Apr 2009 15:43, bernhard at intevation.de said: > consistency checks" this would be very helpful. Then they could look up the > documentation which could potentially read like > The CMS implementation does a number of basic consistency checks > before using a certificate. For 2.0.11 these checks for instance are about > a) certificate length > X years [...] Basically you want a complete description of every error case in GnuPG. 
That is not going to happen given the sheer amount of checks we do.

There is this --audit feature I am working on, and if this particular error occurs more often, it makes sense to add specific support for it.

FWIW, I started to write a list of error code usages in GnuPG et al. This will take quite some time to finish but will eventually provide a useful cross reference.

Shalom-Salam,

Werner

--
Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz.

From gnupg-devel at spodhuis.org Sat Apr 4 01:54:20 2009
From: gnupg-devel at spodhuis.org (Phil Pennock)
Date: Fri, 3 Apr 2009 16:54:20 -0700
Subject: HKP keyservers over SSL
In-Reply-To: <8763hldgt8.fsf@wheatstone.g10code.de>
References: <49B95F0A.4060309@fifthhorseman.net> <20090312210856.GC1295@jabberwocky.com> <49B97FAF.7080500@fifthhorseman.net> <20090312222036.GE1295@jabberwocky.com> <87tz5x5g1z.fsf@wheatstone.g10code.de> <49C97828.2070808@fifthhorseman.net> <8763hldgt8.fsf@wheatstone.g10code.de>
Message-ID: <20090403235419.GA77136@redoubt.spodhuis.org>

On 2009-04-03 at 15:09 +0200, Werner Koch wrote:
> Me too. We have to live with HKP. Look only at how hard it is to
> convince people not to use the old broken pgp keyservers. We have been
> trying for years to abolish the use of them, but without much success.

Meh. Most people I deal with don't even know of the problems with things like pgp.mit.edu. What has actually been tried? Suggestion for gpg:

    WARNING: you are using keyserver "pgp.mit.edu" which is broken
    WARNING: reason: does not accept subkey updates
    WARNING: result: keys appear invalid because correct updates not received

Default list of keyservers to whine about, plus a gpg.conf option to hush up the warning for a particular keyserver -- the goal being to persuade, not coerce. And besides, pgp.mit.edu might someday be updated and stop being a blight, and this would let people use it.

-Phil
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 163 bytes Desc: not available URL:

From gordian.klein at gmx.de Sun Apr 5 18:47:50 2009
From: gordian.klein at gmx.de (Gordian Klein)
Date: Sun, 05 Apr 2009 18:47:50 +0200
Subject: pcsc_connect and PCSC_SHARE_EXCLUSIVE
Message-ID: <49D8E0B6.1060903@gmx.de>

Hello,

I'm trying to use gnupg together with another application that accesses my OpenPGP card. That's not possible because gnupg wants PCSC_SHARE_EXCLUSIVE rights for the card. Why doesn't it use PCSC_SHARE_SHARED to connect to a smartcard?

Regards,
Gordian

From bernhard at intevation.de Mon Apr 6 09:58:14 2009
From: bernhard at intevation.de (Bernhard Reiter)
Date: Mon, 6 Apr 2009 09:58:14 +0200
Subject: gpgsm: Cert trouble GPG_ERR_NO_VALUE for GTE CyberTrust Global Root
In-Reply-To: <87eiw9bz4h.fsf@wheatstone.g10code.de>
References: <200904021428.20569.bernhard@intevation.de> <200904031543.30020.bernhard@intevation.de> <87eiw9bz4h.fsf@wheatstone.g10code.de>
Message-ID: <200904060958.14670.bernhard@intevation.de>

On Friday, 3 April 2009 16:17:18, Werner Koch wrote:
> On Fri, 3 Apr 2009 15:43, bernhard at intevation.de said:
> > consistency checks" this would be very helpful. Then they could look up
> > the documentation which could potentially read like
> >
> >     The CMS implementation does a number of basic consistency checks
> >     before using a certificate. For 2.0.11 these checks for instance are
> >     about a) certificate length > X years
> >     [...]
>
> Basically you want a complete description of every error case in GnuPG.
> That is not going to happen given the sheer amount of checks we do.

First I want a useful overview message that states that the problem actually is within the internal certificate checks (and not with the chain or the CRLs or so). Having an idea of which kind of stuff is checked is a possible further step.
> There is this --audit feature I am working on and if this particular
> error occurs more often, it makes sense to add specific support for it.

Given that I have seen it twice in real support cases, having a "bad" certificate is a common case.

> FWIW, I started to write a list of error code usages in GnuPG et al.
> This will take quite some time to finish but will eventually provide a
> useful cross reference.

Sounds useful!

Bernhard
--
Managing Director - Owner: www.intevation.net (Free Software Company) Germany
Coordinator: fsfeurope.org. Coordinator: www.Kolab-Konsortium.com.
Intevation GmbH, Osnabrück, DE; Amtsgericht Osnabrück, HRB 18998
Geschäftsführer Frank Koormann, Bernhard Reiter, Dr. Jan-Oliver Wagner

From wk at gnupg.org Tue Apr 7 12:19:07 2009
From: wk at gnupg.org (Werner Koch)
Date: Tue, 07 Apr 2009 12:19:07 +0200
Subject: gpgsm: Cert trouble GPG_ERR_NO_VALUE for GTE CyberTrust Global Root
In-Reply-To: <200904060958.14670.bernhard@intevation.de> (Bernhard Reiter's message of "Mon, 6 Apr 2009 09:58:14 +0200")
References: <200904021428.20569.bernhard@intevation.de> <200904031543.30020.bernhard@intevation.de> <87eiw9bz4h.fsf@wheatstone.g10code.de> <200904060958.14670.bernhard@intevation.de>
Message-ID: <874ox0ahr8.fsf@wheatstone.g10code.de>

On Mon, 6 Apr 2009 09:58, bernhard at intevation.de said:
> First I want a useful overview message, that states that the problem actually
> is within the internal certificate checks (and not with the chain or the CRLs
> or so).

Nope, you can't view an X.509 certificate without considering the certificates of the issuer. And no, there is no problem with the certificate check - gpgsm merely rejects it as bogus.

Salam-Shalom,

Werner

--
Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz.
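[Editor's note: the "list of error code usages" Werner mentions is, in effect, a cross-reference from an error code to the places known to raise it. A minimal sketch of such a table, in Python; the `ERROR_USAGE` structure and `explain` helper are hypothetical, but the entries are the GPG_ERR_NO_VALUE usages quoted earlier in this thread.]

```python
# Hypothetical sketch of an error-code cross-reference: each code maps to
# the components (GnuPG, KSBA, ...) and reasons it is known to be raised for.
ERROR_USAGE = {
    "GPG_ERR_NO_VALUE": {
        "GNUPG": ["A timestamp value is expected but there is none."],
        "KSBA": [
            "A timestamp value is expected but there is none.",
            "A certificate is missing a required property.",
            "A CMS object is missing a required property.",
            "Converting a Distinguished Name to an RFC 2253 string failed.",
        ],
    },
}

def explain(code):
    """Return a multi-line, human-readable summary for an error code."""
    usages = ERROR_USAGE.get(code)
    if usages is None:
        return f"{code}: no cross-reference entry"
    lines = [code]
    for component, reasons in sorted(usages.items()):
        for reason in reasons:
            lines.append(f"  {component}: {reason}")
    return "\n".join(lines)

print(explain("GPG_ERR_NO_VALUE"))
```

Such a table would give supporting administrators the "useful overview message" Bernhard asks for, without promising an exhaustive description of every check.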
From wk at gnupg.org Tue Apr 7 12:16:57 2009
From: wk at gnupg.org (Werner Koch)
Date: Tue, 07 Apr 2009 12:16:57 +0200
Subject: pcsc_connect and PCSC_SHARE_EXCLUSIVE
In-Reply-To: <49D8E0B6.1060903@gmx.de> (Gordian Klein's message of "Sun, 05 Apr 2009 18:47:50 +0200")
References: <49D8E0B6.1060903@gmx.de>
Message-ID: <878wmcahuu.fsf@wheatstone.g10code.de>

On Sun, 5 Apr 2009 18:47, gordian.klein at gmx.de said:
> That's not possible because gnupg wants PCSC_SHARE_EXCLUSIVE rights for
> the card.
> Why doesn't it use PCSC_SHARE_SHARED to connect to a smartcard?

Because it requires full control over the card. Think of scdaemon as a replacement for pcscd. If you don't like that, you may use the scdaemon option --card-timeout to release the context if no client is connected to scdaemon.

Shalom-Salam,

Werner

--
Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz.

From rfransix at comcast.net Tue Apr 7 20:16:00 2009
From: rfransix at comcast.net (Richard Francis)
Date: Tue, 7 Apr 2009 13:16:00 -0500
Subject: Libgcrypt compile on aix 4.3.2.0 failing, any help?
In-Reply-To:
Message-ID:

Don't flame me... I'm struggling to get libgcrypt made; any ideas are greatly appreciated. I see fragments of this issue frequently... surely there is a straightforward resolution. Hoping so; thanks in advance.

_____

From: Richard Francis [mailto:rfransix at comcast.net]
Sent: Friday, March 13, 2009 11:15 PM
To: gnupg-devel at gnupg.org
Subject: Libgcrypt compile on aix 4.3.2.0 failing, any help?

Any help on resolving this is greatly appreciated. This is from libgcrypt-1.4.3, and the make for 1.4.4 is below (both fail).

# make make all-recursive Making all in mpi source='mpih-div.c' object='mpih-div.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -Wpointer-arith -c -o mpih-div.lo mpih-div.c gcc -DHAVE_CONFIG_H -I. -I..
-I../src -I../src -I/usr/local/include -g -O2 -Wall -Wpointer-arith -c mpih-div.c -Wp,-MD,.deps/mpih-div.TPlo -DPIC -o .libs /mpih-div.o mpih-div.c: In function `_gcry_mpih_mod_1': mpih-div.c:84: warning: implicit declaration of function `__udiv_w_sdiv' mpih-div.c:98: Invalid `asm' statement: mpih-div.c:98: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:99: Invalid `asm' statement: mpih-div.c:99: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:104: Invalid `asm' statement: mpih-div.c:104: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:105: Invalid `asm' statement: mpih-div.c:105: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:134: Invalid `asm' statement: mpih-div.c:134: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:134: Invalid `asm' statement: mpih-div.c:134: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c: In function `_gcry_mpih_divrem': mpih-div.c:288: Invalid `asm' statement: mpih-div.c:288: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:352: Invalid `asm' statement: mpih-div.c:352: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c: In function `_gcry_mpih_divmod_1': mpih-div.c:445: Invalid `asm' statement: mpih-div.c:445: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:446: Invalid `asm' statement: mpih-div.c:446: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:451: Invalid `asm' statement: mpih-div.c:451: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:452: Invalid `asm' statement: mpih-div.c:452: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:480: Invalid `asm' statement: mpih-div.c:480: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. 
mpih-div.c:480: Invalid `asm' statement: mpih-div.c:480: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. make: 1254-004 The error code from the last command is 1. Stop. make: 1254-004 The error code from the last command is 1. Stop. make: 1254-004 The error code from the last command is 2. Stop. libgcrypt-1.4.4 make: # make make all-recursive Making all in mpi source='mpi-add.c' object='mpi-add.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-add.lo mpi-add.c mkdir .libs gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-add.c -Wp,-MD,.deps/mpi-add.TPlo -DPIC -o .libs/mpi-add.o source='mpi-bit.c' object='mpi-bit.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-bit.lo mpi-bit.c gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-bit.c -Wp,-MD,.deps/mpi-bit.TPlo -DPIC -o .libs/mpi-bit.o source='mpi-cmp.c' object='mpi-cmp.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-cmp.lo mpi-cmp.c gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-cmp.c -Wp,-MD,.deps/mpi-cmp.TPlo -DPIC -o .libs/mpi-cmp.o source='mpi-div.c' object='mpi-div.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-div.lo mpi-div.c gcc -DHAVE_CONFIG_H -I. -I.. 
-I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-div.c -Wp,-MD,.deps/mpi-div.TPlo -DPIC -o .libs/mpi-div.o source='mpi-gcd.c' object='mpi-gcd.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-gcd.lo mpi-gcd.c gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-gcd.c -Wp,-MD,.deps/mpi-gcd.TPlo -DPIC -o .libs/mpi-gcd.o source='mpi-inline.c' object='mpi-inline.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compil e gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-inline.lo mpi-inline.c gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-inline.c -Wp,-MD,.deps/mpi-inline.TPlo -DPIC -o .libs/mpi-inline. o source='mpi-inv.c' object='mpi-inv.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-inv.lo mpi-inv.c gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-inv.c -Wp,-MD,.deps/mpi-inv.TPlo -DPIC -o .libs/mpi-inv.o source='mpi-mul.c' object='mpi-mul.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-mul.lo mpi-mul.c gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-mul.c -Wp,-MD,.deps/mpi-mul.TPlo -DPIC -o .libs/mpi-mul.o source='mpi-mod.c' object='mpi-mod.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-mod.lo mpi-mod.c gcc -DHAVE_CONFIG_H -I. -I.. 
-I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-mod.c -Wp,-MD,.deps/mpi-mod.TPlo -DPIC -o .libs/mpi-mod.o source='mpi-pow.c' object='mpi-pow.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-pow.lo mpi-pow.c gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-pow.c -Wp,-MD,.deps/mpi-pow.TPlo -DPIC -o .libs/mpi-pow.o source='mpi-mpow.c' object='mpi-mpow.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gc c -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-mpow.lo mpi-mpow.c gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-mpow.c -Wp,-MD,.deps/mpi-mpow.TPlo -DPIC -o .libs/mpi-mpow.o source='mpi-scan.c' object='mpi-scan.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gc c -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpi-scan.lo mpi-scan.c gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpi-scan.c -Wp,-MD,.deps/mpi-scan.TPlo -DPIC -o .libs/mpi-scan.o source='mpicoder.c' object='mpicoder.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gc c -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpicoder.lo mpicoder.c gcc -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpicoder.c -Wp,-MD,.deps/mpicoder.TPlo -DPIC -o .libs/mpicoder.o source='mpih-div.c' object='mpih-div.lo' libtool=yes DEPDIR=.deps depmode=gcc /bin/sh ../depcomp /bin/sh ../libtool --tag=CC --mode=compile gc c -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I/usr/local/include -g -O2 -Wall -c -o mpih-div.lo mpih-div.c gcc -DHAVE_CONFIG_H -I. -I.. 
-I../src -I../src -I/usr/local/include -g -O2 -Wall -c mpih-div.c -Wp,-MD,.deps/mpih-div.TPlo -DPIC -o .libs/mpih-div.o mpih-div.c: In function `_gcry_mpih_mod_1': mpih-div.c:84: warning: implicit declaration of function `__udiv_w_sdiv' mpih-div.c:98: Invalid `asm' statement: mpih-div.c:98: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:99: Invalid `asm' statement: mpih-div.c:99: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:104: Invalid `asm' statement: mpih-div.c:104: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:105: Invalid `asm' statement: mpih-div.c:105: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:134: Invalid `asm' statement: mpih-div.c:134: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:134: Invalid `asm' statement: mpih-div.c:134: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c: In function `_gcry_mpih_divrem': mpih-div.c:288: Invalid `asm' statement: mpih-div.c:288: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:352: Invalid `asm' statement: mpih-div.c:352: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c: In function `_gcry_mpih_divmod_1': mpih-div.c:445: Invalid `asm' statement: mpih-div.c:445: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:446: Invalid `asm' statement: mpih-div.c:446: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:451: Invalid `asm' statement: mpih-div.c:451: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:452: Invalid `asm' statement: mpih-div.c:452: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. mpih-div.c:480: Invalid `asm' statement: mpih-div.c:480: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. 
mpih-div.c:480: Invalid `asm' statement: mpih-div.c:480: fixed or forbidden register 64 (mq) was spilled for class MQ_REGS. make: 1254-004 The error code from the last command is 1. Stop. make: 1254-004 The error code from the last command is 1. Stop. make: 1254-004 The error code from the last command is 2. Stop.
-------------- next part --------------
An HTML attachment was scrubbed... URL:

From cswiger at mac.com Tue Apr 7 20:25:30 2009
From: cswiger at mac.com (Chuck Swiger)
Date: Tue, 07 Apr 2009 11:25:30 -0700
Subject: Libgcrypt compile on aix 4.3.2.0 failing, any help?
In-Reply-To:
References:
Message-ID: <1D835DD6-92C7-41A5-85EE-FB8EAD531179@mac.com>

Hi--

On Apr 7, 2009, at 11:16 AM, Richard Francis wrote:
> don't flame me...i'm struggling to libgcrypt made, any ideas are
> greatly appreciated...i see fragments of this issue
> frequently...surely there is a straightforward resolution...hoping
> so. thanks in advance.

A quick search on the error message suggests that passing "--disable-asm" to ./configure might help. It's not clear whether this is an actual problem with the inline assembly code within libgcrypt, or a result of a GCC bug... if the latter, it's also possible that using a newer or different version of the compiler itself might be helpful.

Regards,
--
-Chuck

From rfransix at comcast.net Tue Apr 7 22:03:40 2009
From: rfransix at comcast.net (Richard Francis)
Date: Tue, 7 Apr 2009 15:03:40 -0500
Subject: Libgcrypt compile on aix 4.3.2.0 failing, any help?
In-Reply-To: <1D835DD6-92C7-41A5-85EE-FB8EAD531179@mac.com>
Message-ID:

Thanks for the reply. The --disable-asm option fails to help make complete; the same errors result. I've read a report where the IBM C compiler does not encounter these errors. I will try an earlier version of GCC++/GCC.
-----Original Message-----
From: Chuck Swiger [mailto:cswiger at mac.com]
Sent: Tuesday, April 07, 2009 1:26 PM
To: rfransix at comcast.net
Cc: gnupg-devel at gnupg.org
Subject: Re: Libgcrypt compile on aix 4.3.2.0 failing, any help?

Hi--

On Apr 7, 2009, at 11:16 AM, Richard Francis wrote:
> don't flame me...i'm struggling to libgcrypt made, any ideas are
> greatly appreciated...i see fragments of this issue
> frequently...surely there is a straightforward resolution...hoping so.
> thanks in advance.

A quick search on the error message suggests that passing "--disable-asm" to ./configure might help. It's not clear whether this is an actual problem with the inline assembly code within libgcrypt, or a result of a GCC bug... if the latter, it's also possible that using a newer or different version of the compiler itself might be helpful.

Regards,
--
-Chuck

From brian.kumfer at ggp.com Wed Apr 8 18:08:35 2009
From: brian.kumfer at ggp.com (bkumfer)
Date: Wed, 8 Apr 2009 09:08:35 -0700 (PDT)
Subject: Public key issue out of GnuPG
Message-ID: <22953995.post@talk.nabble.com>

I have implemented GnuPG on a server and created a public and private key. In working with a partner, they were able to use the public key to encrypt a file. However, in working with the banks (the reason for the need), they tell me the key is causing the issue below.

    Wed Apr 1 12:30:11 EDT 2009 Encrypt - complete failure during an encode. : encrypt (3063:key unable to encrypt) : command /usr/bin/pgpce3.sh -e ack.252268.testfile.2482414 252268

Any help would be great.

--
View this message in context: http://www.nabble.com/Public-key-issue-out-of-GnuPG-tp22953995p22953995.html
Sent from the GnuPG - Dev mailing list archive at Nabble.com.
From wk at gnupg.org Thu Apr 16 10:08:34 2009
From: wk at gnupg.org (Werner Koch)
Date: Thu, 16 Apr 2009 10:08:34 +0200
Subject: gpgme_set_passphrase_cb seems to be ignored
In-Reply-To: (Gary Suggett's message of "Sat, 21 Mar 2009 02:24:04 -0500")
References:
Message-ID: <87myahqav1.fsf@wheatstone.g10code.de>

On Sat, 21 Mar 2009 08:24, admin at garyshood.com said:
> When I set a passphrase callback, it doesn't seem to be used, and
> pinentry runs instead.

That's intended. GnuPG 2 requires the gpg-agent for secret key operations and does not bother the application with passphrases etc. There are ways to work around it, by using a custom pinentry script controlled by the envvar PINENTRY_USER_DATA or by using gpg-preset-passphrase.

Shalom-Salam,

Werner

--
Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz.

From wk at gnupg.org Thu Apr 16 10:05:52 2009
From: wk at gnupg.org (Werner Koch)
Date: Thu, 16 Apr 2009 10:05:52 +0200
Subject: gnupg 2.0.10 broke kgpg 3.5.9 ???
In-Reply-To: <200903301501.00961.andreas.huettel@physik.uni-regensburg.de> (Andreas K. Huettel's message of "Mon, 30 Mar 2009 15:00:53 +0200")
References: <200903301501.00961.andreas.huettel@physik.uni-regensburg.de>
Message-ID: <87r5ztqazj.fsf@wheatstone.g10code.de>

Hi,

I approved a couple of pending posts today, thus this old mail. The problem seems to be solved already.

On Mon, 30 Mar 2009 15:00, andreas.huettel at physik.uni-regensburg.de said:
> For details, see http://bugs.gentoo.org/show_bug.cgi?id=263454
>
> Do you have any clue what the problem in detail is, and/or how to fix it?

For reference: The problem was that kgpg did not use --fixed-list-mode, and we made that option the default with 2.0.10. That option was introduced with GnuPG 1.0.5, released on 2001-04-29, and I assumed that all applications were using it by now.

Salam-Shalom,

Werner

--
Die Gedanken sind frei. Auschnahme regelt ein Bundeschgesetz.
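[Editor's note: the kgpg breakage above comes down to parsing `--with-colons` listings. With --fixed-list-mode, the user ID is no longer merged into the `pub` record (it appears on its own `uid:` record) and timestamps are printed as seconds since the epoch. A minimal parsing sketch in Python; the sample lines are illustrative, hand-written in the documented field layout, not captured gpg output.]

```python
# Sketch of parsing `gpg --with-colons --fixed-list-mode --list-keys` output.
# Field layout (per GnuPG's DETAILS file): field 1 record type, 5 key ID,
# 6 creation time (seconds since epoch in fixed-list-mode), 10 user ID.
# Real output escapes ':' inside user IDs as \x3a; ignored here for brevity.
SAMPLE = """\
pub:u:1024:17:ABCDEF0123456789:1106155557:::u:::scESC:
uid:u::::1106155557::AAAA0000::Alice Example <alice@example.org>:
"""

def parse_colon_listing(text):
    """Yield (record_type, fields) for each non-empty line of a listing."""
    for line in text.splitlines():
        if line:
            fields = line.split(":")
            yield fields[0], fields

keys = []
for rtype, fields in parse_colon_listing(SAMPLE):
    if rtype == "pub":
        # In --fixed-list-mode the pub record carries no user ID; the key ID
        # is field 5 and the creation time field 6, as seconds since 1970.
        keys.append({"keyid": fields[4], "created": int(fields[5]), "uids": []})
    elif rtype == "uid" and keys:
        keys[-1]["uids"].append(fields[9])

print(keys[0]["keyid"], keys[0]["uids"][0])
```

A parser written this way keeps working whether or not the primary user ID is merged into the `pub` record, which is exactly the assumption kgpg tripped over.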
From marcus.brinkmann at ruhr-uni-bochum.de Sat Apr 18 14:15:45 2009
From: marcus.brinkmann at ruhr-uni-bochum.de (Marcus Brinkmann)
Date: 18 Apr 2009 14:15:45 +0200
Subject: gpgme_set_passphrase_cb seems to be ignored
In-Reply-To:
References:
Message-ID: <49E9C471.2090608@ruhr-uni-bochum.de>

Gary Suggett wrote:
> When I set a passphrase callback, it doesn't seem to be used, and
> pinentry runs instead.

That's the correct behaviour with gpg2.

> The application compiles fine, and the function return_pass is set up
> correctly, but pinentry runs no matter what I change. Is there a way
> to force this callback?

This is half a lie, but the short answer is no, there isn't. The application is not allowed to insert itself into the trusted path. Think about it: if your program worked with a smart card terminal, you also wouldn't get access to the PIN inquiry.

I suspect that your actual question is how to do unattended decryption/signing. This has many answers depending on circumstances (using no passphrase at all, presetting the password in the agent and caching it there, writing a custom pinentry, disabling pinentry in a custom configuration, etc.).

Thanks,
Marcus

From dshaw at jabberwocky.com Tue Apr 21 04:23:14 2009
From: dshaw at jabberwocky.com (David Shaw)
Date: Mon, 20 Apr 2009 22:23:14 -0400
Subject: keyserver pools using SRV records for HKP and HKPS [Was: Re: hkps port]
In-Reply-To:
References: <87tz57flie.fsf@wheatstone.g10code.de> <87ocvffcpp.fsf@wheatstone.g10code.de> <49D4CF88.3000403@fifthhorseman.net>
Message-ID: <7CD76F69-1FD4-4551-9E74-4702073F91E6@jabberwocky.com>

On Apr 2, 2009, at 6:54 PM, David Shaw wrote:
> On Apr 2, 2009, at 10:45 AM, Daniel Kahn Gillmor wrote:
>
>> On 04/02/2009 08:56 AM, David Shaw wrote:
>>> Ideally, curl would support SRV internally. It can do a better
>>> job than we can do as a wrapper from outside, as it can properly
>>> walk the list of returned servers until one answers.
The best we can do is do a SRV >>> lookup, run the selection algorithm, and then hope that the best >>> choice >>> is actually running. Still, it is better than nothing. If I had >>> more >>> spare time, I'd just write SRV for curl and donate it to them. >> >> I agree that using SRV records is a good idea for HKP and HKPS. And >> David's suggestion here is structurally the right way to go, even if >> we're not there yet. >> >> But i also note that i don't see any keyserver pools publishing their >> pool as SRV records at the moment -- only A records. If we're >> going to >> say that we're making a least-unhappy choice (which is bound to make >> some operators unhappy), and that SRV records will be the mitigating >> factor, we should probably clearly encourage keyserver pool >> operators to >> publish their pool as SRV records directly in addition to A records. >> >> Or are they already doing this, and i'm just querying the wrong way? >> >> dig -t SRV _hkp._tcp.pool.sks-keyservers.net >> dig -t SRV _hkp._tcp.keys.gnupg.net > > Some pools have SRV already and some don't. Try > "_hkp._tcp.subkeys.pgp.net". I just committed the last bit for re-enabling SRV for hkp (which gives us hkps as well). Sorry for the delay. I just started a new job, and it's been a wee bit busy around here. If anyone who is tracking the 1.4 branch from svn wants to give it a whirl (especially for hkps), please do, and let me know how it works for you. Once I get a response or two, I'll integrate it over to the 2.0 branch. David From simon at josefsson.org Thu Apr 23 15:31:01 2009 From: simon at josefsson.org (Simon Josefsson) Date: Thu, 23 Apr 2009 15:31:01 +0200 Subject: rfc5280 chain validation implementation? Message-ID: <87hc0f1oq2.fsf@mocca.josefsson.org> Hi. I think GnuTLS needs a proper RFC 5280 chain validation implementation (see section 6), and it occurred to me that somewhere in the X.509 code in GnuPG2 there may be one. I couldn't find one in libksba. Is there one? 
What do you think about implementing it in libksba? /Simon
From wk at gnupg.org Fri Apr 24 10:02:38 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 24 Apr 2009 10:02:38 +0200 Subject: rfc5280 chain validation implementation? In-Reply-To: <87hc0f1oq2.fsf@mocca.josefsson.org> (Simon Josefsson's message of "Thu, 23 Apr 2009 15:31:01 +0200") References: <87hc0f1oq2.fsf@mocca.josefsson.org> Message-ID: <873abyzdgh.fsf@wheatstone.g10code.de> On Thu, 23 Apr 2009 15:31, simon at josefsson.org said: > Hi. I think GnuTLS needs a proper RFC 5280 chain validation > implementation (see section 6), and it occurred to me that somewhere in > the X.509 code in GnuPG2 there may be one. I couldn't find one in > libksba. The algorithm given there is hard to implement, but other algorithms are allowed as well, as long as they yield the same result. (From memory; I have not looked at 5280.) Libksba has no such code. The reason is that you need too many parameters, and those parameters (e.g. certificates) might not be available right away but are computed in the course of the validation. CRL or OCSP validation also has to be done, policy checks and so on. A lot of stuff, and code to handle this all in a generic way would be more complicated for the caller and the library than a straightforward implementation. At least this is my conclusion from implementing the chain validation two times. It would be possible to do this in another way, e.g. collect all certificates that might be needed for the validation before calling the validation function. That would require building the chain before the validation, but that is not possible, because you don't always know what rule to use without actually validating the chain. Sometimes you need to do trial verification to decide whether a certificate can be used or whether to resort to another method of locating it. It's quite close to a catch-22.
Granted, the algorithm given for PKIX works in a properly set-up PKI - but we are far away from such an environment. > Is there one? There is code in gnupg and a different (and newer) one in dirmngr. "dirmngr-client --validate" shows how to validate a certificate using dirmngr as a validation server. It might be worth thinking about a validation server, because such a server could cache validation results on a per-realm basis (say for TLS, for X.509 mail, for OpenVPN, etc.). Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahme regelt ein Bundesgesetz.
From simon at josefsson.org Fri Apr 24 10:52:31 2009 From: simon at josefsson.org (Simon Josefsson) Date: Fri, 24 Apr 2009 10:52:31 +0200 Subject: rfc5280 chain validation implementation? In-Reply-To: <873abyzdgh.fsf@wheatstone.g10code.de> (Werner Koch's message of "Fri, 24 Apr 2009 10:02:38 +0200") References: <87hc0f1oq2.fsf@mocca.josefsson.org> <873abyzdgh.fsf@wheatstone.g10code.de> Message-ID: <87bpqmph68.fsf@mocca.josefsson.org> Werner Koch writes: > On Thu, 23 Apr 2009 15:31, simon at josefsson.org said: > >> Hi. I think GnuTLS needs a proper RFC 5280 chain validation >> implementation (see section 6), and it occurred to me that somewhere in >> the X.509 code in GnuPG2 there may be one. I couldn't find one in >> libksba. > > The algorithm given there is hard to implement but other algorithms are > allowed as well, as long as they yield the same result. (From memory, I > have not looked at 5280). Right. It is difficult to document any other algorithm and prove that it leads to the same result, though. The RFC 5280 algorithm is well described and seems possible to implement directly. > Libksba has no such code. The reason is that you need too many > parameters and those parameters (e.g. certificates) might not be > available right aways but computed in the course of the validation. CRL > or OCSP validation has also been done, policy checks and so on.
A lot > of stuff and code to handle this all in a generic ways would be more > complicated for the caller and and the library than a straightforward > implementation. At least this is my conclusion from implementing the > chain validation two times. > > It would be possible to do this on another way, e.g. collect all > certificates that might be needed for the validation before calling ten > validation function. That would require to build the chain before the > validation but that is not possible, because you don't always know what > rule to use without actually validating the chain. Sometimes you need to > do trial verification to decide whether a certificate can be used or > whether to resort to another method of locating it. Its quite close to > a catch-22. Right the algorithm given for PKIX work in a properly setup > PKI - but we are far away from such an environment. I think we must separate path validation from path construction. Building a chain using various locally trusted certificates and auxiliary certificates is difficult, but what I need is only path validation. >> Is there one? > > There is code in gnupg and a different (and newer one) in dirmngr. Ah, I know I have seen it somewhere before, thanks for the pointer. > "dirmngr-client --validate" shows how to validate a certificate using > dirmngr as a validation server. It might be worth to think about a > validation server because such a server could cache validation results > on some realm base (say for TLS, for X.509 mail, for OpenVPN etc.). Indeed, I have started to think about separating the path validation from GnuTLS into a separate server. Protocol ideas in: http://redmine.josefsson.org/projects/gnutls/wiki/GnuTLSExternalValidation The best would be if GnuTLS would not have to implement path validation or private key operations in the same process as the TLS implementation.
/Simon
From wk at gnupg.org Fri Apr 24 16:25:15 2009 From: wk at gnupg.org (Werner Koch) Date: Fri, 24 Apr 2009 16:25:15 +0200 Subject: rfc5280 chain validation implementation? In-Reply-To: <87bpqmph68.fsf@mocca.josefsson.org> (Simon Josefsson's message of "Fri, 24 Apr 2009 10:52:31 +0200") References: <87hc0f1oq2.fsf@mocca.josefsson.org> <873abyzdgh.fsf@wheatstone.g10code.de> <87bpqmph68.fsf@mocca.josefsson.org> Message-ID: <87d4b2xh6c.fsf@wheatstone.g10code.de> On Fri, 24 Apr 2009 10:52, simon at josefsson.org said: > Right. It is difficult to document any another algorithm and prove that > it leads to the same result though. The RFC 5280 algorithm is well > described and seems possible to implement directly. Well, I haven't looked at that algorithm, but I assume that it is similar to the one in RFC 3280. Let's see... oh yes, I am not yet up to the latest standards. Anyway, in a world where you can't use AES for S/MIME encryption because the majority of software (Outlook) can't handle it, I am not sure whether always updating to the latest standards is a good idea ;-) > I think we must separate path validation from path construction. > Building a chain using various locally trusted certificates, and > auxilliary certificates, is difficult, but what I need is only path > validation. Well, if you already know the complete chain, the validation is pretty easy (except for the more exotic features). If you don't know, you need to check the certificates anyway while constructing the chain. Look at some of the comments in gnupg/sm/certchain and consider that we had a whole bunch of support requests to handle some real-world PKIs. > Indeed, I have started to think about separating the path validation > from GnuTLS into a separate server.
Protocol ideas in: > > http://redmine.josefsson.org/projects/gnutls/wiki/GnuTLSExternalValidation > > The best would be if GnuTLS would not have to implement path validation > or private key operations in the same process as the TLS implementation. For private key operations you may use gpg-agent directly (or, if you prefer PKCS#11, with the help of Scute). Looking at the Wiki: Private key protocol: The sign operation is actually implemented and used by Scute for its job. (We once had to add that funny SHA1-MD5 thing to get that working.) X.509 chain validation: Regarding the DoS problem: if you already know all the certificates, we could add a mode which won't look up any extra certificates and just do the chain validation. There is still the CRL/OCSP DoS problem, but that is something you can't avoid. If you have the fingerprints of the certificates, they should be used instead of passing along the certificates themselves. Dirmngr will cache them and ask back for a certificate it does not know about. IIRC, there is some infrastructure for your TRUSTED/UNTRUSTED commands already available. I am not sure whether Dirmngr is the right server for this, as it overloads its real purpose. However, it is already available as a system daemon. Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahme regelt ein Bundesgesetz.
From dkg at fifthhorseman.net Wed Apr 29 20:35:27 2009 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Wed, 29 Apr 2009 14:35:27 -0400 Subject: the "pgp" trust model: the relationship between classic ownertrust designation and trust signatures Message-ID: <49F89DEF.6070301@fifthhorseman.net> Hey folks-- I'm a bit confused by the implementation of trust signatures and their relationship to classic ownertrust. With current implementations of gpg and gpg2, it looks like classic ownertrust designation is equivalent to a level 1 local (non-exportable) trust signature. Is this correct?
For some reason, i had been assuming that under the "pgp" trust model, classic ownertrust designation would be equivalent to a level ∞ (infinite) local trust signature. Can you help de-confuse me? Here's the scenario that uncovered this: ----------- Alice has certified Bob's key/uid, and grants Bob full ownertrust directly (not via a trust signature). Bob has certified Carol's key/uid with a level 1 trust signature, full trust (value 120) Carol has certified David's key/uid. ----------- (the attached tarball contains 4 GNUPGHOMEs that I believe are described by the above) Both gpg and gpg2 seem to claim undefined uid validity for David from Alice's perspective: > 0 wt215 at pip:~/trustsigs$ GNUPGHOME=alice gpg --trust-model pgp --check-sigs --list-options show-uid-validity > alice/pubring.gpg > ----------------- > pub 1024R/CC10A80E 2009-04-29 [expires: 2009-05-06] > uid [ultimate] alice (DO NOT USE -- TESTING) > sig!3 CC10A80E 2009-04-29 alice (DO NOT USE -- TESTING) > > pub 1024R/23CF8068 2009-04-29 [expires: 2009-05-06] > uid [ full ] bob (DO NOT USE -- TESTING) > sig!3 23CF8068 2009-04-29 bob (DO NOT USE -- TESTING) > sig! CC10A80E 2009-04-29 alice (DO NOT USE -- TESTING) > > pub 1024R/F4A06F8A 2009-04-29 [expires: 2009-05-06] > uid [ full ] carol (DO NOT USE -- TESTING) > sig!3 F4A06F8A 2009-04-29 carol (DO NOT USE -- TESTING) > sig! 1 23CF8068 2009-04-29 bob (DO NOT USE -- TESTING) > > pub 1024R/8B4A0C91 2009-04-29 [expires: 2009-05-06] > uid [ undef ] david (DO NOT USE -- TESTING) > sig!3 8B4A0C91 2009-04-29 david (DO NOT USE -- TESTING) > sig! F4A06F8A 2009-04-29 carol (DO NOT USE -- TESTING) > > 0 wt215 at pip:~/trustsigs$ My old reasoning was: shouldn't Alice see David as "full" instead of "undef", since she fully trusts Bob's certifications, and Bob says that Carol is a trusted introducer? Is there an obvious (or not-so-obvious) rejoinder to that question that makes sense given the current implementation? Thanks for helping me understand this better.
--dkg [0] http://tools.ietf.org/html/rfc4880#section-5.2.3.13 -------------- next part -------------- A non-text attachment was scrubbed... Name: trustsigs.tgz Type: application/x-gtar Size: 6853 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 890 bytes Desc: OpenPGP digital signature URL: From dkg at fifthhorseman.net Wed Apr 29 21:03:11 2009 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Wed, 29 Apr 2009 15:03:11 -0400 Subject: more on classic ownertrust and tsig interactions Message-ID: <49F8A46F.6070102@fifthhorseman.net> OK, as i'm playing around with this more, i've found what seems to be disturbing behavior by gnupg's "pgp" trust model in the intersection between classic ownertrust and trust signatures. If i use classic ownertrust designation to explicitly say "I DO NOT Trust" a given key, but then a chain of trust signatures suggests that i should trust the key, gpg and gpg2 both appear to honor the trust signature instead of my explicitly-stated preference. That is: * Alice trusts Bob with a level 2 tsig * Alice explicitly tells gnupg "I Do Not Trust" certifications made by Carol via a classic ownertrust designation. * Bob trusts Carol with a level 1 tsig * Carol certifies David's key/uid. It seems to me that Alice should *not* have any positive calculated validity on David's key/uid based only on Carol's signature, because she told her client explicitly "i do not trust this keyholder". Instead, gpg ignores her wishes, considers Bob's tsig valid, and calculates full validity on David's key/uid. In the case of a conflict between trust signatures and explicitly-set classical ownertrust, i'd expect GnuPG with the "pgp" trust model to either: a) honor the classical ownertrust over the tsigs, or b) choose the most conservative (least trusting) interpretation. Is there a reason that it should do something else? 
Does anyone else consider this a bug? --dkg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 890 bytes Desc: OpenPGP digital signature URL: From dshaw at jabberwocky.com Wed Apr 29 22:36:07 2009 From: dshaw at jabberwocky.com (David Shaw) Date: Wed, 29 Apr 2009 16:36:07 -0400 Subject: the "pgp" trust model: the relationship between classic ownertrust designation and trust signatures In-Reply-To: <49F89DEF.6070301@fifthhorseman.net> References: <49F89DEF.6070301@fifthhorseman.net> Message-ID: <4D9B3A31-1D65-4260-B3F1-6746155EDDD8@jabberwocky.com> On Apr 29, 2009, at 2:35 PM, Daniel Kahn Gillmor wrote: > I'm a bit confused by the implementation of trust signatures and their > relationship to classic ownertrust. > > With current implementations of gpg and gpg2, it looks like classic > ownertrust designation is equivalent to a level 1 local (non- > exportable) > trust signature. Is this correct? > > For some reason, i had been assuming that under the "pgp" trust model, > classic ownertrust designation would be equivalent to a level ? > local > trust signature. A classic signature is exactly equivalent to a level 0 trust signature. It means (more or less) "I assert this user ID matches the right human". A classic signature + setting ownertrust (i.e. "I assert this user ID matches the right human and I also trust them to make good key signatures"), as you've noted, is effectively a level 1 non-exportable signature, which makes the same statement in a slightly different way. Classic ownertrust cannot be an infinite level of validity - that would imply that if Alice (owner)trusts Baker, she would find that Zenobia (many hops away from Baker) is valid even though she has no ownertrust set for Charlie, David, Egbert, Frankie, George, etc, etc. > Alice has certified Bob's key/uid, and grants Bob full ownertrust > directly (not via a trust signature). 
> > Bob has certified Carol's key/uid with a level 1 trust signature, full > trust (value 120) > > Carol has certified David's key/uid. > ----------- > > (the attached tarball contains 4 GNUPGHOMEs that I believe are > described > by the above) > > Both gpg and gpg2 seem to claim undefined uid validity for David from > Alice's perspective: > >> 0 wt215 at pip:~/trustsigs$ GNUPGHOME=alice gpg --trust-model pgp -- >> check-sigs --list-options show-uid-validity >> alice/pubring.gpg >> ----------------- >> pub 1024R/CC10A80E 2009-04-29 [expires: 2009-05-06] >> uid [ultimate] alice (DO NOT USE -- TESTING) >> sig!3 CC10A80E 2009-04-29 alice (DO NOT USE -- TESTING) >> >> pub 1024R/23CF8068 2009-04-29 [expires: 2009-05-06] >> uid [ full ] bob (DO NOT USE -- TESTING) >> sig!3 23CF8068 2009-04-29 bob (DO NOT USE -- TESTING) >> sig! CC10A80E 2009-04-29 alice (DO NOT USE -- TESTING) >> >> pub 1024R/F4A06F8A 2009-04-29 [expires: 2009-05-06] >> uid [ full ] carol (DO NOT USE -- TESTING) >> sig!3 F4A06F8A 2009-04-29 carol (DO NOT USE -- TESTING) >> sig! 1 23CF8068 2009-04-29 bob (DO NOT USE -- TESTING) >> >> pub 1024R/8B4A0C91 2009-04-29 [expires: 2009-05-06] >> uid [ undef ] david (DO NOT USE -- TESTING) >> sig!3 8B4A0C91 2009-04-29 david (DO NOT USE -- TESTING) >> sig! F4A06F8A 2009-04-29 carol (DO NOT USE -- TESTING) >> >> 0 wt215 at pip:~/trustsigs$ > > My old reasoning was: shouldn't Alice see David as "full" instead of > "undef", since she fully trusts Bob's certifications, and Bob says > that > Carol is a trusted introducer? Carol is a trusted introducer to Bob, not to Alice. If Alice signed Bob with a level 2 or above signature (thus granting Bob meta-introducer status, i.e. an introducer of trusted introducers), then David should be fully valid to Alice.
David From dkg at fifthhorseman.net Wed Apr 29 22:58:27 2009 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Wed, 29 Apr 2009 16:58:27 -0400 Subject: the "pgp" trust model: the relationship between classic ownertrust designation and trust signatures In-Reply-To: <4D9B3A31-1D65-4260-B3F1-6746155EDDD8@jabberwocky.com> References: <49F89DEF.6070301@fifthhorseman.net> <4D9B3A31-1D65-4260-B3F1-6746155EDDD8@jabberwocky.com> Message-ID: <49F8BF73.6060604@fifthhorseman.net> On 04/29/2009 04:36 PM, David Shaw wrote: > A classic signature is exactly equivalent to a level 0 trust signature. > It means (more or less) "I assert this user ID matches the right > human". A classic signature + setting ownertrust (i.e. "I assert this > user ID matches the right human and I also trust them to make good key > signatures"), as you've noted, is effectively a level 1 non-exportable > signature, which makes the same statement in a slightly different way. Thanks, this is very clear! > Classic ownertrust cannot be an infinite level of validity - that would > imply that if Alice (owner)trusts Baker, she would find that Zenobia > (many hops away from Baker) is valid even though she has no ownertrust > set for Charlie, David, Egbert, Frankie, George, etc, etc. This would only be true if Bob himself *published* an infinite trust signature on Charlie, etc. My original thought was that it was equivalent to an infinite *local* tsig, though, which wouldn't result in the same chain. Also, wouldn't --max-cert-depth (default: 5) kick in before we reach Zenobia? Does --max-cert-depth have any meaning outside of the "pgp" trust model for gpg? If not, why do we need it as an explicitly separate value, since each trust signature made by the ultimately-trusted key would imply a more-specific cert-depth limit anyway? > Carol is a trusted introducer to Bob, not to Alice. If Alice signed Bob > with a level 2 or above signature (thus granting Bob meta-introducer > status (i.e. 
an introducer of trusted introducers) then David should be > fully valid to Alice. Gotcha. That does work. It's gradually making more sense. --dkg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 890 bytes Desc: OpenPGP digital signature URL: From dshaw at jabberwocky.com Wed Apr 29 23:21:07 2009 From: dshaw at jabberwocky.com (David Shaw) Date: Wed, 29 Apr 2009 17:21:07 -0400 Subject: the "pgp" trust model: the relationship between classic ownertrust designation and trust signatures In-Reply-To: <49F8BF73.6060604@fifthhorseman.net> References: <49F89DEF.6070301@fifthhorseman.net> <4D9B3A31-1D65-4260-B3F1-6746155EDDD8@jabberwocky.com> <49F8BF73.6060604@fifthhorseman.net> Message-ID: <80D64330-0B74-4373-95B0-12D7BD6462EA@jabberwocky.com> On Apr 29, 2009, at 4:58 PM, Daniel Kahn Gillmor wrote: > On 04/29/2009 04:36 PM, David Shaw wrote: >> >> Classic ownertrust cannot be an infinite level of validity - that >> would >> imply that if Alice (owner)trusts Baker, she would find that Zenobia >> (many hops away from Baker) is valid even though she has no >> ownertrust >> set for Charlie, David, Egbert, Frankie, George, etc, etc. > > This would only be true if Bob himself *published* an infinite trust > signature on Charlie, etc. My original thought was that it was > equivalent to an infinite *local* tsig, though, which wouldn't > result in > the same chain. I see what you were going for now. An (implied) infinite trust signature from Alice on Baker would be a fairly dangerous thing. It gives Baker vastly more power than he would have in the classic trust model. In classic, he could just sign one level down from himself. In pgp, he could make introducers of introducers of introducers, down to whatever level he wanted. For safety, it's best to require Alice to explicitly grant that kind of power. > Does --max-cert-depth have any meaning outside of the "pgp" trust > model > for gpg? 
If not, why do we need it as an explicitly separate value, > since each trust signature made by the ultimately-trusted key would > imply a more-specific cert-depth limit anyway? --max-cert-depth is used in both the classic and pgp trust models. In both cases it just puts a cap on the calculations, using the idea that the further you get away from the ultimately trusted key, the less likely you are to trust that signer. You are right that a "pure" trust model does imply a --max-cert-depth of infinity. It's just that we don't live in a pure world. David From dkg at fifthhorseman.net Wed Apr 29 23:29:18 2009 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Wed, 29 Apr 2009 17:29:18 -0400 Subject: the "pgp" trust model: the relationship between classic ownertrust designation and trust signatures In-Reply-To: <80D64330-0B74-4373-95B0-12D7BD6462EA@jabberwocky.com> References: <49F89DEF.6070301@fifthhorseman.net> <4D9B3A31-1D65-4260-B3F1-6746155EDDD8@jabberwocky.com> <49F8BF73.6060604@fifthhorseman.net> <80D64330-0B74-4373-95B0-12D7BD6462EA@jabberwocky.com> Message-ID: <49F8C6AE.1030205@fifthhorseman.net> On 04/29/2009 05:21 PM, David Shaw wrote: > An (implied) infinite trust > signature from Alice on Baker would be a fairly dangerous thing. It > gives Baker vastly more power than he would have in the classic trust > model. In classic, he could just sign one level down from himself. In > pgp, he could make introducers of introducers of introducers, down to > whatever level he wanted. For safety, it's best to require Alice to > explicitly grant that kind of power. This reasoning makes a lot of sense, and i'm glad that gnupg implements it this way now that it's been explained to me. :P >> Does --max-cert-depth have any meaning outside of the "pgp" trust model >> for gpg? If not, why do we need it as an explicitly separate value, >> since each trust signature made by the ultimately-trusted key would >> imply a more-specific cert-depth limit anyway? 
> > --max-cert-depth is used in both the classic and pgp trust models. How does max-cert-depth work in the classic trust model? I'm afraid i don't understand how a chain of length > 1 can exist in that model. What am i missing? > You are right that a "pure" trust > model does imply a --max-cert-depth of infinity. It's just that we > don't live in a pure world. Should there be warnings, then, when issuing a trust-sig with a level greater than max-cert-depth? Or should you need to have --expert enabled to do so? There's no current indication that creating such a signature won't have the intended effect. --dkg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 890 bytes Desc: OpenPGP digital signature URL:
From andreas.huettel at physik.uni-regensburg.de Thu Apr 16 10:48:01 2009 From: andreas.huettel at physik.uni-regensburg.de (Andreas K. Huettel) Date: Thu, 16 Apr 2009 08:48:01 -0000 Subject: gnupg 2.0.10 broke kgpg 3.5.9 ??? In-Reply-To: <87r5ztqazj.fsf@wheatstone.g10code.de> References: <200903301501.00961.andreas.huettel@physik.uni-regensburg.de> <87r5ztqazj.fsf@wheatstone.g10code.de> Message-ID: <200904161047.37879.andreas.huettel@physik.uni-regensburg.de> Hi Werner, yes, that is correct. The corresponding patch went into KDE HEAD, and will hopefully be backported to 3.5.(9,10) soon, too. For details see https://bugs.kde.org/show_bug.cgi?id=188473 So, problem solved. Best, Andreas On Thursday 16 April 2009 10:05:52 Werner Koch wrote: > Hi, > > I approved a couple of pending posts today, thus this old mail. The > problem seems to be solved already. > > On Mon, 30 Mar 2009 15:00, andreas.huettel at physik.uni-regensburg.de > said: > > > For details, see http://bugs.gentoo.org/show_bug.cgi?id=263454 > > > > Do you have any clue what the problem in detail is, and/or how to fix it?
> > For reference: > > The problem was that kgpg did not used --fixed-list-mode and we made > that option the default with 2.0.10. That option was introduced with > GnuPG 1.0.5, released on 2001-04-29 and I assumed that all applications > are using it by now. > > > Salam-Shalom, > > Werner > -- Dr. Andreas K. Huettel Institute for Experimental and Applied Physics University of Regensburg D-93040 Regensburg Germany tel. +49 151 241 67748 (mobile) e-mail mail at akhuettel.de http://www.akhuettel.de/research/ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL:
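[The introducer chains discussed in the trust-signature thread above can be sketched as a toy calculation. This is a deliberate simplification, not GnuPG's actual validity algorithm — it ignores trust amounts, marginal counts, and --max-cert-depth — but it illustrates how a tsig level bounds how far delegation reaches:]

```python
# Toy model (not GnuPG's real algorithm) of how trust-signature levels
# bound introducer chains in the "pgp" trust model.
def reachable_valid(tsigs, root):
    """tsigs maps signer -> list of (signee, tsig level); level 0 is a
    plain certification. Returns the set of keys that become valid when
    the root's signatures are honoured and each level-n tsig lets the
    signee act as an introducer for n-1 further levels."""
    valid = set()
    frontier = [(root, float("inf"))]  # (key, remaining introducer depth)
    while frontier:
        signer, depth = frontier.pop()
        if depth < 0:          # this key may not certify on our behalf
            continue
        for signee, level in tsigs.get(signer, []):
            valid.add(signee)
            # The signee may delegate further only as far as both the
            # signer's remaining depth and the granted level allow.
            frontier.append((signee, min(depth - 1, level - 1)))
    return valid

# Alice tsigns Bob at level 2 (meta-introducer), Bob tsigns Carol at
# level 1 (introducer), Carol plainly certifies David:
tsigs = {"alice": [("bob", 2)], "bob": [("carol", 1)], "carol": [("david", 0)]}
print(sorted(reachable_valid(tsigs, "alice")))   # → ['bob', 'carol', 'david']
```

[With Alice's tsig on Bob at level 1 instead — the rough equivalent of classic ownertrust — the same function yields only ['bob', 'carol'], matching the "undef" David in the first test scenario of the thread.]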