Using the OTR plugin with Pidgin for verifying GPG public key fingerprints
firasmr786 at gmail.com
Fri Mar 12 13:46:28 CET 2010
On 3/12/2010 5:33 PM, Robert J. Hansen wrote:
>> I don't think OTR technology can claim to solve the gun-to-the-head
>> scenario. Although it claims to give users the benefit of
>> perfect-forward-secrecy and repudiation, I think such things matter
>> little in a court of law. People get convicted either wrongly or
>> rightly, based on spoofed emails and plain-text emails all the time.
> Sources, please: I'd like to see citations for "people get convicted ... based on spoofed emails and plain-text emails all the time." Based on plain-text emails, sure. Spoofed emails, though, that's a bit of a stretch and I'm going to need to see cites.
Umm, I'm not an expert or anything, but I think it really depends on where
you live. If you belong to a minority susceptible to persecution by a
state agency, then yes, sure, there are many records of wrongful
detention and arbitrary human-rights abuses based on false pretenses,
even when those pretenses are 'cooked up' or 'spoofed'. Amnesty
International and Human Rights Watch are worth checking out for
information on this. It's difficult to achieve immunity from a rogue
state agency, and technology and encryption aren't the way to stop these
kinds of things. In some cases they might help, but only up to a point. I
think Bruce Schneier is right when he says that what are needed instead
are laws and legal mechanisms for the protection of human rights and
civil liberties. See Schneier's talk, "Future of Privacy":
> Either way, this kind of raises the question, "so why do you want to use OTR, anyway?" If the entire point of OTR is PFS/R, and you don't believe OTR can solve PFS/R, then why use OTR?
Interesting question. I think OTR is a great theoretical concept; I just
ignore the PFS/R part when using it. PFS/R might be effective if you're
potentially up against a rogue employer, etc., but it has its
limitations when it comes to dealing with agencies of the State, IMHO.
The same goes for plausible deniability (PD), etc. This is what Bruce
Schneier noted in a 2008 article on PD, BTW:
"So we cannot break the deniability feature in TrueCrypt 6.0. But,
honestly, I wouldn't trust it."
If you really think about it, looking at people who've been convicted
and/or framed on the basis of plain-text, unsigned email shows that
there's little point in inventing a technology that specifically
provides PD from a cryptographic perspective: unsigned email is already
plausibly deniable, yet juries and courts regularly convict people
despite their best efforts to claim innocence. Who's to say what a
regular jury would make of such things? The fact is that we're living in
an era when the vast majority of people use technology without really
understanding the nuances that underlie it.
Second, even with PD encryption technologies such as TrueCrypt, it's
easy to look at the problem from a law enforcement officer's
perspective: compel the individual to answer a question he has to lie
to, compel him to take a polygraph on that statement, and then convict
him based on the polygraph. Add rubber-hose attack techniques to the mix
and it could get even worse.
>> So I envision myself using OTR-based-IM and GPG-based-email-encryption
>> only with a prior understanding of these deficiencies. If I'm confident
>> enough that the end-points are secure during an OTR-IM session that has
>> then been authenticated, can I use such an IM session to exchange and
>> crosscheck my friend's GPG public key fingerprint that I've downloaded
>> from a keyserver for email encryption purposes?
> The question isn't whether you can. The question is whether it's wise. The principle of using one credential to authorize the use of another credential is about as old as the hills. The ways to exploit this are about as old as the hills, too. I'm out the door for work in a few minutes so I can't spend the 20m looking up a definitive cite, but I'd suggest looking in Ross Anderson's _Security Engineering_. It's pretty comprehensive; it's where I'd start looking.
Thanks :-). I'll try and take a look.
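For what it's worth, the fingerprint cross-check I described could look
something like this in practice (a sketch; the keyserver and the key ID
0xDEADBEEF are placeholders, not real values):

```shell
# Fetch the friend's public key from a keyserver (key ID is a placeholder).
gpg --keyserver keyserver.ubuntu.com --recv-keys 0xDEADBEEF

# Print the key's full fingerprint. Read it aloud (or paste it) over the
# already-authenticated OTR session and compare character by character
# with what the friend sees on his end.
gpg --fingerprint 0xDEADBEEF
```

Only if the two fingerprints match exactly would I then sign or trust
the key locally.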
PS: On the SSL pitfalls I mentioned in my earlier email, a couple of
additional points are worth understanding. All it takes is for one CA to
be compromised (by a rogue element within the CA, perhaps) and the
entire system comes crumbling down. Also, a typical browser such as
Firefox ships with almost 200 root certificates from various CAs, each
of which adds a certain amount of risk that really should be made
transparent to end users, IMHO. Some belong to well-known CAs, while
others belong to less reputable ones. Plus, some CAs still use outdated
hash algorithms to sign certificates, which has in some cases allowed
people to generate fake certificates and spoof well-known websites. I
learned about this last point from a Security Now episode.
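That last point is easy to check for yourself with openssl (a sketch;
example.com stands in for whatever site you want to inspect):

```shell
# Fetch the server's certificate and show which hash algorithm signed it.
# Certificates signed with MD5 (and, increasingly, SHA-1) are the risky
# ones that have enabled forged certificates.
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -text \
  | grep 'Signature Algorithm'
```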
BTW, Schneier did a nice interview discussing some SSL pitfalls here: