Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

erythrocyte firasmr786 at
Fri Mar 12 08:36:46 CET 2010

On 3/12/2010 10:54 AM, Doug Barton wrote:
> "Secure" in this context is a relative term. (Note, I'm a long time user
> of pidgin+OTR and a longer-time user of PGP, so I'm actually familiar
> with what you're proposing.) If you know the person you're IM'ing well
> enough, you can do a pretty good job of validating their OTR
> fingerprint. But how "secure" that is depends on your threat model. Are
> you going to be encrypting sensitive financial data? Fruit cake recipes?
> Blueprints for nuclear weapons? Is the security of your communication
> something that you're wagering your life (or the lives of others) on?

Hmmm... if I understand correctly, once the OTR session is fully
verified/authenticated, it doesn't matter what content you transmit. It
could be any of the above: fruit cake recipes, financial data, et al.

> Is your communication of high enough value that your associate could have a
> gun to their head held by someone who is forcing them to answer your OTR
> questions truthfully? (Remember, you can't see them, or hear stress in
> their voice, you can only see what they type.) Have you and your
> associate pre-established a code question to handle the gun-to-the-head
> scenario?
> Hopefully that's enough questions to illustrate the point. :)

I don't think OTR technology can claim to solve the gun-to-the-head
scenario. And although it gives users the benefit of perfect forward
secrecy and repudiation (deniability), I think such properties matter
little in a court of law. People get convicted, wrongly or rightly, on
the basis of spoofed emails and plain-text emails all the time.

I think the same goes for GPG-based email encryption. GPG encryption
doesn't protect the end-points; it only protects the channel between
them. The more end-points there are, the more vulnerable such encrypted
emails become.

The only scenario I see that minimizes end-point vulnerability is
encrypting data to oneself: one end-point, one source of potential
compromise. Even that is susceptible to a rubber-hose attack. In some
countries, people are required to decrypt data when asked by law
enforcement, and refusal to comply means jail time.
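As a sketch of the encrypt-to-oneself case: symmetric encryption needs no
keypair at all, just a passphrase kept in your head. (The batch/loopback
options below are only there so the example runs non-interactively, and
this assumes GnuPG 2.x; in normal use gpg simply prompts for the
passphrase.)

```shell
#!/bin/sh
tmp=$(mktemp -d)
printf 'fruit cake recipe' > "$tmp/notes.txt"

# Symmetric encryption: one end-point, one secret. In real use gpg
# prompts for the passphrase instead of taking it on the command line.
gpg --batch --yes --pinentry-mode loopback --passphrase demo \
    --symmetric --cipher-algo AES256 \
    -o "$tmp/notes.txt.gpg" "$tmp/notes.txt"

# Only someone holding the passphrase recovers the plaintext:
gpg --batch --quiet --pinentry-mode loopback --passphrase demo \
    --decrypt "$tmp/notes.txt.gpg"
```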

Bottom line, IMHO: you can't let out your inner demons just because
encryption technology exists. That isn't what it was built for, AFAIK.

The safest possible place for data to reside in is within the confines
of one's own brain.

So I envision myself using OTR-based IM and GPG-based email encryption
only with a prior understanding of these deficiencies. If I'm confident
that the end-points are secure during an authenticated OTR IM session,
can I then use that session to exchange and cross-check the fingerprint
of my friend's GPG public key, which I've downloaded from a keyserver
for email-encryption purposes?
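For what it's worth, the mechanical part of such a cross-check is simple:
read the fingerprint off `gpg --fingerprint` locally, have the friend
paste theirs into the authenticated OTR session, and compare after
normalizing whitespace and case. A minimal sketch (the fingerprint
values are made up for illustration):

```shell
#!/bin/sh
# Hypothetical fingerprints: "mine" as printed locally by
# `gpg --fingerprint`, "theirs" as pasted by a friend over OTR.
mine='1A2B 3C4D 5E6F 7A8B 9C0D  1A2B 3C4D 5E6F 7A8B 9C0D'
theirs='1a2b3c4d5e6f7a8b9c0d1a2b3c4d5e6f7a8b9c0d'

# Normalize (drop spaces, uppercase) before comparing, since gpg
# prints fingerprints in groups of four hex digits.
norm() { printf '%s' "$1" | tr -d ' ' | tr 'a-f' 'A-F'; }

if [ "$(norm "$mine")" = "$(norm "$theirs")" ]; then
    echo 'Fingerprints MATCH'
else
    echo 'Fingerprints DO NOT MATCH'
fi
```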

PS: Despite the much-hyped security behind SSL-based websites such as
online banking, if you care to look around you'll soon realize that even
that isn't as bullet-proof as one would like to think. There have been
instances where unscrupulous people have obtained digitally signed
certificates from TTPs/CAs (reputable ones, I might add) for businesses
that don't exist. And consider companies like Thawte, which besides
their traditional for-profit CA business also issue individual users
free SSL certificates using only email-based authentication: a layperson
who doesn't recognize the different kinds of Thawte certificates could
well trust that a given bank website is genuine when in fact it's a
fraud.
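To see concretely why the issuer matters, here is a sketch using openssl
(the bank-like hostname is invented): a throwaway self-signed
certificate can carry any subject name it likes, and the giveaway is
that the issuer is the certificate itself rather than a trusted CA.

```shell
#!/bin/sh
tmp=$(mktemp -d)

# Generate a throwaway self-signed certificate with a plausible-looking
# bank hostname (invented for illustration).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj '/CN=www.example-bank.test' \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null

# The subject looks fine, but subject and issuer are identical --
# nobody independent vouches for this certificate.
openssl x509 -noout -subject -issuer -in "$tmp/cert.pem"
```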

All in all, encryption isn't the panacea that we'd like it to be. At
least not yet. There are multiple attack vectors that crop up all the
time - from social engineering to mathematical/technological.


More information about the Gnupg-users mailing list