the TOFU lie - or why I want my meat...

Brian Minton brian at
Fri Apr 10 01:41:26 CEST 2015


IMHO here is one advantage of TOFU:  Imagine that at any given time, a
third party is intercepting (and potentially modifying) your
communications. In many cases they may not control all paths between you
and the keyserver network. Therefore you may receive the legitimate key as
well as a fake one, causing an alarm (or at least extra caution). This
doesn't help if the adversary controls all traffic between your computer
and the outside world, but in some cases you could go to a cyber cafe, etc.
to get a second path to the keyservers.

For example, when using ssh, which uses TOFU, I often connect to a host
from multiple ISPs to make sure the host key fingerprint matches.
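That cross-path comparison is trivial to sketch. A hedged Python example (how the fingerprints are actually gathered, e.g. via ssh-keyscan run from machines on different ISPs, is assumed and not shown):

```python
# Sketch of the multi-path idea: collect the host key fingerprint as
# seen from several independent network paths and alarm on any
# disagreement. The gathering step (different ISPs, a cyber cafe, ...)
# is an assumption of this sketch.

def fingerprints_agree(fingerprints):
    """True iff every path reported the same fingerprint."""
    # A MitM sitting on only some of the paths shows up here as a
    # second distinct fingerprint value.
    return len(set(fingerprints)) == 1

# The same key seen from two ISPs and a cyber cafe: no alarm.
assert fingerprints_agree(["SHA256:abc", "SHA256:abc", "SHA256:abc"])
# One path reports a different key: alarm (or at least extra caution).
assert not fingerprints_agree(["SHA256:abc", "SHA256:xyz"])
```

This only helps, as noted above, when the adversary does not control every path.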

I agree with the idea of using WoT in conjunction with TOFU, not as a replacement.


On Thu, Apr 9, 2015, 5:53 PM Christoph Anton Mitterer <calestyo at> wrote:

> Hey.
> On Thu, 2015-04-09 at 17:16 +0200, Neal H. Walfield wrote:
> > I apologize for the late reply.  Your arguments are well stated and I
> > needed some time to digest them and to do some reading and thinking of
> > my own.
> No worries :)
> > I think it is fair to summarize your post as follows: TOFU is
> > significantly weaker than the Web of Trust and adoption of TOFU will
> > weaken the WoT.
> Not quite. Actually I didn't write so much about why the WoT might be
> good, but more about why TOFU is evil (the latter doesn't make the
> former necessarily better).
> > Although you provide a number of arguments that
> > support your claim that TOFU is weak, you didn't provide any arguments
> > that the WoT is significantly stronger.  I think that this is where
> > your argument breaks down.
> I don't think so; actually, when you compare TOFU with the WoT you
> compare apples and oranges.
> The WoT is a trust model, like the strict hierarchical model employed
> with X.509.
> TOFU is not really a trust model, IMHO, since it rests on the wrong
> assumption that it would give you any real trust in the identity.
> It's basically a decision that the first time someone gives you an
> answer to some question, you believe that very answer and consider any
> other possible future answer to be wrong.
> So if you ask me for the first time in your life: "Are apples a fruit?"
> and I tell you "no, they're a species of monkeys", you'd believe and
> trust that.
> (No one who's sane would do that in real life, IMHO, but anyway...)
> So if you really want to call TOFU a trust model, then I'd call it
> "believe anything"; again, nothing I would want to do when it comes
> to security.
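The pin-on-first-sight behaviour being criticised here fits in a few lines of Python. The names are purely illustrative (not any real GnuPG or keyring API), but they show why "believe anything (once)" is an apt description: the first key is accepted with no verification at all, and only later changes are even noticed.

```python
# Minimal sketch of TOFU pinning. The peer and fingerprint names are
# made up for illustration.

pins = {}  # peer identity -> pinned key fingerprint

def tofu_check(peer, fingerprint):
    if peer not in pins:
        pins[peer] = fingerprint   # first use: trusted blindly, no check
        return "pinned"
    if pins[peer] == fingerprint:
        return "match"             # consistent with what was seen before
    return "CONFLICT"              # key changed: the only alarm TOFU has

assert tofu_check("alice@example.org", "AAAA") == "pinned"
assert tofu_check("alice@example.org", "AAAA") == "match"
assert tofu_check("alice@example.org", "BBBB") == "CONFLICT"
```

Note that the "pinned" branch is exactly the step where a MitM who gets there first wins permanently.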
> Apart from that, I didn't say that the WoT is perfect... actually it's
> not.
> The only thing that's really perfect is direct mutual authentication
> with your peers (which is actually one of the big usage scenarios for
> OpenPGP).
> But even so, the WoT is much better than the strict hierarchical model
> from X.509 or the "trust model" of TOFU (i.e. trust anything on first
> use), because the user still has full control:
> he decides which direct peers he trusts and "how much", how many
> levels an indirect trust path may span, and how many different paths
> to an indirect peer are required.
> And usually people know which of their direct peers they can trust (not
> to be evil + to do the identification process correctly). Of course this
> isn't a 100% guarantee (my friends could be NSA spies).
> But it's still way better than what the other two give you:
> - X.509
>   The CA is in full control; whatever it says, you trust.
> - TOFU
>   No control at all... or maybe control by good luck, or by your
>   network provider, or by whoever is first in line to be reached by
>   the target you try to contact.
>   I.e. it's IMHO even worse than the CA model, because you have even
>   more potential places where you can be attacked (MitM).
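The user-controlled knobs described above (own trust level per peer, number of independent paths required) can be sketched along the lines of GnuPG's classic trust computation. This is a deliberately simplified illustration assuming GnuPG's default thresholds (completes-needed=1, marginals-needed=3); it ignores certification-path depth and is not GnuPG's actual algorithm:

```python
# Simplified model of WoT key validity: a key counts as valid if it is
# signed by at least COMPLETES_NEEDED fully trusted keys or at least
# MARGINALS_NEEDED marginally trusted keys. Thresholds mirror GnuPG's
# defaults; both are knobs the *user* controls.

COMPLETES_NEEDED = 1
MARGINALS_NEEDED = 3

def key_valid(signer_trusts):
    """signer_trusts: owner-trust levels of the keys that signed it."""
    full = sum(1 for t in signer_trusts if t == "full")
    marginal = sum(1 for t in signer_trusts if t == "marginal")
    return full >= COMPLETES_NEEDED or marginal >= MARGINALS_NEEDED

assert key_valid(["full"])                               # one trusted friend
assert key_valid(["marginal", "marginal", "marginal"])   # three half-trusted
assert not key_valid(["marginal", "marginal"])           # not enough paths
```

The point is that the thresholds, unlike anything in X.509 or TOFU, are the user's own policy decision.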
> >  - When you rely on the WoT, you rely on the people who made the
> >    signatures to have done due diligence (which is itself not very
> >    well defined).
> Sure, absolutely,... never disputed this.
> But when I really use the WoT (in contrast to what I do most, i.e.
> direct mutual authentication), then I know my friends, and I know which
> of them signs just because the guy XYZ promises him that he really is
> XYZ (but has accidentally forgotten his ID document or whatever), and
> which of them demands passport + DNA sample ;-)
> It's still me who decides where I want to give a lesser level of trust
> than with direct mutual authentication.
> >    There are, however, many examples of people signing keys that they
> >    haven't checked or checked poorly.  In 2006, for instance, Martin
> >    Krafft used a "fake" id at the DebConf KSP.  Only 1 in 10 people
> >    called him out.  Here's his explanation and some reactions:
> >
> >    More recently (2014), Martin tweeted:
> >
> >      Received signatures for my #GPG key again at #DebConf14 although
> >      I did not attend the keysigning event.
> >
> >
> Sure... happens especially with more "famous" persons...
> >  - You bring up nation states as potential threats multiple times.
> >    This is ironic, because key signatures are typically based on
> >    verifying government issued id.  If the government wants to
> >    infiltrate the WoT, it apparently just has to create a few fake ids
> >    and send some agents to a Debian KSP after which they'll quickly be
> >    in the strongly connected set and can certify any key they like.
> Again, this is basically true. But I don't think it's such a big problem
> either.
> a) As mentioned before, I can still decide whom I want to believe. And
> if some information is really precious, I simply only rely on direct
> mutual authentication.
> b) The problem you describe basically boils down to "what's an
> identity". Is "Linus Torvalds" the guy born somewhere in Finland whose
> birth certificate says Linus Torvalds, or is he the guy, of whatever
> name, who has been making major contributions under that name to Linux
> and so on for years?
> So yes, a government can of course fake his passport, but since he's
> famous, they'd also need a double. And if he had signed all his
> contributions, then people would probably go for "the guy who made
> those contributions, signed by key ABCDE" and not "the guy who's
> officially named Linus Torvalds" (at least from a security PoV and in
> an "ideal" world).
> c) For a stranger, when you want to sign his key to improve the WoT, is
> there any better way to prove his identity (one that doesn't require
> checking through all his life and history) than ID documents?
> In other words, when you sign strangers to enrich the WoT, you cannot
> do much better, can you? And you personally probably won't communicate
> with that stranger anyway... and others who use your signature as part
> of an indirect trust path are again covered by (a).
> And for those people I really know personally, I don't care much what
> their passport says: it's just the name by which I recognise them,
> whether it's the right or a fake name. The trust in their
> identification comes from how they look, sound, behave etc.
> d) Last but not least:
> Again, your argument is nothing that X.509 or TOFU would solve better.
> In X.509, CAs also use ID documents... at best... and more realistically
> they do a challenge-response to some email address (at least when it
> comes to server certificates)... which is IMHO a bad joke when you
> afterwards use fancy XXXXbit keys to "secure" things.
> And TOFU, again, makes no identity check at all.
> >
> >    See this note from Mike Perry (Tor Project) covering this as well
> >    as other weaknesses in the WoT:
> >
> >
> September/030235.html
> I just had a short glance over this, but he seems to misunderstand many
> basic principles.
> Take "It leaks information": that is, I guess, unavoidable when you
> want a system in which you can authenticate peers you've never met.
> Trust, and especially "visible trust paths", are IMHO the only chance
> to get at least some kind of identification.
> His point that everyone would be a CA... well, sure, but where's the
> problem? A CA is not much more than something that assures an identity;
> whether I trust it or not is a completely different topic.
> There are a number of problems with communication to the keyserver
> network, but actually not so much the ones he describes, but rather
> blocking and/or downgrade attacks.
> But these problems exist in similar (or worse) forms for X.509 as
> well, and with TOFU, again, you have nothing at all.
> The scenario he devises is IMHO, while realistic, simply contrived to
> counter-argue the WoT:
> a single "assuring" key (Roger) which is either compromised or to which
> Edward has no trust path at all (and thus a fake Roger key could be
> presented by the sysadmins).
> Of course it's realistic that one wants to communicate with someone
> one has never met, but then relying on a single other key?! Well, you
> can't protect people from shooting themselves in the foot, can you?
> Better look at whom you give full trust.
> And in the other direction (i.e. the journalist verifying Edward,
> where there seems to be no trust path at all?)... well, realistic...
> but nevertheless a pointless evaluation... it's like giving my laptop
> a bad review because it cannot fly.
> Actually this example shows the following:
> you have a situation (secure communication between people who have
> never directly met) that needs to be solved.
> A situation which is inherently unsolvable... and he measures the WoT
> by its failure in this situation.
> But both X.509 and TOFU would perform much worse. We've all seen how
> easy it apparently is to trick commercial CAs into giving you what you
> want (or you are China and simply run your own).
> And TOFU... well, if the sysadmins sit in between, why should they be
> able to attack the WoT (where they need at least that compromised key)
> but not the fully unsecured trust on first use?
> This example just shows why TOFU is an illusion, especially when
> someone decides to do something (MitM) against it, and especially since
> there is no single reason that would prevent e.g. the NSA from doing
> mass MitM.
> >  - In practice, the WoT is hard to use.  If you endow marginal trust
> >    in others' signatures, then it can be hard to find a good path.
> >    The other day, I tried to verify a friend's key.  Even though I
> >    have about 100 signatures on my main key and he has 37, gpg said
> >    his key was not trusted.
> Sure, but a) you can control yourself how much trust you require, and
> b) you want it to solve a problem that is inherently unsolvable.
> What kind of magic should it do to create trust it doesn't have? At
> least it does a bit more than the other models.
> >  - The practical result is that exploiting the WoT is hard.  You
> >    either need to directly verify someone's identity (which isn't
> >    really WoT)
> As I've said, the WoT shouldn't be expected to magically create trust
> between everyone and everyone else.
> If you have a better solution for how this could be done... just tell
> us =)
> But again, there's nothing here where TOFU would give you any more.
> >  get a lot of signatures or just ignore the frequent
> >    not trusted warnings (which I and many others often do).
> You can't blame the system when you explicitly and intentionally use
> it wrong.
> >  - Ignoring these warning *is* a serious problem as Erinn Clark, the
> >    release manager for Tor, has recently observed.
> And TOFU is basically like *always* and *automatically* ignoring that
> warning (of course just on the first use, but that's no different from
> what you'd do with your WoT or first-time SSH), while still promising
> people that they'd now be more secure against "something".
> >   Someone uploaded a
> >    key with her identity to the public key servers.  If people have
> >    gotten into the bad habit of using trust=always (or ignoring the
> >    warning), then they'll happily accept signatures from this bad key.
> >    TOFU and its emphasis on consistency could potentially help here.
> Then either you or I misunderstand something about it ^^
> How does TOFU help here? Trust on first use (without any further
> checks)... what's the difference from blindly saying trust=always?
> >  - Indeed, some well-known cryptographers, such as Peter Gutmann,
> >    argue that continuity (i.e., TOFU) is strictly better than third
> >    party attestations (i.e., signatures):
> >
> > (Page 8).
> I haven't read through all of this, but I don't see him advertising
> TOFU, does he?
> What he says is "key continuity"... which is basically what OpenPGP
> (and others) give you by using the same key, rather than creating e.g.
> a new OpenPGP key for each communication.
> That's basically why we use public-key cryptography: so that we don't
> need to securely exchange a new symmetric key for each message
> (respectively, the PKC does that for us).
> He also says "Verify that the current key is the same as the one that
> you got previously", which *is not just* TOFU.
> TOFU does this (after the first use), but so do the others.
> And when looking at his example for SSH he says:
> "On first connect, client software asks the user to verify the key" and
> "Done via the key fingerprint"... and this is actually not TOFU.
> He wants people to *verify* the key (e.g. via the fingerprint), not
> simply trust it on first use.
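The difference is small in code but large in security. A hedged sketch of this "verify on first connect" behaviour, where confirm() stands in for the human checking the fingerprint against a trusted out-of-band copy (that interactive step is an assumption of the sketch, not a real SSH client API):

```python
# Verified-first-use, as opposed to blind TOFU: the first key is pinned
# only after the user confirms the fingerprint out of band (e.g. read
# over the phone, printed on a business card).

known_hosts = {}  # host -> pinned fingerprint

def first_connect(host, fingerprint, confirm):
    if host in known_hosts:
        # Subsequent connects: pure continuity check, same as TOFU.
        return known_hosts[host] == fingerprint
    if confirm(fingerprint):        # human verifies against trusted copy
        known_hosts[host] = fingerprint
        return True
    return False                    # refused: nothing gets pinned

# With verification the key is pinned; blind TOFU would skip confirm().
assert first_connect("example.org", "SHA256:abc", lambda fp: True)
# Later connects succeed on continuity alone.
assert first_connect("example.org", "SHA256:abc", lambda fp: False)
```

Blind TOFU is this same code with confirm() replaced by an unconditional True, which is exactly the difference being argued here.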
> >  - The WoT suffers from the revocation problem.  For instance, it
> >    takes hours for key updates to propagate between the servers
> >    participating in
> First, this isn't really a part of the WoT.
> It's simply the question of how to build key and key-revocation
> directories.
> Admittedly, the OpenPGP keyserver network has its problems (IIRC I
> reported them here or on a gnupg mailing list years ago already)... but
> it still performs *way* better than how the same problem is solved on
> the X.509 side (in practice always just a voluntary OCSP query, or
> CRLs which are updated even less often or mostly not even reachable)...
> and infinitely better than what TOFU does (no revocation directory at
> all... or at best some periodic auto-revocation like you have with
> HPKP).
> The keyserver network is actually the only thing right now which could
> at least in principle be used to somewhat protect against evil
> keyserver operators, in that each query and each submission is always
> made to multiple of them (and something like hkps is used).
> > Further, GnuPG doesn't
> >    check for key updates automatically so the problems are actually
> >    worse than when using PKI.
> I assume that by PKI you mean the X.509 PKI?
> Not really... first, this is nothing the standard would mandate.
> So if anything, it would only be a gnupg issue.
> And X.509 clients don't check for new keys either; if at all, then only
> for revocations. And there we all know that basically all systems do
> this just opportunistically, because the OCSP/CRL system is so fragile.
> Browsers typically don't even warn you when an OCSP query failed (if
> they even do one at all).
> >  - The WoT leaks lots of information.
> You brought that up before, I think, but again... it's nothing you
> could really solve with any other system (and especially not TOFU,
> which definitely needs some way to fetch the keys of arbitrary peers).
> It's basically like wanting a phone book (to communicate with
> strangers) where the phone book isn't allowed to contain any names
> (and perhaps not even numbers).
> Long story short... I don't think the alternatives (X.509, TOFU)
> perform better than the WoT on any of your points.
> But more generally, my post about TOFU was less an advertisement for
> the WoT; it was just a collection of arguments against TOFU, both at
> the technical level itself and at the higher "meta" level of the
> long-term political and social problems it introduces.
> If you compare it with other trust models, then you miss my actual
> points.
> For me the central point of TOFU is the anonymous authentication which
> is entrusted on the first use.
> And this is also what makes it worse than any of the other trust
> models: it simply gives you nothing (unless you have good luck).
> And the really evil part about it is that it's now sold to people as a
> good defence against mass surveillance, which is in turn based on the
> completely unrealistic assumption that NSA & Co. would stand still and
> do nothing.
> Cheers,
> Chris.
> _______________________________________________
> Gnupg-devel mailing list
> Gnupg-devel at