the TOFU lie - or why I want my meat...

Neal H. Walfield neal at walfield.org
Fri Apr 10 11:37:18 CEST 2015


At Thu, 09 Apr 2015 23:51:06 +0200,
Christoph Anton Mitterer wrote:
> > I think it is fair to summarize your post as follows: TOFU is
> > significantly weaker than the Web of Trust and adoption of TOFU will
> > weaken the WoT.
> Not quite. Actually I think I didn't write so much about why the WoT
> might be good, but more why TOFU is evil (the latter doesn't make the
> former necessarily better).

I understood from your mails that you thought the WoT is better than
TOFU based on the following comment (among others):

  At Wed, 01 Apr 2015 00:40:35 +0200,
  Christoph Anton Mitterer wrote:
  > OpenPGP and similar schemes ... [are] typically only used by
  > people who want stronger security (i.e.  those who don't trust
  > that fragile strict hierarchical and CA based model of X.509). Or
  > in cases where it needs to be sure that a 3rd party cannot forge
  > anything (e.g. when distributing packages of a Linux distro).

> TOFU is not really a trust model, IMHO, since it's simply a wrong
> assumption that it would give any real trust about the identity.
> It's basically like deciding that the first time someone tells you an
> answer to some question, you believe that very answer and consider
> any other possible future answer to be wrong.
> So when you ask me the first time in your life: "Are apples a fruit?"
> and I tell you "no they're a species of monkeys" you'd believe and trust
> that.
> (No one who's sane would IMHO do that in real life, but anyway...)
> 
> So if you really want to call TOFU a trust model, then I'd call it
> "believe anything", again nothing which I would want to do, when it goes
> about security.

I think we have fundamentally different perspectives.

Using an analogy from statistics, you seem to have a frequentist
perspective and I have a Bayesian perspective.  Neither one is better
than the other.  They simply interpret probabilities in different ways
and thus provide different information.  When a frequentist says that
the probability of an event is 10%, they mean that in a long run of
repeated trials the event will occur about 1 out of 10 times.  For a
Bayesian, probabilities correspond to degrees of *belief*.  Thus, when
a Bayesian says that the probability of an event is 10%, that
statement is grounded in a set of beliefs.  As the Bayesian observes
more evidence, the beliefs are updated and (hopefully) begin to
converge to the truth.  Because the Bayesian has so little data at the
beginning (i.e., prior to any evidence), a so-called prior is used.
This represents the belief in the absence of evidence.

For more information about frequentist vs. Bayesian methodologies,
see Larry Wasserman's blog post:

  https://normaldeviate.wordpress.com/2012/11/17/what-is-bayesianfrequentist-inference/
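
As a concrete illustration of a single Bayesian update, here is a
minimal sketch in Python.  The bayes_update function and all of the
numbers are mine, chosen only to make the arithmetic visible:

  # One application of Bayes' rule: update our belief that a
  # hypothesis H is true after seeing one piece of evidence E.
  # P(H|E) = P(E|H) * P(H) / P(E)

  def bayes_update(prior, p_e_given_h, p_e_given_not_h):
      """Return the posterior probability of H after observing E."""
      p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
      return p_e_given_h * prior / p_e

  # Illustrative numbers: a skeptical 10% prior that a key is correct,
  # and evidence that is five times more likely if the key is correct
  # than if an attacker substituted it.
  posterior = bayes_update(prior=0.10, p_e_given_h=0.90,
                           p_e_given_not_h=0.18)
  print(round(posterior, 2))   # 0.36: more confident, but far from certain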

So, how does this apply to TOFU?  You seem to be arguing that once we
see a key, TOFU asserts that it is 100% correct and can be relied upon
until we see evidence to the contrary.  I'd argue that this is the
wrong approach: the first sighting is just a little bit of evidence
and we need to be skeptical.  Instead, we should use a strong prior
that says we don't know whether we can trust this key.  Then, after
seeing one example, we might assert P(K is correct) = (say) 10%.  As
we observe evidence that the key is really controlled by the same
person, we increase our belief that the observed key is correct.  We
get more evidence by observing more signed mails from the same person,
preferably received via different network paths, which decreases our
belief that there is an active MITM attack.  Note that our prior also
means that we will never assert with 100% confidence that the key is
correct.
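
To sketch that updating process (again with invented likelihoods, not
anything measured or taken from GnuPG), repeated observations push the
belief toward, but never to, 1.0:

  # Repeatedly update a skeptical prior as more signed mails arrive.
  # Mails received via a different network path are treated as stronger
  # evidence, since an active MITM would have to control every path.

  def bayes_update(prior, p_e_given_h, p_e_given_not_h):
      p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
      return p_e_given_h * prior / p_e

  belief = 0.10        # belief that the key is correct after first use
  observations = ["same path", "same path",
                  "different path", "different path"]

  for obs in observations:
      # Evidence over a different path is harder for a MITM to fake
      # consistently, so it is less likely under the "wrong key" case.
      p_not_h = 0.5 if obs == "same path" else 0.2
      belief = bayes_update(belief, p_e_given_h=0.9,
                            p_e_given_not_h=p_not_h)
      print(obs, round(belief, 3))

  # Output climbs roughly 0.17, 0.26, 0.62, 0.88: the belief approaches
  # 1.0 but never reaches it, so we never claim 100% confidence.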

> b) The problem you describe boils basically down to "what's an
> identity". Is "Linus Torvalds" the guy born somewhere in Finland whose
> certificate of birth says Linus Torvalds, or is he the guy of any name
> who makes major contributions under that name to Linux and so on for
> years.
> So yes, a government can of course fake his passport, but since he's
> famous, they'd also need a double. And if he'd have signed all his
> contributions then people would probably go for "the guy who made that
> contributions signed by key ABCDE" and not "the guy who's officially
> named Linus Torvalds" (at least from a security PoV and in an "ideal"
> world).

This is an argument from consistency, and consistency is exactly what
TOFU can check.

> c) For a stranger, when you want to sign his key to improve the WoT, is
> there any better way to prove his identity (one that doesn't require to
> check through all his life and history) than ID docs?
> In other words, when you sign strangers to enrich the WoT, then you
> cannot do much that would be better, can you? And you personally
> probably won't communicate with that stranger anyway.. and others that
> use your signature as part of an indirect trust path have again (a).
> And for those people I really know personally, I don't care much what
> their passport says - it's just the name by whom I recognise them,
> whether it's now the right or a fake name. The trust for their
> identification comes from how they look, sound, behave etc.

Again, you argue for consistency!

> d) Last but not least:
> Again your argument is nothing which X.509 or TOFU would solve better.
> In X.509, CAs also use ID documents... at best... and more realistically
> they do a challenge response to some email address (at least when it
> comes to server certificates)... which is IMHO a bad joke, when you
> afterwards use fancy XXXXbit keys to "secure" things.
> And TOFU again, makes no identity check at all.

TOFU doesn't check identity.  It checks consistency.  This is what I
said in my original note:

  TOFU is good for checking an association between an identity (in our
  case, an email address) and a key.
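
A minimal sketch of what such an association store could look like (my
own illustration; the names are made up and this is not GnuPG's actual
TOFU implementation):

  # Remember which key fingerprint an email address was first seen
  # with, and how often that same binding has been observed since.

  bindings = {}   # email -> {"fingerprint": str, "count": int}

  def record(email, fingerprint):
      entry = bindings.setdefault(email, {"fingerprint": fingerprint,
                                          "count": 0})
      if entry["fingerprint"] == fingerprint:
          entry["count"] += 1   # consistent observation: confidence grows
      # a differing fingerprint would be flagged; see the conflict
      # check sketched further below
      return entry

The more often the same binding is observed, the more weight it gets,
which matches the Bayesian updating described above.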

> >  - The practical result is that exploiting the WoT is hard.  You
> >    either need to directly verify someone's identity (which isn't
> >    really WoT)
> As I've said, the WoT shouldn't be expected to magically give everyone
> trust between everyone.
> If you have a better solution how this could be done,... just tell us =)

If the WoT is rarely usable, then its utility is low.

> >  get a lot of signatures or just ignore the frequent
> >    not trusted warnings (which I and many others often do).
> You can't blame the system for using it explicitly and intentionally
> wrong.

Sure you can.  If such a system is in use and it is inadequate (in
this case, it is too sophisticated and too burdensome for most users),
then we should look for something else.

> >   Someone uploaded a
> >    key with her identity to the public key servers.  If people have
> >    gotten into the bad habit of using trust=always (or ignoring the
> >    warning), then they'll happily accept signatures from this bad key.
> >    TOFU and its emphasis on consistency could potentially help here.
> Then either you or I misunderstand something about it ^^
> How does TOFU help here? Trust on first use (without any further
> checks)... what's the difference from blindly saying trust=always?

The ability to identify inconsistencies.  If you just use
trust=always, then you won't notice changes.
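
As a similarly minimal sketch (again with made-up names and data, not
GnuPG's actual code), the difference is the conflict check:
trust=always accepts any key that produces a valid signature and stays
silent, while a TOFU-style check compares the incoming key against the
stored binding:

  # Compare an incoming (email, fingerprint) pair against what was
  # seen before.  trust=always skips this comparison entirely.

  bindings = {"alice@example.org": "0123456789ABCDEF"}   # previously seen

  def check(email, fingerprint):
      known = bindings.get(email)
      if known is None:
          bindings[email] = fingerprint
          return "first use: accept tentatively, with low confidence"
      if known == fingerprint:
          return "consistent with previous observations"
      return "CONFLICT: key changed; possible MITM, warn the user"

  print(check("alice@example.org", "FEDCBA9876543210"))
  # -> CONFLICT: key changed; possible MITM, warn the user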

> Long story short,... I don't think the opponents (X.509, TOFU)
> perform in any of your points better than the WoT does.
> But more general, my post about TOFU was less an advertisement for the
> WoT - it was just a collection of arguments against TOFU, both at the
> technical level itself and at the higher "meta" level of the long term
> political and social problems it introduces.
> If you compare it with other trust models, then you miss my actual
> points.

I guess I misunderstood parts of your posts.

I acknowledge that TOFU has limitations, but I see its ability to
check consistency as very valuable.

Neal


