Difficulty of fixing reconciliation

Peter Lebbing peter at digitalbrains.com
Tue Aug 13 13:07:07 CEST 2019


On 13/08/2019 09:54, Alessandro Vesely via Gnupg-users wrote:
> More than a reasonable number of signatures makes no sense in
> practice, so I agree lists should somehow be "fixed" so as not to
> accept an unreasonable number of signatures (reasonable == 2??)

Others tell us: the synchronization protocol as it is cannot be fixed.
You reply: I agree it should - somehow - be fixed?

The short and unhelpful answer, of course, is: read the first sentence
again, or disprove it mathematically, just like the algorithm was
mathematically proven to work in the first place.

But I'll try to sketch it. I don't know the synchronization protocol
in detail. What I do know: the design goals included resistance to
oppressive regimes interested in deleting anything from the network,
so it was designed to make deleting anything technically impossible.

Furthermore, this is from memory, but when I read basic information
about the SKS reconciliation algorithm, I saw terms like "monotonicity
requirement for progress" or something similar.

I don't know if this is how it works, but this is how it could work:

Somebody has worked years on an algorithm for their PhD thesis which
fulfills an important task, and through those years a lot of
mathematical proofs have been spent on proving that it does what was
asked of it. One of these demands was that it could provably never
delete data that had been available earlier.

In order to make this algorithm, the PhD student discovered that they
needed to have some properties hold at *every* single step: otherwise
they could not prove that the algorithm was correct.

So let's view a single step in the reconciliation process as the
reconciliation of a single piece of data that has been added to the
network. The PhD student found they needed a requirement they called
the monotonicity requirement. I don't know what it means precisely in
this case, but I can take a guess: if two hosts are reconciling, the
monotonicity requirement could mean that the dataset at either one of
these hosts

- gains data, and everything that was already there stays, or
- stays exactly the same.

Those are the only two cases that satisfy the monotonicity requirement.
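
To make that concrete, here is a minimal sketch in Python (and
emphatically not the actual SKS algorithm) of a reconciliation step
that satisfies the monotonicity requirement by construction: the only
operation it ever performs is set union.

    # Toy model: each host's dataset is a set of items. Union only
    # ever adds items, so a host's data can only grow or stay the
    # same -- never shrink.
    def reconcile_step(a: set, b: set) -> tuple[set, set]:
        merged = a | b                       # union never removes
        assert a <= merged and b <= merged   # monotone by construction
        return merged, merged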

Furthermore, the end goal is for all hosts to have the same dataset,
so let's define progress as the hosts becoming more and more alike:
when the number of differences between the hosts has been reduced, we
have made progress. Once the process completes, it has progressed to
the point where the number of differences is zero. As an aside, it has
to be proven that we eventually reach that point where they are the
same; otherwise there could be a dataset that keeps the algorithm
spinning endlessly. In fact, it's quite possible that this is where
the monotonicity requirement plays a role.
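
In the toy model above, progress has an obvious measure: the size of
the symmetric difference between the two datasets, which must shrink
to zero. A hypothetical sketch, reusing reconcile_step from before:

    def differences(a: set, b: set) -> int:
        """Items held by exactly one host; 0 means identical."""
        return len(a ^ b)   # symmetric difference

    a, b = {"x", "y"}, {"y", "z"}
    print(differences(a, b))       # 2: not reconciled yet
    a, b = reconcile_step(a, b)    # one monotone step (union)
    print(differences(a, b))       # 0: done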

Let's walk through this with your maximum of 2.

Somebody uses SKS keyserver network host A to add two signatures,
call them A1 and A2, to the key in question, which did not have any
signatures yet. Host A accepts them; they fall within the limit.

Either this same person or someone else adds two signatures, call
them B1 and B2, to the same key on SKS server B. Hosts A and B haven't
reconciled yet, so B sees no other signatures on the key and accepts
them.

Now they reconcile.

They compare their datasets. They are not the same: we still need to
make progress to reach completion. Let's reconcile signature A1.
Server A offers server B signature A1. Wait a minute, the key already
has signatures B1 and B2. Server B cannot accept signature A1; it
would break the contract that there are never more than 2 signatures
on the key.

Now we have a deadlock. There is no next step that would both fulfill
the monotonicity requirement and make any progress. The only way to
fulfill the monotonicity requirement is for server A to keep
signatures A1 and A2, and for server B to keep B1 and B2. But progress
has halted and the process never finishes.
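
Continuing the toy model, here is a hypothetical sketch of what a
per-key cap of 2 does to such a step: with both hosts full, neither
can accept the other's offer without deleting something, so the
difference count never shrinks.

    CAP = 2

    def try_accept(host: set, offered: str) -> bool:
        if offered in host:
            return True      # already present: no change (monotone)
        if len(host) >= CAP:
            return False     # full: accepting would need a deletion
        host.add(offered)    # room left: add (monotone)
        return True

    a = {"A1", "A2"}
    b = {"B1", "B2"}
    print(try_accept(b, "A1"))   # False: B is full
    print(try_accept(a, "B1"))   # False: A is full
    print(len(a ^ b))            # still 4 differences: deadlock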

Ah, you say, why don't you /swap/ B1 for A1? Well, there is no such
thing as swapping. Swapping is simply the deletion of B1 followed by
the addition of A1. And monotonicity says we can't delete anything.

Besides, any limit on the number of signatures is itself a denial of
service. In your case, if Alice, Bob and Carol all sign each other's
keys, there's no more room for other signatures. And if Mallory
creates two bogus keys and signs all the keys of Alice, Bob and Carol,
the three can't add any real signatures anymore.
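
Reusing try_accept from the sketch above, the squatting attack is
trivial: Mallory's two bogus signatures arrive first and fill the
quota, so every later, legitimate signature is refused.

    alice = set()   # Alice's key has no third-party signatures yet
    for signer in ("Mallory-1", "Mallory-2", "Bob", "Carol"):
        print(signer, "accepted:", try_accept(alice, signer))
    # Mallory-1 accepted: True
    # Mallory-2 accepted: True
    # Bob accepted: False   <- the real signatures no longer fit
    # Carol accepted: False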

The algorithm is designed to withstand very well-funded actors;
oppressive regimes were what the designers had in mind. Obviously, the
algorithm resists deletion regardless of who is attempting it, no
matter whether it is a repressive regime or the legitimate operators
and designers of the system.

> The bug, however, is in the program that chokes on poisoned keys!

It seems to me a denial of service is (almost?) always possible.
Offline systems have more trouble with this than online systems, and
even websites (online systems) need really expensive world-spanning
networks to mitigate (rather than truly deal with) DoS attacks.

Asymmetric crypto has a very unfavourable runtime <-> data size ratio.
With just a few bytes, you need a hell of a lot of mathematical
processing to interpret those bytes correctly. It's a really unfortunate
amplification factor: when an attacker hands you a very small payload,
you need a lot of runtime to decide if it was actually something you
wanted. There are better and worse ways to design the data for this
aspect, but it's always going to be on the computation-intensive side of
things.
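
You can get a feel for this amplification with a quick experiment; a
sketch using the Python "cryptography" package, where RSA-2048 and the
blob count are illustrative assumptions, nothing OpenPGP-specific.
Rejecting each 256-byte garbage "signature" still costs real modular
arithmetic, far more than the cost of reading 256 bytes.

    import os
    import time

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pub = key.public_key()
    message = b"some signed data"

    # 1000 random 256-byte blobs posing as RSA-2048 signatures.
    bogus = [os.urandom(256) for _ in range(1000)]

    start = time.perf_counter()
    for sig in bogus:
        try:
            pub.verify(sig, message, padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            pass   # rejected, but only after doing the math
    elapsed = time.perf_counter() - start
    print(f"{elapsed:.2f}s to reject {len(bogus)} bogus signatures "
          f"({len(bogus) * 256} attacker-supplied bytes)")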

So GnuPG gets a key with a lot of third-party signatures. It can keep
wading through the crap, looking for the real signatures the user does
want among everything added by attackers, and this takes a lot of
runtime.

Or it can decide: this is taking too long, I'm bailing out.

In neither case will the user get the signature they actually want,
which, according to Murphy, is near the end of where GnuPG will be
looking.

So the user is denied the signatures they were looking for in either
case.
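
In toy form, the dilemma looks like this; a hypothetical sketch where
time.sleep stands in for whatever an attacker can make each bogus
signature cost to check.

    import time

    def scan(sigs, wanted, verify_cost_s=0.001, budget_s=None):
        start = time.perf_counter()
        for sig in sigs:
            if budget_s is not None:
                if time.perf_counter() - start > budget_s:
                    return None    # bail out before reaching it
            time.sleep(verify_cost_s)   # stand-in for a costly check
            if sig == wanted:
                return sig
        return None

    # Murphy puts the wanted signature behind 100000 bogus ones.
    sigs = ["bogus"] * 100_000 + ["wanted"]
    # No budget: the right answer, after ~100 seconds of grinding.
    # A 5-second budget: None -- the user is denied either way.
    print(scan(sigs, "wanted", budget_s=5.0))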

> Was that fixed, yet?

How? You keep looking through the piles of crap, or you stop looking.
In either case, you don't find what you are looking for within an
acceptable length of time.

I think the solution needs to be sought in a direction where GnuPG
doesn't have to look for valid data amidst a lot of invalid crap.
Since evaluating the invalid crap can always be made expensive, there
should be other means to say "I'm not going to parse this; find
another way to get me the proper data where it's not buried in crap".

HTH,

Peter.

-- 
I use the GNU Privacy Guard (GnuPG) in combination with Enigmail.
You can send me encrypted mail if you want some privacy.
My key is available at <http://digitalbrains.com/2012/openpgp-key-peter>
