Network Neutrality

Eric erpo41 at hotpop.com
Wed Mar 22 09:11:39 CET 2006


On Fri, 2006-03-17 at 07:09 -0800, Robert Wohleb wrote:
> This morning I was
> surprised to find my download and upload speed higher than normal. Hell,
> a 2.8GB download is supposed to complete in 12 hours. That hasn't
> happened for a while on Cox. Hopefully this isn't a fluke. I'll report
> back if this keeps up. 

As far as I can tell, Cox stopped sniping bittorrent and gnutella
connections with forged reset packets on the day I told them I'd expose
their practices, or the day after. Maybe they started again afterwards.
Or maybe they only stopped corrupting my and my friends' traffic.
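
For anyone who hasn't seen the trick: "sniping with reset packets" just
means forging a TCP RST that matches an existing connection, so both
real endpoints think the other side hung up. A minimal illustration
with scapy -- addresses, ports, and the sequence number are
placeholders; a real injector has to watch the flow to learn them:

    # Illustrative sketch of the general technique, not Cox's actual
    # gear. Requires root privileges and the scapy package.
    from scapy.all import IP, TCP, send

    def snipe(src, dst, sport, dport, seq):
        # A forged RST matching the connection's addresses, ports, and
        # an acceptable sequence number makes the receiver tear down
        # the session.
        rst = IP(src=src, dst=dst) / TCP(sport=sport, dport=dport,
                                         flags="R", seq=seq)
        send(rst, verbose=False)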

If simply encrypting your connections gives you a speed boost, that by
itself suggests something on Cox's end is worth investigating further.

> I'm sure it is also only a matter of time before
> Cox gets around this if this is really helping.

I doubt it. Provided that bittorrent end-to-end encryption means
something akin to a Diffie-Hellman key exchange at the start of each
connection, there are two ways "around" this that I can think of, both
of which suck for Cox:

1. Content-based whitelisting, meaning you can't make any kind of
connection in or out unless Cox can identify the type of traffic by its
content.

If Cox can't determine the content of the connection because it's
encrypted and Cox has not broken the encryption, then Cox terminates the
connection. This would mean lots of work for Cox, and lots of support
calls from lots of unhappy customers ("My streaming video never works!"
"I'm sorry, but we haven't programmed our systems to track all of your
streaming video viewing yet. You'll have to wait.").
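
To make the whitelisting idea concrete, here is a toy classifier (my
own framing, not anything Cox is known to run): peek at the first bytes
of each new flow and drop anything that doesn't match a known protocol
signature. Encrypted bittorrent looks like random bytes, so it fails
the check -- but so does every protocol the list doesn't cover yet.

    # Toy content-based whitelist. Real classifiers look at much more
    # than the first bytes, but the failure mode is the same:
    # unrecognized == blocked.
    KNOWN_PREFIXES = {
        b"GET ": "HTTP",
        b"POST": "HTTP",
        b"SSH-": "SSH",
        b"\x13BitTorrent protocol": "bittorrent (plaintext handshake)",
    }

    def classify(first_bytes):
        for prefix, name in KNOWN_PREFIXES.items():
            if first_bytes.startswith(prefix):
                return name
        return None

    def allow(first_bytes):
        # Encrypted traffic starts with what looks like random noise,
        # so classify() returns None and the flow gets terminated.
        return classify(first_bytes) is not None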

2. A man in the middle attack, meaning Cox decides to break the
encryption, which is mostly straightforward in this case because an
unauthenticated Diffie-Hellman exchange can be hijacked by anyone
sitting in the path. This creates several interesting problems.
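
(Why "mostly straightforward"? If the key exchange really is plain
Diffie-Hellman with nothing authenticating the peers, the box in the
middle just runs one exchange with each end. A toy sketch with made-up
parameters -- real code would use a vetted group, but no key size helps
when nothing is authenticated:)

    import secrets

    p = 2**127 - 1   # toy prime modulus, illustration only
    g = 3            # toy generator

    def dh_keypair():
        priv = secrets.randbelow(p - 2) + 1
        return priv, pow(g, priv, p)

    a_priv, a_pub = dh_keypair()   # Alice, a bittorrent peer
    b_priv, b_pub = dh_keypair()   # Bob, the other peer
    m_priv, m_pub = dh_keypair()   # the box in the middle

    # The middlebox swaps in its own public value toward each side, so
    # Alice unknowingly shares a key with the middlebox, not with Bob...
    assert pow(m_pub, a_priv, p) == pow(a_pub, m_priv, p)
    # ...and so does Bob. The box decrypts, inspects, re-encrypts, and
    # neither endpoint can tell the difference.
    assert pow(m_pub, b_priv, p) == pow(b_pub, m_priv, p)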

The first is that Cox would have to attempt such an attack on each
unidentifiable connection ("Oh, that's not HTTP. Better mess with it.").
The result would be that any connection using a protocol that Cox's
system isn't set up to interpret and that is NOT using bittorrent end to
end encryption (think multiplayer games, NFS, whatever) would almost
certainly be corrupted. This is maybe worse for the end user than
whitelisting. 

The second is that provided Cox wants to keep its activities secret (as
seems to be the case so far), it would have to throttle encrypted
bittorrent connections instead of terminating them entirely. That would
mean that a Cox computer would have to participate in each encrypted
connection from start to finish, decrypting and re-encrypting on behalf
of both endpoints. Let's be conservative and say that there are 5,000
bittorrent connections in and out of Humboldt County via Cox's network
at any given time. Since each connection has two endpoints, Cox's
servers would have to perform the encrypting and decrypting work
normally parcelled out to 10,000 home PCs, continuously.*
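
A back-of-envelope version of that claim, using the connection count
above and with everything else assumed (per-connection throughput and
cipher cost are guesses, not measurements):

    connections = 5_000          # simultaneous encrypted connections (guess above)
    rate_bytes_per_sec = 50_000  # assumed average throughput per connection
    cycles_per_byte = 20         # assumed symmetric-cipher cost

    total_bytes_per_sec = connections * rate_bytes_per_sec
    # One decrypt plus one re-encrypt per payload byte -- the combined
    # work of both real endpoints on every connection.
    cycles_per_sec = 2 * total_bytes_per_sec * cycles_per_byte
    print(cycles_per_sec / 1e9, "billion cycles per second, around the clock")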



Eric

*P.S.
There is a neat game to be played here. Suppose that Cox can purchase
enough computing power to do the job (hardware + software + electricity
+ maintenance), and that the massive P2P throttling system pays for
itself in bandwidth savings. Then suppose peer to peer developers start
layering symmetric ciphers. Each peer's CPU gets a little more loaded
down, because it only carries its own handful of connections, but Cox
needs a much larger throttle farm, because the farm carries everyone's.
Will P2P users be unable to participate in the network because they
don't have fast enough computers to do all the encryption? Or will Cox
decide that the throttle farm costs more to operate than it saves in
bandwidth? Who will give up first?
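
A toy model of that game (every number is assumed, just to show the
shape of the costs):

    def endpoint_work(layers, own_connections, cost_per_layer=1.0):
        # A home PC only pays for the connections it participates in.
        return layers * own_connections * cost_per_layer

    def farm_work(layers, total_connections, cost_per_layer=1.0):
        # The throttle farm stands in for both ends of every connection.
        return layers * 2 * total_connections * cost_per_layer

    for layers in (1, 2, 4):
        print(layers,
              endpoint_work(layers, own_connections=4),
              farm_work(layers, total_connections=5_000))
    # Doubling the layers adds a few units of work per peer,
    # but tens of thousands of units at the farm.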