Encryption on Mailing lists senseless?

Robert J. Hansen rjh at sixdemonbag.org
Tue Nov 18 17:09:05 CET 2014


> Would this not at the same time make it simple for MUAs to discover
> that "this message is not from anyone you say you know.  Delete
> without reading?"

Sure, but that also destroys the email ecosystem.  One of email's
strongest points has been that no introduction is necessary to begin a
conversation.  This year I found myself re-engaging with a friend I lost
touch with a decade ago, who found me on a mailing list and figured
she'd drop me an email to see if maybe I was the same Rob Hansen she knew from
back when.  If my MUA/MTA had hidden it from me just because there was
no introduction, or urged me to delete it without reading...

Could email as a platform survive the shift to introduction-based
systems?  Sure.  But it would totally transform the email experience,
and maybe in ways we wouldn't like.  That's why I'm so skeptical of
proposals to fix email in this way: we might fix email, but we might
also kill it at the same time.

> Again, if it's provably from no one you say that you trust, the MUA
> could refuse to execute runnable content without explicit
> permission. (Which I say should be the normal and only setting for
> all content, but I know I'm a crank.)

It already is.  Double-click on an executable attachment and a window
will pop up warning you that you should only run code from people you
know and trust: click "OK" to cancel, click "I know the risks" to run
it anyway, etc.

An awful lot of people click "I know the risks."

I've told this story before, but it bears repeating --

During my grad school days I had a colleague named Peter Likarish.
Peter did some great work using Bayesian statistics to detect phishing
sites.  Ultimately, he had an algorithm that could look at webpage
content and decide with 95% accuracy whether a page was real or
phish-phood.  He packaged this up inside a Firefox extension: when you
browsed to a site and the extension detected a phishing attempt, it
would put a narrow red stripe across the top of the screen saying,
"Warning: this may be a phishing attempt!"
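
(For the curious: here's a minimal sketch of a naive Bayes text
classifier in TypeScript.  To be clear, this is NOT Peter's actual
algorithm -- the class, the training data, and the choice of tokens
are all invented for illustration.)

    // A hypothetical naive Bayes classifier over webpage tokens.
    type Label = "legit" | "phish";

    class NaiveBayes {
      private counts: Record<Label, Map<string, number>> = {
        legit: new Map(),
        phish: new Map(),
      };
      private docs: Record<Label, number> = { legit: 0, phish: 0 };
      private vocab = new Set<string>();

      // Record the tokens of one labeled page.
      train(tokens: string[], label: Label): void {
        this.docs[label] += 1;
        for (const t of tokens) {
          this.counts[label].set(t, (this.counts[label].get(t) ?? 0) + 1);
          this.vocab.add(t);
        }
      }

      // Pick the label with the higher log posterior.  Laplace smoothing
      // keeps unseen tokens from zeroing out a probability.
      classify(tokens: string[]): Label {
        const totalDocs = this.docs.legit + this.docs.phish;
        const score = (label: Label): number => {
          let s = Math.log(this.docs[label] / totalDocs);
          let n = 0;
          for (const c of this.counts[label].values()) n += c;
          for (const t of tokens) {
            const c = this.counts[label].get(t) ?? 0;
            s += Math.log((c + 1) / (n + this.vocab.size));
          }
          return s;
        };
        return score("phish") > score("legit") ? "phish" : "legit";
      }
    }

    // Invented usage; real features would be page words, URLs, form
    // targets, and the like.
    const nb = new NaiveBayes();
    nb.train(["verify", "account", "urgent", "password"], "phish");
    nb.train(["weather", "forecast", "sports", "recipes"], "legit");
    console.log(nb.classify(["urgent", "verify", "password"]));  // "phish"

The classifier skeleton is the easy part, of course; getting to 95%
accuracy is all in the feature extraction and the training corpus.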

He put the extension into human trials at the University's HCI
(Human-Computer Interaction) lab.  The results were dismal.
Post-experiment interviews
revealed that people weren't looking at the top of the web page.  They
genuinely didn't notice a red stripe across the top of the screen.

So Peter went back to the drawing board and made a new interface.  Now,
the banner started off small, but there was a "Click to dismiss" button
on it.  Further, the banner would grow larger over time.  Peter knew
that the human eye is sensitive to motion: our eyes are naturally drawn
to things that change.  By making the banner grow larger, he figured he
could increase its visibility.
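
(Again for the curious, here's a sketch of that idea in TypeScript for
a browser page.  This is not the extension's real code; the sizes,
colors, and timings below are all mine, not Peter's.)

    // A hypothetical warning banner that grows over time.
    function showGrowingWarning(message: string): void {
      const banner = document.createElement("div");
      banner.textContent = message;
      banner.style.cssText =
        "position:fixed;top:0;left:0;right:0;height:20px;overflow:hidden;" +
        "background:#c00;color:#fff;transition:height 0.5s;z-index:99999;";

      const dismiss = document.createElement("button");
      dismiss.textContent = "Click to dismiss";
      banner.appendChild(dismiss);

      // Grow every two seconds, exploiting the eye's sensitivity to change.
      let height = 20;
      const timer = window.setInterval(() => {
        height = Math.min(height + 10, 200);
        banner.style.height = height + "px";
      }, 2000);

      dismiss.addEventListener("click", () => {
        window.clearInterval(timer);
        banner.remove();
      });

      document.body.appendChild(banner);
    }

    showGrowingWarning("Warning: this may be a phishing attempt!");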

Back to the lab, and ... still dismal, soul-crushing results.  This
time, the overwhelming majority of the users confirmed they saw the
warning.  When Peter asked them why they chose to ignore it, the
majority said they thought it was just another Flash ad that was hyping
some "fix your PC fast, now!" solution.

I ran into Peter shortly after he finished his final day of human
trials.  He was normally a very cheerful guy, but this day he just
looked shattered.  I suggested we walk down to the nearest watering hole
and grab a beer, but he was too dejected.  He said that of all the
outcomes he had imagined for his Ph.D., he never dreamed his research
could be accurately summed up as, "the technology works fine, it's
*people* who are completely broken."

Incidentally, shortly after I left grad school Peter found a warning
mechanism that worked.  It's a cute technology and one I really wish more
browsers would incorporate.  I don't have a URL for a PDF of the paper
handy, but the poster he presented at SOUPS 2009 is available online at:

https://cups.cs.cmu.edu/soups/2009/posters/p9-likarish.pdf



