integrating GPG with deniable steganography

Matthias Urlichs smurf at
Wed Mar 21 12:31:11 CET 2001


Florian Weimer:
> This way, you can only defeat an attacker whose noise model is worse
> than yours.  That's not a big deal, and it's *not* secure.
Since you can always conceive of a better noise model, that would be a
killer argument against any kind of "conventional" steganography.

You obviously need a data model where the original noise is masked in a
computationally intractable way. My idea is to introduce small
variations into an image or an audio stream so that the result of
lossy compression, decompression (a step which might not be necessary),
cryptographic hashing, and selecting single bits from the hash ends up
equal to the message you want to transmit (which would be a cryptogram,
i.e. random bits, with _exactly_ the same randomness behavior as any
other sequence of random bits).
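A minimal sketch of that embedding loop, under heavy simplifying assumptions: a toy quantizer stands in for a real MP3/JPEG codec, SHA-256 plays the role of the cryptographic hash, and `embed_bit` is a hypothetical name for the search step. The idea is only to show the shape of the perturb/compress/hash/select cycle, not a usable encoder.

```python
# Sketch of the idea above: perturb cover samples until one bit selected
# from a hash of the (simulated) lossy-compressed result equals the
# message bit. Quantization stands in for a real lossy codec.
import hashlib
import random

def compress(samples, step=16):
    """Toy lossy compression: snap each sample to a coarse grid."""
    return bytes((s // step) * step for s in samples)

def extract_bit(block):
    """Select a single bit from the cryptographic hash of a block."""
    return hashlib.sha256(block).digest()[0] & 1

def embed_bit(block, bit, step=16, tries=1000):
    """Randomly perturb samples until the extracted bit matches."""
    rng = random.Random(0)          # fixed seed for reproducibility
    samples = list(block)
    for _ in range(tries):
        if extract_bit(compress(samples, step)) == bit:
            return samples
        i = rng.randrange(len(samples))
        samples[i] = max(0, min(255, samples[i] + rng.choice((-1, 1))))
    raise RuntimeError("no suitable perturbation found")

cover = [random.Random(1).randrange(256) for _ in range(64)]
stego = embed_bit(cover, 1)
assert extract_bit(compress(stego)) == 1
```

Each perturbation is a single-step random walk on one sample, so the stego data stays close to the cover; only the compressed-and-hashed output is steered toward the message bit.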

My point is that the MP3 algorithm introduces noise in the source data,
meaning that there are many inputs which compress down to the same
data stream. Consequently, if you want to prove that I did in fact encode
a message, you need to generate every single sequence of input bytes
which might have resulted in the suspect MP3 file. Then you'd need to
prove that _none_ of these inputs have the same noise print as a
comparable audio file -- which might not even exist.
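The many-to-one property is easy to demonstrate with the same toy quantizer (an assumed stand-in for a real encoder, not how MP3 actually works): whole ranges of distinct inputs collapse to one compressed output, so the suspect stream alone pins down none of them.

```python
# Illustration of the many-to-one argument: a toy quantizer in place of
# a real MP3 encoder (an assumption made purely for this example).
def quantize(samples, step=16):
    return tuple((s // step) * step for s in samples)

# All 16 * 16 = 256 two-sample inputs with values in [0, 16) quantize
# to the identical output, so the output has 256 preimages here alone.
preimages = {(a, b) for a in range(16) for b in range(16)}
outputs = {quantize(p) for p in preimages}
assert len(preimages) == 256
assert len(outputs) == 1
```

With realistic block sizes the preimage set is astronomically large, which is what makes the enumerate-and-rule-out proof described above intractable.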

I assume here, of course, that there's no good noise model for the
target domain, i.e. the decompressed MP3 file or JPEG image.
Considering the substantial errors introduced by these algorithms, I
don't think such noise models can work without access to the
uncompressed source file.

Matthias Urlichs     |     noris network AG     |

More information about the Gnupg-devel mailing list