zlib-compressed gpg-encrypted data files

Markus Bauer mc_mc_mc at lycos.de
Wed Nov 10 11:29:23 CET 2004


Hi,


(If you cannot help me yourselves, do you have an idea who could help me/us?
I would give a lot to get such a recovery working.)


I have an important gpg file (not my own) on a formatted hard drive. It occupies exactly
8811 data blocks on an ext2 file system (one block = 4096 bytes), so the file size is about
36MB (8811 x 4096 bytes). The blocks are scattered over a file system with 3500000 blocks
(14GB). The file is very important, so I started trying to recover it.

The facts:
- I have already recovered all the gpg metadata, keyring etc.
- I wrote a program to find the beginning of the gpg file, and I have found the first blocks
  (a rough sketch of the idea follows this list).
- There is a file that records the size of the file before formatting,
  so I know that the exact size should be 8811 blocks.
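
The search program itself does not matter much, but roughly the idea is the following. This is
only a minimal sketch, not my actual program: the 0x85 test byte is an assumption (the usual first
byte of an old-format public-key-encrypted session key packet) and should be checked against the
real first recovered block, and the loop is far too slow to run over the whole image as written:

#!/bin/bash
# minimal sketch of a start-of-file scan (not the real program)
# 0x85 is assumed to be the first byte of the gpg file (old-format
# public-key encrypted session key packet); check this against the
# recovered beginning before relying on it
img=/sda1/data.bin
blocks=3500000
for (( i = 0; i < blocks; i++ )); do
	first=`dd if=$img bs=4096 skip=$i count=1 2>/dev/null | od -An -N1 -tx1 | tr -d ' '`
	if test "$first" = "85"; then
		echo "candidate start at block $i"
	fi
done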

The file was encrypted with default settings, so it is also compressed with zlib. I have already
recovered the first 13 blocks (because I found the beginning, as mentioned above; luckily
there was only one gpg file on the hard disk). Inspecting the file (decrypting with gpg's
debug options) tells me that it is exactly the right file: the encrypted file was
'data.zip', and gpg even printed the original size.
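
For reference, a verbose decrypt of the recovered beginning (called .current below) along these
lines shows that; the exact options are from memory, and the homedir is the one with the
recovered keyring:

gpg -vv --homedir /home/mark/.gnupg_recover --output /dev/null -d .current

Among the verbose messages gpg prints the original file name ('data.zip').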

Now I downloaded the gpg sources, changed them a bit and wrote a small shell script to find
the whole file (by scanning the whole partition).

Among other changes, I added the following line to g10/compress.c, around line 176 (sources from
1.0.6):

fprintf(stdout, "%lu\n", (unsigned long)iobuf_tell(a));

I know this is not a completely clean solution, but my file is not larger than 50MB, so it is good enough.

How big is the zlib input buffer by default? Looking at g10/compress.c around line 65, I would
think the buffer size is 8192 bytes, i.e. 2 blocks on the file system, so I have to be lucky
that at least pairs of file system blocks are stored back-to-back.
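
One way to check that granularity empirically would be to decrypt shorter and shorter copies of
the recovered beginning (the file .current used by the script below) with the patched gpg and see
in which steps the reported offset changes. Only a sketch, assuming the patched gpg prints exactly
one number, as in the script below:

#!/bin/bash
# probe the read granularity of the decompressor: feed the patched gpg
# truncated copies of the known-good 13-block beginning and watch in
# which steps the reported offset changes
for blocks in 13 12 11 10 9 8; do
	dd if=.current of=.trunc bs=4096 count=$blocks 2>/dev/null
	n=`./gpg -q --batch --homedir /home/mark/.gnupg_recover --output /dev/null -d .trunc 2>/dev/null`
	echo "input blocks: $blocks   reported offset: $n"
done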

I hope you know what I mean.

The following script is meant to scan the whole file system and string the original file back
together. My modified gpg executable is in the same directory as the script; when the script
starts, the beginning of the file is in the file .current, size 53248 bytes (13 blocks). Decrypting
this file, the fprintf line inserted above reports 51496 bytes:

#!/bin/bash

declare -i lastsize
declare -i cursize
declare -i divisor
declare -i i

# make sure that all error output from gpg, the decrypted output and the
# normal messages are redirected to /dev/null, so that we only get
# the number from the inserted fprintf line
lastsize=`./gpg -q --batch --homedir /home/mark/.gnupg_recover --output /dev/null -d .current 2>/dev/null`

i=0
while true; do
	# try to read 10 blocks at once
	dd if=/sda1/data.bin bs=4096 count=10 skip=$i of=.buffer 2>/dev/null

	# merge old and new file to a new file '.test' in order
	# to test if more fragments could be decrypted
	cat .current .buffer > .test

	# read out how far gpg gets with the new file (the number from
	# the inserted fprintf line)
	cursize=`./gpg -q --batch --homedir /home/mark/.gnupg_recover --output /dev/null -d .test 2>/dev/null`

	# debug (Block, lastsize and currentsize)
	echo -e "B: $i\tL: $lastsize, C: $cursize"

	if test $cursize -gt $lastsize; then
		# YEEEEAH!!! We've found another piece!!!
		echo "Woooow, we've found a new fragment. old size: $lastsize, new size: $cursize" | tee log.txt

		# how many blocks on the filesystem is that?
		divisor=$(( $cursize / 4096 ))
		if test $(( $cursize % 4096 )) -ne 0; then divisor=$(( divisor + 1 )); fi

		# add new data found to file .current
		rm -f .current
		dd if=.test of=.current bs=4096 count=$divisor 2>/dev/null

		lastsize=$cursize
		i=$(( $i + $divisor))

	else
		# no new data was found. Just increase loop variable
		i=$(( i + 1 ))
	fi
done



The script has now been running for about 15 days (yes, it is a big hard drive) and no new fragments
have been found. First I started the script with i=3081920 (the block offset where I found the
beginning of the gpg file), and after that I searched the data before it (starting from block 0).


I hope you can understand what I am doing here.
Why does this script not find the next piece? The file is very old, so it should not be
too fragmented.
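
One thing I could still do to rule out a problem in the mechanism itself is a self-test on data
that is known to be right: drop the last block of the recovered beginning and check that
re-appending exactly that block from the image makes the reported offset grow again. Only a
sketch; the START value is an assumption (it must point at the image block that really holds the
13th block of the file, and 3081920 + 12 is only right if the first 13 blocks are contiguous):

#!/bin/bash
# self-test of the extension logic on known-good data
# START must be the image block holding the 13th block of the file;
# 3081920 + 12 is an assumption (contiguous beginning)
START=$(( 3081920 + 12 ))
dd if=.current of=.head bs=4096 count=12 2>/dev/null
dd if=/sda1/data.bin of=.blk bs=4096 skip=$START count=1 2>/dev/null
cat .head .blk > .selftest
before=`./gpg -q --batch --homedir /home/mark/.gnupg_recover --output /dev/null -d .head 2>/dev/null`
after=`./gpg -q --batch --homedir /home/mark/.gnupg_recover --output /dev/null -d .selftest 2>/dev/null`
echo "12 blocks: $before   12 blocks + re-appended 13th: $after"
# if 'after' is not larger than 'before', the cursize test in the main
# loop can never trigger, and the problem is not fragmentation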

But is my beginning correct, or is something wrong with the offset at which the zlib
decompressor reports its error?


I would be very, very happy if anybody could help me, because the file is important and the facts
are not so bad: "small" size, length known, beginning known, all keys known, encryption settings
known, and a good chance that the file is not badly fragmented.



Thanks a lot!!

Markus


PS: This file belongs to a good friend of mine, and no, he does not have backups! The file
also contains projects and source code for programs we developed together a long time
ago.

PPS: Sorry for writing to this list, but I think my problem can only be answered by a developer
who knows the internals of the zlib compression...



