Make incremental backups using rsync

17. June 2012


rsync can move files to a backup directory instead of deleting them. Creating a new backup directory every time rsync is executed gives you a clean record of which files were deleted at which run, plus the ability to recover any state that was ever backed up.

Note that --backup-dir is interpreted relative to the destination directory, so you should probably use a path outside of it.

The core command is:

rsync -av --backup --backup-dir=../backup-diff-$number --delete /media/sourceMedia/ backup-full

¬ written by gimi in Uncategorized

Simple but effective JPG image recovery tool written in C: jpeg-recover2

19. June 2011


This is a C implementation of jpeg-recover by Adam Glass, originally written in Perl.

Though I was happy to see quite a lot of images restored from my camera's memory card using the original tool, it was way too slow for the 4 GB it had to process. So this program was written, and extended in some ways, so that it matches more pictures in a fraction of the time.

Please consider that this was written 2-3 years ago. I make no guarantees of success and accept no liability for any damage caused by this software. Contact me in case anything goes wrong.

How does it work?
Most filesystems, such as the FAT variants used on SD cards and other portable memory, write files contiguously to the device. This lets us search for typical JPEG headers to find the beginning of a photo and for other patterns to find its end. It is of course possible that you won't find any images if the headers your camera writes are unknown to the program.
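The principle can be demonstrated with plain grep: JPEG files start with the marker bytes FF D8 FF and end with FF D9, so byte offsets of candidate images can be found with a binary-safe search. A toy sketch against a fabricated "disk image" (this is just the idea, not how jpeg-recover2 is actually implemented):

```shell
# Build a fake disk image: junk, one embedded "JPEG", more junk.
img=$(mktemp)
printf 'garbage' > "$img"                          # 7 bytes of junk
printf '\xff\xd8\xffJFIFDATA\xff\xd9' >> "$img"    # fake JPEG body
printf 'trailing' >> "$img"

# -a: treat binary as text, -b: print byte offset, -o: print only the match
start=$(LC_ALL=C grep -abo $'\xff\xd8\xff' "$img" | cut -d: -f1)
end=$(LC_ALL=C grep -abo $'\xff\xd9' "$img" | cut -d: -f1)
echo "header at byte $start, end marker at byte $end"

# carve the candidate image out of the disk image (+2 covers the end marker)
dd if="$img" of=/tmp/candidate.jpg bs=1 skip="$start" \
   count=$((end + 2 - start)) 2>/dev/null
```

A real run does the same thing over the whole device, carving every region between a known header and the next end marker.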

How to use it?
You can compile it on your own from source or download the binary for 32bit (take this one if unsure) or 64bit.

gcc -o jpeg-recover -s -O3 jpeg-recover2.c

Now you can use it to scan an image made from the device or the device itself. Don’t worry, it won’t be modified.

gimi@meerkat8471:.../jpeg-recover$ ./jpeg-recover -h
Usage: ./jpeg-recover [parameter] disk-image
    -o p  prepend given p to every written image, defaults to RIMG_
    -s b  skip b bytes before searching
    -a    save all matching JPEG header and endings
          (!) requires much space and manual selection
              default is to save first matching
    -l b  match images greater than b bytes, defaults to 102400
    -u b  match images smaller than b bytes, defaults to 5120000
    -v    be verbose
    -h    display this help
    ./jpeg-recover64 -o out/RIMG_ -s 10000000 /tmp/defect.dd
For more info, please visit:
gimi@meerkat8471:.../jpeg-recover$ ./jpeg-recover /dev/sda1

¬ written by gimi in Snippets

urlencode and urldecode for bash scripting using sed

19. June 2011


To urlencode or -decode strings in bash scripts, one can use simple sed scripts containing one rule for every character. You can download the urlencode.sed and urldecode.sed files.

encodedUrl=$(echo "$url" | sed -f urlencode.sed)
url=$(echo "$encodedUrl" | sed -f urldecode.sed)
echo "encoded: $encodedUrl"
echo "decoded: $url"


encoded: http%3a%2f%2fgimi%2ename%2fsnippets%2furlencode%2dand%2durldecode%2dfor%2dbash%2dscripting%2dusing%2dsed%3frandomOption=foo%2cbar%2cbaz
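The downloaded files contain one substitution rule per character. An abridged inline version (just a handful of rules, enough for a round trip on the example above; the real files cover all reserved characters) looks like this:

```shell
# Order matters: '%' must be encoded first and decoded last,
# otherwise already-encoded sequences get mangled.
urlencode() { sed -e 's/%/%25/g' -e 's/ /%20/g' -e 's,/,%2f,g' \
                  -e 's/:/%3a/g' -e 's/?/%3f/g'; }
urldecode() { sed -e 's/%20/ /g' -e 's,%2f,/,g' -e 's/%3a/:/g' \
                  -e 's/%3f/?/g' -e 's/%25/%/g'; }

enc=$(echo 'http://gimi.name/?q=a b' | urlencode)
echo "$enc"              # http%3a%2f%2fgimi.name%2f%3fq=a%20b
echo "$enc" | urldecode  # http://gimi.name/?q=a b
```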

¬ written by gimi in console, Scripts, Snippets

Wget download and index script

2. January 2011


Sometimes there is the need for local copies of websites one found interesting or has to work with later. This script downloads a single page, including the pictures and stylesheets, using wget. For convenience, it also generates an index with links to all fetched pages in the same directory, along with some details about each.

Example usage: wget-page

DATEFORMAT='%Y-%m-%d %H:%M'
while [ -n "$1" ]; do
	# download page
	wget --timestamping --page-requisites --convert-links --wait=0 "$1"
	# add to index
	echo "$1" >> .index
	shift
done
# sort index
sort .index | uniq > .index-new
mv -f .index-new .index
# regenerate index.html
(
	echo "<html>"
	echo "  <head>"
	echo "    <title>WGET-Page Index</title>"
	echo "  </head>"
	echo "  <body>"
	echo "    <h1>WGET-Page Index</h1>"
	echo "    <ul>"
	cat .index | while read line; do
		# map http://host/path to the local copy ./host/path
		local=$(echo "$line" | sed -e 's,http://,./,')
		if [ -e "$local" ]; then
			# escape '?' so the href stays valid
			link=$(echo "$local" | sed -e 's,\?,%3F,')
			stamp=$(date +"$DATEFORMAT" -r "$local")
			# credit for next line goes to
			title=$(awk -vRS="</title>" '/<title>/{gsub(/.*<title>|\n+/,"");print;exit}' "$local")
			echo "      <li>"
			if [ -n "$title" ]; then
				echo "        <b>$title</b><br />"
			fi
			echo "        $line<br />"
			echo "        go to <a href='$link'>local copy</a> or <a href='$line'>original page</a><br />"
			echo "        last modified $stamp</li><br />"
		else
			echo "$0: Page '$line' not found" 1>&2
		fi
	done
	echo "    </ul>"
	echo "    <hr />"
	echo "    <i>generated by $(basename "$0"), last modification $(date +"$DATEFORMAT")</i>"
	echo "  </body>"
	echo "</html>"
) > index.html

¬ written by gimi in console, Handy, Scripts

Extract ogg audio from any clip like flv, mp4, mov, wmv

4. September 2010


I like to download YouTube videos using youtube-dl. Sometimes an audio file of that clip would be cool. This can be done with 2ogg movie_file.mp4 "artist name" "song name"

# 2ogg: Takes a given video playable by mplayer and encodes its audio track to an ogg file.
# Consider the loss of quality.
if [[ $# -ne 1 && $# -ne 3 ]]; then
	echo "Usage: $0 <video> [<artist> <title>]"
	exit 1
fi
# dump the audio track to a temporary wav file, then encode it
temp=$(mktemp --suffix=.wav)
mplayer -quiet -ao pcm:fast:file="$temp" -vc dummy -vo null "$1" &&
oggenc -a "$2" -t "$3" -c "source=$1" -q 7 -o "$1.ogg" "$temp"
rm "$temp"

¬ written by gimi in Handy, Scripts

Based on a theme by BenediktRB • Top left picture from, cc-by • Powered by Wordpress • RSS Feed