Today, however, I ran into a little bit of a snag - I needed to push thousands of files to S3, but I didn't have full control over the server. Installing various libraries and such wasn't a possibility. My first assumption was that I was going to need to push the data to a server that did have s3cmd installed (probably using ssh+tar). Turns out, there was a better and simpler way: use s3-bash.
s3-bash, as the name implies, is a collection of S3-related scripts that leverage commonly installed tools. To my amazement (and appreciation!), I was able to upload these scripts to the server in question, ssh in, and use them to access S3.
I took a peek at the files, and they really are masterfully written. Integrating with Amazon has some tricky requirements (such as the need to derive a request signature), and finding workarounds to do all this using standard tools is quite slick.
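To give you a feel for the trick involved, here's a rough sketch of how a request signature of that era (AWS signature version 2) can be derived with nothing but openssl and base64 - the same sort of thing s3-bash does internally. The secret key and resource path below are placeholders, and this is a simplified string-to-sign, not a drop-in replacement for the real scripts:

```shell
#!/bin/bash
# Placeholder secret key - yours comes from your AWS account.
secret="AWSSecretKeyHere"

# S3 v2 signatures sign a newline-delimited "string to sign":
# HTTP verb, content MD5, content type, date, and the resource path.
date=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')
string_to_sign=$(printf 'GET\n\n\n%s\n/bucket.name/file/path/bar.txt' "$date")

# HMAC-SHA1 the string with the secret key, then base64-encode it.
signature=$(printf '%s' "$string_to_sign" \
  | openssl dgst -sha1 -hmac "$secret" -binary \
  | base64)

# The result goes into the Authorization header alongside the access key.
echo "$signature"
```

Doing that with only stock tools - and handling all the edge cases around headers and encodings - is exactly the kind of work s3-bash saves you.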
With that said, I couldn't help but make my life a little easier by writing a wrapper shell script around the s3-* family of commands. The idea is that rather than specifying the credentials with every invocation, my wrapper script takes care of that. Here's my wrapper:
#!/bin/bash
##
## A custom wrapper around s3-bash to make calling it easier
##
bash_home=/path/to/the/s3-bash/commands
secret=$bash_home/amz.secret
key=`cat $bash_home/amz.key`
headers=$bash_home/amz.headers
what=$1 ; shift
$bash_home/s3-$what -k $key -s $secret -a $headers "$@"
I invoke this script by saying:
s3 get -T foo /bucket.name/file/path/bar.txt
Notice how the script provides the -k and -s options, which are used for authentication. It also provides a -a argument, which supplies Amazon-specific headers. In my case, this file contains:
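(The exact file isn't reproduced here, but for granting public access it would contain the standard ACL header:)

x-amz-acl:public-read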
This forces all files that I upload to be readable by the public.
With my s3 command in place, I was able to push a whole tree of files by running the command:
find * -name '*.xlv' | while read -r f ; do echo "$f" ; s3 put -T "$f" /bucket.client.com/"$f" ; done
Gosh I love Linux! Add s3-bash to your toolkit today; you'll be glad you did.