Monday, June 04, 2012

WP, S3 and Apache - 3 Tech Recipes I Used Today

As the title suggests, here are three recipes I found quite handy today. Perhaps you'll find one or more of them useful, too?

WordPress: Accessing Additional Query Parameters Within A Theme

I needed to support a sort parameter on a WordPress search page. The URL looks like so:

 http://somesite.com/?s=the+query&sort=date

You'd think you could access $_GET['sort'] to get the value from the query string, but you'd be wrong. WordPress explicitly strips any query parameters it doesn't expect. That's a handy security feature, but a bit painful when you need to use custom parameters.

The solution? Add a hook on the init action that invokes $wp->add_query_var('sort'); the variable will then be available from the $wp_query global object.

Here's what the code looks like:

function prepare_custom_params() {
  global $wp;

  $wp->add_query_var('sort');
  // ... maybe others ...
}
// Register the function on the init action so it actually runs
add_action('init', 'prepare_custom_params');

// Later on, in my theme
<?php
 global $wp_query;
 ...
 echo "Sorting by: " . $wp_query->get('sort');
?>
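
Fetching the value is only half the job, of course. As a quick illustration of where you might go from there, here's a minimal sketch of applying the sort value to the main search query via WordPress's pre_get_posts action. This part isn't from my actual setup; the function name and the date-only handling are placeholders:

function apply_custom_sort($query) {
  // Only touch the main query, and only on search pages
  if ($query->is_search() && $query->is_main_query()) {
    if ($query->get('sort') == 'date') {
      $query->set('orderby', 'date');
      $query->set('order', 'DESC');
    }
  }
}
add_action('pre_get_posts', 'apply_custom_sort');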

Apache: Denying files based on a referring URL

For another client, I needed to reject all requests for images coming from a misbehaving site (it was hotlinking our images). Apache Rewrite Rules to the rescue, of course.

The rule set looks like so:

RewriteEngine On
RewriteCond   %{HTTP_REFERER} foo.com                 [NC]
RewriteRule   (.*[.](jpg|gif|png)) /empty?t=$1        [L,NC,R]

I created a 0 byte file named empty. I suppose I could have gotten fancier and made an image with some clever text on it ("Stop hotlinking these images, please"), but simpler was better.

Note that I added the file being requested as a query parameter on the /empty URL. I did this so I could easily eyeball which files were being denied in the Apache access_log. Rather than seeing a whole bunch of identical requests to /empty, I see requests like /empty?t=logo.gif.
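
To sanity check a rule like this, curl can fake the Referer header. A minimal sketch (the hostname and image path here are placeholders, not the client's actual URLs):

  curl -s -o /dev/null -e "http://foo.com/some-page" \
       -w "%{http_code} %{redirect_url}\n" \
       http://somesite.com/images/logo.gif
  # A match should print a 302 pointing at /empty?t=...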

S3: A Belt & Suspenders Approach To File Verification

I love using s3cmd in shell scripts. It lets me trivially push files to Amazon's S3, and it has a slick md5 checking feature that notifies me if a file somehow fails to make it to S3 intact. While that's useful feedback for debugging, I found it hard to script against. For example, I was disappointed to learn that s3cmd always exits with a return code of 0 ($? = 0), even if a file fails to get pushed.

In this particular client's case, it was absolutely essential that every file make it to S3 every time. My solution ended up looking like so:

  s3cmd put somefile.any  s3://bucketname.aws.amazon.com/somefile.any           # put
  s3cmd get --force s3://bucketname.aws.amazon.com/somefile.any /tmp/s3.pushed  # pull it back down
  cmp -s somefile.any /tmp/s3.pushed                                            # compare, byte for byte
  if [ $? = 0 ] ; then
    # Take any action that needs to happen when we're 100% sure the file made it to S3
    rm somefile.any
  fi

Sure, this is slow and chews up bandwidth. But local files only get deleted once the script has confirmed they made it to Amazon and can be pulled back down successfully. That peace of mind is well worth the extra bytes.
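
If you've got more than a couple of files to push, the pattern wraps up nicely in a shell function. A minimal sketch of how I'd generalize it (the bucket URL matches the example above; the function name and warning message are just placeholders):

  push_verified() {
    f="$1"
    s3cmd put "$f" "s3://bucketname.aws.amazon.com/$f"                    # put
    s3cmd get --force "s3://bucketname.aws.amazon.com/$f" /tmp/s3.pushed  # pull it back down
    if cmp -s "$f" /tmp/s3.pushed ; then                                  # compare, byte for byte
      rm "$f"
    else
      echo "WARNING: $f failed verification, keeping local copy" >&2
    fi
  }

  push_verified somefile.any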

Hope these recipes help. Have one you'd like to share from your day?
