Yesterday the DMV was hit by a "major snow storm" that shut down the area. As the new day began I was curious what conditions were like around the area. But how to know? Turn on the news and trust the Main Stream Media? Don't think so.
No, I wanted to get a firsthand view of the situation, and I wanted to do it while sipping hot tea in my PJs. So it was off to the Internet!
TrafficLand has a DC area map that allows you to view the area traffic cameras. It's this kind of data I was after. But manually selecting each camera feed is just so tedious. I'm on Linux baby, surely I can do better than hunting and pecking on a Google Map.
The first thing to note is that if you right-click on any traffic cam, you can access an image URL that looks like this guy:
Assuming you have the feh image viewer (sudo yum install feh), you can grab and view this image like so:
curl -s 'http://ie.trafficland.com/660/full?system=trafficland-www&pubtoken=defa0f069743edb951716d02aa17111bf26&cache=56104' | feh -. -
That's already an improvement over the Google Maps UI, but we can do even better. Next up, if you've got ImageMagick installed (sudo yum install ImageMagick), then you've got the montage command at your disposal. montage combines images into a sort of old-school contact sheet. So if we can grab multiple camera images, we can trivially combine them into a single image.
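You can try montage without any camera data at all. Here's a minimal sketch, assuming ImageMagick's convert and montage are on your PATH; the filenames are made up for illustration:

```shell
# Create two solid-color placeholder images standing in for camera grabs.
convert -size 200x150 xc:gray  cam1.jpg
convert -size 200x150 xc:black cam2.jpg

# Combine them into one contact-sheet image. The '400x400>' geometry
# means "shrink any tile larger than 400x400, but never enlarge."
montage cam1.jpg cam2.jpg -geometry '400x400>' combined.jpg
```

Swap in real camera grabs and you've got the core of the script below.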
Another item worth noticing is that the above URL contains a parameter named pubtoken. That value almost certainly expires after a few minutes, meaning that if you hard code the above URL in a script, it's going to stop working after some time.
So what's the right way to get that camera URL with a fresh pubtoken? Well, it's not particularly obvious. But by examining the web traffic under Firebug, it's also not that hard to figure out. At the end of the day, the page in your browser needs a valid URL, and your script is simulating the web browser, so it needs to simulate this step as well.
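In script form, the two-step fetch looks roughly like this. Camera ID 930 is one of the IDs used in the full script below, and the endpoint is the one that script hits; the actual download steps are left commented out here since they only work against the live site:

```shell
# Step 1 builds the request for a fresh image URL. The cache parameter
# is just a random cache-buster.
cache=$RANDOM
cam=930
feed_url="http://www.trafficland.com/video_feeds/url/$cam?cache=$cache"

# url=$(curl -s "$feed_url")       # fetch the short-lived image URL (fresh pubtoken)
# curl -s "$url" > /tmp/$cam.jpg   # step 2: grab the image itself

echo "$feed_url"
```

Each time you run this, the returned URL carries a fresh pubtoken, so nothing expires out from under you.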
With those kinks worked out, I ended up with the script printed below. Now when I type trafficcams, I get an image like the following to pop up:
That's a montage of 6 cameras that interest me. I came up with the camera IDs (see below) by examining the URL mentioned above.
I suppose one could ask whether what I'm doing is appropriate from an ethical standpoint. That is, is it OK to lift image data off TrafficLand without using their UI? I'm far from qualified to answer this question, but I will suggest this: we're used to thinking of web browsers as being Internet Explorer, Chrome or Firefox. But the reality is, a web browser is any tool that allows you to, well, browse the web. If you were blind, you'd expect your web browsing experience to be audio based. If you're a Linux user, and lazy, then why not have your web browsing experience cut right to the chase? The agreement has always been that a web browser requests content from a site, the site returns it, and it's the browser's job to render that content. I've just decided that I'd like to render the TrafficLand site a little differently than most users.
It certainly wouldn't be appropriate to grab the image data from TrafficLand and sell it, or utilize it in a project without their permission.
I mention all this because I think it's easy to get stuck thinking of the web browsing experience as something like watching TV. All TVs show the same rendering (though quality obviously differs) of the same content. But on the web, we're not limited to that narrow mindset. Make your tools work for you!
OK, enough chatter. Here's that script:
#!/bin/bash

cache=$RANDOM
base_dir=/tmp/tc.$$
cams="401807 200003 930 401770 200113 640"

mkdir -p $base_dir

# Grab a fresh image URL (with a valid pubtoken) for each camera,
# then download the image itself.
for c in $cams ; do
  url=`curl -s "http://www.trafficland.com/video_feeds/url/$c?cache=$cache"`
  echo -n "Grabbing: $c ... " > /dev/stderr
  curl -s "$url" > $base_dir/$c.jpg
  echo "Done" > /dev/stderr
done

# Pass "-" as the first argument to write the montage to stdout
# instead of popping up a feh window.
dest=$1 ; shift
if [ "$dest" = "-" ] ; then
  output="cat -"
else
  output="feh -. -B black - "
fi

montage $base_dir/*.jpg -geometry '400x400>' - | $output

rm -fr $base_dir