Tuesday, March 31, 2015

Teddy Bear Beach, A Screaming Tree and Other Features of an Outdoor Office

Between my post on Microadventures and the need to tackle some mainly cerebral work, I couldn't resist stepping away from my computer for a little outdoor office time. Below are some photos I snapped while I did some strolling (and thinking) around Theodore Roosevelt Island.

Man, I wouldn't last half a day in a traditional office.

Monday, March 30, 2015

The Why and How of Microadventures

I've got a working hypothesis that there are four different forms of travel, each deserving our time and attention. First off, you've got international travel; that one is fairly obvious. Then you've got national travel, where you see amazing sights yet can skip the customs lines and (most of the time) speak English. Then you've got outdoor travel, which is all about appreciating nature with a healthy dose of self-sufficiency. And finally there's community travel, where you try to look at your local surroundings with a fresh eye.

It's this last form of travel that has me gleefully blogging when I find an interesting statue, fruit, park or whatever the heck this thing is. It's about noticing your environment and thinking like an explorer, even when you're just out on an afternoon walk.

Turns out, I'm not alone when it comes to appreciating these tiny but wonderful adventures. Exploriment has a nice review of Microadventures: Local Discoveries for Great Escapes. I love the suggestions that Exploriment highlights:

Do you commute via train? Why not get off a stop (or a few) early and walk the rest of the way. Seen a wooded area along the way? Why not get off and explore it in the handful of hours of daylight you have. Is there a river near you? Why not swim out to that island in the middle and camp out overnight. Do you have a friend driving somewhere? Bring your bike, have them drop you off somewhere and make your way home from there. Are there bus routes in your city? Take one out to the end of a route, somewhere you’ve never been, and walk home.

The author of Microadventures, Alastair Humphreys, has a number of excellent resources for creating your own microadventures. To pack in the most adventure, Humphreys suggests a recipe similar to the biking community's S24O. So, he's all about incorporating an overnight. You'll want to check out his planning tips, reading list, gear list (with all of 10 items on it) and a challenge to get you started. For some armchair microadventure reading, check out Humphreys' walking lap of the M25, the adventure that started it all.

Man, all this talk of adventures. Doesn't it make you want to turn off your computer, walk out your front door and discover something amazing?

Friday, March 27, 2015

The 3 Dev Tools That Made My Week: mysqlreport, WP SQL Executioner, Chaosreader

Here are three tools that saved my butt during this last week. The least I can do is heap a little praise on them.

mysqlreport

I've written about mysqlreport before. And everything I've said about it still stands: it's the most efficient tool I've found to gather and present performance data on a MySQL database. Unfortunately, the project behind mysqlreport has been retired. This means, among other things, that the 1-line install I've used previously for setting up mysqlreport no longer works.

Fortunately, mysqlreport and its exceedingly helpful guide have been archived over at GitHub. The command line install, therefore, still works. You just need to do:

 wget https://raw.githubusercontent.com/daniel-nichter/hackmysql.com/master/mysqlreport/mysqlreport

to grab the latest (and apparently final) version of mysqlreport. From there, you can start debugging away.

WordPress SQL Executioner

Like mysqlreport, I've blogged about this plugin before (nearly 4 years ago). Last night I was working on a GoDaddy managed WordPress blog and found that I needed to tweak a bunch of posts. I had imported the posts using the very slick WP All Import plugin, but I needed to take the posts from being drafts to being scheduled, and I needed to set their post dates. It was far too painful to consider manually updating each post. Normally I'd craft a MySQL or PHP solution by manually connecting up to the database and just "fixing" these posts. Unfortunately, because this is a managed environment, I don't have access to the underlying database.

Fortunately, GoDaddy allowed me to install WP SQL Executioner, in all its ridiculously unsafe glory. I then kicked off an SQL statement like so:

  UPDATE $posts SET post_date = ..., post_status = 'future' WHERE id IN (...)

And all the relevant posts became scheduled. When I realized the blog was set up to be in the UTC timezone and I needed the posts to be in Eastern time, I simply ran the following:

  UPDATE $posts SET post_date_gmt = post_date + INTERVAL 4 HOUR WHERE id in (...)

To make sure WordPress fully absorbed these behind-the-scenes changes, I went into the wp-admin UI and bulk edited all the posts. I simply set a category on them that they all already had. The result was properly scheduled posts.
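As a side note, that +4 hour interval can be sanity-checked from the command line with GNU date (a quick check with a made-up post date; note that late March is technically EDT, UTC-4, which is why +4 rather than +5 was the right shift):

```shell
# Confirm that a 9:00 AM Eastern post time corresponds to 13:00 UTC.
# GNU date knows the common US timezone abbreviations, so it can do
# the conversion for us.
local_date="2015-03-27 09:00:00"
gmt_date=$(date -u -d "$local_date EDT" '+%Y-%m-%d %H:%M:%S')
echo "$gmt_date"   # → 2015-03-27 13:00:00
```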

Note to self: I've got to find (or write) a similar plugin that runs arbitrary PHP code. Yeah, how could that cause problems?

Chaosreader

While I've never blogged directly about tcpdump, I've made passing mention of it. It's one of those low-level, super-powerful tools for debugging all things network related. The problem I have with it is that it tends to give me *too* much information. Enter Chaosreader, a tool I learned about from this post. Chaosreader slurps in a raw tcpdump file and delivers a neatly organized breakdown of the high and low level traffic.

I recently used it to understand how the Flash Media Server's Admin Console functions. Because the Admin Console is written in Flash, a number of my normal techniques for debugging weren't possible (read: Firebug and Fiddler). Tcpdump, on the other hand, runs on the server and couldn't care less about where the connections are coming from. So I generated a massive tcpdump file, fed it to Chaosreader and worked out from there which FMS Admin API calls were being invoked. Turns out getUserStats returns the stats for individual client connections, not FMS users, as I'd assumed the name implied.

Bottom line, for seeing both the forest and the trees, at least when it comes to TCP network traffic, you need Chaosreader.

Thursday, March 26, 2015

The Origin of Arlington's (Future) Black Rabbit Population

In Arlington we have a population of Black Squirrels which shouldn't really be here. We also have a large population of bunny rabbits in the area. So imagine my surprise when I looked out my window and saw this guy:

At first I wasn't sure what I was looking at: a raccoon (I saw one during the day over the weekend)? A black cat? Some sort of mutant mini kangaroo? (Oooh, how I hoped it was the latter.) Nope, it was a black rabbit.

Upon further inspection, I realized it wasn't a wild rabbit, but the pet rabbit from one of the town homes that border our back property line. I could see the cage it had been let out of.

So yeah, this isn't some remarkable wildlife sighting. But when you start seeing little black baby bunny rabbits running around, I do believe we'll have our explanation of where they came from. 'Cause if rabbits have a reputation for one thing...

Wednesday, March 25, 2015

csveach: Simple, Yet Powerful CSV File Processing On The Command Line

Ahhh yes, the CSV format: it looks so easy to process. Yet an embedded newline here, a stray quote there, and it becomes an awful pain to work with. To that end, I give you csveach, a small Perl script that attempts to make working with CSV files on the command line headache-free.
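To see just how little it takes to trip up the usual tools, consider this throwaway demonstration (a scratch file in /tmp):

```shell
# One quoted CSV field containing a newline...
printf 'Alice,"line one\nline two"\n' > /tmp/csv-pain.csv

# ...and line-oriented tools now see two "lines" in a single CSV record.
wc -l < /tmp/csv-pain.csv   # → 2
```

csveach exists so you never have to special-case records like that.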

Here's how it works. Suppose you've got some data:

# file: foo.csv
Alice,555-1212,"To Be,
Or not to Be,
That is the question!"
Bob,546-7777,"Better to keep your mouth shut and be thought a fool,
then to open it and remove all doubt."
Charlie,930-9808,"Faster.
Just run Faster."

(Note the annoying embedded newlines)

First, you craft a short shell script that's going to get executed for each row in the CSV file. Each column is handed to the script as a parameter. $1 is the row number in the CSV file, $2 is the first column, $3 the second, and so on. Here's a trivial shell script that would extract just the name and phone number from the above:

# file: name.sh
#!/bin/bash

echo "$2: $3"

Finally, you run:

  csveach ./name.sh foo.csv

Which gives the output:

Alice: 555-1212
Bob: 546-7777
Charlie: 930-9808

The shell script can just as easily work with the newline embedded text. For example:

# file: quote.sh
#!/bin/bash

quote=`echo "$4" | tr '\n' '|'`
len=`echo $quote  | wc -c`
echo "$2's $len words of wisdom: $quote"

This can be run as:

 csveach ./quote.sh foo.csv

and gives the output:

Alice's 44 words of wisdom: To Be,|Or not to Be,|That is the question!|
Bob's 93 words of wisdom: Better to keep your mouth shut and be thought a fool,|then to open it and remove all doubt.|
Charlie's 26 words of wisdom: Faster.|Just run Faster.|

By letting a shell script (or really any executable) do the heavy lifting, it's possible to reformat or process CSV data any way you want. And best of all, you can ignore all those exceptional cases that make CSV such a pain.

Enjoy!

#!/usr/bin/perl

##
## For each row of a CSV file kick off a system command
##

use strict;
use warnings;
use Text::CSV;

my $script = shift(@ARGV);

my $csv = Text::CSV->new ({ binary => 1, eol => $/ });
my $row_num = 0;

foreach my $file (@ARGV) {
  open my $io, "<", $file or die "$file: $!";

  while (my $row = $csv->getline ($io)) {
    $row_num++;
    my @args = @$row;
    unshift(@args, $row_num);
    (system $script, @args) == 0 or die "system $script @args failed: $?";
  }
}
  }
}

Tuesday, March 24, 2015

gdget: Inching Towards a more Linux Friendly Google Drive Solution

My default document strategy is to stash everything in Google Drive. If a client sends me a Word Doc, it's getting uploaded and converted to a Google Doc so we can both discuss it as well as have a reliable permanent record. This especially makes sense on Windows, where I gravitate towards browser friendly solutions. On Linux, however, I'd prefer an emacs / command line friendly solution; and while Google Docs and Google Drive are accessible from Linux, the situation isn't exactly ideal.

What I truly want is ange-ftp for Google Docs. If you've never used emacs's remote file editing capability, you're missing out. It's like magic to pull in and edit a file on some random Linux box halfway across the world.

But I digress. While full editing support of Google Docs from within emacs would be ideal, a much simpler solution would suffice: if I could export a snapshot of various documents from Google Drive and store them locally, I could use emacs and other command line tools (I'm looking at you, grep) to quickly access them. In other words, I'd gladly trade the browser and rich formatting for a simple, yet occasionally out of date, text file. And for the times when I need rich formatting, I'm glad to access Google Drive using a browser.
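The payoff of that trade is that ordinary tools just work on the snapshots. A toy demonstration, with made-up files in /tmp standing in for exported docs:

```shell
# Pretend these are text snapshots pulled down from Google Drive
mkdir -p /tmp/gdocs-demo
printf 'Project Foo status: on track\n' > /tmp/gdocs-demo/FooStatus.txt
printf 'Grocery list: milk, eggs\n'     > /tmp/gdocs-demo/Groceries.txt

# grep neither knows nor cares that these files started life in Drive
grep -l 'status' /tmp/gdocs-demo/*.txt   # → /tmp/gdocs-demo/FooStatus.txt
```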

The good news is that the Google Drive API offers a download option which does this conversion to basic text. I just needed a tool that would execute this API. Google CL was a very promising initial solution. I was easily able to build a wrapper script around this command to pull down a subset of docs and store them locally. The big catch I found was that Google CL's title matching algorithm meant that attempts to download a specific file pulled down more than I wanted (trying to download "Foo" also pulled down "Foo Status" and "Foo Report"). More than that, it only worked with Google Docs; spreadsheets and other resources weren't accessible.

So I decided to try my hand at writing my own little client. The Google Drive API is actually quite HTTP GET friendly. The only catch (as usual) is authentication. Gone are the days when I could even consider being sloppy and putting my username and password in a shell script (I suppose a Thank You is in order for this). I was going to have to go the route of OAuth.

I followed the instructions for an installed app (including setting up an app in my Google Developer's console) and while they initially looked daunting, everything came together without issue. I ended up hacking together 3 scripts:

  • gdauth - This script sets up the initial Google OAuth authentication. It also supports providing the access_token as needed.
  • gdget - This is a very thin wrapper around curl. It essentially adds the correct Authorization header to an arbitrary request and lets curl do the rest of the work.
  • gdocs - A simple shell script for pulling down various docs and sheets and storing them locally. It works by associating various names (Foo) with document IDs (which are visible while editing a document within Google Docs).

gdauth and gdget both take a -c flag which sets a context. The context allows you to access multiple Google accounts. For example, you may authenticate with your work account using -c work or your personal account as -c personal. This way you can access various Google Drive accounts with a minimum of hassle.

You can use the low level tools like so:

$ gdauth -c blog init
https://accounts.google.com/ServiceLogin?....
Code? [enter code shown after visiting the above URL]
Done

$ gdget -c blog 'https://docs.google.com/a/ideas2executables.com/spreadsheets/d/1zjkVhrv-f0nvSOFNGuyEPJQSlSp-fXV66DYUFNJxGv8/export?exportFormat=csv' | head -4
,,,,
,#,Use,Reference,HTML
,1,for writing,,<li>for writing</li>
,2,as a straw,,<li>as a straw</li>
,3,"as a toy ""telescope"" for kids",,"<li>as a toy ""telescope"" for kids</li>"

For regular usage, I invoke gdocs pull from cron every few hours and I'm good to go.
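For reference, the cron side of this is a single crontab entry along these lines (the install path for gdocs is hypothetical; adjust to wherever you keep your scripts):

```
# m h dom mon dow  command
0 */3 * * *  $HOME/bin/gdocs pull
```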

Note that you're not limited to using the above tools to only download snapshots of files. You can access any part of the Google Drive REST API. For example:

 gdget -c blog 'https://www.googleapis.com/drive/v2/about'
 gdget -c blog 'https://www.googleapis.com/drive/v2/changes'
 gdget -c blog 'https://www.googleapis.com/drive/v2/files/0B53sMu55xO1GN3ZObG1HdzhXdXM/children'

Below are all three scripts. Hopefully you'll find them inspirational and educational. Better yet, hopefully someone will point me to an emacs or command line solution that makes this script look like the toy that it is. For now though, it's one heck of a useful toy.

# ------------------------------------------------------------------------
# gdauth
# ------------------------------------------------------------------------
#!/bin/bash

##
## Authenticate with Google Drive
##
USAGE="`basename $0` [-h] [-c context] {init|token}"
CTX_DIR=$HOME/.gdauth
CLIENT_ID=__GET_FROM_API_CONSOLE__
CLIENT_SECRET=__GET_FROM_API_CONSOLE__

ctx=default

function usage {
  echo "Usage: `basename $0` [-h] [-c context] {init|token}"
  exit
}

function age {
  modified=`stat -c %Y $1`   # %Y = mtime; %X (last access) would reset on every read
  now=`date +%s`
  expr $now - $modified
}

function refresh {
  refresh_token=`cat $CTX_DIR/$ctx.refresh_token`
  curl -si \
       -d client_id=$CLIENT_ID \
       -d client_secret=$CLIENT_SECRET \
       -d refresh_token=$refresh_token \
       -d grant_type=refresh_token \
       https://www.googleapis.com/oauth2/v3/token > $CTX_DIR/$ctx.refresh
  grep access_token $CTX_DIR/$ctx.refresh | sed -e 's/.*: "//' -e 's/",//' > $CTX_DIR/$ctx.access_token
}

while getopts :hc: opt ; do
  case $opt in
    c) ctx=$OPTARG ;;
    h) usage ;;
  esac
done
shift $(($OPTIND - 1))

cmd=$1 ; shift

mkdir -p $CTX_DIR
case $cmd in
  init)
    url=`curl -gsi \
         -d scope=https://www.googleapis.com/auth/drive \
         -d redirect_uri=urn:ietf:wg:oauth:2.0:oob \
         -d response_type=code \
         -d client_id=$CLIENT_ID\
         https://accounts.google.com/o/oauth2/auth | \
      grep Location: | \
      sed 's/Location: //'`
    echo $url | xclip -in -selection clipboard
    echo $url
    echo -n "Code? "
    read code
    curl -s \
         -d client_id=$CLIENT_ID \
         -d client_secret=$CLIENT_SECRET \
         -d code=$code \
         -d grant_type=authorization_code \
         -d redirect_uri=urn:ietf:wg:oauth:2.0:oob \
         https://www.googleapis.com/oauth2/v3/token > $CTX_DIR/$ctx.init
    grep access_token $CTX_DIR/$ctx.init | sed -e 's/.*: "//' -e 's/",//' > $CTX_DIR/$ctx.access_token
    grep refresh_token $CTX_DIR/$ctx.init | sed -e 's/.*: "//' -e 's/"//' > $CTX_DIR/$ctx.refresh_token
    echo "Done"
    ;;
  token)
    if [ ! -f $CTX_DIR/$ctx.access_token ] ; then
      echo "Unknown context: $ctx. Try init'ing first."
      exit
    fi
    age=`age $CTX_DIR/$ctx.access_token`
    if [ $age -gt 3600 ] ; then
      refresh
    fi
    cat $CTX_DIR/$ctx.access_token
    ;;
  *)
    usage
esac

# ------------------------------------------------------------------------
# gdget
# ------------------------------------------------------------------------

#!/bin/bash

##
## Run a GET request against an authorized Google
## URL.
##
CTX_DIR=$HOME/.gdauth
ctx=default

function usage {
  echo "Usage: `basename $0` [-c ctx] [-h] url"
  exit
}

while getopts :hc: opt ; do
  case $opt in
    c) ctx=$OPTARG ;;
    h) usage ;;
  esac
done
shift $(($OPTIND - 1))

if [ ! -f $CTX_DIR/$ctx.access_token ] ; then
  echo "Unknown context: $ctx. Try init'ing first."
  exit
fi

token=`gdauth -c $ctx token`

curl -s -H "Authorization: Bearer $token" $*
     
# ------------------------------------------------------------------------
# gdocs
# ------------------------------------------------------------------------
#!/bin/bash

##
## A tool for experimenting with Google Docs
##

GDOCS=$HOME/gdocs

function grab_all {
  url=$1    ; shift
  fmt=$1    ; shift
  for t in $* ; do
    name=`echo $t | cut -d: -f1`
    docid=`echo $t | cut -d: -f2`
    echo "$name.$fmt"
    u=`printf $url $docid $fmt`
    gdget -c i2x $u > $GDOCS/$name.$fmt
  done
}

##
## docs and sheets have the format:
##  LocalFileName:Docid
## Ex:
##  StatusReport:14SIesx827XPU4gF09zxRs9CJF3yz4bJRzWXu208266WPiUQyw
##  ...
##

docs=" ... "
sheets=" ... "

commands='pull'
cmd=$1 ; shift

case $cmd in
  pull)
    grab_all "https://docs.google.com/feeds/download/documents/export/Export?id=%s&exportFormat=%s" txt $docs
    grab_all "https://docs.google.com/a/ideas2executables.com/spreadsheets/d/%s/export?exportFormat=%s" csv $sheets
    ;;
  *)
    echo "Usage: `basename $0` {$commands}"
    exit
    ;;
esac

Monday, March 23, 2015

Close by, and blazingly fast - OpenWRT helps construct the WiFi infrastructure I should have had in the first place

While I was psyched to install OpenWRT on a TP-Link TL-WA850RE and produce the cutest / James-Bondiest device ever, I admit that it didn't have much practical value. I bought this device with the intention of boosting the WiFi signal emanating from the access point in our basement, but that didn't seem to work. WiFi speeds were still unimpressive even after configuring the TP-Link.

Then, over the weekend, it hit me: to sidestep the performance issue, I'm using a wired Ethernet drop for my work computers upstairs. What if I plugged the TP-Link device into my wired network, and then had it offer up an access point of its own? If I did this, laptops upstairs would benefit from having a close proximity WiFi access point, yet the connection to the router in the basement would be over wires.

After some research, I realized that the desired configuration was pretty close to the stock configuration of OpenWRT. The key difference is that the default version of OpenWRT expects the outgoing network to be the wireless network (wan), not the wired network (lan). It took a bit of fiddling, but here's the working configuration:

# /etc/config/network
config interface 'loopback'
        option ifname 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

# The "real" network
config interface 'lan'
        option ifname 'eth0'
        option force_link '1'
        option proto 'static'
        option ipaddr '192.168.1.75'
        option netmask '255.255.255.0'
        option dns '192.168.1.1'
        option gateway '192.168.1.1'


# The close proximity wireless access point 
config interface 'wan'
        option proto 'static'
        option ipaddr '10.0.0.1'
        option netmask '255.255.255.0'

# /etc/config/wireless
config wifi-device 'radio0'
      option type 'mac80211'
      option channel '5'
      option hwmode '11g'
      option path 'platform/ar934x_wmac'
      option htmode 'HT40-'
      option disabled '0'
      option noscan '1'
      option txpower '20'

config wifi-iface
      option device 'radio0'
      option network 'wan'
      option mode 'ap'
      option ssid 'Pipsqueak'
      option encryption 'psk2'
      option key 'XXXXXXXXXXXXXXX'

# /etc/config/firewall
# ...
config zone
      option name             lan
      list   network          'lan'
      option input            ACCEPT
      option output           ACCEPT
      option forward          ACCEPT
      option masq             1       # CRITICAL

config zone
      option name             wan
      list   network          'wan'
      list   network          'wan6'
      option input            ACCEPT
      option output           ACCEPT
      option forward          REJECT

# Reversed from default config
config forwarding
      option src            wan
      option dest           lan
# ....

# /etc/config/dhcp
config dnsmasq
        option domainneeded '1'
        option boguspriv '1'
        option filterwin2k '0'
        option localise_queries '1'
        option rebind_protection '1'
        option rebind_localhost '1'
        option local '/lan/'
        option domain 'lan'
        option expandhosts '1'
        option nonegcache '0'
        option authoritative '1'
        option readethers '1'
        option leasefile '/tmp/dhcp.leases'
        option resolvfile '/tmp/resolv.conf.auto'

config dhcp 'lan'
        option ignore '1'

config dhcp 'wan'
        option interface 'wan'
        option 'start' '50'
        option 'limit' '200'
        option 'leasetime' '1h'
        option ignore '0'

A configuration similar to the one above worked; however, initial WiFi performance was nothing particularly special. Thanks to various forum discussions I ended up tweaking txpower, noscan and a few other parameters before I got this impressive speed test:

Compared to the 4~5Mb/s I was getting, this is absolutely huge. This is just about the same performance I've been seeing over a wired connection.

I've got to say, I'm amazed. Who would have thought that I'd actually put this OpenWRT device to work? But no doubt about it, it's providing real value.

Thursday, March 19, 2015

The "Funny, It Doesn't Look Like a Linux Server", Server

Check out the cutest addition to our Linux family:

What you're looking at is a TP-Link TL-WA850RE WiFi range extender. A while back, I was having WiFi woes, so I picked up this $30 WiFi extender from Amazon. Turns out, the extender didn't help matters much, so I decided to put it to use in another way.

I installed OpenWRT on the device. OpenWRT is a Linux distribution designed for routers and the like, and it caught my eye because it had confirmed support for this particular device. Installing OpenWRT was almost too easy. I grabbed the .bin file (it was in the ar71xx » generic subdirectory) and used the upload firmware option available in the built-in web-based UI.

In just a few minutes I turned this hunk of white plastic into a Linux box, which, well, did nothing. Through some small miracle, I was able to hook it up to a cable and telnet to it.

The first order of business was to configure this device as a WiFi client (or station) rather than the default configuration of being an access point. My hope was that once the device was in client mode, I could plug it into the wall in a random spot in our house; it would then boot up and I'd be able to telnet/ssh to it.

I found this and this article handy in setting up client mode on the device. However, it was ultimately this bit of advice that made all the difference:

If the target network uses the 192.168.1.0/24 subnet, you must change the default LAN IP address to a different subnet, e.g. 192.168.2.1 . You can determine the assigned WAN address with the following command: ...

I had wanted to set up the lan (wired side) of the device with a static IP and the wan (WiFi side) with a DHCP-assigned IP. It wasn't obvious, but attempting to have both the static IP and the dynamic IP on the same network caused it to fail. The static IP would be set, but the WiFi side would never be properly configured. Here's the configuration that ended up working for me:

# /etc/config/wireless
config wifi-device 'radio0'
        option type 'mac80211'
        option channel 'auto'
        option hwmode '11g'
        option path 'platform/ar934x_wmac'
        option htmode 'HT20'
        option disabled '0'

config wifi-iface
        option device 'radio0'
        option network 'wan'
        option mode 'sta'
        option ssid 'SSID_TO_CONNECT_TO_GOES_HERE'  # [1]
        option encryption 'wep'
        option key 'PASSWORD_GOES_HERE_SEE_BELOW'   # [2]


# /etc/config/network
config interface 'loopback'
        option ifname 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

config interface 'lan'
        option ifname 'eth0'
        option force_link '1'
        option proto 'static'
        option ipaddr '192.168.2.75'     # [3]
        option netmask '255.255.255.0'

# /etc/config/firewall

# ... trimmed ...
config zone
        option name             wan
        list   network          'wan'
        list   network          'wan6'
        option input            ACCEPT  # [4]
        option output           ACCEPT
        option forward          REJECT
# ... trimmed ...

Some notes from above:

[1] - This is where you specify your router's SSID to connect up with.
[2] - For WEP encryption I entered a hex value here, versus text. I used this site to do the conversion.
[3] - This was key: my router will give out a 192.168.1.x IP, so this device needs to be off that network.
[4] - Once I got everything set up, I was getting a connection refused message when trying to telnet to the server. The wan firewall needed to be changed to allow access.
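Note [3] is the one that bit me, so here's a toy shell check of the clash (assuming /24 masks on both interfaces, as in the configs above; the .142 address is a made-up DHCP lease):

```shell
# With /24 masks, two IPv4 addresses collide when their first three octets match.
same_24 () {
  [ "${1%.*}" = "${2%.*}" ]
}

# The router leases 192.168.1.x, so a static lan IP of 192.168.1.75 clashes...
same_24 192.168.1.75 192.168.1.142 && echo "same /24: conflict"

# ...while moving the static side to 192.168.2.75 avoids the overlap.
same_24 192.168.2.75 192.168.1.142 || echo "different /24: OK"
```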

Once this all got hashed out, I was able to plug the device into a random spot on the wall and telnet to it. Success! And yet, where do I go from here?

Obviously this is useful for educational purposes. I've already had to brush up on my basic networking skills to get this far, and there's plenty more to learn. Heck, you could use this $30.00 router to learn about life on the command line and generally how to be a Unix geek.

OpenWRT, however, is more than just a learning platform. There's a large number of software packages available, and they can be installed using opkg with ease. Turning this bad boy into a web server or the like should be easy enough. I was even able to install a version of Scheme by grabbing an older sigscheme package:

root@pipsqueak:/# opkg install http://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/sigscheme_0.8.3-2_ar71xx.ipk
Downloading http://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/sigscheme_0.8.3-2_ar71xx.ipk.
Installing sigscheme (0.8.3-2) to root...
Configuring sigscheme.
root@pipsqueak:/# sscm 
sscm> (map (lambda (x) (* x 9)) '( 1 2 3 4 5 6))
(9 18 27 36 45 54)
sscm> (exit)

Ultimately, what will make this useful is if I can find an application for the device that leverages its near invisible profile and dirt cheap price. If I were in the security business, or a nerd-action-novel writer, then the uses would be pretty obvious. Walk in, plug in device, walk out. And bam! You've got a server that can try to worm its way onto the network. But for myself, I'm going to have to think a little more on this. Perhaps the device should live in my car? Or maybe it'll be useful in a hotel room? Not sure, but the technology is just too cool to ignore.

Tuesday, March 17, 2015

More Miva Merchant for Geeks: Pulling in an admin controlled template to custom MivaScript code

I've got to say, I'm thoroughly enjoying my snippet approach to Miva, whereby I can embed MivaScript code in arbitrary Miva Merchant templates. To recap, I can put code similar to the following in a Miva template:

    <mvt:item name="ads-extfile"
              param="function|/mm5/snippets/seo/title.mvc"/> 

and the MivaScript code in /mm5/snippets/seo/title.mv gets executed and the output included on the page. I can trivially maintain these snippet files using emacs, and track them in subversion.

Recently I found myself wanting to push this setup further. Specifically, I'd like to allow my customers to customize parts of the snippet content. This will make far more sense with an example.

Suppose I've got the attribute packaging type, and I want to allow shoppers to choose between Stress Free and Stress Inducing. Miva Merchant allows me to create such an attribute, but doesn't provide an easy way to include instructions for the attribute. I've got a snippet that generates the HTML for the attribute in question. What's the best way to get customized text inserted before the radio buttons associated with the attribute?

The easiest solution is to ask my client for the text and hard code it in the snippet. But this is asking for trouble, as no doubt this text will need to get tweaked over time. Furthermore, if I had a general purpose method of pulling in customized text, I'd be able to find all sorts of uses for this capability.

Not quite sure which approach to take, I created a new page in Miva with the code Packaging-Type-Info. For the page's content I put some placeholder text ("Sending a gift to your Mom? Choose Stress Free. Sending a gift to a frenemy? Select Stress Inducing!").

I confirmed that I could pick up that text by visiting:

  http://mysite.com/mm5/merchant.mvc?Screen=Packaging-Type-Info

One option, then, is to embed an invocation of MvCALL in my snippet with the above URL. That's terribly inefficient though, as it means every call to my snippet will result in an additional HTTP request. I kept digging around. I finally caught a break when I realized that the template was being written out to the following .mvc file:

  /mm5/5.00/templates/s01/packaging-type-info.mvc

Now all I needed to do was to read in that .mvc file and hopefully include the contents of the template in my snippet.

My first attempt was to add the following to my snippet:

  <MvDO FILE="/mm5/5.00/templates/s01/packaging-preference-info.mvc" />

While that didn't generate an error, it also didn't include the text within the template. I looked all over the MivaScript documentation for a function that would pull in my template contents. I could find none. Luckily, I stumbled onto the docs for miva_template_compile, which actually cleared the whole situation up. Specifically, the docs contain this tidbit:

Compiled templates contain a single function Template_Render(). Execute the template by calling that function, not by executing the compile template .mvc file directly

In other words, there's no function to pull in a template because templates are self executing. Including my template file is as simple as saying:

  
 <MvASSIGN NAME="l.path" VALUE="/mm5/5.00/templates/s01/packaging-preference-info.mvc"/>
 <MvEVAL EXPR="{ [ l.path ].Template_Render(l.page, l.settings) }"/>

l.page is unset, and l.settings is passed through from the settings variable provided to the snippet.

The above is awfully close to what I want, but it still hard codes the path to the particular template. Furthermore, the snippet I'm working on is used to render all option based templates, not just the packaging one that has instructions. My final code, therefore, looks like so:

  <MvASSIGN NAME="l.info_template" VALUE="{ g.store_template_path $ tolower(l.attr:code) $ '-info.mvc' }"/>
  <MvIF EXPR="{ sexists(l.info_template) EQ 1 }">
    <div class='att-info leading'>
      <MvEVAL EXPR="{ [ l.info_template ].Template_Render(l.page, l.settings) }"/>
    </div>
  </MvIF>

g.store_template_path is that ugly path to where templates are stored on disk, and sexists() allows me to check for the existence of the template before I attempt to load it. This allows for some attributes to have info blocks and some to skip this feature.

I've now got the best of all worlds: a snippet to cleanly maintain the code of the site, and the ability to efficiently pull in arbitrary content maintained in the UI.

Now if I could just get rid of the silly dependency on the admin UI for maintaining site templates, and perhaps come up with a far terser way to write MivaScript, I'd be all set. Stay tuned...

Monday, March 16, 2015

WriteOnly: Painful adventures in append only writing

Exhibit A: Last week my Dad shared this clever article with me: Watch me write this article. The article centers on a Chrome plugin that lets you play back your writing as it was typed. This may not seem immediately useful, but consider this:

Somers started all this because he thinks the way we teach writing is broken. “We know how to make a violinist better. We know how to make a pitcher better. We do not know how to make a writer better,” Somers told me. In other disciplines, the teaching happens as the student performs. A music instructor may adjust a student’s finger placement, or a pitching coach may tweak a lefty’s mechanics. But there’s no good way to look over a writer’s shoulder as she’s writing; if anything, that’ll prevent good writing.

Exhibit B: Over the weekend I randomly picked up one of Shira's old Organic Chemistry Lab textbooks. As I flipped through the introduction, I came across the section describing the need to keep a lab notebook. One of the recommendations included was to never erase anything. If you made an error, you were supposed to cross out, not remove, the text. This would raise fewer questions about the integrity of the notebook.

These two ideas swirled in my head and came out as: writeonly, an experiment in append only writing.

When you click over to WriteOnly (here, try it now), you're taken to a blank page that allows you to enter in text. You can't, however, backspace, delete or otherwise change the text. The only "feature," if you can call it that, is that two -'s in a row strike out the previous word.
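Under the hood, this sort of editor boils down to a small append-only rule set. Here's a sketch in JavaScript of how I'd reconstruct it (this is my guess at the logic, not WriteOnly's actual source, and the ~~word~~ markers standing in for a visual strikeout are purely illustrative):

```javascript
// Append-only editing rules, reconstructed from the description above:
// characters may only be appended, deletion keys are ignored, and a
// second '-' in a row strikes out the previous word.
function writeOnly(text, key) {
  if (key === 'Backspace' || key === 'Delete') {
    return text; // deletions are silently ignored
  }
  if (key === '-' && text.endsWith('-')) {
    const body = text.slice(0, -1);      // drop the first '-'
    const m = body.match(/(\S+)\s*$/);   // find the previous word
    if (m) {
      const start = body.length - m[0].length;
      return body.slice(0, start) + '~~' + m[1] + '~~ ';
    }
    return body;
  }
  return text + key; // everything else appends
}
```

Wire that up to a keydown handler and an editable div, and you've got the bones of the app.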

Here's a screenshot of a sample session:

At this point, WriteOnly is a trivial experiment. If I thought it was useful (or heck, if you thought it was useful), I'd have to hook it up to some persistent storage. But once that's done, the 'app' should be ready to use. The bigger question of course is, is there any use for this bad boy?

Play with it and let me know.

Weekend Snapshots: Sushi, Spring and a Pizza Pie!

The above sign was spotted during our Kosher Mart trip this last weekend. Seriously, Sushi with Quinoa for Passover? I've got to think that even just a couple of years ago this sign would have been thought impossible. For a religion that tends to err on the side of caution, it can get wild and crazy now and then (assuming you define including sushi and quinoa in your Passover preps as wild and crazy).


Spring has nearly sprung! I caught sight of these croci and snowdrops while on a run. They look especially stunning next to all the brown detritus that surrounds them.


We finally tried Pupatella, a rigorously authentic Neapolitan pizzeria. How rigorous? Oven bricks made from the volcanic ash of Mt. Vesuvius, rigorous. The pizza was mighty good. Alas, I can't call it the best pizza in Arlington, like many do. But that's because I'm equally satisfied with a Pizza Hut pie as I am with one of Pupatella's handmade creations. Yeah, a foodie I am not.

Friday, March 13, 2015

My First Killer App, Circa 1994

Recently Jeff Atwood published a blog entry about a retro piece of software that profoundly shaped his coding life. This reminded me of an experience I had with a now ancient piece of software, that had a similar impact.

Back in the days of 4" monitors and luggable computers (yes, 4 inches, that's not a typo), my Dad encouraged me to use one of his favorite programs, an outliner. This was a time when people weren't quite sure what a computer was going to even be useful for. Was it just a fancy typewriter, or elaborate desk calculator?

This outliner software suggested computers could be far more powerful: they could help you think, be creative and quickly solve problems. It was a software Swiss Army Knife before I knew I needed a software Swiss Army Knife (nowadays, emacs fills that role). This app had two other things going for it. First, it was my earliest encounter with software that traded ease of use for power and efficiency. I'd see this again when I jumped on the Unix bandwagon. Second, the manual that came with the software was valuable in its own right. I recall that smooshed between sections describing keystrokes and how the software functioned were important insights and mind-expanding philosophical musings. After reading the manual I didn't just want to use the software, I wanted to be a better thinker.

But this all happened 20 years ago or so, and is hazy to say the least. For the longest time I'd forgotten the name of this influential piece of software. Every few years it would occur to me to retrace my steps and rediscover it, but I never had any luck doing so.

Thanks to Jeff's article, I was inspired once again to look around. And this time, I do believe I've put all the pieces together.

This life altering software was none other than MaxThink. I was delighted to see that it has a website, and that it's filled with the same software philosophy I remember. It's all about being the ideal tool for writers, planners and thinkers. Heck, you can still download and buy the software. I was also psyched to see that at least one dedicated user has some recent blog posts on MaxThink.

But that's not all, I also found a copy of the 1994 manual that was so mind altering. You can find the PDF version here. Does it hold up as the inspirational and life altering document that I've imagined it was? Hard to say. Though I was very pleased to see the following on page 7-3:

That story has stuck with me since I read it nearly two decades ago. To this day when I face a challenge my default response is to follow that very advice: 1. make a list; 2. start on the first item. I've searched for this story over the years and never found mention of it on the web. I was beginning to think I had invented it. To see it printed in the manual is actually a bit of a relief.

While my days of using MaxThink are behind me, the concept of leveraging a powerful outliner to get work done is alive and well. That's where emacs' orgmode comes in. Lately I've been using orgmode to track a number of different lists. And while I'm far from a power user on the subject, I do love that I'm getting to put my DOS outliner skills back to use. I'll have to re-read the MaxThink manual and really brush up on my skills.

Incidentally, I had no idea that MaxThink is credited with being an early implementation of hyperlinks. So not only did I benefit from this software, but anyone who's ever clicked on a link in a browser (that's you!) has too.

So what early software influenced you?

Thursday, March 12, 2015

Msg Loc: A Location Aware Messaging App at its Ugliest

I'm making my way through Spycraft: The Secret History of the CIA's Spytechs, from Communism to Al-Qaeda, and in Chapter 10 there's mention of a device called BUSTER (all caps, of course). BUSTER was a sort of radio transceiver that would send out and receive a text message when covertly deployed. This apparently allowed a spy and his handler to exchange messages without either making physical contact. Because it was a low powered device, the sender and receiver still needed to be relatively close to each other. Operating at low power was a good thing, because it helped make the device harder to detect.

This notion of dropping off and picking up messages within a certain geography got me thinking: wouldn't that be a fun app to build?

So here it is, the very uncreatively named: Msg Loc. Yeah, that's a .apk file. If you're feeling especially lucky (and trusting?), you're welcome to download and try the app out.

As if the name weren't ugly enough, it's got UI to match:

(Remember people, we're prototyping here)

Operating the app couldn't be simpler: clicking Pickup Message derives your current GPS location, picks up the text left at that location and reads it to you. Leave Message prompts you for text, which is then uploaded. The system uses BigHugeMap.com for storing and retrieving the text. So yeah, none of this is remotely secure. But all messages are public, so there really isn't any expectation of security.

So far, if I were you, I wouldn't really be impressed. All I've done is figure out the user's GPS location, make a couple of HTTP requests and leverage the built-in text-to-speech capability that comes with all Android devices. Here's the cool part: the entire app, including generating the APK, was done in Tasker. That's concept to working app file in less than an hour, and most of that time was spent learning about scenes (that is, building your own custom UI) and other Tasker details. It's hard to believe that you can create a truly functional little app, all without ever writing a line of code.

Incidentally, the app may be crude, but the lack of any way to browse or derive where messages may actually be found is intentional. I think there's something slightly magical about standing at a bus stop and hearing someone else's message. Or, knowing that you're leaving a message for some other stranger. Sure, it's kind of like geocaching, but not quite.

Below is the textual description of the tasker actions that power this guy.

Loc Msg Put (49)
A1: Vibrate [ Time:200 ]
A2: Perform Task [ Name:ApxLoc  ]
A3: Get Voice [ Title:Your Message  ]
A4: Variable Set [ Name:%value To:%VOICE  ]
A5: Variable Set [ Name:%key To:Loc-Msg, ]
A6: Variable Set [ Name:%key To:%APXLOC Append:On ]
A7: HTTP Post [ Server:Port:bighugemap.com Path:/put Data / File:key=%key value=%value ]
A8: Say [ Text:Got it. ]

Loc Msg Get (50)
A1: Vibrate [ Time:200 ]
A2: Perform Task [ Name:ApxLoc Return Value Variable:%loc ]
A3: Variable Set [ Name:%key To:Loc-Msg,  ]
A4: Variable Set [ Name:%key To:%loc Append:On ]
A6: HTTP Get [ Server:Port:bighugemap.com Path:/get Attributes:key=%key  ]
A7: If [ %HTTPD !Set ]
A8: Say [ Text:I've got nothing. ]
A9: Else
A10: Say [ Text:%HTTPD ]
A11: End If

ApxLoc (54)
A1: Get Location [ Source:GPS ]
A2: Variable Split [ Name:%LOC Splitter:, ]
A3: Variable Set [ Name:%lat To:floor(%LOC1 * 10000) / 10000 Do Maths:On ]
A4: Variable Set [ Name:%lng To:floor(%LOC2 * 10000) / 10000 Do Maths:On ]
A5: Variable Set [ Name:%APXLOC To:%lat,%lng ]
A6: Return [ Value:%APXLOC ]

Loc Msg App (57)
A1: Show Scene [ Name:Msg Loc Display As:Dialog ] 
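The only interesting logic in there is ApxLoc's coordinate fuzzing: flooring the latitude and longitude to four decimal places (roughly an 11 meter grid) so that two nearby devices derive the same key. In plain code, the equivalent looks something like this (a sketch; the Loc-Msg key prefix comes from the tasks above, and Tasker's floor may treat negative numbers differently):

```javascript
// Mirror of ApxLoc plus the key-building steps in the Tasker tasks:
// floor lat/lng to 4 decimal places, then prepend the "Loc-Msg," namespace.
function apxLocKey(lat, lng) {
  const fuzz = (n) => Math.floor(n * 10000) / 10000;
  return 'Loc-Msg,' + fuzz(lat) + ',' + fuzz(lng);
}
```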

Wednesday, March 11, 2015

BigHugeMap.com: For when you need instant access to an anonymous, no-authentication needed, web accessible global hashmap

I give you my latest creation: BigHugeMap.com.

The Inspiration

This past weekend I was mulling over how I could implement a prototype of a location aware messaging app. It occurred to me that if I had a globally accessible hashmap, I could do this almost trivially using a tool like Tasker. There are various hacks for storing content on the web without setting up a server side database. For example, you can accomplish this using Twitter, Pastebin, or Google Docs. But I wanted something even easier to access, and I wanted not just the ability to publish, but quick retrieval too. I wanted a globally accessible hashtable that didn't enforce any sort of authentication or API requirements.

But what fool would offer such a free-for-all space on the web?

Well, I suppose I'm that fool. As that's exactly what bighugemap.com offers.

How it Works

There are only two things you can do with BigHugeMap.com: put data and get data.

The "API" looks like so:

  // Get some data
  http://bighugemap.com/get?key=SOMEKEY

  // Put some data
  http://bighugemap.com/put?key=SOMEKEY&value=SOMEVALUE

That's it.

Keys can be up to 128 characters in length, values up to 255. Feel free to make GET or POST requests; both work. Other than that, there are no rules.

Naturally, the above URL structure is CURL friendly. Here's a silly example:

 $ curl -d key=epoch -d value="`date`" 'http://bighugemap.com/put'
 OK

 $ curl 'http://bighugemap.com/get?key=epoch' ; echo
 Wed, Mar 11, 2015  8:10:30 AM
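And because it's just URL building, any language that can make an HTTP request can play. A quick sketch in JavaScript (the hostname and key/value parameter names are from the API above; the function names are mine):

```javascript
// Build BigHugeMap.com get/put URLs, escaping keys and values so
// spaces, ampersands and the like survive the query string.
const BASE = 'http://bighugemap.com';

function getUrl(key) {
  return BASE + '/get?key=' + encodeURIComponent(key);
}

function putUrl(key, value) {
  return BASE + '/put?key=' + encodeURIComponent(key) +
         '&value=' + encodeURIComponent(value);
}
```

Hand those URLs to fetch(), curl, or whatever your toaster speaks.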

Is this Secure?

BigHugeMap.com offers the same security guarantees as Mailinator, which is to say none. Every key is globally accessible and settable by any user. If you set epoch, like I did in the example above, then I can come along and both read and overwrite that value. Yeah, it's the Wild West.

Is this Useful?

For a hacker, I think so. For a regular user, probably not. But just think about it: you can store and retrieve data from any device that can make an HTTP request. If you want to experiment with having your watch post data that your toaster then reads, and you want to do this without any server side infrastructure, BigHugeMap.com will get the job done.

In Summary

Be careful. Be creative. Enjoy.

Monday, March 09, 2015

Joan, Dante and McMillan - A DC Run

Yesterday was the perfect day for road running. The weather was sunny and 50+ degrees, which made running through the snow a bit surreal. But unlike a trail run, it was far less muddy on concrete. We started our run at Meridian Hill Park and ran over to McMillan Reservoir.

I was hoping we could run around the reservoir, but it turns out the roads shown on the map are quite off limits. In fact, when I casually slipped through a gate to snap a few photos of the Pump House, I was met by a very angry and determined official who wanted me off the private property absolutely this second. I don't think she appreciated that without a sign saying where the property begins, it's hard to comply with this order. Turns out my experience wasn't unique. Anyway, after being threatened with arrest, I decided not to press the point and just moved on.

We finished up our run with a stroll through Meridian Hill Park. Unlike my night visit, I got to actually see the complete park, including the statues of Dante and Joan of Arc. They're both excellent pieces of art. And there's something special about finding Joan, the only equestrian statue of a woman in the District.

Friday, March 06, 2015

Thursday, March 05, 2015

Snowday, Shmoday. Let's Purim!

I've got to give our Shul credit, while the rest of DC shut down due to snow, we still had early morning Minyan to read the Whole Megillah!

And don't worry, we had the essentials on hand to stave off hypothermia...

Wednesday, March 04, 2015

wine: The 'but I have to run Windows' excuse killer for Linux

When I switched one of my work laptops to Linux, I assumed that I'd run into a handful of Windows-only apps that would make the switch untenable. This hasn't turned out to be the case. The majority of Windows requirements were addressed through web based apps and standard Linux tools. For more obscure needs (say, Picasa image management), I was able to find a Linux variant that worked just as well (say, Shotwell). More importantly, I'm finding the scriptable philosophy of Linux is making me even more productive than I was on Windows.

That's not to say that I haven't run into a couple of Windows-specific challenges. To my delight, I've found wine is a lightweight, almost invisible solution to this problem.

Two apps that I've run under wine include an ancient evoice audio player (needed to playback audio files from a free evoice account I use) and Anyplace Control's desktop viewing software.

The evoice player was my first foray into running a Windows app under Linux. It was more proof of concept than anything else, as the alternative was replacing the ancient evoice account with a more modern (though probably not free) solution. Using wine couldn't have been easier: I simply downloaded the .exe file and then ran: wine evoicesetup.exe. And to my amazement, the Windows installer popped up. Once the process was done, evoice was installed, though it wasn't immediately obvious where.

Now, when I want to play back an evoice file I run this script:

#!/bin/bash

## A wrapper around the evoice windows app

if [ ! -f "$1" ] ; then
  echo "Usage: `basename $0` file.evc"
  exit 1
fi

wine 'c:\\Program Files (x86)\\eVoice Player 1.0\\eVoicePlayer.exe' "$1"

I derived that path by looking under ~/.wine/drive_c and converted it to the standard Windows format.
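That translation from a Unix path under the wine prefix to a Windows-style path is mechanical enough to script. A sketch of the mapping (wine actually ships a winepath utility for this; the function below is just an illustration, and the example path is hypothetical):

```javascript
// Map a path under ~/.wine/drive_c/ to the c:\ style path wine expects.
function toWindowsPath(unixPath) {
  const marker = '/.wine/drive_c/';
  const i = unixPath.indexOf(marker);
  if (i === -1) return unixPath; // not inside the wine prefix
  const rest = unixPath.slice(i + marker.length);
  return 'c:\\' + rest.split('/').join('\\');
}
```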

Admittedly, the above setup isn't perfect. When I run evoice under wine, my xterm is filled with warning messages like so:

But it plays back the audio just fine. And given how often I have to play these files, this solution is perfect.

Anyplace Control, a type of Remote Desktop software that one of my clients uses, seemed like a taller order for wine. I ran wine AnyplaceControlInstall.exe and was greeted with this promising dialog:

I hit Next but nothing happened. After much experimentation I realized that the install was kicking off a bunch of windows and ratpoison was probably not displaying them properly. To address this I kicked off Xnest with the following script:

#!/bin/bash

##
## Kick off a full screen Xnest session
##
geom=`xwininfo -root | grep geometry`
target=:10

Xnest $target  $geom &
sleep 1

export DISPLAY=$target
twm &
xsetroot -solid SteelBlue
xterm &

This started up the very old school twm in its own little desktop universe. The install ran just fine from there.

To run Anyplace Control, I simply kick off an Xnest session like above and then run:

  wine 'c:\\Program Files (x86)\\Anyplace Control\\apc_Admin.exe'

Turns out, Anyplace Control runs even better than evoice, with far fewer warning messages. In fact it runs just as well on my Linux box, if not better, than it does on my Windows box.

I had no idea wine was such an unsung hero on Linux. It's a project that's been around forever (well, since 1993!) and it effortlessly knocked down some of the trickiest hurdles to a Linux-only lifestyle. Well done!

Tuesday, March 03, 2015

Hacker Workarounds for Lollipop on the Galaxy S5

I know I should be excited that my Galaxy S5 picked up Lollipop, but so far, it's been a bigger headache than help. Let's see:

Terminal IDE, the command line Swiss Army Knife, is now broken. I get this gem of a message when I use vi and various git commands:

(That reads: error: only position independent executables (PIE) are supported.)

Terminal IDE was my go-to tool for both ssh and git. I used vi for quick edits, but Droid Edit is my main tool of choice for editing source code on my device. As a git replacement, I'm now depending on Droid Edit's built-in git support. And if that isn't flexible enough, I'll probably end up buying Pocket Git, which is by the same author as Droid Edit.

As an ssh replacement, I've found vSSH to be the ideal option. There are a number of slick ssh options out there, like JuiceSSH and the classic ConnectBot, but I've found that they don't play nice with my Bluetooth Keyboard setup. vSSH Just Works, and allows me to use all the keyboard commands I'd expect.

Blocking Mode is gone. Well, mostly. It appears that Interruptions has replaced Blocking Mode, which I don't really get. Perhaps this makes sense because Blocking Mode was a Samsung thing and Interruptions is pure Android? Or maybe Interruptions is better? All I know is that Blocking Mode is no longer found in the settings menu. However, it's not completely gone. From within Tasker you can manage Blocking Mode by using:

  Actions > Plugins > Secure Settings > Configuration > 
    Custom ROM Actions > Samsung Modes > Blocking Mode

Though it's probably smarter to get rid of any calls you have to Blocking Mode and use the built in Interruptions Capability. Within Tasker, you can access this as follows:

 Actions > Audio > Interrupt Mode

There are a bunch of older posts on the web that imply that Tasker doesn't interact properly with the Interruption facility. That's all dated; the most recent version of Tasker has no problem with interruptions.

Silent Mode in Tasker is now a trap. I had a bunch of tasks that invoked: Actions > Audio > Silent Mode. When they executed and turned Silent Mode on, the system switched into Priority access only. That's reasonable, as Silent Mode is no longer available in Lollipop. The catch comes when you turn Silent Mode off. This is effectively a no-op, and doesn't re-enable interruptions.

Consider this example: I've got this task setup:

 Flipped:
 Orientation: Face Down
  → Notification Volume 0
         System Volume 0
         Silent Mode On
  ← Notification Volume 7
         System Volume 7
         Silent Mode Off

Before I understood how broken Silent Mode was, I'd flip my phone over and it would switch into priority interruptions. I'd then flip my phone back and it would remain in priority interruptions. This left me baffled: why on Earth was my device switching into this Priority-only mode? I replaced any calls to Silent Mode with calls to Interrupt Mode, and I'm good to go.

Finally, a question: which browser should I be using on my device? The built-in Internet app has support for saving pages locally. That's huge for working on ProgrammingPraxis problems mid-flight. But the Chrome app has the new and improved Chrome Window support, which I'd like to take for a spin. So, which browser do you use? Why can't Chrome support saving pages locally?

Monday, March 02, 2015

Food Randomness: Ethiopian Lent and Ecuadorian Hamantaschen

I know almost nothing about Ethiopian Lent. Heck, I don't even know if that's the appropriate name for it. But I do know that during this section of the calendar some Ethiopians refrain from eating meat. And why does this matter? It means that our favorite Ethiopian restaurant, Dama Pastry and Restaurant, has special vegetarian additions to the menu. Perhaps your favorite Ethiopian place does this, too? It's certainly a nice vegetarian bonus to find.

Incidentally, the Hebrew word for fast is Tzom, as in Tzom Gedalia. That's awfully close to the Amharic Abiy Tsom. Coincidence? Nope.

If I learned anything from our trip to Ecuador, it's that it's OK to put a massive monument to the equator 240 meters from the actual Equator. That, and cheese goes everywhere! Like in hot chocolate.

So as a nod to my Ecuadorian Jewish Brethren, I just had to insist that Shira let me create some Mozzarella Hamantaschen:

I've got to say, right out of the oven, they tasted pretty good. It was a cheesy, gooey mess, but in a good way. I could almost argue that the savory cheese complemented the sweet dough well. This morning, however, they were a little less appetizing. Oh well, I still think I'm on to something here. Next year, I'll have to refine the procedure further.

Note: the above cheese Hamantaschen are poorly shaped because I'm not in charge of the folding process during baking. I'm strictly in charge of flattening the dough. Shira's Hamantaschen all come out looking like perfect little triangles.

Pretty in Ice, and Recovering Lost Photos on Linux

Slowly but surely we're transitioning out of winter. A few days ago we had snow, and today we awoke to a beautiful (as seen by a non-commuter) ice storm. The photos just don't do this sort of weather justice, but I had to try.

These photos almost weren't. While copying them to my Linux laptop with a script, I managed to delete most of the photos. Luckily, I found photorec, a very impressive interactive program which helps recover accidentally deleted photos from an SD card. It installs with the command: sudo yum install testdisk and is super easy to use.

Along with recovering the photos that I had shot this morning, it also found hundreds more. Let this be a lesson to anyone who thinks that deleting stuff actually causes it to be deleted.

As I type this, the weather has warmed up enough that the trees are now freed from their ice captivity. It was beautiful while it lasted. Now it's just wet.