Friday, March 27, 2015

The 3 Dev Tools That Made My Week: mysqlreport, WP SQL Executioner, Chaosreader

Here are three tools that saved my butt during this last week. The least I can do is heap a little praise on them.

mysqlreport

I've written about mysqlreport before. And everything I've said about it still stands: it's the most efficient tool I've found to gather and present performance data on a MySQL database. Unfortunately, the project behind mysqlreport has been retired. This means, among other things, that the 1-line install I've used previously for setting up mysqlreport no longer works.

Fortunately, mysqlreport and its exceedingly helpful guide have been archived over at GitHub. The command-line install, therefore, still works. You just need to do:

 wget https://raw.githubusercontent.com/daniel-nichter/hackmysql.com/master/mysqlreport/mysqlreport

to grab the latest (and apparently final) version of mysqlreport. From there, you can start debugging away.
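
If you're new to the tool, kicking it off looks something like this (the flags below are from the archived guide and my memory, so treat this as a sketch and adjust for your setup):

  chmod +x mysqlreport
  ./mysqlreport --user root --password

It then prints a single, dense report summarizing key buffer usage, query counts, slow queries and the like.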

WordPress SQL Executioner

Like mysqlreport, I've blogged about this plugin before (nearly 4 years ago). Last night I was working on a GoDaddy managed WordPress blog and found that I needed to tweak a bunch of posts. I had imported the posts using the very slick WP All Import plugin, but I needed to take the posts from being drafts to being scheduled, and I needed to set their post dates. It was far too painful to consider manually updating each post. Normally I'd craft a MySQL or PHP solution by manually connecting up to the database and just "fixing" these posts. Unfortunately, because this is a managed environment, I don't have access to the underlying database.

Fortunately, GoDaddy allowed me to install WP SQL Executioner, in all its ridiculously unsafe glory. I then kicked off an SQL statement like so:

  UPDATE $posts SET post_date = ..., post_status = 'future' WHERE id IN (...)

And all the relevant posts became scheduled. When I realized the blog was set up in the UTC timezone and I needed the posts to be in EST, I simply ran the following:

  UPDATE $posts SET post_date_gmt = post_date + INTERVAL 4 HOUR WHERE id in (...)

To make sure WordPress fully absorbed these behind-the-scenes changes, I went into the wp-admin UI and bulk-edited all the posts. I simply set a category on them that they all already had. The result was properly scheduled posts.

Note to self: I've got to find (or write) a similar plugin that runs arbitrary PHP code. Yeah, how could that cause problems?

Chaosreader

While I've never blogged directly about tcpdump, I've made passing mention of it. It's one of those low-level, super-powerful tools for debugging all things network-related. The problem I have with it is that it tends to give me *too* much information. Enter Chaosreader, a tool I learned about from this post. Chaosreader slurps in a raw tcpdump capture file and delivers a neatly organized breakdown of the traffic at both the high and low levels.

I recently used it to understand how the Flash Media Server's Admin Console functions. Because the Admin Console is written in Flash, a number of my normal techniques for debugging weren't possible (read: firebug and fiddler). Tcpdump, on the other hand, runs on the server and couldn't care less about where the connections are coming from. So I generated a massive tcpdump file, fed it to Chaosreader and worked out from there which FMS Admin API calls were being invoked. Turns out getUserStats returns the stats for individual client connections, not FMS users, as I had assumed the name implied.
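
The workflow is simple enough (the interface, port and file names here are just examples; the FMS admin console typically talks over port 1111):

  # capture full packets on the server
  tcpdump -i eth0 -s 0 -w fms-admin.pcap port 1111

  # turn the capture into a browsable set of HTML reports (index.html and friends)
  chaosreader fms-admin.pcap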

Bottom line, for seeing both the forest and the trees, at least when it comes to TCP network traffic, you need Chaosreader.

Thursday, March 26, 2015

The Origin of Arlington's (Future) Black Rabbit Population

In Arlington we have a population of Black Squirrels which shouldn't really be here. We also have a large population of bunny rabbits in the area. So imagine my surprise when I looked out my window and saw this guy:

At first I wasn't sure what I was looking at: a raccoon (I saw one during the day over the weekend)? A black cat? Some sort of mutant mini kangaroo? (Oooh, how I hoped it was the latter.) Nope, it was a black rabbit.

Upon further inspection, I realized it wasn't a wild rabbit, but the pet rabbit from one of the town homes that border our back property line. I could see the cage it had been let out of.

So yeah, this isn't some remarkable wildlife sighting. But when you start seeing little black baby bunny rabbits running around, I do believe we'll have our explanation of where they came from. 'Cause if rabbits have a reputation for one thing...

Wednesday, March 25, 2015

csveach: Simple, Yet Powerful CSV File Processing On The Command Line

Ahhh yes, the CSV format: it looks so easy to process. Yet an embedded newline here, a stray quote there, and it becomes an awful pain to work with. To that end, I give you csveach, a small Perl script that attempts to make working with CSV files on the command line headache-free.

Here's how it works. Suppose you've got some data:

# file: foo.csv
Alice,555-1212,"To Be,
Or not to Be,
That is the question!"
Bob,546-7777,"Better to keep your mouth shut and be thought a fool,
then to open it and remove all doubt."
Charlie,930-9808,"Faster.
Just run Faster."

(Note the annoying embedded newlines)

First, you craft a short shell script that's going to get executed for each row in the CSV file. Each column is handed to the script as a parameter. $1 is the row number in the CSV file, $2 is the first column, $3 the second, and so on. Here's a trivial shell script that would extract just the name and phone number from the above:

# file: name.sh
#!/bin/bash

echo "$2: $3"

Finally, you run:

  csveach ./name.sh foo.csv

Which gives the output:

Alice: 555-1212
Bob: 546-7777
Charlie: 930-9808

The shell script can just as easily work with the newline embedded text. For example:

# file: quote.sh
#!/bin/bash

quote=`echo "$4" | tr '\n' '|'`
len=`echo $quote  | wc -c`
echo "$2's $len words of wisdom: $quote"

This can be run as:

 csveach ./quote.sh foo.csv

and gives the output:

Alice's 44 words of wisdom: To Be,|Or not to Be,|That is the question!|
Bob's 93 words of wisdom: Better to keep your mouth shut and be thought a fool,|then to open it and remove all doubt.|
Charlie's 26 words of wisdom: Faster.|Just run Faster.|

By letting a shell script (or really any executable) do the heavy lifting, it's possible to reformat or process CSV data any way you want. And best of all, you can ignore all those exceptional cases that make CSV such a pain.
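
For instance, here's a hypothetical little helper that re-emits each row as tab-separated text, which plays nicely with cut, sort and friends:

# file: tsv.sh
#!/bin/bash

shift                              # drop the row number ($1)
printf '%s\t' "$@" | tr '\n' ' '   # columns tab separated, embedded newlines flattened
echo

Running csveach ./tsv.sh foo.csv then gives you one clean line per CSV record.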

Enjoy!

#!/usr/bin/perl

##
## For each row of a CSV file kick off a system command
##

use strict;
use warnings;
use Text::CSV;

my $script = shift(@ARGV) or die "Usage: csveach script csv-file ...\n";

# binary => 1 lets Text::CSV cope with embedded newlines and stray quotes
my $csv = Text::CSV->new ({ binary => 1, eol => $/ });
my $row_num = 0;

foreach my $file (@ARGV) {
  open my $io, "<", $file or die "$file: $!";

  while (my $row = $csv->getline ($io)) {
    $row_num++;
    # hand the script the row number followed by every column
    my @args = ($row_num, @$row);
    (system $script, @args) == 0 or die "system $script @args failed: $?";
  }
}

Tuesday, March 24, 2015

gdget: Inching Towards a more Linux Friendly Google Drive Solution

My default document strategy is to stash everything in Google Drive. If a client sends me a Word Doc, it gets uploaded and converted to a Google Doc so we can both discuss it as well as have a reliable permanent record. This especially makes sense on Windows, where I gravitate towards browser-friendly solutions. On Linux, however, I'd prefer an emacs / command-line friendly solution; and while Google Docs and Google Drive are accessible from Linux, the situation isn't exactly ideal.

What I truly want is ange-ftp for Google Docs. If you've never used emacs's remote file editing capability, you're missing out. It's like magic to pull in and edit a file on some random Linux box halfway across the world.

But I digress. While full editing support of Google Docs from within emacs would be ideal, a much simpler solution would suffice: if I could export a snapshot of various documents from Google Drive and store them locally, I could use emacs and other command-line tools (I'm looking at you, grep) to quickly access them. In other words, I'd gladly trade the browser and rich formatting for a simple, yet occasionally out-of-date, text file. And for the times when I need rich formatting, I'm glad to access Google Drive using a browser.

The good news is that the Google Drive API offers a download option which does this conversion to basic text. I just needed a tool that would execute this API. Google CL was a very promising initial solution. I was easily able to build a wrapper script around this command to pull down a subset of docs and store them locally. The big catch I found was that Google CL's title matching algorithm meant that attempts to download a specific file pulled down more than I wanted (trying to download "Foo" also pulled down "Foo Status" and "Foo Report"). More than that, it only worked with Google Docs; spreadsheets and other resources weren't accessible.

So I decided to try my hand at writing my own little client. The Google Drive API is actually quite HTTP GET friendly. The only catch (as usual) is authentication. Gone are the days when I could even consider being sloppy and putting my username and password in a shell script (I suppose a Thank You is in order for this). I was going to have to go the OAuth route.

I followed the instructions for an installed app (including setting up an app in my Google Developer's console) and while they initially looked daunting, everything came together without issue. I ended up hacking together 3 scripts:

  • gdauth - This script sets up the initial Google OAuth authentication. It also supports providing the access_token as needed.
  • gdget - This is a very thin wrapper around curl. It essentially adds the correct Authorization header to an arbitrary request and lets curl do the rest of the work.
  • gdocs - A simple shell script for pulling down various docs and sheets and storing them locally. It works by mapping various names (Foo) to document IDs (which are visible while editing a document within Google Docs).

gdauth and gdget both take a -c flag which sets a context. The context allows you to access multiple Google accounts. For example, you may authenticate with your work account using -c work or your personal account as -c personal. This way you can access various Google Drive accounts with a minimum of hassle.

You can use the low level tools like so:

$ gdauth -c blog init
https://accounts.google.com/ServiceLogin?....
Code? [enter code shown after visiting the above URL]
Done

$ gdget -c blog 'https://docs.google.com/a/ideas2executables.com/spreadsheets/d/1zjkVhrv-f0nvSOFNGuyEPJQSlSp-fXV66DYUFNJxGv8/export?exportFormat=csv' | head -4
,,,,
,#,Use,Reference,HTML
,1,for writing,,<li>for writing</li>
,2,as a straw,,<li>as a straw</li>
,3,"as a toy ""telescope"" for kids",,"<li>as a toy ""telescope"" for kids</li>"

For regular usage, I invoke gdocs pull from cron every few hours and I'm good to go.
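
If you've never wired a script like this into cron, the entry is a one-liner (the install path for gdocs is whatever you've picked; this is just a sketch):

  # refresh the local snapshots every 3 hours
  0 */3 * * * $HOME/bin/gdocs pull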

Note that you're not limited to using the above tools to only download snapshots of files. You can access any part of the Google Drive REST API. For example:

 gdget -c blog 'https://www.googleapis.com/drive/v2/about'
 gdget -c blog 'https://www.googleapis.com/drive/v2/changes'
 gdget -c blog 'https://www.googleapis.com/drive/v2/files/0B53sMu55xO1GN3ZObG1HdzhXdXM/children'

Below are all three scripts. Hopefully you'll find them inspirational and educational. Better yet, hopefully someone will point me to an emacs or command line solution that makes this script look like the toy that it is. For now though, it's one heck of a useful toy.

# ------------------------------------------------------------------------
# gdauth
# ------------------------------------------------------------------------
#!/bin/bash

##
## Authenticate with Google Drive
##
USAGE="`basename $0` [-h] [-c context] {init|token}"
CTX_DIR=$HOME/.gdauth
CLIENT_ID=__GET_FROM_API_CONSOLE__
CLIENT_SECRET=__GET_FROM_API_CONSOLE__

ctx=default

function usage {
  echo "Usage: `basename $0` [-h] [-c context] {init|token}"
  exit
}

function age {
  modified=`stat -c %Y $1`   # seconds since epoch at last modification
  now=`date +%s`
  expr $now - $modified
}

function refresh {
  refresh_token=`cat $CTX_DIR/$ctx.refresh_token`
  curl -si \
       -d client_id=$CLIENT_ID \
       -d client_secret=$CLIENT_SECRET \
       -d refresh_token=$refresh_token \
       -d grant_type=refresh_token \
       https://www.googleapis.com/oauth2/v3/token > $CTX_DIR/$ctx.refresh
  grep access_token $CTX_DIR/$ctx.refresh | sed -e 's/.*: "//' -e 's/",//' > $CTX_DIR/$ctx.access_token
}

while getopts :hc: opt ; do
  case $opt in
    c) ctx=$OPTARG ;;
    h) usage ;;
  esac
done
shift $(($OPTIND - 1))

cmd=$1 ; shift

mkdir -p $CTX_DIR
case $cmd in
  init)
    url=`curl -gsi \
         -d scope=https://www.googleapis.com/auth/drive \
         -d redirect_uri=urn:ietf:wg:oauth:2.0:oob \
         -d response_type=code \
         -d client_id=$CLIENT_ID\
         https://accounts.google.com/o/oauth2/auth | \
      grep Location: | \
      sed 's/Location: //'`
    echo $url | xclip -in -selection clipboard
    echo $url
    echo -n "Code? "
    read code
    curl -s \
         -d client_id=$CLIENT_ID \
         -d client_secret=$CLIENT_SECRET \
         -d code=$code \
         -d grant_type=authorization_code \
         -d redirect_uri=urn:ietf:wg:oauth:2.0:oob \
         https://www.googleapis.com/oauth2/v3/token > $CTX_DIR/$ctx.init
    grep access_token $CTX_DIR/$ctx.init | sed -e 's/.*: "//' -e 's/",//' > $CTX_DIR/$ctx.access_token
    grep refresh_token $CTX_DIR/$ctx.init | sed -e 's/.*: "//' -e 's/"//' > $CTX_DIR/$ctx.refresh_token
    echo "Done"
    ;;
  token)
    if [ ! -f $CTX_DIR/$ctx.access_token ] ; then
      echo "Unknown context: $ctx. Try initing first."
      exit
    fi
    age=`age $CTX_DIR/$ctx.access_token`
    if [ $age -gt 3600 ] ; then
      refresh
    fi
    cat $CTX_DIR/$ctx.access_token
    ;;
  *)
    usage
esac

# ------------------------------------------------------------------------
# gdget
# ------------------------------------------------------------------------

#!/bin/bash

##
## Run a GET request against an authorized Google
## URL.
##
CTX_DIR=$HOME/.gdauth
ctx=default

function usage {
  echo "Usage: `basename $0` [-c ctx] [-h] url"
  exit
}

while getopts :hc: opt ; do
  case $opt in
    c) ctx=$OPTARG ;;
    h) usage ;;
  esac
done
shift $(($OPTIND - 1))

if [ ! -f $CTX_DIR/$ctx.access_token ] ; then
  echo "Unknown context: $ctx. Try init'ing first."
  exit
fi

token=`gdauth -c $ctx token`

curl -s -H "Authorization: Bearer $token" "$@"
     
# ------------------------------------------------------------------------
# gdocs
# ------------------------------------------------------------------------
#!/bin/bash

##
## A tool for experimenting with Google Docs
##

GDOCS=$HOME/gdocs

function grab_all {
  url=$1    ; shift
  fmt=$1    ; shift
  for t in $* ; do
    name=`echo $t | cut -d: -f1`
    docid=`echo $t | cut -d: -f2`
    echo "$name.$fmt"
    u=`printf $url $docid $fmt`
    gdget -c i2x $u > $GDOCS/$name.$fmt
  done
}

##
## docs and sheets have the format:
##  LocalFileName:Docid
## Ex:
##  StatusReport:14SIesx827XPU4gF09zxRs9CJF3yz4bJRzWXu208266WPiUQyw
##  ...
##

docs=" ... "
sheets=" ... "

commands='pull'
cmd=$1 ; shift

case $cmd in
  pull)
    grab_all "https://docs.google.com/feeds/download/documents/export/Export?id=%s&exportFormat=%s" txt $docs
    grab_all "https://docs.google.com/a/ideas2executables.com/spreadsheets/d/%s/export?exportFormat=%s" csv $sheets
    ;;
  *)
    echo "Usage: `basename $0` {$commands}"
    exit
    ;;
esac

Monday, March 23, 2015

Close by, and blazingly fast - OpenWRT helps construct the WiFi infrastructure I should have had in the first place

While I was psyched to install OpenWRT on a TP-Link TL-WA850RE and produce the cutest / James-Bondiest device ever, I admit that it didn't have much practical value. I bought this device with the intention of boosting the WiFi signal emanating from the access point in our basement, but that didn't seem to work. WiFi speeds were still unimpressive even after configuring the TP-Link.

Then, over the weekend, it hit me: to sidestep the issue with performance, I'm using a wired Ethernet drop for my work computers upstairs. What if I plugged the TP-Link device into my wired network, and then had it offer up an access point of its own? If I did this, laptops upstairs would benefit from having a close-proximity WiFi access point, yet the connection to the router in the basement would be over wires.

After some research, I realized that the desired configuration was pretty close to the stock configuration of OpenWRT. The key difference is that the default version of OpenWRT expects the outgoing network to be the wireless network (wan), not the wired network (lan). It took a bit of fiddling, but here's the working configuration:

# /etc/config/network
config interface 'loopback'
        option ifname 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

# The "real" network
config interface 'lan'
        option ifname 'eth0'
        option force_link '1'
        option proto 'static'
        option ipaddr '192.168.1.75'
        option netmask '255.255.255.0'
        option dns '192.168.1.1'
        option gateway '192.168.1.1'


# The close proximity wireless access point 
config interface 'wan'
        option proto 'static'
        option ipaddr '10.0.0.1'
        option netmask '255.255.255.0'

# /etc/config/wireless
config wifi-device 'radio0'
      option type 'mac80211'
      option channel '5'
      option hwmode '11g'
      option path 'platform/ar934x_wmac'
      option htmode 'HT40-'
      option disable '0'
      option noscan '1'
      option txpower '20'

config wifi-iface
      option device 'radio0'
      option network 'wan'
      option mode 'ap'
      option ssid 'Pipsqueak'
      option encryption 'psk2'
      option key 'XXXXXXXXXXXXXXX'

# /etc/config/firewall
# ...
config zone
      option name             lan
      list   network          'lan'
      option input            ACCEPT
      option output           ACCEPT
      option forward          ACCEPT
      option masq             1       # CRITICAL

config zone
      option name             wan
      list   network          'wan'
      list   network          'wan6'
      option input            ACCEPT
      option output           ACCEPT
      option forward          REJECT

# Reversed from default config
config forwarding
      option src            wan
      option dest           lan
# ....

# /etc/config/dhcp
config dnsmasq
        option domainneeded '1'
        option boguspriv '1'
        option filterwin2k '0'
        option localise_queries '1'
        option rebind_protection '1'
        option rebind_localhost '1'
        option local '/lan/'
        option domain 'lan'
        option expandhosts '1'
        option nonegcache '0'
        option authoritative '1'
        option readethers '1'
        option leasefile '/tmp/dhcp.leases'
        option resolvfile '/tmp/resolv.conf.auto'

config dhcp 'lan'
        option ignore '1'

config dhcp 'wan'
        option interface 'wan'
        option 'start' '50'
        option 'limit' '200'
        option 'leasetime' '1h'
        option ignore '0'
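
Once the config files are in place, I reload everything with the standard OpenWRT init scripts (double check these against your build):

  /etc/init.d/network restart
  wifi
  /etc/init.d/dnsmasq restart
  /etc/init.d/firewall restart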

A configuration similar to the one above worked; however, initial WiFi performance was nothing particularly special. Thanks to various forum discussions, I ended up tweaking txpower, noscan and a few other parameters before I got this impressive speed test:

Compared to the 4~5Mb/s I was getting, this is absolutely huge. This is just about the same performance I've been seeing over a wired connection.

I've got to say, I'm amazed. Who would have thought that I'd actually put this OpenWRT device to work? But no doubt about it, it's providing real value.

Thursday, March 19, 2015

The "Funny, It Doesn't Look Like a Linux Server", Server

Check out the cutest addition to our Linux family:

What you're looking at is a TP-Link TL-WA850RE WiFi range extender. A while back, I was having WiFi woes, so I picked up this $30 WiFi extender from Amazon. Turns out, the extender didn't help matters much, so I decided to put it to use in another way.

I installed OpenWRT on the device. OpenWRT is a Linux distribution designed for routers and the like, and it caught my eye because it had confirmed support for this particular device. Installing OpenWRT was almost too easy. I grabbed the .bin file (it was in the ar71xx » generic subdirectory) and used the upload-firmware option available in the built-in web-based UI.

In just a few minutes I turned this hunk of white plastic into a Linux box, which, well, did nothing. Through some small miracle, I was able to hook it up to an Ethernet cable and telnet to it.
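
(For reference, a freshly flashed OpenWRT typically answers at 192.168.1.1 and allows telnet until a root password is set; the exact addresses and steps may differ on your build.)

  telnet 192.168.1.1       # from the connected machine
  passwd                   # on the device: set a root password...
  ssh root@192.168.1.1     # ...after which dropbear's ssh takes over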

The first order of business was to configure this device as a WiFi client (or station) rather than its default configuration as an access point. My hope was that once the device was in client mode, I could plug it into the wall in a random spot in our house; it would then boot up and I'd be able to telnet/ssh to it.

I found this and this article handy in setting up client mode on the device. However, it was ultimately this bit of advice that made all the difference:

If the target network uses the 192.168.1.0/24 subnet, you must change the default LAN IP address to a different subnet, e.g. 192.168.2.1 . You can determine the assigned WAN address with the following command: ...

I had wanted to set up the lan (wired side) of the device with a static IP and the wan (WiFi side) with a DHCP-assigned IP. It wasn't obvious, but putting both the static IP and the dynamic IP on the same subnet caused things to fail. The static IP would be set, but the WiFi side would never be properly configured. Here's the configuration that ended up working for me:

# /etc/config/wireless
config wifi-device 'radio0'
        option type 'mac80211'
        option channel 'auto'
        option hwmode '11g'
        option path 'platform/ar934x_wmac'
        option htmode 'HT20'
        option disable '0'

config wifi-iface
        option device 'radio0'
        option network 'wan'
        option mode 'sta'
        option ssid 'SSID_TO_CONNECT_TO_GOES_HERE'  # [1]
        option encryption 'wep'
        option key 'PASSWORD_GOES_HERE_SEE_BELOW'   # [2]


# /etc/config/network
config interface 'loopback'
        option ifname 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

config interface 'lan'
        option ifname 'eth0'
        option force_link '1'
        option proto 'static'
        option ipaddr '192.168.2.75'     # [3]
        option netmask '255.255.255.0'

# /etc/config/firewall

# ... trimmed ...
config zone
        option name             wan
        list   network          'wan'
        list   network          'wan6'
        option input            ACCEPT  # [4]
        option output           ACCEPT
        option forward          REJECT
# ... trimmed ...

Some notes from above:

[1] - This is where you specify your router's SSID to connect up with
[2] - For WEP encryption I entered a hex value here, versus text. I used this site to do the conversion.
[3] - This was key: my router will give a 192.168.1.x IP, so this needs to be off that network.
[4] - Once I got everything set up, I was getting a connection refused message when trying to telnet to the server. The wan firewall needed to be changed to allow access.

Once this all got hashed out, I was able to plug the device into a random spot on the wall and telnet to it. Success! And yet, where do I go from here?

Obviously this is useful for educational purposes. I've already had to brush up on my basic networking skills to get this far, and there's plenty more to learn. Heck, you could use this $30.00 router to learn about life on the command line and generally how to be a Unix geek.

OpenWRT, however, is more than just a learning platform. There are a large number of software packages available, and they can be installed with ease using opkg. Turning this bad boy into a web server or the like should be easy enough. I was even able to install a version of Scheme by grabbing an older sigscheme package:

root@pipsqueak:/# opkg install http://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/sigscheme_0.8.3-2_ar71xx.ipk
Downloading http://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/sigscheme_0.8.3-2_ar71xx.ipk.
Installing sigscheme (0.8.3-2) to root...
Configuring sigscheme.
root@pipsqueak:/# sscm 
sscm> (map (lambda (x) (* x 9)) '( 1 2 3 4 5 6))
(9 18 27 36 45 54)
sscm> (exit)

Ultimately, what will make this useful is if I can find an application for the device that leverages its near-invisible profile and dirt-cheap price. If I were in the security business, or a nerd-action-novel writer, then the uses would be pretty obvious. Walk in, plug in the device, walk out. And bam! You've got a server that can try to worm its way onto the network. But for myself, I'm going to have to think a little more on this. Perhaps the device should live in my car? Or maybe it'll be useful in a hotel room? Not sure, but the technology is just too cool to ignore.

Tuesday, March 17, 2015

More Miva Merchant for Geeks: Pulling in an admin controlled template to custom MivaScript code

I've got to say, I'm thoroughly enjoying my snippet approach to Miva, whereby I can embed MivaScript code in arbitrary Miva Merchant templates. To recap, I can put code similar to the following in a Miva template:

    <mvt:item name="ads-extfile"
              param="function|/mm5/snippets/seo/title.mvc"/> 

and the MivaScript code in /mm5/snippets/seo/title.mv gets executed and the output included on the page. I can trivially maintain these snippet files using emacs, and track them in subversion.

Recently I found myself wanting to push this setup further. Specifically, I'd like to allow my customers to customize parts of the snippet content. This will make far more sense with an example.

Suppose I've got the attribute packaging type, and I want to allow shoppers to choose between Stress Free and Stress Inducing. Miva Merchant allows me to create such an attribute, but doesn't provide an easy way to include instructions for the attribute. I've got a snippet that generates the HTML for the attribute in question. What's the best way to get customized text inserted before the radio buttons associated with the attribute?

The easiest solution is to ask my client for the text and hard code it in the snippet. But this is asking for trouble, as no doubt this text will need to get tweaked over time. Furthermore, if I had a general purpose method of pulling in customized text, I'd be able to find all sorts of uses for this capability.

Not quite sure which approach to take, I created a new page in Miva with the code Packaging-Type-Info. For the page's content I put some placeholder text ("Sending a gift to your Mom? Choose stress free. Sending a gift to a frenemy? Select Stress Inducing!").

I confirmed that I could pick up that text by visiting:

  http://mysite.com/mm5/merchant.mvc?Screen=Packaging-Type-Info

One option, then, is to embed an invocation of MvCALL in my snippet with the above URL. That's terribly inefficient though, as it means every call to my snippet will result in an additional HTTP request. I kept digging around. I finally caught a break when I realized that the template was being written out to the following .mvc file:

  /mm5/5.00/templates/s01/packaging-type-info.mvc

Now all I needed to do was to read in that .mvc file and hopefully include the contents of the template in my snippet.

My first attempt was to add the following to my snippet:

  <MvDO FILE="/mm5/5.00/templates/s01/packaging-preference-info.mvc" />

While that didn't generate an error, it also didn't include the text within the template. I looked all over the MivaScript documentation for a function that would pull in my template contents. I could find none. Luckily, I stumbled onto the docs for miva_template_compile, which actually cleared the whole situation up. Specifically, the docs contain this tidbit:

Compiled templates contain a single function Template_Render(). Execute the template by calling that function, not by executing the compile template .mvc file directly

In other words, there's no function to pull in a template because templates are self executing. Including my template file is as simple as saying:

  
 <MvASSIGN NAME="l.path" VALUE="/mm5/5.00/templates/s01/packaging-preference-info.mvc"/>
 <MvEVAL EXPR="{ [ l.path ].Template_Render(l.page, l.settings) }"/>

l.page is unset, and l.settings is passed through from the settings variable provided to the snippet.

The above is awfully close to what I want, but it still hard codes the path to the particular template. Furthermore, the snippet I'm working on is used to render all option based templates, not just the packaging one that has instructions. My final code, therefore, looks like so:

  <MvASSIGN NAME="l.info_template" VALUE="{ g.store_template_path $ tolower(l.attr:code) $ '-info.mvc' }"/>
  <MvIF EXPR="{ sexists(l.info_template) EQ 1 }">
    <div class='att-info leading'>
      <MvEVAL EXPR="{ [ l.info_template ].Template_Render(l.page, l.settings) }"/>
    </div>
  </MvIF>

g.store_template_path is that ugly path to where templates are stored on disk, and sexists() allows me to check for the existence of the template before I attempt to load it. This allows for some attributes to have info blocks and some to skip this feature.

I've now got the best of all worlds: a snippet to cleanly maintain the code of the site, and the ability to efficiently pull in arbitrary content maintained in the UI.

Now if I could just get rid of the silly dependency on the admin UI for maintaining site templates, and perhaps come up with a far terser way to write MivaScript, I'd be all set. Stay tuned...

Monday, March 16, 2015

WriteOnly: Painful adventures in append-only writing

Exhibit A: Last week my Dad shared this clever article with me: Watch me write this article. The article centers on a Chrome plugin that allows you to play back your writing. This may not seem immediately useful, but consider this:

Somers started all this because he thinks the way we teach writing is broken. “We know how to make a violinist better. We know how to make a pitcher better. We do not know how to make a writer better,” Somers told me. In other disciplines, the teaching happens as the student performs. A music instructor may adjust a student’s finger placement, or a pitching coach may tweak a lefty’s mechanics. But there’s no good way to look over a writer’s shoulder as she’s writing; if anything, that’ll prevent good writing.

Exhibit B: Over the weekend I randomly picked up one of Shira's old Organic Chemistry Lab textbooks. As I flipped through the introduction, I came across the section describing the need to keep a lab notebook. One of the recommendations included was to never erase anything. If you made an error, you were supposed to cross out, not remove, the text. This would raise fewer questions about the integrity of the notebook.

These two ideas swirled in my head and came out as: writeonly, an experiment in append-only writing.

When you click over to WriteOnly (here, try it now), you're taken to a blank page that allows you to enter in text. You can't, however, backspace, delete or otherwise change the text. The only "feature," if you can call it that, is that two -'s in a row strike out the previous word.

Here's a screenshot of a sample session:

At this point, WriteOnly is a trivial experiment. If I thought it was useful (or heck, if you thought it was useful), I'd have to hook it up to some persistent storage. But once that's done, the 'app' should be ready to use. The bigger question of course is, is there any use for this bad boy?

Play with it and let me know.

Weekend Snapshots: Sushi, Spring and a Pizza Pie!

The above sign was spotted during our Kosher Mart trip this last weekend. Seriously, Sushi with Quinoa for Passover? I've got to think that even just a couple of years ago this sign would have been thought impossible. For a religion which tends to err on the side of caution, it can get wild and crazy now and then (assuming you define including Sushi and Quinoa in your Passover preps as wild and crazy).


Spring has nearly sprung! I caught sight of these croci and snowdrops while on a run. They look especially stunning next to all the brown detritus that surrounds them.


We finally tried Pupatella, a rigorously authentic Neapolitan pizzeria. How rigorous? Oven bricks made from the volcanic ash of Mt. Vesuvius, rigorous. The pizza was mighty good. Alas, I can't call it the best pizza in Arlington, like many do. But that's because I'm equally satisfied with a Pizza Hut pie as I am with one of Pupatella's hand-made creations. Yeah, a foodie I am not.

Friday, March 13, 2015

My First Killer App, Circa 1994

Recently Jeff Atwood published a blog entry about a retro piece of software that profoundly shaped his coding life. This reminded me of an experience I had with a now-ancient piece of software that had a similar impact.

Back in the days of 4" monitors and luggable computers (yes, 4 inches, that's not a typo), my Dad encouraged me to use one of his favorite programs, an outliner. This was a time when people weren't quite sure what a computer was even going to be useful for. Was it just a fancy typewriter, or an elaborate desk calculator?

This outliner software suggested computers could be far more powerful: they could help you think, be creative and quickly solve problems. It was a software Swiss Army Knife before I knew I needed a software Swiss Army Knife (nowadays, emacs fills that role). This app had two other things going for it. First, it was my earliest encounter with software that traded ease of use for power and efficiency. I'd see this again when I jumped on the Unix bandwagon. Second, the manual that came with the software was valuable in its own right. I recall that smooshed between sections describing keystrokes and how the software functioned were important insights and mind-expanding philosophical musings. After reading the manual I didn't just want to use the software, I wanted to be a better thinker.

But this all happened 20 years ago or so, and is hazy to say the least. For the longest time I'd forgotten the name of this influential piece of software. Every few years it would occur to me to retrace my steps and rediscover this software, but I never had any luck doing so.

Thanks to Jeff's article, I was inspired once again to look around. And this time, I do believe I've put all the pieces together.

This life-altering software was none other than MaxThink. I was delighted to see that it has a website, and that it's filled with the same software philosophy I remember. It's all about being the ideal tool for writers, planners and thinkers. Heck, you can still download and buy the software. I was also psyched to see that at least one dedicated user has some recent blog posts on MaxThink.

But that's not all: I also found a copy of the 1994 manual that was so mind-altering. You can find the PDF version here. Does it hold up as the inspirational and life-altering document that I've imagined it was? Hard to say. Though I was very pleased to see the following on page 7-3:

That story has stuck with me since I read it nearly two decades ago. To this day when I face a challenge my default response is to follow that very advice: 1. make a list; 2. start on the first item. I've searched for this story over the years and never found mention of it on the web. I was beginning to think I had invented it. To see it printed in the manual is actually a bit of a relief.

While my days of using MaxThink are behind me, the concept of leveraging a powerful outliner to get work done is alive and well. That's where emacs' orgmode comes in. Lately I've been using orgmode to track a number of different lists. And while I'm far from a power user on the subject, I do love that I'm getting to put my DOS outliner skills back to use. I'll have to re-read the MaxThink manual and really brush up on my skills.

Incidentally, I had no idea that MaxThink is credited with being an early implementation of hyperlinks. So not only did I benefit from this software, but anyone who's ever clicked on a link in a browser (that's you!) has too.

So what early software influenced you?
