Thursday, February 25, 2021

Preparing for the Unthinkable, Part 1: Until Help Arrives Training

Last week Shira and I took Arlington County's Until Help Arrives (UHA) training. This is one of the County's strategies for preparing for mass casualty events, like shootings, terrorist attacks and other unthinkable events.

UHA focuses both on mindset and skills. The skills are designed to deal with a handful of medical conditions that meet two essential criteria: they're deadly and they can be easily treated. This all took place over Zoom, though I assume we'll follow up with in-person training in the future.

I recommend the course, and think it complements CPR and other first aid training well. It gives you specific skills you can use in a crisis without information overload.

If you're on the fence as to whether this training is for you, or you're not in Arlington County, I'd suggest watching the prerequisite video. This isn't a replacement for the class, but it does provide a significant amount of course material and is well done. I've embedded the video below.

Another resource to check out is the Tactical Emergency Casualty Care (TECC) Guidelines for Active Bystanders. UHA is based on these guidelines. You could assume, like I did, that a document with such a verbose name would be a dense and complex read. You'd be wrong. The TECC Active Bystander Guidelines are essentially a 3-page, bulleted list of what a non-trained individual can do to help in a mass casualty event. It's quite accessible.

Whenever I attend training like this, I try to take action afterwards to cement my learning. I'm currently listening to Unthinkable: Who Survives When Disaster Strikes - and Why by Amanda Ripley, a book recommended during the session. I've updated my First Aid Cheatsheet to include the TECC recommendations. Heck, I'm publishing this blog post. However, the main action I've undertaken is to organize a trauma kit. That's the subject of part 2 of this series; stay tuned.

Wednesday, February 24, 2021

Review: Rogue Protocol, Exit Strategy and Network Effect

I binge read (well, listened to) the three remaining books in Martha Wells's Murderbot Diaries series: Rogue Protocol, Exit Strategy and Network Effect. I did so with the gusto normally reserved for binge watching a TV series I'm hooked on. While the books have similar plots, the characters and scenarios were different enough that I never tired of them. What a joy it is to find a series that you just can't put down.

I continue to be impressed by the diversity that Wells brings to her characters, both human and non-human. At times, it's bots that show brilliance and empathy and at other times it's humans that do so.

I kept thinking back to the TV show The Good Doctor. Murderbot and the protagonist of the TV show, Dr. Shaun Murphy, are brilliant problem solvers with superhuman gifts, yet they can be stumped by the simplest of human interactions. One feature of the TV show is that it manages to show Murphy in both these lights, casting a truly fascinating shadow on those around him. At times you may feel sympathy for Murphy and wish he could be 'normal', though just as often you feel sorry for his co-workers who are, alas, just 'normal.'

Wells brings this same dynamic to Murderbot, and it adds real depth to the characters.

I noted in my review of Artificial Condition that Wells cleverly highlights ethical questions about AI and cloning. I say cleverly, because she manages to wrap these very thorny problems in a veneer of fun and adventure. I liked how she continued this practice in the three remaining books I read.

I don't recall which book it was, but at one point one of the humans explains to Murderbot that her ancestors, like him, arrived on their planet packed in the cargo hold. I found this exchange to be incredibly powerful, as it brought to mind images of slave ships and their implications.

Like all good series, I was both pleased and bummed to finally finish it. Though, I see from looking up the books on Amazon that there's a 6th book coming out. I can't wait!

Monday, February 22, 2021

A Tasker based OPM Status Monitor

The DC Area's 'Snowday For Adults' Indicator

The DC area just endured another bust of a winter storm. While we hoped and prepared for inches of snow, we got sleet instead.

One indicator of how extreme upcoming weather will be is the Federal Government's Office of Personnel Management (OPM) status page. The directives on this page tell Federal employees when they need to report to the office. At a minimum, it impacts something like 280,000 local residents.

When the OPM status page announces that the government is closed, thousands of people stay home and adults get to bask in the joy of hearing there's a snow day. On a practical level, you can expect many local organizations will follow OPM's lead and be open or closed for the day.

During this last winter-storm event I learned that not only does OPM have a web page and app to check the current status, but they have an API endpoint as well. I wasn't quite sure what I could do with this API, but I couldn't resist doing something with it.

OPM Alerts in Tasker

One easy way to experiment with the API was to leverage Tasker. I used a now familiar pattern to do so.

  1. I used the HTTP Request action to invoke the API at
  2. I used the JavaScriplet action to parse the JSON response from the API, storing the relevant information in local variables.
  3. I used standard Tasker actions to process this parsed data, including: checking to see if the status had changed since I last checked it, using the Say action to read the new status aloud, and using the Notify action to trigger a notification.
  4. I created a profile that runs every two minutes throughout the day to invoke the task described above.
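Step 2 is the only step above that involves code. Here's a hedged sketch of what that JavaScriptlet might look like; the response shape (StatusSummary, StatusType) is my assumption, not OPM's documented schema:

```javascript
// Hypothetical sketch of the JavaScriptlet step. The field names in
// this sample response are assumptions, not OPM's documented schema.
var httpData = '{"StatusSummary": "Open", "StatusType": "Open"}';

var parsed = JSON.parse(httpData);
var status = parsed.StatusSummary;    // available to later Tasker actions
var lastStatus = "Closed";            // stand-in for the previously stored status
var statusChanged = (status !== lastStatus);
```

In a real Tasker JavaScriptlet, httpData and lastStatus would come from Tasker variables rather than string literals.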

All of this came together surprisingly easily. My plan is to leave the profile that queries OPM off, and turn it on next time we have a storm on the horizon. I should then be notified if and when OPM decides to change their status.

You can grab the code for the OPM Monitor Profile and Task from TaskerNet.

But Does It Work?

I assumed I'd have to wait until the next big snow to find out whether the OPM Monitor above works. However, last Saturday morning at 12:07am, I was dramatically awoken by my phone making an announcement:


Apparently I had left the profile active, and the OPM site had reset its status back to 'open.' So yeah, the profile works. Actually, it may work too well. I may need to adjust it so that between certain hours of the day it doesn't make a verbal announcement.
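If I do add quiet hours, the guard could be as simple as this sketch; the 7am-to-10pm window below is an arbitrary assumption on my part:

```javascript
// Hedged sketch of a quiet-hours guard: only speak the status aloud
// during waking hours (assumed window). The notification can still
// fire regardless of the hour.
function shouldSpeak(hour) {
  return hour >= 7 && hour <= 22;
}
```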

Thursday, February 18, 2021

Dialing Up The Romance on My iPad Love Note Delivery System

One could argue that my Tasker / iPad powered Love Note Delivery System lacked a certain romantic spark. But I've fixed that. Check out v2:

It looks awfully similar to version 1, with one important difference. In the bottom right hand corner there's a number (in the snapshot above, it reads 17). That's the magic, right there.

I updated the Tasker code so that before the love note is sent to the iPad, it's recorded in a Google Sheet. This change means that notes aren't just displayed on the iPad, but recorded for posterity. If I keep my love note authoring up, by the end of the year I should have hundreds of them. Perhaps I'll compile them into a book or poster to be delivered for next Valentine's Day.

Recording data in a Google Sheet from Tasker is a problem I recently tackled, so that part of the code was trivial. What was new territory was deriving the sequence number to show in the bottom right hand corner.

My first plan: query the Google Sheet using the GSheets API to determine how many data rows are in the current spreadsheet. Alas, I couldn't find an API endpoint that returned this information.

I could pull back all the love notes currently stored in the Google Sheet and count them, but that's inefficient and would get more so as I added notes.

Fortunately, there was an easy solution: after appending rows to a sheet, the API returns a JSON object reporting what changed. From this, I can determine which row was inserted into the spreadsheet, and from there I can determine the sequence number.

I updated my GSheets Append Row task to return the server's API response. I then parsed this response like so:

var r = JSON.parse(results);
var seq = r.updates.updatedRange.replace(/.*[A-Z]/, '') - 1;
var payload = JSON.stringify({
  message: message,
  sequence: seq
});
(Find the complete Tasker code here)
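To make the updatedRange trick concrete, suppose the append lands in row 18 (the sheet name and range below are illustrative stand-ins for what the API reports): stripping everything through the last capital letter leaves the row number, and subtracting 1 for the header row yields the sequence.

```javascript
// Worked example of the sequence derivation. The range value is an
// illustrative stand-in for what the GSheets append API reports back.
var updatedRange = "Sheet1!A18:B18";
var seq = updatedRange.replace(/.*[A-Z]/, '') - 1;  // "18" - 1 → 17
```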

I've updated the web page the iPad uses to receive messages to parse the payload structure described above. There, it now finds a sequence value, which it can proudly display in the bottom right hand corner of the screen.
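Unpacking that payload on the page side is a one-liner. Here's a minimal sketch; the helper name is mine, not from the actual page code:

```javascript
// Hypothetical helper mirroring what the iPad page does with an
// arriving payload: pull out the note text and its sequence number.
function parseNote(payloadString) {
  var p = JSON.parse(payloadString);
  return { message: p.message, sequence: p.sequence };
}

var note = parseNote('{"message": "Love you!", "sequence": 17}');
// note.message is "Love you!"; note.sequence is 17
```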

Now if you'll excuse me, I better get writing. These love notes aren't going to compose themselves!*

*Or could they? Note to self: investigate machine learning strategies to generate love notes.

Wednesday, February 17, 2021

Just A Bit Overengineered: A JavaScript Based, MQTT Powered, iPad Love Note System

The Inspiration: Itty Bitty Sidekick Screens

A few months ago, on a trip to Best Buy, I found myself impressed with their price tags. Not their prices, but the tags themselves. Apparently, Best Buy replaced old school plastic tags with E-Ink displays. I left that store wondering if I could get my hands on my own little E-Ink display and use it to power a project.

Fast forward to this last weekend where I read about two more e-ink based projects: an Always On E-Ink Org Agenda and a Literary Clock Made from an E-reader. The second project is especially interesting because it describes repurposing an old Kindle rather than building an e-ink project from scratch.

I didn't have a Kindle lying around, but I did have an iPad that wasn't getting much use. It wasn't an e-ink display, but I wondered if it could stand in for one.

And then Valentine's Day arrived and I realized what I could use a sidekick screen for: as a message board for leaving love notes to my beloved.

Did I need to turn an iPad into the equivalent of a digital Post-It note? Uh yes, yes I did.

Wait a Minute, Didn't I Just Do This Project?

I just got done building PimgStack, a project to recycle hardware by using my phone to deliver messages to a device that would display said messages on screen. Isn't this the same thing, but instead of using a Raspberry Pi I'm using an iPad, and instead of delivering images I'm delivering love notes? Yes and yes.

So if they're basically the same project, I found myself wondering if I could recycle any code to make my iPad project come together faster.

Powering an iPad Love Note Delivery System

What made PimgStack work was the message delivery mechanism, MQTT. This left me wondering how I could push MQTT messages to an iPad, and once they arrived, how I could react to them. I got an important clue when I set up the MQTT broker on Amazon's MQ service. Check this out:

Amazon's MQ service provides multiple connection points that talk different protocols. In my Raspberry Pi project I made use of MQTT. A quick Google Search revealed that 'wss' is the Secure Web Sockets protocol. I'd never used Web Sockets, but I figured they must be built into the latest version of Safari running on the iPad.

The Plan: Web Sockets talking to Amazon MQ

If I could write a web page that opened up a Web Socket to talk to Amazon MQ's WSS endpoint, then I'd be home free. It could listen on the socket for messages and when they arrived, render the message in HTML. The page would then repeat the process, waiting for another message to arrive.

I whipped up a quick web page, attempted to open up a WebSocket using the WSS URL and ... nothing. What I'd hoped would Just Work, didn't. As I Googled around, I started to get the impression that it wasn't possible to just connect up to a WSS URL from plain JavaScript.

Fortunately, I kept digging and eventually found Paho.MQTT, a library for bridging WebSockets and MQTT. I updated my simple HTML page to pull in the Paho.MQTT library from this CDN:

  <script src="" type="text/javascript"></script>

I then plugged in the WSS host, port, username and password from the Amazon MQ connection area into this code:

var client = new Paho.MQTT.Client(Env.host, Env.port, Env.path, Env.clientId);

client.onMessageArrived = function(m) {
  console.log("Message arrived!", m);
};

client.connect({
  userName: Env.username,
  password: Env.password,
  useSSL: true,
  mqttVersion: 3,
  onFailure: function(e) { console.log("Connection failed", e); },
  onSuccess: function() { console.log("Connected"); }
});

After punching a hole in the security group to allow port 61619, I found that the socket would successfully connect. I wrote a quick Tasker action using the same MQTT Client that I used in my last project, and sent off a message. And bam! the message was received on the iPad. It was like magic. Damn, Paho.MQTT is cool.

I updated the code above to render the message in HTML, picked a sexy font from Google Fonts and made sure to set the meta tag apple-mobile-web-app-capable. The result is a full screen message board which looks surprisingly polished. As a bonus, emoji seamlessly come through. How cool is that?

I whipped up a quick Tasker action that lets me push text to the iPad. I realized that because this is backed by Tasker, any automation I can imagine on my phone can now use the iPad as a display. Say I want to display my step count on the iPad, or the last received text message, or the last incoming phone number. This would all be pretty easy to do now that I've got the iPad listening on an MQTT channel that Tasker can publish to.

Find the source code that powers the iPad component here. It really is amazing how much functionality you can get out of a little plain HTML web app.

Now if you'll excuse me, I need to go compose sweet nothings to my wife.

Friday, February 12, 2021

PimgStack Part 3 - A Tasker Based Phone UI

I have my stack based digital photo frame, PimgStack, nearly functional. The Raspberry Pi side of the equation is working well. I can send it image stack commands and have it react by showing the relevant photo on screen. All that was left to do was to create a simple Android app for sending images and stack commands to the Pi.

Tasker, Of Course

My go-to strategy for building uber-lightweight mobile apps is to turn to Tasker. I tackled building this app in 3 steps.

Step 1. MQTT Communication

I created a Task that pushes messages to PimgStack by publishing an MQTT message. This may sound tricky, but was actually quite simple. There are a number of MQTT Tasker plugins already written, so all I had to do was pick one, fill in the message parameters and I was done.

I first built my Task around the MQTT Publish Plugin, but found that it wasn't reliable. It seemed to randomly fail to publish messages. I then switched to MQTT Client and now message sending is reliable.

Step 2. Core Tasks

I implemented Tasks that correspond to each of the PimgStack operations: push, pop and clear. Pop and clear are trivial because they are static messages. Push is trickier, because it works in terms of a URL. But what if I have a local file and not a URL?

I solved this issue by falling back on another project of mine: 3shrink. I've set up 3shrink so that I can turn any file on my phone into a URL. You can see that I'm using this service to translate anything that doesn't look like a URL into one:
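The check itself boils down to a few lines. This is a hedged sketch: toImageUrl and shrinkToUrl are names I've made up to stand in for the Task's logic and the 3shrink call.

```javascript
// Hypothetical sketch: anything that already looks like a URL passes
// through untouched; anything else is assumed to be a local file and
// is converted to a URL via the 3shrink service (shrinkToUrl stand-in).
function toImageUrl(input, shrinkToUrl) {
  if (/^https?:\/\//.test(input)) {
    return input;            // already a URL
  }
  return shrinkToUrl(input); // local file: shrink into a URL
}
```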

Step 3. Add Shortcuts

Finally, I needed a hassle free way to invoke the tasks above. For the Pop and Clear Tasks I used the Tasker Widget to add 1-click shortcuts to my home screen. They are now a press away from being executed. Push is trickier, as it requires an associated image or URL to an image to operate.

The heavy lifting to solve this problem is done by AutoShare. AutoShare connects a Task with the Android Share menu. From my photo gallery, for example, I can share a pic with the 'pimgstack:push' AutoShare command. When I do this, the file is delivered to the Push Task. This Task detects that it's working with a file, converts it to a URL via 3shrink and then delivers the URL to PimgStack via an MQTT message.

And It Works!

It all works! There are definitely improvements to be made. The process of delivering PimgStack commands takes a few seconds, not the instant behavior I'd hoped for. But the system is stable, I've put an unused Raspberry Pi and monitor to use, and I can beam images from my phone to a screen. I'm calling this a win!

Tuesday, February 09, 2021

PimgStack Part 2 (again) - The Joy of MQTT

My attempt at using AWS SQS for pushing messages to my Raspberry Pi based digital photo frame (aka PimgStack) was a bust. I needed a lighter-weight solution for pushing messages to the Pi. MQTT provides message passing services and does so while being featherweight, so I decided to give it a try.

The Proof of Concept

First, I headed to an AWS EC2 Linux server and installed and started mosquitto:

# Server IP:
$ sudo yum install mosquitto
$ mosquitto -v
1612876887: mosquitto version 1.6.10 starting
1612876887: Using default config.
1612876887: Opening ipv4 listen socket on port 1883.
1612876887: Opening ipv6 listen socket on port 1883.
1612876888: New connection from ... on port 1883.
1612876888: New client connected from ... as mosqsub|2145-pimgstack1 (p2, c1, k60).

I also had to update the server's security group to allow all traffic for port 1883. (This was a temporary measure, of course. Once my proof-of-concept was done, I turned this off.)

I then connected up to my Raspberry Pi, installed mosquitto-clients and kicked off the mosquitto_sub command.

$  sudo apt install mosquitto-clients
$ mosquitto_sub -h -t foo/bar -C 1

-h connects to the server above, -t says to listen on topic foo/bar, and -C 1 says to receive a single message and quit.

Running the above command just hung there. That was perfect.

Finally, I connected to a third Linux box, installed mosquitto there, and invoked mosquitto_pub like so:

$ sudo yum install mosquitto
$ mosquitto_pub -h -t foo/bar -m "Hello World"

To my shock and amazement, "Hello World" was printed on the Pi's screen and the command exited.

I had just demonstrated the pieces of the puzzle needed to power PimgStack.

The Real Deal

Now that I had a way to receive messages from the cloud, I needed to implement code to handle those messages. That was the easy part. You can see the entire source code for the PimgStack Raspberry Pi client here; however, the interesting part is as follows:

while true; do
  message=$(mos_sub pimgstack/1)
  case $message in
    push:*) do_push $message ;;
    pop)    do_pop ;;
    clear)  do_clear ;;
    *)      do_error $message ;;
  esac
done
The above sets up an infinite loop to receive messages. The shell wrapper function mos_sub is invoked to receive each message. When a message arrives, it is trivially parsed: the stack operations 'push', 'pop' and 'clear' are handled, and anything else is treated as an error.

Once the stack is modified, img_display is called and the top most image on the stack is shown. If the stack is empty, a placeholder image is shown.

At this point, I had a very clunky, but functional digital photo frame. The commands below push two images, displaying each, then pop the top one off, revealing the first image that was pushed.

$ mosquitto_pub  -h -t pimgstack/1 -m push:
$ mosquitto_pub  -h -t pimgstack/1 -m push:
$ mosquitto_pub  -h -t pimgstack/1 -m pop

The Raspberry Pi side of the photo frame is now finished. Up next, I need to implement sending MQTT messages from my Android device. And then I'll be ready to push and pop images from my phone with ease.

Thursday, February 04, 2021

Staying Close To Home: Using AutoHotKey to Jump Between Screens

Like most programmers, I've got two monitors and a dislike for taking my fingers off the home row. On my Mac, I was inspired to write a Keyboard Maestro script to hot-key jump between my screens.

On Windows, I got by Alt-Tab'ing. I've got a new Windows box, and in the spirit of experimentation, I wondered if I could write a similar screen-jumping script for Windows.

The obvious choice was to build this in AutoHotKey. A quick Google Search turned up a similar request. A bit of MsgBox experimentation revealed that MouseGetPos returns positive x values for 'Monitor 1' and negative for 'Monitor 2'. SysGet can be used to determine the bounds of a monitor and MouseMove can be used to, well, move the mouse.

Once I had this figured out, I was able to build a version of a ScreenJump that hopped the mouse between the monitors. Version 1 plopped the mouse pointer in the center of the screen:


;; Windows+J does the jump
#j::
MouseGetPos, X, Y
If (X > 0) {
  SysGet, M, Monitor, 2
  Zone := -1
} else {
  SysGet, M, Monitor, 1
  Zone := 1
}
CenterX := ((MRight - MLeft) / 2) * Zone
CenterY := (MBottom - MTop) / 2
MouseMove, CenterX, CenterY, 0
return

After a few days, I refined the above function to capture the mouse coordinate and then jump to the other screen. I use these captured coordinates as the destination to jump back to, rather than always jumping to screen-center. Additionally, I added logic to give focus to the window the mouse lands on.

This function, triggered by Windows+J, has become so embedded in my muscle memory that I can't help but wonder how I lived without it. Such is the joy of keyboard shortcuts.


ScreenJump() {
  global LastM1X, LastM1Y, LastM2X, LastM2Y
  MouseGetPos, X, Y

  ;; X > 0 is one screen, X < 0 is another screen
  ;; Before we jump away from the screen, capture our
  ;; 'last' coordinates so we can return there.
  If (X > 0) {
    LastM1X := X
    LastM1Y := Y
    SysGet, M, Monitor, 2
    Zone := -1
    LastX := LastM2X
    LastY := LastM2Y
  } else {
    LastM2X := X
    LastM2Y := Y
    SysGet, M, Monitor, 1
    Zone := 1
    LastX := LastM1X
    LastY := LastM1Y
  }

  ;; Do we know our last position?
  ;; Great, jump there. If not, go to the center of the screen.
  if (LastX != "") {
    CenterX := LastX
    CenterY := LastY
  } else {
    CenterX := ((MRight - MLeft) / 2) * Zone
    CenterY := (MBottom - MTop) / 2
  }

  ;; Give focus to the window the mouse lands on.
  MouseMove, CenterX, CenterY
  MouseGetPos,,, WinUnderCursor
  WinGetTitle, Title, ahk_id %WinUnderCursor%
  WinActivate, %Title%
}

#j:: ScreenJump()

Wednesday, February 03, 2021

PimgStack Part 2 - An AWS Dead End

With my Raspberry Pi set up, I was ready to move on to the next step of my stack based digital photo-frame project, aka PimgStack. I wanted to tackle the question of how the Pi would pick up messages from my phone, or really any source, telling it which stack operation to perform next. One strategy I'd used in the past was to depend on AWS's Simple Queue Service (SQS). My phone, and other entities interested in manipulating the PimgStack, would drop commands into an SQS queue, and the Raspberry Pi would perform a long-poll to pick up these directives.

The AWS command line tool appeared to support pulling items from an SQS queue. So the plan was simple: install the AWS command line tool on the Pi and write a wrapper script to pull commands from a queue. Easy peasy.

AWS CLI - Strike 1

I downloaded AWS CLI version 2 from Amazon, unpacked it and ran the 'aws' command. No dice, the binary wouldn't run. Apparently, the Pi architecture wasn't compatible with the binaries Amazon provides. Undeterred, I moved on to plan B: I'd install version 1 of the AWS command. That version is Python based, which should work on a Pi.

AWS CLI - Strike 2

I ran sudo pip install aws and patiently waited for the command to finish. Instead of being successful, I got an error. The interesting part was:

    arm-linux-gnueabihf-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fdebug-prefix-map=/build/python2.7-InigCj/python2.7-2.7.16=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/usr/include/python2.7 -c c/_cffi_backend.c -o build/temp.linux-armv7l-2.7/c/_cffi_backend.o
      c/_cffi_backend.c:15:10: fatal error: ffi.h: No such file or directory
       #include <ffi.h>
      compilation terminated.
      error: command 'arm-linux-gnueabihf-gcc' failed with exit status 1

There were a number of recommendations for fixing this on the web. Ultimately, I got around this error by installing libffi-dev.

  sudo apt install -y libffi-dev

When aws was finally installed, I ran it and got this error:

$ aws ec2 help
Traceback (most recent call last):
  File "/usr/local/bin/aws", line 6, in <module>
    from aws.main import main
  File "/usr/local/lib/python2.7/dist-packages/aws/", line 7, in <module>
    from fabric import api as fab
ImportError: cannot import name api

Turns out, I installed the wrong command altogether. 'aws' refers to a now defunct library. What I wanted to install was 'awscli':

$ sudo pip uninstall aws ; sudo pip install awscli

AWS CLI - Strike 3

I finally had the 'aws' command installed, but then ran into a new problem. Kicking off the 'aws' command took forever. And by forever, I mean over 4 seconds:

$ time aws
Note: AWS CLI version 2, the latest major version of the AWS CLI, is now stable and recommended for general use. For more information, see the AWS CLI version 2 installation instructions at:

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: too few arguments

real    0m4.224s
user    0m3.611s
sys     0m0.612s

That's over 4 seconds to do nothing more than print out an error message.

This tells me that while my Raspberry Pi can technically run the aws command line utility, it's really not the right tool for the job. I needed to stop thinking like a programmer working on a general purpose system and start thinking like a programmer working in a resource-limited embedded environment.

So if AWS SQS wasn't the right tool, what was? I had a vague notion that MQTT might help. All I knew about MQTT was that it was used to pass messages to Internet-of-Things devices, many of which have far fewer resources than the Pi. Time to get schooled in MQTT.

Thursday, January 28, 2021

A Stack Based Digital Photo Frame - Part 1

The Idea

I wanted to put some old monitors I had lying around to use. As I contemplated how I could turn them into a sort of digital whiteboard / pinboard, a solution began to emerge. I could snap pics with my phone, click a button and bam! the photos would appear on the monitor.

So yeah, I just reinvented the digital photo frame. While the idea wasn't unique, that didn't make it any less worthy of building. In the spirit of digital recycling, I decided to power my frame with an old Raspberry Pi I had lying around.

To dress things up further, my plan is to set up the frame to be stack based. New images will be pushed onto the stack, and I'll support a 'pop' command that will reveal older images. Depending on how this works, I may add other stack operations like swap and roll.
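The planned stack semantics are simple enough to model in a few lines. This is just a sketch of the behavior described above, not the eventual Pi implementation; the placeholder image name is an assumption:

```javascript
// Model of the planned photo-frame stack: push shows a new image,
// pop reveals the image underneath, clear empties the frame, and an
// empty stack falls back to a placeholder image (name assumed).
var stack = [];
function push(url) { stack.push(url); }
function pop()     { stack.pop(); }
function clear()   { stack = []; }
function current() {
  return stack.length ? stack[stack.length - 1] : "placeholder.png";
}
```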

The Plan

Step one of my plan was relatively simple: I'd install fresh software on the Pi and confirm that I can programmatically display images on the monitor. Once that's accounted for, I'll figure out how to beam images to the Pi from my phone.

On paper, step one should be easy. And yet, as I thought about it, all I could see were hurdles.

Here's how I anticipated it all going down: I'd try to download the latest version of the Pi OS, only to get stuck figuring out which version of the Pi I owned. And when I did finally download the right OS, there was no way I was going to be able to burn it to a Micro SD card from my Chromebook. Surely, I'd have to switch to a Mac or Windows box and download special software to do this. And when I did finally try to boot up the Pi, I was sure it would glitch out on the ancient monitor. Like I said, all I could see were ways this could fail.

The Execution

So you can imagine how elated I was when every prediction of mine was wrong.

The good people at Raspberry Pi make it so that their OS is compatible with all versions of their hardware. From a fancy new Pi to the ancient one I have, the OS Just Works.

On top of that, they provide instructions for installing from a Chromebook, which worked flawlessly. I had no idea that Chrome OS would give me a way to burn images to an SD card, but it does.

I slid the micro SD card containing the OS image into the Pi, connected the Pi via an HDMI cable to a clunky adapter on the back of the monitor, and then plugged in the power cable. A few moments later, the Pi booted up and I found myself at a GUI setup screen. I provided my Wifi settings, told the Pi I wanted to run in console mode and rebooted. I then found myself at a bash prompt.

I couldn't believe how smoothly this all had gone. I installed fbi using the command:

  sudo apt install -y fbi

And then finally, I put the setup to the test by grabbing a JPEG off of one of my servers and attempting to display it:

wget -O us.jpg
fbi -a us.jpg

To my complete shock and amazement, the image rendered on the screen. I had the foundation for the photo frame done without a single hiccup. (And with that statement, I've successfully cursed part 2 of this project).

Up next, I need to figure out how I'm going to deliver images from my Android Phone to the Pi. Still, I couldn't be more amazed at the work the Raspberry Pi foundation has done. Outstanding.

Monday, January 25, 2021

Review: Annihilation - Book 1

I finished listening to Jeff VanderMeer's Annihilation: Book 1 of the Southern Reach Trilogy and I'm so confused. I didn't start that way. The book sucked me in with a Sci-Fi expedition vibe, but by the end I was lost. I see now that the book is part of a series and was turned into a film. Hopefully book two will shed some light on what the heck I just listened to.

In the spirit of the main character of the book, I'm going to record some observations I made during my reading. When I do finally get around to Googling the book, I'll be curious to see how far off I was in my hypotheses.

Spoilers Below

The book definitely brings the biblical symbolism. Area X has many of the properties you'd expect from the Garden of Eden. Multiple ecosystems are smooshed together and the boundaries of the space don't behave like physical dimensions.

When the biologist ingests the 'fruit' of the spores, like Eve, she doesn't die but has the veil of her surroundings lifted.

The words written on the wall have a biblical feel to them. The encounter with the tower monster recalls the struggle prophets have had as they try to describe an encounter with the divine.

Yet the biologist is having none of this. She comes to Area X armed with a unique perspective. She's used to observing ecosystems from a safe distance, understanding that within an ecosystem chaos may reign, while a broader view shows a different story. Her description of encountering a 'Destroyer of Worlds' starfish encapsulates this well. The night may be calm and beautiful to her, but in the tide pool shared by the starfish, the world is collapsing.

The biologist manages to muster this perspective as she finds herself face to face with her own 'Destroyer of Worlds.' If you dropped a philosopher into Area X, you'd no doubt end up with a religious text. But because of the biologist's discipline, we're left with a series of observations that eschew the supernatural and focus on the facts.

Consider how she looks past the words written on the wall, uninterested in considering their meaning and possible implication as a divine message, and focuses on the biology that makes them possible. That would be like Moses ignoring the text of the Ten Commandments, instead focusing on the mechanism that etched letters into stone.

Philosophical ramblings aside, none of this actually explains what the heck is going on in Area X. Are things what they seem? Is this an alien life form that the government sends expeditions to explore, and does so cautiously because it is so overmatched by the creatures that inhabit Area X?

Surely there's more to it than that.

We know that far more expeditions have operated in Area X than the biologist was told. Are they perhaps in some sort of time loop, repeatedly exploring the same area? This may explain why new technology isn't added to the expeditions; it's the same set of expeditions over and over again.

And the biologist's husband refers to an area within Area X as the Southern Reach. Yet, the explorers are exploring on behalf of the Southern Reach. Is that a critical clue? Has the entire world been engulfed by Area X without its citizens knowing? Or is the name simply a coincidence?

Heck, is there even enough information in the text of book one to untangle Area X?

It's tempting to Google this all and find out what the Internet has to say. My plan is to stick with my information embargo until I get through the series. For now, I'm enjoying letting the mystery marinate.

Thursday, January 21, 2021

Hiking the Columbia Air Center Loop

While looking for family friendly local hikes, I came across the verbosely named Columbia Air Center Loop Hike through Patuxent River Park. It checked a number of key boxes: it's close by (only 45 minutes!), is a kid friendly distance (just 4 short miles!) and it's a loop. But what made this hike a must-do for me was the history of the location. 'Columbia Air Center' refers to the name of the air field that operated on the grounds of the hike in the 1940s. It served the area's African American pilots because other air fields were whites-only. This article explains how the air field got its start:

The [African-American] pilots, members of a local aviation organization called the Cloud Club, had recently been kicked off a white-controlled airport in Virginia, so in 1941 they began leasing for $50 a month the 450-acre lot along the Patuxent River. Historical records show that it was among the first black-operated airports in the nation.

I kept an eye on the weather, and anytime there was even a chance of making this hike work, I nagged Shira to go for it. I finally wore her down this past weekend. The weather was iffy at best, and we had our friend's one year old with us. But months of nagging had worn down Shira's defenses.

We pulled into the parking area for the trail and I quickly went off to snap photos. There's not a whole lot of evidence of the former Air Center's glory. There are historic plaques, a compass rose on the ground and a flag pole flying a CAC windsock. Still, the history is real here and I was glad to be among it. There was also a kids play area off in the woods and signs referring to camp sites and other amenities in the area.

As I mentioned, we had R., our friend's one year old, along for the adventure. Shira declared that she was going to carry him for the hike. At about 29 lbs, R. is a whole lot of one year old, so this was a bold promise. Shira, in full beast mode, didn't waver and carried him the entire hike.

The hike's route is shaped like a figure 8. Shira had us hike it in reverse, which ensured we got the lengthy road walk out of the way. The second loop of the hike was nearly all in the woods. I recommend doing the route in this direction.

R. started off annoyed that he'd been woken from the nap he'd started on the drive to the trailhead. Combine that with the blustery weather, and one could understand why he was not the happiest camper. After completing the road walk and hiking in the proper forest, he finally started to warm to the idea of being in the wilderness. His attention was ultimately captured by the trees and other sights of nature around us and he began to genuinely enjoy himself.

There's a waypoint on the hike marking the wreckage of a Piper J-3 Cub. When we arrived at this location I searched but wasn't able to find any evidence of the plane. I gave up, but was rewarded further up the trail with some ancient wreckage. I snapped some photos, and from looking at the distinct triangular shape of the front of the frame, it does look like I found the J-3 Cub. So if you do the hike and don't see the wreckage where the waypoint says it should be, don't give up.

Oh, and another pro tip: there's a detour on the red trail around some mud. I insisted to Shira that we follow the GPX track and not the red arrow, and found ourselves ankle deep in said mud. I thought it was actually the most interesting part of the hike, but then again, your definition of 'interesting' may be different from mine. Best to follow the detour and not obsess about following the GPX track.

Overall, this hike was a pleasant one, though I'm not in a hurry to return to the park. The history of the area is impressive, but there's minimal evidence of the air field and I could have done without the lengthy road-walk. Still, a walk in the woods with family and friends is always going to beat being indoors. If you find yourself near the park, you should absolutely stop by and revel in the soul of the place.

Monday, January 18, 2021

Tasker and Google Sheets: Back On Speaking Terms

Combining Tasker and Google Sheets has on more than one occasion been a massive win. Recently the Tasker Spreadsheet Plugin stopped working and my attempt at using AutoWeb to access the Google Sheets API was a bust (I kept getting this exception). This left me with Plan C: use Tasker's built-in HTTP Actions to roll my own add-to-Google Sheets Task.

I braced myself for the hassle of implementing OAuth 2 using native Tasker actions. And then I learned about HTTP Auth and realized the heavy lifting was done. Building on the progress I made during my AutoWeb attempt, I was able to get my Task working almost too easily. I give you the GSheet Append Row Task, a generic task that makes adding a row to a spreadsheet, dare I say, simple. Here's how you can use it.

Step 1. Setup

Import the GSheet Append Row Task into your Tasker.

Now comes the annoying part: you need to create a Google Cloud Project and OAuth 2 credentials in the Google Developer's Console. You can follow Step 1 from this tutorial to do this.

Ultimately, you're looking for two values: a Client ID and Client Secret. Once you have these values, edit the GSheet Append Row Task, and update the appropriate Set Variable actions at the top.

Once those settings are in place, you can close GSheet Append Row; if all goes well you won't need to edit it again.

Step 2. Call GSheet Append Row

You need to call GSheet Append Row with %par1 set to JSON with the following shape:

  {
    ssid: '[spreadsheet ID - find this in the URL of the spreadsheet]',
    sheet: '[the name of the sheet you wish to append to]',
    row: [
      '[column 1 data]',
      '[column 2 data]',
      ...
    ]
  }

You can prepare this JSON using the strategy outlined here. Though, I prefer to use a JavaScriptlet:

  // the actions before this set:
  //  %ssid to the spreadsheet ID
  //  %now to %DATE
  //  %url to the URL to learn more
  //  %description to the description of the entry
  // I'm looking to add a row with the format:
  //  date | description | url
  // Note: date is being changed from m-d-y to m/d/Y
  var args = JSON.stringify({
    ssid: ssid,
    sheet: "From the Web",
    row: [
      now.replace(/[-]/g, "/"),
      description,
      url
    ]
  });

  // %args is now ready to be passed as %par1.

Once %args has been prepared, it's trivial to invoke PerformTask, setting the task to GSheet Append Row and %args as %par1.
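To make the shape of that request concrete, here's a rough Python sketch of the Sheets API v4 append call the task presumably ends up issuing (the spreadsheet ID and row values are hypothetical; the real task sends the request from Tasker with an OAuth 2 bearer token supplied by HTTP Auth):

```python
# Sketch of the Sheets API v4 "append" request. No network call is made
# here; we just build the URL and body to show their shape.
import json
import urllib.parse

ssid = "1aBcD-exampleSpreadsheetId"   # hypothetical; taken from the sheet's URL
sheet = "From the Web"                # the tab to append to
row = ["02/25/2021", "An example description", "https://example.com"]

# Rows are appended by POSTing to .../values/<range>:append.
url = ("https://sheets.googleapis.com/v4/spreadsheets/"
       f"{ssid}/values/{urllib.parse.quote(sheet)}!A1:append"
       "?valueInputOption=USER_ENTERED")

# The request body wraps the row in a 'values' array-of-arrays.
body = json.dumps({"values": [row]})

print(url)
print(body)
```

An actual call would POST that body to that URL with an `Authorization: Bearer` header, which is exactly the plumbing GSheet Append Row hides from you.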

Here's a Sample Task to see this in action:

Step 3. Celebrate!

If all went well, the first time you run GSheet Append Row it will ask you for permission to run. See this tutorial for details about this first interaction. Once permission has been granted, appending rows should Just Work.

In Conclusion

This turned out to be quite a bit easier than I imagined (thanks João, you rock!). Though, I don't love the fact that HTTP Auth depends on João's auth.html. I'm tempted to make my own version of 'HTTP Auth' that works locally and doesn't require a 3rd party server. Still, for now I'm just happy to be back to writing Tasks that integrate with Google Sheets.

Friday, January 15, 2021

PowerShell For the Win: Short and Sweet Edition

I just wrote this long and winding tale about how Windows PowerShell exceeded my expectations. Consider this post a sort of TL;DR for that monster.

I found myself needing to scale an image. On Linux and MacOS, I'd use the command line friendly ImageMagick. On Windows, I'd turn to Gimp. But I've got a shiny new tool in PowerShell, so I was curious if it could provide a Windows friendly command line solution. It does!

Step 1: I downloaded the Resize-Image Module from Microsoft's Technet Gallery.

Step 2: I launched PowerShell and typed:

## OK, let's do this...

PS C:\Users\benji\Downloads> Import-Module .\Resize-Image.psm1

# Wait, that worked? Oooh, cool! A quick look at the docs says that I
# can use -Display to preview the image. Let me try that.

PS C:\Users\benji\Downloads> Resize-Image -InputFile .\icon.png -Height 200 -Display

## The image was displayed, but smooshed

PS C:\Users\benji\Downloads> Resize-Image -InputFile .\icon.png -Height 200 -Width 200 -Display

## The image is displayed and looks good. Let's make this official.

PS C:\Users\benji\Downloads> Resize-Image -InputFile .\icon.png -Height 200 -Width 200 -OutputFile icon.200x200.png

## That's it? No errors. No verbose output. I love it.

One gotcha: the OutputFile was stored in my home directory, not the same directory as icon.png.

Step 3: I wrote this post.

I'm telling you, if you're a command line user and find yourself on Windows, PowerShell is your friend.

Thursday, January 14, 2021

Review: Artificial Condition: Murderbot Diaries

Move over Reacher, Rapp, and Bond: there's a new action hero in town named, well, he doesn't have a name. But he calls himself Murderbot. Technically Murderbot isn't a person, it's a construct--part robot, part lab-grown organic material. But in so many ways, he embodies all of our struggles. On one hand, he's got a strong moral code, the urge to protect and defend, and epic hacking and smashing skills. On the other hand, more than anything else, he wants to find a dark space to sit and watch 'media.' I just finished book two of the Murderbot Diaries, Artificial Condition by Martha Wells, and it held up to the high bar set by book one. Below are some thoughts on this second book and the series in general. Careful, there are spoilers, so best to skip this post if you haven't read books one and two. And if that's the case, what are you waiting for? Go read them; they're short, smart and fun.

[Spoilers ahead!]

I'm fond of saying that what really makes a book a win for me is when I learn something. Learning in this context can mean almost anything. Consider the body tweaks Murderbot decides to have Art perform on him. When he makes special note of the data port in the back of his head, I assumed he meant he was going to have Art remove it. Murderbot explains that some humans have data ports, but most don't. If he's going to blend in with people, removing it would be the obvious choice.

During the climactic close combat scene we learn the details of how Murderbot had his data port 'taken care of.' He didn't have it removed, instead, he had it internally disconnected.

This was a genius move. Anyone who figured out Murderbot's identity would assume that the data port was his kryptonite. Slap a combat override module in the port and you're good to go. That's precisely what the baddies do, of course. And it doesn't go well for them, because the port is dead.

My lesson: think twice before you hide a perceived weakness. It may be possible to turn that weakness into a strength by leaving it in plain sight and letting others' assumptions work to your advantage.

This strategy reminds me of the tactics employed by physical security tester Jayson E. Street. He describes showing up at sites he wishes to breach in a wheelchair with boxes on his lap. Who's going to be the jerk who doesn't get the door for him?

Zooming out from combat tactics, I really like how Wells tackles the thorny ethical issues that go with the topics of AI, lab-grown human parts and smarter-than-human machines. On its surface, the universe Murderbot occupies is fairly straightforward. There are humans, augmented humans, bots and constructs. Humans, including the augmented variety, have rights; non-humans don't. There's plenty of action and humor to occupy the reader, so they need not question this social structure.

But look a little deeper and you see things get more complex. Consider the makeup of each type of being. Humans are completely organic, augmented humans are a mix of organic and machine, bots are fully machine and constructs are again a mix of organic and machine. In this context, why should constructs be denied rights when they are built from the same materials as an augmented human? Is there that much difference between having organic material grown in a womb and in a lab?

Wells uses our hero, Murderbot, to drive this point home. Not only does he look and act human, but he finds joy in relishing his freedom. When a human asks him to complete a task, it's the ability to say no that helps awaken the Murderbot to his full potential.

Finally, on a completely unrelated note, I can't help but wonder what those in the Autistic community think of Wells' Murderbot. It's quite possible my naive understanding of the Autism spectrum has me connecting unrelated dots, so you'll have to forgive me if this is a reach.

It seems that Murderbot has many mannerisms that would be associated with those on the spectrum. He's a brilliant tactician and combat specialist, and yet he's uncomfortable with even the most basic social interactions. He regularly opts for a 3rd person video view of a scene, rather than looking people in the eye. During a number of interactions, he and Art teamed up to perform sophisticated real time analysis to understand simple body language cues and mannerisms, the type that a 5 year old would have no problem processing. And finally, in the closing scene of Artificial Condition, Murderbot treats a request for a hug the same way I'd treat a request for a root canal. Did Wells intentionally model Murderbot after those with Autism?

Is Murderbot a hero in the Autistic community? An insult? Or a figure that's no more connected to them than I am to James Bond? From a bit of research, it appears that I'm on to something. There's also this bit of explanation from Martha Wells herself:

Question: As a mental health professional, I can't help but notice that, were he a human, Murderbot would likely be considered to be on the autistic spectrum. Was that a conscious choice or more of a coincidence? If it was an intentional decision to have Murderbot and autism overlap, what did you study to better represent neurodiversity on the page?

Answer: Those aspects of the character were based on my own experience. I'm not neurotypical, and I've been affected by depression and anxiety all my life.

That's a lot of insight and questions from a novella. If nothing else, take that as a sign of how good Artificial Condition is.

Wednesday, January 13, 2021

A Windows Friendly Solution to the Video Packaging Problem

A Windows Version, Please

A friend wanted to use my ffmpeg video packaging script. But there was a catch: he needed it to run on Windows. As a bash script, it runs seamlessly on Linux and MacOS; Windows, not so much.

The immediate solutions that came to mind weren't very helpful. I could re-write the script to be cloud based, but that would be terrifically inefficient in terms of both my time and complexity. I could have my friend install a Unix environment on his Windows computer, but that seemed overkill for one script.

I could re-write my script in PHP and have my buddy install XAMPP. But again, that felt like swatting a fly with a sledge hammer.

The exercise I was going through was a familiar one. As a programmer, I'd been given an issue and I was making my way through the 5 stages of grief. Denial: nobody can expect me to make a Windows version of this script. Anger: why can't Windows be Linux! Bargaining: OK, you can have this script, but only if you turn your Windows box into a Linux box. Depression: ugh, Windows. And finally, acceptance: OK, so how would one do this on Windows?

Let's Do This!

I had an idea: I'd previously experimented with PowerShell. And by experimented, I mean I launched the program, was excited to see it did tab completion and then quickly tired of using it. I'm sure it was a step up from the standard DOS box, but it felt far inferior to the bash environment Cygwin provided. Still, PowerShell was a shell, so maybe it had a scripting language I could use?

Finding sample PowerShell scripts was easy. At first I was overwhelmed: PowerShell was considerably more verbose than bash. But the ingredients for shell scripting were all there. After cursing Microsoft, it looked like PowerShell might be just the solution I was looking for.

Fast forward a couple of days, and I now have a version of my video packaging script that runs on Windows; no massive install of support tools required. Grab it here. You can run it by right-clicking on package.ps1 and selecting Run with PowerShell. You can also launch the script within PowerShell using command line arguments. For example:

> .\package.ps1 -Source c:\Users\ben\Downloads\bigvid.mp4 `
                   -Config c:\Users\ben\Documents\blog\video-settings.ini

The Case for PowerShell

Here's a number of features of PowerShell that have left me more than a little impressed.

PowerShell lets me trivially launch a file picker, so that I can allow users to select the video and configuration file in a graphical way. It's also easy to confirm the files are valid and kick up a graphical alert box if they aren't. In other words, PowerShell lets me do Windowsy things easily.

# Let users pick a file. OpenFileDialog lives in Windows Forms,
# which needs to be loaded first.
Add-Type -AssemblyName System.Windows.Forms

function Prompt-File {
  param($Title, $Filter)
  $FileBrowser = New-Object System.Windows.Forms.OpenFileDialog -Property @{
    InitialDirectory = [Environment]::GetFolderPath('MyDocuments')
    Title = $Title
    Filter = $Filter
  }
  $null = $FileBrowser.ShowDialog()

  Return $FileBrowser.FileName
}

# Prompt for the video and config files
$Source = Prompt-File -Title  "Choose Video" -Filter 'MP4 (*.mp4)|*.mp4|QuickTime (*.mov)|*.mov|AVI (*.avi)|*.avi'
$Config = Prompt-File -Title  "Choose Settings File" -Filter 'INI File (*.ini)|*.ini|Text File (*.txt)|*.txt'

PowerShell doesn't natively support parsing ini files, but writing the code to do so was straightforward. I was able to adapt code on the web such that reading an ini file took in an existing set of values.

$settings = @{}
$settings = Parse-IniFile -File $PSScriptRoot\defaults.ini -Init $settings
$settings = Parse-IniFile -File $Config -Init $settings

Here I'm setting the variable $settings to an empty hashtable. I'm then filling the hashtable with the value of defaults.ini and then overriding these values with the user selected config file. This is similar to the variable overriding I did in the Unix version of the script, though arguably it's cleaner.
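For comparison, the same defaults-then-overrides pattern can be sketched with Python's standard library configparser, which also lets later sources win. The section and keys here are made up for illustration, not taken from my actual settings files:

```python
# A Python analogue of the defaults-then-overrides ini merge above.
# Files read later override values read earlier.
import configparser

config = configparser.ConfigParser()

# Stand-in for defaults.ini:
config.read_string("""
[video]
width = 1280
height = 720
""")

# Stand-in for the user-selected config file, overriding one value:
config.read_string("""
[video]
width = 1920
""")

# 'width' comes from the user's file; 'height' falls back to the default.
print(config["video"]["width"], config["video"]["height"])
```

The takeaway is the same in either language: read the defaults first, then layer the user's file on top, and every key ends up with a sensible value.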

I was able to move my custom code to separate files in a lib directory, thereby keeping the main script readable.

I'm impressed how PowerShell natively handles named parameters. Not only can I define functions with named parameters, but I was able to have the script itself trivially take in optional parameters. At the top of package.ps1, I have:

param($Source, $Config, $SkipFcsGeneration)

I then check to see if -Source or -Config was passed in. If either are missing, Prompt-File is invoked.

if(-not $Source -Or (-not $(Test-Path $Source))) {
  $Source = Prompt-File -Title "Choose Video" -Filter 'MP4 (*.mp4)|*.mp4|QuickTime (*.mov)|*.mov|AVI (*.avi)|*.avi'
}

if(-not $Config -Or (-not $(Test-Path $Config))) {
  $Config = Prompt-File -Title "Choose Settings File" -Filter 'INI File (*.ini)|*.ini|Text File (*.txt)|*.txt'
}
These named parameters let me treat my script as a command line tool, and let a typical Windows user think of it as a graphical app.

I also improved the packaged video file itself. I added support for playing audio over the 'pre' title screen. The script uses ffprobe -show_streams to figure out the length of the audio clip and arranges for the title screen to be shown for this duration. PowerShell let me trivially kick off ffprobe and process its output, just like I would do in a bash environment.

Working with the audio stream forced me to understand how ffmpeg's complex filters worked when both video and audio streams are being manipulated. My Aha Moment came when I realized that you filter audio and video separately and that ffmpeg will piece them together. Essentially, I have one concat expression to join all the video streams and one to join the audio streams, and ffmpeg does the right thing to combine the audio and video at the end.
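To illustrate that separation, here's a small Python sketch that assembles a two-segment version of such an ffmpeg command. The input file names are hypothetical, and the real script wires up more streams (including the title-screen audio), but the filter structure is the point: one concat for video, one for audio:

```python
# Build an ffmpeg command that concats the video streams and the audio
# streams of two inputs separately, then maps [v] and [a] back together.
inputs = ["title.mp4", "main.mp4"]   # hypothetical segment files

n = len(inputs)
video_in = "".join(f"[{i}:v]" for i in range(n))   # "[0:v][1:v]"
audio_in = "".join(f"[{i}:a]" for i in range(n))   # "[0:a][1:a]"
filter_complex = (f"{video_in}concat=n={n}:v=1:a=0[v];"
                  f"{audio_in}concat=n={n}:v=0:a=1[a]")

cmd = ["ffmpeg"]
for f in inputs:
    cmd += ["-i", f]
cmd += ["-filter_complex", filter_complex,
        "-map", "[v]", "-map", "[a]", "packaged.mp4"]

print(" ".join(cmd))
```

Each concat expression names only the streams of its own type, and the final -map flags hand the joined video and audio back to ffmpeg to mux into the output.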

A Happy Ending

I finished this little project with not just a video packaging script that works on Windows, but with a fresh perspective on Windows scripting. What PowerShell lacks in tradition and terseness, it more than makes up for in capability and completeness. In short, PowerShell isn't an attempt to implement bash on Windows; it's a fresh and modern take on scripting that really delivers.

Check out my video packaging code over at github.

Wednesday, January 06, 2021

Seeing All The Pretty Colors: optimizing emacs + screen + Secure Shell Chrome App

The Secure Shell Chrome App combined with a remote Linux box, screen and emacs is a game-changer. Among other things, it turns my Chromebook into a programmer friendly hacking environment.

One quirk of this combination is that the color scheme emacs loaded by default was often illegible. I'd typically run M-x customize-themes to pick a better color scheme, but no matter which theme I selected, the colors were always harsh and often left text hard to read.

Turns out, there's a quick fix to this issue. The problem and its fix is described here. It boils down to this: the Secure Shell App sets the terminal type to xterm-256color. Emacs supports this terminal type, granting it access to, well, 256 colors. But I'm running screen, and screen morphs that terminal type to screen.xterm-256color. Emacs doesn't know what to make of this terminal type, so it falls back to 8-color mode.

This became clear when I ran M-x list-colors-display.

The following tells emacs that screen's terminal type is one that it knows:

(add-to-list 'term-file-aliases
             '("screen.xterm-256color" . "xterm-256color"))

I added this code to my init.el, restarted, and suddenly had access to 256 colors.

Now the default emacs color choices make sense. Such a simple fix, and I didn't even realize I had this problem.