Thursday, February 25, 2021

Preparing for the Unthinkable, Part 1 - Until Help Arrives Training

Last week Shira and I took Arlington County's Until Help Arrives (UHA) training. This is one of the County's strategies for preparing for mass casualty events, such as shootings, terrorist attacks and other unthinkable scenarios.

UHA focuses on both mindset and skills. The skills are designed to deal with a handful of medical conditions that meet two essential criteria: they're deadly and they can be easily treated. This all took place over Zoom, though I assume we'll follow up with in-person training in the future.

I recommend the course, and think it complements CPR and other first aid training well. It gives you specific skills you can use in a crisis without information overload.

If you're on the fence as to whether this training is for you, or you're not in Arlington County, I'd suggest watching the prerequisite video. It isn't a replacement for the class, but it does cover a significant amount of the course material and is well done. I've embedded the video below.

Another resource to check out is the Tactical Emergency Casualty Care (TECC) Guidelines for Active Bystanders. UHA is based on these guidelines. You could assume, as I did, that a document with such a verbose name would be a dense and complex read. You'd be wrong. The TECC Active Bystander Guidelines is essentially a three-page, bulleted list of what a non-trained individual can do to help in a mass casualty event. It's quite accessible.

Whenever I attend training like this, I try to take action afterwards to cement my learning. I'm currently listening to The Unthinkable: Who Survives When Disaster Strikes - and Why by Amanda Ripley, which was recommended during the session. I've updated my First Aid Cheatsheet to include the TECC recommendations. Heck, I'm publishing this blog post. However, the main action I've undertaken is to organize a trauma kit. That's the subject of part 2 of this series; stay tuned.

Update: your wait is over. Check out the trauma kit post.

Wednesday, February 24, 2021

Review: Rogue Protocol, Exit Strategy and Network Effect

I binge-read (well, listened to) the three remaining books in Martha Wells' Murderbot Diaries series: Rogue Protocol, Exit Strategy and Network Effect. I did so with the gusto normally reserved for binge-watching a TV series I'm hooked on. While the books have similar plots, the characters and scenarios were different enough that I never tired of them. What a joy it is to find a series that you just can't put down.

I continue to be impressed by the diversity that Wells brings to her characters, both human and non-human. At times, it's bots that show brilliance and empathy and at other times it's humans that do so.

I kept thinking back to the TV show The Good Doctor. Murderbot and the protagonist of the TV show, Dr. Shaun Murphy, are brilliant problem solvers with superhuman gifts, yet they can be stumped by the simplest of human interactions. One feature of the TV show is that it manages to show Murphy in both these lights, casting a truly fascinating shadow on those around him. There are times you may feel sympathy for Murphy and wish he could be 'normal', though just as often you feel sorry for his co-workers who are, alas, just 'normal.'

Wells brings this same dynamic to the Murderbot, and it adds real depth to the characters.

I noted in my review of Artificial Condition that Wells cleverly highlights ethical questions about AI and cloning. I say cleverly, because she manages to wrap these very thorny problems in a veneer of fun and adventure. I liked how she continued this practice in the three remaining books I read.

I don't recall which book it was, but at one point one of the humans explains to Murderbot that her ancestors, like him, arrived on their planet packed in the cargo hold. I found this exchange to be incredibly powerful, as it brought to mind images of slave ships and their implications.

As with all good series, I was both pleased and bummed to finally finish it. That said, I see from looking up the books on Amazon that there's a sixth book coming out. I can't wait!

Monday, February 22, 2021

A Tasker based OPM Status Monitor

The DC Area's 'Snowday For Adults' Indicator

The DC area just endured another bust of a winter storm. While we hoped and prepared for inches of snow, we got sleet instead.

One indicator for how extreme upcoming weather will be is the Federal Government's Office of Personnel Management's (OPM) status page, opm.gov/status. The directives on this page tell Federal employees when they need to report to the office. At a minimum, it impacts something like 280,000 local residents.

When the OPM status page announces that the government is closed, thousands of people stay home and adults get to bask in the joy of hearing there's a snow day. On a practical level, you can expect many local organizations will follow OPM's lead and be open or closed for the day.

During this last winter-storm event I learned that not only does OPM have a web page and app to check the current status, but they have an API endpoint as well. I wasn't quite sure what I could do with this API, but I couldn't resist doing something with it.

OPM Alerts in Tasker

One easy way to experiment with the API was to leverage Tasker. I used a now familiar pattern to do so.

  1. I used the HTTP Request action to invoke the API at https://www.opm.gov/json/operatingstatus.json.
  2. I used the JavaScriptlet action to parse the JSON response from the API, storing the relevant information in local variables (a sketch of this step appears after the list).
  3. I used standard Tasker actions to process this parsed data, including: checking to see if the status had changed since I last checked it, using the Say action to read the new status aloud, and using the Notify action to trigger a notification.
  4. I created a profile that runs every two minutes throughout the day to invoke the task described above.
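
For the curious, the JavaScriptlet in step 2 looks roughly like this. Consider it a sketch: %http_data is the variable Tasker's HTTP Request action fills with the response body, and the JSON field names below are my assumptions about OPM's payload, not documented values.

// Sketch of the parsing JavaScriptlet. 'http_data' holds the HTTP Request
// response body; the field names are assumptions about OPM's JSON.
var data = JSON.parse(http_data);
var opm_status = data.StatusSummary || '';
var opm_type = data.StatusType || '';
// Back in Tasker, %opm_status and %opm_type are now ordinary local
// variables the rest of the task can compare, speak and notify with.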

All of this came together surprisingly easily. My plan is to leave the profile that queries OPM off, and turn it on next time we have a storm on the horizon. I should then be notified if and when OPM decides to change their status.

You can grab the code for the OPM Monitor Profile and Task from TaskerNet.

But Does It Work?

I figured I'd have to wait until the next big snow to know whether the OPM Monitor above works. However, last Saturday morning at 12:07am, I was dramatically awoken by my phone making an announcement:

OPM Alert: STATUS: OPEN WITH MAXIMUM TELEWORK FLEXIBILITIES TO ALL CURRENT TELEWORK ELIGIBLE EMPLOYEES, PURSUANT TO DIRECTION FROM AGENCY HEADS

Apparently I had left the profile active, and the OPM site was resetting its status back to 'open.' So yeah, the profile works. Actually, it may work too well. I may need to adjust it so that between certain hours of the day it doesn't make a verbal announcement.
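
One way to do that, sketched below and untested, is to compute a flag in the JavaScriptlet and only run the Say action when the flag is set. The 7am to 10pm window here is arbitrary.

// Untested sketch: only allow spoken announcements during waking hours.
var hour = new Date().getHours();
var speak_ok = (hour >= 7 && hour < 22) ? 1 : 0;
// In Tasker, wrap the Say action in an If check on %speak_ok.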

Thursday, February 18, 2021

Dialing Up The Romance on My iPad Love Note Delivery System

One could argue that my Tasker / iPad powered Love Note Delivery System lacked a certain romantic spark. But I've fixed that. Check out v2:

It looks awfully similar to version 1 with one important difference. In the bottom right hand corner there's a number (in the snapshot above, it reads 17). That's the magic, right there.

I updated the Tasker code so that before the love note is sent to the iPad it's recorded in a Google Sheet. This change means that notes aren't just displayed on the iPad, but recorded for posterity. If I keep my love note authoring up, by the end of the year I should have hundreds of them. Perhaps I'll compile them into a book or poster to be delivered for next Valentine's Day.

Recording data in a Google Sheet from Tasker is a problem I recently tackled, so that part of the code was trivial. What was new territory was deriving the sequence number to show in the bottom right hand corner.

My first plan: query the Google Sheet using the GSheets API to determine how many data rows are in the current spreadsheet. Alas, I couldn't find an API endpoint that returned this information.

I could pull back all the love notes currently stored in the Google Sheet and count them, but that's inefficient and would get more so as I added notes.

Fortunately, there was an easy solution: after appending rows to a sheet, the API returns a JSON object reporting what changed. From this, I can tell which row was inserted into the spreadsheet, and from there derive the sequence number.

I updated my GSheets Append Row task to return the server's API response. I then parsed this response like so:

var r = JSON.parse(results);
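// updatedRange comes back in A1 notation, something like 'Sheet1!A18:B18'.
// Stripping everything up to the last capital letter leaves the row number;
// subtracting 1 (presumably for the header row) gives the sequence number.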
var seq = r.updates.updatedRange.replace(/.*[A-Z]/, '') - 1;
var payload = JSON.stringify({
  message: message,
  sequence: seq
});

(Find the complete Tasker code here)

I've updated the web page the iPad uses to receive messages to parse the payload structure described above. The page now finds a sequence value in the payload, which it proudly displays in the bottom right hand corner of the screen.
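
The page-side change is only a few lines in the MQTT message handler. Here's a sketch; the element IDs (note and sequence) are made up for illustration and aren't the page's actual markup.

// Sketch: parse the richer payload and update the page.
// The element IDs are assumptions, not the real markup.
client.onMessageArrived = function(m) {
  var payload = JSON.parse(m.payloadString);
  document.getElementById("note").textContent = payload.message;
  document.getElementById("sequence").textContent = payload.sequence;
};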

Now if you'll excuse me, I better get writing. These love notes aren't going to compose themselves!*


*Or could they? Note to self: investigate machine learning strategies to generate love notes.

Wednesday, February 17, 2021

Just A Bit Overengineered: A JavaScript Based, MQTT Powered, iPad Love Note System

The Inspiration: Itty Bitty Sidekick Screens

A few months ago, on a trip to Best Buy, I found myself impressed with their price tags. Not their prices, but the tags themselves. Apparently, Best Buy replaced old school plastic tags with E-Ink displays. I left that store wondering if I could get my hands on my own little E-Ink display and use it to power a project.

Fast forward to this last weekend, when I read about two more e-ink based projects: an Always On E-Ink Org Agenda and a Literary Clock Made from an E-reader. The second project is especially interesting because it describes repurposing an old Kindle rather than building an e-ink project from scratch.

I didn't have a Kindle lying around, but I did have an iPad that wasn't getting much use. It wasn't an e-ink display, but I wondered if it could stand in for one.

And then Valentine's Day arrived and I realized what I could use a sidekick screen for: as a message board for leaving love notes to my beloved.

Did I need to turn an iPad into the equivalent of a digital Post-It note? Uh yes, yes I did.

Wait a Minute, Didn't I Just Do This Project?

I just got done building PimgStack, a project to recycle hardware by using my phone to deliver messages to a device that would display said messages on screen. Isn't this the same thing, except instead of using a Raspberry Pi I'm using an iPad, and instead of delivering images I'm delivering love notes? Yes and yes.

So if they're basically the same project, I found myself wondering if I could recycle any code to make my iPad project come together faster.

Powering an iPad Love Note Delivery System

What made PimgStack work was the message delivery mechanism, MQTT. This left me wondering how I could push MQTT messages to an iPad and, once they arrived, how I could react to them. I got an important clue when I set up the MQTT broker on Amazon's MQ service.

Amazon's MQ service provides multiple connection points that talk different protocols. In my Raspberry Pi project I made use of MQTT. A quick Google Search revealed that 'wss' is the Secure Web Sockets protocol. I'd never used Web Sockets, but I figured they must be built into the latest version of Safari running on the iPad.

The Plan: Web Sockets talking to Amazon MQ

If I could write a web page that opened up a Web Socket to talk to Amazon MQ's WSS endpoint, then I'd be home free. It could listen on the socket for messages and when they arrived, render the message in HTML. The page would then repeat the process, waiting for another message to arrive.

I whipped up a quick web page, attempted to open up a WebSocket using the WSS URL and ... nothing. What I'd hoped would Just Work, didn't. As I Googled around, I started to get the impression that it wasn't possible to just connect up to a WSS URL from plain JavaScript.

Fortunately, I kept digging and eventually found Paho.MQTT, a library for bridging WebSockets and MQTT. I updated my simple HTML page to pull in the Paho.MQTT library from this CDN:

  <script src="https://cdnjs.cloudflare.com/ajax/libs/paho-mqtt/1.0.1/mqttws31.min.js" type="text/javascript"></script>

I then plugged in the WSS host, port, username and password from the Amazon MQ connection area into this code:

var client = new Paho.MQTT.Client(Env.host, Env.port, Env.path, Env.clientId);

client.onMessageArrived = function(m) {
  console.log("Message arrived!", m);
};

client.connect({
  userName: Env.username,
  password: Env.password,
  useSSL: true,
  mqttVersion: 3,
  onFailure: function(e) {
    console.log("FAIL!");
  },
  onSuccess: function() {
    console.log("WHoo!");
    client.subscribe(Env.topic);
  }
});

After punching a hole in the security group to allow port 61619, I found that the socket would successfully connect. I wrote a quick Tasker action using the same MQTT Client plugin that I used in my last project, and sent off a message. And bam! The message was received on the iPad. It was like magic. Damn, Paho.MQTT is cool.

I updated the code above to render the message in HTML, picked a sexy font from Google Fonts and made sure to set the meta tag apple-mobile-web-app-capable. The result is a full screen message board which looks surprisingly polished. As a bonus, emoji seamlessly come through. How cool is that?
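
The rendering change itself is tiny; it amounts to something like this, where the note element ID is made up for the sketch:

// Sketch of the render step: drop the incoming text into the page
// instead of logging it. The "note" element ID is an assumption.
client.onMessageArrived = function(m) {
  document.getElementById("note").textContent = m.payloadString;
};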

I whipped up a quick Tasker action that lets me push text to the iPad. I realized that because this is backed by Tasker, any automation I can imagine on my phone can now use the iPad as a display. Say I want to display my step count, the last received text message, or the last incoming phone number. This would all be pretty easy to do now that I've got the iPad listening on an MQTT channel that Tasker can publish to.

Find the source code that powers the iPad component here. It really is amazing how much functionality you can get out of a little plain HTML web app.

Now if you'll excuse me, I need to go compose sweet nothings to my wife.

Friday, February 12, 2021

PimgStack Part 3 - A Tasker Based Phone UI

I have my stack based digital photo frame, PimgStack, nearly functional. The Raspberry Pi side of the equation is working well. I can send it image stack commands and have it react by showing the relevant photo on screen. All that was left to do was to create a simple Android app for sending images and stack commands to the Pi.

Tasker, Of Course

My go-to strategy for building uber-lightweight mobile apps is to turn to Tasker. I tackled building this app in three steps.

Step 1. MQTT Communication

I created a Task that pushes messages to PimgStack by publishing an MQTT message. This may sound tricky, but it was actually quite simple. There are a number of MQTT Tasker plugins already written, so all I had to do was pick one, fill in the message parameters and I was done.

I first built my Task around the MQTT Publish Plugin, but found that it wasn't reliable. It seemed to randomly fail to publish messages. I then switched to MQTT Client and now message sending is reliable.

Step 2. Core Tasks

I implemented Tasks that correspond to each of the PimgStack operations: push, pop and clear. Pop and clear are trivial because they are static messages. Push is trickier, because it works in terms of a URL. But what if I have a local file and not a URL?

I solved this issue by falling back on another project of mine: 3shrink. I've set up 3shrink so that I can turn any file on my phone into a URL. The Push Task uses this service to translate anything that doesn't look like a URL into one.
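
The logic boils down to something like the sketch below. This isn't the actual Task, and get_3shrink_url is a hypothetical helper standing in for however 3shrink converts a local file into a URL.

// Sketch only: 'input' is the shared file path or URL, and
// get_3shrink_url is a hypothetical stand-in for the 3shrink call.
var image = input;
if (!/^https?:\/\//.test(image)) {
  image = get_3shrink_url(image);   // local file: turn it into a URL first
}
// 'image' is now a URL suitable for a push:<url> MQTT message.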

Step 3. Add Shortcuts

Finally, I needed a hassle free way to invoke the tasks above. For the Pop and Clear Tasks I used the Tasker Widget to add 1-click shortcuts to my home screen. They are now a press away from being executed. Push is trickier, as it requires an associated image or URL to an image to operate.

The heavy lifting to solve this problem is done by AutoShare. AutoShare connects a Task with the Android Share menu. From my photo gallery, for example, I can share a pic with the 'pimgstack:push' AutoShare command. When I do this, the file is delivered to the Push Task. This Task detects that it's working with a file, converts it to a URL via 3shrink and then delivers the URL to PimgStack via an MQTT message.

And It Works!

It all works! There are definitely improvements to be made. The process of delivering PimgStack commands takes a few seconds, not the instant behavior I'd hoped for. But the system is stable, I've put an unused Raspberry Pi and monitor to use, and I can beam images from my phone to a screen. I'm calling this a win!

Tuesday, February 09, 2021

PimgStack Part 2 (again) - The Joy of MQTT

My attempt at using AWS SQS for pushing messages to my Raspberry Pi based digital photo frame (aka PimgStack) was a bust. I needed a lighter-weight solution for pushing messages to the Pi. MQTT provides message passing services and does so while being featherweight, so I decided to give it a try.

The Proof of Concept

First, I headed to an AWS EC2 Linux server and installed and started mosquitto:

# Server IP: 192.168.1.213
$ sudo yum install mosquitto
...
$ mosquitto -v
1612876887: mosquitto version 1.6.10 starting
1612876887: Using default config.
1612876887: Opening ipv4 listen socket on port 1883.
1612876887: Opening ipv6 listen socket on port 1883.
1612876888: New connection from ... on port 1883.
1612876888: New client connected from ... as mosqsub|2145-pimgstack1 (p2, c1, k60).
...

I also had to update the server's security group to allow all traffic for port 1883. (This was a temporary measure, of course. Once my proof-of-concept was done, I turned this off.)

I then connected up to my Raspberry Pi, installed mosquitto-clients and kicked off the mosquitto_sub command.

$  sudo apt install mosquitto-clients
...
$ mosquitto_sub -h 192.168.1.213 -t foo/bar -C 1

-h connects to the server above, -t says to listen on the topic foo/bar, and -C 1 says to exit after receiving a single message.

Running the above command just hung there. That was perfect.

Finally, I connected to a third Linux box, installed mosquitto there, and invoked mosquitto_pub like so:

$ sudo yum install mosquitto
...
$ mosquitto_pub -h 192.168.1.213 -t foo/bar -m "Hello World"

To my shock and amazement, "Hello World" was printed on the Pi's screen and the command exited.

I had just demonstrated the pieces of the puzzle needed to power PimgStack.

The Real Deal

Now that I had a way to receive messages from the cloud, I needed to implement code to handle those messages. That was the easy part. You can see the entire source code for the PimgStack Raspberry Pi client here; the interesting part is as follows:

while true; do
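  # Block until the next command arrives on the pimgstack/1 topic.
  # (mos_sub is a shell wrapper that receives a single MQTT message.)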
  message=$(mos_sub pimgstack/1)
  case $message in
    push:*)
      do_push $message
      ;;

    pop)
      do_pop
      ;;

    clear)
      do_clear
      ;;
    *)
      do_error $message
      ;;
  esac
  img_display
done

The above sets up an infinite loop: the shell wrapper function mos_sub blocks until a message arrives, and when one does, it is trivially parsed. The stack operations 'push', 'pop' and 'clear' are handled; anything else is treated as an error.

Once the stack is modified, img_display is called and the top most image on the stack is shown. If the stack is empty, a placeholder image is shown.

At this point, I had a very clunky, but functional, digital photo frame. The commands below push two images, displaying each in turn, then pop the top one off, revealing the first image that was pushed.

$ mosquitto_pub  -h 192.168.1.213 -t pimgstack/1 -m push:https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPoJ3YPU-CU74S9vR_dS0Qjon4BY8QB0aRgjToWVGEg6d3BM9DjugGBktFAODEGK19cAEtKVpfLs8SkIg7Phln7p8Romo2BPLJGMbjvzafBv21TLMljqsBxkGLATuioU-ZCipM/s4032/20200808_200622.jpg
$ mosquitto_pub  -h 192.168.1.213 -t pimgstack/1 -m push:https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGAEMuswEi-6PRl80WPhyphenhyphenJ_QP7oAxxabWO-4qWTbY9lez-0sd5JfUtDHvzmKLghuaPu8vrTLFXY-c4ZJ5J2X3GflMPAUAqSiwX6yNoUg1SfbuAPT1kVtGgtMDIgPoae6rpZBCk/s4032/20200809_093032.jpg
$ mosquitto_pub  -h 192.168.1.213 -t pimgstack/1 -m pop

The Raspberry Pi side of the photo frame is now finished. Up next, I need to implement sending MQTT messages from my Android device. And then I'll be ready to push and pop images from my phone with ease.

Thursday, February 04, 2021

Staying Close To Home: Using AutoHotKey to Jump Between Screens

Like most programmers, I've got two monitors and a dislike for taking my fingers off the home-row. On my Mac, I was inspired to write a Keyboard Maestro script to hot-key jump between my screens.

On Windows, I got by Alt-Tab'ing. I've got a new Windows box, and in the spirit of experimentation, I wondered if I could write a similar screen-jumping script for Windows.

The obvious choice was to build this in AutoHotKey. A quick Google Search turned up a similar request. A bit of MsgBox experimentation revealed that MouseGetPos returns positive x values for 'Monitor 1' and negative for 'Monitor 2'. SysGet can be used to determine the bounds of a monitor and MouseMove can be used to, well, move the mouse.

Once I had this figured out, I was able to build a ScreenJump routine that hops the mouse between the monitors. Version 1 plopped the mouse pointer in the center of the screen:

#SingleInstance,Force
CoordMode,Mouse,Screen

;; Windows+J does the jump
#j::
MouseGetPos,X,Y
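;; Positive X means the mouse is on Monitor 1, negative means Monitor 2
;; (per the MsgBox experiment described above).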
If(X > 0) {
  SysGet, M, Monitor, 2
  Zone := -1
} else {
  SysGet, M, Monitor,1
  Zone := 1
}
CenterX := ((MRight - MLeft) / 2) * Zone
CenterY := (MBottom - MTop) / 2
MouseMove CenterX, CenterY, 0
return

After a few days, I refined the above function to capture the mouse coordinates before jumping to the other screen. I use these captured coordinates as the destination to jump back to, rather than always jumping to screen-center. Additionally, I added logic to give focus to the window the mouse lands on.

This function, triggered by Windows-j, has become so embedded in muscle memory that I can't help but wonder how I lived without it. Such is the joy of keyboard shortcuts.

#SingleInstance,Force
CoordMode,Mouse,Screen

ScreenJump() {
  MouseGetPos,X,Y
  global LastM1X, LastM1Y, LastM2X, LastM2Y

  ;; X > 0 is one screen, X < 0 is another screen
  ;; Before we jump away from the screen, capture our
  ;; 'last' coordinates so we can return there.
  If(X > 0) {
    LastM1X := X
    LastM1Y := Y
    SysGet, M, Monitor, 2
    Zone := -1
    LastX := LastM2X
    LastY := LastM2Y
  } else {
    LastM2X := X
    LastM2Y := Y
    SysGet, M, Monitor,1
    Zone := 1
    LastX := LastM1X
    LastY := LastM1Y
  }

  ;; Do we know our last position?
  ;; Great, jump there. If not, go to the center of the window.
  if(LastX != "") {
    CenterX := LastX
    CenterY := LastY
  } else {
    CenterX := ((MRight - MLeft) / 2) * Zone
    CenterY := (MBottom - MTop) / 2
  }

  ;; Give Focus to the window the Mouse is hovering over.
  MouseMove CenterX, CenterY
  MouseGetPos,,,GuideUnderCursor
  WinGetTitle, Title, ahk_id %GuideUnderCursor%
  WinActivate, %Title%
  return
}

#j:: ScreenJump()

Wednesday, February 03, 2021

PimgStack Part 2 - An AWS Dead End

With my Raspberry Pi set up, I was ready to move on to the next step of my stack based digital photo-frame project, aka PimgStack. I wanted to tackle the question of how the Pi would pick up messages from my phone, or really any source, telling it which stack operation to perform next. One strategy I'd used in the past was to depend on AWS's Simple Queue Service (SQS). My phone, or any other entity interested in manipulating PimgStack, would drop commands into an SQS queue, and the Raspberry Pi would perform a long-poll to pick up these directives.

The AWS command line tool appeared to support pulling items from an SQS queue. So the plan was simple: install the AWS command line tool on the Pi and write a wrapper script to pull commands from a queue. Easy peasy.

AWS CLI - Strike 1

I downloaded AWS CLI version 2 from Amazon, unpacked it and ran the 'aws' command. No dice: the binary wouldn't run. Apparently, the Pi's architecture isn't compatible with the binaries Amazon provides. Undeterred, I moved on to plan B: I'd install version 1 of the AWS command. That version is Python-based, which should work on a Pi.

AWS CLI - Strike 2

I ran sudo pip install aws and patiently waited for the command to finish. Instead of success, I got an error. The interesting part was:

    arm-linux-gnueabihf-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fdebug-prefix-map=/build/python2.7-InigCj/python2.7-2.7.16=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/usr/include/python2.7 -c c/_cffi_backend.c -o build/temp.linux-armv7l-2.7/c/_cffi_backend.o
      c/_cffi_backend.c:15:10: fatal error: ffi.h: No such file or directory
       #include <ffi.h>
                ^~~~~~~
      compilation terminated.
      error: command 'arm-linux-gnueabihf-gcc' failed with exit status 1

There were a number of recommendations on the web for fixing this. Ultimately, I got around the error by installing libffi-dev:

  sudo apt install -y libffi-dev

When aws was finally installed, I ran it and got this error:

$ aws ec2 help
Traceback (most recent call last):
  File "/usr/local/bin/aws", line 6, in <module>
    from aws.main import main
  File "/usr/local/lib/python2.7/dist-packages/aws/main.py", line 7, in <module>
    from fabric import api as fab
ImportError: cannot import name api

Turns out, I installed the wrong command altogether. 'aws' refers to a now defunct library. What I wanted to install was 'awscli':

$ sudo pip uninstall aws ; sudo pip install awscli

AWS CLI - Strike 3

I finally had the 'aws' command installed, but then ran into a new problem. Kicking off the 'aws' command took forever. And by forever, I mean over 4 seconds:

$ time aws
Note: AWS CLI version 2, the latest major version of the AWS CLI, is now stable and recommended for general use. For more information, see the AWS CLI version 2 installation instructions at: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: too few arguments

real    0m4.224s
user    0m3.611s
sys     0m0.612s

That's over 4 seconds to do nothing more than print out an error message.

This tells me that while my Raspberry Pi can technically run the aws command line utility, it's really not the right tool for the job. I needed to stop thinking like a programmer working on a general purpose system and start thinking like a programmer working in a resource-limited embedded environment.

So if AWS SQS wasn't the right tool, what was? I had a vague notion that MQTT might help. All I knew about MQTT was that it was used to pass messages to Internet-of-Things devices, many of which have far fewer resources than the Pi. Time to get schooled in MQTT.