Thursday, January 21, 2021

Hiking the Columbia Air Center Loop

While looking for family-friendly local hikes, I came across the verbosely named Columbia Air Center Loop Hike through Patuxent River Park. It checked a number of key boxes: it's close by (only 45 minutes!), it's a kid-friendly distance (just 4 short miles!) and it's a loop. But what made this hike a must-do for me was the history of the location. 'Columbia Air Center' refers to the air field that operated on the grounds of the hike in the 1940s. It served the area's African American pilots because other air fields were whites-only. This article explains how the air field got its start:

The [African-American] pilots, members of a local aviation organization called the Cloud Club, had recently been kicked off a white-controlled airport in Virginia, so in 1941 they began leasing for $50 a month the 450-acre lot along the Patuxent River. Historical records show that it was among the first black-operated airports in the nation.

I kept an eye on the weather, and anytime there was even a chance of making this hike work, I nagged Shira to go for it. I finally wore her down this past weekend. The weather was iffy at best, and we had our friend's one year old with us. But months of nagging had left Shira's defenses depleted.

We pulled into the parking area for the trail and I quickly went off to snap photos. There's not a whole lot of evidence of the former Air Center's glory. There are historic plaques, a compass rose on the ground and a flag pole flying a CAC windsock. Still, the history is real here and I was glad to be among it. There was also a kids play area off in the woods and signs referring to camp sites and other amenities in the area.

As I mentioned, we had R., our friend's one year old, along for the adventure. Shira declared that she was going to carry him for the hike. At about 29 lbs, R. is a whole lot of one year old, so this was a bold promise. Shira, in full beast mode, didn't waver and carried him the entire hike.

The hike's route is shaped like a figure 8. Shira had us hike it in reverse, which ensured we got the lengthy road walk out of the way early. The second loop of the hike was nearly all in the woods. I recommend doing the route in this direction.

R. started off annoyed that he'd been woken from the nap he'd started on the drive to the trailhead. Combine that with the blustery weather, and one could understand why he was not the happiest camper. After we completed the road walk and were hiking in proper forest, he finally started to warm to the idea of being in the wilderness. His attention was ultimately captured by the trees and other sights of nature around us, and he began to genuinely enjoy himself.

There's a waypoint on the hike marking the wreckage of a Piper J-3 Cub. When we arrived at this location I searched but wasn't able to find any evidence of the plane. I gave up, but was rewarded further up the trail with some ancient wreckage. I snapped some photos, and judging by the distinct triangular shape of the front of the frame, it does look like I found the J-3 Cub. So if you do the hike and don't see the wreckage where the waypoint says it should be, don't give up.

Oh, and another pro tip: there's a detour on the red trail around some mud. I insisted to Shira that we follow the GPX track and not the red arrow, and we found ourselves ankle deep in said mud. I thought it was actually the most interesting part of the hike, but then again, your definition of 'interesting' may be different from mine. Best to follow the detour and not obsess about following the GPX track.

Overall, this hike was a pleasant one, though I'm not in a hurry to return to the park. The history of the area is impressive; however, there's minimal evidence of the air field, and I could have done without the lengthy road walk. Still, a walk in the woods with family and friends is always going to beat being indoors. If you find yourself near the park, you should absolutely stop by and revel in the soul of the place.

Monday, January 18, 2021

Tasker and Google Sheets: Back On Speaking Terms

Combining Tasker and Google Sheets has on more than one occasion been a massive win. Recently the Tasker Spreadsheet Plugin stopped working, and my attempt at using AutoWeb to access the Google Sheets API was a bust (I kept getting this exception). This left me with Plan C: use Tasker's built-in HTTP Actions to roll my own add-to-Google-Sheets Task.

I braced myself for the hassle of implementing OAuth 2 using native Tasker actions. And then I learned about HTTP Auth and realized the heavy lifting was already done. Building on the progress I made during my AutoWeb attempt, I was able to get my Task working almost too easily. I give you the GSheet Append Row Task, a generic task that makes adding a row to a spreadsheet, dare I say, simple. Here's how you can use it.

Step 1. Setup

Import the GSheet Append Row Task into your Tasker.

Now comes the annoying part: you need to create a Google Cloud Project and OAuth 2 credentials in the Google Developer's Console. You can follow Step 1 from this tutorial to do this.

Ultimately, you're looking for two values: a Client ID and Client Secret. Once you have these values, edit the GSheet Append Row Task, and update the appropriate Set Variable actions at the top.

Once those settings are in place, you can close GSheet Append Row; if all goes well you won't need to edit it again.

Step 2. Call GSheet Append Row

You need to call GSheet Append Row with %par1 set to JSON with the following shape:

  {
    ssid: '[spreadsheet ID - find this in the URL of the spreadsheet]',
    sheet: '[the name of the sheet you wish to append to]',
    row: [
      '[column 1 data]',
      '[column 2 data]',
      ...
    ]
  }

You can prepare this JSON using the strategy outlined here. Though, I prefer to use a JavaScriplet:

  // the actions before this set:
  //  %ssid to the spreadsheet ID
  //  %now to %DATE
  //  %url to the URL to learn more
  //  %description to the description of the entry
  // I'm looking to add a row with the format:
  //  date | description | url
  // Note: date is being changed from m-d-y to m/d/Y
  var args = JSON.stringify({
    ssid: ssid,
    sheet: "From the Web",
    row: [
      now.replace(/[-]/g, "/"),
      description,
      url
    ]
  });

  // %args is now ready to be passed as %par1.

Once %args has been prepared, it's trivial to invoke PerformTask, setting the task to GSheet Append Row and %args as %par1.
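For the curious, the HTTP call that appending a row ultimately requires boils down to a POST against the Sheets API's values.append endpoint. Here's a rough JavaScript sketch of that request; the endpoint and payload shape come from Google's API docs, but the buildAppendRequest function and the sample values are mine, not the Task's actual implementation:

```javascript
// Build (but don't send) the HTTP request that appends one row.
// The Sheets API endpoint is real; buildAppendRequest and the
// sample values below are illustrative.
function buildAppendRequest(token, args) {
  const range = encodeURIComponent(args.sheet);
  return {
    url: "https://sheets.googleapis.com/v4/spreadsheets/" + args.ssid +
         "/values/" + range + ":append?valueInputOption=USER_ENTERED",
    method: "POST",
    headers: {
      "Authorization": "Bearer " + token,
      "Content-Type": "application/json"
    },
    // each inner array is one row to append
    body: JSON.stringify({ values: [args.row] })
  };
}

const req = buildAppendRequest("[access token]", {
  ssid: "1abcDEF",
  sheet: "From the Web",
  row: ["01/21/2021", "A neat link", "https://example.com"]
});
```

HTTP Auth's job is to supply the bearer token; once you have that, the request itself is this simple.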

Here's a Sample Task to see this in action:

Step 3. Celebrate!

If all went well, the first time you run GSheet Append Row it will ask you for permission to run. See this tutorial for details about this first interaction. Once permission has been granted, appending rows should Just Work.

In Conclusion

This turned out to be quite a bit easier than I imagined (thanks João, you rock!). Though, I don't love the fact that HTTP Auth depends on João's auth.html. I'm tempted to make my own version of 'HTTP Auth' that works locally and doesn't require a third-party server. Still, for now I'm just happy to be back to writing Tasks that integrate with Google Sheets.

Friday, January 15, 2021

PowerShell For the Win: Short and Sweet Edition

I just wrote this long and winding tale about how Windows PowerShell exceeded my expectations. Consider this post a sort of TL;DR for that monster.

I found myself needing to scale an image. On Linux and MacOS, I'd use the command line friendly ImageMagick. On Windows, I'd turn to Gimp. But I've got a shiny new tool in PowerShell, so I was curious if it could provide a Windows friendly command line solution. It does!

Step 1: I downloaded the Resize-Image Module from Microsoft's Technet Gallery.

Step 2: I launched PowerShell and typed:

## OK, let's do this...

PS C:\Users\benji\Downloads> Import-Module .\Resize-Image.psm1

## Wait, that worked? Oooh, cool! A quick look at the docs says that I
## can use -Display to preview the image. Let me try that.

PS C:\Users\benji\Downloads> Resize-Image -InputFile .\icon.png -Height 200 -Display

## The image was displayed, but smooshed

PS C:\Users\benji\Downloads> Resize-Image -InputFile .\icon.png -Height 200 -Width 200 -Display

## The image is displayed and looks good. Let's make this official.

PS C:\Users\benji\Downloads> Resize-Image -InputFile .\icon.png -Height 200 -Width 200 -OutputFile icon.200x200.png

## That's it? No errors. No verbose output. I love it.

One gotcha: the OutputFile was stored in my home directory, not the same directory as icon.png.

Step 3: I wrote this post.

I'm telling you, if you're a command line user and find yourself on Windows, PowerShell is your friend.

Thursday, January 14, 2021

Review: Artificial Condition: Murderbot Diaries

Move over Reacher, Rapp, and Bond: there's a new action hero in town named, well, he doesn't have a name. But he calls himself Murderbot. Technically Murderbot isn't a person, it's a construct--part robot, part lab-grown organic material. But in so many ways, he embodies all of our struggles. On one hand, he's got a strong moral code, the urge to protect and defend, and epic hacking and smashing skills. On the other hand, more than anything else, he wants to find a dark space to sit and watch 'media.' I just finished book two of the Murderbot Diaries, Artificial Condition by Martha Wells, and it held up to the high bar set by book one. Below are some thoughts on this second book and the series in general. Careful, there are spoilers, so best to skip this post if you haven't read books one and two. And if that's the case, what are you waiting for? Go read them; they're short, smart and fun.

[Spoilers ahead!]

I'm fond of saying that what really makes a book a win for me is when I learn something. Learning in this context can mean almost anything. Consider the body tweaks Murderbot decides to have Art perform on him. When he makes special note of the data port in the back of his head, I assumed he meant he was going to have Art remove it. Murderbot explains that some humans have data ports, but most don't. If he's going to blend in with people, removing it would be the obvious choice.

During the climactic close combat scene we learn the details of how Murderbot had his data port 'taken care of.' He didn't have it removed, instead, he had it internally disconnected.

This was a genius move. Anyone who figured out Murderbot's identity would assume that the data port was his kryptonite. Slap a combat override module in the port and you're good to go. That's precisely what the baddies do, of course. And it doesn't go well for them, because the port is dead.

My lesson: think twice before you hide a perceived weakness. It may be possible to turn that weakness into a strength by leaving it in plain sight and letting others' assumptions work to your advantage.

This strategy reminds me of the tactics employed by physical security tester Jayson E. Street. He describes showing up at sites he wishes to breach in a wheelchair with boxes on his lap. Who's going to be the jerk who doesn't get the door for him?

Zooming out from combat tactics, I really like how Wells tackles the thorny ethical issues that go with the topics of AI, lab-grown human parts and smarter-than-human machines. On its surface, the universe Murderbot occupies is fairly straightforward. There are humans, augmented humans, bots and constructs. Humans, including the augmented variety, have rights; non-humans don't. There's plenty of action and humor to occupy the reader, so they need not question this social structure.

But look a little deeper and you see things get more complex. Consider the makeup of each type of being. Humans are completely organic, augmented humans are a mix of organic and machine, bots are fully machine, and constructs are again a mix of organic and machine. In this context, why should constructs be denied rights when they are built from the same materials as an augmented human? Is there really that much difference between organic material grown in a womb and organic material grown in a lab?

Wells uses our hero, Murderbot, to drive this point home. Not only does he look and act human, but he finds joy in relishing his freedom. When a human asks him to complete a task, it's the ability to say no that helps awaken Murderbot to his full potential.

Finally, on a completely unrelated note, I can't help but wonder what those in the Autistic community think of Wells' Murderbot. It's quite possible my naive understanding of the Autism spectrum has me connecting unrelated dots, so you'll have to forgive me if this is a reach.

It seems that Murderbot has many mannerisms that would be associated with those on the spectrum. He's a brilliant tactician and combat specialist, and yet he's uncomfortable with even the most basic social interactions. He regularly opts for a 3rd person video view of a scene, rather than looking people in the eye. During a number of interactions, he and Art teamed up to perform sophisticated real-time analysis to understand simple body language cues and mannerisms, the type that a 5 year old would have no problem processing. And finally, in the closing scene of Artificial Condition, Murderbot treats a request for a hug the same way I'd treat a request for a root canal. Did Wells intentionally model Murderbot after those with Autism?

Is Murderbot a hero in the Autistic community? An insult? Or a figure that's no more connected to them than I am to James Bond? From a bit of research, it appears that I'm on to something. There's also this bit of explanation from Martha Wells herself:

Question: As a mental health professional, I can't help but notice that, were he a human, Murderbot would likely be considered to be on the autistic spectrum. Was that a conscious choice or more of a coincidence? If it was an intentional decision to have Murderbot and autism overlap, what did you study to better represent neurodiversity on the page?

Answer: Those aspects of the character were based on my own experience. I'm not neurotypical, and I've been affected by depression and anxiety all my life.

That's a lot of insight and questions from a novella. If nothing else, take that as a sign of how good Artificial Condition is.

Wednesday, January 13, 2021

A Windows Friendly Solution to the Video Packaging Problem

A Windows Version, Please

A friend wanted to use my ffmpeg video packaging script. But there was a catch: he needed it to run on Windows. As a bash script, it runs seamlessly on Linux and MacOS; Windows, not so much.

The immediate solutions that came to mind weren't very helpful. I could re-write the script to be cloud based, but that would be terrifically inefficient in terms of both my time and the solution's complexity. I could have my friend install a Unix environment on his Windows computer, but that seemed like overkill for one script.

I could re-write my script in PHP and have my buddy install XAMPP. But again, that felt like swatting a fly with a sledge hammer.

The exercise I was going through was a familiar one. As a programmer, I'd been given an issue and I was making my way through the 5 stages of grief. Denial: nobody can expect me to make a Windows version of this script. Anger: why can't Windows be Linux! Bargaining: OK, you can have this script, but only if you turn your Windows box into a Linux box. Depression: ugh, Windows. And finally, acceptance: OK, so how would one do this on Windows?

Let's Do This!

I had an idea: I'd previously experimented with PowerShell. And by experimented, I mean I launched the program, was excited to see it did tab completion, and then quickly tired of using it. I'm sure it was a step up from the standard DOS box, but it felt far inferior to the bash environment Cygwin provided. Still, PowerShell was a shell, so maybe it had a scripting language I could use?

Finding sample PowerShell scripts was easy. At first I was overwhelmed: PowerShell was considerably more verbose than bash. But the ingredients for shell scripting were all there. After some initial cursing of Microsoft, it looked like PowerShell might be just the solution I was looking for.

Fast forward a couple of days, and I now have a version of my video packaging script that runs on Windows; no massive install of support tools required. Grab it here. You can run it by right-clicking on package.ps1 and selecting Run with PowerShell. You can also launch the script within PowerShell using command line arguments. For example:

> .\package.ps1 -Source c:\Users\ben\Downloads\bigvid.mp4 `
                   -Config c:\Users\ben\Documents\blog\video-settings.ini

The Case for PowerShell

Here's a number of features of PowerShell that have left me more than a little impressed.

PowerShell lets me trivially launch a file picker, so that I can allow users to select the video and configuration file in a graphical way. It's also easy to confirm the files are valid and kick up a graphical alert box if they aren't. In other words, PowerShell lets me do Windowsy things easily.

# Let users pick a file
Add-Type -AssemblyName System.Windows.Forms

function Prompt-File {
  param($Title, $Filter)
  $FileBrowser = New-Object System.Windows.Forms.OpenFileDialog -Property @{
    InitialDirectory = [Environment]::GetFolderPath('MyDocuments')
    Title = $Title
    Filter = $Filter
  }
  $null = $FileBrowser.ShowDialog()

  Return $FileBrowser.FileName
}

# Prompt for the video and config files
$Source = Prompt-File -Title "Choose Video" -Filter 'MP4 (*.mp4)|*.mp4|QuickTime (*.mov)|*.mov|AVI (*.avi)|*.avi'
$Config = Prompt-File -Title "Choose Settings File" -Filter 'INI File (*.ini)|*.ini|Text File (*.txt)|*.txt'

PowerShell doesn't natively support parsing ini files, but writing the code to do so was straightforward. I was able to adapt code I found on the web so that reading an ini file takes in an existing set of values.

$settings = @{}
$settings = Parse-IniFile -File $PSScriptRoot\defaults.ini -Init $settings
$settings = Parse-IniFile -File $Config -Init $settings

Here I'm setting the variable $settings to an empty hashtable. I'm then filling the hashtable with the value of defaults.ini and then overriding these values with the user selected config file. This is similar to the variable overriding I did in the Unix version of the script, though arguably it's cleaner.
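For readers who don't speak PowerShell, here's the same layering idea sketched in JavaScript; parseIni is a hypothetical stand-in for the script's Parse-IniFile and handles only simple key=value lines:

```javascript
// Parse simple "key=value" ini text, layering values on top of an
// existing set. Later calls override earlier ones, just like the
// defaults.ini / user-config arrangement described above.
function parseIni(text, init) {
  const settings = Object.assign({}, init);
  for (const line of text.split(/\r?\n/)) {
    const trimmed = line.trim();
    // skip blanks, comments, and [section] headers
    if (!trimmed || ";#[".includes(trimmed[0])) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    settings[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return settings;
}

let settings = {};
settings = parseIni("width=1280\nheight=720", settings);   // defaults
settings = parseIni("height=1080", settings);              // user config
// settings: { width: "1280", height: "1080" }
```

The point is the merge order: defaults go in first, and anything in the user's config silently wins.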

I was able to move my custom code into separate files in a lib directory, thereby keeping the main script readable.

I'm impressed how PowerShell natively handles named parameters. Not only can I define functions with named parameters, but I was able to have the script itself trivially take in optional parameters. At the top of package.ps1, I have:

param($Source, $Config, $SkipFcsGeneration)

I then check to see if -Source or -Config was passed in. If either are missing, Prompt-File is invoked.

if(-not $Source -Or (-not $(Test-Path $Source))) {
  $Source = Prompt-File -Title "Choose Video" -Filter 'MP4 (*.mp4)|*.mp4|QuickTime (*.mov)|*.mov|AVI (*.avi)|*.avi'
}

if(-not $Config -Or (-not $(Test-Path $Config))) {
  $Config = Prompt-File -Title "Choose Settings File" -Filter 'INI File (*.ini)|*.ini|Text File (*.txt)|*.txt'
}

These named parameters let me treat my script as a command line tool, and let a typical Windows user think of it as a graphical app.

I also improved the packaged video file itself. I added support for playing audio over the 'pre' title screen. The script uses ffprobe -show_streams to figure out the length of the audio clip and arranges for the title screen to be shown for this duration. PowerShell let me trivially kick off ffprobe and process its output, just like I would do in a bash environment.

Working with the audio stream forced me to understand how ffmpeg's complex filters worked when both video and audio streams are being manipulated. My Aha Moment came when I realized that you filter audio and video separately and that ffmpeg will piece them together. Essentially, I have one concat expression to join all the video streams and one to join the audio streams, and ffmpeg does the right thing to combine the audio and video at the end.
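To make that concrete, a two-clip filtergraph along these lines would look something like the following. This is an illustrative sketch, not the exact expression my script builds:

```
[0:v][1:v] concat=n=2:v=1:a=0 [v];
[0:a][1:a] concat=n=2:v=0:a=1 [a]
```

Passed to -filter_complex with -map "[v]" -map "[a]", the first chain joins the two video streams and the second joins the two audio streams; ffmpeg then muxes [v] and [a] together into the final output.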

A Happy Ending

I finished this little project with not just a video packaging script that works on Windows, but with a fresh perspective on Windows scripting. What PowerShell lacks in tradition and terseness, it more than makes up for in capability and completeness. In short, PowerShell isn't an attempt to implement bash on Windows; it's a fresh and modern take on scripting that really delivers.

Check out my video packaging code over at github.

Wednesday, January 06, 2021

Seeing All The Pretty Colors: optimizing emacs + screen + Secure Shell Chrome App

The Secure Shell Chrome App combined with a remote Linux box, screen and emacs is a game-changer. Among other things, it turns my Chromebook into a programmer friendly hacking environment.

One quirk of this combination is that the color scheme emacs loaded by default was often illegible. I'd typically run M-x customize-themes to pick a better color scheme, but no matter the theme I selected the colors were always harsh and often left text hard to read.

Turns out, there's a quick fix to this issue. The problem and its fix are described here. It boils down to this: the Secure Shell App sets the terminal type to xterm-256color. Emacs supports this terminal type, granting it access to, well, 256 colors. But I'm running screen, and screen morphs that terminal type to screen.xterm-256color. Emacs doesn't know what to make of this terminal type, so it falls back to 8 color mode.

This became clear when I ran M-x list-colors-display.

The following tells emacs that screen's terminal type is one that it knows:

(add-to-list 'term-file-aliases
             '("screen.xterm-256color" . "xterm-256color"))

I added this code to my init.el, restarted and suddenly got access to 256 colors.

Now the default emacs color choices make sense. Such a simple fix, and I didn't even realize I had this problem.

Tuesday, January 05, 2021

hbo-blogger.el: A Simple Strategy for Editing Blogger posts in emacs

I'm writing this post in emacs. I know that may not seem like a big deal, but trust me, it's a big deal. See, here's a screenshot proving this:

My blog is hosted by Blogger, and with the exception of a bit of command line hacking, I've always edited my posts in Blogger's web interface.

Every few years I look around for an emacs friendly Blogger solution but have never found a fit. That changed when I stumbled over oauth2.el. Using this library, I found that I could trivially make authenticated requests to the Blogger API. While interesting, it wasn't immediately obvious what I could use it for. On one hand, I knew I didn't want to build out a full Blogger emacs interface. On the other, it would be sweet if I could somehow leverage emacs for post editing. And thus, hbo-blogger.el was born.

The hbo in hbo-blogger stands for Half-Baked Opportunistic. In other words, this isn't some finely crafted Blogger emacs library. This is me taking every shortcut I can find to somehow mash emacs and Blogger together. As klugy as this sounds, I'm amazed at how well this all came together.

At its core, hbo-blogger offers two key functions:

  ;; Downloads the latest draft post and loads it into
  ;; an emacs buffer
  (hbo-blogger-edit-latest-draft-post "[your blog's URL]")

  ;; Saves the current buffer, which is specially named with the
  ;; blog-id and post-id, back to Blogger
  (hbo-blogger-save-buffer)

With these two functions, I can seamlessly take a draft post I've started in Blogger, load it into emacs and continue editing it there. I can then use hbo-blogger-save-buffer to push my writing back to Blogger.

To glue this together, I've added a new interactive command to my init.el:

(defun blogbyben-edit-latest-draft-post ()
  (interactive)
  (hbo-blogger-edit-latest-draft-post ""))

This allows me to type M-x blogbyben-edit-latest-draft-post and I'm off and running. When this function runs it adds hbo-blogger-save-buffer to the post's after-save-hook. The result: saving a post saves the file locally and pushes the content back to Blogger.

Most of this functionality came together with ease. One big catch: the PATCH functionality promised in the Blogger API only works on published posts. If you look at hbo-blogger.el you'll notice I had to fall back on the clunkier PUT call.

While the library is functional, I can see a number of obvious improvements. It should be trivial to add hbo-blogger-new-draft-post to create an empty draft post via emacs. I'm sure there's a menu library I could leverage to show a list of recent Blogger posts and let me choose one to edit. I wrote my code using url-retrieve-synchronously when I almost certainly should be using url-retrieve. Finally, I'm sure I'm committing many a sin with the way I'm loading up post buffers. The hard coded call to web-mode, for example, is almost certainly a no-no.

But even with these limitations, I just composed my first Blogger post in emacs and it was awesome! The whole process Just Worked.

You can grab the code for hbo-blogger.el over at GitHub.

Tuesday, December 29, 2020

Making emacs happier on a Chromebook: An ssh and console friendly browse-url-browser-function

I recently learned of, and started playing with, emacs' oauth2 support. This library simplifies access to a slew of APIs and has single-handedly added a number of mini-projects to my TODO list. I first experimented with oauth2 on my Mac. Evaluating this code opened up a browser window and started me down the OAuth permission process:

(defvar bs-blogger-token
  (oauth2-auth-and-store
   "https://accounts.google.com/o/oauth2/auth"
   "https://oauth2.googleapis.com/token"
   "https://www.googleapis.com/auth/blogger"
   "[client id]" "[client secret]")
  "The token to access blogger with.")

I copied the access code from my browser back into the emacs prompt. With the token defined, I was able to make authenticated API requests to Blogger. It was magic.

A few days later, eager to make improvements to my code, I again evaluated the above snippet. This time I was ssh'd into an AWS EC2 instance running a terminal version of emacs, and was greeted with the error: No usable browser found. Gah!

I searched Google and haphazardly poked around the browse-url library. It didn't take long before I learned that I could make the message go away by telling browse-url to use a native emacs web browser such as eww. For example:

(setq browse-url-browser-function 'eww-browse-url)
(browse-url "")

While this is nifty, it's not very useful. When I invoke oauth2-auth-and-store, the system attempts to open a convoluted URL that doesn't render right in eww. So while I'd fixed the "No usable browser found" error, I was no closer to using oauth2 in a console version of emacs.

What if I sidestepped the browser issue? What if I convinced browse-url to store the URL in a file or somewhere else out of the way? I could then grab the URL from the file, copy it into my browser and be off and running. Turns out, this is surprisingly easy to do. And I don't even need to store the URL in a file; I can store it in a temporary and easily accessible location: the kill ring. Here's the code to do this:

(defun kill-url-browse-url-function (url &rest ignore)
  (kill-new url)
  (message "Killed: %s" url))

(unless window-system
  (setq  browse-url-browser-function 'kill-url-browse-url-function))

On Windows or Mac, browse-url seamlessly opens URLs like it always has. In a console version of emacs, browse-url captures the URL in the kill ring where I can manually work with it.

In this particular case, I'm on a Chromebook using the Secure Shell App. I have the invaluable osc52.el library configured in my init like so:

(require 'osc52)

With this code in place, emacs' kill ring is automatically sync'd with my Chromebook's copy and paste buffer. The result: evaluating browse-url places the URL not only in the emacs kill ring, but also in my Chromebook's paste buffer. This means that I can evaluate (oauth2-auth-and-store ...), Alt-Tab over to a browser window and hit Control-V to paste the URL that emacs tried to visit. In the case of oauth2, this started the authentication process as smoothly as if I had been on a windowing system.

Magic restored!

Wednesday, December 23, 2020

Using Tasker's Bluetooth Connection Event to Tackle Android Annoyances

The Bluetooth Connection Event, which was released as part of Tasker 5.8, helped me solve two Android annoyances. Here's the deal.

Smarter Keyboard Handling

For years, I've depended on the External Keyboard Helper (EKH) Pro App to optimize my Android handset for use with a hardware keyboard. I first discovered this app when I hooked my Galaxy S5 up to a Bluetooth keyboard and realized that with a bit of tweaking, I could turn my mobile phone into a Linux power-user's dream. Over the years, my phone and keyboard have changed, but the main pain point hasn't. I'd yet to find a seamless way of switching to and from EKH.

Fast forward to Tasker 5.8 and my Galaxy S10+; I've finally found an elegant solution for intelligently toggling between EKH and SwiftKey.

The solution starts by detecting the Bluetooth Connection event with the name of my keyboard, the iClever IC-BK06 Bluetooth Keyboard:

When a connection event occurs, the system determines if the connection was established or ended. Depending on this, the correct input method is set. Setting the input method is cumbersome because it requires AutoTools' Secure Settings. Once you've installed this plugin and run the relevant adb shell pm grant command, configuring it isn't hard. Browse to: Plugins > AutoTools Secure Settings > Services > Input Method to enable a particular keyboard.

While the Profile and Task have a number of moving parts, the setup has been working perfectly. I can now flip open my iClever keyboard, wait a few moments, and my device switches over to EKH, ready to be used in hacker mode. I close up the keyboard, wait a few moments, and SwiftKey is re-enabled. It's beautiful!

Download this profile and its dependencies from Taskernet.

A Quieter Do-Not-Disturb Mode

Back in the day, one of the first Tasker profiles I wrote silenced my phone when I flipped it over. I use this Profile on a daily, or rather nightly, basis, as I flip my phone over every night before bed. Over the years, the best strategy for silencing my phone has changed. These days, when I flip my phone over I enable Do Not Disturb (DND) Mode. When I flip my phone right side up, I turn off DND Mode.

One of the features of DND Mode is the ability to define exceptions. For example, I opt to allow phone calls from starred contacts to break through, as well as alarms. The challenge I recently ran into was that I wanted to silence all Media during DND, unless I had my bluetooth headphones connected. In that case, I wanted DND mode to allow Media just like it allows Alarms.

Using the Bluetooth Connection Event, this turns out to be easy to do. I monitor Bluetooth activity for my headphones, the impressively named MPOW Thor. When 'Thor' connects or disconnects, I set the DND_EXCEPTIONS variable appropriately.

In the code run by the Flipped Profile, I no longer explicitly state the "Allowed Categories." Instead, I plug in the DND_EXCEPTIONS variable.

The result: when I flip over my phone the Do Not Disturb Mode will either allow or block Media based on whether my headphones last connected or disconnected from my device. Like the solution above, it may appear clunky, but it actually works quite reliably.

You can grab my 'Flipped' profile and its dependencies from Taskernet. Enjoy!

Thursday, December 17, 2020

Review: All Systems Red: The Murderbot Diaries

I randomly checked out Martha Wells' All Systems Red: The Murderbot Diaries and was hooked after a few minutes of listening. From there, I devoured the book, squeezing in listening time whenever I had a free moment.

The book checked all my Sci-Fi boxes: likable characters, clever problem solving, a plot that moved and a future that felt both familiar and alien. I liked that it wasn't just the main character who had flashes of brilliance. The scientists, while out of their depth, step up to help fight for their survival. The book manages to explore lofty topics like what it means to be human and have free will, all while keeping a brisk pace and not losing focus on the story.

There were other touches about the book that worked, too. It's short! Not exactly short-story-short, but at a little more than 3 hours it's a fraction of the length of most fiction I listen to. I also liked the implied diversity of the crew. From the names, I'm guessing that most of the characters are either of Indian or Asian descent. This, too, was a welcome variation from most of the fiction I read.

It's not unusual for me to finish a book in a series, truly love it, and then intentionally stop reading that series. Childish, yes, but it lets me savor what I just read. In the case of the Murderbot Diaries, I'm too hooked: as soon as I finished book one, I reserved book two. This series is too well written to put down.

Tuesday, December 15, 2020

Screw It: The Search for the Perfect EDC Screwdriver Kit

A few weeks back we gave a friend's kid a birthday gift. We came prepared with the triple-A batteries it needed, but forgot a tool to remove the battery cover's security screw. Back in my day, battery covers just clicked into place; not so today. I didn't panic. I grabbed my keychain multi-tool and went to work removing the screw. Alas, try as I might, I couldn't get it out.

Naturally, I returned from that experience a chastened man. I spent the next few days feverishly searching Amazon for the perfect compact screwdriver kit. Here's where I landed.

An Itty Bitty Bit Driver

After much research, what I thought I wanted was a tiny bit driver. The idea being that you can carry one main tool and a bunch of 1/4" bits, and therefore have the equivalent of a full screwdriver set. I picked up a keychain bit holder and a cheap L-shaped bit driver, mainly for the collection of bits. After I made my purchase, I found this video from MeZillch warning that the bit driver I'd just bought wasn't going to work. He explained that the keychain's depth made it incompatible with compact bits.

And he was right:

Fortunately, I didn't give up. I grabbed a wooden barbeque skewer and a serrated kitchen knife and cut off a 1cm chunk of the skewer. I dropped that into the bit holder, followed by a bit, and found that the setup worked.

Inspired by the Versatool, I attached a carabiner to the keychain to let me apply extra torque to the driver. In basic tests, it works:

Keychain Screwdrivers

The other item I picked up was a set of keychain screwdrivers. One of them claimed to have a 1/4" hex cutout in the handle. This was perfect! For common tasks the flathead and Phillips screwdrivers would work, and for more esoteric needs I could use the hex cutout as a simple bit driver.

As with the keychain bit holder above, I was almost immediately disappointed by my purchase. The screwdrivers were fine, but the hex cutout was 6mm, not 1/4" (6.35mm). My plan of using the cutout as a bit driver was a hard no. After fuming in frustration about the size mix-up, I realized that the hex cutout did have a plus. Each screwdriver's shank is 6mm hex. That means that you can feed one screwdriver into the handle of the other.

This is useful because it forms a T-grip and lets you apply hefty amounts of torque. This makes the screwdrivers vastly more functional. Another happy surprise: the flat head screwdriver does a terrific job of opening up boxes. It cuts through tape and cardboard with ease, and is less likely to take off my finger than a utility blade.

On the surface, the keychain screwdrivers aren't as versatile as the bit driver. But from a practical perspective, they're the winner. They are compact and robust, and the T-grip trick lets them take on jobs big and small. The box cutting ability of the flathead screwdriver is another big win and makes this tool something I'm using multiple times a week. The bit setup has promise, but it's not especially compact, and all the parts make it feel fiddly. For now, it sits in the kitchen utility drawer waiting to shine.

Friday, December 11, 2020

Review: The Barren Grounds: The Misewa Saga

I won't lie: I listened to The Barren Grounds: The Misewa Saga by David Robertson because of the cover art. I was totally getting a Guardians of the Galaxy vibe from it. Just a couple of minutes into the book I realized there may be some truth to the advice about not judging a book by its cover.

Instead of a fun space drama, I found myself in an intense preteen drama. Morgan, the main character, is a young indigenous girl who's been in foster care for years. When we join her, we find she's got a relatively new foster brother Eli, who's also indigenous, and a strained relationship with her white foster parents. From home to school and everywhere in between Morgan feels like she doesn't fit in.

The Barren Grounds follows the clever journey Morgan takes to meet this challenge. Kudos to the text for leaning into, rather than avoiding, touchy subjects like foster care, child separation and cross-cultural relationships. I'm not sure how kids will respond to Robertson's approach, but as an Uncle and Foster Dad I'm glad I read the book and will be able to recommend it to the children in my life. While entertaining, the book clearly offers an opportunity to approach and discuss complex and seemingly out-of-bounds issues.

I always get nervous when I see foster parents portrayed in books, TV and movies. Sometimes, they get it right (I'm looking at you, This is Us), more often not. Robertson, to his credit, has portrayed Morgan's foster parents in a fair light. Sure, they're trying too hard and don't always know the right thing to say (been there, done that). Frequently, they fall back on their training, which is good but clumsy. Most importantly, they treat Morgan and Eli with unconditional love. They know that a child lashing out and slamming doors is normal and all part of growing up. In that respect, they made me proud.

Why Morgan has been bouncing around the foster care system for years raises another set of important questions. The fact that she hasn't been placed in a permanent home may be convenient for the story line, but is troubling to read. The safety net designed to help children like Morgan has failed. Is this an accurate criticism or an exaggeration for the plot? I fear it's probably a healthy dollop of both.

Bottom line: The Barren Grounds was an intriguing read and a solid gateway to important topics that all kids should be exposed to.

Tuesday, December 01, 2020

Auto Packaging of Videos | ffmpeg To The Rescue

The challenge: find a way to programmatically add a title screen, watermark, and credit screen to a video. My first thought was to turn to ffmpeg; it had gotten me out of every other video processing jam, so surely it wouldn't fail me now.

My initial plan was to generate content using ImageMagick and then use ffmpeg to combine the various images and source video into a finished product. However, the more I learned about ffmpeg's filters the more I realized that I could do the content generation directly in ffmpeg.

To do this, I needed to grok two ffmpeg concepts. First, ffmpeg's -filter_complex option allows chaining filters together to create interesting effects, using a technique reminiscent of Unix pipes. One source of confusion: ffmpeg allows the same expression to be written in various ways, from verbose to cryptically terse. Consider these examples:

ffmpeg -i input -vf "[in]yadif=0:0:0[middle];[middle]scale=iw/2:-1[out]" output  # two-chain form, one filter per chain, chains linked by the [middle] pad
ffmpeg -i input -vf "[in]yadif=0:0:0,scale=iw/2:-1[out]" output                  # one-chain form, with two filters in the chain, linking implied
ffmpeg -i input -vf "yadif=0:0:0,scale=iw/2:-1" output                           # the input and output pads are implied without ambiguity

These all do the same thing, combining the yadif and scale filters. The first one explicitly names each stream ([in], [middle] and [out]) while the last example relies on implicit naming and behavior. This tersification extends to the filter definitions as well. These are equivalent expressions:


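For instance, yadif's positional arguments can also be spelled out by option name (mode, parity and deint); as best I can tell, these two invocations are identical:

```
yadif=0:0:0
yadif=mode=0:parity=0:deint=0
```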
This ability to compose expressions with varying degrees of verbosity means that many one-liners I found on the web were initially hard to understand, and even harder to imagine combining. Once I started thinking of filters and their arguments as chains of streams, and writing them in the verbose notation, the problem was vastly simplified.

The other concept I needed to wrap my head around was that before I could filter a stream, I needed to have a corresponding input. In some cases, the input was obvious. For example, adding a watermark image and text to a video stream was relatively straightforward. One solution has the following shape:

  ffmpeg -i source.mp4 -i logo.png \           [1] These are my inputs, 0 and 1 respectively
         -filter_complex "\
           [0:v][1:v] overlay=.... [main]; \   [2] Overlay the first two streams, write the output to [main]
           [main] drawtext=... [main] \        [3] Add text to the main stream
         " output.mp4

But what about adding a title screen before the main video? drawtext and overlay would let me add text and images to the stream, but what's the source of the stream in the first place? One solution: generate a stream using the oddly named virtual device lavfi. With this in mind, adding a title screen looks roughly like so:

  ffmpeg -i source.mp4 \        [1] Main video
         -f lavfi \
         -i "color=color=0xf2e4f2:\ [2] Generated input source
            duration=5:\                for our title screen
            rate=30" \
         -filter_complex "\
           [0:v] ... [main]; \          [3] Do something with the source video and send it to [main]
           [1:v] drawtext=...[pre]; \   [4] Draw on the virtual screen, send it to 'pre'
           [pre][main] concat [final] \ [5] Combine our [pre] and [main] streams for a final result
         " output.mp4

With a solid'ish grasp of filters and lavfi generated streams, I was ready to tackle the original problem. I created a shell script for packaging video and the final result was looking acceptable. But then I ran into another issue: what's the best way to parameterize the script?

If I passed in all the arguments on the command line, about 40 in total, using the script would be a pain. If I stored all the options in a config file, then scripting the command would become tricky. With a bit of reflection, I realized there was a low effort, high value solution to this problem. At the top of the script, I define sane defaults for all the variables. I then process command line arguments like so:

config=$HOME/.pkgvid.last
cp /dev/null $config

while [ -n "$1" ] ; do
  arg=$1 ; shift
  if [ -f "$arg" ] ; then                [1]
    echo "# $arg" >> $config
    cat $arg >> $config
  fi
  name=$(echo $arg | cut -d= -f1)
  if [ "$name" != "$arg" ]; then         [2]
     echo $arg >> $config
  fi
  echo >> $config
done

. $config  [3]

For each command line argument, I first check if the argument is an existing file. If it is, I add the contents of that file to the end of ~/.pkgvid.last. I then check if the argument has the shape variable=value; if it does, I add this expression to the end of ~/.pkgvid.last. Once I've processed all the command line arguments, magic happens at [3] by sourcing ~/.pkgvid.last. This reads the variable definitions that were defined in config files and on the command line, and has them override defaults.
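The layering can be seen in a tiny self-contained demo of the same pattern. Every name here, from demo.config to title, is invented for illustration:

```shell
#!/bin/sh
# Demo of the defaults -> config file -> command line layering.
config=/tmp/demo.last
cp /dev/null "$config"

title="default title"                  # sane default, defined up top

# Simulate calling: ./script /tmp/demo.config title="from the command line"
printf 'title="from the config file"\n' > /tmp/demo.config
set -- /tmp/demo.config "title=from the command line"

while [ -n "$1" ] ; do
  arg=$1 ; shift
  if [ -f "$arg" ] ; then              # a config file: append its contents
    cat "$arg" >> "$config"
  fi
  name=$(echo "$arg" | cut -d= -f1)
  if [ "$name" != "$arg" ] ; then      # a name=value pair: append it, quoted
    echo "$name=\"${arg#*=}\"" >> "$config"
  fi
done

. "$config"    # later lines win: command line beats config file beats default
echo "$title"  # prints: from the command line
```

Because the collected file is simply sourced, whatever appears last wins, which is exactly the precedence you want.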

This sounds confusing, but in practice, it's delightful to use. Consider blog.config, which has been set up to override defaults for blog related videos:

main_caption_text="Ben Simon"
pre_title_text="A BlogByBen Video"
I can then override these settings by using command line options. For example:

 for part in $(seq 1 10); do blog.config pre_title_text="Blogging Secrets. Part $part of 10"; done

This strategy lets me organize variables into a config file and judiciously overwrite them on the command line.

OK, that's more than enough theory. Let's see this in action. Below is a simple test video, followed by this same test video packaged up with the command: blog.config.

Below is the script that created this video. Hopefully this gives you a fresh appreciation for ffmpeg and an interesting example to play with.


## This script is used for packing up a video

# Default Variables
base_dir=$(dirname $0)



main_caption_text="Thanks for watching"

pre_title_text="Title Text"

post_title_text="Thanks for watching"

config=$HOME/.pkgvid.last
cp /dev/null $config

while [ -n "$1" ] ; do
  arg=$1 ; shift
  if [ -f "$arg" ] ; then
    echo "# $arg" >> $config
    cat $arg >> $config
  fi
  name=$(echo $arg | cut -d= -f1)
  if [ "$name" != "$arg" ]; then
     echo $arg >> $config
  fi
  echo >> $config
done

. $config

main_video_width=$($ffprobe -show_streams $main_video 2> /dev/null |grep ^width= | cut -d = -f 2)
main_video_height=$($ffprobe -show_streams $main_video 2> /dev/null |grep ^height= | cut -d = -f 2)

if [ "$debug" = "on" ]  ; then
   ffmpeg="echo $ffmpeg"
fi

$ffmpeg -y  \
 -i $main_video \
  -i $logo  \
  -f lavfi -i color=color=$pre_bg_color:${main_video_width}x${main_video_height}:d=$pre_duration \
  -f lavfi -i color=color=$post_bg_color:${main_video_width}x${main_video_height}:d=$post_duration \
  -filter_complex "\
    [1:v] split [logo_a][logo_b] ; \
    [logo_a]scale=w=$main_image_w:h=$main_image_h[logo_a] ; \
    [logo_b]scale=w=$pre_image_w:h=$pre_image_h[logo_b] ; \
    [0:v][logo_a] overlay=${main_image_x}:${main_image_y} [main] ; \
    [main]drawtext=fontfile=$font_dir/$main_caption_font: \
          text='$main_caption_text': \
          x=$main_caption_x: y=$main_caption_y: \
          fontsize=$main_caption_font_size: \
          fontcolor=$main_caption_color: \
          shadowcolor=black@.8: shadowx=4: shadowy=4: \
          enable='between(t,$main_caption_start,$main_caption_end)'[main]; \
    [2:v]drawtext=fontfile=$font_dir/$pre_title_font: \
         text='$pre_title_text': \
         x='$pre_title_x': y='$pre_title_y': \
         fontsize=$pre_title_font_size: \
         fontcolor=$pre_title_color [pre]; \
    [pre][logo_b] overlay=x='$pre_image_x':y='$pre_image_y' [pre] ; \
    [3:v]drawtext=fontfile=$font_dir/$post_title_font: \
         text='$post_title_text': \
         x='$post_title_x': y='$post_title_y': \
         fontsize=$post_title_font_size: \
         fontcolor=$post_title_color [post]; \
    [pre][main][post]concat=n=3 \
  " \
  -vsync 2

Monday, November 30, 2020

Mason Neck State Park - Enjoying Kid Friendly Hiking with Fun Kids

Last week we hiked Mason Neck State Park with friends. It had been years since we'd been to Mason Neck and I'd forgotten what a little gem it is. We tackled the Wilson Spring and Bayview Trails, making for about 2 miles of hiking. This was the perfect distance for the kids. The trail winds through forest and swamps, and there are no big ups to tire out little legs.

Shortly after the hike began we came across a geocache. The kids searched for it, and Aurora quickly found it. It was nice to start our hike with a win!

Midway through our hike we stopped at the beach on the bay to properly explore the area. We marveled at the massive snail shells, and watched what appeared to be a duck hunter doing his thing further down the beach. Shira even built a little sand castle!

I saw a number of hikers carrying big lenses, which was a promising sign. My guess is that the glass was for bird watching. We didn't see any interesting specimens this trip, but I'm sure they are there.

At about 40 minutes away from DC, Mason Neck is the perfect combo of backwoods adventure and family friendly convenience. I'm psyched to go back.

