Friday, June 24, 2022

Defeating Blogger's Preview Click Trap

One quirk of Google's Blogger platform is that when you preview posts it sets up an invisible HTML element overlaying the page that keeps click events from being delivered. For example, in preview mode, this link can't be clicked on. In the preview page source code, Google names this element appropriately: blogger-clickTrap. Click trap, indeed.

If you right-click on the preview page and select 'Inspect' you can see the details of this overlay:

Removing the click trap using the inspector panel is straightforward. Add display: none to the blogger-clickTrap element's style and clicks work again.

For years I've either ignored the click trap, or when necessary, manually removed it.

Yesterday, however, I wrote a post that made heavy use of click events. While composing the post I found I needed to frequently refresh the page and then take a few moments to manually remove the overlay. After the 100th time of doing this, I stopped what I was doing and took a moment to consider: could I automatically disable the blogger-clickTrap element?

Tampermonkey to the Rescue

With a few minutes of consideration, I realized that Tampermonkey should be able to save the day. Tampermonkey lets you inject custom code into pages of your choice. All I needed to do was write a few lines of code that would set display: none on the right element when the page loaded, and the click trap would be a distant memory.

I created a new Tampermonkey script to get started:

// ==UserScript==
// @name         Bye-bye clicktrap!
// @namespace    http://blogbyben.com/
// @version      0.1
// @description  Remove blogger preview's clicktrap
// @author       Ben Simon
// @match        https://draft.blogger.com/blog/post/edit/preview/*
// @icon         https://www.google.com/s2/favicons?sz=64&domain=blogger.com
// @grant        none
// ==/UserScript==

(function() {
    'use strict';
    console.log("Is this thing on?");
})();

I loaded up the preview page and success!

I then updated the code to show that it could find the offending element:

// ==UserScript==
// @name         Bye-bye clicktrap!
// @namespace    http://blogbyben.com/
// @version      0.1
// @description  Remove blogger preview's clicktrap
// @author       Ben Simon
// @match        https://draft.blogger.com/blog/post/edit/preview/*
// @icon         https://www.google.com/s2/favicons?sz=64&domain=blogger.com
// @grant        none
// ==/UserScript==

(function() {
    'use strict';
    var found = document.querySelector('.blogger-clickTrap');
    console.log("Found it!", found);
})();

And that's where I hit a problem: no matter how or when I queried the document, my Tampermonkey script couldn't find the click trap element. In the inspector, I could see it was there, and yet, I couldn't programmatically access it.

After much debugging I realized that the preview page is a generic shell that imports the specific post using an iframe pointed at https://<something>.blogspot.com/b/blog-preview?token=.

My attempts to access the iframe's contents via the main page were being denied, apparently because the two pages weren't from the same origin.

My first thought was: I'm already mucking with these requests, perhaps I can turn off this security policy that's blocking me?

However, in my search to figure out how to do this, I realized there's a much simpler solution. I needed to run my Tampermonkey script not on the preview page, but on *.blogspot.com/b/blog-preview*.

When I updated the @match rule in the script above, my code began to work:

Once I was running the Tampermonkey code on the right URL, having it remove the click trap itself was trivial. Here's the final Tampermonkey script:

// ==UserScript==
// @name         Bye-bye clicktrap!
// @namespace    http://blogbyben.com/
// @version      0.1
// @description  Remove blogger preview's clicktrap
// @author       Ben Simon
// @match        https://*.blogspot.com/b/blog-preview*
// @icon         https://www.google.com/s2/favicons?sz=64&domain=blogger.com
// @grant        none
// ==/UserScript==

(function() {
    'use strict';
    var found = document.querySelector('.blogger-clickTrap');
    // Guard against the element being absent (e.g., if Google renames it)
    if (found) {
        found.style.display = 'none';
    }
})();

Happy clicking!

Thursday, June 23, 2022

Story Time: The Pumpkin

It all started back in October of 2020 with a delightful trip to the pumpkin patch.

After checking out many pumpkins and taking many pictures, we decided to bring home this little guy.

Rather than carve him up for Halloween, we gave him a bath and put him on display.

When he started to get soft, we put him in our backyard to feed the local flora and fauna. And then winter happened.

In the springtime, among all the weeds growing in our backyard were a few pumpkin plants. Once identified, their massive leaves were hard to miss. It was a pumpkin-patch miracle!

For weeks, our pumpkin plants grew. One day, flowers started to appear.

Before we knew it, our backyard was bursting with giant yellow pumpkin flowers. Pumpkin flowers are magnificent.

One day, a tiny pumpkin started growing off one of the plants.

And it grew.

And grew.

Finally, we were heading out of town one weekend and we opted to clip the pumpkin off the vine to keep it from getting noshed on by squirrels.

What can you do with a tiny green pumpkin? First, we carefully extracted and dried the seeds. How amazing is it that our one little pumpkin can give us more pumpkins!

With the seeds separated, we turned our attention to cooking the pumpkin. It was green, so we doubted it would taste good. We followed a 'recipe' a Civil War soldier logged in his diary when he found himself with little to eat but a green pumpkin: slice up the pumpkin, add a little salt and sugar, and fry in butter until tender. It was delicious!

As the last hard frost date approached, we took our dried pumpkin seeds and soaked them in water to get them ready for planting.

When we were confident the ground wasn't going to freeze again, we picked a few places around our property to plant the seeds. To increase our odds of success, we mixed our pumpkin's seeds with a packet of seeds we purchased from Amazon.

A few days later, the seeds sprouted! Of course, this is how plants work and this is all perfectly normal. Still, seeing a sprouting seed feels like witnessing a little miracle.

And now our next generation of pumpkins is growing. Will they grow massive leaves like their daddy did? Will they produce beautiful flowers and maybe even another tasty and beautiful pumpkin? We'll have to wait and see.

The End.

Wednesday, June 22, 2022

The Cure for Card (and Dice) Game Amnesia

It seems that whenever I have a deck of cards, a group, and some time, I immediately forget every card game I've ever played. This fact, combined with some upcoming travel, had me looking for a fresh solution for remembering the names of, and rules to, interesting card games.

Before I started working on a solution to this problem, I decided to expand the challenge a bit. I picked up a 10-pack of dice for $1.99 with the intention of adding dice games to my repertoire.

Code It

I eventually settled on a simple plan. Rather than build a comprehensive list of games, I focused on just a few, about 5 card games and 5 dice games. From there, I created a cheat sheet for each game to simplify remembering the rules and game play. You can find version 1.0 of this effort over at: github.com/benjisimon/offline-games.

Each game has its own Markdown file that captures the rules, scoring, and links to How To Play resources on the web. I created two PHP scripts to work with these files: one converts a Markdown file to a portable HTML file, while the other creates a top-level index for browsing the games.

The Markdown files try to be consistent in how they describe game play. The hope is that the terse and consistent nature of each file will make it easy to bring myself and others up to speed on how a game is played. Looking at what I've created, you'll see that some of the files are fairly detailed, while others are little more than 'learn more' links.

Keep in mind that the point of the cheat sheets is to give me a quick reminder as to how the game is played, not to serve as a comprehensive guide to the rules.

Once you grab the GitHub repository, you can run make to generate the relevant HTML files.

$ make
(cd scripts/lib ; composer install)
Installing dependencies from lock file (including require-dev)
Verifying lock file contents can be installed on current platform.
Package operations: 1 install, 0 updates, 0 removals
  - Installing michelf/php-markdown (1.9.1): Extracting archive
Generating autoload files
mkdir -p games/card/
php -f scripts/mkhtml.php srcs/card/Ninety_Eight.md > games/card/Ninety_Eight.html
mkdir -p games/card/
php -f scripts/mkhtml.php srcs/card/Golf.md > games/card/Golf.html
mkdir -p games/dice/
php -f scripts/mkhtml.php srcs/dice/Bunco.md > games/dice/Bunco.html
mkdir -p games/dice/
php -f scripts/mkhtml.php srcs/dice/Ship_Captain_Crew.md > games/dice/Ship_Captain_Crew.html
mkdir -p games/dice/
php -f scripts/mkhtml.php srcs/dice/Ducks_in_a_Bucket.md > games/dice/Ducks_in_a_Bucket.html
mkdir -p games/card/
php -f scripts/mkhtml.php srcs/card/Horse_Race.md > games/card/Horse_Race.html
mkdir -p games/card/
php -f scripts/mkhtml.php srcs/card/Rummy.md > games/card/Rummy.html
mkdir -p games/card/
php -f scripts/mkhtml.php srcs/card/Casino.md > games/card/Casino.html
mkdir -p games/dice/
php -f scripts/mkhtml.php srcs/dice/Farkle.md > games/dice/Farkle.html
mkdir -p games/card/
php -f scripts/mkhtml.php srcs/card/Coup.md > games/card/Coup.html
mkdir -p games/card/
php -f scripts/mkhtml.php srcs/card/Cribbage.md > games/card/Cribbage.html
mkdir -p games/card/
php -f scripts/mkhtml.php srcs/card/Slapjack.md > games/card/Slapjack.html
mkdir -p games/card/
php -f scripts/mkhtml.php srcs/card/The_Mind.md > games/card/The_Mind.html
php -f scripts/mkindex.php > index.html

Serve It

Because the HTML files are all self-contained, you can open them in any web browser, including your phone's. You can also serve the files via a simple web server. On my desktop, I can browse the games using PHP's built-in web server:

$ php -S localhost:9001
[Tue Jun 21 15:35:46 2022] PHP 7.4.27 Development Server (http://localhost:9001) started

More than a little surprisingly, I can do the same thing under Termux and proot on my Galaxy S22 Ultra. Again, I'm using PHP's built-in web server:

A less geeky Android solution is to install the Simple HTTP Server app on your phone, and serve up the content that way:

The most practical way I've found to access the game files on my phone is to open up the games directory in My Files and select the Add To Home screen option. Once this is done, a shortcut is added to my home screen that gives me one-click access.

Problem Solved. Maybe.

I'm far from convinced that I've actually solved the problem I set out to solve.

On one hand, curating a limited set of games was clearly a good idea. And writing up the cheat sheets is very much in line with the adage that if you want to truly understand something, teach it to others.

The Markdown, PHP and Make solution was fun to work out. Learning that I can run php -S localhost:9001 under Termux / proot and access that content via my phone's browser is downright mind-blowing.

On the other hand, I'm not convinced that my game write-ups are any more useful than existing web content. In fact, I fear that my cheat sheets will end up being both incomplete and hard to understand. And I'm pretty sure I could replace nearly this entire project with a simple Google Sheet that included a list of games and links to existing How To Play videos and web pages.

Still, I'm glad I've got this iteration of Offline Games built out and I'm psyched to field test it over the next couple of months.

Casino anyone? How about a round of Ship, Captain and Crew?

Friday, June 10, 2022

Review: A Long Walk to Water

A Long Walk to Water by Linda Sue Park opens with two stories: one about a young girl named Nya who, as the title suggests, spends her day doing little more than walking to and from a water source. The second story is about a young boy named Salva whose village is overrun by his country's civil war, forcing him to flee.

Both stories take place in Sudan, and both are relatively recent: Salva's story takes place in 1984 and Nya's in 2008 (practically yesterday!).

I have to admit, my first reaction after hearing the story of Nya's endless walking was one of anger towards her parents. How could they subject their child to this life? This thought was quickly followed by two additional insights.

First, while the story talks about literally spending the day retrieving water, how many of my fellow countrymen and women are stuck in essentially the same pattern? That is, going from one minimum-wage job to another, all while trying to meet basic needs? There's lots of effort and movement, but no progress.

And more importantly, how naive and cruel to suggest that her parents are choosing to live in this barren land out of ignorance or stubbornness. Where would I like them to go, and with what resources should they go there? It's easy to forget that my ability to choose my circumstances is a privilege, not the norm shared by all.

As for Salva's story, that one was even harder for me to wrap my head around.

If you spend any time on the web researching how to prepare for emergencies, a common topic will come up: the need for a bug out bag. The idea of 'bugging out' is that some catastrophic event has happened and you need to flee your home or community.

Whether it's a house fire or a local severe weather event, it's smart to have an evacuation plan ready to execute. The problem with much of the advice on the web is the assumption that you're bugging out because of total social collapse and your bag needs to be packed for your new life as a live-off-the-land nomad.

That's just not how emergencies work.

And yet, this is exactly the scenario Salva finds himself in. With no supplies and only the vaguest of plans--run from the sound of gunfire--he begins a journey of survival, just like the amateur survivalists imagine it. The image of him taking nothing and walking into the wilderness is downright biblical. I'm in complete awe of his courage and fortitude.

I found both stories riveting and my suggestion is that you stop reading my comments here and go read the book for yourself.

Spoilers Ahead

There are many moving moments in Salva's journey, but one that caught me off guard was how his troop of refugees manages to cross the Nile. How does one expect a group of individuals to get across a river so wide it doesn't even look like a river? Considering this is 1984, and not, say, 1784, I'd expect them to find the local ferry and hitch a ride across. But that's not what they do. Instead, the group harvests local materials and builds their own boats, using them to safely make it across the river--a two-day journey.

Like much of Salva's story, this anecdote left me in awe. How knowledgeable and tuned for self-sufficiency does one need to be that building boats is your natural solution to crossing a body of water? I'm quick to condemn Nya's parents for subjecting her to difficult living conditions, when in reality, I have little sense of what their lives and mindset are like.

One recurring thought I had while listening to the book was how modern the story is. The dates just seemed so recent. Then along comes the book's third act, which fully cements the notion that Salva's story isn't ancient history. By chance, Salva is ultimately settled in none other than Rochester, NY, the city I grew up in. And the date of his arrival corresponds to my graduation from college.

The reason Salva's story seems so modern to me is that we're basically the same age, and our lives essentially intersected as we approached our 20s. While he was escaping war and surviving in refugee camps, I was attending elementary school and thriving at Boy Scout camp.

This collision of time and space only made me appreciate Salva's and Nya's stories even more.

Perhaps the most important takeaway from the book came as Salva's and Nya's stories were finally revealed to be connected. We are given a glimpse into the profound impact that Salva's newly installed well will have on Nya's village. The new well means that the children no longer need to spend the day retrieving water. This frees them up to attend school, and from there, get an education.

It's as if Salva's story was written to provide a counterargument to my suggestion that Nya's parents leave their home. The text seems to suggest that leaving may be tempting, but there's an alternative. Look what happens when you do something as basic as building a reliable and clean water source. With that well you improve the lives of not just one child but a village of children. And those children can in turn improve surrounding villages, and a virtuous cycle that seemed out of reach can take shape.

Lesson learned, Salva. Lesson learned.

Friday, June 03, 2022

49 Days in One - In Praise of Programmatic PDF Generation

My shul's Omer Learning project is wrapping up. This year we used the 49-day counting of the Omer to learn about the topic of Shemitah, the surprisingly progressive seven-year cycle called for by the Torah.

The Omer Learning project works by collecting short submissions on a topic and then sharing them as blog, e-mail and social media posts on a nightly basis. The idea is to both mark the daily count of the omer, as well as learn a little something along the way. (Note: if all you want to do is count the omer, no project beats homercalendar.net.)

As the project closes out, I wanted to send out a summary of all the days so readers could see any they missed or review any that were especially inspiring. Posting them all in a single blog entry would be excessive. I considered crafting a simple single-page website to host the content, but that seemed overly complex and would still require people to click around to read all the entries.

Let's Build a PDF

Ultimately, I decided on creating a PDF that would serve as a standalone record of all the submissions. I suppose the conventional way to generate this kind of document would be to copy and paste each submission into a Word or Google Doc and format each of them by hand. But as a programmer, there was no way I was going that route. Instead, I decided I'd rely on one of my favorite PHP packages: fpdf.

Fpdf allows for the creation of PDF files programmatically via PHP. I started with two .csv files, one containing the daily submissions and the other containing a list of the names of those who contributed. I then ran them through my make_book.php script to generate this output:

# make book.pdf
$ php -f make_book.php

# Bonus: make the cover image shown below. Thanks SO.
$ convert -density 300 -depth 8 -quality 90 -background white -alpha \
     remove -alpha off book.pdf[0] cover.jpg

PDF Generation Challenges and Solutions

You can find the PHP code that generates this document here. While the code didn't take long to write, I did solve a number of interesting and common challenges along the way.

  • How can I include custom fonts in the document? Download them from the web and generate the appropriate font files by using fpdf's makefont.
  • How can I switch fonts in a document without getting confused? Implement the withStyle(...) function that sets a preferred style, executes arbitrary code and then sets the font and color back to what they were.
  • How can I pull data in from CSV files? fgetcsv.
  • How can I add a gray border to emphasize submitted content? Make use of withIndent(...) and Line(...).
  • How can I deal with gibberish characters being inserted into the document? I cheated on this one, opting for an on-the-fly str_replace from fancy quotes and apostrophes to simple versions. A better solution would have been to figure out how to get those nicer-looking characters to render properly.
  • How can I ensure that a day's entry isn't split across a page? Make use of withSmartBreak(...), which takes in a function that lays out content on a PDF page. First, I execute this function on an in-memory scratchpad document and measure how much space it consumed. Then I compare that to the current page location to see if the content will fit. If not, I add a new page.
  • What do I do with hyperlinks in the content? Use the bit.ly API to convert all URLs to shortened versions. I then include those compact links in the output. Fpdf also makes it trivial to link text to a URL. The result is that if you're looking at the PDF on a device, or in printed form, you can follow URLs with relative ease.
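For the curious, the bit.ly step can be sketched as a couple of shell functions. This is illustrative only: the real work happens inside make_book.php in PHP, and shorten_url, extract_link, and the BITLY_TOKEN variable are names I'm inventing here. It assumes a bit.ly v4 API token.

```shell
#!/bin/bash

# Illustrative sketch of the URL-shortening step (the real script does
# this in PHP). Assumes a bit.ly v4 API token in $BITLY_TOKEN.

# Pull the .link field out of bit.ly's JSON response.
extract_link() {
  jq -r '.link'
}

# POST a long URL to bit.ly's /v4/shorten endpoint and print the short link.
shorten_url() {
  long_url=$1
  curl -s -X POST "https://api-ssl.bitly.com/v4/shorten" \
       -H "Authorization: Bearer $BITLY_TOKEN" \
       -H "Content-Type: application/json" \
       -d "{\"long_url\": \"$long_url\"}" |
    extract_link
}
```

With that in place, shorten_url prints the compact link, ready to be embedded in the PDF.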

If you want to see these solutions in detail, check out the source code. I couldn't be more pleased with how this all turned out. The document looks sharp and the process of creating it was painless. And no copying and pasting of content was ever needed.

Download It

Here's the generated PDF: Omer Learning 2022: 49 Days to a Greener and More Equitable Community. Come for the lessons on fpdf, stay for the wisdom of Shemitah.

Friday, May 27, 2022

In Search of a Reliable Gmail Permalink

One of my favorite features of Gmail is that every message is backed by a unique URL. I frequently use this URL to easily refer back to a message. This works great, until it doesn't.

The Problem

Suppose this intriguing message from Mr. Helms is something I want to follow-up on. After all, seven million dollars is a lot of money. Normally, I'd grab the URL to this message and add it to a card on my 'On Deck' Trello Board. But in the Android version of the Gmail App, there's no obvious way to get a URL to a message.

(Incidentally, the above screenshot is from a Samsung DeX session. DeX is amazing and is on my list of topics to blog about.)

A Solution

The obvious work-around is to visit mail.google.com in the phone's browser. This will bring up the Desktop Gmail interface, which does have a URL to the message:

Depending on your device's screen size, you may end up at an alternative version of Google's web mail UI.

In this case, it took some fiddling to land at the Desktop version of Gmail. First, I had to convince Google that I wanted to see the Basic HTML interface, and from there I could click over to the Standard interface:

So while it's possible to convince Chrome on my phone to navigate to the Desktop Gmail UI, in practice this can be maddeningly difficult to do. The multiple Google accounts on my phone, the multiple Gmail UIs, and the 'smart' logic that guesses what I want to see mean that I often end up clicking in circles in search of the right page.

A Better Solution. Maybe.

An obvious replacement for all this clicking would be to leverage the Gmail API. I even have an existing command line tool, gmail_tool, that interacts with this API.

Currently, I can use gmail_tool to get me the details of the message in question:

$ gmail_tool -a list -q "label:SPAM Helms"
180df45360340ba9:.GREG HELMS, Director. Airport Storage and Cargo Unit Erie International Airport (Pennsylvania) PA 16505, USA eMAIL.

All that's needed is to map the above message to the Gmail URL:

https://mail.google.com/mail/u/1/#spam/FMfcgzGpFzwKLQTspjZvpjCvNxLXMQjz

But alas, that's where things get tricky. The magic token FMfcgzGpFzwKLQTspjZvpjCvNxLXMQjz is neither a message ID nor a thread ID. Apparently, this is a 'view token' and while you can decode it in some respects, there's no obvious mapping from information in the API to this token.

A Better Solution. For Sure.

But all is not lost. This article suggests another way to uniquely identify a message within Gmail. The answer, which is obvious in hindsight, is to use the search operator rfc822msgid.

This is quite sensible. Each message comes with its own unique Message-ID header. Searching by this value should always bring up the one message in question.
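To make this concrete, here's a small shell sketch that turns a raw Message-ID header value into a Gmail search URL. The Message-ID is made up for illustration, and gmail_permalink is a name I'm inventing here:

```shell
#!/bin/bash

# Sketch: build a Gmail permalink from a raw Message-ID header value.
gmail_permalink() {
  raw=$1
  # Strip the surrounding <>, then percent-encode the ID for use in a URL.
  encoded=$(printf '%s' "$raw" | sed -e 's/^<//' -e 's/>$//' | jq -sRr '@uri')
  echo "https://mail.google.com/mail/u/0/#search/rfc822msgid:$encoded"
}

# A made-up Message-ID for illustration:
gmail_permalink '<abc+123@example.com>'
# prints https://mail.google.com/mail/u/0/#search/rfc822msgid:abc%2B123%40example.com
```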

So while I can't get the token FMfcgzGpFzwKLQTspjZvpjCvNxLXMQjz from the Gmail API, I can get the headers for a given message and, from there, search out the Message-ID value.

$ gmail_tool -a list -q "label:SPAM Helms"
180df45360340ba9:.GREG HELMS, Director. Airport Storage and Cargo Unit Erie International Airport (Pennsylvania) PA 16505, USA eMAIL.

# Pull the full JSON for all messages associated with thread id: 180df45360340ba9
$ gmail_tool -a get -i 180df45360340ba9 -v |  head -4
{
  "id": "180df45360340ba9",
  "historyId": "357728706",
  "messages": [

# Dump out all the headers associated with this thread
$ gmail_tool -a get -i 180df45360340ba9 -v | \
  jq '.messages[] | .payload.headers[] | .name'
"Delivered-To"
"Received"
"X-Received"
"ARC-Seal"
"ARC-Message-Signature"
"ARC-Authentication-Results"
"Return-Path"
"Received"
"Received-SPF"
"Authentication-Results"
"DKIM-Signature"
"X-Google-DKIM-Signature"
"X-Gm-Message-State"
"X-Google-Smtp-Source"
"X-Received"
"MIME-Version"
"Received"
"Reply-To"
"From"
"Date"
"Message-ID"
"Subject"
"To"
"Content-Type"
"Bcc"

Once I combined the fact that you can search by rfc822msgid with how to access the message headers using the Gmail API, I was able to put together a simple option for gmail_tool:

# set the shell variable $tid to the matching thread id
$ tid=$(gmail_tool -a list -q "label:SPAM Helms" | cut -d: -f1)

# look up the URL for $tid
$ gmail_tool -a url -i $tid
https://mail.google.com/mail/u/0/?#search/rfc822msgid:CAMHjZTPcpcGY6nyxKNDTDhxjvw1GTrZf%3DVrH-BCtaLxq2d_vqg%40mail.gmail.com

Visiting this URL takes me to a Gmail search result page with the one message I'm seeking:

Success!

Here's the latest version of gmail_tool with both url and header options added:

#!/bin/bash

##
## command line tools for working with Gmail.
##
CLIENT_ID=<from https://console.cloud.google.com/apis/>
CLIENT_SECRET=<from https://console.cloud.google.com/apis/>
API_SCOPE=https://www.googleapis.com/auth/gmail.modify
API_BASE=https://www.googleapis.com/gmail/v1
AUTH_TOKEN=`gapi_auth -i $CLIENT_ID -p $CLIENT_SECRET -s $API_SCOPE token`

usage() {
  cmd="Usage: $(basename $0)"
  echo "$cmd -a init"
  echo "$cmd -a list -q query [-v]"
  echo "$cmd -a get -i id [-v]"
  echo "$cmd -a labels"
  echo "$cmd -a update -i id  -l labels-to-add -r labels-to-remove"
  echo "$cmd -a headers -i id"
  echo "$cmd -a url -i id"
  echo "$cmd -a messages -q query [-v]"

  exit
}

filter() {
  if [ -z "$VERBOSE" ] ; then
    jq "$@"
  else
    cat
  fi
}

listify() {
  sep=""
  expr="[ "
  for x in "$@" ; do
    expr="$expr $sep \"$x\""
    sep=","
  done
  expr="$expr ]"
  echo $expr
}

while getopts ":a:r:q:i:l:vp" opt ; do
  case $opt in
    a) ACTION=$OPTARG             ;;
    v) VERBOSE=yes                ;;
    q) QUERY="$OPTARG"            ;;
    l) LABELS_ADD=$OPTARG         ;;
    r) LABELS_REMOVE=$OPTARG      ;;
    i) ID=$OPTARG                 ;;
    p) PAGING=yes                 ;;
    \?) usage                     ;;
  esac
done

invoke() {
  root=$1 ; shift
  curl -s -H "Authorization: Bearer $AUTH_TOKEN" "$@" > /tmp/yt.buffer.$$
  next_page=`jq -r '.nextPageToken' < /tmp/yt.buffer.$$`

  if [ "$PAGING" = "yes" ] ; then
    if [ "$next_page" = "null" -o -z "$next_page" ] ; then
      cat /tmp/yt.buffer.$$
      rm -f /tmp/yt.buffer.$$
    else
      jq ".$root"  < /tmp/yt.buffer.$$ | sed 's/^.//'> /tmp/yt.master.$$
      while [ "$next_page" != "null" ] ; do
        curl -s -H "Authorization: Bearer $AUTH_TOKEN" "$@" -d pageToken=$next_page |
          tee /tmp/yt.buffer.$$ |
          ( echo "," ; jq ".$root" | sed 's/^.//' ) >> /tmp/yt.master.$$
        next_page=`jq -r '.nextPageToken' < /tmp/yt.buffer.$$`
      done
      rm -f /tmp/yt.buffer.$$
      echo "{ \"$root\" : [ "
      cat /tmp/yt.master.$$
      echo ' ] }'
      rm -f /tmp/yt.master.$$
    fi
  else
    cat /tmp/yt.buffer.$$
    rm /tmp/yt.buffer.$$
  fi
}

case $ACTION in
  init)
    gapi_auth -i $CLIENT_ID -p $CLIENT_SECRET -s $API_SCOPE init
    exit
  ;;
  list)
    if [ -z "$QUERY" ] ; then
      echo "Uh, better provide a query"
      echo
      usage
    fi
    invoke threads -G $API_BASE/users/me/threads \
           --data-urlencode q="$QUERY" \
           -d maxResults=50 |
      filter -r ' .threads[]? |  .id + ":" + (.snippet | gsub("[ \u200c]+$"; ""))'
    ;;

  get)
    if [ -z "$ID" ] ; then
      usage
    fi
    invoke messages -G $API_BASE/users/me/threads/$ID?format=full \
           -d maxResults=50 |
      filter -r ' .messages[] | .id + ":" + (.snippet | gsub("[ \u200c]+$"; ""))'
    ;;

  labels)
    invoke labels -G $API_BASE/users/me/labels |
      filter -r ' .labels[] | .id + ":" + .name'
    ;;


  update)
    if [ -z "$ID" ] ; then
      echo "Missing -i id"
      exit
    fi

    if [ -z "$LABELS_ADD" -a -z "$LABELS_REMOVE" ] ; then
      echo "Refusing to run if you don't provide at least one label to add or remove"
      exit
    fi

    body="{ addLabelIds: $(listify $LABELS_ADD),  removeLabelIds: $(listify $LABELS_REMOVE) }"

    invoke messages -H "Content-Type: application/json" \
           $API_BASE/users/me/threads/$ID/modify \
           -X POST -d "$body" |
        filter -r '.messages[] | .id + ":" + (.labelIds | join(","))'
    ;;

  url)
    if [ -z "$ID" ] ; then
      echo "Missing thread ID"
      exit
    fi

    base_url="https://mail.google.com/mail/u/0/?#search/rfc822msgid"
    message_id=$(gmail_tool -a get  -i $ID -v |
                   jq -r '.messages[0].payload.headers[] | select(.name | ascii_downcase == "message-id") | .value| @uri' |
                   sed -e 's/^%3C//' -e 's/%3E$//' )

    echo "$base_url:$message_id"

    ;;

  headers)
    if [ -z "$ID" ]; then
      echo "Missing thread ID"
      exit
    fi

    gmail_tool -a get -i $ID -v |
      jq -r '.messages[] | .id + ":" + (.payload.headers[] | .name + ":" + .value)'
    ;;

  *)
    usage
    ;;
esac

Monday, May 23, 2022

Duct Tape Engineering

G. has never met a ball he didn't want to befriend. So last night I decided we needed to build a ramp to give his friends a bit of new terrain to play on. Mind you, I didn't plan for this, so we had to improvise. Ultimately, I grabbed an Amazon box and a big ol' roll of duct tape. And thus began our first experience in Redneck Engineering. The 'ramp' we constructed wasn't much, but it got the job done.

I'm proud to report that G now knows two of life's most important phrases: Duct Tape and More Duct Tape.

I'm hardly an expert on these matters, but surely the next lesson we need to tackle is WD-40, right? Because really, what else do you need besides WD-40 and Duct Tape? Oh yeah, channel-lock pliers.