When deploying software to Kubernetes, you need to expose a liveness HTTP endpoint in the application. The conventional Kubernetes liveness endpoint is /healthz, which seems to come from a Google convention known as z-pages. A lot of Kubernetes deployments won’t rely on the defaults. Here is an example Kubernetes pod configuration for a liveness check at <ip>:8080/health:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: "/health"
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3

When setting up a new app to be deployed on Kubernetes, ideally, the liveness endpoint is defined in a service scaffold (this is company and framework dependent), but in case it isn’t, you just need to add a simple HTTP handler for the route configured in the yaml file. In an express app, it could look something like this:
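
const express = require("express");

const app = express();

// Handle the path configured in the livenessProbe above; a 200 tells
// the kubelet the process is alive
app.get("/health", (req, res) => {
  res.sendStatus(200);
});

// Listen on the port from the pod spec
app.listen(8080);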

Given the following make target

.PHONY: my_target
my_target:
	@python scripts/my_script.py $(arg)

one can invoke the target with an argument in the following manner

make my_target arg=my_arg

I used this approach to run a Python script that creates the file for this post

make til p=make/pass-arg-to-target

for the following make target

.PHONY: til
til:
	@python scripts/til.py $(p)

It’s also possible to prepend the variable assignment to the command

p=make/pass-arg-to-target make til

I learned about skhd recently, actually after coming across the yabai project. I’ve been toying with the idea of moving away from Hammerspoon for my hotkey and window management, so I took the opportunity to explore skhd as a possible alternative.

Initial setup

To get started on macOS, I followed the guide in the project README. First, I installed skhd via brew.

brew install koekeishiya/formulae/skhd

The instructions say to start the service immediately with
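
skhd --start-service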

I used open-interpreter to read an epub file and create a DIY audio book.

Open-interpreter suggested that I use the bs4 and ebooklib libraries. It recommended an API to create audio files from text, but I was easily able to switch this out for a free, local alternative: the say command on macOS. As I worked (letting the model write code), it was easier to copy the code to a separate file and make modifications there. Still, the initial prototype built by open-interpreter accomplished the majority of the work. I was able to go from an epub file to 48 audio tracks on my phone in 15 minutes or so.

Open-interpreter was a joy to collaborate with. My main wish for it at this point is for it to write the code it generates to a notebook that I can collaborate in. This would allow me to help open-interpreter resolve issues it gets stuck on and keep a copy of the source that I can revisit in future sessions, or eventually turn into a more fully formed program.
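
A minimal sketch of the pipeline, assuming one track per document item in the epub and using placeholder file names (this is an illustration, not the exact generated code):

import subprocess

import ebooklib
from bs4 import BeautifulSoup
from ebooklib import epub

# "book.epub" is a placeholder input path
book = epub.read_epub("book.epub")

for i, item in enumerate(book.get_items_of_type(ebooklib.ITEM_DOCUMENT)):
    # Strip each chapter’s HTML down to plain text
    text = BeautifulSoup(item.get_content(), "html.parser").get_text()
    if not text.strip():
        continue
    # say reads text from stdin and, with -o, writes an audio file
    # instead of speaking aloud (macOS only)
    subprocess.run(
        ["say", "-o", f"track-{i:02}.aiff"],
        input=text,
        text=True,
        check=True,
    )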

There is a website I log into often that I protect with 2FA. One thing that bothers me about this process is that the 2FA screen does not immediately focus the input so that I can start entering my 2FA code right away. Today, I tackled that problem.

The most recent experience I’ve had writing userscripts was with a closed source browser extension. A few minutes of searching and I discovered Violentmonkey, an open source option with no tracking software.
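
The script itself is only a few lines. A sketch of its shape, with the @match URL and the input selector standing in for the real site’s values:

// ==UserScript==
// @name     Focus 2FA input
// @match    https://example.com/2fa*
// @grant    none
// ==/UserScript==

// Placeholder selector; inspect the actual page to find the 2FA input
const input = document.querySelector('input[autocomplete="one-time-code"]');
if (input) {
  input.focus();
}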

I usually use

tail -n +2

to get all but the first line of a file, but today I learned you can also accomplish the same task with

sed '1d'

Both also work for removing more than just the first line of an input. To remove the first three lines

sed '1,3d'

is equivalent to

tail -n +4

It seems like tail is recommended for larger files though, since it doesn’t process the entire file.

A spot where I slipped up when trying to adopt Temporal in an existing Python project, and then again when starting a new Python project, was in defining a Workflow that invokes an Activity that calls a third-party library. Temporal outputs an error message with a long stack trace that I vaguely understood but didn’t immediately know the solution to

...
raise RestrictedWorkflowAccessError(f"{self.name}.{name}")
temporalio.worker.workflow_sandbox._restrictions.RestrictedWorkflowAccessError: Cannot access http.server.BaseHTTPRequestHandler.responses from inside a workflow. If this is code from a module not used in a workflow or known to only be used deterministically from a workflow, mark the import as pass through.

The message itself is very informative (“mark the import as pass through”), but it requires a follow-up search to find the right snippet. I had also overlooked the note about importing Activities in Python, mentioned in the Getting Started Guide.
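
For reference, the relevant snippet is the sandbox’s pass-through context manager, used where the workflow’s imports happen, with http.server below standing in for whichever module triggered the error:

from temporalio import workflow

# Mark imports that are only used deterministically as pass through so
# the workflow sandbox does not reload them
with workflow.unsafe.imports_passed_through():
    import http.server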

I wanted to stop the Obsidian editor cursor from blinking. Something like VS Code’s

{
    "editor.cursorBlinking": "solid"
}

Some searching turned up an option to solve this problem in Vim mode using CSS, but in insert mode, the cursor still blinks. Eventually, I came across a macOS-level approach to this issue on StackExchange, included here for convenience

defaults write -g NSTextInsertionPointBlinkPeriod -float 10000
defaults write -g NSTextInsertionPointBlinkPeriodOn -float 10000
defaults write -g NSTextInsertionPointBlinkPeriodOff -float 10000

After running these commands, restart Obsidian and the cursor no longer blinks. These configuration changes also disable cursor blinking in other macOS applications, which, for me, is a welcome change.
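
If you want the blinking back, deleting the keys should restore the system defaults

defaults delete -g NSTextInsertionPointBlinkPeriod
defaults delete -g NSTextInsertionPointBlinkPeriodOn
defaults delete -g NSTextInsertionPointBlinkPeriodOff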

Cursor is VS Code with a Cmd+K shortcut that opens a text box that can do text generation based on a prompt. When I created this post, I first typed

insert hugo yaml markdown frontmatter

In a few seconds, the editor output

---
title: "Cursor Introduction"
date: 2023-08-12T20:00:00-04:00
draft: false
tags:
- cursor
- intro
---

This was almost exactly what I was looking for, except the date was not quite right, so I corrected that and accepted the generation. Since I use VS Code, I felt at home immediately. The only thing missing was my extensions, but extension installation works exactly the same way. Cmd+L opens a ChatGPT-style chat in the right sidebar. You can reference files within the chat interface with @, which seems to load them into the language model prompt as context. I asked it to describe what this post was about with

The problem with long-running code in Next serverless functions

The current design paradigm, at the time of this writing, is called the App Router.

Next.js and Vercel provide a simple mechanism for writing and deploying cloud functions that expose HTTP endpoints for your frontend site to call. However, sometimes you want to do work asynchronously on the backend in a way that doesn’t block a frontend caller that needs to move on. You could fire and forget the call from the frontend, but this is often not safe when running in a serverless environment. The following approach uses two server-side API endpoints to run an asynchronous function from the perspective of the frontend caller.
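
A minimal sketch of that shape with App Router route handlers, where the file paths and doLongRunningWork are placeholders, and how reliably an unawaited fetch triggers the worker depends on the platform:

// app/api/start/route.ts (placeholder path)
export async function POST(request: Request) {
  const payload = await request.json();

  // Trigger the worker endpoint without awaiting its slow response.
  // Assumption: the worker runs as its own serverless invocation, so it
  // can keep working after this handler returns.
  fetch(new URL("/api/work", request.url), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  }).catch(() => {});

  // The frontend only ever waits on this fast response
  return Response.json({ status: "started" }, { status: 202 });
}

// app/api/work/route.ts (placeholder path)
export async function POST(request: Request) {
  const payload = await request.json();
  await doLongRunningWork(payload); // placeholder for the real job
  return Response.json({ status: "done" });
}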