Render is a platform as a service company that makes it easy to quickly deploy small apps.
They have an easy-to-use free tier, and I wanted to run a Python app with dependencies managed by Poetry.
Things had been going pretty well until I unexpectedly got the following error after a deploy:
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
You don’t have to search for too long to find out this isn’t good.
I tried changing the PYTHON_VERSION and POETRY_VERSION to no avail.
I also read a few threads on community.render.com.
With nothing much else I could think of trying, I happened to find the Clear build cache & deploy sub-option under Manual Deploy.
Fortunately for me, running that fixed my issue.
Hopefully, this helps save someone time.
In JavaScript, using async/await is a cleaner approach than using callbacks.
Occasionally, you run into useful but older modules that you’d like to use in the more modern way.
Take fluent-ffmpeg, a 10-year-old package that uses callbacks to handle various events like start, progress, end, and error.
Using callbacks, we have code that looks like this:
const ffmpeg = require('fluent-ffmpeg');

function convertVideo(inputPath, outputPath, callback) {
  ffmpeg(inputPath)
    .output(outputPath)
    .on('end', () => {
      console.log('Conversion finished successfully.');
      callback(null, 'success'); // Pass 'success' string to callback
    })
    .on('error', (err) => {
      console.error('Error occurred:', err);
      callback(err);
    })
    .run();
}

// Usage of the convertVideo function with a callback to receive 'success' string
convertVideo('/path/to/input.avi', '/path/to/output.mp4', (error, result) => {
  if (!error && result === 'success') {
    console.log('Video conversion completed:', result);
  } else {
    console.log('Video conversion failed:', error);
  }
});
Using a promise, we can use async/await as well:
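As a sketch of what that can look like (not necessarily the original code), wrap the callback-based call in a Promise and then await it; the function and paths mirror the example above.

const ffmpeg = require('fluent-ffmpeg');

// Wrap the callback-based fluent-ffmpeg API in a Promise.
function convertVideo(inputPath, outputPath) {
  return new Promise((resolve, reject) => {
    ffmpeg(inputPath)
      .output(outputPath)
      .on('end', () => resolve('success'))
      .on('error', (err) => reject(err))
      .run();
  });
}

// Usage with async/await
async function main() {
  try {
    const result = await convertVideo('/path/to/input.avi', '/path/to/output.mp4');
    console.log('Video conversion completed:', result);
  } catch (error) {
    console.log('Video conversion failed:', error);
  }
}

main();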
When deploying software with Kubernetes, you need to expose a liveness HTTP endpoint in the application.
The Kubernetes default liveness endpoint is /healthz, which seems to be a Google convention (z-pages).
A lot of Kubernetes deployments won’t rely on the defaults.
Here is an example Kubernetes pod configuration for a liveness check at <ip>:8080/health:
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: "/health"
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
When setting up a new app to be deployed on Kubernetes, the liveness endpoint is ideally defined in a service scaffold (this is company- and framework-dependent), but in case it isn’t, you just need to add a simple HTTP handler for the route configured in the YAML file.
In an Express app, it could look something like this:
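Here is a minimal sketch; the app setup and port are assumptions, and only the route needs to match the pod spec above.

const express = require('express');
const app = express();

// Liveness endpoint matching the path configured in the livenessProbe
app.get('/health', (req, res) => {
  res.status(200).send('OK');
});

app.listen(8080, () => {
  console.log('Listening on port 8080');
});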
Given the following make target
.PHONY: my_target
my_target:
	@python scripts/my_script.py $(arg)
one can pass an argument to it in the following manner
make my_target arg=my_arg
I used this approach to run a Python script to create the file for this post
make til p=make/pass-arg-to-target
for the following make target:
.PHONY: til
til:
	@python scripts/til.py $(p)
It’s also possible to prepend the variable assignment to the command:
p=make/pass-arg-to-target make til
I learned about skhd recently, actually after coming across the yabai project.
I’ve been toying with the idea of moving away from Hammerspoon for my hotkey and window management, so I took the opportunity to explore skhd as a possible alternative.
Initial setup
To get started on macOS, I followed the guide in the project README.
First, I installed skhd via brew.
brew install koekeishiya/formulae/skhd
The instructions say to start the service immediately.
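If I remember the README correctly, that’s a one-liner along these lines (double-check the current instructions, since newer releases also provide an skhd --start-service flag):

# start skhd as a background service managed by Homebrew
brew services start koekeishiya/formulae/skhd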
I used open-interpreter to read an epub file and create a DIY audio book.
Open-interpreter suggested that I use the bs4 and ebooklib libraries.
It recommended an API to create audio files from text, but I was easily able to switch this out for say, the free, local alternative on macOS.
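As a rough sketch of that pipeline (the file names and AIFF output format are my assumptions, not necessarily what open-interpreter produced):

import subprocess

import ebooklib
from bs4 import BeautifulSoup
from ebooklib import epub

book = epub.read_epub("book.epub")

# Each document item in the epub roughly corresponds to one chapter/track.
for i, item in enumerate(book.get_items_of_type(ebooklib.ITEM_DOCUMENT)):
    # Strip the chapter's HTML down to plain text with bs4.
    text = BeautifulSoup(item.get_content(), "html.parser").get_text()
    if not text.strip():
        continue
    # Use macOS's built-in `say` to synthesize an audio file, reading text from stdin.
    subprocess.run(
        ["say", "-o", f"chapter_{i:02d}.aiff", "-f", "-"],
        input=text.encode(),
        check=True,
    )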
As I worked (let the model write code), it was easier to copy the code to a separate file and make modifications.
However, the initial prototype built by open-interpreter accomplished the majority of the work.
I was able to go from an epub file to 48 audio tracks on my phone in 15 minutes or so.
Open-interpreter was a joy to collaborate with.
My main wish for it at this point is for it to write the code it generates to a notebook that I can collaborate with it in.
This would allow me to help open-interpreter resolve issues it gets stuck on, and maintain a copy of the source that I can revisit in future sessions, or eventually turn the code into a more fully formed program.
There is a website I log into often that I protect with 2FA.
One thing that bothers me about this process is that the 2FA screen does not immediately focus the input, so I can’t immediately start entering my 2FA code.
Today, I tackled that problem.
The most recent experience I’ve had writing userscripts was with a closed source browser extension.
A few minutes of searching later, I discovered Violentmonkey, an open source option with no tracking software.
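With Violentmonkey installed, the fix amounts to a small userscript that focuses the code field when the page loads. A minimal sketch, with a placeholder @match URL and selector rather than the real site’s:

// ==UserScript==
// @name     Focus 2FA input
// @match    https://example.com/2fa*
// @grant    none
// ==/UserScript==

(function () {
  // Focus the one-time-code field so typing can start immediately.
  const input = document.querySelector('input[autocomplete="one-time-code"], input[type="text"]');
  if (input) {
    input.focus();
  }
})();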
I usually use
to get all but the first line of a file, but today I learned you can also accomplish the same task with
Both also work for removing more than just the first line of an input.
To remove the first three lines
is equivalent to
It seems like tail is recommended for larger files though, since it doesn’t process the entire file.
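For reference, with the usual pair of tools for this job (tail and sed; these are my examples, not necessarily the exact commands from the original):

# print everything except the first line
tail -n +2 file.txt
sed 1d file.txt

# remove the first three lines instead
tail -n +4 file.txt
sed 1,3d file.txt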
To write software is to experience constant failure until you get a success.
When you start learning to write code, very little works, especially on your first try.
You make a lot of mistakes.
Maybe you copied example code to get started, then modified it to try to do something new.
Reading errors to help you understand your mistakes is the only way forward.
You can read documentation, search the web or chat with a language model to try and work through the problem, but it is inevitable that you will make mistakes when writing software.
A spot where I slipped up, first while trying to adopt Temporal in an existing Python project and then again while starting a new Python project, was defining a Workflow that invokes an Activity that calls a third-party library.
Temporal outputs an error message with a long stack trace that I vaguely understood but didn’t immediately know the solution to:
...
raise RestrictedWorkflowAccessError(f"{self.name}.{name}")
temporalio.worker.workflow_sandbox._restrictions.RestrictedWorkflowAccessError: Cannot access http.server.BaseHTTPRequestHandler.responses from inside a workflow. If this is code from a module not used in a workflow or known to only be used deterministically from a workflow, mark the import as pass through.
The message itself is very informative (“mark the import as pass through”), but it requires a follow-up search to find the right snippet to get it right.
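In the Python SDK, that snippet is (if I’m remembering the API correctly) the workflow.unsafe.imports_passed_through() context manager wrapped around the offending imports in the workflow file. A sketch, with a hypothetical activity module:

from datetime import timedelta

from temporalio import workflow

# Mark third-party / non-deterministic imports as pass-through so the
# workflow sandbox doesn't restrict them.
with workflow.unsafe.imports_passed_through():
    from my_activities import convert_file  # hypothetical activity

@workflow.defn
class ConvertWorkflow:
    @workflow.run
    async def run(self, path: str) -> str:
        return await workflow.execute_activity(
            convert_file,
            path,
            start_to_close_timeout=timedelta(minutes=5),
        )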
I also overlooked the note about importing Activities in Python, mentioned in the Getting Started Guide.