Last year I wrote about nix and direnv as I explored the potential convenience of an isolated, project-specific environment. There were some interesting initial learnings about nix, but I didn’t really know what I was doing. Now, I still don’t know what I’m doing, but I’ve been doing it for longer. As an example, I’m going to walk through how I set up a flake-driven development environment for this blog with direnv.
I just did a fresh clone of my site for the first time in (probably) years.
I’ve been using nix on my new system, so I was writing a flake to set up a development environment for the site with Hugo and Python.
When I ran hugo serve, I saw all my content show up:
                   | EN
-------------------+------
  Pages            | 528
  Paginator pages  |  20
  Non-page files   |   0
  Static files     | 173
  Processed images |   0
  Aliases          |  53
  Sitemaps         |   1
  Cleaned          |   0
but when I went to load the local site at localhost:1313, I saw “Page Not Found”.
Being new to nix and still not quite understanding everything I’m doing, I assumed it was something wrong with my flake or system install.
After half an hour of tearing things up, and even checking my old system and deployment pipeline to make sure the version of Hugo I was using with nix was close to my old system’s, I started paying more attention to the warnings in the console
I tried out Deno for the first time. Deno bills itself as
the most productive, secure, and performant JavaScript runtime for the modern programmer
Given my experience with it so far, I think it may have a case. One thing I immediately appreciated about Deno was how quickly I could go from zero to running code. It’s one of the things I like about Python that has kept me coming back despite a number of other shortcomings. Deno integrates easily into VS Code (Cursor) with the vscode_deno plugin. I found this plugin with a quick search in the marketplace.
Edit (2024-07-21): Vercel has updated the ai package to use different abstractions than the examples below. Consider reading their docs before using these examples, which are out of date.
Vercel has a library called ai that is useful for building language model chat applications.
I used it to help build Write Partner.
The library has two main components:
- A backend API, called by the frontend app, that streams language model responses
- A hook (in React) that provides access to the chat, its messages, and an API to fetch completions
When designing Write Partner, I started the chat session with the following messages
Goku has a concept called a simlayer.
A simlayer allows you to press any single key on the keyboard, then a second key while holding the first, and trigger an arbitrary action as a result.
I’m going to write a karabiner.edn config that opens Firefox when you press .+f.
{:simlayers {:launch-mode {:key :period}},
 :templates {:open-app "open -a \"%s\""},
 :main [{:des "launch mode",
         :rules [:launch-mode [:f [:open-app "Firefox"]]]}]}
❯ goku
Done!
To start, we define a simlayer for the period key.
We will reference this layer when we define our rules.
Next we define a template.
Each entry in :templates is a templated shell command that can run when a rule is satisfied.
Finally, we define the “launch mode” rule in :main.
We can call it anything we want, so I chose “launch mode”.
Now let’s break down the rule
Karabiner is a keyboard customizer for macOS. I’ve used it for a while to map my caps lock key to cmd + ctrl + option + shift. This key combination is sometimes called a hyper key. With this keyboard override, I use other programs like Hammerspoon and Alfred to do things like toggle apps and open links. Karabiner provides an out-of-the-box, predefined rule to perform this complex modification. I’ve used this approach for a while but recently learned about Goku, which adds a lot of additional functionality by using Clojure’s extensible data notation (edn) to configure Karabiner declaratively.
I’ve started playing around with Fireworks.ai to run inference using open source language models via an API.
Fireworks’ product is the best I’ve come across for this use case.
While Fireworks has its own client, I wanted to try the OpenAI Python SDK compatibility approach, since I have a lot of code that uses the OpenAI SDK.
It looks like Fireworks’ documented approach no longer works since OpenAI published version 1.0.0 of their SDK.
I got this error message:
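Since the 1.0.0 release replaced module-level settings like openai.api_base with a client object, the compatibility approach now looks roughly like the sketch below. The base URL and model id here are assumptions for illustration, not values from Fireworks’ docs.

```python
import os

from openai import OpenAI

# The base URL and model id are assumptions for illustration;
# check Fireworks' docs for the current values.
client = OpenAI(
    api_key=os.environ["FIREWORKS_API_KEY"],
    base_url="https://api.fireworks.ai/inference/v1",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v2-13b-chat",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```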
In a previous note, I discussed running coroutines in a non-blocking manner using gather.
This approach works well when you have a known number of coroutines that you want to run concurrently.
However, if you have tens, hundreds, or more tasks, especially when network calls are involved, it can be important to limit concurrency.
We can use a semaphore to limit the number of coroutines that are running at once by blocking until other coroutines have finished executing.
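Here’s a minimal sketch of that pattern; the limit of 5 and the sleep standing in for a network call are arbitrary choices.

```python
import asyncio


async def fetch(i: int, sem: asyncio.Semaphore) -> int:
    # Only a limited number of coroutines can be inside this block at once;
    # the rest wait at the `async with` until a slot frees up
    async with sem:
        await asyncio.sleep(1)  # stand-in for a network call
        return i


async def main() -> None:
    sem = asyncio.Semaphore(5)  # allow at most 5 in-flight "requests"
    results = await asyncio.gather(*(fetch(i, sem) for i in range(100)))
    print(len(results))


asyncio.run(main())
```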
Python coroutines allow for asynchronous programming in a language that, earlier in its history, only supported synchronous execution. I’ve previously compared taking a synchronous approach in Python to a parallel approach in Go using channels. If you’re familiar with async/await in JavaScript, Python’s syntax will look familiar. Python’s event loop allows coroutines to yield control back to the loop, awaiting their turn to resume execution, which can lead to more efficient use of resources. Using coroutines in Python is different from JavaScript because they can easily, even accidentally, be intermingled with synchronously executing functions. Doing this can produce unexpected results, such as blocking the event loop and preventing other tasks from running concurrently.
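To make that last point concrete, here’s a small sketch (the function names are mine) contrasting a synchronous time.sleep call, which holds the event loop, with await asyncio.sleep, which yields control back to it.

```python
import asyncio
import time


async def ticker(label: str) -> None:
    # Prints roughly once per second while the event loop is free to schedule it
    for _ in range(3):
        print(f"{label}: tick")
        await asyncio.sleep(1)


async def blocking_work() -> None:
    # time.sleep is synchronous: it holds the event loop for the full 3 seconds,
    # so the ticker can't make progress until it returns
    time.sleep(3)


async def cooperative_work() -> None:
    # asyncio.sleep yields control back to the loop, so the ticker keeps running
    await asyncio.sleep(3)


async def main() -> None:
    print("with a blocking call:")
    await asyncio.gather(ticker("blocked"), blocking_work())
    print("with a cooperative call:")
    await asyncio.gather(ticker("free"), cooperative_work())


asyncio.run(main())
```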
Render is a platform-as-a-service company that makes it easy to quickly deploy small apps. They have an easy-to-use free tier and I wanted to run a Python app with dependencies managed by Poetry. Things had been going pretty well until I unexpectedly got the following error after a deploy
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
You don’t have to search for too long to find out this isn’t good.
I tried changing the PYTHON_VERSION and POETRY_VERSION to no avail.
I also read a few threads on community.render.com.
With nothing much else I could think of trying, I happened to find the Clear build cache & deploy sub-option under Manual Deploy.
Fortunately for me, running that fixed my issue.
Hopefully, this helps save someone time.