As I’ve fallen further down the rabbit hole, empowered by nix making it so easy to install, configure, and manage software, I discovered Alacritty, a fast, configurable terminal emulator. I’ve used and enjoyed iTerm2 for a while, but it never hurts to try something new. I have some muscle memory built up for how I use my machine, so my aim was to configure something I could use comfortably in Alacritty, modeled on my iTerm2 setup.

I’ve used Hammerspoon as a window manager for almost 10 years. I decided to explore some of the newer tools in window management to see if I could find an alternative approach for what I do with Hammerspoon. Using yabai and skhd, I wrote the following skhdrc file that nearly reproduces the core functionality of my Hammerspoon window management code. I have four general window management use cases:

  • halves
  • quarters
  • maximize
  • move to another display

Here’s how I implemented that with skhd hotkeys mapped to yabai commands:
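The actual skhdrc isn’t reproduced in this excerpt, but a minimal sketch of the idea, using hypothetical keybindings, could look something like this (yabai’s `--grid` takes `rows:cols:start-x:start-y:width:height`):

```
# halves: a 1x2 grid (hypothetical bindings)
ctrl + alt - left  : yabai -m window --grid 1:2:0:0:1:1
ctrl + alt - right : yabai -m window --grid 1:2:1:0:1:1

# quarters: a 2x2 grid
ctrl + alt + cmd - u : yabai -m window --grid 2:2:0:0:1:1
ctrl + alt + cmd - i : yabai -m window --grid 2:2:1:0:1:1
ctrl + alt + cmd - j : yabai -m window --grid 2:2:0:1:1:1
ctrl + alt + cmd - k : yabai -m window --grid 2:2:1:1:1:1

# maximize: a 1x1 grid
ctrl + alt - return : yabai -m window --grid 1:1:0:0:1:1

# move the focused window to the next display, wrapping around
ctrl + alt - n : yabai -m window --display next || yabai -m window --display first
```

Each line follows skhd’s `modifiers - key : command` format; the grid values map directly onto the halves/quarters/maximize cases above.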

This post is quite similar to an earlier one, nix flakes and direnv. Here, I repeated my process, but with a little more thought and a little less language model magic.

I set up my new computer to use nix, switching away from Homebrew, which I’d used to manage and install dependencies on my system for about a decade. My goal was to unify my configuration management with my package management. Thus far, I’ve been quite satisfied. However, I’ve also relied on asdf to manage and switch between multiple versions of things like Python and Node. Lately, I’ve been jumping between projects that use different versions of Node. While modifying my home.nix file and rebuilding would be pretty simple, I wanted to see if I could enable easy access to multiple versions of Node at the same time. My first attempt was to add both of the following to my home.packages
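As a sketch of that first attempt (attribute names vary with your nixpkgs pin, so treat these as illustrative):

```nix
# home.nix -- hypothetical sketch; both packages install a `node` binary,
# so putting them side by side in one profile causes a collision
home.packages = with pkgs; [
  nodejs_18
  nodejs_20
];
```

Since both derivations place `bin/node` in the same profile, home-manager can’t build this as written, which is what motivates looking for another approach.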

I was following this guide to set up nix-darwin on a new Mac when I ran into an issue in the section about cross-compiling Linux binaries. I put the issue aside when I first encountered it because I was trying to set up dependency management for my new system and this problem didn’t prevent that. However, I was reminded of it when I read another article by Jacek, which motivated me to figure out what the problem was.

Last year I wrote about nix and direnv as I explored the potential convenience of an isolated, project-specific environment. There were some interesting initial learnings about nix, but I didn’t really know what I was doing. Now, I still don’t know what I’m doing, but I’ve been doing it for longer. As an example, I’m going to walk through how I set up a flake-driven development environment for this blog with direnv.
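As a rough sketch of the shape such a setup takes (the exact inputs and system string are assumptions, not my actual flake), a flake exposing a dev shell plus an `.envrc` that activates it might look like:

```nix
# flake.nix -- minimal sketch of a dev shell for a Hugo + Python site
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";

  outputs = { self, nixpkgs }:
    let
      system = "aarch64-darwin"; # adjust for your machine
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      devShells.${system}.default = pkgs.mkShell {
        packages = [ pkgs.hugo pkgs.python3 ];
      };
    };
}
```

With an `.envrc` containing `use flake`, direnv drops you into that shell automatically whenever you `cd` into the project.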

I just did a fresh clone of my site for the first time in (probably) years. I’ve been using nix on my new system, so I was writing a flake to set up a development environment for the site with Hugo and Python. When I ran hugo serve, I saw all my content show up

                   | EN
-------------------+------
  Pages            | 528
  Paginator pages  |  20
  Non-page files   |   0
  Static files     | 173
  Processed images |   0
  Aliases          |  53
  Sitemaps         |   1
  Cleaned          |   0

but when I went to load the local site at localhost:1313, I saw “Page Not Found”. Being new to nix and still not quite understanding everything I was doing, I assumed something was wrong with my flake or system install. After half an hour of tearing things apart, and even checking my old system and deployment pipeline to make sure the version of Hugo I was using with nix was close to my old system’s, I started paying more attention to the warnings in the console

OpenAI popularized the pattern of streaming results from a backend API in real time with ChatGPT. This approach is useful because language model inference often takes longer than you’d want an API call to take for a product to feel snappy. By streaming results as they’re produced, the user can start reading right away and the experience doesn’t feel slow.

OpenAI has a nice example of how to use their client to stream results. That approach makes it straightforward to print each token as the model returns it. Most user-facing apps aren’t command line interfaces, though, so to build our own ChatGPT-like experience, where tokens show up in real time in a user interface, we need to do a bit more work. Using Server-Sent Events (SSE), we can display results to a user on a webpage in real time.
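The server side of SSE boils down to emitting `data: <payload>` lines separated by blank lines over a response with the `text/event-stream` content type. As a minimal sketch (the function name and `[DONE]` sentinel are my own conventions here, not part of any library), wrapping a stream of model tokens into SSE frames might look like:

```python
import json
from typing import Iterable, Iterator


def sse_events(tokens: Iterable[str]) -> Iterator[str]:
    """Wrap each token in the SSE wire format: 'data: <payload>\n\n'."""
    for token in tokens:
        # Each frame is one event; the blank line terminates it.
        yield f"data: {json.dumps({'token': token})}\n\n"
    # A sentinel event tells the browser the stream is finished.
    yield "data: [DONE]\n\n"
```

A web framework would serve this generator as a streaming response with `Content-Type: text/event-stream`; on the frontend, an `EventSource` (or `fetch` with a stream reader) receives each event and appends the token to the page as it arrives.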

I tried out Deno for the first time. Deno bills itself as

the most productive, secure, and performant JavaScript runtime for the modern programmer

Given my experience with it so far, I think it may have a case. One thing I immediately appreciated about Deno was how quickly I could go from zero to running code. It’s one of the things I like about Python that has kept me coming back despite a number of other shortcomings. Deno integrates easily into VS Code (Cursor) with the vscode_deno plugin. I found this plugin with a quick search in the marketplace.

Disclaimer: I am not a security expert or a security professional.

I’ve tried out many new AI/LLM libraries in the past year. Many of these are written in Python. While trying out new and exciting software is a lot of fun, it’s also important to be mindful about what code you allow to run on your system. Even if code is open source, it’s still possible that the cool open source library you installed includes code like

Edit (2024-07-21): Vercel has updated the ai package to use different abstractions than the examples below. Consider reading their docs first before using the example below, which is out of date.

Vercel has a library called ai that is useful for building language model chat applications. I used it to help build Write Partner. The library has two main components:

  • A backend API, called by a frontend app, that streams language model responses
  • A hook (in React) that provides access to the chat, its messages and an API to fetch completions

When designing Write Partner, I started the chat session with the following messages