I had the idea to try using a language model as a random number generator. I didn’t expect it to actually work as a uniform random number generator, but I was curious to see what the distribution of numbers would look like.

My goal was to prompt the model to generate a number between 1 and 100. I could also vary the temperature to see how that changed the distribution of the numbers.
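
As a rough sketch, the sampling loop might look something like the following, assuming the OpenAI Python client (the model name and prompt wording here are illustrative, not necessarily what I used):

from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def sample_number(temperature: float) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=temperature,
        messages=[{
            "role": "user",
            "content": "Generate a random number between 1 and 100. Respond with only the number.",
        }],
    )
    return response.choices[0].message.content.strip()

# Tally the distribution over many samples at a fixed temperature
counts = Counter(sample_number(temperature=1.0) for _ in range(100))
print(counts.most_common(10))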

In this notebook, we train two similar neural nets on the classic Titanic dataset using techniques from fastbook chapters 1 and 4.

We train the first using mostly PyTorch APIs and the second with fastai APIs. There are a few cells that output warnings; I kept those because I wanted to preserve printouts of the models’ accuracy.

The Titanic dataset can be downloaded from the link above or with:

!kaggle competitions download -c titanic

To start, we install and import the dependencies we’ll need:
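
The notebook’s actual cell isn’t reproduced here, but a plausible version of it, assuming pandas, PyTorch, and fastai are the dependencies in question:

!pip install -Uqq fastai pandas

import pandas as pd
import torch
from fastai.tabular.all import *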

I use direnv to manage my shell environment for projects. When using a Jupyter notebook within a project, I realized that the environment variables in my .envrc file were not being made available to my notebooks. The following worked for me as a low-effort way to load my environment into the notebook in a way that wouldn’t risk secrets being committed to source control, since I gitignore the .envrc file.

The code below assumes an .envrc file exists in the project root, containing simple export KEY=value statements.
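
Here is a minimal sketch of that loading logic, assuming the .envrc contains only plain export KEY=value lines (an .envrc using direnv’s more advanced features would need real shell evaluation instead):

import os
from pathlib import Path

# Parse simple `export KEY=value` lines from .envrc into the notebook's environment
for line in Path(".envrc").read_text().splitlines():
    line = line.strip()
    if line.startswith("export ") and "=" in line:
        key, _, value = line[len("export "):].partition("=")
        os.environ[key.strip()] = value.strip().strip('"').strip("'")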

I upgraded to macOS Sequoia a few weeks ago. I had a feeling this update wasn’t going to be trivial with my Nix setup, but after trying to upgrade to a newer package version on unstable, I got a message that seemed to imply I needed to upgrade the OS, so I went for it. I was at least confident I wouldn’t lose too much of my setup, given it’s all committed to version control in my nix-config repo.

I added some configuration to this Hugo site to allow access to the raw Markdown versions of posts. This enables you to hit URLs such as this to get the raw Markdown of this post. You can find the same Raw link at the bottom of all my posts as well.

This addition was made possible with the following config changes

[outputs]
# ...
page = ["HTML", "Markdown"]

[mediaTypes]
[mediaTypes."text/markdown"]
suffixes = ["md"]

[outputFormats]
[outputFormats.Markdown]
mediaType = "text/markdown"
isPlainText = true
isHTML = false
baseName = "index"
rel = "alternate"

which rebuilds the original post markdown according to the definition in layouts/_default/single.md
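
That template isn’t reproduced here, but a minimal sketch of it might reconstruct the front matter and emit the raw body via Hugo’s .RawContent page variable (the real template likely differs):

---
title: "{{ .Title }}"
date: {{ .Date }}
---

{{ .RawContent }}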

Hugo allows you to store your images with your content using a feature called page bundles. I was loosely familiar with the feature, but Claude explained to me how I could use it to better organize posts on this site and the images I add to them. Previously, I defined a _static directory at the root of this site and mirrored my entire content folder hierarchy inside _static/img. This approach works ok and is pretty useful if I want to share images across posts, but jumping between these two mirrored hierarchies became a bit tedious while I was trying to add images to the markdown file I generated from a Jupyter notebook (.ipynb file). Using page bundles, I could store the images right next to the content like this:
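
For example, a post and its images can live together in a single bundle directory (file names here are illustrative):

content/posts/my-post/
├── index.md
├── training-loss.png
└── confusion-matrix.png

Inside index.md, an image can then be referenced with a relative path like ![training loss](training-loss.png).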

I was listening to episode 34 of AI & I, in which Dan Shipper interviews Simon Eskildsen. Simon described one of the processes he uses with language models to learn new words and concepts. In practice, he has a prompt template that instructs the model to explain a word to him, use it in a few sentences, and give synonyms; he injects the specific word or phrase into this template.
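
I don’t have Simon’s exact template, but a rough sketch of the idea might look like this:

PROMPT_TEMPLATE = """
Explain the word or phrase "{word}" to me.
Use it in a few example sentences and give some synonyms.
"""

def build_prompt(word: str) -> str:
    # Inject the specific word or phrase into the reusable template
    return PROMPT_TEMPLATE.format(word=word)

print(build_prompt("perspicacious"))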

The following is the notebook I used to experiment with training an image model to classify types of rowing shells (with people rowing them), and to classify the same dataset by rowing technique (sweep vs. scull). There are a few cells that output a batch of the data. I decided not to include these because the rowers in these images didn’t ask to be on my website. I’ll keep this in mind when selecting future datasets, as I think showing the data batches in the notebook/post is helpful for understanding what is going on.

I’ve continued experimenting with techniques to prompt a language model to solve Connections. At a high level, I set out to design an approach that holds the model to a standard similar to a human player’s, within the restrictions of the game. These standards and guardrails include the following (a sketch of the scoring logic follows the list):

  1. The model is only prompted to make one guess at a time
  2. The model is given feedback after each guess, including:
    • whether the guess was correct or incorrect
    • whether three of the four words were correct
    • whether the guess was invalid (a repeated group, more or fewer than four words, or hallucinated words that aren’t in the puzzle)
  3. If the model guesses four words that fit in a group, the guess is considered correct, even if the category isn’t correct
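
As an illustrative sketch (not my actual scorer implementation), the feedback logic for a single guess might look like this:

def score_guess(
    guess: list[str],
    groups: list[set[str]],          # the four solution groups
    previous_guesses: list[set[str]],
    puzzle_words: set[str],
) -> str:
    guess_set = set(guess)
    # Guardrails: exactly four distinct words, all from the puzzle, not a repeat
    if len(guess_set) != 4:
        return "invalid: a guess must contain exactly four distinct words"
    if not guess_set <= puzzle_words:
        return "invalid: the guess contains words not in the puzzle"
    if guess_set in previous_guesses:
        return "invalid: this group was already guessed"
    previous_guesses.append(guess_set)
    # A guess is correct if the four words form any group, even if the
    # model's stated category for them is wrong
    if any(guess_set == group for group in groups):
        return "correct"
    # Report when three of the four words belong to the same group
    if any(len(guess_set & group) == 3 for group in groups):
        return "incorrect: three of the four words belong together"
    return "incorrect"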

An example

Here is an example conversation between the model and the scorer, as the model attempts to solve the puzzle.

I set out to do a project using what I learned in the first chapter of the fast.ai course. My first idea was to try to train a Ruby/Python classifier. ResNets are not designed for this, but I was curious how well one would perform.

Classifying images of source code by language

My plan was to download a bunch of source code from GitHub, sort it by language, then convert it to images with Carbon. After working through some GitHub rate-limiting issues, I eventually had a list of the top 10 repositories for several different languages. From there, I created a list of the files in these repos, filtering by the extension of the programming language I wanted to download.
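
A sketch of that file-listing step, using the GitHub search and git trees APIs (the language and extension here are illustrative; unauthenticated requests hit the rate limits mentioned above quickly):

import requests

def top_repos(language: str, count: int = 10) -> list[dict]:
    # The most-starred repositories for a language
    response = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": f"language:{language}", "sort": "stars", "per_page": count},
    )
    response.raise_for_status()
    return response.json()["items"]

def files_with_extension(repo: dict, extension: str) -> list[str]:
    # Walk the repo's default branch tree and keep files matching the extension
    tree = requests.get(
        f"https://api.github.com/repos/{repo['full_name']}/git/trees/{repo['default_branch']}",
        params={"recursive": "1"},
    ).json()
    return [item["path"] for item in tree.get("tree", []) if item["path"].endswith(extension)]

for repo in top_repos("python"):
    print(repo["full_name"], len(files_with_extension(repo, ".py")))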