I’ve continued experimenting with techniques to prompt a language model to solve Connections. At a high level, I set out to design an approach that holds the model to a standard similar to a human player’s, within the restrictions of the game. These standards and guardrails include the following:

  1. The model is only prompted to make one guess at a time
  2. The model is given feedback after each guess including:
    • if the guess was correct or incorrect
    • if 3/4 words were correct
    • if a guess was invalid (including a repeated guess, more or fewer than four words, or hallucinated words that aren’t in the puzzle)
  3. If the model guesses four words that fit in a group, the guess is considered correct, even if the category isn’t correct
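
To make the feedback loop concrete, here’s a minimal sketch of the scoring logic under these guardrails (the function shape and return values are my own illustration, not the exact implementation):

def score_guess(
    guess: set[str],
    solution: dict[str, set[str]],
    prior_guesses: list[set[str]],
) -> str:
    # Guardrails: exactly four words, all from the puzzle, no repeated guesses
    puzzle_words = set().union(*solution.values())
    if len(guess) != 4 or not guess <= puzzle_words or guess in prior_guesses:
        return "invalid"
    for group in solution.values():
        if guess == group:
            return "correct"  # four words that fit a group count, category aside
        if len(guess & group) == 3:
            return "3/4 correct"
    return "incorrect"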

An example

Here is an example conversation between the model and the scorer, as the model attempts to solve the puzzle.

I set out to do a project using what I learned from the first chapter of the fast.ai course. My first idea was to try to train a Ruby/Python classifier. ResNets aren’t designed for this sort of task, but I was curious how well one would perform.

Classifying images of source code by language

My plan was to download a bunch of source code from GitHub, sort it by language, then convert it to images with Carbon. After working through some GitHub rate-limiting issues, I eventually had a list of the top 10 repositories for several different languages. From here, I created a list of files in these repos, filtering by the file extension of the programming language I wanted to download.
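
As a rough sketch of the file-listing step (the repo, branch, and extension here are placeholders; GitHub’s git/trees endpoint returns a repo’s full file tree in one call):

import requests

# Hypothetical example: Ruby files from one repo. In practice I’d loop over
# the top 10 repositories for each language.
owner, repo, ext = "rails", "rails", ".rb"

# Fetch the full file tree for the branch (assumed here to be "main")
tree = requests.get(
    f"https://api.github.com/repos/{owner}/{repo}/git/trees/main",
    params={"recursive": "1"},
).json()

# Keep only source files with the extension for the target language
paths = [
    item["path"]
    for item in tree["tree"]
    if item["type"] == "blob" and item["path"].endswith(ext)
]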

I’ve enjoyed using fasthtml to deploy small, easily hosted webpages for little apps I’ve been building. I’m still getting used to it, but it takes almost no effort at all to deploy. Recently, I built an app that would benefit from having a loading spinner upon submitting a form, but I couldn’t quite figure out how I would do that with htmx in FastHTML, so I built a small project to experiment with various approaches. This is what I came up with:
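One approach uses htmx’s hx-indicator attribute, which reveals an element with the htmx-indicator class while a request is in flight. A minimal sketch (the routes, ids, and delay are illustrative):

import time

from fasthtml.common import *

app, rt = fast_app()

@rt("/")
def get():
    return Titled(
        "Spinner demo",
        Form(
            Input(name="text", placeholder="Type something"),
            Button("Submit"),
            # htmx hides .htmx-indicator elements until a request is in flight
            Div("Loading...", id="spinner", cls="htmx-indicator"),
            hx_post="/submit", hx_target="#result", hx_indicator="#spinner",
        ),
        Div(id="result"),
    )

@rt("/submit")
def post(text: str):
    time.sleep(2)  # simulate slow work so the spinner is visible
    return Div(f"You said: {text}", id="result")

serve()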

I revisited Eugene’s excellent work, “Prompting Fundamentals and How to Apply Them Effectively”. From this I learned about the ability to prefill Claude’s responses. Using this technique, you can quickly get Claude to output JSON without any negotiation and avoid issues with leading codefences (e.g. ```json).

While JSON isn’t as good an example as XML, whose closing tags end less ambiguously, here’s a quick script showing the concept:

import anthropic


message = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": """<status>Today is Tuesday, September 3rd, 2024 at 8:46pm ET, in New York, NY</status>
Extract the <day_of_week>, <month>, <day>, <year> and <location> from the <status> as JSON.
""",
        },
        # Prefill the assistant's turn with "{" so the model continues the JSON
        # object directly, with no preamble or codefence.
        {"role": "assistant", "content": "{"},
    ],
    # Stop generating as soon as the model closes the object.
    stop_sequences=["}"],
)
print(message.content[0].text)

The script outputs the body of the JSON object. Note that neither the prefilled { nor the } stop sequence is included in message.content, so you’d add the braces back if you need the complete object.

One challenge I’ve continued to have is figuring out how to use the models on Huggingface. There are usually Python snippets to “run” the models, but these often seem to require GPUs and always seem to run into some sort of issue when I try to install the various Python dependencies. Today, I learned how to run model inference on a Mac with an M-series chip using llama-cpp and a gguf file built from safetensors files on Huggingface.
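
For reference, here’s roughly what the inference side looks like with the llama-cpp-python bindings, assuming you’ve already converted the safetensors files to gguf (for example, with llama.cpp’s convert_hf_to_gguf.py script). The model path and prompt are placeholders:

from llama_cpp import Llama

# Placeholder path to a gguf built from the Huggingface safetensors files
llm = Llama(model_path="models/model.gguf", n_ctx=2048)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])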

I’ve been experimenting with FastHTML for making quick demo apps, often involving language models. It’s a pretty simple but powerful framework, which allows me to deploy a client and server in a single main.py – something I appreciate a lot for little projects I want to ship quickly. I currently use it the way you might use Streamlit.

I ran into an issue where I was struggling to submit a form with multiple images.
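
For forms like this, the pieces that matter are the multipart encoding and reading the file list off the request. Here’s a minimal sketch (the route and field names are illustrative):

from fasthtml.common import *

app, rt = fast_app()

@rt("/")
def get():
    return Titled(
        "Upload demo",
        Form(
            # multiple=True lets the user pick several images at once
            Input(type="file", name="images", multiple=True, accept="image/*"),
            Button("Submit"),
            # htmx needs to be told to send the files as multipart/form-data
            hx_post="/upload", hx_target="#result",
            hx_encoding="multipart/form-data",
        ),
        Div(id="result"),
    )

@rt("/upload")
async def post(req):
    form = await req.form()
    images = form.getlist("images")  # one UploadFile per selected image
    return Div(f"Received {len(images)} images", id="result")

serve()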

I spent a bit of time configuring WezTerm to my liking. This exercise was similar to rebuilding my iTerm setup in Alacritty. I found WezTerm to be more accessible, and I strongly appreciated the built-in terminal multiplexing because I don’t like using tmux.

I configured WezTerm to provide the following experience. Getting this working probably took me 30 minutes spread across a few sessions as I noticed things I was missing.

  • Monokai-like theme
  • Horizontal and vertical pane splitting
  • Dimmed inactive panes
  • Steady cursor
  • Immediate pane closing with confirmation if something is still running
  • Pane full screening
  • Command+arrow navigation between panes
  • Command+option+arrow navigation between tabs
  • Moving between words in the command prompt with option-arrow
  • Hotkey to clear terminal

What went well

I found achieving these configurations to be much easier in WezTerm than in Alacritty, or at least it took me less time. The blend of native UI with dotfile-style configurable settings hits a sweet spot for my preferences as well, and I haven’t even scratched the surface of scripting things with Lua.

I’ve done some experimentation extracting structured data from documents using VLMs. A summary of one approach I’ve tried can be found in my repo, impulse. I’ve found using Protobufs to be a relatively effective approach for extracting values from documents. The high-level idea is that you write a Protobuf as your target data model, then use that Protobuf itself as most of the prompt (I really need a name for this technique, as I reference the concept so frequently). I discussed the approach in more detail in this post, so I am going to jump right into it.
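
As a minimal sketch of the idea (the schema, prompt wording, and document are invented for illustration, and a text snippet stands in for what would be an image in the VLM setup):

import anthropic

# Hypothetical target data model; the .proto text doubles as the schema in the prompt
INVOICE_PROTO = """
message Invoice {
  string vendor_name = 1;
  string invoice_date = 2;  // YYYY-MM-DD
  int64 total_cents = 3;
}
"""

prompt = f"""Extract an Invoice from the document below.
Respond with JSON whose keys and types match this Protobuf:
{INVOICE_PROTO}
<document>
ACME Corp. Invoice dated September 3, 2024. Total due: $41.50.
</document>"""

message = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": prompt},
        # Prefill plus stop sequence, per the earlier post on prefilling
        {"role": "assistant", "content": "{"},
    ],
    stop_sequences=["}"],
)
print("{" + message.content[0].text + "}")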

I’ve been prompting models to output JSON for about as long as I’ve been using models. Since text-davinci-003, getting valid JSON out of OpenAI’s models didn’t seem like that big of a challenge, but maybe I wasn’t seeing the long tails of misbehavior because I hadn’t massively scaled up a use case. As adoption has picked up, OpenAI has released features to make it easier to get JSON output from a model. Here are three examples, using structured outputs, function calling, and just prompting, respectively.
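
Here’s a condensed sketch of the three approaches using the openai Python SDK; the model names and the Person schema are placeholders:

from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

# 1. Structured outputs: pass a Pydantic model and get schema-valid, parsed JSON back
class Person(BaseModel):
    name: str
    age: int

parsed = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Alice is 30 years old."}],
    response_format=Person,
)
print(parsed.choices[0].message.parsed)

# 2. Function calling: the model "calls" a function whose arguments are JSON
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Alice is 30 years old."}],
    tools=[{
        "type": "function",
        "function": {
            "name": "record_person",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
                "required": ["name", "age"],
            },
        },
    }],
    tool_choice={"type": "function", "function": {"name": "record_person"}},
)
print(completion.choices[0].message.tool_calls[0].function.arguments)

# 3. Just prompting: ask for JSON and parse whatever comes back
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": 'Alice is 30 years old. Reply with only JSON like {"name": ..., "age": ...}.',
    }],
)
print(completion.choices[0].message.content)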

In light of OpenAI releasing structured output in the model API, let’s move output structuring another level up the stack to the microservice/RPC level.

A light intro to Protobufs

Many services (mostly in microservice land) use Protocol Buffers (protobufs) to establish contracts for what data an RPC requires and what it will return. If you’re completely unfamiliar with protobufs, you can read up on them here.

Here is an example of a message that a protobuf service might return.
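
A hypothetical example (the service and field names are invented for illustration):

message GetUserResponse {
  string user_id = 1;
  string display_name = 2;
  string email = 3;
  int64 created_at_unix = 4;
}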