I downloaded Warp today. I’ve been using iTerm2 for years and it’s worked well for me, but Warp came recommended, so I figured I should be willing to give something different a chance. Warp looks like a pretty standard terminal except that you need to sign in, as with most things SaaS these days. It looks like the beta is free, but there is a paid version for teams. Warp makes “workflows” first-class citizens of the editor experience. These occupy the left sidebar where files typically live in a text editor. At first pass, workflows seem like aliases where the whole “formula” is visible in the terminal window when you invoke them, rather than requiring you to memorize your alias/function and its arguments. Additionally, typing workflows: or w: in the prompt opens a workflow picker with fuzzy search and a preview of what the workflow runs. Warp comes with window splitting (like tmux) by default, and somehow it was already using my personal hotkeys. I’m not sure if this is a lucky coincidence or if it somehow loaded my iTerm2 settings. By default, the PS1 is

Nix Language

To broaden my knowledge of nix, I’m working through an Overview of the Nix Language.

Most of the data types and structures are relatively self-explanatory in the context of modern programming languages.

Double single quotes strip leading spaces (trailing spaces are preserved).

''  s  '' == "s  "

Functions are a bit unexpected visually, but simple enough with an accompanying explanation. For example, the following is a named function f with two arguments x and y.

f = x: y: x*y

To call the function, write f 1 4. Because functions like this are curried, calling the function with only a single argument returns a partially applied function.
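
For instance, in a nix repl session (my own sketch of the idea):

nix-repl> f = x: y: x * y
nix-repl> double = f 2
nix-repl> double 4
8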

Zero to Nix

I started working through the Zero to Nix guide. This is a light introduction that touches on a few of the command line tools that come with nix and how they can be used to build local and remote projects and enter developer environments. While many of the examples cover high-level concepts you’d apply when developing with nix, flake templates are one thing I could imagine returning to often; a sketch follows.
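
Scaffolding a project from a template looks roughly like this, where the first command lists what the registry’s templates flake provides (the template name here is illustrative):

nix flake show templates
nix flake init --template "templates#c-hello"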

Go introduced modules several years ago as part of its dependency management system. My Hugo site is still using git submodules to manage its theme. I attempted to migrate to Go modules but eventually ran into a snag when trying to deploy the site.

To start, remove the submodule

git submodule deinit --all

and then remove the themes folder

git rm -r themes

To finish the cleanup, remove the theme key from config.toml.
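
The replacement is Hugo’s module system. A sketch of the steps, with a hypothetical module path and the Ananke theme standing in as an example:

hugo mod init github.com/example/my-site

Then import the theme in config.toml:

[module]
[[module.imports]]
path = "github.com/theNewDynamic/gohugo-theme-ananke"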

The threading macro in Clojure provides a more readable way to compose functions together. It’s a bit like a Bash pipeline. The following function takes a string, splits it on a :, takes the second element of the result, and trims the surrounding whitespace. The thread-first macro, denoted by ->, passes the threaded value as the first argument to each of the functions.

(require '[clojure.string :as str])

(defn my-fn
  [s]
  (-> s
      (str/split #":") ;; split on ":"
      second           ;; take the second element
      str/trim))       ;; trim surrounding whitespace
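
Calling it (my own check):

(my-fn "name: Alice") ;; => "Alice"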

There is another threading macro, denoted by ->>, which passes the threaded value as the last argument to the functions. For example, a quick sketch of my own that sums the squares of the even numbers below ten:
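
(->> (range 10)
     (filter even?) ;; keep the even numbers
     (map #(* % %)) ;; square each one
     (reduce +))    ;; sum them
;; => 120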

I was interested to learn more about the developer experience of Cloudflare’s D1 serverless SQL database offering. I started with this tutorial. Using wrangler, you can scaffold a Worker and create a D1 database. The docs were straightforward up until the “Write queries within your Worker” section. For me, wrangler scaffolded a Worker with a different structure than the one the docs discuss. I was able to progress through the rest of the tutorial by doing the following:

I tried out jsonformer to see how it would perform with some of the structured data use cases I’ve been exploring.

Setup

python -m venv env
. env/bin/activate
pip install jsonformer transformers torch

Code

⚠️ Running this code will download 10+ GB of model weights ⚠️

import json

from jsonformer import Jsonformer
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")

json_schema = {
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "RestaurantReview",
  "type": "object",
  "properties": {
    "review": {
      "type": "string"
    },
    "sentiment": {
      "type": "string",
      "enum": ["UNKNOWN", "POSITIVE", "MILDLY_POSITIVE", "NEGATIVE", "MILDLY_NEGATIVE"]
    },
    "likes": {
      "type": "array",
      "items": {
        "type": "string"
      }
    },
    "dislikes": {
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  },
  "required": ["review", "sentiment"]
}
prompt = """From the provided restaurant review, respond with JSON adhering to the schema.
Use content from the review only.
Review:
Amazing food, I like their brisket sandwiches! Also, they give you a lot of sides! Excited to come again.
Response:
"""
jsonformer = Jsonformer(model, tokenizer, json_schema, prompt)
generated_data = jsonformer()

print(json.dumps(generated_data, indent=2))

Results

(env) ~/ time python run_review.py
{
  "review": "Amazing food, I like their brisket sandwiches",
  "sentiment": "POSITIVE",
  "likes": [
    "They give you a lot of sides!"
  ],
  "dislikes": [
    "I'm not a fan of the rice"
  ]
}
150.52s user 98.48s system 104% cpu 3:57.68 total

(env) ~/ time python run_review.py
{
  "review": "Amazing food, I like their brisket sandwiches",
  "sentiment": "POSITIVE",
  "likes": [
    "Excited to come again"
  ],
  "dislikes": [
    "Their sandwiches are too expensive"
  ]
}
141.12s user 92.58s system 109% cpu 3:34.12 total

(env) ~/ time python run_review.py
{
  "review": "Amazing food, I like their brisket sandwiches",
  "sentiment": "POSITIVE",
  "likes": [
    "Excited to come again"
  ],
  "dislikes": [
    "They give you a lot of sides"
  ]
}
148.66s user 96.66s system 106% cpu 3:50.38 total

Takeaways

jsonformer has a nice API for mandating structured output from a language model. The quality of the output from dolly isn’t the best: there are hallucinations, and only a single like and a single dislike are generated for each completion. It would be nice if it supported more than just JSON schemas. It also runs quite slowly on an M1 MacBook Pro. This library could become much more compelling if OpenAI support were added.

I’ve been keeping an eye out for language models that can run locally so that I can use them on personal data sets for tasks like summarization and knowledge retrieval without sending all my data up to someone else’s cloud. Anthony sent me a link to a Twitter thread about a product called deepsparse by Neural Magic that claims to offer

[a]n inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application