Richard WM Jones booted Linux 292,612 times to find a bug where it hangs on boot. I loved reading his recounting of the process: bisecting across versions of Linux and booting each candidate thousands of times to determine whether that version contained the bug.


Georgi Gerganov started a company, ggml.ai, to run language models on commodity hardware.

I was listening to a podcast interview with Adam Robinson in which he discussed why he believes it is important to process information with your body. He gives the example that when listening to something, he stops about once a minute and checks how it feels. He later highlights the importance of recognizing when something is “weird” and paying attention to it, of trusting one’s intuition even when we can’t rationally explain why something feels unusual. He asserts that we should take action on these intuitions: “if something seems a little bit off, it’s very off”. I could see this practice contributing to more well-rounded cognition, since it’s easy to discard these “intuitions” as unsubstantiated or lacking facts to justify them. On the other hand, unless you track how often your intuition proves correct, it’s hard to know whether it’s well calibrated or trustworthy in the area where you’re applying it.

I read an article today about Scripting with Elixir that caught my eye because it touches on a problem I often struggle with: how do you easily distribute a script along with its dependencies in Python? Elixir has an example of what this could look like in the form of Mix.install. This feature lets you distribute just the source code of your script; dependency resolution happens when the script runs, without needing to ship a mix.exs (or, in Python’s case, a requirements.txt) alongside the source.
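For contrast, here’s a rough sketch of what a Mix.install-style experience might look like in a single-file Python script. To be clear, this isn’t a standard pattern or a real library’s API: the ensure helper and the decision to shell out to pip at run time are my own assumptions, just to illustrate the shape of the idea.

#!/usr/bin/env python3
# Hypothetical Mix.install-style bootstrap: declare dependencies at the top
# of the script and install whatever is missing when the script runs.
import importlib
import subprocess
import sys

def ensure(packages: dict[str, str]) -> None:
    """Install any of `packages` (import name -> pip spec) that aren't importable."""
    missing = []
    for import_name, pip_spec in packages.items():
        try:
            importlib.import_module(import_name)
        except ImportError:
            missing.append(pip_spec)
    if missing:
        subprocess.check_call([sys.executable, "-m", "pip", "install", *missing])

# Roughly analogous to Mix.install([{:req, "~> 0.3"}]) at the top of an Elixir script
ensure({"requests": "requests>=2.31"})

import requests  # safe to import now that ensure() has run
print(requests.get("https://example.com").status_code)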

Today, I played around with Matt Rickard’s ReLLM library, another take on constraining LLM output, in this case with regex. I tried to use it to steer a language model to generate structured output (JSON) from unstructured input. This exercise is a bit like parsing or validating JSON with regex: not a good idea, since regular expressions complicated enough to describe JSON are hard to write. I do love the demo that generates acronyms, though. Matt also wrote parserLLM, which provides the ability to use a context-free grammar to constrain the next token predicted by the language model. I prefer the context-free grammar approach at a high level, but I believe the constraints need to be directly connected to the data structures we intend to use in code if we’re going to effectively weave language models into our existing applications.
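To make the core trick concrete, here’s a minimal sketch of regex-constrained generation as I understand it (my own illustration, not ReLLM’s actual API): before each sampling step, filter the vocabulary down to tokens that keep the output a viable prefix of the target pattern. It relies on the third-party regex module, which supports partial matching where the stdlib re does not; the vocabulary here is made up.

import regex  # third-party `regex` module; stdlib `re` lacks partial matching

def allowed_tokens(pattern: str, generated: str, vocab: list[str]) -> list[str]:
    """Return the vocab entries that keep `generated` extendable into a full match."""
    allowed = []
    for token in vocab:
        candidate = generated + token
        # partial=True also accepts strings that could still grow into a match
        if regex.fullmatch(pattern, candidate, partial=True):
            allowed.append(token)
    return allowed

# Toy example: constrain output to a tiny JSON object with one string field
pattern = r'\{"name": "[a-z]+"\}'
print(allowed_tokens(pattern, '{"name": "jo', ['hn"}', 'hn', '42', '"}']))
# -> ['hn"}', 'hn', '"}']  ('42' can never lead to a match, so it's masked out)

A real implementation would apply this as a logits mask over the model’s entire vocabulary at every decoding step, rather than concatenating candidate strings one by one.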

I’ve been trying to find a simple way to host a website that formats and serves a nice-looking version of a recipe written in markdown. There are a few open source projects available, but nothing has fit the bill yet. I briefly had the idea to try something with Next.js and MDX, but when I scaffolded a new app, I didn’t even recognize the project structure. Next.js has moved to the “App Router” approach for structuring projects, and it’s not immediately obvious or intuitive how it works. For a batteries-included framework, it makes sense that different approaches to structure will have their own learning curves. Nevertheless, it’s a bit irritating how frequently this structure changes: I have a two- to three-year-old Next.js project that looks nothing like the project I’m currently working on, and that project looks very different from an app newly scaffolded with App Router today.
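For my own reference, this is the basic shape of an App Router project as I currently understand the convention (the recipes route is my hypothetical use case): routing is driven by the filesystem under app/, with each page.tsx as a leaf.

app/
  layout.tsx        # root layout, wraps every route
  page.tsx          # rendered at /
  recipes/
    [slug]/
      page.tsx      # rendered at /recipes/:slug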

I did a bit more work with Clojure today. My imperative programming habits are still bleeding through. The exercise introduced cond as a sort of case statement for flow control. I wrote a simple cond statement but was getting a bizarre runtime error:

(defn my-fn
  [x]
  (cond
    (x < 0) "negative"
    (x = 0) "zero"
    (x > 0) "positive"
  )
)
user=> (my-fn 1)
Execution error (ClassCastException) at user/my-fn (REPL:4).
class java.lang.Long cannot be cast to class clojure.lang.IFn (java.lang.Long is in module java.base of loader 'bootstrap'; clojure.lang.IFn is in unnamed module of loader 'app')

It took me a frustratingly long time to realize I needed to use prefix notation for the conditions in the cond.
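For the record, the corrected version, with the operator in prefix position:

(defn my-fn
  [x]
  (cond
    (< x 0) "negative"
    (= x 0) "zero"
    (> x 0) "positive"))

user=> (my-fn 1)
"positive"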

I’ve been following the work Exercism has been doing for several years now; I used their product to learn a bit of Elixir and Go a few years back. I got an email from them today promoting a focus on functional programming languages during the month of June. I decided to learn a bit of Clojure, since I’ve been working with the JVM lately. I’ve done a few of the exercises, and my takeaways so far are:

Logan says any changes to the model would have been communicated. Still, it seems some folks have data that show the model’s degradation. As competition emerges in the space, it could be a problem for OpenAI if they lose user trust around model versioning and evolution.


Tried to set up Falcon 40B. Used the provided example code and downloaded about 90GB of weights. Ran the Python code and it failed. Searched the error and found many others seeing the same thing in the Hugging Face forum. Eventually got the program to run in some form, but it maxed out the MacBook’s memory at about 90GB (then went to swap) and the process crashed. I wonder if a Mac can be tuned to make this work.
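If I revisit this, I’d probably try Transformers’ accelerate-backed loading options before anything more exotic. A sketch of what I have in mind; the kwargs are real Transformers options, but whether this combination actually behaves on a Mac is exactly the open question:

# Requires the `accelerate` package for device_map/offloading support
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-40b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half the memory of float32 weights
    device_map="auto",           # let accelerate place layers where they fit
    offload_folder="offload",    # spill layers that don't fit to disk
    trust_remote_code=True,      # Falcon shipped custom modeling code at release
)

inputs = tokenizer("The capital of France is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))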

A number of folks are reporting that gpt-4 appears to be performing less impressively as of late (additional conversation). I was using gpt-4 to write code earlier today and, anecdotally, can say it seems less effective at code generation. It still writes working code, but the test cases it provides aren’t always correct, and it seems less “expert level” than I recall it being initially.

I’ve been following Eric’s posts about SudoLang since the first installment back in March. I’ve skimmed through the spec, and the value proposition is quite compelling: SudoLang seeks to allow programmers of all levels to instruct LLMs, and programs can also be transpiled into your programming language of choice. While I’m still early in my understanding of how to use this technique, it’s one I’m following closely and continuing to experiment with.