A low-effort quality-of-life improvement for oncall has been starting a week-long shift on a Friday instead of a Monday. Beginning a weekend with oncall isn’t the best, but it’s more than offset by how good it feels to finish the week and oncall at the same time next Friday.

LMQL is a programming language for interacting with LMs. It takes a declarative, SQL-flavored approach to specifying output constraints for a language model.
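
For reference, a minimal sketch of what a query looks like, using LMQL's classic decoder/from/where form embedded in Python via the `@lmql.query` decorator (the model name and constraints are illustrative):

```python
import lmql

@lmql.query
async def capital():
    '''
    argmax
        "Q: What is the capital of France? A: [ANSWER]"
    from
        "openai/text-davinci-003"
    where
        STOPS_AT(ANSWER, ".") and len(ANSWER) < 40
    '''

# result = await capital()  # runs the query against the model
```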


Microsoft created a project called guidance, an LLM-agnostic language to “interleave generation, prompting, and logical control into a single continuous flow matching how the language model actually processes the text”. It’s based on Handlebars templates and provides in-template notation for system and user messages.
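
A hedged sketch of that in-template notation against the Handlebars-based API; the model choice and template contents are illustrative:

```python
import guidance

# point guidance at a chat model (assumes an OpenAI API key is configured)
guidance.llm = guidance.llms.OpenAI("gpt-3.5-turbo")

# Handlebars-style template with explicit system/user/assistant sections
program = guidance("""
{{#system~}}
You are a terse assistant.
{{~/system}}

{{#user~}}
Name three primary colors.
{{~/user}}

{{#assistant~}}
{{gen 'colors' temperature=0 max_tokens=30}}
{{~/assistant}}
""")

result = program()
print(result["colors"])  # the generated text bound to the 'colors' variable
```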

marvin’s @ai_model decorator implements something similar to what I had in mind for using a language model to extract structured data from an input.
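
A sketch of the shape of it, close to the example in marvin’s docs (the model’s output shown in the comment is illustrative):

```python
from pydantic import BaseModel
from marvin import ai_model

@ai_model
class Location(BaseModel):
    city: str
    state: str

# Calling the decorated class with free text asks the model to fill the fields:
Location("The Big Apple")
# -> e.g. Location(city='New York', state='NY')
```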

They also use a phrase that I like and may adopt for this approach to formatting the output of a language model:

Format … data declaratively

In most of the examples, structured data is extracted from unstructured input. The docs don’t discuss using the schema to add additional context to the provided data. I’m curious to see whether there are use cases for also defining the inputs, or whether the extraction and mapping can all be done with just a target schema.
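
One way that extra context could ride along on the target schema is through field descriptions. A hypothetical sketch, assuming @ai_model passes pydantic Field metadata through to the model (the schema and output here are made up):

```python
from pydantic import BaseModel, Field
from marvin import ai_model

@ai_model
class Invoice(BaseModel):
    # Field descriptions act as extra instructions to the model,
    # context that isn't present in the raw input itself.
    total_cents: int = Field(description="Total amount converted to cents")
    vendor: str = Field(description="Name of the vendor issuing the invoice")

Invoice("Paid Acme Corp $12.50 on June 3rd")
# -> e.g. Invoice(total_cents=1250, vendor='Acme Corp')
```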

Restricting the next predicted token to adhere to a specific context-free grammar seems like a big step forward in weaving language models into applications.
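
A toy sketch of the idea: at each decoding step, mask out any token the grammar disallows before picking the next one. Real implementations (e.g. llama.cpp’s GBNF grammars) apply the mask to the model’s actual logits against its tokenizer; everything below is illustrative.

```python
import math

def allowed_next(prefix: list[str]) -> set[str]:
    # Toy "grammar": the only valid sentence is  { "name" : "Ada" }
    sentence = ["{", '"name"', ":", '"Ada"', "}"]
    return {sentence[len(prefix)]} if len(prefix) < len(sentence) else set()

def constrained_argmax(logits: dict[str, float], prefix: list[str]) -> str:
    # Mask tokens the grammar disallows to -inf, then take the argmax.
    allowed = allowed_next(prefix)
    masked = {tok: (lp if tok in allowed else -math.inf) for tok, lp in logits.items()}
    return max(masked, key=masked.get)

# The model "prefers" free text, but the mask forces grammatical output.
logits = {"hello": 2.0, "{": 0.1, '"name"': 0.1, ":": 0.1, '"Ada"': 0.1, "}": 0.1}
prefix: list[str] = []
while allowed_next(prefix):
    prefix.append(constrained_argmax(logits, prefix))
print(" ".join(prefix))  # { "name" : "Ada" }
```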

Using system prompts provides an intuitive way to separate the input and output schemas from the input content.
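
A sketch with the openai Python client: the output schema lives in the system message, while the user message carries only the content to extract from (the schema and model choice are illustrative):

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": 'Reply only with JSON matching {"city": string, "state": string}.',
        },
        {"role": "user", "content": "I grew up just outside of Boston."},
    ],
)
print(response.choices[0].message.content)
```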

Using system prompts does not effectively guard against prompt injection.

With the support of GPT-4, I feel unstoppable. The overnight surge in productivity is intoxicating, not for making money or starting a business, but for the sheer joy of continuously creating ideas from my mind, which feels like happiness.

- Ke Fang