Brex wrote a nice beginner guide on prompt engineering.
A low-effort quality-of-life improvement for oncall has been starting a week-long shift on a Friday instead of a Monday. Starting a weekend on call isn’t ideal, but it’s more than offset by how good it feels to finish the work week and the oncall shift at the same time the following Friday.
LMQL is a SQL-like programming language for interacting with LMs. It takes a declarative approach to specifying output constraints for a language model.
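Here’s a minimal sketch of an LMQL query in the syntax the project used around this time; the prompt, model name, and constraints are illustrative rather than taken from their docs verbatim:

```
argmax
    "Q: What is the capital of France?\n"
    "A: [ANSWER]"
from
    "openai/text-davinci-003"
where
    len(ANSWER) < 20 and STOPS_AT(ANSWER, "\n")
```

The `where` clause is the declarative part: the runtime enforces the constraints during decoding rather than checking the output after the fact.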
Microsoft created a project called guidance
which is an LLM-agnostic language to “interleave generation, prompting, and logical control into a single continuous flow matching how the language model actually processes the text”.
It’s based on Handlebars templates and provides in-template notation for system and user messages.
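A sketch of what that looks like, assuming the mid-2023 guidance API (the library has since been rewritten, so the exact handles and helpers may differ):

```python
import guidance

# Assumes the mid-2023 guidance API and an OPENAI_API_KEY in the environment.
guidance.llm = guidance.llms.OpenAI("gpt-3.5-turbo")

# System, user, and assistant messages are written directly in the
# Handlebars template; {{gen}} marks where the model generates text.
program = guidance("""
{{#system~}}
You are a terse assistant.
{{~/system}}
{{#user~}}
Name one primary color.
{{~/user}}
{{#assistant~}}
{{gen 'color' max_tokens=8}}
{{~/assistant}}
""")

result = program()
print(result["color"])
```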
marvin’s @ai_model decorator implements something similar to what I had in mind for extracting structured data from an input to a language model.
They also use a phrase that I like and may adopt for this approach to formatting the output of a language model:
Format … data declaratively
In most of the examples, structured data is extracted from unstructured input. The docs don’t discuss using a schema to add additional context to the provided data. I’m curious to see if there are use cases for also defining the inputs or if the extraction and mapping can all be done with just a target schema.
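A sketch of the decorator in use, closely following the pattern in marvin’s docs at the time (the field names and output are illustrative):

```python
from pydantic import BaseModel
from marvin import ai_model

@ai_model
class Location(BaseModel):
    city: str
    state_abbreviation: str

# The decorator turns construction into an extraction call:
# unstructured text in, a populated pydantic model out.
Location("The Big Apple")
# e.g. Location(city='New York', state_abbreviation='NY')
```

The target schema alone drives the extraction; there’s no separate input schema.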
Added arbitrary context free grammar constraints to llama.cpp

Can now plug in any llama.cpp compatible model and give an exact grammar spec: JSON, etc

Excited to use with more powerful local models as they are released

Thanks @ggerganov & friends for such a wonderful project.

— Grant Slatton (@GrantSlatton) May 14, 2023
Restricting the next predicted token to adhere to a specific context-free grammar seems like a big step forward in weaving language models into applications.
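As a sketch of what a grammar spec looks like, here’s a tiny grammar in llama.cpp’s GBNF format that restricts output to a yes/no answer (the file name and prompt are made up; check the repo for the current CLI flags):

```
# answer.gbnf: sampling can only produce strings derivable from root
root ::= "yes" | "no"
```

Passed to the CLI with something like `--grammar-file answer.gbnf`, the sampler masks out every token that would take the output outside the grammar, so the model can only ever emit one of the two strings.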
Using system prompts provides an intuitive way to separate the input and output schemas from the input content.
Using system prompts does not effectively guard against prompt injection.
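A sketch of that separation with the OpenAI chat API of the time (pre-1.0 openai-python): the schema and instructions go in the system message, the untrusted content goes in the user message.

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            # Schema and instructions live here, apart from the content.
            "content": 'Extract JSON matching {"city": string, "state": string} from the user message.',
        },
        {
            "role": "user",
            # Untrusted input goes here. A malicious message can still
            # override the system instructions (prompt injection).
            "content": "The Big Apple",
        },
    ],
)
print(response.choices[0].message.content)
```

The separation keeps prompts organized, but as the second point notes, it isn’t a security boundary.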
With the support of GPT-4, I feel unstoppable. The overnight surge in productivity is intoxicating, not for making money or starting a business, but for the sheer joy of continuously creating ideas from my mind, which feels like happiness.
— Ke Fang
I warn you now, this is going to have unfortunate consequences, just as switching to living in suburbia and driving everywhere did. When you lose the ability to write, you also lose some of your ability to think.
— Paul Graham (@paulg) May 9, 2023
I wrote a few paragraphs disagreeing with Paul’s take, asserting that, as Simon suggests, we should think of language models like ChatGPT as a “calculator for words”.