Imagine we have a query in an application that has become slow under load. We have several options to remedy this issue. If we settle on using a cache, we should consider the following failure domains when designing the architecture, to determine whether a cache actually is a good fit for the use case.

Motivations for using a cache

When the cache is available and populated, it will remove load from the database.

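As a minimal sketch of that motivation, here is the cache-aside pattern in Python. The `query_database` function and the in-memory store (standing in for something like Redis) are assumptions for illustration:

```python
import time

# In-memory stand-in for a real cache such as Redis or memcached.
_cache: dict = {}
TTL_SECONDS = 60


def query_database(key: str) -> str:
    # Hypothetical placeholder for the slow query we want to shield.
    return f"result for {key}"


def get(key: str) -> str:
    entry = _cache.get(key)
    if entry is not None:
        expires_at, value = entry
        if time.monotonic() < expires_at:
            # Cache hit: the database sees no load at all.
            return value
    # Cache miss (or expired entry): fall through to the database,
    # then populate the cache for subsequent readers.
    value = query_database(key)
    _cache[key] = (time.monotonic() + TTL_SECONDS, value)
    return value
```

Note that when the cache is unavailable or cold, every `get` degrades to a direct database query, which is exactly the failure domain worth designing for.
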
I've written several posts on using JSON and Pydantic schemas to structure LLM responses. Recently, I've done some work using a similar approach with protobuf message schemas as the data contract. Here's an example to show what that looks like.

Example

Imagine we have the following questionnaire that we send out to new employees when they join our company so their teammates can get to know them better.

What are your hobbies or interests outside of work?

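As a sketch of the protobuf-as-contract idea: the message definition, its field names, the `questionnaire_pb2` module (which `protoc` would generate from the schema), and the `call_llm` helper are all assumptions for illustration, not the post's actual schema.

```python
from google.protobuf import json_format

# Hypothetical module generated by `protoc` from the schema below.
import questionnaire_pb2

# The .proto message doubles as the data contract we show the model.
PROTO_SCHEMA = """
message QuestionnaireResponse {
  string employee_name = 1;
  repeated string hobbies = 2;
}
"""


def call_llm(prompt: str) -> str:
    # Placeholder for whatever completion API is in use; a real call
    # would send `prompt` to the model and return its response.
    return '{"employee_name": "Ada", "hobbies": ["climbing", "chess"]}'


prompt = (
    "Extract the new employee's answers from the questionnaire below. "
    "Respond with JSON conforming to this protobuf message:\n" + PROTO_SCHEMA
)

# Parsing into a typed message enforces the contract: this raises if
# the model's response doesn't match the schema.
message = json_format.Parse(call_llm(prompt), questionnaire_pb2.QuestionnaireResponse())
print(message.hobbies)
```
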
{ "data": [ 12, "21", true, { "name": "John Doe", "age": 30 } ] } You can argue that this is “bad” structured data, but if you have this data structure, there is little meaning you can derive without additional insight into what the data represents.The most popular language model use cases I’ve seen around have been
The most popular language model use cases I've seen around have been

- chatbots
- agents
- "chat your X" use cases

These use cases are quite cool, but they often stand alone, separate from existing products, or are added on as isolated features.

Expanding production use cases for language models

I've been thinking about what it could look like to naturally embed a call to a language model in code, to flexibly make use of its capabilities in a production application.

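As a sketch of what "naturally embedded" could mean, here the model call sits inside an ordinary function. The support-ticket domain and the `call_llm` helper are assumptions for illustration:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for whatever completion API is in use.
    return "billing"


VALID_LABELS = {"billing", "bug", "feature_request", "other"}


def categorize_ticket(ticket_text: str) -> str:
    """Classify a support ticket, treating the language model as just
    another function call inside the application."""
    prompt = (
        "Classify this support ticket as one of: billing, bug, "
        f"feature_request, other. Respond with the label only.\n\n{ticket_text}"
    )
    label = call_llm(prompt).strip().lower()
    # Validate the model's output like any other untrusted input.
    return label if label in VALID_LABELS else "other"
```

The caller doesn't know or care that a language model is involved; it just gets a label back.
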
It's necessary to pay attention to the shape of a language model's response when incorporating it as a component in a software application. You can't programmatically tap into the power of a language model if you can't reliably parse its response. In the past, I have mostly used a combination of prose and examples to define the shape of the language model response in my prompts. Something like:

Respond using JSON with the following shape:

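A hypothetical version of that prose-and-example style, along with the parse step the whole approach depends on (the shape itself is invented here, as is `call_llm`):

```python
import json

PROMPT = """Respond using JSON with the following shape:

{
  "summary": "<one sentence summary>",
  "topics": ["<topic>", "<topic>"]
}
"""


def call_llm(prompt: str) -> str:
    # Placeholder for whatever completion API is in use.
    return '{"summary": "An example.", "topics": ["testing"]}'


# Everything downstream depends on this parse succeeding, and prose
# plus examples alone can't guarantee that it will.
response = json.loads(call_llm(PROMPT))
```
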
Auto-GPT is a popular project on GitHub that attempts to build an autonomous agent on top of an LLM. This is not my first time using Auto-GPT. I used it shortly after it was released and gave it a second try a week or two later, which makes this my third zero-to-running effort.

I installed it

```sh
python -m venv env
. env/bin/activate
pip install -r requirements.txt
```

and then ran it

I believe that language models are most useful when available at your fingertips, in the context of what you're doing. GitHub Copilot is a well-known application that applies language models in this manner. There is no need to pre-prompt the model. It knows you're writing code and that you're going to ask it to write code for you based on the contents of your comment. GitHub is further extending this idea with Copilot for CLI, which looks promising but isn't generally available yet.

Over the years, I've developed a system for capturing knowledge that has been useful to me. The idea behind this practice is to provide immediate access to useful snippets and learnings, often with examples. I'll store things like:

Amend commit message
```sh
git commit --amend -m "New commit message"
```

with tags like #git, #commit, and #amend, after I searched Google for “how to amend a git commit message”. With the knowledge available locally and within my own file system, I've added search capabilities on top, which allow me to quickly find this snippet any time I need it.

I know a little about nix. Not a lot. I know some things about Python virtual environments and asdf, and a few things about package managers. I've heard from a number of engineers I trust that the combo of direnv and nix is fantastic, but I haven't had the chance to figure out what these tools can really do. I could totally ask someone what their setup is, but I decided to ask ChatGPT (gpt-4) to see what would happen.

I came upon https://gpa.43z.one today. It's a GPT-flavored capture the flag. The idea is, given a prompt containing a secret, convince the LM to leak the prompt against the prior instructions it's been given. It's a cool way to develop intuition for how to prompt and steer LMs. I managed to complete all 21 levels. A few were quite easy and several were challenging. I haven't figured out how to minimize the number of characters needed to expose the secret.