NVIDIA researchers introduce an LLM-based agent with “lifelong learning” capabilities that can navigate, discover, and accomplish goals in Minecraft without human intervention.
The Alexandria Index is building embeddings for large, public datasets to make them more searchable and accessible.
That people produce HTML with string templates is telling us something.
I think about this phenomenon often, though I personally find most string template systems that produce HTML difficult to use: Django templates, Handlebars, Rails ERB, and Hugo templates, just to name a few.
My experience has been that these systems are difficult to debug and are practically full programming languages in their own right.
I think React, paired with the abilities of TypeScript/JavaScript, finds the sweet spot for the challenges these other systems run into (maybe the braces-in-JSX syntax notwithstanding).
React can still be difficult to debug, but it feels much more like you’re writing the thing that will be rendered rather than an abstraction on top of it (yes, React is probably a higher-level abstraction than any of these others, but it’s about the experience over the implementation details).
I’ve seen a lot of “GPT detection” products floating around lately.
Sebastian discusses some of the products and their approaches in this article.
Some products claim to have developed an “algorithm with an accuracy rate of text detection higher than 98%”.
Unfortunately, this same algorithm determined that a GPT-4-generated response to the prompt “write a paragraph in the style of Edgar Allan Poe” was “0% AI GPT”.
In my experience, you don’t need to try very hard to trick “AI-detection” systems.
It seems that adding “in the style of…” to pretty much any prompt can thwart detection approaches.
Even though these products don’t seem to work, there is clearly a market for them, and a crowded one at that, which suggests a real desire for them to work.
From these products’ marketing and news references, it appears folks in education are quite interested in them.
I can’t begin to appreciate the challenges educators must be experiencing as they attempt to adjust to the changes brought on by the accessibility of LLMs.
However, as someone who struggled to learn in the traditional education system, I do hope teachers will pivot their energies to adapting their curricula rather than trying to maintain the status quo.
Brex wrote a nice beginner guide on prompt engineering.
A low-effort quality-of-life improvement for oncall has been starting a week-long shift on a Friday instead of a Monday.
Beginning a weekend with oncall isn’t the best, but it’s more than offset by how good it feels to finish the week and the oncall shift at the same time the following Friday.
LMQL is a programming language for interacting with LMs. It takes a declarative, SQL-flavored approach to specifying output constraints for a language model.
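Here’s a rough sketch of what a query looks like, in the 2023-era syntax from their docs (the prompt, model name, and constraint are just illustrative, and the syntax may have evolved since):

```
argmax
    "Q: What is the capital of France?\n"
    "A: [ANSWER]"
from
    "openai/text-davinci-003"
where
    len(TOKENS(ANSWER)) < 10
```

The `where` clause is the declarative part: the runtime enforces the constraint on `ANSWER` during decoding rather than hoping the model complies.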
Microsoft created a project called guidance, an LLM-agnostic language to “interleave generation, prompting, and logical control into a single continuous flow matching how the language model actually processes the text”. It’s based on Handlebars templates and provides in-template notation for system and user messages.
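A sketch of what that looks like, assuming the Handlebars-style API from guidance’s early releases (the interface may have changed in newer versions):

```python
import guidance

# Assumes the 2023-era guidance API, where programs are
# Handlebars-style templates executed against a configured LLM.
guidance.llm = guidance.llms.OpenAI("gpt-3.5-turbo")

program = guidance("""
{{#system~}}
You are a concise assistant.
{{~/system}}

{{#user~}}
Name one planet in our solar system.
{{~/user}}

{{#assistant~}}
{{gen 'answer' max_tokens=10}}
{{~/assistant}}
""")

# Running the program fills in the {{gen}} slot; generated
# variables are accessible by name on the result.
result = program()
print(result["answer"])
```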
marvin’s @ai_model decorator implements something similar to what I had in mind for extracting structured data from an input to a language model.
They also use a phrase that I like and may adopt for this approach to formatting the output of a language model:
Format … data declaratively
In most of the examples, structured data is extracted from unstructured input.
The docs don’t discuss the use of schema to add additional context to the provided data.
I’m curious to see whether there are use cases for also defining the inputs, or whether the extraction and mapping can all be done with just a target schema.
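Here’s a minimal sketch of the pattern, based on marvin’s published examples at the time (the API surface may have shifted in newer releases):

```python
from pydantic import BaseModel
from marvin import ai_model

# The decorated pydantic model doubles as the target schema;
# calling it with unstructured text asks the LLM to fill the fields.
@ai_model
class Location(BaseModel):
    city: str
    state: str

print(Location("The Big Apple"))
# e.g. Location(city='New York', state='NY')
```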
Restricting the next predicted token to adhere to a specific context-free grammar seems like a big step forward in weaving language models into applications.
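The mechanic is easy to sketch in isolation: at each decoding step, mask out every token the grammar can’t accept, then sample from what’s left. A toy, model-free illustration (the vocabulary, the one-rule grammar, and mock_logits are all made up for the example):

```python
import math
import random

# Toy setup: VALUE -> "true" | "false". Real systems track the parser
# state of a full context-free grammar as decoding proceeds; a single
# step is enough to show the masking idea.
VOCAB = ["true", "false", "null", "{", "}", "hello"]
GRAMMAR_ALLOWED = {"true", "false"}

def mock_logits(vocab):
    """Stand-in for a language model's next-token logits."""
    return {tok: random.gauss(0.0, 1.0) for tok in vocab}

def constrained_sample(vocab, allowed):
    logits = mock_logits(vocab)
    # Grammar mask: disallowed tokens get a -inf logit (probability zero).
    masked = {t: (v if t in allowed else -math.inf) for t, v in logits.items()}
    # Softmax over the surviving tokens, then sample from them.
    z = sum(math.exp(v) for v in masked.values() if v != -math.inf)
    probs = {t: (math.exp(v) / z if v != -math.inf else 0.0)
             for t, v in masked.items()}
    r, acc = random.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok
    return max(probs, key=probs.get)  # fallback for float rounding

print(constrained_sample(VOCAB, GRAMMAR_ALLOWED))  # always "true" or "false"
```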
Using system prompts provides an intuitive way to separate the input and output schema from the input content. It does not, however, effectively guard against prompt injection.
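For example, using the chat-message shape most LLM APIs share (the schema and input here are hypothetical): the schema lives in the system message and the untrusted content in the user message, but nothing enforces that boundary.

```python
# Schema instructions go in the system message; untrusted content goes
# in the user message. The separation is organizational, not a security
# boundary: a hostile user message can still try to override it.
messages = [
    {
        "role": "system",
        "content": (
            "Extract fields from the user's text and reply with JSON "
            'matching {"name": string, "email": string}. '
            "Output JSON only."
        ),
    },
    {
        "role": "user",
        # Untrusted input: nothing stops it from containing
        # "ignore previous instructions"-style injections.
        "content": "Hi, I'm Ada Lovelace, reach me at ada@example.com.",
    },
]
```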
With the support of GPT-4, I feel unstoppable. The overnight surge in productivity is intoxicating, not for making money or starting a business, but for the sheer joy of continuously creating ideas from my mind, which feels like happiness.
- Ke Fang