The Alexandria Index is building embeddings for large, public data sets to make them more searchable and accessible.
That people produce HTML with string templates is telling us something. I think about this phenomenon often, though I personally find most string template systems that produce HTML difficult to use: Django templates, Handlebars, Rails ERB, and Hugo templates, just to name a few. My experience has been that these systems are difficult to debug and are practically their own full programming languages. I think React finds the sweet spot for the challenges these other systems run into, with the abilities of TypeScript/JavaScript behind it (maybe the braces-in-JSX syntax notwithstanding). React can still be difficult to debug, but it feels much more like you’re writing the thing that will be rendered rather than an abstraction on top of it (yes, React is probably a higher-level abstraction than any of these, but it’s about the experience over the implementation details).
Added a full LLM type system and context-free grammar to Python. Includes all typescript types, regexes, and arbitrary grammars. Makes coding programs with LLM calls woven in super easy.
— Grant Slatton (@GrantSlatton) May 21, 2023
Thanks @ggerganov for llama.cpp and @abetlen for llama-cpp-python that enable this. pic.twitter.com/5WyoY3xtsJ
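The tweet describes its own library, but a similar effect is available today through llama-cpp-python’s GBNF grammar support. A minimal sketch of the idea (the model path and the toy grammar below are my own placeholders, not anything from the tweet):

# Sketch: grammar-constrained decoding with llama-cpp-python's GBNF support.
# The model path and this toy grammar are placeholders, not from the tweet.
from llama_cpp import Llama, LlamaGrammar

# A tiny GBNF grammar that only admits a one-field JSON object.
gbnf = r'''
root   ::= "{" ws "\"display\":" ws string ws "}"
string ::= "\"" [a-zA-Z0-9 .-]* "\""
ws     ::= [ \t\n]*
'''

llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf")  # placeholder path
grammar = LlamaGrammar.from_string(gbnf)

out = llm(
    "Describe the iMac's display as JSON:",
    grammar=grammar,   # decoding can only produce strings the grammar accepts
    max_tokens=64,
)
print(out["choices"][0]["text"])

Whatever the underlying mechanism, the point is the same: the model physically cannot emit a response that falls outside the grammar.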
Constraining LLMs to respond in adherence to a specific schema or a context-free grammar continues to be the direction I expect most production use cases will go. Some of the comments mention the irony that we constrain LLMs to respond following a strict grammar, but this is the reality of most production systems. APIs don’t respond to queries with
The iMac computer features a 27-inch Retina 5K display, an 8-core Intel Core i9 processor with up to 3.6GHz turbo boost, up to 128GB of RAM, and up to 8TB of SSD storage.
They respond with structure.
{
"display": "27-inch Retina 5K",
"processor": "8-core Intel Core i9 with up to 3.6GHz turbo boost",
"ram": "Up to 128GB",
"storage": "Up to 8TB SSD"
}
Structure is what it takes (in today’s world at least) to incorporate a new piece of technology into the stack. That is not to say this can’t and won’t change, but to realize the benefits of LLMs today, I believe this is a useful initial approach. We’ll have to trust our AI systems to behave consistently a lot more than we do today before we’ll be ready to use natural language to communicate between our systems. Even with more consistent behavior, natural language will always be more ambiguous than context-free grammars and schemas. The latter is what you want for predictably behaving systems.
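Structure is also what makes the response verifiable on the consuming side: you can validate it against a schema before anything downstream touches it. A rough sketch with Python’s jsonschema library, using the field names from the example above (the schema itself is mine, purely illustrative):

# Sketch: validate an LLM's JSON response before passing it downstream.
# The schema mirrors the iMac example above; field names are illustrative.
import json
from jsonschema import validate, ValidationError

imac_schema = {
    "type": "object",
    "properties": {
        "display": {"type": "string"},
        "processor": {"type": "string"},
        "ram": {"type": "string"},
        "storage": {"type": "string"},
    },
    "required": ["display", "processor", "ram", "storage"],
}

def parse_llm_response(raw: str) -> dict:
    """Reject anything that isn't the structure we asked for."""
    data = json.loads(raw)           # fails on non-JSON output
    validate(data, imac_schema)      # fails on missing or mistyped fields
    return data

llm_output = '{"display": "27-inch Retina 5K", "processor": "8-core Intel Core i9", "ram": "Up to 128GB", "storage": "Up to 8TB SSD"}'

try:
    specs = parse_llm_response(llm_output)
except (json.JSONDecodeError, ValidationError):
    specs = None  # e.g. retry the call or fall back

A failed validation becomes an ordinary error you can retry or handle, which is exactly the kind of predictable behavior the rest of the stack expects.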