I’ve been doing some experimentation with smaller models and embeddings, including distilbert/distilbert-base-uncased-finetuned-sst-2-english and cardiffnlp/twitter-roberta-base-sentiment-latest as binary sentiment classifiers and google/vit-base-patch16-224 as an image classifier. I also used GoogleNews-vectors-negative300 and fasttext-wiki-news-subwords-300 embeddings to find semantically similar words and concepts.
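
For the word-similarity part, here's a minimal sketch of what I mean, assuming the gensim downloader names for these two vector sets (the query word is just an example):

```python
import gensim.downloader as api

# Both vector sets are available through gensim's downloader
# (note: these are large downloads, ~1-2 GB each).
word2vec = api.load("word2vec-google-news-300")
fasttext = api.load("fasttext-wiki-news-subwords-300")

# Find semantically similar words/concepts for a query term.
for name, model in [("word2vec", word2vec), ("fasttext", fasttext)]:
    print(name, model.most_similar("espresso", topn=5))
```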

2024-08-09

I figured out the issue with adding mistral-large. After a bit of debugging, I realized by manually calling llm_mistral.refresh_models() that something was wrong with how I had added the secret on Modal. It turns out the environment variable name for the Mistral API key needed to be LLM_MISTRAL_KEY. I’m going to try to make a PR to the repo to document this behavior (a sketch of the working setup is below).

I’ve also been trying to run models locally, specifically colpali and florence-2.
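
Here’s that sketch. The secret name mistral-api-key and the image setup are placeholders; the load-bearing detail is that the secret must expose the key under the environment variable LLM_MISTRAL_KEY.

```python
import modal

app = modal.App("llm-mistral-example")

# Placeholder image: the llm CLI plus the llm-mistral plugin.
image = modal.Image.debian_slim().pip_install("llm", "llm-mistral")

# "mistral-api-key" is a placeholder secret name; what matters is that the
# secret sets LLM_MISTRAL_KEY, which is what the plugin looks for.
@app.function(image=image, secrets=[modal.Secret.from_name("mistral-api-key")])
def list_mistral_models():
    import llm
    import llm_mistral

    # Refreshing the model list fails loudly if the key isn't picked up,
    # which is how I found the misnamed environment variable.
    llm_mistral.refresh_models()
    return [m.model_id for m in llm.get_models() if "mistral" in m.model_id]

@app.local_entrypoint()
def main():
    print(list_mistral_models.remote())
```
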
I did some experimentation with deriving a data model iteratively (something I am currently calling “data model distillation”) by sequentially passing multiple images (this could work with text as well) to a language model and prompting it to improve the schema using any new learnings from the current image (see the sketch after this entry). Results so far have been unimpressive.

I’ve been hearing good things about mistral-large-2. I’m working on adding it to bots-doing-things but have had a bit of dependency trouble so far.
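
Concretely, the distillation loop looks something like this. This is a sketch only: the prompt wording, the `encode_image` helper, and the use of gpt-4o via the OpenAI client are my own stand-ins, not a fixed implementation.

```python
import base64
import json

from openai import OpenAI

client = OpenAI()

def encode_image(path: str) -> str:
    # Hypothetical helper: base64-encode an image for the API.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def distill_schema(image_paths: list[str]) -> dict:
    # Start from an empty schema and refine it one image at a time.
    schema: dict = {}
    for path in image_paths:
        response = client.chat.completions.create(
            model="gpt-4o",
            response_format={"type": "json_object"},
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": (
                        "Here is the current JSON schema for this document type:\n"
                        f"{json.dumps(schema, indent=2)}\n"
                        "Improve the schema using anything new you learn from this image. "
                        "Return only the updated JSON schema."
                    )},
                    {"type": "image_url", "image_url": {
                        "url": f"data:image/png;base64,{encode_image(path)}"
                    }},
                ],
            }],
        )
        schema = json.loads(response.choices[0].message.content)
    return schema
```
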
I watched Jeremy Howard’s interview with Carson Gross, the author of htmx. As someone who learned my first bits of web dev with jQuery, I feel like I appreciate the foundations of the library’s approach, but I’m still early in developing a full mental model. Jeremy built a Python wrapper on top of htmx called FastHTML, and the combination of these technologies is pretty well aligned with the kind of stack I like to work with.
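
For a sense of what that combination looks like, this is roughly the minimal FastHTML app, based on my understanding of its `fast_app`/`serve` helpers; the route and markup are just illustrative.

```python
from fasthtml.common import *

app, rt = fast_app()

@rt("/")
def get():
    # Return Python objects that render as HTML; htmx handles the interactivity.
    return Titled("Hello", P("Hello from FastHTML"))

serve()
```
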
I tried the Vision Pro today. I had heard mixed reviews, mostly about how heavy it is and people getting headaches or vertigo. Those challenges are real. Even still, the experience was pretty incredible. I might need to go back for another demo so I can choose my own adventure rather than staying on the standard demo path. The eye-tracking selection was natural and pretty effective, though my eyes did get tired and I eventually ended up with a bit of a headache.

2024-08-02

I added image support for the chat shortcode of this site. Here’s an example exchange rendered with it:

> **user**: Describe this image
>
> **assistant (gpt-4o)**: The image depicts a LEGO chef figure in a playful, detailed kitchen setting. The chef, dressed in a white chef’s coat, apron, and white chef’s hat, appears to be cooking or serving food. The LEGO figure is smiling and holding utensils, with plates of colorful items like tomatoes in the foreground, resembling food. The background features a kitchen environment with soft, out-of-focus lighting and various kitchen elements, contributing to the warm and inviting atmosphere.

I tried stacking multiple pages of a PDF vertically into a single image, sending that to a model, and then doing data extraction from it (a sketch of the stacking step is below). It didn’t work; the inference seemed to output made-up data. I imagine this is because models aren’t trained on much data that looks like this.

Hillel wrote an interesting pitch for preferring reStructuredText to Markdown.

Multiple studies have shown that hallucinations can be significantly reduced by giving the model the right context, via retrieval or via tools the model can use to gather context.
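
The stacking itself is straightforward. Here’s roughly what I mean, assuming pdf2image and Pillow (the function name and output path are placeholders):

```python
from pdf2image import convert_from_path
from PIL import Image

def stack_pdf_pages(pdf_path: str, out_path: str = "stacked.png") -> str:
    # Render each PDF page to a PIL image (requires poppler to be installed).
    pages = convert_from_path(pdf_path)

    # Create one tall canvas and paste the pages one under another.
    width = max(p.width for p in pages)
    height = sum(p.height for p in pages)
    canvas = Image.new("RGB", (width, height), "white")

    y = 0
    for page in pages:
        canvas.paste(page, (0, y))
        y += page.height

    canvas.save(out_path)
    return out_path
```
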
I wrote about and screen-recorded myself building a Python app that calls a model to extract structured data from an image, making heavy use of codegen with Cursor. The same protobuf definition is used both as instructions in the prompt and to unpack the model’s result into an instance of the class generated from the .proto file via protoc. I’m planning to open source this pattern once I get it into a better state.
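
A rough sketch of that pattern, with a hypothetical `Receipt` message; the receipt.proto contents, the generated receipt_pb2 module, the prompt wording, and the gpt-4o call are all stand-ins for the real thing.

```python
from google.protobuf import json_format
from openai import OpenAI

import receipt_pb2  # hypothetical module generated by `protoc` from receipt.proto

# The same schema text that protoc compiled is pasted into the prompt.
RECEIPT_PROTO = open("receipt.proto").read()

client = OpenAI()

def extract_receipt(image_data_url: str) -> receipt_pb2.Receipt:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Extract the data in this image as JSON matching this protobuf schema:\n"
                    f"{RECEIPT_PROTO}\n"
                    "Use the proto3 JSON mapping for field names."
                )},
                {"type": "image_url", "image_url": {"url": image_data_url}},
            ],
        }],
    )
    # Unpack the model's JSON into the protoc-generated class.
    return json_format.Parse(
        response.choices[0].message.content,
        receipt_pb2.Receipt(),
        ignore_unknown_fields=True,
    )
```
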
> My thesis is clearer now. I'm short on "complex reasoning and agents" because it is often a scapegoat for poorly described problem spaces. My thoughts on capabilities are about figuring out the 80/20 and baking "complex reasoning" into specific tools, so you put fewer…
>
> — jason liu (@jxnlco), July 28, 2024

This point resonates with me. The more time I spend prompting models, the clearer it becomes that the clarity of the instructions is what matters most.

I ran the code from my Fine-tuning “Connections” post using gpt-4o-mini. I was hoping the results might be a bit better, which could motivate an effort to fine-tune the model. I’m not sure where my original version of this code went, so I reconstructed a repo for it. Once that was done, I ran 100 prompts through the model to get a sense of its baseline performance. Correct: 2.
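
The baseline run is just a loop over puzzles with a strict check on the model’s groupings. A rough sketch, where `load_puzzles`, the puzzle format, the prompt, and the scoring are all simplified stand-ins for what the repo actually does:

```python
import json

from openai import OpenAI

client = OpenAI()

def load_puzzles(path: str) -> list[dict]:
    # Hypothetical format: each puzzle has 16 "words" and 4 known "groups" of 4.
    with open(path) as f:
        return json.load(f)

def solve(words: list[str]) -> list[list[str]]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "Solve this Connections puzzle. Group these 16 words into 4 groups "
                f"of 4 related words: {', '.join(words)}. "
                'Respond as JSON: {"groups": [["w1", "w2", "w3", "w4"], ...]}'
            ),
        }],
    )
    return json.loads(response.choices[0].message.content)["groups"]

def score(puzzles: list[dict]) -> int:
    correct = 0
    for puzzle in puzzles:
        try:
            guess = {frozenset(g) for g in solve(puzzle["words"])}
        except Exception:
            continue  # malformed output counts as incorrect
        answer = {frozenset(g) for g in puzzle["groups"]}
        correct += guess == answer
    return correct

# print(score(load_puzzles("puzzles.json")))
```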