2024-09-26

Cool article by Jacob on rewriting a blog with Astro. I’ve been getting a bit of a rewrite itch lately, but I don’t want it to become a distraction. It might need to wait until the end of the FastAI course, with just a little exploration on the side.

2024-09-25

Several interesting releases today/recently.

Multi-modal Llama: Llama 3.2. Tons of model infra providers announced day-one availability. We seem to be getting into a bit of a rhythm here. It’s also convenient for Meta, which doesn’t need to scale the inference infra itself (though they, of all companies, would probably be capable); providers do it for them.

AllenAI’s Molmo: another interesting, open-source multi-modal model.

Open source is catching up in multi-modal. I’m looking forward to experimenting with both of these.

I finally found some time to run more comprehensive evals of Connections with one guess at a time, using Python code to validate the guesses and give feedback. I ran about 100 puzzles with gpt-4o-mini, gpt-4o, and claude-3-5-sonnet, but it became clear that Sonnet was going to perform the best, so I decided to complete the full set of 466 puzzles released as of today with Sonnet only. This wasn’t cheap, but it was interesting to see the results. I’m going to write up more comprehensive findings and push the code soon.
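The validation logic is simple enough to sketch. A minimal, hypothetical version (assuming each puzzle is stored as a mapping from group name to its four words; the real code also has to parse the guess out of the model’s response):

```python
# Minimal sketch of single-guess validation with Connections-style feedback.
# Assumes a puzzle is a dict mapping group names to sets of four words.
def check_guess(guess: set[str], groups: dict[str, set[str]]) -> str:
    for name, words in groups.items():
        overlap = len(guess & words)
        if overlap == 4:
            return f"Correct: {name}"
        if overlap == 3:
            return "One away!"
    return "Incorrect."

puzzle = {
    "FISH": {"bass", "flounder", "salmon", "trout"},
    "___ MUSIC": {"chamber", "country", "folk", "sheet"},
}
print(check_guess({"bass", "salmon", "trout", "sheet"}, puzzle))  # One away!
```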

2024-09-13

There have been a number of small-in-scope but tough problems that I’ve run into that models haven’t been able to crack as I’ve presented them via prompting. Usually, these are problems with a few separate areas of complexity, like a recursive parser plus a weird templating language to write it in. o1 is the first model I can recall that took my high-level approach and suggested a simplifying change to the input (`tree -F` to `tree -J -F`) that meaningfully reduced the problem’s complexity (the parser is no longer needed if the input is JSON). With this change and two followups to correct a hallucination, the model output a recursive Hugo shortcode to render a file tree with collapsible folders.
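The `-J` flag makes the simplification concrete: the output is plain JSON, so a few lines of recursion replace the hand-rolled parser. A rough Python illustration of the idea (the actual solution was a Hugo shortcode; the `type`/`name`/`contents` keys are from my reading of tree’s JSON output and worth double-checking):

```python
# Sketch: walking `tree -J -F` output recursively instead of parsing the
# ASCII art that `tree -F` alone produces.
import json
import subprocess

def render(nodes: list[dict], depth: int = 0) -> None:
    for node in nodes:
        if node.get("type") == "report":  # tree appends a summary entry
            continue
        print("  " * depth + node["name"])
        if node.get("type") == "directory":
            render(node.get("contents", []), depth + 1)

out = subprocess.run(["tree", "-J", "-F", "."], capture_output=True, text=True)
render(json.loads(out.stdout))
```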

I’m making another, more thorough pass through course.fast.ai, including all the notebooks and videos, and this time I’m going to focus more on the projects. I’ll also be logging a lot more notes, since doing so is by far the most effective way for me to learn.

The course materials are very detailed, but I’ve still run into some rough edges. The bird vs. forest image classifier didn’t quite work without some modifications to get the image search working (one workable approach is sketched below). Also, the recommended approach for working through the textbook notebooks is Google Colab, which requests a large number of permissions on my Google account while masquerading as “Google Drive for Desktop”, and that doesn’t make me feel great.

I was able to run most of the examples on my personal computer, but training the model for the IMDB movie review classifier was quite slow. Since I imagine there will be several more models of this size/complexity that I’ll want to train, finding a reasonably fast way to do that will be useful, so I decided Colab might be worth trying after all. I went back to the Colab notebook and tried running the cat-or-not classification example. It seemed to take even longer than it did on my local machine, with an apparent ETA of ~30 minutes.
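For the image search, something along these lines via the duckduckgo_search package (a sketch, not necessarily the exact modification I made; the package’s API has shifted across versions):

```python
# Hypothetical sketch: fetching image URLs with the duckduckgo_search package
# (pip install duckduckgo_search). API details may differ across versions.
from duckduckgo_search import DDGS

def search_images(term: str, max_results: int = 30) -> list[str]:
    with DDGS() as ddgs:
        return [r["image"] for r in ddgs.images(term, max_results=max_results)]

bird_urls = search_images("bird photos")
forest_urls = search_images("forest photos")
```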

2024-09-10

A nice writeup by Eugene on building a simple data viewer webapp with a few different frameworks. I’m going to try including llm-ctx.txt next time I write FastHTML to see if it helps the language model write better code.
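As I understand it, the technique is just context stuffing: fetch the file and put it in front of the request. A hypothetical sketch (placeholder URL, and the OpenAI client standing in for whatever model I end up using):

```python
# Hypothetical sketch of using llm-ctx.txt as system context.
# CTX_URL is a placeholder; point it at the framework's actual file.
import httpx
from openai import OpenAI

CTX_URL = "https://example.com/llm-ctx.txt"

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": httpx.get(CTX_URL).text},
        {"role": "user", "content": "Write a minimal FastHTML app with one route."},
    ],
)
print(response.choices[0].message.content)
```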

2024-09-08

I was going to write a quick guide on how to get up and running using Google’s Gemini model via API, since I found it quite straightforward and Twitter is currently dunking on Google for how hard this is. When I tried to retrace my steps, the CSS for the documentation was failing to load with a 503, so I guess this will have to wait until another day.
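The gist of the guide would have been short anyway. Roughly this, from memory, so treat it as a sketch (it assumes an API key from Google AI Studio and the google-generativeai package):

```python
# Sketch of a minimal Gemini API call (pip install google-generativeai).
# Model names and SDK details may have changed since I tried this.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Explain what an HTTP 503 status code means.")
print(response.text)
```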

2024-09-07

I am continuing to see a lot of buzz about ColPali and Qwen2-VL. I’d like to try these out but haven’t put together enough of the pieces to make sense of them yet. I am also seeing a lot of conversation about how traditional OCR-to-LLM pipelines will be superseded by these approaches. Based on my experience with VLMs, this seems directionally correct. The overall amount of noise makes it tough to figure out what is worth focusing on and what is real vs. hype.

2024-09-05

Played around a bit with baml for extracting structured data with a VLM. It’s an interesting approach and has better ergonomics and tooling than most things I’ve tried so far. I like how you can declare test cases in the same place as the object schemas and that there is a built-in playground. I still need to see how to handle multi-step pipelines.

I experimented with extracting data from pictures of menus. Early results were mixed. I think my photo quality isn’t great, and that might be one of the bigger issues.
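For flavor, here is a rough Python analogue of the setup (not BAML syntax and not my actual pipeline): schema-driven extraction with pydantic, using the OpenAI vision API as a stand-in VLM:

```python
# Hypothetical sketch: schema-driven menu extraction with a VLM.
# pydantic defines the schema; gpt-4o stands in for the VLM.
import base64
import json

from openai import OpenAI
from pydantic import BaseModel

class MenuItem(BaseModel):
    name: str
    price: float | None = None
    description: str | None = None

class Menu(BaseModel):
    items: list[MenuItem]

def extract_menu(image_path: str) -> Menu:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Extract this menu as JSON matching the schema: "
                    + json.dumps(Menu.model_json_schema())
                )},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return Menu.model_validate_json(response.choices[0].message.content)
```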