I most often invoke a language model through a playground (UI) interface. Occasionally, though, it can be helpful to use a CLI to pipe output directly into a model. For example:
```bash
git diff --staged | llm "write a commit message for these changes"
```
More often, though, I open a playground and paste in the bits and pieces of context I need. Refinement and follow-ups are common enough that a CLI isn't nearly as flexible. Even though most playgrounds have various weird and annoying behaviors, I generally still prefer them.
I tried the Gemini 1.5 multi-modal model preview. Unfortunately, I wasn’t super impressed relative to gpt-4-turbo.
This evening has mostly been spent making mistakes while trying to deploy a few different Modal functions and call them from a cron job.
I thought I was close to getting this working when I left off yesterday, but now I was getting stuck on errors like the following:
modal.exception.ExecutionError: Object has not been hydrated and doesn't support lazy hydration. This might happen if an object is defined on a different stub, or if it's on the same stub but it didn't get created because it wasn't defined in global scope.
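For illustration, here is a minimal sketch of the kind of setup that can trigger this error (the stub and function names are made up): one stub's function tries to call a function object that belongs to a different stub, so the handle never gets hydrated when the first stub is deployed.

```python
import modal

stub_a = modal.Stub("stub-a")
stub_b = modal.Stub("stub-b")

@stub_a.function()
def helper():
    return "done"

@stub_b.function(schedule=modal.Cron("0 14 * * *"))
def daily():
    # Fails at runtime: helper belongs to stub-a, so it is never
    # hydrated when only stub-b is deployed.
    helper.remote()
```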
Eventually, I found an approach in the Modal Community Slack channel that helped me resolve my issue.
Side note: I wish this content were more easily searchable publicly. It's so valuable, but it's locked away in Slack.
I needed to use Function.lookup to reference functions that are deployed separately and defined in other stubs.
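As a minimal sketch of the fix, assuming the helper is deployed separately under an app named "other-app" with a function called `helper` (both names are hypothetical):

```python
import modal

stub = modal.Stub("cron-runner")

@stub.function(schedule=modal.Cron("0 14 * * *"))
def daily():
    # Look up the separately deployed function at runtime instead of
    # importing the handle from the other stub's module.
    helper = modal.Function.lookup("other-app", "helper")
    helper.remote()
```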
Once I got this working, I was able to launch Bots Doing Things, a site where my bots perform tasks and publish their results.
Currently, I have just one “bot”: it plays Connections once a day using gpt-4 and records its result in a git repo, which Vercel then automatically deploys to the site.
The daily cron and Python code run on Modal.
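For a rough sense of how such a bot could be wired together on Modal, here is a sketch; the prompt, repo URL, file path, and secret name are all assumptions for illustration, not the actual implementation.

```python
import subprocess
import modal

stub = modal.Stub("connections-bot")

# Container image with git and the OpenAI client available.
image = modal.Image.debian_slim().apt_install("git").pip_install("openai")

@stub.function(
    image=image,
    schedule=modal.Cron("0 14 * * *"),  # once a day at 14:00 UTC
    secrets=[modal.Secret.from_name("bot-credentials")],  # assumed secret name
)
def play_and_publish():
    from openai import OpenAI

    # Ask gpt-4 to play the puzzle (prompt is a placeholder).
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Play today's Connections puzzle..."}],
    )
    result = response.choices[0].message.content

    # Record the result in the site's repo; Vercel redeploys on push.
    # Assumes git credentials are supplied via the secret above.
    repo = "/tmp/site"
    subprocess.run(
        ["git", "clone", "https://github.com/example/bots-doing-things.git", repo],
        check=True,
    )
    with open(f"{repo}/results/latest.md", "w") as f:
        f.write(result)
    for cmd in (["add", "-A"], ["commit", "-m", "daily result"], ["push"]):
        subprocess.run(["git", "-C", repo, *cmd], check=True)
```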