Having gotten more into using llama 7b and 30b lately, this take seems like it could hold water.
Model inference still isn’t free when you scale a consumer app.
Maybe I can use llama3 for all my personal use cases, but I still need infra to scale it.
The price will probably come down significantly, though, with so many model inference providers, and the speed will go way up once Groq starts running it (if they can run multi-modal models).
I read Jason, Ivan, and Charles' blog post on Modal about fine-tuning an embedding model.
It's a bit in the weeds of ML for me, but I learn a bit more every time I read something new.
I played around with trying to run a Temporal worker on Modal.
I didn’t do a ton of research upfront – I just kind of gave it a shot.
I suspect this isn’t possible.
Both use Python magic to do the things they do.
This is what I tried:
```python
import asyncio
import os
from datetime import timedelta

import modal
from temporalio import activity, workflow
from temporalio.client import Client, TLSConfig
from temporalio.worker import Worker


@activity.defn
async def my_activity(name: str) -> str:
    return f"Hello, {name}!"


@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        # start_to_close_timeout needs to be a timedelta, not a raw int
        return await workflow.execute_activity(
            my_activity,
            name,
            start_to_close_timeout=timedelta(seconds=60),
        )


async def worker_main():
    # Connect to Temporal Cloud using mTLS certs pulled from the Modal secret
    client = await Client.connect(
        "my.namespace.tmprl.cloud:7233",
        namespace="my.namespace",
        tls=TLSConfig(
            client_cert=bytes(os.environ["TEMPORAL_CLIENT_CERT"], "utf-8"),
            client_private_key=bytes(os.environ["TEMPORAL_CLIENT_KEY"], "utf-8"),
        ),
    )
    worker = Worker(
        client,
        task_queue="modal-task-queue",
        workflows=[MyWorkflow],
        activities=[my_activity],
    )
    await worker.run()


stub = modal.Stub("temporal-worker")


@stub.function(
    image=modal.Image.debian_slim().pip_install(
        [
            "temporalio==1.5.1",
        ]
    ),
    secrets=[modal.Secret.from_name("modal-temporal-worker")],
)
def main():
    asyncio.run(worker_main())


if __name__ == "__main__":
    with stub.run():
        main.call()
```
Run it by invoking the file directly with `python`, since the `__main__` block calls `stub.run()`.
I read this interesting article by Gajus about fine-tuning gpt-3.5-turbo.
It was quite similar to my experience fine-tuning a model to play Connections.
A helpful takeaway was that after fine-tuning the model, you shouldn't need to include the system prompt in future inference calls, so you can save on token cost.
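That clicked for me, so here's a minimal sketch of the idea (the fine-tuned model ID and prompt are made up, and I'm assuming the v1 OpenAI Python client):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical fine-tuned model ID. The instructions that previously lived in
# the system prompt are baked into the fine-tuned weights, so only the user
# message gets sent, saving those system-prompt tokens on every request.
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-0125:my-org::abc123",
    messages=[{"role": "user", "content": "Write a commit message for this diff: ..."}],
)
print(response.choices[0].message.content)
```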
I also liked the suggestion to use a database to store training data.
I had been wrangling JSONL files myself.
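I haven't built that yet, but a rough sketch of what it could look like (the table name, schema, and file names are all assumptions on my part), using sqlite3 from the standard library:

```python
import json
import sqlite3

# Hypothetical schema: one row per training example, with the prompt and the
# ideal completion stored as columns instead of hand-edited JSONL.
conn = sqlite3.connect("training_data.db")
rows = conn.execute("SELECT prompt, completion FROM examples").fetchall()

# Export to the chat-format JSONL that the fine-tuning API expects.
with open("train.jsonl", "w") as f:
    for prompt, completion in rows:
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": completion},
            ]
        }
        f.write(json.dumps(record) + "\n")
```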
About a month ago, I had been looking into creating an NL-to-SQL plugin for datasette.
Simon released a version of exactly that the next day, and I came across it in his article here.
Hopefully I can find time to try this out in the next few days.
I did a refactor of my Nix config following a pattern I learned from reading Davis' setup.
My two main uses right now for Nix/home-manager are to install and configure programs.
Some of these programs have nix modules that allow for the configuration to be written in Nix.
Others don’t, but you can still use Nix to create a config file for that program to read.
I do the latter with skhd and goku to create a karabiner.json.
With this refactor, I used the default.nix file to create program-specific module imports.
I refactored my home.nix to use the same approach as well.
This allows me to easily co-locate code to set up a given program, regardless of whether I am configuring it with Nix or by creating dotfiles.
For me, invoking a language model through a playground (UI) interface is the most common approach.
Occasionally, it can be helpful to use a CLI to pipe output directly into a model.
For example:

```bash
git diff --staged | llm "write a commit message for these changes"
```
However, I am more often inclined to open a playground and paste the bits and pieces of context I need.
Maybe it's that refinement and follow-ups are common enough that using a CLI isn't nearly as flexible.
The bottom line is, I far more frequently open a playground to use a language model than use a CLI.
Even though most of the playgrounds have various weird and annoying behaviors, I generally still prefer them.
I enjoyed this article by Ken about production LLM use cases with OpenAI models.
> When it comes to prompts, less is more
This resonated with me.
I’ve found that too much instruction can lead a model to perform worse on a task.
> GPT is really bad at producing the null hypothesis
This also seems to confirm what I've seen empirically, though I never ask for the null hypothesis directly.
Instead, I ask for something like "return an empty JSON array if you can't find anything".
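Roughly, the pattern looks like this (the prompt, notes, and model call are purely illustrative, again assuming the v1 OpenAI Python client):

```python
import json

from openai import OpenAI

client = OpenAI()

# The key instruction: give the model an explicit "nothing found" shape to
# return instead of asking it to assert a null hypothesis in prose.
prompt = (
    "Extract any action items from the meeting notes below as a JSON array "
    "of strings. Return an empty JSON array if you can't find anything.\n\n"
    "Notes: We mostly caught up and rescheduled the planning session."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

# In practice I'd guard this parse, but either way downstream code just gets
# a list, empty or not.
action_items = json.loads(response.choices[0].message.content)
print(action_items)
```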
I enjoyed Martin’s article on preserving your shell history.
I implemented some of his approaches in my system config.
I got Gemini 1.5 Pro up and running with the llm CLI.
I’ve said this before but I will say it again – the fact that I don’t need to deal with GCP to use Google models gives me joy.
```
❯ llm -m gemini-1.5-pro-latest "who is the fastest man in the world?"
As of November 2023, **Usain Bolt** is still considered the fastest man in the world. He holds the world record in the 100 meters with a time of 9.58 seconds, set in 2009. He also holds the record for the 200 meters at 19.19 seconds, achieved in 2009 as well.
```
Having all these models readily available is great.
My hope is to play around with several to become a bit of an amateur model sommelier.