I’m looking into creating a Deno server that can manage multiple websocket connections and emit to one after receiving a message from another. A simple way to implement this is to run a single server and track all the ongoing websocket connections by id. I’m learning more about approaches that could support a multi-server backend.
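A minimal sketch of the single-server approach, using Deno.serve and Deno.upgradeWebSocket (the relay-to-everyone-else behavior here is just one illustrative choice):

// One process holds every live connection, keyed by a random id.
const sockets = new Map<string, WebSocket>();

Deno.serve((req) => {
  if (req.headers.get("upgrade") !== "websocket") {
    return new Response("expected a websocket upgrade", { status: 400 });
  }
  const { socket, response } = Deno.upgradeWebSocket(req);
  const id = crypto.randomUUID();
  socket.onopen = () => sockets.set(id, socket);
  socket.onclose = () => sockets.delete(id);
  socket.onmessage = (event) => {
    // Emit to every other connected client.
    for (const [otherId, other] of sockets) {
      if (otherId !== id && other.readyState === WebSocket.OPEN) {
        other.send(event.data);
      }
    }
  };
  return response;
});

This is exactly the part that breaks with multiple servers: a message can arrive on one instance while the target socket lives on another, which is why multi-server setups usually add a shared broker like Redis pub/sub.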

2024-05-09

I take an irrational amount of pleasure in disabling notifications for apps that use them to send me marketing.

2024-05-08

I enjoyed reading Yuxuan’s article on whether GitHub Copilot increased their productivity. I personally don’t love Copilot but enjoy other AI-assisted software tools like Cursor, which allows for more capable models than Copilot offers. It’s encouraging to see more folks keeping unfiltered thought journals.

I read this post by Steph today and loved it. I want to try writing that concisely. I imagine it takes significant effort, but the result is beautiful, satisfying, and valuable. It’s a privilege to read a piece written by someone who values every word.

Having gotten more into using llama 7b and 30b lately, I think this take could hold water. Model inference still isn’t free when you scale a consumer app. Maybe I can use llama3 for all my personal use cases, but I still need infra to scale it. The price will probably drop significantly with so many model inference providers, though, and the speed will go way up once Groq starts running it (if they can run multi-modal models).

I played around with trying to run a Temporal worker on Modal. I didn’t do a ton of research upfront – I just kind of gave it a shot. I suspect this isn’t possible. Both use Python magic to do the things they do. This is what I tried.

import asyncio
import os
from datetime import timedelta

import modal
from temporalio import activity, workflow
from temporalio.client import Client, TLSConfig
from temporalio.worker import Worker

# A trivial activity, just to prove the worker can pick up tasks.
@activity.defn
async def my_activity(name: str) -> str:
    return f"Hello, {name}!"

@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        # start_to_close_timeout must be a timedelta, not a bare int
        return await workflow.execute_activity(
            my_activity, name, start_to_close_timeout=timedelta(seconds=60)
        )

async def worker_main():
    # Connect to Temporal Cloud over mTLS; the client cert and key come in
    # through the Modal secret as environment variables.
    client = await Client.connect(
        "my.namespace.tmprl.cloud:7233",
        namespace="my.namespace",
        tls=TLSConfig(
            client_cert=bytes(os.environ["TEMPORAL_CLIENT_CERT"], "utf-8"),
            client_private_key=bytes(os.environ["TEMPORAL_CLIENT_KEY"], "utf-8"),
        ),
    )
    # The worker polls the task queue until the container is torn down.
    worker = Worker(
        client,
        task_queue="modal-task-queue",
        workflows=[MyWorkflow],
        activities=[my_activity],
    )
    await worker.run()


stub = modal.Stub("temporal-worker")

# Package the worker as a Modal function. The image only needs the Temporal
# SDK; the secret supplies TEMPORAL_CLIENT_CERT and TEMPORAL_CLIENT_KEY.
@stub.function(
    image=modal.Image.debian_slim().pip_install(
        [
            "temporalio==1.5.1",
        ]
    ),
    secrets=[modal.Secret.from_name("modal-temporal-worker")],
)
def main():
    asyncio.run(worker_main())

if __name__ == "__main__":
    with stub.run():
        main.call()

Run with python <filename>.py; the __main__ block starts the stub and calls the function on Modal.

I read this interesting article by Gajus about finetuning gpt-3.5-turbo. It was quite similar to my experience finetuning a model to play Connections. A helpful takeaway was that after finetuning the model, you shouldn’t need to include the system prompt in future inference calls, so you can save on token cost. I also liked the suggestion to use a database to store training data; I had been wrangling jsonl files myself.
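A sketch of what that inference-side saving looks like, with a made-up fine-tuned model id (the openai client runs under Deno via the npm specifier):

import OpenAI from "npm:openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Placeholder model id for a real fine-tune. Since the system prompt was
// baked in during finetuning, the request sends (and pays for) only the
// user message.
const completion = await client.chat.completions.create({
  model: "ft:gpt-3.5-turbo-0125:my-org::abc123",
  messages: [{ role: "user", content: "BASS, FLOUNDER, SALMON, TROUT" }],
});

console.log(completion.choices[0].message.content);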