2024-11-20

Tried out Letta. Not yet sure where to go with it.

2024-11-16

Would another day of editing fundamentally change the value readers get? Probably not. Ship it and move on to your next idea while you’re still energized.

2024-11-10

"When you are curious about something, you have the right cocktail of neurotransmitters present to make that information stick. If you get the answer to something in the context of your curiosity, then it's going to stay with you." (David Eagleman, https://freakonomics.com/podcast/feeling-sound-and-hearing-color/)

2024-11-09

"We can improve the accuracy of nearly any kind of machine learning algorithm by training it multiple times, each time on a different random subset of the data, and averaging its predictions." (Fastbook, Chapter 9)
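The ensembling idea in that quote can be sketched with plain NumPy. This is a toy illustration (not from the book): fit several simple linear models, each on a different bootstrap sample, and average their predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 3x + noise
x = rng.uniform(0, 10, 200)
y = 3 * x + rng.normal(0, 2, 200)

# Train several models, each on a different random subset of the data
predictions = []
for seed in range(10):
    r = np.random.default_rng(seed)
    idx = r.choice(len(x), size=len(x), replace=True)  # bootstrap sample
    slope, intercept = np.polyfit(x[idx], y[idx], 1)   # simple linear fit
    predictions.append(slope * 5.0 + intercept)        # each model predicts at x = 5

# Averaging the individual predictions reduces variance
ensemble_prediction = np.mean(predictions)
print(ensemble_prediction)  # close to the true value 3 * 5 = 15
```

Each individual model sees a slightly different dataset, so its errors are partly independent; averaging cancels some of that noise out.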
I wanted to get more hands-on with the language model trained in chapter 12 of the FastAI course, so I got some Google Colab credits and actually ran the training on an A100. It cost about $2.50 and took about 1:40, but generally worked quite well. There was a minor issue with auto-saving the notebook, probably due to my use of this workaround to avoid needing to give Colab full Google Drive access.

2024-10-29

The following code allowed me to successfully download the IMDB dataset with fastai to a Modal volume:

```python
import os

os.environ["FASTAI_HOME"] = "/data/fastai"

import modal
from fastai.text.all import *

app = modal.App("imdb-dataset-train")
vol = modal.Volume.from_name("modal-llm-data", create_if_missing=True)

@app.function(
    gpu="any",
    image=modal.Image.debian_slim().pip_install("fastai"),
    volumes={"/data": vol},
)
def download():
    path = untar_data(URLs.IMDB)
    print(f"Data downloaded to: {path}")
    return path
```

Run it with `modal run train.py::download`. Next, I tried to run one epoch of training of the language model:

```python
@app.function(
    gpu="h100",
    image=modal.
```
I tried training a language model with fastai on Modal. I started with a standalone Modal script: first a function to unpack the data to a volume, then a call to fit_one_cycle on the learner. I ran into an issue with counter.pkl, sort of similar to this issue, but I haven't figured out how to resolve it yet. On a whim, I checked to see if I could run a Jupyter notebook on Modal.

2024-10-26

Jon wrote an interesting blog on top of Cloudflare Workers and KV. I've been seeing more and more notebook-like products and UX. A few I've seen recently:

- https://github.com/fastai/fastbook
- https://nbdev.fast.ai/
- https://runme.dev/
- https://github.com/tzador/makedown
- https://github.com/dim0x69/mdx

2024-10-21

Ran several experiments using local LLMs (~7B-parameter models) like llama3.2 and phi3 to generate a random number between 1 and 100. The exact prompt (shown here for llama3.2) was:

```
system: You are a random number generator that provides a number between 1 and 100.
user: Generate a random number between 1 and 100. Provide the output in the
following format: 'Random number: X', where X is the generated number. Ensure
the number is an integer and do not include any additional text or explanations.
```
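To see how the models' outputs are distributed, a small tally helper can parse responses in the 'Random number: X' format. This is a hypothetical sketch, not code from the original experiment; the sample strings are made up for illustration.

```python
import re
from collections import Counter

def tally_random_numbers(responses):
    """Count how often each number appears in 'Random number: X' responses."""
    counts = Counter()
    for line in responses:
        match = re.search(r"Random number:\s*(\d+)", line)
        if match:
            counts[int(match.group(1))] += 1
    return counts

# Hypothetical model outputs for illustration
sample = ["Random number: 47", "Random number: 47", "Random number: 73"]
tallies = tally_random_numbers(sample)
print(tallies)  # Counter({47: 2, 73: 1})
```

A heavily skewed tally (e.g., the model returning 47 far more often than 1 in 100 trials would suggest) is a quick way to show these models are not uniform random number generators.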