After more thinking than I really expected I would do about such a thing, I open-sourced Delta today. I still really like the concept of conversation branching for ideation, thought partnership and rabbit holing with LLMs. There are several folks who are working on closed-source, cloud-hosted versions of similar ideas. I don’t have the bandwidth to pursue the project further right now, but if I can empower someone with the time and interest, that would be a win.

2025-02-01

I’ve been working on and off on a very simple, Krea-inspired image editor in the browser that can turn sketches into images using Flux Canny. I’ve tried a few separate times to prompt models (Sonnet, o1, r1) to add an undo-redo stack to the project. All have failed. It’s possible I’m under-specifying the ask, but it’s interesting to identify this as a particular area where models don’t seem to do well.
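
For reference, the behavior I’m asking for is a standard pattern: two stacks of snapshots, where committing a new edit clears the redo stack. A minimal sketch of that pattern, in Python for brevity (the editor itself is JavaScript):

```python
class History:
    """Undo/redo via two stacks of immutable snapshots."""

    def __init__(self, initial_state):
        self.present = initial_state
        self.undo_stack = []
        self.redo_stack = []

    def commit(self, new_state):
        # A new edit invalidates anything previously undone
        self.undo_stack.append(self.present)
        self.redo_stack.clear()
        self.present = new_state

    def undo(self):
        if self.undo_stack:
            self.redo_stack.append(self.present)
            self.present = self.undo_stack.pop()
        return self.present

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(self.present)
            self.present = self.redo_stack.pop()
        return self.present


h = History("blank canvas")
h.commit("sketch")
h.commit("sketch + color")
assert h.undo() == "sketch"
assert h.redo() == "sketch + color"
```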

2025-01-29

I spent some time building an iOS app today, all with Cursor. I’ve never really written Swift before (well, apparently I tried it a year ago). Since that was pre-Sonnet, I probably wasn’t doing much code generation at the time. It will be interesting to see how far I can get.

I worked through a bit of setup to mount an R2 bucket to the local filesystem with rclone, but haven’t taken the final step of modifying my system security permissions yet.
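
For my own notes, the setup looks roughly like this; the remote name, keys, bucket, and mount point are placeholders, and R2 is reached through rclone’s S3 backend:

```bash
# ~/.config/rclone/rclone.conf — R2 speaks S3, so use rclone's S3 backend
# (account ID, keys, and bucket name below are placeholders)
cat >> ~/.config/rclone/rclone.conf <<'EOF'
[r2]
type = s3
provider = Cloudflare
access_key_id = YOUR_R2_ACCESS_KEY_ID
secret_access_key = YOUR_R2_SECRET_ACCESS_KEY
endpoint = https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com
EOF

# Mounting requires FUSE (macFUSE on macOS, which is the
# system-security approval I still need to grant)
rclone mount r2:my-bucket ~/r2 --vfs-cache-mode writes --daemon
```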

2025-01-21

The Agricultural Revolution certainly enlarged the sum total of food at the disposal of humankind, but the extra food did not translate into a better diet or more leisure. Rather, it translated into population explosions and pampered elites.

  • Yuval Noah Harari, Sapiens

I spent some time looking for low-latency, image-to-image APIs. I looked around a fair bit and think I’ve settled on together.ai.

My main needs are very similar to Krea’s:

  • < 1 sec latency
  • the ability to specify a starting prompt and image

I’m still validating this is the case: latency seems nearly there, but I’m still trying to confirm how to make a start image work.
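
Here’s a sketch of what I’m testing against together.ai’s image generations endpoint. The model slug and especially the start-image parameter are my assumptions, not confirmed API:

```python
import os

import requests

TOGETHER_API_KEY = os.environ["TOGETHER_API_KEY"]

# Sketch only: the model slug and the start-image field are
# assumptions I'm still confirming, not verified API
resp = requests.post(
    "https://api.together.xyz/v1/images/generations",
    headers={"Authorization": f"Bearer {TOGETHER_API_KEY}"},
    json={
        "model": "black-forest-labs/FLUX.1-canny",  # assumed model slug
        "prompt": "a watercolor landscape at dusk",
        "image_url": "https://example.com/sketch.png",  # assumed start-image param
        "width": 1024,
        "height": 768,
        "steps": 4,  # fewer steps, chasing < 1 sec latency
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"][0]["url"])  # assumed response shape
```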

I’ve been playing around a bunch with image generation models/tools, notably Recraft and Krea. Recently, I’ve been trying to use these tools to design a logo and favicon for Thought Eddies.

I’ve also been experimenting with using D3 to build visuals for LLM chat conversation branching. It’s harder to work with than React Flow, which is unsurprising since D3 is a much more general tool. I’m aiming to create a few different interactive visuals to showcase ideas I have for working with LLMs on a canvas. I may end up going back to React Flow if I keep hitting challenges with D3.
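
The part of D3 I’m leaning on is its hierarchy/tree layout. A minimal sketch, with a stand-in data shape for conversation nodes:

```javascript
import * as d3 from "d3";

// Stand-in conversation tree: each node is a message, children are branches
const conversation = {
  text: "prompt",
  children: [
    { text: "response A", children: [{ text: "follow-up A1", children: [] }] },
    { text: "response B", children: [] },
  ],
};

const root = d3.hierarchy(conversation);
d3.tree().size([400, 300])(root); // assigns x/y to every node

const svg = d3
  .select("body")
  .append("svg")
  .attr("width", 400)
  .attr("height", 300);

// One line per parent -> child edge
svg.selectAll("line")
  .data(root.links())
  .join("line")
  .attr("x1", (d) => d.source.x)
  .attr("y1", (d) => d.source.y)
  .attr("x2", (d) => d.target.x)
  .attr("y2", (d) => d.target.y)
  .attr("stroke", "#999");

// One circle per message node
svg.selectAll("circle")
  .data(root.descendants())
  .join("circle")
  .attr("cx", (d) => d.x)
  .attr("cy", (d) => d.y)
  .attr("r", 4);
```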

I tried two separate ways to configure Cursor to point to an alternative OpenAI-compatible API endpoint by modifying the “OpenAI API Key > Override OpenAI Base URL” section of the Cursor settings.

My first attempt was with DeepSeek, using learnings from wiring it up to llm. I got to the point where Cursor failed to validate the API endpoint (don’t forget to save the URL override), but the curl command it output for manual checking worked once I switched the model to deepseek-chat.
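
The manual check was along these lines (DeepSeek exposes an OpenAI-compatible chat completions endpoint):

```bash
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -d '{
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```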

2024-12-19

A day in the mind of Claude Sonnet


To create this animation of a day in the mind of Claude Sonnet, I used Sonnet to write the following code, which

  • generates the HTML frames with the Python script below
  • captures PNGs of each page with Puppeteer
  • stitches the PNGs into a video with ffmpeg

```python
import time
from datetime import datetime
from pathlib import Path

import llm

MODELS = ["claude-3-5-sonnet-latest"]


def get_temp_weather(hour):
    # Fake a winter day: temperature climbs until 2pm, then falls
    if hour < 14:
        temp = 30 + (24 * (hour / 14))
    else:
        temp = 54 - (27 * ((hour - 14) / 10))

    if hour < 6:
        weather = "cold"
    elif hour < 18:
        weather = "sunny"
    else:
        weather = "snowing"

    return round(temp), weather


# Each run writes into its own timestamped output directory
now = str(int(time.time()))
out_dir = Path("out") / now
out_dir.mkdir(parents=True, exist_ok=True)

start_time = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
specific_times = [
    start_time.replace(hour=2, minute=30),
    start_time.replace(hour=3, minute=0),
    start_time.replace(hour=5, minute=0),
]

for current in specific_times:
    temp, weather = get_temp_weather(current.hour)
    time_str = current.strftime("%I:%M %p")

    PROMPT = f"""It is {time_str}. {temp}°F and {weather}.
Generate subtle, abstract art using SVG in an HTML page that fills 100% of the browser viewport.
Take inspiration for the style, colors and aesthetic using the current weather and time of day.
Prefer subtle colors but it's ok to use intense colors sparingly.
No talk, no code fences. Code only."""

    timestamp = current.strftime("%H_%M")
    prompt_file = out_dir / f"prompt_{timestamp}.txt"
    prompt_file.write_text(PROMPT)

    # Generate one HTML frame per model per time of day
    for m in MODELS:
        model = llm.get_model(m)
        response = model.prompt(PROMPT, temperature=1.0)
        out_file = out_dir / f"{timestamp}_{m}.html"
        out_file.write_text(response.text())
```

```javascript
const puppeteer = require('puppeteer');

// Screenshot each generated HTML page as a PNG frame;
// ffmpeg stitches the frames into a video afterwards
async function captureFrames() {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    await page.setViewport({
        width: 800,
        height: 600
    });

    const htmlFiles = [
        // paths to html files
    ];

    for (let i = 0; i < htmlFiles.length; i++) {
        await page.goto(`file://${__dirname}/${htmlFiles[i]}`);
        await page.screenshot({
            path: `frame${i}.png`
        });
    }

    await browser.close();
}

captureFrames();
```

```bash
ffmpeg -framerate 2 -i timestamped_frame%d.png -c:v libx264 -pix_fmt yuv420p output.mp4
```

2024-12-17

I’ve been setting up the foundations to add node summaries to Delta. Ideally, I will use the same model to create the node summaries as I use to generate the responses, since this keeps the model dependencies minimal. However, my early experiments have yielded some inconsistency in how a shared prompt behaves across models. To try and understand this and smooth it out as much as possible, I plan to set up evals to ensure the summaries stay consistent in quality and format across models.
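
A rough shape for those evals, reusing llm from earlier; the model list, prompt, and checks here are stand-ins for whatever I land on:

```python
import llm

# Stand-in models, prompt, and checks; the real evals will use
# whatever models Delta supports and stricter assertions
MODELS = ["claude-3-5-sonnet-latest", "gpt-4o-mini"]
SUMMARY_PROMPT = "Summarize this conversation node in at most 8 words:\n\n{node}"

node = "User asked how to mount an R2 bucket; assistant suggested rclone."

for m in MODELS:
    summary = llm.get_model(m).prompt(SUMMARY_PROMPT.format(node=node)).text().strip()
    checks = {
        "non_empty": bool(summary),
        "short": len(summary.split()) <= 8,
        "no_quotes": not summary.startswith('"'),
    }
    print(m, checks, repr(summary))
```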