Today, I needed to turn SVGs into PNGs. I decided to use Deno to do it. Some cursory searching showed Puppeteer should be up to the task. I also found deno-puppeteer, which seemed like it would provide a reasonable way to make this work.
To start, let’s set up a Deno project:

```sh
deno init deno-browser-screenshots
cd deno-browser-screenshots
```
Using puppeteer
Now, add some code to render an SVG with Chrome via puppeteer:
```ts
import puppeteer from "https://deno.land/x/[email protected]/mod.ts";

const svgString = `
<svg width="512" height="512" xmlns="http://www.w3.org/2000/svg">
  <rect width="100%" height="100%" fill="#87CEEB"/>
  <circle cx="256" cy="256" r="100" fill="#FFD700"/>
  <path d="M 100 400 Q 256 300 412 400" stroke="#1E90FF" stroke-width="20" fill="none"/>
</svg>`;

if (import.meta.main) {
  try {
    const browser = await puppeteer.launch({
      headless: true,
      args: ["--no-sandbox"],
    });
    const page = await browser.newPage();
    await page.setViewport({ width: 512, height: 512 });
    await page.setContent(svgString);
    await page.screenshot({
      path: "output.png",
      clip: {
        x: 0,
        y: 0,
        width: 512,
        height: 512,
      },
    });
    await browser.close();
  } catch (error) {
    console.error("Error occurred:", error);
    console.error("Make sure Chrome is installed and the path is correct");
    throw error;
  }
}
```
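With that in place, we can run the script. A minimal invocation, assuming the code lives in the default `main.ts` that `deno init` creates, and granting all permissions (deno-puppeteer needs several):

```sh
deno run -A main.ts
```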
When we run this code, we get the following error
About 6 months ago, I experimented with running a few different multi-modal (vision) language models on my MacBook. At the time, the results weren’t so great.
An experiment
With a slight modification to the script from that post, I tested out llama3.2-vision 11B (~8GB in size between the model and the projector).
Using uv and inline script dependencies, the full script looks like this:
```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "ollama",
# ]
# ///
import os
import sys

import ollama

PROMPT = "Describe the provided image in a few sentences"


def run_inference(model: str, image_path: str):
    stream = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": PROMPT, "images": [image_path]}],
        stream=True,
    )
    for chunk in stream:
        print(chunk["message"]["content"], end="", flush=True)


def main():
    if len(sys.argv) != 3:
        print("Usage: python run.py <model_name> <image_path>")
        sys.exit(1)

    model_name = sys.argv[1]
    image_path = sys.argv[2]
    if not os.path.exists(image_path):
        print(f"Error: Image file '{image_path}' does not exist.")
        sys.exit(1)

    run_inference(model_name, image_path)


if __name__ == "__main__":
    main()
```
We run it with uv, passing a model name and an image path.
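For example, assuming the script is saved as `run.py` (the file name and image path here are illustrative):

```sh
# model name from the post; the image path is a placeholder
uv run run.py llama3.2-vision image.jpg
```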
DeepSeek V3 was recently released: a cheap, reliable, supposedly GPT-4 class model.
Quick note upfront: according to the docs, there will be non-trivial price increases in February 2025:

- Input price (cache miss) is going up to $0.27 / 1M tokens from $0.14 / 1M tokens (~2x)
- Output price is going up to $1.10 / 1M tokens from $0.28 / 1M tokens (~4x)

> From now until 2025-02-08 16:00 (UTC), all users can enjoy the discounted prices of DeepSeek API
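To make the impact concrete, here’s a quick back-of-the-envelope comparison for a hypothetical workload (the token counts are made up for illustration):

```python
# Hypothetical workload: 1M input tokens (cache miss) + 200K output tokens
old_cost = 1.0 * 0.14 + 0.2 * 0.28  # $0.196
new_cost = 1.0 * 0.27 + 0.2 * 1.10  # $0.49
print(f"old: ${old_cost:.3f}, new: ${new_cost:.2f}, ~{new_cost / old_cost:.1f}x increase")
# old: $0.196, new: $0.49, ~2.5x increase
```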
This year included a lot of writing and learning new things.
My goals for the year were the following:

Train a machine learning model and write about it

- I’ve been learning ML in reverse, first playing with language models and now learning more about what it actually takes to construct a system capable of ML inference. Training my own models feels like the next step to develop depth of understanding in this area.

Build search for my blog
I’ve been building an Electron app called “Delta”. Delta is a tool for knowledge exploration and ideation through branching conversations with language models. I have lots of ideas for how to make it useful and valuable, but today it looks like this.
This article is about my struggles building Delta using Electron and how I eventually found workable, though likely suboptimal, solutions to these challenges.
I’m aiming to set up a space for more interactive UX experiments. My current Hugo blog has held up well at my scale of content but doesn’t play nicely with modern JavaScript frameworks, where most of the open source energy is currently invested.
Astro seemed like a promising option because it supports Markdown content along with a plug-and-play approach to many different frameworks like React, Svelte, and Vue. More importantly, there is a precedent for flexibility when the Next Big Thing emerges, which makes Astro a plausible test bed for new concepts without requiring a brand new site or a rewrite. At least, this was my thought process when I decided to try it out.
In this notebook, we’ll use the MovieLens 10M dataset and collaborative filtering to create a movie recommendation model.
We’ll use the data from `movies.dat` and `ratings.dat` to create embeddings that will help us predict ratings for movies I haven’t watched yet.
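For a sense of what that model looks like, here’s a minimal sketch of dot-product collaborative filtering in PyTorch. This is my own illustration rather than the notebook’s actual code; the class name, factor count, and entity counts are all assumptions:

```python
import torch
from torch import nn


class DotProductRecommender(nn.Module):
    """Predict a rating as user·movie plus per-user and per-movie biases."""

    def __init__(self, n_users: int, n_movies: int, n_factors: int = 50):
        super().__init__()
        self.user_factors = nn.Embedding(n_users, n_factors)
        self.movie_factors = nn.Embedding(n_movies, n_factors)
        self.user_bias = nn.Embedding(n_users, 1)
        self.movie_bias = nn.Embedding(n_movies, 1)

    def forward(self, users: torch.Tensor, movies: torch.Tensor) -> torch.Tensor:
        # Affinity between a user and a movie, plus how generous the user's
        # ratings are and how well-liked the movie is overall
        dot = (self.user_factors(users) * self.movie_factors(movies)).sum(dim=1)
        return dot + self.user_bias(users).squeeze(1) + self.movie_bias(movies).squeeze(1)


# Example: predicted ratings for two (user, movie) index pairs
# (entity counts roughly sized for MovieLens 10M)
model = DotProductRecommender(n_users=72_000, n_movies=11_000)
print(model(torch.tensor([0, 1]), torch.tensor([10, 20])))
```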
Create some personal data
Before I wrote any code to train models, I code-generated a quick UI to rate movies and produce `my_ratings.dat`, which I append to `ratings.dat`.
There is a bit of code needed to do that. The nice part is that, using inline script metadata and uv, we can write (generate) and run the whole tool in a single file.