I enjoyed reading Jordan’s post, a walk down memory lane through his career so far, told as a series of emails.
He includes things like following up on internship opportunities, negotiating, and meeting people who would change the course of his career.
He inspired me to look back through some old emails as well, both to remember that time and to acknowledge how much has changed since then.
I managed to find my original offer letter from Uber in 2016, which brought back many memories that I might write about in longer form sometime in the future.
For the first time in a while I used iTunes.
I mean the Music app, sorry.
I clicked on the album art while I was playing a song and the app switched to mini-player mode.
I…didn’t see what I could click to get back to the main player.
From my perspective, the app had just shrunk down to the mini-player.
I hesitantly clicked the lyrics button.
Yeah, definitely not that.
I clicked the red button of the traffic lights.
Ah.
Has this always been this way?
It’s been a while.
Eugene’s article on prompting is one of the best things I’ve read recently, full stop.
As also noted by Kyle, try starting with a temperature of 0.8 and lower it only if necessary, even for deterministic use cases.
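If it’s helpful, that’s just the temperature parameter on whatever client you use. A minimal sketch with the OpenAI Python SDK, where the model name and prompt are only illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Start at temperature 0.8 and only dial it down if outputs vary too much.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whatever model you actually run
    messages=[{"role": "user", "content": "Summarize this log entry in one sentence."}],
    temperature=0.8,
)
print(response.choices[0].message.content)
```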
An excellent response from a few years back, written by Maxim, about extracting the most value from Temporal by using it “as a service mesh for invocations of child workflows and activities hosted by different services”.
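If I’m reading it right, the pattern is that a workflow in one service can invoke child workflows and activities owned by other services just by targeting their task queues, and Temporal handles the routing, retries, and durable state. A rough sketch with the Temporal Python SDK, where all the names are made up:

```python
from datetime import timedelta
from temporalio import workflow


@workflow.defn
class OrderWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> str:
        # The child workflow is registered by a different service's worker;
        # Temporal routes the call to whichever worker polls "billing-task-queue".
        receipt = await workflow.execute_child_workflow(
            "ChargeCustomerWorkflow",
            order_id,
            id=f"charge-{order_id}",
            task_queue="billing-task-queue",
            execution_timeout=timedelta(minutes=5),
        )
        return receipt
```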
I found a very satisfying, time-saving use of a model through Cursor to generate an inline shell script to reformat some data for me.
The goal was to reformat json files that each contained a bare json list into ones where that list is wrapped in an object under a single key. I used the following prompt:
for each json file in the current folder, read the json (which is a json list) and turn it into a json object with a single key, “cells” whose value is the json list
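For reference, here’s a roughly equivalent Python sketch of the transformation the prompt describes (not the shell script Cursor actually generated):

```python
import json
from pathlib import Path

# Wrap each top-level json list in an object under a single "cells" key,
# i.e. turn  [ ... ]  into  { "cells": [ ... ] }.
for path in Path(".").glob("*.json"):
    data = json.loads(path.read_text())
    if isinstance(data, list):  # only touch files that are bare lists
        path.write_text(json.dumps({"cells": data}, indent=1))
```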
Matthew wrote a thread summarizing Apple’s private cloud and their security approach.
Between the speed at which models are changing and their size, it’s not currently practical to run things like LLM inference on iPhones, at least not for the best available models.
These models will probably always be larger or require more compute than what handheld devices are capable of.
If you want to use the best models, you’ll need to offload inference to a data center.
I spent some time moving my bookmarks and workflows to Raindrop from Pocket.
I’m mostly done with a short post on how and why I did that.
I’d like to use the python-raindropio package to generate stubs for log posts from links I bookmark, including comments I write about them.
The Nix flake I use for this blog hampered those efforts.
The python-raindropio package isn’t available as a Nix package and I’m not particularly well versed in building my own packages with Nix, so that became a bit of a rabbit hole.
After poking around for other available libraries, it turns out this one isn’t even maintained anymore. It was forked as raindrop-io-py, and it looks like the fork might not be maintained anymore either.
Not the greatest sign, but the package does still work for the time being.
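For the stub generation itself, the idea is roughly the following, sketched against Raindrop’s REST API with plain requests rather than the package; the field names and output format here are assumptions on my part:

```python
import requests

RAINDROP_TOKEN = "..."  # a test token from the Raindrop.io integrations page

# Fetch recent bookmarks (collection 0 means "all collections") and emit a
# minimal markdown stub for each, including any note I attached in Raindrop.
resp = requests.get(
    "https://api.raindrop.io/rest/v1/raindrops/0",
    headers={"Authorization": f"Bearer {RAINDROP_TOKEN}"},
    params={"perpage": 10},
)
resp.raise_for_status()

for item in resp.json()["items"]:
    print(f"[{item['title']}]({item['link']})")
    if item.get("note"):
        print(f"> {item['note']}")
    print()
```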
I was away from the computer for a couple of weeks.
That was really nice.
During my downtime and in transit, these were some of my favorite things I read in the past two weeks.
I’m planning to come up with a bit of a better system for logging and adding thoughts to stuff I read, but for now a list will have to do.
Sabrina wrote an interesting write-up on solving a math problem with gpt-4o. It turned out the text-only, chain-of-thought approach was the best performing, which is not what I would have guessed.
It was cool to see Simon dive into LLM-driven data extraction using his project datasette in this video. Using multi-modal models for data extraction seems to bring a new level of usefulness and makes these models even more general purpose.
Nostalgia: https://maggieappleton.com/teenage-desktop.
I wish I had done something like this.
Maybe I can find something on an old hard drive.