Hardly seemed worth a TIL post because it was too easy, but I learned gpt-4 is proficient at building working ffmpeg commands. I wrote the prompt “convert m4a to mp3 with ffmpeg” and it responded with `ffmpeg -i input.m4a -codec:v copy -codec:a libmp3lame -q:a 2 output.mp3`. Since the problem at hand was low stakes, I just ran the command and, to my satisfaction, it worked. Language models can’t solve every problem, but they can be absolutely delightful when they work.
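Breaking the flags down for future reference (the annotations are my own reading of the ffmpeg options, not part of the model's response):

```sh
# -i input.m4a          the source file
# -codec:v copy         copy any video stream (e.g. embedded cover art) without re-encoding
# -codec:a libmp3lame   encode the audio with the LAME MP3 encoder
# -q:a 2                LAME VBR quality preset 2 (high quality, roughly 170-210 kbps)
ffmpeg -i input.m4a -codec:v copy -codec:a libmp3lame -q:a 2 output.mp3
```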
I spent another hour playing around with different techniques to try to teach and convince gpt-4 to play Connections properly. After a bit of exploration and feedback, I incorporated two new techniques:

- Asking for one category at a time, then giving the model feedback (correct, incorrect, 3/4)
- Using the chain of thought prompting technique

Despite all sorts of shimming and instructions, I still struggled to get the model to:

- only suggest each word once, even when it had already gotten a category correct
- only suggest words from the 16-word list

Even giving a followup message with feedback that the previous guess was invalid didn’t seem to help.
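To make the first technique concrete, here's a minimal sketch of the kind of one-category-at-a-time feedback loop I was iterating on, using the OpenAI Python client. The prompt wording, example words, and loop structure here are illustrative, not the exact prompts I used:

```python
from openai import OpenAI

client = OpenAI()

# The 16 puzzle words (illustrative placeholders, not a real puzzle)
words = [
    "BASS", "FLOUNDER", "SOLE", "PIKE",
    "JACK", "KING", "QUEEN", "ACE",
    "MERCURY", "VENUS", "MARS", "SATURN",
    "APPLE", "ORANGE", "LIME", "PLUM",
]

messages = [
    {
        "role": "system",
        "content": (
            "We are solving a Connections puzzle with these 16 words: "
            + ", ".join(words)
            + ". Propose ONE category of exactly four words at a time. "
            "Think step by step about why the words belong together, then "
            "finish with a line 'GUESS: w1, w2, w3, w4'. Never reuse a word "
            "from a category that was already guessed correctly, and only "
            "use words from the 16-word list."
        ),
    },
    {"role": "user", "content": "Propose your first category."},
]

for _ in range(4):  # a puzzle has four categories
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    print(reply)

    # Keep the model's reasoning in context, then hand back my feedback
    messages.append({"role": "assistant", "content": reply})
    feedback = input("Feedback (correct / incorrect / 3/4): ")
    messages.append(
        {"role": "user", "content": f"That guess was {feedback}. Propose the next category."}
    )
```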
After some experimentation with GitHub Copilot Chat, my review is mixed. I like the ability to copy from the sidebar chat to the editor a lot. It makes the chat more useful, but the chat is pretty chatty and thus somewhat slow to finish responding. I’ve also found the inline generation doesn’t consistently respect instructions or highlighted context, which is probably the most common way I use Cursor, so that was a little disappointing.
I worked through a basic SwiftUI 2 tutorial to build a simple Mac app. Swift and SwiftUI are an alternative way to accomplish the same things JavaScript and React do for the web. I could also use something like Electron to build a cross-platform app using web technology, but after reading Mihhail’s article about using macOS native technology to develop Paper, I was curious to dip my toe in and see what the state of the ecosystem looked like.

2024-01-05

I enjoyed this article by Robin about writing software for yourself. I very much appreciate the reminder of how gratifying it can be to build tools for yourself.
I read Swyx’s article Learn in Public today and it’s inspired me to open source most of my projects on GitHub.

A beautifully written and thought-provoking piece by Henrik about world models, exploring vs. exploiting in life, among other things.
I finally had a chance to use GitHub Copilot Chat in VS Code. It has a function to chat inline like Cursor, which has worked quite well in my initial use of it. I’m looking forward to using this more. Unfortunately, it’s not available for all IDEs yet but hopefully will be soon!

I watched lesson 3 of the FastAI course. I’ve really enjoyed Jeremy Howard’s lectures so far.
I looked into 11ty today to see if it could be worth migrating away from Hugo, which is how (at the time of this post) I build my blog. After a bit of research and browsing, I set up this template and copied over some posts. Some of my older posts were using Hugo’s markup for syntax highlighting. I converted these to standard markdown code fences (which was worthwhile regardless). I also needed to adjust linking between posts.
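For context, the conversion was roughly from Hugo's highlight shortcode to a plain fenced code block, something like this (the exact syntax in my old posts may have varied):

````text
Before (Hugo highlight shortcode):

{{< highlight python >}}
print("hello")
{{< /highlight >}}

After (standard markdown fence):

```python
print("hello")
```
````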
I would love it if OpenAI added support for presetting a max_tokens URL parameter in the Playground. Something as simple as this:

https://platform.openai.com/playground?mode=chat&model=gpt-4-1106-preview&max_tokens=1024

My most common workflow (mistake):

1. Press my hotkey to open the Playground
2. Type in a prompt
3. Submit with cmd+enter
4. Cancel the request
5. Increase the “Maximum Length” to something that won’t get truncated
6. Submit the request again

2023-11-25

A thoroughly enjoyable and inspiring read by Omar about his 20-year journey to date.

> Quantity was important. Quantity led to emergent of quality.

> Read the documentation: I can’t emphasize how useful this is. There are gems upon gems in the documentation. A good documentation gives a glimpse of the mind of the authors, and a glimpse of their experience.