I wanted to stop the Obsidian editor cursor from blinking.
Something like VS Code’s:
{
"editor.cursorBlinking": "solid"
}
Some searching turned up a CSS-based option that solves this for Vim mode, but in insert mode the cursor still blinks.
Eventually, I came across a macOS-based approach to this issue on StackExchange, included here for convenience:
defaults write -g NSTextInsertionPointBlinkPeriod -float 10000
defaults write -g NSTextInsertionPointBlinkPeriodOn -float 10000
defaults write -g NSTextInsertionPointBlinkPeriodOff -float 10000
After running these commands, restart Obsidian and the cursor no longer blinks.
These configuration changes also disable cursor blinking in other applications, which for me is a welcome change.
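If you ever want blinking back, the same keys can be deleted to restore the system defaults (these are the standard inverse commands for defaults write):
defaults delete -g NSTextInsertionPointBlinkPeriod
defaults delete -g NSTextInsertionPointBlinkPeriodOn
defaults delete -g NSTextInsertionPointBlinkPeriodOff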
Cursor is a fork of VS Code that adds Cmd+K, which opens a text box for prompt-based text generation.
When I created this post, I first typed:
insert hugo yaml markdown frontmatter
In a few seconds, the editor output:
---
title: "Cursor Introduction"
date: 2023-08-12T20:00:00-04:00
draft: false
tags:
- cursor
- intro
---
This was almost exactly what I was looking for, except the date was not quite right, so I corrected it and accepted the generation.
Since I use VS Code, I felt at home immediately.
The only thing missing is my extensions, but extension installation works exactly the same way.
Cmd+L opens a ChatGPT-style chat in the right sidebar.
You can reference files within the chat interface with @, which seems to load them into the language model prompt as context.
I asked it to describe what this post was about.
The problem with long-running code in Next.js serverless functions
The current design paradigm in Next.js at the time of this writing is called the App Router.
Next.js and Vercel provide a simple mechanism for writing and deploying cloud functions that expose HTTP endpoints for your frontend site to call.
However, sometimes you want to do work asynchronously on the backend in a way that doesn’t block a frontend caller that needs to move on.
You could fire and forget the call from the frontend, but this is often not safe when running in a serverless environment.
The following approach uses two server-side API endpoints to run an asynchronous function from the perspective of the frontend caller.
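My read of the shape this takes, sketched below as App Router route handlers in TypeScript: the first endpoint dispatches a request to the second without awaiting its completion and responds to the frontend immediately, while the second endpoint does the slow work. The route paths, SITE_URL variable, and doLongRunningWork helper are illustrative assumptions, not from a particular codebase.

// app/api/start/route.ts (illustrative path)
export async function POST(request: Request): Promise<Response> {
  const payload = await request.json();
  // Dispatch the slow endpoint without awaiting it, so this handler can
  // respond right away. Whether the platform keeps the dispatched request
  // alive after this response returns depends on the runtime.
  fetch(`${process.env.SITE_URL}/api/work`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return Response.json({ status: "started" });
}

// app/api/work/route.ts (illustrative path)
declare function doLongRunningWork(payload: unknown): Promise<void>; // hypothetical helper
export async function POST(request: Request): Promise<Response> {
  const payload = await request.json();
  await doLongRunningWork(payload); // the slow part lives here
  return Response.json({ status: "done" });
}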
First attempt
I made an attempt to set up TypeChat to see what’s happening on the Node/TypeScript side of language model prompting.
I’m less familiar with TypeScript than Python, so I expected to learn some things during the setup.
The repo provides example projects, so I tried to pattern off of one of those to get the sentiment classifier example running.
I manage node with asdf. I’d like to do this with nix one day, but I’m not quite comfortable enough with it yet to prevent that from becoming its own rabbit hole.
I installed TypeScript globally (npm install -g typescript) to my asdf-managed version of node, then put the version I was using in .tool-versions in my project.
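For anyone retracing this, the asdf side is roughly the following; the node version shown is illustrative:
asdf plugin add nodejs
asdf install nodejs 20.5.0
asdf local nodejs 20.5.0   # writes the pin to .tool-versions in the project
npm install -g typescript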
I downloaded Warp today.
I’ve been using iTerm2 for years.
It’s worked well for me, but Warp came recommended, so I figured I should be willing to give something different a chance.
Warp looks like a pretty standard terminal except you need to sign in, as with most things SaaS these days.
It looks like the beta is free but there is a paid version for teams.
Warp treats “workflows” as first-class citizens of the terminal experience.
These occupy the left sidebar where files typically live in a text editor.
At first pass, workflows seem like aliases where the whole “formula” is visible in the terminal window when you invoke them, rather than requiring you to memorize your alias/function and its arguments.
Additionally, typing workflows: or w: in the prompt opens a workflow picker with fuzzy search and a preview of what the workflow runs.
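For reference, individual workflows are defined as YAML; the sketch below follows the format used in Warp’s public workflows repository, with an invented command and arguments:
name: Tail a log file
command: tail -f {{log_path}}
tags: ["logs"]
description: Follow a log file as it grows
arguments:
  - name: log_path
    description: Path of the log file to follow
    default_value: /var/log/system.log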
It comes with window splitting (like tmux) by default, and it was somehow already using my personal hotkeys. I’m not sure if this is a lucky coincidence or if it somehow loaded my iTerm2 settings.
By default, the PS1 is:
promptfoo is a JavaScript library and CLI for testing and evaluating LLM output quality.
It’s straightforward to install and get up and running quickly.
As a first experiment, I’ve used it to compare the output of three similar prompts that specify their output structure using different modes of schema definition.
To get started:
mkdir prompt_comparison
cd prompt_comparison
promptfoo init
The scaffold creates a prompts.txt file, and this is where I wrote a parameterized prompt to classify and extract data from a support message.
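The exact prompt isn’t reproduced here, but the general shape is plain text with {{variable}} placeholders that promptfoo fills in from each test case’s vars; something like:
Classify the following support message as one of: billing, bug, feature_request.
Also extract the customer’s name if one is present.
Respond as JSON with keys "category" and "customer_name".

Message: {{message}}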
To broaden my knowledge of nix, I’m working through an Overview of the Nix Language.
Most of the data types and structures are relatively self-explanatory in the context of modern programming languages.
Double single quotes create indented strings, which strip the leading whitespace common to every line.
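For example (a snippet of my own, not from the overview):
''
  first line
  second line
''
# evaluates to "first line\nsecond line\n"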
Functions are a bit unexpected visually, but simple enough with an accompanying explanation. For example, the following is a named function f with two arguments x and y (the body here is an illustrative sum):
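f = x: y: x + y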
To call the function, write f 1 4.
Calling the function with only a single argument returns a partially applied function.
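For instance, continuing the illustrative definition above:
g = f 1   # partial application; g still expects y
g 4       # evaluates to 5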
I started working through the Zero to Nix guide.
This is a light introduction that touches on a few of the command line tools that come with nix and how they can be used to build local and remote projects and enter developer environments.
While many of the examples are high-level concepts you’d probably apply when developing with nix, flake templates are one thing I could imagine returning to often.
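For example, scaffolding a project from the public flake registry’s templates looks like this (templates#trivial resolves to the trivial template in NixOS/templates):
nix flake init --template templates#trivial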
I’ve been following the “AI engineering framework” marvin for several months now.
In addition to openai_function_call, it’s currently one of my favorite abstractions built on top of a language model.
The docs are quite good, but as a quick demo, I’ve ported over a simplified version of an example from an earlier post, this time using marvin.
import json

import marvin
from marvin import ai_model
from pydantic import BaseModel
from typing import List

# use a model with a larger context window for long recipe text
marvin.settings.llm_model = "gpt-3.5-turbo-16k"


class Ingredient(BaseModel):
    name: str
    quantity: float
    unit: str


@ai_model
class Recipe(BaseModel):
    title: str
    description: str
    duration_minutes: int
    ingredients: List[Ingredient]
    steps: List[str]


# read the recipe from a text file
with open("content.txt", "r") as f:
    content = f.read()

# passing raw text to the @ai_model-decorated class asks the model to
# extract a structured Recipe instance from it
recipe = Recipe(content)
print(json.dumps(recipe.dict(), indent=2))
The result:
Go introduced modules several years ago as part of its dependency management system.
My Hugo site is still using git submodules to manage its theme.
I attempted to migrate to Go modules but eventually ran into a snag when trying to deploy the site.
To start, remove the submodule:
git submodule deinit --all
and then remove the themes folder.
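With the submodule deinitialized, the folder can be dropped from the index and working tree; assuming the theme was the only thing under themes/, something like:
git rm -r themes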
To finish the cleanup, remove the theme key from config.toml.