I tried Townie. As has become tradition, I tried to build a writing editor for myself. Townie got a simple version of this working with the ability to send a highlighted selection of text to the backend and run it through a model along with a prompt. This experience was relatively basic, using a textarea and a popup. From here, I got Townie to add the ability to show diffs between the model proposal and original text. It was able to do this for the selected text using CSS in a straightforward manner. I wanted to support multiple line diffs and diffs across multiple sections of the file. I suggested we use an open source text editor. At this point, things started to break. The app stopped rendering and I wasn’t able to prompt it into resolving the issue. I did manage to get it to revert (fixing forward) to a state where the app rendered again. However, the LLM-completion hotkey was broken.
I’ve been trying out Cursor’s hyped composer mode with Sonnet.
I am a bit disappointed.
Maybe I shouldn’t be.
I think it’s not as good as I expected because I hold Cursor to a higher bar than the other developer tools out there.
It’s possible it’s over-hyped or that I am using it suboptimally.
But it’s more or less the same quality as most of the tools at the same level of abstraction, like aider, etc.
I am trying to create a multipane, React-based writing app.
It’s possible I need to provide a more detailed description than I have been giving so far.
However, my main complaint after running it is that I now have a ton of code that isn’t quite right, and I don’t know where or why it’s broken.
Now, I need to read all the code.
This approach is notably less productive than slowly building up an app with LLM-code generation, because after each generation I can test the new code and make sure it does what I intended (or write automated tests to do that).
The code I get out of Composer doesn’t do what I want, but the LLM doesn’t know why, either because my high level task is under-specified, it doesn’t have enough context, or the ask is too vague.
I don’t usually run into this issue when I use cmd+k.
Maybe I need to watch some videos of folks using it.
I tried out OpenRouter for the first time.
My struggles to find an API that hosted llama3.1-405B motivated me to try it out.
There are too many companies providing inference APIs to keep track of.
OpenRouter seems to be aiming to make all these available from a single place, sort of like AWS Bedrock, but not locked in cloud configuration purgatory.
The first thing I tried was playing a game of Connections with nousresearch/hermes-3-llama-3.1-405b.
It didn’t get any categories correct for the 2024-08-21 puzzle.
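For reference, OpenRouter exposes an OpenAI-compatible API, so the call looked roughly like the sketch below. The prompt is a stand-in rather than the actual 2024-08-21 puzzle, and it assumes OPENROUTER_API_KEY is set.

```python
# Sketch of a chat completion against OpenRouter's OpenAI-compatible API.
# The puzzle prompt is a placeholder, not the real Connections board.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="nousresearch/hermes-3-llama-3.1-405b",
    messages=[
        {
            "role": "user",
            "content": "Solve this Connections puzzle: group these 16 words into 4 categories of 4: ...",
        }
    ],
)
print(response.choices[0].message.content)
```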
OpenRouter’s app showcase list is an interesting window into how people are using models.
The dominant themes are
An interesting read about how the world works through an economic lens.
But what is success? You can quantify net worth, but can you quantify the good you have brought to others’ lives?
It is not all about the TAM monster–doing cool things that are NOT ECONOMICALLY VALUABLE, but ARTISTICALLY VALUABLE, is equally important.
I downloaded Pile, a journal app with a first-class language model integration and offline ollama support.
For personal data, running the model offline is a must for me.
I use DayOne sporadically, but I’m intrigued by the potential of a more conversational format as I write.
The concept of a journal writing partner appears to be capturing mindshare. I found another similar app called Mindsera today as well. I also learned about Lex, which puts collaborative and AI features at the heart of document authorship, a concept I played around with a bit in Write Partner.
I set up WezTerm and experimented a bit. It’s a nice terminal emulator. I like the built-in themes and Lua as a configuration language. These days, I largely rely on the Cursor integrated terminal. It’s not the greatest, but having cmd+k there is a bit of a killer feature.
I can't believe we're back to discussing LLMs' ability to reason. Where have you been these past two years? In a bunker? If you'd actually worked with LLMs during this time, you'd know by now that they're obviously pattern-matching machines. Try asking one to write incorrect… pic.twitter.com/KPcDCI2cjD
— Andriy Burkov (@burkov) August 18, 2024
I haven’t viewed the LLMs-can, LLMs-can’t discourse through this lens explicitly.
they’re obviously pattern-matching machines
I’m not sure if I understand at what point these are different things. Maybe it’s a consequence of how I learn, but I generally develop skills on the foundations of seeing and understanding how someone more skilled than myself solves a problem.
I tried to run florence-2 and colpali using the Huggingface serverless inference API.
Searching around, there seems to be pretty sparse support for image-text-to-text models.
On Github, I only found a few projects that even reference these types of models.
I didn’t really know what I was doing, so I copied the example code and then tried to use a model to augment it to call florence-2.
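The augmented example was roughly in the shape of the sketch below, using huggingface_hub’s InferenceClient. The model ID, task, and image URL are assumptions on my part, and florence-2 may not actually be served by the serverless endpoint at all.

```python
# Sketch of calling the Huggingface serverless inference API for florence-2.
# Assumes HF_TOKEN is set and that microsoft/Florence-2-large is available
# on the serverless endpoint (which may not actually be the case).
import os

from huggingface_hub import InferenceClient

client = InferenceClient(token=os.environ["HF_TOKEN"])

# florence-2 is an image-text-to-text model; image_to_text is the closest
# task the client exposes, so this is a best guess at the call shape.
result = client.image_to_text(
    "https://example.com/sample.jpg",  # placeholder image URL
    model="microsoft/Florence-2-large",
)
print(result)
```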
Initially, it seemed like it was working:
I’ve been doing some experimentation with smaller models and embeddings, including distilbert/distilbert-base-uncased-finetuned-sst-2-english and cardiffnlp/twitter-roberta-base-sentiment-latest as binary sentiment classifiers and google/vit-base-patch16-224 as an image classifier. Also GoogleNews-vectors-negative300 and fasttext-wiki-news-subwords-300 for embeddings to try and find semantically similar words and concepts.
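The experiments were roughly in the shape of this sketch; the example inputs are placeholders, and loading the fasttext vectors through gensim’s downloader is an assumption about the setup.

```python
# Sketch of the small-model experiments: sentiment, image classification,
# and word-embedding similarity. Inputs are placeholders.
from transformers import pipeline
import gensim.downloader as api

# Sentiment classifiers
for model_id in (
    "distilbert/distilbert-base-uncased-finetuned-sst-2-english",
    "cardiffnlp/twitter-roberta-base-sentiment-latest",
):
    clf = pipeline("sentiment-analysis", model=model_id)
    print(model_id, clf("I really enjoyed this writing app"))

# Image classification
image_clf = pipeline("image-classification", model="google/vit-base-patch16-224")
print(image_clf("photo.jpg"))  # placeholder image path

# Embeddings for finding semantically similar words
vectors = api.load("fasttext-wiki-news-subwords-300")
print(vectors.most_similar("journal", topn=5))
```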
I figured out the issue with adding mistral-large. After a bit of debugging, I realized by manually calling llm_mistral.refresh_models() that something was wrong with how I had added the secret on Modal. It turns out the environment variable name for the Mistral API key needed to be LLM_MISTRAL_KEY.
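A minimal sketch of the check, assuming the llm-mistral plugin is installed and the Modal secret is attached to the function’s environment:

```python
# Sketch: verify the Modal secret exposes the key under the name the
# llm-mistral plugin expects, then refresh the plugin's model list.
import os

import llm_mistral

# The plugin reads LLM_MISTRAL_KEY, not e.g. MISTRAL_API_KEY.
assert "LLM_MISTRAL_KEY" in os.environ, "secret must expose LLM_MISTRAL_KEY"

# Calling this manually is what surfaced the misconfiguration.
llm_mistral.refresh_models()
```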
I’m going to try and make a PR to the repo to document this behavior.
I’ve been trying to run models locally, specifically colpali and florence-2. This has not been easy. It’s possible these require GPUs and might not be macOS friendly. I’ve ended up deep in Github threads and dependency hell trying to get basic inference running. I might need to start with something simpler and smaller and build up from there.