I enjoyed Martin’s article on preserving your shell history. I implemented some of his approaches in my system config.
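For reference, this is roughly the shape of what ended up in my zsh config; a minimal sketch, assuming zsh, and not necessarily the exact options Martin recommends:

```sh
# ~/.zshrc -- keep a long, timestamped, deduplicated history
HISTFILE=~/.zsh_history
HISTSIZE=1000000              # lines kept in memory
SAVEHIST=1000000              # lines kept on disk
setopt EXTENDED_HISTORY       # record timestamp and duration for each command
setopt INC_APPEND_HISTORY     # write commands as they run, not on shell exit
setopt HIST_IGNORE_ALL_DUPS   # drop older duplicate entries
setopt HIST_IGNORE_SPACE      # skip commands that start with a space
```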
Gemini 1.5 Pro up and running. I’ve said this before but I will say it again – the fact that I don’t need to deal with GCP to use Google models gives me joy.
❯ llm -m gemini-1.5-pro-latest "who is the fastest man in the world?"
As of November 2023, **Usain Bolt** is still considered the fastest man in the world. He holds the world record in the 100 meters with a time of 9.58 seconds, set in 2009. He also holds the record for the 200 meters at 19.19 seconds, achieved in 2009 as well.
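Getting it running was just a plugin install and an API key; a rough sketch, assuming the llm-gemini plugin and that it stores its key under the name `gemini`:

```sh
llm install llm-gemini
llm keys set gemini   # paste in an API key from Google AI Studio
```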
Having all these models readily available is great. My hope is to play around with several to become a bit of an amateur model sommelier.
Today, I learned about the Command-R model series from Cohere via Shawn’s great AI newsletter (ainews).
I searched to see if a plugin was available for llm and Simon had literally authored one 8(!) hours earlier. Folks like you keep me inspired and motivated.
No better workflow out there that I know of:
llm install llm-command-r
llm -m r-plus hello
Error: No key found - add one using 'llm keys set cohere' or set the COHERE_API_KEY environment variable
llm keys set cohere
Enter key: ...
llm -m r-plus "hi, who am I speaking with?"
You are speaking with Coral, an AI chatbot trained to assist users by providing thorough responses. How can I help you today?
A great article by Manuel about the forever-growth of companies. I too wish we were more willing to celebrate “enough.”
I’ve been digging more into evals.
I wrote a simple Claude completion function in openai/evals to better understand how the different pieces fit together.
Quick and dirty code:
from anthropic import NOT_GIVEN, Anthropic

from evals.api import CompletionFn, CompletionResult
from evals.prompt.base import is_chat_prompt


class ClaudeChatCompletionResult(CompletionResult):
    def __init__(self, response) -> None:
        self.response = response

    def get_completions(self) -> list[str]:
        return [self.response.strip()]


class ClaudeChatCompletionFn(CompletionFn):
    def __init__(self, **kwargs) -> None:
        self.client = Anthropic()

    def __call__(self, prompt, **kwargs) -> ClaudeChatCompletionResult:
        system_prompt = None
        if is_chat_prompt(prompt):
            messages = prompt
            # The Anthropic API takes the system prompt as a separate argument,
            # so pull it out of the chat messages if one is present.
            system_prompt = next((p for p in messages if p.get("role") == "system"), None)
            if system_prompt:
                messages.remove(system_prompt)
        else:
            # I think there is a util function to do this already
            messages = [
                {
                    "role": "user",
                    "content": prompt,
                }
            ]
        message = self.client.messages.create(
            max_tokens=1024,
            # NOT_GIVEN omits the field entirely when there is no system prompt
            system=system_prompt["content"] if system_prompt else NOT_GIVEN,
            messages=messages,
            model="claude-3-opus-20240229",
        )
        return ClaudeChatCompletionResult(message.content[0].text)
Registered as a completion function in the evals registry YAML:

claude/claude-3-opus:
  class: evals.completion_fns.claude:ClaudeChatCompletionFn
  args:
    completion_fn: claude-3-opus
Run with the standard oaieval entry point.
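Something like this should do it; `test-match` is just a stand-in for whichever registered eval you want to run, and the YAML above has to live somewhere the evals registry picks up:

```sh
oaieval claude/claude-3-opus test-match
```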
I can’t believe I am saying this, but if you play around with language models locally, a 1 TB drive might not be big enough for very long.
As someone learning to draw, I really enjoyed this article: https://maggieappleton.com/still-cant-draw. I’ve watched the first three videos in this playlist so far and have been sketching random objects from around the house. I find that I’m not too big of a fan of my drawing as I’m doing it, but when I return to it later, I seem to like it more. Apparently, this is a common experience for creatives.
Added GoatCounter to my site. I’m planning to see how I like it compared to Posthog.
I’m taking a break from sketchybar for now.
I’m currently looking into building a natural-language-to-SQL plugin or addition to datasette that uses a language model to write queries.
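The core idea is easy to sketch from the command line with llm; the schema and question below are made up, and a real plugin would pull the schema straight from the database and validate the generated SQL before running it:

```sh
# Hypothetical example: hand the model the schema plus a question,
# and ask for nothing but a SQLite query back.
llm -m gemini-1.5-pro-latest "Schema:
CREATE TABLE plants (id INTEGER PRIMARY KEY, name TEXT, last_watered DATE);

Question: which plants have not been watered in the past week?

Respond with a single SQLite SELECT query and nothing else."
```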
🤖 Connections (claude-3-opus)
Puzzle #287
🟩🟩🟩🟩
🟨🟨🟨🟨
🟦🟦🟦🟦
🟪🟪🟪🟪
I got this result twice in a row.
gpt-4 couldn’t solve it.
Here is one attempt.
🤖 Connections (gpt-4)
Puzzle #287
🟩🟪🟩🟩
🟩🟩🟩🟩
🟦🟨🟦🟨
🟦🟨🟦🟨
🟦🟨🟦🟦
I tried https://echochess.com/. Kind of fun.
I remember when my high school teachers used to tell us Wikipedia wasn’t a legitimate source. It sort of feels like education is having this type of moment now with language models. Professionals are using this technology today to do work in the real world. Learn how the technology works, teach it, and teach with it.
One of the greatest misconceptions concerning LLMs is the idea that they are easy to use. They really aren’t: getting great results out of them requires a great deal of experience and hard-fought intuition, combined with deep domain knowledge of the problem you are applying them to.
The whole “LLMs are useful” section hits for me. I have an experience similar to Simon’s and I also wouldn’t claim LLMs are without issue or controversy. For me they are unquestionably useful. They help me more quickly and effectively get ideas out of my head and into the real world. They help me learn more quickly and solve problems and answer questions as I’m learning. They increase my capabilities as an individual in the same sort of way getting access to Google for the first time did. These days, not having access to a language model makes me feel like I’ve had an essential tool taken away from me, like not having a calculator or documentation.