Tried to join in on the Llama 3.1 405B hype using Groq but sadly, no dice:
```bash
curl -X POST https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.1-405b-reasoning",
    "messages": [
      { "role": "user", "content": "Hello, how are you?" }
    ]
  }'
```

```json
{"error":{"message":"The model `llama-3.1-405b-reasoning` does not exist or you do not have access to it.","type":"invalid_request_error","code":"model_not_found"}}
```

The queue to try it out in their chat is also quite long, so I guess either the infra needs to scale up or the hype needs to die down.

I've been wanting to create a chat component for this site for a while, because I really don't like quoting conversations and manually formatting them each time. When using a model playground, there is usually a code snippet option that generates Python code you can copy out into a script. Using that feature, I can now copy the message list and paste it as JSON into a Hugo shortcode and get results like this:

espanso

I tried out adding espanso to configure text expansions rather than using Alfred, just to try something new. This is the PR to add it to my Nix configurations. The existing examples are a toy configuration. The tool seems to support far more complex configuration that I still need to look into further.
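For reference, espanso matches are defined in YAML; a small sketch of the kinds of expansions it supports, based on the documented match syntax (the triggers and commands here are illustrative):

```yaml
# e.g. a match file like match/base.yml in the espanso config directory
matches:
  # static expansion
  - trigger: ":addr"
    replace: "123 Example Street"

  # dynamic expansion: insert today's date
  - trigger: ":date"
    replace: "{{today}}"
    vars:
      - name: today
        type: date
        params:
          format: "%Y-%m-%d"

  # dynamic expansion: splice in the output of a shell command
  - trigger: ":ip"
    replace: "{{output}}"
    vars:
      - name: output
        type: shell
        params:
          cmd: "curl -s ifconfig.me"
```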
gpt-4o-mini

People frame this like it's somehow a win over Llama, when in fact the goal of Llama has wildly succeeded: commoditize models and drive token cost to zero.

Incredible writing and insight by Linus in Synthesizer for thought. I will probably need to revisit this work several times.

How can I add videos to Google Gemini as context (is this even what their newest model is called anymore), and why is it so hard to figure out? https://gemini.google.com only lets me upload images. I assume I need to pay for something.
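If the API is an option, my understanding is that the File API in the google-generativeai Python SDK accepts video uploads; a minimal sketch of the documented flow, which I haven't verified end to end (the file name and prompt are placeholders):

```python
# pip install google-generativeai
import time

import google.generativeai as genai

genai.configure(api_key="GEMINI_API_KEY")

# Upload the video through the File API, then poll until processing finishes
video = genai.upload_file(path="clip.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

# Pass the uploaded file alongside a text prompt as context
model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([video, "Summarize what happens in this video."])
print(response.text)
```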
I played around with Cohere's chat. They support web search, a calculator, and a Python interpreter as tools, as well as files and an internet search connector.

Research and experimentation with models present different problems than I am used to dealing with on a daily basis. The structure of what you want to try out changes often, so I understand why some folks prefer to use notebooks. Personally, notebooks haven't caught on for me, so I'm still just writing scripts. Several times now, I've run a relatively lengthy (and expensive) batch of prompts through a model only to realize something about my setup wasn't quite right.

I spent some more time experimenting with thought partnership with language models. I've previously experimented with this idea when building write-partner. Referring back to that work, the prompts still seemed pretty effective for the goal at hand. My original idea was to incrementally construct and iterate on a document by having a conversation with a language model. A separate model would analyze that conversation and update the working draft of the document to include new information, thoughts, or insights from the conversation.
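A minimal sketch of that two-model loop, assuming the OpenAI Python SDK; the model names, prompts, and update strategy here are illustrative rather than write-partner's actual code:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()

UPDATE_PROMPT = (
    "You maintain the working draft of a document. Given the current draft and "
    "the latest conversation, return an updated draft that incorporates any new "
    "information, thoughts, or insights. Return only the draft text."
)

def chat_turn(history: list[dict]) -> str:
    """The conversation model: responds to the user as a thought partner."""
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    return resp.choices[0].message.content

def update_draft(draft: str, history: list[dict]) -> str:
    """The separate model: folds the conversation back into the draft."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": UPDATE_PROMPT},
            {"role": "user", "content": f"Draft:\n{draft}\n\nConversation:\n{transcript}"},
        ],
    )
    return resp.choices[0].message.content

draft = ""
history = [{"role": "user", "content": "Let's think through an essay on note-taking."}]
history.append({"role": "assistant", "content": chat_turn(history)})
draft = update_draft(draft, history)  # run after each exchange
print(draft)
```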
While I didn't have much success getting gpt-4o to perform Task 1: Counting Line Intersections from the Vision Language Models Are Blind paper, I pulled down some code and did a bit of testing with Claude 3.5 Sonnet. The paper reports the following success rates for Sonnet on this line intersection task:

| Thickness | Sonnet 3.5 |
| --------- | ---------- |
| 2         | 80.00      |
| 3         | 79.00      |
| 4         | 73.00      |
| Average   | 77.33      |

I used the code from the paper to generate 30 similar images of intersecting (or not) lines with line thickness 4.
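For a sense of what that generation involves, here is a rough sketch (not the paper's actual code) that draws two random segments at a given thickness and records whether they intersect:

```python
# pip install numpy matplotlib
import numpy as np
import matplotlib.pyplot as plt

def segments_intersect(p1, p2, p3, p4) -> bool:
    """Standard orientation test for whether segment p1-p2 crosses p3-p4."""
    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
    return ccw(p1, p3, p4) != ccw(p2, p3, p4) and ccw(p1, p2, p3) != ccw(p1, p2, p4)

rng = np.random.default_rng(0)
for i in range(30):
    pts = rng.uniform(0, 1, size=(4, 2))
    label = segments_intersect(*pts)
    fig, ax = plt.subplots(figsize=(3, 3))
    ax.plot(*zip(pts[0], pts[1]), linewidth=4, color="red")
    ax.plot(*zip(pts[2], pts[3]), linewidth=4, color="blue")
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.axis("off")
    # encode the ground-truth label in the file name
    fig.savefig(f"lines_{i:02d}_{int(label)}.png", dpi=100)
    plt.close(fig)
```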
> We probably are living in a simulation and we're probably about to create the next one.

Martin Casado
https://podcasts.apple.com/us/podcast/invest-like-the-best-with-patrick-oshaughnessy/id1154105909?i=1000661628717

VLMs Are Blind showed a number of interesting cases where vision language models fail to solve problems that humans can easily solve. I spent some time trying to build examples with additional context that could steer the model to correctly complete Task 1: Counting line intersections, but didn't have much success.
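Something like the following sketch illustrates the kind of steering I mean (assuming the OpenAI Python SDK; the hint wording and file name are illustrative):

```python
# pip install openai
import base64

from openai import OpenAI

client = OpenAI()

# One of the generated test images (file name is a placeholder)
with open("lines_00_1.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

# Extra context intended to steer the model toward counting correctly
HINT = (
    "The image contains exactly two line segments, one red and one blue. "
    "Trace each segment from end to end, then count how many times the two "
    "segments cross each other. Answer with a single number."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": HINT},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```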