2023-09-11

I read a quote from a long tweet the other day that made me smile: “Writing pure JavaScript is like trying to cut a watermelon with a chainsaw in the dark. It sounds fun and free and quite easy until there’s a roomful of mess to clean up.” https://twitter.com/kettanaito/status/1699440414812504443
It’s much easier to test a Temporal Workflow in Python by invoking the contents of the individual Activities first, in the shell or via a separate script, and then composing them into a Workflow (a sketch of this follows below). I need to see if there’s a better way to surface exceptions and failures through Temporal directly to make the feedback loop faster.

From this paper: “62% of the generated code contains API misuses, which would cause unexpected consequences if the code is introduced into real-world software.”
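A minimal sketch of that Activity-first flow with the Temporal Python SDK — the `summarize` Activity and `SummarizeWorkflow` are hypothetical stand-ins, not code from a real project:

```python
import asyncio
from datetime import timedelta

from temporalio import activity, workflow


# A plain async function marked as an Activity. Because it is still just a function
# (as long as it doesn't touch the Activity context), it can be invoked directly
# from a shell or a one-off script before being wired into a Workflow.
@activity.defn
async def summarize(text: str) -> str:
    return text[:100]  # stand-in for the real work


@workflow.defn
class SummarizeWorkflow:
    @workflow.run
    async def run(self, text: str) -> str:
        # Compose the already-verified Activity into a Workflow.
        return await workflow.execute_activity(
            summarize,
            text,
            start_to_close_timeout=timedelta(seconds=30),
        )


if __name__ == "__main__":
    # Fast feedback loop: exercise the Activity logic directly, no server or worker needed.
    print(asyncio.run(summarize("some input text")))
```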
Language models and prompts are magic in a world of deterministic software. As prompts change and use cases evolve, it can be difficult to maintain confidence in the output of a model. Building a library of example inputs for your model+prompt combination, with annotated outputs, is critical to evolving the prompt in a controlled way, ensuring outcomes don’t drift or regress as you try to improve overall performance.
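One way this might look in practice — `run_prompt` and `examples.json` are hypothetical stand-ins for your own model call and annotated example set:

```python
import json


def run_prompt(prompt_template: str, input_text: str) -> str:
    """Hypothetical hook that applies the prompt template to the input and calls your model."""
    raise NotImplementedError


def evaluate(prompt_template: str, examples_path: str = "examples.json") -> float:
    """Score a prompt against a library of annotated input/expected-output pairs."""
    with open(examples_path) as f:
        examples = json.load(f)  # e.g. [{"input": "...", "expected": "..."}, ...]

    passed = 0
    for example in examples:
        output = run_prompt(prompt_template, example["input"])
        if output.strip() == example["expected"].strip():
            passed += 1
    return passed / len(examples)
```

Running this before and after each prompt change gives a simple regression signal; the exact-match comparison can be loosened (substring checks, model-graded scoring) as the annotations demand.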
I’ve been doing a bit of work with Temporal using its Python SDK. Temporal remains one of my favorite pieces of technology to work with. The team is very thoughtful with their API design, and it provides a clean abstraction for building distributed, resilient workflows. It’s a piece of technology that is difficult to understand until you build with it, and once you do, you find applications for it everywhere you look.

2023-08-07

🎧 Velocity over everything: How Ramp became the fastest-growing SaaS startup of all time | Geoff Charles (VP of Product). This conversation between Lenny and Geoff was particularly noteworthy for me because it hit on so much of what I’ve seen in the most effective organizations and teams I’ve been a part of, and on realigning incentives to solve a number of problems I’ve experienced that hold teams back.
Simon wrote an excellent post on the current state of the world in LLMs.

Twitter continues to talk about LK-99. It seems like an easy thing to root for, but it’s hard to tell exactly what is going on.
The high-order bit that changed in AI: "I'll give you 10X bigger computer"
- 10 years ago: I'm not immediately sure what to do with it
- Now: Not only do I know exactly what to do with it but I can predict the metrics I will achieve
Algorithmic progress was necessity, now bonus.
— Andrej Karpathy (@karpathy) August 3, 2023

Turning scaling into a systematic science is the biggest advance enabled by LLMs.
While not an entirely unique perspective, I believe Apple is one of the best-positioned companies to take advantage of the recent improvements in language models. I expect generic chatbots will continue to become commodities, whereas Apple will build a bespoke, multi-modal assistant with access to all your personal data, on device. This assistant will be able to do anything the phone can do (invoke functions/tools) as well as answer any question about your personal data (“show me photos from Christmas in 2018”).
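For concreteness, the function/tool-invocation piece usually amounts to the model emitting a structured call that the device executes locally — a hypothetical sketch, where `search_photos` and its arguments are invented purely for illustration:

```python
from datetime import date


# Hypothetical on-device tool; the name and parameters are invented for illustration.
def search_photos(start: date, end: date, query: str = "") -> list[str]:
    """Return identifiers of on-device photos matching a date range and text query."""
    return []  # stub: a real implementation would query the local photo library


# For "show me photos from Christmas in 2018", the model might emit a structured call
# like this, which a device-side runtime would dispatch to the function above.
tool_call = {
    "name": "search_photos",
    "arguments": {"start": "2018-12-20", "end": "2018-12-27", "query": "Christmas"},
}

if tool_call["name"] == "search_photos":
    args = tool_call["arguments"]
    photos = search_photos(
        date.fromisoformat(args["start"]),
        date.fromisoformat(args["end"]),
        args["query"],
    )
```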

2023-07-30

A heartwarming exchange: “Your project has a youthful optimism that I hope you won’t lose as you go. And in fact it might be the way to win in the long run.”