I’ve said this before, but as the feature set of LLM APIs grows, it gets harder and harder not to just pick one provider and use their SDK for your use case. Between tool calling, thinking, streaming, citations, PDF parsing, and more, there is no consistency in API design across providers.
This feels like OpenRouter’s moat. They figure out how to abstract this as much as possible across model APIs, and then I just use them. LiteLLM does this as well, though I need to experiment with it more.
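To make the inconsistency concrete, here is a toy sketch (not any real SDK, and the model names are just examples) of the kind of per-provider request mapping a layer like OpenRouter or LiteLLM has to maintain. Even a plain "system prompt plus user message" call is shaped differently between OpenAI and Anthropic:

```python
# Toy illustration of per-provider payload divergence. This is not a real
# SDK; it just shows why a unified abstraction layer is valuable.

def build_request(provider: str, system: str, user: str, max_tokens: int) -> dict:
    """Map one unified call onto each provider's native request shape."""
    if provider == "openai":
        # OpenAI: the system prompt rides inside the messages list.
        return {
            "model": "gpt-4o",  # example model name
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
            "max_tokens": max_tokens,
        }
    if provider == "anthropic":
        # Anthropic: the system prompt is a top-level field,
        # and max_tokens is a required parameter.
        return {
            "model": "claude-3-5-sonnet-latest",  # example model name
            "system": system,
            "max_tokens": max_tokens,
            "messages": [{"role": "user", "content": user}],
        }
    raise ValueError(f"unknown provider: {provider}")
```

And that is just the request body for the simplest case; multiply it by tool calling, streaming, citations, and document parsing, and maintaining these mappings yourself stops being worth it.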