I’ve been integrating Copilot into my workflow over the past few days. From my understanding, it uses OpenAI’s Codex model, which is part of the GPT-3 model series and, I believe, predates the chat models, gpt-3.5-turbo and gpt-4. As someone who has been using Cursor for my personal work for several months now, the core completion functionality of Copilot feels like a step back compared to Cursor’s cmd+k (to say nothing of chat, which both tools offer, or their other features). In general, when writing code, I’ve found I don’t really want line completion; I want idea completion. Copilot can do this, but you have to set it up in the right way, with a comment or a well-named function stub:

# sort list of tuples by second element

or

def sort_list_by_second_element(l):

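When the setup lands, Copilot fills in the body from there. A plausible completion, as a sketch (the exact output varies from run to run, but something like this one-liner is typical):

def sort_list_by_second_element(l):
    # sort in ascending order by the second element of each tuple
    return sorted(l, key=lambda t: t[1])
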
This doesn’t always work, and when it fails it’s hard to say whether the fault lies with the API or with the model’s capabilities. With Cursor, you just type what you want into the prompt box:

sort list of tuples by second element

It’s basically the same input, but you’re guaranteed a completion because you asked for one. This has come up for me several times with Copilot: I want to invoke it, not just have it step in when it thinks it’s useful.

Additionally, Cursor allows you to select specific blocks of code to modify or augment based on your prompt. It will also walk through an entire file, making minor modifications to existing code in place. This capability seems beyond what Copilot currently offers, which limits its usefulness for modifying existing code.
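For example, selecting the function above and prompting something like “add type hints and a docstring” yields in-place edits along these lines (an illustrative sketch of the workflow, not Cursor’s literal output):

def sort_list_by_second_element(l: list[tuple]) -> list[tuple]:
    """Sort a list of tuples by each tuple's second element."""
    return sorted(l, key=lambda t: t[1])
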

GitHub/Microsoft have an advantage with enterprise clients, but I believe they’re already lagging behind the best-in-class language-model development tooling. That said, I expect they’ll observe the shifting landscape and begin adding the more popular features to Copilot. They have a lot of resources to throw at the problem.