Genuinely an S-tier move by GitHub / MSFT. Cursor didn't really work out for me, and I've been happily using Claude Projects for most of my prototyping. I'm hoping I can use it directly in VS Code without pay-as-you-go pricing or a Claude API key.
Copilot has been so-so in my experience, but I still use it very often to infer TypeScript types automatically (the VS Code Cmd + I shortcut is excellent!).
What were your experiences with Cursor? I was thinking of switching but didn't see the point.
It's hard to explain, but over the last couple of months I've developed an intuitive sense of when Claude (the web version) starts to lose the context of a problem. When that happens, it's as simple as starting a new chat and building on top of that.
To me, it felt like Cursor blurred that line too much for my liking. The way I use LLMs is by giving them atomic, independent chunks of code that I need reviewed or refactored, which, in my experience, leads to far superior output. I'm sure there's some way to make that work with Cursor, but it just didn't click for me.
It sounds like we use LLMs similarly. I've been a bit worried I'm missing out on some of these more sophisticated tools, but I kinda figured it would go like this. Likewise, anything with 'copilot' in the name is fucking garbage.
In the future, we might not have to helicopter-parent the context window, but we definitely aren't there yet. I suspect nearly everyone who speaks poorly of LLMs simply hasn't figured this out yet.
I've created two simple scripts to help with this workflow, `pc` and `po`. `pc` accepts a list of files and concatenates sanitized versions into the clipboard buffer, each file enclosed in triple backticks and prefixed with its filename. `po` accepts a single file, reverses the sanitization, and displays a diff between the clipboard buffer and the specified file. It's not much, but it significantly accelerates this workflow for me.
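For illustration, a minimal sketch of the `pc` idea (not the exact script; the sanitization here is just a placeholder, and it assumes macOS's `pbcopy`):

```python
#!/usr/bin/env python3
"""Rough sketch of a `pc`-style script: concatenate sanitized files
into the clipboard, each fenced in backticks and prefixed with its name."""
import subprocess
import sys
from pathlib import Path


def sanitize(text: str) -> str:
    # Placeholder: just strip trailing whitespace. A real version
    # might redact internal hostnames, keys, or project names.
    return "\n".join(line.rstrip() for line in text.splitlines())


def main() -> None:
    chunks = []
    for name in sys.argv[1:]:
        body = sanitize(Path(name).read_text())
        # Each file: its name, then the contents in a fenced block.
        chunks.append(f"{name}\n```\n{body}\n```")
    # pbcopy is macOS-specific; xclip/xsel are the Linux equivalents.
    subprocess.run(["pbcopy"], input="\n\n".join(chunks).encode(), check=True)


if __name__ == "__main__":
    main()
```

`po` would then be roughly the inverse: `pbpaste`, undo the sanitization, and run `difflib.unified_diff` against the target file.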
If the whole world hasn't completely left me behind by next week, I might look into adding some sort of AST-based diff/merge to `po`. It'd be nice not to have to constantly remind the LLM to output only changed files, complete, with no omissions.
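For what it's worth, a first step in that direction could be as simple as comparing ASTs to detect whether the LLM changed anything beyond formatting (a sketch assuming Python sources; actual merging would be much more involved):

```python
import ast


def semantically_equal(old_src: str, new_src: str) -> bool:
    # Comments and whitespace never reach the AST, so two sources
    # whose ASTs dump identically differ only in formatting --
    # po could then skip showing a noisy diff for that file.
    return ast.dump(ast.parse(old_src)) == ast.dump(ast.parse(new_src))
```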
Cursor's AI is, in my opinion, superior. Copilot gets anything but the most basic things wrong or gives poor-practice advice, and sometimes it flubs even the basics. Cursor's _interface_, on the other hand, is not great: I found it clunky and too presumptuous (grabbing 'k' for all the shortcuts was a bad move).
So now that Claude is coming to Copilot, I don't think there's any reason to consider Cursor.
[dupe] More discussion: https://news.ycombinator.com/item?id=41985915
How much difference does this make to the code copilot generates?
Anecdotally, I've stopped using OpenAI entirely for coding activities in favor of Claude, due to its _vastly_ better performance on my coding tasks. This was before the recent update to Claude, and I haven't gotten a feel for how the new version is doing.
Also entirely anecdotally, the newer multi-modal features added to the OpenAI models _seem_ to have significantly degraded their other capabilities, especially coding in languages other than Python and TypeScript, and the models _seem_ more repetitive in their answers (more likely to get stuck repeating the same incorrect information even after a correction). This could absolutely be sampling or task bias, so your mileage may vary.
I've still found GitHub Copilot to be useful for VERY SHORT look-ahead/completion, but it has almost always assumed too much in the very wrong direction for more than about a line. I haven't tried the Claude version of Copilot, but I'm absolutely switching over to it.
> it has almost always assumed too much in the very wrong direction for more than about a line.
I have hope that `copilot-instructions.md`^1 will improve this!
^1 https://news.ycombinator.com/item?id=41987263
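If it works like other custom-instruction features, it's just a markdown file of plain-language guidance checked into the repo at `.github/copilot-instructions.md`. The contents below are only a guess at the kind of guidance you might put in it:

```markdown
<!-- .github/copilot-instructions.md -->
We use TypeScript with strict mode enabled; prefer explicit return types.
Keep suggestions short and local; do not restate unchanged code.
Follow the existing error-handling pattern in this repo (no bare throws).
```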
Awww, I got really excited, but it's limited to Copilot Chat.