Apparently that's Codium, who have recently renamed themselves to Qodo: https://www.qodo.ai/blog/introducing-qodo-a-new-name-the-sam... (TIL)
It seems like their new name is literally "Qodo (formerly Codium)", parenthetical included. At first I thought they were just including it in the blog post for clarity, but they write it out that way a dozen times. It's also part of their new logo and the site title.
I've never seen anything like that before. It feels like a search+replace operation gone awry.
Why is "Qodo" "better" than "Codium"?
Not sure; maybe they felt SEO results would rank VSCodium over it?
No, there's another Series B startup, Codeium, that they get confused with all the time. We talked to both:
- https://latent.space/p/codium-agents
- https://latent.space/p/varun-mohan
Huh, look at that. I completely mixed them up too. Thankfully they have an article which "helpfully" clarifies things:
> We’ve noticed that there’s been some confusion between our company, Codium (now Qodo (formerly Codium)), and another company, Codeium.
https://www.qodo.ai/blog/codiumai-or-codeium-which-are-you-l...
"How to Distinguish Between Codium (now Qodo (formerly Codium)) and Codeium"
dear god
Hey, X (formerly Twitter) is clearly doing great and was a good branding change that makes sense and is reasonable.
They should have bought the Quibi name.
Maybe it’s a joke? Like “X, formerly Twitter” or “Ye, formerly Kanye”, or even “the artist formerly known as Prince”.
SEO reasons maybe?
I guess this is as good a place as any to ask -- what's everyone's favorite AI code assist tool?
Cursor. It's the first one I've tried that seems like more than a neat demo.
But I'm weird: I usually disable tab completion. I find that having generations pop up while I'm typing slows me down; I have to read them and think about them, and it feels like it's giving me ADD. So I've always been kind of a Copilot hater. Lots of people find that style more productive, and a fancy version of it is on by default in Cursor. However, Cursor implemented a bunch of different interfaces well, not just the Copilot one, and I personally find the chat window in the editor a huge productivity win for churning out boilerplate or refactors. There are a lot of one-off refactors annoying enough that I wouldn't want to dedicate an afternoon to them, but now they take me just a few minutes of reviewing AI changes.
Exactly why I never went with Copilot; I got a ChatGPT subscription instead and prompt for what I need.
I do sort of regret it, too; sometimes you just want to give more context, and it's a hassle at that point to figure out what you need to paste so the model has adequate context to generate something valid. Also, Claude is orders of magnitude better than anything from ChatGPT. Both are terrible at implementing abstract, completely novel code blocks, but ChatGPT is significantly more "markov-y" when generating any code. When Claude gets things wrong, it feels like a more human mistake.
Anyway, with 50% of HN obsessing over Cursor, is it worth it? I couldn't get it to open projects I have in WSL2, and I kind of gave up at that point. I've gotten far with Claude's free tier, and $20 just for Cursor seems steep for something that's not as stable.
Have you assessed Zed's autocompletion, or read about others' experience with it? Zed seems to have a more stable foundation than any of these VSCode forks.
My trajectory was Sublime -> VSCode -> Cursor. I tried Zed, but I didn't need any of the collaboration features, didn't notice any speed increase over normal VSCode usage, and was generally just less productive than in VSCode with all the extensions I had configured. Cursor imported all of those, and my keyboard-shortcut muscle memory still worked right after install. For me, at least, moving over was completely seamless.
Like you, I started out using LLMs in a chat window, copy-pasting code back and forth (though I think Claude 3.5 Sonnet was the first model that felt worth the hassle). The Cursor workflow basically indexes all the files in your codebase to figure out what to copy over to the LLM, then scrapes the LLM's output to figure out where to paste it back into your code. It works maybe 95% of the time, which is way more than I was expecting, and falls back to the manual copy-paste workflow pretty easily. It also comes at a time when models are good enough to be handed gobs of context and trusted with more than a handful of lines at a time (Claude 3.5 Sonnet really shines here).
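To make that loop concrete, here's a toy Python sketch of the index -> prompt -> scrape -> paste cycle. Everything in it is illustrative: the keyword-overlap "retrieval", the file names, and the canned model reply are stand-ins, not how Cursor actually works (real tools use embedding search and a real model call).

```python
import re

def rank_files(codebase: dict[str, str], query: str) -> list[str]:
    """Rank files by naive keyword overlap with the user's request."""
    words = set(re.findall(r"\w+", query.lower()))
    def score(text: str) -> int:
        return len(words & set(re.findall(r"\w+", text.lower())))
    return sorted(codebase, key=lambda name: score(codebase[name]), reverse=True)

def build_prompt(codebase: dict[str, str], query: str, top_k: int = 2) -> str:
    """Copy the top-k most relevant files into the prompt as context."""
    picked = rank_files(codebase, query)[:top_k]
    context = "\n\n".join(f"# {name}\n{codebase[name]}" for name in picked)
    return f"{context}\n\nTask: {query}"

def extract_code(llm_reply: str) -> str:
    """Scrape the first fenced code block out of the model's reply."""
    match = re.search(r"```(?:\w+)?\n(.*?)```", llm_reply, re.DOTALL)
    return match.group(1) if match else llm_reply

# Hypothetical two-file codebase with a deliberate bug to "fix".
codebase = {
    "math_utils.py": "def add(a, b):\n    return a - b  # bug\n",
    "readme.md": "Utilities for arithmetic.\n",
}
prompt = build_prompt(codebase, "fix the add function in math_utils")
# A canned reply standing in for the real LLM call.
canned_reply = "Sure:\n```python\ndef add(a, b):\n    return a + b\n```"
codebase["math_utils.py"] = extract_code(canned_reply)  # the "paste back" step
```

The fallback behavior the comment mentions falls out naturally: when no fenced block is found, `extract_code` just hands you the raw reply to paste manually.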
The whole experience just feels very polished and well thought out. One great (not especially well-documented?) feature is the .cursorrules file, which gets invisibly pasted into every context window. There you can say things like "use double quotes in JavaScript" or "prefer functional paradigms and always include type annotations in Python", to avoid having to make those edits yourself when the LLM fails to pick up consistency and style from the surrounding code. You can commit this file so teammates get the same part of the prompt, and everyone's completions improve in consistency and style.
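For reference, a .cursorrules file is just free-form text committed at the repo root; the specific rules below are made-up examples, not defaults:

```
Use double quotes for strings in JavaScript.
Prefer functional paradigms in Python and always include type annotations.
Match the surrounding code's naming conventions and formatting.
```

Since the file is plain prose fed into the prompt, anything you could say in chat works here too.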
It's difficult to overstate how nice it feels to have all that copy-pasting automated and a keyboard shortcut away; it really is much better than I expected. So yes, I would try Cursor.
Cursor has been my favorite so far, though I've never tried Codium. Copilot was the winner before, but honestly it's just tab completion. I tried JetBrains' assistant, but it felt janky and slow. Cursor's tab completion feels nicer: it's super fast and will suggest updates based on recent code changes. I like being able to quickly get it to write some code updates, which it returns as green/red lines like a GitHub PR. The flow is really nice for me, and I'm looking forward to the future.
I am about 70% through my first paid month with Cursor.
I should say I had budgeted to pay $20 for something AI right now no matter what.
Cursor is worth it, but I should have had a specific project in mind when I subscribed. I don't think I'll get my money's worth this month, because of how generous Claude's free tier is. I do like just asking the web UI random things sometimes.
At one point a few weeks back I had o1, paid Cursor, and Claude's free tier all at the same time. I know I'm not subscribing to ChatGPT again until the non-preview o1 is out.
I do like the Cursor UI, but there's something about copy/paste with a chatbot that I also like in conjunction with non-AI VSCode or Jupyter. It's like the difference between oil paints and pastels; both probably have their place in the tool belt.
Same question, but for VSCode plugins. Besides Copilot, what is everyone using? Claude support is a huge plus.
Emacs.
Zed: open source, written in Rust, and hence extremely fast. I've always been a JetBrains user, and every time I tried VSCode it never stuck. For people like that, Zed is great, since you can configure it to use JetBrains keymaps. You can also get it to work with open LLMs via Ollama, Groq, or Cerebras; the latter two require an unpublished hack. Here's my config, thank me later :)
https://gist.github.com/pchalasani/9e71c58d2f846412b253ae0ec...
Recent: https://news.ycombinator.com/item?id=41819039
I don't use AI much but I do have CopilotChat.nvim for when I do need it. o1 and Claude support was just added yesterday. (Disclaimer: I am one of the maintainers)
Zed's integrated tools have been more than enough for me.
Cursor.
All in on tab completion and its other UI/UX advances (generate, chat, composer, ...)
I've been loving Claude Sonnet for python
Cody
Aider AI.
Cursor.
aider-chat
I use Tabnine. It supports many models, including Claude, and I find the output better than Copilot's. My IDEs are from JetBrains, and I work mainly in Python and PHP.
I tried generating the same test with all 5 models in Qodo Gen.
o1 is very slow - like, you can go get a coffee while it generates a single test (if it doesn't time out in the middle).
o1-mini, though, worked really well. It generated a good test and wasn't noticeably slower than the other models.
My feeling is that o1-mini will end up being more useful for coding than o1, except maybe for some specific cases where you need very deep analysis.
How well did it work for generating tests? I was looking for an AI test generation tool yesterday and I came across this and it wasn't clear how good it is.
(Before I get a bunch of comments about not letting AI write tests: this is for a hobby side project that I have a few hours a week to work on. I'm looking into AI test generation because the alternative is no tests.)
How is that free ???
Presumably "free" refers to users on their free plan, which does not include code generation/autocomplete except for tests.
I want to bring my own API keys, not pay $20 a month for another subscription.