300 points by simonpure 9 months ago | 87 comments
shreezus 9 months ago
Keep in mind Cursor is just a fork of VSCode, with AI features that are pretty much just embedded extensions. Their product is great, but many users would prefer bringing their own key and selecting their own model providers.
kamaal 9 months ago
On the contrary. Most enterprise users prefer one package they can buy rather than buying the thing piecemeal.
A big reason VSCode won was that it provided a lot of things out of the box, saving users the round trip through the config-hell rabbit hole that modern vim/emacs ecosystems have become.
If you want to sell developer tooling products, provide as much as possible out of the box.
People want to solve their problems using your tool, not fix bugs or build missing features in your tool.
wyclif 9 months ago
That used to be a valid problem, but times have changed. For instance, Neovim now has things like kickstart.nvim and lazy.nvim that solve this problem. I've been test-driving LazyVim for the past month or so and I don't have to configure anything anymore because the defaults and plugins are sane choices.
kamaal 9 months ago
But the IDE ship sailed long ago. It's just that modern IDEs do a lot out of the box that simply isn't possible to configure quickly, or well enough, with external packages. Most of the time the packages don't work well with each other.
For example, Python formatters often interfere with vim's modal editing, etc.
With AI, the gap will widen even further.
shreezus 9 months ago
Microsoft has top-tier distribution advantages, plus they can build native integrations with GitHub/Azure etc. to take things to a new level - think one-click deployments built into VSCode.
In fact, given the rising popularity of Replit/Vercel/etc I wouldn't be surprised if Microsoft is cooking as we speak.
FridgeSeal 9 months ago
Most of the time they can deliver on...some fraction of some of those.
This is just the AI version of "oh they have Visual Studio + C# + Azure, C# can do FE + BE with ASP.net etc etc so why would anyone ever use anything else?"...and yet, here we are.
They'll deliver some janky combo of features, it'll be stapled on top of VS Code, it'll be about 65% as good as everything else, but you've got to be all-in on MS to really get the value out of it, which will be great for a few enterprise clients and middling to average for everyone else.
Open-source versions of this will be available soon enough, self-hosted models are only getting better, and many orgs are loath to spend any more than the absolute minimum on dev tools (why would they pay for fancy ML stuff that devs will want to run personal versions of anyway), so what's the real moat or value prop here?
rafaelmn 9 months ago
Any other third party needs to get vetted/trusted - I would be the first to doubt an AI startup.
kamaal 9 months ago
But also note, there is likely to be a situation like the early days of the internet, where people write lots of code, but much of it bug-ridden and unreadable.
It will take a few years for things to return to routine.
rafaelmn 9 months ago
I've had it disabled when switching between environments and sometimes not notice for a day - depending on what I'm doing.
GardenLetter27 9 months ago
skp1995 9 months ago
Using the LSP is not just the trivial task of grabbing definitions; there is also context management and the speed at which inference works. On top of that, we have to grab similar snippets from the surrounding code (open files) so we generate code that belongs in your codebase.
Lots of interesting challenges, but it's a really fun space to work in.
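To sketch what "grabbing similar snippets from open files" might look like in practice (purely illustrative; the names and the crude similarity heuristic here are invented, not the actual implementation):

    // Purely illustrative sketch; not the real editor/plugin code.
    interface OpenFile { path: string; text: string; }

    // Crude similarity: fraction of identifiers shared between two snippets.
    function similarity(a: string, b: string): number {
      const ids = (s: string) => new Set(s.match(/[A-Za-z_][A-Za-z0-9_]*/g) ?? []);
      const sa = ids(a), sb = ids(b);
      let shared = 0;
      for (const id of sa) if (sb.has(id)) shared++;
      return shared / Math.max(sa.size, 1);
    }

    // Pick the most relevant snippets from other open files, within a size budget,
    // formatted so they can be prepended to the completion prompt.
    function buildContext(openFiles: OpenFile[], aroundCursor: string, budgetChars = 4000): string {
      const chunks = openFiles.flatMap(f =>
        f.text.split(/\n{2,}/).map(snippet => ({ path: f.path, snippet }))
      );
      chunks.sort((x, y) => similarity(y.snippet, aroundCursor) - similarity(x.snippet, aroundCursor));

      let out = "";
      for (const c of chunks) {
        if (out.length + c.snippet.length > budgetChars) break;
        out += `// from ${c.path}\n${c.snippet}\n\n`;
      }
      return out;
    }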
thawab 9 months ago
ode 9 months ago
LoganDark 9 months ago
cellshade 9 months ago
thomashop 9 months ago
anonzzzies 9 months ago
Zion1745 9 months ago
jadbox 9 months ago
WhyNotHugo 9 months ago
Sounds to me like Rabbit R1. A company picks up existing open source tools, builds their own extension/UI on top, and ships it as something entirely new and novel. It'll grab a lot of attention short term, but others will quickly figure out how to make their own implementation that runs directly on the existing open source tools.
samstave 9 months ago
iansinnott 9 months ago
worldsayshi 9 months ago
Also there's https://github.com/yacineMTB/dingllm.nvim which seems promising but quite wip.
d4rkp4ttern 9 months ago
zed: https://zed.dev/
HN discussion from a few days ago (397 pts): https://news.ycombinator.com/item?id=41302782
westoncb 9 months ago
For me the collab with Anthropic mentioned is significant too—auspicious.
SirLordBoss 9 months ago
Can anyone chime in on whether using zed on wsl is viable, or loses all the speed benefits?
vunderba 9 months ago
FridgeSeal 9 months ago
It's all open source though, so you could probs verify easily enough.
divan 9 months ago
iansinnott 9 months ago
- Copilot Chat was just a chat sidebar last time I used it; you still had to manually apply any code suggestions. Cursor will apply changes for you. It's very helpful to have a diff of what the AI wants to change.
It's been a while since I've used Copilot though, so Copilot Chat might be more advanced than I'm remembering.
edit: formatting
divan 9 months ago
I test Copilot Workspace from time to time. It's still far from perfect, but it can already make large-scale codebase changes across multiple files in a repository. Ultimately that's what I want from an AI assistant on my machine - give it a prompt and see changes across the whole repo, not just the current file.
IanCal 9 months ago
For the autocomplete, in the same file. So proposing adding more logging when you add a few statements, changing an error check, adding something to the class def or constructor.
They do have a multi-file editing thing called "composer" I think, which I used to make larger changes to an app (e.g. add a new page that lists all the X, and it creates that and the links to it in the other pages).
You might also be interested in aider https://github.com/paul-gauthier/aider for larger changes.
divan 9 months ago
worldsayshi 9 months ago
Copilot is still surprisingly basic but I've heard rumours that they are working on a version with a lot more features?
thawab 9 months ago
shombaboor 9 months ago
anotherpaulg 9 months ago
armchairhacker 9 months ago
To me it feels like trying to explain something (for an LLM) is harder than writing the actual code. Either I know what I want to do, and describing things like iteration in English is more verbose than just writing it; or I don’t know what I want to do, but then can’t coherently explain it. This is related to the “rubber duck method”: trying to explain an idea actually makes one either properly understand it or find out it doesn’t make sense / isn’t worthwhile.
For people who experience the same, do tools like Cursor make you code faster? And how does the LLM handle things you overlook in the explanation: both things you overlooked in general, and things you thought were obvious or simply forgot to include? (Does it typically fill in the missing information correctly, incorrectly but it gets caught early, or incorrectly but convincing-enough that it gets overlooked as well, leading to wasted time spent debugging later?)
IanCal 9 months ago
In general, it's like autocomplete that understands what you're doing better. If I've added a couple of console.logs and I start writing another after some new variable has been set, it'll quickly complete that with the obvious thing to add. It'll also start guessing where to move the cursor next as an autocomplete action, so it'll quickly move me back and forth from adding a new var in a class to where I'm using it, for example.
As a quick example, I just added something to look up a value from a map, and the autocomplete suggestion was to properly get it from the map (after 'const thing = ' it added 'const thing = this.things.get(...)'), and then it added a check for whether there was a result, throwing an error if not.
It's not perfect. It's much better than I expected.
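To illustrate the shape of that completion (the map, key, and error message here are made up for illustration, not what the autocomplete actually produced):

    // Hypothetical illustration of the completed pattern: look up from a map,
    // check for a missing result, and throw if nothing was found.
    class ThingRegistry {
      private things = new Map<string, string>();

      lookup(key: string): string {
        const thing = this.things.get(key);
        if (thing === undefined) {
          throw new Error(`No thing found for key: ${key}`);
        }
        return thing;
      }
    }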
For larger work, I recently tried their multi-file editing. I am writing a small app to track bouldering attempts, and I don't know React or other things like that very well. I explained the basic setup I needed and it made it. "Let's add a problem page that lists all current problems", "each attempt needs a delete button", "I need it to scan QR codes", "Here's the error message". I mostly just wrote these things and clicked apply-all. I'm not explaining exactly how or what to do; if I were, I'd just do it myself.
I'm surprised at how much it gets right the first time. The only problem it got stuck on that would be non-obvious to a novice/non-developer was using "id" somewhere, which clashed with an expected use and caused a weird error. That's where experience helps, having caused very similar kinds of problems before.
Sometimes I think as programmers we like to think of ourselves as doing groundbreaking, brand-new work, but huge amounts of what we do are pretty obvious.
fred123 9 months ago
the_duke 9 months ago
In languages I know well, I use Copilot like a smart autocomplete. I already know what I want to write and just start typing. Copilot can usually infer very well what I'm going to write for a few lines of code, and it saves time.
In languages I don't know well, where I don't fully know the various standard library and dependency APIs, I write a quick explanation to get the basic code generated and then tweak manually.
tiffanyh 9 months ago
Curious to see how all this VC money flowing into editors ends up.
CuriouslyC 9 months ago
mhuffman 9 months ago
tymonPartyLate 9 months ago
d4rkp4ttern 9 months ago
https://plugins.jetbrains.com/plugin/9682-cody-ai-coding-ass...
bcjordan 9 months ago
0xCAP 9 months ago
yetone 9 months ago
I plan to abandon nui.nvim for implementing the UI (actually, we only use nui's Split now, so it's exceptionally simple to abandon). Regarding the tiktoken_core issue, everything we did was just to make installation easier for users. However, the problem you mentioned is indeed an issue. I plan to revert to our previous approach: only providing installation documentation for tiktoken_core instead of automatically installing it for users.
As for why avante.nvim must depend on tiktoken_core: it's because I've used the powerful prompt caching feature recently introduced by the Anthropic API. This feature can greatly help users save on tokens and significantly improve response speed. However, it requires relatively accurate token counts, since caching only takes effect for prompts longer than 1024 tokens; otherwise, adding the caching parameter will result in an error.
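Roughly, the request looks like this when caching is enabled (a TypeScript sketch based on Anthropic's prompt caching docs at the time; the crude token estimate stands in for what tiktoken_core computes, and the model/header values are the documented beta ones, not necessarily what avante.nvim sends):

    // Sketch only; avante.nvim itself is Lua, this just shows the API shape.
    const MIN_CACHEABLE_TOKENS = 1024;

    // Stand-in for tiktoken_core; a real implementation needs an actual tokenizer.
    function estimateTokens(text: string): number {
      return Math.ceil(text.length / 4);
    }

    async function callClaude(systemPrompt: string, userMessage: string) {
      // Only mark the prompt for caching when it is long enough to qualify.
      const cacheable = estimateTokens(systemPrompt) > MIN_CACHEABLE_TOKENS;

      const body = {
        model: "claude-3-5-sonnet-20240620",
        max_tokens: 1024,
        system: [
          {
            type: "text",
            text: systemPrompt,
            ...(cacheable ? { cache_control: { type: "ephemeral" } } : {}),
          },
        ],
        messages: [{ role: "user", content: userMessage }],
      };

      const res = await fetch("https://api.anthropic.com/v1/messages", {
        method: "POST",
        headers: {
          "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
          "anthropic-version": "2023-06-01",
          "anthropic-beta": "prompt-caching-2024-07-31", // beta header at launch
          "content-type": "application/json",
        },
        body: JSON.stringify(body),
      });
      return res.json();
    }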
gsuuon 9 months ago
[1] https://docs.anthropic.com/en/docs/build-with-claude/prompt-...
acheong08 9 months ago
Suggestion to the author: fork the repo and pin it to a hash.
leni536 9 months ago
No package signing, no audits, no curation. Just take over one popular vim package and you potentially gain access to a lot of dev departments.
yriveiro 9 months ago
It probably uses Plenary for I/O as well.
Not reinventing the wheel is a good thing; I don't see the problem with the dependencies.
ilrwbwrkhv 9 months ago
nsonha 9 months ago
It's one thing to have a preference and a sense of aesthetics; it's another to claim that those things are universally more usable. If not for components that were invented in VSCode (LSP), then no one would be using vim these days. There are plenty of hills to die on that are much more noble than "I like this editor".
nobleach 9 months ago
I hate to tell you, but Vim has always had a pretty strong user base. There are folks like me who used it before LSP and never had any interest in leaving. Now your statement might be more accurate if you said, "If it were not for LSP, no one would be leaving VSCode for NeoVim."
> There are plenty of hills to die on that's much more noble than "I like this editor"
I agree. Use whatever you want. But don't make misinformed statements about WHY folks choose something other than your choice.
adezxc 9 months ago
unsober_sailor 9 months ago
adezxc 9 months ago
fragmede 9 months ago
ilrwbwrkhv 9 months ago
worldsayshi 9 months ago
hztar 9 months ago
piperly 9 months ago
BaculumMeumEst 9 months ago
Open source tooling is always going to have a different focus: giving you a toolbox to assemble functionality yourself.
gsuuon 9 months ago
dheerkt 9 months ago
tyfon 9 months ago
calgoo 9 months ago
wey-gu 9 months ago
indigodaddy 9 months ago
robertinom 9 months ago
pajeets 9 months ago
kache_ 9 months ago
arendtio 9 months ago
schmeichel 9 months ago
benreesman 9 months ago
If Claude could do custom shit on rules_python, I’d marry it.
But it’s a fucking newb on hard stuff.
maleldil 9 months ago
benreesman 9 months ago
nsonha 9 months ago
rfoo 9 months ago
benreesman 9 months ago
nsonha 9 months ago
benreesman 9 months ago
yakorevivan 9 months ago
RMPR 9 months ago
sqs 9 months ago