53 points by cmpit 8 months ago | 53 comments
artemsokolov 8 months ago
1995: Using a Language with a Garbage Collector Will Make You a Bad Programmer
2024: Using AI Generated Code Will Make You a Bad Programmer
dwaltrip 8 months ago
I get where this is coming from and it is true sometimes (e.g. my favorite example is Google Maps). But it’s quite silly to assume this for all tools and all skill sets, especially with more creative and complex skills like programming.
Wise and experienced practitioners will stay grounded in the fundamentals while judiciously adding new tools to their kit. This requires experimentation and continual learning.
The people whose skills will be impacted the most are those who didn’t have strong fundamentals in the first place, and only know the craft through that tool.
Edit: forgive my frequent edits in the 10 minutes since initially posting
colincooke 8 months ago
Most of SWE (and much of engineering in general) is built on abstractions -- I use NumPy to do math for me, React to build a UI, or Moment to do date operations. All of these libraries offer abstractions that give me high leverage on a problem in a reliable way.
The issue with the current state of AI tools for code generation is that they don't offer a reliable abstraction, instead the abstraction is the prompt/context, and the reliability can vary quite a bit.
I would feel like one hand is tied behind my back without LLM tools (I use both Copilot and Gemini daily), however the amount of code I allow these tools to write _for_ me is quite limited. I use these tools to automate small snippets (Copilot) or help me ideate (Gemini). I wouldn't trust them to write more than a contained function as I don't trust that it'll do what I intend.
So while I think these tools are amazing for increasing productivity, I'm still skeptical of using them at scale to write reliable software, and I'm not sure if the path we are on with them is the right one to get there.
danielmarkbruce 8 months ago
SaucyWrong 8 months ago
danielmarkbruce 8 months ago
Yeah, they are generally probabilistic. That has nothing to do with abstraction. There are good abstractions built on top of probabilistic concepts, like rngs, crypto libraries etc.
morkalork 8 months ago
stonethrowaway 8 months ago
Both of these remain true to today, which is why we always interview people at one layer below the requirement of the job so they know what they’re doing.
Writing C/C++? Know what the output looks like. Using GC-based languages? Know the cleanup cycle (if any).
I would wager the third also holds true.
giraffe_lady 8 months ago
PeterStuer 8 months ago
- using a debugger will make you a bad programmer
- using an IDE will make you a bad programmer
- using Google will make you a bad programmer
- using StackOverflow will make you a bad programmer
Hint: It's not the tools, it's how you use them.
perihelion_zero 8 months ago
musicale 8 months ago
On the other hand, it's nice to not have to have people memorize every book in the library every generation.
m463 8 months ago
ted_bunny 8 months ago
CivBase 8 months ago
I can trust that a garbage collector will allocate and cleanup memory correctly.
I cannot trust that an AI will generate quality code. I have to review its output. As someone who has been stuck doing nothing but review other people's code for the last few months, I can confidently say it would take me less time to code the solution myself than to read, digest, provide feedback for, and review changes for someone else's code. If I cannot write the code myself, I cannot accurately review its output. If I can write the code myself, it would be faster (and more fulfilling) to do that than review output from an AI.
helf 8 months ago
luckman212 8 months ago
euroderf 8 months ago
rsynnott 8 months ago
If garbage collectors only did the correct thing 90% of the time, and non-deterministically did something stupid the other 10%, then, er, yeah, it very much would!
There's a reason that conservative GCs for C didn't _really_ catch on... (It would be unfair to describe them as being as broken as an LLM, but they certainly have their... downsides.)
namaria 8 months ago
shusaku 8 months ago
intelVISA 8 months ago
BillLucky 8 months ago
budududuroiu 8 months ago
The thing that bothers me is that your colleagues will use AI, your bosses will see it as progress, yet not realise the time saved now is going to be wasted down the road
Alifatisk 8 months ago
If it wasn't for that, I'd switch to Cursor or use Copilot in an instant. Because honestly, I've asked some AI tools like Claude for help a couple of times, for tasks that I know would normally need more than one person to complete, and with Claude I solved it in a couple of hours. Incredible stuff!
Also, if it wasn't obvious, I am not claiming that this is the case; these are just my feelings. I would love to be convinced otherwise, because then I might switch and try out the luxury QoL others are having.
viraptor 8 months ago
This is a reality for some of us already - but it's not about tools. I'm working with 5+ different languages across 20+ projects. I had to look up in the docs how to (for example) lowercase a string every single time, because every language has to invent their own name/case for it. Now I'm free - just write tolower() and it gets fixed. New string array - ["foo"] and it gets fixed. etc. etc.
There's a huge number of things that are not necessary to remember, but you just do if you see them consistently every day. But now I'm free. If I ever need to do them manually again, I'll check the docs. But today I'm enjoying the time saving.
throw16180339 8 months ago
1. Having it perform mechanical refactorings where there's no creativity involved. I'm hacking on a program that was written in the early 2000s. It predates language support for formatted IO. I had ChatGPT replace many manual string concatenations with the equivalent of sprintf. It's easy enough to test the replacements at the REPL.
2. Questions that would be unreasonable or impossibly tedious to ask a person.
Describe in detail the changes from language version X to language version Y.
Which functions in this module can be replaced by library functions or made tail recursive? This definitely misses things, but it's a good starting point for refactoring.
Is there a standard library equivalent of this function? I regularly ask it this, and have replaced a number of utility functions.
Give examples of how to use a given function.
rerdavies 8 months ago
It has fundamentally changed the way I write code. And I'm still exploring the boundaries of what kinds of tasks I can feed it. (45 year veteran senior programmer).
Sorry for the long post, but I find it difficult to briefly make the case for why Claude 3.5 Sonnet (and other similarly modern and capable AIs) is fundamentally different from smaller and older AIs when it comes to use as a coding assistant.
I do use it for simple tedious things like "Convert an ISO date string in GMT to a std::chrono::system_clock::time_point" (requires use of 3 generations of C/C++/POSIX library functions that would take about 15 minutes of wading through bad documentation and several false starts to get right).
But I have also had success with larger fragments ranging up to 100 or 200 lines of code. It still has distinct limitations (bizarre choices of functional composition, and an unpleasant predilection for hard-coded integer constants), which can be overcome with supplementary prompts. It seems to be brilliant tactically, and shows a terrifyingly broad knowledge of APIs and libraries across the three platforms I use (Android/JavaScript, TypeScript/React/MUI, C++/Linux). But it doesn't yet have a good sense of strategic coding (functional and class decomposition &c.), and usually requires three or four supplementary prompts to get code into a state that's ready to copy and paste (and refactor some more), e.g. "Wrap that up in a class; use CamelCase for class names, and camelCase for member names," &c. &c.
And I have also used it to help me find solutions to problems that I've spent months on ("android, java: unable to connect to an IoT wi-fi hotspot without internet access when the data connection is active"; Claude: "blah blah ... use connectivityManager.bindProcessToNetwork()"!!!).
Or "C++/linux/asound library: why does this {3000 lines of code} fail to reliably recover from audio underruns".
And I've had some success with "find the wild memory write in this 900 line code fragment". It doesn't always work, but I've had success with it often enough that I'm going to use it a lot more often.
And used it to write some substantial bash scripts that I just don't have the skills or literacy to deal with myself (long time Windows programmer, relative newcomer to linux).
sireat 8 months ago
That is, how do I get the tight integration that Copilot offers, but with Claude?
I've been using Github Copilot since the technical preview in mid 2021 and it too changed fundamentally how I write code. Perhaps I've gotten too used to it.
I find that regular LLM chat interfaces break the flow for me.
My usual use is to treat Copilot as a rubber ducky or an eager junior assistant of sorts. That is, I would write
//Converting an ISO date string in GMT to a std::chrono::systemclock::timepoint
and then it is Tab time. If the result is not so good, it means my requirements were not detailed enough. Rarely will it be completely unusable.
As a side effect I am forced to document more of my work, which is a good thing.
marginalia_nu 8 months ago
In a learning context, sure, you probably should not be using copilot or similar, the same way you shouldn't be using a calculator when doing your basic arithmetic homework.
Beyond that, this just seems like a classic scrub mentality hangup. If a tool is useful, you should use it where appropriate. You'd be a fool not to. If it's not useful, then don't use it.
Rzor 8 months ago
Buckle up, LLMs are here to stay and will likely continue improving for a while before they plateau.
BugsJustFindMe 8 months ago
If you're going to be eventually replaced, and I absolutely believe that even the best of us will, you may as well get in on the ground floor to extract value for a bit before that happens.
Not writing your own code doesn't need to mean turning your brain off. You still need to look at what came out, understand it, and spot where it didn't match your needs.
baw-bag 8 months ago
By that point, having never used any of the tools makes you almost no different from anyone off the street.
In a way I welcome it. Writing the same menial code as everyone else slightly differently becomes a pretty stale existence.
idopmstuff 8 months ago
But in all seriousness, these models are getting to the point where they're really useful for me to just build one-off tools for my own use or to prototype things to show other people what I'm looking for (like an interactive mockup). That's the power of turning a non-programmer into a bad programmer, and it's certainly worth something!
budududuroiu 8 months ago
Also exacerbates the problem of A teams that get assigned to greenfield work and B teams that thanklessly maintain and actually productionise said greenfield work
cranberryturkey 8 months ago
everforward 8 months ago
Most people write pull requests that are scoped too poorly to tell what they’re doing. Like I get a single function with unit tests, so the best I can do for a review is check whether there are any obvious missed edge cases for a function whose purpose I don’t understand.
On the review side, most people review by doing basically what a linter does. I joke with people that if they want to nitpick my variable names then I’ll start DMing them to ask what name they want every time I need a variable. A meaningful review would analyze whether abstractions are good, whether there is behavior that relies on an unspecified part of an abstraction (timing), etc. Nobody does those.
anonzzzies 8 months ago
stuckinhell 8 months ago
mewpmewp2 8 months ago
fragmede 8 months ago
souldeux 8 months ago
Koshkin 8 months ago
anonzzzies 8 months ago
Koshkin 8 months ago
ErikBjare 8 months ago
My experience has been quite the opposite: it speeds up my rate of work as I get answers faster, and thus gives me more learning opportunities in a workday.
chairmansteve 8 months ago
rerdavies 8 months ago
anarticle 8 months ago
More specifically, I think code quality is a luxury that not everyone has if you work for dumb corpos who think that moving the Gantt chart block left will speed up development.
The answer there is probably don't work for those people, but salaries cap out at some point and the allure of megacorps is there.
I'm a CS old head, who has manually allocated / managed memory, and built what would be considered stupid data structures to support scientific efforts.
For me, using AI and getting 0 to 1 experience in languages/frameworks I don't know is ultra. Combining those skills has made me some money in shipping small software, which has been fun.
JohnMakin 8 months ago
1) I do not believe AI will ever replace programming as a practice, because people will still need to read/review the code (and no, I don't personally believe LLM's are going to be able to do that themselves in the vast majority of cases)
2) while the "script kiddie" characterization is a bit of an unfair generalization, there is some truth to this. I disagree that using AI to generate code puts you in that realm automatically, but I have seen quite a few cases of this actually happening to give this point some merit.
3) Using AI generated code atrophies your skills no less than using someone's imported library/module/whatever. Yes, I probably couldn't write a really good merge sort in C off the top of my head anymore without thinking through it, but I don't really have to, because a bazillion people before me have solved that problem and created abstractions over it that I can use. It is not inherently bad to use other people's code, the entire software world is built on that principle. In fact, it's an extremely junior mindset (in my view) that all code you use must be written by your own hand.
4) "code being respected" is not really a metric I'd ever go for, and I'm not sure in my career so far I've ever seen someone push a big pull request and not have a bazillion nitpicky comments about it. Respecting other people's code doesn't seem to be very common in the industry. I struggle to think why I personally would even want that. Does it work? Is it readable/maintainable by someone other than me? Is it resilient to edge cases? If all yes, good, that is all I really care about.
5) > If you're someone who has no actual interest in learning to code, and instead see AI as more of a freelancer—telling it stuff like "make a kart racer game," and "that sucks, make it better"—then none of this really applies to you.
I mean, sure. I have very little interest or joy in "coding." I like building, and coding is a means to that end. Again, seems like a very junior mindset. I know people do find an enormous amount of joy in it for the sake of it, I am not one of those people, and that's fine. Usually it drives me to create better abstractions and automation so I don't have to write more code than I want to.
cheevly 8 months ago
howenterprisey 8 months ago
JohnMakin 8 months ago
mjtechguy 8 months ago
solsane 8 months ago
m2024 8 months ago