I think those of us who have been coding for 20+ years can properly utilize AI coding assistants because we can usually understand the code it spits out pretty quickly and debug it if needed. It's just a skill that comes from writing code for so long.
I have seen many junior-level programmers struggle with AI coding assistants: they ask a question, get some code, paste it in, and realize it doesn't work, but they don't know why, or the right questions to ask to properly debug it and get it working.
Nothing wrong with SO or even using an AI assistant as a jr. You should use any tool that helps you, but I still believe nothing beats practice and doing some manual coding to really learn and understand the concepts.
I still think one should reach for a book first, then move on to whatever method works best (asking a senior/mentor, SO, ChatGPT, ...). It gives you more background and the material for better questions.
You can do this, and sometimes it works, but more often than not it doesn't. It's basically the blind leading the blind: with the current state of LLM hallucinations, it's very easy to be given bad or outright wrong answers, and if you don't know any better (which as a junior you probably don't), you'll just end up running in circles.
I think this will def get better over time though.
Damn right. I tried using GPT-4 to build a text editor with CodeMirror 6, but the model kept confusing it with version 5, and the thing is, they moved stuff around and deleted some modules between versions, so it was a total mess.
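The 5-to-6 confusion is easy to trigger because the basic setup API changed completely. A rough sketch of the difference, assuming the standard v6 quickstart imports from the `codemirror` package (check the official migration guide for specifics):

```javascript
// CodeMirror 5: a global CodeMirror object and an options bag.
// This is what a model trained mostly on v5 examples tends to emit:
// const editor = CodeMirror.fromTextArea(textarea, { lineNumbers: true });

// CodeMirror 6: ES modules, an EditorView, and an extensions array.
import { basicSetup, EditorView } from "codemirror";

const view = new EditorView({
  doc: "console.log('hello');", // initial document text
  extensions: [basicSetup],     // line numbers etc. now live in extensions
  parent: document.body,        // where to mount the editor's DOM
});
```

If the model mixes the two, you get v5 calls like `CodeMirror.fromTextArea` against v6 packages where that function no longer exists, which matches the mess described above.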
When you refactor the whole framework, you should change its name to keep the docs straight; a mere version bump is not a strong enough signal for LLMs.