Hello, I have been using Exercism exercises as a basis for exploring what Copilot can do. I am trying to limit myself to exercises I have already solved, but I still don’t feel good about it. It feels like cheating. On the other hand, coding with Copilot will certainly become a critical modern skill, so perhaps I shouldn’t feel too bad.
Well, my perspective is that using Copilot is bad if my current intention is to learn, whether that’s a language or something else. If I’m building something where learning isn’t the main goal, I feel like it’s okay to just use ChatGPT or Copilot.
It is spooky what the editor does when you open it on an Exercism folder (VS Code with the Copilot extension installed). I guess Copilot can see a lot of Exercism solutions on GitHub. Perhaps I am just seeing the community solutions in another way?
More positively, it also feels like working with a mentor, but phrasing good prompts is clearly an important skill, and I am learning here.
Also, the Copilot responses frequently use language features with which I am not familiar, so working out how the solution works is a new learning experience too.
Finally, trying to judge whether the Copilot response is good code is another kind of challenge.
I personally feel like Copilot can take away the problem-solving part of an exercise. Once you have solved the problem yourself, though, it could give valuable insight into how you could have done it differently (similar to community solutions).
I half agree. Phrasing a good prompt is like writing pseudocode to solve the problem. But there is a risk that a sufficiently detailed prompt will reveal a complete solution before I have actually solved the problem.
If you want to learn how to generate code using AI then using an AI is good practice for that. If you want to learn to write code on your own, then using AI doesn’t help with that.
The LLMs are prediction machines. They’re basically really good preemptive Stack Overflow engines. If you’re trying to generate code which already exists on SO, they’re great. If you’re hoping to learn how to write novel code which cannot be cobbled together from SO, they can fall short.
I have no experience with Copilot, but I do have extensive experience with ChatGPT. My verdict is that even its new model still makes many mistakes. You need to think of it as an assistant: you can consult it, but you must also recognize when the information it provides is incorrect or not applicable to your situation. That can be extremely challenging if you lack experience with the topic, and you may quickly adopt bad habits if you rely on it exclusively.
Another model I’ve used in the past is Phind.com. It seems to perform much better than ChatGPT for programming-related tasks, and it also provides links to Stack Overflow articles for further reference. I really liked that feature.