Copilot and ChatGPT and other AI - should I feel bad?

Hello, I have been using Exercism exercises as a basis for exploring what Copilot can do. I am trying to limit myself to exercises I have already solved, but I don’t feel good about myself. It feels like cheating. On the other hand, coding with Copilot will certainly become a critical modern skill, so perhaps I shouldn’t feel too bad.

What does anyone else think?

Well, my perspective is that using Copilot is bad if my current intention is to learn, whether that is a language or something else. If I am building something and learning is not the main goal, I feel it is okay to just use ChatGPT or Copilot.

Perhaps I can start with some of my thoughts.

It is spooky what the editor does when you open it on an Exercism folder (VS Code with the Copilot extension installed). I guess Copilot can see a lot of Exercism solutions on GitHub. Perhaps I am just seeing the community solutions in another way?

More positively, it also feels like working with a mentor, but phrasing good prompts is clearly an important skill, and I am learning that here.

Also, the Copilot response frequently uses language features with which I am not familiar, so working out how the solution works is a new learning experience too.

Finally, trying to judge whether the Copilot response is good code is another kind of challenge.

But what if today’s goal is to learn what Copilot / ChatGPT can do? Should I go somewhere else?

I personally feel like Copilot can take away the problem-solving part of a problem. Once you have solved the problem, it could give valuable insight into how you could have done it differently (similar to community solutions).

I half agree. Phrasing a good prompt is like writing pseudocode to solve the problem. But there is a risk that a sufficiently precise prompt will reveal a complete solution before I have actually solved the problem.

If you want to learn how to generate code using AI then using an AI is good practice for that. If you want to learn to write code on your own, then using AI doesn’t help with that.

The LLMs are prediction machines. They’re basically really good preemptive Stack Overflow engines. If you’re trying to generate code which already exists on SO, they’re great. If you’re hoping to learn how to write novel code which cannot be cobbled together from SO, they can fall short.


I have no experience with Copilot, but I do have extensive experience with ChatGPT. My verdict is that even its new model still makes many mistakes. You need to think of it as an assistant: you can consult it, but you must also recognize when the information it provides is incorrect or not applicable to your situation. That can be extremely challenging if you lack experience with the topic, and you may quickly adopt bad habits if you rely on it exclusively.

Another model I’ve used in the past is Phind.com. It seems to perform much better than ChatGPT for programming-related tasks, and it also provides links to Stack Overflow articles for further reference. I really liked that feature.

It’s interesting that you’ve noticed how Copilot can suggest solutions based on patterns in the community. It really does feel like it has access to a wide range of examples, and sometimes it’s almost like a reflection of what’s available across GitHub. I totally agree that it feels a bit like working with a mentor, especially when you’re learning new concepts or need some direction on how to approach a problem. It’s also really insightful that you pointed out how phrasing good prompts is key—sometimes the quality of the response depends heavily on how the question is framed.

As for the unfamiliar language features, it’s great that you’re seeing this as an opportunity to expand your knowledge. Copilot’s suggestions can push you out of your comfort zone and expose you to new techniques or features, which can be a fantastic way to learn. And I totally understand your point about judging whether the code is good. It can be tough to trust an AI’s suggestions, especially when you’re unsure if they’re optimal or just a quick fix. It’s definitely a skill to evaluate the solutions and sometimes even iterate on them to suit your needs. Sounds like you’re really diving into this experience with an open mind—keep it up!

Thanks for your thoughtful reply. I am glad that I am not the only one who views the combination of Exercism and coding with AI as a positive experience. I am certain that building skills in using AI professionally is vital, and judicious use with Exercism is working for me.

Clearly you can cheat yourself and just accept the first thing that comes from AI, but with Exercism, you can do that just as easily by submitting an empty function, then thoughtlessly copying a community solution.

Since my first post, I have found a helpful new way to use AI, by asking questions like “Compare rational division of integers in the C, Clojure, Julia and Python programming languages”. This is helping me build out from languages I know well to languages that I am learning.
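To make that kind of cross-language question concrete, here is a minimal sketch of what "rational division of integers" looks like in Python alone (a hypothetical illustration, not an AI-generated answer): plain `/` produces a float, `//` floors to an int, and the standard-library `fractions.Fraction` keeps an exact rational, which is roughly the behavior a language like Clojure gives by default.

```python
from fractions import Fraction

# Dividing two integers in Python can mean three different things:
a, b = 7, 2

true_div = a / b            # true division: always a float
floor_div = a // b          # floor division: an int, rounded toward -infinity
exact = Fraction(a, b)      # exact rational arithmetic, no rounding

print(true_div)             # 3.5
print(floor_div)            # 3
print(exact)                # 7/2

# Fractions stay exact under further arithmetic:
print(exact + Fraction(1, 2))   # 4
```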

Copilot is also valuable for correcting annoying compiler errors. I don’t think I am cheating myself here, just using AI as a “better compiler”.

I have also been running Copilot while revisiting some Project Euler exercises (I hope to catch up someday!). Here AI is a very bad thing. The fun with Project Euler is that you don’t know the answer until you have a solution, but the AI tools can clearly see lots of existing solutions, so they just write the function for me and spoil the fun.