Hey :) We’re considering that, but ChatGPT 3.5 isn’t great for it (4 is much better) so it’s not something we’ve added yet. We also don’t necessarily want to encourage people to take that route over mentoring. So there’s still some thinking to do!
The main problem with asking the bot to optimize for speed is that even humans don’t agree on whether something is a speed optimization. On the other hand, humans will agree on things that are clearly errors (spelling mistakes, etc.). That means ChatGPT will often come up with inaccurate advice when asked open-ended questions like “optimize this”.
I do have a recent conversation with v3.5 which clearly illustrates how bad things can get if you already know the answer. I’ll post the outline here if there’s interest.
I understand the appeal, but I’m also very apprehensive about asking ChatGPT for optimizations. Humans are both notoriously bad at it and very likely to propagate bad advice. Seeing as ChatGPT is built to imitate (‘helpful’) humans… I’m not very hopeful.
At the end of the day, you should probably either
not care, or
measure!
I do have a bit more trust that advice on non-pessimization will be sound, but such advice is probably harder to come by. And you would need to explicitly ask for it, of course, which the vast majority of students will not do: I have seen many requests for comments on optimization and none for non-pessimization.
To be a bit more nuanced, I was thinking about a situation where you’re learning a new language (I’m picking up several atm) and you’re just not yet familiar enough to deliver a solution that doesn’t feel clunky. So it would be more refactoring than optimization, but I guess that is exactly what code review/mentorship is for, as you mentioned @iHiD.
@MatthijsBlom Could you give a definition of “non-pessimization”? I’m not familiar with the term.
Non-pessimization is an [‘optimization’] philosophy that says: when I’m writing code, I’m just going to write the simplest possible thing and not introduce tons of extra work for the CPU to do, so that in general the code that I produce is at least not doing things unnecessarily.
(When listening to Muratori, do take into account their background: creating computer games.)
While mentoring/reviewing, I seldom hunt for opportunities to do certain things quicker (i.e. ‘optimize’). But I do look for unnecessary work being performed. One very common and very simple (so perhaps not great) example is storing a sequence of generated values, even though they are needed only once:
# (Python)
generated_values = [f(e) for e in iterable]  # needlessly allocates a list
# vs.
generated_values = (f(e) for e in iterable)  # generates values on the fly
sum(generated_values)  # the generated values are used only once
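To make the contrast concrete, here’s a self-contained sketch of the same idea. The function `square` and the input range are made up for illustration; the point is only that the list version materializes every value in memory before summing, while the generator version never builds the list at all:

```python
# Hypothetical example: summing the squares of a large range of numbers.
def square(n):
    return n * n

numbers = range(1_000_000)

# List comprehension: allocates a million-element list just to sum it once.
total_from_list = sum([square(n) for n in numbers])

# Generator expression: each value is produced on the fly and discarded
# after being added to the running total; no list is ever built.
total_from_gen = sum(square(n) for n in numbers)

assert total_from_list == total_from_gen
```

Both give the same answer; the second simply avoids work (and memory) that was never needed, which is exactly the non-pessimization mindset.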
I do sometimes use ChatGPT to rewrite my thoughts during mentoring discussions when I’m not being clear, rather than ask it to write solution code for me, especially if I’m dealing with a language that’s likely underrepresented in its training data. It’d be helpful if I could use the token allocation to ask ChatGPT v4 about my mentoring comments.
Nice idea. The normal ChatGPT website is probably just as good for this, though, so I’d rather not cost us the extra $0.20 or whatever for that request when you could get it for free from OpenAI. I could see how we could add extra value (e.g. bundling in the solution, tests, etc.), but I suspect it wouldn’t make a huge difference to the quality of ChatGPT’s output.