I’ve been doing Exercism for a while, and I feel it’d be helpful if we had some sort of a “review” feature. My issue is that right now, as a beginner, I solve an exercise in a long-winded way, and only after I submit do I see a community solution that’s really neat and idiomatic. At that point I have three options: a) adapt that code (not copy it literally) and submit a new iteration, b) walk through it with a mentor, or c) close it and move on.
Option A, in my opinion, yields little benefit - I’ll forget it very quickly. Option B is good, but I feel bad about unnecessarily loading the mentor queue when I know I’ve solved an exercise, and in some cases solved it well, and just want to remember one feature (say, the unary or bitwise operators). And even if I do rewrite the code after a mentor helps me out, the benefit isn’t maximized. Option C means ignoring an opportunity to deepen my fluency in the language.
If we had a feature that resurfaces an exercise, say, 30 days after completion, along with a self-written note (“solve with unary”), I think it’d really help my long-term acquisition of language skills. In other words, I’m asking for a simplified version of a Spaced Repetition System (SRS).
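To make the idea concrete, here is a minimal Python sketch of what such scheduling could look like - every name here (`ReviewItem`, `reschedule`, `due_today`) is hypothetical, not anything Exercism actually has. An item comes due 30 days after completion; each successful review doubles the interval, and a forgotten one resets it, which is the core of most spaced-repetition schemes.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ReviewItem:
    exercise: str          # exercise slug, e.g. "two-fer"
    note: str              # self-written reminder, e.g. "solve with unary"
    interval_days: int = 30
    due: date = field(default_factory=lambda: date.today() + timedelta(days=30))

    def reschedule(self, remembered: bool) -> None:
        """Double the interval on success, reset it to 30 days on failure."""
        self.interval_days = self.interval_days * 2 if remembered else 30
        self.due = date.today() + timedelta(days=self.interval_days)

def due_today(items: list[ReviewItem]) -> list[ReviewItem]:
    """Return the exercises whose review date has arrived."""
    return [item for item in items if item.due <= date.today()]
```

A real SRS (e.g. the SM-2 family) grades recall more finely, but even this fixed doubling would cover the “remind me in 30 days with my note” case.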
How do you feel about this feature? Would it be beneficial for others too?
FYI, most track queues are very lightly loaded, with more mentors looking for something to mentor than solutions waiting for mentoring. Additionally, more often than not a mentor can point out things you didn’t think about: students often believe they’ve solved something well while overlooking an issue, and a mentor can draw their attention to it. If the exercise is already solved perfectly, there’s nothing to return to and improve.
But say they learned a new feature during the mentoring session: wouldn’t it be helpful if they redid the exercise a few days or weeks later?
I’m already doing this manually, but I’m hoping for something automated.
I like the idea of being reminded somehow to revisit something, especially something we didn’t already know we could improve on. One way would be to “subscribe” to starring of community solutions to any problem we’ve done - ideally as a global setting, but maybe per-track or per-exercise (a PITA to remember, but we could get prompted on completion). That could get noisy, so maybe add an optional user-set threshold, like “notify me when a solution gets at least three stars” or “…at least as many stars as my solution”. (And don’t repeat star notifications for the same solution, e.g. when it gets its third and then its fourth star.)
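The threshold-plus-dedup rule above is simple enough to sketch; this is purely illustrative, and `should_notify` and its parameters are my own invention, not an Exercism API:

```python
def should_notify(solution_id: str, stars: int, threshold: int,
                  already_notified: set[str]) -> bool:
    """Fire at most one notification per solution, once it reaches the
    user-set star threshold. Mutates already_notified to record the send."""
    if stars < threshold or solution_id in already_notified:
        return False
    already_notified.add(solution_id)
    return True
```

Tracking already-notified solutions is what prevents the repeat pings when a solution collects its third and then its fourth star.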
To illustrate: currently all tracks have ≤ 5 requests queued, except Ballerina (7), Scheme (6), Unison (6), MIPS (6), and Haskell (40).
Interesting. How do you see this data? Is it restricted to maintainers? Also, I was wondering: what would the pros and cons be of making mentoring compulsory, or at least the default?
@davearonson, that sounds good. I proposed a review for when we already know what to improve, but it’s definitely even better to be reminded when we didn’t know there was anything to improve.
You can specify the track, e.g.:
Exercism v2 had mandatory reviews before an exercise could be marked as complete; v3 did away with that after much consideration and discussion.