Hi everyone. I love Exercism. It is an extremely effective and fun way to learn. I just wanted to say that I found the ‘level of difficulty’ rating for some of the exercises was way off. Some of the ‘easy’ exercises were very difficult, and some of the ‘medium’ exercises were very easy. Maybe you could get users to rate the difficulty of each exercise after completion, and things would self-correct. Thanks.
Which track? Which exercises?
Exercises are rated on a scale from 1 through 10, then bucketed into easy, medium, and hard. Unless you’ve completed the hardest exercises, it’s hard to map exercises to values (since you’re not familiar with the upper bound). Another tricky part of scoring difficulty is that it can be judged on multiple criteria. CS students might find implementing a breadth-first search straightforward, while non-CS students might struggle with the algorithm. Some people may feel that implementing a long list of validation steps makes an exercise hard, while others might think that’s trivial. Some people might think matching brackets is very hard (and it is, until you light upon the right algorithm), while others may think it’s trivial (which it is, with the right algorithm).
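For anyone wondering, the “right algorithm” for matching brackets is usually a stack: push openers, pop and compare on closers. A minimal sketch in Python (the name `is_paired` mirrors the usual Exercism exercise stub, but the details here are illustrative):

```python
def is_paired(text: str) -> bool:
    """Return True if all brackets in `text` are balanced and properly nested."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)                      # remember the opener
        elif ch in pairs:
            # a closer must match the most recent unmatched opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                              # leftovers mean unclosed brackets
```

Once you know the stack idea, the exercise really is trivial; without it, people tend to go down rabbit holes of counting or regex.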
If you’ve completed a whole bunch of exercises and have suggestions for updated difficulties, you can see if the track maintainers are interested!
Ah, that’s a neat idea! Truth be told, assigning difficulty to exercises is, well… a difficult thing because folks’ skill sets vary widely. But as you say, it would be interesting to see if a user rating system would cause the level to stabilize over time.
This has been my experience as well (on the Racket track). I like the idea of users rating the difficulty level.
Calculus is not very difficult if they teach you how to do it, but it took centuries for humanity to discover it and set the foundations for it.
If you’re asking basically anyone to figure out calculus from scratch, it is a very difficult task.
Asking someone to find the integral of x² dx is very easy if you know how to do it, but I wouldn’t grade integral calculus as “easy” or “beginner” in a math curriculum just because of that.
What I mean is that when you want to grade an exercise you have to take the point of view of someone coming to it for the first time (worst case scenario) unless you’re offering some kind of previous explanation or something like that (which is not the case in Exercism, where most exercises are just thrown at you and it’s up to you to figure everything out).
I’ve come across an “easy” exercise that’s been the most difficult one so far for me because it wasn’t obvious at all how to solve it (I’m talking about the matching brackets).
On the other hand, some exercises related to prime numbers, which are very straightforward, are rated as medium.
Just because an exercise looks simple doesn’t mean it’s easy.
And just because an exercise deals with prime numbers doesn’t turn it into a hard to solve exercise.
So, to sum up, I really agree with the original poster in that the rating could and should be more accurate.
Even if you can’t devise a completely accurate method of measuring such a subjective thing as an exercise’s difficulty, I’m not sure that means riwepo’s idea should be dismissed outright. Difficulty ratings from users would provide an idea of how difficult users generally found a particular exercise. Shouldn’t the question be whether or not that would be valuable to other users, rather than whether it would be “accurate”? Anyway, isn’t awarding “likes” and whatnot what the kids today like to do?
I totally agree with you.
This should be about utility for users.
A system that would allow us to rate an exercise after we completed it.
Something like: How hard did you find this exercise to be?
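To make the idea concrete, here’s a hypothetical sketch of how collected user ratings (say, 1–10 answers to that question) could be folded back into the existing easy/medium/hard buckets. The thresholds and function name are my own assumptions, not anything Exercism actually does:

```python
from statistics import median

def community_difficulty(ratings: list[int]) -> str:
    """Map user-submitted difficulty ratings (1-10) to a bucket.

    Uses the median rather than the mean so a few outlier
    votes don't skew the result on small samples.
    """
    if not ratings:
        return "unrated"
    score = median(ratings)
    if score <= 3:
        return "easy"
    if score <= 7:
        return "medium"
    return "hard"
```

With enough votes per exercise, the buckets would drift toward the community consensus over time, which is exactly the self-correction the original poster describes.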
I don’t understand who is rating all those exercises and on what basis, because it’s not very useful.
As mentioned previously, if we assume users grow more proficient as they complete exercises, the difficulty they experience doesn’t tend to spread across the full range. In an ideal world, the experienced difficulty of each exercise would always be “just hard enough to push me to learn, but not so hard that I’m completely stuck”. Ideally, users learn from each exercise, which allows them to complete a harder exercise … while experiencing the same level of challenge.
Of course, there may be lots of practical reasons why they might choose not to implement this, but that doesn’t mean it’s not a creditable idea.
I don’t know if implementing a rating system is worth the trouble. Most online problem solving platforms just have a very rough and subjective difficulty level set by the authors, and additional statistics on the number of attempts and accepted solutions.
Exercism has, for each track, a build page that displays some stats, like this one: C++ impact and status on Exercism (scroll to the bottom and look for Usage Statistics).
If the track has enough users, it can give a good idea of the difficulty of the exercises. Of course, Exercism’s test suites are not very strict, so they don’t discriminate against unoptimized solutions, but even so, usage statistics for each problem might serve as a good indicator of difficulty.
So many online stores have ratings for their products, this could work in a similar way.
Let the users choose how hard an exercise was in the same fashion a customer can choose a rating for the purchased product.