Track exercise metrics

Hi there, are there any metrics on people doing exercises?

I’m a maintainer of the Elm track, and it feels like it could be good to know where to focus our efforts.

For example, how many people do a particular 48in24 exercise?

Or which exercises are the most popular (and hence worth the most attention)?

Or which exercises are the most abandoned (so probably need a look)?
And so on.

Thanks!
Cedd

I’m not sure if something official exists, but if you have a sense of what the attempt and solve counts were before 48in24, you could look up the current exercise stats on the Elm impact and status page on Exercism and compare.


252 students started Eliud’s Eggs with an average of 7.5 attempts and a completion rate of 66%. For a somewhat “easy” exercise, that seems a bit high on the attempts average and low on the completion rate. Anecdotally, as an Exercism student, I’d be pretty frustrated after the 4th attempt and would probably be trying anything to make it pass. Maybe an instructions append or a hints file is in order?

A question about that average of 7.5 attempts: those iterations failed the tests, right, since they are counted as attempts rather than solutions?

Can we see if students are just hitting that “test button” to find and fix typographical errors as opposed to honest solution failures?

I would also be interested in seeing how many iterations are refactors or entirely different approaches, as a way to measure learning success in languages that are not of the “there is one way to do it” variety.

On the other hand, if “attempts” really means iterations, then seeing an exercise like TwoFer with 10-20 iterations does not mean a failure of the exercise at all. It likely means it is designed well enough to explore different approaches and features of the language.

While the data can have many interpretations, the useful thing is to compare exercises to find outliers, which is what @ceddlyburge was asking for. And Eliud’s Eggs looks like such an outlier.

I’d suggest looking at other tracks to see if the completion rate of Eliud’s Eggs is equally low globally, or if there’s something specific to Elm that’s making it less effective here.


Thanks everyone. That impact and status link is useful.

I usually run the tests after each step / task in the instructions (to check the work I have done so far), so I expect them to fail quite a few times. But everyone is probably different, so we would need to have a think about the data.

Is this type of information available, or could it be made available, through the API? It would be helpful to query a track for an exercise and get the info shown on the build page (the started-solution count, the number of attempts, the average attempts, and the completed-solution count). Then I could build a table of the tracks I’m interested in and the corresponding stats for a specific exercise. I could do that by hand if I’m looking at a small handful of tracks, but if I want a more global picture, that’s a lot of clicking and copying. :)
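
To make the idea concrete, here’s a rough Python sketch of the kind of script I’d write. The `/stats` URL and the JSON keys below are invented for illustration; they are not a real Exercism endpoint:

```python
# Hypothetical sketch only: this endpoint and its JSON keys are made up
# to illustrate the kind of query I mean; they are not a real Exercism API.
import requests

TRACKS = ["elm", "haskell", "fsharp"]   # the tracks I'm interested in
EXERCISE = "eliuds-eggs"

# Made-up URL shape; a real API would define its own.
URL = "https://exercism.org/api/v2/tracks/{track}/exercises/{exercise}/stats"

# Print one row per track with the stats shown on the build page.
print(f"{'Track':<10} {'Started':>8} {'Attempts':>9} {'Avg':>6} {'Completed':>10}")
for track in TRACKS:
    resp = requests.get(URL.format(track=track, exercise=EXERCISE))
    resp.raise_for_status()
    stats = resp.json()  # assumed keys mirroring the build-page numbers
    print(f"{track:<10} {stats['started']:>8} {stats['attempts']:>9} "
          f"{stats['avg_attempts']:>6.1f} {stats['completed']:>10}")
```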