Introducing Representers

We’ve now fully launched the second part of our automated feedback strategy - Representers!

Representers help ensure that mentors never have to give the same feedback twice, and dramatically improve the likelihood that a student will get instant automated feedback.

I’ve written a post and recorded a video on how they work and why they’re so useful.

Supermentors: There’s fun to be had here giving feedback instantly to hundreds of students. Check out the Automation tab to get started (and please read the info on the solutions carefully :blue_heart:).

(Think you should be a Supermentor but you’re not? Contact @jonathanmiddleton.)

To learn more:

You might also like to spread the word by sharing a tweet or a LinkedIn post - thanks in advance!


Does this mean the JS thing is now fixed?

No, not yet, but that’s discussed in Updating Representers.

WIP PR (with remaining tasks for @ErikSchierboom listed) at Handle representers being updated by iHiD · Pull Request #3041 · exercism/website · GitHub. We can potentially merge this as-is (and we probably will), which would solve JS, but it wouldn’t yet give tracks the ability to deliberately expire representer versions or exercises.


Is it conceivable to add read access to the Automation tab for track maintainers who are more tooling maintainers than mentors for the track and haven’t reached the 100 mentoring sessions?
It makes sense to me that we don’t want write access for people who haven’t mentored much, to avoid poorly written feedback. However, it would still be useful to enable some access for maintainers who write the corresponding tools.

What I really want to do is add all this to a maintainers dashboard. In the interim I’m sure @ErikSchierboom could add a supermentor role into the database and check that as well as the manual check we have (`roles.include?("supermentor") || ...`), and then give you that role :slight_smile:
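The check described above could be sketched roughly like this. This is a hypothetical illustration, not the website’s actual code: the `User` struct, method names, and the way the 100-session threshold is modelled are all invented for the example; only the idea (stored role OR activity-based check) comes from the thread.

```ruby
# Assumed threshold, taken from the "100 mentoring sessions" mentioned above.
SUPERMENTOR_SESSION_THRESHOLD = 100

# Minimal stand-in for a user record (fields are invented for this sketch).
User = Struct.new(:roles, :mentoring_sessions_count)

# A user counts as a supermentor if they hold the database role
# (granted manually, e.g. to a tooling maintainer) OR they pass the
# existing activity-based check.
def supermentor?(user)
  user.roles.include?("supermentor") ||
    user.mentoring_sessions_count >= SUPERMENTOR_SESSION_THRESHOLD
end

maintainer = User.new(["supermentor"], 3)  # role granted manually
veteran    = User.new([], 250)             # earned via mentoring activity
newcomer   = User.new([], 10)              # neither path applies
```

The point of the `||` is that neither path blocks the other: granting the role in the database opens access without lowering the bar for everyone else.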

Alright, that sounds great and I can definitely wait in the meantime.

I noticed that the version of one of the solutions I’m giving feedback on (Solution #16295, Currency Exchange in Python) is different from the one I solved, and also different from the current version. Would it be possible to include the exercise version number? Should I create an issue for that?

We don’t actually have exercise versions. Instead, we hash the important files in the exercise’s folder in git and use that hash to determine its uniqueness.
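The idea can be sketched like this. It’s a minimal illustration of content-hash-based uniqueness, not Exercism’s actual implementation: the function name, the choice of SHA-1, and which files count as "important" are all assumptions for the example.

```ruby
require "digest"

# Compute a fingerprint for an exercise from the contents of its
# important files. Files are sorted by name so the digest doesn't
# depend on iteration order, and a NUL separator keeps name/content
# boundaries unambiguous.
def exercise_fingerprint(files)
  digest = Digest::SHA1.new
  files.sort_by { |name, _| name }.each do |name, contents|
    digest.update(name)
    digest.update("\0")
    digest.update(contents)
  end
  digest.hexdigest
end

v1 = exercise_fingerprint(
  "instructions.md"  => "Convert between currencies...",
  "exchange_test.py" => "def test_exchange(): ..."
)
v2 = exercise_fingerprint(
  "instructions.md"  => "Convert between currencies...",
  "exchange_test.py" => "def test_exchange(): ...  # updated test"
)
```

Any change to an important file produces a different fingerprint, so two states of the exercise are distinguishable even though there is no human-readable version number to display, which is why the solution above can differ from both the one you solved and the current one.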

Extrapolating from your post slightly, I wonder if we should only show representations of solutions that pass the latest tests, which would remove the issue of giving feedback on an old solution. Does that make sense to you? (cc @ErikSchierboom for thoughts)

Yes, I agree. Showing only solutions to the most recent version of the exercise would make the most sense.

Happened to find a post on Mastodon from someone who is really enjoying the Elixir representer: Functional Café

#elixir track on #exercism is giving some really cool feedback on naming conventions!
I’m curious to go read how their analysis works now :open_mouth:

I replied with a link to the blog post.

Edit: Er… I just realized this is probably from the analyzer, not the representer, but whatevs :sweat_smile:


Awesome to see! Thank you for sharing :slightly_smiling_face:
