Requiring people solve exercises prior to mentoring?

If it makes sense for mentors to experience how things work first hand, would it make sense that they also be mentored once?

To me it seems so. I fail to come up with counterexamples.

I tend to agree with you and lean in favor of putting up tiny barriers to encourage a minimum of preparation.

But I’m unsure about this particular barrier. What if some track is kinda sleepy, and an experienced person finds out about Exercism and would like to pick up the slack in the mentoring queue? They may actually have trouble getting mentored in the language they would like to mentor for. Or they would have to get mentoring for an exercise (maybe even in a different language) they don’t want or need it for. That would be an artificial, forced mentoring experience.

I started mentoring on the Rust track when the queue had over 160 requests; I didn’t wait to get mentoring myself. But I made sure to solve each exercise before actually mentoring for it.


I think it’s reasonable to expect someone to solve an exercise before mentoring for it. I can’t imagine how it makes sense to mentor before having done it yourself.

But if the checkbox is checked by default, the exercises won’t show at all. This could lead to some mentoring requests being ignored for a long time, because the active mentors happen to not have solved that particular exercise.

Maybe they could still be shown in the list by default, but grayed out and unclickable? That would give all the mentoring requests equal visibility and encourage mentors to solve exercises where people are looking for help.

Isn’t some amount of reputation required already?

I’m pretty sure I’ve had mentors with less than 10 reputation.

Howdy. I’ve been mentoring for a couple of months now, mostly in JavaScript but also clearing months-old queues in 3 other languages.

I have completed a total of 2 exercises on Exercism (in JS). Got pulled in because my mom had completed the Ruby track and was starting the JS track, where I already had experience, and she wanted a simpatico mentor. Once I was here, I started picking up other things as well.

I look at each mentoring session as, more or less, a code review session (with a strong bias towards accepting that the person whose code is being reviewed is inexperienced, mistakes are expected and acceptable, etc.)

When I review code for a job, I have never ‘solved’ the problem before, because if I had, it would already have been checked in and someone else wouldn’t be working on the issue. A requirement to have already ‘solved’ the exercise seems largely irrelevant, little more than an irritant.

Is there a problem intended to be resolved here? Are there learners up in arms about Bad Mentors?

The original poster here had a bad experience (or a few), which probably could have been addressed by mentoring the mentor.

If anything is needed, I would think things like:

  • a feedback mechanism for a mentee to downrate a mentor
  • systemic monitoring to call human attention to mentors receiving an unusual level of downrates
  • any amount of gentle nudging you might like to add, suggesting mentors have Done The Thing beforehand
  • e.g. while browsing mentoring requests, highlight / lowlight individual cases to indicate have / haven’t done the exercise
  • when clicking to actually take on a mentoring session, if the mentor hasn’t done the exercise, throw in an ‘are you sure?’ dialog. This dialog should have a ‘don’t ask again’ widget! (A rough sketch of this gating logic follows below.)
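
A rough Python sketch of that gating logic (all names here are hypothetical, invented for illustration; this is not Exercism’s actual code):

```python
from dataclasses import dataclass, field


@dataclass
class MentorPrefs:
    # the "don't ask again" checkbox from the hypothetical dialog
    skip_unsolved_exercise_prompt: bool = False


@dataclass
class Mentor:
    solved_exercises: set[str] = field(default_factory=set)
    prefs: MentorPrefs = field(default_factory=MentorPrefs)


def should_confirm(mentor: Mentor, exercise_slug: str) -> bool:
    """Return True when the 'are you sure?' dialog should be shown."""
    if exercise_slug in mentor.solved_exercises:
        return False  # mentor has done the exercise; no prompt needed
    if mentor.prefs.skip_unsolved_exercise_prompt:
        return False  # mentor ticked "don't ask again"
    return True       # nudge, but don't block
```

The point being that the check only ever nudges; it never prevents the mentor from taking the session.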

I am much more concerned about how the four queues I monitor have almost no traffic. I don’t think it’s that other mentors are quickly scooping up all the opportunities, because I often come back 18 hours later to see that there are several which have dribbled in over that entire time range.

The ‘join the mentors, visit foreign languages and beaches’ materials make it sound like you’re entering into an environment where there will be thousands of open cases waiting for mentors; all sorts of tools are provided to filter down this giant mass of choices into something you’ll be able to handle. In fact there are only 3 cases in total waiting across your 4 tracks, and you don’t need any tools at all (though a ‘show me all open cases from all my tracks in a single list’ tool would be nice).

Welcome to Exercism and mentoring!

Except … these aren’t code reviews :slight_smile: If you were conducting an interview, I’d imagine you would ask questions which you’ve already solved yourself. Exercism concept exercises come with an “exemplar” solution. Many tracks have mentoring notes with exemplar solutions and points about what things to watch out for or call out. If you’re a strong programmer and can figure out how to solve the exercise easily, then approaching it as a code review often works fairly well. And if you’ve mentored a specific exercise a few dozen times, you get a really good idea of all the common approaches, with their pros and cons. If this were an actual code review, you wouldn’t be reviewing code which has already been written many times over. Treating mentoring as a code review typically works, but it’s also not quite the same.

I agree entirely … for the more difficult exercises. Not that I suggested this should be a requirement :slight_smile:

For the “easy” exercises, the hurdle of solving them should be very small for an experienced programmer. It shouldn’t take much time. It should help filter out people who maybe shouldn’t be mentors and would ensure mentors are fully familiar with the exercise. These exercises often have very few approaches to solving them, and most of the good solutions tend to look pretty similar.

For the harder exercises, the solutions tend to be more involved, and reviewing them does tend to feel more like a code review, as the approach leaves a lot more room for creativity.

Your suggestion to nudge, rather than require, people to solve exercises prior to mentoring is something I think is a good idea! But I also think mentors should have solved at least some number of exercises themselves, and maybe even have been mentored first to better understand the process.

The OP is calling out a problem. Whether or not learners are up in arms is irrelevant. If I have had a handful of bad mentoring experiences, I can extrapolate that a large number of mentoring sessions may be of low quality. If learners are new to code reviews and code feedback, they may not know a good code review from a bad one. They may have a bad session as their first and only session, decide that mentoring sessions are a poor use of their time, and not try it again. I think my catching a few bad mentoring sessions is a decent signal that there is a problem here.

Per your later point, might I suggest you experience a few mentoring sessions prior to commenting on issues regarding mentoring sessions, if you have not yet done so?

  • I think something to nudge mentors to complete exercises prior to mentoring them is an excellent suggestion. That’s in a similar vein to what @bobahop suggested.
  • I do think the mentor feedback dialog may benefit from a redesign. The unhappy option is tied to reporting and blocking the mentor, which feels pretty extreme. I’ve had several mentoring sessions which left me … unsatisfied, but clicking the sad face and blocking/reporting the mentor felt like a lot. It might be good to separate a “block/report” from “unhappy with the quality of this session”.

Exercism v2 required solutions be mentored prior to completing them. The queues were long and usually backlogged. This was becoming problematic for allowing the platform to scale up. v3 dropped that requirement and the queues are typically much emptier. The new automated feedback and other recent features are intended to nudge students to use mentoring more. “Yay! You passed the tests. Here’s some automated feedback about things that can be done better. Are you sure you don’t want more feedback from a mentor?” It’s an evolving balancing act.

If you look at the top contributors over the last week for JS, it looks like about 27 solutions were mentored across 13 mentors. That’s definitely on the low side. Over the last 30 days, I see 151 solutions mentored. That’s about 3-5 solutions per day. I agree that’s a bit low. And I’m of a similar feeling that maybe the “become a mentor” message might be pushed a bit too heavily here, though that may be an unpopular opinion.

Great points and suggestions. I have one small thought about mentoring just being a code review: some exercises have very tight tests that specify a large part of how the exercise needs to be structured. I imagine it would be difficult for me to judge which part of the structure is given by the tests and what was done by the student if I hadn’t solved the exercise first. Although these kinds of exercises are not the majority.


Just to be clear,

  1. The original proposal was to require aspiring mentors to have solved some number of exercises (1, 5, or 10).
  2. OP did not propose that mentors be required to have solved the exercises they would mentor on. Indeed, I suspect that they would be against this, as am I.
  3. OP did additionally propose that aspiring mentors be required to have been mentored themselves – on any exercise on any track.

@filbo I find your perspective valuable. However, from your post I have trouble figuring out where you stand on the above points. Could you please clarify?

@MatthijsBlom, I did not intend to state a clear opinion on those matters as I haven’t worked out a clear position on them. I personally would find those restrictions annoying – except that I technically pass 1 & 3 (did 2 exercises, was mentored on one – though I did not actually complete that mentoring session). If I had been required to rack up more finished exercises and/or mentorings, that would have slowed me down and likely prevented me from overcoming my own inertia and actually getting into mentoring. But that’s just me, I can’t speak for others.

I will say that I am clearly against #2, as are you and probably most people likely to chime in.

@senekor, I frequently consult the tests in the mentoring setup, for precisely the reason you mention. In many cases the nature of the ‘solution’ code is quite restricted by the tests; I need to see what’s being tested to understand the ways of the student’s code.

Would this process be better if I had worked the exercise myself? Sort of … Maybe? Someone else’s solution is going to be different. Even more to the point, just because I once wrote some code for it doesn’t mean I remember all that much about it months later. I could probably go rediscover my code, but it isn’t even necessarily relevant if the student took a significantly different approach.

=====

Is there any mechanism by which others might review completed mentoring sessions? I don’t mean routinely, but maybe once in a while just dip out a few samples and see what they feel like. Nor do I mean some sort of intensive ‘find every flaw’ review, just a quick look-over and if something is grossly wrong, do something about it.

The first level of ‘do something’ would presumably be to contact the mentor and discuss whatever was wrong. I don’t care to speculate past that (hopefully there is no need for anything past that).

=====

@IsaacG,

Exercism v2 required solutions be mentored prior to completing them. The queues were long and usually backlogged. This was becoming problematic for allowing the platform to scale up. v3 dropped that requirement and the queues are typically much emptier.

Maybe there is a happy middle ground. For instance: offer to enroll each completed solution for a mentoring session; do this with a checkbox which is initially on, but is under student control, so they can disable it when they’re comfortable doing so. And maybe remember separate defaults (all initially on) for easy vs. concept vs. hard exercises (or however many categories / ‘levels’ seem appropriate).

(Of course I am biased in suggesting that, as I want to drive up the ‘mentoring economy’ so my queue isn’t blank :)
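
For what it’s worth, a rough sketch of what those per-category defaults might look like (the category names and field names are invented here for illustration, not Exercism’s actual data model):

```python
from dataclasses import dataclass, field


@dataclass
class MentoringDefaults:
    # Every category starts opted in; the student can switch a category off
    # once they're comfortable skipping mentoring for that kind of exercise.
    opted_in: dict[str, bool] = field(
        default_factory=lambda: {"easy": True, "concept": True, "hard": True}
    )

    def wants_mentoring(self, category: str) -> bool:
        return self.opted_in.get(category, True)

    def opt_out(self, category: str) -> None:
        self.opted_in[category] = False


# Example: a student who no longer wants mentoring on easy exercises.
prefs = MentoringDefaults()
prefs.opt_out("easy")
assert prefs.wants_mentoring("easy") is False
assert prefs.wants_mentoring("concept") is True
```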


6 posts were split to a new topic: Could we merge Exercism and Discourse notifications?

I think no one wants #2.

Though we can all agree that mentors need to know the language they mentor in and the platform they mentor on.

The latter can somewhat be measured by overall reputation.

The former by reputation gained in a track.

Good thread. Thanks everyone. I think people should have done some minimum on Exercism before mentoring. I think solving an exercise other than “Hello World” is probably an absolute minimum thing to have done.

Another idea might be asking someone to have some minimum reputation (e.g. 5, which they would get through publishing 2-5 solutions). This acts as effectively the same barrier as above, but means someone must be willing to publish their code, which I think is also a reasonable thing to ask of a potential mentor.


Even though it’s personal, this is important data.

I do not remember exactly how/when I became a mentor, but it likely was after solving quite a few exercises and also after getting mentored a few times (which gave me confidence that yes, I can be a mentor).

I just checked and it looks like I have never been mentored in my main mentoring language. I do not think this is a problem. Besides, ‘fixing’ this might not be feasible for lack of mentors.

Kinda obvious, but for the record: I agree.

Do you have records of how people came to be mentors? Like how many exercises they had solved beforehand, whether they had been mentored, etc.


Potential problems with using reputation:

  • Getting a spelling fix PR merged gets you +12 reputation already
  • I do not expect correlation (positive or negative) between publish-happiness and mentor-suitability

(For what it’s worth, I am publish-averse.)

Then the labels on that PR have been too generous…

…or I am mistaken. I was under the impression that all PRs get you +12 reputation.

You’re mistaken :slight_smile:

See details on Reputation | Exercism's Docs


But a Hello World + reputation gate seems sane.

Hello, I am Erfan.
I have a problem with this code:

```
Def hello():
Print hello world ()
```

I don’t know where the problem is. Please help me.
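
For what it’s worth, and guessing at the intent: in Python, def and print must be lowercase, and print is a function that takes the text as a string argument. A minimal corrected version would be:

```python
def hello():
    print("hello world")


hello()
```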