Stars are a historical artifact now. A different approach occurred to me:
Mentors and maintainers can give endorsements to community solutions so that people browsing can see that “experts” think certain solutions are noteworthy.
The endorsers can optionally give a short blurb about why the solution stands out.
If an endorsed solution gets out of date with new tests, the endorser could receive a notification and can decide to rescind the endorsement.
I don’t think endorsements can be anonymous.
As a mentor I often encourage students to browse the community solutions, even knowing that it can be an unordered sea of daunting code. If students had a way to find the noteworthy solutions, it would really help them find the islands in that sea. The solution owners also get a nice prize: “Wow, Erik likes my solution.”
We are about to add a new ordering method to community solutions based on a user’s rep on that track. I really like this as part of a default order that sorts by “number of endorsements” first and then falls back to the author’s track rep.
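As a minimal sketch of that composite ordering (field names are hypothetical, not Exercism’s actual schema), the default order could be a two-part sort key: endorsement count first, track rep as the tiebreaker:

```python
# Hypothetical sketch of the proposed default ordering: endorsement count
# first, then the author's track reputation. Field names are illustrative
# only, not Exercism's real data model.
from dataclasses import dataclass


@dataclass
class Solution:
    author: str
    endorsements: int
    track_rep: int


def default_order(solutions):
    # Negate both fields so higher values sort first.
    return sorted(solutions, key=lambda s: (-s.endorsements, -s.track_rep))


solutions = [
    Solution("ana", endorsements=0, track_rep=900),
    Solution("bob", endorsements=2, track_rep=50),
    Solution("cho", endorsements=2, track_rep=400),
]
for s in default_order(solutions):
    print(s.author)  # prints cho, bob, ana in that order
```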
Couple of questions:
“Stars are a historical artifact now” - why do you say this? They’re still there and work AFAIK.
Is there an appetite for mentors/maintainers to endorse these solutions? (how do we get consensus/commitment from high-rep users to determine this)
a) Yes: Let’s do it
b) Unclear/no: Could we boost stars from mentors/maintainers instead so they carry more weight (maybe some function of the user’s rep?)
From my experience, quite often when I have gone to look at the community solutions for an exercise and ordered by most starred, very few have stars at all and those that do only have a few stars. I think this is part of what contributes towards them feeling like a ‘historical artifact’.
I think it would be really nice if upon completing a solution, you were given the option to see some endorsed solutions with a short description of why they have been endorsed.
I think it would be good if the endorsed solutions:
exemplify specific programming concepts/paradigms
make you think about the problem in a different way
make particularly good use of the language’s features
It seems to me the two main desiderata for the default ranking are
high quality rises to the top, and
the top is diverse.
( “This solution should be useful, and the next one should be as well.” )
Presumably, sorting by author’s reputation would push solutions of high quality to the top.
Endorsements could help push diversity into the top, but that might well require potential endorsers to hold off on endorsing common solutions – on top of them needing to find the diversity in the first place. I expect endorsements to push quality more.
I’m not sure what effects stars have. (Aside from promoting old solutions that already have stars.)
( Speaking of stars: I vaguely remember – but cannot find now – a blog post, I guess by Joel Spolsky, about a ranking system for StackOverflow that takes the age of up/downvotes into account. )
How many solutions – i.e., how much work – are we talking about? Roughly, given the desired statistics and the number of maintainers/willing mentors.
Sounds like Approaches Light.
Which is good, actually. Approaches are a lot of work to produce (or they are when I am producing them, anyway), and this kind of endorsement seems like a relatively cheap way to achieve most of the same benefit.
( By the way: the present Dig Deeper stuff could very much use such an explicit recommendation as well. )
How do we find the solutions to endorse? This goes back to the old request to be able to search solutions by text. I don’t know if there is a performant way to do that.
Stars are somewhat of a historical artifact because many of them were given a long time ago and the language has since moved on. Some of the higher-starred solutions are not as idiomatic or performant as lower-starred solutions. But they were the early birds, and they got the worm, so to speak.
Another tricky thing is when someone copies another solution. You may find the copy and endorse it, and not have found the original that inspired it. Then the original author goes unendorsed. So awarding rep for being endorsed may not be fair.
Some of the higher-starred solutions are not as idiomatic or performant as lower-starred solutions.
Some of the starred solutions are starred for exactly these reasons. For example, I may have starred a solution because it was interesting rather than because of its performance or “idiomatic” form.
And there has not been, so far as I know, any guidance for why one should or would star a solution. I have never starred a solution because of those two criteria – well, unless readability counts as “performant”. I may have starred at least a couple for that definition of “performant”, but not for CPU-cycle, memory, or storage performance.
And perhaps that’s why endorsements should highlight their reasons.
But it would probably still lead to a high volume of solutions with a low number of endorsements.
Probably going overboard: somewhat like Slashdot, where (I might misrepresent what /. is doing, apologies) you can award “points” towards a few characteristics of a post (like “insightful”, “funny”, “interesting”), it could be helpful to award endorsement (or star?) points in a few categories like readability, performance, idiomaticity, or interestingness. One could then try to find solutions that shine in certain areas. I realize this would complicate things too much; I’m going for “interesting, but awkward”.
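To make the idea concrete, here is a rough sketch of such category points (the category names and all identifiers are made up for illustration; this is not a proposal for the actual schema):

```python
# Hypothetical sketch of Slashdot-style category endorsements: each
# endorsement adds one point in a named category, and we can surface
# the solutions that shine in a given area. All names are illustrative.
from collections import Counter, defaultdict

CATEGORIES = {"readability", "performance", "idiomatic", "interesting"}

# solution id -> per-category point counts
points: dict = defaultdict(Counter)


def endorse(solution_id: str, category: str) -> None:
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    points[solution_id][category] += 1


def top_in(category: str, n: int = 3) -> list:
    # Rank solutions by points in one category, highest first,
    # dropping solutions with no points in that category.
    ranked = sorted(points, key=lambda sid: points[sid][category], reverse=True)
    return [sid for sid in ranked if points[sid][category] > 0][:n]


endorse("sol-1", "readability")
endorse("sol-1", "readability")
endorse("sol-2", "performance")
print(top_in("readability"))  # ['sol-1']
```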
I’m running into this A LOT. I mean, I recognize I’m at the beginning exercises and there aren’t many ways to solve them nicely. But it’s still uncanny. I’m on the (June) Clojure track, and so many solutions to the destructure exercise use (remove nil? (flatten data)), built from functions that haven’t even been introduced yet. I have to dig deep to find alternate solutions to learn from.
Even rarer is a solution with multiple iterations. With that thought, I think iteration count is a simple way to sort solutions. It brings both diversity (each iteration is a unique solution) and, given the inherent purpose of iterations, the later iteration is “higher quality”, even if it’s just changes suggested by the auto-checker, someone copying a different solution, or an added doc-string. Or maybe this idea will fall apart for the more advanced exercises.
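The iteration-count ordering above could be sketched like this (again, all names are hypothetical, not Exercism’s real data model):

```python
# Hypothetical sketch: order solutions by how many iterations they have,
# on the theory that more iterations means more refinement. Names are
# illustrative only.
from dataclasses import dataclass, field


@dataclass
class Solution:
    author: str
    iterations: list = field(default_factory=list)  # one entry per published iteration


def by_iteration_count(solutions):
    # Solutions with the most iterations first; ties keep input order.
    return sorted(solutions, key=lambda s: len(s.iterations), reverse=True)


solutions = [
    Solution("dee", iterations=["v1"]),
    Solution("eve", iterations=["v1", "v2", "v3"]),
    Solution("fay", iterations=["v1", "v2"]),
]
print([s.author for s in by_iteration_count(solutions)])  # ['eve', 'fay', 'dee']
```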