How do we find the solutions to endorse? This goes back to the old request to be able to search solutions by text. I don’t know if there is a performant way to do that.
Stars are something of a historical artifact: many of them were given a long time ago, and the language has since moved on. Some of the higher-starred solutions are not as idiomatic or performant as lower-starred solutions. But they were the early birds, and they got the worm, so to speak.
Another tricky thing is when someone copies another solution. You may find the copy and endorse it without ever having found the original that inspired it. Then the original author goes unendorsed. So awarding rep for being endorsed may not be fair.
That was my thinking. Similar to Stack Overflow: answers from 2010 have thousands of upvotes, while better answers from 2023 go unnoticed because they are buried at the bottom of the page.
Some of the higher-starred solutions are not as idiomatic or performant as lower-starred solutions.
Some of the starred solutions are starred for exactly these reasons. For example, I may have starred a solution because it was interesting, rather than for its performance or “idiomatic” form.
And there has not been, so far as I know, any guidance on why one should or would star a solution. I have never starred a solution for those two criteria, unless readability is what is meant by “performant”; I may have starred at least a couple under that definition. But not for CPU cycles, memory, or storage performance.
And perhaps that’s why endorsements should highlight their reasons.
But it would probably still lead to a high volume of solutions with a low number of endorsements.
This is probably going overboard: somewhat like Slashdot, where (I might be misrepresenting what /. does, apologies) you can award “points” toward a few characteristics of a post (like “insightful”, “funny”, “interesting”), it could be helpful to award endorsement (or star?) points in a few categories like readability, performance, idiomaticness, or interestingness. One could then try to find solutions that shine in certain areas. I realize this would complicate things too much; I’m going for “interesting, but awkward”.
I’m running into this A LOT. I mean, I recognize I’m on the beginning exercises and there aren’t many ways to solve them nicely. But it’s still uncanny. I’m on the (june) Clojure track, and so many solutions to the destructure exercise use (remove nil? (flatten data)), functions which haven’t even been introduced yet. I have to dig deep to find alternate solutions to learn from.
Even rarer is a solution with multiple iterations. With that thought, I think iteration count is a simple way to sort solutions. It brings both diversity (each iteration is a distinct solution) and, given the inherent purpose (or definition) of an iteration, a reasonable expectation that the latest iteration is “higher quality”, even if the changes are just ones suggested by the auto-checker, copied from a different solution, or an added doc-string. Or maybe this idea falls apart for the more advanced exercises.
Sorry for necro-posting, but if it’s any consolation, I was about to make a new topic suggesting this exact same thing. Also, I don’t see anything in this topic about the idea ever being rejected.
I would appreciate something as simple as a "featured_solutions": [] key in an exercise’s config.json. Even if we don’t add any new UI about written endorsements, just having the top 2-3 solutions be decent and presentable would be nice. We can try this with minimum effort, no new database tables necessary.
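For concreteness, here is a minimal sketch of what that key might hold; the shape is just a suggestion rather than an existing schema, and the iteration UUIDs are placeholders:

```json
{
  "featured_solutions": [
    "placeholder-iteration-uuid-1",
    "placeholder-iteration-uuid-2",
    "placeholder-iteration-uuid-3"
  ]
}
```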
I think in most situations discovery is not a problem, since most exercises are straightforward enough that an “ideal” solution can be found within the first 10 community solutions.
If we cannot find a solution that’s agreed upon as “ideal”, we can work backwards and search by student, or submit an iteration that is “ideal” and feature that.
I keep running into exercises where the top solution is something from nearly 10 years ago (that’s a long time in Rust), and because the representer ignores variable names (I think), I’ve seen a few spelling errors, which just doesn’t look very good.
No, I very much like the idea. It’s just a lot of work to mess with our search indexes.
The challenge is that everything below the search bar (see screenshot below) comes via Elasticsearch. So we could add three solutions to the top of this list, but then is it confusing that they’re not the most-submitted ones? If someone searches or filters, do we hide them? When do we show them again? I’m not sure what a clear way to do it would be.
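If we did want to pin a handful of solutions while keeping everything else organic, one mechanism might be Elasticsearch’s pinned queries (available in Elastic’s default distribution since 7.4), which place the given document IDs above the organic results. A sketch, with hypothetical index and field names:

```json
GET /solutions/_search
{
  "query": {
    "pinned": {
      "ids": ["featured-iteration-1", "featured-iteration-2", "featured-iteration-3"],
      "organic": {
        "term": { "exercise_slug": "grains" }
      }
    }
  }
}
```

That only answers the plumbing question, though; when to hide the pinned entries (e.g. whenever a search term or filter is present) is still a UX decision.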
Featured solutions require significant effort from maintainers. Imagine a track with 100 exercises, each having three featured solutions. What happens if users update their solutions? What if the tests are synced and the solutions no longer pass? What if another user submits a ‘better’ solution?
On a side note, the highest-rep sorting is also problematic. Many will assume that solutions from those users must be ‘better’, but there’s no real reason why that would necessarily be the case.
The OCR Numbers exercise on the Python track has 1,890 solutions – and that’s one of the smaller ones. Grains has 25,688. The track has 143+ exercises.
Even if we exclude anything older than a year, choosing the three “best” across 5 versions of Python (we currently support Python 3.7 through 3.11.5) seems daunting.
We can feature an iteration, so it doesn’t change, and we can remove it if it no longer passes.
I don’t think we’re looking for the “best” solutions; the idea is more that highlighting a few different approaches is a useful thing for students. So if someone else submits a “better” one, maybe it gets picked up and featured, or maybe not, but that’s no different from today, really.
All these questions do highlight why there is work involved though!
This is true, but it’s also likely they’re not actively bad, whereas the most common submissions (which is how things are ordered by default) could be misleading.
Database tables aren’t really the issue here, btw. Syncing extra fields in configs is much more work than creating an extra column or two. The only database issue, really, is syncing into Elasticsearch (so if that’s what you mean by the database, then I agree that avoiding it lessens the work significantly).
I think the problem with the config.json approach is that it moves this from crowdsourcing by high-rep users (which Glenn initially suggested) to yet another job for maintainers.
If everyone with >x rep on a track could feature 1-3 solutions, and the default ordering became “Featured solutions” once there are >6, then that might work.
I think for this to work it needs to be democratised like that; otherwise I think it’ll just add burden/pressure on maintainers (whom we’re effectively asking to become judges).
Isn’t this part of why we were writing approaches documents? Those approaches are a LOT of effort, but aren’t really prominent. If we’re going to reorder community solutions, should they be aligned to the approaches or otherwise mentioned in the community solutions context?
It feels like featuring certain solutions (and I am not opposed to that!) on the community page could end up working at cross purposes to some approaches docs.
Hard agree on that. Although some folx could choose to feature solutions that aren’t idiomatic, or are “interesting” but not necessarily good in the way others might expect. But that’s a problem for another time.
Should we push for approaches articles instead and highlight them more prominently? Or make an “example solutions” article, which would be like an approach but with less prose, instead just presenting some select solutions? Trying to use the community solutions page to highlight specific solutions sounds like it has a bunch of complexity.
Yeah, featuring approaches in that page is a nice alternative. It is just a lot more work. And the fact it goes through GitHub means there’s a lot more gatekeeping, which puts strain on maintainers, and often tension between maintainers and contributors too.
Whether it’s gatekeeping on GitHub or gatekeeping by high rep or endorsement on the site (or some other set of criteria used to rank), it’s still making a judgement about worthiness and visibility.
Different people are always going to have tension and disagreement around that (and they should!). And there are also going to be many folx who write good code who will be overlooked under both systems.
I think endorsing or starring is great. But I also think that it will eventually fall out to the same discussion we’re having now.
Different people think different solutions are “good” for a whole host of different reasons. It’s not objective. But somehow, we all want it to be.
I agree, but I was thinking more along the lines of mentoring notes, which are hosted on GitHub and also make judgments (that you can ignore) about pushing a user toward a certain solution. I thought it would be more the exception than the rule for an exercise to have a featured solution at all.
Pretty much exactly what I was thinking. If the search parameter is present they don’t appear.
I admit in no part of this am I thinking about the meritocracy of this system for solution writers. I’m fully thinking about this as a practicality for exposing students to solutions which are considered worth seeing by someone who is familiar with the language in question.
How about each approach having one solution?
When you write an approach, you can link to a finished solution that implements the approach.
Approaches with solutions take up to the first three slots (preferably fewer) in the community solutions page.
Clicking an approach’s solution takes you to a page with the full solution at the top instead of the snippet.
If you are intrigued by the solution, or don’t understand it, you have the approach’s explanation right under it.
That was just me messing around with dev tools. I think the approaches should be made visually distinct, so that a user who is interested only in community solutions can filter them out visually. Maybe they could even be half height, since the footer is not necessary, but that would probably be a ton of work.
What if everyone could have their own list of featured solutions, each with a brief explanation or analysis? A special page could let you sort users by reputation and see what a specific high-rep user features.