Another approach to community solutions

I’m running into this A LOT. I recognize I’m on the beginner exercises and there aren’t many ways to solve them nicely, but it’s still uncanny. I’m on the (June) Clojure track, and so many solutions to the destructure exercise use (remove nil? (flatten data)) — functions which haven’t even been introduced at that point. I have to dig deep to find alternate solutions to learn from.
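For contrast, here’s a sketch of the kind of alternative I’d love to stumble across (this is my own illustration, not one of the actual community solutions):

```clojure
;; A recursive walk over the nested collection instead of flatten.
;; The function name and approach are mine, purely for illustration.
(defn flatten-non-nil [data]
  (mapcat (fn [x]
            (cond
              (sequential? x) (flatten-non-nil x) ; recurse into nested colls
              (nil? x)        []                  ; drop nils
              :else           [x]))               ; keep everything else
          data))

;; (flatten-non-nil [1 [2 nil [3 [4]]] nil 5]) ;=> (1 2 3 4 5)
```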

Even rarer is a solution with multiple iterations. With that thought, I think iteration count is a simple way to sort solutions. It brings diversity (each iteration is a distinct solution), and given the inherent purpose (or definition) of iteration, a later iteration is “higher quality”, even if it’s just changes suggested by the auto-checker, someone copying a different solution, or adding a doc-string. Or maybe this idea will fall apart for the more advanced exercises.

Sorry for necro posting, but if it’s any consolation I was about to make a new topic suggesting this exact same thing. Also I don’t see anything in the topic about this idea ever being rejected.


I would appreciate something as simple as a "featured_solutions": [] key in an exercise’s config.json. Even if we don’t add any new UI for written endorsements, just having the top 2-3 solutions be decent and presentable would be nice. We could try this with minimal effort - no new database tables necessary.
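Something like this, where the key name and value format are only my guess at what it could look like, not an existing field:

```json
{
  "featured_solutions": [
    "solution-uuid-1",
    "solution-uuid-2"
  ]
}
```

The placeholder values could be solution UUIDs or whatever identifier the site already uses internally.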

I think in most situations discovery is not a problem, since most exercises are straightforward enough that an “ideal” solution can be found within the first 10 community solutions.

If we cannot find a solution that’s agreed upon as “ideal”, we can work backwards: search for a specific student’s solution, or submit an “ideal” iteration ourselves and feature that.

I keep running into exercises where the top solution is something from nearly 10 years ago (that’s a long time in Rust), and because the representer ignores variable names (I think), I’ve seen a few spelling errors, which just doesn’t look very good.

this idea ever being rejected.

No, I very much like the idea. It’s just a lot of work to mess with our search indexes.

The challenge is that everything below the search bar (see screenshot below) comes via Elasticsearch. So we could add three solutions to the top of this list, but then is it confusing that they’re not the most submitted? If someone searches or filters, do we hide them? When do we show them again? I’m not sure what a clear way to do it would be.

If our rule is “featured solutions show only on page 1 of Most Submitted when there is nothing in the search box”, or something like that, then we can do it.

Featured solutions require significant effort from maintainers. Imagine a track with 100 exercises, each having three featured solutions. What happens if users update their solutions? What if the tests are synced and the solutions no longer pass? What if another user submits a ‘better’ solution?

On a side note, the highest-rep sorting is also problematic. Many will assume that solutions from those users must be ‘better’, but there’s no real reason why that would necessarily be the case.


The OCR Numbers exercise on the Python track has 1890 solutions – and that’s one of the smaller ones. Grains has 25688. The track has 143+ exercises.

Even if we exclude anything older than a year, choosing the three “best” across 5 versions of Python (we currently support Python 3.7 through 3.11.5) seems daunting.

And by what criteria?

We can feature an iteration, so it doesn’t change, and we can remove it if it doesn’t pass any more.

I don’t think we’re looking for the “best” solutions - more that highlighting a few different approaches is a useful thing for students. So if someone else submits a “better” one, maybe that gets picked up and featured, or maybe not - but that’s no different to today really.

All these questions do highlight why there is work involved though!

This is true, but it’s also likely they’re not actively bad, whereas the most common submissions (which is how things are ordered by default) could be misleading.

Database tables aren’t really the issue here btw. Syncing extra fields in configs is much more work than creating an extra column or two. The only database issue really is the syncing into Elasticsearch (so if that’s what you mean by the database, then I agree that not having it lessens the work significantly).

I think the problem with the config.json approach is that it moves this from crowdsourcing by high-rep users (which Glenn initially suggested) to being another job for maintainers.

If everyone with >x rep on a track could feature 1-3 solutions, and the default ordering became “Featured solutions” whenever there are more than 6, that might work.

I think for this to work it needs to be democratised like that; otherwise I think it’ll just add burden/pressure to maintainers (who we’re effectively asking to become judges).

Isn’t this part of why we were writing approaches documents? Those approaches are a LOT of effort, but aren’t really prominent. If we’re going to reorder community solutions, should they be aligned to the approaches or otherwise mentioned in the community solutions context?

It feels like featuring certain solutions (and I am not opposed to that!) on the community page could end up working at cross purposes to some approaches docs.

Hard agree on that. Although some folx could choose to feature solutions that aren’t idiomatic, or are “interesting” but not necessarily good in the way others might expect. But that’s a problem for another time. :slightly_smiling_face:


Should we push for approaches articles instead and highlight that more prominently? Or make an “example solutions” article which would be like an approach but with less prose and instead just have some select solutions? Trying to use the community solutions to highlight specific solutions sounds like it has a bunch of complexity.

Yeah, featuring approaches in that page is a nice alternative. It is just a lot more work. And the fact it goes through GitHub means there’s a lot more gatekeeping, which puts strain on maintainers, and often tension between maintainers and contributors too.


Whether it’s gatekeeping on GitHub or gatekeeping by high rep or endorsement on the site (or some other set of criteria used to rank), it’s still making a judgement about worthiness and visibility.

Different people are always going to have tension and disagreement around that (and they should!). And there are also going to be many folx who write good code who will be overlooked under both systems.

I think endorsing or starring is great. But I also think that it will eventually fall out to the same discussion we’re having now.

Different people think different solutions are “good” for a whole host of different reasons. It’s not objective. But somehow, we all want it to be.

I agree, but I was thinking more along the lines of mentoring notes, which are hosted on GitHub and also make judgments (that you can ignore) that push a user toward a certain solution. I thought it would be more the exception than the rule for an exercise to have a featured solution at all.

Pretty much exactly what I was thinking. If the search parameter is present they don’t appear.

I admit in no part of this am I thinking about the meritocracy of this system for solution writers. I’m fully thinking about this as a practicality for exposing students to solutions which are considered worth seeing by someone who is familiar with the language in question.

How about each approach having one solution?

  • When you write an approach, you can link to a finished solution that implements the approach.
  • Approaches with solutions take up the first 3 slots (preferably fewer) on the community solutions page.
  • Clicking an approach’s solution takes you to a page with the full solution at the top instead of the snippet.
  • If you are intrigued or don’t understand the solution, you have the approach’s explanation right under it.

This is what I’m picturing:

[screenshot: a dev-tools mock-up of approaches pinned above the community solutions list]
That was just me messing around with dev tools. I think the approaches should be made visually distinct, so that a user who is interested only in community solutions can filter them out visually. Maybe they could even be half height, since the footer is not necessary, but that would probably be a ton of work.

Sorry for the novel :face_with_diagonal_mouth:

I believe that’s how Approaches are supposed to currently work :smile:

It is slightly more complicated than that, depending. But yes, generally. Although the “summary doc” then includes multiples.

11 posts were split to a new topic: Can we improve the value people get from community solutions?

What if everyone could have their own list of featured solutions, each with a brief explanation or analysis? A special page could let you sort users by reputation and see what a specific high-rep user features.

That sounds a bit like the stars system that had a bunch of issues and was removed.

So …

What if:

  • We remove all existing stars and start over, only allowing users above a certain rep to give stars.
  • We require that solutions with more than a certain number of stars (the ones being “featured”) have an explanation of the code - automated or otherwise.
  • Any featured solution is anonymized, but its detail page lists the first (5? 10?) similar solutions sorted by user rep.
  • We use the display suggestion from ellnix above.
  • The featured slots stay no matter what the sort order is for the rest of the community solutions.

@IsaacG I’m not sure how my proposal is similar to the stars system. The mechanics are clearly different.

If I have my own page where I can feature selected solutions and explain what I find interesting about them, it gives me the opportunity to ‘teach’ people immediately, without them needing to request mentoring.

There’s no star system involved here.

Edit: I now see why you thought my proposal was similar to a star system. I’ll edit my previous post.

The stars are still there, so what gave you the idea that they were removed?

I recall something about the stars was removed. Possibly sorting by them.