I see that the most common solutions simply use an array of the literal text of the twelve verses, which seems to completely miss the point the exercise was intended to teach. Ultimately, most exercises could be solved by a simple lookup table mapping the inputs provided by the tests to the expected outputs! But I don’t think solving the exercises that way demonstrates understanding of the concepts.
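To illustrate what I mean by “the point”, here is a minimal sketch of the cumulative approach the exercise seems designed to invite, as opposed to hard-coding all twelve verses. The phrase list is abbreviated, and the `verse`/`recite` names are just my own illustration; the exact wording and signatures the tests expect come from problem-specifications.

```python
# Sketch only: build each verse from its pieces instead of storing full verses.
PHRASES = [
    "the house that Jack built.",
    "the malt that lay in",
    "the rat that ate",
    "the cat that killed",
    # ... the remaining phrases are omitted here for brevity
]


def verse(n):
    """Build verse n (1-indexed) by chaining phrases from n down to 1."""
    return "This is " + " ".join(PHRASES[n - 1::-1])


def recite(start, end):
    """Return verses start..end inclusive as a list of strings."""
    return [verse(n) for n in range(start, end + 1)]


print(verse(3))
# -> This is the rat that ate the malt that lay in the house that Jack built.
```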
Should the “House” exercise be modified to provide some additional input and expect solutions to generate variants on the classic rhyme, rather than just the original?
seems to completely miss the point the exercise was intended to teach.
What, in your opinion, is the point of this particular exercise? And is that the only point? Could there be other approaches to it?
This exercise pulls its requirements from problem-specifications, which is a cross-track repository for exercises. Proposing or modifying requirements needs wider discussion and approval from at least three different track maintainers, so I suggest moving this to the building-exercism category rather than the Python track category.
…and a word on practice exercises…
Practice exercises are intended to be much more open-ended than the concept exercises that are the main nodes on the syllabus tree. It is up to the student to craft and iterate on their solution to their (and their mentors’) satisfaction. There is never just one solution or approach to them, which is what makes it difficult to craft tests and instructions and to “cleanly” associate them with concept exercises. We design them to prompt questions and encourage the use of different techniques.
For some students, that means hard coding an array of strings. For others, it means something else. We don’t test for implementation - we test for result. It is up to the students to then iterate with or without mentor help to get to a place where their solution is the best they can make it. If that makes sense?
@IsaacG or @iHiD or another forum admin – could we move this to a more general category? Many thanks!
It’s certainly true that writing something that passes the tests and then iterating on it is a good way to learn! The fact that the top solutions set such a low bar and don’t have any later iterations suggests that not many participants are taking things further than the very basics, though. I’m not sure what to make of that. Not enough mentors?
Some folx don’t publish all their iterations. In fact, many don’t - so you can’t judge the number of iterations based on what is published in community solutions. And a not-insignificant number of students don’t publish their solutions at all. Other students publish a specific solution that shows something they were working on, or where they were at a specific point in time. There are also many who iterate, but do so on their own machine, and only upload when ready to publish.
But also? We don’t force people to participate. If they don’t see the value - they don’t see the value.
Those who regularly ask for and receive mentoring rave about how much they learn (as do the folx who volunteer as mentors!). But students aren’t required to participate. It is very much you get out of it what you put in.
The Python track has 6000+ folx who have signed up as mentors at one point or another. I don’t know how many are active this month, but I do know that the queue seldom has more than 10 people waiting, and is often empty.
As for other stats, you might find this page interesting. It’s updated roughly once a day.