TL;DR - Either option works, but I really would caution you to read through the docs on concept exercises so you understand all of the details. There is a lot, and we also have some particular rules about how tests are written, what topics are covered, and what syntax to use.
Checking again, I see that the concept docs have at least been written for dict-methods, but I am not seeing an exercise started in the repo – so I might be wrong there. Let me dig a bit. I will link anything I find here.
You can totally come up with a story if you’d like. I know some folx like to pull from Shared Exercise Stories or other tracks – I have a preference for creating new stories or exercises (and you get a badge for creating an exercise story). Also - many of the dictionary-related stories use High Scores – but we’ve implemented that as a practice exercise, so we can’t reuse it for a concept exercise. We could still use the scoring scenario, but it would then need a different focus than the practice exercise has.
The exercise on dicts uses a basic story of keeping inventory, and I wouldn’t be opposed to extending or riffing on that or any other similar tasks. Other ideas that I’ve had in the past include:
- “Explosion at the Paint Factory” (I use color names and hex values in the concept docs), where you need to re-arrange or create different color sets from chaos.
- “Paint by Numbers” where you need to figure out which colors should be packed with which paint-by-numbers painting in a box for sale.
- “Coloring Book Conundrum” - which is essentially the same thing - you are an app developer (or coloring book author) that needs to assign color palettes to different coloring book/app pictures.
- “Webmaster Woes” where you need to figure out which hex values to use on a website, but the product manager has given you a list of seemingly random color names.
- “Recipe Recon” or “Menu Maven” where you create a Menu from different dishes that require different ingredients (see Cater Waiter for easy pre-assembled data/ideas).
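To make the shape of these ideas concrete, here is a rough sketch of the kind of tasks a color-palette story could ask for, each exercising a common dict method. All function names and data here are made up for illustration – the actual tasks, names, and method coverage would need to follow the concept exercise docs:

```python
def add_to_palette(palette, color, hex_value):
    """Add a color only if it isn't already present (exercises setdefault)."""
    palette.setdefault(color, hex_value)
    return palette


def merge_palettes(base, extra):
    """Combine two palettes; colors in `extra` win on conflict (exercises update)."""
    combined = dict(base)  # copy so the caller's palette is untouched
    combined.update(extra)
    return combined


def remove_color(palette, color):
    """Remove a color, returning its hex value, or None if absent (exercises pop)."""
    return palette.pop(color, None)


warm = {"red": "#FF0000", "orange": "#FFA500"}
cool = {"blue": "#0000FF", "red": "#CC0000"}

print(add_to_palette(dict(warm), "yellow", "#FFFF00"))
print(merge_palettes(warm, cool))  # "red" takes cool's value, "#CC0000"
print(remove_color(dict(cool), "green"))  # missing key returns the default, None
```

Each task maps to one method so the student meets them one at a time, which is roughly how the existing concept exercises are structured.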
Again - these are only random ideas, and you can come up with your own. You do need to cover specific dict-methods, though, and stick to a limited set of tasks and allowable syntax - I will let you read through the docs for the requirement details.
Global Meetup (I think) would be met with great enthusiasm by Erik. My suggestion would be to create the exercise in the Python track first, and we can go through it and test it out.
Once we have good tests and data, you can then draft up a PR to problem specs. It will take a bit to go through that, make it generalizable across tracks, and get all the syntax and everything nailed down (with three maintainers approving). Test cases may also change. Once that process is complete, we can double back and adjust the Python exercise accordingly. If you are not completely terrorized by that whole process, you can tackle making a Python test generation template for it (or not!).
Since I’d like to avoid the recent furor that has taken place in problem specs, I’d also suggest making a comment in the Global Meetup issue on problem specs laying out that you are going to implement the exercise for Python and then PR to prob-specs.
WHEW!! Let me know if that all makes sense, and if you have further questions or issues.