# How come the "Wordy" exercise is labeled as "Easy"?

I spent more than two hours and couldn’t solve it. I have 10 years of programming experience (though not in Python).
Then I went to the Dig Deeper tab, and it says that the dunder method `__getattribute__` should be used. I know dunders were mentioned in the syllabus, but that doesn’t mean we should research all the other available methods by ourselves. If we do, then the word “Easy” is not appropriate.

The solution has elements of recursion, which alone puts it beyond the easy category.

We also didn’t cover `try`/`except`, as far as I remember.

So this exercise is not easy at all; it should be labeled as medium or even hard. In that case I wouldn’t have spent hours trying to solve it: I would have gone to Dig Deeper after about 20 minutes, knowing that I wasn’t going to solve it on my own.


There are many other ways to solve the exercise, without using recursion or `__getattribute__`. Regardless, I agree this exercise is not easy.
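
For illustration, here is one possible non-recursive sketch (my own, not from the track; the error handling the test suite expects for malformed questions is omitted): strip the question’s fixed wording, normalize the two-word operators, then fold through the tokens left to right with functions from the `operator` module.

```python
import operator

# Phrase -> binary function. Wordy uses integer division, hence floordiv.
OPS = {
    "plus": operator.add,
    "minus": operator.sub,
    "multiplied by": operator.mul,
    "divided by": operator.floordiv,
}

def answer(question: str) -> int:
    # Drop the fixed prefix/suffix, then join two-word operators with an
    # underscore so the expression splits cleanly on whitespace.
    text = question.removeprefix("What is").removesuffix("?").strip()
    for phrase in ("multiplied by", "divided by"):
        text = text.replace(phrase, phrase.replace(" ", "_"))
    tokens = text.split()

    # Fold left to right: Wordy deliberately ignores operator precedence.
    result = int(tokens[0])
    for word, number in zip(tokens[1::2], tokens[2::2]):
        result = OPS[word.replace("_", " ")](result, int(number))
    return result
```

Note the left-to-right fold: “What is 3 plus 4 multiplied by 2?” must yield 14, not 11.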

Each track (language) has its own estimate of difficulty. On a scale where 1-3 is Easy, 4-7 is Medium, and 8-10 is Hard, here is a “wisdom of the crowds” median estimate of exercise difficulty across tracks. Many tracks consider Wordy to be of Medium difficulty.

| Exercise | Difficulty |
| --- | --- |
| acronym | 2 |
| affine-cipher | 4 |
| all-your-base | 4 |
| allergies | 3 |
| alphametics | 7 |
| anagram | 3 |
| armstrong-numbers | 2 |
| atbash-cipher | 4 |
| bank-account | 5 |
| binary-search | 3 |
| binary-search-tree | 5 |
| bob | 2 |
| book-store | 7 |
| bottle-song | 3 |
| bowling | 6 |
| change | 6 |
| circular-buffer | 4 |
| clock | 3 |
| collatz-conjecture | 2 |
| complex-numbers | 4 |
| connect | 8 |
| crypto-square | 4 |
| custom-set | 5 |
| darts | 2 |
| diamond | 4 |
| difference-of-squares | 2 |
| dnd-character | 3 |
| dominoes | 7 |
| eliuds-eggs | 2 |
| etl | 2 |
| flatten-array | 3 |
| food-chain | 5 |
| forth | 8 |
| game-of-life | 5 |
| gigasecond | 1 |
| go-counting | 9 |
| grains | 2 |
| grep | 5 |
| hamming | 2 |
| hello-world | 1 |
| high-scores | 2 |
| house | 4 |
| isbn-verifier | 3 |
| isogram | 2 |
| killer-sudoku-helper | 4 |
| kindergarten-garden | 3 |
| knapsack | 5 |
| largest-series-product | 4 |
| leap | 1 |
| ledger | 5 |
| list-ops | 4 |
| luhn | 4 |
| markdown | 5 |
| matching-brackets | 4 |
| matrix | 4 |
| meetup | 4 |
| micro-blog | 2 |
| minesweeper | 5 |
| nth-prime | 4 |
| nucleotide-count | 2 |
| ocr-numbers | 5 |
| palindrome-products | 6 |
| pangram | 2 |
| parallel-letter-frequency | 5 |
| pascals-triangle | 4 |
| perfect-numbers | 3 |
| phone-number | 3 |
| pig-latin | 4 |
| poker | 7 |
| pov | 9 |
| prime-factors | 3 |
| protein-translation | 3 |
| proverb | 3 |
| pythagorean-triplet | 5 |
| queen-attack | 3 |
| rail-fence-cipher | 6 |
| raindrops | 2 |
| rational-numbers | 5 |
| react | 8 |
| rectangles | 6 |
| resistor-color | 1 |
| resistor-color-duo | 2 |
| resistor-color-trio | 2 |
| rest-api | 6 |
| reverse-string | 1 |
| rna-transcription | 2 |
| robot-simulator | 3 |
| roman-numerals | 3 |
| rotational-cipher | 3 |
| run-length-encoding | 4 |
| satellite | 6 |
| say | 6 |
| scrabble-score | 2 |
| secret-handshake | 3 |
| series | 3 |
| sgf-parsing | 8 |
| sieve | 3 |
| simple-cipher | 5 |
| space-age | 2 |
| spiral-matrix | 5 |
| square-root | 2 |
| state-of-tic-tac-toe | 5 |
| strain | 2 |
| sublist | 3 |
| sum-of-multiples | 2 |
| tournament | 5 |
| transpose | 5 |
| triangle | 2 |
| twelve-days | 3 |
| two-bucket | 6 |
| two-fer | 1 |
| variable-length-quantity | 5 |
| word-count | 3 |
| word-search | 7 |
| wordy | 6 |
| yacht | 4 |
| zebra-puzzle | 7 |
| zipper | 8 |

If you spend enough time on various tracks here, you’ll find that exercise difficulties are very rough estimates and quite subjective. Perhaps a better approach would be to look at the number of attempts and solves: Python impact and status on Exercism (there is a Practice Exercises section).

I know it’s incredibly frustrating to get stumped on something labeled easy, but on the flip side, you shouldn’t give up just because it says it’s hard.

I agree 100%. This was one of the hardest exercises so far, and I struggled with it for a whole day.

As a novice programmer, I have a limited set of tools in my bag, and I’m adding concepts as the learning track introduces them. I approached this using regex and `eval()`. I discovered `eval()` in a web search, and after reading about the risks I decided that my regex parsing was narrow enough that it was OK to use here.
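
For context, a stripped-down version of that style might look like the following (my own sketch with assumed names, not the post author’s code). One subtlety worth flagging: `eval()` on a whole expression applies Python’s operator precedence, while Wordy wants strict left-to-right evaluation, so this sketch `eval()`s one binary operation at a time, on input the regex has already constrained to digits and known operator words:

```python
import re

# Operation word (two-word phrases joined with "_") -> Python symbol.
# Wordy uses integer division, hence "//".
SYMBOLS = {"plus": "+", "minus": "-",
           "multiplied_by": "*", "divided_by": "//"}

def answer(question: str) -> int:
    # Normalize two-word operators, then require the exact question shape:
    # "What is <number> (<word> <number>)*?"
    normalized = (question.replace("multiplied by", "multiplied_by")
                          .replace("divided by", "divided_by"))
    m = re.fullmatch(r"What is (-?\d+(?: \w+ -?\d+)*)\?", normalized)
    if m is None:
        raise ValueError("syntax error")
    tokens = m.group(1).split()

    # eval() one "left op right" pair at a time, left to right.
    result = int(tokens[0])
    for word, number in zip(tokens[1::2], tokens[2::2]):
        result = eval(f"{result} {SYMBOLS[word]} {number}")
    return result
```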

My eventual ‘solution’ passed 17/25 test cases (yay, and phew), but some of the remaining tests had me completely stumped. Wrangling my way through, I got to 23/25, and then I hacked my way through the last couple just so that I could unlock the Dig Deeper (I try to submit first, then view the DD or community solutions).

But the DD then introduced a totally new (and, IMO, really abstract) concept: magic methods. I find this really frustrating, as I don’t think it’s fair to be expected to know and use concepts like these at this stage.
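
For anyone else who lands here, the core of the magic-method idea is smaller than it sounds. This is my reading of the technique (illustrative names, not the actual Dig Deeper code): operation words map to the names of `int`’s arithmetic dunder methods, and `getattr()` looks the bound method up at runtime.

```python
# Operation word -> name of the corresponding int dunder method.
# (Illustrative mapping; Wordy uses integer division, hence __floordiv__.)
OP_METHODS = {
    "plus": "__add__",
    "minus": "__sub__",
    "multiplied by": "__mul__",
    "divided by": "__floordiv__",
}

def apply_op(left: int, word: str, right: int) -> int:
    # getattr(5, "__add__") returns the bound method 5.__add__,
    # so calling it with 13 is the same as writing 5 + 13.
    return getattr(left, OP_METHODS[word])(right)

print(apply_op(5, "plus", 13))  # 18
```

(The DD reportedly uses `__getattribute__`; plain `getattr()` does the same job here and is the more common spelling.)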

In many other exercises, the README introduces the necessary concepts before detailing the challenge, or there’s something helpful in the HINTS file. That works well because it follows a logical learning path: theory first, then application, then troubleshoot/test/research, and finally submit.

I would urge the contributors either to add concept and theory information to the README/HINTS, or to change the level of this exercise to hard.