Square root in Python: should we make it more stringent?

I did this exercise, and it felt quite complicated. It was only after I submitted an iteration and checked the community solutions that I realized many people used ** 0.5. The instructions do mention not to use math.pow, and ** 0.5 is quite similar, so should it be accepted?

P.S.: A student I mentored wrote code that works for all natural numbers, not just perfect squares (which is what is covered in the tests). I’m unclear as to the meaning of a “natural radicand” - is that the same as a perfect square? I googled the term and couldn’t find anything. I (and this student) assumed that it meant a radicand that is part of the set of natural numbers.

It’s hard to write tests that check implementations rather than results. If students want to call math.sqrt() and mark the exercise as complete without implementing a square root function themselves, that’s fine. That’s their prerogative. Exercism provides exercises students can opt into solving … or not. We don’t force students to solve them in any particular way. We don’t stop students from copy-pasting code. We don’t force a particular implementation (in practice exercises, at least).

You got it right; a natural radicand is a radicand which is a natural number.

A radicand is simply something of which the root (radix) is to be taken. Compare: an operator operates on operands, a divisor divides a dividend. But yes, I think another word should be used here. Even within mathematics this term is somewhat rare.

No, it is just a natural number that the (square) root is to be taken of.

I don’t think it is, in this case. Just newtype int and neglect to provide __pow__.
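For illustration, a minimal sketch of what I mean (the Radicand name is just made up): wrap the int in a class that simply never defines __pow__, and ** 0.5 stops working on it.

```python
# A made-up wrapper type: it holds an int but defines no __pow__,
# so applying ** to it raises TypeError.
class Radicand:
    def __init__(self, value):
        self._value = value

    def __eq__(self, other):
        return self._value == other


n = Radicand(25)
# n ** 0.5  # TypeError: unsupported operand type(s) for ** or pow()
```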

Regardless of whether or not we enforce it, should we make it clear in the instructions that ** 0.5 shouldn’t be used either? I didn’t notice anyone using math.pow, which means they were technically complying with the instructions. My question is whether we should list the ** operator as not allowed in the instructions.

Thanks for clarifying the natural radicand point. Perhaps we should add tests covering imperfect squares too, then? Or was that intentionally left out? If so, I think it’d be beneficial to mention it explicitly.

You can’t really stop people from using builtin functions. There are math.log and math.exp, which can get you math.sqrt. And I bet there are other clever ways to combine builtins to get sqrt. It’s up to the students to play by the rules.
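For instance (purely an illustration, not a recommendation): sqrt(x) == exp(log(x) / 2) for positive x, so math.sqrt falls straight out of two other functions in the math module.

```python
import math


def sqrt_via_exp_log(x):
    # sqrt(x) == exp(log(x) / 2) for x > 0, up to floating-point rounding
    return math.exp(math.log(x) / 2)


print(sqrt_via_exp_log(25))  # approximately 5.0
```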

The problem comes from the problem spec repo, which is language-agnostic. We should not have any Python-specific language in the problem spec description, and we don’t need a Python-specific addendum. I think the problem is sufficiently clear as is. People can opt to use ** 0.5 or math.pow etc. if they want. Or they can not. In general, problems should describe behavior and not mandate implementation. They can provide suggestions for implementations (“Check out the Wikipedia pages”).

What is the benefit of adding tests for imperfect squares? Consider reading through this (WIP) document, Suggesting Exercise Improvements, and answering the questions posed there.

Yes, you can. In fact, Python makes this extremely easy relative to other languages. Builtins can be remapped. But that isn’t even necessary in this case: tests could be added that feed something other than int/float/complex to square_root.
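As a rough sketch of what such a test could look like (assuming the usual track layout with the solution in square_root.py exposing square_root; OpaqueInt is a made-up wrapper): it supports the comparisons a simple search-based solution needs, but has no __pow__, so a ** 0.5 solution errors out. Solutions using other operators would need the wrapper to grow more methods, which is where this starts to get fiddly.

```python
import functools
import unittest

from square_root import square_root  # the student's solution module (assumed layout)


@functools.total_ordering
class OpaqueInt:
    """A made-up wrapper: comparable to ints, but with no __pow__."""

    def __init__(self, value):
        self._value = value

    def __eq__(self, other):
        return self._value == other

    def __lt__(self, other):
        return self._value < other


class OpaqueRadicandTest(unittest.TestCase):
    def test_pow_free_solutions_still_pass(self):
        # A `return radicand ** 0.5` solution raises TypeError here.
        self.assertEqual(square_root(OpaqueInt(49)), 7)


if __name__ == "__main__":
    unittest.main()
```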

Yeah. Matthijs mentioned creating a new type for the test which doesn’t have a __pow__ method. You can do this in Python … but users could also work around it pretty easily. If __add__ returns an int, they can do (val + 0) ** 0.5. You could ensure none of the operators return an int, but users could still do int(str(val)) ** 0.5.
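For example, a rough sketch of those escape hatches, using a made-up Guarded wrapper that has no __pow__ but leaks plain ints through __add__ and __str__:

```python
class Guarded:
    """A made-up wrapper without __pow__ that still leaks plain ints."""

    def __init__(self, value):
        self._value = value

    def __add__(self, other):
        return self._value + other  # hands back a plain int

    def __str__(self):
        return str(self._value)


val = Guarded(25)
# val ** 0.5                 # TypeError: no __pow__
print((val + 0) ** 0.5)      # 5.0 -- route around it via __add__
print(int(str(val)) ** 0.5)  # 5.0 -- or round-trip through str
```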

You can easily throw an obstacle in the way. Students can just as easily move around the obstacle. It becomes an interesting problem of “how can we prevent the use of int.__pow__” … which is a completely separate challenge. And, I believe, not germane here. Problem specs lay out function requirements, not function implementations.

This is from the description:
" While there is a mathematical formula that will find the square root of any number, we have gone the route of only testing natural numbers (positive integers)"

So this implies that tests for imperfect squares should exist, right?

No; by that logic the tests could be argued to be incomplete until they covered all natural numbers.

I don’t think that implies they should exist. But it certainly doesn’t imply that they shouldn’t exist.

That said, I think some of the linked approaches only work with perfect squares.

For the curious, there’s a nice algorithm for this solution on Wikipedia.
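As a sketch of one well-known approach (not necessarily the one on that page): a binary search for the integer square root, which needs neither ** nor math.sqrt and hits the answer exactly for perfect squares.

```python
def square_root(radicand):
    # Binary search for the integer square root.
    low, high = 0, radicand
    while low <= high:
        mid = (low + high) // 2
        if mid * mid == radicand:
            return mid
        if mid * mid < radicand:
            low = mid + 1
        else:
            high = mid - 1
    return high  # floor of the square root when radicand is not a perfect square


print(square_root(36))  # 6
print(square_root(10))  # 3 (floor for non-perfect squares)
```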

Oh, okay.

The document raised good points, thanks. I think that adding tests for imperfect squares would cover all possible implementations and therefore follow through on what (in my opinion) the instructions imply.

However, having read the document, this might be a lot of effort for something that is not required. So I suggest we change the wording “natural numbers” to “natural numbers that are perfect squares”. This makes it clear that people can write simpler algorithms that work for perfect squares alone, thereby saving a lot of effort on their end. This certainly applied to me: writing an algorithm that works for non-perfect squares would have been tough, to say the least. At the same time, current users wouldn’t have to change their code to keep passing the tests.

Pretty much all the exercises have descriptions which do not fully describe all the requirements. The details of what is required are discoverable via the tests. This falls in line with the Test Driven Development philosophy. The instructions intentionally do not cover every detail of what is and isn’t required. The tests are intended as the actual requirements.

If students go above and beyond what the tests require, that’s completely fine. If a student wants to solve only what the tests require, they should read and run the tests. Solving non-perfect squares is perfectly valid for this exercise. It’s not tested, but the description intentionally doesn’t call out what is or isn’t tested. That’s documented in the tests themselves.

See Test Driven Development from the Python track docs.

Oh, that explains a lot of what I’m seeing. Makes sense.