# Add a very large number test for difference-of-squares

As part of 48in24 I was looking at the community solutions for Difference of Squares in R after solving it, and the whole first page consists of solutions that sum ranges (if that’s what they’re called in R — I don’t know).

I feel like this is missing the point, and the exercise is failing to induce users to research the ideal solution: $\left(\sum_{i=1}^{n} i\right)^2 = \left(\frac{n(n+1)}{2}\right)^2$ and $\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$.
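For reference, here is a minimal R sketch of the closed-form approach (the function names are illustrative, not the track’s actual exercise stubs):

```r
# Closed-form difference of squares for the first n natural numbers.
# square_of_sum:  (n(n+1)/2)^2
# sum_of_squares: n(n+1)(2n+1)/6
square_of_sum <- function(n) (n * (n + 1) / 2)^2
sum_of_squares <- function(n) n * (n + 1) * (2 * n + 1) / 6
difference <- function(n) square_of_sum(n) - sum_of_squares(n)

difference(10)  # 2640
```

Both formulas are O(1), so even an input of 1e9 returns instantly.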

I was considering adding a bonus test with an input of 1e9 (one billion) and a 100 ms timeout, at least for R and other languages that can handle big integers and timeouts.

I was looking at issues in various Exercism repos to see how to go about adding this, and came across docs that said I should discuss it first, so here I am.

I’m sorry if this is in the wrong category in the forum.

Firstly, hello and thank you for the suggestion - welcome to the forum!

I’m tentative about the suggestion of adding such a test, as I feel Exercism’s exercises aren’t about forcing an implementation, but about allowing a student to create a bad implementation and then refine it with help from mentors, approaches, and exploring other solutions.

What I would love to see as an alternative would be:

• An article/approach that explains why summing ranges isn’t the best way to do this and what a good approach is.
• Some R-analyzer code that automatically tells students that there’s a better way to do this when they submit.
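As an illustration only (I don’t know the real R-analyzer API, so the function below is hypothetical), such a check could start as simple source pattern-matching:

```r
# Hypothetical analyzer check: flag solutions that sum an explicit
# range, e.g. sum(1:n) or sum((1:n)^2). Not the actual analyzer API.
flags_range_summation <- function(source_code) {
  grepl("sum\\s*\\(\\(?1:", source_code, perl = TRUE)
}

flags_range_summation("square_of_sum <- function(n) sum(1:n)^2")        # TRUE
flags_range_summation("square_of_sum <- function(n) (n * (n + 1) / 2)^2")  # FALSE
```

A real analyzer would inspect the parse tree rather than raw text, but this shows the kind of feedback rule being proposed.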

> Firstly, hello and thank you for the suggestion - welcome to the forum!

Thank you for the welcome and the unreal level of community engagement!

> […] Exercism’s exercises aren’t about forcing an implementation, but allowing a student to create a bad implementation and then refine it with help from mentors, approaches and exploring other solutions.

Noted for the future, but I don’t think dictating implementation can always be avoided. I agree that this test would be inelegant and would go against the grain, especially for an Easy exercise, but I was going off Parallel Letter Frequency and, most particularly, Reverse String in the Rust track.

Reverse String in particular has a “bonus” test that you have to run locally, and its instructions direct you to the exact crate you ought to use. I feel like in some cases you can’t gain a benefit from doing things your own way, and you just need to be sent on a “fetch quest” to practice researching.

> An article/approach that explains why summing ranges isn’t the best way to do this and what a good approach is.

That is a good idea, but I feel the analyzer approach might be better, since it provides immediate feedback that something is wrong. That might be enough to induce users to submit a range-free iteration, pushing it to the top of the community solutions (so that users who cannot find a good approach even after being prompted by the analyzer can look there).

Plus, it’s better for UX to assume users dive headfirst into the exercise without reading and only pause at the feedback.

> Some R-analyzer code that automatically tells students that there’s a better way to do this when they submit.

I thought of that as well and would love to do it, but unfortunately I’m very unfamiliar with the Exercism ecosystem, analyzers in general, and R itself. I’m the triple (lack of) threat.

Sorry for the TED Talk; I’ll look into the analyzer when I have free time.


I would also be against having a test case that forces students to research a mathematical solution, for a couple of reasons:

• Many students are not comfortable with (or even dislike) math
• While ranges might not be optimal, you can still use them very elegantly here, and this exercise can serve as a nice, gentle introduction to them
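For instance (a sketch, not necessarily the track’s idiomatic solution), a range-based version is short and readable even though it is O(n):

```r
# Range-based version: O(n), but arguably a gentle introduction
# to ranges and vectorized arithmetic in R.
difference <- function(n) sum(1:n)^2 - sum((1:n)^2)

difference(10)  # 2640
```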

A good place to start would be the docs for building Exercism, and when you’re done reading those, take a look at the track-specific documentation for the track you’ve chosen.

You can write up an article/approach and then ask someone for help with drafting a PR if you’re unsure you can manage it yourself. Maybe take a look at another exercise that has an article/approach.
I usually hang around in the JS track, but I’m not against helping a fellow volunteer (if I can).

Adding tests that force some implementations to fail is usually a no-no, unless we have a very good reason for it. For example, Parallel Letter Frequency on the JS track had to have tests that rule out non-parallel solutions, because of the nature of the language and the point of the exercise.

I don’t see this as the same case, however, since we’re not necessarily aiming for a certain type/group of solution(s), so a restrictive test would be detrimental (IMO).