Unhappy path tests for Kindergarten Garden exercise

Currently, the exercise does not test any unhappy paths, which means that it is possible to pass the exercise simply by using #method_missing, without having to use #respond_to_missing?. That is, the user can write code that has lots of unintended side-effects, whilst still passing the exercise and possibly not realising.

I feel like there is potential here to help teach the user about some of the risks of metaprogramming.

Some potential ideas for unhappy path tests:

  • If a non-existent student outside of the range is called (e.g. garden.peter), a NoMethodError is raised
  • If a non-existent student inside of the range is called (e.g. garden.alex), a NoMethodError is raised

Hi Harry,

In general we are not super interested in unhappy path testing, because it’s always arbitrary, but as you indicate, there may be something valuable to teach here in terms of metaprogramming.

In Ruby it is always a mistake to extend #method_missing without also extending #respond_to_missing?. I don't think we must enforce that the raised error is a NoMethodError, but I think two extra tests that assert_raises would be helpful for this particular exercise and that particular implementation, without making the existing tests harder.
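For illustration, here is a sketch of what a method_missing-based solution with a matching respond_to_missing? might look like. The roster and plant legend below are truncated assumptions for the example, not the exercise's full specification:

```ruby
# Minimal sketch: pairing #method_missing with #respond_to_missing?
# so the object neither silently answers unknown names nor lies
# about which names it answers.
class Garden
  PLANTS = { "V" => :violets, "R" => :radishes,
             "C" => :clover,  "G" => :grass }.freeze
  # Truncated roster, purely for illustration.
  STUDENTS = %w[alice bob charlie david].freeze

  def initialize(diagram)
    @rows = diagram.split("\n").map(&:chars)
  end

  def method_missing(name, *args)
    index = STUDENTS.index(name.to_s)
    return super unless index && args.empty? # unknown name -> NoMethodError

    @rows.flat_map { |row| row[index * 2, 2] }.map { |cup| PLANTS.fetch(cup) }
  end

  # Without this, garden.respond_to?(:alice) would wrongly return false,
  # which is the kind of unintended side-effect mentioned above.
  def respond_to_missing?(name, include_private = false)
    STUDENTS.include?(name.to_s) || super
  end
end
```

With this sketch, `Garden.new("VRCG\nVVRC").alice` returns `[:violets, :radishes, :violets, :violets]`, `respond_to?(:alice)` is true, and an unknown name such as `garden.peter` falls through to `super` and raises NoMethodError.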

cc @kotp @iHiD your input is appreciated

Hi @SleeplessByte, thanks for your reply.

I fully agree. Please disregard the NoMethodError, the assert_raises sounds sensible to me.

Since the exercise is about metaprogramming (termed dynamic programming in the appended description), I think that failing to use respond_to_missing? is feedback that should be given, either automated or by a mentor, if the student chooses that approach (method_missing).

Do you have a solution in mind for this? It should only be triggered if the student chooses to use method_missing rather than define_method or its flavors.

Presumably, as mentioned, there are no tests for students that do not exist. The behavior is "undefined".

Know that this exercise does have canonical data in problem-specifications, so we should consider if there is anything that we can do at that level to address this, and then work the customization in the Ruby Track.

Hi @kotp, thanks for your reply. :pray:

Do you have a solution in mind for this?

I don’t have a specific solution in mind, this was just something that I observed whilst doing the exercise.

I had wanted to share it here as an opportunity for me to join in with the Ruby and/or Exercism community, and to potentially give back to the site through contributing.


I understand that we may be slightly reluctant to test that certain methods should not exist, since that might restrict the user’s implementation. However, if it were me, I might lean towards:

  1. Adding a line or two to the problem description about giving consideration to the class’s public interface, i.e. about making sure to only expose certain methods. This could be phrased within the scenario, e.g. “When asking for the plants of any children who do not belong to the class, an error should be raised.”
  2. Adding at least 2 tests, possibly more, for checking that non-existent method names raise errors (with the aim of this being to prompt the coder to consider a clean, scalable approach, rather than simply adding exceptions on a per-method basis to pass those tests).

It should only be triggered if the student chooses to use method_missing rather than define_method or its flavors.

By taking this approach/mindset of considering the class’s public interface, perhaps this could be applied in all scenarios, rather than just for method_missing? implementations?

Playing around with define_method, it seems as though only expecting certain methods to be defined could be a reasonable expectation:

$ irb
irb(main):001:1* class Garden
irb(main):002:1*   CHILD_NAMES = %w[alice bob charlie]
irb(main):003:1*
irb(main):004:2*   class << self
irb(main):005:3*     CHILD_NAMES.each do |child_name|
irb(main):006:4*       define_method child_name do
irb(main):007:4*         child_name
irb(main):008:3*       end
irb(main):009:2*     end
irb(main):010:1*   end
irb(main):011:0> end
=> ["alice", "bob", "charlie"]
irb(main):012:0>
irb(main):013:0> Garden.alice
=> "alice"
irb(main):014:0> Garden.bob
=> "bob"
irb(main):015:0> Garden.charlie
=> "charlie"
irb(main):016:0> Garden.david
(irb):16:in `<main>': undefined method `david' for Garden:Class (NoMethodError)
        from /home/hgraham/.rbenv/versions/3.0.0/lib/ruby/gems/3.0.0/gems/irb-1.6.3/exe/irb:9:in `<top (required)>'
        from /home/hgraham/.rbenv/versions/3.0.0/bin/irb:23:in `load'
        from /home/hgraham/.rbenv/versions/3.0.0/bin/irb:23:in `<main>'

My response was about raising an exception if respond_to_missing? was not provided by the student when method_missing was used. It does not make sense to raise an exception because respond_to_missing? is not defined if they are using define_method instead, since the method names are fixed.

The class’s public interface will naturally raise an exception if the method does not exist in the public interface.

Yes, that is true, and I would expect this to come out during mentoring if it did not happen while using that approach. instance_eval may be another way to do this; this practice exercise only mentions a limited number of ways one might approach it, and that is not an exhaustive list of approaches.

I am curious, what did your mentor say when discussing things like this?

I am curious, what did your mentor say when discussing things like this?

I didn’t request mentoring on this problem, as I was content with my solution for this exercise.

I generally don’t request mentoring on Exercism, I just work through each exercise independently to develop my instincts, reflect on any community solutions, and refactor where I feel I can learn something and improve my approach. I’ve found that this approach really works for me.

I’ve personally requested mentoring a couple of times in the past, and found that it used a lot of my energy, and whilst I learnt a few pieces of Ruby magic, I didn’t really learn anything that contributed towards my goals at the time (the mentoring I received was focused around string and regex magic, and whilst I didn’t use the “ideal” code in my solutions according to the mentoring, my code was well-encapsulated, which was sufficient for my goals/needs as a Software Engineer).

I’m sure I’ll need to practice giving and receiving mentoring more at some stage in the future, but it’s not something I actively use at this time.

That’s getting a bit off-topic though.


I may have interpreted the tone incorrectly in your message, but my apologies if I’m not using the exercises, the mentoring, or the forum as they are intended to be used.

Thank you both for your comments. I’m happy to leave this topic for now if you feel it won’t lead to anything useful at this time, or if you feel that any future discussion/changes here would be too much effort for too little value. I’m beginning to feel that may be the case.

Thank you both for your comments. I’m happy to leave this topic for now if you feel it won’t lead to anything useful at this time, or if you feel that any future discussion/changes here would be too much effort for too little value. I’m beginning to feel that may be the case.

I think a change to this exercise can still be warranted. You, @kotp and I all agree that respond_to_missing? must be implemented if method_missing is used.

The point Victor (@kotp) made is that there are ways to define methods (e.g. define_method) where we would NOT expect respond_to_missing? to be implemented, and that is okay in that case.

That was not a disagreement with your observation or initial proposal, but merely a point that it's a bit harder than just "raise if the thing fails".

I don't think I understand at the moment, sorry.

Wouldn’t we want certain methods to fail (eg a non-existent student name) regardless of the approach?

We do!

As long as we check that whatever method of testing we choose doesn't interfere with other solutions that are acceptable.

So, for argument's sake, what would be the potential downsides of adding tests like the two below (with better names)?

  def test_failure_1
    garden = Garden.new("RC\nGG")

    assert_raises StandardError do
      garden.alexander
    end
  end

  def test_failure_2
    garden = Garden.new("RC\nGG")

    assert_raises StandardError do
      garden.maxine
    end
  end

Specifically (other than the subjective choice of the exception class), the idea for the exercise is metaprogramming, and those methods called in the blocks may exist at some point in time. The test does not make sense for the problem without an explicit requirement that those methods must not exist.


Not to push too hard on the mentoring, but mentoring can help to talk through these things on the track where the idea for the improvement came up. A mentor who is familiar with the exercise can help validate (or not) your ideas for many different reasons.

It is definitely a good site to practice on, but the reason for having mentors available is that, as learners, we can never see what we cannot see. So the mentor may bring things to our attention that we would not even think about.

Even requesting a mentor for TwoFer, which seems like "I am an expert at this problem, for sure, nothing else to learn here, moving along", can be a happy surprise when you learn something about the language that, had you skipped the mentor, you might not have reached for a long time, until a much more complex problem got in the way.

Of course, practice is valuable on its own, but mentors can multiply the value of that practice.

Specifically, given that this is a metaprogramming exercise, would these tests be guarded for when/if those names are in the list of available methods? Doable, and an interesting problem to solve, but only if that is considered valuable.
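One hedged way such a guard could work, sketched here with assumed names (check_undefined_raises and StubGarden are illustrative, not part of the exercise): only assert the failure while the name is genuinely undefined on the instance, and skip otherwise.

```ruby
# Sketch of guarding an "unknown student" assertion so it only fires
# while the name is genuinely undefined; a metaprogrammed class may
# later define the name, in which case the check is skipped.
def check_undefined_raises(garden, name)
  # Guard: respond_to? covers both regular methods and any
  # respond_to_missing? the student has implemented.
  return :skipped if garden.respond_to?(name)

  begin
    garden.public_send(name)
    :no_error_raised
  rescue NoMethodError
    :raised_as_expected
  end
end

# Minimal stand-in for the exercise's class; only #alice is defined.
class StubGarden
  def alice
    [:violets]
  end
end
```

Here `check_undefined_raises(StubGarden.new, :alexander)` yields `:raised_as_expected`, while `:alice` yields `:skipped` because `respond_to?` returns true. In a real Minitest suite the guard would translate into a `skip` before the `assert_raises`.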

Ah. I understand what you’re saying.

I think this is the key point:

The test does not make sense for the problem, without an explicit requirement prepared that those methods must not exist.

@harry-graham the test itself is perfectly fine, but we need to update the instructions as well to something along the lines of “Only students that have a plot should be queryable yadaydayada. No methods should exist for students that don’t exist yadaydayday”

@kotp thank you for taking the time to write that message, and for your patience in guiding me here, I really appreciate it.

I’ll try to make requesting mentoring a priority from now on whenever I attempt an exercise.

@SleeplessByte @kotp thank you both.

Like Victor said, it would likely be flagged in mentoring if the mentee had this issue in their solution, so please feel free to leave this as it is if we feel that is the simplest solution.

Even if we see value in the change, please still feel free to shelve this for now, especially if there are any remaining concerns around the solution of "adding tests and updating the description", or around how this would work with other languages.

Prior to Victor’s earlier message, I hadn’t realised that any changes to exercises in Exercism had to be done for all languages (I am not very familiar with Exercism development).

I also don’t want to cause anyone any additional or unwanted work here, so please do feel free to leave this topic alone for now, please don’t feel obliged to continue with it.

To be sure, not all exercises are like this. The exercises in problem-specifications may be in a track or not, at the track maintainer's discretion, and there may be exercises that are only available in one track (or in only a few) as well. It is not uniform, and it is pretty flexible. In addition, there are times when an exercise in Problem Specifications is modified for individual tracks.

Thank you for your interest in making Exercism a better place!
