Create new track for GDScript

The error handling business sure is tricky. From my own observation and my work on the PowerShell track (where I often took inspiration from the Python, JavaScript, and C# tracks), almost every exercise can have error cases simply from wrong input, but most of the time there were no tests for them.
And I understand why: quite frankly, there is no end to it if we have to chase down every single error edge case, so most of the errors that were raised were tied to the actual logic of the exercise itself.

For example, going back to the grains exercise, there is really no need to test for errors when the user input is not an int, or there are too many ints, because that is not really relevant to the exercise. However, the logic of the exercise would break if the number is out of range, like -1 or 65.
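For illustration, the range check tied to the grains logic might look like this (a Python sketch, since the thread predates the GDScript stubs; the error message text is made up):

```python
def square(number):
    # A chessboard has squares 1..64; anything outside that range
    # breaks the logic of the exercise, so it is worth an error test.
    if number < 1 or number > 64:
        raise ValueError("square must be between 1 and 64")
    return 2 ** (number - 1)
```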
Maybe for the time being, we can avoid them like you said.

About the error message, I understand the need to wrap values in apostrophes, but IMO you can just use 3 to signify an int, '3' to signify a string, and 3.0 for a float, for example. If you want to be more explicit, perhaps you could wrap them in a typeof() func or getype() method?

I was thinking about adding the type explicitly, but that would probably clutter the output … Using quotes for strings and the lack of quotes for ints makes sense, but I was worried about other types (Arrays, Dictionaries etc). However, on second thought, I think that our exercises won't use that many types, and the main ones should have no problem with just strings being wrapped in quotes. I'll try to implement this.

Other news: I just finished refactoring the test runner! :tada: The PR is still in draft, because I’d like to add 3 more tests:

  • string vs int output (as discussed above)
  • method that returns null (test runner interprets this as an execution error, but our users might accidentally create such methods, and we need to be prepared)
  • test for multiple failures (error messages are stored in stderr file and then read in Godot, I need to make sure that we don’t read output from previous tests).

That being said, the code is ready for review already! ;) Please check it and let me know of any issues/typos/possible improvements. I’ve also added a short description of the test file format (since we are using a custom unit test solution here), please check it as well if possible :smiley:

Hi! Quick update: I realized that this “return null if something went wrong” thing might actually cost us some valuable error messages, so now I’m trying a bit of a different approach to the test runner. The change will involve the test file format, but a lot of the test runner itself will stay the same. I would like to know, however, if we’re OK with test names like test_add_2_numbers in our final results.json file? (current implementation allows us to put any name we want, e.g. “Add 2 Numbers”, but the new implementation would probably use names of test methods)

Since Exercism encourages students to look at the test files to affirm/understand the problem requirements, I would say returning the names of the test methods in results.json would be helpful/important for their understanding.

We do happen to remove the underscores from test names in Python for results.json, but otherwise stick to the formatting of the test method names, and I think it would be perfectly fine if the names included underscores.
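For what it's worth, the transformation described above could be sketched like this (a Python sketch; whether the test_ prefix should also be stripped is an open detail):

```python
def display_name(method_name):
    # Underscores become spaces; the method name is otherwise kept as-is.
    return method_name.replace("_", " ")
```

So `display_name("test_add_2_numbers")` would give `"test add 2 numbers"`.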

Thank you @BethanyG ! :) I’ve started the refactoring process. In the end, I will squash all the commits, but for now there is a separate commit just for refactoring. Could you please take a look and check if the output format is satisfactory?

I’ve only changed 3 tests so far, but that should give a good idea of what the new tests will look like (both the test file format and the output JSON). There is also a weird trick for reading the error output from the stderr file in that commit. I might find a better way, but if not, I’m afraid we will have to keep it. It should not really break anything; it’s just that Godot doesn’t have good error handling mechanisms, so I have to improvise ;)

Alright, I’ve added all the missing tests, refactored the test runner, and improved error message creation ;) To sum up the recent changes:

  • test runner will now automatically run all methods in the test file that start with test_ (so, similar to e.g. pytest). Those methods will be called with the solution script as the only argument. This allows us to test any method from the solution (also variables, although there are no tests for them yet) and perform any setup steps (if needed). But, most importantly, this means that the method will be called “normally” and will log any error output, which we can later retrieve and show to our users. This will give them much more detail about what went wrong with their code! :slight_smile:
  • formatting of error messages has been adjusted a bit and now String values are wrapped in single quotes, but other values are not. This allows us to print a relevant error message if the expected and actual values look the same, but have different types (e.g. if our user’s method returns a String “3”, but we expect an int 3)
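The quoting rule from the second bullet, sketched in Python for illustration (the real runner is GDScript, and the function name here is made up):

```python
def format_value(value):
    # Only strings get single quotes; other values print bare, so a
    # returned "3" and an expected 3 are visibly different in messages.
    if isinstance(value, str):
        return "'" + value + "'"
    return str(value)
```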

I still have one TODO left, but I’ve updated the docs already. Please let me know if the runner in its current form is OK, and if the error messages are good enough (e.g. they have a lot of details like the name of the test script, I don’t know if that’s the standard way of printing errors in Exercism).

I will fix the TODO soon, but I will be a bit busy for the next 2 weeks, preparing for GodotCon 2023 (as a speaker). I think I will still have some time to implement the fixes you will suggest, as I want this track to start soon! ;)

I’ve fixed the last TODO and removed “draft” status from the PR. @ErikSchierboom already approved it, but that was before some major changes (still in draft), so I would be grateful for another review ;) If there is no feedback, let’s merge it! :)

I’ve just approved the PR! Fantastic work. Now on to adding exercises to the track!

Will we be using the test runner for the CI? If that’s set up, I think it’s time to go for it and I can port some more exercises for the track.

I hope so, but I haven’t set it up yet. I assume that, since Dockerfiles are available, we can just plug a Docker container into our CI process? Would that be difficult to configure?

Hello again!

I am a bit overwhelmed at the moment, but I haven’t forgotten about the GDScript track ;) I tried to add some config to the config.json file to enable tests for the “Hello, World” exercise, but apparently the paths are wrong. I used the Python track as inspiration, but I guess the configuration there is much more extensive (or I’ve missed some crucial steps for setting up a track).

I will take another look later this week, but I would be very grateful for any help you can offer, especially from people familiar with starting a track from scratch (perhaps @Meatball ?).

I can take a look this week. My Pyret track is getting close to launch so I’ve got some experience as its primary contributor. We can talk here or on Discord if you want. My account is the same name there as well.

Inside the status object, test_runner needs to be true once there’s a test runner Docker image set up. Once that key is true, you need to add a test_runner object to the root, and that needs an average_run_time with an integer value.

The files array contains the paths (relative to an exercise folder) for the stub, the test file, and the example (for a practice exercise) or exemplar (for a concept exercise) implementation file, for CI purposes. That’ll help when we begin porting exercises. Otherwise, we need to fill those values out by hand, which is a little error-prone and won’t make configlet happy.
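To make that concrete, here is a rough sketch of the two pieces (the file names and the run time are placeholder assumptions; the Exercism track spec is the authority). In the track-level config.json:

```json
{
  "status": { "test_runner": true },
  "test_runner": { "average_run_time": 2 }
}
```

And in an exercise’s .meta/config.json, the files object with its path arrays:

```json
{
  "files": {
    "solution": ["hello_world.gd"],
    "test": ["hello_world_test.gd"],
    "example": [".meta/example.gd"]
  }
}
```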

Thanks @BNAndras , I’ve updated the files accordingly, and also noticed that, while copying the config from Python, I forgot to change the file extensions :man_facepalming: Which explains why the CI couldn’t locate the files :stuck_out_tongue:

I’ve checked Docker Hub and the GDScript test runner image is there. However, the CI step is stuck at this message:

Requested labels: exercism/gdscript-test-runner
Job defined at: exercism/gdscript/.github/workflows/test.yml@refs/pull/3/merge
Waiting for a runner to pick up this job...

Is this expected or did I miss something?

I’m fairly new to GitHub Actions, but I don’t think the image name goes under runs-on. That’s for specifying the host OS, which would be tied to a specific version of Ubuntu, Windows, or macOS. I think you want container: exercism/gdscript-test-runner right underneath that.
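Roughly like this, if I understand the Actions syntax correctly (the job name and checkout version are just examples):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest                    # the host OS for the job
    container: exercism/gdscript-test-runner  # steps run inside this image
    steps:
      - uses: actions/checkout@v3
```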

Yeah, that fixes the issue. Next, you probably want to mount the repo inside the Docker image so it has access to the exercise implementations. Then you can loop over the mounted location and invoke the image’s bin/run.sh for each exercise. For Pyret, I renamed the exercise’s stub, copied the example solution into the exercise folder as the stub, ran the tests, and then undid the changes. That wasn’t with a Docker image of course, but I’m sure you can do something like that.
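A hypothetical shell sketch of that per-exercise swap (the paths, the .meta/example.gd location, and the bin/run.sh arguments are all assumptions):

```shell
# For one exercise folder: set the stub aside, test the example
# solution in its place, then restore the stub afterwards.
run_exercise_tests() {
  dir="$1"
  slug=$(basename "$dir")
  mv "$dir/$slug.gd" "$dir/$slug.gd.orig"     # set the stub aside
  cp "$dir/.meta/example.gd" "$dir/$slug.gd"  # example becomes the solution
  bin/run.sh "$slug" "$dir" "$dir"            # assumed runner entry point
  status=$?
  mv "$dir/$slug.gd.orig" "$dir/$slug.gd"     # restore the stub
  return $status
}
```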

Thank you @BNAndras , that was it! Using your advice, and taking some inspiration from Python and Crystal tracks, I managed to run the tests for “Hello, World” exercise! :tada:

The current solution will iterate over all concept and practice exercises and use our GDScript test runner to validate them. If the resulting .json file does not have a status of “pass”, the validation will fail (and print results.json for more details).
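The pass/fail check described here can be sketched in a few lines of Python (the results.json schema is simplified to just the top-level status key):

```python
import json

def check_results(path):
    """Return True if the test runner reported an overall pass."""
    with open(path) as f:
        results = json.load(f)
    if results.get("status") != "pass":
        print(json.dumps(results, indent=2))  # surface details on failure
        return False
    return True
```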

There is probably still a bit of config/documentation left, as I only focused on configuring the GitHub action, so we can have automated CI for the GDScript track. Nevertheless, I changed the status of my PR from “draft” to “ready for review” :wink: Please feel free to point out any information I still need to provide for our first exercise; otherwise, I’ll check the remaining TODOs and the Configlet docs to see what is missing.

Great, I’ll update my existing PRs once that one is merged.

Wow, @BNAndras , you’re on fire, adding one exercise after another! ;) Fantastic job, thank you for taking care of this :+1:

Apart from exercises, we will need some info and config about the track itself. I think there are already some PRs for that created by @Meatball . Could you please let us know about the status of them?

The key features should be good to go as soon as I’ve fixed the comments you left.

Installation at the moment only instructs how to set up Godot as a command-line application for Linux and Windows. I don’t know how it is intended to be executed locally; if the Docker container is meant, then that instruction isn’t needed. But my experience is that Docker containers, the way they are used on Exercism, are not compatible with WSL at all (that could have changed, though). And Docker on Windows is meh. I could finish the install page, but I would need some more info on how it needs to be set up.

As for the two-fer exercise, I could rebase that as Andras suggested.