Redundancy in repo (test support)

Why is it that the includes for Catch2 are copied in each exercise directory instead of having it in a single place in the repo or, probably even better, have CMake download it?

The same goes for the CMakeLists.txt files. Those are identical for all exercises (maybe not all of them are exactly identical, but it’s really hard to tell).

I can imagine it is because each directory needs to be self-contained to allow for easy downloading of an exercise. Is that the case? Wouldn’t symlinks be an option? How exactly does the downloading work? From a brief look at the CLI source code it looks like it goes through some kind of API, but I wasn’t able to quickly find where that is implemented.

  1. The API does not yet support “common” files for download.
  2. Each exercise is intended to work as a stand-alone program, with all the needed resources in the current directory.

Plus the test runner containers do not have internet access.

Thanks @IsaacG and @glennj!

Do you have a pointer to the API implementation? Maybe there is even a specification? I would like to understand better how the downloading works.

Also, how does the interaction between track repo and website work?

I very much think we should do this better. I have come across this situation multiple times now, where I wanted to make changes to such “common” files; having to touch a file in each and every exercise doesn’t feel right and also makes reviewing much harder than it should be.

@vaeng Would you be open to changing the way this is organized? Once I have a better understanding of the requirements and how the different parts interact, I could try to come up with a proposal.

The website and API are written in Ruby and live in this repo: GitHub - exercism/website: The codebase for Exercism's website.

The CLI download code lives here: cli/cmd/download.go at main · exercism/cli · GitHub

The API is not documented.

Exercism does not generally accept public/external PRs to that repo.

I would love to have all the “copy” files in the shared directory. According to Erik it is on his list and we cannot do anything to speed it up.

The test-runner does not use the Catch2 copy that comes bundled with the CLI download. So, lucky us, no redundancy over there.

My main issue with this is the effects of updating shared files. It will result in every solution on a whole track being marked as out of date, and needing retesting. That feels like a huge overhead risk in my eyes. To the extent I’d almost want to add a CODEOWNERS lock on that directory to ensure Erik or I sign those things off.

On Erik’s three points:

  • "configlet would have to be updated to also look for the files listed in the exercises/shared folder". I’d be happy for anyone to do this.
  • “we’d have to make sure the shared files are also considered when determining if an exercise is up-to-date.” This is one of the most complex areas of the website here so I want to be really sure it’s worth the effort. I’m getting the impression that it is, but I’d like that to be really shouted loudly at me :slight_smile:
  • “some API work (should be fine)” Agree with Erik - this bit is simple - a couple of hours work

We’d also need to think through the actual migration work of this. Just adding a shared directory and removing the shared file from each exercise would again result in potentially rerunning everything on the whole site. Which is obviously not acceptable. I think we can work our way around this using the “no important files changed” flags, but my level of wanting to be super-careful here is through the roof :slight_smile:
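For readers unfamiliar with the flag mentioned above: it is a marker included in the commit message itself, which tells the site not to re-test existing solutions for that change. A hedged sketch of how a maintainer would use it (the commit message text is made up, and the snippet only prints the `git` command instead of running it):

```shell
# The marker Exercism looks for in a commit message to skip
# re-testing existing solutions for that commit.
MARKER="[no important files changed]"

# Print the command a maintainer would run (echoed, not executed,
# so this snippet is safe to run outside a git checkout):
echo git commit -m "Sync shared test files $MARKER"
```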

So I think in summary, I’m pro the change. But it’s not trivial, it involves work that I need to do (that middle bit) and work that Erik needs to do, and right now I have little time (due to the whole trying to keep Exercism alive thing :)). If y’all tell me it’s worth the effort then I can treat it as a hack-weekend for myself at some point, and assign the rest to Erik. I won’t promise a timescale at this point though! :slight_smile:

I am all for a folder that needs approval by admins, because of the high cost changes to it have on the site.

That said, I wonder what files would go into that directory that would require the exercises to be re-run.

I only have my cpp and sqlite goggles with me, so I can only talk about those two tracks.
What I would want from that feature is a cleanup of the exercise folders, because they contain so much duplicate code, all of it test infrastructure. So if I want to update the test framework’s version, I would not want to trigger a rerun, yet I would have to change the same file in ~80 directories.
If I could do the same in one place, it would be a lot safer to see what has changed and to judge whether it would require a re-run of the submissions, which it should not (in my opinion).

Aside from that, it is all user code and never runs on the test-runner anyway (for cpp).

Unfortunately, I don’t have access to that area of Discord it seems, but it looks like @iHiD’s quote contains the gist of it.

Updating test support files should not change anything about the validity of a solution, so the “no important files changed” flag would probably be reasonable here. However, I wonder why we even need to re-test existing solutions? Is it so they can get a flag if they don’t pass anymore or is there more to it?

I’m not sure what the “proper” solution would look like, but I imagine it would allow one to put more files into the .shared folder that would be added to the download of an exercise? Maybe even allow for some configurability of this via .meta/config.json?

Until we have this, at least for the C++ track we could consider alternatives, such as letting the student provide a copy of Catch2 and adding a find_package() call for it. If we cannot find a copy locally, we download it via FetchContent. We could update the docs to explain how to provide Catch2 locally. Other tracks also require students to install something for test support, so that wouldn’t be that unusual.
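A minimal CMake sketch of that fallback, assuming Catch2 v3 and a test target named `exercise_tests` (the version tag and target names are illustrative, not what the track actually uses):

```cmake
# Prefer a locally provided/installed Catch2; otherwise fetch it.
# Version, tag, and target names here are illustrative only.
find_package(Catch2 3 QUIET)
if(NOT Catch2_FOUND)
    include(FetchContent)
    FetchContent_Declare(
        Catch2
        GIT_REPOSITORY https://github.com/catchorg/Catch2.git
        GIT_TAG        v3.5.2
    )
    FetchContent_MakeAvailable(Catch2)
endif()

add_executable(exercise_tests exercise_test.cpp)
target_link_libraries(exercise_tests PRIVATE Catch2::Catch2WithMain)
```

`find_package(... QUIET)` keeps the configure step silent when no local copy exists, so students who installed Catch2 themselves get an offline build while everyone else falls back to the network fetch.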

Symlinks could be another option. However, from the links @IsaacG provided I was still not able to figure out how the interaction currently works. Also, it is unclear to me how the website and the tracks’ GH repositories are synchronized. Could anybody shed some light on this? My hope is that the answer also settles the question of whether symlinks would work, because if they did, that would be a rather nice solution IMHO.

The concerns raised by @iHiD are valid for those approaches just the same, obviously. But with proper testing, setting the “no important files changed” flag could potentially be a way out here?

That is exactly my point!

Which is a bit of an issue, since the test runner uses a different version of the test framework, and in a different configuration. The CI tests also only make sure that the example solution passes the tests, but they don’t test the user-downloadable test infrastructure. I suppose that’s a different topic. But when trying to come up with some way of running solutions through the test runner (so we would catch issues like the recent one with a test name split over multiple lines), I noticed that the parser expects a different XML format than what the Catch2 version used in the CI tests produces. So that makes anything but literally passing each exercise through the test-runner container a bit difficult to implement. Simplifying the entire setup would go a long way here, and having an option for common files would be an important building block for such a simplification.

Yeah - and this is the risk. Because we have to presume that it does affect things (because it could), so we have to rerun everything unless we’re explicitly told not to (via the commit flag). An example would be updating to a new language version. In this case, solutions that previously failed might now pass (or vice versa for deprecations), so we’d need to retest things.

What we could do though (thinking out loud) is follow the same rules that we follow normally, where only changes in the test directory (or specified other files in the exercise - I can’t remember exactly) trigger test runs, and things like .docs are ignored. I can’t remember the exact logic for this, but we could add some extra flags maybe. More up-front work, fewer long-term headaches?

So that makes anything but literally passing each exercise through the test-runner container a bit difficult to implement

Why not do this? Many tracks do.

One of the reasons that this hasn’t got traction (from me!) is because there’s so much logic to get right here (conceptually, as well as in code). I think the first step is to all get alignment on exactly what you’d want from a maintenance POV, and then I can work out what that means in terms of work at the website end.

I suggest @ErikSchierboom weighs in on this with his “maintainer” hat on rather than his “the poor person who has to code half of this up” hat on first to help get that spec in place too.

But if we could get use-cases, and examples, in place, that would really help. What I’d suggest would be two forum threads:

  • Thread 1: has the rule of one post per person where everyone lists their use-cases with as much technical detail as possible (e.g. the files, the effects on test runs, etc).
  • Thread 2: A discussion thread (like this but clean) that people can discuss this in more depth.

I’ll ping all maintainers to contribute to those, and then at the end we’ll have all the use-cases in one place for me and Erik to wrestle with, with hopefully a lot of the wrestling already done by that point.

Does that sound good? If so, Erik can create them tomorrow (or @vaeng - you can tonight if you prefer :))


Background info:

I had to update the test-runner with a new Catch2 version to support “labels” for tests, so that they can be grouped in a way that is easy to translate into JSON for Exercism’s “version 3” requirements for test-runners.

I would have updated the student-facing files as well, but that would have required a rewrite of every single exercise’s testing structure and thus triggered a re-run of every submission on the c++ track.
Back then the “no important files changed” flag had not been in use, and I judged it to be less important to be on the same version.

I would like to have them on the same version now, but waited for an eventual “one place for all changes” solution before going any further.

I am all up for it, because it is a solution for a pain point in the c++ track.
If there are other people and tracks who could benefit greatly from this, we should go forward.

I think there might be other solutions to change our c++ “problems”, which do not need that .shared folder changes. So if that thread does not get any action, that is fine with me.


I have some thoughts/wishes here (although my needs around this are not as dire or complicated as C++’s, or potentially JavaScript’s) – but I will wait for the threads to be created.

I will quickly say that what I was intending the shared folder for would not in any way trigger re-tests, and is much more aimed at bringing local tooling more in line with how testing and reporting are already done on the website. Things like configuration tools for a student’s local environment. We could then avoid having students download them manually via instructions.


First of all, I do think this would be a useful feature, which is also exactly why we have a .shared directory already :)

The way I see this working is to treat everything in the .shared folder as being “virtual”, where we would imagine the contents are copied into every exercise directory. This would mean that you could (and should) refer to these files as if they were in the exercise directory. So a file in exercises/shared/package.json would be referred to from the .meta/config.json file as just package.json. I think that is what we should be aiming for.
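To make that concrete, a hypothetical `.meta/config.json` under that scheme might reference the shared file with no special prefix. This is only a sketch; the file names are made up and the exact schema keys may differ from what the track config actually uses:

```json
{
  "files": {
    "solution": ["hello_world.cpp"],
    "test": ["hello_world_test.cpp"],
    "editor": ["package.json"]
  }
}
```

The point being: nothing in the config would reveal that `package.json` physically lives in `exercises/shared/` rather than in the exercise directory itself.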