Creating an Idris Track

@isberg I’ve also reviewed your test runner PR.


Thanks @ErikSchierboom! I see that the only option to automatically merge a PR in the idris repo is squash merging, which I am not fond of. Are there guidelines anywhere? It seems that merging manually is a suggestion, so I might try that. My goal is to avoid confusion in the future and avoid losing information. Thoughts, anyone?

We have set squash merging as the default for all our repos. We’ve found that it helps keep histories sane.

The way we do that on Exercism is by having pull requests do just a single thing. That way you’ll still keep things separate, whilst also getting all the benefits that come with smaller PRs (easier to review, quicker to merge).


CODE RUN is part of the feedback in the Exercism online environment. It shows the code from the test file that is run for each test case. That way students can see all the arrangements, actions, and assertions that are part of the test case, even in concept exercises. See the test runner interface docs for details.


PR: Use idris2 tooling (in test runner) has been updated based on feedback from Erik and everything is green. If I hear no objections I will merge it tomorrow. :slight_smile: Next I plan to look into the flawed tests @keiraville reported. I have seen that the existing test cases in some exercises differ from the ones in the problem-specifications repo, and not in a good way; I will keep that in mind while updating the tests for the misbehaving exercises.


All good to merge.

You have mentioned a plan to work on a test generator. Perhaps a first step would be to pick (or add) one or two exercises with exemplary tests.


The workflow failed for the test runner. :disappointed:
Merge pull request #4 from isberg/feature/use-idris2-tooling · exercism/idris-test-runner@4be628b · GitHub

That is to be expected. We need to do some work to enable it. We’ll get on it.


I’ve fixed the Docker part of the deploy. Once Add idris test runner by ErikSchierboom · Pull Request #113 · exercism/terraform · GitHub is merged, you can re-run the deploy workflow and then ECR will also work.


I have created PR: Align leap tests with canonical-data, where I manually updated the tests to match the data in the problem-specifications repo. I see also that there is a related local file, tests.toml. I am pondering how one should proceed if one wanted to generate the tests from the problem-specifications repo. The tests.toml does not contain test data; it just specifies which cases should be included. It seems sketchy to depend on an external repo when running the tests. Somewhat boring, but with less magic, would probably be to generate the tests and then check them in. Thoughts?
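For context, a tests.toml entry only carries a case UUID and its description, not the test data itself. A typical entry looks roughly like this (the UUID below is made up for illustration):

```toml
# exercises/practice/leap/.meta/tests.toml (illustrative sketch)
[8df24b58-0000-0000-0000-000000000000]
description = "year not divisible by 4 in common year"
```

The actual inputs and expected values live in canonical-data.json in the problem-specifications repo; tests.toml just records which cases the track has adopted.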

There is also: Feature/align rna transctiption with canonical data by isberg · Pull Request #137 · exercism/idris · GitHub
which kind of indicates that the tests for all exercises should be streamlined. So I guess the next step would be to pick an exercise, perhaps one that does not yet exist on the track, and try to generate the tests from the problem-specifications repo.

Align rna-transcription with canonical-data by isberg · Pull Request #138 · exercism/idris · GitHub is a better PR @keiraville

That’s what the configlet tool helps you do. Configlet | Exercism's Docs

Some tracks do auto-generate the tests directly from the specs, e.g. Python and jq.


Test Generators | Exercism's Docs seems like a good resource too. :slight_smile:

I’m not sure I’m understanding, I think. The purpose of the tests.toml file is to indicate which cases will be picked up and generated from the problem-specifications repo. Once the test file for an exercise on a track is generated, it is checked into the track. Tests are run from the track – not from problem specifications. Problem specifications is only used during the generation phase, as the maintainer updates things. At least that’s how the Python track’s test generator works:

  1. configlet is run to see if there are any problem-specifications changes to documentation or test data for various exercises.
  2. configlet is then used to sync metadata, documents, and any pending test case changes. These synced changes are made to the specific track repo.
  3. For test cases, telling configlet to accept a test case enters that test case into the tests.toml file for the given track/exercise.
  4. Running the test generator for the track/exercise then regenerates the entire test file for the exercise, incorporating the test cases listed in tests.toml, along with their test data from problem specifications.
  5. The changes made through configlet sync, as well as the regenerated test file for the exercise, are then checked into the track repo.

The reason that a test generator is so handy is that the process is mostly automated. Without a generator, the test file has to be written manually, and care needs to be taken to keep a record of which tests have been implemented, and which haven’t, for a particular track and exercise.