I posted in the general forum and was directed to the New Track documentation, which has led me to post here.
I am interested in developing and maintaining an Exercism track for Dyalog APL. Our organisation has begun work in the Dyalog/exercism repository on GitHub, created to start an Exercism track for Dyalog APL. We are considering how tests should be written and will finalise this if we are approved to create an official track on Exercism; we can then begin work on the test runner.
Until the test runner works, users will need to import the exercise folder into Dyalog and run the tests as described in Dyalog/exercism/blob/main/docs/TESTS.md.
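For reference, that local workflow is roughly the following (a sketch only; the actual commands and names are whatever TESTS.md specifies, so treat the ones below as placeholders):

```apl
      ⍝ Bring the downloaded exercise folder into the workspace via the Link user command
      ]LINK.Create # /path/to/exercise-folder
      ⍝ Run the tests shipped with the exercise.
      ⍝ 'Tests.Run' is a hypothetical placeholder; the real entry point is defined in TESTS.md
      Tests.Run ⍬
```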
Does that mean there can be no submissions on the track until the test runner is working? Presumably there is no other way to verify a correct solution.
Correct: there would be no way to verify a correct solution on the platform. We’d be asking students/coders to self-verify by setting up the language and running the provided tests locally, and that workflow would then need to be supported.
So … no, we really don’t want to be doing that. It’s far better to take the time to get the test runner done and tested before track launch.
Besides, you will need at least 20 exercises (with tests) ready before launch, and those exercises can be used to work out the kinks in the test runner.
And there is a bunch of other stuff on the site that doesn’t quite work if you take the test runner out of the process – like having verification of a passing solution in community solutions, and (I think) exercise locking/unlocking. There is more functionality I am probably forgetting.
One could submit locally using the CLI and then mark the exercise as complete without a test runner. However, without an active test runner, someone could submit invalid Dyalog APL code and publish it, which significantly reduces the value of browsing community solutions: you don’t know whether the code presented actually solves the exercise.
Also, I haven’t seen stats on online editor vs. CLI submissions, but I would not be surprised if a good number of students use the online editor preferentially. Without a test runner, you’re not going to be as accessible to students, and that might drive away potential interest at a critical juncture. If you add a test runner after launch, those folks aren’t guaranteed to come back.
A large part of the platform is the ability to test code directly from the website, so a test runner might well be a prerequisite. Jeremy would need to weigh in on whether a track can go live without one once he’s back from traveling.
We only launch tracks without a test runner when creating one is not possible. Is that the case here? If not, a test runner should be created. We have extensive documentation on building a test runner as well as building a track.
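For what it’s worth, the test-runner contract itself is small: the runner is a Docker image that is invoked with the exercise slug, the path to the submitted solution, and an output directory, and it must write a results.json describing the outcome. A rough sketch of that file (the test names here are made up; see the test-runner interface docs for the authoritative schema):

```json
{
  "version": 2,
  "status": "fail",
  "tests": [
    { "name": "Say Hi!", "status": "pass" },
    { "name": "Say Hi to a name", "status": "fail", "message": "expected 'Hi, Bob' but got 'Hi Bob'" }
  ]
}
```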
Oh, and I think an APL track would be a valuable addition, so consider it approved. Can you give me the GitHub usernames of the people who will be building the track?