I have been working on the Elixir track and love it. However, a common issue I encounter when using the online editor is that my solution may pass the tests, but then when I submit I get a warning about things I could do better (mostly related to naming). The fix is trivial, but I end up with a couple of basically identical iterations and have to remember to delete the first one for clarity.
It would be great if we could see those suggestions before the submission is complete. Maybe the dialog which warns you about them could offer “Submit anyway” and “Go back to editor” options, and if “Go back to editor” is selected the submission does not go through.
You do not have to delete the iteration; keeping it, along with a comment reflecting the changes made, is a useful note for your notebook.
Also, it has been discussed over the years that a diff view, shown by selecting two different iterations, would help clarify the differences, including things like whitespace changes.
If you got the feedback while you were writing the solution, would that help you remember to make the changes? By deleting the iteration that is not as it should be, are you removing some important information, namely “this is what I tend to do, and the next iteration is what I have determined I should do”? Over time, and in other exercises, these things will reduce as your level of experience increases and your opinions mature, and being able to see “I used to code like that, and now I code like this” has value.
I am neither for nor against this requested feature; I am only suggesting what value may disappear if it is put into place. But I do like the suggestion of “allow it to happen anyway, as it is a record of what happened, and I can make a new iteration that shows what I learned from the feedback, or stop the submission and let me correct it so that it appears that I do this without any prompting.”
Just to give some perspective, I do like having my iterations be different approaches, or improvements on an original approach. My idea is that I should learn something by comparing any two iterations.
In the case of Elixir, one of the warnings you get from the tooling is that unused parameters in a function should be named with a leading underscore. It is easy to forget when copy-pasting different clauses of the same function, and trivial to fix. If you develop offline, you get a warning about it when running the tests, so you can correct it before submitting; but if you develop in the online editor, you don’t get that comment until after having made the submission.
Or sometimes you have accidentally left in debug logging that Exercism detects and recommends removing.
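For anyone unfamiliar with the Elixir tooling, here is a minimal, purely hypothetical sketch of both situations (the module and function names are made up, not taken from any real exercise, and the exact warning wording depends on the Elixir version):

```elixir
# Hypothetical example of the two situations described above.
defmodule Greeter do
  # `name` is used in this clause, so the compiler is happy here.
  def greet(name, :polite) do
    IO.inspect(name, label: "debug") # stray debug call, easy to forget to remove
    "Good day, #{name}."
  end

  # Copy-pasted clause where `name` ended up unused; `mix test` (or any
  # compilation) prints something like:
  #   warning: variable "name" is unused (if the variable is not meant
  #   to be used, prefix it with an underscore)
  def greet(name, :casual) do
    "Hey there."
  end

  # The trivial fix: prefix the unused parameter with an underscore.
  # def greet(_name, :casual), do: "Hey there."
end
```

Running the tests locally surfaces the unused-variable warning before anything is submitted, whereas in the online editor the equivalent comment only shows up after the iteration exists.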
With my idea of what iterations are useful for, I do not feel these deserve their own iteration. I consider an iteration which is identical to another except for a stray debug statement to be not very useful.
I understand this may be particular to me, and that other people want to save iterations more freely. That is why I suggested that the user be given the chance to go back to the editor without having this last attempt count as an iteration. People could still decide to publish the iteration as-is, or to publish both versions.
I think my suggestion may be summarised as: “Display warnings to the user before their solution is added as an iteration, and let them choose if they want to create an iteration as-is or rework it”.
only suggesting what value may disappear if this is brought into place.
The way I see it, this value may already be getting lost. I am already deleting most of my iterations which have automatic warnings, in favour of almost-identical iterations that fix them. It is just more work and a bit more noise in the mentoring requests (as mentors may come across references to past iterations which no longer exist).
I do strongly suggest writing the exercises locally; you will have better control and visibility of things, and the described problem only solidifies this as “good advice” to me.
I don’t really see how this would help? Submitting would still create an iteration, with feedback exactly the same as when using the online editor.
In fact, @NachoGoro, this is really the problem with the system. We try to keep a consistent experience between students working locally and students working online. Because we can’t give feedback to students working on their local machines until they submit, we don’t give it to students working in the online editor either.
I agree with you that it’s not ideal, and that it would be nicer to get the feedback before the iteration is submitted. There’s one other problem though, which is that we don’t want our automated systems to be flow-blockers, because if we hit significant load they might be slow. So the feedback is designed to arrive asynchronously, later. It’s just that it’s generally really fast, so it feels like it could be synchronous (and it probably could be, a lot of the time).
Realistically this isn’t going to be something we change. Even if I decided it was the best decision to do so (and that is a realistic if), it would be a hugely deep-cutting change, as all feedback is based around iterations. The amount of effort I’d have to put into evolving it is not something I’m going to have time for in the foreseeable future.
One option we could consider would be that, if you do get feedback, there was a button saying “Delete iteration and return to editor” or something similar, so that step was done semi-automatically. But I would worry that would confuse people in general.
The benefit is that locally, while the feedback will only appear once the code is evaluated (that is true), the information is available before submission. As you mention later in this message, we attempt to give the same information in both places. If students are working locally and see the information before they submit, the fix can be made before the submission happens.
It is a matter of the timing of the information: getting it only once the iteration reaches the Exercism site, as opposed to getting it from the tooling before the code is submitted.
The OP was saying that when working in the editor, they get the information (as we expect) after submitting, and, by deleting the iteration and pretending that it never got there, they are emulating getting the information they would have had from the tooling before the code was saved on the remote and stored as an iteration.
In the case where they are working locally, they are still “submitting the code”, but only to the local tooling, until they are ready to “submit the code to Exercism and its tooling”.
But the tooling being discussed is the analyzer/representer, which only runs once someone has submitted a solution. So they never get the information before the submission happens.
In part, yes, but the quote above yours is also talking about the same information being available locally; to paraphrase, “when working locally, you will get a warning about it when running the tests”.
While it may include the analyzer and representer, we are also talking about tooling that is available locally, via testing.
Hopefully the analyzer and representer are reporting on things other than what the tests already report.
I think you misunderstood the OP. This conversation is about the feedback from the analysers/representers, not from the tests. So none of that is available locally. In both the online editor and locally, you only see that feedback once you submit an iteration.
I did not misunderstand the original post (OP); I had the advantage of having read it alongside the later message before I posted.
The original poster (OP) stated what I quoted and paraphrased, so, as they acknowledge, some of the information does come from the tests. For at least that amount of feedback, they have the benefit of being able to fix those things before submitting. And then, yes, there is feedback on the platform that does not mirror the local experience, which would be as you state:
But we do not provide machinery for that locally.
You are correct that not all of the feedback is available locally, so it cannot fully prevent the kind of submission they want to avoid. But I think that is fair, since the analyzer/representer is in place to do what a mentor would presumably do, and you would not get that without having submitted the code.
I also understand, from the post where they offered clarification, that they are more interested in having iterations that show different approaches rather than different ways of communicating code, such as name changes. They also say this is common feedback they get, which they apply before deleting the earlier iteration, wishing they had had that information before an iteration was created.
If they are working online and running the tests, I suppose they wonder whether (though it raises expense) the representer/analyzer could run at the same time, before submitting.
Just to clarify, I did mean the information from the analyser. However, in the case of Elixir, some of the warnings from the analyser (like naming unused parameters with a leading underscore) are also produced by the local tooling when running the tests, hence why I mentioned that when running locally you do get that warning (though for many others, like extraneous debug statements, you won’t).
Realistically this isn’t going to be something we change.
I understand the reasoning and it makes sense. I never noticed the feedback wasn’t synchronous! :)