I’m not sure I completely understand reputation in automation yet, so maybe I’m completely off here - in that case, please forgive me!
What I’ve observed is that I get reputation for creating a new “Feedback” item, and I think I’ve read somewhere that one also gets reputation when one’s feedback is shown to a student.
I think the second one is fine. It might be nice if students could indicate whether the feedback was helpful, but that’s probably not strictly necessary, since mentors already “prove” they are good at mentoring through student feedback from the at least 100 sessions required to unlock automation.
The first one seems to be at odds with improving the representer: if one “merges” representations through new rules in the representer, that leads to fewer “Feedback” items to create. Of course everyone who can will still do it because it simplifies our lives, but in terms of reputation one would be “punished” for spending time improving the representer instead of copy/pasting the same comment n times. Probably not an issue, but could this be improved easily by making reputation depend on the number of solutions affected? E.g. something based on the share of affected solutions out of all solutions ever submitted in the same track (as opposed to e.g. a logarithm of the absolute count, which would under-reward work on a smaller track)?
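To make that a bit more concrete, here’s a rough sketch of what I mean. All the names and numbers below are made up by me for illustration; this is not how Exercism actually calculates anything:

```python
# Hypothetical sketch: reputation for a feedback item scales with the share
# of the track's solutions it applies to, rather than being a flat amount.

def feedback_reputation(solutions_affected: int, total_track_solutions: int,
                        base_rep: int = 5, max_bonus: int = 20) -> int:
    """Base reputation plus a bonus proportional to the fraction of the
    track's solutions that this feedback covers (0.0 .. 1.0)."""
    if total_track_solutions == 0:
        return base_rep
    share = solutions_affected / total_track_solutions
    return base_rep + round(max_bonus * share)

# Feedback that covers 300 of a track's 1,000 solutions earns more than
# feedback that covers only 3 of them...
print(feedback_reputation(300, 1_000))  # 5 + 20 * 0.30 = 11
print(feedback_reputation(3, 1_000))    # 5 + round(20 * 0.003) = 5

# ...and a small track with the same 30% share isn't penalised the way a
# logarithm of the absolute count would penalise it.
print(feedback_reputation(30, 100))     # also 11
```

Using the share rather than the raw count is the point: a log of the absolute number would still reward large tracks far more for the same relative impact.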
Probably not the most important thing and certainly not urgent, but I strongly believe incentives should be aligned with goals ;-)