How to Ignore some tests/tasks in testing code?

I’ve been working through the Python track for several weeks now. Many of the exercises have multiple functions that are regarded as tasks in the testing code. Is there a way to mark, or tell, the testing routines which tasks to run? I wish to test only one or two tasks at a time. It would really help cut down on digging through all the error messages of a task I am not working on yet.

Are you running the tests locally or in the WebUI?

When running locally via pytest, you can pass pytest specific tests to run. You can also specify which markers to enable for tests with markers.

When running in the browser based environment, you cannot limit which tests are run.


I am currently working locally, and I realize that I can pass pytest the “-k” or “-m” options. Say, for example, one of the testing functions is labeled
@pytest.mark.task(taskno=2) and the test function name is
test_count_failed_students(…). Supposedly I can execute pytest with the name of the marker using
pytest -m “something” to run the tests based upon marker names. Yet if I invoke
pytest -m “taskno = 2”, I get an “unexpected character ‘=’” error.
If I try
pytest -m “2”:
I get 6 items selected, 6 items deselected, and no tests are run.
I could also use the -k option, as in
pytest -k “taskno=2”, with the resulting “unexpected character ‘=’” error.
If I try pytest -k “2”,
again I get 6 items selected, 6 items deselected, and no tests are run.

  1. Which option (pytest -k or pytest -m) do I use?
  2. Is there an example of how to handle a default value for a marker? (In this example, taskno=2.) I cannot find any clear examples of how to do this.
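For what it’s worth, the reason -m “taskno = 2” errors out is that -m matches marker *names* with a boolean expression; the kwargs passed to the marker are not part of that expression language at all. A minimal sketch showing where the kwargs actually live (the test name is just the one from your example):

```python
import pytest

# the same shape as the exercise's test code (reconstructed for illustration)
@pytest.mark.task(taskno=2)
def test_count_failed_students():
    pass

# the kwargs live on the Mark object attached to the function;
# -m only ever matches the marker *name* ("task"), never taskno=2
mark = test_count_failed_students.pytestmark[0]
print(mark.name)    # prints: task
print(mark.kwargs)  # prints: {'taskno': 2}
```

So pytest -m “task” would select every marked test, but there is no built-in expression for filtering on taskno.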

There’s pytest docs that might be helpful! Working with custom markers — pytest documentation
I personally don’t use markers much so I wouldn’t know the correct invocation without playing with the command and reading through said docs ;)

I’ve read through that documentation, and that really didn’t answer my questions. But thank you very much for trying.

You could also ignore the markers and just pass pytest the test’s node ID (pytest test_file.py::test_function, or pytest test_file.py::TestClass::test_method for tests inside a class).
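In case the node-ID syntax is unfamiliar, here is a small self-contained sketch that builds a throwaway test file and selects a single test by node ID (the file and first test name are made up; it assumes pytest is installed):

```python
import pathlib
import subprocess
import sys
import tempfile

# a stand-in test file with two tasks (contents are hypothetical)
TESTS = """
import pytest

@pytest.mark.task(taskno=1)
def test_one():
    pass

@pytest.mark.task(taskno=2)
def test_count_failed_students():
    pass
"""

with tempfile.TemporaryDirectory() as tmp:
    test_file = pathlib.Path(tmp, "example_test.py")
    test_file.write_text(TESTS)
    # select a single test by node ID: <file>::<function>
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-q",
         f"{test_file}::test_count_failed_students"],
        capture_output=True, text=True,
    )
    print(result.stdout)  # reports "1 passed"; test_one is never collected
```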

Hi @Neptune8Triton :wave:

Apologies for the difficulties you are having selecting Pytest markers.

When we implemented those markers, we were only focused on using them to group and filter items within our online test runner, and so we used kwargs within the marker decorator.

Unfortunately, PyTest doesn’t have a default way of getting at those marker kwargs during a test run from the command line – it requires a bit of custom code to pull out the values and filter the test collection.

So Isaac’s suggestion of selecting things by test name might be easier for you. But if you’d like to dive into some PyTest scripting, read on. :smile:

Below is some code you can save into a file named conftest.py (PyTest only picks up these hook functions from a conftest.py or an installed plugin).

Placing it in the exercism/python folder should add the command-line option --task to PyTest.

You should then be able to run tests based on the task number by running the following (assuming you are in the exercise directory):

pytest <_pytest options_> --config-file=../../../ --task=<_task number here_> <_test_file_name_test_>.py

So running Pytest in verbose mode for task number 4 on Ghost Gobble Arcade Game would look like:

pytest -v --config-file=../../../ --task=4

Here’s the code:

def pytest_configure(config):
    # registering marker to avoid warnings
    config.addinivalue_line(
        "markers", "task(taskno): Mark a concept exercise task."
    )

def pytest_addoption(parser):
    # new filter option
    parser.addoption("--task", action="store", default=None,
                     help="Only run tests matching the exercise task number.")

def pytest_collection_modifyitems(config, items):
    # check if you got an option like --task=1.  Set to None if nothing passed
    specified_task = int(config.getoption("--task")) if config.getoption("--task") else None

    if specified_task:
        new_items = [item for item in items if item.get_closest_marker("task").kwargs["taskno"] == specified_task]
        items[:] = new_items

Please note that the script does not have any error handling or input validation, so passing more than one task at a time, or passing values that are not convertible to int() is going to toss a PyTest stack trace. :smile:

Making it more robust is on my to-do list…
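If you want to sanity-check the plugin before pointing it at a real exercise, the sketch below writes a conftest.py and a throwaway test file into a temporary directory and runs PyTest there (file and test names are invented; assumes pytest is installed):

```python
import pathlib
import subprocess
import sys
import tempfile

# the conftest.py plugin from above, reproduced verbatim
CONFTEST = '''
def pytest_configure(config):
    config.addinivalue_line("markers", "task(taskno): Mark a concept exercise task.")

def pytest_addoption(parser):
    parser.addoption("--task", action="store", default=None,
                     help="Only run tests matching the exercise task number.")

def pytest_collection_modifyitems(config, items):
    specified_task = int(config.getoption("--task")) if config.getoption("--task") else None
    if specified_task:
        items[:] = [item for item in items
                    if item.get_closest_marker("task").kwargs["taskno"] == specified_task]
'''

# two dummy tests, one per task number (hypothetical names)
TESTS = '''
import pytest

@pytest.mark.task(taskno=1)
def test_one():
    pass

@pytest.mark.task(taskno=2)
def test_two():
    pass
'''

with tempfile.TemporaryDirectory() as tmp:
    pathlib.Path(tmp, "conftest.py").write_text(CONFTEST)
    pathlib.Path(tmp, "example_test.py").write_text(TESTS)
    # only test_two (taskno=2) should be collected and run
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-q", "--task=2", tmp],
        capture_output=True, text=True,
    )
    print(result.stdout)  # reports "1 passed"
```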

Let me know if that works for you, and if you have any questions or issues.