Chris NeJame (SalmonMode), SDET at Remesh

SalmonMode/PyPCOM 7

Python page component object model for Selenium

SalmonMode/contextional 2

A functional testing tool for Python

truveris/flake8-truveris 2

Flake8 extension for checking Python code against Truveris's code style guide

SalmonMode/example-product-challenge 1

A simple exercise to demonstrate how changes in product implementation can impact test maintainability

SalmonMode/pytest-django 1

A Django plugin for pytest.

SalmonMode/vscode-python 1

Python extension for Visual Studio Code

SalmonMode/advanced-markdown 0

Learn about advanced markdown techniques

SalmonMode/btt-mattermost 0

Touchbar utility for Mattermost mentions

SalmonMode/flake8-commas 0

Flake8 extension for enforcing trailing commas in python

SalmonMode/flake8-quotes 0

Flake8 extension for checking quotes in python

fork event SalmonMode/mocha

☕️ simple, flexible, fun javascript test framework for node.js & the browser

https://mochajs.org

fork time in a day

push event SalmonMode/pytest

Ran Benita

commit sha a1f841d5d26364b45de70f5a61a03adecc6b5462

skipping: use pytest_runtest_call instead of pytest_pyfunc_call

`@pytest.mark.xfail` is meant to work with arbitrary items, and there is a test `test_mark_xfail_item` which verifies this. However, the code for some reason uses `pytest_pyfunc_call` for the call phase check, which only works for Function items. The test mentioned above only passed "accidentally" because the `pytest_runtest_makereport` hook also runs `evalxfail.istrue()`, which triggers an evaluation, but conceptually it shouldn't do that. Change to `pytest_runtest_call` to make the xfail checking properly generic.

Ran Benita

commit sha 6072c9950d76a20da5397547932557842b84e078

skipping: move MarkEvaluator from _pytest.mark.evaluate to _pytest.skipping

This type was actually in `_pytest.skipping` previously, but was moved to `_pytest.mark.evaluate` in cf40c0743c565ed25bc14753e2350e010b39025a. I think the previous location was more appropriate, because the `MarkEvaluator` is not a generic mark facility; it is explicitly and exclusively used by the `skipif` and `xfail` marks to evaluate their particular set of arguments. So it is better to put it in the plugin code. Putting `skipping`-related functionality into the core `_pytest.mark` module also causes some import cycles which we can avoid.

Ram Rachum

commit sha dd446bee5eb2d3ab0976309803dc77821eeac93e

Fix exception causes all over the codebase

Ran Benita

commit sha 3e6fe92b7ea3c120e8024a970bf37a7c6c137714

skipping: refactor skipif/xfail mark evaluation

Previously, skipif/xfail marks were evaluated using a `MarkEvaluator` class. I found this class very difficult to understand. Instead of `MarkEvaluator`, rewrite using straight functions which are hopefully easier to follow. I tried to keep the semantics exactly as before, except improving a few error messages.

Gleb Nikonorov

commit sha ac89d6532a8c1f652f6a68c0b9caad80cde0042f

replace stderr warnings with the warnings module

Gleb Nikonorov

commit sha a9d50aeab671bd67d58abb19a1839736cb4a7966

remove extra whitespace

Gleb Nikonorov

commit sha fe68c5869866149b1c00d6280f3e491883d0b7e9

add test_warn_missing case for --assert=plain

Gleb Nikonorov

commit sha 33de350619cf541476d1ab987377f9bc2f06179f

parametrize test_warn_missing for a cleaner test

Ran Benita

commit sha c9737ae914891027da5f0bd39494dd51a3b3f19f

skipping: simplify xfail handling during call phase

There is no need to do the XPASS check here; pytest_runtest_makereport already handled that (the current handling there is dead code). All the hook needs to do is refresh the xfail evaluation if needed, and check the NOTRUN condition again.

Ran Benita

commit sha 7d8d1b4440028660c81ca242968df89e8c6b896e

skipping: better links in --markers output

Suggested by Bruno.

Ran Benita

commit sha b3fb5a2d47743a09c551555da22da27ce9e73f41

Type annotate pytest.mark.* builtin marks

Ran Benita

commit sha 27492cf7a0cf680181f496e6b3235b8c549dbd54

Merge pull request #7379 from bluetech/typing-builtin-marks

Type annotate pytest.mark.{skip,skipif,xfail,parametrize,usefixtures,filterwarnings}

Ran Benita

commit sha 99d34ba0292e796a8be826c74c8455039f6a9b6f

Merge pull request #7388 from bluetech/mark-evaluate

skipping: refactor mark evaluation

Ran Benita

commit sha 83891d9022076375cede03bfd8c932d450e6fcf8

Merge pull request #7387 from cool-RR/2020-06-11-raise-from

Fix exception causes all over the codebase

Ran Benita

commit sha 4655b7998540d47e6f8dd783c82b37588719556d

config: improve typing

Ran Benita

commit sha 04a6d378234e3c72055c7e90084b1a2d36d3f89d

nodes: fix string possibly stored in Node.keywords instead of MarkDecorator

This mistake was introduced in 7259c453d6c1dba6727cd328e6db5635ccf5821c.

Ran Benita

commit sha 8994e1e3a17bd625e0c258d0a402062542908fe3

config: make _get_plugin_specs_as_list a little clearer and more general

Ran Benita

commit sha c6f4c2e5c64450c6e68129db44eb37dc329549a9

Merge pull request #7402 from bluetech/fix-nodes-keywords-typo

nodes: fix string possibly stored in Node.keywords instead of MarkDecorator

Ran Benita

commit sha 3624acb665b07a96cf0217932034aab34bcb9afd

Merge pull request #7401 from bluetech/typing-config

config: improve typing

David Diaz Barquero

commit sha 617bf8be5b0d5fa59dfb72a27c66f4f5f54f7e26

Add details to error message for junit (#7390)

Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

push time in a day

issue comment pytest-dev/pytest

[Feature] Pytest should have its own native solution for using several asserts in one test case

@GeorgeFischhof I'd run with my example using multiple tests within a test class. Since you have 30 comparisons you need to make, I would include a test method for each one. This way, each test method can be named according to what it's checking, which might provide the explanation you're looking to give. Or you can take the assert x == y, "explanation" approach.

I'm working on some updates for the docs currently, and I've included an explanation of this class-based approach. You can find it here.

I'm not quite certain which is best for your use case, but another approach is to rely on the __repr__ produced by your dataclasses by having an expected instance and comparing against that. Pytest will try to highlight where the differences are. For example, using your MyData class:

def test_fruit():
    assert MyData(apple=2, banana="yes") == MyData(apple=3, banana="no")

Might produce a failure message like this (I'm on mobile, so this is just a rough approximation):

-<MyData apple=2 banana='yes'>
               ^         ^^^
+<MyData apple=3 banana='no'>
               ^         ^^
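To make the class-based approach concrete, here's a minimal sketch (the MyData dataclass and the class-scoped result fixture are assumptions standing in for your actual data and the operation under test):

import pytest
from dataclasses import dataclass


@dataclass
class MyData:
    # stand-in for the dataclass from your example
    apple: int
    banana: str


@pytest.fixture(scope="class")
def result():
    # run the operation under test once, shared by every test in the class
    return MyData(apple=2, banana="yes")


class TestMyData:
    # each comparison gets its own named test method, so every failure is
    # reported independently under a descriptive name
    def test_apple_count(self, result):
        assert result.apple == 3, "expected three apples"

    def test_banana(self, result):
        assert result.banana == "no", "expected banana to be 'no'"

Because the fixture is class-scoped, the expensive part runs once, but you still get 30 individually named results instead of one assert that stops at the first mismatch.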
GeorgeFischhof

comment created time in 4 days

push event SalmonMode/pytest

Chris NeJame

commit sha dab0f1ba1a8f71a8562b51e72cdc6efcb09b6d9e

Added small note encouraging use of more fixtures

push time in 8 days

pull request comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

@nicoddemus @bluetech I restructured it quite a bit and added a bunch more introductory material before diving into the details. I tried to go over the basics first, and then ramp up to more advanced concepts. Let me know what you think 😁

SalmonMode

comment created time in 10 days

push event SalmonMode/pytest

Chris NeJame

commit sha 99f249df37ca4c8d05d675fed00c3e457f0d6dbe

moved/added introductory material to the beginning of the doc

push time in 10 days

issue comment pytest-dev/pytest

Dynamic test suites

I'm currently reading through the paper you published on it last year to get a better idea of the testing methodology.

In section 4 of the paper, you mention segments, checkpoints, and the "INIT". Is the INIT effectively the result of each call of the test suite? As in, is this where the seed is generated and the structure of the test suite is determined?

mciepluc

comment created time in 13 days

issue comment pytest-dev/pytest

Dynamic test suites

This comes up occasionally, and isn't possible yet. But I always recommend against it anyway.

Ideally, tests are repeatable, and the steps and data they use are apparent just from reading them. Randomized inputs can easily make tests nondeterministic, or have no actual impact on what behavior is being tested. In the latter case, it renders parameterization based on randomized sets moot, since no new behavior is being tested on top of the tests not being easily repeatable (so it's kind of a double whammy).

This doesn't mean randomized input is inherently bad. For example, I randomize the emails of my generated users to ensure they're unique. But if it changes the behavior under test, then you're better off making sure each set of data that would trigger different behavior is built in, so it can always be run.

But I find in general that if all the tests are defined up front, things become much easier in pytest and similar frameworks.

Ideally, tests shouldn't be run based off of other tests, either. That introduces confounding variables, since the tests are no longer engineered solely around the behavior you want them to test; they're being made to have unnecessary dependencies. If tests aren't dependent on things they don't fundamentally need, then there are fewer things involved that could be borked and cause your tests to error out before actually being able to perform the test, which means you get a clearer picture of all the things that are broken, sooner.

Sometimes, in more complex systems/tests (e.g. e2e tests), you may have dependencies that you just can't mock, because you have to engage with those systems to prepare for the test (e.g. using the backend's API to generate a user to log in as). This can usually be taken advantage of, though, to "kill" tests early by relying on simple validation that you would have built into your tools anyway. For example, I make an API client to abstract how I deal with my backend's API, and its methods parse the response, often into custom data types. So if I try to log in through that client and the response isn't good, or isn't even parsable, an error gets thrown, causing the test to error out early and saving a ton of time (I also get a convenient error message telling me it failed to log in, so I don't have to go digging around for the cause). If several tests need to log in through the API to work, they'll all error out very quickly, so it's effectively the same as not running those tests when the test that checks logging in has failed (the run time of the test suite would differ negligibly between the two cases).
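As a rough sketch of that pattern (all names here, like ApiClient, LoginError, and the endpoints, are hypothetical, not from any particular library):

import uuid

import requests


class LoginError(Exception):
    pass


class ApiClient:
    def __init__(self, base_url):
        self.base_url = base_url
        self.session = requests.Session()

    def create_user(self):
        # randomized email so every generated user is unique
        email = "user-{}@example.com".format(uuid.uuid4())
        response = self.session.post(self.base_url + "/users", json={"email": email})
        response.raise_for_status()  # error out early if the backend is broken
        return response.json()["email"]  # an unparsable response also fails here

    def log_in(self, email):
        response = self.session.post(self.base_url + "/login", json={"email": email})
        if response.status_code != 200:
            # a descriptive failure now, instead of a confusing one later
            raise LoginError("login failed for {}: HTTP {}".format(email, response.status_code))

Any test or fixture that goes through log_in dies immediately with a clear message if logging in is broken, which is the "killing tests early" effect described above.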

That said, you may be looking to do something called "property-based testing", in which case I recommend Hypothesis.
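For instance, a minimal property-based test with Hypothesis could look like this (sorted_dedupe is a made-up function standing in for the code under test):

from hypothesis import given
from hypothesis import strategies as st


def sorted_dedupe(items):
    # toy implementation under test
    return sorted(set(items))


@given(st.lists(st.integers()))
def test_sorted_dedupe_is_ordered_and_unique(items):
    result = sorted_dedupe(items)
    assert result == sorted(result)  # output is ordered
    assert len(result) == len(set(result))  # and has no duplicates

Hypothesis generates the input lists itself, shrinks failures down to minimal examples, and remembers failing cases between runs, which addresses the repeatability concerns above better than ad hoc randomization.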

If you provide some more context, I'd be happy to help come up with a more complete solution (provided Hypothesis wasn't what you were looking for). 😁

mciepluc

comment created time in 14 days

push event SalmonMode/pytest

Chris NeJame

commit sha 8f1d6d33d45e5c40cc77344b888d590b06757f7f

add order number labels

push time in 14 days

pull request comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

@bluetech I can move the autouse fixture section up and revamp things a bit to cover the basics better. Should that be done in this PR, or should I make a separate one?

SalmonMode

comment created time in 14 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

 containers for different environments. See the example below.

     def docker_container():
         yield spawn_container()

+.. _`fixture order`:
+
+Fixture instantiation order
+---------------------------
+
+When pytest wants to execute a test, once it knows what fixtures will be
+executed, it has to figure out the order they'll be executed in. To do this, it
+considers 3 factors:
+
+1. scope
+2. dependencies
+3. autouse

I mentioned it above, but perhaps the description of what autouse fixtures are should be moved up?

SalmonMode

comment created time in 14 days

push event SalmonMode/pytest

Chris NeJame

commit sha 1972f11a722c008bc276b2bd51b49fb0512ecd47

changed folder to directory

push time in 15 days

push event SalmonMode/pytest

Chris NeJame

commit sha 5554a99675b9406794f32d084c2cc20a60a8fbe6

Update doc/en/fixture.rst Co-authored-by: Ran Benita <ran@unusedvar.com>

push time in 15 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: the proposed "Fixture availability" section of doc/en/fixture.rst, at the paragraph beginning "You can have multiple nested folders/packages containing your tests".)

👍

SalmonMode

comment created time in 15 days

push event SalmonMode/pytest

Chris NeJame

commit sha a9fd9b0f059b5d994e495f60146d9459a00aa4ac

simplify language

push time in 15 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: the proposed "Fixture instantiation order" section, at the heading "Fixtures of the same order execute based on dependencies".)
Looks good! I'll modify it

SalmonMode

comment created time in 15 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: the proposed "Fixture availability" section, at the line "A fixture is only available for tests to request if they are in the scope that fixture is defined in".)

I see the distinction you're mentioning, but in this case, I think it's helpful for the user's mental model to have it be used both ways. The scopes draw boundaries between where fixtures can be used/re-used as well as where they can be defined; it's two sides of the same coin. The visual aids I put in are meant to help illustrate this.

SalmonMode

comment created time in 15 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: the "Fixtures from third-party plugins" subsection, at the sentence explaining that pytest steps out through scopes and reaches plugin-defined fixtures last.)

This isn't about which fixture gets used or the order they're used in. This is just meant to explain availability.

SalmonMode

comment created time in 15 days

push event SalmonMode/pytest

Chris NeJame

commit sha 714c1689ebd0cbfe108fbbab0008ae75b8fa213d

Update doc/en/fixture.rst Co-authored-by: Ran Benita <ran@unusedvar.com>

push time in 15 days

push event SalmonMode/pytest

Chris NeJame

commit sha 5a7b788c9481a92a47f18c5f856f40ca1376afb1

Update doc/en/fixture.rst Co-authored-by: Ran Benita <ran@unusedvar.com>

push time in 15 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: the "Autouse fixtures are executed first within their scope" subsection, at its opening sentence about autouse fixtures applying to every test that could reference them.)

Great suggestion! I like this phrasing better, since it doesn't imply they are run before other fixtures that are implicitly made into autouse fixtures because autouse fixtures requested them, so it's less confusing.

SalmonMode

comment created time in 15 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: the "Fixtures from third-party plugins" example showing the plugin_a/plugin_b fixtures and the diagram of the test's fixture search.)

Good idea! I'm trying to keep the SVGs a little compact so the text stays visible, but I'll toy around with it a little to see how I can do this consistently for all the availability examples (I'm thinking of using a mask to create a buffer around the number so I can have it on the circle stroke itself, e.g. ---- 3 ----)

SalmonMode

comment created time in 15 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: the "Fixture availability" section, at the sentence "So when they run, outer will have no problem finding inner, because pytest searched from the tests' perspectives".)

I'm sure many users will read this from top to bottom once through, or maybe even a few times. But I partially wrote this from the perspective of someone who is coming back to the docs to find out the details of some behavior/system or to get ideas about how to structure their tests/fixtures.

I think the only reason this isn't something the user will likely encounter is that this behavior isn't spelled out in the docs. Some users may hold the belief that if a fixture isn't available from the perspective of the fixture requesting it, it won't work. But the general idea is somewhat common. A similar concept is in Python itself with the NotImplementedError exception.

But I agree that this may be stepping things up a little too quickly given what came before it. I think there should be better descriptions of the basics of the fixture system above it. The concept of "requesting" a fixture doesn't seem to be covered, and things are discussed in a more technical sense at the beginning of the page (e.g. "Test functions can receive fixture objects by naming them as an input argument"). I think going over those sorts of concepts and terminology first, with very simple examples, can prime readers for the behavior described in this part.

SalmonMode

comment created time in 15 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: the "Fixture availability" section, at the example introducing a fixture (outer) that requests a fixture (inner) from a scope it wasn't defined in.)

The reason for having two classes is to demonstrate that each test class has its own inner fixture, and aren't pulling it from the other class' scope. I also didn't want to involve anything about overriding fixtures just yet because that wasn't what this part was about, and there's a section for that already. In order for these tests to pass, both classes must have their own inner fixture defined.

Regarding order, I pulled this from another example in the docs. I think it's ideal because it demonstrates an actual implication for the tests, whereas prints don't impact the tests. That said, actually seeing it may be beneficial as well. Maybe I should add a note about running it with --setup-show/plan?
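For reference, the example under discussion is along these lines (a condensed sketch of the PR's test_fixtures_request_different_scope.py):

import pytest


@pytest.fixture
def order():
    return []


@pytest.fixture
def outer(order, inner):
    # "inner" is resolved from the perspective of the requesting test,
    # so each class below must supply its own
    order.append("outer")


class TestOne:
    @pytest.fixture
    def inner(self, order):
        order.append("inner one")

    def test_order(self, order, outer):
        assert order == ["inner one", "outer"]


class TestTwo:
    @pytest.fixture
    def inner(self, order):
        order.append("inner two")

    def test_order(self, order, outer):
        assert order == ["inner two", "outer"]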

SalmonMode

comment created time in 15 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: the "Fixture availability" section, at the sentence "Similarly, a test can also only be affected by an autouse fixture if that test is in the same scope that autouse fixture is defined in".)

The concept of autouse fixtures has a few nuances, and the implications (both positive and potentially negative) may not be immediately apparent. I wanted to explicitly mention this here and provide a link to the order section, as it touches on this topic in more detail.

I don't think it not being mentioned yet is a problem necessarily, but perhaps the autouse fixture section should be moved above this section?

SalmonMode

comment created time in 15 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: the ``conftest.py`` subsection, at the sentence "The first fixture the test finds is the one that will be used, so fixtures can be overridden if you need to change or extend what one does for a particular scope".)

Right, I didn't want to provide details about overriding fixtures just yet, as there was already a section for that, and trying to demonstrate overriding here would have complicated it a bit. This was about showing which fixtures each test can reference and which they can't.

SalmonMode

comment created time in 15 days

issue comment pytest-dev/pytest

Package scoped fixture is evaluated multiple times if used in a sub-package

Right, but how do you attach the fixture to package.a, and not package.a.a1?

s0undt3ch

comment created time in 16 days

issue comment pytest-dev/pytest

Package scoped fixture is evaluated multiple times if used in a sub-package

Is there a reference to the details about the intended behavior for this scope? I checked the docs but didn't see any specifics.

My assumption, based on the description in the docs, is that it would tear down twice if the last test to run was tests/sub_package/test_a.py::test_thing: once for exiting the sub_package package, and again for exiting the tests package. But that doesn't seem ideal (or practical).

My assumption based on the general idea is that it would be allowed to run once per lowest package level, and then tear down once for each lowest package level it was run for. For example, if I had tests/sub_package_a/ and tests/sub_package_b/, then it could run once for the sub_package_a package, be torn down as it exits that package, and then run again for the sub_package_b package.

But this presents a problem: another sub-package being introduced inside sub_package_a would suddenly prevent tests at the sub_package_a package level from using the fixture as they were designed to. It would also mean there's no way to run a fixture once for an entire specific package, at any level, if that package has any sub-packages inside it.

Another way I could see it implemented would be if it's only meant to apply to the package whose conftest.py it was defined in: only being allowed to run once in that package, and only being torn down as it leaves that package. For example, being defined in tests/sub_package_a/conftest.py and only running once for that package, despite there also being tests/sub_package_a/sub_sub_package/. But then you wouldn't be able to define fixtures that are meant to reset between the sub-packages of tests/sub_package_a/ without having to define the same fixture in each one of them.
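For context, the setup in question is just a fixture declared with scope="package" in a conftest.py, e.g. (a sketch; file path and names are illustrative):

# content of tests/sub_package_a/conftest.py
import pytest


@pytest.fixture(scope="package")
def expensive_resource():
    resource = {"state": "set up"}  # stand-in for real setup work
    yield resource
    # teardown; the open question above is when, and how often, this runs
    resource["state"] = "torn down"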

s0undt3ch

comment created time in 16 days

push event SalmonMode/pytest

Chris NeJame

commit sha 11100377b418341061215d925d38518001e6a63b

Clarify fixture order and provide images for visual learners

The documentation previously stated that fixture execution order was dictated by the order fixtures were requested in, but this isn't a very reliable method of controlling fixture order and can lead to some confusing situations. The only reliable way to control fixture order is through scope, fixture dependencies, and autouse. This reflects that in the documentation, and provides several examples and images to help clarify it.

Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

push time in 17 days

pull request comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

Thanks for looking at this @nicoddemus! I'll squash it into one commit now

SalmonMode

comment created time in 17 days

push event SalmonMode/pytest

Chris NeJame

commit sha 5a0a0a3184d9203b368afb4175313f666098e2de

add reference to fixture instantiation order

push time in 17 days

push event SalmonMode/pytest

Chris NeJame

commit sha 24f678e89a52ff84b110d5b5ab844e370fc48cbd

resolving suggestions

push time in 17 days

Pull request review comment pytest-dev/pytest

Clarify fixture execution order and provide visual aids

(diff context: an earlier revision of the "Fixtures from third-party plugins" passage, at the sentence about pytest reaching plugin-provided fixtures last when running a test.)

Yeah, that's much clearer, and also more concise!

SalmonMode

comment created time in 17 days

Pull request review commentpytest-dev/pytest

Clarify fixture execution order and provide visual aids

[review context: same doc/en/fixture.rst diff — the "``conftest.py``: sharing fixtures across multiple files" portion, describing nested folders that each have their own ``conftest.py``]

Good point! I'll take that bit out

SalmonMode

comment created time in 17 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 2b7f885e83482ada9263db360b1044b69a7cbed1

Update doc/en/fixture.rst Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

view details

push time in 17 days

Pull request review commentpytest-dev/pytest

Clarify fixture execution order and provide visual aids

[review context: doc/en/fixture.rst diff — the new "Fixture availability" section, ending with a note that the scope a fixture is defined in has no bearing on the order it will be instantiated in]

Hmmm I think trying to sum up how the order is controlled here might leave some ambiguity. How do you feel about something like this?

instantiated in: the order is mandated by the logic described :ref:`here <fixture order>`.
SalmonMode

comment created time in 17 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 8acdfcea0b4063f3f25d563147b570a460bae3dd

Update doc/en/fixture.rst Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

view details

push time in 17 days

Pull request review commentpytest-dev/pytest

Clarify fixture execution order and provide visual aids

[review context: doc/en/fixture.rst diff — the new "Fixture instantiation order" section and the start of its list of the 3 factors pytest considers, beginning with scope]

Aha! Good catch!

SalmonMode

comment created time in 17 days

push eventSalmonMode/pytest

Chris NeJame

commit sha c56e49a659758a58b3bdbf0852167c141606814e

Update doc/en/fixture.rst Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

view details

push time in 17 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 3cacafb0950120698c422f66438bfede91289b18

Update doc/en/fixture.rst Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

view details

push time in 17 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 14fedda34b5aa4aca9346417b085eea6765fdc76

Update doc/en/fixture.rst Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

view details

push time in 17 days

Pull request review commentpytest-dev/pytest

Clarify fixture execution order and provide visual aids

[review context: doc/en/fixture.rst diff — the fixture availability visualization, where tests may search upward (stepping outside a circle) for fixtures but never downward (stepping inside a circle)]

Thanks! 😁

SalmonMode

comment created time in 17 days

push eventSalmonMode/pytest

Chris NeJame

commit sha de2f8b0643dc3576f0964ebb0efd7714e334cd7e

Update doc/en/fixture.rst Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

view details

push time in 17 days

push eventSalmonMode/pytest

Chris NeJame

commit sha f8b7bbffcd82e8033628b72a768209f0f6584399

Update doc/en/fixture.rst Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

view details

push time in 17 days

startedseleniumbase/SeleniumBase

started time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha bca61ffab34ac57137929927fa9d5289b63e471e

Clarify fixture order and provide images for visual learners The documentation previously stated that fixture execution order was dictated by the order fixtures were requested in, but this isn't a very reliable method controlling fixture order and can lead to some confusing situations. The only reliable way to control fixture order is based on scope, fixture dependencies, and autouse. This reflects that in the documentation, and provides several examples and images to help clarify that.

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 1e9ff9a6bbb97de791135a0ffb52e69d1907052a

Clarify fixture order and provide images for visual learners The documentation previously stated that fixture execution order was dictated by the order fixtures were requested in, but this isn't a very reliable method controlling fixture order and can lead to some confusing situations. The only reliable way to control fixture order is based on scope, fixture dependencies, and autouse. This reflects that in the documentation, and provides several examples and images to help clarify that.

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 30b91e29055a998ba3b1fb1e6ec2e032bceaaceb

black:

view details

push time in 18 days

pull request commentpytest-dev/pytest

Clarify fixture execution order and provide visual aids

I'm on mobile at the moment, but I'll squash these commits into one once I get to my computer

SalmonMode

comment created time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha ac35b949a0d34aac58efd39d359b70b9f1692b65

New line at the end of the file

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 465a47a329612f97f688aa8bb34a77f095b7761d

New line at the end of the file

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 01e5ed5cc37367ca34b728479dd83a929e10fcfe

New line at the end of the file

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 5eaac61e20d96e56561b4153275f63c09ace9b20

New line at the end of the file

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha f662fd6cbb830db3ff99d64f304ce9d308254a61

New line at the end of the file

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha dbe85b38b33e02403a73920890001e7edf7105aa

New line at the end of the file

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha fade74573d0dcb93f3b7bdff18ad375f0d7494d9

New line at the end of the file

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 79a663ab3db5860badcd711e7a34fd1909b74b32

New line at the end of the file

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 8605b4b2a7b5cd25d7fa3d86e5af28e8766cdd76

New line at the end of the file

view details

push time in 18 days

push eventSalmonMode/pytest

Chris NeJame

commit sha 37313bd3557d97daff98d45a1c101a837adaa49d

New line at the end of the file

view details

push time in 18 days

push eventSalmonMode/pytest

Anthony Sottile

commit sha 8cca0238406fc2c34c50ea44f45fdf5fbc36efa4

cache the pre-commit environment

view details

Gleb Nikonorov

commit sha 2a3c21645e5c303a71694c0ff68d0a56c2d734d5

Commit solution thus far, needs to be polished up pre PR

view details

Gleb Nikonorov

commit sha f760b105efa12ebc14adccda3c840ad3a61936ef

Touchup pre-PR

view details

Gleb Nikonorov

commit sha 3f6b3e7faa49c891e0b3036f07873296a73c8618

update help for --strict-config

view details

Gleb Nikonorov

commit sha f1746c50eae96e23ef66743e905489bfe896b197

Merge remote-tracking branch 'origin/master' into issue_7305

view details

Gleb Nikonorov

commit sha 42deba59e7d6cfe596414d0beff6fafaa14b02a3

Update documentation as suggested

view details

Gleb Nikonorov

commit sha d2bb67bfdafcbadd39f9551a52635188f54954e0

validate plugins before keys in config files

view details

Gleb Nikonorov

commit sha 13add4df43eef412bf7369926345e62eca0624b1

documentation fixes

view details

Bruno Oliveira

commit sha c17d50829f3173c85a9810520458a112971d551c

Add pyproject.toml support (#7247)

view details

Fabio Zadrozny

commit sha 322190fd84e1b86d7b9a2d71f086445ca80c39b3

Fix issue where working dir becomes wrong on subst drive on Windows. Fixes #5965 (#6523) Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

view details

Bruno Oliveira

commit sha a76855912b599d53865c9019b10ae934875fbe04

Introduce guidelines for closing stale issues/PRs (#7332) * Introduce guidelines for closing stale issues/PRs Close #7282 Co-authored-by: Anthony Sottile <asottile@umich.edu> Co-authored-by: Zac Hatfield-Dodds <Zac-HD@users.noreply.github.com> Co-authored-by: Anthony Sottile <asottile@umich.edu> Co-authored-by: Zac Hatfield-Dodds <Zac-HD@users.noreply.github.com>

view details

Prashant Anand

commit sha e78207c936c43478aa5d5531d7c0b90aa240c9e0

7119: data loss with mistyped --basetemp (#7170) Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com> Co-authored-by: Ran Benita <ran@unusedvar.com>

view details

Bruno Oliveira

commit sha fcbaab8b0b89abc622dbfb7982cf9bd8c91ef301

Allow tests to override "global" `log_level` (rebased) (#7340) Co-authored-by: Ruaridh Williamson <ruaridh.williamson@flexciton.com>

view details

piotrhm

commit sha 0b70300ba4c00f2fdab1415b33ac6b035418e648

Added requested modifications

view details

piotrhm

commit sha 51fb11c1d1436fb438cfe4d014b34c46fc342b70

Added tests

view details

piotrhm

commit sha 5e0e12d69b2494135e35ef3dcba9434daa932914

Fixed linting

view details

piotrhm

commit sha 2be1c61eb3a0c202df4ca9ee0d764b5bdaad2001

Fixed linting 2

view details

piotrhm

commit sha df562533ffc467dda8da94c1d87f0722851223eb

Fixed test

view details

Bruno Oliveira

commit sha d5a8bf7c6cfed4950b758a5539fb229497b7dca8

Improve CHANGELOG

view details

Bruno Oliveira

commit sha 357f9b6e839d6f7021904b28d974933aeb0f219b

Add type annotations

view details

push time in 18 days

push eventSalmonMode/pytest

Anthony Sottile

commit sha 8cca0238406fc2c34c50ea44f45fdf5fbc36efa4

cache the pre-commit environment

view details

Gleb Nikonorov

commit sha 2a3c21645e5c303a71694c0ff68d0a56c2d734d5

Commit solution thus far, needs to be polished up pre PR

view details

Gleb Nikonorov

commit sha f760b105efa12ebc14adccda3c840ad3a61936ef

Touchup pre-PR

view details

Gleb Nikonorov

commit sha 3f6b3e7faa49c891e0b3036f07873296a73c8618

update help for --strict-config

view details

Gleb Nikonorov

commit sha f1746c50eae96e23ef66743e905489bfe896b197

Merge remote-tracking branch 'origin/master' into issue_7305

view details

Gleb Nikonorov

commit sha 42deba59e7d6cfe596414d0beff6fafaa14b02a3

Update documentation as suggested

view details

Gleb Nikonorov

commit sha d2bb67bfdafcbadd39f9551a52635188f54954e0

validate plugins before keys in config files

view details

Gleb Nikonorov

commit sha 13add4df43eef412bf7369926345e62eca0624b1

documentation fixes

view details

Bruno Oliveira

commit sha c17d50829f3173c85a9810520458a112971d551c

Add pyproject.toml support (#7247)

view details

Fabio Zadrozny

commit sha 322190fd84e1b86d7b9a2d71f086445ca80c39b3

Fix issue where working dir becomes wrong on subst drive on Windows. Fixes #5965 (#6523) Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>

view details

Bruno Oliveira

commit sha a76855912b599d53865c9019b10ae934875fbe04

Introduce guidelines for closing stale issues/PRs (#7332) * Introduce guidelines for closing stale issues/PRs Close #7282 Co-authored-by: Anthony Sottile <asottile@umich.edu> Co-authored-by: Zac Hatfield-Dodds <Zac-HD@users.noreply.github.com> Co-authored-by: Anthony Sottile <asottile@umich.edu> Co-authored-by: Zac Hatfield-Dodds <Zac-HD@users.noreply.github.com>

view details

Prashant Anand

commit sha e78207c936c43478aa5d5531d7c0b90aa240c9e0

7119: data loss with mistyped --basetemp (#7170) Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com> Co-authored-by: Ran Benita <ran@unusedvar.com>

view details

Bruno Oliveira

commit sha fcbaab8b0b89abc622dbfb7982cf9bd8c91ef301

Allow tests to override "global" `log_level` (rebased) (#7340) Co-authored-by: Ruaridh Williamson <ruaridh.williamson@flexciton.com>

view details

piotrhm

commit sha 0b70300ba4c00f2fdab1415b33ac6b035418e648

Added requested modifications

view details

piotrhm

commit sha 51fb11c1d1436fb438cfe4d014b34c46fc342b70

Added tests

view details

piotrhm

commit sha 5e0e12d69b2494135e35ef3dcba9434daa932914

Fixed linting

view details

piotrhm

commit sha 2be1c61eb3a0c202df4ca9ee0d764b5bdaad2001

Fixed linting 2

view details

piotrhm

commit sha df562533ffc467dda8da94c1d87f0722851223eb

Fixed test

view details

Bruno Oliveira

commit sha d5a8bf7c6cfed4950b758a5539fb229497b7dca8

Improve CHANGELOG

view details

Bruno Oliveira

commit sha 357f9b6e839d6f7021904b28d974933aeb0f219b

Add type annotations

view details

push time in 19 days

pull request commentpytest-dev/pytest

Clarify fixture execution order and provide visual aids

Definitely! I wanna see the new feature too haha

SalmonMode

comment created time in 19 days

issue commentpytest-dev/pytest

How to test a custom collection filter?

I love pytest, but man some of this is hard to wrap my head around

I personally find it easier to structure my tests so that I'm never defining tests I don't intend to run, and to use the pytest_collection_modifyitems hook just to add marks. Then I can use file structure, marks, and/or test name matching (-k) to filter for the tests I want to run (or to filter out the tests I don't want to run).

This requires the structuring of my tests and fixtures to be a bit more spread out, but it reduces complexity significantly, and reduces headaches.
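For instance, here's a minimal sketch of that approach (the folder name and mark names are hypothetical; only the marking happens in the hook):

# conftest.py
import pytest


def pytest_collection_modifyitems(items):
    for item in items:
        # tag each test based on where it lives, so runs can filter with -m
        if "integration/" in item.nodeid:
            item.add_marker(pytest.mark.integration)
        else:
            item.add_marker(pytest.mark.unit)

Then pytest -m integration (or pytest -m "not integration") selects the slice to run, without ever defining tests that aren't intended to run.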

joshuatbadger

comment created time in 19 days

issue commentpytest-dev/pytest

Enable auto-build documentation during PRs

Looks like it should work. They said we might need to check if it has the proper permissions and webhooks https://docs.readthedocs.io/en/latest/guides/autobuild-docs-for-pull-requests.html#troubleshooting

nicoddemus

comment created time in 20 days

issue commentpytest-dev/pytest

Feature Request - Support using session scope-fixtures when using pytest.mark.parametrize

Absolutely. It'd be a big shift, but it's been enormously helpful for me, and definitely helped keep things straightforward, fast, flexible, and safe (safe in regards to the teardowns)

amitwer

comment created time in 20 days

issue commentpytest-dev/pytest

Feature Request - Support using session scope-fixtures when using pytest.mark.parametrize

Ahhhhhh, I see the problem.

My approach is to keep everything out of the test function itself except for a single assert, which is only a non-state-changing query.

I rely on multiple fixtures to perform the "arrange" steps and the "act" command, and limit my fixtures to doing (state-changing commands) and/or providing one thing at most. This helps with injecting dependencies into the flow higher up, as I can change any individual resource in my fixtures and influence any step of the process. I can also pick and choose the steps I want, and more easily coordinate the "order of operations", so to speak, of fixtures for a given test or collection of tests.

It also allows me to have very safe teardowns, because a fixture that throws an error likely won't actually change the state (because it failed to do what it wanted), and that means all the state-changing fixtures that ran before it will still try to teardown (also the logic for "undoing" a state change is right next to the logic that did that change). This also has the benefit of providing a nice readout when I run pytest with --setup-plan because it spits out every step and resource used for a test (and I find it's much easier to name fixtures when doing this).

I also try to handle parameterization in isolation from everything by doing it in its own fixture, as this makes manipulating that parameterization easy, and makes depending on that parameterization easy as well, since any test that can see that fixture can request it.

I also rely heavily on scope to segregate certain tests from others, but this also makes it super easy to provide a modified version of a specific fixture only for those scopes.

(I also use larger scopes like class to house multiple test functions to run on a given state that they all share the same setup for, which helps optimize a bit, but this isn't necessary. I'll set the fixtures to use the class scope for this example to demonstrate this, but the general solution will still work inside a class without the fixtures being scoped that way)

So if I restructure your test a bit with that in mind, I'd get this (assuming this is in a test file meant only for testing undo):

@pytest.fixture(scope="class")
def file_original_contents(os_data, file_path):
    # requests os_data to ensure it's impacted by parameterization
    return read_file(file_path)


@pytest.fixture(scope="class")
def edit_file(edit_method, file_path, os_data, file_original_contents):
    # requests os_data to ensure it's impacted by parameterization

    # requests file_original_contents to ensure the contents are grabbed before editing
    perform_edit(edit_method, file_path)  # hypothetical helper, named so it doesn't shadow this fixture


@pytest.fixture(scope="class")
def reboot_client(edit_file, client):
    # requests edit_file because this needs to be done after editing the file
    # will be affected by parameterization because edit_file requests os_data
    client.reboot()


@pytest.fixture(scope="class", autouse=True)
def undo_edit(reboot_client, file_path):
    # requests reboot_client because this needs to be done after the client is rebooted

    # will be affected by parameterization because reboot_client effectively requests os_data

    # autouse is used as this is the last fixture to run in the dependency tree and this ensures all 
    # fixtures will run before the test functions begin their asserts. It also cuts out the need to
    # request the undo_edit fixture in the test.
    perform_undo(file_path)  # hypothetical helper, named so it doesn't shadow this fixture


class TestWriteOverEvent:
    @pytest.fixture(scope="class", params=filter(lambda x: True, os_version_config_data_stuff))
    def os_data(self, request):
        return request.param

    def test_contents_are_restored(self, file_path, file_original_contents):
        assert read_file(file_path) == file_original_contents

    def test_server_saw_edit_event(self, server, file_path):
        # doesn't trigger a re-run of the fixtures because they were all class-scoped and autouse
       assert file_path in server.get_edited_files()

To add in the bit that enables undo, I would just modify the requests of undo_edit and add another fixture:


@pytest.fixture(scope="class")
def enable_undo(reboot_client, client):
    # requests reboot_client because this needs to be done after the client is rebooted

    # will be affected by parameterization because reboot_client effectively requests os_data
    client.enable_undo()
    

@pytest.fixture(scope="class", autouse=True)
def undo_edit(reboot_client, enable_undo, file_path):
    # requests reboot_client because this needs to be done after the client is rebooted

    # requests enable_undo to ensure it only attempts to enable it after the client is rebooted

    # will be affected by parameterization because reboot_client effectively requests os_data

    # autouse is used as this is the last fixture to run in the dependency tree and this ensures all 
    # fixtures will run before the test functions begin their asserts. It also cuts out the need to
    # request the undo_edit fixture in the test.
    perform_undo(file_path)  # hypothetical helper, named so it doesn't shadow this fixture

A lot of these fixtures can be defined in a conftest.py file in a higher scope to avoid duplication.

amitwer

comment created time in 20 days

issue commentpytest-dev/pytest

Enable auto-build documentation during PRs

Hmm that would be an odd restriction, but I could see how it might be limited in that way. Are there any branches in the main repo with PRs where this is working?

nicoddemus

comment created time in 20 days

issue commentpytest-dev/pytest

Feature Request - Support using session scope-fixtures when using pytest.mark.parametrize

With parameterization, you'll effectively still have as many test cases. This approach just compartmentalizes the complexity a bit.

amitwer

comment created time in 20 days

issue commentpytest-dev/pytest

Feature Request - Support using session scope-fixtures when using pytest.mark.parametrize

Ah, gotcha! Thanks for clarifying and providing the extra context.

So in this case, if I understand correctly, I would define 4 separate tests, one for each edit method. You can mark and/or parameterize them individually according to the supported environments, and then have something external (either a human or a script) detect what the current environment is and call pytest so that only the applicable tests are run (probably by using the marks). Having the logic and marks in place before the tests are run also avoids any runtime hiccups related to trying to diagnose what to run automatically (or at least decouples it from your test suite).

It may seem a little against DRY, but ultimately, you get something simple and descriptive (e.g. test_write_over_is_monitored), and KISS/DAMP are more beneficial than DRY in the long run.
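As a rough sketch of that layout (the marks and test names here are made up for illustration):

import pytest


@pytest.mark.windows
@pytest.mark.linux
def test_write_over_is_monitored():
    ...


@pytest.mark.windows
def test_rtf_write_over_is_monitored():
    ...

The external human or script then detects the environment and invokes, e.g., pytest -m windows, so only the applicable tests run.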

amitwer

comment created time in 21 days

PR opened pytest-dev/pytest

Clarify fixture execution order and provide visual aids

<!-- Thanks for submitting a PR, your contribution is really appreciated!

Here is a quick checklist that should be present in PRs.

  • [ ] Include documentation when adding new features.
  • [ ] Include new tests or update existing tests when applicable.
  • [X] Allow maintainers to push and squash when merging my commits. Please uncheck this if you prefer to squash the commits yourself.

If this change fixes an issue, please:

  • [ ] Add text like closes #XYZW to the PR description and/or commits (where XYZW is the issue number). See the github docs for more information.

Unless your change is trivial or a small documentation fix (e.g., a typo or reword of a small section) please:

  • [ ] Create a new changelog file in the changelog folder, with a name like <ISSUE NUMBER>.<TYPE>.rst. See changelog/README.rst for details.

    Write sentences in the past or present tense, examples:

    • Improved verbose diff output with sequences.
    • Terminal summary statistics now use multiple colors.

    Also make sure to end the sentence with a period.

  • [x] Add yourself to AUTHORS in alphabetical order. -->

I noticed that the docs mention fixtures are executed in the order a test requests them (unless scope, dependencies, or autouse are involved), but this can be very misleading depending on how the tests and fixtures are set up, especially as a test suite gets more complex.

The only truly guaranteed way to control order is by leveraging scope, dependencies, and autouse, and making sure these 3 things make up a linearizable map of fixtures for each test.

The changes I made to the docs clarify this by mentioning it explicitly. I also broke down the order section into 3 parts (one for each of those ordering mechanisms), and provided several examples and images to help visualize it.

I also gave a similar treatment to the fixture availability section as it tied into the changes I made to the fixture order section. Plus, it was easy to repurpose some of the SVGs I made already.

The changes have certain implications, because fixture request order was the documented behavior, so some are still depending on it. But, given the internals of pytest, and how scope/dependencies/autouse have been the only ways to guarantee order for a while, the users that are currently dependent on that behavior have already been at risk of that behavior breaking down due to test execution order/scope/dependencies/autouse, and they will be at the same risk (or lack thereof) of it breaking down after this documentation change (especially since I learned my lesson about sets not having a deterministic iteration order 😂 , and we'll pretty much always iterate over fixture requests in the order they were requested by the test).

So ultimately, this doc change doesn't actually change anything for those users, and provides clearer, more helpful information for anyone looking to learn how to structure their fixtures.

Side note: I changed usage of the word "instantiate"/"instantiation" in these sections to "execute"/"execution", because it more closely reflects how pytest currently does things with fixture defs, and "execute" is easier to understand IMO.

+1323 -63

0 comment

19 changed files

pr created time in 21 days

push eventSalmonMode/pytest

Chris NeJame

commit sha ad9ee893f359c2b2d24e63f7302160f520bd4321

added to AUTHORS

view details

push time in 21 days

create branchSalmonMode/pytest

branch : fixture-order-docs-update

created branch time in 21 days

push eventSalmonMode/pytest

Daniel Hahler

commit sha 04f27d4eb469fb9c76fd2d100564a0f7d30028df

unittest: do not use TestCase.debug() with `--pdb` Fixes https://github.com/pytest-dev/pytest/issues/5991 Fixes https://github.com/pytest-dev/pytest/issues/3823 Ref: https://github.com/pytest-dev/pytest-django/issues/772 Ref: https://github.com/pytest-dev/pytest/pull/1890 Ref: https://github.com/pytest-dev/pytest-django/pull/782 - inject wrapped testMethod - adjust test_trial_error - add test for `--trace` with unittests

view details

Bruno Oliveira

commit sha f7b1de70c037d0ca43adc25966677d5a78034abc

No need to call tearDown on expected failures - Isolate logic for getting expected exceptions - Use original method name, as users see it when entering the debugger

view details

Bruno Oliveira

commit sha 59369651dbe6a3bac420e16dcded9ad095b1680b

Bring back explicit tear down Otherwise 'normal' failures won't call teardown explicitly

view details

Daniel Hahler

commit sha 426a4cdca901606d3ffd716c063e394cec964f9f

_idval: remove trailing newline from exception

view details

Ran Benita

commit sha 51f9cd0e02371d1a4770625aff178a3f8ab6db5d

argparsing: remove "map_long_option" Action attribute support This feature was added in commit 007a77c2ba14b3df8790efb433a2f849edf4f5d2, but was never used in pytest itself. A GitHub code search doesn't find any users either (only pytest repo copies). It seems safe to clean up.

view details

Daniel Hahler

commit sha c0b1a39192a998b4368ac859677b7e22f8ee56f2

minor: move internal _pformat_dispatch function

view details

Daniel Hahler

commit sha ccb3ef3b33fcf419d03260c1d18f352da373725d

testing/python/metafunc.py: import _idval once

view details

Daniel Hahler

commit sha f1224a0e855bb4707c05d7d943cfbe87f6e982a7

Merge pull request #6243 from blueyed/move-_pformat_dispatch minor: move internal _pformat_dispatch function

view details

Daniel Hahler

commit sha 98c899c9b0d18148bb8fdbae53d04ad533f8486e

Merge pull request #6245 from blueyed/tests-_idval testing/python/metafunc.py: import _idval once

view details

Daniel Hahler

commit sha 2c941b5d13cc8688a1d655fc0ef41a4de8c4e251

parametrized: ids: support generator/iterator Fixes https://github.com/pytest-dev/pytest/issues/759 - Adjust test_parametrized_ids_invalid_type, create list to convert tuples Ref: https://github.com/pytest-dev/pytest/issues/1857#issuecomment-552922498 - Changelog for int to str conversion Ref: https://github.com/pytest-dev/pytest/issues/1857#issuecomment-552932952

view details

Ran Benita

commit sha dac16cd9e5113a5b769d89557e9dcdc5001fe205

Add type annotations to _pytest.config.argparsing and corresponding Config code

view details

Daniel Hahler

commit sha 2d449e95e4c270f1b3a15239b72cd3c5338f798b

Respect --fulltrace with collection errors

view details

Daniel Hahler

commit sha 6b75a7733ba7b8f09a0ad768342dff7100a6a365

Merge pull request #6247 from blueyed/collecterror-fulltrace Respect --fulltrace with collection errors

view details

Daniel Hahler

commit sha ed012c808a20f983088157424a219ecce0d9dea7

Merge pull request #6174 from blueyed/ids-iter parametrized: ids: support generator/iterator

view details

Ran Benita

commit sha 5820c5c5a61579b7baa92d6d22ac9602a50c9132

Merge pull request #6241 from bluetech/type-annotations-9 Add type annotations to _pytest.config.argparsing and corresponding Config code

view details

Daniel Hahler

commit sha df0c652333f7533bb783da3d366fc7b0e0a5450b

Merge master into features

view details

Daniel Hahler

commit sha 2fa0518e899dd8750770c5e62db6528198e7fe4d

Merge pull request #6259 from blueyed/merge-master-into-features Merge master into features

view details

Philipp Loose

commit sha a02310a1401a185676f3421c6598434f2d73594a

Add stacklevel tests for warnings, 'location' to pytest_warning_captured Resolves #4445 and #5928 (thanks to allanlewis) Add CHANGELOG for location parameter

view details

Daniel Hahler

commit sha b0ebcfb7857ef9e14064ca22baad4f6623f0251a

pytester: remove special handling of env during inner runs Closes https://github.com/pytest-dev/pytest/issues/6213.

view details

Daniel Hahler

commit sha a4408eb9c13283f9285df263e91ab2b3219d7b5a

Merge pull request #6219 from blueyed/testdir-use-monkeypatch pytester: remove special handling of env during inner runs

view details

push time in 21 days

issue commentpytest-dev/pytest

Feature Request - Support using session scope-fixtures when using pytest.mark.parametrize

Hmmm, I think I'm getting a clearer picture of what you need, but I'm not positive. Can you provide some more detail about the complex objects you mentioned, and what the relevant parameters would be as a result of them?

Also, "parameters" is a kind of loaded term in this context, though, so just to clarify, it can mean one of:

  1. a collection of inputs from which to generate multiple sets of tests, one for each input. For example, "chrome" and "firefox" being used to parameterize a driver fixture so that every test that uses driver gets two versions of itself: one that runs with chrome, and one that runs with firefox.
  2. A collection of data used to configure some entity. For example, ["xlsx", "pdf", "zip"] being used to tell a server what files it should automatically be rejecting.
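To make the distinction concrete, here's a minimal sketch of both senses (the names are illustrative):

import pytest


# sense 1: params generate one copy of each dependent test per input
@pytest.fixture(params=["chrome", "firefox"])
def driver_name(request):
    return request.param


# sense 2: plain configuration data; no extra tests are generated
@pytest.fixture
def rejected_file_types():
    return ["xlsx", "pdf", "zip"]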

I'm sure I'm missing some definitions, and this distinction may seem arbitrary, but I'm mostly looking to distinguish between the first definition and everything else, as this can change the approach I'd take.

When you say "relevant parameters" are you referring to the 1st definition? or something else?

amitwer

comment created time in 21 days

issue commentpytest-dev/pytest

Feature Request - Support using session scope-fixtures when using pytest.mark.parametrize

I'm not sure I follow, so I'm gonna propose something, and hopefully that'll highlight what I might be missing.

I'm not exactly sure what your situation is, but I find it's easier to leave server instantiation outside of the test logic, and treat it as infrastructure so everything in the test logic can just assume it is already there. Then it's as easy as using configs to point the tests to the right server. This applies even when you have different tests for each server.

That said, it looks like trying to compact the logic for parameterization is where the complexity becomes cumbersome, so that's where I'd say it's better to keep things simple. Off the top of my head, I'm thinking of something like this:

tests/conftest.py (top level conftest)

def get_supported_file_type_params():
    supported_file_types = []
    for client_os_version in OS_VERSION:
        for server_os_version in OS_VERSION:
            file_types = ["xlsx", "pdf", "zip"]
            if server_os_version is OS_VERSION.WINDOWS7 and client_os_version.family is OS.WINDOWS:
                file_types.append("rtf")

            # append each permutation rather than overwriting the list
            supported_file_types.append({
                "client_os": client_os_version,
                "server_os": server_os_version,
                "file_types": file_types,
            })
    return supported_file_types


def os_id_func(value):
    return f"<OSData Client: {value['client_os']} Server: {value['server_os']} >"

@pytest.fixture(scope="session", params=get_supported_file_types(), ids=os_id_func)
def os_data(request):
    return request.param


@pytest.fixture(scope="session")
def client_os(os_data):
    return os_data["client_os"]


@pytest.fixture(scope="session")
def server_os(os_data):
    return os_data["server_os"]


@pytest.fixture(scope="session")
def file_types(os_data):
    return os_data["file_types"]


@pytest.fixture(scope="session")
def client(request):
    return Client(request.config.option.server_url)


def pytest_collection_modifyitems(items):
    for item in items:
        item_params = item.nodeid.split("[")[-1][:-1].split("-")

        os_reprs = [p for p in item_params if p.startswith("<OSData ")]
        if not os_reprs:
            continue  # this test isn't parameterized with os_data
        _, _, client_os, _, server_os, _ = os_reprs[0].split()

        item.add_marker(f"client_os_{client_os.lower().replace(' ', '_')}")
        item.add_marker(f"client_os_family_{OS(client_os).family.lower().replace(' ', '_')}")
        item.add_marker(f"server_os_{server_os.lower().replace(' ', '_')}")

tests/subpackage/test_something.py

@pytest.fixture
def process_file(client, file_types):
    client.do_something(file_types)


def test_process_file(process_file, client, file_types):
    assert client.get_latest_files() == file_types

With this, all tests exist, and it's just a matter of providing the configs necessary so the client can connect to the server properly, and then using the marks to select the tests based on the currently active infrastructure, e.g. pytest -m "client_os_windows_7 and server_os_windows_7". Because the fixtures in the top-level conftest are session-scoped, they will only be executed once per parameter.

Keep in mind that fixtures are evaluated based on the perspective of the test that's about to run, so you can override os_data to be whatever you like for subsets of tests that don't need to apply to all permutations. So if none of the tests in tests/subpackage need to run when the client is Windows 7, you can define another conftest.py in there and override the os_data fixture with new params by just filtering out what is given by get_supported_file_type_params (so this may be best defined in a separate part of the package). Because of how pytest fixtures work, you only need to override that one fixture and everything else will be taken care of.
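As a sketch, tests/subpackage/conftest.py could then look something like this (assuming get_supported_file_type_params and os_id_func were moved to an importable helpers module, and using a stand-in predicate for the filtering):

# tests/subpackage/conftest.py
import pytest

from helpers import OS_VERSION, get_supported_file_type_params, os_id_func  # hypothetical module


def subpackage_params():
    # stand-in predicate: drop Windows 7 clients for this subpackage
    return [
        p
        for p in get_supported_file_type_params()
        if p["client_os"] is not OS_VERSION.WINDOWS7
    ]


@pytest.fixture(scope="session", params=subpackage_params(), ids=os_id_func)
def os_data(request):
    # overrides the top-level os_data for everything under tests/subpackage/
    return request.param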

Also, this is super generalized, and really just meant to showcase a few options/techniques. So it's just my attempt to throw stuff at the wall and see what sticks. I'm still not sure what your exact needs are, but with more context I can probably provide a better solution if this doesn't do it for you.

amitwer

comment created time in 22 days

issue commentpytest-dev/pytest

Feature Request - Support using session scope-fixtures when using pytest.mark.parametrize

Sounds like you have a pretty complex file. I would recommend creating a custom data type and a parser where that logic is contained, so you can tuck the logic away behind an abstraction but still have a reference that goes straight to it. I'm imagining something like this:

import csv
from enum import Enum

import pytest

# Apple, Banana, and Cherry are the custom data types mentioned above
class Fruits(Enum):
    apple = Apple
    banana = Banana
    cherry = Cherry

fruits = []
with open("fruit_data.csv") as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        fruits.append(Fruits[row["type"]](**row))

def fruit_id(fruit):
    fruit_repr = repr(fruit)
    if fruit.is_premium:  # would be pulled from the CSV and the fruit class would look for this arg during __init__
        return fruit_repr.replace("<", "<+", 1)
    return fruit_repr.replace("<", "<-", 1)

def pytest_collection_modifyitems(items):
    for item in items:
        if "<-" in item.nodeid:
            item.add_marker(pytest.mark.standard_fruit)
        elif "<+" in item.nodeid:
            item.add_marker(pytest.mark.premium_fruit)

@pytest.fixture(scope="session", params=fruit, ids=fruit_id)
def fruit(request):
    return request.param

def test_me(fruit):
    # stuff

The role of the DictReader can be replaced by a custom parser if you're not working with a CSV.

This would allow selecting based on marks, rather than test names.
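For example, once those marks are attached during collection, a run like pytest -m premium_fruit (or pytest -m "not premium_fruit") selects just that slice of the parameterized tests.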

I'm not sure if this approach is applicable to your scenario, but if you provide some more context, I might be able to come up with something more appropriate.

amitwer

comment created time in 23 days

issue commentpytest-dev/pytest

Using fixtures in the collection phase

I don't mean to gatekeep, and I feel like my ideal definition of fixtures falls within your own.

The distinction I draw is based in practicality. A test is only as good as it is repeatable. If your tests don't know what they're going to do before they start running, and they're beholden to the state of the system as it already is, then they aren't completely repeatable because they aren't in control.

Pytest's fixtures as they stand now are, from my POV, a perfect system for laying out the steps of a repeatable test, and for describing the resources those steps depend on. They describe the essence of what a given test is, not a series of steps to find out what tests can be done.

Again, that's just my pov. My goal isn't to force others to write tests a certain way. It's to encourage them to write tests that are repeatable and in complete control of the SUT.

I'm not really gonna complain if such a feature as this is implemented, since I can just not use that feature. But I did want to throw my 2 cents out there.

thisch

comment created time in 24 days

issue commentpytest-dev/pytest

Using fixtures in the collection phase

Gotcha.

I would still say those are 2 separate things, despite a shared resource, and I do still believe that would be a red flag in terms of test design, due to the lack of control over what tests will be performed and being beholden to whatever state the system is already in, as opposed to putting the system into the state that the test you want to run is meant to run from.

thisch

comment created time in 24 days

issue commentpytest-dev/pytest

Using fixtures in the collection phase

@RonnyPfannschmidt I'm not 100% sure I follow, so I'll rephrase, and you can let me know if I missed something.

If I use an HTTP client to fetch some resource that contains iterable information that I would pump into a parameterized fixture's params arg in order to generate multiple tests (one for each item in the iterable information), those tests may then need that HTTP client, in combination with the item they received as a result of that parameterization, to make additional requests in order to make the assertion(s) they're meant to. So both the parameterization process and the steps needing to be performed by the tests required that HTTP client.

Is that accurate?

thisch

comment created time in 24 days

issue commentpytest-dev/pytest

Using fixtures in the collection phase

@RonnyPfannschmidt gotcha, but then how would we be meant to distinguish between fixtures necessary for a given test, and fixtures meant for defining what tests exist? Or are those the ideas you were referring to?

@iwanb ah I think we've talked about this before. I don't recall how that makes marking sections of tests not scalable, though.

thisch

comment created time in 24 days

issue commentpytest-dev/pytest

Using fixtures in the collection phase

@RonnyPfannschmidt I think I understand what you're saying (but I'm not sure, so correct me if I misunderstood), but I think that's a good separation of concerns to have. Part 1 is gathering requirements/parameters, part 2 is defining and collecting tests around requirements/parameters, and part 3 is executing tests and the fixtures they depend on. Knowing what tests you need is different than getting the data to perform those tests, so they're 2 different types of dependencies.

While I wouldn't agree with such an approach (because it means you don't have a consistent set of tests defined and don't know what tests you should have before running the tests, unless this is just data you already have locally), there's always the option to use code in the global scope to gather the data needed to pipe into the params field of your fixtures.

That said, my stance on this is from the perspective of coopting the pytest.fixture decorator for this, not the fixture pipeline itself, which I'm sure could be leveraged to create a more streamlined version of that global scope process bit.

@iwanb manually tagging tests/scopes with marks may not scale particularly well, but you can programmatically attach any number of marks after collection, and then filter out the tests you want/don't want using them with the -m flag.

I'm curious to know more about your complex system, and if this approach would work. If not, I'm sure there's other ways to simplify things and I'd be happy to help explore some options.

thisch

comment created time in 24 days

issue commentpytest-dev/pytest

Using fixtures in the collection phase

My opinion is that such a feature is unnecessary, and would only lead to overly complex test structures. It would also obfuscate the intent behind the fixture system (i.e. performing the steps and providing the resources needed for a test at test runtime), and blur the boundaries for when a test actually begins running, and what the uses are for fixtures.

For this, I would recommend making a custom hook (e.g. pytest_doctest_global_setup(docstring_namespace)) that has a docstring namespace dict created for it; the hook can then be used to configure that dict.

Then you can have your plugin override the default docstring_namespace, replacing it with the one configured by your hook.

You can then have your new docstring_namespace fixture evaluate that skipif logic before it returns the dict.
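A rough sketch of how that could be wired up (pytest_addhooks and pluginmanager.add_hookspecs are the standard mechanism plugins use to register new hooks; the hook, module, and fixture names are the hypothetical ones from above):

# hookspecs.py (part of the hypothetical plugin)
def pytest_doctest_global_setup(docstring_namespace):
    """Implementations populate the namespace dict in place."""


# plugin.py (part of the hypothetical plugin)
import pytest

import hookspecs


def pytest_addhooks(pluginmanager):
    # make the new hook available for conftest.py files to implement
    pluginmanager.add_hookspecs(hookspecs)


@pytest.fixture
def docstring_namespace(request):
    namespace = {}
    # every implementation of the hook gets a chance to configure the dict
    request.config.hook.pytest_doctest_global_setup(docstring_namespace=namespace)
    return namespace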

I use pytest for integration testing and we have a lot of test environments which are used differently depending on the test case, so you want to be able to skip your testcase depending on that environment and on what the testcase decides to test (which is computed in fixtures).

@iwanb I'm not sure I follow, but this sounds like something that could have a simpler solution. Are you saying that you have test cases that don't know what they're supposed to test until after they start running, and you don't know what it will test until after it decides what it will test?

thisch

comment created time in 24 days

issue commentpytest-dev/pytest

Feature Request - Support using session scope-fixtures when using pytest.mark.parametrize

You can define all the tests, and then filter which ones you want to run, rather than providing a system for generating tests. E.g.:

@pytest.fixture(scope="session", params=[(1, 3, 7), (1, 4)], ids=("premium", "standard")
def x(request):
    return request.param

def test_me(x):
    # stuff

And then you can just call it with pytest -k "premium". You could also leverage marks instead of ids.
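Here's a sketch of the marks variant (the mark names are made up):

import pytest


@pytest.fixture(
    scope="session",
    params=[
        pytest.param((1, 3, 7), marks=pytest.mark.premium),
        pytest.param((1, 4), marks=pytest.mark.standard),
    ],
)
def x(request):
    return request.param

Then pytest -m premium selects those tests without matching on id substrings.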

amitwer

comment created time in 24 days

issue commentpytest-dev/pytest

Repeatedly run class instead of tests with parametrize

The reason dependencies between tests are bad is that the depending test is designed in a way that introduces confounding variables, rather than around the specific behavior it's meant to test. If test 2 picks up where test 1 left off, using test 1's stuff, then a failure in test 1 can be completely unrelated to the behavior test 2 is targeting, and yet test 2 would still have problems. It threatens both the validity of the test results and the practicality of automation.

In this case, test_3 is tightly coupled to test_1, and this is because there's more than 3 behaviors being tested at once.

A solution is to break things down further. Leverage the backend's web API to establish multiple sorts of checkpoints of confidence.

Going with just test_3 for a moment, you can have it broken down into 3 tests:

  1. Test the frontend's implementation of the backend's web API by launching the browser and performing that interaction. That interaction will send some web request to the backend's web API, creating a record (or multiple records) in the DB that will theoretically be sent to the browser session from test_1. But this test won't be using that browser session. Instead, you can use an API client to request that data from the backend's web API to make sure the records were created appropriately. That's all that's needed for this test.
  2. Test that the same request the browser from test_3 would have sent, but this time sent through an API client, results in data that can be retrieved by an API client. This way, a failure in the frontend's implementation of the backend's API doesn't mean you can't know whether the API itself is working.
  3. Test a different part of the frontend's implementation of the backend's API by launching the browser, going to the home page, then sending that interaction request from an API client, and making sure it appears on the home page in the browser.

This allows you to test these behaviors in isolation from each other, and as a result, the ones that fail will tell you where the bugs are. They can each fail individually and tell you about a different bug.

As for your test structure, I can see some other ways to improve it to make things much easier to maintain and add on to.

A lot of my recommendations can be found here and here.

I also recommend, as you said, using a page object framework. [Here's the one I made](https://pypcom.readthedocs.io/en/latest/), which you might find useful.

I would also recommend not using unittest.TestCase, or self when working with pytest. Fixtures are incredibly powerful in pytest, and make organizing tests and their dependencies incredibly easy. They become how you manage and reference state. You can use as many as you like for a single test, and even structure them in ways that apply to multiple tests, but I recommend limiting them to providing one resource and/or performing one state-changing action each.

Regarding your parameterization, I recommend only using it when differing input should trigger the same behavior and result in the same output. For example, using firefox vs chrome should trigger the exact same behavior and yield the exact same result if the same actions are performed.

If the result is different, then it likely means you are testing different behavior, and if it's different behavior, another test should be defined to cover it. This way you can engineer a test around a specific behavior and not sacrifice anything either in terms of the complexity of your test logic/structure, the validity of the test results, or even just how readable the test names are.

I touch on it a little in the links I pasted above, but a single action can result in multiple behaviors being triggered. That's fine, and you can use larger scopes, like a class, to house multiple tests to assess the resulting state from those various behaviors. The important bit is to follow "arrange, act, assert", and not "arrange, act, assert, act, assert, ...".

MstWntd

comment created time in 24 days

issue commentpytest-dev/pytest

pass list in command line args and use in fixture

A simpler solution is to use the pytest_collection_modifyitems hook to mark the tests according to the browser referenced in their nodeid; then you can just do pytest -m firefox or pytest -m "chrome or firefox".
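A minimal sketch of that, assuming the browser name appears in each parameterized test's id (e.g. test_foo[chrome]):

# conftest.py
import pytest


def pytest_collection_modifyitems(items):
    for item in items:
        # the parameter id shows up in the test's nodeid
        for browser in ("chrome", "firefox"):
            if browser in item.nodeid:
                item.add_marker(getattr(pytest.mark, browser))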

vbpatel73

comment created time in 24 days

push eventSalmonMode/salmonmode.github.io

Chris NeJame

commit sha 5689b3a3b243409b4cced6378d431d203be6c3ad

Update 2020-05-11-lets-talk-about-cypress.md

view details

push time in 25 days

push eventSalmonMode/salmonmode.github.io

Chris NeJame

commit sha 10a7138586c08710bbbc1c650062c884c38b415f

Update 2020-05-11-lets-talk-about-cypress.md

view details

push time in 25 days

push eventSalmonMode/salmonmode.github.io

Chris NeJame

commit sha 52c41a29569c1786ab412f137ad8f1f9e1f532aa

Update 2020-05-11-lets-talk-about-cypress.md

view details

push time in 25 days

issue commentpytest-dev/pytest

Conflicting fixtures

okay, just to be clear. do you find this test acceptable? ... this also true about my proposal, isn't it? so if you find this acceptable, should you not find my proposal acceptable?

Ah, no. That I do not find acceptable, because there is ambiguity as it's unclear if x should execute before y or the other way around (remember, the docs are wrong).

My goal with the x/y/z example was to demonstrate that your proposal doesn't get rid of ambiguity, because pytest would still have to decide which fixtures execute before the others, and it would change based on the tests that would be running for that test run.

i think i showed how the order of execution of test in my proposal, regardless of the above, does not in any way change the set up plan of individual fixtures. do you mean something else by “order of operations of tests”? ... would you explain what decisions pytest would be making?

I see what you're saying, but your proposal does not do enough to eliminate ambiguity, because a given test can still have a different fixture execution order depending on what other tests are running at that time. At that point, it wouldn't bring anything to the table that we don't already have with how fixtures currently work.

As I demonstrated in the x/y/z example, pytest still has to decide which fixture executes before another.

In that x/y/z example, if I run pytest -k z, it would run these tests:

test::test_z
test::test_z_x
test::test_y_z

with these groupings:

<z>
    <test_z />
    <x>
        <test_z_x />
    </x>
    <y>
        <test_y_z />
    </y>
</z>

and if I run pytest -k y, then I get these tests:

test::test_y
test::test_x_y
test::test_y_z

with these groupings:

<y>
    <test_y />
    <x>
        <test_x_y />
    </x>
    <z>
        <test_y_z />
    </z>
</y>

It's not specified whether y should always execute before z, or the other way around. So in the pytest -k y case, y happens before z, and in the pytest -k z case, z happens before y.

Pytest had to decide which would execute before the other in both cases, because the information was not provided by the programmer.

if a fixture a depends on b, then b is executed first. this might seem obvious, but it is only true as this is something that pytest documentation guarantees. if there is a test test_foo(a, b), then, according to my proposal, fixture a must be run before fixture b. pytest doesn't have to make any decisions, the documentation would clearly state what pytest should do in both cases.

Again, the documentation is wrong, and needs to be corrected. Fixture request order in a given test/fixture signature simply cannot guarantee fixture execution order, and this can be demonstrated like so:

import pytest

@pytest.fixture(scope="module")
def order():
    return []

@pytest.fixture(scope="module")
def x(order):
    order.append("x")

@pytest.fixture(scope="module")
def y(order):
    order.append("y")

# Each test requests the fixtures in the opposite order; at most
# one of these can pass when both run in the same module.
def test_x_y(x, y, order):
    assert order == ["x", "y"]

def test_y_x(y, x, order):
    assert order == ["y", "x"]

If either test is run in isolation, it will pass. But if both run as a suite, one will always fail, despite each one providing what you believe to be clear instructions about which fixture should execute first.

i don't see why it's unavoidable in principle. if you just remove explicit scopes altogether, you won't have this problem

Removing explicit scopes would reduce everything to the function scope level, as pytest wouldn't be able to assume which fixtures don't need to be re-executed for certain groups of tests, and wouldn't eliminate the ambiguity. Pytest would still have to decide for you which fixtures go before others if clear dependencies aren't established explicitly.

If everything were reduced to the function level because explicit scopes were removed, and the order fixtures are requested in fixture/test signatures did control execution order, then it still wouldn't eliminate ambiguity because of autouse fixtures. For example:

import pytest

@pytest.fixture
def order():
    return []

# Autouse fixtures are requested implicitly, so a test's signature
# can't express an order for them at all.
@pytest.fixture(autouse=True)
def y(order):
    order.append("y")

@pytest.fixture(autouse=True)
def x(order):
    order.append("x")

def test_order(order):
    assert order == ["y", "x"]

If explicit scopes and autouse fixtures were eliminated, and fixture request order did control fixture execution order, then fixture execution order could be determined in exactly the same way as MRO. Only then would ambiguity be eliminated, because you'd be forced to explicitly state what is dependent on what, and in what order (an algorithm other than MRO could be used, but then it would just be inconsistent with Python as a whole). But that's already a requirement if you want to eliminate ambiguity with how pytest currently works.
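
To make the MRO comparison concrete (this is an analogy, not pytest code): if each fixture were a class and its requests were its base classes, C3 linearization would yield exactly one total order, or refuse outright when the requested orders genuinely conflict.

# Analogy only: fixtures as classes, requests as base classes.
class order: pass
class x(order): pass            # x "requests" order
class y(order): pass            # y "requests" order
class test_x_y(x, y): pass      # the test "requests" x, then y

print([c.__name__ for c in test_x_y.__mro__])
# ['test_x_y', 'x', 'y', 'order', 'object']

# Genuinely conflicting request orders are rejected, not guessed at:
# class test_y_x(y, x): pass             # fine on its own
# class both(test_x_y, test_y_x): pass   # TypeError: inconsistent MRO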

If, after eliminating scopes and autouse fixtures, you then made a new proposal to somehow mark fixtures so that they know which groups of tests they don't have to re-execute between, you'd have come full circle and reimplemented scopes.

That's why the potential for ambiguity is unavoidable in pytest.

averater

comment created time in a month

issue commentpytest-dev/pytest

Conflicting fixtures

so, you are saying that the documentation is wrong, and that ambiguity is acceptable?

The documentation is incorrect, yes, but the ambiguity isn't a question of acceptability. I'm saying that ambiguity is unavoidable, as I demonstrated with the bit about swapping the order and making the fixtures module-scoped.

and then you are arguing against a proposal that does away with ambiguity?

No, I'm arguing against a proposal that isn't unambiguous all of the time.

The current system defaults to ambiguous when no explicit dependency chain is established, and allows you to make it unambiguous in a straightforward way by explicitly establishing a dependency chain.

Your proposal makes things unambiguous only some of the time, and in a way that requires you to factor in every single test being run in that test run to figure it out. When a test suite is executed as a whole, it would still likely be ambiguous.

I abhor ambiguity. And that's exactly why I don't want pytest making decisions about the order of operations on my behalf.

But your proposal only forces pytest to make more decisions on your behalf with regard to the order of operations, and to make them differently based on which tests are running. It also adds complexity to both the code base and the mental model for the fixture pipeline, because you now have to factor in all tests, and all the ways tests may or may not be filtered out, to figure out the potential orders of operations.

Letting pytest make decisions about what the order of things should be isn't eliminating ambiguity; it's rolling dice. The program doesn't eliminate ambiguity in logic for the programmer; it's the other way around. We, as the programmers, have to provide the logic that removes ambiguity for the program.

Pytest provides all the tools necessary to eliminate ambiguity as it stands. If you don't want ambiguity, the only way to actually solve it is to establish clear dependency chains.
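
For instance, here's a minimal sketch of an explicit dependency chain (reusing the order/x/y names from above): having y request x removes the ambiguity entirely.

import pytest

@pytest.fixture
def order():
    return []

@pytest.fixture
def x(order):
    order.append("x")

# y explicitly requests x, so x always executes first,
# regardless of how any test signature lists them.
@pytest.fixture
def y(x, order):
    order.append("y")

def test_y_x(y, x, order):
    assert order == ["x", "y"]  # holds no matter the signature order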

averater

comment created time in a month

issue commentpytest-dev/pytest

Conflicting fixtures

but it is clear? according to the documentation, y should run first (and it does in this case)

Nope. I see where that's referenced in the docs, but it's actually incorrect, and it's quite easy to prove. If those fixtures were module-scoped, and there was another test function with the order reversed, it would be ambiguous.

This comes up every once in a while, but nothing outside of the fixture scope/dependency system is intended to control order of operations, because it would fall apart too easily otherwise.

If you need something to happen before something else, the only reliable ways to do this are to have the latter thing request the former thing, to rely on the latter thing having a smaller scope than the former thing, or to have the former thing be autouse while making sure the latter thing isn't requested by any autouse things (that last one isn't too dependable, for the same reason).
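
Here's a minimal sketch of the scope-based option (the names are stand-ins): pytest instantiates higher-scoped fixtures before lower-scoped ones, so the module-scoped fixture runs first even when it's requested second.

import pytest

executed = []

@pytest.fixture(scope="module")
def former():
    executed.append("former")

@pytest.fixture
def latter():
    executed.append("latter")

# Higher-scoped fixtures are instantiated first, so "former" runs
# before "latter" despite the signature order.
def test_scope_controls_order(latter, former):
    assert executed == ["former", "latter"]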

The fact that it works is merely coincidence and a result of a deterministic fixture pipeline (deterministic in the literal sense).

I should make a note to fix that.

but this is wrong? you have fixture z inside of fixture x. the correct grouping that follows existing pytest promises (and my proposal) would be:

There is nothing to indicate z can't be in x unless certain tests aren't executed, which isn't always the case; hence the ambiguous ambiguity.

Like I said, unless you rely on scope or an explicit dependency chain (or autouse, but that's kinda iffy), then the rules become ambiguous.

averater

comment created time in a month

issue commentpytest-dev/pytest

Conflicting fixtures

if we are not optimizing, we can just use function-scoped fixtures...

You absolutely could use function-scoped fixtures exclusively.

However, I never said don't optimize. Optimization is just a secondary concern.

My primary goal is organization, and scopes larger than the function level can help build a mental model by showing how dependencies are shared. But structuring them this way also makes optimizing trivial, because I've already laid things out based on dependencies, so I just have to specify whether a fixture should run once for everything under its umbrella, or re-execute for every iteration of a certain scope.

For example, to adapt my solution above so that dog is only executed once for all those tests, I would just have to change its scope to "package".
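
A sketch of that change (the dog fixture's body is assumed here, since the earlier solution isn't shown in this excerpt):

import pytest

# Only the scope changes; with scope="package", dog executes once
# for every test in the package instead of once per function.
@pytest.fixture(scope="package")
def dog():
    return {"name": "dog"}  # stand-in for whatever dog originally returned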

Pytest doesn't really optimize for you. But it gives you the tools that make optimizing very easy through scopes and structures. In other words, it doesn't do the optimization work for you; you line the dominoes up, and it follows along. That's what's being talked about here (although I think I should probably update that bit in the docs to make it clearer).

Things also just get a little janky at the function level, so I try to stay away from that.

the order of operations for one test can be dictated by any other test

i don't think it can? how do you mean?

When there isn't a clear, explicit chain of dependencies in fixtures, pytest has to make assumptions about which fixture should execute before another. For example, in this case:

import pytest

@pytest.fixture
def x():
    pass

# y explicitly requests x, so x must execute first.
@pytest.fixture
def y(x):
    pass

def test_thing(y, x):
    pass

it's clear that y is dependent on x, and therefore x must execute first. But in this example:

import pytest

@pytest.fixture
def x():
    pass

@pytest.fixture
def y():
    pass

# Neither fixture requests the other, so the signature alone
# can't establish which executes first.
def test_thing(y, x):
    pass

it's unclear which should execute first, so pytest has to pick one. However, no matter how many other tests are also run, those other tests can't influence the order pytest chooses for this test, and none of the possible orders it could choose really conflict with each other.

The "only the tests that use this fixture" scope you're proposing would have pytest attempt to optimize when that fixture is run, so it's only active during the fewest tests possible while still only executing once. Because this scope effectively is providing instructions for where that fixture should be placed in the order of operations, using it would mean you should be able to expect that fixture to execute at a consistent point in that order every time, no matter what other tests are supposed to be running.

Consider the following fixtures and tests:

import pytest

# "only the tests that use this fixture" is the proposed (hypothetical)
# scope under discussion, not a real pytest scope.
@pytest.fixture(scope="only the tests that use this fixture")
def x():
    pass

@pytest.fixture(scope="only the tests that use this fixture")
def y():
    pass

@pytest.fixture(scope="only the tests that use this fixture")
def z():
    pass

def test_x(x):
    pass

def test_x_y(x, y):
    pass

def test_y(y):
    pass

def test_y_z(y, z):
    pass

def test_z(z):
    pass

def test_z_x(z, x):
    pass

The order of operations for just calling pytest can't really be determined through static analysis, but we can probably figure out all the possibilities (I won't list them, of course, because that's a lot of text). However, the groupings would be pretty strict for the following commands:

• pytest -k x, which runs these tests:

test::test_x
test::test_x_y
test::test_z_x

with these groupings:

<x>
    <test_x />
    <y>
        <test_x_y />
    </y>
    <z>
        <test_z_x />
    </z>
</x>

• pytest -k y, which runs these tests:

test::test_y
test::test_x_y
test::test_y_z

with these groupings:

<y>
    <test_y />
    <x>
        <test_x_y />
    </x>
    <z>
        <test_y_z />
    </z>
</y>

• pytest -k z, which runs these tests:

test::test_z
test::test_z_x
test::test_y_z

with these groupings:

<z>
    <test_z />
    <x>
        <test_z_x />
    </x>
    <y>
        <test_y_z />
    </y>
</z>

While the sorting within the outermost group may vary, the nesting is predictable, and these three groupings are in direct conflict with each other. What makes it unpredictable is that a given test can be absolutely certain of its order of operations, but only some of the time, because pytest would have to consider every test it's attempting to run before deciding what the order of operations for that test would be.

For example, in this case, if test_z_x knows test_y_z will also be running, but test_x_y won't, then it can know that z will be executing before x. But if all 3 will be running, it won't have any idea. It's effectively ambiguous ambiguity.

averater

comment created time in a month

push eventSalmonMode/salmonmode.github.io

Chris NeJame

commit sha c1647f835ed1996e7235a2a313bcbc6836406dc6

Update flows_js.html

view details

push time in a month

push eventSalmonMode/salmonmode.github.io

Chris NeJame

commit sha 3220bbab9972986e1577fcdcbdb18099bcc2186d

Update 2020-06-07-grey-box-less-is-more.md

view details

push time in a month

push eventSalmonMode/salmonmode.github.io

Chris NeJame

commit sha ba6a71ff1e53a9aee16ea93c4abd264faaa11984

Create flows_js.html

view details

push time in a month

push eventSalmonMode/salmonmode.github.io

Chris NeJame

commit sha 95f8e7bd64c6d5ce2a1fa0d8b8e8165649db31a4

Create 2020-06-07-grey-box-less-is-more.md

view details

push time in a month

push eventSalmonMode/salmonmode.github.io

Chris NeJame

commit sha fc9c68a92d7ded9ebcbe426910b728b061f6024b

Add files via upload

view details

push time in a month

push eventSalmonMode/salmonmode.github.io

Chris NeJame

commit sha 66fd7d2c328b4de96e07a12aeb62888cf74f18cf

Add files via upload

view details

push time in a month

push eventSalmonMode/salmonmode.github.io

Chris NeJame

commit sha 389d4f1981aa3234c1d5cf359077e56fdf55a3b6

Update 2020-05-31-one-assert-per-test.md

view details

push time in a month

issue commentpytest-dev/pytest

Conflicting fixtures

Not the same instance, no. My solution is just for providing the same fixture, not for providing the same instance of the fixture.

But that brings us back to relying on logical structures and the scopes tied to them.

Taking a look back at your original comment with more context, it looks like what you are looking for isn't really possible, or at least wouldn't result in something congruent.

Your examples up until now have only had two fixtures with the "only the tests that use this fixture" scope, but it doesn't seem to hold up beyond that. It quickly becomes too unpredictable, because it inherently means the order of operations for one test can be dictated by any other test.

averater

comment created time in a month
