Tests
Tests are automatically run via GitHub Actions. There are two workflows: build.yml, which runs the tests that do not require OpenAI calls, and openai.yml, which runs the tests that do call OpenAI services.
The first workflow (build.yml) is required to pass for all PRs and does not make any OpenAI calls. The second workflow (openai.yml) is required for changes that affect the OpenAI tests, does make actual LLM calls, and requires approval to run. When writing tests that require OpenAI calls, please use pytest.mark.skipif so that they run only when the openai package is installed. If a test needs an additional dependency, install that dependency for the corresponding Python version in openai.yml.
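For example, a minimal sketch of such a guard (the variable and test names here are illustrative, not the project's exact conventions):

```python
# Illustrative sketch: skip an OpenAI-dependent test when the openai
# package is not installed.
import pytest

try:
    import openai  # noqa: F401

    skip_openai_tests = False
except ImportError:
    skip_openai_tests = True


@pytest.mark.skipif(skip_openai_tests, reason="openai package is not installed")
def test_chat_completion():
    # ... test body that actually calls the LLM ...
    pass
```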
Make sure all tests pass; this is required for the build.yml checks to pass.
Running tests locally
To run tests, install the [test] option:
pip install -e ".[test]"
Then you can run the tests from the test folder using the following command:
pytest test
Tests for the autogen.agentchat.contrib module may be skipped automatically if the
required dependencies are not installed. Please consult the documentation for
each contrib module to see what dependencies are required.
See here for how to run notebook tests.
Skip flags for tests
- --skip-openai for skipping tests that require access to OpenAI services.
- --skip-docker for skipping tests that explicitly use docker.
- --skip-redis for skipping tests that require a Redis server.
For example, the following command will skip tests that require access to OpenAI and docker services:
pytest test --skip-openai --skip-docker
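These skip flags are custom pytest options. Below is a minimal sketch of how such flags are typically registered in a conftest.py, assuming tests are tagged with matching markers (e.g. @pytest.mark.openai); this is illustrative, not necessarily the project's exact implementation:

```python
# conftest.py -- illustrative sketch of custom skip flags.
import pytest


def pytest_addoption(parser):
    parser.addoption("--skip-openai", action="store_true", help="skip tests that call OpenAI services")
    parser.addoption("--skip-docker", action="store_true", help="skip tests that explicitly use docker")
    parser.addoption("--skip-redis", action="store_true", help="skip tests that require a Redis server")


def pytest_collection_modifyitems(config, items):
    # Apply a skip marker to tests tagged with the corresponding marker
    # when the matching --skip-* flag is given on the command line.
    for flag, marker in [("--skip-openai", "openai"),
                         ("--skip-docker", "docker"),
                         ("--skip-redis", "redis")]:
        if config.getoption(flag):
            skip = pytest.mark.skip(reason=f"{flag} given")
            for item in items:
                if marker in item.keywords:
                    item.add_marker(skip)
```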
Coverage
Any code you commit should not decrease coverage. To run all unit tests, install the [test] option:
pip install -e ".[test]"
coverage run -m pytest test
Then you can view the coverage report with coverage report -m (terminal output) or coverage html (HTML report, written to htmlcov/ by default).