391fe212ae
This commit begins the process of documenting how we test Pulumi, from unit tests to language conformance. It's nowhere near complete yet, but the various sections have been stubbed out, and anything that already exists has been ported over and linked to other new pages where possible. This commit is a good example of where we can start to take advantage of Sphinx's rich cross-references, placing e.g. `README.md` files close to the things they document (such as `cmd/pulumi-test-language/README.md`) while folding them into a larger section on testing using Sphinx's tables of contents (TOCs).
README.md
(testing)=
# Testing
The surface area of pulumi/pulumi is large. A single release of "Pulumi"
encapsulates both a version of the deployment engine and CLI, built for multiple
platforms (e.g. Linux, macOS, and Windows), and a full set of language SDKs
(across TypeScript/Node.js, Python, Go, .NET, Java, YAML, and maybe more by the
time you are reading this). Automated testing is a critical part of making sure
that things work as expected without requiring undue manual intervention. The
repository contains a number of different types of tests that are run as
part of the development, CI/CD, and release processes.
(codegen-tests)=
## Code generation tests
:::{toctree}
:maxdepth: 1
:titlesonly:

/docs/architecture/testing/unit
/docs/architecture/testing/integration
/pkg/engine/lifecycletest/README
/cmd/pulumi-test-language/README
:::