pulumi/tests/performance
Julien c5842613a0
Add simple performance gate to integration tests (#17364)
Add a performance gate for PRs and merges to the master branch. The
performance tests can be run locally using `make test_performance`.

This PR adds an initial batch of performance tests, all using Python:

* TestPerfEmptyUpdate: an empty Python program that does nothing
* TestPerfManyComponentUpdate: a Python program that creates 100
resources
* TestPerfParentChainUpdate: a Python program that creates 100
resources, parented in a chain

More tests for other scenarios and languages can be added to
`tests/performance/performance_test.go`.

The tests are run in a separate GitHub Actions workflow so that we can
use binaries that are built without coverage instrumentation or race
detection, which could otherwise have an impact on performance. This
also ensures that we use the same setup in PRs and in the merge queue.

The thresholds used to determine if a test has passed or failed are
highly dependent on the GitHub Actions Runners. Initial thresholds have
been set by running the tests multiple times, taking the slowest run,
and adding 10% (rounded to 100ms). These thresholds are not perfect and
may need to be adjusted over time.

Sample timings (in seconds) from five runs of each test:

### TestPerfEmptyUpdate

* 5.06
* 5.11
* 4.82
* 4.95
* 5.74

### TestPerfManyComponentUpdate

* 16.7
* 17.1
* 16.62
* 16.28
* 16.29

### TestPerfParentChainUpdate

* 17.58
* 17.58
* 17.46
* 17.23
* 17.34

Fixes https://github.com/pulumi/pulumi/issues/15347
2024-11-04 21:26:27 +00:00
Directory contents:

* python
* README.md
* performance_test.go

README.md

# Performance Tests

This package contains basic performance tests for the Pulumi CLI. These tests are intended to run as part of pull requests and prevent us from introducing major performance regressions.

The cli-performance-metrics repository contains more comprehensive performance tests that run regularly as a cron job.

## Thresholds

The thresholds used to determine if a test has passed or failed are highly dependent on the GitHub Actions Runners. Initial thresholds have been set by running the tests multiple times, taking the slowest run, and adding 10% (rounded to 100ms). These thresholds are not perfect and may need to be adjusted over time.

## Running the Tests

From the root of the repository, run `make test_performance`.