Reverts pulumi/pulumi#15234 to disable the `--rerun-fails` option again.
We've done quite a bit of work on flaky tests (only a few rare cases
remain once https://github.com/pulumi/pulumi/pull/15642 is merged), so we
should be able to disable this option again now and keep it disabled as a
forcing function to keep tests non-flaky.
When `--rerun-fails` is specified and a test is re-run, the produced
coverage data only contains the coverage from the failed but re-executed
sub-tests. See https://github.com/gotestyourself/gotestsum/issues/274
This change sees how we fare with the option disabled, as we've recently
done some work to address test flakiness. We'll be relying on the much less
granular `./scripts/retry` for retries, which doesn't have the coverage
problem. The downside is that it re-runs all the tests.
Fixes https://github.com/pulumi/pulumi/issues/13826.
This brings Python in line with Node and Go in being its own module and
running codegen via gRPC.
Also includes some improvements to the node and go codegen interfaces
from review.
**Overview**
This re-enables tracking of code coverage.
For Go, there are two kinds of coverage at play:
unit test and integration test coverage.
Unit tests follow the usual pattern of running
`go test -cover -coverprofile=whatever.cov`.
For integration tests, we use the new integration test profiling support
[added in Go 1.20](https://go.dev/testing/coverage/).
In short, the way it works is:

```bash
# Build a coverage-instrumented binary:
go build -cover

# Set GOCOVERDIR to a directory and run the integration tests
# that will invoke this coverage-instrumented binary.
export GOCOVERDIR=$(pwd)/coverage
mkdir -p "$GOCOVERDIR"
go test ./tests

# $GOCOVERDIR will now be filled with coverage data
# from every invocation of the coverage-instrumented binary.
# Combine it into a single coverage file:
go tool covdata textfmt -i="$GOCOVERDIR" -o=out.cov

# The resulting file can be uploaded to codecov as-is.
```
The above replaces the prior, partially working hacks we had in place
to get coverage-instrumented binaries with `go test -c`
and a hijacked `TestMain`.
**Notable changes**
- `TestMain` hijacking is deleted from the Pulumi CLI.
We no longer need it to build coverage-instrumented binaries.
- ProgramTest no longer tracks or passes `PULUMI_TEST_COVERAGE_PATH`
because the Pulumi binary no longer accepts a `test.coverprofile` flag.
This information is now carried by the `GOCOVERDIR` environment variable
(see the sketch after this list).
- We add an `enable-coverage` parameter to the `ci-build-binaries`
workflow, mirroring some of the other workflows.
When it is true, the workflow produces coverage-instrumented binaries.
These binaries are then used by `ci-run-test`, which sets
`GOCOVERDIR` and merges the coverage results from it.
- Coverage configuration no longer counts tests, testdata,
and Protobuf-generated code against coverage.
- go-wrapper.sh:
Because we're no longer relying on the `go test -c` hack,
this no longer excludes Windows and language providers
from coverage tracking.
- go-test.py and go-wrapper.sh will include pulumi-language-go and
pulumi-language-nodejs in covered packages.
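To make the `GOCOVERDIR` hand-off described above concrete, here is a minimal sketch of how a test harness can spawn the instrumented CLI with the coverage directory propagated. The `runPulumi` helper is hypothetical and is not the actual ProgramTest code; it only assumes a `pulumi` binary built with `go build -cover` on the PATH.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPulumi invokes a coverage-instrumented `pulumi` binary. Because the
// binary was built with `go build -cover`, it writes its coverage counters
// into whatever directory GOCOVERDIR points at when the process exits.
func runPulumi(coverDir string, args ...string) error {
	cmd := exec.Command("pulumi", args...)
	cmd.Env = append(os.Environ(), "GOCOVERDIR="+coverDir)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := runPulumi(os.Getenv("GOCOVERDIR"), "version"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```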
**Other changes**
- go-test.py:
Fixed a bug where `args` parameters added for coverage were ignored.
Note that this change DOES NOT track coverage for calls made to Pulumi
packages by plugins downloaded from external sources,
e.g. provider plugins. Arguably, that's out of scope of coverage
tracking for the Pulumi repository.
Resolves #8615, #11419
These changes add support for gathering code coverage data during tests.
For tests that do not involve the Pulumi CLI, this is straightforward: all of
the ecosystems we target already support gathering coverage data, and we follow
the rules accordingly. Support for each language is broken out into its own
commit.
For tests that do involve the Pulumi CLI, the picture is a bit more complicated.
Go does not make it trivial to perform a coverage-instrumented build (`go build`
does not have a `-cover` flag, for example). In lieu of official support, we abuse
`go test -c` and `TestMain` to produce a build of the CLI that supports collecting
and reporting coverage data.
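For reference, here is a minimal sketch of that `go test -c` / `TestMain` pattern. It illustrates the general technique under the assumption of a test file sitting alongside the CLI's `main` package; it is not the exact code used in the repository.

```go
package main

import (
	"flag"
	"os"
	"testing"
)

// TestMain lets a binary built with `go test -c` double as the CLI. The
// -test.* flags (including -test.coverprofile) are parsed by the testing
// machinery; everything after them is handed to the real entry point.
func TestMain(m *testing.M) {
	flag.Parse()
	os.Args = append([]string{os.Args[0]}, flag.Args()...)

	// main must return (rather than call os.Exit) so that m.Run can still
	// flush the coverage profile before the process exits.
	main()
	os.Exit(m.Run())
}
```

The binary would then be built with something like `go test -c -coverpkg=./... -o pulumi ./cmd/pulumi` and invoked as `./pulumi -test.coverprofile=cli.cov -- version`, where the `--` keeps the CLI's own arguments out of the test flag parser.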
* Experiment with gotestsum and test timings
* Fix to locating the helper script
* Fix the code for installing gotestsum
* Try alternative installation method
* Use go to compute test stats; Python fails parsing time values
* Try version without v
* Try with fixed goreleaser config
* Fix test time correlation
* Try a stable test stat sort finally
* Use more accurate test duration aggregation
* Include python and auto-api tests in the Go timing counts
* Bring back TESTPARALLELISM
* Fix test compilation
* Only top 100 slow tests
* Try to fracture build matrix to fan out tests
* Do not run Publish Test Results on unsupported Mac
* Auto-create test-results-dir
* Fix new flaky test by polling for logs
* Try to move native tests to their own config
* Actually skip
* Do not fail on empty test-results folder
* Try again
* Try once more
* Integration test config is the crit path - make it smaller
* Squash underutilized test configurations
* Remove the test result summary box from PR - counts now incorrect
* Remove debugging step