pulumi/pkg/engine/lifecycletest/step_generator_test.go

package lifecycletest

import (
	"context"
	"testing"

	"github.com/blang/semver"

	. "github.com/pulumi/pulumi/pkg/v3/engine" //nolint:revive
	"github.com/pulumi/pulumi/pkg/v3/resource/deploy"
	"github.com/pulumi/pulumi/pkg/v3/resource/deploy/deploytest"
	"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
	"github.com/pulumi/pulumi/sdk/v3/go/common/resource/plugin"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// TestDuplicateURN tests that duplicate URNs are disallowed.
func TestDuplicateURN(t *testing.T) {
	t.Parallel()

	loaders := []*deploytest.ProviderLoader{
		deploytest.NewProviderLoader("pkgA", semver.MustParse("1.0.0"), func() (plugin.Provider, error) {
			return &deploytest.Provider{}, nil
		}),
	}
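
	// The bare deploytest.Provider is a stub, so the failures asserted below
	// should come from the engine's duplicate-URN checks, not from the provider.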
	programF := deploytest.NewLanguageRuntimeF(func(_ plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
		_, err := monitor.RegisterResource("pkgA:m:typA", "resA", true)
		require.NoError(t, err)
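
		// Registering a second resource with the same type and name yields the
		// same URN, which the engine must reject.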
		_, err = monitor.RegisterResource("pkgA:m:typA", "resA", true)
		assert.Error(t, err)

		// Reads share the same URN namespace as registrations, so make sure this also errors.
		_, _, err = monitor.ReadResource("pkgA:m:typA", "resA", "id", "", resource.PropertyMap{}, "", "", "", "")
		assert.Error(t, err)
		return nil
	})
	hostF := deploytest.NewPluginHostF(nil, nil, programF, loaders...)
	p := &TestPlan{
		Options: TestUpdateOptions{T: t, HostF: hostF},
	}

	project := p.GetProject()
	_, err := TestOp(Update).Run(project, p.GetTarget(t, nil), p.Options, false, p.BackendClient, nil)
	assert.Error(t, err)
}

// TestDuplicateAlias tests that multiple new resources may not claim to be aliases for the same old resource.
func TestDuplicateAlias(t *testing.T) {
	t.Parallel()

	loaders := []*deploytest.ProviderLoader{
		deploytest.NewProviderLoader("pkgA", semver.MustParse("1.0.0"), func() (plugin.Provider, error) {
			return &deploytest.Provider{}, nil
		}),
	}
	program := func(monitor *deploytest.ResourceMonitor) error {
		_, err := monitor.RegisterResource("pkgA:m:typA", "resA", true)
		assert.NoError(t, err)
		return nil
	}
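
	// runtimeF calls through the program variable, so reassigning program below
	// changes what the next update executes.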
	runtimeF := deploytest.NewLanguageRuntimeF(func(_ plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
		return program(monitor)
	})
	hostF := deploytest.NewPluginHostF(nil, nil, runtimeF, loaders...)
	p := &TestPlan{
		Options: TestUpdateOptions{T: t, HostF: hostF},
	}

	resURN := p.NewURN("pkgA:m:typA", "resA", "")
	project := p.GetProject()

	snap, err := TestOp(Update).RunStep(project, p.GetTarget(t, nil), p.Options, false, p.BackendClient, nil, "0")
	assert.NoError(t, err)
	program = func(monitor *deploytest.ResourceMonitor) error {
		_, err := monitor.RegisterResource("pkgA:m:typA", "resB", true, deploytest.ResourceOptions{
			AliasURNs: []resource.URN{resURN},
		})
		require.NoError(t, err)

		_, err = monitor.RegisterResource("pkgA:m:typA", "resC", true, deploytest.ResourceOptions{
			AliasURNs: []resource.URN{resURN},
		})
		assert.Error(t, err)
		return nil
	}
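
	// Run the reassigned program: resB claims the alias first, so resC's claim
	// on the same old URN should fail the second update.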
	_, err = TestOp(Update).RunStep(project, p.GetTarget(t, snap), p.Options, false, p.BackendClient, nil, "1")
	assert.Error(t, err)
}
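
// TestSecretMasked tests that an input registered as a secret remains secret in
// the snapshot even when the provider returns it as a plain-text output.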
func TestSecretMasked(t *testing.T) {
	t.Parallel()

	loaders := []*deploytest.ProviderLoader{
		deploytest.NewProviderLoader("pkgA", semver.MustParse("1.0.0"), func() (plugin.Provider, error) {
			return &deploytest.Provider{
				CreateF: func(_ context.Context, req plugin.CreateRequest) (plugin.CreateResponse, error) {
					// Return the secret value as an unmasked output. This should get masked by the engine.
					return plugin.CreateResponse{
						ID: "id",
						Properties: resource.PropertyMap{
							"shouldBeSecret": resource.NewStringProperty("bar"),
						},
						Status: resource.StatusOK,
					}, nil
				},
			}, nil
		}),
	}
	programF := deploytest.NewLanguageRuntimeF(func(_ plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
		_, err := monitor.RegisterResource("pkgA:m:typA", "resA", true, deploytest.ResourceOptions{
			Inputs: resource.PropertyMap{
				"shouldBeSecret": resource.MakeSecret(resource.NewStringProperty("bar")),
			},
		})
		require.NoError(t, err)
		return nil
	})
	hostF := deploytest.NewPluginHostF(nil, nil, programF, loaders...)
	p := &TestPlan{
		// Skip display tests because secrets are serialized with the blinding crypter and can't be restored.
		Options: TestUpdateOptions{T: t, HostF: hostF, SkipDisplayTests: true},
	}

	project := p.GetProject()
	snap, err := TestOp(Update).Run(project, p.GetTarget(t, nil), p.Options, false, p.BackendClient, nil)
	assert.NoError(t, err)
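
	// Resources[0] is the default provider; Resources[1] is resA, whose output
	// the engine should have re-marked as secret.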
	assert.NotNil(t, snap)
	if snap != nil {
		assert.True(t, snap.Resources[1].Outputs["shouldBeSecret"].IsSecret())
	}
}

// TestReadReplaceStep creates a resource and then replaces it with a read resource.
func TestReadReplaceStep(t *testing.T) {
	t.Parallel()

	// Create resource.
	newTestBuilder(t, nil).
		WithProvider("pkgA", "1.0.0", &deploytest.Provider{
			CreateF: func(_ context.Context, req plugin.CreateRequest) (plugin.CreateResponse, error) {
				return plugin.CreateResponse{
					ID:         "created-id",
					Properties: req.Properties,
					Status:     resource.StatusOK,
				}, nil
			},
		}).
		RunUpdate(func(info plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
			_, err := monitor.RegisterResource("pkgA:m:typA", "resA", true)
			assert.NoError(t, err)
			return nil
		}, true).
		Then(func(snap *deploy.Snapshot, err error) {
			assert.NoError(t, err)
			assert.NotNil(t, snap)
			assert.Nil(t, snap.VerifyIntegrity())
			assert.Len(t, snap.Resources, 2)
			assert.Equal(t, resource.URN("urn:pulumi:test::test::pkgA:m:typA::resA"), snap.Resources[1].URN)
			assert.False(t, snap.Resources[1].External)

			// ReadReplace the resource: acquire resA via ReadResource instead of
			// RegisterResource, which should leave the URN intact but mark the
			// resource as external.
			newTestBuilder(t, snap).
				WithProvider("pkgA", "1.0.0", &deploytest.Provider{
					ReadF: func(_ context.Context, req plugin.ReadRequest) (plugin.ReadResponse, error) {
						return plugin.ReadResponse{
							ReadResult: plugin.ReadResult{Outputs: resource.PropertyMap{}},
							Status:     resource.StatusOK,
						}, nil
					},
				}).
				RunUpdate(func(info plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
					_, _, err := monitor.ReadResource("pkgA:m:typA", "resA", "read-id", "", nil, "", "", "", "")
					assert.NoError(t, err)
					return nil
				}, false).
				Then(func(snap *deploy.Snapshot, err error) {
					assert.NoError(t, err)
					assert.NotNil(t, snap)
					assert.Nil(t, snap.VerifyIntegrity())
					assert.Len(t, snap.Resources, 2)
					assert.Equal(t, resource.URN("urn:pulumi:test::test::pkgA:m:typA::resA"), snap.Resources[1].URN)
					assert.True(t, snap.Resources[1].External)
				})
		})
}
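
// TestRelinquishStep tests that a resource created by the engine can later be
// acquired with ReadResource using the same ID, relinquishing ownership: the
// URN is unchanged but the state becomes external.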
func TestRelinquishStep(t *testing.T) {
	t.Parallel()

	const resourceID = "my-resource-id"

	newTestBuilder(t, nil).
		WithProvider("pkgA", "1.0.0", &deploytest.Provider{
			CreateF: func(_ context.Context, req plugin.CreateRequest) (plugin.CreateResponse, error) {
				// Should match the ReadResource resource ID.
				return plugin.CreateResponse{
					ID:         resourceID,
					Properties: req.Properties,
					Status:     resource.StatusOK,
				}, nil
			},
		}).
		RunUpdate(func(info plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
			_, err := monitor.RegisterResource("pkgA:m:typA", "resA", true)
			assert.NoError(t, err)
			return nil
		}, true).
		Then(func(snap *deploy.Snapshot, err error) {
			assert.NotNil(t, snap)
			assert.Nil(t, snap.VerifyIntegrity())
			assert.Len(t, snap.Resources, 2)
			assert.Equal(t, resource.URN("urn:pulumi:test::test::pkgA:m:typA::resA"), snap.Resources[1].URN)
			assert.False(t, snap.Resources[1].External)
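
			// Read the same ID back; the resource should flip from owned to
			// external without being replaced.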
			newTestBuilder(t, snap).
				WithProvider("pkgA", "1.0.0", &deploytest.Provider{
					ReadF: func(_ context.Context, req plugin.ReadRequest) (plugin.ReadResponse, error) {
						return plugin.ReadResponse{
							ReadResult: plugin.ReadResult{Outputs: resource.PropertyMap{}},
							Status:     resource.StatusOK,
						}, nil
					},
				}).
				RunUpdate(func(info plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
					_, _, err := monitor.ReadResource("pkgA:m:typA", "resA", resourceID, "", nil, "", "", "", "")
					assert.NoError(t, err)
					return nil
				}, true).
				Then(func(snap *deploy.Snapshot, err error) {
					assert.NoError(t, err)
					assert.NotNil(t, snap)
					assert.Nil(t, snap.VerifyIntegrity())
					assert.Len(t, snap.Resources, 2)
					assert.Equal(t, resource.URN("urn:pulumi:test::test::pkgA:m:typA::resA"), snap.Resources[1].URN)
					assert.True(t, snap.Resources[1].External)
				})
		})
}
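
// TestTakeOwnershipStep tests the reverse of relinquishing: a resource that
// entered the state via ReadResource is then registered, so the engine takes
// ownership and the state is no longer external.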
func TestTakeOwnershipStep(t *testing.T) {
	t.Parallel()

	newTestBuilder(t, nil).
		WithProvider("pkgA", "1.0.0", &deploytest.Provider{
			ReadF: func(_ context.Context, req plugin.ReadRequest) (plugin.ReadResponse, error) {
				return plugin.ReadResponse{
					ReadResult: plugin.ReadResult{Outputs: resource.PropertyMap{}},
					Status:     resource.StatusOK,
				}, nil
			},
		}).
		RunUpdate(func(info plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
			_, _, err := monitor.ReadResource("pkgA:m:typA", "resA", "my-resource-id", "", nil, "", "", "", "")
			assert.NoError(t, err)
			return nil
		}, false).
		Then(func(snap *deploy.Snapshot, err error) {
			assert.NoError(t, err)
			assert.NotNil(t, snap)
			assert.Nil(t, snap.VerifyIntegrity())
			assert.Len(t, snap.Resources, 2)
			assert.Equal(t, resource.URN("urn:pulumi:test::test::pkgA:m:typA::resA"), snap.Resources[1].URN)
			assert.True(t, snap.Resources[1].External)

			// Create new resource for this snapshot.
			newTestBuilder(t, snap).
				WithProvider("pkgA", "1.0.0", &deploytest.Provider{
					CreateF: func(_ context.Context, req plugin.CreateRequest) (plugin.CreateResponse, error) {
						// Should match the ReadF resource ID.
						return plugin.CreateResponse{
							ID:         "my-resource-id",
							Properties: req.Properties,
							Status:     resource.StatusOK,
						}, nil
					},
				}).
				RunUpdate(func(info plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
					_, err := monitor.RegisterResource("pkgA:m:typA", "resA", true)
					assert.NoError(t, err)
					return nil
				}, true).
				Then(func(snap *deploy.Snapshot, err error) {
					assert.NoError(t, err)
					assert.NotNil(t, snap)
					assert.Nil(t, snap.VerifyIntegrity())
					assert.Len(t, snap.Resources, 2)
					assert.Equal(t, resource.URN("urn:pulumi:test::test::pkgA:m:typA::resA"), snap.Resources[1].URN)
					assert.False(t, snap.Resources[1].External)
				})
		})
}
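
// TestInitErrorsStep tests that updating a resource whose state carries init
// errors clears those errors from the snapshot.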
func TestInitErrorsStep(t *testing.T) {
	t.Parallel()

	// Start from a snapshot whose resource already carries init errors.
	newTestBuilder(t, &deploy.Snapshot{
		Resources: []*resource.State{
			{
				Type:   "pulumi:providers:pkgA",
				URN:    "urn:pulumi:test::test::pulumi:providers:pkgA::default",
				Custom: true,
				Delete: false,
				ID:     "935b2216-aec5-4810-96fd-5f6eae57ac88",
			},
			{
				Type:     "pkgA:m:typA",
				URN:      "urn:pulumi:test::test::pkgA:m:typA::resA",
				Custom:   true,
				ID:       "my-resource-id",
				Provider: "urn:pulumi:test::test::pulumi:providers:pkgA::default::935b2216-aec5-4810-96fd-5f6eae57ac88",
				InitErrors: []string{
					`errors should yield an empty update to "continue" awaiting initialization.`,
				},
			},
		},
	}).
		WithProvider("pkgA", "1.0.0", &deploytest.Provider{
			CreateF: func(_ context.Context, req plugin.CreateRequest) (plugin.CreateResponse, error) {
				return plugin.CreateResponse{
					ID:         "my-resource-id",
					Properties: req.Properties,
					Status:     resource.StatusOK,
				}, nil
			},
		}).
		RunUpdate(func(info plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
			_, err := monitor.RegisterResource("pkgA:m:typA", "resA", true)
			assert.NoError(t, err)
			return nil
		}, false).
		Then(func(snap *deploy.Snapshot, err error) {
			assert.NoError(t, err)
			assert.NotNil(t, snap)
			assert.Nil(t, snap.VerifyIntegrity())
			assert.Len(t, snap.Resources, 2)
			assert.Equal(t, resource.URN("urn:pulumi:test::test::pkgA:m:typA::resA"), snap.Resources[1].URN)
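			// The update should have cleared the resource's init errors.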
			assert.Empty(t, snap.Resources[1].InitErrors)
		})
}
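
// TestReadNilOutputs tests that reading a resource whose provider returns no
// outputs fails with a does-not-exist error.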
func TestReadNilOutputs(t *testing.T) {
	t.Parallel()

	const resourceID = "my-resource-id"

	newTestBuilder(t, nil).
		WithProvider("pkgA", "1.0.0", &deploytest.Provider{
			ReadF: func(_ context.Context, req plugin.ReadRequest) (plugin.ReadResponse, error) {
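				// An empty ReadResponse (nil outputs) signals that the resource does not exist.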
				return plugin.ReadResponse{}, nil
			},
		}).
		RunUpdate(func(info plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
			_, _, err := monitor.ReadResource("pkgA:m:typA", "resA", resourceID, "", nil, "", "", "", "")
			assert.ErrorContains(t, err, "resource 'my-resource-id' does not exist")
			return nil
		}, true).
		Then(func(snap *deploy.Snapshot, err error) {
			assert.ErrorContains(t, err,
				"BAIL: step executor errored: step application failed: resource 'my-resource-id' does not exist")
		})
}