//nolint:revive
package lifecycletest

import (
	"bytes"
	"context"
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"reflect"
	"regexp"
	"strings"
	"testing"

	"github.com/blang/semver"
	"github.com/mitchellh/copystructure"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/pulumi/pulumi/pkg/v3/backend"
	bdisplay "github.com/pulumi/pulumi/pkg/v3/backend/display"
	"github.com/pulumi/pulumi/pkg/v3/display"
	"github.com/pulumi/pulumi/pkg/v3/engine"
	. "github.com/pulumi/pulumi/pkg/v3/engine"
	"github.com/pulumi/pulumi/pkg/v3/resource/deploy"
	"github.com/pulumi/pulumi/pkg/v3/resource/deploy/deploytest"
	"github.com/pulumi/pulumi/pkg/v3/resource/deploy/providers"
	"github.com/pulumi/pulumi/pkg/v3/secrets/b64"
	"github.com/pulumi/pulumi/pkg/v3/util/cancel"
	"github.com/pulumi/pulumi/sdk/v3/go/common/apitype"
	"github.com/pulumi/pulumi/sdk/v3/go/common/diag/colors"
	"github.com/pulumi/pulumi/sdk/v3/go/common/promise"
	"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
	"github.com/pulumi/pulumi/sdk/v3/go/common/resource/config"
	"github.com/pulumi/pulumi/sdk/v3/go/common/resource/plugin"
	"github.com/pulumi/pulumi/sdk/v3/go/common/slice"
	"github.com/pulumi/pulumi/sdk/v3/go/common/tokens"
	"github.com/pulumi/pulumi/sdk/v3/go/common/util/cmdutil"
	"github.com/pulumi/pulumi/sdk/v3/go/common/util/contract"
	"github.com/pulumi/pulumi/sdk/v3/go/common/util/result"
	"github.com/pulumi/pulumi/sdk/v3/go/common/workspace"
)
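
// snapshotEqual checks that the snapshot rebuilt from the journal and the snapshot written by
// the snapshot manager contain the same resources and pending operations, ignoring ordering
// differences. Manifests and secrets managers are deliberately not compared.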
func snapshotEqual(journal, manager *deploy.Snapshot) error {
	// Just want to check the same operations and resources are counted, but order might be slightly different.
	if journal == nil && manager == nil {
		return nil
	}
	if journal == nil {
		return errors.New("journal snapshot is nil")
	}
	if manager == nil {
		return errors.New("manager snapshot is nil")
	}

	// Manifests and SecretsManagers are known to differ because we don't thread them through for the Journal code.

	if len(journal.PendingOperations) != len(manager.PendingOperations) {
		return errors.New("journal and manager pending operations differ")
	}

	for _, jop := range journal.PendingOperations {
		found := false
		for _, mop := range manager.PendingOperations {
			if reflect.DeepEqual(jop, mop) {
				found = true
				break
			}
		}
		if !found {
			return fmt.Errorf("journal and manager pending operations differ, %v not found in manager", jop)
		}
	}

	if len(journal.Resources) != len(manager.Resources) {
		return errors.New("journal and manager resources differ")
	}

	for _, jr := range journal.Resources {
		found := false
		for _, mr := range manager.Resources {
			if reflect.DeepEqual(jr, mr) {
				found = true
				break
			}
		}
		if !found {
			return fmt.Errorf("journal and manager resources differ, %v not found in manager", jr)
		}
	}

	return nil
}
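
// updateInfo is the in-memory UpdateInfo implementation used to drive engine operations in these tests.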
type updateInfo struct {
	project workspace.Project
	target  deploy.Target
}

func (u *updateInfo) GetRoot() string {
	// These tests run in-memory, so we don't have a real root. Just pretend we're at the filesystem root.
	return "/"
}

func (u *updateInfo) GetProject() *workspace.Project {
	return &u.project
}

func (u *updateInfo) GetTarget() *deploy.Target {
	return &u.target
}
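
// ImportOp returns a TestOp that runs an engine Import with the given import specs.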
func ImportOp(imports []deploy.Import) TestOp {
	return TestOp(func(info UpdateInfo, ctx *Context, opts UpdateOptions,
		dryRun bool,
	) (*deploy.Plan, display.ResourceChanges, error) {
		return Import(info, ctx, opts, imports, dryRun)
	})
}
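
// TestOp is an engine operation that the lifecycle test harness can plan or run; ImportOp
// above wraps the engine's Import in this shape.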
type TestOp func(UpdateInfo, *Context, UpdateOptions, bool) (*deploy.Plan, display.ResourceChanges, error)
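
// ValidateFunc is a validation callback invoked after an operation with the project, target,
// journal entries, fired events, and the operation's error; the error it returns replaces
// that error.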
type ValidateFunc func(project workspace.Project, target deploy.Target, entries JournalEntries,
	events []Event, err error) error
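
// Plan runs the operation as a dry run (preview) and returns the resulting plan.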
func (op TestOp) Plan(project workspace.Project, target deploy.Target, opts TestUpdateOptions,
	backendClient deploy.BackendClient, validate ValidateFunc,
) (*deploy.Plan, error) {
	plan, _, err := op.runWithContext(context.Background(), project, target, opts, true, backendClient, validate, "")
	return plan, err
}
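
// Run executes the operation against the target and returns the resulting snapshot (nil for a dry run).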
func (op TestOp) Run(project workspace.Project, target deploy.Target, opts TestUpdateOptions,
	dryRun bool, backendClient deploy.BackendClient, validate ValidateFunc,
) (*deploy.Snapshot, error) {
	return op.RunStep(project, target, opts, dryRun, backendClient, validate, "")
}
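
// RunStep is Run with an explicit step name, which is used to key the recorded display output
// under testdata for multi-step tests.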
func (op TestOp) RunStep(project workspace.Project, target deploy.Target, opts TestUpdateOptions,
	dryRun bool, backendClient deploy.BackendClient, validate ValidateFunc, name string,
) (*deploy.Snapshot, error) {
	return op.RunWithContextStep(context.Background(), project, target, opts, dryRun, backendClient, validate, name)
}
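
// RunWithContext is Run with a caller-supplied context; cancelling it cancels the operation.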
func (op TestOp) RunWithContext(
	callerCtx context.Context, project workspace.Project,
	target deploy.Target, opts TestUpdateOptions, dryRun bool,
	backendClient deploy.BackendClient, validate ValidateFunc,
) (*deploy.Snapshot, error) {
	return op.RunWithContextStep(callerCtx, project, target, opts, dryRun, backendClient, validate, "")
}
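
// RunWithContextStep combines RunWithContext and RunStep: a cancellable caller context plus an
// explicit step name.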
func (op TestOp) RunWithContextStep(
	callerCtx context.Context, project workspace.Project,
	target deploy.Target, opts TestUpdateOptions, dryRun bool,
	backendClient deploy.BackendClient, validate ValidateFunc, name string,
) (*deploy.Snapshot, error) {
	_, snap, err := op.runWithContext(callerCtx, project, target, opts, dryRun, backendClient, validate, name)
	return snap, err
}
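
// runWithContext drives a single engine operation end to end: it wires up cancellation and
// event draining, records the deployment through both the journal and a snapshot manager,
// runs the operation and any validator, checks the display output, and verifies that every
// snapshot the journal could have produced is internally consistent.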
func (op TestOp) runWithContext(
	callerCtx context.Context, project workspace.Project,
	target deploy.Target, opts TestUpdateOptions, dryRun bool,
	backendClient deploy.BackendClient, validate ValidateFunc, name string,
) (*deploy.Plan, *deploy.Snapshot, error) {
	// Create an appropriate update info and context.
	info := &updateInfo{project: project, target: target}

	cancelCtx, cancelSrc := cancel.NewContext(context.Background())
	done := make(chan bool)
	defer close(done)
	go func() {
		select {
		case <-callerCtx.Done():
			cancelSrc.Cancel()
		case <-done:
		}
	}()

	events := make(chan Event)
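	// Record the deployment twice: once through the journal and once through a real snapshot
	// manager backed by an in-memory persister, so the two resulting snapshots can be compared
	// once the operation finishes.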
	journal := NewJournal()
	persister := &backend.InMemoryPersister{}
	secretsManager := b64.NewBase64SecretsManager()
	snapshotManager := backend.NewSnapshotManager(persister, secretsManager, target.Snapshot)

	combined := &CombinedManager{
		Managers: []SnapshotManager{journal, snapshotManager},
	}

	ctx := &Context{
		Cancel:          cancelCtx,
		Events:          events,
		SnapshotManager: combined,
		BackendClient:   backendClient,
	}

	updateOpts := opts.Options()
	defer func() {
		if updateOpts.Host != nil {
			contract.IgnoreClose(updateOpts.Host)
		}
	}()

	// Begin draining events.
	firedEventsPromise := promise.Run(func() ([]Event, error) {
		var firedEvents []Event
		for e := range events {
			firedEvents = append(firedEvents, e)
		}
		return firedEvents, nil
	})

	// Run the step and its validator.
	plan, _, opErr := op(info, ctx, updateOpts, dryRun)
	close(events)
	closeErr := combined.Close()

	// Wait for the events to finish. You'd think this would cancel with the callerCtx but tests explicitly use that for
	// the deployment context, not expecting it to have any effect on the test code here. See
	// https://github.com/pulumi/pulumi/issues/14588 for what happens if you try to use callerCtx here.
	firedEvents, err := firedEventsPromise.Result(context.Background())
	if err != nil {
		return nil, nil, err
	}

	if validate != nil {
		opErr = validate(project, target, journal.Entries(), firedEvents, opErr)
	}

	errs := []error{opErr, closeErr}
	if dryRun {
		return plan, nil, errors.Join(errs...)
	}
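
	// Render the fired events through the display code and compare them against the recorded
	// output under testdata/output/<test name>/<step name>, sanitizing both names so they are
	// safe to use as directory names.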
	if !opts.SkipDisplayTests {
		// base64 encode the name if it contains special characters
		if ok, err := regexp.MatchString(`^[0-9A-Za-z-_]*$`, name); !ok && name != "" {
			assert.NoError(opts.T, err)
			name = base64.StdEncoding.EncodeToString([]byte(name))
			if len(name) > 64 {
				name = name[0:64]
			}
		}
		testName := opts.T.Name()
		if ok, _ := regexp.MatchString(`^[0-9A-Za-z-_]*$`, testName); !ok {
			testName = strings.ReplaceAll(testName, "[", "_")
			testName = strings.ReplaceAll(testName, "]", "_")
			testName = strings.ReplaceAll(testName, `"`, "_")
			if ok, _ := regexp.MatchString(`^[0-9A-Za-z-_]*$`, testName); !ok {
				assert.NoError(opts.T, err)
				testName = base64.StdEncoding.EncodeToString([]byte(testName))
			}
		}
		assertDisplay(opts.T, firedEvents, filepath.Join("testdata", "output", testName, name))
	}

	entries := journal.Entries()
	// Check that each possible snapshot we could have created is valid
	var snap *deploy.Snapshot
	for i := 0; i <= len(entries); i++ {
		var err error
		snap, err = entries[0:i].Snap(target.Snapshot)
		if err != nil {
			// if any snapshot fails to create just return this error, don't keep going
			errs = append(errs, err)
			snap = nil
			break
		}
		err = snap.VerifyIntegrity()
		if err != nil {
			// Likewise as soon as one snapshot fails to validate stop checking
			errs = append(errs, err)
			snap = nil
			break
		}
	}

	// Verify the saved snapshot from the SnapshotManager is the same(ish) as that from the Journal
	errs = append(errs, snapshotEqual(snap, persister.Snap))

	return nil, snap, errors.Join(errs...)
}
|
|
|
|
|
Add display to the engine tests (#16050)
We want to add more test coverage to the display code. The best way to
do that is to add it to the engine tests, that already cover most of the
pulumi functionality.
It's probably not really possible to review all of the output, but at
least it gives us a baseline, which we can work with.
There's a couple of tests that are flaky for reasons I don't quite
understand yet. I marked them as to skip and we can look at them later.
I'd rather get in the baseline tests sooner, rather than spending a
bunch of time looking at that. The output differences also seem very
minor, so not super concerning.
The biggest remaining issue is that this doesn't interact well with the
Chdir we're doing in the engine. We could either pass the CWD through,
or just try to get rid of that Chdir. So this should only be merged
after https://github.com/pulumi/pulumi/pull/15607.
I've tried to split this into a few commits, separating out adding the
testdata, so it's hopefully a little easier to review, even though the
PR is still quite large.
One other thing to note is that we're comparing that the output has all
the same lines, and not that it is exactly the same. Because of how the
engine is implemented, there's a bunch of race conditions otherwise,
that would make us have to skip a bunch of tests, just because e.g.
resource A is sometimes deleted before resource B and sometimes it's the
other way around.
The biggest downside of that is that running with `PULUMI_ACCEPT` will
produce a diff even when there are no changes. Hopefully we won't have
to run that way too often though, so it might not be a huge issue?
---------
Co-authored-by: Fraser Waters <fraser@pulumi.com>
2024-05-13 07:18:25 +00:00
|
|
|
// We're just checking that we have the right number of events and
|
|
|
|
// that they have the expected types. We don't do a deep comparison
|
|
|
|
// here, because all that matters is that we have the same events in
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
Normalize methods on plugin.Provider to the form:
```go
Method(context.Context, MethodRequest) (MethodResponse, error)
```
This provides a more consistent and forwards compatible interface for
each of our methods.
---
I'm motivated to work on this because the bridge maintains a copy of
this interface: `ProviderWithContext`. This doubles the pain of dealing
with any breaking change and this PR would allow me to remove the extra
interface. I'm willing to fix consumers of `plugin.Provider` in
`pulumi/pulumi`, but I wanted to make sure that we would be willing to
merge this PR if I get it green.
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
Fixes # (issue)
## Checklist
- [ ] I have run `make tidy` to update any new dependencies
- [ ] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [ ] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-06-07 19:47:49 +00:00
|
|
|
// some order. The non-display tests are responsible for actually
|
Add display to the engine tests (#16050)
We want to add more test coverage to the display code. The best way to
do that is to add it to the engine tests, that already cover most of the
pulumi functionality.
It's probably not really possible to review all of the output, but at
least it gives us a baseline, which we can work with.
There's a couple of tests that are flaky for reasons I don't quite
understand yet. I marked them as to skip and we can look at them later.
I'd rather get in the baseline tests sooner, rather than spending a
bunch of time looking at that. The output differences also seem very
minor, so not super concerning.
The biggest remaining issue is that this doesn't interact well with the
Chdir we're doing in the engine. We could either pass the CWD through,
or just try to get rid of that Chdir. So this should only be merged
after https://github.com/pulumi/pulumi/pull/15607.
I've tried to split this into a few commits, separating out adding the
testdata, so it's hopefully a little easier to review, even though the
PR is still quite large.
One other thing to note is that we're comparing that the output has all
the same lines, and not that it is exactly the same. Because of how the
engine is implemented, there's a bunch of race conditions otherwise,
that would make us have to skip a bunch of tests, just because e.g.
resource A is sometimes deleted before resource B and sometimes it's the
other way around.
The biggest downside of that is that running with `PULUMI_ACCEPT` will
produce a diff even when there are no changes. Hopefully we won't have
to run that way too often though, so it might not be a huge issue?
---------
Co-authored-by: Fraser Waters <fraser@pulumi.com>
2024-05-13 07:18:25 +00:00
|
|
|
// checking the events properly.
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
Normalize methods on plugin.Provider to the form:
```go
Method(context.Context, MethodRequest) (MethodResponse, error)
```
This provides a more consistent and forwards compatible interface for
each of our methods.
---
I'm motivated to work on this because the bridge maintains a copy of
this interface: `ProviderWithContext`. This doubles the pain of dealing
with any breaking change and this PR would allow me to remove the extra
interface. I'm willing to fix consumers of `plugin.Provider` in
`pulumi/pulumi`, but I wanted to make sure that we would be willing to
merge this PR if I get it green.
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
Fixes # (issue)
## Checklist
- [ ] I have run `make tidy` to update any new dependencies
- [ ] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [ ] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-06-07 19:47:49 +00:00
|
|
|
func compareEvents(t testing.TB, expected, actual []engine.Event) {
|
|
|
|
encountered := make(map[int]struct{})
|
Add display to the engine tests (#16050)
We want to add more test coverage to the display code. The best way to
do that is to add it to the engine tests, that already cover most of the
pulumi functionality.
It's probably not really possible to review all of the output, but at
least it gives us a baseline, which we can work with.
There's a couple of tests that are flaky for reasons I don't quite
understand yet. I marked them as to skip and we can look at them later.
I'd rather get in the baseline tests sooner, rather than spending a
bunch of time looking at that. The output differences also seem very
minor, so not super concerning.
The biggest remaining issue is that this doesn't interact well with the
Chdir we're doing in the engine. We could either pass the CWD through,
or just try to get rid of that Chdir. So this should only be merged
after https://github.com/pulumi/pulumi/pull/15607.
I've tried to split this into a few commits, separating out adding the
testdata, so it's hopefully a little easier to review, even though the
PR is still quite large.
One other thing to note is that we're comparing that the output has all
the same lines, and not that it is exactly the same. Because of how the
engine is implemented, there's a bunch of race conditions otherwise,
that would make us have to skip a bunch of tests, just because e.g.
resource A is sometimes deleted before resource B and sometimes it's the
other way around.
The biggest downside of that is that running with `PULUMI_ACCEPT` will
produce a diff even when there are no changes. Hopefully we won't have
to run that way too often though, so it might not be a huge issue?
---------
Co-authored-by: Fraser Waters <fraser@pulumi.com>
2024-05-13 07:18:25 +00:00
|
|
|
if len(expected) != len(actual) {
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
Normalize methods on plugin.Provider to the form:
```go
Method(context.Context, MethodRequest) (MethodResponse, error)
```
This provides a more consistent and forwards compatible interface for
each of our methods.
---
I'm motivated to work on this because the bridge maintains a copy of
this interface: `ProviderWithContext`. This doubles the pain of dealing
with any breaking change and this PR would allow me to remove the extra
interface. I'm willing to fix consumers of `plugin.Provider` in
`pulumi/pulumi`, but I wanted to make sure that we would be willing to
merge this PR if I get it green.
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
Fixes # (issue)
## Checklist
- [ ] I have run `make tidy` to update any new dependencies
- [ ] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [ ] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-06-07 19:47:49 +00:00
|
|
|
t.Logf("expected %d events, got %d", len(expected), len(actual))
|
|
|
|
t.Fail()
|
Add display to the engine tests (#16050)
We want to add more test coverage to the display code. The best way to
do that is to add it to the engine tests, that already cover most of the
pulumi functionality.
It's probably not really possible to review all of the output, but at
least it gives us a baseline, which we can work with.
There's a couple of tests that are flaky for reasons I don't quite
understand yet. I marked them as to skip and we can look at them later.
I'd rather get in the baseline tests sooner, rather than spending a
bunch of time looking at that. The output differences also seem very
minor, so not super concerning.
The biggest remaining issue is that this doesn't interact well with the
Chdir we're doing in the engine. We could either pass the CWD through,
or just try to get rid of that Chdir. So this should only be merged
after https://github.com/pulumi/pulumi/pull/15607.
I've tried to split this into a few commits, separating out adding the
testdata, so it's hopefully a little easier to review, even though the
PR is still quite large.
One other thing to note is that we're comparing that the output has all
the same lines, and not that it is exactly the same. Because of how the
engine is implemented, there's a bunch of race conditions otherwise,
that would make us have to skip a bunch of tests, just because e.g.
resource A is sometimes deleted before resource B and sometimes it's the
other way around.
The biggest downside of that is that running with `PULUMI_ACCEPT` will
produce a diff even when there are no changes. Hopefully we won't have
to run that way too often though, so it might not be a huge issue?
---------
Co-authored-by: Fraser Waters <fraser@pulumi.com>
2024-05-13 07:18:25 +00:00
|
|
|
}
|
|
|
|
for _, e := range expected {
|
|
|
|
found := false
|
|
|
|
for i, a := range actual {
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
Normalize methods on plugin.Provider to the form:
```go
Method(context.Context, MethodRequest) (MethodResponse, error)
```
This provides a more consistent and forwards compatible interface for
each of our methods.
---
I'm motivated to work on this because the bridge maintains a copy of
this interface: `ProviderWithContext`. This doubles the pain of dealing
with any breaking change and this PR would allow me to remove the extra
interface. I'm willing to fix consumers of `plugin.Provider` in
`pulumi/pulumi`, but I wanted to make sure that we would be willing to
merge this PR if I get it green.
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
Fixes # (issue)
## Checklist
- [ ] I have run `make tidy` to update any new dependencies
- [ ] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [ ] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-06-07 19:47:49 +00:00
|
|
|
if _, ok := encountered[i]; ok {
|
Add display to the engine tests (#16050)
We want to add more test coverage to the display code. The best way to
do that is to add it to the engine tests, that already cover most of the
pulumi functionality.
It's probably not really possible to review all of the output, but at
least it gives us a baseline, which we can work with.
There's a couple of tests that are flaky for reasons I don't quite
understand yet. I marked them as to skip and we can look at them later.
I'd rather get in the baseline tests sooner, rather than spending a
bunch of time looking at that. The output differences also seem very
minor, so not super concerning.
The biggest remaining issue is that this doesn't interact well with the
Chdir we're doing in the engine. We could either pass the CWD through,
or just try to get rid of that Chdir. So this should only be merged
after https://github.com/pulumi/pulumi/pull/15607.
I've tried to split this into a few commits, separating out adding the
testdata, so it's hopefully a little easier to review, even though the
PR is still quite large.
One other thing to note is that we're comparing that the output has all
the same lines, and not that it is exactly the same. Because of how the
engine is implemented, there's a bunch of race conditions otherwise,
that would make us have to skip a bunch of tests, just because e.g.
resource A is sometimes deleted before resource B and sometimes it's the
other way around.
The biggest downside of that is that running with `PULUMI_ACCEPT` will
produce a diff even when there are no changes. Hopefully we won't have
to run that way too often though, so it might not be a huge issue?
---------
Co-authored-by: Fraser Waters <fraser@pulumi.com>
2024-05-13 07:18:25 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
if a.Type == e.Type {
|
|
|
|
found = true
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
Normalize methods on plugin.Provider to the form:
```go
Method(context.Context, MethodRequest) (MethodResponse, error)
```
This provides a more consistent and forwards compatible interface for
each of our methods.
---
I'm motivated to work on this because the bridge maintains a copy of
this interface: `ProviderWithContext`. This doubles the pain of dealing
with any breaking change and this PR would allow me to remove the extra
interface. I'm willing to fix consumers of `plugin.Provider` in
`pulumi/pulumi`, but I wanted to make sure that we would be willing to
merge this PR if I get it green.
2024-06-07 19:47:49 +00:00
				encountered[i] = struct{}{}
				break
			}
		}
		if !found {
			t.Logf("expected event %v not found", e)
			t.Fail()
		}
	}
	for i, e := range actual {
		if _, ok := encountered[i]; ok {
			continue
		}
		t.Logf("did not expect event %v", e)
	}
}
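A quick illustration of the matching behaviour above (the event variables are hypothetical placeholders, not values defined in this file): each expected event is paired with an unmatched actual event of the same Type, so reordering does not fail the check, but a missing expected event does.

```go
// Hypothetical sketch: these two streams compare as equal because matching
// is by event Type only and ignores ordering.
expected := []engine.Event{resourcePreEvent, resOutputsEvent, summaryEvent}
actual := []engine.Event{resOutputsEvent, resourcePreEvent, summaryEvent}
compareEvents(t, expected, actual)
```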
func loadEvents(path string) (events []engine.Event, err error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("opening '%v': %w", path, err)
	}
	defer contract.IgnoreClose(f)

	dec := json.NewDecoder(f)
	for {
		var jsonEvent apitype.EngineEvent
		if err = dec.Decode(&jsonEvent); err != nil {
			if err == io.EOF {
				break
			}

			return nil, fmt.Errorf("decoding event %d: %w", len(events), err)
		}

		event, err := bdisplay.ConvertJSONEvent(jsonEvent)
		if err != nil {
			return nil, fmt.Errorf("converting event %d: %w", len(events), err)
		}
		events = append(events, event)
	}

	// If there are no events or if the event stream does not terminate with a cancel event,
	// synthesize one here.
	if len(events) == 0 || events[len(events)-1].Type != engine.CancelEvent {
		events = append(events, engine.NewCancelEvent())
	}

	return events, nil
}
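As a usage sketch (the testdata path and the liveEvents variable are illustrative assumptions, not code from this file), a test can replay a recorded stream and compare it against freshly produced events:

```go
// Hypothetical usage of loadEvents; the snapshot directory layout is assumed.
recorded, err := loadEvents(filepath.Join("testdata", "output", t.Name(), "eventstream.json"))
require.NoError(t, err)
compareEvents(t, recorded, liveEvents) // liveEvents: events captured from the run under test
```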

func assertDisplay(t testing.TB, events []Event, path string) {
	var expectedStdout []byte
	var expectedStderr []byte
	accept := cmdutil.IsTruthy(os.Getenv("PULUMI_ACCEPT"))
	if !accept {
		var err error
		expectedStdout, err = os.ReadFile(filepath.Join(path, "diff.stdout.txt"))
		require.NoError(t, err)

		expectedStderr, err = os.ReadFile(filepath.Join(path, "diff.stderr.txt"))
		require.NoError(t, err)
	}

	eventChannel, doneChannel := make(chan engine.Event), make(chan bool)
	var stdout bytes.Buffer
	var stderr bytes.Buffer

	var expectedEvents []engine.Event
	if accept {
		// Write out the events to a file for acceptance testing.
		err := os.MkdirAll(path, 0o700)
		require.NoError(t, err)

		f, err := os.OpenFile(filepath.Join(path, "eventstream.json"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o600)
		require.NoError(t, err)
		defer f.Close()

		enc := json.NewEncoder(f)
		for _, e := range events {
			apiEvent, err := bdisplay.ConvertEngineEvent(e, false)
			require.NoError(t, err)

			err = enc.Encode(apiEvent)
			require.NoError(t, err)
		}

		expectedEvents = events
	} else {
		var err error
		expectedEvents, err = loadEvents(filepath.Join(path, "eventstream.json"))
		require.NoError(t, err)

		compareEvents(t, expectedEvents, events)
	}

	// ShowProgressEvents
	go bdisplay.ShowDiffEvents("test", eventChannel, doneChannel, bdisplay.Options{
		Color:                colors.Raw,
		ShowSameResources:    true,
		ShowReplacementSteps: true,
		ShowReads:            true,
		Stdout:               &stdout,
		Stderr:               &stderr,
		DeterministicOutput:  true,
	})

	for _, e := range expectedEvents {
		eventChannel <- e
	}
	<-doneChannel

	if !accept {
		assert.Equal(t, string(expectedStdout), stdout.String())
		assert.Equal(t, string(expectedStderr), stderr.String())
	} else {
		err := os.MkdirAll(path, 0o700)
		require.NoError(t, err)

		err = os.WriteFile(filepath.Join(path, "diff.stdout.txt"), stdout.Bytes(), 0o600)
		require.NoError(t, err)

		err = os.WriteFile(filepath.Join(path, "diff.stderr.txt"), stderr.Bytes(), 0o600)
		require.NoError(t, err)
	}

	expectedStdout = []byte{}
	expectedStderr = []byte{}
	if !accept {
		var err error
		expectedStdout, err = os.ReadFile(filepath.Join(path, "progress.stdout.txt"))
		require.NoError(t, err)

		expectedStderr, err = os.ReadFile(filepath.Join(path, "progress.stderr.txt"))
		require.NoError(t, err)
	}

	eventChannel, doneChannel = make(chan engine.Event), make(chan bool)
	stdout.Reset()
	stderr.Reset()

	go bdisplay.ShowProgressEvents(
		"test", apitype.UpdateUpdate,
		tokens.MustParseStackName("stack"), "project", "http://example.com",
		eventChannel, doneChannel, bdisplay.Options{
			Color:                colors.Raw,
			ShowSameResources:    true,
			ShowReplacementSteps: true,
			ShowReads:            true,
			SuppressProgress:     true,
			Stdout:               &stdout,
			Stderr:               &stderr,
			DeterministicOutput:  true,
		}, false)

	for _, e := range expectedEvents {
		eventChannel <- e
	}
	<-doneChannel

	if !accept {
		assert.Equal(t, string(expectedStdout), stdout.String())
		assert.Equal(t, string(expectedStderr), stderr.String())
	} else {
		err := os.WriteFile(filepath.Join(path, "progress.stdout.txt"), stdout.Bytes(), 0o600)
		require.NoError(t, err)

		err = os.WriteFile(filepath.Join(path, "progress.stderr.txt"), stderr.Bytes(), 0o600)
		require.NoError(t, err)
	}
}
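The snapshot files that assertDisplay consumes can be regenerated by running with PULUMI_ACCEPT set to a truthy value; a minimal sketch of that flow inside a test (the testdata path and capturedEvents are illustrative assumptions):

```go
// Hypothetical sketch: with PULUMI_ACCEPT truthy, assertDisplay rewrites
// eventstream.json, diff.*.txt and progress.*.txt instead of comparing.
t.Setenv("PULUMI_ACCEPT", "true")
assertDisplay(t, capturedEvents, filepath.Join("testdata", "output", t.Name()))
```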

type TestStep struct {
	Op            TestOp
	ExpectFailure bool
	SkipPreview   bool
	Validate      ValidateFunc
}

func (t *TestStep) ValidateAnd(f ValidateFunc) {
	o := t.Validate
	t.Validate = func(project workspace.Project, target deploy.Target, entries JournalEntries,
		events []Event, err error,
	) error {
		r := o(project, target, entries, events, err)
		if r != nil {
			return r
		}
		return f(project, target, entries, events, err)
	}
}
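For example, a test can layer an extra check onto an existing step without discarding its original validator; this is a sketch, and the particular assertions are illustrative only:

```go
// Hypothetical composition of validators via TestStep.ValidateAnd.
step := TestStep{
	Op: Update,
	Validate: func(project workspace.Project, target deploy.Target, entries JournalEntries,
		events []Event, err error,
	) error {
		require.NoError(t, err)
		return nil
	},
}
step.ValidateAnd(func(project workspace.Project, target deploy.Target, entries JournalEntries,
	events []Event, err error,
) error {
	snap, snapErr := entries.Snap(target.Snapshot)
	require.NoError(t, snapErr)
	assert.NotEmpty(t, snap.Resources) // second check runs after the original one
	return nil
})
```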

// TestUpdateOptions is UpdateOptions for a TestPlan.
type TestUpdateOptions struct {
	UpdateOptions
	// a factory to produce a plugin host for an update operation.
	HostF            deploytest.PluginHostFactory
	T                testing.TB
	SkipDisplayTests bool
}

// Options produces UpdateOptions for an update operation.
func (o TestUpdateOptions) Options() UpdateOptions {
	opts := o.UpdateOptions
	if o.HostF != nil {
		opts.Host = o.HostF()
	}
	return opts
}

type TestPlan struct {
	Project        string
	Stack          string
	Runtime        string
	RuntimeOptions map[string]interface{}
	Config         config.Map
	Decrypter      config.Decrypter
	BackendClient  deploy.BackendClient
	Options        TestUpdateOptions
	Steps          []TestStep

	// Count the number of times Run is called on this plan. Used to generate unique names for display snapshot tests.
	run int
}
Add tokens.StackName (#14487)
This adds a new type `tokens.StackName` which is a relatively strongly
typed container for a stack name. The only weakly typed aspect of it is
Go will always allow the "zero" value to be created for a struct, which
for a stack name is the empty string which is invalid. To prevent
introducing unexpected empty strings when working with stack names the
`String()` method will panic for zero initialized stack names.
Apart from the zero value, all other instances of `StackName` are via
`ParseStackName` which returns a descriptive error if the string is not
valid.
This PR only updates "pkg/" to use this type. There are a number of
places in "sdk/" which could do with this type as well, but there's no
harm in doing a staggered roll out, and some parts of "sdk/" are user
facing and will probably have to stay on the current `tokens.Name` and
`tokens.QName` types.
There are two places in the system where we panic on invalid stack
names, both in the http backend. This _should_ be fine as we've had long
standing validation that stacks created in the service are valid stack
names.
Just in case people have managed to introduce invalid stack names, there
is the `PULUMI_DISABLE_VALIDATION` environment variable which will turn
off the validation _and_ panicking for stack names. Users can use that to
temporarily disable the validation and continue working, but it should
only be seen as a temporary measure. If they have invalid names they
should rename them, or if they think they should be valid raise an issue
with us to change the validation code.
2023-11-15 07:44:54 +00:00
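A minimal sketch of the API this describes (the error-handling branch, the userSuppliedName variable, and the error message text are illustrative assumptions, not code from this file):

```go
// ParseStackName validates the name; MustParseStackName panics on invalid input.
stack, err := tokens.ParseStackName(userSuppliedName)
if err != nil {
	return fmt.Errorf("invalid stack name %q: %w", userSuppliedName, err)
}
_ = stack.String() // safe: stack came from ParseStackName, not a zero value
```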
func (p *TestPlan) getNames() (stack tokens.StackName, project tokens.PackageName, runtime string) {
	project = tokens.PackageName(p.Project)
	if project == "" {
		project = "test"
	}
	runtime = p.Runtime
	if runtime == "" {
		runtime = "test"
	}
	stack = tokens.MustParseStackName("test")
	if p.Stack != "" {
		stack = tokens.MustParseStackName(p.Stack)
	}
	return stack, project, runtime
}

func (p *TestPlan) NewURN(typ tokens.Type, name string, parent resource.URN) resource.URN {
	stack, project, _ := p.getNames()
	var pt tokens.Type
	if parent != "" {
		pt = parent.QualifiedType()
	}
	return resource.NewURN(stack.Q(), project, pt, typ, name)
}

func (p *TestPlan) NewProviderURN(pkg tokens.Package, name string, parent resource.URN) resource.URN {
	return p.NewURN(providers.MakeProviderType(pkg), name, parent)
}
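As a usage sketch (the type tokens, package name, and resource names below are made up for illustration):

```go
// Hypothetical test usage of NewURN / NewProviderURN.
p := &TestPlan{Project: "proj", Stack: "dev"}
parent := p.NewURN("pkgA:index:Component", "parent", "")
child := p.NewURN("pkgA:index:Resource", "child", parent)
prov := p.NewProviderURN("pkgA", "default", "")
_, _ = child, prov
```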

func (p *TestPlan) GetProject() workspace.Project {
	_, projectName, runtime := p.getNames()

	return workspace.Project{
		Name:    projectName,
		Runtime: workspace.NewProjectRuntimeInfo(runtime, p.RuntimeOptions),
	}
}

func (p *TestPlan) GetTarget(t testing.TB, snapshot *deploy.Snapshot) deploy.Target {
	stack, _, _ := p.getNames()

	cfg := p.Config
	if cfg == nil {
		cfg = config.Map{}
	}

	return deploy.Target{
		Name:      stack,
		Config:    cfg,
		Decrypter: p.Decrypter,
		// note: it's really important that the preview and update operate on different snapshots. the engine can and
		// does mutate the snapshot in-place, even in previews, and sharing a snapshot between preview and update can
		// cause state changes from the preview to persist even when doing an update.
		Snapshot: CloneSnapshot(t, snapshot),
	}
}

// CloneSnapshot makes a deep copy of the given snapshot and returns a pointer to the clone.
func CloneSnapshot(t testing.TB, snap *deploy.Snapshot) *deploy.Snapshot {
	t.Helper()
	if snap != nil {
		copiedSnap := copystructure.Must(copystructure.Copy(*snap)).(deploy.Snapshot)
		assert.True(t, reflect.DeepEqual(*snap, copiedSnap))
		return &copiedSnap
	}

	return snap
}
func (p *TestPlan) RunWithName(t testing.TB, snapshot *deploy.Snapshot, name string) *deploy.Snapshot {
	project := p.GetProject()
	snap := snapshot
	for i, step := range p.Steps {
		// note: it's really important that the preview and update operate on different snapshots. the engine can and
		// does mutate the snapshot in-place, even in previews, and sharing a snapshot between preview and update can
		// cause state changes from the preview to persist even when doing an update.
		// GetTarget ALWAYS clones the snapshot, so the previewTarget.Snapshot != target.Snapshot
		if !step.SkipPreview {
			previewTarget := p.GetTarget(t, snap)
			// Don't run validate on the preview step
			_, err := step.Op.Run(project, previewTarget, p.Options, true, p.BackendClient, nil)
			if step.ExpectFailure {
				assert.Error(t, err)
				continue
			}

			assert.NoError(t, err)
		}

		var err error
		target := p.GetTarget(t, snap)
		snap, err = step.Op.RunStep(project, target, p.Options, false, p.BackendClient, step.Validate,
			fmt.Sprintf("%s-%d-%d", name, i, p.run))
		if step.ExpectFailure {
			assert.Error(t, err)
			continue
		}

		if err != nil {
			if result.IsBail(err) {
				t.Logf("Got unexpected bail result: %v", err)
				t.FailNow()
			} else {
				t.Logf("Got unexpected error result: %v", err)
				t.FailNow()
			}
		}

		assert.NoError(t, err)
	}

	p.run += 1
	return snap
}

func (p *TestPlan) Run(t testing.TB, snapshot *deploy.Snapshot) *deploy.Snapshot {
	return p.RunWithName(t, snapshot, "")
}
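Putting the pieces together, a lifecycle test typically builds a plan and runs it. This is a sketch under the assumption that programF and loaders describe the test program and its providers; both are placeholders, not values defined here:

```go
// Hypothetical wiring of a lifecycle test around TestPlan.
p := &TestPlan{
	Options: TestUpdateOptions{
		T:     t,
		HostF: deploytest.NewPluginHostF(nil, nil, programF, loaders...),
	},
	Steps: MakeBasicLifecycleSteps(t, 2),
}
p.Run(t, nil)
```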

// resCount is the expected number of resources registered during this test.
func MakeBasicLifecycleSteps(t *testing.T, resCount int) []TestStep {
	return []TestStep{
		// Initial update
		{
			Op: Update,
			Validate: func(project workspace.Project, target deploy.Target, entries JournalEntries,
				_ []Event, err error,
			) error {
				require.NoError(t, err)

				// Should see only creates or reads.
				for _, entry := range entries {
					op := entry.Step.Op()
					assert.True(t, op == deploy.OpCreate || op == deploy.OpRead)
				}
[engine] Only record a resource's chosen alias. (#9288)
As we discovered when removing aliases from the state entirely, the
snapshotter needs to be alias-aware so that it can fix up references to
resources that were aliased. After a resource operation finishes, the
snapshotter needs to write out a new copy of the snapshot. However, at
the time we write the snapshot, there may be resources that have not yet
been registered that refer to the just-registered resources by a
different URN due to aliasing. Those references need to be fixed up
prior to writing the snapshot in order to preserve the snapshot's
integrity (in particular, the property that all URNs refer to resources
that exist in the snapshot).
For example, consider the following simple dependency graph: A <-- B.
When that graph is serialized, B will contain a reference to A in its
dependency list. Suppose the next run of the program produces the graph A'
<-- B where A' is aliased to A. After A' is registered, the snapshotter
needs to write a snapshot that contains its state, but B must also be
updated so it references A' instead of A, which will no longer be in the
snapshot.
These changes take advantage of the fact that although a resource can
provide multiple aliases, it can only ever resolve those aliases to a
single resource in the existing state. Therefore, at the time the
statefile is fixed up, each resource in the statefile could only have
been aliased to a single old resource, and it is sufficient to store
only the URN of the chosen resource rather than all possible aliases. In
addition to preserving the ability to fix up references to aliased
resources, retaining the chosen alias allows the history of a logical
resource to be followed across aliases.
2022-03-28 15:36:08 +00:00
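A purely illustrative sketch of the fix-up this describes (fixUpDependencies is a hypothetical helper, not part of the engine): once A is replaced by the aliased A', any state that still lists A as a dependency must be rewritten to point at the chosen URN.

```go
// Hypothetical helper illustrating the reference fix-up described above.
func fixUpDependencies(states []*resource.State, old, chosen resource.URN) {
	for _, s := range states {
		for i, dep := range s.Dependencies {
			if dep == old {
				s.Dependencies[i] = chosen
			}
		}
	}
}
```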
				snap, err := entries.Snap(target.Snapshot)
				require.NoError(t, err)
				assert.Len(t, snap.Resources, resCount)
				return err
			},
		},
		// No-op refresh
		{
			Op: Refresh,
			Validate: func(project workspace.Project, target deploy.Target, entries JournalEntries,
				_ []Event, err error,
			) error {
				require.NoError(t, err)

				// Should see only refresh-sames.
				for _, entry := range entries {
					assert.Equal(t, deploy.OpRefresh, entry.Step.Op())
					assert.Equal(t, deploy.OpSame, entry.Step.(*deploy.RefreshStep).ResultOp())
				}
				snap, err := entries.Snap(target.Snapshot)
				require.NoError(t, err)
				assert.Len(t, snap.Resources, resCount)
				return err
			},
		},
		// No-op update
		{
			Op: Update,
			Validate: func(project workspace.Project, target deploy.Target, entries JournalEntries,
				_ []Event, err error,
			) error {
				require.NoError(t, err)

				// Should see only sames.
				for _, entry := range entries {
					op := entry.Step.Op()
					assert.True(t, op == deploy.OpSame || op == deploy.OpRead)
				}
				snap, err := entries.Snap(target.Snapshot)
				require.NoError(t, err)
				assert.Len(t, snap.Resources, resCount)
				return err
			},
		},
		// No-op refresh
		{
			Op: Refresh,
			Validate: func(project workspace.Project, target deploy.Target, entries JournalEntries,
				_ []Event, err error,
			) error {
				require.NoError(t, err)

				// Should see only refresh-sames.
				for _, entry := range entries {
					assert.Equal(t, deploy.OpRefresh, entry.Step.Op())
					assert.Equal(t, deploy.OpSame, entry.Step.(*deploy.RefreshStep).ResultOp())
				}
				snap, err := entries.Snap(target.Snapshot)
				require.NoError(t, err)
				assert.Len(t, snap.Resources, resCount)
				return err
			},
		},
		// Destroy
		{
			Op: Destroy,
			Validate: func(project workspace.Project, target deploy.Target, entries JournalEntries,
				_ []Event, err error,
			) error {
				require.NoError(t, err)

				// Should see only deletes.
				for _, entry := range entries {
					switch entry.Step.Op() {
					case deploy.OpDelete, deploy.OpReadDiscard:
						// ok
					default:
						assert.Fail(t, "expected OpDelete or OpReadDiscard")
					}
				}

				snap, err := entries.Snap(target.Snapshot)
				require.NoError(t, err)
				assert.Len(t, snap.Resources, 0)
				return err
			},
		},
		// No-op refresh
		{
			Op: Refresh,
			Validate: func(project workspace.Project, target deploy.Target, entries JournalEntries,
				_ []Event, err error,
			) error {
				require.NoError(t, err)

				assert.Len(t, entries, 0)

				snap, err := entries.Snap(target.Snapshot)
				require.NoError(t, err)
				assert.Len(t, snap.Resources, 0)
				return err
			},
		},
	}
}
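
// A minimal sketch of how the steps built above are typically consumed; the
// plan wiring here (hostF, the nil starting snapshot) is an assumption for
// illustration, not taken from this file:
//
//	p := &TestPlan{
//		Options: TestUpdateOptions{T: t, HostF: hostF},
//		Steps:   steps, // the []TestStep constructed above
//	}
//	p.Run(t, nil)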

// testBuilder is a small fluent helper for lifecycle tests: it accumulates
// provider loaders and an optional starting snapshot, which RunUpdate then
// uses to drive a single update.
type testBuilder struct {
	t       *testing.T
	loaders []*deploytest.ProviderLoader
	snap    *deploy.Snapshot
}

func newTestBuilder(t *testing.T, snap *deploy.Snapshot) *testBuilder {
	return &testBuilder{
		t:       t,
		snap:    snap,
		loaders: slice.Prealloc[*deploytest.ProviderLoader](1),
	}
}

// WithProvider registers a provider factory for the given package name and
// version so the test plugin host can load it, and returns the builder for
// chaining.
func (b *testBuilder) WithProvider(name string, version string, prov *deploytest.Provider) *testBuilder {
	loader := deploytest.NewProviderLoader(
		tokens.Package(name), semver.MustParse(version), func() (plugin.Provider, error) {
			return prov, nil
		})
	b.loaders = append(b.loaders, loader)
	return b
}
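
// Illustrative chaining (the package name, version, and empty provider are
// placeholders):
//
//	b := newTestBuilder(t, nil).
//		WithProvider("pkgA", "1.0.0", &deploytest.Provider{})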

// Result carries the snapshot and error produced by a single RunUpdate call.
type Result struct {
	snap *deploy.Snapshot
	err  error
}

// RunUpdate runs a single update with the builder's providers, starting from
// the builder's snapshot; skipDisplayTests is passed through to
// TestUpdateOptions.SkipDisplayTests.
func (b *testBuilder) RunUpdate(
	program func(info plugin.RunInfo, monitor *deploytest.ResourceMonitor) error, skipDisplayTests bool,
) *Result {
	programF := deploytest.NewLanguageRuntimeF(program)
	hostF := deploytest.NewPluginHostF(nil, nil, programF, b.loaders...)

	p := &TestPlan{
		Options: TestUpdateOptions{T: b.t, HostF: hostF, SkipDisplayTests: skipDisplayTests},
	}

	// Run an update for initial state.
	snap, err := TestOp(Update).Run(
		p.GetProject(), p.GetTarget(b.t, b.snap), p.Options, false, p.BackendClient, nil)
	return &Result{
		snap: snap,
		err:  err,
	}
}
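
// RunUpdate always begins from the snapshot the builder was constructed with;
// to run a follow-up update against the result, a fresh builder can be seeded
// with the returned snapshot (illustrative sketch; prov, programA, and
// programB are placeholders):
//
//	first := newTestBuilder(t, nil).WithProvider("pkgA", "1.0.0", prov).RunUpdate(programA, false)
//	second := newTestBuilder(t, first.snap).WithProvider("pkgA", "1.0.0", prov).RunUpdate(programB, false)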

// Then is used to convey a dependence between program runs via program
// structure: the callback receives the snapshot and error from the preceding
// RunUpdate so that follow-up runs can build on its result.
func (res *Result) Then(do func(snap *deploy.Snapshot, err error)) {
	do(res.snap, res.err)
}
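
// A sketch of how the builder, RunUpdate, and Then compose; the provider name,
// version, and program body are illustrative placeholders:
//
//	newTestBuilder(t, nil).
//		WithProvider("pkgA", "1.0.0", &deploytest.Provider{}).
//		RunUpdate(func(_ plugin.RunInfo, monitor *deploytest.ResourceMonitor) error {
//			// Register resources against monitor here.
//			return nil
//		}, false).
//		Then(func(snap *deploy.Snapshot, err error) {
//			require.NoError(t, err)
//			// Assertions on snap, or a dependent RunUpdate, go here.
//		})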