pulumi/pkg/resource/deploy/step_generator.go

// Copyright 2016-2021, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package deploy

import (
"context"
cryptorand "crypto/rand"
"errors"
"fmt"
"slices"
"strings"
"time"
mapset "github.com/deckarep/golang-set/v2"
"github.com/pulumi/pulumi/pkg/v3/resource/deploy/providers"
"github.com/pulumi/pulumi/pkg/v3/resource/graph"
"github.com/pulumi/pulumi/sdk/v3/go/common/apitype"
"github.com/pulumi/pulumi/sdk/v3/go/common/diag"
"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
"github.com/pulumi/pulumi/sdk/v3/go/common/resource/plugin"
"github.com/pulumi/pulumi/sdk/v3/go/common/slice"
"github.com/pulumi/pulumi/sdk/v3/go/common/tokens"
"github.com/pulumi/pulumi/sdk/v3/go/common/util/contract"
"github.com/pulumi/pulumi/sdk/v3/go/common/util/logging"
"github.com/pulumi/pulumi/sdk/v3/go/common/util/result"
)
// stepGenerator is responsible for turning resource events into steps that can be fed to the deployment executor.
// It does this by consulting the deployment and calculating the appropriate step action based on the requested goal
// state and the existing state of the world.
type stepGenerator struct {
deployment *Deployment // the deployment to which this step generator belongs
// signals that one or more errors have been reported to the user, and the deployment should terminate
// in error. This primarily allows `preview` to aggregate many policy violation events and
// report them all at once.
sawError bool
urns map[resource.URN]bool // set of URNs discovered for this deployment
reads map[resource.URN]bool // set of URNs read for this deployment
deletes map[resource.URN]bool // set of URNs deleted in this deployment
replaces map[resource.URN]bool // set of URNs replaced in this deployment
updates map[resource.URN]bool // set of URNs updated in this deployment
creates map[resource.URN]bool // set of URNs created in this deployment
sames map[resource.URN]bool // set of URNs that were not changed in this deployment
// set of URNs that would have been created, but were filtered out because the user didn't
// specify them with --target
skippedCreates map[resource.URN]bool
pendingDeletes map[*resource.State]bool // set of resources (not URNs!) that are pending deletion
providers map[resource.URN]*resource.State // URN map of providers that we have seen so far.
// a map from URN to a list of property keys that caused the replacement of a dependent resource during a
// delete-before-replace.
dependentReplaceKeys map[resource.URN][]resource.PropertyKey
// a map from old names (aliased URNs) to the new URN that aliased to them.
aliased map[resource.URN]resource.URN
// a map from current URN of the resource to the old URN that it was aliased from.
aliases map[resource.URN]resource.URN
// targetsActual is the set of targets explicitly targeted by the engine. This
// can be different from deployment.opts.targets if --target-dependents is
// true. This does _not_ include resources that have been implicitly targeted,
// like providers.
targetsActual UrnTargets
}
// isTargetedForUpdate returns true if `res` is targeted for update. The function accommodates
// `--target-dependents`.
func (sg *stepGenerator) isTargetedForUpdate(res *resource.State) bool {
if sg.deployment.opts.Targets.Contains(res.URN) {
return true
} else if !sg.deployment.opts.TargetDependents {
return false
}
if ref := res.Provider; ref != "" {
providerRef, err := providers.ParseReference(ref)
contract.AssertNoErrorf(err, "failed to parse provider reference: %v", ref)
providerURN := providerRef.URN()
if sg.targetsActual.Contains(providerURN) {
return true
}
}
if res.Parent != "" {
if sg.targetsActual.Contains(res.Parent) {
return true
}
}
for _, dep := range res.Dependencies {
if dep != "" && sg.targetsActual.Contains(dep) {
return true
}
}
for _, deps := range res.PropertyDependencies {
for _, dep := range deps {
if dep != "" && sg.targetsActual.Contains(dep) {
return true
}
}
}
if res.DeletedWith != "" && sg.targetsActual.Contains(res.DeletedWith) {
return true
}
return false
}
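// isTargetedReplace returns true if the given URN is in the set of resources explicitly targeted
// for replacement (i.e. it appears in the --target-replace list).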
func (sg *stepGenerator) isTargetedReplace(urn resource.URN) bool {
return sg.deployment.opts.ReplaceTargets.IsConstrained() && sg.deployment.opts.ReplaceTargets.Contains(urn)
}
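// Errored returns true if the step generator has reported one or more errors to the user, in which
// case the deployment should terminate in error.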
func (sg *stepGenerator) Errored() bool {
return sg.sawError
}
// checkParent checks that the parent given is valid for the given resource type, and returns a default parent
// if there is one.
func (sg *stepGenerator) checkParent(parent resource.URN, resourceType tokens.Type) (resource.URN, error) {
// Some goal settings are based on the parent settings so make sure our parent is correct.
// TODO(fraser): I think every resource but the RootStack should have a parent, however currently a
// number of our tests do not create a RootStack resource, feels odd that it's possible for the engine
// to run without a RootStack resource. I feel this ought to be fixed by making the engine always
// create the RootStack before running the user program, however that leaves some questions of what to
// do if we ever support changing any of the settings (such as the provider map) on the RootStack
// resource. For now we set it to the root stack if we can find it, but we don't error on blank parents.
// If a parent is set, check that it exists.
if parent != "" {
// The parent for this resource hasn't been registered yet. That's an error and we can't continue.
if _, hasParent := sg.urns[parent]; !hasParent {
return "", fmt.Errorf("could not find parent resource %v", parent)
}
} else { //nolint:staticcheck // https://github.com/pulumi/pulumi/issues/10950
// Else try and set it to the root stack
// TODO: It looks like this currently has some issues with state ordering (see
// https://github.com/pulumi/pulumi/issues/10950). Best I can guess is the stack resource is
// hitting the step generator and so saving its URN to sg.urns and issuing a Create step but not
// actually getting to writing its state to the snapshot. Then in parallel with this something
// else is causing a pulumi:providers:pulumi default provider to be created, this picks up the
// stack URN from sg.urns and so sets its parent automatically, but then races the step executor
// to write itself to state before the stack resource manages to. Long term we want to ensure
// there's always a stack resource present, and so that all resources (except the stack) have a
// parent (this will save us some work in each SDK), but for now let's just turn this support off.
//for urn := range sg.urns {
// if urn.Type() == resource.RootStackType {
// return urn, nil
// }
//}
}
return parent, nil
}
// bailDiag prints the given diagnostic to the error stream and then returns a bail error with the same message.
func (sg *stepGenerator) bailDiag(diag *diag.Diag, args ...interface{}) error {
sg.deployment.Diag().Errorf(diag, args...)
return result.BailErrorf(diag.Message, args...)
}
// generateURN generates a URN for a new resource and confirms we haven't seen it before in this deployment.
func (sg *stepGenerator) generateURN(
parent resource.URN, ty tokens.Type, name string,
) (resource.URN, error) {
// Generate a URN for this new resource, confirm we haven't seen it before in this deployment.
urn := sg.deployment.generateURN(parent, ty, name)
if sg.urns[urn] {
// TODO[pulumi/pulumi-framework#19]: improve this error message!
return "", sg.bailDaig(diag.GetDuplicateResourceURNError(urn), urn)
}
sg.urns[urn] = true
return urn, nil
}
// GenerateReadSteps is responsible for producing one or more steps required to service
// a ReadResourceEvent coming from the language host.
func (sg *stepGenerator) GenerateReadSteps(event ReadResourceEvent) ([]Step, error) {
// Some event settings are based on the parent settings so make sure our parent is correct.
parent, err := sg.checkParent(event.Parent(), event.Type())
if err != nil {
return nil, err
}
urn, err := sg.generateURN(parent, event.Type(), event.Name())
if err != nil {
return nil, err
}
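// Construct the provisional state for the read resource: it is marked as custom and external,
// with the event's properties recorded as inputs and empty outputs.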
newState := resource.NewState(event.Type(),
urn,
true, /*custom*/
false, /*delete*/
event.ID(),
event.Properties(),
make(resource.PropertyMap), /* outputs */
parent,
false, /*protect*/
true, /*external*/
event.Dependencies(),
nil, /* initErrors */
event.Provider(),
nil, /* propertyDependencies */
false, /* deleteBeforeCreate */
event.AdditionalSecretOutputs(),
nil, /* aliases */
nil, /* customTimeouts */
"", /* importID */
false, /* retainOnDelete */
"", /* deletedWith */
nil, /* created */
nil, /* modified */
event.SourcePosition(),
nil, /* ignoreChanges */
)
old, hasOld := sg.deployment.Olds()[urn]
if newState.ID == "" {
return nil, fmt.Errorf("Expected an ID for %v", urn)
}
// If the snapshot has an old resource for this URN and it's not external, we're going
// to have to delete the old resource and conceptually replace it with the resource we
// are about to read.
//
// We accomplish this through the "read-replacement" step, which atomically reads a resource
// and marks the resource it is replacing as pending deletion.
//
// In the event that the new "read" resource's ID matches the existing resource,
// we do not need to delete the resource - we know exactly what resource we are going
// to get from the read.
//
// This operation is tentatively called "relinquish" - it semantically represents the
// release of a resource from the management of Pulumi.
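//
// As an illustrative sketch (the URN, IDs, and getter call here are invented, not taken from this
// codebase): suppose the state already tracks
//
//	urn:pulumi:dev::proj::aws:s3/bucket:Bucket::shared  with ID "bucket-abc" and External=false
//
// and the program now reads the same logical resource with ID "bucket-xyz" (for example via a
// `.get`-style call in an SDK). The IDs differ, so the previously managed "bucket-abc" is scheduled
// for deletion via the read-replacement step while "bucket-xyz" takes its place as an external
// resource. Had the program passed "bucket-abc", a plain read step would suffice.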
if hasOld && !old.External && old.ID != event.ID() {
logging.V(7).Infof(
"stepGenerator.GenerateReadSteps(...): replacing existing resource %s, ids don't match", urn)
sg.replaces[urn] = true
return []Step{
NewReadReplacementStep(sg.deployment, event, old, newState),
NewReplaceStep(sg.deployment, old, newState, nil, nil, nil, true),
}, nil
}
if bool(logging.V(7)) && hasOld && old.ID == event.ID() {
logging.V(7).Infof("stepGenerator.GenerateReadSteps(...): recognized relinquish of resource %s", urn)
}
sg.reads[urn] = true
return []Step{
NewReadStep(sg.deployment, event, old, newState),
}, nil
}
// GenerateSteps produces one or more steps required to achieve the goal state specified by the
// incoming RegisterResourceEvent.
//
// If the given resource is a custom resource, the step generator will invoke Diff and Check on the
// provider associated with that resource. If those fail, an error is returned.
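//
// As a rough illustration (not an exhaustive contract): an unchanged resource typically yields a
// single same step, a changed input typically yields an update step, and a change to a property the
// provider reports as requiring replacement typically yields a create-replacement/replace/
// delete-replaced sequence, with ordering depending on delete-before-replace semantics.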
func (sg *stepGenerator) GenerateSteps(event RegisterResourceEvent) ([]Step, error) {
steps, err := sg.generateSteps(event)
if err != nil {
contract.Assertf(len(steps) == 0, "expected no steps if there is an error")
return nil, err
}
// Check each proposed step against the relevant resource plan, if any
for _, s := range steps {
logging.V(5).Infof("Checking step %s for %s", s.Op(), s.URN())
if sg.deployment.plan != nil {
if resourcePlan, ok := sg.deployment.plan.ResourcePlans[s.URN()]; ok {
if len(resourcePlan.Ops) == 0 {
return nil, fmt.Errorf("%v is not allowed by the plan: no more steps were expected for this resource", s.Op())
}
constraint := resourcePlan.Ops[0]
// We remove the op from the list before doing the constraint check, because at the end of the
// deployment we look at any remaining Ops to see whether expected operations never happened.
// This op has now been attempted; it just might fail its constraint.
resourcePlan.Ops = resourcePlan.Ops[1:]
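// For example (sketch): if a previous preview recorded Ops = [OpCreateReplacement, OpReplace] for
// this URN, the first step generated here is checked against OpCreateReplacement and that entry is
// consumed, the next step is checked against OpReplace, and anything left over at the end of the
// deployment indicates an expected operation that never happened.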
if !ConstrainedTo(s.Op(), constraint) {
return nil, fmt.Errorf("%v is not allowed by the plan: this resource is constrained to %v", s.Op(), constraint)
}
} else {
if !ConstrainedTo(s.Op(), OpSame) {
return nil, fmt.Errorf("%v is not allowed by the plan: no steps were expected for this resource", s.Op())
}
}
}
// If we're generating plans, add the operation to the plan being generated
if sg.deployment.opts.GeneratePlan {
// Resource plan might be aliased
urn, isAliased := sg.aliased[s.URN()]
if !isAliased {
urn = s.URN()
}
if resourcePlan, ok := sg.deployment.newPlans.get(urn); ok {
// If the resource is in the plan, add the operation to the plan.
resourcePlan.Ops = append(resourcePlan.Ops, s.Op())
} else if !ConstrainedTo(s.Op(), OpSame) {
return nil, fmt.Errorf("Expected a new resource plan for %v", urn)
}
}
}
// TODO(dixler): `--replace a` is currently treated as a targeted update, but this is not correct.
// Removing `|| sg.deployment.opts.ReplaceTargets.IsConstrained()` below would be a behavior change
// that would require some thinking to fully understand the repercussions.
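// For example (illustrative URN): `pulumi up --replace urn:pulumi:dev::proj::aws:s3/bucket:Bucket::shared`
// constrains ReplaceTargets but not Targets, yet because of the check below the operation is still
// routed through the targeted-update handling that follows.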
if !(sg.deployment.opts.Targets.IsConstrained() || sg.deployment.opts.ReplaceTargets.IsConstrained()) {
return steps, nil
}
// We got a set of steps to perform during a targeted update. If any of the steps are not same steps and depend on
// creates we skipped because they were not in the --target list, issue an error that the create was necessary
// and that the user must target the resource to create.
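//
// For example (illustrative, not real CLI output): with `pulumi up --target <frontend-urn>`, a new
// database resource that the frontend depends on is skipped because it was not targeted; rather
// than creating or updating the frontend against a dependency that does not exist yet, we report
// the skipped create below and fail the deployment.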
for _, step := range steps {
if step.Op() == OpSame || step.New() == nil {
continue
}
// Check direct dependencies but also parents and providers.
dependencies := step.New().Dependencies
if step.New().Parent != "" {
dependencies = append(dependencies, step.New().Parent)
}
if step.New().Provider != "" {
prov, err := providers.ParseReference(step.New().Provider)
if err != nil {
return nil, fmt.Errorf(
"could not parse provider reference %s for %s: %w",
step.New().Provider, step.New().URN, err)
}
dependencies = append(dependencies, prov.URN())
}
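// A provider reference combines the provider's URN and its ID; as a sketch (the stack, project,
// and ID here are invented):
//
//	urn:pulumi:dev::proj::pulumi:providers:aws::default::0d337d54-5a0c-47b8-b292-3ad54067837c
//
// ParseReference splits off the trailing ID so the provider's URN can participate in the
// dependency check like any other URN.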
for _, urn := range dependencies {
if sg.skippedCreates[urn] {
// Targets were specified but did not include this resource, so its create was
// skipped, and a resource we are producing a step for depends on that skipped
// create. Give a specific error in that case to let the user know, and mark
// that we're in an error state so that we eventually fail the entire
// operation.
d := diag.GetResourceWillBeCreatedButWasNotSpecifiedInTargetList(step.URN())
sg.deployment.Diag().Errorf(d, step.URN(), urn)
sg.sawError = true
if !sg.deployment.opts.DryRun {
// In preview we keep going so that the user will hear about all the problems and can then
// fix up their command once (as opposed to adding a target, rerunning, adding a target,
// rerunning, etc. etc.).
//
// Doing a normal run. We should not proceed here at all. We don't want to create
// something the user didn't ask for.
return nil, result.BailErrorf("untargeted create")
}
// Remove the resource from the list of skipped creates so that we do not issue duplicate diagnostics.
delete(sg.skippedCreates, urn)
}
}
}
return steps, nil
}
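// collapseAliasToUrn resolves a (possibly partially-specified) alias into a concrete URN, filling in any
// unset fields (name, type, parent, project, stack) from the registering resource's goal. For example
// (hypothetical values), an alias of {Name: "old"} for a goal of type "aws:s3/bucket:Bucket" in stack
// "dev" of project "proj" collapses to "urn:pulumi:dev::proj::aws:s3/bucket:Bucket::old".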
func (sg *stepGenerator) collapseAliasToUrn(goal *resource.Goal, alias resource.Alias) resource.URN {
if alias.URN != "" {
return alias.URN
}
n := alias.Name
if n == "" {
n = goal.Name
}
t := alias.Type
if t == "" {
t = string(goal.Type)
}
parent := alias.Parent
if parent == "" {
parent = goal.Parent
} else {
// If the parent used an alias then use its old URN here, as that will be this resource's old URN as well.
if parentAlias, has := sg.aliases[parent]; has {
parent = parentAlias
}
}
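// Resources parented directly to the root stack do not include the stack type in their URN's qualified
// type, so drop the parent in that case (and when the alias explicitly opts out of a parent) so that
// CreateURN produces a top-level URN.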
parentIsRootStack := parent != "" && parent.QualifiedType() == resource.RootStackType
if alias.NoParent || parentIsRootStack {
parent = ""
}
project := alias.Project
if project == "" {
project = sg.deployment.source.Project().String()
}
stack := alias.Stack
if stack == "" {
stack = sg.deployment.Target().Name.String()
}
return resource.CreateURN(n, t, parent, project, stack)
}
// inheritedChildAlias computes the alias that should be applied to a child based on an alias applied to its
// parent. This may involve changing the name of the resource in cases where the resource has a name derived
// from the name of the parent, and the parent name changed.
func (sg *stepGenerator) inheritedChildAlias(
childType tokens.Type,
childName, parentName string,
parentAlias resource.URN,
) resource.URN {
// If the child name has the parent name as a prefix, then we make the assumption that
// it was constructed from the convention of using '{parentName}-<suffix>' as the name of the
// child resource. To ensure this is aliased correctly, we must then also replace the
// parent alias's name in the prefix of the child resource name.
//
// For example:
// * name: "newapp-function"
// * options.parent.__name: "newapp"
// * parentAlias: "urn:pulumi:stackname::projectname::awsx:ec2:Vpc::app"
// * parentAliasName: "app"
// * aliasName: "app-function"
// * childAlias: "urn:pulumi:stackname::projectname::aws:s3/bucket:Bucket::app-function"
aliasName := childName
if strings.HasPrefix(childName, parentName) {
aliasName = parentAlias.Name() + strings.TrimPrefix(childName, parentName)
}
return resource.NewURN(
sg.deployment.Target().Name.Q(),
sg.deployment.source.Project(),
parentAlias.QualifiedType(),
childType,
aliasName)
}
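// generateAliases computes the full set of URNs that may previously have referred to the resource described
// by goal: each alias declared on the goal itself, plus the aliases inherited from an aliased parent (both
// for the goal's current name/type and for each of its declared aliases). The result is de-duplicated while
// preserving order.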
func (sg *stepGenerator) generateAliases(goal *resource.Goal) []resource.URN {
var result []resource.URN
aliases := make(map[resource.URN]struct{}, 0)
addAlias := func(alias resource.URN) {
if _, has := aliases[alias]; !has {
aliases[alias] = struct{}{}
result = append(result, alias)
}
}
for _, alias := range goal.Aliases {
urn := sg.collapseAliasToUrn(goal, alias)
addAlias(urn)
}
// Now multiply out any aliases our parent had.
if goal.Parent != "" {
if parentAlias, has := sg.aliases[goal.Parent]; has {
addAlias(sg.inheritedChildAlias(goal.Type, goal.Name, goal.Parent.Name(), parentAlias))
for _, alias := range goal.Aliases {
childAlias := sg.collapseAliasToUrn(goal, alias)
aliasedChildType := childAlias.Type()
aliasedChildName := childAlias.Name()
inheritedAlias := sg.inheritedChildAlias(aliasedChildType, aliasedChildName, goal.Parent.Name(), parentAlias)
addAlias(inheritedAlias)
}
}
}
return result
}
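// generateSteps processes a single RegisterResourceEvent: it resolves the resource's parent and URN,
// computes its aliases, looks up any existing state (by URN first, then by alias), applies ignoreChanges to
// the desired inputs, and builds the new state object from which the appropriate step(s) are derived.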
func (sg *stepGenerator) generateSteps(event RegisterResourceEvent) ([]Step, error) {
var invalid bool // will be set to true if this object fails validation.
goal := event.Goal()
// Some goal settings are based on the parent settings so make sure our parent is correct.
parent, err := sg.checkParent(goal.Parent, goal.Type)
if err != nil {
return nil, err
}
goal.Parent = parent
urn, err := sg.generateURN(goal.Parent, goal.Type, goal.Name)
if err != nil {
return nil, err
}
// Generate the aliases for this resource.
aliases := sg.generateAliases(goal)
if previousAliasURN, alreadyAliased := sg.aliased[urn]; alreadyAliased {
// This resource is claiming to be X but we've already seen another resource claim that via aliases
invalid = true
sg.deployment.Diag().Errorf(diag.GetDuplicateResourceAliasedError(urn), urn, previousAliasURN)
}
// Check for an old resource so that we can figure out if this is a create, delete, etc., and/or
// to diff. We look up first by URN and then by any provided aliases. If it is found using an
// alias, record that alias so that we do not delete the aliased resource later.
var oldInputs resource.PropertyMap
var oldOutputs resource.PropertyMap
var old *resource.State
var hasOld bool
var alias []resource.Alias
var createdAt, modifiedAt *time.Time
// Important: Check the URN first, then aliases. Otherwise we may pick the wrong resource, which
// could lead to a corrupt snapshot.
for _, urnOrAlias := range append([]resource.URN{urn}, aliases...) {
old, hasOld = sg.deployment.Olds()[urnOrAlias]
if hasOld {
oldInputs = old.Inputs
oldOutputs = old.Outputs
createdAt = old.Created
modifiedAt = old.Modified
if urnOrAlias != urn {
if _, alreadySeen := sg.urns[urnOrAlias]; alreadySeen {
// This resource is claiming to be X but we've already seen that URN created
invalid = true
sg.deployment.Diag().Errorf(diag.GetDuplicateResourceAliasError(urn), urnOrAlias, urn, urn)
}
if previousAliasURN, alreadyAliased := sg.aliased[urnOrAlias]; alreadyAliased {
// This resource is claiming to be X but we've already seen another resource claim that via aliases
invalid = true
sg.deployment.Diag().Errorf(diag.GetDuplicateResourceAliasError(urn), urnOrAlias, urn, previousAliasURN)
}
sg.aliased[urnOrAlias] = urn
// register the alias with the provider registry
sg.deployment.providers.RegisterAlias(urn, urnOrAlias)
// NOTE: we save the URN of the existing resource so that the snapshotter can replace references to the
// existing resource with the URN of the newly-registered resource. We do not need to save any of the
// resource's other possible aliases.
alias = []resource.Alias{{URN: urnOrAlias}}
// Save the alias actually being used so we can look it up later if anything has this as a parent
sg.aliases[urn] = urnOrAlias
}
break
}
}
// Create the desired inputs from the goal state
inputs := goal.Properties
if hasOld {
// Set inputs back to their old values (if any) for any "ignored" properties
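// For example (hypothetical values), with goal.IgnoreChanges = []string{"tags"}, whatever value
// oldInputs carries for "tags" (including its absence) is copied over the program-supplied value.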
processedInputs, err := processIgnoreChanges(inputs, oldInputs, goal.IgnoreChanges)
if err != nil {
return nil, err
}
inputs = processedInputs
}
aliasUrns := make([]resource.URN, len(alias))
for i, a := range alias {
aliasUrns[i] = a.URN
}
// Produce a new state object that we'll build up as operations are performed. Ultimately, this is what will
// get serialized into the checkpoint file.
new := resource.NewState(goal.Type, urn, goal.Custom, false, "", inputs, nil, goal.Parent, goal.Protect, false,
goal.Dependencies, goal.InitErrors, goal.Provider, goal.PropertyDependencies, false,
goal.AdditionalSecretOutputs, aliasUrns, &goal.CustomTimeouts, "", goal.RetainOnDelete, goal.DeletedWith,
createdAt, modifiedAt, goal.SourcePosition, goal.IgnoreChanges)
// Mark the URN/resource as having been seen, so that we can run analyzers on all resources seen, as well as
// look up providers when calculating replacements for resources that use the provider.
sg.deployment.goals.Store(urn, goal)
if providers.IsProviderType(goal.Type) {
sg.providers[urn] = new
}
// Fetch the provider for this resource.
prov, err := sg.loadResourceProvider(urn, goal.Custom, goal.Provider, goal.Type)
if err != nil {
return nil, err
}
// We only allow unknown property values to be exposed to the provider if we are performing an update preview.
allowUnknowns := sg.deployment.opts.DryRun
// We may be re-creating this resource if it got deleted earlier in the execution of this deployment.
_, recreating := sg.deletes[urn]
// We may be creating this resource if it previously existed in the snapshot as an External resource
wasExternal := hasOld && old.External
// If we have a plan for this resource we need to feed the saved seed to Check to remove non-determinism
var randomSeed []byte
if sg.deployment.plan != nil {
if resourcePlan, ok := sg.deployment.plan.ResourcePlans[urn]; ok {
randomSeed = resourcePlan.Seed
}
}
// If the above didn't set the seed, generate a new random one. If we're running with plans but this
// resource was missing a seed, any later check that relies on the seed will fail.
if randomSeed == nil {
randomSeed = make([]byte, 32)
n, err := cryptorand.Read(randomSeed)
contract.AssertNoErrorf(err, "failed to generate random seed")
contract.Assertf(n == len(randomSeed),
"generated fewer (%d) than expected (%d) random bytes", n, len(randomSeed))
}
// If the goal contains an ID, this may be an import. An import occurs if there is no old resource or if the old
// resource's ID does not match the ID in the goal state.
var oldImportID resource.ID
if hasOld {
oldImportID = old.ID
// If the old resource has an ImportID, look at that rather than the ID, since some resources use a different
// format of identifier for the import input than the ID property.
if old.ImportID != "" {
oldImportID = old.ImportID
}
}
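// For example (illustrative): registering a custom resource with the `import` resource option set to
// "my-bucket" puts "my-bucket" in goal.ID; if there is no old state, or the old state is external, or
// its (import) ID differs from "my-bucket", the registration is treated as an import.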
isImport := goal.Custom && goal.ID != "" && (!hasOld || old.External || oldImportID != goal.ID)
if isImport {
// TODO(seqnum) Not sure how sequence numbers should interact with imports
// Write the ID of the resource to import into the new state and return an ImportStep or an
// ImportReplacementStep
new.ID = goal.ID
new.ImportID = goal.ID
// If we're generating plans, create one now. Imports have no diff, just a goal state.
if sg.deployment.opts.GeneratePlan {
newResourcePlan := &ResourcePlan{
Seed: randomSeed,
Goal: NewGoalPlan(nil, goal),
}
sg.deployment.newPlans.set(urn, newResourcePlan)
}
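// If an old resource already exists under this URN (and we are not re-creating it), the import
// replaces it: return an ImportReplacementStep for the new state followed by a ReplaceStep for the
// old resource; otherwise a plain ImportStep suffices.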
if isReplace := hasOld && !recreating; isReplace {
return []Step{
NewImportReplacementStep(sg.deployment, event, old, new, goal.IgnoreChanges, randomSeed),
NewReplaceStep(sg.deployment, old, new, nil, nil, nil, true),
}, nil
}
return []Step{NewImportStep(sg.deployment, event, new, goal.IgnoreChanges, randomSeed)}, nil
}
isImplicitlyTargetedResource := providers.IsProviderType(urn.Type()) || urn.QualifiedType() == resource.RootStackType
// Internally managed resources are under Pulumi's control, and changes or creations should be invisible to
// the user. We also implicitly target providers (both default and explicit; see
// https://github.com/pulumi/pulumi/issues/13557 and https://github.com/pulumi/pulumi/issues/13591 for
// context on why).
// Resources are targeted by default.
isTargeted := true
if sg.deployment.opts.Targets.IsConstrained() && !isImplicitlyTargetedResource {
isTargeted = sg.isTargetedForUpdate(new)
}
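// (Illustrative) With `pulumi up --target <urn>`, Targets is constrained: a resource outside the
// target set skips the real provider Check below and keeps its old inputs, so later step generation
// can treat it as unchanged.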
// Ensure the provider is okay with this resource and fetch the inputs to pass to subsequent methods.
if prov != nil {
var resp plugin.CheckResponse
checkInputs := prov.Check
if !isTargeted {
// If not targeted, stub out the provider check and use the old inputs directly.
checkInputs = func(context.Context, plugin.CheckRequest) (plugin.CheckResponse, error) {
return plugin.CheckResponse{Properties: oldInputs}, nil
}
}
// If we are re-creating this resource because it was deleted earlier, the old inputs are now
// invalid (they got deleted) so don't consider them. Similarly, if the old resource was External,
// don't consider those inputs since Pulumi does not own them. Finally, if the resource has been
// targeted for replacement, ignore its old state.
if recreating || wasExternal || sg.isTargetedReplace(urn) || !hasOld {
resp, err = checkInputs(context.TODO(), plugin.CheckRequest{
URN: urn,
News: goal.Properties,
AllowUnknowns: allowUnknowns,
RandomSeed: randomSeed,
})
} else {
resp, err = checkInputs(context.TODO(), plugin.CheckRequest{
URN: urn,
Olds: oldInputs,
News: inputs,
AllowUnknowns: allowUnknowns,
RandomSeed: randomSeed,
})
}
inputs = resp.Properties
if err != nil {
return nil, err
} else if issueCheckErrors(sg.deployment, new, urn, resp.Failures) {
invalid = true
}
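// Record the checked (provider-normalized) inputs on the new state so that subsequent diffing and
// step execution operate on what the provider actually accepted.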
new.Inputs = inputs
}
// If the resource is valid and we're generating plans, then generate a plan for it.
if !invalid && sg.deployment.opts.GeneratePlan {
if recreating || wasExternal || sg.isTargetedReplace(urn) || !hasOld {
oldInputs = nil
}
inputDiff := oldInputs.Diff(inputs)
// Generate the output goal plan; if we're recreating this resource it should already exist.
if recreating {
plan, ok := sg.deployment.newPlans.get(urn)
if !ok {
return nil, fmt.Errorf("no plan for resource %v", urn)
}
// The plan will have had its Ops already partially filled in for the delete operation, but we
// now have the information needed to fill in Seed and Goal.
plan.Seed = randomSeed
plan.Goal = NewGoalPlan(inputDiff, goal)
} else {
newResourcePlan := &ResourcePlan{
Seed: randomSeed,
Goal: NewGoalPlan(inputDiff, goal),
}
sg.deployment.newPlans.set(urn, newResourcePlan)
}
}
// If there is a plan for this resource, validate that the program goal conforms to the plan.
// If there's no plan for this resource, check that nothing has been changed.
// We don't check plans if the resource is invalid, it's going to fail anyway.
if !invalid && sg.deployment.plan != nil {
resourcePlan, ok := sg.deployment.plan.ResourcePlans[urn]
if !ok {
if old == nil {
// We could error here, but we'll trigger an error later on anyway, since a Create isn't valid here
} else if err := checkMissingPlan(old, inputs, goal); err != nil {
return nil, fmt.Errorf("resource %s violates plan: %w", urn, err)
}
} else {
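// A plan was recorded for this resource: validate that the program's current goal still conforms to
// what that plan captured.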
if err := resourcePlan.checkGoal(oldInputs, inputs, goal); err != nil {
return nil, fmt.Errorf("resource %s violates plan: %w", urn, err)
}
}
}
// Send the resource off to any Analyzers before it is operated on. We do two passes: first we perform
// remediations, and *then* we do analysis, since we want analyzers to run on the final resource states.
analyzers := sg.deployment.ctx.Host.ListAnalyzers()
for _, remediate := range []bool{true, false} {
for _, analyzer := range analyzers {
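// Describe the resource for the analyzer, using its checked inputs and the relevant resource options.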
r := plugin.AnalyzerResource{
URN: new.URN,
Type: new.Type,
Name: new.URN.Name(),
Properties: inputs,
Options: plugin.AnalyzerResourceOptions{
Protect: new.Protect,
IgnoreChanges: goal.IgnoreChanges,
DeleteBeforeReplace: goal.DeleteBeforeReplace,
AdditionalSecretOutputs: new.AdditionalSecretOutputs,
Aliases: new.GetAliases(),
CustomTimeouts: new.CustomTimeouts,
},
}
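// Attach the resource's provider (if it has one) so that policies can also inspect the provider's
// configuration.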
providerResource := sg.getProviderResource(new.URN, new.Provider)
if providerResource != nil {
r.Provider = &plugin.AnalyzerProviderResource{
URN: providerResource.URN,
Type: providerResource.Type,
Name: providerResource.URN.Name(),
Properties: providerResource.Inputs,
}
}
if remediate {
// During the first pass, perform remediations. This ensures subsequent analyzers run
// against the transformed properties, ensuring nothing circumvents the analysis checks.
tresults, err := analyzer.Remediate(r)
if err != nil {
return nil, fmt.Errorf("failed to run remediation: %w", err)
} else if len(tresults) > 0 {
for _, tresult := range tresults {
if tresult.Diagnostic != "" {
// If there is a diagnostic, we have a warning to display.
sg.deployment.events.OnPolicyViolation(new.URN, plugin.AnalyzeDiagnostic{
PolicyName: tresult.PolicyName,
PolicyPackName: tresult.PolicyPackName,
PolicyPackVersion: tresult.PolicyPackVersion,
Description: tresult.Description,
Message: tresult.Diagnostic,
EnforcementLevel: apitype.Advisory,
URN: new.URN,
})
} else if tresult.Properties != nil {
// Emit a nice message so users know what was remediated.
sg.deployment.events.OnPolicyRemediation(new.URN, tresult, inputs, tresult.Properties)
// Use the transformed inputs rather than the old ones from this point onwards.
inputs = tresult.Properties
new.Inputs = tresult.Properties
}
}
}
} else {
// During the second pass, perform analysis. This happens after remediations so that
// analyzers see properties as they were after the transformations have occurred.
diagnostics, err := analyzer.Analyze(r)
if err != nil {
return nil, fmt.Errorf("failed to run policy: %w", err)
}
for _, d := range diagnostics {
if d.EnforcementLevel == apitype.Remediate {
// If we ran a remediation, but we are still somehow triggering a violation,
// "downgrade" the level we report from remediate to mandatory.
d.EnforcementLevel = apitype.Mandatory
}
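// Mandatory violations invalidate the resource during an actual update; during a preview (dry run)
// we only record that an error was seen.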
if d.EnforcementLevel == apitype.Mandatory {
if !sg.deployment.opts.DryRun {
invalid = true
}
sg.sawError = true
}
// For now, we always use the URN we have here rather than a URN specified with the diagnostic.
sg.deployment.events.OnPolicyViolation(new.URN, d)
}
}
}
}
// If the resource isn't valid, don't proceed any further.
if invalid {
return nil, result.BailErrorf("resource %s is invalid", urn)
}
// There are four cases we need to consider when figuring out what to do with this resource.
//
// Case 1: recreating
// In this case, we have seen a resource with this URN before and we have already issued a
// delete step for it. This happens when the engine has to delete a resource before it has
// enough information about whether that resource still exists. A concrete example is
// when a resource depends on a resource that is delete-before-replace: the engine must first
// delete the dependent resource before deleting the DBR resource, but the engine can't know
// yet whether the dependent resource is being replaced or deleted.
//
// In this case, we are seeing the resource again after deleting it, so it must be a replacement.
//
// Logically, recreating implies hasOld, since in order to delete something it must have
// already existed.
contract.Assertf(!recreating || hasOld, "cannot recreate a resource that doesn't exist")
if recreating {
logging.V(7).Infof("Planner decided to re-create replaced resource '%v' deleted due to dependent DBR", urn)
// Unmark this resource as deleted, we now know it's being replaced instead.
delete(sg.deletes, urn)
sg.replaces[urn] = true
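// Emit a replace step for the old resource followed by a create-replacement step for the new one,
// passing along any replacement keys recorded for this URN during dependent delete-before-replace
// processing.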
keys := sg.dependentReplaceKeys[urn]
return []Step{
NewReplaceStep(sg.deployment, old, new, nil, nil, nil, false),
NewCreateReplacementStep(sg.deployment, event, old, new, keys, nil, nil, false),
}, nil
}
// Case 2: wasExternal
// In this case, the resource we are operating upon exists in the old snapshot, but it
// was "external" - Pulumi does not own its lifecycle. Conceptually, this operation is
// akin to "taking ownership" of a resource that we did not previously control.
//
// Since we are not allowed to manipulate the existing resource, we must create a resource
// to take its place. Since this is technically a replacement operation, we pend deletion of the
// read (external) resource until the end of the deployment.
if wasExternal {
logging.V(7).Infof("Planner recognized '%s' as old external resource, creating instead", urn)
sg.creates[urn] = true
if err != nil {
return nil, err
}
return []Step{
NewCreateReplacementStep(sg.deployment, event, old, new, nil, nil, nil, true),
NewReplaceStep(sg.deployment, old, new, nil, nil, nil, true),
}, nil
}
// It may look odd that we have to recheck isTargetedForUpdate here, but this covers implicitly targeted
// resources like providers (where isTargeted is always true) that might also have been _explicitly_
// targeted, either by being in the --targets list or by being pulled in by --target-dependents.
if isTargeted && sg.isTargetedForUpdate(new) {
// Transitive dependencies are not initially targeted; ensure that they are in the Targets so that the
// step_generator identifies that the URN is targeted if applicable
sg.targetsActual.addLiteral(urn)
}
// Case 3: hasOld
// In this case, the resource we are operating upon now exists in the old snapshot.
// It must be an update or a replace. Which operation we do depends on the specific change made to the
// resource's properties:
//
// - if the user has requested that only specific resources be updated, and this resource is
// not in that set, do not 'Diff' it and just treat the resource as 'same' (i.e. unchanged).
//
// - If the resource's provider reference changed, the resource must be replaced. This behavior is founded upon
// the assumption that providers are recreated iff their configuration changed in such a way that they are no
// longer able to manage existing resources.
//
// - Otherwise, we invoke the resource's provider's `Diff` method. If this method indicates that the resource must
// be replaced, we do so. If it does not, we update the resource in place.
if hasOld {
contract.Assertf(old != nil, "must have old resource if hasOld is true")
// If the user requested only specific resources to update, and this resource was not in
// that set, then we should emit a SameStep for it.
if !isTargeted {
logging.V(7).Infof(
"Planner decided not to update '%v' due to not being in target group (same) (inputs=%v)", urn, new.Inputs)
// We need to check that we have the provider for this resource.
if old.Provider != "" {
ref, err := providers.ParseReference(old.Provider)
if err != nil {
return nil, err
}
_, has := sg.deployment.GetProvider(ref)
if !has {
// This provider hasn't been registered yet. This happens when a user changes the default
// provider version in a targeted update. See https://github.com/pulumi/pulumi/issues/15704
// for more information.
var providerResource *resource.State
for _, r := range sg.deployment.olds {
if r.URN == ref.URN() && r.ID == ref.ID() {
providerResource = r
break
}
}
if providerResource == nil {
return nil, fmt.Errorf("could not find provider %v in old state", ref)
}
// Return a more friendly error to the user explaining this isn't supported.
return nil, fmt.Errorf("provider %s for resource %s has not been registered yet, this is "+
"due to a change of providers mixed with --target. "+
"Change your program back to the original providers", ref, urn)
}
}
// When emitting a SameStep for an untargeted resource, we must also check
// for dependencies of the resource that may have been both deleted and
// not targeted. Consider:
//
// * When a resource is deleted from a program, no resource registration
// will be sent for it. Moreover, no other resource in the program can
// refer to it (since it forms no part of the program source).
//
// * In the event of an untargeted update, resources that previously
// referred to the now-deleted resource will be updated and the
// dependencies removed. The deleted resource will be removed from the
// state later in the operation.
//
// HOWEVER, in the event of a targeted update that targets _neither the
// deleted resource nor its dependencies_:
//
// * The dependencies will have SameSteps emitted and their old states
// will be copied into the new state.
//
// * The deleted resource will not have a resource registration sent for
// it. However, by virtue of not being targeted, it will (correctly) not
// be deleted from the state. Thus, its old state will be copied over
// before the new snapshot is written. Alas, it will therefore appear
// after the resources that depend upon it in the new snapshot, which is
// invalid!
//
// We therefore have a special case where we can't rely on previous steps
// to have copied our dependencies over for us. We address this by
// manually traversing the dependencies of untargeted resources with old
// state and ensuring that they have SameSteps emitted before we emit our
// own.
//
// Note:
//
// * This traversal has to be depth-first -- we need to push steps for our
// dependencies before we push a step for ourselves.
//
// * "Dependencies" here includes dependencies, property dependencies, and
// deleted-with relationships.
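//
// The getDependencySteps closure below implements this traversal: it walks
// old.Dependencies, old.PropertyDependencies, and old.DeletedWith depth-first,
// emits SameSteps for any dependency that has not yet had a step generated,
// and only then appends the SameStep for the resource itself.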
var getDependencySteps func(old *resource.State, event RegisterResourceEvent) ([]Step, error)
getDependencySteps = func(old *resource.State, event RegisterResourceEvent) ([]Step, error) {
sg.sames[urn] = true
var steps []Step
for _, dep := range old.Dependencies {
generatedDep := sg.hasGeneratedStep(dep)
if !generatedDep {
depOld, has := sg.deployment.Olds()[dep]
if !has {
return nil, result.BailErrorf(
"dependency %s of untargeted resource %s has no old state",
dep,
urn,
)
}
depSteps, err := getDependencySteps(depOld, nil)
if err != nil {
return nil, err
}
steps = append(steps, depSteps...)
}
}
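// Property dependencies are traversed the same way: every resource named by a
// property dependency must have a step emitted before the step for this resource.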
for p, deps := range old.PropertyDependencies {
for _, dep := range deps {
generatedDep := sg.hasGeneratedStep(dep)
if !generatedDep {
depOld, has := sg.deployment.Olds()[dep]
if !has {
return nil, result.BailErrorf(
"property dependency %s of untargeted resource %s's property %s has no old state",
dep,
urn,
p,
)
}
depSteps, err := getDependencySteps(depOld, nil)
if err != nil {
return nil, err
}
steps = append(steps, depSteps...)
}
}
}
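// A deletedWith relationship also orders this resource after its target, so it is
// traversed like any other dependency.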
if old.DeletedWith != "" {
generatedDep := sg.hasGeneratedStep(old.DeletedWith)
if !generatedDep {
depOld, has := sg.deployment.Olds()[old.DeletedWith]
if !has {
return nil, result.BailErrorf(
"deleted with dependency %s of untargeted resource %s has no old state",
old.DeletedWith,
urn,
)
}
depSteps, err := getDependencySteps(depOld, nil)
if err != nil {
return nil, err
}
steps = append(steps, depSteps...)
}
}
rootStep := NewSameStep(sg.deployment, event, old, old)
steps = append(steps, rootStep)
return steps, nil
}
steps, err := getDependencySteps(old, event)
if err != nil {
return nil, err
}
return steps, nil
}
updateSteps, err := sg.generateStepsFromDiff(
event, urn, old, new, oldInputs, oldOutputs, inputs, prov, goal, randomSeed)
if err != nil {
return nil, err
}
if len(updateSteps) > 0 {
// 'Diff' produced update steps. We're done at this point.
return updateSteps, nil
}
// Diff didn't produce any steps for this resource. Fall through and indicate that it
// is same/unchanged.
logging.V(7).Infof("Planner decided not to update '%v' after diff (same) (inputs=%v)", urn, new.Inputs)
// No need to update anything, the properties didn't change.
sg.sames[urn] = true
return []Step{NewSameStep(sg.deployment, event, old, new)}, nil
}
// Case 4: Not Case 1, 2, or 3
// If a resource isn't being recreated and it's not being updated or replaced,
// it's just being created.
// We're in the create stage now. In a normal run just issue a 'create step'. If, however, the
// user is doing a run with `--target`s, then we need to operate specially here.
//
// 1. If the user did include this resource urn in the --target list, then we can proceed
// normally and issue a create step for this.
//
// 2. However, if they did not include the resource in the --target list, then we want to flat
// out ignore it (just like we ignore updates to resources not in the --target list). This has
// interesting implications though. Specifically, what to do if a prop from this resource is
// then actually needed by a property we *are* doing a targeted create/update for.
//
// In that case, we want to error to force the user to be explicit about wanting this resource
// to be created. However, we can't issue the error until later on when the resource is
// referenced. So, to support this we create a special "same" step here for this resource. That
// "same" step has a bit on it letting us know that it is for this case. If we then later see a
// resource that depends on this resource, we will issue an error letting the user know.
//
// We will also not record this non-created resource into the checkpoint as it doesn't actually
// exist.
if !isTargeted {
sg.sames[urn] = true
sg.skippedCreates[urn] = true
return []Step{NewSkippedCreateStep(sg.deployment, event, new)}, nil
}
sg.creates[urn] = true
logging.V(7).Infof("Planner decided to create '%v' (inputs=%v)", urn, new.Inputs)
return []Step{NewCreateStep(sg.deployment, event, new)}, nil
}
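// generateStepsFromDiff diffs the old and new states for a resource and returns the
// update or replacement steps needed to reconcile them. An empty result means the
// diff reported no changes, and the caller falls back to emitting a SameStep.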
func (sg *stepGenerator) generateStepsFromDiff(
event RegisterResourceEvent, urn resource.URN, old, new *resource.State,
oldInputs, oldOutputs, inputs resource.PropertyMap,
prov plugin.Provider, goal *resource.Goal, randomSeed []byte,
) ([]Step, error) {
// We only allow unknown property values to be exposed to the provider if we are performing an update preview.
allowUnknowns := sg.deployment.opts.DryRun
diff, err := sg.diff(urn, old, new, oldInputs, oldOutputs, inputs, prov, allowUnknowns, goal.IgnoreChanges)
// If the plugin indicated that the diff is unavailable, assume that the resource will be updated and
// report the message contained in the error.
if _, ok := err.(plugin.DiffUnavailableError); ok {
diff = plugin.DiffResult{Changes: plugin.DiffSome}
sg.deployment.ctx.Diag.Warningf(diag.RawMessage(urn, err.Error()))
} else if err != nil {
return nil, err
}
// Ensure that we received a sensible response.
if diff.Changes != plugin.DiffNone && diff.Changes != plugin.DiffSome {
return nil, fmt.Errorf(
"unrecognized diff state for %s: %d", urn, diff.Changes)
}
hasInitErrors := len(old.InitErrors) > 0
// Update the diff to apply any replaceOnChanges annotations and to include initErrors in the diff.
diff, err = applyReplaceOnChanges(diff, goal.ReplaceOnChanges, hasInitErrors)
if err != nil {
return nil, err
}
// If there were changes check for a replacement vs. an in-place update.
if diff.Changes == plugin.DiffSome || old.PendingReplacement {
if diff.Replace() || old.PendingReplacement {
// If this resource is protected we can't replace it, because that entails a delete.
// Note that we do allow unprotecting and replacing to happen in a single update
// cycle: we only error when both the old state and the new goal are marked protected.
if new.Protect && old.Protect {
message := fmt.Sprintf("unable to replace resource %q\n"+
"as it is currently marked for protection. To unprotect the resource, "+
"remove the `protect` flag from the resource in your Pulumi "+
"program and run `pulumi up`", urn)
sg.deployment.ctx.Diag.Errorf(diag.StreamMessage(urn, message, 0))
sg.sawError = true
// In Preview, we mark the deployment as Error but continue to next steps,
// so that the preview is shown to the user and they can see the diff causing it.
// In Update mode, we bail to stop any further actions immediately. If we don't bail and
// we're doing a create before delete replacement we'll execute the create before getting
// to the delete error.
if !sg.deployment.opts.DryRun {
return nil, result.BailErrorf(message)
}
}
// If the goal state specified an ID, issue an error: the replacement will change the ID, and is
// therefore incompatible with the goal state.
if goal.ID != "" {
const message = "previously-imported resources that still specify an ID may not be replaced; " +
"please remove the `import` declaration from your program"
if sg.deployment.opts.DryRun {
sg.deployment.ctx.Diag.Warningf(diag.StreamMessage(urn, message, 0))
} else {
return nil, errors.New(message)
}
}
sg.replaces[urn] = true
// If we are going to perform a replacement, we need to recompute the default values. The above logic
// had assumed that we were going to carry them over from the old resource, which is no longer true.
//
// Note that if we're performing a targeted replace, we already have the correct inputs.
if prov != nil && !sg.isTargetedReplace(urn) {
resp, err := prov.Check(context.TODO(), plugin.CheckRequest{
URN: urn,
News: goal.Properties,
AllowUnknowns: allowUnknowns,
RandomSeed: randomSeed,
})
failures := resp.Failures
inputs := resp.Properties
if err != nil {
return nil, err
} else if issueCheckErrors(sg.deployment, new, urn, failures) {
return nil, result.BailErrorf("resource %v has check errors: %v", urn, failures)
}
new.Inputs = inputs
}
if logging.V(7) {
logging.V(7).Infof("Planner decided to replace '%v' (oldprops=%v inputs=%v replaceKeys=%v)",
urn, oldInputs, new.Inputs, diff.ReplaceKeys)
}
// We have two approaches to performing replacements:
//
// * CreateBeforeDelete: the default mode first creates a new instance of the resource, then
// updates all dependent resources to point to the new one, and finally after all of that,
// deletes the old resource. This ensures minimal downtime.
//
// * DeleteBeforeCreate: this mode can be used for resources that cannot tolerate having
// side-by-side old and new instances alive at once. This first deletes the resource and
// then creates the new one. This may result in downtime, so is less preferred. Note that
// until pulumi/pulumi#624 is resolved, we cannot safely perform this operation on resources
// that have dependent resources (we try to delete the resource while they refer to it).
//
// The provider is responsible for requesting which of these two modes to use. The user can override
// the provider's decision by setting the `deleteBeforeReplace` field of `ResourceOptions` to either
// `true` or `false`.
deleteBeforeReplace := diff.DeleteBeforeReplace
if goal.DeleteBeforeReplace != nil {
deleteBeforeReplace = *goal.DeleteBeforeReplace
}
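// Illustrative sketch only (not engine code; `example.NewThing` is a hypothetical
// resource constructor): a program opts a single resource into the delete-before-create
// mode through the SDK's DeleteBeforeReplace resource option, which is what populates
// goal.DeleteBeforeReplace above:
//
//	res, err := example.NewThing(ctx, "thing", &example.ThingArgs{ /* ... */ },
//		pulumi.DeleteBeforeReplace(true)) // force delete-before-create for this resource
//
// When the option is left unset, goal.DeleteBeforeReplace stays nil and the provider's
// suggestion from the diff is used unchanged.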
if deleteBeforeReplace {
logging.V(7).Infof("Planner decided to delete-before-replacement for resource '%v'", urn)
contract.Assertf(sg.deployment.depGraph != nil,
"dependency graph must be available for delete-before-replace")
// DeleteBeforeCreate implies that we must immediately delete the resource. For correctness,
// we must also eagerly delete all resources that depend directly or indirectly on the resource
// being replaced and would be replaced by a change to the relevant dependency.
//
// To do this, we'll utilize the dependency information contained in the snapshot if it is
// trustworthy, which is interpreted by the DependencyGraph type.
var steps []Step
toReplace, err := sg.calculateDependentReplacements(old)
if err != nil {
return nil, err
}
// Deletions must occur in reverse dependency order, and `toReplace` is returned in dependency
// order, so we iterate over it in reverse.
for i := len(toReplace) - 1; i >= 0; i-- {
dependentResource := toReplace[i].res
// If we already deleted this resource due to some other DBR, don't do it again.
if sg.pendingDeletes[dependentResource] {
continue
}
// If we're generating plans, create a plan for this delete
if sg.deployment.opts.GeneratePlan {
if _, ok := sg.deployment.newPlans.get(dependentResource.URN); !ok {
// We haven't seen this resource before, so create a new
// resource plan for it with no goal (because it's going to be a delete)
resourcePlan := &ResourcePlan{}
sg.deployment.newPlans.set(dependentResource.URN, resourcePlan)
}
}
sg.dependentReplaceKeys[dependentResource.URN] = toReplace[i].keys
logging.V(7).Infof("Planner decided to delete '%v' due to dependence on condemned resource '%v'",
dependentResource.URN, urn)
// This resource might already be pending-delete
if dependentResource.Delete {
steps = append(steps, NewDeleteStep(sg.deployment, sg.deletes, dependentResource))
} else {
// Check if the resource is protected; if it is, we can't do this replacement chain.
if dependentResource.Protect {
message := fmt.Sprintf("unable to replace resource %q as part of replacing %q "+
"as it is currently marked for protection. To unprotect the resource, "+
"remove the `protect` flag from the resource in your Pulumi "+
"program and run `pulumi up`, or use the command:\n"+
"`pulumi state unprotect %q`",
dependentResource.URN, urn, dependentResource.URN)
sg.deployment.ctx.Diag.Errorf(diag.StreamMessage(urn, message, 0))
sg.sawError = true
return nil, result.BailErrorf(message)
}
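// Illustrative sketch only (hypothetical resource name and constructor): protection is
// normally set in the program via the Protect resource option, e.g.
//
//	res, err := example.NewThing(ctx, "thing", nil, pulumi.Protect(true))
//
// and must be cleared (by removing the option and running `pulumi up`, or via
// `pulumi state unprotect '<urn>'`) before a replacement chain may delete the resource.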
steps = append(steps, NewDeleteReplacementStep(sg.deployment, sg.deletes, dependentResource, true))
}
// Mark the condemned resource as deleted. We won't know until later in the deployment whether
// or not we're going to be replacing this resource.
sg.deletes[dependentResource.URN] = true
sg.pendingDeletes[dependentResource] = true
}
// We're going to delete the old resource before creating the new one. We need to make sure
// that the old provider is loaded.
err = sg.deployment.EnsureProvider(old.Provider)
if err != nil {
return nil, fmt.Errorf("could not load provider for resource %v: %w", old.URN, err)
}
var deleteStep Step
if old.PendingReplacement {
deleteStep = NewRemovePendingReplaceStep(sg.deployment, old)
} else {
deleteStep = NewDeleteReplacementStep(sg.deployment, sg.deletes, old, true)
}
return append(steps,
deleteStep,
NewReplaceStep(sg.deployment, old, new, diff.ReplaceKeys, diff.ChangedKeys, diff.DetailedDiff, false),
NewCreateReplacementStep(
sg.deployment, event, old, new, diff.ReplaceKeys, diff.ChangedKeys, diff.DetailedDiff, false),
), nil
}
return []Step{
NewCreateReplacementStep(
sg.deployment, event, old, new, diff.ReplaceKeys, diff.ChangedKeys, diff.DetailedDiff, true),
NewReplaceStep(sg.deployment, old, new, diff.ReplaceKeys, diff.ChangedKeys, diff.DetailedDiff, true),
// note that the delete step is generated "later" on, after all creates/updates finish.
}, nil
}
// If we fell through, it's an update.
sg.updates[urn] = true
if logging.V(7) {
logging.V(7).Infof("Planner decided to update '%v' (oldprops=%v inputs=%v)", urn, oldInputs, new.Inputs)
}
return []Step{
NewUpdateStep(sg.deployment, event, old, new, diff.StableKeys, diff.ChangedKeys, diff.DetailedDiff,
goal.IgnoreChanges),
}, nil
}
// If the resource was unchanged but there were initialization errors, generate an empty update
// step to attempt to "continue" awaiting initialization.
if hasInitErrors {
sg.updates[urn] = true
return []Step{NewUpdateStep(sg.deployment, event, old, new, diff.StableKeys, nil, nil, nil)}, nil
}
// Else there are no changes needed
return nil, nil
}
func (sg *stepGenerator) GenerateDeletes(targetsOpt UrnTargets) ([]Step, error) {
// To compute the deletion list, we must walk the list of old resources *backwards*. This is because the list is
// stored in dependency order, and earlier elements are possibly leaf nodes for later elements. We must not delete
// dependencies prior to their dependent nodes.
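// For illustration only (hypothetical resource names): if the previous snapshot lists
// resources in dependency order as
//
//	[ provider, bucket, bucketObject ]
//
// where bucketObject may depend on bucket and bucket on provider, then deletes must be
// issued as bucketObject, then bucket, then provider — hence the backwards walk below.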
var dels []Step
if prev := sg.deployment.prev; prev != nil {
for i := len(prev.Resources) - 1; i >= 0; i-- {
// If this resource is explicitly marked for deletion or wasn't seen at all, delete it.
res := prev.Resources[i]
if res.Delete {
// The below assert is commented-out because it's believed to be wrong.
//
// The original justification for this assert is that the author (swgillespie) believed that
// it was impossible for a single URN to be deleted multiple times in the same program.
// This has empirically been proven to be false - it is possible using today's engine to construct
// a series of actions that puts arbitrarily many pending delete resources with the same URN in
// the snapshot.
//
// It is not clear whether or not this is OK. I (swgillespie), the author of this comment, have
// seen no evidence that it is *not* OK. However, concerns were raised about what this means for
// structural resources, and so until that question is answered, I am leaving this comment and
// assert in the code.
//
// Regardless, it is better to admit strange behavior in corner cases than it is to crash the CLI
// whenever we see multiple deletes for the same URN.
// contract.Assert(!sg.deletes[res.URN])
if sg.pendingDeletes[res] {
logging.V(7).Infof(
"Planner ignoring pending-delete resource (%v, %v) that was already deleted", res.URN, res.ID)
continue
}
if sg.deletes[res.URN] {
logging.V(7).Infof(
"Planner is deleting pending-delete urn '%v' that has already been deleted", res.URN)
}
logging.V(7).Infof("Planner decided to delete '%v' due to replacement", res.URN)
sg.deletes[res.URN] = true
dels = append(dels, NewDeleteReplacementStep(sg.deployment, sg.deletes, res, false))
} else if _, aliased := sg.aliased[res.URN]; !sg.sames[res.URN] && !sg.updates[res.URN] && !sg.replaces[res.URN] &&
!sg.reads[res.URN] && !aliased {
// NOTE: we deliberately do not check sg.deletes here, as it is possible for us to issue multiple
// delete steps for the same URN if the old checkpoint contained pending deletes.
logging.V(7).Infof("Planner decided to delete '%v'", res.URN)
sg.deletes[res.URN] = true
if !res.PendingReplacement {
dels = append(dels, NewDeleteStep(sg.deployment, sg.deletes, res))
} else {
dels = append(dels, NewRemovePendingReplaceStep(sg.deployment, res))
}
}
// We just added a Delete step, so we need to ensure the provider for this resource is available.
if sg.deletes[res.URN] {
err := sg.deployment.EnsureProvider(res.Provider)
if err != nil {
return nil, fmt.Errorf("could not load provider for resource %v: %w", res.URN, err)
}
}
}
}
// Check each proposed delete against the relevant resource plan
for _, s := range dels {
if sg.deployment.plan != nil {
if resourcePlan, ok := sg.deployment.plan.ResourcePlans[s.URN()]; ok {
if len(resourcePlan.Ops) == 0 {
return nil, fmt.Errorf("%v is not allowed by the plan: no more steps were expected for this resource", s.Op())
}
constraint := resourcePlan.Ops[0]
// We remove the Op from the list before doing the constraint check.
// This is because we look at Ops at the end to see if any expected operations didn't attempt to happen.
// This op has been attempted, it just might fail its constraint.
resourcePlan.Ops = resourcePlan.Ops[1:]
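// For example (hypothetical): if the saved plan recorded an update for this URN but the generator has now
// produced an operation the plan does not allow, ConstrainedTo fails and we report the violation below.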
if !ConstrainedTo(s.Op(), constraint) {
return nil, fmt.Errorf("%v is not allowed by the plan: this resource is constrained to %v", s.Op(), constraint)
}
} else {
if !ConstrainedTo(s.Op(), OpSame) {
return nil, fmt.Errorf("%v is not allowed by the plan: no steps were expected for this resource", s.Op())
}
}
}
// If we're generating plans, add a delete op to the plan for this resource.
if sg.deployment.opts.GeneratePlan {
resourcePlan, ok := sg.deployment.newPlans.get(s.URN())
if !ok {
// TODO(pdg-plan): using the program inputs means that non-determinism could sneak in as part of default
// application. However, it is necessary in the face of computed inputs.
resourcePlan = &ResourcePlan{}
sg.deployment.newPlans.set(s.URN(), resourcePlan)
}
resourcePlan.Ops = append(resourcePlan.Ops, s.Op())
}
}
// If -target was provided to either `pulumi update` or `pulumi destroy` then only delete
// resources that were specified.
allowedResourcesToDelete, err := sg.determineAllowedResourcesToDeleteFromTargets(targetsOpt)
if err != nil {
return nil, err
}
if allowedResourcesToDelete != nil {
filtered := []Step{}
for _, step := range dels {
if _, has := allowedResourcesToDelete[step.URN()]; has {
filtered = append(filtered, step)
}
}
dels = filtered
}
deletingUnspecifiedTarget := false
for _, step := range dels {
urn := step.URN()
if !targetsOpt.Contains(urn) && !sg.deployment.opts.TargetDependents {
d := diag.GetResourceWillBeDestroyedButWasNotSpecifiedInTargetList(urn)
// Targets were specified, but didn't include this resource to delete. Report all the
// problematic targets so the user doesn't have to keep adding them one at a time and
// re-running the operation.
//
// Mark that step generation entered an error state so that the entire app run fails.
sg.deployment.Diag().Errorf(d, urn)
sg.sawError = true
deletingUnspecifiedTarget = true
}
}
if deletingUnspecifiedTarget && !sg.deployment.opts.DryRun {
// In preview we keep going so that the user will hear about all the problems and can then
// fix up their command once (as opposed to adding a target, rerunning, adding a target,
// rerunning, etc. etc.).
//
// Doing a normal run. We should not proceed here at all. We don't want to delete
// something the user didn't ask for.
return nil, result.BailErrorf("delete untargeted resource")
}
return dels, nil
}
// getTargetDependents returns the (transitive) set of dependents on the target resources.
// This includes both implicit and explicit dependents in the DAG itself, as well as children.
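// For example (hypothetical layout): if target T has a child C, and another resource D lists T among its
// dependencies, the returned set contains T, C, and D, plus anything that transitively depends on them.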
func (sg *stepGenerator) getTargetDependents(targetsOpt UrnTargets) map[resource.URN]bool {
// Seed the list with the initial set of targets.
var frontier []*resource.State
for _, res := range sg.deployment.prev.Resources {
if targetsOpt.Contains(res.URN) {
frontier = append(frontier, res)
}
}
// Produce a dependency graph of resources.
dg := graph.NewDependencyGraph(sg.deployment.prev.Resources)
// Now accumulate a list of targets that are implicated because they depend upon the targets.
targets := make(map[resource.URN]bool)
for len(frontier) > 0 {
// Pop the next to explore, mark it, and skip any we've already seen.
next := frontier[0]
frontier = frontier[1:]
if _, has := targets[next.URN]; has {
continue
}
targets[next.URN] = true
// Compute the set of resources depending on this one, either implicitly, explicitly,
// or because it is a child resource. Add them to the frontier to keep exploring.
deps := dg.DependingOn(next, targets, true)
frontier = append(frontier, deps...)
}
return targets
}
// determineAllowedResourcesToDeleteFromTargets computes the full (transitive) closure of resources
// that need to be deleted to permit the full list of targetsOpt resources to be deleted. This list
// will include the targetsOpt resources, but may contain more than just that, if there are dependent
// or child resources that require the targets to exist (and so are implicated in the deletion).
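// For example (hypothetical): targeting a network for deletion also pulls in any resource whose
// delete-before-replace computation (via calculateDependentReplacements) shows it must be replaced when
// that network goes away.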
func (sg *stepGenerator) determineAllowedResourcesToDeleteFromTargets(
targetsOpt UrnTargets,
) (map[resource.URN]bool, error) {
if !targetsOpt.IsConstrained() {
// no specific targets, so we won't filter down anything
return nil, nil
}
// Produce a map of targets and their dependents, including explicit and implicit
// DAG dependencies, as well as children (transitively).
targets := sg.getTargetDependents(targetsOpt)
logging.V(7).Infof("Planner was asked to only delete/update '%v'", targetsOpt)
resourcesToDelete := make(map[resource.URN]bool)
// Now actually use all the requested targets to figure out the exact set to delete.
for target := range targets {
current := sg.deployment.olds[target]
if current == nil {
// user specified a target that didn't exist. they will have already gotten a warning
// about this when we called checkTargets. explicitly ignore this target since it won't
// be something we could possibly be trying to delete, nor could have dependents we
// might need to replace either.
continue
}
resourcesToDelete[target] = true
// the item the user is asking to destroy may cause downstream replacements. Clean those up
// as well. Use the standard delete-before-replace computation to determine the minimal
// set of downstream resources that are affected.
deps, err := sg.calculateDependentReplacements(current)
if err != nil {
return nil, err
}
for _, dep := range deps {
logging.V(7).Infof("GenerateDeletes(...): Adding dependent: %v", dep.res.URN)
resourcesToDelete[dep.res.URN] = true
}
}
if logging.V(7) {
keys := []resource.URN{}
for k := range resourcesToDelete {
keys = append(keys, k)
}
logging.V(7).Infof("Planner will delete all of '%v'", keys)
}
return resourcesToDelete, nil
}
// ScheduleDeletes takes a list of steps that will delete resources and "schedules" them by producing a list of list of
// steps, where each list can be executed in parallel but a previous list must be executed to completion before
// advancing to the next list.
//
// In lieu of tracking per-step dependencies and orienting the step executor around these dependencies, this function
// provides a conservative approximation of what deletions can safely occur in parallel. The insight here is that the
// resource dependency graph is a partially-ordered set and all partially-ordered sets can be easily decomposed into
// antichains - subsets of the set that are all not comparable to one another. (In this definition, "not comparable"
// means "do not depend on one another").
//
// The algorithm for decomposing a poset into antichains is:
// 1. While there exist elements in the poset,
// 1a. There must exist at least one "maximal" element of the poset. Let E_max be those elements.
// 2a. Remove all elements E_max from the poset. E_max is an antichain.
// 3a. Goto 1.
//
// Translated to our dependency graph:
// 1. While the set of condemned resources is not empty:
// 1a. Remove all resources with no outgoing edges from the graph and add them to the current antichain.
// 2a. Goto 1.
//
// The resulting list of antichains is a list of list of steps that can be safely executed in parallel. Since we must
// process deletes in reverse (so we don't delete resources upon which other resources depend), we reverse the list and
// hand it back to the deployment executor for safe execution.
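//
// As a small worked example (hypothetical resources): if C depends on B and B depends on A, the loop below
// produces the antichains [{A}, {B}, {C}]; after the final reversal the executor receives [{C}, {B}, {A}],
// so each resource is deleted only after everything that depends on it has been deleted.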
func (sg *stepGenerator) ScheduleDeletes(deleteSteps []Step) []antichain {
var antichains []antichain // the list of parallelizable steps we intend to return.
dg := sg.deployment.depGraph // the current deployment's dependency graph.
condemned := mapset.NewSet[*resource.State]() // the set of condemned resources.
stepMap := make(map[*resource.State]Step) // a map from resource states to the steps that delete them.
logging.V(7).Infof("Planner trusts dependency graph, scheduling deletions in parallel")
// For every step we've been given, record it as condemned and save the step that will be used to delete it. We'll
// iteratively place these steps into antichains as we remove elements from the condemned set.
for _, step := range deleteSteps {
condemned.Add(step.Res())
stepMap[step.Res()] = step
}
for !condemned.IsEmpty() {
var steps antichain
logging.V(7).Infof("Planner beginning schedule of new deletion antichain")
for res := range condemned.Iter() {
// Does res have any outgoing edges to resources that haven't already been removed from the graph?
condemnedDependencies := dg.DependenciesOf(res).Intersect(condemned)
if condemnedDependencies.IsEmpty() {
// If not, it's safe to delete res at this stage.
logging.V(7).Infof("Planner scheduling deletion of '%v'", res.URN)
steps = append(steps, stepMap[res])
}
// If one of this resource's dependencies or this resource's parent hasn't been removed from the graph yet,
// it can't be deleted this round.
}
// For all resources that are to be deleted in this round, remove them from the graph.
for _, step := range steps {
condemned.Remove(step.Res())
}
antichains = append(antichains, steps)
}
// Up until this point, all logic has been "backwards" - we're scheduling resources for deletion when all of their
// dependencies finish deletion, but that's exactly the opposite of what we need to do. We can only delete a
// resource when all *resources that depend on it* complete deletion. Our solution is still correct, though, it's
// just backwards.
//
// All we have to do here is reverse the list and then our solution is correct.
slices.Reverse(antichains)
return antichains
}
// providerChanged diffs the Provider field of old and new resources, returning true if the rest of the step generator
// should consider there to be a diff between these two resources.
func (sg *stepGenerator) providerChanged(urn resource.URN, old, new *resource.State) (bool, error) {
// If a resource's Provider field has changed, we may need to show a diff and we may not. This is subtle. See
// pulumi/pulumi#2753 for more details.
//
// Recent versions of Pulumi allow for language hosts to pass a plugin version to the engine. The purpose of this is
// to ensure that the plugin that the engine uses for a particular resource is *exactly equal* to the version of the
// SDK that the language host used to produce the resource registration. This is critical for correct versioning
// semantics; it is generally an error for a language SDK to produce a registration that is serviced by a
// differently versioned plugin, since the two are versioned in complete lockstep and there is no guarantee
// that the two will work correctly together when their versions differ.
if old.Provider == new.Provider {
return false, nil
}
logging.V(stepExecutorLogLevel).Infof("sg.diffProvider(%s, ...): observed provider diff", urn)
logging.V(stepExecutorLogLevel).Infof("sg.diffProvider(%s, ...): %v => %v", urn, old.Provider, new.Provider)
// If we're changing from a component resource to a non-component resource, there is no old provider to
// diff against and trigger a delete, but we still need to create the new custom resource. If we're changing
// from a custom resource to a component resource, we should always trigger a replace.
if old.Provider == "" || new.Provider == "" {
return true, nil
}
oldRef, err := providers.ParseReference(old.Provider)
if err != nil {
return false, err
}
newRef, err := providers.ParseReference(new.Provider)
if err != nil {
return false, err
}
if alias, ok := sg.aliased[oldRef.URN()]; ok && alias == newRef.URN() {
logging.V(stepExecutorLogLevel).Infof(
"sg.diffProvider(%s, ...): observed an aliased provider from %q to %q", urn, oldRef.URN(), newRef.URN())
return false, nil
}
// If one or both of these providers are not default providers, we will need to accept the diff and replace
// everything. This might not be strictly necessary, but it is conservatively correct.
if !providers.IsDefaultProvider(oldRef.URN()) || !providers.IsDefaultProvider(newRef.URN()) {
logging.V(stepExecutorLogLevel).Infof(
"sg.diffProvider(%s, ...): reporting provider diff due to change in default provider status", urn)
logging.V(stepExecutorLogLevel).Infof(
"sg.diffProvider(%s, ...): old provider %q is default: %v",
urn, oldRef.URN(), providers.IsDefaultProvider(oldRef.URN()))
logging.V(stepExecutorLogLevel).Infof(
"sg.diffProvider(%s, ...): new provider %q is default: %v",
urn, newRef.URN(), providers.IsDefaultProvider(newRef.URN()))
return true, nil
}
// If both of these providers are default providers, use the *new provider* to diff the config and determine if
// this provider requires replacement.
//
// Note that, if we have many resources managed by the same provider that is getting replaced in this manner,
// this will call DiffConfig repeatedly with the same arguments for every resource. If this becomes a
// performance problem, this result can be cached.
newProv, ok := sg.deployment.providers.GetProvider(newRef)
if !ok {
return false, fmt.Errorf("failed to resolve provider reference: %q", oldRef.String())
}
oldRes, ok := sg.deployment.olds[oldRef.URN()]
contract.Assertf(ok, "old state didn't have provider, despite resource using it?")
newRes, ok := sg.providers[newRef.URN()]
contract.Assertf(ok, "new deployment didn't have provider, despite resource using it?")
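// Both the old and new providers are default providers. Ask the new provider to diff its configuration
// against the old provider's recorded inputs and outputs; if that config diff itself requires replacement,
// the dependent resource must be replaced as well.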
diff, err := newProv.DiffConfig(context.TODO(), plugin.DiffConfigRequest{
URN: newRef.URN(),
OldInputs: oldRes.Inputs,
OldOutputs: oldRes.Outputs,
NewInputs: newRes.Inputs,
AllowUnknowns: true,
})
if err != nil {
return false, err
}
// If there is a replacement diff, we must also replace this resource.
if diff.Replace() {
logging.V(stepExecutorLogLevel).Infof(
"sg.diffProvider(%s, ...): new provider's DiffConfig reported replacement", urn)
return true, nil
}
// Otherwise, it's safe to allow this new provider to replace our old one.
logging.V(stepExecutorLogLevel).Infof(
"sg.diffProvider(%s, ...): both providers are default, proceeding with resource diff", urn)
return false, nil
}
// diff returns a DiffResult for the given resource.
func (sg *stepGenerator) diff(urn resource.URN, old, new *resource.State, oldInputs, oldOutputs,
newInputs resource.PropertyMap, prov plugin.Provider, allowUnknowns bool,
ignoreChanges []string,
) (plugin.DiffResult, error) {
// If this resource is marked for replacement, just return a "replace" diff that blames the id.
if sg.isTargetedReplace(urn) {
return plugin.DiffResult{Changes: plugin.DiffSome, ReplaceKeys: []resource.PropertyKey{"id"}}, nil
}
// Before diffing the resource, diff the provider field. If the provider field changes, we may or may
// not need to replace the resource.
providerChanged, err := sg.providerChanged(urn, old, new)
if err != nil {
return plugin.DiffResult{}, err
} else if providerChanged {
return plugin.DiffResult{Changes: plugin.DiffSome, ReplaceKeys: []resource.PropertyKey{"provider"}}, nil
}
// Apply legacy diffing behavior if requested. In this mode, if the provider-calculated inputs for a resource did
// not change, then the resource is considered to have no diff between its desired and actual state.
if sg.deployment.opts.UseLegacyDiff && oldInputs.DeepEquals(newInputs) {
return plugin.DiffResult{Changes: plugin.DiffNone}, nil
}
// If there is no provider for this resource (which should only happen for component resources), simply return a
// "diffs exist" result.
if prov == nil {
if oldInputs.DeepEquals(newInputs) {
return plugin.DiffResult{Changes: plugin.DiffNone}, nil
}
return plugin.DiffResult{Changes: plugin.DiffSome}, nil
}
return diffResource(urn, old.ID, oldInputs, oldOutputs, newInputs, prov, allowUnknowns, ignoreChanges)
}
// diffResource invokes the Diff function for the given custom resource's provider and returns the result.
func diffResource(urn resource.URN, id resource.ID, oldInputs, oldOutputs,
newInputs resource.PropertyMap, prov plugin.Provider, allowUnknowns bool,
ignoreChanges []string,
) (plugin.DiffResult, error) {
contract.Requiref(prov != nil, "prov", "must not be nil")
// Grab the diff from the provider. At this point we know that there were changes to the Pulumi inputs, so if the
// provider returns an "unknown" diff result, pretend it returned "diffs exist".
diff, err := prov.Diff(context.TODO(), plugin.DiffRequest{
URN: urn,
ID: id,
OldInputs: oldInputs,
OldOutputs: oldOutputs,
NewInputs: newInputs,
AllowUnknowns: allowUnknowns,
IgnoreChanges: ignoreChanges,
})
if err != nil {
return diff, err
}
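// If the provider could not determine whether anything changed, fall back to a structural comparison of
// the old and new inputs (after applying ignoreChanges) and synthesize a diff result from that.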
if diff.Changes == plugin.DiffUnknown {
new, res := processIgnoreChanges(newInputs, oldInputs, ignoreChanges)
if res != nil {
return plugin.DiffResult{}, res
}
tmp := oldInputs.Diff(new)
if tmp.AnyChanges() {
diff.Changes = plugin.DiffSome
diff.ChangedKeys = tmp.ChangedKeys()
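// This detailed diff was computed from inputs rather than outputs, so mark it as an input diff.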
diff.DetailedDiff = plugin.NewDetailedDiffFromObjectDiff(tmp, true /* inputDiff */)
} else {
diff.Changes = plugin.DiffNone
}
}
return diff, nil
}
// issueCheckErrors prints any check errors to the diagnostics error sink.
func issueCheckErrors(deployment *Deployment, new *resource.State, urn resource.URN,
failures []plugin.CheckFailure,
) bool {
return issueCheckFailures(deployment.Diag().Errorf, new, urn, failures)
}
// issueCheckFailures prints any check failures to the given printer function.
func issueCheckFailures(printf func(*diag.Diag, ...interface{}), new *resource.State, urn resource.URN,
failures []plugin.CheckFailure,
) bool {
if len(failures) == 0 {
return false
}
inputs := new.Inputs
for _, failure := range failures {
if failure.Property != "" {
printf(diag.GetResourcePropertyInvalidValueError(urn),
new.Type, urn.Name(), failure.Property, inputs[failure.Property], failure.Reason)
} else {
printf(
diag.GetResourceInvalidError(urn), new.Type, urn.Name(), failure.Reason)
}
}
return true
}
// processIgnoreChanges sets the value for each ignoreChanges property in inputs to the value from oldInputs. This has
// the effect of ensuring that no changes will be made for the corresponding property.
func processIgnoreChanges(inputs, oldInputs resource.PropertyMap,
ignoreChanges []string,
) (resource.PropertyMap, error) {
ignoredInputs := inputs.Copy()
var invalidPaths []string
for _, ignoreChange := range ignoreChanges {
path, err := resource.ParsePropertyPath(ignoreChange)
if err != nil {
continue
}
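// Reset copies the value at this path from oldInputs into ignoredInputs, so the ignored property cannot
// contribute a diff; paths that cannot be resolved are collected and reported below as invalid.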
ok := path.Reset(oldInputs, ignoredInputs)
if !ok {
invalidPaths = append(invalidPaths, ignoreChange)
}
}
if len(invalidPaths) != 0 {
return nil, fmt.Errorf("cannot ignore changes to the following properties because one or more elements of "+
"the path are missing: %q", strings.Join(invalidPaths, ", "))
}
return ignoredInputs, nil
}
func (sg *stepGenerator) loadResourceProvider(
urn resource.URN, custom bool, provider string, typ tokens.Type,
) (plugin.Provider, error) {
// If this is not a custom resource, then it has no provider by definition.
if !custom {
return nil, nil
}
// If this resource is a provider resource, use the deployment's provider registry for its CRUD operations.
// Otherwise, resolve the resource's provider reference.
if providers.IsProviderType(typ) {
return sg.deployment.providers, nil
}
contract.Assertf(provider != "", "must have a provider for custom resource %v", urn)
ref, refErr := providers.ParseReference(provider)
if refErr != nil {
return nil, sg.bailDaig(diag.GetBadProviderError(urn), provider, urn, refErr)
}
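// A reference to a denied default provider means default providers for this package have been disabled;
// report a dedicated diagnostic rather than a generic provider error.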
if providers.IsDenyDefaultsProvider(ref) {
pkg := providers.GetDeniedDefaultProviderPkg(ref)
return nil, sg.bailDaig(diag.GetDefaultProviderDenied(urn), pkg, urn)
}
p, ok := sg.deployment.GetProvider(ref)
if !ok {
return nil, sg.bailDaig(diag.GetUnknownProviderError(urn), provider, urn)
}
return p, nil
}
func (sg *stepGenerator) getProviderResource(urn resource.URN, provider string) *resource.State {
if provider == "" {
return nil
}
// All callers of this method are on paths that have previously validated that the provider
// reference can be parsed correctly and has a provider resource in the map.
ref, err := providers.ParseReference(provider)
contract.AssertNoErrorf(err, "failed to parse provider reference")
result := sg.providers[ref.URN()]
contract.Assertf(result != nil, "provider missing from step generator providers map")
return result
}
// initErrorSpecialKey is a special property key used to indicate that a diff is due to
// initialization errors existing in the old state instead of due to a specific property
// diff between old and new states.
const initErrorSpecialKey = "#initerror"
// applyReplaceOnChanges adjusts a DiffResult returned from a provider to apply the ReplaceOnChange
// settings in the desired state and init errors from the previous state.
func applyReplaceOnChanges(diff plugin.DiffResult,
replaceOnChanges []string, hasInitErrors bool,
) (plugin.DiffResult, error) {
// No further work is necessary for DiffNone unless init errors are present.
if diff.Changes != plugin.DiffSome && !hasInitErrors {
return diff, nil
}
replaceOnChangePaths := slice.Prealloc[resource.PropertyPath](len(replaceOnChanges))
for _, p := range replaceOnChanges {
path, err := resource.ParsePropertyPath(p)
if err != nil {
return diff, err
}
replaceOnChangePaths = append(replaceOnChangePaths, path)
}
// Calculate the new DetailedDiff
var modifiedDiff map[string]plugin.PropertyDiff
if diff.DetailedDiff != nil {
modifiedDiff = map[string]plugin.PropertyDiff{}
for p, v := range diff.DetailedDiff {
diffPath, err := resource.ParsePropertyPath(p)
if err != nil {
return diff, err
}
changeToReplace := false
for _, replaceOnChangePath := range replaceOnChangePaths {
if replaceOnChangePath.Contains(diffPath) {
changeToReplace = true
break
}
}
if changeToReplace {
v = v.ToReplace()
}
modifiedDiff[p] = v
}
}
// Calculate the new ReplaceKeys
modifiedReplaceKeysMap := map[resource.PropertyKey]struct{}{}
for _, k := range diff.ReplaceKeys {
modifiedReplaceKeysMap[k] = struct{}{}
}
for _, k := range diff.ChangedKeys {
for _, replaceOnChangePath := range replaceOnChangePaths {
keyPath, err := resource.ParsePropertyPath(string(k))
if err != nil {
continue
}
if replaceOnChangePath.Contains(keyPath) {
modifiedReplaceKeysMap[k] = struct{}{}
}
}
}
modifiedReplaceKeys := slice.Prealloc[resource.PropertyKey](len(modifiedReplaceKeysMap))
for k := range modifiedReplaceKeysMap {
modifiedReplaceKeys = append(modifiedReplaceKeys, k)
}
// Add init errors to modified diff results
modifiedChanges := diff.Changes
if hasInitErrors {
for _, replaceOnChangePath := range replaceOnChangePaths {
initErrPath, err := resource.ParsePropertyPath(initErrorSpecialKey)
if err != nil {
continue
}
if replaceOnChangePath.Contains(initErrPath) {
modifiedReplaceKeys = append(modifiedReplaceKeys, initErrorSpecialKey)
if modifiedDiff != nil {
modifiedDiff[initErrorSpecialKey] = plugin.PropertyDiff{
Kind: plugin.DiffUpdateReplace,
InputDiff: false,
}
}
// If an init error is present on a path that causes replacement, then trigger a replacement.
modifiedChanges = plugin.DiffSome
}
}
}
return plugin.DiffResult{
DetailedDiff: modifiedDiff,
ReplaceKeys: modifiedReplaceKeys,
ChangedKeys: diff.ChangedKeys,
Changes: modifiedChanges,
DeleteBeforeReplace: diff.DeleteBeforeReplace,
StableKeys: diff.StableKeys,
}, nil
}
type dependentReplace struct {
res *resource.State
keys []resource.PropertyKey
}
func (sg *stepGenerator) calculateDependentReplacements(root *resource.State) ([]dependentReplace, error) {
// We need to compute the set of resources that may be replaced by a change to the resource
// under consideration. We do this by taking the complete set of transitive dependents on the
// resource under consideration and removing any resources that would not be replaced by changes
// to their dependencies. We determine whether or not a resource may be replaced by substituting
// unknowns for input properties that may change due to deletion of the resources their value
// depends on and calling the resource provider's `Diff` method.
//
// This is perhaps clearer when described by example. Consider the following dependency graph:
//
// A
// __|__
// B C
// | _|_
// D E F
//
// In this graph, all of B, C, D, E, and F transitively depend on A. It may be the case,
// however, that changes to the specific properties of any of those resources R that would occur
// if a resource on the path to A were deleted and recreated may not cause R to be replaced. For
// example, the edge from B to A may be a simple `dependsOn` edge such that a change to B does
// not actually influence any of B's input properties. More commonly, the edge from B to A may
// be due to a property from A being used as the input to a property of B that does not require
// B to be replaced upon a change. In these cases, neither B nor D would need to be deleted
// before A could be deleted.
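//
// Concretely (a sketch of the walk below): the replace set is seeded with the
// root's URN. For each dependent we substitute unknowns for any inputs whose
// property dependencies fall in the replace set and ask its provider to diff;
// only dependents whose diffs report a replacement are recorded, and their
// URNs join the replace set so the effect propagates to their own dependents.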
var toReplace []dependentReplace
replaceSet := map[resource.URN]bool{root.URN: true}
requiresReplacement := func(r *resource.State) (bool, []resource.PropertyKey, error) {
// Neither component nor external resources require replacement.
if !r.Custom || r.External {
return false, nil, nil
}
// If the resource's provider is in the replace set, we must replace this resource.
if r.Provider != "" {
ref, err := providers.ParseReference(r.Provider)
if err != nil {
return false, nil, err
}
if replaceSet[ref.URN()] {
// We need to use the old provider configuration to delete this resource, so ensure it's loaded.
err := sg.deployment.EnsureProvider(r.Provider)
if err != nil {
return false, nil, fmt.Errorf("could not load provider for resource %v: %w", r.URN, err)
}
return true, nil, nil
}
}
// Scan the properties of this resource in order to determine whether or not any of them depend on a resource
// that requires replacement and build a set of input properties for the provider diff.
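// For example (hypothetical property name): if this resource's "subnetId"
// input is recorded as depending on a resource in the replace set, the diff
// below is computed as though "subnetId" had changed to an unknown value,
// letting the provider tell us whether that change would force a replacement.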
hasDependencyInReplaceSet, inputsForDiff := false, resource.PropertyMap{}
for pk, pv := range r.Inputs {
for _, propertyDep := range r.PropertyDependencies[pk] {
if replaceSet[propertyDep] {
hasDependencyInReplaceSet = true
pv = resource.MakeComputed(resource.NewStringProperty("<unknown>"))
}
}
inputsForDiff[pk] = pv
}
// If none of this resource's properties depend on a resource in the replace set, then none of the properties
// may change and this resource does not need to be replaced.
if !hasDependencyInReplaceSet {
return false, nil, nil
}
// We're going to have to call Diff on this resource's provider, so ensure that it has been created.
if !providers.IsProviderType(r.Type) {
err := sg.deployment.EnsureProvider(r.Provider)
if err != nil {
return false, nil, fmt.Errorf("could not load provider for resource %v: %w", r.URN, err)
}
} else {
// This resource is itself a provider, so load it so that the Diff call below is possible.
err := sg.deployment.SameProvider(r)
if err != nil {
return false, nil, fmt.Errorf("create provider %v: %w", r.URN, err)
}
}
// Otherwise, fetch the resource's provider. Since we have filtered out component resources, this resource must
// have a provider.
prov, err := sg.loadResourceProvider(r.URN, r.Custom, r.Provider, r.Type)
if err != nil {
return false, nil, err
}
contract.Assertf(prov != nil, "resource %v has no provider", r.URN)
// Call the provider's `Diff` method and return.
diff, err := prov.Diff(context.TODO(), plugin.DiffRequest{
URN: r.URN,
ID: r.ID,
OldInputs: r.Inputs,
OldOutputs: r.Outputs,
NewInputs: inputsForDiff,
AllowUnknowns: true,
})
if err != nil {
return false, nil, err
}
return diff.Replace(), diff.ReplaceKeys, nil
}
// Walk the root resource's dependents in order and build up the set of resources that require replacement.
//
// NOTE: the dependency graph we use for this calculation is based on the dependency graph from the last snapshot.
// If there are resources in this graph that used to depend on the root but have been re-registered such that they
// no longer depend on the root, we may make incorrect decisions. To avoid that, we rely on the observation that
// dependents can only have been _removed_ from the base dependency graph: for a dependent to have been added,
// it would have had to have been registered prior to the root, which is not a valid operation. This means that
// any resources that depend on the root must not yet have been registered, which in turn implies that resources
// that have already been registered must not depend on the root. Thus, we ignore these resources if they are
// encountered while walking the old dependency graph to determine the set of dependents.
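// For example (a sketch): if a resource depended on the root in the last
// snapshot but has already been re-registered in this deployment without that
// dependency, its URN appears in sg.urns and the walk below skips it.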
impossibleDependents := sg.urns
for _, d := range sg.deployment.depGraph.DependingOn(root, impossibleDependents, false) {
replace, keys, err := requiresReplacement(d)
if err != nil {
return nil, err
}
if replace {
toReplace, replaceSet[d.URN] = append(toReplace, dependentReplace{res: d, keys: keys}), true
}
}
// Return the list of resources to replace.
return toReplace, nil
}
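// AnalyzeResources runs any stack-level analyzers (policy packs) registered with
// the plugin host against the final outputs of every resource in the new state,
// recording policy violations as diagnostics. Remediation-level violations are
// treated as mandatory, since stack policies cannot be remediated.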
func (sg *stepGenerator) AnalyzeResources() error {
var resources []plugin.AnalyzerStackResource
sg.deployment.news.Range(func(urn resource.URN, v *resource.State) bool {
goal, ok := sg.deployment.goals.Load(urn)
contract.Assertf(ok, "failed to load goal for %s", urn)
resource := plugin.AnalyzerStackResource{
AnalyzerResource: plugin.AnalyzerResource{
URN: v.URN,
Type: v.Type,
Name: v.URN.Name(),
// Unlike Analyze, AnalyzeStack is called on the final outputs of each resource,
// to verify the final stack is in a compliant state.
Properties: v.Outputs,
Options: plugin.AnalyzerResourceOptions{
Protect: v.Protect,
IgnoreChanges: goal.IgnoreChanges,
DeleteBeforeReplace: goal.DeleteBeforeReplace,
AdditionalSecretOutputs: v.AdditionalSecretOutputs,
Aliases: v.GetAliases(),
CustomTimeouts: v.CustomTimeouts,
},
},
Parent: v.Parent,
Dependencies: v.Dependencies,
PropertyDependencies: v.PropertyDependencies,
}
providerResource := sg.getProviderResource(v.URN, v.Provider)
if providerResource != nil {
resource.Provider = &plugin.AnalyzerProviderResource{
URN: providerResource.URN,
Type: providerResource.Type,
Name: providerResource.URN.Name(),
Properties: providerResource.Inputs,
}
}
resources = append(resources, resource)
return true
})
analyzers := sg.deployment.ctx.Host.ListAnalyzers()
for _, analyzer := range analyzers {
diagnostics, err := analyzer.AnalyzeStack(resources)
if err != nil {
return err
}
for _, d := range diagnostics {
if d.EnforcementLevel == apitype.Remediate {
// Stack policies cannot be remediated, so treat the level as mandatory.
d.EnforcementLevel = apitype.Mandatory
}
sg.sawError = sg.sawError || (d.EnforcementLevel == apitype.Mandatory)
// If a URN was provided and it is a URN associated with a resource in the stack, use it.
// Otherwise, if the URN is empty or is not associated with a resource in the stack, use
// the default root stack URN.
var urn resource.URN
if d.URN != "" {
if _, ok := sg.deployment.news.Load(d.URN); ok {
urn = d.URN
}
}
if urn == "" {
urn = resource.DefaultRootStackURN(sg.deployment.Target().Name.Q(), sg.deployment.source.Project())
}
sg.deployment.events.OnPolicyViolation(urn, d)
}
}
return nil
}
// hasGeneratedStep returns true if and only if the step generator has generated a step for the given URN.
func (sg *stepGenerator) hasGeneratedStep(urn resource.URN) bool {
return sg.creates[urn] ||
sg.sames[urn] ||
sg.updates[urn] ||
sg.deletes[urn] ||
sg.replaces[urn] ||
sg.reads[urn]
}
// newStepGenerator creates a new step generator that operates on the given deployment.
func newStepGenerator(deployment *Deployment) *stepGenerator {
return &stepGenerator{
deployment: deployment,
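		// The maps below are bookkeeping sets recording which URNs have been
		// registered or read so far and which kind of step (if any) was
		// generated for each; this grouping comment is an editorial addition
		// summarizing the fields initialized here.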
urns: make(map[resource.URN]bool),
reads: make(map[resource.URN]bool),
creates: make(map[resource.URN]bool),
sames: make(map[resource.URN]bool),
replaces: make(map[resource.URN]bool),
updates: make(map[resource.URN]bool),
deletes: make(map[resource.URN]bool),
skippedCreates: make(map[resource.URN]bool),
pendingDeletes: make(map[*resource.State]bool),
providers: make(map[resource.URN]*resource.State),
dependentReplaceKeys: make(map[resource.URN][]resource.PropertyKey),
aliased: make(map[resource.URN]resource.URN),
aliases: make(map[resource.URN]resource.URN),
// We clone the targets passed as options because we will modify this set as
// we compute the full set (e.g. by expanding globs, or traversing
// dependents).
targetsActual: deployment.opts.Targets.Clone(),
}
}
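
// For reference, a minimal sketch of how a Deployment might wire up its step
// generator. The function name and surrounding plumbing are assumptions for
// illustration only; of the identifiers used, only newStepGenerator is defined
// in this file.
//
//	func exampleWiring(d *Deployment) {
//		sg := newStepGenerator(d)
//		_ = sg // the deployment's executor would subsequently feed resource events to sg
//	}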