# Description
This adds support to the engine for "remote transformations".

A transform is "remote" because it is invoked via the engine when it
receives a resource registration, rather than being run locally, in
process, before the registration is sent. These transforms can also span
multiple process boundaries, e.g. a transform function in a user
program, then a transform function in a component library, both running
for a resource registered by yet another component library.

The underlying new feature here is the idea of a `Callback`. The
expectation is that we're going to use callbacks for multiple features,
so they are _not_ defined in terms of transformations. A callback is an
untyped byte array (usually a protobuf message), plus an address
identifying the server that should be invoked to perform the callback,
and a token to identify it.

A language SDK can start up and serve a `Callbacks` service, keep a
mapping of tokens to in-process functions (currently just using UUIDs
for this), and then pass that service address and token to the engine to
be invoked later on.
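As a rough sketch (hypothetical names and shapes, not the actual SDK
code), the SDK-side bookkeeping amounts to a token-to-function registry
that the `Callbacks` gRPC handler dispatches into:

```go
package example

import (
	"fmt"
	"sync"

	"github.com/google/uuid"
)

// CallbackFunc consumes and produces untyped bytes (typically serialized
// protobuf messages); the engine never needs to know the concrete types.
type CallbackFunc func(request []byte) ([]byte, error)

type callbackRegistry struct {
	mu        sync.Mutex
	address   string // address the SDK's Callbacks gRPC server listens on
	callbacks map[string]CallbackFunc
}

func newCallbackRegistry(address string) *callbackRegistry {
	return &callbackRegistry{address: address, callbacks: map[string]CallbackFunc{}}
}

// register stores fn under a fresh UUID token and returns the (address,
// token) pair the SDK later sends to the engine.
func (r *callbackRegistry) register(fn CallbackFunc) (address, token string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	token = uuid.NewString()
	r.callbacks[token] = fn
	return r.address, token
}

// invoke is what the Callbacks service handler would call when the engine
// invokes a callback by token.
func (r *callbackRegistry) invoke(token string, request []byte) ([]byte, error) {
	r.mu.Lock()
	fn := r.callbacks[token]
	r.mu.Unlock()
	if fn == nil {
		return nil, fmt.Errorf("unknown callback token %q", token)
	}
	return fn(request)
}
```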
The engine uses these callbacks to track transform callbacks per
resource. On each new resource registration it invokes every relevant
callback with the resource's properties and options; the returned
properties and options are then passed to the next relevant transform
callback, until all have been called and the engine has the final
resource state and options to use.
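On the engine side the chaining is essentially a fold over the relevant
callbacks; a minimal sketch with hypothetical types:

```go
package example

// resourceState stands in for the properties and options the engine
// threads through the transform chain.
type resourceState struct {
	properties map[string]any
	options    map[string]any
}

// transformCallback stands in for one remote transform invocation.
type transformCallback func(resourceState) (resourceState, error)

// applyTransforms invokes each relevant callback in order, feeding the
// output of one into the next; the final state is what the engine uses
// for the resource.
func applyTransforms(initial resourceState, callbacks []transformCallback) (resourceState, error) {
	state := initial
	for _, cb := range callbacks {
		next, err := cb(state)
		if err != nil {
			return resourceState{}, err
		}
		state = next
	}
	return state, nil
}
```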
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
# Description
Similar to what we did with `InvokeRequest` a while ago: we're currently
using the same protobuf structure for `Provider.Call` and
`ResourceMonitor.Call`, even though a different set of fields is filled
in for each of them.

This splits the structure into `CallRequest` for providers and
`ResourceCallRequest` for the resource monitor. A number of fields in
each are removed and marked reserved, with a comment explaining why.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
# Description
Fixes https://github.com/pulumi/pulumi/issues/14532.

14532 was just for Remote and Component, but since raising it we've also
added LogicalName, so this PR adds support for that as well.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
# Description
github.com/golang/protobuf is marked deprecated, and I was getting
increasingly triggered by the inconsistency of importing the `Empty`
type from "github.com/golang/protobuf/ptypes/empty" or
"google.golang.org/protobuf/types/known/emptypb" as "pbempty", "empty",
or "emptypb". Similar for the struct type.

So this replaces all the protobuf imports with ones from
"google.golang.org/protobuf", normalises the import name to always just
be the module name (emptypb), and adds the depguard linter to ensure we
don't use the deprecated package anymore.
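For illustration, the normalized import style looks like this (a minimal
example, not code taken from this PR):

```go
package example

import (
	// Always import from google.golang.org/protobuf and use the module name
	// (emptypb, structpb) rather than ad-hoc aliases like "pbempty" or "empty".
	"google.golang.org/protobuf/types/known/emptypb"
	"google.golang.org/protobuf/types/known/structpb"
)

func emptyAndStruct() (*emptypb.Empty, *structpb.Struct) {
	return &emptypb.Empty{}, &structpb.Struct{}
}
```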
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
# Description
This extends the resource monitor interface with fields for plugin
checksums (on top of the existing plugin version and download URL
fields). These fields are threaded through the engine and persisted in
resource state. The sent or saved data is then used when installing
plugins to ensure that the checksums match what was recorded at the time
the SDK was built.

Similar to https://github.com/pulumi/pulumi/pull/13776, nothing is using
this yet, but it lays the engine-side plumbing for it.
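A minimal sketch of the kind of check this enables at plugin install
time (illustrative only, not the engine's actual code):

```go
package example

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"os"
)

// verifyPluginChecksum compares the SHA-256 of a downloaded plugin artifact
// against the checksum recorded when the SDK was built.
func verifyPluginChecksum(path string, expected []byte) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	sum := sha256.Sum256(data)
	if !bytes.Equal(sum[:], expected) {
		return fmt.Errorf("checksum mismatch for plugin %s", path)
	}
	return nil
}
```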
## Checklist
- [ ] I have run `make tidy` to update any new dependencies
- [ ] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
These changes add support for passing source position information in
gRPC metadata and recording the source position that corresponds to a
resource registration in the statefile.
Enabling source position information in the resource model can provide
substantial benefits, including but not limited to:
- Better errors from the Pulumi CLI
- Go-to-definition for resources in state
- Editor integration for errors, etc. from `pulumi preview`
Source positions are (file, line) or (file, line, column) tuples
represented as URIs. The line and column are stored in the fragment
portion of the URI as "line(,column)?". The scheme of the URI and the
form of its path component depend on the context in which it is
generated or used:
- During an active update, the URI's scheme is `file` and paths are
absolute filesystem paths. This allows consumers to easily access
arbitrary files that are available on the host.
- In a statefile, the URI's scheme is `project` and paths are relative
to the project root. This allows consumers to resolve source positions
relative to the project file in different contexts irrespective of the
location of the project itself (e.g. given a project-relative path and
the URL of the project's root on GitHub, one can build a GitHub URL for
the source position).
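For example, formatting a position under the `file` scheme during an
active update might look like this (a hypothetical helper, shown only to
illustrate the URI convention above):

```go
package example

import (
	"fmt"
	"net/url"
)

// sourcePositionURI renders a (file, line[, column]) tuple using the scheme
// described above, e.g. file:///home/user/proj/index.ts#12,7.
func sourcePositionURI(file string, line, column int) string {
	u := url.URL{Scheme: "file", Path: file}
	if column > 0 {
		u.Fragment = fmt.Sprintf("%d,%d", line, column)
	} else {
		u.Fragment = fmt.Sprintf("%d", line)
	}
	return u.String()
}
```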
During an update, source position information may be attached to gRPC
calls as "source-position" metadata. This allows arbitrary calls to be
associated with source positions without changes to their protobuf
payloads. Modifying the protobuf payloads is also a viable approach, but
is somewhat more invasive than attaching metadata, and requires changes
to every call signature.
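A hedged sketch of attaching that metadata on the client side (the
helper name is illustrative):

```go
package example

import (
	"context"

	"google.golang.org/grpc/metadata"
)

// withSourcePosition attaches a source position URI as "source-position"
// metadata on an outgoing gRPC call.
func withSourcePosition(ctx context.Context, positionURI string) context.Context {
	return metadata.AppendToOutgoingContext(ctx, "source-position", positionURI)
}
```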
Source positions should reflect the position in user code that initiated
a resource model operation (e.g. the source position passed with
`RegisterResource` should point at the resource's declaration in the
user's program, say in `index.ts`, _not_ at a position inside the Pulumi
SDK). In general, the Pulumi SDK should be able to infer the source
position of the resource registration, as the relationship between a
resource registration and its corresponding user code should be static
per SDK.
Source positions in state files will be stored as a new `registeredAt`
property on each resource. This property is optional.
This change updates the engine to detect if a `RegisterResource` request
is coming from an older Node.js SDK that is using incorrect alias specs
and, if so, transforms the aliases to be correct. This allows us to
maintain compatibility for users who have upgraded their CLI but are
still using an older version of the Node.js SDK with incorrect alias
specs.
We detect if the request is from a Node.js SDK by looking at the gRPC
request's metadata headers, specifically looking at the "pulumi-runtime"
and "user-agent" headers.
First, if the request has a "pulumi-runtime" header with a value of
"nodejs", we know it's coming from the Node.js plugin. The Node.js
language plugin proxies gRPC calls from the Node.js SDK to the resource
monitor and the proxy now sets the "pulumi-runtime" header to "nodejs"
for `RegisterResource` calls.
Second, if the request has a "user-agent" header that starts with
"grpc-node-js/", we know it's coming from the Node.js SDK. This is the
case for inline programs in the automation API, which connects directly
to the resource monitor, rather than going through the language plugin's
proxy.
We can't just look at "user-agent", because in the proxy case it will
have a Go-specific "user-agent".
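Put together, the detection looks roughly like this (an illustrative
sketch, not the engine's exact code):

```go
package example

import (
	"context"
	"strings"

	"google.golang.org/grpc/metadata"
)

// isNodeJSRequest reports whether a RegisterResource request appears to come
// from the Node.js SDK, either via the language plugin proxy (pulumi-runtime
// header) or directly from an inline automation API program (user-agent).
func isNodeJSRequest(ctx context.Context) bool {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		return false
	}
	for _, v := range md.Get("pulumi-runtime") {
		if v == "nodejs" {
			return true
		}
	}
	for _, v := range md.Get("user-agent") {
		if strings.HasPrefix(v, "grpc-node-js/") {
			return true
		}
	}
	return false
}
```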
Updated Node.js SDKs set a new `aliasSpecs` field on the
`RegisterResource` request, which indicates that the alias specs are
correct, and no transforms are needed.
Per team discussion, switching to gofumpt.
[gofumpt][1] is a stricter alternative to gofmt.
It addresses other stylistic concerns that gofmt doesn't yet cover.
[1]: https://github.com/mvdan/gofumpt
See the full list of [Added rules][2], but it includes:
- Dropping empty lines around function bodies
- Dropping unnecessary variable grouping when there's only one variable
- Ensuring an empty line between multi-line functions
- Simplification (`-s` in gofmt) is always enabled
- Ensuring multi-line function signatures end with
`) {` on a separate line.
[2]: https://github.com/mvdan/gofumpt#Added-rules
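A small illustration of two of those rules, using a made-up type:

```go
package example

// Client is a placeholder type for the illustration.
type Client struct {
	endpoint string
	timeout  int
}

// configure shows two of the rules: no blank line at the start of the
// function body, and the multi-line signature ends with ") {" on its own line.
func configure(
	c *Client,
	endpoint string,
	timeout int,
) {
	c.endpoint = endpoint
	c.timeout = timeout
}
```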
gofumpt is stricter, but there's no lock-in.
All gofumpt output is valid gofmt output,
so if we decide we don't like it, it's easy to switch back
without any code changes.
gofumpt support is built into the tooling we use for development
so this won't change development workflows.
- golangci-lint includes a gofumpt check (enabled in this PR)
- gopls, the LSP for Go, includes a gofumpt option
(see [installation instructions][3])
[3]: https://github.com/mvdan/gofumpt#installation
This change was generated by running:
```bash
gofumpt -w $(rg --files -g '*.go' | rg -v testdata | rg -v compilation_error)
```
The following files were manually tweaked afterwards:
- pkg/cmd/pulumi/stack_change_secrets_provider.go:
one of the lines overflowed and had comments in an inconvenient place
- pkg/cmd/pulumi/destroy.go:
`var x T = y` where `T` wasn't necessary
- pkg/cmd/pulumi/policy_new.go:
long line because of error message
- pkg/backend/snapshot_test.go:
long line trying to assign three variables in the same assignment
I have included a mention of gofumpt in CONTRIBUTING.md.
In many cases there is no need to delete resources if the container
resource is going to be deleted as well.
A few examples:
* Database object (roles, tables) when database is being deleted
* Cloud IAM bindings when user itself is being deleted
This helps with:
* Speeding the deletion process
* Removing unnecessary calls to providers
* Avoiding failed deletions when the pulumi user running the
plan has access to the container resource but not the contained
ones
To avoid deleting contained resources, set the `DeletedWith` resource
option to the container resource.
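A hedged sketch, with hypothetical types, of the engine-side rule this
enables:

```go
package example

// resource is a stand-in for the engine's view of a resource in state.
type resource struct {
	urn         string
	deletedWith string // URN of the container resource, or "" if unset
}

// shouldCallProviderDelete reports whether the provider's Delete needs to run
// for res, given the set of resource URNs being deleted in this plan.
func shouldCallProviderDelete(res resource, beingDeleted map[string]bool) bool {
	if res.deletedWith != "" && beingDeleted[res.deletedWith] {
		// The container is going away anyway; deleting the contained
		// resource separately would be redundant (and may be impossible
		// for the user running the plan).
		return false
	}
	return true
}
```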
TODO:
Should we support DeletedWith together with pending deletes?
A special case is when the contained resource is already marked as
pending deletion but we now want to delete the container resource, in
which case there is ultimately no need to delete the contained resource
at all.
This doesn't really affect anything with our current usage, but if we
ever put proto files in another package and try to import the current
set, it wouldn't actually be able to find them.

I noticed this while working on
https://github.com/pulumi/pulumi/pull/10792, where I added a new
"pulumirpc.engine" package. The proto file could refer to things like
"pulumirpc.PluginDependency", but the generated Go code then tried to
import it as `_go "github.com/pulumi/pulumi/v3/proto/go/pulumirpc"`,
which isn't actually the correct import path (it's missing the sdk
folder part, and there isn't actually a folder called pulumirpc).
* Send smart aliases via gRPC to engine
* Add to SupportsFeature
* Restore old logic when the engine doesn't support smartAliases
* Add to deploytest ResourceOptions
* Add tests
* Add to CHANGELOG
* Fix test
* Rename proto fields
* Regenerate protobufs
* Fix up SDKs after field rename
* Rename deploytest aliases
* Rename internal fields
* Fix typo in c# code
* Fix typescript
* Rename feature to aliasSpecs
* Rename type to Spec
* Plumb in basics of retainOnDelete
* Add test
* Make test pass
* Add to changelog
* Add to API list
* lint
* Add semicolon
* Fix Infof call
* Fix method call
* new delete mode work
* cleanup
* protectTest
* Fix up test
* Fix replace
* Fix up test
* Warn on drop
* lint
* Change to just a bool flag
* Regenerate proto
* Rework to just a bool flag with no error
* Remove old comment
* Fix C# typo
* rm extra space
* Add missing semicolon
* Reformat python
* False typo
* Fix typo in js function name
* Reword docs
* lint
* Read doesn't need retainOnDelete
* [engine] Pipe serverURL through register resource
* Fix lint
* Thread serverURL through default provider calls
* Change tag from "serverURL" to "pluginDownloadURL"
* Update CHANGELOG_PENDING.md
* Allow provider to be null
* Fix tests
* Include server url passthrough in test
* Fix parseProviderRequest
* Add test for url pass through
* Fix lint
* Correct small nits from @justinp
* Add test for default providers
* Move special helpers to providers
* Partial conversion serverURL -> pluginDownloadURL
* Remove serverURL
* Remove more serverURL instances
* const correctness
* Add url to ProviderRequest.Name()
I also canonicalize the url by removing any trailing '/'
* Fix typo + lint
* Add test for url canonicalization
* Fix ProviderRequest.Name for version=nil
Adds a new resource option to force replacement when certain properties report changes, even if the resource provider itself does not require a replacement.
Fixes #6753.
Co-authored-by: Levi Blackstone <levi@pulumi.com>
Adds initial support for resource methods (via a new `Call` gRPC method similar to `Invoke`), with support for authoring methods from Node.js, and calling methods from Python.
Resources are serialized as their URN, ID, and package version. Each
Pulumi package is expected to register itself with the SDK. The package
will be invoked to construct appropriate instances of rehydrated
resources. Packages are distinguished by their name and their version.
This is the foundation of cross-process resources.
Related to #2430.
Co-authored-by: Mikhail Shilkov <github@mikhail.io>
Co-authored-by: Luke Hoban <luke@pulumi.com>
Co-authored-by: Levi Blackstone <levi@pulumi.com>
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
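A hedged sketch of the shape of that call, using hypothetical Go types
rather than the actual gRPC/provider interface:

```go
package example

// ConstructRequest carries what the resource monitor hands to the plugin:
// the component's identity, inputs, input dependencies, and resource options.
type ConstructRequest struct {
	Name              string
	Type              string
	Inputs            map[string]any
	InputDependencies map[string][]string // property name -> URNs it depends on
	Options           map[string]any
}

// ConstructResponse carries what the plugin returns: the component's URN,
// its resolved outputs, and per-output dependency information (needed for
// features such as delete-before-replace).
type ConstructResponse struct {
	URN                string
	Outputs            map[string]any
	OutputDependencies map[string][]string
}

// componentProvider is the shape of the new provider capability.
type componentProvider interface {
	Construct(req ConstructRequest) (ConstructResponse, error)
}
```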
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
These changes add a new method to the resource provider gRPC interface,
`GetSchema`, that allows consumers of these providers to extract
JSON-serialized schema information for the provider's types, resources,
and functions.
These changes restore a more-correct version of the behavior that was
disabled with #3014. The original implementation of this behavior was
done in the SDKs, which do not have access to the complete inputs for a
resource (in particular, default values filled in by the provider during
`Check` are not exposed to the SDK). This lack of information meant that
the resolved output values could disagree with the typings present in
a provider SDK. Exacerbating this problem was the fact that unknown
values were dropped entirely, causing `undefined` values to appear in
unexpected places.
By doing this in the engine and allowing unknown values to be
represented in a first-class manner in the SDK, we can attack both of
these issues.
Although this behavior is not _strictly_ consistent with respect to the
resource model--in an update, a resource's output properties will come
from its provider and may differ from its input properties--this
behavior was present in the product for a fairly long time without
significant issues. In the future, we may be able to improve the
accuracy of resource outputs during a preview by allowing the provider
to dry-run CRUD operations and return partially-known values where
possible.
These changes also introduce new APIs in the Node and Python SDKs
that work with unknown values in a first-class fashion:
- A new parameter to the `apply` function that indicates that the
callback should be run even if the result of the apply contains
unknown values
- `containsUnknowns` and `isUnknown`, which return true if a value
either contains nested unknown values or is exactly an unknown value
- The `Unknown` type, which represents unknown values
The primary use case for these APIs is to allow nested properties with
known values to be accessed via the lifted property accessor even when
the containing property is not fully known. A common example of this
pattern is the `metadata.name` property of a Kubernetes `Namespace`
object: while other properties of the `metadata` bag may be unknown,
`name` is often known. These APIs allow `ns.metadata.name` to return a
known value in this case.
In order to avoid exposing downlevel SDKs to unknown values--a change
which could break user code by exposing it to unexpected values--a
language SDK must indicate whether or not it supports first-class
unknown values as part of each `RegisterResourceRequest`.
These changes also allow us to avoid breaking user code with the new
behavior introduced by the prior commit.
Fixes #3190.
With these changes, a user may explicitly set `deleteBeforeReplace` to
`false` in order to disable DBR behavior for a particular resource. This
is the SDK + CLI escape hatch for cases where the changes in
https://github.com/pulumi/pulumi-terraform/pull/465 cause undesirable
behavior.
These changes add support for passing `ignoreChanges` paths to resource
providers. This is intended to accommodate providers that perform diffs
between resource inputs and resource state (e.g. all Terraform-based
providers, the k8s provider when using API server dry-runs). These paths
are specified using the same syntax as the paths used in detailed diffs.
In addition to passing these paths to providers, the existing support
for `ignoreChanges` in inputs has been extended to accept paths rather
than top-level keys. It is an error to specify a path that is missing
one or more components in the old or new inputs.
Fixes #2936, #2663.
* Plumbing the custom timeouts from the engine to the providers
* Plumbing the CustomTimeouts through to the engine and adding test to show this
* Change the provider proto to include individual timeouts
* Plumbing the CustomTimeouts from the engine through to the Provider RPC interface
* Change how the CustomTimeouts are sent across RPC
These errors were spotted in testing. We can now see that the timeout
information is arriving in the RegisterResourceRequest
```
req=&pulumirpc.RegisterResourceRequest{
Type: "aws:s3/bucket:Bucket",
Name: "my-bucket",
Parent: "urn:pulumi:dev::aws-vpc::pulumi:pulumi:Stack::aws-vpc-dev",
Custom: true,
Object: &structpb.Struct{},
Protect: false,
Dependencies: nil,
Provider: "",
PropertyDependencies: {},
DeleteBeforeReplace: false,
Version: "",
IgnoreChanges: nil,
AcceptSecrets: true,
AdditionalSecretOutputs: nil,
Aliases: nil,
CustomTimeouts: &pulumirpc.RegisterResourceRequest_CustomTimeouts{
Create: 300,
Update: 400,
Delete: 500,
XXX_NoUnkeyedLiteral: struct {}{},
XXX_unrecognized: nil,
XXX_sizecache: 0,
},
XXX_NoUnkeyedLiteral: struct {}{},
XXX_unrecognized: nil,
XXX_sizecache: 0,
}
```
* Changing the design to use strings
* CHANGELOG entry to include the CustomTimeouts work
* Changing custom timeouts to be passed around the engine as converted value
We don't want to pass around strings - the user can provide one, but we
want to make the engine aware of the timeout in seconds as a float64
A resource can be imported by setting the `import` property in the
resource options bag when instantiating a resource. In order to
successfully import a resource, its desired configuration (i.e. its
inputs) must not differ from its actual configuration (i.e. its state)
as calculated by the resource's provider.
There are a few interesting state transitions hiding here when importing
a resource:
1. No prior resource exists in the checkpoint file. In this case, the
resource is simply imported.
2. An external resource exists in the checkpoint file. In this case, the
resource is imported and the old external state is discarded.
3. A non-external resource exists in the checkpoint file and its ID is
different from the ID to import. In this case, the new resource is
imported and the old resource is deleted.
4. A non-external resource exists in the checkpoint file, but the ID is
the same as the ID to import. In this case, the import ID is ignored
and the resource is treated as it would be in all cases except for
changes that would replace the resource. In that case, the step
generator issues an error that indicates that the import ID should be
removed: were we to move forward with the replace, the new state of
the stack would fall under case (3), which is almost certainly not
what the user intends.
Fixes #1662.
These changes make a subtle but critical adjustment to the process the
Pulumi engine uses to determine whether or not a difference exists
between a resource's actual and desired states, and adjusts the way this
difference is calculated and displayed accordingly.
Today, the Pulumi engine gets the first chance to decide whether or not
there is a difference between a resource's actual and desired states. It
does this by comparing the current set of inputs for a resource (i.e.
the inputs from the running Pulumi program) with the last set of inputs
used to update the resource. If there is no difference between the old
and new inputs, the engine decides that no change is necessary without
consulting the resource's provider. Only if there are changes does the
engine consult the resource's provider for more information about the
difference. This can be problematic for a number of reasons:
- Not all providers do input-input comparison; some do input-state
comparison
- Not all providers are able to update the last deployed set of inputs
when performing a refresh
- Some providers--either intentionally or due to bugs--may see changes
in resources whose inputs have not changed
All of these situations are confusing at the very least, and the first
is problematic with respect to correctness. Furthermore, the display
code only renders diffs it observes rather than rendering the diffs
observed by the provider, which can obscure the actual changes detected
at runtime.
These changes address both of these issues:
- Rather than comparing the current inputs against the last inputs
before calling a resource provider's Diff function, the engine calls
the Diff function in all cases.
- Providers may now return a list of properties that differ between the
requested and actual state and the way in which they differ. This
information will then be used by the CLI to render the diff
appropriately. A provider may also indicate that a particular diff is
between old and new inputs rather than old state and new inputs.
Fixes #2453.
Adds a new resource option `aliases` which can be used to rename a resource. When making a breaking change to the name or type of a resource or component, the old name can be added to the list of `aliases` for a resource to ensure that existing resources will be migrated to the new name instead of being deleted and replaced with the new named resource.
There are two key places this change is implemented.
The first is the step generator in the engine. When computing whether there is an old version of a registered resource, we now take into account the aliases specified on the registered resource. That is, we first look up the resource by its new URN in the old state, and then by any aliases provided (in order). This can allow the resource to be matched as a (potential) update to an existing resource with a different URN.
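A minimal sketch of that lookup, with hypothetical types:

```go
package example

// resourceSnapshot stands in for a resource recorded in the old state.
type resourceSnapshot struct {
	URN string
	// ... other recorded state elided
}

// findOld looks up a registered resource in the old state first by its new
// URN and then by each provided alias, in order.
func findOld(oldState map[string]*resourceSnapshot, urn string, aliases []string) (*resourceSnapshot, bool) {
	if old, ok := oldState[urn]; ok {
		return old, true
	}
	for _, alias := range aliases {
		if old, ok := oldState[alias]; ok {
			return old, true
		}
	}
	return nil, false
}
```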
The second is the core `Resource` constructor in the JavaScript (and soon Python) SDKs. This change ensures that when a parent resource is aliased, that all children implicitly inherit corresponding aliases. It is similar to how many other resource options are "inherited" implicitly from the parent.
Five specific scenarios are explicitly tested as part of this PR:
1. Renaming a resource
2. Adopting a resource into a component (as the owner of both component and consumption codebases)
3. Renaming a component instance (as the owner of the consumption codebase without changes to the component)
4. Changing the type of a component (as the owner of the component codebase without changes to the consumption codebase)
5. Combining (1) and (3) to make both changes to a resource at the same time
`pulumi query` requires that language plugins know about "query mode" so
that they don't do things like try to register the default stack
resource.
To communicate that a language host should boot into query mode, we
augment the language plugin protocol to include this information.
Fixes #2277.
Adds a new ignoreChanges resource option that allows specifying a list of property names whose values will be ignored during updates. The property values will be used for Create, but will be ignored for purposes of updates, and as a result also cannot trigger replacements.
This is a feature of the Pulumi engine, not of the resource providers, so no new logic is needed in providers to support this feature. Instead, the engine simply replaces the values of input properties in the goal state with old inputs for properties marked as ignoreChanges.
Currently, only top level properties may be specified in ignoreChanges. In the future, this could be extended to support paths to nested properties (including into array elements) with a JSONPath/JMESPath syntax.
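A hedged sketch of that substitution, with hypothetical types:

```go
package example

// applyIgnoreChanges overwrites the goal-state value of every ignored
// property with the previously recorded input, so ignored properties can
// never produce a diff.
func applyIgnoreChanges(goal, oldInputs map[string]any, ignoreChanges []string) map[string]any {
	result := make(map[string]any, len(goal))
	for k, v := range goal {
		result[k] = v
	}
	for _, key := range ignoreChanges {
		if old, ok := oldInputs[key]; ok {
			result[key] = old
		} else {
			// Assumption for this sketch: a property absent from the old
			// inputs is dropped so it cannot trigger a change either.
			delete(result, key)
		}
	}
	return result
}
```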
In pursuit of pulumi/pulumi#2389, this commit adds the necessary changes
to the resource monitor protocol so that language hosts can communicate
exactly what version of a provider should be used when servicing an
Invoke, ReadResource, or RegisterResource. The expectation here is that,
if a language host provides a version, the engine MUST use EXACTLY that
version of a provider plugin in order to service the request.