Parameterization refers to the ability for a provider to vary its schema
based on a parameter that is passed to a new `Parameterize` call on the
provider interface. The package reference that is returned may then be
used to interact with the bespoke schema/packages within.
Parameterization is key to e.g. dynamically bridging providers. In this
instance we can manage and release a single "bridge" provider that
accepts a parameter defining the upstream provider to bridge, and
returns a reference to a dynamically constructed package whose schema
reflects the upstream as needed.
In this commit, we extend the engine so that `Call` calls can accept a
package reference and thus be parameterized.
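As a sketch of this flow (the interface and names below are illustrative stand-ins, not the actual gRPC provider API):
```
// Hypothetical types sketching the parameterization flow described above.
interface PackageRef {
    name: string;
    version: string;
}

interface ParameterizingProvider {
    // Accepts an opaque parameter and returns a reference to a dynamically
    // constructed package whose schema reflects that parameter.
    parameterize(parameter: Uint8Array): Promise<PackageRef>;
}

// A "bridge" provider constructs a package per upstream provider.
const bridge: ParameterizingProvider = {
    async parameterize(parameter: Uint8Array): Promise<PackageRef> {
        // e.g. { name: "aws", version: "6.0.0" } identifying the upstream.
        const upstream = JSON.parse(Buffer.from(parameter).toString("utf8"));
        // ...a real bridge would build a schema for the upstream here...
        return { name: upstream.name, version: upstream.version };
    },
};
```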
We've recently introduced resource transforms, which allow users to
update any resource options and properties for the duration of a program
using a callback. We want to introduce similar functionality for Invokes
(and eventually also StreamInvokes, Read and Calls). This can help users
e.g. set default providers through transforms consistently for all
components.
While this PR only implements the engine parts of invoke transforms, the
API for this will look very similar to what the API for resource
transforms looks like. For example in TypeScript:
```
pulumi.runtime.registerInvokeTransform(args => {
    // Inspect or modify the invoke's args/opts here, then return them.
    return { args: args.args, opts: args.opts };
});
```
---------
Co-authored-by: Will Jones <will@sacharissa.co.uk>
Making this change has a couple of benefits.
Firstly (and honestly the main one), it makes codegen much simpler: we
just need to emit a base64 string and/or other embedded byte array into
the generated SDK. Currently we need to emit a full `proto.Value`
expression for each language, which isn't hard but isn't trivial either,
and means more combinations of things per test per language.
Secondly, it allows providers to use a more efficient encoding of their
parameter than JSON if one exists. I imagine some providers might make
the parameter value a protobuf message and parse that (similar to what
we do for transform functions), but they can easily fall back to just
treating the bytes as a JSON string if they want.
The only downside is that the value is obfuscated in the generated SDK
and in the state file. Neither is really expected to be viewed by
users, so this feels like a minor loss.
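For example, a provider that wants the simple fallback can just treat the bytes as UTF-8 JSON (a hypothetical helper, not an API added by this change):
```
// Codegen embeds the parameter as a base64 string in the generated SDK...
const embedded = Buffer.from(JSON.stringify({ upstream: "aws" })).toString("base64");

// ...and the provider decodes it however it likes; JSON is the easy default.
function parseParameter(b64: string): unknown {
    return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}

console.log(parseParameter(embedded)); // { upstream: "aws" }
```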
A few weeks ago we added the concept of package references, mostly for
parameterised providers, so that we wouldn't have to send the parameter
value over and over for each `RegisterResourceRequest`. This was done by
sending the package reference through the `provider` field.
Nothing uses this feature yet (it hasn't been added to any language
SDKs), and thinking it over this last week I realised there was an
issue with using it for extension packages.
When using an extension provider it's valid to also specify an explicit
provider, which will use the `provider` field. At that point there's
nowhere for the SDK to send extra information to the engine about the
parameterisation.
This is a small reworking of this idea to use a separate `package`
field, so that even for an extension type we can specify the
parameterised package it comes from and an explicit provider for it.
# Description
This is the first part of parameterized providers in the engine. This
only supports "replacement" packages, that is, where we fully replace a
provider's package with a new package (think how dynamic tfbridge will
work, vs how crd2pulumi will work).
I've made the decision to _not_ support using parameterised providers by
sending the parameter in the RegisterResource request. This will
necessitate some different work in how we send the parameter for
explicit providers compared to version and pluginDownloadURL, but I
think it's worth it going forward.
No changelog, as this is still basically unusable until codegen support
is done, and should still be considered primarily for internal
experimental use for now.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
- [x] I have added tests that prove my fix is effective or that my
feature works
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
# Description
This adds support for a `RegisterProvider` method to the engine. This
allows an SDK process to send the information for a package (name,
version, url, etc., and parameter in the future) and get back a UUID,
valid for that run of the engine, that can be used to look that
information up again.
That allows the SDK to just send the `provider` field in
`RegisterResourceRequest` instead of filling in `version`,
`pluginDownloadURL`, etc. (and, importantly, not having to fill in
`parameter` for parameterised providers, which could be a large amount
of data).
This doesn't update any of the SDKs to use this method yet. We can do
that piecemeal, but it will require core SDK and codegen changes for
each language.
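A minimal in-memory sketch of the idea (field and function names mirror the description above, not the exact protobuf definitions):
```
import { randomUUID } from "crypto";

interface ProviderInfo {
    name: string;
    version: string;
    pluginDownloadURL?: string;
}

// The engine keeps a per-run table of registered provider information...
const registered = new Map<string, ProviderInfo>();

// ...so the SDK sends the full details once and gets back a UUID...
function registerProvider(info: ProviderInfo): string {
    const uuid = randomUUID();
    registered.set(uuid, info);
    return uuid;
}

// ...which later requests use to look the information up again.
function lookupProvider(uuid: string): ProviderInfo | undefined {
    return registered.get(uuid);
}

const ref = registerProvider({ name: "aws", version: "6.0.0" });
console.log(lookupProvider(ref)); // { name: "aws", version: "6.0.0" }
```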
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
- [x] I have added tests that prove my fix is effective or that my
feature works
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
Similar to destroy --continue-on-error, this flag allows `pulumi up`
to continue if any errors are encountered.
Currently when we encounter an error while creating/updating a
resource, we cancel the context of the deployment executor, and thus
the deployment stops once the resources that are being processed in
parallel with the failed one finish being updated.
For --continue-on-error, we ignore these errors and let the deployment
executor continue. For the deployment executor to eventually exit, we
also have to mark the failed steps as done; otherwise the deployment
executor would just hang, and callers with open channels waiting for it
to finish/report back would hang indefinitely.
The errors in the step will still be reported back to the user by the
OnResourceStepPost callback.
Fixes https://github.com/pulumi/pulumi/issues/14515
---------
Co-authored-by: Fraser Waters <fraser@pulumi.com>
# Description
This adds support to the engine for "remote transformations".
A transform is "remote" because it is invoked via the engine on
receiving a resource registration, rather than being run locally in
process before sending a resource registration. These transforms can
also span multiple process boundaries, e.g. a transform function in a
user program, then a transform function in a component library, both
running for a resource registered by another component library.
The underlying new feature here is the idea of a `Callback`. The
expectation is we're going to use callbacks for multiple features so
these are _not_ defined in terms of transformations. A callback is an
untyped byte array (usually will be a protobuf message), plus an address
to define which server should be invoked to do the callback, and a token
to identify it.
A language SDK can start up and serve a `Callbacks` service, keep a
mapping of tokens to in-process functions (currently just using UUIDs
for this), and then pass that service address and token to the engine to
be invoked later on.
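The in-process bookkeeping might look roughly like this (a sketch of the idea, not the SDK's actual implementation):
```
import { randomUUID } from "crypto";

// A callback takes an untyped byte payload (usually a protobuf message)
// and returns one; the SDK maps tokens to these in-process functions.
type Callback = (request: Uint8Array) => Promise<Uint8Array>;

const callbacks = new Map<string, Callback>();

// Registering returns the (address, token) pair sent to the engine; the
// address is wherever this process serves its Callbacks gRPC service.
function registerCallback(target: Callback, address: string) {
    const token = randomUUID();
    callbacks.set(token, target);
    return { address, token };
}

// When the engine invokes the service, the SDK dispatches by token.
async function invoke(token: string, request: Uint8Array): Promise<Uint8Array> {
    const cb = callbacks.get(token);
    if (!cb) throw new Error(`unknown callback token ${token}`);
    return cb(request);
}
```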
The engine uses these callbacks to track transform callbacks per
resource. On each new resource registration it invokes each relevant
callback with the resource's properties and options; the new properties
and options returned are passed on to the next relevant transform
callback, until all have been called and the engine has the final
resource state and options to use.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
# Description
Similar to what we did to the `InvokeRequest` a while ago. We're
currently using the same protobuf structure for `Provider.Call` and
`ResourceMonitor.Call` despite different field sets being filled in for
each of them.
This splits the structure into `CallRequest` for providers and
`ResourceCallRequest` for the resource monitor. A number of fields in
each are removed and marked reserved with a comment explaining why.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
# Description
This extends the resource monitor interface with fields for plugin
checksums (on top of the existing plugin version and download url
fields). These fields are threaded through the engine and are persisted
in resource state. The sent or saved data is then used when installing
plugins to ensure that the checksums match what was recorded at the time
the SDK was built.
Similar to https://github.com/pulumi/pulumi/pull/13776, nothing is using
this yet, but it lays the engine-side plumbing for checksums.
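Conceptually, the install-time check amounts to something like this (a hypothetical helper; the real logic lives in the engine's plugin installer):
```
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Compare a downloaded plugin archive against the checksum recorded when
// the SDK was built; checksums are typically tracked per OS/architecture.
function verifyPluginChecksum(archivePath: string, expectedSha256Hex: string): void {
    const actual = createHash("sha256").update(readFileSync(archivePath)).digest("hex");
    if (actual !== expectedSha256Hex) {
        throw new Error(
            `plugin checksum mismatch: expected ${expectedSha256Hex}, got ${actual}`);
    }
}
```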
## Checklist
- [ ] I have run `make tidy` to update any new dependencies
- [ ] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
These changes add support for passing source position information in
gRPC metadata and recording the source position that corresponds to a
resource registration in the statefile.
Enabling source position information in the resource model can provide
substantial benefits, including but not limited to:
- Better errors from the Pulumi CLI
- Go-to-definition for resources in state
- Editor integration for errors, etc. from `pulumi preview`
Source positions are (file, line) or (file, line, column) tuples
represented as URIs. The line and column are stored in the fragment
portion of the URI as "line(,column)?". The scheme of the URI and the
form of its path component depends on the context in which it is
generated or used:
- During an active update, the URI's scheme is `file` and paths are
absolute filesystem paths. This allows consumers to easily access
arbitrary files that are available on the host.
- In a statefile, the URI's scheme is `project` and paths are relative
to the project root. This allows consumers to resolve source positions
relative to the project file in different contexts irrespective of the
location of the project itself (e.g. given a project-relative path and
the URL of the project's root on GitHub, one can build a GitHub URL for
the source position).
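For illustration, building the two forms might look like this (a sketch; the exact serialization is whatever the engine writes):
```
// (file, line[, column]) -> URI with the position in the fragment.
function sourcePositionURI(scheme: "file" | "project", path: string,
                           line: number, column?: number): string {
    const fragment = column === undefined ? `${line}` : `${line},${column}`;
    return `${scheme}://${path}#${fragment}`;
}

// During an active update: absolute filesystem path.
sourcePositionURI("file", "/home/user/proj/index.ts", 12, 4);
// => "file:///home/user/proj/index.ts#12,4"

// In a statefile: path relative to the project root.
sourcePositionURI("project", "/index.ts", 12, 4);
// => "project:///index.ts#12,4"
```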
During an update, source position information may be attached to gRPC
calls as "source-position" metadata. This allows arbitrary calls to be
associated with source positions without changes to their protobuf
payloads. Modifying the protobuf payloads is also a viable approach, but
is somewhat more invasive than attaching metadata, and requires changes
to every call signature.
Source positions should reflect the position in user code that initiated
a resource model operation (e.g. the source position passed with
`RegisterResource` for `pet` in the example above should be the source
position in `index.ts`, _not_ the source position in the Pulumi SDK). In
general, the Pulumi SDK should be able to infer the source position of
the resource registration, as the relationship between a resource
registration and its corresponding user code should be static per SDK.
Source positions in state files will be stored as a new `registeredAt`
property on each resource. This property is optional.
This change updates the engine to detect if a `RegisterResource` request
is coming from an older Node.js SDK that is using incorrect alias specs
and, if so, transforms the aliases to be correct. This allows us to
maintain compatibility for users who have upgraded their CLI but are
still using an older version of the Node.js SDK with incorrect alias
specs.
We detect if the request is from a Node.js SDK by looking at the gRPC
request's metadata headers, specifically looking at the "pulumi-runtime"
and "user-agent" headers.
First, if the request has a "pulumi-runtime" header with a value of
"nodejs", we know it's coming from the Node.js plugin. The Node.js
language plugin proxies gRPC calls from the Node.js SDK to the resource
monitor and the proxy now sets the "pulumi-runtime" header to "nodejs"
for `RegisterResource` calls.
Second, if the request has a "user-agent" header that starts with
"grpc-node-js/", we know it's coming from the Node.js SDK. This is the
case for inline programs in the automation API, which connects directly
to the resource monitor, rather than going through the language plugin's
proxy.
We can't just look at "user-agent", because in the proxy case it will
have a Go-specific "user-agent".
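The detection boils down to roughly this check, sketched here in TypeScript against @grpc/grpc-js metadata (the real implementation lives in the Go engine):
```
import { Metadata } from "@grpc/grpc-js";

function isLegacyNodeJSRequest(md: Metadata): boolean {
    // Proxied calls from the Node.js language plugin set this header.
    if (md.get("pulumi-runtime").some(v => v.toString() === "nodejs")) {
        return true;
    }
    // Inline automation-API programs connect directly with the Node gRPC client.
    return md.get("user-agent").some(v => v.toString().startsWith("grpc-node-js/"));
}
```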
Updated Node.js SDKs set a new `aliasSpecs` field on the
`RegisterResource` request, which indicates that the alias specs are
correct, and no transforms are needed.
In many cases there is no need to delete resources if the container
resource is going to be deleted as well.
A few examples:
* Database object (roles, tables) when database is being deleted
* Cloud IAM bindings when user itself is being deleted
This helps with:
* Speeding the deletion process
* Removing unnecessary calls to providers
* Avoiding failed deletions when the pulumi user running the
plan has access to the container resource but not the contained
ones
To avoid deleting contained resources, set the `DeletedWith` resource
option to the container resource.
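For example (assuming the @pulumi/gcp provider; any container/contained pair works the same way):
```
import * as gcp from "@pulumi/gcp";

const instance = new gcp.sql.DatabaseInstance("db", {
    databaseVersion: "POSTGRES_14",
    settings: { tier: "db-f1-micro" },
});

// Skip the provider's delete call for the user when the whole instance
// is being deleted anyway.
const user = new gcp.sql.User("app-user", {
    instance: instance.name,
    password: "hunter2",
}, { deletedWith: instance });
```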
TODO:
Should we support DeletedWith with PendingDeletes?
A special case might be when the contained resource is marked as pending
deletion but we now want to delete the container resource, so ultimately
there is no need to delete the contained one anymore.
* Send smart aliases via gRPC to engine
* Add to SupportsFeature
* Restore old logic when the engine doesn't support smartAliases
* Add to deploytest ResourceOptions
* Add tests
* Add to CHANGELOG
* Fix test
* Rename proto fields
* Regenerate protobufs
* Fix up SDKs after field rename
* Rename deploytest aliases
* Rename internal fields
* Fix typo in c# code
* Fix typescript
* Rename feature to aliasSpecs
* Rename type to Spec
* Plumb in basics of retainOnDelete
* Add test
* Make test pass
* Add to changelog
* Add to API list
* lint
* Add semicolon
* Fix Infof call
* Fix method call
* new delete mode work
* cleanup
* protectTest
* Fix up test
* Fix replace
* Fix up test
* Warn on drop
* lint
* Change to just a bool flag
* Regenerate proto
* Rework to just a bool flag with no error
* Remove old comment
* Fix C# typo
* rm extra space
* Add missing semicolon
* Reformat python
* False typo
* Fix typo in js function name
* Reword docs
* lint
* Read doesn't need retainOnDelete
* [engine] Pipe serverURL through register resource
* Fix lint
* Thread serverURL through default provider calls
* Change tag from "serverURL" to "pluginDownloadURL"
* Update CHANGELOG_PENDING.md
* Allow provider to be null
* Fix tests
* Include server url passthrough in test
* Fix parseProviderRequest
* Add test for url pass through
* Fix lint
* Correct small nits from @justinp
* Add test for default providers
* Move special helpers to providers
* Partial conversion serverURL -> pluginDownloadURL
* Remove serverURL
* Remove more serverURL instances
* const correctness
* Add url to ProviderRequest.Name()
I also canonicalize the url by removing any trailing '/'
* Fix typo + lint
* Add test for url canonicalization
* Fix ProviderRequest.Name for version=nil
Adds a new resource option to force replacement when certain properties report changes, even if the resource provider itself does not require a replacement.
Fixes #6753.
Co-authored-by: Levi Blackstone <levi@pulumi.com>
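Usage looks like this (assuming @pulumi/aws; the option itself is provider-agnostic):
```
import * as aws from "@pulumi/aws";

// Replace the volume whenever `size` changes, even though the provider
// could update it in place.
const volume = new aws.ebs.Volume("data", {
    availabilityZone: "us-west-2a",
    size: 100,
}, { replaceOnChanges: ["size"] });
```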
Resources are serialized as their URN, ID, and package version. Each
Pulumi package is expected to register itself with the SDK. The package
will be invoked to construct appropriate instances of rehydrated
resources. Packages are distinguished by their name and their version.
This is the foundation of cross-process resources.
Related to #2430.
Co-authored-by: Mikhail Shilkov <github@mikhail.io>
Co-authored-by: Luke Hoban <luke@pulumi.com>
Co-authored-by: Levi Blackstone <levi@pulumi.com>
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
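In today's NodeJS SDK, such a provider looks roughly like the sketch below; the exact interface has evolved since this change, so treat the names as approximate:
```
import * as pulumi from "@pulumi/pulumi";
import * as provider from "@pulumi/pulumi/provider";

class MyComponent extends pulumi.ComponentResource {
    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("myLib:index:MyComponent", name, {}, opts);
        // ...create children here...
        this.registerOutputs({});
    }
}

class MyProvider implements provider.Provider {
    version = "0.0.1";

    async construct(name: string, type: string, inputs: pulumi.Inputs,
                    options: pulumi.ComponentResourceOptions): Promise<provider.ConstructResult> {
        if (type !== "myLib:index:MyComponent") {
            throw new Error(`unknown resource type ${type}`);
        }
        const component = new MyComponent(name, options);
        return { urn: component.urn, state: {} };
    }
}
```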
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
These changes restore a more-correct version of the behavior that was
disabled with #3014. The original implementation of this behavior was
done in the SDKs, which do not have access to the complete inputs for a
resource (in particular, default values filled in by the provider during
`Check` are not exposed to the SDK). This lack of information meant that
the resolved output values could disagree with the typings present in
a provider SDK. Exacerbating this problem was the fact that unknown
values were dropped entirely, causing `undefined` values to appear in
unexpected places.
By doing this in the engine and allowing unknown values to be
represented in a first-class manner in the SDK, we can attack both of
these issues.
Although this behavior is not _strictly_ consistent with respect to the
resource model--in an update, a resource's output properties will come
from its provider and may differ from its input properties--this
behavior was present in the product for a fairly long time without
significant issues. In the future, we may be able to improve the
accuracy of resource outputs during a preview by allowing the provider
to dry-run CRUD operations and return partially-known values where
possible.
These changes also introduce new APIs in the Node and Python SDKs
that work with unknown values in a first-class fashion:
- A new parameter to the `apply` function that indicates that the
callback should be run even if the result of the apply contains
unknown values
- `containsUnknowns` and `isUnknown`, which return true if a value
either contains nested unknown values or is exactly an unknown value
- The `Unknown` type, which represents unknown values
The primary use case for these APIs is to allow nested properties with
known values to be accessed via the lifted property accessor even when
the containing property is not fully known. A common example of this
pattern is the `metadata.name` property of a Kubernetes `Namespace`
object: while other properties of the `metadata` bag may be unknown,
`name` is often known. These APIs allow `ns.metadata.name` to return a
known value in this case.
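For example (assuming @pulumi/kubernetes):
```
import * as k8s from "@pulumi/kubernetes";

const ns = new k8s.core.v1.Namespace("app");

// During a preview, other fields of `metadata` may be unknown, but the
// lifted accessor can still resolve `name` when the provider knows it.
export const name = ns.metadata.name;
```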
In order to avoid exposing downlevel SDKs to unknown values--a change
which could break user code by exposing it to unexpected values--a
language SDK must indicate whether or not it supports first-class
unknown values as part of each `RegisterResourceRequest`.
These changes also allow us to avoid breaking user code with the new
behavior introduced by the prior commit.
Fixes #3190.
With these changes, a user may explicitly set `deleteBeforeReplace` to
`false` in order to disable DBR behavior for a particular resource. This
is the SDK + CLI escape hatch for cases where the changes in
https://github.com/pulumi/pulumi-terraform/pull/465 cause undesirable
behavior.
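In the SDK this looks like (assuming @pulumi/aws for the resource):
```
import * as aws from "@pulumi/aws";

// Opt out of delete-before-replace even if heuristics would choose it.
const table = new aws.dynamodb.Table("events", {
    attributes: [{ name: "id", type: "S" }],
    hashKey: "id",
    billingMode: "PAY_PER_REQUEST",
}, { deleteBeforeReplace: false });
```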
* Plumbing the custom timeouts from the engine to the providers
* Plumbing the CustomTimeouts through to the engine and adding test to show this
* Change the provider proto to include individual timeouts
* Plumbing the CustomTimeouts from the engine through to the Provider RPC interface
* Change how the CustomTimeouts are sent across RPC
These errors were spotted in testing. We can now see that the timeout
information is arriving in the RegisterResourceRequest
```
req=&pulumirpc.RegisterResourceRequest{
Type: "aws:s3/bucket:Bucket",
Name: "my-bucket",
Parent: "urn:pulumi:dev::aws-vpc::pulumi:pulumi:Stack::aws-vpc-dev",
Custom: true,
Object: &structpb.Struct{},
Protect: false,
Dependencies: nil,
Provider: "",
PropertyDependencies: {},
DeleteBeforeReplace: false,
Version: "",
IgnoreChanges: nil,
AcceptSecrets: true,
AdditionalSecretOutputs: nil,
Aliases: nil,
CustomTimeouts: &pulumirpc.RegisterResourceRequest_CustomTimeouts{
Create: 300,
Update: 400,
Delete: 500,
XXX_NoUnkeyedLiteral: struct {}{},
XXX_unrecognized: nil,
XXX_sizecache: 0,
},
XXX_NoUnkeyedLiteral: struct {}{},
XXX_unrecognized: nil,
XXX_sizecache: 0,
}
```
* Changing the design to use strings
* CHANGELOG entry to include the CustomTimeouts work
* Changing custom timeouts to be passed around the engine as a converted value
We don't want to pass around strings - the user can provide one, but we
want the engine to be aware of the timeout in seconds as a float64
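With the string-based design, SDK usage looks like this (assuming @pulumi/aws for the resource; the values mirror the 300/400/500-second dump above):
```
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("my-bucket", {}, {
    customTimeouts: {
        create: "300s",
        update: "400s",
        delete: "500s",
    },
});
```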
A resource can be imported by setting the `import` property in the
resource options bag when instantiating a resource. In order to
successfully import a resource, its desired configuration (i.e. its
inputs) must not differ from its actual configuration (i.e. its state)
as calculated by the resource's provider.
There are a few interesting state transitions hiding here when importing
a resource:
1. No prior resource exists in the checkpoint file. In this case, the
resource is simply imported.
2. An external resource exists in the checkpoint file. In this case, the
resource is imported and the old external state is discarded.
3. A non-external resource exists in the checkpoint file and its ID is
different from the ID to import. In this case, the new resource is
imported and the old resource is deleted.
4. A non-external resource exists in the checkpoint file, but the ID is
the same as the ID to import. In this case, the import ID is ignored
and the resource is treated as it would be in all cases except for
changes that would replace the resource. In that case, the step
generator issues an error that indicates that the import ID should be
removed: were we to move forward with the replace, the new state of
the stack would fall under case (3), which is almost certainly not
what the user intends.
Fixes #1662.
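For example, adopting an existing bucket (assuming @pulumi/aws):
```
import * as aws from "@pulumi/aws";

// The inputs must match the actual state of the existing bucket, or the
// import will fail with a diff.
const bucket = new aws.s3.Bucket("imported", {
    bucket: "my-existing-bucket",
}, { import: "my-existing-bucket" });
```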
Adds a new resource option `aliases` which can be used to rename a resource. When making a breaking change to the name or type of a resource or component, the old name can be added to the list of `aliases` for a resource to ensure that existing resources will be migrated to the new name instead of being deleted and replaced with the new named resource.
There are two key places this change is implemented.
The first is the step generator in the engine. When computing whether there is an old version of a registered resource, we now take into account the aliases specified on the registered resource. That is, we first look up the resource by its new URN in the old state, and then by any aliases provided (in order). This can allow the resource to be matched as a (potential) update to an existing resource with a different URN.
The second is the core `Resource` constructor in the JavaScript (and soon Python) SDKs. This change ensures that when a parent resource is aliased, that all children implicitly inherit corresponding aliases. It is similar to how many other resource options are "inherited" implicitly from the parent.
Five specific scenarios are explicitly tested as part of this PR:
1. Renaming a resource
2. Adopting a resource into a component (as the owner of both component and consumption codebases)
3. Renaming a component instance (as the owner of the consumption codebase without changes to the component)
4. Changing the type of a component (as the owner of the component codebase without changes to the consumption codebase)
5. Combining (1) and (3) to make both changes to a resource at the same time
Fixes #2277.
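For example, scenario (1), renaming a resource without replacing it:
```
import * as aws from "@pulumi/aws";

// Previously `new aws.s3.Bucket("logs", ...)`; the alias maps the old
// URN to the new one so the existing bucket is kept rather than
// deleted and recreated.
const bucket = new aws.s3.Bucket("app-logs", {}, {
    aliases: [{ name: "logs" }],
});
```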
Adds a new ignoreChanges resource option that allows specifying a list of property names whose values will be ignored during updates. The property values will be used for Create, but will be ignored for purposes of updates, and as a result also cannot trigger replacements.
This is a feature of the Pulumi engine, not of the resource providers, so no new logic is needed in providers to support this feature. Instead, the engine simply replaces the values of input properties in the goal state with old inputs for properties marked as ignoreChanges.
Currently, only top level properties may be specified in ignoreChanges. In the future, this could be extended to support paths to nested properties (including into array elements) with a JSONPath/JMESPath syntax.
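For example (assuming @pulumi/aws):
```
import * as aws from "@pulumi/aws";

// `tags` is used on create, but subsequent updates ignore drift in it.
const bucket = new aws.s3.Bucket("data", {
    tags: { owner: "platform-team" },
}, { ignoreChanges: ["tags"] });
```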
In pursuit of pulumi/pulumi#2389, this commit adds the necessary changes
to the resource monitor protocol so that language hosts can communicate
exactly what version of a provider should be used when servicing an
Invoke, ReadResource, or RegisterResource. The expectation here is that,
if a language host provides a version, the engine MUST use EXACTLY that
version of a provider plugin in order to service the request.
These changes add a new flag to the various `ResourceOptions` types that
indicates that a resource should be deleted before it is replaced, even
if the provider does not require this behavior. The usual
delete-before-replace cascade semantics apply.
Fixes #1620.
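In the SDKs this is a per-resource option, e.g. (assuming @pulumi/aws):
```
import * as aws from "@pulumi/aws";

// Delete the old instance before creating its replacement, even if the
// provider would otherwise create-then-delete.
const server = new aws.ec2.Instance("web", {
    ami: "ami-0c55b159cbfafe1f0",
    instanceType: "t3.micro",
}, { deleteBeforeReplace: true });
```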
* Work around commonjs protoc bug
When compiling with the commonjs target, the protoc compiler still emits
references to Closure Compiler-isms that whack global state onto the
global object. This is particularly bad for us since we expect to be
able to make backwards-compatible changes to our Protobuf definitions
without breaking things, and this bug makes it impossible to do so.
To remedy the bug, this commit hacks the output of protoc (again) with
sed in order to avoid ever touching the global object. Everything still
works fine because the commonjs target (correctly) exports the protobuf
message types via the module system - it's just not writing to global
anymore.
* Fix status.proto
* Don't hack status.proto
This implements the new algorithm for deciding which resources must be
deleted due to a delete-before-replace operation.
We need to compute the set of resources that may be replaced by a
change to the resource under consideration. We do this by taking the
complete set of transitive dependents on the resource under
consideration and removing any resources that would not be replaced by
changes to their dependencies. We determine whether or not a resource
may be replaced by substituting unknowns for input properties that may
change due to deletion of the resources their value depends on and
calling the resource provider's Diff method.
This is perhaps clearer when described by example. Consider the
following dependency graph:
    A
  __|__
  B   C
  |  _|_
  D  E F
In this graph, all of B, C, D, E, and F transitively depend on A. It may
be the case, however, that changes to the specific properties of any of
those resources R that would occur if a resource on the path to A were
deleted and recreated may not cause R to be replaced. For example, the
edge from B to A may be a simple dependsOn edge such that a change to
A does not actually influence any of B's input properties. In that case,
neither B nor D would need to be deleted before A could be deleted.
In order to make the above algorithm a reality, the resource monitor
interface has been updated to include a map that associates an input
property key with the list of resources that input property depends on.
Older clients of the resource monitor will leave this map empty, in
which case all input properties will be treated as depending on all
dependencies of the resource. This is probably overly conservative, but
it is less conservative than what we currently implement, and is
certainly correct.
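A stripped-down sketch of the pruning idea (stand-ins for the engine's real types; `wouldReplace` abstracts the provider Diff call with unknowns substituted):
```
// Reverse dependency edges: resource -> resources that depend on it.
type Dependents = Map<string, string[]>;

// Compute which transitive dependents of `root` must also be deleted:
// walk dependents, but only continue through resources whose Diff (with
// unknowns substituted for affected inputs) reports a replacement.
function mustDelete(root: string, dependents: Dependents,
                    wouldReplace: (res: string) => boolean): Set<string> {
    const result = new Set<string>();
    const stack = [root];
    while (stack.length > 0) {
        const current = stack.pop()!;
        for (const dep of dependents.get(current) ?? []) {
            if (!result.has(dep) && wouldReplace(dep)) {
                result.add(dep);
                stack.push(dep);
            }
        }
    }
    return result;
}

// The example graph from above: B, C depend on A; D on B; E, F on C.
const graph: Dependents = new Map([
    ["A", ["B", "C"]], ["B", ["D"]], ["C", ["E", "F"]],
]);
// If B's edge to A is only dependsOn, neither B nor D is deleted.
mustDelete("A", graph, r => r !== "B"); // => Set { "C", "E", "F" }
```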
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior because it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
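Putting the three together (assuming @pulumi/aws):
```
import * as aws from "@pulumi/aws";

// An explicitly instantiated, separately configured provider resource.
const useast = new aws.Provider("useast", { region: "us-east-1" });

// A resource whose CRUD operations are managed by that provider instance.
const bucket = new aws.s3.Bucket("b", {}, { provider: useast });

// Invokes can name a provider too, via InvokeOptions.
const zone = aws.route53.getZone({ name: "example.com." }, { provider: useast });
```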
* Protobuf changes to record dependencies for read resources
* Add a number of tests for read resources, especially around replacement
* Place read resources in the snapshot with "external" bit set
Fixes pulumi/pulumi#1521. This commit introduces two new step ops: Read
and ReadReplacement. The engine generates Read and ReadReplacement steps
when servicing ReadResource RPC calls from the language host.
* Fix an omission of OpReadReplace from the step list
* Rebase against master
* Transition to use V2 Resources by default
* Add a semantic "relinquish" operation to the engine
If the engine observes that a resource is read and also that the
resource exists in the snapshot as a non-external resource, it will not
delete the resource if the IDs of the old and new resources match.
* Typo fix
* CR: add missing comments, DeserializeDeployment -> DeserializeDeploymentV2, ID check
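For reference, the static `get` method on resources is what drives `ReadResource` from the Node.JS SDK (assuming @pulumi/aws):
```
import * as aws from "@pulumi/aws";

// Reads the bucket's state from the provider and places it in the
// snapshot with the "external" bit set; Pulumi will not manage or
// delete it.
const existing = aws.s3.Bucket.get("shared-logs", "shared-logs-bucket-id");
export const arn = existing.arn;
```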