Add tokens.StackName (#14487)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds a new type `tokens.StackName`, a relatively strongly typed
container for a stack name. The only weakly typed aspect of it is that
Go always allows the "zero" value of a struct to be created, which for a
stack name is the empty string, and that is invalid. To prevent
unexpected empty strings from creeping in when working with stack names,
the `String()` method panics for zero-initialized stack names.
Apart from the zero value, all other instances of `StackName` are
created via `ParseStackName`, which returns a descriptive error if the
string is not valid.
This PR only updates "pkg/" to use this type. There are a number of
places in "sdk/" which could do with this type as well, but there's no
harm in doing a staggered rollout, and some parts of "sdk/" are
user-facing and will probably have to stay on the current `tokens.Name` and
`tokens.QName` types.
There are two places in the system where we panic on invalid stack
names, both in the http backend. This _should_ be fine, as we've had
long-standing validation that stacks created in the service have valid
stack names.
Just in case people have managed to introduce invalid stack names, the
`PULUMI_DISABLE_VALIDATION` environment variable turns off both the
validation _and_ the panicking for stack names. Users can use that to
temporarily disable the validation and keep working, but it should only
be seen as a temporary measure. If they have invalid names they should
rename them, or if they think the names should be valid, raise an issue
with us to change the validation code.
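
For illustration, the intended usage looks roughly like this (a minimal sketch based on the description above; the exact signatures are as described in this PR, not independently verified):

```go
// Parsed names carry the validation with them.
name, err := tokens.ParseStackName("dev")
if err != nil {
    // Invalid input yields a descriptive error rather than a panic.
    return err
}
fmt.Println(name.String()) // safe: a parsed name is never the zero value

var zero tokens.StackName
_ = zero.String() // panics: the zero value is the (invalid) empty string
```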
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
// Copyright 2016-2023, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
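
As a rough sketch of the shape being described (the names below are illustrative assumptions, not the exact interface that ended up in this package):

```go
// A hypothetical, simplified view of the source abstraction: something that
// can produce resource objects on demand for the planning engine.
type sourceSketch interface {
    // Iterate begins producing objects for a deployment.
    Iterate(ctx context.Context) (sourceIteratorSketch, error)
    Close() error
}

type sourceIteratorSketch interface {
    // Next blocks until the program (evalSource), nothing (nullSource), or a
    // pre-computed list (fixedSource) yields the next object; nil means done.
    Next() (interface{}, error)
}
```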

package deploy

import (
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior because it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
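
For illustration, a provider reference as described here is just the pairing of the provider's URN with its ID; a minimal sketch (the struct and method below are hypothetical, not the actual types in `pkg/resource/deploy/providers`):

```go
// providerRefSketch is a hypothetical provider reference: a URN plus an ID.
type providerRefSketch struct {
    URN resource.URN
    ID  resource.ID
}

// String renders the reference as a single token, URN then ID, e.g.
// "urn:pulumi:dev::proj::pulumi:providers:aws::default::<id>".
func (r providerRefSketch) String() string {
    return string(r.URN) + "::" + string(r.ID)
}
```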
### Engine Changes
First-class provider support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.js SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.

    "context"
    "encoding/json"
    "errors"
    "fmt"
[engine] Add support for source positions
These changes add support for passing source position information in
gRPC metadata and recording the source position that corresponds to a
resource registration in the statefile.
Enabling source position information in the resource model can provide
substantial benefits, including but not limited to:
- Better errors from the Pulumi CLI
- Go-to-definition for resources in state
- Editor integration for errors, etc. from `pulumi preview`
Source positions are (file, line) or (file, line, column) tuples
represented as URIs. The line and column are stored in the fragment
portion of the URI as "line(,column)?". The scheme of the URI and the
form of its path component depends on the context in which it is
generated or used:
- During an active update, the URI's scheme is `file` and paths are
absolute filesystem paths. This allows consumers to easily access
arbitrary files that are available on the host.
- In a statefile, the URI's scheme is `project` and paths are relative
to the project root. This allows consumers to resolve source positions
relative to the project file in different contexts irrespective of the
location of the project itself (e.g. given a project-relative path and
the URL of the project's root on GitHub, one can build a GitHub URL for
the source position).
During an update, source position information may be attached to gRPC
calls as "source-position" metadata. This allows arbitrary calls to be
associated with source positions without changes to their protobuf
payloads. Modifying the protobuf payloads is also a viable approach, but
is somewhat more invasive than attaching metadata, and requires changes
to every call signature.
Source positions should reflect the position in user code that initiated
a resource model operation (e.g. the source position passed with
`RegisterResource` for `pet` in the example above should be the source
position in `index.ts`, _not_ the source position in the Pulumi SDK). In
general, the Pulumi SDK should be able to infer the source position of
the resource registration, as the relationship between a resource
registration and its corresponding user code should be static per SDK.
Source positions in state files will be stored as a new `registeredAt`
property on each resource. This property is optional.
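
As a small illustration of the URI shape described above, using `net/url` and `strconv` (both already imported in this file); the helper itself is hypothetical:

```go
// buildSourcePosition renders scheme + path with "line(,column)?" in the
// fragment, e.g. "project://main/index.ts#3,14" or "file:///work/app/index.ts#3".
func buildSourcePosition(scheme, path string, line, column int) string {
    u := url.URL{Scheme: scheme, Path: path}
    u.Fragment = strconv.Itoa(line)
    if column > 0 {
        u.Fragment += "," + strconv.Itoa(column)
    }
    return u.String()
}
```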

    "net/url"
    "path/filepath"
RegisterProvider engine work (#16241)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support for a `RegisterProvider` method to the engine. It
allows an SDK process to send the information for a package (name,
version, URL, etc., and parameter in the future) and get back a UUID,
valid for that run of the engine, that can be used to look that
information up again.
That allows the SDK to send just the `provider` field in
`RegisterResourceRequest` instead of filling in `version`,
`pluginDownloadURL`, etc. (and, importantly, not having to fill in
`parameter` for parameterised providers, which could be a large amount
of data).
This doesn't yet update any of the SDKs to use this method. We can do
that piecemeal, but it will require core SDK and codegen changes for
each language.
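
A minimal sketch of the idea (the type and method below are illustrative assumptions, not the engine's actual implementation):

```go
// registerProviderSketch hands out a per-engine-run token (a UUID) for each
// package the SDK registers, so later requests can refer to the package by
// token instead of repeating version/URL/parameter.
type registerProviderSketch struct {
    mu        sync.Mutex
    providers map[string]workspace.PluginSpec
}

func (r *registerProviderSketch) RegisterProvider(spec workspace.PluginSpec) string {
    r.mu.Lock()
    defer r.mu.Unlock()
    if r.providers == nil {
        r.providers = map[string]workspace.PluginSpec{}
    }
    token := uuid.NewString()
    r.providers[token] = spec
    return token
}
```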
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->

    "reflect"
    "strconv"
    "strings"
    "sync"
    "time"

    "github.com/blang/semver"
    "github.com/google/uuid"
    "github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc"
    opentracing "github.com/opentracing/opentracing-go"
    "google.golang.org/protobuf/types/known/emptypb"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
Engine support for remote transforms (#15290)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support to the engine for "remote transformations".
A transform is "remote" because it is invoked via the engine on
receiving a resource registration, rather than being run locally in
process before sending a resource registration. These transforms can
also span multiple process boundaries, e.g. a transform function in a
user program, then a transform function in a component library, both
running for a resource registered by another component library.
The underlying new feature here is the idea of a `Callback`. The
expectation is that we're going to use callbacks for multiple features,
so these are _not_ defined in terms of transformations. A callback is an
untyped byte array (usually a protobuf message), plus an address that
defines which server should be invoked to do the callback, and a token
to identify it.
A language SDK can start up and serve a `Callbacks` service, keep a
mapping of tokens to in-process functions (currently just using UUIDs
for this), and then pass that service address and token to the engine to
be invoked later on.
The engine uses these callbacks to track transformation callbacks per
resource. On each new resource registration it invokes every relevant
callback with the resource's properties and options; the returned
properties and options are then passed to the next relevant transform
callback, until all have been called and the engine has the final
resource state and options to use.
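
A rough sketch of what such a callback descriptor carries (field names are illustrative assumptions, not the actual protobuf definition):

```go
// callbackSketch is a hypothetical descriptor for a remote callback: which
// server to call (the SDK's Callbacks service), which registered function to
// run (the token), and an opaque payload for the feature using it.
type callbackSketch struct {
    Target  string // address of the SDK's Callbacks gRPC server
    Token   string // identifies the in-process function, e.g. a UUID
    Request []byte // untyped payload, usually a serialized protobuf message
}
```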
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->

    "google.golang.org/grpc/credentials/insecure"
Maintain alias compat for older Node.js SDKs on new CLIs
This change updates the engine to detect if a `RegisterResource` request
is coming from an older Node.js SDK that is using incorrect alias specs
and, if so, transforms the aliases to be correct. This allows us to
maintain compatibility for users who have upgraded their CLI but are
still using an older version of the Node.js SDK with incorrect alias
specs.
We detect if the request is from a Node.js SDK by looking at the gRPC
request's metadata headers, specifically looking at the "pulumi-runtime"
and "user-agent" headers.
First, if the request has a "pulumi-runtime" header with a value of
"nodejs", we know it's coming from the Node.js plugin. The Node.js
language plugin proxies gRPC calls from the Node.js SDK to the resource
monitor and the proxy now sets the "pulumi-runtime" header to "nodejs"
for `RegisterResource` calls.
Second, if the request has a "user-agent" header that starts with
"grpc-node-js/", we know it's coming from the Node.js SDK. This is the
case for inline programs in the automation API, which connects directly
to the resource monitor, rather than going through the language plugin's
proxy.
We can't just look at "user-agent", because in the proxy case it will
have a Go-specific "user-agent".
Updated Node.js SDKs set a new `aliasSpecs` field on the
`RegisterResource` request, which indicates that the alias specs are
correct, and no transforms are needed.
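
A minimal sketch of that header check (hedged: the helper below illustrates the detection described here, it is not the engine's exact code):

```go
// isNodeSDKSketch reports whether a request came from the Node.js SDK, either
// via the language plugin's proxy ("pulumi-runtime: nodejs") or directly from
// the Node.js gRPC client ("user-agent: grpc-node-js/...").
func isNodeSDKSketch(ctx context.Context) bool {
    md, ok := metadata.FromIncomingContext(ctx)
    if !ok {
        return false
    }
    for _, rt := range md.Get("pulumi-runtime") {
        if rt == "nodejs" {
            return true
        }
    }
    for _, ua := range md.Get("user-agent") {
        if strings.HasPrefix(ua, "grpc-node-js/") {
            return true
        }
    }
    return false
}
```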

    "google.golang.org/grpc/metadata"
    "google.golang.org/protobuf/proto"

    "github.com/pulumi/pulumi/pkg/v3/codegen/schema"
    "github.com/pulumi/pulumi/pkg/v3/resource/deploy/providers"
    interceptors "github.com/pulumi/pulumi/pkg/v3/util/rpcdebug"
    "github.com/pulumi/pulumi/sdk/v3/go/common/diag"
    "github.com/pulumi/pulumi/sdk/v3/go/common/env"
    "github.com/pulumi/pulumi/sdk/v3/go/common/resource"
    "github.com/pulumi/pulumi/sdk/v3/go/common/resource/config"
    "github.com/pulumi/pulumi/sdk/v3/go/common/resource/plugin"
    "github.com/pulumi/pulumi/sdk/v3/go/common/slice"
    "github.com/pulumi/pulumi/sdk/v3/go/common/tokens"
    "github.com/pulumi/pulumi/sdk/v3/go/common/util/contract"
    "github.com/pulumi/pulumi/sdk/v3/go/common/util/logging"
    "github.com/pulumi/pulumi/sdk/v3/go/common/util/result"
    "github.com/pulumi/pulumi/sdk/v3/go/common/util/rpcutil"
    "github.com/pulumi/pulumi/sdk/v3/go/common/util/rpcutil/rpcerror"
    "github.com/pulumi/pulumi/sdk/v3/go/common/workspace"
    pulumirpc "github.com/pulumi/pulumi/sdk/v3/proto/go"

    mapset "github.com/deckarep/golang-set/v2"
)

// EvalRunInfo provides information required to execute and deploy resources within a package.
type EvalRunInfo struct {
    // the package metadata.
    Proj *workspace.Project `json:"proj" yaml:"proj"`
    // the package's working directory.
    Pwd string `json:"pwd" yaml:"pwd"`
    // the path to the program.
    Program string `json:"program" yaml:"program"`
    // the path to the project's directory.
    ProjectRoot string `json:"projectRoot,omitempty" yaml:"projectRoot,omitempty"`
    // any arguments to pass to the package.
    Args []string `json:"args,omitempty" yaml:"args,omitempty"`
    // the target being deployed into.
    Target *Target `json:"target,omitempty" yaml:"target,omitempty"`
}
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!

// EvalSourceOptions provides options for configuring an evaluation source.
type EvalSourceOptions struct {
    // true if the evaluation is producing resources for a dry-run/preview.
    DryRun bool
    // the degree of parallelism for resource operations (<=1 for serial).
    Parallel int32
    // true to disable resource reference support.
    DisableResourceReferences bool
    // true to disable output value support.
    DisableOutputValues bool
    // AttachDebugger to launch the language host in debug mode.
    AttachDebugger bool
}
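
For illustration, constructing options for a preview with bounded parallelism might look like this (a sketch; the values are arbitrary):

```go
opts := EvalSourceOptions{
    DryRun:   true, // producing resources for a preview, not a real update
    Parallel: 8,    // allow up to 8 concurrent resource operations
}
```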
|
|
|
|
|
2017-08-30 01:24:12 +00:00
|
|
|
// NewEvalSource returns a planning source that fetches resources by evaluating a package with a set of args and
|
|
|
|
// a confgiuration map. This evaluation is performed using the given plugin context and may optionally use the
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
// given plugin host (or the default, if this is nil). Note that closing the eval source also closes the host.
|
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!
2024-06-11 13:37:57 +00:00
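The options consolidation described above is the standard cure for "boolean blindness": rather than threading loose flags through every phase, one struct travels with the deployment. A minimal, generic sketch of the pattern follows; the field and function names here are hypothetical, not the engine's actual API.

```go
package main

import "fmt"

// deployOptions stands in for a consolidated options struct such as EvalSourceOptions;
// the fields are illustrative only.
type deployOptions struct {
	DryRun   bool // preview-only: compute a plan without mutating resources
	Parallel int  // maximum number of concurrent resource operations
}

// Before: positional flags whose meaning the caller must remember.
func runDeploymentOld(dryRun, continueOnError bool, parallel int) {
	fmt.Println("old:", dryRun, continueOnError, parallel)
}

// After: one options value gives source evaluation, step generation and step
// execution the same view of the configuration.
func runDeploymentNew(opts deployOptions) {
	fmt.Println("new:", opts.DryRun, opts.Parallel)
}

func main() {
	runDeploymentOld(true, false, 10) // which bool was which again?
	runDeploymentNew(deployOptions{DryRun: true, Parallel: 10})
}
```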
|
|
|
func NewEvalSource(
|
|
|
|
plugctx *plugin.Context,
|
|
|
|
runinfo *EvalRunInfo,
|
|
|
|
defaultProviderInfo map[tokens.Package]workspace.PluginSpec,
|
|
|
|
opts EvalSourceOptions,
|
2023-03-03 16:36:39 +00:00
|
|
|
) Source {
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
return &evalSource{
|
2021-12-17 22:52:01 +00:00
|
|
|
plugctx: plugctx,
|
|
|
|
runinfo: runinfo,
|
|
|
|
defaultProviderInfo: defaultProviderInfo,
|
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!
2024-06-11 13:37:57 +00:00
|
|
|
opts: opts,
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
}
|
|
|
|
}
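For orientation, here is a rough sketch of how a caller might wire the constructor above into a deployment. It assumes the types and imports already present in this file (it is not itself part of the engine), and the helper name is made up for illustration.

```go
// buildAndIterate is an illustrative helper showing the intended call sequence for an
// eval source. The caller is assumed to have already built the plugin context, run
// info, default provider map, options and a ProviderSource.
func buildAndIterate(
	ctx context.Context,
	plugctx *plugin.Context,
	runinfo *EvalRunInfo,
	defaults map[tokens.Package]workspace.PluginSpec,
	opts EvalSourceOptions,
	providers ProviderSource,
) (SourceIterator, error) {
	src := NewEvalSource(plugctx, runinfo, defaults, opts)
	// Closing the eval source also closes the plugin host, per the comment above, so
	// only close it here if iteration cannot begin.
	iter, err := src.Iterate(ctx, providers)
	if err != nil {
		_ = src.Close()
		return nil, err
	}
	return iter, nil
}
```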
|
|
|
|
|
|
|
|
type evalSource struct {
|
2021-12-17 22:52:01 +00:00
|
|
|
plugctx *plugin.Context // the plugin context.
|
|
|
|
runinfo *EvalRunInfo // the directives to use when running the program.
|
2022-08-26 14:51:14 +00:00
|
|
|
defaultProviderInfo map[tokens.Package]workspace.PluginSpec // the default provider versions for this source.
|
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!
2024-06-11 13:37:57 +00:00
|
|
|
opts EvalSourceOptions // options for the evaluation source.
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
}
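The "Make more progress on the new deployment model" commit message above also mentions a nullSource that simply refuses to create new objects. Purely to illustrate the shape of a Source, a do-nothing variant could look roughly like the sketch below; it assumes the Source and SourceIterator contracts consist of the methods implemented in this file, and it is not the package's actual nullSource.

```go
// exampleNullSource is an illustrative source that produces no resource events,
// which is what you want for isolated tests or for simulating an empty program.
type exampleNullSource struct {
	project tokens.PackageName
	stack   tokens.StackName
}

func (s *exampleNullSource) Close() error                { return nil }
func (s *exampleNullSource) Project() tokens.PackageName { return s.project }
func (s *exampleNullSource) Stack() tokens.StackName     { return s.stack }
func (s *exampleNullSource) Info() interface{}           { return nil }

func (s *exampleNullSource) Iterate(ctx context.Context, providers ProviderSource) (SourceIterator, error) {
	return &exampleNullIterator{}, nil
}

// exampleNullIterator reports completion immediately: Next returns nil, nil.
type exampleNullIterator struct{}

func (it *exampleNullIterator) Close() error                           { return nil }
func (it *exampleNullIterator) ResourceMonitor() SourceResourceMonitor { return nil }
func (it *exampleNullIterator) Next() (SourceEvent, error)             { return nil, nil }
```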
|
|
|
|
|
|
|
|
func (src *evalSource) Close() error {
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2018-04-05 16:48:09 +00:00
|
|
|
// Project is the name of the project being run by this evaluation source.
|
2018-02-14 21:56:16 +00:00
|
|
|
func (src *evalSource) Project() tokens.PackageName {
|
|
|
|
return src.runinfo.Proj.Name
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
}
|
|
|
|
|
2020-07-09 14:19:12 +00:00
|
|
|
// Stack is the name of the stack being targeted by this evaluation source.
|
Add tokens.StackName (#14487)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds a new type `tokens.StackName` which is a relatively strongly
typed container for a stack name. The only weakly typed aspect of it is
Go will always allow the "zero" value to be created for a struct, which
for a stack name is the empty string which is invalid. To prevent
introducing unexpected empty strings when working with stack names the
`String()` method will panic for zero initialized stack names.
Apart from the zero value, all other instances of `StackName` are via
`ParseStackName` which returns a descriptive error if the string is not
valid.
This PR only updates "pkg/" to use this type. There are a number of
places in "sdk/" which could do with this type as well, but there's no
harm in doing a staggered roll out, and some parts of "sdk/" are user
facing and will probably have to stay on the current `tokens.Name` and
`tokens.QName` types.
There are two places in the system where we panic on invalid stack
names, both in the http backend. This _should_ be fine as we've had long
standing validation that stacks created in the service are valid stack
names.
Just in case people have managed to introduce invalid stack names, there
is the `PULUMI_DISABLE_VALIDATION` environment variable which will turn
off the validation _and_ panicking for stack names. Users can use that to
temporarily disable the validation and continue working, but it should
only be seen as a temporary measure. If they have invalid names they
should rename them, or if they think they should be valid raise an issue
with us to change the validation code.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2023-11-15 07:44:54 +00:00
|
|
|
func (src *evalSource) Stack() tokens.StackName {
|
2020-07-09 14:19:12 +00:00
|
|
|
return src.runinfo.Target.Name
|
|
|
|
}
|
|
|
|
|
|
|
|
func (src *evalSource) Info() interface{} { return src.runinfo }
|
|
|
|
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
// Iterate will spawn an evaluator coroutine and prepare to interact with it on subsequent calls to Next.
|
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!
2024-06-11 13:37:57 +00:00
|
|
|
func (src *evalSource) Iterate(ctx context.Context, providers ProviderSource) (SourceIterator, error) {
|
2019-09-16 21:16:43 +00:00
|
|
|
tracingSpan := opentracing.SpanFromContext(ctx)
|
2018-08-08 20:45:48 +00:00
|
|
|
|
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
|
|
|
// Decrypt the configuration.
|
|
|
|
config, err := src.runinfo.Target.Config.Decrypt(src.runinfo.Target.Decrypter)
|
|
|
|
if err != nil {
|
2023-09-20 14:34:24 +00:00
|
|
|
return nil, fmt.Errorf("failed to decrypt config: %w", err)
|
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
|
|
|
}
|
|
|
|
|
2021-05-18 16:48:08 +00:00
|
|
|
// Keep track of any config keys that have secure values.
|
|
|
|
configSecretKeys := src.runinfo.Target.Config.SecureKeys()
|
|
|
|
|
2023-11-16 17:55:14 +00:00
|
|
|
configMap, err := src.runinfo.Target.Config.AsDecryptedPropertyMap(ctx, src.runinfo.Target.Decrypter)
|
2023-10-20 10:44:16 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, fmt.Errorf("failed to convert config to map: %w", err)
|
|
|
|
}
|
|
|
|
|
2017-08-30 01:24:12 +00:00
|
|
|
// First, fire up a resource monitor that will watch for and record resource creation.
|
2017-11-29 19:27:32 +00:00
|
|
|
regChan := make(chan *registerResourceEvent)
|
|
|
|
regOutChan := make(chan *registerResourceOutputsEvent)
|
2018-08-03 21:06:00 +00:00
|
|
|
regReadChan := make(chan *readResourceEvent)
|
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!
2024-06-11 13:37:57 +00:00
|
|
|
|
2021-06-24 22:38:01 +00:00
|
|
|
mon, err := newResourceMonitor(
|
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!
2024-06-11 13:37:57 +00:00
|
|
|
src, providers, regChan, regOutChan, regReadChan, config, configSecretKeys, tracingSpan)
|
2017-08-30 01:24:12 +00:00
|
|
|
if err != nil {
|
2023-09-20 14:34:24 +00:00
|
|
|
return nil, fmt.Errorf("failed to start resource monitor: %w", err)
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
}
|
|
|
|
|
2024-08-29 09:14:39 +00:00
|
|
|
// Also start up a schema loader for the language runtime to use to fetch schema information.
|
|
|
|
loaderRegistration := schema.LoaderRegistration(
|
|
|
|
schema.NewLoaderServer(schema.NewPluginLoader(src.plugctx.Host)))
|
|
|
|
loaderServer, err := plugin.NewServer(src.plugctx, loaderRegistration)
|
|
|
|
if err != nil {
|
|
|
|
return nil, fmt.Errorf("failed to start loader server: %w", err)
|
|
|
|
}
|
|
|
|
|
2017-08-30 01:24:12 +00:00
|
|
|
// Create a new iterator with appropriate channels, and gear up to go!
|
|
|
|
iter := &evalSourceIterator{
|
2024-08-29 09:14:39 +00:00
|
|
|
loaderServer: loaderServer,
|
|
|
|
mon: mon,
|
|
|
|
src: src,
|
|
|
|
regChan: regChan,
|
|
|
|
regOutChan: regOutChan,
|
|
|
|
regReadChan: regReadChan,
|
|
|
|
finChan: make(chan error),
|
2017-06-22 05:02:57 +00:00
|
|
|
}
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
|
2017-08-30 01:24:12 +00:00
|
|
|
// Now invoke Run in a goroutine. All subsequent resource creation events will come in over the gRPC channel,
|
|
|
|
// and we will pump them through the channel. If the Run call ultimately fails, we need to propagate the error.
|
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!
2024-06-11 13:37:57 +00:00
|
|
|
iter.forkRun(config, configSecretKeys, configMap)
|
2017-08-30 01:24:12 +00:00
|
|
|
|
|
|
|
// Finally, return the fresh iterator that the caller can use to take things from here.
|
|
|
|
return iter, nil
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
}
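Once Iterate has returned, the caller drives the program by repeatedly calling Next on the iterator until it signals completion. Below is a minimal sketch of such a drain loop, assuming the types in this file; the event handling is a placeholder rather than the engine's actual step generator.

```go
// drainSourceEvents consumes a SourceIterator until the program finishes (Next
// returns nil, nil) or fails with an error.
func drainSourceEvents(iter SourceIterator) error {
	// Always close the iterator so the resource monitor is cancelled.
	defer func() { _ = iter.Close() }()
	for {
		event, err := iter.Next()
		if err != nil {
			return err // the program, or the monitor, failed
		}
		if event == nil {
			return nil // the program has finished registering resources
		}
		// In the real engine this is where step generation would inspect the event
		// (a resource registration, an outputs registration, or a read) and act on it.
		_ = event
	}
}
```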
|
|
|
|
|
|
|
|
type evalSourceIterator struct {
|
2024-08-29 09:14:39 +00:00
|
|
|
loaderServer *plugin.GrpcServer // the grpc server for the schema loader.
|
|
|
|
mon SourceResourceMonitor // the resource monitor, per iterator.
|
|
|
|
src *evalSource // the owning eval source object.
|
|
|
|
regChan chan *registerResourceEvent // the channel that contains resource registrations.
|
|
|
|
regOutChan chan *registerResourceOutputsEvent // the channel that contains resource completions.
|
|
|
|
regReadChan chan *readResourceEvent // the channel that contains read resource requests.
|
|
|
|
finChan chan error // the channel that communicates completion.
|
|
|
|
done bool // set to true when the evaluation is done.
|
2024-10-01 12:41:29 +00:00
|
|
|
aborted bool // set to true when the iterator is aborted.
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
func (iter *evalSourceIterator) Close() error {
|
2017-09-08 22:11:09 +00:00
|
|
|
// Cancel the monitor and reclaim any associated resources.
|
|
|
|
return iter.mon.Cancel()
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
|
|
|
}
|
|
|
|
|
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
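The remote-component commit message above lists what flows into and out of a Construct call. The following is only a schematic rendering of that description with hypothetical type and field names; it is not the provider interface as actually declared, and it assumes the resource and tokens packages imported elsewhere in this file.

```go
// constructRequest and constructResult sketch the data described in the commit
// message: the component's type, name, inputs and input dependencies go in; the URN,
// resolved outputs and output dependencies come back. All names are illustrative.
type constructRequest struct {
	Type              tokens.Type
	Name              string
	Inputs            resource.PropertyMap
	InputDependencies map[resource.PropertyKey][]resource.URN
}

type constructResult struct {
	URN                resource.URN
	Outputs            resource.PropertyMap
	OutputDependencies map[resource.PropertyKey][]resource.URN
}
```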
|
|
|
func (iter *evalSourceIterator) ResourceMonitor() SourceResourceMonitor {
|
|
|
|
return iter.mon
|
|
|
|
}
|
|
|
|
|
2023-09-12 15:01:16 +00:00
|
|
|
func (iter *evalSourceIterator) Next() (SourceEvent, error) {
|
2024-10-01 12:41:29 +00:00
|
|
|
// if the iterator is aborted, return an error.
|
|
|
|
if iter.aborted {
|
|
|
|
return nil, result.BailErrorf("EvalSourceIterator aborted")
|
|
|
|
}
|
2017-08-30 01:24:12 +00:00
|
|
|
// If we are done, quit.
|
|
|
|
if iter.done {
|
|
|
|
return nil, nil
|
|
|
|
}
|
Implement `get` functions on all resources
This change implements the `get` function for resources. Per pulumi/lumi#83,
this allows Lumi scripts to actually read from the target environment.
For example, we can now look up a SecurityGroup from its ARN:
let group = aws.ec2.SecurityGroup.get(
"arn:aws:ec2:us-west-2:153052954103:security-group:sg-02150d79");
The returned object is a fully functional resource object. So, we can then
link it up with an EC2 instance, for example, in the usual ways:
let instance = new aws.ec2.Instance(..., {
securityGroups: [ group ],
});
This didn't require any changes to the RPC or provider model, since we
already implement the Get function.
There are a few loose ends; two are short term:
1) URNs are not rehydrated.
2) Query is not yet implemented.
One is mid-term:
3) We probably want a URN-based lookup function. But we will likely
wait until we tackle pulumi/lumi#109 before adding this.
And one is long term (and subtle):
4) These amount to I/O and are not repeatable! A change in the target
environment may cause a script to generate a different plan
intermittently. Most likely we want to apply a different kind of
deployment "policy" for such scripts. These are inching towards the
scripting model of pulumi/lumi#121, which is an entirely different
beast than the repeatable immutable infrastructure deployments.
Finally, it is worth noting that with this, we have some of the fundamental
underpinnings required to finally tackle "inference" (pulumi/lumi#142).
2017-06-20 00:24:00 +00:00
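On the engine side, a `get` ultimately becomes a read against the provider: hand over the type and ID, get back the live state as a property map. A minimal sketch of that shape, using a stand-in provider interface rather than the real plugin types:
```go
package sketch

import "fmt"

// reader is a stand-in for the provider plugin's read capability.
type reader interface {
	Read(typ, id string) (map[string]any, error)
}

// getResource sketches what SecurityGroup.get boils down to: ask the provider
// for the current state of an existing resource identified by its ID.
func getResource(p reader, typ, id string) (map[string]any, error) {
	props, err := p.Read(typ, id)
	if err != nil {
		return nil, fmt.Errorf("reading %s %q: %w", typ, id, err)
	}
	return props, nil
}
```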
|
|
|
|
2017-08-30 01:24:12 +00:00
|
|
|
// Await the program to compute some more state and then inspect what it has to say.
|
|
|
|
select {
|
allow component providers to return more detailed error messages (#17306)
This is based on work from @lunaris and @EronWright, to allow us to
return better error messages from component providers.
The basic idea here is to allow attaching more error details on the RPC
layer, and turn errors into diagnostics *in the engine*, to avoid the
need to fix the error up in every SDK and pretty-print it.
GRPC allows us to attach error details to the returned error to help us
with that. These details can be error details such as specified in the
`error_details` proto that comes with grpc, but can also take any other
shape. Currently we only support pretty printing select types from that
proto, but this can be extended in the future.
On the implementation side, Go has a pretty nice API to create these
errors, which we can just let users use directly. For Python and NodeJS
the API is not so nice for this, so we need to encapsulate the error
into a special exception, and then turn that into a proper GRPC message,
using the magic `grpc-status-details-bin` metadata field.
For both Python and NodeJS I've only implemented one class for errors so
far, as I'm interested in some feedback on the API design first. I'm
wondering if we should just let users specify the fields as tuples, and
then add them to the `error_details` proto? Very open to ideas here.
Closes: https://github.com/pulumi/pulumi/pull/16132
---------
Co-authored-by: Will Jones <will@sacharissa.co.uk>
Co-authored-by: Eron Wright <eron@pulumi.com>
2024-09-25 15:38:36 +00:00
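For the Go side, the pattern referred to above is the standard gRPC status-details API; a small, self-contained example (the code, reason, and field values are only illustrative):
```go
package main

import (
	"fmt"

	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// newDetailedError builds a gRPC status and attaches structured details that
// a receiver can unpack into a diagnostic instead of a raw error string.
func newDetailedError(field, problem string) error {
	st := status.New(codes.InvalidArgument, "component construction failed")
	withDetails, err := st.WithDetails(&errdetails.BadRequest{
		FieldViolations: []*errdetails.BadRequest_FieldViolation{
			{Field: field, Description: problem},
		},
	})
	if err != nil {
		// Attaching details can fail; fall back to the bare status rather
		// than losing the error entirely.
		return st.Err()
	}
	return withDetails.Err()
}

func main() {
	err := newDetailedError("vpcId", "must be set when subnets are provided")
	// A receiver recovers the structured details for pretty printing.
	for _, detail := range status.Convert(err).Details() {
		fmt.Printf("%T: %v\n", detail, detail)
	}
}
```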
|
|
|
case <-iter.mon.AbortChan():
|
2024-10-01 12:41:29 +00:00
|
|
|
iter.aborted = true
|
2024-09-25 15:38:36 +00:00
|
|
|
return nil, result.BailErrorf("EvalSourceIterator aborted")
|
2017-11-21 01:38:09 +00:00
|
|
|
case reg := <-iter.regChan:
|
2023-02-17 20:05:48 +00:00
|
|
|
contract.Assertf(reg != nil, "received a nil registerResourceEvent")
|
2017-11-21 01:38:09 +00:00
|
|
|
goal := reg.Goal()
|
2018-05-15 22:28:00 +00:00
|
|
|
logging.V(5).Infof("EvalSourceIterator produced a registration: t=%v,name=%v,#props=%v",
|
2017-11-21 01:38:09 +00:00
|
|
|
goal.Type, goal.Name, len(goal.Properties))
|
|
|
|
return reg, nil
|
2017-11-29 19:27:32 +00:00
|
|
|
case regOut := <-iter.regOutChan:
|
2023-02-17 20:05:48 +00:00
|
|
|
contract.Assertf(regOut != nil, "received a nil registerResourceOutputsEvent")
|
2018-05-15 22:28:00 +00:00
|
|
|
logging.V(5).Infof("EvalSourceIterator produced a completion: urn=%v,#outs=%v",
|
2017-11-29 19:27:32 +00:00
|
|
|
regOut.URN(), len(regOut.Outputs()))
|
|
|
|
return regOut, nil
|
2018-08-03 21:06:00 +00:00
|
|
|
case read := <-iter.regReadChan:
|
2023-02-17 20:05:48 +00:00
|
|
|
contract.Assertf(read != nil, "received a nil readResourceEvent")
|
2018-08-03 21:06:00 +00:00
|
|
|
logging.V(5).Infoln("EvalSourceIterator produced a read")
|
|
|
|
return read, nil
|
2023-09-12 15:01:16 +00:00
|
|
|
case err := <-iter.finChan:
|
2017-08-30 01:24:12 +00:00
|
|
|
// If we are finished, we can safely exit. The contract with the language provider is that this implies
|
|
|
|
// that the language runtime has exited and so calling Close on the plugin is fine.
|
|
|
|
iter.done = true
|
2023-09-12 15:01:16 +00:00
|
|
|
if err != nil {
|
|
|
|
if result.IsBail(err) {
|
2019-03-19 23:21:50 +00:00
|
|
|
logging.V(5).Infof("EvalSourceIterator ended with bail.")
|
|
|
|
} else {
|
2023-09-12 15:01:16 +00:00
|
|
|
logging.V(5).Infof("EvalSourceIterator ended with an error: %v", err)
|
2019-03-19 23:21:50 +00:00
|
|
|
}
|
2017-06-20 00:24:00 +00:00
|
|
|
}
|
2023-09-12 15:01:16 +00:00
|
|
|
return nil, err
|
2017-06-20 00:24:00 +00:00
|
|
|
}
|
2017-06-10 18:50:47 +00:00
|
|
|
}
|
|
|
|
|
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!
2024-06-11 13:37:57 +00:00
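Sketched in miniature, the shape this change moves toward looks roughly like the following; the field sets are illustrative and much smaller than the real structs:
```go
// EvalSourceOptions is the narrow option set source evaluation needs,
// rather than the engine's full options bag.
type EvalSourceOptions struct {
	DryRun         bool // preview vs. update, folded into options as noted above
	Parallel       int
	AttachDebugger bool
}

// deployment carries one immutable copy of the options for the whole run, so
// step generation and step execution always see the same configuration.
type deployment struct {
	opts   EvalSourceOptions
	events deploymentEvents // handlers attached separately, not via options
}

// deploymentEvents stands in for the event-handler hooks split out of the
// options object.
type deploymentEvents interface {
	OnStepStart(urn string)
}
```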
|
|
|
// forkRun performs the evaluation from a distinct goroutine. This function blocks until it's our turn to go.
|
2023-10-20 10:44:16 +00:00
|
|
|
func (iter *evalSourceIterator) forkRun(
|
2024-06-11 13:37:57 +00:00
|
|
|
config map[config.Key]string,
|
|
|
|
configSecretKeys []config.Key,
|
|
|
|
configPropertyMap resource.PropertyMap,
|
2023-10-20 10:44:16 +00:00
|
|
|
) {
|
General prep work for refresh
This change includes a bunch of refactorings I made in prep for
doing refresh (first, the command, see pulumi/pulumi#1081):
* The primary change is to change the way the engine's core update
functionality works with respect to deploy.Source. This is the
way we can plug in new sources of resource information during
planning (and, soon, diffing). The way I intend to model refresh
is by having a new kind of source, deploy.RefreshSource, which
will let us do virtually everything about an update/diff the same
way with refreshes, which avoids otherwise duplicative effort.
This includes changing the planOptions (nee deployOptions) to
take a new SourceFunc callback, which is responsible for creating
a source specific to the kind of plan being requested.
Preview, Update, and Destroy now are primarily differentiated by
the kind of deploy.Source that they return, rather than sprinkling
things like `if Destroying` throughout. This tidies up some logic
and, more importantly, gives us precisely the refresh hook we need.
* Originally, we used the deploy.NullSource for Destroy operations.
This simply returns nothing, which is how Destroy works. For some
reason, we were no longer doing this, and instead had some
`if Destroying` cases sprinkled throughout the deploy.EvalSource.
I think this is a vestige of some old way we did configuration, at
least judging by a comment, which is apparently no longer relevant.
* Move diff and diff-printing logic within the engine into its own
pkg/engine/diff.go file, to prepare for upcoming work.
* I keep noticing benign diffs anytime I regenerate protobufs. I
suspect this is because we're also on different versions. I changed
generate.sh to also dump the version into grpc_version.txt. At
least we can understand where the diffs are coming from, decide
whether to take them (i.e., a newer version), and ensure that as
a team we are monotonically increasing, and not going backwards.
* I also tidied up some tiny things I noticed while in there, like
comments, incorrect types, lint suppressions, and so on.
2018-03-28 14:45:23 +00:00
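The SourceFunc idea reduces to something like the following sketch, where each operation differs only in the Source it builds (names are illustrative, not the real engine types):
```go
// Source stands in for deploy.Source: anything the planner can iterate to
// receive resource events from.
type Source interface {
	Iterate() (SourceIterator, error)
}

type SourceIterator interface {
	Next() (SourceEvent, error)
}

// sourceFunc is the callback the plan options would carry: each operation
// (preview, update, destroy, refresh) constructs the Source appropriate to it.
type sourceFunc func() (Source, error)
```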
|
|
|
// Fire up the goroutine to make the RPC invocation against the language runtime. As this executes, calls
|
|
|
|
// to queue things up in the resource channel will occur, and we will serve them concurrently.
|
|
|
|
go func() {
|
|
|
|
// Next, launch the language plugin.
|
2023-09-12 15:01:16 +00:00
|
|
|
run := func() error {
|
2018-11-01 15:28:11 +00:00
|
|
|
rt := iter.src.runinfo.Proj.Runtime.Name()
|
2024-01-25 23:28:58 +00:00
|
|
|
|
2022-10-04 08:58:01 +00:00
|
|
|
rtopts := iter.src.runinfo.Proj.Runtime.Options()
|
2024-01-25 23:28:58 +00:00
|
|
|
programInfo := plugin.NewProgramInfo(
|
|
|
|
/* rootDirectory */ iter.src.runinfo.ProjectRoot,
|
|
|
|
/* programDirectory */ iter.src.runinfo.Pwd,
|
|
|
|
/* entryPoint */ iter.src.runinfo.Program,
|
|
|
|
/* options */ rtopts)
|
|
|
|
|
|
|
|
langhost, err := iter.src.plugctx.Host.LanguageRuntime(rt, programInfo)
|
2018-03-28 14:45:23 +00:00
|
|
|
if err != nil {
|
2023-09-12 15:01:16 +00:00
|
|
|
return fmt.Errorf("failed to launch language host %s: %w", rt, err)
|
2018-03-28 14:45:23 +00:00
|
|
|
}
|
|
|
|
contract.Assertf(langhost != nil, "expected non-nil language host %s", rt)
|
|
|
|
|
|
|
|
// Now run the actual program.
|
2019-03-20 18:54:32 +00:00
|
|
|
progerr, bail, err := langhost.Run(plugin.RunInfo{
|
2023-10-20 10:44:16 +00:00
|
|
|
MonitorAddress: iter.mon.Address(),
|
2023-11-15 07:44:54 +00:00
|
|
|
Stack: iter.src.runinfo.Target.Name.String(),
|
2023-10-20 10:44:16 +00:00
|
|
|
Project: string(iter.src.runinfo.Proj.Name),
|
|
|
|
Pwd: iter.src.runinfo.Pwd,
|
|
|
|
Args: iter.src.runinfo.Args,
|
|
|
|
Config: config,
|
|
|
|
ConfigSecretKeys: configSecretKeys,
|
|
|
|
ConfigPropertyMap: configPropertyMap,
|
2024-06-11 13:37:57 +00:00
|
|
|
DryRun: iter.src.opts.DryRun,
|
|
|
|
Parallel: iter.src.opts.Parallel,
|
2023-10-20 10:44:16 +00:00
|
|
|
Organization: string(iter.src.runinfo.Target.Organization),
|
2024-01-25 23:28:58 +00:00
|
|
|
Info: programInfo,
|
2024-08-29 09:14:39 +00:00
|
|
|
LoaderAddress: iter.loaderServer.Addr(),
|
2024-09-04 10:36:45 +00:00
|
|
|
AttachDebugger: iter.src.opts.AttachDebugger,
|
2018-03-28 14:45:23 +00:00
|
|
|
})
|
2019-03-19 23:21:50 +00:00
|
|
|
|
2019-03-20 18:54:32 +00:00
|
|
|
// Check if we were asked to Bail. This is a special random constant used for that
|
|
|
|
// purpose.
|
|
|
|
if err == nil && bail {
|
2023-09-12 15:01:16 +00:00
|
|
|
return result.BailErrorf("run bailed")
|
2019-03-20 18:54:32 +00:00
|
|
|
}
|
|
|
|
|
2018-03-28 14:45:23 +00:00
|
|
|
if err == nil && progerr != "" {
|
|
|
|
// If the program had an unhandled error, propagate it to the caller.
|
2021-11-13 02:37:17 +00:00
|
|
|
err = fmt.Errorf("an unhandled error occurred: %v", progerr)
|
2018-03-28 14:45:23 +00:00
|
|
|
}
|
2023-09-12 15:01:16 +00:00
|
|
|
return err
|
2018-03-28 14:45:23 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
// Communicate the error, if it exists, or nil if the program exited cleanly.
|
|
|
|
iter.finChan <- run()
|
|
|
|
}()
|
2017-08-30 01:24:12 +00:00
|
|
|
}
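The concurrency pattern forkRun relies on, isolated from the surrounding machinery: run the program on its own goroutine and report its final error (or nil) on a channel that Next can select on. A minimal, self-contained sketch:
```go
package sketch

import "fmt"

// forkAndReport runs fn on its own goroutine and delivers its result on the
// returned channel, mirroring how forkRun feeds iter.finChan.
func forkAndReport(fn func() error) <-chan error {
	fin := make(chan error, 1)
	go func() {
		fin <- fn()
	}()
	return fin
}

func main() {
	fin := forkAndReport(func() error { return nil })
	if err := <-fin; err != nil {
		fmt.Println("program failed:", err)
	} else {
		fmt.Println("program exited cleanly")
	}
}
```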
|
2017-06-10 18:50:47 +00:00
|
|
|
|
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior because it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
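The provider-reference idea (URN plus ID) can be pictured with a small sketch; the "<urn>::<id>" textual form and helper names here are illustrative, not the engine's exact representation:
```go
package sketch

import (
	"fmt"
	"strings"
)

// providerReference identifies a provider resource by its URN plus the ID it
// was assigned when created, so it can be named even before a physical ID
// exists during preview (a placeholder ID can stand in).
type providerReference struct {
	URN string
	ID  string
}

func (r providerReference) String() string {
	return r.URN + "::" + r.ID
}

// parseProviderReference splits the textual form back into its parts.
func parseProviderReference(s string) (providerReference, error) {
	i := strings.LastIndex(s, "::")
	if i < 0 {
		return providerReference{}, fmt.Errorf("not a provider reference: %q", s)
	}
	return providerReference{URN: s[:i], ID: s[i+2:]}, nil
}
```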
|
|
|
// defaultProviders manages the registration of default providers. The default provider for a package is the provider
|
|
|
|
// resource that will be used to manage resources that do not explicitly reference a provider. Default providers will
|
|
|
|
// only be registered for packages that are used by resources registered by the user's Pulumi program.
|
|
|
|
type defaultProviders struct {
|
2019-04-17 18:25:02 +00:00
|
|
|
// A map of package identifiers to versions, used to disambiguate which plugin to load if no version is provided
|
|
|
|
// by the language host.
|
2022-08-26 14:51:14 +00:00
|
|
|
defaultProviderInfo map[tokens.Package]workspace.PluginSpec
|
2019-04-17 18:25:02 +00:00
|
|
|
|
|
|
|
// A map of ProviderRequest strings to provider references, used to keep track of the set of default providers that
|
|
|
|
// have already been loaded.
|
|
|
|
providers map[string]providers.Reference
|
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be source from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior becuase it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
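A heavily simplified sketch of that plan-setup pass follows. The `state` type, URN strings, and `addDefaultProviders` helper are invented stand-ins for the engine's checkpoint types, meant only to show the scan / conjure / prepend / rewrite order.
```go
package main

import "fmt"

// state is a simplified stand-in for a checkpoint resource entry.
type state struct {
	URN      string
	Package  string
	Custom   bool
	Provider string // empty for resources written before first-class providers
}

// addDefaultProviders conjures one default provider per package that needs
// one, prepends the providers to the resource list, and rewrites each
// provider-less custom resource to reference the default for its package.
func addDefaultProviders(olds []state) []state {
	defaults := map[string]string{} // package -> provider reference
	var synthesized []state
	for _, r := range olds {
		if r.Custom && r.Provider == "" && defaults[r.Package] == "" {
			urn := "urn:pulumi:stack::proj::pulumi:providers:" + r.Package + "::default"
			defaults[r.Package] = urn + "::some-id"
			synthesized = append(synthesized, state{URN: urn, Package: r.Package, Custom: true})
		}
	}
	out := append([]state{}, synthesized...)
	for _, r := range olds {
		if r.Custom && r.Provider == "" {
			r.Provider = defaults[r.Package]
		}
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Printf("%+v\n", addDefaultProviders([]state{
		{URN: "urn:pulumi:stack::proj::aws:s3:Bucket::my-bucket", Package: "aws", Custom: true},
	}))
}
```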
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
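The trap-and-inject step can be pictured with the following simplified sketch. All of the names here (`registration`, `defaults`, `inject`) are illustrative; the real eval source goes through the `defaultProviders` request channel defined later in this file.
```go
package main

import "fmt"

// registration is a simplified stand-in for a trapped resource registration,
// read, or invoke that may or may not name a provider explicitly.
type registration struct {
	Package  string
	Provider string // provider reference; empty if the program named none
}

// defaults memoizes one default provider reference per package and lazily
// synthesizes the registration the first time a package is seen.
type defaults struct {
	refs     map[string]string
	register func(pkg string) string // synthesizes a default provider and blocks until it completes
}

// inject fills in the default provider reference for requests that did not
// name a provider; requests with an explicit provider are left untouched.
func (d *defaults) inject(req *registration) {
	if req.Provider != "" {
		return
	}
	ref, ok := d.refs[req.Package]
	if !ok {
		ref = d.register(req.Package)
		d.refs[req.Package] = ref
	}
	req.Provider = ref
}

func main() {
	d := &defaults{
		refs:     map[string]string{},
		register: func(pkg string) string { return "default-" + pkg + "-provider-ref" },
	}
	req := &registration{Package: "aws"}
	d.inject(req)
	fmt.Println(req.Provider) // prints: default-aws-provider-ref
}
```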
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
|
|
|
config plugin.ConfigSource
|
|
|
|
|
2019-08-27 17:10:51 +00:00
|
|
|
requests chan defaultProviderRequest
|
|
|
|
providerRegChan chan<- *registerResourceEvent
|
|
|
|
cancel <-chan bool
|
Implement first-class providers. (#1695)
2018-08-07 00:50:29 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
type defaultProviderResponse struct {
|
|
|
|
ref providers.Reference
|
|
|
|
err error
|
|
|
|
}
|
|
|
|
|
|
|
|
type defaultProviderRequest struct {
|
2019-04-17 18:25:02 +00:00
|
|
|
req providers.ProviderRequest
|
Implement first-class providers. (#1695)
2018-08-07 00:50:29 +00:00
|
|
|
response chan<- defaultProviderResponse
|
|
|
|
}
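For orientation, the sketch below shows how code in this package might perform a request/response round trip over these channels. The `getDefaultRef` helper is invented for illustration and assumes it lives alongside the declarations above, with the standard `errors` package imported.
```go
// getDefaultRef is an illustrative helper: it sends a defaultProviderRequest
// carrying its own response channel and blocks until the serving goroutine
// answers or the operation is cancelled.
func getDefaultRef(
	requests chan<- defaultProviderRequest,
	cancel <-chan bool,
	req providers.ProviderRequest,
) (providers.Reference, error) {
	response := make(chan defaultProviderResponse)
	select {
	case requests <- defaultProviderRequest{req: req, response: response}:
	case <-cancel:
		return providers.Reference{}, errors.New("default provider request cancelled")
	}
	select {
	case resp := <-response:
		return resp.ref, resp.err
	case <-cancel:
		return providers.Reference{}, errors.New("default provider request cancelled")
	}
}
```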
|
|
|
|
|
Canonicalize provider version during default provider lookup (#16109)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
This PR addresses a problem that leads to two default providers for a
given package+version. The problem occurs when given a combination of
unversioned and versioned provider requests, as can occur in
multi-language programs.
The fix is as follows: given an unversioned provider request, apply the
default version before looking into the provider cache.
Fixes #16108
## Example
Given the example program from #16108 plus this fix, the stack state has
one provider as expected:
```jsonl
{
"urn": "urn:pulumi:dev::issue-1939-nodejs::pulumi:providers:kubernetes::default_4_10_0",
"custom": true,
"id": "c19b65b0-dd5a-432f-8c75-d7544de782ec",
"type": "pulumi:providers:kubernetes",
"inputs": {
"version": "4.10.0"
},
"outputs": {
"version": "4.10.0"
},
"created": "2024-05-02T21:22:09.72166Z",
"modified": "2024-05-02T21:22:09.72166Z"
}
{
"urn": "urn:pulumi:dev::issue-1939-nodejs::kubernetes:yaml/v2:ConfigGroup$kubernetes:core/v1:ConfigMap::example:eron/example",
"custom": true,
"id": "eron/example",
"type": "kubernetes:core/v1:ConfigMap",
"parent": "urn:pulumi:dev::issue-1939-nodejs::kubernetes:yaml/v2:ConfigGroup::example",
"provider": "urn:pulumi:dev::issue-1939-nodejs::pulumi:providers:kubernetes::default_4_10_0::c19b65b0-dd5a-432f-8c75-d7544de782ec",
"propertyDependencies": {
"apiVersion": [],
"kind": [],
"metadata": []
},
}
```
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-05-06 08:22:28 +00:00
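The idea behind the fix can be pictured with a toy cache keyed on the normalized request, so that an unversioned request and an explicit request for the default version resolve to the same entry. The types and reference strings below are illustrative only, not the engine's actual cache:
```go
package main

import (
	"fmt"
	"strings"
)

// providerCache memoizes default providers by a key derived from the
// normalized request (package plus resolved version).
type providerCache struct {
	defaultVersions map[string]string // package -> default version
	refs            map[string]string // normalized key -> provider reference
}

// get normalizes the request before consulting the cache, so "kubernetes"
// and "kubernetes@4.10.0" share one default provider when 4.10.0 is the default.
func (c *providerCache) get(pkg, version string) string {
	if version == "" {
		version = c.defaultVersions[pkg]
	}
	key := pkg + "@" + version
	if ref, ok := c.refs[key]; ok {
		return ref
	}
	ref := pkg + "::default_" + strings.ReplaceAll(version, ".", "_")
	c.refs[key] = ref
	return ref
}

func main() {
	c := &providerCache{
		defaultVersions: map[string]string{"kubernetes": "4.10.0"},
		refs:            map[string]string{},
	}
	fmt.Println(c.get("kubernetes", ""))       // kubernetes::default_4_10_0
	fmt.Println(c.get("kubernetes", "4.10.0")) // same reference, no second provider
}
```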
|
|
|
func (d *defaultProviders) normalizeProviderRequest(req providers.ProviderRequest) providers.ProviderRequest {
|
2019-04-17 18:25:02 +00:00
|
|
|
// Request that the engine instantiate a specific version of this provider, if one was requested. We'll figure out
|
|
|
|
// what version to request by:
|
|
|
|
// 1. Providing the Version field of the ProviderRequest verbatim, if it was provided, otherwise
|
|
|
|
// 2. Querying the list of default versions provided to us on startup and returning the value associated with
|
|
|
|
// the given package, if one exists, otherwise
|
|
|
|
// 3. We give nothing to the engine and let the engine figure it out.
|
|
|
|
//
|
|
|
|
// As we tighten up our approach to provider versioning, 2 and 3 will go away and be replaced entirely by 1. 3 is
|
|
|
|
// especially onerous because the engine selects the "newest" plugin available on the machine, which is generally
|
|
|
|
// problematic for a lot of reasons.
|
|
|
|
if req.Version() != nil {
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
|
|
|
logging.V(5).Infof("normalizeProviderRequest(%s): using version %s from request", req, req.Version())
|
2019-04-17 18:25:02 +00:00
|
|
|
} else {
|
2021-12-17 22:52:01 +00:00
|
|
|
if version := d.defaultProviderInfo[req.Package()].Version; version != nil {
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
|
|
|
logging.V(5).Infof("normalizeProviderRequest(%s): default version hit on version %s", req, version)
|
2024-06-10 17:28:47 +00:00
|
|
|
req = providers.NewProviderRequest(
|
2024-07-15 08:33:36 +00:00
|
|
|
req.Package(), version, req.PluginDownloadURL(), req.PluginChecksums(), req.Parameterization())
|
2019-04-17 18:25:02 +00:00
|
|
|
} else {
|
|
|
|
logging.V(5).Infof(
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
|
|
|
"normalizeProviderRequest(%s): default provider miss, sending nil version to engine", req)
|
2019-04-17 18:25:02 +00:00
|
|
|
}
|
Implement first-class providers. (#1695)
2018-08-07 00:50:29 +00:00
|
|
|
}
|
|
|
|
|
2021-12-17 22:52:01 +00:00
|
|
|
if req.PluginDownloadURL() != "" {
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
|
|
|
logging.V(5).Infof("normalizeProviderRequest(%s): using pluginDownloadURL %s from request",
|
2021-12-17 22:52:01 +00:00
|
|
|
req, req.PluginDownloadURL())
|
|
|
|
} else {
|
|
|
|
if pluginDownloadURL := d.defaultProviderInfo[req.Package()].PluginDownloadURL; pluginDownloadURL != "" {
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
|
|
|
logging.V(5).Infof("normalizeProviderRequest(%s): default pluginDownloadURL hit on %s",
|
2021-12-17 22:52:01 +00:00
|
|
|
req, pluginDownloadURL)
|
2024-06-10 17:28:47 +00:00
|
|
|
req = providers.NewProviderRequest(
|
2024-07-15 08:33:36 +00:00
|
|
|
req.Package(), req.Version(), pluginDownloadURL, req.PluginChecksums(), req.Parameterization())
|
2021-12-17 22:52:01 +00:00
|
|
|
} else {
|
|
|
|
logging.V(5).Infof(
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
|
|
|
"normalizeProviderRequest(%s): default pluginDownloadURL miss, sending empty string to engine", req)
|
2021-12-17 22:52:01 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-09-11 15:54:07 +00:00
|
|
|
if req.PluginChecksums() != nil {
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
|
|
|
logging.V(5).Infof("normalizeProviderRequest(%s): using pluginChecksums %v from request",
|
2023-09-11 15:54:07 +00:00
|
|
|
req, req.PluginChecksums())
|
|
|
|
} else {
|
|
|
|
if pluginChecksums := d.defaultProviderInfo[req.Package()].Checksums; pluginChecksums != nil {
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
|
|
|
logging.V(5).Infof("normalizeProviderRequest(%s): default pluginChecksums hit on %v",
|
2023-09-11 15:54:07 +00:00
|
|
|
req, pluginChecksums)
|
2024-06-10 17:28:47 +00:00
|
|
|
req = providers.NewProviderRequest(
|
2024-07-15 08:33:36 +00:00
|
|
|
req.Package(), req.Version(), req.PluginDownloadURL(), pluginChecksums, req.Parameterization())
|
2023-09-11 15:54:07 +00:00
|
|
|
} else {
|
|
|
|
logging.V(5).Infof(
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
|
|
|
"normalizeProviderRequest(%s): default pluginChecksums miss, sending empty map to engine", req)
|
2023-09-11 15:54:07 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2024-06-10 17:28:47 +00:00
|
|
|
if req.Parameterization() != nil {
|
|
|
|
logging.V(5).Infof("normalizeProviderRequest(%s): default parameterization miss, sending nil to engine", req)
|
|
|
|
} else {
|
|
|
|
logging.V(5).Infof("normalizeProviderRequest(%s): using parameterization %v from request",
|
|
|
|
"normalizeProviderRequest(%s): default parameterization miss, sending nil to engine", req)
|
|
|
|
|
|
|
|
// TODO: Should Parameterization be in defaultProviderInfo
|
|
|
|
//if parameterization := d.defaultProviderInfo[req.Package()].Parameterization; parameterization != nil {
|
|
|
|
// logging.V(5).Infof("normalizeProviderRequest(%s): default parameterization hit on %v",
|
|
|
|
// req, parameterization)
|
|
|
|
// req = providers.NewProviderRequest(
|
|
|
|
// req.Package(), req.Version(), req.PluginDownloadURL(), req.PluginChecksums(), parameterization)
|
|
|
|
//} else {
|
|
|
|
// logging.V(5).Infof(
|
|
|
|
// "normalizeProviderRequest(%s): default parameterization miss, sending nil to engine", req)
|
|
|
|
//}
|
|
|
|
}
|
|
|
|
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
|
|
|
return req
|
|
|
|
}
|
|
|
|
|
|
|
|
// newRegisterDefaultProviderEvent creates a RegisterResourceEvent and completion channel that can be sent to the
|
|
|
|
// engine to register a default provider resource for the indicated package.
|
|
|
|
func (d *defaultProviders) newRegisterDefaultProviderEvent(
|
|
|
|
req providers.ProviderRequest,
|
|
|
|
) (*registerResourceEvent, <-chan *RegisterResult, error) {
|
|
|
|
// Attempt to get the config for the package.
|
|
|
|
inputs, err := d.config.GetPackageConfig(req.Package())
|
|
|
|
if err != nil {
|
|
|
|
return nil, nil, err
|
|
|
|
}
|
|
|
|
if req.Version() != nil {
|
|
|
|
providers.SetProviderVersion(inputs, req.Version())
|
|
|
|
}
|
|
|
|
if req.PluginDownloadURL() != "" {
|
|
|
|
providers.SetProviderURL(inputs, req.PluginDownloadURL())
|
|
|
|
}
|
|
|
|
if req.PluginChecksums() != nil {
|
|
|
|
providers.SetProviderChecksums(inputs, req.PluginChecksums())
|
|
|
|
}
|
2024-06-10 17:28:47 +00:00
|
|
|
if req.Parameterization() != nil {
|
2024-07-15 08:33:36 +00:00
|
|
|
providers.SetProviderName(inputs, req.Name())
|
|
|
|
providers.SetProviderParameterization(inputs, req.Parameterization())
|
2024-06-10 17:28:47 +00:00
|
|
|
}
|
Canonicalize provider version during default provider lookup (#16109)
2024-05-06 08:22:28 +00:00
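To make the fix concrete, here is a minimal, self-contained sketch of the idea using hypothetical `providerRequest`/`normalize` helpers rather than the engine's real `ProviderRequest` type: the default version is applied to unversioned requests before the cache key is computed, so a versioned and an unversioned request for the same package resolve to a single default provider.

```go
package main

import "fmt"

// providerRequest is a hypothetical stand-in for the engine's ProviderRequest.
type providerRequest struct {
	pkg     string
	version string // "" means the program did not request a specific version
}

func (r providerRequest) key() string { return r.pkg + "@" + r.version }

// normalize applies the default plugin version to unversioned requests so that
// cache lookups use a canonical key.
func normalize(r providerRequest, defaultVersions map[string]string) providerRequest {
	if r.version == "" {
		r.version = defaultVersions[r.pkg]
	}
	return r
}

func main() {
	defaults := map[string]string{"kubernetes": "4.10.0"}
	cache := map[string]string{} // canonical request key -> provider reference

	requests := []providerRequest{
		{pkg: "kubernetes"},                    // unversioned, e.g. from one language SDK
		{pkg: "kubernetes", version: "4.10.0"}, // versioned, e.g. from another
	}
	for _, req := range requests {
		req = normalize(req, defaults)
		if _, ok := cache[req.key()]; !ok {
			cache[req.key()] = "default_4_10_0"
		}
	}
	fmt.Println(len(cache)) // 1: both requests share a single default provider
}
```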
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior because it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class provider support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
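As a rough illustration of the provider-reference scheme described above (a provider's URN plus its ID), here is a small self-contained sketch; the `providerRef` type and `parseProviderRef` helper are hypothetical stand-ins for the engine's `providers.Reference`, but the `urn::id` serialization mirrors what appears in stack state.

```go
package main

import (
	"fmt"
	"strings"
)

// providerRef is a hypothetical stand-in for the engine's providers.Reference:
// the URN of the provider resource plus its ID, serialized as "urn::id".
type providerRef struct {
	URN string
	ID  string
}

func (r providerRef) String() string { return r.URN + "::" + r.ID }

func parseProviderRef(s string) (providerRef, error) {
	i := strings.LastIndex(s, "::")
	if i < 0 {
		return providerRef{}, fmt.Errorf("malformed provider reference %q", s)
	}
	return providerRef{URN: s[:i], ID: s[i+2:]}, nil
}

func main() {
	ref := providerRef{
		URN: "urn:pulumi:dev::proj::pulumi:providers:kubernetes::default_4_10_0",
		ID:  "c19b65b0-dd5a-432f-8c75-d7544de782ec",
	}
	parsed, err := parseProviderRef(ref.String())
	fmt.Println(parsed == ref, err) // true <nil>
}
```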
	// Create the result channel and the event.
	done := make(chan *RegisterResult)
	event := &registerResourceEvent{
		goal: resource.NewGoal(
			providers.MakeProviderType(req.Package()),
			req.DefaultName(), true, inputs, "", false, nil, "", nil, nil, nil,

[engine] Add support for source positions
These changes add support for passing source position information in
gRPC metadata and recording the source position that corresponds to a
resource registration in the statefile.
Enabling source position information in the resource model can provide
substantial benefits, including but not limited to:
- Better errors from the Pulumi CLI
- Go-to-definition for resources in state
- Editor integration for errors, etc. from `pulumi preview`
Source positions are (file, line) or (file, line, column) tuples
represented as URIs. The line and column are stored in the fragment
portion of the URI as "line(,column)?". The scheme of the URI and the
form of its path component depends on the context in which it is
generated or used:
- During an active update, the URI's scheme is `file` and paths are
absolute filesystem paths. This allows consumers to easily access
arbitrary files that are available on the host.
- In a statefile, the URI's scheme is `project` and paths are relative
to the project root. This allows consumers to resolve source positions
relative to the project file in different contexts irrespective of the
location of the project itself (e.g. given a project-relative path and
the URL of the project's root on GitHub, one can build a GitHub URL for
the source position).
During an update, source position information may be attached to gRPC
calls as "source-position" metadata. This allows arbitrary calls to be
associated with source positions without changes to their protobuf
payloads. Modifying the protobuf payloads is also a viable approach, but
is somewhat more invasive than attaching metadata, and requires changes
to every call signature.
Source positions should reflect the position in user code that initiated
a resource model operation (e.g. the source position passed with
`RegisterResource` for `pet` in the example above should be the source
position in `index.ts`, _not_ the source position in the Pulumi SDK). In
general, the Pulumi SDK should be able to infer the source position of
the resource registration, as the relationship between a resource
registration and its corresponding user code should be static per SDK.
Source positions in state files will be stored as a new `registeredAt`
property on each resource. This property is optional.
2023-06-29 18:41:19 +00:00
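A small sketch of the URI shape described above (not the engine's actual encoder): the line and optional column live in the URI fragment, and the `file` scheme with an absolute path is the form used during an active update.

```go
package main

import (
	"fmt"
	"net/url"
)

// sourcePositionURI builds a file-scheme source position URI for an absolute
// path, a 1-based line, and an optional column (0 means "no column").
func sourcePositionURI(path string, line, column int) string {
	u := url.URL{Scheme: "file", Path: path}
	if column > 0 {
		u.Fragment = fmt.Sprintf("%d,%d", line, column)
	} else {
		u.Fragment = fmt.Sprintf("%d", line)
	}
	return u.String()
}

func main() {
	fmt.Println(sourcePositionURI("/work/index.ts", 3, 9)) // file:///work/index.ts#3,9
	fmt.Println(sourcePositionURI("/work/index.ts", 3, 0)) // file:///work/index.ts#3
}
```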
			nil, nil, nil, "", nil, nil, false, "", ""),
		done: done,
	}
	return event, done, nil
}

// handleRequest services a single default provider request. If the request is for a default provider that we have
// already loaded, we will return its reference. If the request is for a default provider that has not yet been
// loaded, we will send a register resource request to the engine, wait for it to complete, and then cache and return
// the reference of the loaded provider.
//
// Note that this function must not be called from two goroutines concurrently; it is the responsibility of d.serve()
// to ensure this.
func (d *defaultProviders) handleRequest(req providers.ProviderRequest) (providers.Reference, error) {
	logging.V(5).Infof("handling default provider request for package %s", req)

	req = d.normalizeProviderRequest(req)

	denyCreation, err := d.shouldDenyRequest(req)
	if err != nil {
		return providers.Reference{}, err
	}
	if denyCreation {
		logging.V(5).Infof("denied default provider request for package %s", req)
		return providers.NewDenyDefaultProvider(string(req.Package().Name())), nil
	}

	// Have we loaded this provider before? Use the existing reference, if so.
	//
	// Note that we are using the request's String as the key for the provider map. Go auto-derives hash and equality
	// functions for aggregates, but the one auto-derived for ProviderRequest does not have the semantics we want. The
	// use of a string key here is hacky but gets us the desired semantics - that ProviderRequest is a tuple of
	// optional value-typed Version and a package.
	ref, ok := d.providers[req.String()]
	if ok {
		return ref, nil
	}

	event, done, err := d.newRegisterDefaultProviderEvent(req)
	if err != nil {
		return providers.Reference{}, err
	}

	select {
	case d.providerRegChan <- event:
	case <-d.cancel:
		return providers.Reference{}, context.Canceled
	}

	logging.V(5).Infof("waiting for default provider for package %s", req)

	var result *RegisterResult
	select {
	case result = <-done:
	case <-d.cancel:
		return providers.Reference{}, context.Canceled
	}

	logging.V(5).Infof("registered default provider for package %s: %s", req, result.State.URN)

	id := result.State.ID
Reuse provider instances where possible (#14127)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
Fixes https://github.com/pulumi/pulumi/issues/13987.
This reworks the registry to better track provider instances such that
we can reuse unconfigured instances between Creates, Updates, and Sames.
When we allocate a provider instance in the registry for a Check call we
save it with the special id "unconfigured". This value should never make
its way back to program SDKs, it's purely an internal value for the
engine.
When we do a Create, Update or Same we look to see if there's an
unconfigured provider to use and if so configures that one, else it
starts up a fresh one. (N.B. Update we can assume there will always be
an unconfigured one from the Check call before).
This has also fixed registry Create to use the ID `UnknownID` rather
than `""`, have added some contract assertions to check that and fixed
up some test fallout because of that (the tests had been getting away
with leaving ID blank before).
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2023-10-12 20:46:01 +00:00
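A simplified sketch of the reuse pattern described above, with hypothetical `registry`/`plugin` types standing in for the engine's provider registry: Check parks the freshly loaded plugin under a sentinel "unconfigured" ID, and a later Create (or Update/Same) claims and configures that instance instead of booting a second copy.

```go
package main

import "fmt"

// unconfiguredID is a sentinel used only inside the engine; it is never
// surfaced to program SDKs.
const unconfiguredID = "unconfigured"

type plugin struct{ configured bool }

type registry struct {
	instances map[string]*plugin // keyed by urn + "::" + id
}

// check loads a plugin for the provider and parks it under the sentinel ID.
func (r *registry) check(urn string) *plugin {
	p := &plugin{}
	r.instances[urn+"::"+unconfiguredID] = p
	return p
}

// create prefers the unconfigured instance left behind by check, falling back
// to a fresh plugin only when none is available.
func (r *registry) create(urn, id string) *plugin {
	key := urn + "::" + unconfiguredID
	p, ok := r.instances[key]
	if ok {
		delete(r.instances, key)
	} else {
		p = &plugin{}
	}
	p.configured = true
	r.instances[urn+"::"+id] = p
	return p
}

func main() {
	r := &registry{instances: map[string]*plugin{}}
	urn := "urn:pulumi:dev::proj::pulumi:providers:aws::default"
	checked := r.check(urn)
	created := r.create(urn, "8a2bf6cd")
	fmt.Println(checked == created) // true: the instance from check was reused
}
```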
	contract.Assertf(id != "", "default provider for package %s has no ID", req)

Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be source from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior becuase it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins froma checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
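For a sense of what such a synthesized registration targets, the default provider for a package is itself just a provider resource. The mapping below from package name to provider type token is a hedged sketch; the conventional resource name "default" (sometimes version-suffixed) is an assumption for illustration:

```go
// defaultProviderType sketches how a package name maps onto the type token of
// its default provider resource: providers are resources of type
// "pulumi:providers:<package>", and the synthesized default instance is
// conventionally named "default" (possibly with a version suffix).
func defaultProviderType(pkg string) tokens.Type {
	return tokens.Type("pulumi:providers:" + pkg)
}
```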
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
	ref, err = providers.NewReference(result.State.URN, id)
	contract.Assertf(err == nil, "could not create provider reference with URN %s and ID %s", result.State.URN, id)
	d.providers[req.String()] = ref
	return ref, nil
}

// If req should be allowed, or if we should prevent the request.
func (d *defaultProviders) shouldDenyRequest(req providers.ProviderRequest) (bool, error) {
	logging.V(9).Infof("checking if %#v should be denied", req)

	if req.Package().Name().String() == "pulumi" {
		logging.V(9).Infof("we always allow %#v through", req)
		return false, nil
	}

	pConfig, err := d.config.GetPackageConfig("pulumi")
	if err != nil {
		return true, err
	}

	denyCreation := false
	if value, ok := pConfig["disable-default-providers"]; ok {
		array := []interface{}{}
		if !value.IsString() {
			return true, errors.New("Unexpected encoding of pulumi:disable-default-providers")
		}
		if value.StringValue() == "" {
			// If the list is provided but empty, we don't encode a empty json
			// list, we just encode the empty string. Check to ensure we don't
			// get parse errors.
			return false, nil
		}
		if err := json.Unmarshal([]byte(value.StringValue()), &array); err != nil {
			return true, fmt.Errorf("Failed to parse %s: %w", value.StringValue(), err)
		}
		for i, v := range array {
			s, ok := v.(string)
			if !ok {
				return true, fmt.Errorf("pulumi:disable-default-providers[%d] must be a string", i)
			}
			barred := strings.TrimSpace(s)
			if barred == "*" || barred == req.Package().Name().String() {
				logging.V(7).Infof("denying %s (star=%t)", req, barred == "*")
				denyCreation = true
				break
			}
		}
	} else {
		logging.V(9).Infof("Did not find a config for 'pulumi'")
	}

	return denyCreation, nil
}
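The `disable-default-providers` setting parsed above arrives as a JSON-encoded string in the stack's `pulumi` config namespace. A self-contained sketch of how such a value decodes (the config value shown is just an example):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Example stack config value for pulumi:disable-default-providers; "*"
	// disables default providers for every package.
	raw := `["aws", "*"]`

	var entries []interface{}
	if err := json.Unmarshal([]byte(raw), &entries); err != nil {
		panic(err)
	}
	for _, v := range entries {
		if s, ok := v.(string); ok {
			fmt.Println("default provider disabled for:", strings.TrimSpace(s))
		}
	}
}
```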
// serve is the primary loop responsible for handling default provider requests.
func (d *defaultProviders) serve() {
	for {
		select {
		case req := <-d.requests:
			// Note that we do not need to handle cancellation when sending the response: every message we receive is
			// guaranteed to have something waiting on the other end of the response channel.
			ref, err := d.handleRequest(req.req)
			req.response <- defaultProviderResponse{ref: ref, err: err}
		case <-d.cancel:
			return
		}
	}
}

// getDefaultProviderRef fetches the provider reference for the default provider for a particular package.
func (d *defaultProviders) getDefaultProviderRef(req providers.ProviderRequest) (providers.Reference, error) {
	response := make(chan defaultProviderResponse)
	select {
	case d.requests <- defaultProviderRequest{req: req, response: response}:
	case <-d.cancel:
		return providers.Reference{}, context.Canceled
	}
	res := <-response
	return res.ref, res.err
}
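Tying this back to the eval-source behaviour described earlier: when a registration or invoke arrives without an explicit provider, the monitor falls back to the default provider for the request's package. A sketch of that decision follows; the function name and the raw string parameter are illustrative, not the monitor's exact code:

```go
// resolveProviderRef sketches how a provider reference is chosen for an
// incoming request: an explicitly supplied reference is parsed and used as-is;
// otherwise the default provider for the package is registered on demand via
// getDefaultProviderRef and its reference is injected into the request.
func resolveProviderRef(d *defaultProviders, rawRef string, req providers.ProviderRequest) (providers.Reference, error) {
	if rawRef != "" {
		return providers.ParseReference(rawRef)
	}
	return d.getDefaultProviderRef(req)
}
```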
Engine support for remote transforms (#15290)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support to the engine for "remote transformations".
A transform is "remote" because it is being invoked via the engine on
receiving a resource registration, rather than being run locally in
process before sending a resource registration. These transforms can
also span multiple process boundaries, e.g. a transform function in a
user program, then a transform function in a component library, both
running for a resource registered by another component library.
The underlying new feature here is the idea of a `Callback`. The
expectation is we're going to use callbacks for multiple features so
these are _not_ defined in terms of transformations. A callback is an
untyped byte array (usually will be a protobuf message), plus an address
to define which server should be invoked to do the callback, and a token
to identify it.
A language SDK can start up and serve a `Callbacks` service, keep a
mapping of tokens to in-process functions (currently just using UUIDs
for this), and then pass that service address and token to the engine to
be invoked later on.
The engine uses these callbacks to track transformation callbacks per
resource, and on each new resource registration invokes each relevant
callback with the resource properties and options, having new properties
and options returned that are then passed to the next relevant transform
callback until all have been called and the engine has the final
resource state and options to use.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-02-21 16:30:46 +00:00
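To make the callback shape concrete, the description above boils down to a (server address, token, opaque payload) triple. The type below is a purely illustrative stand-in for the real protobuf messages:

```go
// callbackRef is an illustrative stand-in for the wire-level callback
// description: which Callbacks server to dial, which registered function to
// invoke there, and an opaque request payload (typically a marshalled proto).
type callbackRef struct {
	Target  string // address of the SDK's Callbacks server
	Token   string // identifies the in-process function (e.g. a UUID)
	Request []byte // untyped payload handed back to the SDK
}
```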
// A transformation function that can be applied to a resource.
type TransformFunction func(
	ctx context.Context,
	name, typ string, custom bool, parent resource.URN,
	props resource.PropertyMap,
	opts *pulumirpc.TransformResourceOptions,
) (resource.PropertyMap, *pulumirpc.TransformResourceOptions, error)
// A transformation function that can be applied to an invoke.
type TransformInvokeFunction func(
	ctx context.Context, token string, args resource.PropertyMap,
	opts *pulumirpc.TransformInvokeOptions,
) (resource.PropertyMap, *pulumirpc.TransformInvokeOptions, error)
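To show how these function types compose, here is an illustrative helper that runs a chain of resource transforms in order, feeding each transform the properties and options produced by the previous one. It mirrors the behaviour described in the PR text above; it is not the engine's exact invocation code:

```go
// applyTransforms is a sketch of running a chain of TransformFunction values
// in order: each transform receives the properties and options produced by
// the previous one, and the final pair is what the engine acts on.
func applyTransforms(
	ctx context.Context, transforms []TransformFunction,
	name, typ string, custom bool, parent resource.URN,
	props resource.PropertyMap, opts *pulumirpc.TransformResourceOptions,
) (resource.PropertyMap, *pulumirpc.TransformResourceOptions, error) {
	var err error
	for _, t := range transforms {
		props, opts, err = t(ctx, name, typ, custom, parent, props, opts)
		if err != nil {
			return nil, nil, err
		}
	}
	return props, opts, nil
}
```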
type CallbacksClient struct {
	pulumirpc.CallbacksClient

	conn *grpc.ClientConn
}

func (c *CallbacksClient) Close() error {
	return c.conn.Close()
}

func NewCallbacksClient(conn *grpc.ClientConn) *CallbacksClient {
	return &CallbacksClient{
		CallbacksClient: pulumirpc.NewCallbacksClient(conn),
		conn:            conn,
	}
}
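The engine keeps one such client per callback server (see the `callbacks` map on `resmon` below). A hedged sketch of that caching pattern, with locking and dial options simplified; `getOrDialCallbacksClient` is an illustrative name, not the engine's function:

```go
// getOrDialCallbacksClient sketches per-target caching of callbacks clients:
// each callback server address is dialled at most once and the resulting
// client is reused for every callback that names the same target.
func getOrDialCallbacksClient(cache map[string]*CallbacksClient, target string) (*CallbacksClient, error) {
	if client, ok := cache[target]; ok {
		return client, nil
	}
	conn, err := grpc.Dial(target, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return nil, err
	}
	client := NewCallbacksClient(conn)
	cache[target] = client
	return client, nil
}
```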
// resmon implements the pulumirpc.ResourceMonitor interface and acts as the gateway between a language runtime's
// evaluation of a program and the internal resource planning and deployment logic.
type resmon struct {
	pulumirpc.UnsafeResourceMonitorServer

	pendingTransforms     map[string][]TransformFunction // pending transformation functions for a constructed resource
	pendingTransformsLock sync.Mutex

	parents     map[resource.URN]resource.URN // map of child URNs to their parent URNs
	parentsLock sync.Mutex
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!
2024-06-11 13:37:57 +00:00
	resGoals               map[resource.URN]resource.Goal     // map of seen URNs and their goals.
	resGoalsLock           sync.Mutex                         // locks the resGoals map.
	diagnostics            diag.Sink                          // logger for user-facing messages
	providers              ProviderSource                     // the provider source itself.
	componentProviders     map[resource.URN]map[string]string // which providers component resources used
	componentProvidersLock sync.Mutex                         // which locks the componentProviders map
	defaultProviders       *defaultProviders                  // the default provider manager.
	sourcePositions        *sourcePositions                   // source position manager.
	constructInfo          plugin.ConstructInfo               // information for construct and call calls.
	regChan                chan *registerResourceEvent        // the channel to send resource registrations to.
	regOutChan             chan *registerResourceOutputsEvent // the channel to send resource output registrations to.
	regReadChan            chan *readResourceEvent            // the channel to send resource reads to.
allow component providers to return more detailed error messages (#17306)
This is based on work from @lunaris and @EronWright, to allow us to
return better error messages from component providers.
The basic idea here is to allow attaching more error details on the RPC
layer, and turn errors into diagnostics *in the engine*, to avoid the
need to fix the error up in every SDK and pretty-print it.
GRPC allows us to attach error details to the returned error to help us
with that. These details can be error details such as specified in the
`error_details` proto that comes with grpc, but can also take any other
shape. Currently we only support pretty printing select types from that
proto, but this can be extended in the future.
On the implementation side, Go has a pretty nice API to create these
errors, which we can just let users use directly. For Python and NodeJS
the API is not so nice for this, so we need to encapsulate the error
into a special exception, and then turn that into a proper GRPC message,
using the magic `grpc-status-details-bin` metadata field.
For both Python and NodeJS I've only implemented one class for errors so
far, as I'm interested in some feedback on the API design first. I'm
wondering if we should just let users specify the fields as tuples, and
then add them to the `error_details` proto? Very open to ideas here.
Closes: https://github.com/pulumi/pulumi/pull/16132
---------
Co-authored-by: Will Jones <will@sacharissa.co.uk>
Co-authored-by: Eron Wright <eron@pulumi.com>
2024-09-25 15:38:36 +00:00
	abortChan chan bool // a channel that can abort iteration of resources.
	cancel chan bool         // a channel that can cancel the server.
	done   <-chan error      // a channel that resolves when the server completes.
	opts   EvalSourceOptions // options for the resource monitor.
Engine support for remote transforms (#15290)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support to the engine for "remote transformations".
A transform is "remote" because it is being invoked via the engine on
receiving a resource registration, rather than being ran locally in
process before sending a resource registration. These transforms can
also span multiple process boundaries, e.g. a transform function in a
user program, then a transform function in a component library, both
running for a resource registered by another component library.
The underlying new feature here is the idea of a `Callback`. The
expectation is we're going to use callbacks for multiple features so
these are _not_ defined in terms of transformations. A callback is an
untyped byte array (usually will be a protobuf message), plus an address
to define which server should be invoked to do the callback, and a token
to identify it.
A language sdk can start up and serve a `Callbacks` service, keep a
mapping of tokens to in-process functions (currently just using UUID's
for this), and then pass that service address and token to the engine to
be invoked later on.
The engine uses these callbacks to track transformation callbacks per
resource. On each new resource registration it invokes every relevant
callback with the resource's properties and options; the returned
properties and options are then passed to the next relevant transform
callback until all have been called and the engine has the final
resource state and options to use.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-02-21 16:30:46 +00:00
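As a rough sketch of the SDK-side bookkeeping described above (the names, the use of `github.com/google/uuid`, and the shapes here are assumptions for illustration, not the actual SDK code), a language SDK might map tokens to in-process functions like this:

```go
package callbacks // hypothetical package, for illustration only

import (
	"fmt"
	"sync"

	"github.com/google/uuid"
)

// callbackFunc is an in-process function invoked with a serialized request
// (typically a protobuf message) and returning a serialized response.
type callbackFunc func(request []byte) ([]byte, error)

// registry maps opaque tokens to in-process functions. The SDK serves a
// Callbacks gRPC service at `target` and hands (target, token) pairs to the
// engine so the engine can call back into the process later.
type registry struct {
	mu      sync.Mutex
	target  string
	entries map[string]callbackFunc
}

func newRegistry(target string) *registry {
	return &registry{target: target, entries: map[string]callbackFunc{}}
}

// register stores fn under a fresh token and returns the address/token pair
// that the SDK sends to the engine.
func (r *registry) register(fn callbackFunc) (target, token string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	token = uuid.NewString() // tokens are just UUIDs today
	r.entries[token] = fn
	return r.target, token
}

// invoke is what the Callbacks service handler would call when the engine
// invokes a callback by token.
func (r *registry) invoke(token string, request []byte) ([]byte, error) {
	r.mu.Lock()
	fn, ok := r.entries[token]
	r.mu.Unlock()
	if !ok {
		return nil, fmt.Errorf("unknown callback token %q", token)
	}
	return fn(request)
}
```

The engine only ever sees the (address, token) pair; when it needs to run a transform it dials the address and invokes the callback by token.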
|
|
|
|
2024-05-02 11:32:54 +00:00
|
|
|
// the working directory for the resources sent to this monitor.
|
|
|
|
workingDirectory string
|
|
|
|
|
2024-07-11 16:01:44 +00:00
|
|
|
stackTransformsLock sync.Mutex // guards stackTransforms
|
|
|
|
stackTransforms []TransformFunction // stack transformation functions
|
|
|
|
stackInvokeTransformsLock sync.Mutex // guards stackInvokeTransforms
|
|
|
|
stackInvokeTransforms []TransformInvokeFunction // invoke transformation functions
|
|
|
|
resourceTransformsLock sync.Mutex // guards resourceTransforms
|
|
|
|
resourceTransforms map[resource.URN][]TransformFunction // option transformation functions per resource
|
|
|
|
callbacksLock sync.Mutex // guards callbacks
|
|
|
|
callbacks map[string]*CallbacksClient // callback clients per target address
|
RegisterProvider engine work (#16241)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support for a `RegisterProvider` method to the engine. This
allows an SDK process to send the information for a package (name,
version, url, etc, and parameter in the future) and get back a UUID for
that run of the engine that can be used to re-lookup that information.
That allows the SDK to just send the `provider` field in
`RegisterResourceRequest` instead of filling in `version`,
`pluginDownloadURL` etc (and importantly not having to fill in
`parameter` for parameterised providers, which could be a large amount
of data).
This doesn't update any of the SDKs to use this method yet. We can do
that piecemeal, but it will require core sdk and codegen changes for
each language.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-05-23 06:16:59 +00:00
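A self-contained sketch of that UUID-keyed bookkeeping (the type and method names are assumptions, and it presumes `sync`, `github.com/google/uuid`, and the `providers` package are imported); it mirrors the `packageRefLock`/`packageRefMap` fields shown below:

```go
// packageRefRegistry is an illustrative stand-in: RegisterProvider stores a
// package description once and hands back a reference that later
// RegisterResourceRequests can carry instead of repeating version, download
// URL, and (eventually) parameterization data.
type packageRefRegistry struct {
	mu   sync.Mutex
	refs map[string]providers.ProviderRequest
}

func (p *packageRefRegistry) add(req providers.ProviderRequest) string {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.refs == nil {
		p.refs = map[string]providers.ProviderRequest{}
	}
	ref := uuid.NewString() // a fresh reference, valid for this engine run only
	p.refs[ref] = req
	return ref
}

func (p *packageRefRegistry) get(ref string) (providers.ProviderRequest, bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	req, ok := p.refs[ref]
	return req, ok
}
```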
|
|
|
|
|
|
|
packageRefLock sync.Mutex // guards packageRefMap
|
|
|
|
// A map of UUIDs to the description of a provider package they correspond to
|
|
|
|
packageRefMap map[string]providers.ProviderRequest
|
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
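A deliberately simplified sketch of the `Source` idea (the real interfaces in `pkg/resource/deploy` carry considerably more context than these stand-ins): a source hands the planner one event at a time, and a "fixed" source just replays a pre-computed slice.

```go
// SourceEvent is a placeholder for the events a source can produce
// (registrations, reads, and so on).
type SourceEvent interface{}

type Source interface {
	Iterate() (SourceIterator, error)
}

type SourceIterator interface {
	Next() (SourceEvent, error) // returns nil when the source is exhausted
}

// fixedSource replays a pre-computed slice of events, which is mostly useful
// for testing the planner in isolation.
type fixedSource struct{ events []SourceEvent }

func (s *fixedSource) Iterate() (SourceIterator, error) {
	return &fixedIterator{events: s.events}, nil
}

type fixedIterator struct {
	events []SourceEvent
	next   int
}

func (it *fixedIterator) Next() (SourceEvent, error) {
	if it.next >= len(it.events) {
		return nil, nil // exhausted
	}
	ev := it.events[it.next]
	it.next++
	return ev, nil
}
```

An evalSource differs only in where events come from: a goroutine running the interpreter feeds them in as the program registers resources.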
|
|
|
}
|
|
|
|
|
2019-05-02 21:22:50 +00:00
|
|
|
var _ SourceResourceMonitor = (*resmon)(nil)
|
|
|
|
|
2017-08-30 01:24:12 +00:00
|
|
|
// newResourceMonitor creates a new resource monitor RPC server.
|
Clean up deployment options (#16357)
2024-06-11 13:37:57 +00:00
|
|
|
func newResourceMonitor(
|
|
|
|
src *evalSource,
|
|
|
|
provs ProviderSource,
|
|
|
|
regChan chan *registerResourceEvent,
|
|
|
|
regOutChan chan *registerResourceOutputsEvent,
|
|
|
|
regReadChan chan *readResourceEvent,
|
|
|
|
config map[config.Key]string,
|
|
|
|
configSecretKeys []config.Key,
|
|
|
|
tracingSpan opentracing.Span,
|
2023-03-03 16:36:39 +00:00
|
|
|
) (*resmon, error) {
|
allow component providers to return more detailed error messages (#17306)
This is based on work from @lunaris and @EronWright, to allow us to
return better error messages from component providers.
The basic idea here is to allow attaching more error details on the RPC
layer, and turn errors into diagnostics *in the engine*, to avoid the
need to fix the error up in every SDK and pretty print it.
GRPC allows us to attach error details to the returned error to help us
with that. These details can be error details such as specified in the
`error_details` proto that comes with grpc, but can also take any other
shape. Currently we only support pretty printing select types from that
proto, but this can be extended in the future.
On the implementation side, Go has a pretty nice API to create these
errors, which we can just let users use directly. For Python and NodeJS
the API is not so nice for this, so we need to encapsulate the error
into a special exception, and then turn that into a proper GRPC message,
using the magic `grpc-status-details-bin` metadata field.
For both Python and NodeJS I've only implemented one class for errors so
far, as I'm interested in some feedback on the API design first. I'm
wondering if we should just let users specify the fields as tuples, and
then add them to the `error_details` proto? Very open to ideas here.
Closes: https://github.com/pulumi/pulumi/pull/16132
---------
Co-authored-by: Will Jones <will@sacharissa.co.uk>
Co-authored-by: Eron Wright <eron@pulumi.com>
2024-09-25 15:38:36 +00:00
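To make the Go side concrete, here is a hedged sketch using the stock grpc-go `status` API and the standard `error_details` proto; the function name and the choice of `BadRequest` as the detail type are illustrative, not a prescribed convention:

```go
import (
	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// constructFailed returns a gRPC error with a structured BadRequest detail
// attached; the engine can unpack such details and render them as diagnostics
// instead of a single opaque error string.
func constructFailed(property, reason string) error {
	st := status.New(codes.InvalidArgument, "component construction failed")
	withDetails, err := st.WithDetails(&errdetails.BadRequest{
		FieldViolations: []*errdetails.BadRequest_FieldViolation{
			{Field: property, Description: reason},
		},
	})
	if err != nil {
		// Attaching details can fail (e.g. while marshalling); fall back to
		// the bare status rather than losing the error entirely.
		return st.Err()
	}
	return withDetails.Err()
}
```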
|
|
|
abortChan := make(chan bool)
|
|
|
|
|
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior because it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
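As an illustration of the "URN plus ID" reference shape (the names here are stand-ins; the engine's real parsing lives in the providers package and is stricter than this):

```go
import (
	"fmt"
	"strings"

	"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
)

// providerRef pairs a provider resource's URN with its ID, which together
// identify a particular provider instance.
type providerRef struct {
	URN resource.URN
	ID  resource.ID
}

func (r providerRef) String() string {
	// e.g. urn:pulumi:stack::proj::pulumi:providers:aws::default::<id>
	return string(r.URN) + "::" + string(r.ID)
}

func parseProviderRef(s string) (providerRef, error) {
	// The ID follows the last "::"; everything before it is the URN.
	i := strings.LastIndex(s, "::")
	if i < 0 {
		return providerRef{}, fmt.Errorf("malformed provider reference %q", s)
	}
	return providerRef{URN: resource.URN(s[:i]), ID: resource.ID(s[i+2:])}, nil
}
```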
|
|
|
// Create our cancellation channel.
|
|
|
|
cancel := make(chan bool)
|
|
|
|
|
|
|
|
// Create a new default provider manager.
|
|
|
|
d := &defaultProviders{
|
2021-12-17 22:52:01 +00:00
|
|
|
defaultProviderInfo: src.defaultProviderInfo,
|
|
|
|
providers: make(map[string]providers.Reference),
|
|
|
|
config: src.runinfo.Target,
|
|
|
|
requests: make(chan defaultProviderRequest),
|
|
|
|
providerRegChan: regChan,
|
|
|
|
cancel: cancel,
|
Implement first-class providers. (#1695)
2018-08-07 00:50:29 +00:00
|
|
|
}
|
|
|
|
|
2017-08-30 01:24:12 +00:00
|
|
|
// New up an engine RPC server.
|
|
|
|
resmon := &resmon{
|
Clean up deployment options (#16357)
2024-06-11 13:37:57 +00:00
|
|
|
diagnostics: src.plugctx.Diag,
|
|
|
|
providers: provs,
|
|
|
|
defaultProviders: d,
|
|
|
|
workingDirectory: src.runinfo.Pwd,
|
|
|
|
sourcePositions: newSourcePositions(src.runinfo.ProjectRoot),
|
|
|
|
pendingTransforms: map[string][]TransformFunction{},
|
|
|
|
parents: map[resource.URN]resource.URN{},
|
|
|
|
resGoals: map[resource.URN]resource.Goal{},
|
|
|
|
componentProviders: map[resource.URN]map[string]string{},
|
|
|
|
regChan: regChan,
|
|
|
|
regOutChan: regOutChan,
|
|
|
|
regReadChan: regReadChan,
|
allow component providers to return more detailed error messages (#17306)
2024-09-25 15:38:36 +00:00
|
|
|
abortChan: abortChan,
|
Clean up deployment options (#16357)
2024-06-11 13:37:57 +00:00
|
|
|
cancel: cancel,
|
|
|
|
opts: src.opts,
|
|
|
|
callbacks: map[string]*CallbacksClient{},
|
|
|
|
resourceTransforms: map[resource.URN][]TransformFunction{},
|
|
|
|
packageRefMap: map[string]providers.ProviderRequest{},
|
2017-08-30 01:24:12 +00:00
|
|
|
}
|
Make more progress on the new deployment model
2017-06-10 18:50:47 +00:00
|
|
|
|
2017-08-30 01:24:12 +00:00
|
|
|
// Fire up a gRPC server and start listening for incomings.
|
2022-11-01 15:15:09 +00:00
|
|
|
handle, err := rpcutil.ServeWithOptions(rpcutil.ServeOptions{
|
|
|
|
Cancel: resmon.cancel,
|
|
|
|
Init: func(srv *grpc.Server) error {
|
2018-04-05 16:48:09 +00:00
|
|
|
pulumirpc.RegisterResourceMonitorServer(srv, resmon)
|
2017-08-30 01:24:12 +00:00
|
|
|
return nil
|
|
|
|
},
|
2023-12-19 16:14:40 +00:00
|
|
|
Options: sourceEvalServeOptions(src.plugctx, tracingSpan, env.DebugGRPC.Value()),
|
2022-11-01 15:15:09 +00:00
|
|
|
})
|
2017-08-30 01:24:12 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
Make more progress on the new deployment model
2017-06-10 18:50:47 +00:00
|
|
|
}
|
2017-06-21 17:31:06 +00:00
|
|
|
|
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
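A deliberately simplified sketch of the shape of that `Construct` call (the real `plugin.Provider` interface passes richer request/response types than the stand-ins used here):

```go
import (
	"context"

	"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
)

// Illustrative stand-ins only; the engine's actual types carry more context
// (aliases, explicit providers, custom timeouts, and so on).
type constructRequest struct {
	Type      string                    // the component's type token
	Name      string                    // the component's name
	Parent    resource.URN              // parent resource, if any
	Inputs    resource.PropertyMap      // resolved input properties
	InputDeps map[string][]resource.URN // URNs each input depends on
}

type constructResponse struct {
	URN        resource.URN
	Outputs    resource.PropertyMap
	OutputDeps map[string][]resource.URN
}

// componentProvider is the slice of the provider interface relevant here: a
// plugin that knows how to build a component and its children on request.
type componentProvider interface {
	Construct(ctx context.Context, req constructRequest) (constructResponse, error)
}
```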
|
|
|
resmon.constructInfo = plugin.ConstructInfo{
|
2021-06-24 22:38:01 +00:00
|
|
|
Project: string(src.runinfo.Proj.Name),
|
Add tokens.StackName (#14487)
2023-11-15 07:44:54 +00:00
|
|
|
Stack: src.runinfo.Target.Name.String(),
|
2021-06-24 22:38:01 +00:00
|
|
|
Config: config,
|
|
|
|
ConfigSecretKeys: configSecretKeys,
|
Clean up deployment options (#16357)
2024-06-11 13:37:57 +00:00
|
|
|
DryRun: src.opts.DryRun,
|
|
|
|
Parallel: src.opts.Parallel,
|
2022-11-01 15:15:09 +00:00
|
|
|
MonitorAddress: fmt.Sprintf("127.0.0.1:%d", handle.Port),
|
Initial support for remote component construction. (#5280)
2020-09-08 02:33:55 +00:00
|
|
|
}
|
2022-11-01 15:15:09 +00:00
|
|
|
resmon.done = handle.Done
|
Make more progress on the new deployment model
2017-06-10 18:50:47 +00:00
|
|
|
|
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be source from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior becuase it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins froma checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
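A rough sketch of that bookkeeping, using simplified stand-in types rather than the engine's own (the real eval source coordinates this over channels with the source iterator):

    package main

    import "fmt"

    // defaultProviderTracker is an illustrative stand-in for the eval source's
    // bookkeeping: one default provider reference per package, registered lazily.
    type defaultProviderTracker struct {
        refs     map[string]string // package name -> provider reference
        register func(pkg string) (string, error)
    }

    // get returns the recorded default provider for pkg, synthesizing a
    // registration the first time the package is seen.
    func (t *defaultProviderTracker) get(pkg string) (string, error) {
        if ref, ok := t.refs[pkg]; ok {
            return ref, nil
        }
        ref, err := t.register(pkg)
        if err != nil {
            return "", err
        }
        t.refs[pkg] = ref
        return ref, nil
    }

    func main() {
        t := &defaultProviderTracker{
            refs: map[string]string{},
            register: func(pkg string) (string, error) {
                // In the engine this would be a synthesized RegisterResource call.
                return fmt.Sprintf("urn:pulumi:dev::proj::pulumi:providers:%s::default::id", pkg), nil
            },
        }
        ref, _ := t.get("aws")
        fmt.Println(ref)
    }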
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
go d.serve()

return resmon, nil
Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
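As a sketch of the shape being described here (interface and type names are illustrative, not the exact engine API): a source hands resource objects to the planner on demand, a null source has none to hand out, and a fixed source replays a pre-computed list.

    package main

    import "fmt"

    // resourceGoal is a stand-in for whatever object a source produces for the planner.
    type resourceGoal struct{ name string }

    // source is an illustrative version of the deploy.Source idea: something that
    // can produce new resource objects on demand during a deployment.
    type source interface {
        // next returns the next resource goal, or nil when the source is exhausted.
        next() *resourceGoal
    }

    // nullSource refuses to create any new objects, simulating an "empty" program.
    type nullSource struct{}

    func (nullSource) next() *resourceGoal { return nil }

    // fixedSource hands out a pre-computed list of objects, in order.
    type fixedSource struct{ goals []resourceGoal }

    func (s *fixedSource) next() *resourceGoal {
        if len(s.goals) == 0 {
            return nil
        }
        g := s.goals[0]
        s.goals = s.goals[1:]
        return &g
    }

    func main() {
        var src source = &fixedSource{goals: []resourceGoal{{"bucket"}, {"table"}}}
        for g := src.next(); g != nil; g = src.next() {
            fmt.Println("plan:", g.name)
        }
    }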
2017-06-10 18:50:47 +00:00
}
allow component providers to return more detailed error messages (#17306)
This is based on work from @lunaris and @EronWright, to allow us to
return better error messages from component providers.
The basic idea here is to allow attaching more error details on the RPC
layer, and turn errors into diagnostics *in the engine*, to avoid the
need to fix the error up in every SDK and pretty print it.
GRPC allows us to attach error details to the returned error to help us
with that. These details can be error details such as specified in the
`error_details` proto that comes with grpc, but can also take any other
shape. Currently we only support pretty printing select types from that
proto, but this can be extended in the future.
On the implementation side, Go has a pretty nice API to create these
errors, which we can just let users use directly. For Python and NodeJS
the API is not so nice for this, so we need to encapsulate the error
into a special exception, and then turn that into a proper GRPC message,
using the magic `grpc-status-details-bin` metadata field.
For both Python and NodeJS I've only implemented one class for errors so
far, as I'm interested in some feedback on the API design first. I'm
wondering if we should just let users specify the fields as tuples, and
then add them to the `error_details` proto? Very open to ideas here.
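For the Go side, a minimal sketch of what attaching structured details looks like with the stock gRPC status API and the `error_details` messages (field contents invented for illustration):

    package main

    import (
        "fmt"

        "google.golang.org/genproto/googleapis/rpc/errdetails"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // constructError builds a gRPC error carrying a BadRequest detail, which the
    // engine can unpack and turn into a diagnostic.
    func constructError() error {
        st := status.New(codes.InvalidArgument, "component construction failed")
        st, err := st.WithDetails(&errdetails.BadRequest{
            FieldViolations: []*errdetails.BadRequest_FieldViolation{
                {Field: "bucketName", Description: "bucket names must be lowercase"},
            },
        })
        if err != nil {
            // WithDetails only fails if the detail cannot be marshalled.
            return status.New(codes.InvalidArgument, "component construction failed").Err()
        }
        return st.Err()
    }

    func main() {
        err := constructError()
        for _, d := range status.Convert(err).Details() {
            fmt.Printf("%T: %v\n", d, d)
        }
    }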
Closes: https://github.com/pulumi/pulumi/pull/16132
---------
Co-authored-by: Will Jones <will@sacharissa.co.uk>
Co-authored-by: Eron Wright <eron@pulumi.com>
2024-09-25 15:38:36 +00:00
func (rm *resmon) AbortChan() <-chan bool {
	return rm.abortChan
}
Engine support for remote transforms (#15290)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support to the engine for "remote transformations".
A transform is "remote" because it is being invoked via the engine on
receiving a resource registration, rather than being ran locally in
process before sending a resource registration. These transforms can
also span multiple process boundaries, e.g. a transform function in a
user program, then a transform function in a component library, both
running for a resource registered by another component library.
The underlying new feature here is the idea of a `Callback`. The
expectation is we're going to use callbacks for multiple features so
these are _not_ defined in terms of transformations. A callback is an
untyped byte array (usually will be a protobuf message), plus an address
to define which server should be invoked to do the callback, and a token
to identify it.
A language sdk can start up and serve a `Callbacks` service, keep a
mapping of tokens to in-process functions (currently just using UUID's
for this), and then pass that service address and token to the engine to
be invoked later on.
The engine uses these callbacks to track transformation callbacks per
resource, and on each new resource registration invokes each relevant
callback with the resource properties and options, having new properties
and options returned that are then passed to the next relevant transform
callback until all have been called and the engine has the final
resource state and options to use.
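A sketch of the callback idea with simplified types: a descriptor saying which server to call and which registered function to run, plus the SDK-side token table (this assumes the `github.com/google/uuid` package for tokens; the real wire format is a protobuf message):

    package main

    import (
        "fmt"

        "github.com/google/uuid"
    )

    // callback identifies a function the engine can invoke later: which server to
    // call (target) and which registered function on that server (token).
    type callback struct {
        Target string // address of the SDK's Callbacks service
        Token  string // opaque token identifying the in-process function
    }

    // callbackTable is the SDK-side mapping from tokens to in-process functions.
    type callbackTable struct {
        funcs map[string]func(payload []byte) ([]byte, error)
    }

    func (t *callbackTable) register(addr string, fn func([]byte) ([]byte, error)) callback {
        token := uuid.NewString()
        t.funcs[token] = fn
        return callback{Target: addr, Token: token}
    }

    func (t *callbackTable) invoke(token string, payload []byte) ([]byte, error) {
        fn, ok := t.funcs[token]
        if !ok {
            return nil, fmt.Errorf("unknown callback token %q", token)
        }
        return fn(payload)
    }

    func main() {
        table := &callbackTable{funcs: map[string]func([]byte) ([]byte, error){}}
        cb := table.register("127.0.0.1:54321", func(b []byte) ([]byte, error) {
            return append(b, " (transformed)"...), nil
        })
        out, _ := table.invoke(cb.Token, []byte("my-bucket"))
        fmt.Println(cb.Target, string(out))
    }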
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-02-21 16:30:46 +00:00
// Get or allocate a new grpc client for the given callback address.
func (rm *resmon) GetCallbacksClient(target string) (*CallbacksClient, error) {
	rm.callbacksLock.Lock()
	defer rm.callbacksLock.Unlock()

	if client, has := rm.callbacks[target]; has {
		return client, nil
	}

	conn, err := grpc.NewClient(target, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return nil, err
	}

	client := NewCallbacksClient(conn)
	rm.callbacks[target] = client
	return client, nil
}

// Address returns the address at which the monitor's RPC server may be reached.
func (rm *resmon) Address() string {
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
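A sketch of the data that flows across `Construct`, using simplified stand-in types rather than the real plugin interface (which uses the engine's URN and property types):

    package main

    import "fmt"

    // constructRequest is a simplified picture of what the engine hands to a
    // component provider's Construct: the component's identity, its inputs, and
    // which resources each input depends on.
    type constructRequest struct {
        Type         string
        Name         string
        Inputs       map[string]interface{}
        InputDepends map[string][]string // input name -> URNs it depends on
    }

    // constructResponse is what Construct hands back: the component's URN, its
    // resolved outputs, and the dependencies of each output.
    type constructResponse struct {
        URN           string
        Outputs       map[string]interface{}
        OutputDepends map[string][]string
    }

    // construct is a toy component factory that echoes its inputs as outputs.
    func construct(req constructRequest) constructResponse {
        return constructResponse{
            URN:           fmt.Sprintf("urn:pulumi:dev::proj::%s::%s", req.Type, req.Name),
            Outputs:       req.Inputs,
            OutputDepends: req.InputDepends,
        }
    }

    func main() {
        resp := construct(constructRequest{
            Type:   "mycomponents:index:StaticSite",
            Name:   "site",
            Inputs: map[string]interface{}{"indexDocument": "index.html"},
        })
        fmt.Println(resp.URN)
    }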
2020-09-08 02:33:55 +00:00
	return rm.constructInfo.MonitorAddress
Implement `get` functions on all resources
This change implements the `get` function for resources. Per pulumi/lumi#83,
this allows Lumi scripts to actually read from the target environment.
For example, we can now look up a SecurityGroup from its ARN:
let group = aws.ec2.SecurityGroup.get(
"arn:aws:ec2:us-west-2:153052954103:security-group:sg-02150d79");
The returned object is a fully functional resource object. So, we can then
link it up with an EC2 instance, for example, in the usual ways:
let instance = new aws.ec2.Instance(..., {
securityGroups: [ group ],
});
This didn't require any changes to the RPC or provider model, since we
already implement the Get function.
There are a few loose ends; two are short term:
1) URNs are not rehydrated.
2) Query is not yet implemented.
One is mid-term:
3) We probably want a URN-based lookup function. But we will likely
wait until we tackle pulumi/lumi#109 before adding this.
And one is long term (and subtle):
4) These amount to I/O and are not repeatable! A change in the target
environment may cause a script to generate a different plan
intermittently. Most likely we want to apply a different kind of
deployment "policy" for such scripts. These are inching towards the
scripting model of pulumi/lumi#121, which is an entirely different
beast than the repeatable immutable infrastructure deployments.
Finally, it is worth noting that with this, we have some of the fundamental
underpinnings required to finally tackle "inference" (pulumi/lumi#142).
2017-06-20 00:24:00 +00:00
}

// Cancel signals that the engine should be terminated, awaits its termination, and returns any errors that result.
func (rm *resmon) Cancel() error {
	close(rm.cancel)
	errs := []error{<-rm.done}
	for _, client := range rm.callbacks {
		errs = append(errs, client.Close())
	}
	return errors.Join(errs...)
}

func sourceEvalServeOptions(ctx *plugin.Context, tracingSpan opentracing.Span, logFile string) []grpc.ServerOption {
	serveOpts := rpcutil.OpenTracingServerInterceptorOptions(
		tracingSpan,
		otgrpc.SpanDecorator(decorateResourceSpans),
	)
	if logFile != "" {
		di, err := interceptors.NewDebugInterceptor(interceptors.DebugInterceptorOptions{
			LogFile: logFile,
			Mutex:   ctx.DebugTraceMutex,
		})
		if err != nil {
			// ignoring
			return nil
		}
		metadata := map[string]interface{}{
			"mode": "server",
		}
		serveOpts = append(serveOpts, di.ServerOptions(interceptors.LogOptions{
			Metadata: metadata,
		})...)
	}
	return serveOpts
}
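For context, a slice of `grpc.ServerOption` like the one returned above is ultimately splatted into the gRPC server that hosts the resource monitor; roughly (variable names assumed, not the engine's actual wiring):

    srv := grpc.NewServer(sourceEvalServeOptions(ctx, tracingSpan, logFile)...)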
// getProviderReference fetches the provider reference for a resource, read, or invoke from the given package with the
// given unparsed provider reference. If the unparsed provider reference is empty, this function returns a reference
// to the default provider for the indicated package.
RegisterProvider engine work (#16241)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support for a `RegisterProvider` method to the engine. This
allows an SDK process to send the information for a package (name,
version, url, etc, and parameter in the future) and get back a UUID for
that run of the engine that can be used to re-lookup that information.
That allows the SDK to just send the `provider` field in
`RegisterResourceRequest` instead of filling in `version`,
`pluginDownloadURL` etc (and importantly not having to fill in
`parameter` for parameterised providers, which could be a large amount
of data).
This doesn't update any of the SDKs to use this method yet. We can do
that piecemeal, but it will require core sdk and codegen changes for
each language.
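A sketch of the engine-side bookkeeping this implies, with made-up types (the real request carries the package name, version, download URL, and, eventually, parameterization; this assumes `github.com/google/uuid` for the returned reference):

    package main

    import (
        "fmt"

        "github.com/google/uuid"
    )

    // providerInfo is an illustrative stand-in for the package information an SDK
    // sends to RegisterProvider.
    type providerInfo struct {
        Name              string
        Version           string
        PluginDownloadURL string
    }

    // providerRegistry maps the UUIDs handed back to SDKs onto the registered
    // package information for the lifetime of one engine run.
    type providerRegistry struct {
        byRef map[string]providerInfo
    }

    // registerProvider records the package information and returns a reference the
    // SDK can later place in the provider field of RegisterResourceRequest.
    func (r *providerRegistry) registerProvider(info providerInfo) string {
        ref := uuid.NewString()
        r.byRef[ref] = info
        return ref
    }

    func (r *providerRegistry) lookup(ref string) (providerInfo, bool) {
        info, ok := r.byRef[ref]
        return info, ok
    }

    func main() {
        reg := &providerRegistry{byRef: map[string]providerInfo{}}
        ref := reg.registerProvider(providerInfo{Name: "aws", Version: "6.0.0"})
        info, _ := reg.lookup(ref)
        fmt.Println(ref, info.Name, info.Version)
    }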
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-05-23 06:16:59 +00:00
func (rm *resmon) getProviderReference(defaultProviders *defaultProviders, req providers.ProviderRequest,
	rawProviderRef string,
) (providers.Reference, error) {
	if rawProviderRef != "" {
		// Check if this is a real provider ref (URN::ID) or a package reference (a dashed uuid)
		if strings.Contains(rawProviderRef, "::") {
			ref, err := providers.ParseReference(rawProviderRef)
			if err != nil {
				return providers.Reference{}, fmt.Errorf("could not parse provider reference: %w", err)
			}
			return ref, nil
		}
	}

	ref, err := defaultProviders.getDefaultProviderRef(req)
	if err != nil {
		return providers.Reference{}, err
	}
	return ref, nil
}

// getProviderFromSource fetches the provider plugin for a resource, read, or invoke from the given
// package with the given unparsed provider reference. If the unparsed provider reference is empty,
// this function returns the plugin for the indicated package's default provider.
func (rm *resmon) getProviderFromSource(
	providerSource ProviderSource, defaultProviders *defaultProviders,
	req providers.ProviderRequest, rawProviderRef string,
	token tokens.ModuleMember,
) (plugin.Provider, error) {
	providerRef, err := rm.getProviderReference(defaultProviders, req, rawProviderRef)
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be source from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior becuase it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
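As a rough illustration of the provider-reference idea described above (a provider's URN plus its ID, so a provider can be named during preview before it physically exists), here is a minimal, self-contained sketch. The `providerRef` type and the `::` separator are assumptions for this example, not the engine's actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// providerRef is an illustrative stand-in for a provider reference: the
// provider resource's URN plus its ID.
type providerRef struct {
	URN string
	ID  string
}

// String joins the two parts; the "::" separator is an assumption for this sketch.
func (r providerRef) String() string {
	return r.URN + "::" + r.ID
}

// parseProviderRef splits a serialized reference back into its parts.
func parseProviderRef(s string) (providerRef, error) {
	i := strings.LastIndex(s, "::")
	if i < 0 {
		return providerRef{}, fmt.Errorf("malformed provider reference %q", s)
	}
	return providerRef{URN: s[:i], ID: s[i+2:]}, nil
}

func main() {
	ref := providerRef{
		URN: "urn:pulumi:dev::proj::pulumi:providers:aws::default_6_0_0",
		ID:  "3b3c9f8e",
	}
	fmt.Println(ref) // URN and ID serialized together

	back, err := parseProviderRef(ref.String())
	if err != nil {
		panic(err)
	}
	fmt.Println(back.ID) // round-trips back to the ID
}
```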
|
|
|
if err != nil {
|
2022-02-28 23:33:45 +00:00
|
|
|
return nil, fmt.Errorf("getProviderFromSource: %w", err)
|
|
|
|
} else if providers.IsDenyDefaultsProvider(providerRef) {
|
|
|
|
msg := diag.GetDefaultProviderDenied("Invoke").Message
|
|
|
|
return nil, fmt.Errorf(msg, req.Package(), token)
|
|
|
|
}
|
2022-02-28 23:33:45 +00:00
|
|
|
|
|
|
|
provider, ok := providerSource.GetProvider(providerRef)
|
|
|
|
if !ok {
|
2022-02-28 23:33:45 +00:00
|
|
|
return nil, fmt.Errorf("unknown provider '%v' -> '%v'", rawProviderRef, providerRef)
|
|
|
|
}
|
|
|
|
return provider, nil
|
|
|
|
}
|
|
|
|
|
2023-09-11 15:54:07 +00:00
|
|
|
func parseProviderRequest(
|
|
|
|
pkg tokens.Package, version,
|
|
|
|
pluginDownloadURL string, pluginChecksums map[string][]byte,
|
2024-07-15 08:33:36 +00:00
|
|
|
parameterization *providers.ProviderParameterization,
|
2023-09-11 15:54:07 +00:00
|
|
|
) (providers.ProviderRequest, error) {
|
Refine resource replacement logic for providers (#2767)
This commit touches an intersection of a few different provider-oriented
features that combined to cause a particularly severe bug that made it
impossible for users to upgrade provider versions without seeing
their resources replaced.
For some context, Pulumi models all providers as resources and places
them in the snapshot like any other resource. Every resource has a
reference to the provider that created it. If a Pulumi program does not
specify a particular provider to use when performing a resource
operation, the Pulumi engine injects one automatically; these are called
"default providers" and are the most common ways that users end up with
providers in their snapshot. Default providers can be identified by
their name, which is always prefixed with "default".
Recently, in an effort to make the Pulumi engine more flexible with
provider versions, it was made possible for the engine to have multiple
default providers active for a provider of a particular type, which was
previously not possible. Because a provider is identified as a tuple of
package name and version, it was difficult to find a name for these
duplicate default providers that did not cause additional problems. The
provider versioning PR gave these default providers a name that was
derived from the version of the package. This proved to be a problem,
because when users upgraded from one version of a package to another,
this changed the name of their default provider which in turn caused all
of their resources created using that provider (read: everything) to be
replaced.
To combat this, this PR introduces a rule that the engine will apply
when diffing a resource to determine whether or not it needs to be
replaced: "If a resource's provider changes, and both old and new
providers are default providers whose properties do not require
replacement, proceed as if there were no diff." This allows the engine
to gracefully recognize and recover when a resource's default provider changes
names, as long as the provider's config has not changed.
2019-06-03 19:16:31 +00:00
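The replacement rule quoted above can be sketched as a small predicate. The `isDefaultProvider` helper and the `providerDiffRequiresReplace` flag below are illustrative assumptions for this sketch, not the engine's real signatures.

```go
package main

import (
	"fmt"
	"strings"
)

// isDefaultProvider mimics the convention described above: default providers
// are identified by a name prefixed with "default".
func isDefaultProvider(name string) bool {
	return strings.HasPrefix(name, "default")
}

// providerChangeForcesReplacement applies the rule from this change: if a
// resource's provider reference changed, but both old and new providers are
// default providers whose own diff does not require replacement, proceed as
// if there were no diff at all.
func providerChangeForcesReplacement(oldName, newName string, providerDiffRequiresReplace bool) bool {
	if oldName == newName {
		return false // provider unchanged
	}
	if isDefaultProvider(oldName) && isDefaultProvider(newName) && !providerDiffRequiresReplace {
		return false // renamed default provider with compatible config: no replacement
	}
	return true
}

func main() {
	// A version bump renames the default provider but its config is unchanged.
	fmt.Println(providerChangeForcesReplacement("default_1_0_0", "default_2_0_0", false)) // false
	// Switching to an explicitly named provider still forces replacement.
	fmt.Println(providerChangeForcesReplacement("default_1_0_0", "my-provider", false)) // true
}
```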
|
|
|
if version == "" {
|
|
|
|
logging.V(5).Infof("parseProviderRequest(%s): semver version is the empty string", pkg)
|
2024-07-15 08:33:36 +00:00
|
|
|
return providers.NewProviderRequest(pkg, nil, pluginDownloadURL, pluginChecksums, parameterization), nil
|
|
|
|
}
|
2019-05-22 01:14:20 +00:00
|
|
|
|
|
|
|
parsedVersion, err := semver.Parse(version)
|
|
|
|
if err != nil {
|
|
|
|
logging.V(5).Infof("parseProviderRequest(%s, %s): semver version string is invalid: %v", pkg, version, err)
|
|
|
|
return providers.ProviderRequest{}, err
|
|
|
|
}
|
2019-04-17 18:25:02 +00:00
|
|
|
|
2021-12-17 22:52:01 +00:00
|
|
|
url := strings.TrimSuffix(pluginDownloadURL, "/")
|
|
|
|
|
2024-07-15 08:33:36 +00:00
|
|
|
return providers.NewProviderRequest(pkg, &parsedVersion, url, pluginChecksums, parameterization), nil
|
2019-04-17 18:25:02 +00:00
|
|
|
}
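The version handling above relies on strict semver parsing, which is why the empty string is special-cased rather than parsed. A tiny sketch, assuming the blang/semver package that the engine imports as `semver` (the exact module path may differ by version):

```go
package main

import (
	"fmt"

	"github.com/blang/semver"
)

func main() {
	// A full version parses cleanly.
	v, err := semver.Parse("3.1.0")
	fmt.Println(v, err) // 3.1.0 <nil>

	// The empty string is a parse error, which is why parseProviderRequest
	// returns an unconstrained request for "" instead of calling Parse on it.
	_, err = semver.Parse("")
	fmt.Println(err != nil) // true
}
```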
|
|
|
|
|
2024-06-24 14:59:18 +00:00
|
|
|
func (rm *resmon) RegisterPackage(ctx context.Context,
|
|
|
|
req *pulumirpc.RegisterPackageRequest,
|
|
|
|
) (*pulumirpc.RegisterPackageResponse, error) {
|
|
|
|
logging.V(5).Infof("ResourceMonitor.RegisterPackage(%v)", req)
|
RegisterProvider engine work (#16241)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support for a `RegisterProvider` method to the engine. This
allows an SDK process to send the information for a package (name,
version, url, etc, and parameter in the future) and get back a UUID for
that run of the engine that can be used to re-lookup that information.
That allows the SDK to just send the `provider` field in
`RegisterResourceRequest` instead of filling in `version`,
`pluginDownloadURL` etc (and importantly not having to fill in
`parameter` for parameterised providers, which could be a large amount
of data).
This doesn't yet update any of the SDKs to use this method. We can do
that piecemeal, but it will require core sdk and codegen changes for
each language.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-05-23 06:16:59 +00:00
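No SDK uses this method yet (per the description above), but the intended round trip is roughly the following sketch, assuming the generated Go client in `pulumirpc` and using the `PackageRef` field that the `Invoke` handler further down consults. The token and field values are illustrative only.

```go
package main

import (
	"context"
	"fmt"

	pulumirpc "github.com/pulumi/pulumi/sdk/v3/proto/go"
	"google.golang.org/grpc"
)

// registerAndInvoke sketches the round trip: register the package details
// once, then refer to them by the returned opaque ref in later requests.
func registerAndInvoke(ctx context.Context, conn *grpc.ClientConn) error {
	client := pulumirpc.NewResourceMonitorClient(conn)

	pkg, err := client.RegisterPackage(ctx, &pulumirpc.RegisterPackageRequest{
		Name:    "aws",
		Version: "6.0.0",
	})
	if err != nil {
		return err
	}

	// Later requests only need the ref, not version/download URL/checksums.
	_, err = client.Invoke(ctx, &pulumirpc.ResourceInvokeRequest{
		Tok:        "aws:ec2/getAmi:getAmi",
		PackageRef: pkg.Ref,
	})
	if err != nil {
		return fmt.Errorf("invoke failed: %w", err)
	}
	return nil
}

func main() {} // illustration only; a real SDK owns the gRPC connection
```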
|
|
|
|
2024-07-08 10:55:08 +00:00
|
|
|
name := tokens.Package(req.Name)
|
|
|
|
if name == "" {
|
|
|
|
return nil, errors.New("package name is empty")
|
|
|
|
}
|
|
|
|
|
|
|
|
// First parse the request into a ProviderRequest
|
|
|
|
var version *semver.Version
|
|
|
|
if req.Version != "" {
|
|
|
|
v, err := semver.Parse(req.Version)
|
|
|
|
if err != nil {
|
|
|
|
return nil, fmt.Errorf("parse package version %s: %w", req.Version, err)
|
|
|
|
}
|
|
|
|
version = &v
|
|
|
|
}
|
2024-06-10 17:28:47 +00:00
|
|
|
// Parse the parameterization
|
2024-07-15 08:33:36 +00:00
|
|
|
var parameterization *providers.ProviderParameterization
|
2024-07-08 10:55:08 +00:00
|
|
|
if req.Parameterization != nil {
|
2024-08-05 16:01:28 +00:00
|
|
|
parameterizationVersion, err := semver.Parse(req.Parameterization.Version)
|
|
|
|
if err != nil {
|
|
|
|
return nil, fmt.Errorf("parse parameter version %s: %w", req.Parameterization.Version, err)
|
2024-06-10 17:28:47 +00:00
|
|
|
}
|
|
|
|
|
2024-07-08 10:55:08 +00:00
|
|
|
// RegisterPackageRequest keeps all the plugin information in the root fields "name", "version" etc, while the
|
|
|
|
// information about the parameterized package is in the "parameterization" field. Internally in the engine, and
|
|
|
|
// for resource state we need to flip that around a bit.
|
2024-07-15 08:33:36 +00:00
|
|
|
parameterization = providers.NewProviderParameterization(
|
|
|
|
tokens.Package(req.Parameterization.Name),
|
|
|
|
parameterizationVersion,
|
2024-07-10 11:15:35 +00:00
|
|
|
req.Parameterization.Value,
|
2024-06-10 17:28:47 +00:00
|
|
|
)
|
|
|
|
}
|
|
|
|
|
|
|
|
pi := providers.NewProviderRequest(
|
2024-07-15 08:33:36 +00:00
|
|
|
tokens.Package(req.Name), version, req.DownloadUrl, req.Checksums,
|
2024-06-10 17:28:47 +00:00
|
|
|
parameterization)
|
|
|
|
|
|
|
|
rm.packageRefLock.Lock()
|
|
|
|
defer rm.packageRefLock.Unlock()
|
|
|
|
|
|
|
|
// See if this package is already registered, else add it to the map.
|
|
|
|
for uuid, candidate := range rm.packageRefMap {
|
|
|
|
if reflect.DeepEqual(candidate, pi) {
|
2024-06-24 14:59:18 +00:00
|
|
|
logging.V(5).Infof("ResourceMonitor.RegisterPackage(%v) matched %s", req, uuid)
|
|
|
|
return &pulumirpc.RegisterPackageResponse{Ref: uuid}, nil
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Wasn't found, so add it to the map.
|
|
|
|
uuid := uuid.New().String()
|
|
|
|
rm.packageRefMap[uuid] = pi
|
2024-06-24 14:59:18 +00:00
|
|
|
logging.V(5).Infof("ResourceMonitor.RegisterPackage(%v) created %s", req, uuid)
|
|
|
|
return &pulumirpc.RegisterPackageResponse{Ref: uuid}, nil
|
|
|
|
}
|
|
|
|
|
2019-04-12 20:25:01 +00:00
|
|
|
func (rm *resmon) SupportsFeature(ctx context.Context,
|
2023-03-03 16:36:39 +00:00
|
|
|
req *pulumirpc.SupportsFeatureRequest,
|
|
|
|
) (*pulumirpc.SupportsFeatureResponse, error) {
|
2019-04-12 20:25:01 +00:00
|
|
|
hasSupport := false
|
|
|
|
|
2023-01-23 14:57:51 +00:00
|
|
|
// NOTE: DO NOT ADD ANY MORE FEATURES TO THIS LIST
|
|
|
|
//
|
|
|
|
// Context: https://github.com/pulumi/pulumi-dotnet/pull/88#pullrequestreview-1265714090
|
|
|
|
//
|
|
|
|
// We shouldn't add any more features to this list, copying strings around codebases is prone to bugs.
|
|
|
|
// Rather than adding a new feature here, set up a new SupportsFeatureV2 method that takes a gRPC enum
|
|
|
|
// instead. That can then be safely code generated out to each language with no risk of typos.
|
|
|
|
//
|
|
|
|
// These old features have to stay as is because old engines DO support them, but wouldn't support the new
|
|
|
|
// SupportsFeatureV2 method.
|
|
|
|
|
2019-04-12 20:25:01 +00:00
|
|
|
switch req.Id {
|
2020-11-07 02:56:23 +00:00
|
|
|
case "secrets":
|
2019-04-12 20:25:01 +00:00
|
|
|
hasSupport = true
|
2020-11-25 18:43:46 +00:00
|
|
|
case "resourceReferences":
|
Clean up deployment options (#16357)
# Description
There are a number of parts of the deployment process that require
context about and configuration for the operation being executed. For
instance:
* Source evaluation -- evaluating programs in order to emit resource
registrations
* Step generation -- processing resource registrations in order to
generate steps (create this, update that, delete the other, etc.)
* Step execution -- executing steps in order to action a deployment.
Presently, these pieces all take some form of `Options` struct or pass
explicit arguments. This is problematic for a couple of reasons:
* It could be possible for different parts of the codebase to end up
operating in different contexts/with different configurations, whether
due to different values being passed explicitly or due to missed
copying/instantiation.
* Some parts need less context/configuration than others, but still
accept full `Options`, making it hard to discern what information is
actually necessary in any given part of the process.
This commit attempts to clean things up by moving deployment options
directly into the `Deployment` itself. Since step generation and
execution already refer to a `Deployment`, they get a consistent view of
the options for free. For source evaluation, we introduce an
`EvalSourceOptions` struct for configuring just the options necessary
there. At the top level, the engine configures a single set of options
to flow through the deployment steps later on.
As part of this work, a few other things have been changed:
* Preview/dry-run parameters have been incorporated into options. This
lets us lop off another argument and mitigate a bit of "boolean
blindness". We don't appear to flip this flag within a deployment
process (indeed, all options seem to be immutable) and so having it as a
separate flag doesn't seem to buy us anything.
* Several methods representing parts of the deployment process have lost
arguments in favour of state that is already being carried on (or can be
carried on) their receiver. For instance, `deployment.run` no longer
takes actions or preview configuration. While doing so means that a
`deployment` could be run multiple times with different actions/preview
arguments, we don't currently exploit this fact anywhere, so moving this
state to the point of construction both simplifies things considerably
and reduces the possibility for error (e.g. passing different values of
`preview` when instantiating a `deployment` and subsequently calling
`run`).
* Event handlers have been split out of the options object and attached
to `Deployment` separately. This means we can talk about options at a
higher level without having to `nil` out/worry about this field and
mutate it correctly later on.
* Options are no longer mutated during deployment. Presently there
appears to be only one case of this -- when handling `ContinueOnError`
in the presence of `IgnoreChanges` (e.g. when performing a refresh).
This case has been refactored so that the mutation is no longer
necessary.
# Notes
* This change is in preparation for #16146, where we'd like to add an
environment variable to control behaviour and having a single unified
`Options` struct would make it easier to pass this configuration down
without introducing (more) global state into deployments. Indeed, this
change should make it easier to factor global state into `Options` so
that it can be controlled and tested more easily/is less susceptible to
bugs, race conditions, etc.
* I've tweaked/extended some comments while I'm here and have learned
things the hard way (e.g. `Refresh` vs `isRefresh`). Feedback welcome on
this if we'd rather not conflate.
* This change does mean that if in future we wanted e.g. to be able to
run a `Deployment` in multiple different ways with multiple sets of
actions, we'd have to refactor. Pushing state to the point of object
construction reduces the flexibility of the code. However, since we are
not presently using that flexibility (nor is there an obvious [to my
mind] use case in the near future), this seems like a good trade-off to
guard against bugs/make it simpler to move that state around.
* I've left some other review comments in the code around
questions/changes that might be a bad idea; happy to receive feedback on
it all though!
2024-06-11 13:37:57 +00:00
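The shape of this refactor can be sketched abstractly; the type and field names below are illustrative stand-ins, not the engine's actual `Deployment` or `Options` types.

```go
package main

import "fmt"

// options stands in for the unified, immutable set of deployment options.
type options struct {
	DryRun                    bool
	DisableResourceReferences bool
	DisableOutputValues       bool
}

// deployment carries its options (and, separately, its event handlers), so
// step generation and step execution see one consistent configuration
// instead of each phase receiving its own copy.
type deployment struct {
	opts   options
	events func(msg string) // stand-in for the split-out event handlers
}

func (d *deployment) run() {
	if d.opts.DryRun {
		d.events("previewing only")
		return
	}
	d.events("applying changes")
}

func main() {
	d := &deployment{
		opts:   options{DryRun: true},
		events: func(msg string) { fmt.Println(msg) },
	}
	d.run()
}
```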
|
|
|
hasSupport = !rm.opts.DisableResourceReferences
|
2021-09-15 21:16:00 +00:00
|
|
|
case "outputValues":
|
|
|
|
hasSupport = !rm.opts.DisableOutputValues
|
2022-09-22 17:13:55 +00:00
|
|
|
case "aliasSpecs":
|
|
|
|
hasSupport = true
|
2022-10-31 21:36:29 +00:00
|
|
|
case "deletedWith":
|
|
|
|
hasSupport = true
|
Engine support for remote transforms (#15290)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support to the engine for "remote transformations".
A transform is "remote" because it is being invoked via the engine on
receiving a resource registration, rather than being ran locally in
process before sending a resource registration. These transforms can
also span multiple process boundaries, e.g. a transform function in a
user program, then a transform function in a component library, both
running for a resource registered by another component library.
The underlying new feature here is the idea of a `Callback`. The
expectation is we're going to use callbacks for multiple features so
these are _not_ defined in terms of transformations. A callback is an
untyped byte array (usually will be a protobuf message), plus an address
to define which server should be invoked to do the callback, and a token
to identify it.
A language sdk can start up and serve a `Callbacks` service, keep a
mapping of tokens to in-process functions (currently just using UUID's
for this), and then pass that service address and token to the engine to
be invoked later on.
The engine uses these callbacks to track transformation callbacks per
resource, and on a new resource registration invokes each relevant
callback with the resource properties and options. The returned properties
and options are passed to the next relevant transform callback until all
have been called and the engine has the final resource state and options
to use.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-02-21 16:30:46 +00:00
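A minimal sketch of the callback idea described above (an address, a token, and opaque bytes, with the SDK keeping a token-to-function map); the types here are illustrative, not the actual `pulumirpc` messages, and the invocation is in-process rather than over gRPC.

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

// callback identifies a function served by some process: where to dial
// (Target) and which registered function to run there (Token). The payload
// is just bytes, typically a serialized protobuf message.
type callback struct {
	Target string
	Token  string
}

// callbackServer stands in for the language SDK side: it keeps a map of
// tokens to in-process functions and hands out callback descriptors.
type callbackServer struct {
	addr  string
	funcs map[string]func(args []byte) ([]byte, error)
}

func (s *callbackServer) register(fn func([]byte) ([]byte, error)) callback {
	token := uuid.New().String()
	s.funcs[token] = fn
	return callback{Target: s.addr, Token: token}
}

// invoke is what the engine conceptually does for each relevant transform:
// call the identified function with the current state and take back the
// (possibly modified) state for the next transform in the chain.
func (s *callbackServer) invoke(cb callback, args []byte) ([]byte, error) {
	fn, ok := s.funcs[cb.Token]
	if !ok {
		return nil, fmt.Errorf("unknown callback token %q", cb.Token)
	}
	return fn(args)
}

func main() {
	srv := &callbackServer{
		addr:  "127.0.0.1:54321",
		funcs: map[string]func([]byte) ([]byte, error){},
	}
	cb := srv.register(func(args []byte) ([]byte, error) {
		return append(args, []byte(" (transformed)")...), nil
	})
	out, err := srv.invoke(cb, []byte("resource properties"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```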
|
|
|
case "transforms":
|
|
|
|
hasSupport = true
|
2024-07-11 16:01:44 +00:00
|
|
|
case "invokeTransforms":
|
|
|
|
hasSupport = true
|
2024-09-13 14:32:53 +00:00
|
|
|
case "parameterization":
|
|
|
|
// N.B. This serves a dual purpose of also indicating that package references are supported.
|
|
|
|
hasSupport = true
|
2019-04-12 20:25:01 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
logging.V(5).Infof("ResourceMonitor.SupportsFeature(id: %s) = %t", req.Id, hasSupport)
|
|
|
|
|
|
|
|
return &pulumirpc.SupportsFeatureResponse{
|
|
|
|
HasSupport: hasSupport,
|
|
|
|
}, nil
|
|
|
|
}
|
|
|
|
|
2017-09-20 00:23:10 +00:00
|
|
|
// Invoke performs an invocation of a member located in a resource provider.
|
2022-04-14 09:59:46 +00:00
|
|
|
func (rm *resmon) Invoke(ctx context.Context, req *pulumirpc.ResourceInvokeRequest) (*pulumirpc.InvokeResponse, error) {
|
2024-07-11 16:01:44 +00:00
|
|
|
// Fetch the token.
|
2017-09-20 00:23:10 +00:00
|
|
|
tok := tokens.ModuleMember(req.GetTok())
|
|
|
|
|
2017-12-15 15:22:49 +00:00
|
|
|
label := fmt.Sprintf("ResourceMonitor.Invoke(%s)", tok)
|
2017-09-20 00:23:10 +00:00
|
|
|
args, err := plugin.UnmarshalProperties(
|
2019-04-12 21:29:08 +00:00
|
|
|
req.GetArgs(), plugin.MarshalOptions{
|
2024-05-02 11:32:54 +00:00
|
|
|
Label: label,
|
|
|
|
KeepUnknowns: true,
|
|
|
|
KeepSecrets: true,
|
|
|
|
KeepResources: true,
|
|
|
|
WorkingDirectory: rm.workingDirectory,
|
2019-04-12 21:29:08 +00:00
|
|
|
})
|
2017-09-20 00:23:10 +00:00
|
|
|
if err != nil {
|
2021-11-13 02:37:17 +00:00
|
|
|
return nil, fmt.Errorf("failed to unmarshal %v args: %w", tok, err)
|
2017-09-20 00:23:10 +00:00
|
|
|
}
|
|
|
|
|
2024-07-11 16:01:44 +00:00
|
|
|
opts := &pulumirpc.TransformInvokeOptions{
|
|
|
|
Provider: req.GetProvider(),
|
|
|
|
Version: req.GetVersion(),
|
|
|
|
PluginDownloadUrl: req.GetPluginDownloadURL(),
|
|
|
|
PluginChecksums: req.GetPluginChecksums(),
|
|
|
|
}
|
|
|
|
|
|
|
|
// Lock the invoke transforms and run all of those before loading the provider.
|
|
|
|
err = func() error {
|
|
|
|
// Function exists to scope the lock
|
|
|
|
rm.stackInvokeTransformsLock.Lock()
|
|
|
|
defer rm.stackInvokeTransformsLock.Unlock()
|
|
|
|
|
|
|
|
for _, transform := range rm.stackInvokeTransforms {
|
|
|
|
newArgs, newOpts, err := transform(ctx, string(tok), args, opts)
|
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
args = newArgs
|
|
|
|
opts = newOpts
|
|
|
|
}
|
|
|
|
return nil
|
|
|
|
}()
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
// Load up the resource provider if necessary.
|
|
|
|
providerReq, err := parseProviderRequest(
|
|
|
|
tok.Package(), opts.Version, opts.PluginDownloadUrl, opts.PluginChecksums, nil)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2024-07-24 17:23:13 +00:00
|
|
|
|
|
|
|
packageRef := req.GetPackageRef()
|
|
|
|
if packageRef != "" {
|
|
|
|
var has bool
|
|
|
|
providerReq, has = rm.packageRefMap[packageRef]
|
|
|
|
if !has {
|
|
|
|
return nil, fmt.Errorf("unknown provider package '%v'", packageRef)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2024-07-11 16:01:44 +00:00
|
|
|
prov, err := rm.getProviderFromSource(rm.providers, rm.defaultProviders, providerReq, opts.Provider, tok)
|
|
|
|
if err != nil {
|
|
|
|
return nil, fmt.Errorf("Invoke: %w", err)
|
|
|
|
}
|
|
|
|
|
2017-09-20 00:23:10 +00:00
|
|
|
// Do the invoke and then return the arguments.
|
2018-05-15 22:28:00 +00:00
|
|
|
logging.V(5).Infof("ResourceMonitor.Invoke received: tok=%v #args=%v", tok, len(args))
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
Normalize methods on plugin.Provider to the form:
```go
Method(context.Context, MethodRequest) (MethodResponse, error)
```
This provides a more consistent and forwards compatible interface for
each of our methods.
---
I'm motivated to work on this because the bridge maintains a copy of
this interface: `ProviderWithContext`. This doubles the pain of dealing
with any breaking change and this PR would allow me to remove the extra
interface. I'm willing to fix consumers of `plugin.Provider` in
`pulumi/pulumi`, but I wanted to make sure that we would be willing to
merge this PR if I get it green.
2024-06-07 19:47:49 +00:00
|
|
|
resp, err := prov.Invoke(ctx, plugin.InvokeRequest{
|
|
|
|
Tok: tok,
|
|
|
|
Args: args,
|
|
|
|
})
|
2017-09-20 00:23:10 +00:00
|
|
|
if err != nil {
|
2021-11-13 02:37:17 +00:00
|
|
|
return nil, fmt.Errorf("invocation of %v returned an error: %w", tok, err)
|
2017-09-20 00:23:10 +00:00
|
|
|
}
|
2022-11-16 19:21:53 +00:00
|
|
|
|
|
|
|
// Respect `AcceptResources` unless `tok` is for the built-in `pulumi:pulumi:getResource` function,
|
|
|
|
// in which case always keep resources to maintain the original behavior for older SDKs that are not
|
|
|
|
// setting the `AcceptResources` flag.
|
|
|
|
keepResources := req.GetAcceptResources()
|
|
|
|
if tok == "pulumi:pulumi:getResource" {
|
|
|
|
keepResources = true
|
|
|
|
}
|
|
|
|
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
2024-06-07 19:47:49 +00:00
|
|
|
mret, err := plugin.MarshalProperties(resp.Properties, plugin.MarshalOptions{
|
2024-05-02 11:32:54 +00:00
|
|
|
Label: label,
|
|
|
|
KeepUnknowns: true,
|
|
|
|
KeepSecrets: true,
|
|
|
|
KeepResources: keepResources,
|
|
|
|
WorkingDirectory: rm.workingDirectory,
|
2019-04-12 21:29:08 +00:00
|
|
|
})
|
2017-09-20 00:23:10 +00:00
|
|
|
if err != nil {
|
2021-11-13 02:37:17 +00:00
|
|
|
return nil, fmt.Errorf("failed to marshal %v return: %w", tok, err)
|
2017-09-20 00:23:10 +00:00
|
|
|
}
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
2024-06-07 19:47:49 +00:00
|
|
|
chkfails := slice.Prealloc[*pulumirpc.CheckFailure](len(resp.Failures))
|
|
|
|
for _, failure := range resp.Failures {
|
2018-04-05 16:48:09 +00:00
|
|
|
chkfails = append(chkfails, &pulumirpc.CheckFailure{
|
2017-09-20 00:23:10 +00:00
|
|
|
Property: string(failure.Property),
|
|
|
|
Reason: failure.Reason,
|
|
|
|
})
|
|
|
|
}
|
2018-04-05 16:48:09 +00:00
|
|
|
return &pulumirpc.InvokeResponse{Return: mret, Failures: chkfails}, nil
|
|
|
|
}
|
|
|
|
|
2021-06-30 14:48:56 +00:00
|
|
|
// Call dynamically executes a method in the provider associated with a component resource.
|
2024-02-08 13:16:23 +00:00
|
|
|
func (rm *resmon) Call(ctx context.Context, req *pulumirpc.ResourceCallRequest) (*pulumirpc.CallResponse, error) {
|
2021-06-30 14:48:56 +00:00
|
|
|
// Fetch the token and load up the resource provider if necessary.
|
|
|
|
tok := tokens.ModuleMember(req.GetTok())
|
2023-09-11 15:54:07 +00:00
|
|
|
providerReq, err := parseProviderRequest(
|
|
|
|
tok.Package(), req.GetVersion(),
|
2024-06-10 17:28:47 +00:00
|
|
|
req.GetPluginDownloadURL(), req.GetPluginChecksums(), nil)
|
2021-06-30 14:48:56 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2024-07-26 15:36:53 +00:00
|
|
|
|
|
|
|
packageRef := req.GetPackageRef()
|
|
|
|
if packageRef != "" {
|
|
|
|
var has bool
|
|
|
|
providerReq, has = rm.packageRefMap[packageRef]
|
|
|
|
if !has {
|
|
|
|
return nil, fmt.Errorf("unknown provider package '%v'", packageRef)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
RegisterProvider engine work (#16241)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support for a `RegisterProvider` method to the engine. This
allows an SDK process to send the information for a package (name,
version, url, etc, and parameter in the future) and get back a UUID for
that run of the engine that can be used to re-lookup that information.
That allows the SDK to just send the `provider` field in
`RegisterResourceRequest` instead of filling in `version`,
`pluginDownloadURL` etc (and importantly not having to fill in
`parameter` for parameterised providers, which could be a large amount
of data).
This doesn't yet update any of the SDKs to use this method. We can do
that piecemeal, but it will require core SDK and codegen changes for
each language.
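As a rough illustration of the bookkeeping this enables, the sketch below registers package information once and hands back an opaque reference that later requests can carry instead of the full version/URL/parameter payload. The names are hypothetical; `providerRequest` is a stand-in for the real package information, not the pulumirpc types.
```go
// Hypothetical sketch only: providerRequest stands in for the real package
// information; the actual request/response types live in pulumirpc.
package packagerefs

import (
	"sync"

	"github.com/google/uuid"
)

type providerRequest struct {
	Name              string
	Version           string
	PluginDownloadURL string
}

// registry maps opaque references back to the package information that was
// registered for them, much like the monitor's packageRefMap.
type registry struct {
	mu   sync.Mutex
	refs map[string]providerRequest
}

func newRegistry() *registry {
	return &registry{refs: map[string]providerRequest{}}
}

// register records the package information once and returns an opaque
// reference the SDK can attach to later resource registrations.
func (r *registry) register(req providerRequest) string {
	r.mu.Lock()
	defer r.mu.Unlock()
	ref := uuid.NewString()
	r.refs[ref] = req
	return ref
}

// lookup resolves a previously returned reference; a missing entry mirrors
// the "unknown provider package" error returned by the monitor.
func (r *registry) lookup(ref string) (providerRequest, bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	req, ok := r.refs[ref]
	return req, ok
}
```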
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-05-23 06:16:59 +00:00
|
|
|
prov, err := rm.getProviderFromSource(rm.providers, rm.defaultProviders, providerReq, req.GetProvider(), tok)
|
2021-06-30 14:48:56 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
label := fmt.Sprintf("ResourceMonitor.Call(%s)", tok)
|
|
|
|
|
|
|
|
args, err := plugin.UnmarshalProperties(
|
|
|
|
req.GetArgs(), plugin.MarshalOptions{
|
2024-02-06 16:46:31 +00:00
|
|
|
Label: label,
|
|
|
|
KeepUnknowns: true,
|
|
|
|
KeepSecrets: true,
|
|
|
|
KeepResources: true,
|
|
|
|
KeepOutputValues: true,
|
|
|
|
UpgradeToOutputValues: true,
|
2024-05-02 11:32:54 +00:00
|
|
|
WorkingDirectory: rm.workingDirectory,
|
2021-06-30 14:48:56 +00:00
|
|
|
})
|
|
|
|
if err != nil {
|
2021-11-13 02:37:17 +00:00
|
|
|
return nil, fmt.Errorf("failed to unmarshal %v args: %w", tok, err)
|
2021-06-30 14:48:56 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
argDependencies := map[resource.PropertyKey][]resource.URN{}
|
|
|
|
for name, deps := range req.GetArgDependencies() {
|
|
|
|
urns := make([]resource.URN, len(deps.Urns))
|
|
|
|
for i, urn := range deps.Urns {
|
2023-07-18 15:10:23 +00:00
|
|
|
urn, err := resource.ParseURN(urn)
|
|
|
|
if err != nil {
|
|
|
|
return nil, rpcerror.New(codes.InvalidArgument, fmt.Sprintf("invalid dependency on argument %d URN: %s", i, err))
|
|
|
|
}
|
|
|
|
urns[i] = urn
|
2021-06-30 14:48:56 +00:00
|
|
|
}
|
|
|
|
argDependencies[resource.PropertyKey(name)] = urns
|
|
|
|
}
|
|
|
|
|
2024-02-21 09:15:38 +00:00
|
|
|
// If we have output values we can add the dependencies from them to the args dependencies map we send to the provider.
|
|
|
|
for key, output := range args {
|
2024-03-15 13:01:45 +00:00
|
|
|
argDependencies[key] = extendOutputDependencies(argDependencies[key], output)
|
2024-02-21 09:15:38 +00:00
|
|
|
}
|
|
|
|
|
2021-06-30 14:48:56 +00:00
|
|
|
info := plugin.CallInfo{
|
|
|
|
Project: rm.constructInfo.Project,
|
|
|
|
Stack: rm.constructInfo.Stack,
|
|
|
|
Config: rm.constructInfo.Config,
|
|
|
|
DryRun: rm.constructInfo.DryRun,
|
|
|
|
Parallel: rm.constructInfo.Parallel,
|
|
|
|
MonitorAddress: rm.constructInfo.MonitorAddress,
|
|
|
|
}
|
|
|
|
options := plugin.CallOptions{
|
|
|
|
ArgDependencies: argDependencies,
|
|
|
|
}
|
|
|
|
|
|
|
|
// Do the call and then return the arguments.
|
|
|
|
logging.V(5).Infof(
|
|
|
|
"ResourceMonitor.Call received: tok=%v #args=%v #info=%v #options=%v", tok, len(args), info, options)
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
2024-06-07 19:47:49 +00:00
|
|
|
ret, err := prov.Call(ctx, plugin.CallRequest{
|
|
|
|
Tok: tok,
|
|
|
|
Args: args,
|
|
|
|
Info: info,
|
|
|
|
Options: options,
|
|
|
|
})
|
2021-06-30 14:48:56 +00:00
|
|
|
if err != nil {
|
2024-11-07 09:56:04 +00:00
|
|
|
var rpcError error
|
|
|
|
rpcError, ok := rpcerror.FromError(err)
|
|
|
|
if !ok {
|
|
|
|
rpcError = err
|
|
|
|
}
|
|
|
|
|
|
|
|
message := errorToMessage(rpcError, args)
|
|
|
|
rm.diagnostics.Errorf(diag.GetCallFailedError(), tok, message)
|
|
|
|
|
|
|
|
rm.abortChan <- true
|
|
|
|
<-rm.cancel
|
2021-11-13 02:37:17 +00:00
|
|
|
return nil, fmt.Errorf("call of %v returned an error: %w", tok, err)
|
2021-06-30 14:48:56 +00:00
|
|
|
}
|
2024-02-14 08:15:24 +00:00
|
|
|
|
|
|
|
if ret.ReturnDependencies == nil {
|
|
|
|
ret.ReturnDependencies = map[resource.PropertyKey][]resource.URN{}
|
|
|
|
}
|
|
|
|
for k, v := range ret.Return {
|
2024-03-15 13:01:45 +00:00
|
|
|
ret.ReturnDependencies[k] = extendOutputDependencies(ret.ReturnDependencies[k], v)
|
2021-06-30 14:48:56 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
returnDependencies := map[string]*pulumirpc.CallResponse_ReturnDependencies{}
|
|
|
|
for name, deps := range ret.ReturnDependencies {
|
|
|
|
urns := make([]string, len(deps))
|
|
|
|
for i, urn := range deps {
|
|
|
|
urns[i] = string(urn)
|
|
|
|
}
|
|
|
|
returnDependencies[string(name)] = &pulumirpc.CallResponse_ReturnDependencies{Urns: urns}
|
|
|
|
}
|
|
|
|
|
2024-02-14 08:15:24 +00:00
|
|
|
mret, err := plugin.MarshalProperties(ret.Return, plugin.MarshalOptions{
|
2024-05-02 11:32:54 +00:00
|
|
|
Label: label,
|
|
|
|
KeepUnknowns: true,
|
|
|
|
KeepSecrets: true,
|
|
|
|
KeepResources: true,
|
|
|
|
WorkingDirectory: rm.workingDirectory,
|
2024-02-14 08:15:24 +00:00
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
return nil, fmt.Errorf("failed to marshal %v return: %w", tok, err)
|
|
|
|
}
|
|
|
|
|
2023-06-28 16:02:04 +00:00
|
|
|
chkfails := slice.Prealloc[*pulumirpc.CheckFailure](len(ret.Failures))
|
2021-06-30 14:48:56 +00:00
|
|
|
for _, failure := range ret.Failures {
|
|
|
|
chkfails = append(chkfails, &pulumirpc.CheckFailure{
|
|
|
|
Property: string(failure.Property),
|
|
|
|
Reason: failure.Reason,
|
|
|
|
})
|
|
|
|
}
|
|
|
|
return &pulumirpc.CallResponse{Return: mret, ReturnDependencies: returnDependencies, Failures: chkfails}, nil
|
|
|
|
}
|
|
|
|
|
2019-10-22 07:20:26 +00:00
|
|
|
func (rm *resmon) StreamInvoke(
|
2023-03-03 16:36:39 +00:00
|
|
|
req *pulumirpc.ResourceInvokeRequest, stream pulumirpc.ResourceMonitor_StreamInvokeServer,
|
|
|
|
) error {
|
2020-07-10 16:56:35 +00:00
|
|
|
tok := tokens.ModuleMember(req.GetTok())
|
|
|
|
label := fmt.Sprintf("ResourceMonitor.StreamInvoke(%s)", tok)
|
|
|
|
|
2023-09-11 15:54:07 +00:00
|
|
|
providerReq, err := parseProviderRequest(
|
|
|
|
tok.Package(), req.GetVersion(),
|
2024-06-10 17:28:47 +00:00
|
|
|
req.GetPluginDownloadURL(), req.GetPluginChecksums(), nil)
|
2020-07-10 16:56:35 +00:00
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
RegisterProvider engine work (#16241)
2024-05-23 06:16:59 +00:00
|
|
|
prov, err := rm.getProviderFromSource(rm.providers, rm.defaultProviders, providerReq, req.GetProvider(), tok)
|
2020-07-10 16:56:35 +00:00
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
args, err := plugin.UnmarshalProperties(
|
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
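To make the shape of that exchange concrete, here is a purely illustrative Go sketch of the data flowing into and out of such a construct call; the type and field names are invented for this example and do not mirror the actual pulumirpc schema.
```go
// Illustrative only: names are invented and do not mirror pulumirpc.
package constructsketch

// constructRequest is what the monitor hands to the plugin implementing the
// remote component: identity, inputs, options, and dependency information.
type constructRequest struct {
	Type      string              // component resource type token
	Name      string              // component resource name
	Parent    string              // parent URN, if any
	Options   map[string]any      // resource options (protect, providers, aliases, ...)
	Inputs    map[string]any      // input properties
	InputDeps map[string][]string // URNs each input property depends on
}

// constructResponse is what the plugin returns once the component and its
// children have been created.
type constructResponse struct {
	URN        string              // URN of the constructed component
	Outputs    map[string]any      // resolved output properties
	OutputDeps map[string][]string // URNs each output property depends on
}
```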
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
|
|
|
req.GetArgs(), plugin.MarshalOptions{
|
2024-05-02 11:32:54 +00:00
|
|
|
Label: label,
|
|
|
|
KeepUnknowns: true,
|
|
|
|
KeepSecrets: true,
|
|
|
|
KeepResources: true,
|
|
|
|
WorkingDirectory: rm.workingDirectory,
|
Initial support for remote component construction. (#5280)
2020-09-08 02:33:55 +00:00
|
|
|
})
|
2020-07-10 16:56:35 +00:00
|
|
|
if err != nil {
|
2021-11-13 02:37:17 +00:00
|
|
|
return fmt.Errorf("failed to unmarshal %v args: %w", tok, err)
|
2020-07-10 16:56:35 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
// Synchronously do the StreamInvoke and then return the arguments. This will block until the
|
|
|
|
// streaming operation completes!
|
|
|
|
logging.V(5).Infof("ResourceMonitor.StreamInvoke received: tok=%v #args=%v", tok, len(args))
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
2024-06-07 19:47:49 +00:00
|
|
|
resp, err := prov.StreamInvoke(context.TODO(), plugin.StreamInvokeRequest{
|
|
|
|
Tok: tok,
|
|
|
|
Args: args,
|
|
|
|
OnNext: func(event resource.PropertyMap) error {
|
|
|
|
mret, err := plugin.MarshalProperties(event, plugin.MarshalOptions{
|
|
|
|
Label: label,
|
|
|
|
KeepUnknowns: true,
|
|
|
|
KeepResources: req.GetAcceptResources(),
|
|
|
|
WorkingDirectory: rm.workingDirectory,
|
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
return fmt.Errorf("failed to marshal return: %w", err)
|
|
|
|
}
|
2020-07-10 16:56:35 +00:00
|
|
|
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
2024-06-07 19:47:49 +00:00
|
|
|
return stream.Send(&pulumirpc.InvokeResponse{Return: mret})
|
|
|
|
},
|
2020-07-10 16:56:35 +00:00
|
|
|
})
|
|
|
|
if err != nil {
|
2021-11-13 02:37:17 +00:00
|
|
|
return fmt.Errorf("streaming invocation of %v returned an error: %w", tok, err)
|
2020-07-10 16:56:35 +00:00
|
|
|
}
|
|
|
|
|
Normalize plugin.Provider methods to (Context, Request) -> (Response, error) (#16302)
2024-06-07 19:47:49 +00:00
|
|
|
chkfails := slice.Prealloc[*pulumirpc.CheckFailure](len(resp.Failures))
|
|
|
|
for _, failure := range resp.Failures {
|
2020-07-10 16:56:35 +00:00
|
|
|
chkfails = append(chkfails, &pulumirpc.CheckFailure{
|
|
|
|
Property: string(failure.Property),
|
|
|
|
Reason: failure.Reason,
|
|
|
|
})
|
|
|
|
}
|
|
|
|
|
|
|
|
if len(chkfails) > 0 {
|
|
|
|
return stream.Send(&pulumirpc.InvokeResponse{Failures: chkfails})
|
|
|
|
}
|
|
|
|
return nil
|
2019-10-22 07:20:26 +00:00
|
|
|
}
|
|
|
|
|
2018-04-05 16:48:09 +00:00
|
|
|
// ReadResource reads the current state associated with a resource from its provider plugin.
|
|
|
|
func (rm *resmon) ReadResource(ctx context.Context,
|
2023-03-03 16:36:39 +00:00
|
|
|
req *pulumirpc.ReadResourceRequest,
|
|
|
|
) (*pulumirpc.ReadResourceResponse, error) {
|
2018-04-05 16:48:09 +00:00
|
|
|
// Read the basic inputs necessary to identify the plugin.
|
2018-09-07 22:19:18 +00:00
|
|
|
t, err := tokens.ParseTypeToken(req.GetType())
|
|
|
|
if err != nil {
|
|
|
|
return nil, rpcerror.New(codes.InvalidArgument, err.Error())
|
|
|
|
}
|
|
|
|
|
2023-11-20 08:59:00 +00:00
|
|
|
name := req.GetName()
|
2023-07-18 15:10:23 +00:00
|
|
|
parent, err := resource.ParseOptionalURN(req.GetParent())
|
|
|
|
if err != nil {
|
|
|
|
return nil, rpcerror.New(codes.InvalidArgument, fmt.Sprintf("invalid parent URN: %s", err))
|
|
|
|
}
|
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior because it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
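As a rough sketch of what a provider reference looks like as a value, the following illustrative Go snippet joins and splits a URN/ID pair. The engine's real helpers live elsewhere in the codebase, so treat this only as the shape described above, not the implementation.
```go
// Minimal sketch of a provider reference as "URN::ID"; illustrative only.
package providerref

import (
	"fmt"
	"strings"
)

// format joins a provider's URN and ID into a single reference string.
func format(urn, id string) string {
	return urn + "::" + id
}

// parse splits a reference back into URN and ID, using the last "::" so the
// "::" separators inside the URN itself are left untouched.
func parse(ref string) (urn, id string, err error) {
	i := strings.LastIndex(ref, "::")
	if i < 0 {
		return "", "", fmt.Errorf("%q is not a provider reference", ref)
	}
	return ref[:i], ref[i+2:], nil
}
```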
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
|
|
|
|
|
|
|
provider := req.GetProvider()
|
|
|
|
if !providers.IsProviderType(t) && provider == "" {
|
2023-09-11 15:54:07 +00:00
|
|
|
providerReq, err := parseProviderRequest(
|
|
|
|
t.Package(), req.GetVersion(),
|
2024-06-10 17:28:47 +00:00
|
|
|
req.GetPluginDownloadURL(), req.GetPluginChecksums(), nil)
|
2019-04-17 18:25:02 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2024-07-26 10:03:18 +00:00
|
|
|
|
|
|
|
packageRef := req.GetPackageRef()
|
|
|
|
if packageRef != "" {
|
|
|
|
var has bool
|
|
|
|
providerReq, has = rm.packageRefMap[packageRef]
|
|
|
|
if !has {
|
|
|
|
return nil, fmt.Errorf("unknown provider package '%v'", packageRef)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-04-17 18:25:02 +00:00
|
|
|
ref, provErr := rm.defaultProviders.getDefaultProviderRef(providerReq)
|
2018-09-07 22:19:18 +00:00
|
|
|
if provErr != nil {
|
|
|
|
return nil, provErr
|
2022-02-28 23:33:45 +00:00
|
|
|
} else if providers.IsDenyDefaultsProvider(ref) {
|
|
|
|
msg := diag.GetDefaultProviderDenied("Read").Message
|
|
|
|
return nil, fmt.Errorf(msg, req.GetType(), t)
|
Implement first-class providers. (#1695)
2018-08-07 00:50:29 +00:00
|
|
|
}
|
|
|
|
provider = ref.String()
|
2018-04-05 16:48:09 +00:00
|
|
|
}
|
|
|
|
|
2018-04-07 14:52:10 +00:00
|
|
|
id := resource.ID(req.GetId())
|
Implement first-class providers. (#1695)
2018-08-07 00:50:29 +00:00
|
|
|
label := fmt.Sprintf("ResourceMonitor.ReadResource(%s, %s, %s, %s)", id, t, name, provider)
|
2023-06-28 16:02:04 +00:00
|
|
|
deps := slice.Prealloc[resource.URN](len(req.GetDependencies()))
|
2018-08-03 21:06:00 +00:00
|
|
|
for _, depURN := range req.GetDependencies() {
|
2023-07-18 15:10:23 +00:00
|
|
|
urn, err := resource.ParseURN(depURN)
|
|
|
|
if err != nil {
|
|
|
|
return nil, rpcerror.New(codes.InvalidArgument, fmt.Sprintf("invalid dependency: %s", err))
|
|
|
|
}
|
|
|
|
deps = append(deps, urn)
|
2018-08-03 21:06:00 +00:00
|
|
|
}
|
2018-04-05 16:48:09 +00:00
|
|
|
|
2018-08-03 21:06:00 +00:00
|
|
|
props, err := plugin.UnmarshalProperties(req.GetProperties(), plugin.MarshalOptions{
|
2024-05-02 11:32:54 +00:00
|
|
|
Label: label,
|
|
|
|
KeepUnknowns: true,
|
|
|
|
KeepSecrets: true,
|
|
|
|
KeepResources: true,
|
|
|
|
WorkingDirectory: rm.workingDirectory,
|
2018-08-03 21:06:00 +00:00
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2018-04-07 14:52:10 +00:00
|
|
|
|
2023-06-28 16:02:04 +00:00
|
|
|
additionalSecretOutputs := slice.Prealloc[resource.PropertyKey](len(req.GetAdditionalSecretOutputs()))
|
2019-05-09 21:27:34 +00:00
|
|
|
for _, name := range req.GetAdditionalSecretOutputs() {
|
|
|
|
additionalSecretOutputs = append(additionalSecretOutputs, resource.PropertyKey(name))
|
2019-04-23 00:45:26 +00:00
|
|
|
}
|
|
|
|
|
2018-08-03 21:06:00 +00:00
|
|
|
event := &readResourceEvent{
|
2019-05-09 21:27:34 +00:00
|
|
|
id: id,
|
|
|
|
name: name,
|
|
|
|
baseType: t,
|
|
|
|
provider: provider,
|
|
|
|
parent: parent,
|
|
|
|
props: props,
|
|
|
|
dependencies: deps,
|
|
|
|
additionalSecretOutputs: additionalSecretOutputs,
|
[engine] Add support for source positions
These changes add support for passing source position information in
gRPC metadata and recording the source position that corresponds to a
resource registration in the statefile.
Enabling source position information in the resource model can provide
substantial benefits, including but not limited to:
- Better errors from the Pulumi CLI
- Go-to-definition for resources in state
- Editor integration for errors, etc. from `pulumi preview`
Source positions are (file, line) or (file, line, column) tuples
represented as URIs. The line and column are stored in the fragment
portion of the URI as "line(,column)?". The scheme of the URI and the
form of its path component depends on the context in which it is
generated or used:
- During an active update, the URI's scheme is `file` and paths are
absolute filesystem paths. This allows consumers to easily access
arbitrary files that are available on the host.
- In a statefile, the URI's scheme is `project` and paths are relative
to the project root. This allows consumers to resolve source positions
relative to the project file in different contexts irrespective of the
location of the project itself (e.g. given a project-relative path and
the URL of the project's root on GitHub, one can build a GitHub URL for
the source position).
During an update, source position information may be attached to gRPC
calls as "source-position" metadata. This allows arbitrary calls to be
associated with source positions without changes to their protobuf
payloads. Modifying the protobuf payloads is also a viable approach, but
is somewhat more invasive than attaching metadata, and requires changes
to every call signature.
Source positions should reflect the position in user code that initiated
a resource model operation (e.g. the source position passed with
`RegisterResource` for `pet` in the example above should be the source
position in `index.ts`, _not_ the source position in the Pulumi SDK). In
general, the Pulumi SDK should be able to infer the source position of
the resource registration, as the relationship between a resource
registration and its corresponding user code should be static per SDK.
Source positions in state files will be stored as a new `registeredAt`
property on each resource. This property is optional.
2023-06-29 18:41:19 +00:00
|
|
|
sourcePosition: rm.sourcePositions.getFromRequest(req),
|
2019-05-09 21:27:34 +00:00
|
|
|
done: make(chan *ReadResult),
|
2018-08-03 21:06:00 +00:00
|
|
|
}
|
|
|
|
select {
|
|
|
|
case rm.regReadChan <- event:
|
|
|
|
case <-rm.cancel:
|
|
|
|
logging.V(5).Infof("ResourceMonitor.ReadResource operation canceled, name=%s", name)
|
|
|
|
return nil, rpcerror.New(codes.Unavailable, "resource monitor shut down while sending resource registration")
|
2018-04-05 16:48:09 +00:00
|
|
|
}
|
2018-04-07 14:52:10 +00:00
|
|
|
|
2018-08-03 21:06:00 +00:00
|
|
|
// Now block waiting for the operation to finish.
|
|
|
|
var result *ReadResult
|
|
|
|
select {
|
|
|
|
case result = <-event.done:
|
|
|
|
case <-rm.cancel:
|
|
|
|
logging.V(5).Infof("ResourceMonitor.ReadResource operation canceled, name=%s", name)
|
|
|
|
return nil, rpcerror.New(codes.Unavailable, "resource monitor shut down while waiting on step's done channel")
|
|
|
|
}
|
|
|
|
|
2023-02-17 20:05:48 +00:00
|
|
|
contract.Assertf(result != nil, "ReadResource operation returned a nil result")
|
2018-08-03 21:06:00 +00:00
|
|
|
marshaled, err := plugin.MarshalProperties(result.State.Outputs, plugin.MarshalOptions{
|
2024-05-02 11:32:54 +00:00
|
|
|
Label: label,
|
|
|
|
KeepUnknowns: true,
|
|
|
|
KeepSecrets: req.GetAcceptSecrets(),
|
|
|
|
KeepResources: req.GetAcceptResources(),
|
|
|
|
WorkingDirectory: rm.workingDirectory,
|
2018-08-03 21:06:00 +00:00
|
|
|
})
|
|
|
|
if err != nil {
|
2021-11-13 02:37:17 +00:00
|
|
|
return nil, fmt.Errorf("failed to marshal %s return state: %w", result.State.URN, err)
|
2018-08-03 21:06:00 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return &pulumirpc.ReadResourceResponse{
|
|
|
|
Urn: string(result.State.URN),
|
|
|
|
Properties: marshaled,
|
|
|
|
}, nil
|
2017-09-20 00:23:10 +00:00
|
|
|
}
|
|
|
|
|
2024-07-11 16:01:44 +00:00
|
|
|
// Wrap the transform callback so the engine can call the callback server, which will then execute the function. The
|
|
|
|
// wrapper takes care of all the necessary marshalling and unmarshalling.
|
Engine support for remote transforms (#15290)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support to the engine for "remote transformations".
A transform is "remote" because it is being invoked via the engine on
receiving a resource registration, rather than being run locally in
process before sending a resource registration. These transforms can
also span multiple process boundaries, e.g. a transform function in a
user program, then a transform function in a component library, both
running for a resource registered by another component library.
The underlying new feature here is the idea of a `Callback`. The
expectation is we're going to use callbacks for multiple features so
these are _not_ defined in terms of transformations. A callback is an
untyped byte array (usually will be a protobuf message), plus an address
to define which server should be invoked to do the callback, and a token
to identify it.
A language SDK can start up and serve a `Callbacks` service, keep a
mapping of tokens to in-process functions (currently just using UUIDs
for this), and then pass that service address and token to the engine to
be invoked later on.
The engine uses these callbacks to track transformation callbacks per
resource, and on each new resource registration invokes each relevant
callback with the resource properties and options, having new properties
and options returned that are then passed to the next relevant transform
callback until all have been called and the engine has the final
resource state and options to use.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-02-21 16:30:46 +00:00
|
|
|
func (rm *resmon) wrapTransformCallback(cb *pulumirpc.Callback) (TransformFunction, error) {
|
|
|
|
client, err := rm.GetCallbacksClient(cb.Target)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
token := cb.Token
|
|
|
|
return func(
|
|
|
|
ctx context.Context, name, typ string, custom bool, parent resource.URN,
|
|
|
|
props resource.PropertyMap, opts *pulumirpc.TransformResourceOptions,
|
|
|
|
) (resource.PropertyMap, *pulumirpc.TransformResourceOptions, error) {
|
|
|
|
logging.V(5).Infof("Transform: name=%v type=%v custom=%v parent=%v props=%v opts=%v",
|
|
|
|
name, typ, custom, parent, props, opts)
|
|
|
|
|
|
|
|
mopts := plugin.MarshalOptions{
|
|
|
|
KeepUnknowns: true,
|
|
|
|
KeepSecrets: true,
|
|
|
|
KeepResources: true,
|
|
|
|
KeepOutputValues: true,
|
2024-05-02 11:32:54 +00:00
|
|
|
WorkingDirectory: rm.workingDirectory,
|
2024-02-21 16:30:46 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
mprops, err := plugin.MarshalProperties(props, mopts)
|
|
|
|
if err != nil {
|
|
|
|
return nil, nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
request, err := proto.Marshal(&pulumirpc.TransformRequest{
|
|
|
|
Name: name,
|
|
|
|
Type: typ,
|
|
|
|
Custom: custom,
|
|
|
|
Properties: mprops,
|
|
|
|
Options: opts,
|
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
return nil, nil, fmt.Errorf("marshaling request: %w", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
resp, err := client.Invoke(ctx, &pulumirpc.CallbackInvokeRequest{
|
|
|
|
Token: token,
|
|
|
|
Request: request,
|
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
logging.V(5).Infof("Transform callback error: %v", err)
|
|
|
|
return nil, nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
var response pulumirpc.TransformResponse
|
|
|
|
err = proto.Unmarshal(resp.Response, &response)
|
|
|
|
if err != nil {
|
|
|
|
return nil, nil, fmt.Errorf("unmarshaling response: %w", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
newOpts := opts
|
|
|
|
if response.Options != nil {
|
|
|
|
newOpts = response.Options
|
|
|
|
}
|
|
|
|
|
|
|
|
newProps := props
|
|
|
|
if response.Properties != nil {
|
|
|
|
newProps, err = plugin.UnmarshalProperties(response.Properties, mopts)
|
|
|
|
if err != nil {
|
|
|
|
return nil, nil, err
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
logging.V(5).Infof("Transform: props=%v opts=%v", newProps, newOpts)
|
|
|
|
|
|
|
|
return newProps, newOpts, nil
|
|
|
|
}, nil
|
|
|
|
}
|
|
|
|
|
2024-07-11 16:01:44 +00:00
|
|
|
// Wrap the invoke transform callback so the engine can call the callback server, which will then execute the function. The
|
|
|
|
// wrapper takes care of all the necessary marshalling and unmarshalling.
|
|
|
|
func (rm *resmon) wrapInvokeTransformCallback(cb *pulumirpc.Callback) (TransformInvokeFunction, error) {
|
|
|
|
client, err := rm.GetCallbacksClient(cb.Target)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
token := cb.Token
|
|
|
|
return func(
|
|
|
|
ctx context.Context, invokeToken string,
|
|
|
|
args resource.PropertyMap, opts *pulumirpc.TransformInvokeOptions,
|
|
|
|
) (resource.PropertyMap, *pulumirpc.TransformInvokeOptions, error) {
|
|
|
|
logging.V(5).Infof("Invoke transform: token=%v props=%v opts=%v",
|
|
|
|
invokeToken, args, opts)
|
|
|
|
|
|
|
|
margs := plugin.MarshalOptions{
|
|
|
|
KeepUnknowns: true,
|
|
|
|
KeepSecrets: true,
|
|
|
|
KeepResources: true,
|
|
|
|
KeepOutputValues: true,
|
|
|
|
WorkingDirectory: rm.workingDirectory,
|
|
|
|
}
|
|
|
|
|
|
|
|
mprops, err := plugin.MarshalProperties(args, margs)
|
|
|
|
if err != nil {
|
|
|
|
return nil, nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
var request []byte
|
|
|
|
request, err = proto.Marshal(&pulumirpc.TransformInvokeRequest{
|
|
|
|
Token: invokeToken,
|
|
|
|
Args: mprops,
|
|
|
|
Options: opts,
|
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
return nil, nil, fmt.Errorf("marshaling request: %w", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
resp, err := client.Invoke(ctx, &pulumirpc.CallbackInvokeRequest{
|
|
|
|
Token: token,
|
|
|
|
Request: request,
|
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
logging.V(5).Infof("Invoke transform callback error: %v", err)
|
|
|
|
return nil, nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
newOpts := opts
|
|
|
|
var newProps resource.PropertyMap
|
|
|
|
var response pulumirpc.TransformInvokeResponse
|
|
|
|
err = proto.Unmarshal(resp.Response, &response)
|
|
|
|
if err != nil {
|
|
|
|
return nil, nil, fmt.Errorf("unmarshaling response: %w", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
if response.Options != nil {
|
|
|
|
newOpts = response.Options
|
|
|
|
}
|
|
|
|
newProps = args
|
|
|
|
if response.Args != nil {
|
|
|
|
newProps, err = plugin.UnmarshalProperties(response.Args, margs)
|
|
|
|
if err != nil {
|
|
|
|
return nil, nil, err
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
logging.V(5).Infof("Invoke transform: props=%v opts=%v", newProps, newOpts)
|
|
|
|
|
|
|
|
return newProps, newOpts, nil
|
|
|
|
}, nil
|
|
|
|
}
|
|
|
|
|
2024-02-21 16:30:46 +00:00
|
|
|
func (rm *resmon) RegisterStackTransform(ctx context.Context, cb *pulumirpc.Callback) (*emptypb.Empty, error) {
|
|
|
|
rm.stackTransformsLock.Lock()
|
|
|
|
defer rm.stackTransformsLock.Unlock()
|
|
|
|
|
|
|
|
if cb.Target == "" {
|
|
|
|
return nil, errors.New("target must be specified")
|
|
|
|
}
|
|
|
|
|
|
|
|
wrapped, err := rm.wrapTransformCallback(cb)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
rm.stackTransforms = append(rm.stackTransforms, wrapped)
|
|
|
|
return &emptypb.Empty{}, nil
|
|
|
|
}
|
|
|
|
|
2024-07-11 16:01:44 +00:00
|
|
|
func (rm *resmon) RegisterStackInvokeTransform(ctx context.Context, cb *pulumirpc.Callback) (*emptypb.Empty, error) {
|
|
|
|
rm.stackInvokeTransformsLock.Lock()
|
|
|
|
defer rm.stackInvokeTransformsLock.Unlock()
|
|
|
|
|
|
|
|
if cb.Target == "" {
|
|
|
|
return nil, errors.New("target must be specified")
|
|
|
|
}
|
|
|
|
|
|
|
|
wrapped, err := rm.wrapInvokeTransformCallback(cb)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
rm.stackInvokeTransforms = append(rm.stackInvokeTransforms, wrapped)
|
|
|
|
return &emptypb.Empty{}, nil
|
|
|
|
}
|
|
|
|
|
2023-03-13 16:01:20 +00:00
|
|
|
// inheritFromParent returns a new goal that inherits from the given parent goal.
|
|
|
|
// Currently only inherits DeletedWith from parent.
|
|
|
|
func inheritFromParent(child resource.Goal, parent resource.Goal) *resource.Goal {
|
|
|
|
goal := child
|
|
|
|
if goal.DeletedWith == "" {
|
|
|
|
goal.DeletedWith = parent.DeletedWith
|
|
|
|
}
|
|
|
|
return &goal
|
|
|
|
}
|
|
|
|
|
2023-06-29 18:41:19 +00:00
|
|
|
type sourcePositions struct {
|
|
|
|
projectRoot string
|
|
|
|
}
|
|
|
|
|
|
|
|
func newSourcePositions(projectRoot string) *sourcePositions {
|
|
|
|
if projectRoot == "" {
|
|
|
|
projectRoot = "/"
|
|
|
|
} else {
|
|
|
|
contract.Assertf(filepath.IsAbs(projectRoot), "projectRoot is not an absolute path")
|
|
|
|
projectRoot = filepath.Clean(projectRoot)
|
|
|
|
}
|
|
|
|
return &sourcePositions{projectRoot: projectRoot}
|
|
|
|
}
|
|
|
|
|
|
|
|
func (s *sourcePositions) parseSourcePosition(raw *pulumirpc.SourcePosition) (string, error) {
|
|
|
|
if raw == nil {
|
|
|
|
return "", nil
|
|
|
|
}
|
|
|
|
|
|
|
|
if raw.Line <= 0 {
|
|
|
|
return "", fmt.Errorf("invalid line number %v", raw.Line)
|
|
|
|
}
|
|
|
|
|
|
|
|
col := ""
|
|
|
|
if raw.Column != 0 {
|
|
|
|
if raw.Column < 0 {
|
|
|
|
return "", fmt.Errorf("invalid column number %v", raw.Column)
|
|
|
|
}
|
|
|
|
col = "," + strconv.FormatInt(int64(raw.Column), 10)
|
|
|
|
}
|
|
|
|
|
|
|
|
posURL, err := url.Parse(raw.Uri)
|
|
|
|
if err != nil {
|
|
|
|
return "", err
|
|
|
|
}
|
|
|
|
if posURL.Scheme != "file" {
|
|
|
|
return "", fmt.Errorf("unrecognized scheme %q", posURL.Scheme)
|
|
|
|
}
|
|
|
|
|
|
|
|
file := filepath.FromSlash(posURL.Path)
|
|
|
|
if !filepath.IsAbs(file) {
|
2023-12-12 12:19:42 +00:00
|
|
|
return "", errors.New("source positions must include absolute paths")
|
2023-06-29 18:41:19 +00:00
|
|
|
}
|
|
|
|
rel, err := filepath.Rel(s.projectRoot, file)
|
|
|
|
if err != nil {
|
|
|
|
return "", fmt.Errorf("making relative path: %w", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
posURL.Scheme = "project"
|
|
|
|
posURL.Path = "/" + filepath.ToSlash(rel)
|
|
|
|
posURL.Fragment = fmt.Sprintf("%v%s", raw.Line, col)
|
|
|
|
|
|
|
|
return posURL.String(), nil
|
|
|
|
}
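// Illustrative example of the mapping above (hypothetical paths, not taken from a real
// run): a raw position with Uri "file:///home/user/proj/index.ts", Line 3 and Column 9,
// resolved against a project root of "/home/user/proj", renders as
// "project:///index.ts#3,9"; with Column left as 0 it renders as "project:///index.ts#3".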
|
|
|
|
|
|
|
|
// Allow getFromRequest to accept any gRPC request that has a source position (ReadResourceRequest,
|
|
|
|
// RegisterResourceRequest, ResourceInvokeRequest, and CallRequest).
|
|
|
|
type hasSourcePosition interface {
|
|
|
|
GetSourcePosition() *pulumirpc.SourcePosition
|
|
|
|
}
|
|
|
|
|
|
|
|
// getFromRequest returns any source position information from an incoming request.
|
|
|
|
func (s *sourcePositions) getFromRequest(req hasSourcePosition) string {
|
|
|
|
pos, err := s.parseSourcePosition(req.GetSourcePosition())
|
|
|
|
if err != nil {
|
|
|
|
logging.V(5).Infof("parsing source position %#v: %v", req.GetSourcePosition(), err)
|
|
|
|
return ""
|
|
|
|
}
|
|
|
|
return pos
|
|
|
|
}
|
|
|
|
|
Maintain alias compat for older Node.js SDKs on new CLIs
This change updates the engine to detect if a `RegisterResource` request
is coming from an older Node.js SDK that is using incorrect alias specs
and, if so, transforms the aliases to be correct. This allows us to
maintain compatibility for users who have upgraded their CLI but are
still using an older version of the Node.js SDK with incorrect alias
specs.
We detect if the request is from a Node.js SDK by looking at the gRPC
request's metadata headers, specifically looking at the "pulumi-runtime"
and "user-agent" headers.
First, if the request has a "pulumi-runtime" header with a value of
"nodejs", we know it's coming from the Node.js plugin. The Node.js
language plugin proxies gRPC calls from the Node.js SDK to the resource
monitor and the proxy now sets the "pulumi-runtime" header to "nodejs"
for `RegisterResource` calls.
Second, if the request has a "user-agent" header that starts with
"grpc-node-js/", we know it's coming from the Node.js SDK. This is the
case for inline programs in the automation API, which connects directly
to the resource monitor, rather than going through the language plugin's
proxy.
We can't just look at "user-agent", because in the proxy case it will
have a Go-specific "user-agent".
Updated Node.js SDKs set a new `aliasSpecs` field on the
`RegisterResource` request, which indicates that the alias specs are
correct, and no transforms are needed.
2023-05-31 22:37:59 +00:00
|
|
|
// requestFromNodeJS returns true if the request is coming from a Node.js language runtime
|
|
|
|
// or SDK. This is determined by checking if the request has a "pulumi-runtime" metadata
|
|
|
|
// header with a value of "nodejs". If no "pulumi-runtime" header is present, then it
|
|
|
|
// checks if the request has a "user-agent" metadata header that has a value that starts
|
|
|
|
// with "grpc-node-js/".
|
|
|
|
func requestFromNodeJS(ctx context.Context) bool {
|
|
|
|
if md, hasMetadata := metadata.FromIncomingContext(ctx); hasMetadata {
|
|
|
|
// Check for the "pulumi-runtime" header first.
|
|
|
|
// We'll always respect this header value when present.
|
|
|
|
if runtime, ok := md["pulumi-runtime"]; ok {
|
|
|
|
return len(runtime) == 1 && runtime[0] == "nodejs"
|
|
|
|
}
|
|
|
|
// Otherwise, check the "user-agent" header.
|
|
|
|
if ua, ok := md["user-agent"]; ok {
|
|
|
|
return len(ua) == 1 && strings.HasPrefix(ua[0], "grpc-node-js/")
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return false
|
|
|
|
}
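// Illustrative metadata (not captured from a real request):
//   pulumi-runtime: ["nodejs"]             -> true
//   user-agent:     ["grpc-node-js/1.8.0"]  -> true (no "pulumi-runtime" header present)
//   user-agent:     ["grpc-go/1.58.0"]      -> false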
|
|
|
|
|
|
|
|
// transformAliasForNodeJSCompat transforms the alias from the legacy Node.js values to properly specified values.
|
2024-01-23 17:44:53 +00:00
|
|
|
func transformAliasForNodeJSCompat(alias *pulumirpc.Alias) *pulumirpc.Alias {
|
|
|
|
switch a := alias.Alias.(type) {
|
|
|
|
case *pulumirpc.Alias_Spec_:
|
|
|
|
// The original implementation in the Node.js SDK did not specify aliases correctly:
|
|
|
|
//
|
|
|
|
// - It did not set NoParent when it should have, but instead set Parent to empty.
|
|
|
|
// - It set NoParent to true and left Parent empty when both the alias and resource had no Parent specified.
|
|
|
|
//
|
|
|
|
// To maintain compatibility with such versions of the Node.js SDK, we transform these incorrectly
|
|
|
|
// specified aliases into properly specified ones that work with this implementation of the engine:
|
|
|
|
//
|
|
|
|
// - { Parent: "", NoParent: false } -> { Parent: "", NoParent: true }
|
|
|
|
// - { Parent: "", NoParent: true } -> { Parent: "", NoParent: false }
|
|
|
|
spec := &pulumirpc.Alias_Spec{
|
|
|
|
Name: a.Spec.Name,
|
|
|
|
Type: a.Spec.Type,
|
|
|
|
Stack: a.Spec.Stack,
|
|
|
|
Project: a.Spec.Project,
|
|
|
|
}
|
|
|
|
|
|
|
|
switch p := a.Spec.Parent.(type) {
|
|
|
|
case *pulumirpc.Alias_Spec_ParentUrn:
|
|
|
|
if p.ParentUrn == "" {
|
|
|
|
spec.Parent = &pulumirpc.Alias_Spec_NoParent{NoParent: true}
|
|
|
|
} else {
|
|
|
|
spec.Parent = p
|
|
|
|
}
|
|
|
|
case *pulumirpc.Alias_Spec_NoParent:
|
|
|
|
if p.NoParent {
|
|
|
|
spec.Parent = nil
|
|
|
|
} else {
|
|
|
|
spec.Parent = p
|
|
|
|
}
|
|
|
|
default:
|
|
|
|
spec.Parent = &pulumirpc.Alias_Spec_NoParent{NoParent: true}
|
|
|
|
}
|
|
|
|
|
|
|
|
return &pulumirpc.Alias{
|
|
|
|
Alias: &pulumirpc.Alias_Spec_{
|
|
|
|
Spec: spec,
|
|
|
|
},
|
|
|
|
}
|
2023-05-31 22:37:59 +00:00
|
|
|
}
|
2024-01-23 17:44:53 +00:00
|
|
|
|
2023-05-31 22:37:59 +00:00
|
|
|
return alias
|
|
|
|
}
|
|
|
|
|
2024-06-21 08:55:17 +00:00
|
|
|
func (rm *resmon) resolveProvider(
|
|
|
|
provider string, providers map[string]string, parent resource.URN, pkg tokens.Package,
|
|
|
|
) string {
|
|
|
|
if provider != "" {
|
|
|
|
return provider
|
|
|
|
}
|
|
|
|
if prov, ok := providers[string(pkg)]; ok {
|
|
|
|
return prov
|
|
|
|
}
|
|
|
|
if parent != "" {
|
|
|
|
rm.componentProvidersLock.Lock()
|
|
|
|
defer rm.componentProvidersLock.Unlock()
|
|
|
|
if parentsProvider, ok := rm.componentProviders[parent]; ok {
|
|
|
|
if prov, ok := parentsProvider[string(pkg)]; ok {
|
|
|
|
return prov
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return ""
|
|
|
|
}
|
|
|
|
|
allow component providers to return more detailed error messages (#17306)
This is based on work from @lunaris and @EronWright, to allow us to
return better error messages from component providers.
The basic idea here is to allow attaching more error details on the RPC
layer, and turn errors into diagnostics *in the engine*, to avoid the
need to fix the error up in every SDK and pretty print it.
GRPC allows us to attach error details to the returned error to help us
with that. These details can be error details such as specified in the
`error_details` proto that comes with grpc, but can also take any other
shape. Currently we only support pretty printing select types from that
proto, but this can be extended in the future.
On the implementation side, Go has a pretty nice API to create these
errors, which we can just let users use directly. For Python and NodeJS
the API is not so nice for this, so we need to encapsulate the error
into a special exception, and then turn that into a proper GRPC message,
using the magic `grpc-status-details-bin` metadata field.
For both Python and NodeJS I've only implemented one class for errors so
far, as I'm interested in some feedback on the API design first. I'm
wondering if we should just let users specify the fields as tuples, and
then add them to the `error_details` proto? Very open to ideas here.
Closes: https://github.com/pulumi/pulumi/pull/16132
---------
Co-authored-by: Will Jones <will@sacharissa.co.uk>
Co-authored-by: Eron Wright <eron@pulumi.com>
2024-09-25 15:38:36 +00:00
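The "pretty nice API" mentioned above is the standard grpc-go status-details mechanism. What follows is a minimal, hedged sketch of how a provider written in Go could attach structured details to a returned error; it uses the generic `errdetails.BadRequest` message purely for illustration rather than the engine's own detail types (such as input property errors), and the field names are invented for the example.

```go
package main

import (
	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// constructFailed builds a gRPC error that carries structured details in
// addition to the top-level message. The caller can recover the details from
// the status and render them as diagnostics.
func constructFailed() error {
	st := status.New(codes.InvalidArgument, "bad inputs for my:module:Component")
	detailed, err := st.WithDetails(&errdetails.BadRequest{
		FieldViolations: []*errdetails.BadRequest_FieldViolation{
			{Field: "bucketName", Description: "bucket names must be lowercase"},
		},
	})
	if err != nil {
		// Attaching details can fail; fall back to the plain status in that case.
		return st.Err()
	}
	return detailed.Err()
}

func main() {
	_ = constructFailed()
}
```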
|
|
|
// Turn the GRPC status into a message, which can later be logged. Currently we only support a subset
|
|
|
|
// of the possible details types, which can be expanded later. If the details type is not recognized, we
|
|
|
|
// still return the message, but will leave out the details. This will allow us to be forward compatible
|
|
|
|
// when new details types are added.
|
2024-11-06 10:56:10 +00:00
|
|
|
func errorToMessage(err error, inputs resource.PropertyMap) string {
|
|
|
|
switch e := err.(type) {
|
|
|
|
case *rpcerror.Error:
|
|
|
|
message := e.Message()
|
|
|
|
if e.Cause() != nil {
|
|
|
|
message = fmt.Sprintf("%v: %v", message, e.Cause().Message())
|
|
|
|
}
|
|
|
|
if len(e.InputPropertiesErrors()) > 0 {
|
2024-09-25 15:38:36 +00:00
|
|
|
props := resource.NewObjectProperty(inputs)
|
2024-11-06 10:56:10 +00:00
|
|
|
for _, err := range e.InputPropertiesErrors() {
|
|
|
|
propertyPath, e := resource.ParsePropertyPath(err.PropertyPath)
|
2024-09-25 15:38:36 +00:00
|
|
|
if e == nil {
|
|
|
|
value, ok := propertyPath.Get(props)
|
|
|
|
if ok {
|
|
|
|
message = fmt.Sprintf("%v\n\t\t- property %v with value '%v' has a problem: %v",
|
2024-11-06 10:56:10 +00:00
|
|
|
message, err.PropertyPath, value, err.Reason)
|
2024-09-25 15:38:36 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
}
|
|
|
|
message = fmt.Sprintf("%v\n\t\t- property %v has a problem: %v",
|
2024-11-07 09:56:04 +00:00
|
|
|
message, err.PropertyPath, err.Reason)
|
2024-09-25 15:38:36 +00:00
|
|
|
}
|
|
|
|
}
|
2024-11-06 10:56:10 +00:00
|
|
|
return message
|
|
|
|
default:
|
|
|
|
return err.Error()
|
2024-09-25 15:38:36 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-11-29 19:27:32 +00:00
|
|
|
// RegisterResource is invoked by a language process when a new resource has been allocated.
|
|
|
|
func (rm *resmon) RegisterResource(ctx context.Context,
|
2023-03-03 16:36:39 +00:00
|
|
|
req *pulumirpc.RegisterResourceRequest,
|
|
|
|
) (*pulumirpc.RegisterResourceResponse, error) {
|
2017-08-30 01:24:12 +00:00
|
|
|
// Communicate the type, name, and object information to the iterator that is awaiting us.
|
2023-11-20 08:59:00 +00:00
|
|
|
name := req.GetName()
|
2017-12-15 15:22:49 +00:00
|
|
|
custom := req.GetCustom()
|
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
|
|
|
remote := req.GetRemote()
|
2023-07-18 15:10:23 +00:00
|
|
|
parent, err := resource.ParseOptionalURN(req.GetParent())
|
|
|
|
if err != nil {
|
|
|
|
return nil, rpcerror.New(codes.InvalidArgument, fmt.Sprintf("invalid parent URN: %s", err))
|
|
|
|
}
|
2019-07-12 18:12:01 +00:00
|
|
|
id := resource.ID(req.GetImportId())
|
2023-06-29 18:41:19 +00:00
|
|
|
sourcePosition := rm.sourcePositions.getFromRequest(req)
|
2018-09-07 22:19:18 +00:00
|
|
|
|
|
|
|
// Custom resources must have a three-part type so that we can 1) identify if they are providers and 2) retrieve the
|
|
|
|
// provider responsible for managing a particular resource (based on the type's Package).
|
2020-09-08 02:33:55 +00:00
|
|
|
var t tokens.Type
|
|
|
|
if custom || remote {
|
2018-09-07 22:19:18 +00:00
|
|
|
t, err = tokens.ParseTypeToken(req.GetType())
|
|
|
|
if err != nil {
|
|
|
|
return nil, rpcerror.New(codes.InvalidArgument, err.Error())
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
// Component resources may have a type of any format.
|
|
|
|
t = tokens.Type(req.GetType())
|
|
|
|
}
|
2018-02-21 23:11:21 +00:00
|
|
|
|
2024-02-21 16:30:46 +00:00
|
|
|
label := fmt.Sprintf("ResourceMonitor.RegisterResource(%s,%s)", t, name)

// We need to build the full alias spec list here, so we can pass it to transforms.
aliases := []*pulumirpc.Alias{}
for _, aliasURN := range req.GetAliasURNs() {
	aliases = append(aliases, &pulumirpc.Alias{Alias: &pulumirpc.Alias_Urn{Urn: aliasURN}})
}

// We assume aliases are properly specified. However, if a request hasn't explicitly
// indicated that it is using properly specified aliases and the request is coming
// from Node.js, transform the aliases from the incorrect Node.js values to properly
// specified values, to maintain backward compatibility for users of older Node.js
// SDKs that aren't sending properly specified aliases.
transformAliases := !req.GetAliasSpecs() && requestFromNodeJS(ctx)

for _, aliasObject := range req.GetAliases() {
	if transformAliases {
		aliasObject = transformAliasForNodeJSCompat(aliasObject)
	}
	aliases = append(aliases, aliasObject)
}

var deleteBeforeReplace *bool
// Technically DeleteBeforeReplaceDefined should be used to decide if DeleteBeforeReplace should be looked at or
// not. However the Go SDK doesn't set Defined, so we fall back to respecting this field if either Defined is set
// or DeleteBeforeReplace is true.
if req.GetDeleteBeforeReplaceDefined() || req.GetDeleteBeforeReplace() {
	deleteBeforeReplace = &req.DeleteBeforeReplace
}
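
// Unmarshal the raw property bag, preserving unknowns, secrets, resource references, and output values so that
// transform callbacks receive the inputs with full fidelity.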
props, err := plugin.UnmarshalProperties(
	req.GetObject(), plugin.MarshalOptions{
		Label:                 label,
		KeepUnknowns:          true,
		ComputeAssetHashes:    true,
		KeepSecrets:           true,
		KeepResources:         true,
		KeepOutputValues:      true,
		UpgradeToOutputValues: true,
		WorkingDirectory:      rm.workingDirectory,
	})
if err != nil {
	return nil, err
}

// Before we pass the props to the transform function we need to ensure that they correctly carry any dependency
// information.
dependencies := mapset.NewSet[resource.URN]()
for _, dependingURN := range req.GetDependencies() {
	urn, err := resource.ParseURN(dependingURN)
	if err != nil {
		return nil, rpcerror.New(codes.InvalidArgument, fmt.Sprintf("invalid dependency URN: %s", err))
	}
	dependencies.Add(urn)
}

propertyDependencies := make(map[resource.PropertyKey]mapset.Set[resource.URN])
if len(req.GetPropertyDependencies()) == 0 && !remote {
	// If this request did not specify property dependencies, treat each property as depending on every resource
	// in the request's dependency list. We don't need to do this when remote is true, because all clients that
	// support remote already support passing property dependencies, so there's no need to backfill here.
	for pk := range props {
		propertyDependencies[pk] = dependencies
	}
} else {
	// Otherwise, unmarshal the per-property dependency information.
	for pk, pd := range req.GetPropertyDependencies() {
		deps := mapset.NewSet[resource.URN]()
		for _, d := range pd.Urns {
			urn, err := resource.ParseURN(d)
			if err != nil {
				return nil, rpcerror.New(codes.InvalidArgument, fmt.Sprintf("invalid dependency on property %s URN: %s", pk, err))
			}
			deps.Add(urn)
		}
		propertyDependencies[resource.PropertyKey(pk)] = deps
	}
}

// If we're running any transforms we need to update all the property values to Outputs to track dependencies.
if len(req.Transforms) > 0 {
	props = upgradeOutputValues(props, propertyDependencies)
}
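
// For custom and remote resources, work out which provider should manage this resource, taking into account the
// explicitly requested provider, the request's providers map, and the parent.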
provider := req.GetProvider()
if custom || remote {
	provider = rm.resolveProvider(req.GetProvider(), req.GetProviders(), parent, t.Package())
}
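
// Gather the resource options from the request into the shape shared with transform callbacks.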
opts := &pulumirpc.TransformResourceOptions{
	DependsOn:               req.GetDependencies(),
	Protect:                 req.GetProtect(),
	IgnoreChanges:           req.GetIgnoreChanges(),
	ReplaceOnChanges:        req.GetReplaceOnChanges(),
	Version:                 req.GetVersion(),
	Aliases:                 aliases,
	Provider:                provider,
	Providers:               req.GetProviders(),
	CustomTimeouts:          req.GetCustomTimeouts(),
	PluginDownloadUrl:       req.GetPluginDownloadURL(),
	RetainOnDelete:          req.GetRetainOnDelete(),
	DeletedWith:             req.GetDeletedWith(),
	DeleteBeforeReplace:     deleteBeforeReplace,
	AdditionalSecretOutputs: req.GetAdditionalSecretOutputs(),
	PluginChecksums:         req.GetPluginChecksums(),
}

// This might be a resource registration for a resource that another process requested to be constructed. If so
// we'll have saved the pending transforms for this and we should use those rather than what is on the request.
var transforms []TransformFunction
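// Pending transforms are keyed by the parent, type, and name of the resource being registered.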
pendingKey := fmt.Sprintf("%s::%s::%s", parent, t, name)
err = func() error {
	rm.pendingTransformsLock.Lock()
	defer rm.pendingTransformsLock.Unlock()

	if pending, ok := rm.pendingTransforms[pendingKey]; ok {
		transforms = pending
	} else {
		transforms, err = slice.MapError(req.Transforms, rm.wrapTransformCallback)
		if err != nil {
			return err
		}
		// We only need to save this for remote calls
		if remote && len(transforms) > 0 {
			rm.pendingTransforms[pendingKey] = transforms
		}
	}
	return nil
}()
if err != nil {
	return nil, err
}

// Before we calculate anything else run the transformations. First run the transforms for this resource,
// then its parents, etc.
for _, transform := range transforms {
	newProps, newOpts, err := transform(ctx, name, string(t), custom, parent, props, opts)
	if err != nil {
		return nil, err
	}
	props = newProps
	opts = newOpts
}

// Look up our parents' transformations and run those too.
err = func() error {
	// Function exists to scope the lock
	rm.resourceTransformsLock.Lock()
	defer rm.resourceTransformsLock.Unlock()
	rm.parentsLock.Lock()
	defer rm.parentsLock.Unlock()
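
	// Walk up the parent chain, applying each ancestor's registered transforms in turn.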
	current := parent
	for current != "" {
		if transforms, ok := rm.resourceTransforms[current]; ok {
			for _, transform := range transforms {
				newProps, newOpts, err := transform(ctx, name, string(t), custom, parent, props, opts)
				if err != nil {
					return err
				}
				props = newProps
				opts = newOpts
			}
		}
		current = rm.parents[current]
	}
	return nil
}()
if err != nil {
	return nil, err
}

// Then lock the stack transformations and run all of those
err = func() error {
	// Function exists to scope the lock
	rm.stackTransformsLock.Lock()
	defer rm.stackTransformsLock.Unlock()

	for _, transform := range rm.stackTransforms {
		newProps, newOpts, err := transform(ctx, name, string(t), custom, parent, props, opts)
		if err != nil {
			return err
		}
		props = newProps
		opts = newOpts
	}
	return nil
}()
if err != nil {
	return nil, err
}

// We handle updating the providers map to include the providers field of the parent if
// both the current resource and its parent are component resources.
func() {
	// Function exists to scope the lock
	rm.componentProvidersLock.Lock()
	defer rm.componentProvidersLock.Unlock()
	if parentsProviders, parentIsComponent := rm.componentProviders[parent]; !custom &&
		parent != "" && parentIsComponent {
		for k, v := range parentsProviders {
			if opts.Providers == nil {
				opts.Providers = map[string]string{}
			}
			if _, ok := opts.Providers[k]; !ok {
				opts.Providers[k] = v
			}
		}
	}
}()
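
// Resolve provider references. Custom resources (other than provider resources themselves) and remote components
// need a concrete provider reference, plus references for any entries in their providers map.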
var providerRef providers.Reference
var providerRefs map[string]string

if custom && !providers.IsProviderType(t) || remote {
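	// Derive the provider request (package, version, download URL, and checksums) from the resource options.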
	providerReq, err := parseProviderRequest(
		t.Package(), opts.GetVersion(),
		opts.GetPluginDownloadUrl(), opts.GetPluginChecksums(), nil)
	if err != nil {
		return nil, err
	}
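
	// A package reference previously registered with the engine takes precedence over the provider request derived
	// from the resource options above.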
	packageRef := req.GetPackageRef()
	if packageRef != "" {
		var has bool
		providerReq, has = rm.packageRefMap[packageRef]
		if !has {
			return nil, fmt.Errorf("unknown provider package '%v'", packageRef)
		}
	}

	providerRef, err = rm.getProviderReference(rm.defaultProviders, providerReq, opts.GetProvider())
	if err != nil {
		return nil, err
	}
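
	// Resolve a reference for each entry in the providers map as well.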
providerRefs = make(map[string]string, len(opts.GetProviders()))
|
|
|
|
for name, provider := range opts.GetProviders() {
|
RegisterProvider engine work (#16241)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support for a `RegisterProvider` method to the engine. This
allows an SDK process to send the information for a package (name,
version, url, etc, and parameter in the future) and get back a UUID for
that run of the engine that can be used to re-lookup that information.
That allows the SDK to just send the `provider` field in
`RegisterResourceRequest` instead of filling in `version`,
`pluginDownloadURL` etc (and importantly not having to fill in
`parameter` for parameterised providers, which could be a large amount
of data).
This doesn't update any of the SDKs to yet use this method. We can do
that piecemeal, but it will require core sdk and codegen changes for
each language.
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-05-23 06:16:59 +00:00
|
|
|
ref, err := rm.getProviderReference(rm.defaultProviders, providerReq, provider)
|
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2021-04-14 00:56:34 +00:00
|
|
|
providerRefs[name] = ref.String()
|
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be source from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior becuase it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins froma checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
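A rough sketch of that bookkeeping (illustrative names and string-typed references only; the real engine code works with typed references and richer registration events):
```go
package defaults

import "sync"

// defaultProviderCache remembers the first default provider registered for
// each package and reuses its reference for later registrations, reads, and
// invokes that do not name a provider.
type defaultProviderCache struct {
	mu   sync.Mutex
	refs map[string]string // package name -> provider reference
}

func (c *defaultProviderCache) getOrRegister(
	pkg string,
	register func() (string, error), // synthesizes and awaits a provider registration
) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if ref, ok := c.refs[pkg]; ok {
		return ref, nil // already registered: reuse the recorded reference
	}
	ref, err := register()
	if err != nil {
		return "", err
	}
	if c.refs == nil {
		c.refs = map[string]string{}
	}
	c.refs[pkg] = ref
	return ref, nil
}
```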
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
Engine support for remote transforms (#15290)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This adds support to the engine for "remote transformations".
A transform is "remote" because it is being invoked via the engine on
receiving a resource registration, rather than being run locally in
process before sending a resource registration. These transforms can
also span multiple process boundaries, e.g. a transform function in a
user program, then a transform function in a component library, both
running for a resource registered by another component library.
The underlying new feature here is the idea of a `Callback`. The
expectation is we're going to use callbacks for multiple features so
these are _not_ defined in terms of transformations. A callback is an
untyped byte array (usually will be a protobuf message), plus an address
to define which server should be invoked to do the callback, and a token
to identify it.
A language SDK can start up and serve a `Callbacks` service, keep a
mapping of tokens to in-process functions (currently just using UUIDs
for this), and then pass that service address and token to the engine to
be invoked later on.
The engine uses these callbacks to track transform callbacks per
resource; on each new resource registration it invokes each relevant
callback with the resource's properties and options, and the returned
properties and options are then passed to the next relevant transform
callback until all have been called and the engine has the final
resource state and options to use.
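A minimal sketch of the SDK-side bookkeeping this implies (illustrative names; the gRPC `Callbacks` server itself is omitted):
```go
package callbacks

import (
	"fmt"
	"sync"

	"github.com/google/uuid"
)

// registry maps callback tokens to in-process functions. The SDK hands the
// engine its server address plus a token; the engine later calls back with an
// untyped byte payload (usually a serialized protobuf message).
type registry struct {
	mu    sync.Mutex
	addr  string // address of this process's Callbacks server
	funcs map[string]func(request []byte) ([]byte, error)
}

// register stores fn and returns the (address, token) pair sent to the engine.
func (r *registry) register(fn func([]byte) ([]byte, error)) (address, token string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.funcs == nil {
		r.funcs = map[string]func([]byte) ([]byte, error){}
	}
	token = uuid.NewString()
	r.funcs[token] = fn
	return r.addr, token
}

// invoke is what the Callbacks server would do when the engine calls back.
func (r *registry) invoke(token string, request []byte) ([]byte, error) {
	r.mu.Lock()
	fn, ok := r.funcs[token]
	r.mu.Unlock()
	if !ok {
		return nil, fmt.Errorf("unknown callback token %q", token)
	}
	return fn(request)
}
```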
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-02-21 16:30:46 +00:00
|
|
|
parsedAliases := []resource.Alias{}
|
|
|
|
for _, aliasObject := range opts.Aliases {
|
2022-09-22 17:13:55 +00:00
|
|
|
aliasSpec := aliasObject.GetSpec()
|
|
|
|
var alias resource.Alias
|
|
|
|
if aliasSpec != nil {
|
|
|
|
alias = resource.Alias{
|
2024-01-26 18:29:22 +00:00
|
|
|
Name: aliasSpec.Name,
|
|
|
|
Type: aliasSpec.Type,
|
|
|
|
Stack: aliasSpec.Stack,
|
|
|
|
Project: aliasSpec.Project,
|
|
|
|
}
|
|
|
|
switch parent := aliasSpec.GetParent().(type) {
|
|
|
|
case *pulumirpc.Alias_Spec_ParentUrn:
|
|
|
|
// Technically an SDK shouldn't set `parent` at all to specify the default parent, but both NodeJS and
|
|
|
|
// Python have buggy SDKs that set parent to an empty URN to specify the default parent. We handle this
|
|
|
|
// case here to maintain backward compatibility with older SDKs but it would be good to fix this to be
|
|
|
|
// strict in V4.
|
|
|
|
parentURN, err := resource.ParseOptionalURN(parent.ParentUrn)
|
|
|
|
if err != nil {
|
|
|
|
return nil, rpcerror.New(codes.InvalidArgument, fmt.Sprintf("invalid parent alias URN: %s", err))
|
|
|
|
}
|
|
|
|
alias.Parent = parentURN
|
|
|
|
case *pulumirpc.Alias_Spec_NoParent:
|
|
|
|
alias.NoParent = parent.NoParent
|
2022-09-22 17:13:55 +00:00
|
|
|
}
|
|
|
|
} else {
|
2023-07-18 15:10:23 +00:00
|
|
|
urn, err := resource.ParseURN(aliasObject.GetUrn())
|
|
|
|
if err != nil {
|
|
|
|
return nil, rpcerror.New(codes.InvalidArgument, fmt.Sprintf("invalid alias URN: %s", err))
|
2022-09-22 17:13:55 +00:00
|
|
|
}
|
2023-07-18 15:10:23 +00:00
|
|
|
alias = resource.Alias{URN: urn}
|
2022-09-22 17:13:55 +00:00
|
|
|
}
|
|
|
|
parsedAliases = append(parsedAliases, alias)
|
2019-06-01 06:01:01 +00:00
|
|
|
}
|
|
|
|
|
Send output values to transforms for dependency tracking (#15637)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This engine change is necessary for correct dependency tracking of
properties through transform functions.
Unlike other parts of the runtime/provider/engine interface transform
functions do not have a property dependencies map sent or returned.
Transforms rely entirely on the dependency arrays in output values for
their dependency tracking.
This change takes the property dependency map and upgrades all the input
properties to be output values tracking those same dependencies (if a
property doesn't have any dependencies it doesn't need to be upgraded to
an output value).
This upgrade is only done if we're using transform functions, there's
currently no need to do this unless we're using transforms.
After running the transforms we downgrade the output values back to
plain/secret/computed values if we're registering a custom resource. If
we're registering a component resource we just leave all the properties
upgraded as output values.
We also rebuild the resource's dependencies slice and propertyDependency
map with the dependencies recorded in the transformed properties. If the
transform didn't change the properties this will just be the same data
we encoded into the output values in the properties structure.
There are a couple of things to keep in mind with this change.
Firstly using a transform can cause a property that was being sent to a
component provider as just a `resource.NumberProperty` to start being
sent as an `OutputProperty` with the `NumberProperty` as its element.
Component providers _should_ handle that, but it's feasible that it
could break some component code. This is a fairly limited blast radius
as it will only happen when that user starts using a transform function.
Secondly, we can't know in the engine whether a property's dependencies are
from a top-level output value or a nested output. That is, SDKs send both
of the following property shapes the same way to the engine for custom
resources:
```
A) Output(Array([Number(1)]), ["dep"])
---
B) Array([Output(Number(1), ["dep"])])
```
Currently both would get upgraded to `Output(Array([Number(1)]),
["dep"])`. For a component resource the SDK will send output values, and
as long as they define a superset of the dependencies in the property
map then we don't add the top-level output tracking the same dependency
set.
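A rough sketch of the upgrade step (an illustrative helper, not the engine's actual code), assuming the `sdk/v3` resource property types:
```go
package transforms

import "github.com/pulumi/pulumi/sdk/v3/go/common/resource"

// upgradeInputs wraps every input that has recorded dependencies in an output
// value carrying those URNs, so transforms can observe them; inputs without
// dependencies (and values that are already outputs) are left alone.
func upgradeInputs(
	props resource.PropertyMap,
	propertyDeps map[resource.PropertyKey][]resource.URN,
) resource.PropertyMap {
	upgraded := make(resource.PropertyMap, len(props))
	for k, v := range props {
		deps := propertyDeps[k]
		if len(deps) == 0 || v.IsOutput() {
			upgraded[k] = v
			continue
		}
		upgraded[k] = resource.NewOutputProperty(resource.Output{
			Element:      v,
			Known:        true,
			Dependencies: deps,
		})
	}
	return upgraded
}
```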
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-03-20 09:53:33 +00:00
|
|
|
// Reparse the dependency information from any transformation results
|
|
|
|
if len(req.Transforms) > 0 {
|
|
|
|
dependencies = mapset.NewSet[resource.URN]()
|
|
|
|
for _, dependingURN := range opts.DependsOn {
|
|
|
|
urn, err := resource.ParseURN(dependingURN)
|
|
|
|
if err != nil {
|
|
|
|
return nil, rpcerror.New(codes.InvalidArgument, fmt.Sprintf("invalid dependency URN: %s", err))
|
|
|
|
}
|
|
|
|
dependencies.Add(urn)
|
|
|
|
}
|
|
|
|
// Now that we've run the transforms we can rebuild the property dependency maps. If we have output values we can add the
|
|
|
|
// dependencies from them to the dependencies map we send to the provider and save to state.
|
|
|
|
propertyDependencies = make(map[resource.PropertyKey]mapset.Set[resource.URN])
|
|
|
|
for key, output := range props {
|
|
|
|
deps := mapset.NewSet[resource.URN]()
|
|
|
|
addOutputDependencies(deps, output)
|
|
|
|
propertyDependencies[key] = deps
|
|
|
|
|
|
|
|
// Also add these to the overall dependencies
|
|
|
|
dependencies = dependencies.Union(deps)
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
// If we ran transforms we would have merged all the dependencies together already, but if we didn't we want to
|
|
|
|
// ensure any output values add their dependencies to the dependencies map we send to the provider.
|
|
|
|
for key, output := range props {
|
|
|
|
if propertyDependencies[key] == nil {
|
|
|
|
propertyDependencies[key] = mapset.NewSet[resource.URN]()
|
|
|
|
}
|
|
|
|
addOutputDependencies(propertyDependencies[key], output)
|
2023-07-18 15:10:23 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
rawDependencies := dependencies.ToSlice()
|
|
|
|
rawPropertyDependencies := make(map[resource.PropertyKey][]resource.URN)
|
|
|
|
for key, deps := range propertyDependencies {
|
|
|
|
rawPropertyDependencies[key] = deps.ToSlice()
|
2018-02-21 23:11:21 +00:00
|
|
|
}
|
|
|
|
|
2021-12-17 22:52:01 +00:00
|
|
|
if providers.IsProviderType(t) {
|
|
|
|
if opts.GetVersion() != "" {
|
|
|
|
version, err := semver.Parse(opts.GetVersion())
|
2021-12-17 22:52:01 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, fmt.Errorf("%s: passed invalid version: %w", label, err)
|
|
|
|
}
|
|
|
|
providers.SetProviderVersion(props, &version)
|
|
|
|
}
|
|
|
|
if opts.GetPluginDownloadUrl() != "" {
|
|
|
|
providers.SetProviderURL(props, opts.GetPluginDownloadUrl())
|
2021-12-17 22:52:01 +00:00
|
|
|
}
|
2022-06-15 20:03:11 +00:00
|
|
|
|
2024-07-29 14:34:44 +00:00
|
|
|
if req.GetPackageRef() != "" {
|
|
|
|
// If the provider resource has a package ref then we need to set all its input fields as in
|
|
|
|
// newRegisterDefaultProviderEvent.
|
|
|
|
packageRef := req.GetPackageRef()
|
|
|
|
providerReq, has := rm.packageRefMap[packageRef]
|
|
|
|
if !has {
|
|
|
|
return nil, fmt.Errorf("unknown provider package '%v'", packageRef)
|
|
|
|
}
|
|
|
|
|
|
|
|
if providerReq.Version() != nil {
|
|
|
|
providers.SetProviderVersion(props, providerReq.Version())
|
|
|
|
}
|
|
|
|
if providerReq.PluginDownloadURL() != "" {
|
|
|
|
providers.SetProviderURL(props, providerReq.PluginDownloadURL())
|
|
|
|
}
|
|
|
|
if providerReq.PluginChecksums() != nil {
|
|
|
|
providers.SetProviderChecksums(props, providerReq.PluginChecksums())
|
|
|
|
}
|
|
|
|
if providerReq.Parameterization() != nil {
|
|
|
|
providers.SetProviderName(props, providerReq.Name())
|
|
|
|
providers.SetProviderParameterization(props, providerReq.Parameterization())
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-06-15 20:03:11 +00:00
|
|
|
// Make sure that an explicit provider which doesn't specify its plugin gets the
|
|
|
|
// same plugin as the default provider for the package.
|
|
|
|
defaultProvider, ok := rm.defaultProviders.defaultProviderInfo[providers.GetProviderPackage(t)]
|
|
|
|
if ok && opts.GetVersion() == "" && opts.GetPluginDownloadUrl() == "" {
|
2022-06-15 20:03:11 +00:00
|
|
|
if defaultProvider.Version != nil {
|
|
|
|
providers.SetProviderVersion(props, defaultProvider.Version)
|
|
|
|
}
|
|
|
|
if defaultProvider.PluginDownloadURL != "" {
|
|
|
|
providers.SetProviderURL(props, defaultProvider.PluginDownloadURL)
|
|
|
|
}
|
|
|
|
}
|
2021-01-05 23:57:11 +00:00
|
|
|
}
|
Implement components
This change implements core support for "components" in the Pulumi
Fabric. This work is described further in pulumi/pulumi#340, where
we are still discussing some of the finer points.
In a nutshell, resources no longer imply external providers. It's
entirely possible to have a resource that logically represents
something but without having a physical manifestation that needs to
be tracked and managed by our typical CRUD operations.
For example, the aws/serverless/Function helper is one such type.
It aggregates Lambda-related resources and exposes a nice interface.
All of the Pulumi Cloud Framework resources are also examples.
To indicate that a resource does participate in the usual CRUD resource
provider, it simply derives from ExternalResource instead of Resource.
All resources now have the ability to adopt children. This is purely
a metadata/tagging thing, and will help us roll up displays, provide
attribution to the developer, and even hide aspects of the resource
graph as appropriate (e.g., when they are implementation details).
Our use of this capability is ultra limited right now; in fact, the
only place we display children is in the CLI output. For instance:
+ aws:serverless:Function: (create)
[urn=urn:pulumi:demo::serverless::aws:serverless:Function::mylambda]
=> urn:pulumi:demo::serverless::aws:iam/role:Role::mylambda-iamrole
=> urn:pulumi:demo::serverless::aws:iam/rolePolicyAttachment:RolePolicyAttachment::mylambda-iampolicy-0
=> urn:pulumi:demo::serverless::aws:lambda/function:Function::mylambda
The bit indicating whether a resource is external or not is tracked
in the resulting checkpoint file, along with any of its children.
2017-10-14 21:18:43 +00:00
|
|
|
|
|
|
|
protect := opts.Protect
|
|
|
|
ignoreChanges := opts.IgnoreChanges
|
|
|
|
replaceOnChanges := opts.ReplaceOnChanges
|
|
|
|
retainOnDelete := opts.RetainOnDelete
|
|
|
|
deletedWith, err := resource.ParseOptionalURN(opts.GetDeletedWith())
|
|
|
|
if err != nil {
|
|
|
|
return nil, rpcerror.New(codes.InvalidArgument, fmt.Sprintf("invalid DeletedWith URN: %s", err))
|
2019-08-20 22:51:02 +00:00
|
|
|
}
|
|
|
|
customTimeouts := opts.CustomTimeouts
|
|
|
|
|
|
|
|
additionalSecretOutputs := opts.GetAdditionalSecretOutputs()
|
2019-08-20 22:51:02 +00:00
|
|
|
|
2024-02-06 16:46:31 +00:00
|
|
|
// At this point we're going to forward these properties to the rest of the engine and potentially to providers. As
|
|
|
|
// we add features to the code above (most notably transforms) we could end up with more instances of `OutputValue`
|
|
|
|
// than the rest of the system historically expects. To minimize the disruption we downgrade `OutputValue`s with no
|
|
|
|
// dependencies down to `Computed` and `Secret` or their plain values. We only do this for non-remote resources.
|
|
|
|
// Remote resources already deal with `OutputValue`s and even though it would be more consistent to downgrade them
|
|
|
|
// here it would be a breaking change.
|
|
|
|
if !remote {
|
|
|
|
props = downgradeOutputValues(props)
|
|
|
|
}
|
|
|
|
|
2018-05-15 22:28:00 +00:00
|
|
|
logging.V(5).Infof(
|
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be source from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior because it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
|
|
|
"ResourceMonitor.RegisterResource received: t=%v, name=%v, custom=%v, #props=%v, parent=%v, protect=%v, "+
|
2021-04-14 00:56:34 +00:00
|
|
|
"provider=%v, deps=%v, deleteBeforeReplace=%v, ignoreChanges=%v, aliases=%v, customTimeouts=%v, "+
|
2022-10-20 06:15:43 +00:00
|
|
|
"providers=%v, replaceOnChanges=%v, retainOnDelete=%v, deletedWith=%v",
|
|
|
|
t, name, custom, len(props), parent, protect, providerRef, rawDependencies, opts.DeleteBeforeReplace, ignoreChanges,
|
|
|
|
parsedAliases, customTimeouts, providerRefs, replaceOnChanges, retainOnDelete, deletedWith)
|
2017-11-21 01:38:09 +00:00
|
|
|
|
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
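A simplified sketch of the shape of the `Construct` exchange described above (the names are illustrative; the real provider-plugin method carries more context, such as the parent URN, providers, and aliases):
```go
package construct

import "github.com/pulumi/pulumi/sdk/v3/go/common/resource"

// ConstructResult is what a component plugin hands back to the engine: the
// component's URN, its resolved outputs, and which resources each output
// depends on (needed for features like delete-before-replace).
type ConstructResult struct {
	URN                resource.URN
	Outputs            resource.PropertyMap
	OutputDependencies map[resource.PropertyKey][]resource.URN
}

// ConstructFunc dispatches to the component factory for typ, creates the
// component and its children, and returns the component's outputs.
type ConstructFunc func(
	name, typ string,
	inputs resource.PropertyMap,
	inputDependencies map[resource.PropertyKey][]resource.URN,
) (ConstructResult, error)
```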
|
|
|
// If this is a remote component, fetch its provider and issue the construct call. Otherwise, register the resource.
|
|
|
|
var result *RegisterResult
|
2022-10-05 21:04:10 +00:00
|
|
|
|
|
|
|
var outputDeps map[string]*pulumirpc.RegisterResourceResponse_PropertyDependencies
|
|
|
|
if remote {
|
|
|
|
provider, ok := rm.providers.GetProvider(providerRef)
|
2022-10-20 01:40:57 +00:00
|
|
|
if providers.IsDenyDefaultsProvider(providerRef) {
|
2023-07-18 15:10:23 +00:00
|
|
|
msg := diag.GetDefaultProviderDenied("").Message
|
2022-10-20 01:40:57 +00:00
|
|
|
return nil, fmt.Errorf(msg, t.Package().String(), t.String())
|
|
|
|
}
|
|
|
|
if !ok {
|
2021-11-13 02:37:17 +00:00
|
|
|
return nil, fmt.Errorf("unknown provider '%v'", providerRef)
|
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
|
|
|
}
|
2018-05-16 22:37:34 +00:00
|
|
|
|
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
|
|
|
// Invoke the provider's Construct RPC method.
|
|
|
|
options := plugin.ConstructOptions{
|
2022-09-22 17:13:55 +00:00
|
|
|
// We don't actually need to send a list of aliases to construct anymore because the engine does
|
|
|
|
// all alias construction.
|
2023-03-06 23:11:15 +00:00
|
|
|
Aliases: []resource.Alias{},
		// Dependency information travels both in these explicit fields and, when transforms are in
		// use, encoded into the input properties as output values (#15637).
		Dependencies: rawDependencies,
		Protect: protect,
		PropertyDependencies: rawPropertyDependencies,
		Providers: providerRefs,
		AdditionalSecretOutputs: additionalSecretOutputs,
		DeletedWith: deletedWith,
		IgnoreChanges: ignoreChanges,
		ReplaceOnChanges: replaceOnChanges,
		RetainOnDelete: retainOnDelete,
	}
	if customTimeouts != nil {
		options.CustomTimeouts = &plugin.CustomTimeouts{
			Create: customTimeouts.Create,
			Update: customTimeouts.Update,
			Delete: customTimeouts.Delete,
		}
	}
	if opts.DeleteBeforeReplace != nil {
		options.DeleteBeforeReplace = *opts.DeleteBeforeReplace
	}

	// Construct, like the other plugin.Provider methods, follows the normalized
	// Method(context.Context, MethodRequest) (MethodResponse, error) shape (#16302).
	constructResult, err := provider.Construct(ctx, plugin.ConstructRequest{
		Info: rm.constructInfo,
		Type: t,
		Name: name,
		Parent: parent,
		Inputs: props,
		Options: options,
	})
	if err != nil {
		// Providers may attach structured error details to the gRPC error they return; those are
		// turned into engine diagnostics here rather than being fixed up in every SDK (#17306).
		var rpcError error
		rpcError, ok := rpcerror.FromError(err)
		if !ok {
			rpcError = err
		}
		message := errorToMessage(rpcError, props)
		rm.diagnostics.Errorf(diag.GetResourceInvalidError(constructResult.URN), t, name, message)

		rm.abortChan <- true
		<-rm.cancel
		return nil, rpcerror.New(codes.Unknown, "resource monitor shut down")
	}

	result = &RegisterResult{State: &resource.State{URN: constructResult.URN, Outputs: constructResult.Outputs}}

	// The provider may have returned OutputValues in "Outputs", we need to downgrade them to Computed or
	// Secret but also add them to the outputDeps map.
	if constructResult.OutputDependencies == nil {
		constructResult.OutputDependencies = map[resource.PropertyKey][]resource.URN{}
	}
	for k, v := range result.State.Outputs {
		constructResult.OutputDependencies[k] = extendOutputDependencies(constructResult.OutputDependencies[k], v)
	}

	// Convert the engine-side dependency map into the RPC wire representation for the response.
	outputDeps = map[string]*pulumirpc.RegisterResourceResponse_PropertyDependencies{}
	for k, deps := range constructResult.OutputDependencies {
		urns := make([]string, len(deps))
		for i, d := range deps {
			urns[i] = string(d)
		}
		outputDeps[string(k)] = &pulumirpc.RegisterResourceResponse_PropertyDependencies{Urns: urns}
	}
} else {
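	// Non-remote registrations (custom resources and locally constructed components) are
	// described by a resource goal that is sent to the engine below.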
	additionalSecretKeys := slice.Prealloc[resource.PropertyKey](len(additionalSecretOutputs))
	for _, name := range additionalSecretOutputs {
		additionalSecretKeys = append(additionalSecretKeys, resource.PropertyKey(name))
	}
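
	// Parse the string-valued custom timeouts into seconds for the resource goal.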
	var timeouts resource.CustomTimeouts
	if customTimeouts != nil {
		if customTimeouts.Create != "" {
			seconds, err := generateTimeoutInSeconds(customTimeouts.Create)
			if err != nil {
				return nil, err
			}
			timeouts.Create = seconds
		}
		if customTimeouts.Delete != "" {
			seconds, err := generateTimeoutInSeconds(customTimeouts.Delete)
			if err != nil {
				return nil, err
			}
			timeouts.Delete = seconds
		}
		if customTimeouts.Update != "" {
			seconds, err := generateTimeoutInSeconds(customTimeouts.Update)
			if err != nil {
				return nil, err
			}
			timeouts.Update = seconds
		}
	}

	goal := resource.NewGoal(t, name, custom, props, parent, protect, rawDependencies,
		providerRef.String(), nil, rawPropertyDependencies, opts.DeleteBeforeReplace, ignoreChanges,
		additionalSecretKeys, parsedAliases, id, &timeouts, replaceOnChanges, retainOnDelete, deletedWith,
		// sourcePosition is the position in user code that initiated this registration; it travels
		// as "source-position" gRPC metadata and is recorded in the statefile as "registeredAt".
		sourcePosition,
	)
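
	// If the parent's goal has already been registered, inherit applicable settings from it.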
	if goal.Parent != "" {
		rm.resGoalsLock.Lock()
		parentGoal, ok := rm.resGoals[goal.Parent]
		if ok {
			goal = inheritFromParent(*goal, parentGoal)
		}
		rm.resGoalsLock.Unlock()
	}

	// Send the goal state to the engine.
	step := &registerResourceEvent{
		goal: goal,
		done: make(chan *RegisterResult),
	}
	select {
	case rm.regChan <- step:
	case <-rm.cancel:
		logging.V(5).Infof("ResourceMonitor.RegisterResource operation canceled, name=%s", name)
		return nil, rpcerror.New(codes.Unavailable, "resource monitor shut down while sending resource registration")
	}

	// Now block waiting for the operation to finish.
	select {
	case result = <-step.done:
	case <-rm.cancel:
		logging.V(5).Infof("ResourceMonitor.RegisterResource operation canceled, name=%s", name)
		return nil, rpcerror.New(codes.Unavailable, "resource monitor shut down while waiting on step's done channel")
	}
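
	// SDKs that don't support result reporting can't learn about a failed registration from the
	// response, so fail the RPC outright for them.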
	if result != nil && result.Result != ResultStateSuccess && !req.GetSupportsResultReporting() {
		return nil, rpcerror.New(codes.Internal, "resource registration failed")
	}
	if result != nil && result.State != nil && result.State.URN != "" {
		rm.resGoalsLock.Lock()
		rm.resGoals[result.State.URN] = *goal
		rm.resGoalsLock.Unlock()
	}
}

if result != nil && result.State != nil && result.State.URN != "" {
	// We've got a safe URN now, save the parent and transformations
	func() {
		rm.parentsLock.Lock()
		defer rm.parentsLock.Unlock()
		rm.parents[result.State.URN] = parent
	}()
	// Transform callbacks are tracked per resource so that remote transforms registered here can
	// be invoked for resources registered later (#15290).
	func() {
		rm.resourceTransformsLock.Lock()
		defer rm.resourceTransformsLock.Unlock()
		rm.resourceTransforms[result.State.URN] = transforms
	}()
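	// For component resources, also remember the providers passed in the registration options.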
	if !custom {
		func() {
			rm.componentProvidersLock.Lock()
			defer rm.componentProvidersLock.Unlock()
			rm.componentProviders[result.State.URN] = opts.GetProviders()
		}()
	}
}
Propagate inputs to outputs during preview. (#3327)
These changes restore a more-correct version of the behavior that was
disabled with #3014. The original implementation of this behavior was
done in the SDKs, which do not have access to the complete inputs for a
resource (in particular, default values filled in by the provider during
`Check` are not exposed to the SDK). This lack of information meant that
the resolved output values could disagree with the typings present in
a provider SDK. Exacerbating this problem was the fact that unknown
values were dropped entirely, causing `undefined` values to appear in
unexpected places.
By doing this in the engine and allowing unknown values to be
represented in a first-class manner in the SDK, we can attack both of
these issues.
Although this behavior is not _strictly_ consistent with respect to the
resource model--in an update, a resource's output properties will come
from its provider and may differ from its input properties--this
behavior was present in the product for a fairly long time without
significant issues. In the future, we may be able to improve the
accuracy of resource outputs during a preview by allowing the provider
to dry-run CRUD operations and return partially-known values where
possible.
These changes also introduce new APIs in the Node and Python SDKs
that work with unknown values in a first-class fashion:
- A new parameter to the `apply` function that indicates that the
callback should be run even if the result of the apply contains
unknown values
- `containsUnknowns` and `isUnknown`, which return true if a value
either contains nested unknown values or is exactly an unknown value
- The `Unknown` type, which represents unknown values
The primary use case for these APIs is to allow nested properties with
known values to be accessed via the lifted property accessor even when
the containing property is not fully known. A common example of this
pattern is the `metadata.name` property of a Kubernetes `Namespace`
object: while other properties of the `metadata` bag may be unknown,
`name` is often known. These APIs allow `ns.metadata.name` to return a
known value in this case.
In order to avoid exposing downlevel SDKs to unknown values--a change
which could break user code by exposing it to unexpected values--a
language SDK must indicate whether or not it supports first-class
unknown values as part of each `RegisterResourceRequest`.
These changes also allow us to avoid breaking user code with the new
behavior introduced by the prior commit.
Fixes #3190.
2019-11-11 20:09:34 +00:00
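For SDKs that do not support first-class unknowns, the engine simply drops any output whose value transitively contains an unknown, as the code below does with `ContainsUnknowns`. The following standalone sketch mirrors that check with a toy value type; `val` and `containsUnknowns` are stand-ins for `resource.PropertyValue` and its method, not the real API.
```
package main

import "fmt"

// val is a toy stand-in for resource.PropertyValue: either a known scalar,
// an unknown placeholder, or a nested map of values.
type val struct {
	unknown bool
	scalar  interface{}
	object  map[string]val
}

// containsUnknowns reports whether v is, or transitively contains, an unknown.
func containsUnknowns(v val) bool {
	if v.unknown {
		return true
	}
	for _, e := range v.object {
		if containsUnknowns(e) {
			return true
		}
	}
	return false
}

func main() {
	outputs := map[string]val{
		"arn":      {unknown: true},                                 // not known until the real create runs
		"name":     {scalar: "my-bucket"},                           // fully known
		"metadata": {object: map[string]val{"id": {unknown: true}}}, // nested unknown
	}

	// Mirror of the engine's loop: keep only values without unknowns.
	filtered := map[string]val{}
	for k, v := range outputs {
		if !containsUnknowns(v) {
			filtered[k] = v
		}
	}
	fmt.Println(len(filtered)) // 1 — only "name" survives
}
```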
	// Filter out partially-known values if the requestor does not support them.
Initial support for remote component construction. (#5280)
These changes add initial support for the construction of remote
components. For now, this support is limited to the NodeJS SDK;
follow-up changes will implement support for the other SDKs.
Remote components are component resources that are constructed and
managed by plugins rather than by Pulumi programs. In this sense, they
are a bit like cloud resources, and are supported by the same
distribution and plugin loading mechanisms and described by the same
schema system.
The construction of a remote component is initiated by a
`RegisterResourceRequest` with the new `remote` field set to `true`.
When the resource monitor receives such a request, it loads the plugin
that implements the component resource and calls the `Construct`
method added to the resource provider interface as part of these
changes. This method accepts the information necessary to construct the
component and its children: the component's name, type, resource
options, inputs, and input dependencies. It is responsible for
dispatching to the appropriate component factory to create the
component, then returning its URN, resolved output properties, and
output property dependencies. The dependency information is necessary to
support features such as delete-before-replace, which rely on precise
dependency information for custom resources.
These changes also add initial support for more conveniently
implementing resource providers in NodeJS. The interface used to
implement such a provider is similar to the dynamic provider interface
(and may be unified with that interface in the future).
An example of a NodeJS program constructing a remote component resource
also implemented in NodeJS can be found in
`tests/construct_component/nodejs`.
This is the core of #2430.
2020-09-08 02:33:55 +00:00
	outputs := result.State.Outputs

	// Local ComponentResources may contain unresolved resource refs, so ignore those outputs.
	if !req.GetCustom() && !remote {
		// In the case of a SameStep, the old resource outputs are returned to the language host after the step is
		// executed. The outputs of a ComponentResource may depend on resources that have not been registered at the
		// time the ComponentResource is itself registered, as the outputs are set by a later call to
		// RegisterResourceOutputs. Therefore, when the SameStep returns the old resource outputs for a
		// ComponentResource, it may return references to resources that have not yet been registered, which will cause
		// the SDK's calls to getResource to fail when it attempts to resolve those references.
		//
		// Work on a more targeted fix is tracked in https://github.com/pulumi/pulumi/issues/5978
		outputs = resource.PropertyMap{}
	}

	if !req.GetSupportsPartialValues() {
		logging.V(5).Infof("stripping unknowns from RegisterResource response for urn %v", result.State.URN)
		filtered := resource.PropertyMap{}
		for k, v := range outputs {
			if !v.ContainsUnknowns() {
				filtered[k] = v
			}
		}
		outputs = filtered
Add a notion of stable properties
This change adds the capability for a resource provider to indicate
that, when an action is carried out in response to a diff, a certain set
of properties would be "stable"; that is to say, they are guaranteed
not to change. As a result, properties may be resolved to their final
values during previewing, avoiding erroneous cascading impacts.
This avoids the ever-annoying situation I keep running into when demoing:
when adding or removing an ingress rule to a security group, we ripple
the impact through the instance, and claim it must be replaced, because
that instance depends on the security group via its name. Well, the name
is a great example of a stable property, in that it will never change, and
so this is truly unfortunate and always adds uncertainty into the demos.
Particularly since the actual update doesn't need to perform replacements.
This resolves pulumi/pulumi#330.
2017-10-04 12:22:21 +00:00
	}

	// TODO(@platform):
	// Currently component resources ignore these options:
	// • ignoreChanges
	// • customTimeouts
	// • additionalSecretOutputs
	// • replaceOnChanges
	// • retainOnDelete
	// • deletedWith
	// Revisit these semantics in Pulumi v4.0
	// See this issue for more: https://github.com/pulumi/pulumi/issues/9704
	if !custom {
		rm.checkComponentOption(result.State.URN, "ignoreChanges", func() bool {
			return len(ignoreChanges) > 0
		})
		rm.checkComponentOption(result.State.URN, "customTimeouts", func() bool {
			if customTimeouts == nil {
				return false
			}
			hasUpdateTimeout := customTimeouts.Update != ""
			hasCreateTimeout := customTimeouts.Create != ""
			hasDeleteTimeout := customTimeouts.Delete != ""
			return hasCreateTimeout || hasUpdateTimeout || hasDeleteTimeout
		})
		rm.checkComponentOption(result.State.URN, "additionalSecretOutputs", func() bool {
			return len(additionalSecretOutputs) > 0
		})
		rm.checkComponentOption(result.State.URN, "replaceOnChanges", func() bool {
			return len(replaceOnChanges) > 0
		})
		rm.checkComponentOption(result.State.URN, "retainOnDelete", func() bool {
			return retainOnDelete
		})
		rm.checkComponentOption(result.State.URN, "deletedWith", func() bool {
			return deletedWith != ""
		})
	}

	logging.V(5).Infof(
		"ResourceMonitor.RegisterResource operation finished: t=%v, urn=%v, #outs=%v",
		result.State.Type, result.State.URN, len(outputs))

Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
2017-06-10 18:50:47 +00:00
	// Finally, unpack the response into properties that we can return to the language runtime. This mostly includes
	// an ID, URN, and defaults and output properties that will all be blitted back onto the runtime object.
	obj, err := plugin.MarshalProperties(outputs, plugin.MarshalOptions{
		Label:            label,
		KeepUnknowns:     true,
		KeepSecrets:      req.GetAcceptSecrets(),
		KeepResources:    req.GetAcceptResources(),
		WorkingDirectory: rm.workingDirectory,
	})
	if err != nil {
		return nil, err
	}

Reuse provider instances where possible (#14127)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
Fixes https://github.com/pulumi/pulumi/issues/13987.
This reworks the registry to better track provider instances such that
we can reuse unconfigured instances between Creates, Updates, and Sames.
When we allocate a provider instance in the registry for a Check call we
save it with the special id "unconfigured". This value should never make
its way back to program SDKs, it's purely an internal value for the
engine.
When we do a Create, Update or Same we look to see if there's an
unconfigured provider to use and if so configures that one, else it
starts up a fresh one. (N.B. Update we can assume there will always be
an unconfigured one from the Check call before).
This has also fixed registry Create to use the ID `UnknownID` rather
than `""`, have added some contract assertions to check that and fixed
up some test fallout because of that (the tests had been getting away
with leaving ID blank before).
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2023-10-12 20:46:01 +00:00
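The assertion that follows exists to keep the internal "unconfigured" sentinel ID from ever reaching a language host. A minimal sketch of the same guard pattern, using hypothetical names (`unconfiguredID`, `responseID`) rather than the engine's `providers` package:
```
package main

import "fmt"

// unconfiguredID is a hypothetical sentinel used only inside an engine-like
// component; it must never appear in responses returned to callers.
const unconfiguredID = "unconfigured"

// responseID mirrors the contract.Assertf in the engine: fail loudly rather
// than leak an internal placeholder to the language host.
func responseID(isProvider bool, id string) (string, error) {
	if isProvider && id == unconfiguredID {
		return "", fmt.Errorf("refusing to return sentinel provider ID %q", id)
	}
	return id, nil
}

func main() {
	if _, err := responseID(true, unconfiguredID); err != nil {
		fmt.Println("caught:", err)
	}
	id, _ := responseID(true, "04da6b54-80e4-46f7-96ec-b56ff0331ba9")
	fmt.Println("ok:", id)
}
```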

	// Assert that we never leak the unconfigured provider ID to the language host.
	contract.Assertf(
		!providers.IsProviderType(result.State.Type) || result.State.ID != providers.UnconfiguredID,
		"provider resource %s has unconfigured ID", result.State.URN)

	reason := pulumirpc.Result_SUCCESS
	if result.Result == ResultStateSkipped {
		reason = pulumirpc.Result_SKIP
	} else if result.Result == ResultStateFailed {
		reason = pulumirpc.Result_FAIL
	}

	return &pulumirpc.RegisterResourceResponse{
		Urn:                  string(result.State.URN),
		Id:                   string(result.State.ID),
		Object:               obj,
		PropertyDependencies: outputDeps,
		Result:               reason,
	}, nil
}


// checkComponentOption logs a message for the resource 'urn' if 'check' returns true, noting that
// the named option has no automatic effect on a component resource.
// This function is intended to validate options passed to component resources,
// so urn is expected to refer to a component.
func (rm *resmon) checkComponentOption(urn resource.URN, optName string, check func() bool) {
	if check() {
		logging.V(10).Infof("The option '%s' has no automatic effect on component resource '%s', "+
			"ensure it is handled correctly in the component code.", optName, urn)
	}
}
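Because each option check is a lazily evaluated predicate, call sites pass a closure and the (possibly non-trivial) check only runs inside the helper. A standalone sketch of the same pattern, using plain strings instead of URNs and the standard `log` package instead of the engine's verbose logger:
```
package main

import "log"

// checkComponentOption logs a note for urn if check() reports that the named
// option was set; it mirrors the engine helper above in miniature.
func checkComponentOption(urn, optName string, check func() bool) {
	if check() {
		log.Printf("option %q has no automatic effect on component resource %q", optName, urn)
	}
}

func main() {
	urn := "urn:pulumi:dev::proj::my:module:Component::comp"
	ignoreChanges := []string{"tags"}
	retainOnDelete := false

	checkComponentOption(urn, "ignoreChanges", func() bool { return len(ignoreChanges) > 0 }) // logs
	checkComponentOption(urn, "retainOnDelete", func() bool { return retainOnDelete })        // silent
}
```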

// RegisterResourceOutputs records some new output properties for a resource that have arrived after its initial
// provisioning. These will make their way into the eventual checkpoint state file for that resource.
func (rm *resmon) RegisterResourceOutputs(ctx context.Context,
	req *pulumirpc.RegisterResourceOutputsRequest,
) (*emptypb.Empty, error) {
	// Obtain and validate the message's inputs (a URN plus the output property map).
	urn, err := resource.ParseURN(req.Urn)
	if err != nil {
		return nil, fmt.Errorf("invalid resource URN: %w", err)
	}

	label := fmt.Sprintf("ResourceMonitor.RegisterResourceOutputs(%s)", urn)
	outs, err := plugin.UnmarshalProperties(
		req.GetOutputs(), plugin.MarshalOptions{
			Label:              label,
			KeepUnknowns:       true,
			ComputeAssetHashes: true,
			KeepSecrets:        true,
			KeepResources:      true,
			WorkingDirectory:   rm.workingDirectory,
		})
	if err != nil {
		return nil, fmt.Errorf("cannot unmarshal output properties: %w", err)
	}
	logging.V(5).Infof("ResourceMonitor.RegisterResourceOutputs received: urn=%v, #outs=%v", urn, len(outs))

	// Now send the step over to the engine to perform.
	step := &registerResourceOutputsEvent{
		urn:     urn,
		outputs: outs,
		done:    make(chan bool),
	}

	select {
	case rm.regOutChan <- step:
	case <-rm.cancel:
		logging.V(5).Infof("ResourceMonitor.RegisterResourceOutputs operation canceled, urn=%s", urn)
		return nil, rpcerror.New(codes.Unavailable, "resource monitor shut down while sending resource outputs")
	}

	// Now block waiting for the operation to finish.
	select {
	case <-step.done:
	case <-rm.cancel:
		logging.V(5).Infof("ResourceMonitor.RegisterResourceOutputs operation canceled, urn=%s", urn)
		return nil, rpcerror.New(codes.Unavailable, "resource monitor shut down while waiting on output step's done channel")
	}

	logging.V(5).Infof(
		"ResourceMonitor.RegisterResourceOutputs operation finished: urn=%v, #outs=%v", urn, len(outs))
	return &emptypb.Empty{}, nil
}
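RegisterResourceOutputs follows the same two-phase channel protocol as the other monitor RPCs: first hand the event to the engine (or bail out if the monitor is cancelled), then block on the event's done channel, again racing cancellation. A self-contained sketch of that protocol with hypothetical event and monitor types:
```
package main

import (
	"errors"
	"fmt"
)

// outputsEvent is a hypothetical stand-in for registerResourceOutputsEvent.
type outputsEvent struct {
	urn  string
	done chan bool
}

type monitor struct {
	regOutChan chan *outputsEvent
	cancel     chan bool
}

// registerOutputs mirrors the send-then-wait structure of the RPC handler:
// either the engine accepts the event and later signals done, or cancellation
// wins one of the two selects and we return an error.
func (m *monitor) registerOutputs(urn string) error {
	step := &outputsEvent{urn: urn, done: make(chan bool)}

	select {
	case m.regOutChan <- step:
	case <-m.cancel:
		return errors.New("monitor shut down while sending resource outputs")
	}

	select {
	case <-step.done:
		return nil
	case <-m.cancel:
		return errors.New("monitor shut down while waiting for outputs to be recorded")
	}
}

func main() {
	m := &monitor{regOutChan: make(chan *outputsEvent), cancel: make(chan bool)}

	// A toy "engine" goroutine that records the outputs and signals completion.
	go func() {
		step := <-m.regOutChan
		step.done <- true
	}()

	fmt.Println(m.registerOutputs("urn:pulumi:dev::proj::k8s:core/v1:Namespace::ns"))
}
```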

type registerResourceEvent struct {
	goal *resource.Goal       // the resource goal state produced by the iterator.
	done chan *RegisterResult // the channel to communicate with after the resource state is available.
}

var _ RegisterResourceEvent = (*registerResourceEvent)(nil)

func (g *registerResourceEvent) event() {}

func (g *registerResourceEvent) Goal() *resource.Goal {
	return g.goal
}
Implement `get` functions on all resources
This change implements the `get` function for resources. Per pulumi/lumi#83,
this allows Lumi scripts to actually read from the target environment.
For example, we can now look up a SecurityGroup from its ARN:
let group = aws.ec2.SecurityGroup.get(
"arn:aws:ec2:us-west-2:153052954103:security-group:sg-02150d79");
The returned object is a fully functional resource object. So, we can then
link it up with an EC2 instance, for example, in the usual ways:
let instance = new aws.ec2.Instance(..., {
securityGroups: [ group ],
});
This didn't require any changes to the RPC or provider model, since we
already implement the Get function.
There are a few loose ends; two are short term:
1) URNs are not rehydrated.
2) Query is not yet implemented.
One is mid-term:
3) We probably want a URN-based lookup function. But we will likely
wait until we tackle pulumi/lumi#109 before adding this.
And one is long term (and subtle):
4) These amount to I/O and are not repeatable! A change in the target
environment may cause a script to generate a different plan
intermittently. Most likely we want to apply a different kind of
deployment "policy" for such scripts. These are inching towards the
scripting model of pulumi/lumi#121, which is an entirely different
beast than the repeatable immutable infrastructure deployments.
Finally, it is worth noting that with this, we have some of the fundamental
underpinnings required to finally tackle "inference" (pulumi/lumi#142).
2017-06-20 00:24:00 +00:00

func (g *registerResourceEvent) Done(result *RegisterResult) {
	// Communicate the resulting state back to the RPC thread, which is parked awaiting our reply.
	g.done <- result
}

type registerResourceOutputsEvent struct {
	urn     resource.URN         // the URN to which this completion applies.
	outputs resource.PropertyMap // an optional property bag for output properties.
	done    chan bool            // the channel to communicate with after the operation completes.
}

var _ RegisterResourceOutputsEvent = (*registerResourceOutputsEvent)(nil)

func (g *registerResourceOutputsEvent) event() {}

func (g *registerResourceOutputsEvent) URN() resource.URN {
	return g.urn
}

func (g *registerResourceOutputsEvent) Outputs() resource.PropertyMap {
	return g.outputs
}

func (g *registerResourceOutputsEvent) Done() {
	// Communicate the resulting state back to the RPC thread, which is parked awaiting our reply.
	g.done <- true
}
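The `var _ RegisterResourceOutputsEvent = (*registerResourceOutputsEvent)(nil)` declarations above are compile-time interface checks: they cost nothing at runtime but break the build if the struct ever stops satisfying the interface. A minimal sketch of the idiom with hypothetical types:
```
package main

import "fmt"

type Event interface {
	URN() string
}

type outputsEvent struct {
	urn string
}

// Compile-time assertion: *outputsEvent must implement Event, or the build breaks.
var _ Event = (*outputsEvent)(nil)

func (e *outputsEvent) URN() string { return e.urn }

func main() {
	var e Event = &outputsEvent{urn: "urn:pulumi:dev::proj::aws:s3/bucket:Bucket::my-bucket"}
	fmt.Println(e.URN())
}
```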

type readResourceEvent struct {
	id                      resource.ID
	name                    string
	baseType                tokens.Type
	provider                string
	parent                  resource.URN
	props                   resource.PropertyMap
	dependencies            []resource.URN
	additionalSecretOutputs []resource.PropertyKey
[engine] Add support for source positions
These changes add support for passing source position information in
gRPC metadata and recording the source position that corresponds to a
resource registration in the statefile.
Enabling source position information in the resource model can provide
substantial benefits, including but not limited to:
- Better errors from the Pulumi CLI
- Go-to-definition for resources in state
- Editor integration for errors, etc. from `pulumi preview`
Source positions are (file, line) or (file, line, column) tuples
represented as URIs. The line and column are stored in the fragment
portion of the URI as "line(,column)?". The scheme of the URI and the
form of its path component depends on the context in which it is
generated or used:
- During an active update, the URI's scheme is `file` and paths are
absolute filesystem paths. This allows consumers to easily access
arbitrary files that are available on the host.
- In a statefile, the URI's scheme is `project` and paths are relative
to the project root. This allows consumers to resolve source positions
relative to the project file in different contexts irrespective of the
location of the project itself (e.g. given a project-relative path and
the URL of the project's root on GitHub, one can build a GitHub URL for
the source position).
During an update, source position information may be attached to gRPC
calls as "source-position" metadata. This allows arbitrary calls to be
associated with source positions without changes to their protobuf
payloads. Modifying the protobuf payloads is also a viable approach, but
is somewhat more invasive than attaching metadata, and requires changes
to every call signature.
Source positions should reflect the position in user code that initiated
a resource model operation (e.g. the source position passed with
`RegisterResource` for `pet` in the example above should be the source
position in `index.ts`, _not_ the source position in the Pulumi SDK). In
general, the Pulumi SDK should be able to infer the source position of
the resource registration, as the relationship between a resource
registration and its corresponding user code should be static per SDK.
Source positions in state files will be stored as a new `registeredAt`
property on each resource. This property is optional.
2023-06-29 18:41:19 +00:00
	sourcePosition          string
	done                    chan *ReadResult
}

var _ ReadResourceEvent = (*readResourceEvent)(nil)

func (g *readResourceEvent) event() {}

func (g *readResourceEvent) ID() resource.ID   { return g.id }
func (g *readResourceEvent) Name() string      { return g.name }
func (g *readResourceEvent) Type() tokens.Type { return g.baseType }
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior because it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
2018-08-07 00:50:29 +00:00
func (g *readResourceEvent) Provider() string                 { return g.provider }
func (g *readResourceEvent) Parent() resource.URN             { return g.parent }
func (g *readResourceEvent) Properties() resource.PropertyMap { return g.props }
func (g *readResourceEvent) Dependencies() []resource.URN     { return g.dependencies }

func (g *readResourceEvent) AdditionalSecretOutputs() []resource.PropertyKey {
	return g.additionalSecretOutputs
}
func (g *readResourceEvent) SourcePosition() string { return g.sourcePosition }

func (g *readResourceEvent) Done(result *ReadResult) {
	g.done <- result
}

Addition of Custom Timeouts (#2885)
* Plumbing the custom timeouts from the engine to the providers
* Plumbing the CustomTimeouts through to the engine and adding test to show this
* Change the provider proto to include individual timeouts
* Plumbing the CustomTimeouts from the engine through to the Provider RPC interface
* Change how the CustomTimeouts are sent across RPC
These errors were spotted in testing. We can now see that the timeout
information is arriving in the RegisterResourceRequest
```
req=&pulumirpc.RegisterResourceRequest{
Type: "aws:s3/bucket:Bucket",
Name: "my-bucket",
Parent: "urn:pulumi:dev::aws-vpc::pulumi:pulumi:Stack::aws-vpc-dev",
Custom: true,
Object: &structpb.Struct{},
Protect: false,
Dependencies: nil,
Provider: "",
PropertyDependencies: {},
DeleteBeforeReplace: false,
Version: "",
IgnoreChanges: nil,
AcceptSecrets: true,
AdditionalSecretOutputs: nil,
Aliases: nil,
CustomTimeouts: &pulumirpc.RegisterResourceRequest_CustomTimeouts{
Create: 300,
Update: 400,
Delete: 500,
XXX_NoUnkeyedLiteral: struct {}{},
XXX_unrecognized: nil,
XXX_sizecache: 0,
},
XXX_NoUnkeyedLiteral: struct {}{},
XXX_unrecognized: nil,
XXX_sizecache: 0,
}
```
* Changing the design to use strings
* CHANGELOG entry to include the CustomTimeouts work
* Changing custom timeouts to be passed around the engine as a converted value
We don't want to pass strings around - the user can provide a duration string, but we want
the engine to work with the timeout in seconds as a float64
2019-07-15 21:26:28 +00:00
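For illustration, a minimal sketch of that conversion, mirroring the `generateTimeoutInSeconds`
helper shown below; the "5m" value is made up for this example:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A user-provided custom timeout string (illustrative value).
	timeout := "5m"

	// The engine converts it once and passes the result around as a float64 of seconds.
	d, err := time.ParseDuration(timeout)
	if err != nil {
		panic(fmt.Errorf("unable to parse customTimeout Value %s", timeout))
	}
	fmt.Println(d.Seconds()) // 300
}
```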
|
|
|
|
|
|
|
func generateTimeoutInSeconds(timeout string) (float64, error) {
|
|
|
|
duration, err := time.ParseDuration(timeout)
|
|
|
|
if err != nil {
|
2021-11-13 02:37:17 +00:00
|
|
|
return 0, fmt.Errorf("unable to parse customTimeout Value %s", timeout)
|
2019-07-15 21:26:28 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return duration.Seconds(), nil
|
|
|
|
}
|
2021-06-17 21:46:05 +00:00
|
|
|
|
|
|
|
func decorateResourceSpans(span opentracing.Span, method string, req, resp interface{}, grpcError error) {
|
|
|
|
if req == nil {
|
|
|
|
return
|
|
|
|
}
|
|
|
|
|
|
|
|
switch method {
|
|
|
|
case "/pulumirpc.ResourceMonitor/Invoke":
|
2022-04-14 09:59:46 +00:00
|
|
|
span.SetTag("pulumi-decorator", req.(*pulumirpc.ResourceInvokeRequest).Tok)
|
2021-06-17 21:46:05 +00:00
|
|
|
case "/pulumirpc.ResourceMonitor/ReadResource":
|
|
|
|
span.SetTag("pulumi-decorator", req.(*pulumirpc.ReadResourceRequest).Type)
|
|
|
|
case "/pulumirpc.ResourceMonitor/RegisterResource":
|
|
|
|
span.SetTag("pulumi-decorator", req.(*pulumirpc.RegisterResourceRequest).Type)
|
|
|
|
}
|
|
|
|
}
|
2024-02-06 16:46:31 +00:00
|
|
|
|
|
|
|
// downgradeOutputValues recursively replaces all Output values with `Computed`, `Secret`, or their plain
|
|
|
|
// value. This loses all dependency information.
|
|
|
|
func downgradeOutputValues(v resource.PropertyMap) resource.PropertyMap {
|
|
|
|
var downgradeOutputPropertyValue func(v resource.PropertyValue) resource.PropertyValue
|
|
|
|
|
|
|
|
downgradeOutputPropertyValue = func(v resource.PropertyValue) resource.PropertyValue {
|
|
|
|
if v.IsOutput() {
|
|
|
|
output := v.OutputValue()
|
|
|
|
var result resource.PropertyValue
|
|
|
|
if output.Known {
|
|
|
|
result = downgradeOutputPropertyValue(output.Element)
|
|
|
|
} else {
|
|
|
|
result = resource.MakeComputed(resource.NewStringProperty(""))
|
|
|
|
}
|
|
|
|
if output.Secret {
|
|
|
|
result = resource.MakeSecret(result)
|
|
|
|
}
|
|
|
|
return result
|
|
|
|
}
|
|
|
|
if v.IsObject() {
|
|
|
|
return resource.NewObjectProperty(downgradeOutputValues(v.ObjectValue()))
|
|
|
|
}
|
|
|
|
if v.IsArray() {
|
2024-04-10 16:00:24 +00:00
|
|
|
result := make([]resource.PropertyValue, len(v.ArrayValue()))
|
|
|
|
for i, elem := range v.ArrayValue() {
|
|
|
|
result[i] = downgradeOutputPropertyValue(elem)
|
2024-02-06 16:46:31 +00:00
|
|
|
}
|
|
|
|
return resource.NewArrayProperty(result)
|
|
|
|
}
|
|
|
|
if v.IsSecret() {
|
|
|
|
return resource.MakeSecret(downgradeOutputPropertyValue(v.SecretValue().Element))
|
|
|
|
}
|
|
|
|
if v.IsResourceReference() {
|
|
|
|
ref := v.ResourceReferenceValue()
|
|
|
|
return resource.NewResourceReferenceProperty(
|
|
|
|
resource.ResourceReference{
|
|
|
|
URN: ref.URN,
|
|
|
|
ID: downgradeOutputPropertyValue(ref.ID),
|
|
|
|
PackageVersion: ref.PackageVersion,
|
|
|
|
})
|
|
|
|
}
|
|
|
|
return v
|
|
|
|
}
|
|
|
|
|
|
|
|
result := make(resource.PropertyMap)
|
|
|
|
for k, pv := range v {
|
|
|
|
result[k] = downgradeOutputPropertyValue(pv)
|
|
|
|
}
|
|
|
|
return result
|
|
|
|
}
|
2024-02-14 08:15:24 +00:00
|
|
|
|
Send output values to transforms for dependency tracking (#15637)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
This engine change is necessary for correct dependency tracking of
properties through transform functions.
Unlike other parts of the runtime/provider/engine interface, transform
functions do not have a property dependencies map sent or returned.
Transforms rely entirely on the dependency arrays in output values for
their dependency tracking.
This change takes the property dependency map and upgrades all the input
properties to be output values tracking those same dependencies (if a
property doesn't have any dependencies it doesn't need to be upgraded to
an output value).
This upgrade is only done if we're using transform functions; there's
currently no need to do this otherwise.
After running the transforms we downgrade the output values back to
plain/secret/computed values if we're registering a custom resource. If
we're registering a component resource we just leave all the properties
upgraded as output values.
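For illustration, a self-contained sketch of that upgrade and downgrade for a single property.
It mirrors what `upgradeOutputValues`/`downgradeOutputValues` in this file do rather than
calling them, and the property name and URN are made up:
```go
package main

import (
	"fmt"

	mapset "github.com/deckarep/golang-set/v2"
	"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
)

func main() {
	// An input property as an SDK might send it for a custom resource: a plain number,
	// with its dependency recorded separately in the property-dependencies map.
	inputs := resource.PropertyMap{
		"size": resource.NewNumberProperty(1),
	}
	propertyDeps := map[resource.PropertyKey]mapset.Set[resource.URN]{
		"size": mapset.NewSet[resource.URN]("urn:pulumi:dev::proj::aws:ec2/instance:Instance::dep"),
	}

	// Before calling transforms, the engine upgrades the property to an output value
	// that carries those same dependencies.
	upgraded := resource.NewOutputProperty(resource.Output{
		Element:      inputs["size"],
		Known:        true,
		Dependencies: propertyDeps["size"].ToSlice(),
	})
	fmt.Println(upgraded.IsOutput()) // true

	// After the transforms run, a custom resource registration downgrades the value back
	// to a plain/secret/computed value; dependencies are then re-read from the
	// transformed properties instead.
	downgraded := upgraded.OutputValue().Element
	fmt.Println(downgraded.IsNumber()) // true
}
```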
We also rebuild the resource's dependencies slice and propertyDependency
map with the dependencies recorded in the transformed properties. If the
transform didn't change the properties this will just be the same data
we encoded into the output values in the properties structure.
There are a couple of things to keep in mind with this change.
Firstly, using a transform can cause a property that was being sent to a
component provider as just a `resource.NumberProperty` to start being
sent as an `OutputProperty` with the `NumberProperty` as its element.
Component providers _should_ handle that, but it's feasible that it
could break some component code. The blast radius is fairly limited,
as this will only happen when a user starts using a transform function.
Secondly, we can't know in the engine whether a property's dependencies are
from a top-level output value or a nested output. That is, SDKs send both
of the following property shapes to the engine the same way for custom
resources:
```
A) Output(Array([Number(1)]), ["dep"])
---
B) Array([Output(Number(1), ["dep"])])
```
Currently both would get upgraded to `Output(Array([Number(1)]),
["dep"])`. For a component resource the SDK will send output values, and
as long as those output values define a superset of the dependencies in
the property map, we don't add a top-level output tracking the same
dependency set.
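For illustration, a self-contained sketch of why the two shapes are indistinguishable to the
engine: collecting the dependencies of all nested output values (as `addOutputDependencies`
below does) yields the same set for both. The `collectDeps` helper and the URN are made up for
this example:
```go
package main

import (
	"fmt"

	mapset "github.com/deckarep/golang-set/v2"
	"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
)

// collectDeps gathers the URNs referenced by any output values nested in v; it mirrors the
// addOutputDependencies walk below, but is a standalone sketch for this example.
func collectDeps(deps mapset.Set[resource.URN], v resource.PropertyValue) {
	switch {
	case v.IsOutput():
		out := v.OutputValue()
		deps.Append(out.Dependencies...)
		if out.Known {
			collectDeps(deps, out.Element)
		}
	case v.IsArray():
		for _, elem := range v.ArrayValue() {
			collectDeps(deps, elem)
		}
	}
}

func main() {
	dep := resource.URN("urn:pulumi:dev::proj::aws:ec2/instance:Instance::dep") // illustrative

	// Shape A: the dependency is recorded on the top-level output value.
	a := resource.NewOutputProperty(resource.Output{
		Element:      resource.NewArrayProperty([]resource.PropertyValue{resource.NewNumberProperty(1)}),
		Known:        true,
		Dependencies: []resource.URN{dep},
	})

	// Shape B: the dependency is recorded on the nested element instead.
	b := resource.NewArrayProperty([]resource.PropertyValue{
		resource.NewOutputProperty(resource.Output{
			Element:      resource.NewNumberProperty(1),
			Known:        true,
			Dependencies: []resource.URN{dep},
		}),
	})

	// Both shapes report the same dependency set, so the engine only adds a top-level
	// output when the nested outputs don't already cover the recorded dependencies.
	depsA, depsB := mapset.NewSet[resource.URN](), mapset.NewSet[resource.URN]()
	collectDeps(depsA, a)
	collectDeps(depsB, b)
	fmt.Println(depsA.Equal(depsB)) // true
}
```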
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [x] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
2024-03-20 09:53:33 +00:00
|
|
|
func upgradeOutputValues(
|
|
|
|
v resource.PropertyMap, propertyDependencies map[resource.PropertyKey]mapset.Set[resource.URN],
|
|
|
|
) resource.PropertyMap {
|
|
|
|
// We assume that by the time this is being called we've upgraded all Secret/Computed values to outputs. We just
|
|
|
|
// need to add the dependency information from propertyDependencies.
|
|
|
|
|
|
|
|
result := make(resource.PropertyMap)
|
|
|
|
for k, pv := range v {
|
|
|
|
if deps, has := propertyDependencies[k]; has {
|
|
|
|
currentDeps := mapset.NewSet[resource.URN]()
|
|
|
|
addOutputDependencies(currentDeps, pv)
|
|
|
|
if currentDeps.IsSuperset(deps) {
|
|
|
|
// already has the deps, just copy across
|
|
|
|
result[k] = pv
|
|
|
|
} else {
|
|
|
|
var output resource.Output
|
|
|
|
if pv.IsOutput() {
|
|
|
|
output = pv.OutputValue()
|
|
|
|
} else {
|
|
|
|
output = resource.Output{
|
|
|
|
Element: pv,
|
|
|
|
Known: true,
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Merge all the dependencies from the propertyDependencies map with any current dependencies on this
|
|
|
|
// output value.
|
|
|
|
currentDeps.Clear()
|
|
|
|
currentDeps.Append(output.Dependencies...)
|
|
|
|
currentDeps = currentDeps.Union(deps)
|
|
|
|
|
|
|
|
output.Dependencies = currentDeps.ToSlice()
|
|
|
|
result[k] = resource.NewOutputProperty(output)
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
// no deps just copy across
|
|
|
|
result[k] = pv
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return result
|
|
|
|
}
|
|
|
|
|
2024-03-15 13:01:45 +00:00
|
|
|
func extendOutputDependencies(deps []resource.URN, v resource.PropertyValue) []resource.URN {
|
|
|
|
set := mapset.NewSet(deps...)
|
|
|
|
addOutputDependencies(set, v)
|
|
|
|
return set.ToSlice()
|
|
|
|
}
|
|
|
|
|
|
|
|
func addOutputDependencies(deps mapset.Set[resource.URN], v resource.PropertyValue) {
|
2024-02-14 08:15:24 +00:00
|
|
|
if v.IsOutput() {
|
|
|
|
output := v.OutputValue()
|
|
|
|
if output.Known {
|
2024-03-15 13:01:45 +00:00
|
|
|
addOutputDependencies(deps, output.Element)
|
2024-02-14 08:15:24 +00:00
|
|
|
}
|
2024-03-15 13:01:45 +00:00
|
|
|
deps.Append(output.Dependencies...)
|
2024-02-14 08:15:24 +00:00
|
|
|
}
|
|
|
|
if v.IsResourceReference() {
|
|
|
|
ref := v.ResourceReferenceValue()
|
2024-03-15 13:01:45 +00:00
|
|
|
addOutputDependencies(deps, ref.ID)
|
2024-02-14 08:15:24 +00:00
|
|
|
}
|
|
|
|
if v.IsObject() {
|
|
|
|
for _, elem := range v.ObjectValue() {
|
2024-03-15 13:01:45 +00:00
|
|
|
addOutputDependencies(deps, elem)
|
2024-02-14 08:15:24 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
if v.IsArray() {
|
|
|
|
for _, elem := range v.ArrayValue() {
|
2024-03-15 13:01:45 +00:00
|
|
|
addOutputDependencies(deps, elem)
|
2024-02-14 08:15:24 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
if v.IsSecret() {
|
2024-03-15 13:01:45 +00:00
|
|
|
addOutputDependencies(deps, v.SecretValue().Element)
|
2024-02-14 08:15:24 +00:00
|
|
|
}
|
|
|
|
}
|