When running a Pulumi operation such as a preview or an update, Pulumi
will detect if plugins (e.g. an AWS provider) are missing and download
and install them appropriately. Presently the user experience when this
happens isn't great (see e.g.
https://github.com/pulumi/pulumi/issues/14250), making it hard for the
user to see what is happening/what is taking so long when required
plugins are large.
This commit attempts to rectify this by continuing the work in
https://github.com/pulumi/pulumi/pull/16094 that tracks download
progress using engine events. In doing so, we are able to render
multiple downloads as part of the existing "system messages" panel in
the Pulumi output, and provide a clean view of what is going on when
downloads must occur before program execution. Moreover, we generalise
that PR so that progress can be reported for any engine operation,
enabling us to play a similar trick with plugin installations (which can
also take a while).
To preserve existing behaviour, we introduce a new class for these
events, which we call "ephemeral", meaning that they are not persisted
or rendered in contexts such as diffs. Specifically,
ephemeral events are *not* sent to HTTP backends (i.e. the service), so
this commit should not require any changes to the service before merging
and releasing.
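As a rough illustration of the idea (type and field names here are
hypothetical, not the engine's actual event API), persistence might
filter on an ephemeral flag like so:

```go
package main

import "fmt"

// Hypothetical engine event with an ephemeral flag; ephemeral events are
// rendered locally (e.g. in the system messages panel) but never persisted
// or sent to an HTTP backend.
type engineEvent struct {
	Kind      string
	Ephemeral bool
	Payload   any
}

// eventsForBackend drops ephemeral events before they reach the service.
func eventsForBackend(events []engineEvent) []engineEvent {
	kept := make([]engineEvent, 0, len(events))
	for _, e := range events {
		if e.Ephemeral {
			continue
		}
		kept = append(kept, e)
	}
	return kept
}

func main() {
	events := []engineEvent{
		{Kind: "resourcePre", Payload: "urn:..."},
		{Kind: "pluginDownloadProgress", Ephemeral: true, Payload: "42%"},
	}
	fmt.Println(len(eventsForBackend(events))) // 1
}
```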
Fixes #14250. Closes #16094. Closes #16937.
https://github.com/user-attachments/assets/f0fac5e9-b3c8-4ea7-9cb7-075fc4b625d9
https://github.com/user-attachments/assets/7a761aa9-10ad-4f66-afa3-e4550b4553a5
We want to introduce debugging support for Pulumi programs. This PR
implements the engine changes necessary for that. Namely, we pass
information to the language runtime about whether we expect a debugger
to be started, and introduce a way for the language runtime plugins to
report to the engine that we're waiting for a debugger to attach. The
language runtime is expected to include the information the user needs
to attach to the debugger, as well as a shortened message.
The idea is that the configuration can be picked up by an IDE, so the
debugger can attach automatically. Meanwhile, the short message should
contain enough information to attach a debugger manually, while being
short enough to be displayed to the user in the CLI output (commonly
either the debugger's port or the PID of the process being debugged).
The implementation of the CLI flags and each of the language runtimes
will follow in subsequent PRs.
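The shape of what the runtime reports might look roughly like this (a
sketch only; the actual RPC message and field names may differ):

```go
package main

import "fmt"

// Illustrative sketch of what a language runtime could report to the engine
// while waiting for a debugger to attach. Field names are assumptions.
type debuggerAttachInfo struct {
	// Machine-readable attach configuration an IDE can pick up to attach
	// automatically (e.g. a launch.json-style payload).
	Config map[string]any
	// Short human-readable message for the CLI, typically the debug port or
	// the PID of the process being debugged.
	Message string
}

func main() {
	info := debuggerAttachInfo{
		Config:  map[string]any{"type": "go", "mode": "remote", "port": 57134},
		Message: "Waiting for debugger to attach on port 57134",
	}
	fmt.Println(info.Message)
}
```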
I tried adding a test for this, but I'm not sure it's possible with our
current testing infrastructure. To do this properly, we'd need to make
an RPC call to the engine, but we don't have that available in the
lifecycle tests (please let me know if I'm missing something). Once more
of the feature is implemented we might be able to implement an
integration test for it. (Not straightforward either, as we'll have to
tell the debugger to continue, but that should be more doable.)
Another thing that's not clear to me: @EronWright mentions this could be
used for MLC/provider debugging as well. However, I'm not seeing how
that's going to work. MLCs/providers are run as binary plugins that we
don't compile from pulumi/pulumi, so we wouldn't necessarily even know
which debugger to launch them under without a bunch of additional
configuration. That configuration might be better placed in a shim
around the program (or we could keep debugging the way we currently do:
launch the program and then let the engine attach to it).
---------
Co-authored-by: Eron Wright <eron@pulumi.com>
Co-authored-by: Julien <julien@caffeine.lu>
Presently, the behaviour of diffing during refresh steps is incomplete,
returning only an "output diff" that presents the changes in outputs.
This commit changes refresh steps so that:
* they compute a diff similar to the one that would be computed if a
`preview` were run immediately after the refresh, which is more
typically what users expect and want; and
* `IgnoreChanges` resource options are respected when performing the new
desired-state diffs, so that property additions or changes reported by a
refresh can be ignored.
In particular, `IgnoreChanges` can now be used to acknowledge that part
or all of a resource may change in the provider, but the user is OK with
this and doesn't want to be notified about it during a refresh.
Importantly, this means that the diff won't be reported, but also that
the changes won't be applied to state.
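For example, a program using the Go SDK might opt a drifting property
out of refresh diffs like this (a sketch; the AWS provider import and
the ignored property are illustrative):

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Ignore provider-side changes to "tags": a refresh will neither
		// report them as a diff nor write them back into the state.
		_, err := s3.NewBucket(ctx, "my-bucket", nil,
			pulumi.IgnoreChanges([]string{"tags"}))
		return err
	})
}
```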
The implementation covers the following:
* A diff is computed using the inputs from the program and then
inverting the result, since in the case of a refresh the diff is being
driven by the provider side and not the program. This doesn't change
what is stored back into the state, but it does produce a diff that is
more aligned with the "true changes to the desired state".
* `IgnoreChanges` resource options are now stored in state, so that this
information can be used in refresh operations that do not have access
to/run the program.
* In the context of a refresh operation, `IgnoreChanges` applies to
*both* input and output properties. This differs from the behaviour of a
normal update operation, where `IgnoreChanges` only considers input
properties.
* The special `"*"` value for `IgnoreChanges` can be used to ignore all
properties. It _also_ ignores the case where the resource cannot be
found in the provider, and instead keeps the resource intact in state
with its existing input and output properties.
Because the program is not run for refresh operations, `IgnoreChanges`
options must be applied separately before a refresh takes place. This
can be accomplished using e.g. a `pulumi up` that applies the options
prior to the refresh. We should investigate providing a `pulumi state
set ...`-like CLI command to make these sorts of changes directly to the
state.
For use cases relying on the legacy refresh diff provider, the
`PULUMI_USE_LEGACY_REFRESH_DIFF` environment variable can be set, which
will disable desired-state diff computation. We only need to perform
checks in `RefreshStep.{ResultOp,Apply}`, since downstream code will
work correctly based on the presence or absence of a `DetailedDiff` in
the step.
### Notes
- https://github.com/pulumi/pulumi/issues/16144 affects some of these
cases, though it's technically orthogonal.
- https://github.com/pulumi/pulumi/issues/11279 is another technically
orthogonal issue: many providers (at least TF-bridged ones) do not
report back changes to input properties on Read when the input property
(or property path) was missing from the inputs. This again is
technically orthogonal, but it leads to cases that still appear "wrong"
in terms of what is stored back into the state - though no worse than
before this change.
- Azure Native doesn't seem to handle `ignoreChanges` passed to Diff, so
the ability to ignore changes on refresh doesn't currently work for
Azure Native.
### Fixes
* Fixes #16072
* Fixes #16278
* Fixes #16334
* Not quite #12346, but likely replaces the need for that
Co-authored-by: Will Jones <will@sacharissa.co.uk>
# Description
SerializePropertyValue needed a `context.Context` object to pass to the
`config.Encrypter`. It was using `context.TODO()`; this change instead
accepts a context as a parameter and lifts that up to
SerializeProperties, SerializeResource, SerializeOperation, and
SerializeDeployment.
A few call sites for those methods already had a context on hand, and
they now pass that context. The other call sites now use
`context.TODO()`; we should continue to iterate in this area to ensure
everywhere that needs a context has one passed in.
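A minimal sketch of the pattern (illustrative stand-in types, not the
real pkg/resource/stack signatures):

```go
package main

import (
	"context"
	"fmt"
)

// encrypter mirrors the part of config.Encrypter that needs a context.
type encrypter interface {
	EncryptValue(ctx context.Context, plaintext string) (string, error)
}

type fakeEncrypter struct{}

func (fakeEncrypter) EncryptValue(_ context.Context, s string) (string, error) {
	return "ciphertext(" + s + ")", nil
}

// serializeProperty now accepts a context instead of reaching for
// context.TODO() internally; callers that already hold one pass it down.
func serializeProperty(ctx context.Context, secret string, enc encrypter) (string, error) {
	return enc.EncryptValue(ctx, secret)
}

func main() {
	out, err := serializeProperty(context.Background(), "hunter2", fakeEncrypter{})
	fmt.Println(out, err)
}
```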
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
Turn on the golangci-lint exhaustive linter. This is the first step
towards catching more missing cases during development rather than
in tests, or in production.
This might be best reviewed commit-by-commit, as the first commit turns
on the linter with the `default-signifies-exhaustive: true` option set,
which requires far fewer changes in the current codebase.
I think it's worth doing the second commit as well, as that's where the
real benefit lies, even though it results in a bit more churn: it means
all the `switch` statements are covered, which isn't the case after the
first commit, since we have a lot of `default` cases that just call
`assert.Fail`.
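For illustration, this is the kind of `switch` the exhaustive linter
cares about (a toy example, not code from this repository):

```go
package main

import "fmt"

type opKind int

const (
	opCreate opKind = iota
	opUpdate
	opDelete
)

// With `default-signifies-exhaustive: true` (the first commit), the default
// case below is enough to satisfy the linter. With the option off (the
// second commit), the linter additionally reports that opDelete is missing.
func describe(k opKind) string {
	switch k {
	case opCreate:
		return "create"
	case opUpdate:
		return "update"
	default:
		return fmt.Sprintf("unhandled op %d", k)
	}
}

func main() { fmt.Println(describe(opDelete)) }
```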
Fixes #14601
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
When introducing `PolicyLoadEvent` I missed handling it in some cases.
Handle it now to avoid panics where the event was not handled properly.
Fixes #14598
## Checklist
- [ ] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
Similar to how https://github.com/pulumi/pulumi/pull/13953 moved some
code from sdk/go/common to /pkg: this display code is only used in /pkg,
so moving it is another simple reduction of what's in sdk/go/common.
These changes add support for passing source position information in
gRPC metadata and recording the source position that corresponds to a
resource registration in the statefile.
Enabling source position information in the resource model can provide
substantial benefits, including but not limited to:
- Better errors from the Pulumi CLI
- Go-to-definition for resources in state
- Editor integration for errors, etc. from `pulumi preview`
Source positions are (file, line) or (file, line, column) tuples
represented as URIs. The line and column are stored in the fragment
portion of the URI as "line(,column)?". The scheme of the URI and the
form of its path component depends on the context in which it is
generated or used:
- During an active update, the URI's scheme is `file` and paths are
absolute filesystem paths. This allows consumers to easily access
arbitrary files that are available on the host.
- In a statefile, the URI's scheme is `project` and paths are relative
to the project root. This allows consumers to resolve source positions
relative to the project file in different contexts irrespective of the
location of the project itself (e.g. given a project-relative path and
the URL of the project's root on GitHub, one can build a GitHub URL for
the source position).
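To make the format concrete, here is a sketch of building such URIs
(the helper names are made up for illustration; only the URI shapes are
described above):

```go
package main

import (
	"fmt"
	"net/url"
)

// fileSourcePosition builds the form used during an active update:
// file scheme, absolute path, "line(,column)?" in the fragment.
func fileSourcePosition(path string, line, column int) string {
	u := url.URL{Scheme: "file", Path: path}
	u.Fragment = fmt.Sprintf("%d,%d", line, column)
	return u.String()
}

// projectSourcePosition builds the statefile form: project scheme with a
// path relative to the project root.
func projectSourcePosition(rel string, line int) string {
	u := url.URL{Scheme: "project", Path: "/" + rel}
	u.Fragment = fmt.Sprintf("%d", line)
	return u.String()
}

func main() {
	fmt.Println(fileSourcePosition("/home/me/proj/index.ts", 12, 5)) // file:///home/me/proj/index.ts#12,5
	fmt.Println(projectSourcePosition("index.ts", 12))               // project:///index.ts#12
}
```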
During an update, source position information may be attached to gRPC
calls as "source-position" metadata. This allows arbitrary calls to be
associated with source positions without changes to their protobuf
payloads. Modifying the protobuf payloads is also a viable approach, but
is somewhat more invasive than attaching metadata, and requires changes
to every call signature.
Source positions should reflect the position in user code that initiated
a resource model operation (e.g. the source position passed with
`RegisterResource` for `pet` in the example above should be the source
position in `index.ts`, _not_ the source position in the Pulumi SDK). In
general, the Pulumi SDK should be able to infer the source position of
the resource registration, as the relationship between a resource
registration and its corresponding user code should be static per SDK.
Source positions in state files will be stored as a new `registeredAt`
property on each resource. This property is optional.
`Created`: Created tracks when the remote resource was first added to state by pulumi. Checkpoints prior to early 2023 do not include this. (Create, Import)
`Modified`: Modified tracks when the resource state was last altered. Checkpoints prior to early 2023 do not include this. (Create, Import, Read, Refresh, Update)
When serialized, these timestamps follow RFC 3339 with nanosecond precision (captured by a test case).
https://pkg.go.dev/time#RFC3339
Note: Older versions of pulumi may strip these fields when modifying the state.
For future expansion, when we inevitably need to track other timestamps, we'll add a new "operationTimestamps" field (or something similarly named that clarifies these are timestamps of the actual Pulumi operations).
operationTimestamps: {
created: ...,
updated: ...,
imported: ...,
}
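A quick Go sketch of the serialization format (RFC 3339 with
nanoseconds):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2023, 1, 2, 15, 4, 5, 123456789, time.UTC)
	modified := created.Add(90 * time.Minute)
	// RFC 3339 with nanosecond precision, e.g. 2023-01-02T15:04:05.123456789Z
	fmt.Println(created.Format(time.RFC3339Nano))
	fmt.Println(modified.Format(time.RFC3339Nano))
}
```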
Fixes https://github.com/pulumi/pulumi/issues/12022
In many cases there is no need to delete resources if the container
resource is going to be deleted as well.
A few examples:
* Database objects (roles, tables) when the database is being deleted
* Cloud IAM bindings when the user itself is being deleted
This helps with:
* Speeding up the deletion process
* Removing unnecessary calls to providers
* Avoiding failed deletions when the pulumi user running the
plan has access to the container resource but not the contained
ones
To avoid deleting contained resources, set the `DeletedWith` resource
option to the container resource.
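For example, in the Go SDK (a sketch; the AWS resources and import
versions are illustrative):

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		bucket, err := s3.NewBucket(ctx, "logs", nil)
		if err != nil {
			return err
		}
		// When the bucket is deleted, skip the individual delete call for
		// the object: it disappears together with its container resource.
		_, err = s3.NewBucketObject(ctx, "readme", &s3.BucketObjectArgs{
			Bucket: bucket.ID(),
		}, pulumi.DeletedWith(bucket))
		return err
	})
}
```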
TODO:
Should we support `DeletedWith` together with pending deletes?
A special case arises when the contained resource is already marked as
pending deletion but we now want to delete the container resource; in
that case there is ultimately no need to delete the contained resource
anymore.
* Remove sequenceNumber from protobufs
* Regenerate protobufs
* Remove setting and reading of sequence number in Check
* Remove sequence numbers from state
* Replace sequenceNumber with randomSeed in Check
* Fix tests
* Add to CHANGELOG
* Moving previewDigest and exporting it, closes #9851
* Moving previewDigest and exporting it, closes #9851
* Updating changelog-pending
* Go Mod Tidy
* replacing to local
* more go.mod changes
* reseting go mod
* full move
* Fixing golint
* No go.mod changes needed
This reverts commit 17068e9b49.
It turns out `NormalizeURNReferences` needs this in the state to fix up URNs while the deployment is running. It feels like we should be able to thread this information through to the snapshot manager another way, but it's not obvious how. It's also tricky to test, because snapshot code differs massively between unit tests and proper runs.
* Plumb in basics of retainOnDelete
* Add test
* Make test pass
* Add to changelog
* Add to API list
* lint
* Add semicolon
* Fix Infof call
* Fix method call
* new delete mode work
* cleanup
* protectTest
* Fix up test
* Fix replace
* Fix up test
* Warn on drop
* lint
* Change to just a bool flag
* Regenerate proto
* Rework to just a bool flag with no error
* Remove old comment
* Fix C# typo
* rm extra space
* Add missing semicolon
* Reformat python
* False typo
* Fix typo in js function name
* Reword docs
* lint
* Read doesn't need retainOnDelete
* Start adding SequenceNumber
* Start adding sequence number to state
* New generate functions
* notes
* Don't increment if unknown
* Deterministic name test
* Check replace
* typo
* lint
* Increment on targeted replace
* Some comments and external fixes
* Add test for resetting sequence number after replace
* Reset sequence numbers after replace
* assert check we never pass -1 to check
* Add to dynamic providers
* lint
* Add to changelog
Certain operations in `engine/diff` mutate engine events during display.
This mutation can occur concurrently with the serialization of the event
for persistence, which causes a panic in the CLI. These changes fix the
offending code and also copy each engine event before persisting it, in
order to guard against future issues.
- Remove `Info` from `Source`. This method was not used.
- Remove `Stack` from `EvalSource`. This method was not used.
- Remove `Type` and `URN` from `Step`. These values are available via
`Res().URN.Type()` and `Res().URN`, respectively. This removes the
possibility of inconsistencies between the type, URN, and state of the
resource associated with a `Step`.
- Remove `URN` from `StepEventMetadata`.
After importing some resources, and running a second update with the
import still applied, an unexpected replace would occur. This wouldn't
happen for the vast majority of resources, but for some it would.
It turns out that the resources that trigger this are ones that use a
different format of identifier for the import input than they do for the
ID property.
Before this change, we would trigger an import-replacement when an
existing resource's ID property didn't match the import property, which
would be the case for the small set of resources where the input
identifier is different than the ID property.
To avoid this, we now store the `importID` in the statefile, and
compare that to the import property instead of comparing the ID.
* Make `async:true` the default for `invoke` calls (#3750)
* Switch away from native grpc impl. (#3728)
* Remove usage of the 'deasync' library from @pulumi/pulumi. (#3752)
* Only retry as long as we get unavailable back. Anything else continues. (#3769)
* Handle all errors for now. (#3781)
* Do not assume --yes was present when using pulumi in non-interactive mode (#3793)
* Upgrade all paths for sdk and pkg to v2
* Backport C# invoke classes and other recent gen changes (#4288)
Adjust C# generation
* Replace IDeployment with a sealed class (#4318)
Replace IDeployment with a sealed class
* .NET: default to args subtype rather than Args.Empty (#4320)
* Adding system namespace for Dotnet code gen
This is required for using Obsolete attributes for deprecations
```
Iam/InstanceProfile.cs(142,10): error CS0246: The type or namespace name 'ObsoleteAttribute' could not be found (are you missing a using directive or an assembly reference?) [/Users/stack72/code/go/src/github.com/pulumi/pulumi-aws/sdk/dotnet/Pulumi.Aws.csproj]
Iam/InstanceProfile.cs(142,10): error CS0246: The type or namespace name 'Obsolete' could not be found (are you missing a using directive or an assembly reference?) [/Users/stack72/code/go/src/github.com/pulumi/pulumi-aws/sdk/dotnet/Pulumi.Aws.csproj]
```
* Fix the nullability of config type properties in C# codegen (#4379)
This package's flags conflict with those in google/glog. Replace all
references to this package with references to
pulumi/pulumi/pkg/util/logging, and change that package to explicitly
call `flag.CommandLine.Parse` with an empty slice.
This should make it much easier to consume these packages in downstream
repos that have direct or indirect dependencies on google/glog.
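A sketch of the kind of initialization described (not the exact code in
pkg/util/logging):

```go
package logging

import (
	"flag"
)

func init() {
	// glog-style flags are registered on flag.CommandLine at package init
	// time. Parsing an empty argument list marks the flag set as parsed so
	// that logging before the application's own flag parsing doesn't
	// complain, and downstream consumers don't inherit conflicting flags.
	_ = flag.CommandLine.Parse([]string{})
}
```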
* Plumbing the custom timeouts from the engine to the providers
* Plumbing the CustomTimeouts through to the engine and adding test to show this
* Change the provider proto to include individual timeouts
* Plumbing the CustomTimeouts from the engine through to the Provider RPC interface
* Change how the CustomTimeouts are sent across RPC
These errors were spotted in testing. We can now see that the timeout
information is arriving in the `RegisterResourceRequest`:
```
req=&pulumirpc.RegisterResourceRequest{
Type: "aws:s3/bucket:Bucket",
Name: "my-bucket",
Parent: "urn:pulumi:dev::aws-vpc::pulumi:pulumi:Stack::aws-vpc-dev",
Custom: true,
Object: &structpb.Struct{},
Protect: false,
Dependencies: nil,
Provider: "",
PropertyDependencies: {},
DeleteBeforeReplace: false,
Version: "",
IgnoreChanges: nil,
AcceptSecrets: true,
AdditionalSecretOutputs: nil,
Aliases: nil,
CustomTimeouts: &pulumirpc.RegisterResourceRequest_CustomTimeouts{
Create: 300,
Update: 400,
Delete: 500,
XXX_NoUnkeyedLiteral: struct {}{},
XXX_unrecognized: nil,
XXX_sizecache: 0,
},
XXX_NoUnkeyedLiteral: struct {}{},
XXX_unrecognized: nil,
XXX_sizecache: 0,
}
```
* Changing the design to use strings
* CHANGELOG entry to include the CustomTimeouts work
* Changing custom timeouts to be passed around the engine as a converted value
We don't want to pass around strings - the user can provide one, but we
want to make the engine aware of the timeout in seconds as a float64
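For example, converting the user-supplied string into the seconds value
the engine passes around might look like this (a sketch):

```go
package main

import (
	"fmt"
	"time"
)

// parseTimeout converts a user-facing duration string (e.g. "5m", "1h30m")
// into the float64 seconds representation used inside the engine.
func parseTimeout(s string) (float64, error) {
	d, err := time.ParseDuration(s)
	if err != nil {
		return 0, fmt.Errorf("invalid timeout %q: %w", s, err)
	}
	return d.Seconds(), nil
}

func main() {
	secs, err := parseTimeout("5m")
	fmt.Println(secs, err) // 300 <nil>
}
```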
These changes make a subtle but critical adjustment to the process the
Pulumi engine uses to determine whether or not a difference exists
between a resource's actual and desired states, and adjust the way this
difference is calculated and displayed accordingly.
Today, the Pulumi engine gets the first chance to decide whether or not
there is a difference between a resource's actual and desired states. It
does this by comparing the current set of inputs for a resource (i.e.
the inputs from the running Pulumi program) with the last set of inputs
used to update the resource. If there is no difference between the old
and new inputs, the engine decides that no change is necessary without
consulting the resource's provider. Only if there are changes does the
engine consult the resource's provider for more information about the
difference. This can be problematic for a number of reasons:
- Not all providers do input-input comparison; some do input-state
comparison
- Not all providers are able to update the last deployed set of inputs
when performing a refresh
- Some providers--either intentionally or due to bugs--may see changes
in resources whose inputs have not changed
All of these situations are confusing at the very least, and the first
is problematic with respect to correctness. Furthermore, the display
code only renders diffs it observes rather than rendering the diffs
observed by the provider, which can obscure the actual changes detected
at runtime.
These changes address both of these issues:
- Rather than comparing the current inputs against the last inputs
before calling a resource provider's Diff function, the engine calls
the Diff function in all cases.
- Providers may now return a list of properties that differ between the
requested and actual state and the way in which they differ. This
information will then be used by the CLI to render the diff
appropriately. A provider may also indicate that a particular diff is
between old and new inputs rather than old state and new inputs.
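Illustratively, a provider-reported detailed diff might be represented
something like this (a sketch using stand-in types, not the real plugin
interfaces):

```go
package main

import "fmt"

type diffKind int

const (
	diffAdd diffKind = iota
	diffUpdate
	diffDelete
)

// propertyDiff records, per property, how it differs and whether the
// difference is between old and new inputs or between old state and new
// inputs.
type propertyDiff struct {
	Kind      diffKind
	InputDiff bool // true: old inputs vs new inputs; false: old state vs new inputs
}

func main() {
	detailed := map[string]propertyDiff{
		"tags":        {Kind: diffUpdate, InputDiff: true},
		"bucketName":  {Kind: diffUpdate},
		"forceDelete": {Kind: diffAdd, InputDiff: true},
	}
	for path, d := range detailed {
		fmt.Printf("%s: kind=%d inputDiff=%v\n", path, d.Kind, d.InputDiff)
	}
}
```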
Fixes #2453.
Adds a new resource option `aliases` which can be used to rename a resource. When making a breaking change to the name or type of a resource or component, the old name can be added to the list of `aliases` for a resource to ensure that existing resources will be migrated to the new name instead of being deleted and replaced with the new named resource.
There are two key places this change is implemented.
The first is the step generator in the engine. When computing whether there is an old version of a registered resource, we now take into account the aliases specified on the registered resource. That is, we first look up the resource by its new URN in the old state, and then by any aliases provided (in order). This can allow the resource to be matched as a (potential) update to an existing resource with a different URN.
The second is the core `Resource` constructor in the JavaScript (and soon Python) SDKs. This change ensures that when a parent resource is aliased, that all children implicitly inherit corresponding aliases. It is similar to how many other resource options are "inherited" implicitly from the parent.
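For illustration, here is how the option can be used from the Go SDK (a
sketch; this PR itself implements the JavaScript SDK, and the AWS
import path is illustrative):

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// The bucket used to be registered as "legacy-bucket"; aliasing the
		// old name lets the engine match the existing state instead of
		// deleting and recreating the resource under the new URN.
		_, err := s3.NewBucket(ctx, "new-bucket", nil,
			pulumi.Aliases([]pulumi.Alias{{Name: pulumi.String("legacy-bucket")}}))
		return err
	})
}
```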
Five specific scenarios are explicitly tested as part of this PR:
1. Renaming a resource
2. Adopting a resource into a component (as the owner of both component and consumption codebases)
3. Renaming a component instance (as the owner of the consumption codebase without changes to the component)
4. Changing the type of a component (as the owner of the component codebase without changes to the consumption codebase)
5. Combining (1) and (3) to make both changes to a resource at the same time
If --suppress-outputs is passed to `pulumi preview --json`, we
should not emit the stack outputs. This change fixes pulumi/pulumi#2765.
Also adds a test case for this plus some variants of updates.
We were dropping new and old states on the floor instead of including
them as part of the previewed operation, due to a logic error (we want
to append them when there are no errors from serialization, not when
there are errors).
When constructing a Deployment (which is a plaintext representation of
a Snapshot), ensure that we encrypt secret values. To do so, we
introduce a new type `secrets.Manager` which is able to encrypt and
decrypt values. In addition, it is able to reflect information about
itself that can be stored in the deployment such that we can
deserialize the deployment into a snapshot (decrypting the values in
the process) without external knowledge about how it was encrypted.
The ability to do this is important for allowing stack references to
work, since two stacks may not use the same manager (or they may use
the same type of manager, but with different state).
The state value is stored in plaintext in the deployment, so it **must
not** contain sensitive data.
A sample manager, which just base64 encodes and decodes strings, is
provided, as it is useful for testing. We will allow the manager to be
varied soon.
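A minimal sketch of such a base64 manager (method names are assumptions
for illustration, not the exact `secrets.Manager` API):

```go
package main

import (
	"context"
	"encoding/base64"
	"fmt"
)

// Illustrative encrypter/decrypter pair; the real manager also exposes its
// type and serializable state so a deployment can later be decrypted without
// external knowledge of how it was encrypted.
type base64Secrets struct{}

func (base64Secrets) EncryptValue(_ context.Context, plaintext string) (string, error) {
	return base64.StdEncoding.EncodeToString([]byte(plaintext)), nil
}

func (base64Secrets) DecryptValue(_ context.Context, ciphertext string) (string, error) {
	b, err := base64.StdEncoding.DecodeString(ciphertext)
	return string(b), err
}

func main() {
	m := base64Secrets{}
	ct, _ := m.EncryptValue(context.Background(), "s3cret")
	pt, _ := m.DecryptValue(context.Background(), ct)
	fmt.Println(ct, pt)
}
```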
This change adds a --json flag to the preview command, enabling
basic JSON serialization of preview plans. This effectively flattens
the engine event stream into a preview structure that contains a list
of steps, diagnostics, and summary information. Each step contains
the deep serialization of resource state, in addition to metadata about
the step, such as what kind of operation it entails.
This is a partial implementation of pulumi/pulumi#2390. In particular,
we only support --json on the `preview` command itself, and not `up`,
meaning that it isn't possible to serialize the result of an actual
deployment yet (thereby limiting what you can do with outputs, etc).
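The flattened structure is roughly of this shape (a hypothetical sketch;
the actual serialized field names may differ):

```go
// Hypothetical sketch of the flattened preview digest.
package preview

type Digest struct {
	Steps         []Step         `json:"steps"`
	Diagnostics   []Diagnostic   `json:"diagnostics"`
	ChangeSummary map[string]int `json:"changeSummary"` // e.g. counts per operation
}

type Step struct {
	Op       string                 `json:"op"` // e.g. "create", "update", "same"
	URN      string                 `json:"urn"`
	NewState map[string]interface{} `json:"newState"` // deep-serialized resource state
}

type Diagnostic struct {
	URN      string `json:"urn"`
	Message  string `json:"message"`
	Severity string `json:"severity"`
}
```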