The Version field was inadvertently dropped when sending an import
request to the service. Now that we require the Version field to be
set in deployments, this was causing errors.
During a deployment, we end up making a bunch of PATCH/PUT/POST style
REST requests. For these calls, it should be safe to retry the
operation if there was a hiccup during our REST call, so mark them as
retryable.
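Roughly, the idea is an explicit per-call opt-in rather than blanket
retries, since PATCH/PUT/POST are only safe to re-issue when we know the
service treats the specific operation idempotently. A minimal sketch,
with illustrative names rather than the actual client API:
```
package client

import "net/http"

// callOptions is a hypothetical per-request option bag.
type callOptions struct {
	// retryAllMethods opts this call into retries even though its method is
	// not idempotent in general; the caller asserts that the service treats
	// this specific operation as safe to re-issue.
	retryAllMethods bool
}

// isRetryable reports whether a failed request may be re-sent: GETs always,
// other methods only when explicitly marked retryable.
func isRetryable(req *http.Request, opts callOptions) bool {
	return req.Method == http.MethodGet || opts.retryAllMethods
}
```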
The newly added `pulumi config refresh` updates your local copy of the
Pulumi.<stack-name>.yaml file to have the same configuration as the
most recent deployment in the cloud.
This can be used in a variety of ways. One place we plan to use it is
in automation to clean up "leaked" stacks we have in CI. With these
changes you'll now be able to do the following:
```
$ cd $(mktemp -d)
$ echo -e "name: who-cares\nruntime: nodejs" > Pulumi.yaml
$ pulumi stack select <leaked-stack-name>
$ pulumi config refresh -f
$ pulumi destroy --force
```
Having a simpler gesture for the above is something we'll want to do
long term (we should be able to support `pulumi destroy <stack-name>`
from a completely empty folder; today you need a Pulumi.yaml file
present, even if its contents don't matter).
But this gets us a little closer to where we want to be and introduces
a helpful primitive in the system.
Contributes to #814
These changes enable tracing of Pulumi API calls.
The span with which to associate an API call is passed via a
`context.Context` parameter. This required plumbing a
`context.Context` parameter through a rather large number of APIs,
especially in the backend.
In general, all API calls are associated with a new root span that
exists for essentially the entire lifetime of an invocation of the
Pulumi CLI. There were a few places where the plumbing got a bit hairier
than I was willing to address with these changes; I've used
`context.Background()` in these instances. API calls that receive this
context will create new root spans, but will still be traced.
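The wiring looks roughly like the following sketch (the function names
are illustrative; only the OpenTracing calls are real):
```
package tracing

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
)

// run creates a root span for the lifetime of the CLI invocation and threads
// it through the context passed to every API call.
func run() error {
	span := opentracing.StartSpan("pulumi-cli")
	defer span.Finish()
	ctx := opentracing.ContextWithSpan(context.Background(), span)
	return doAPICall(ctx, "getStack")
}

// doAPICall attaches a child span when ctx carries one; when handed a bare
// context.Background() it starts a new root span, so the call is still traced.
func doAPICall(ctx context.Context, operation string) error {
	span, _ := opentracing.StartSpanFromContext(ctx, operation)
	defer span.Finish()
	// ... perform the HTTP request here ...
	return nil
}
```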
Refactoring to support "preview" and "update" as part of the same
operation interacted poorly with deploying into a PPC via the service.
Per Pat, this is the quickest fix that gets us off the floor.
Part of #1301
This command cancels a stack's currently running update, if any. It can
be used to recover from the scenario in which an update is aborted
without marking the running update as complete. Once an update has been
cancelled, it is likely that the affected stack will need to be repaired
via a pair of export/import commands before future updates can succeed.
This is part of #1077.
This change implements a `pulumi refresh` command. It operates a bit
like `pulumi update` and friends, in that it supports `--preview` and
`--diff` along with the usual flags, and will update your checkpoint.
It works through substitution of the deploy.Source abstraction, which
generates a sequence of resource registration events. This new
deploy.RefreshSource takes in a prior checkpoint and will walk it,
refreshing the state via the associated resource providers by invoking
Read for each resource encountered, and merging the resulting state with
the prior checkpoint, to yield a new resource.Goal state. This state is
then fed through the engine in the usual ways with a few minor caveats:
namely, although the engine must generate steps for the logical
operations (permitting us to get nice summaries, progress, and diffs),
it mustn't actually carry them out because the state being imported
already reflects reality (a deleted resource has *already* been deleted,
so of course the engine need not perform the deletion). The diffing
logic also needs to know how to treat the case of refresh slightly
differently, because we are going to be diffing outputs and not inputs.
Note that support for managed stacks is not yet complete, since that
requires updates to the service to support a refresh endpoint. That
will be coming soon ...
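As a rough illustration of the flow, here is a sketch using simplified
placeholder types rather than the real deploy.Source and provider
interfaces:
```
package refresh

// LiveReader stands in for the provider operation that reads a resource's
// current state from the cloud (the engine invokes the provider's Read).
type LiveReader interface {
	Read(id string) (outputs map[string]interface{}, exists bool, err error)
}

// State is a simplified stand-in for a checkpointed resource.
type State struct {
	ID      string
	Inputs  map[string]interface{}
	Outputs map[string]interface{}
}

// refresh walks the prior checkpoint, reads the live state of each resource,
// and merges it back to yield the new goal state. Resources that no longer
// exist are simply dropped: the deletion has already happened, so the engine
// records it without performing it.
func refresh(prior []State, r LiveReader) ([]State, error) {
	var refreshed []State
	for _, s := range prior {
		outs, exists, err := r.Read(s.ID)
		if err != nil {
			return nil, err
		}
		if !exists {
			continue // already deleted in the cloud
		}
		s.Outputs = outs
		refreshed = append(refreshed, s)
	}
	return refreshed, nil
}
```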
This change introduces support for using the cloud backend when
`pulumi init` has not been run. When this is the case, we use the new
identity model, where a stack is referenced by an owner and a stack
name only.
There are a few things going on here:
- We add a new `--owner` flag to `pulumi stack init` that lets you
control what account a stack is created in.
- When listing stacks, we show stacks owned by you and any
organizations you are a member of. So, for example, I can do:
* `pulumi stack init my-great-stack`
* `pulumi stack init --owner pulumi my-great-stack`
to create a stack owned by my user and one owned by my
organization. When `pulumi stack ls` is run, you'll see both
stacks (since they are part of the same project).
- When spelling a stack on the CLI, an owner can optionally be
specified by prefixing the stack name with an owner name. For
example, `my-great-stack` means the stack `my-great-stack` owned by
the currently logged-in user, whereas `pulumi/my-great-stack` would
be the stack owned by the `pulumi` organization (see the sketch
after this list).
- `--all` can be passed to `pulumi stack ls` to see *all* stacks you
have access to, not just stacks tied to the current project.
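For the owner-prefixed spelling, the parsing amounts to something like
the sketch below (illustrative, not the actual CLI code):
```
package backend

import "strings"

// parseStackReference splits an optionally owner-qualified stack name:
// "my-great-stack" resolves to the currently logged-in user, while
// "pulumi/my-great-stack" resolves to the "pulumi" organization.
func parseStackReference(ref, currentUser string) (owner, name string) {
	if i := strings.Index(ref, "/"); i >= 0 {
		return ref[:i], ref[i+1:]
	}
	return currentUser, ref
}
```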
This field indicates the schema of the serialized deployment. It
behaves identically to the `Version` field of
`PatchUpdateCheckpointRequest`.
This is part of pulumi/pulumi-service#1046
And make the deployment an opaque JSON message. The version field
indicates the schema of the deployment. A missing version field will
behave as if the version was set to `1`. A version of `1` indicates that
the serialized deployment has the `DeploymentV1` schema.
This is part of pulumi/pulumi-service#1046.
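A sketch of the versioned envelope and the decoding rule described above
(field and type names are illustrative):
```
package apitype

import (
	"encoding/json"
	"fmt"
)

// UntypedDeployment carries the deployment as opaque JSON; Version selects
// the schema of that payload.
type UntypedDeployment struct {
	Version    int             `json:"version,omitempty"`
	Deployment json.RawMessage `json:"deployment"`
}

// DeploymentV1 stands in for the version-1 deployment schema.
type DeploymentV1 struct {
	// ... resources, manifest, etc. ...
}

// DecodeDeployment treats a missing version as 1 and rejects versions it
// does not understand.
func DecodeDeployment(d UntypedDeployment) (*DeploymentV1, error) {
	version := d.Version
	if version == 0 {
		version = 1
	}
	if version != 1 {
		return nil, fmt.Errorf("unsupported deployment version %d", version)
	}
	var v1 DeploymentV1
	if err := json.Unmarshal(d.Deployment, &v1); err != nil {
		return nil, err
	}
	return &v1, nil
}
```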
We've seen failures in CI where DNS lookups fail, causing our
operations against the service to fail, as well as other sorts of
timeouts.
Add a set of helper methods in a new httputil package that help us
retry these operations, and then update our client library to use
them when making GET requests. We also provide a way for non-GET
requests to be retried, and use this when updating a lease (since it
is safe to send that request multiple times).
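The shape of the helper is roughly the following (a minimal sketch, not
the exact signature):
```
package httputil

import (
	"net/http"
	"time"
)

// DoWithRetry re-issues a request on transport-level failures (DNS errors,
// timeouts) with a simple backoff. Callers should only use it for requests
// they know are safe to retry; requests with bodies would additionally need
// req.GetBody to recreate the body on each attempt.
func DoWithRetry(client *http.Client, req *http.Request, attempts int) (*http.Response, error) {
	var resp *http.Response
	var err error
	for i := 0; i < attempts; i++ {
		resp, err = client.Do(req)
		if err == nil {
			return resp, nil
		}
		time.Sleep(time.Duration(i+1) * time.Second) // linear backoff between attempts
	}
	return nil, err
}
```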
As part of the new identity model, we're going to use tags on stacks
to record metadata, so let's create a bag for that, as well as a few
well-known tag names that map to metadata we know we'll want to set.
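Something along these lines (the well-known names are illustrative, not
necessarily what the service uses):
```
package apitype

// StackTagName names a piece of stack metadata.
type StackTagName string

const (
	// ProjectNameTag records the name of the project the stack belongs to.
	ProjectNameTag StackTagName = "pulumi:project"
	// ProjectRuntimeTag records the project's language runtime.
	ProjectRuntimeTag StackTagName = "pulumi:runtime"
)

// StackTags is the bag of metadata attached to a stack.
type StackTags map[StackTagName]string
```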
This endpoint is relatively expensive, as it returns a large amount of
data for each stack (essentially the stack's entire checkpoint). We
currently call this endpoint in a few places that really just need
information about a single stack. In these places we should simply
interact with that stack directly.
After these changes, there is only one call to the `stacks` endpoint on
the startup path: `upgradeConfigurationFiles` loops over all of the
stacks in each backend and attempts to upgrade each stack's
configuration. Once we no longer need to do this, we will not be
hitting the `stacks` endpoint on each CLI invocation.
These changes add the API types and cloud backend code necessary to
interact with service-managed stacks (i.e. stacks that do not have
PPC-managed deployments). The bulk of these changes are unremarkable:
the API types are straightforward, as are most of the interactions with
the new APIs. The trickiest bits are token and log management.
During an update to a managed stack, the CLI must continually renew the
token used to authorize the operations on that stack that comprise the
update. Once a token has been renewed, the old token should be
discarded. The CLI supports this by running a goroutine that is
responsible for both periodically renewing the token for an update and
servicing requests for the token itself from the rest of the backend.
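The renewal loop looks roughly like this sketch (names and the interval
handling are illustrative):
```
package client

import (
	"context"
	"time"
)

// tokenSource renews the update token in the background and serves the
// current token to the rest of the backend.
type tokenSource struct {
	requests chan chan string
}

func newTokenSource(ctx context.Context, initial string,
	renew func(old string) (string, error), interval time.Duration) *tokenSource {

	ts := &tokenSource{requests: make(chan chan string)}
	go func() {
		token := initial
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				// Periodically renew; once renewed, the old token is discarded.
				if t, err := renew(token); err == nil {
					token = t
				}
			case reply := <-ts.requests:
				// Service a request for the current token.
				reply <- token
			case <-ctx.Done():
				return
			}
		}
	}()
	return ts
}

// Token returns the most recently renewed token.
func (ts *tokenSource) Token() string {
	reply := make(chan string)
	ts.requests <- reply
	return <-reply
}
```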
In addition to token renewal, log output must be captured and uploaded
to the service during an update to a managed stack. Implementing this in
a reasonable fashion required a bit of refactoring in order to reuse
what already exists for the local backend. Each event-specific `Display`
function was replaced with an equivalent `Render` function that returns
a string rather than writing to a stream. This approach was chosen
primarily to avoid dealing with sheared colorization tags, which would
otherwise require clients to fuse log lines before colorizing. We could
take that approach in the future.
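The shape of that refactoring, with illustrative function names rather
than the real engine API:
```
package engine

import (
	"fmt"
	"io"
)

// Before: each event wrote its text directly to a stream.
func DisplayStepEvent(w io.Writer, op, urn string) {
	fmt.Fprintf(w, "%s %s\n", op, urn)
}

// After: rendering returns a string, so the same text can be written to the
// local terminal or uploaded to the service as a log line without shearing
// colorization tags across writes.
func RenderStepEvent(op, urn string) string {
	return fmt.Sprintf("%s %s", op, urn)
}
```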
These changes refactor direct interactions with the Pulumi API out of
the cloud backend and into a subpackage, `pkg/backend/cloud/client`.
This package exposes a slightly higher-level API that takes care of
calculating paths, performing HTTP calls, and occasionally wrapping
multiple physical calls into a single logical call (notably the creation
of an update and the upload of its program).
This is primarily intended as preparation for some of the changes
suggested in the feedback for #1067.
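As an example of the "one logical call over several physical calls"
shape, here is a hypothetical sketch (the method and helper names are
not the actual client API):
```
package client

import "context"

// Client is a stand-in for the higher-level API client.
type Client struct{ /* API URL, access token, http.Client, ... */ }

// CreateUpdate wraps two physical calls in one logical operation: create the
// update via the REST API, then upload the program to the returned URL.
func (c *Client) CreateUpdate(ctx context.Context, stack string, program []byte) (updateID string, err error) {
	updateID, uploadURL, err := c.restCreateUpdate(ctx, stack)
	if err != nil {
		return "", err
	}
	if err := c.uploadProgram(ctx, uploadURL, program); err != nil {
		return "", err
	}
	return updateID, nil
}

func (c *Client) restCreateUpdate(ctx context.Context, stack string) (id, uploadURL string, err error) {
	// ... POST to the update-creation endpoint and decode the response ...
	return "", "", nil
}

func (c *Client) uploadProgram(ctx context.Context, uploadURL string, program []byte) error {
	// ... PUT the zipped program to the signed upload URL ...
	return nil
}
```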