* Close cancellation source before closing events
The cancellation source logs cancellation messages to the engine event
channel, so we must first close the cancellation source before closing
the channel.
* CR: Fix race in shutdown of signal goroutine
This change implements, for local stacks (pkg/backend/filestate), the
same preview behavior we already have for cloud stacks
(pkg/backend/httpstate). This mostly required just refactoring bits
and pieces so that we can share more of the code, although it
does still entail quite a bit of redundancy. In particular, the
apply functions for both backends are now very close to being
unified, but still require enough custom logic to warrant
keeping them separate (for now...)
This simply refactors all the display logic out of the
pkg/backend/filestate package. This helps to gear us up to better unify
this logic between the filestate and httpstate backends.
Furthermore, this really ought to be in its own non-backend,
CLI-specific package, but I'm taking one step at a time here.
This renames the backend packages to more closely align with the
new direction for them. Namely, pkg/backend/cloud becomes
pkg/backend/httpstate and pkg/backend/local becomes
pkg/backend/filestate. This also helps to clarify that these packages
are focused on state management, so the upcoming refactoring to split
out (e.g.) the display logic (amongst other things) will make more
sense; we'll need better package names for those pieces too.
As part of making the local backend more prominent, this changes a few
aspects of how you use it:
* Simplify how you log into a specific cloud; rather than
`pulumi login --cloud-url <url>`, just say `pulumi login <url>`.
* Use a proper URL scheme to denote local backend usage. We have chosen
file://, since the REST API backend is of course always https://.
This means that you can say `pulumi login file://~` to use the local
backend, with state files stored in your home directory. Similarly,
we support `pulumi login file://.` for the current directory.
* Add a --local flag to the login command, to make local logins a
bit easier in the common case of using your home directory. Just say
`pulumi login --local` and it is sugar for `pulumi login file://~`.
* Print the URL for the backend after logging in; for the cloud,
this is just the user's stacks page, and for the local backend,
this is the path to the user's stacks directory on disk.
* Tidy up the documentation for login a bit to be clearer about this.
This is part of pulumi/pulumi#1818.
* Initial support for passing URLs to `new` and `up`
This PR adds initial support for `pulumi new` using Git under the covers
to manage Pulumi templates, providing the same experience as before.
You can now also optionally pass a URL to a Git repository, e.g.
`pulumi new [<url>]`, including subdirectories within the repository,
and arbitrary branches, tags, or commits.
The following commands result in the same behavior from the user's
perspective:
- `pulumi new javascript`
- `pulumi new https://github.com/pulumi/templates/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/master/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/HEAD/templates/javascript`
To specify an arbitrary branch, tag, or commit:
- `pulumi new https://github.com/pulumi/templates/tree/<branch>/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/<tag>/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/<commit>/templates/javascript`
Branches and tags can include '/' separators, and `pulumi` will still
find the right subdirectory.
URLs to Gists are also supported, e.g.:
`pulumi new https://gist.github.com/justinvp/6673959ceb9d2ac5a14c6d536cb871a6`
If the specified subdirectory in the repository does not contain a
`Pulumi.yaml`, the CLI will look for subdirectories within it that
contain `Pulumi.yaml` files, and prompt the user to choose a template,
along the lines of how `pulumi new` behaves when no template is specified.
The following commands result in the CLI prompting to choose a template:
- `pulumi new`
- `pulumi new https://github.com/pulumi/templates/templates`
- `pulumi new https://github.com/pulumi/templates/tree/master/templates`
- `pulumi new https://github.com/pulumi/templates/tree/HEAD/templates`
Of course, arbitrary branches, tags, or commits can be specified as well:
- `pulumi new https://github.com/pulumi/templates/tree/<branch>/templates`
- `pulumi new https://github.com/pulumi/templates/tree/<tag>/templates`
- `pulumi new https://github.com/pulumi/templates/tree/<commit>/templates`
This PR also includes initial support for passing URLs to `pulumi up`,
providing a streamlined way to deploy installable cloud applications
with Pulumi, without having to manage source code locally before doing
a deployment.
For example, `pulumi up https://github.com/justinvp/aws` can be used to
deploy a sample AWS app. The stack can be updated with different
versions, e.g.
`pulumi up https://github.com/justinvp/aws/tree/v2 -s <stack-to-update>`
Config values can optionally be passed via command line flags, e.g.
`pulumi up https://github.com/justinvp/aws -c aws:region=us-west-2 -c foo:bar=blah`
Gists can also be used, e.g.
`pulumi up https://gist.github.com/justinvp/62fde0463f243fcb49f5a7222e51bc76`
* Fix panic when hitting ^C from "choose template" prompt
* Add description to templates
When running `pulumi new` without specifying a template, include the template description along with the name in the "choose template" display.
```
$ pulumi new
Please choose a template:
aws-go A minimal AWS Go program
aws-javascript A minimal AWS JavaScript program
aws-python A minimal AWS Python program
aws-typescript A minimal AWS TypeScript program
> go A minimal Go program
hello-aws-javascript A simple AWS serverless JavaScript program
javascript A minimal JavaScript program
python A minimal Python program
typescript A minimal TypeScript program
```
* React to changes to the pulumi/templates repo.
We restructured the `pulumi/templates` repo to have all the templates in the root instead of in a `templates` subdirectory, so make the change here to no longer look for templates in `templates`.
This also fixes an issue around using `Depth: 1` that I found while testing this. When a named template is used, we attempt to clone or pull from the `pulumi/templates` repo to `~/.pulumi/templates`. Having it go in this well-known directory allows us to maintain previous behavior around allowing offline use of templates. If we use `Depth: 1` for the initial clone, it will fail when attempting to pull when there are updates to the remote repository. Unfortunately, there's no built-in `--unshallow` support in `go-git` and setting a larger `Depth` doesn't appear to help. There may be a workaround, but for now, if we're cloning the pulumi templates directory to `~/.pulumi/templates`, we won't use `Depth: 1`. For template URLs, we will continue to use `Depth: 1` as we clone those to a temp directory (which gets deleted) that we'll never try to update.
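A rough sketch of the shallow-vs-full clone decision described above, assuming the gopkg.in/src-d/go-git.v4 API; the helper name and destination paths are hypothetical:
```go
package main

import (
	"fmt"

	git "gopkg.in/src-d/go-git.v4"
)

// cloneTemplates clones a template repository. When cloning the shared
// pulumi/templates repo into a long-lived, well-known directory we avoid
// Depth: 1 so that later pulls succeed; for one-off template URLs cloned
// into a temp directory we keep the shallow clone for speed.
func cloneTemplates(url, dest string, shallow bool) error {
	opts := &git.CloneOptions{URL: url}
	if shallow {
		// Safe only for throwaway clones that will never be pulled again.
		opts.Depth = 1
	}
	if _, err := git.PlainClone(dest, false /*isBare*/, opts); err != nil {
		return fmt.Errorf("cloning %s: %v", url, err)
	}
	return nil
}

func main() {
	// Hypothetical destinations, for illustration only.
	_ = cloneTemplates("https://github.com/pulumi/templates", "/home/me/.pulumi/templates", false)
	_ = cloneTemplates("https://github.com/justinvp/aws", "/tmp/pulumi-template-123", true)
}
```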
* List available templates in help text
* Address PR Feedback
* Don't show "Installing dependencies" message for `up`
* Fix secrets handling
When prompting for config, if the existing stack value is a secret, keep it a secret and mask the prompt. If the template says it should be secret, make it a secret.
* Fix ${PROJECT} and ${DESCRIPTION} handling for `up`
Templates used with `up` should already have a filled-in project name and description, but if it's a `new`-style template, that has `${PROJECT}` and/or `${DESCRIPTION}`, be helpful and just replace these with better values.
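A minimal sketch of that substitution (the helper name is hypothetical):
```go
package workspace

import "strings"

// fixupTemplateText sketches the friendly fallback for `pulumi up <url>`: if a
// new-style template still contains ${PROJECT} or ${DESCRIPTION} placeholders,
// substitute reasonable values rather than failing.
func fixupTemplateText(text, project, description string) string {
	text = strings.Replace(text, "${PROJECT}", project, -1)
	text = strings.Replace(text, "${DESCRIPTION}", description, -1)
	return text
}
```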
* Fix stack handling
Add a bool `setCurrent` param to `requireStack` to control whether the current stack should be saved in workspace settings. For the `up <url>` case, we don't want to save. Also, split the `up` code into two separate functions: one for the `up <url>` case and another for the normal `up` case where you have workspace in your current directory. While we may be able to combine them back into a single function, right now it's a bit cleaner being separate, even with some small amount of duplication.
* Fix panic due to nil crypter
Lazily get the crypter only if needed inside `promptForConfig`.
* Embellish comment
* Harden isPreconfiguredEmptyStack check
Fix the check to make sure the URL specified on the command line matches the URL stored in the `pulumi:template` config value, and that the rest of the config from the stack satisfies the config requirements of the template.
In #1341 we promoted a class of errors in fetching git metadata from
glog messages to warnings printed by the CLI, on the assumption that
any warnings surfaced here would be actionable.
The major impact here is that when you are working in a repository
which does not have a remote set to GitHub (common if you have just
`git init`'d a repository for a new project), where you don't call
your remote `origin`, or where you use some other code provider, we
end up printing a warning during every update.
This change does two things:
- Restructure the way we detect metadata to attempt to make progress
when it can. We bias towards returning some metadata even when we
can't determine the complete set of metadata.
- Use a multierror to track all the underlying failures from our
metadata probing, and move the combined message back to a glog
message (see the sketch below).
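A rough sketch of that approach, assuming hashicorp/go-multierror and glog; the `Metadata` type and probe shape here are hypothetical:
```go
package gitutil

import (
	"github.com/golang/glog"
	"github.com/hashicorp/go-multierror"
)

// Metadata is a hypothetical bag of whatever we could determine about the repo.
type Metadata struct {
	Remote string
	Dirty  bool
	Commit string
}

// probeMetadata gathers as much git metadata as it can. Individual failures
// are accumulated and logged, not returned, so the update proceeds with
// whatever partial data we managed to collect.
func probeMetadata(probes []func(*Metadata) error) Metadata {
	var m Metadata
	var result error
	for _, probe := range probes {
		if err := probe(&m); err != nil {
			result = multierror.Append(result, err)
		}
	}
	if result != nil {
		// Keep the rich diagnostics, but only in verbose logs.
		glog.V(3).Infof("could not determine all git metadata: %v", result)
	}
	return m
}
```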
Overall, this feels like the right balance to me. We are retaining the
rich diagnostics information for when things go wrong, but we aren't
warning about common cases.
We could, of course, try to tighten our heuristics (e.g. don't warn if
we can't find a GitHub remote, but do warn if we can't compute whether
the worktree is dirty), but it feels like that would be a game of
whack-a-mole over time, and when warnings do fire it's unlikely they
will be actionable.
Fixes #1443
These changes add support for adding a tracing header to API requests
made to the Pulumi service. Setting the `PULUMI_TRACING_HEADER`
environment variable or enabling debug commands and passing the
`--tracing-header` will change the value sent in this header. Setting
this value to `1` will request that the service enable distributed
tracing for all requests made by a particular CLI invocation.
These changes add support for injecting client tracing spans into HTTP
requests to the Pulumi API. The server can then rematerialize these span
references and attach its own spans for distributed tracing.
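A minimal sketch of the client-side injection, assuming the opentracing-go API; the `decorateRequest` helper and the `X-Pulumi-Tracing` header name are placeholders, not the actual wire format:
```go
package client

import (
	"net/http"
	"os"

	opentracing "github.com/opentracing/opentracing-go"
)

// decorateRequest attaches tracing information to an outgoing API request.
func decorateRequest(req *http.Request, span opentracing.Span) error {
	// Inject the client span so the server can rematerialize it and attach
	// its own child spans for distributed tracing.
	if span != nil {
		carrier := opentracing.HTTPHeadersCarrier(req.Header)
		if err := span.Tracer().Inject(span.Context(), opentracing.HTTPHeaders, carrier); err != nil {
			return err
		}
	}
	// Forward the user-requested tracing header value, if any.
	// "X-Pulumi-Tracing" is a placeholder name for illustration.
	if v := os.Getenv("PULUMI_TRACING_HEADER"); v != "" {
		req.Header.Set("X-Pulumi-Tracing", v)
	}
	return nil
}
```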
This change moves Git failures from glog warnings, which we don't
really pay attention to, to true CLI warnings.
This will ensure we at least get bug reports in the event that this
fails on some user machine out in the wild. They are still non-fatal,
of course, since such a failure needn't prevent an update from happening.
This change captures the Git committer and author's login and email
addresses, so that we can display them prominently in the service. At
the moment, we only attribute updates to the identity that performed
the update, which in CI scenarios is often the same person for an
entire organization. This makes Pulumi look like a needlessly lonely place.
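A rough sketch of capturing those identities with gopkg.in/src-d/go-git.v4 (the helper and struct names are hypothetical):
```go
package gitutil

import (
	git "gopkg.in/src-d/go-git.v4"
)

// CommitIdentity captures the author and committer of HEAD so the service can
// display them alongside the identity that performed the update.
type CommitIdentity struct {
	AuthorName, AuthorEmail       string
	CommitterName, CommitterEmail string
}

func headIdentity(repoPath string) (CommitIdentity, error) {
	var id CommitIdentity
	repo, err := git.PlainOpen(repoPath)
	if err != nil {
		return id, err
	}
	ref, err := repo.Head()
	if err != nil {
		return id, err
	}
	commit, err := repo.CommitObject(ref.Hash())
	if err != nil {
		return id, err
	}
	id.AuthorName, id.AuthorEmail = commit.Author.Name, commit.Author.Email
	id.CommitterName, id.CommitterEmail = commit.Committer.Name, commit.Committer.Email
	return id, nil
}
```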
These changes enable tracing of Pulumi API calls.
The span with which to associate an API call is passed via a
`context.Context` parameter. This required plumbing a
`context.Context` parameter through a rather large number of APIs,
especially in the backend.
In general, all API calls are associated with a new root span that
exists for essentially the entire lifetime of an invocation of the
Pulumi CLI. There were a few places where the plumbing got a bit hairier
than I was willing to address with these changes; I've used
`context.Background()` in these instances. API calls that receive this
context will create new root spans, but will still be traced.
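A minimal sketch of the plumbing pattern, assuming the opentracing-go API; the wrapper name and tag are illustrative only:
```go
package client

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
)

// getStack is a hypothetical API wrapper showing the pattern: every call takes
// a context.Context, and a child span is started from whatever span that
// context carries (the long-lived root span for the CLI invocation, or a fresh
// root span when only context.Background() was available).
func getStack(ctx context.Context, stackName string) error {
	span, ctx := opentracing.StartSpanFromContext(ctx, "pulumi-api.getStack")
	defer span.Finish()
	span.SetTag("stack", stackName)

	// ... perform the HTTP request using ctx ...
	_ = ctx
	return nil
}
```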
* Initialize a new stack as part of `pulumi new`
* Prompt for values with defaults preselected
* Install dependencies
* Prompt for default config values
This changes the CLI interface in a few ways:
* `pulumi preview` is back! The alternative of saying
`pulumi update --preview` just felt awkward, and it's a common
operation to want to perform. Let's just make it work.
* There are two flags consistent across all update commands,
`update`, `refresh`, and `destroy`:
- `--skip-preview` will skip the preview step. Note that this
does *not* skip the prompt to confirm that you'd like to proceed.
Indeed, it will still prompt, with a little warning text about
the fact that the preview has been skipped.
- `--yes` will auto-approve the updates.
This lands us in a simpler and more intuitive spot for common scenarios.
I found the flag --force to be a strange name for skipping a preview,
since that name is usually reserved for operations that might be harmful
and yet you're coercing a tool to do it anyway, knowing there's a chance
you're going to shoot yourself in the foot.
I also found that what I almost always want in the situation where
--force was being used is to actually just run a preview and have the
confirmation auto-accepted. Going straight to --force isn't the right
thing in a CI scenario, where you actually want to run a preview first,
just to ensure there aren't any issues, before doing the update.
In a sense, there are four options here:
1. Run a preview, ask for confirmation, then do an update (the default).
2. Run a preview, auto-accept, and then do an update (the CI scenario).
3. Just run a preview with neither a confirmation nor an update (dry run).
4. Just do an update, without performing a preview beforehand (rare).
This change enables all four workflows in our CLI.
Rather than have an explosion of flags, we have a single flag,
--preview, which can specify the mode that we're operating in. The
following are the values which correlate to the above four modes:
1. "": default (no --preview specified)
2. "auto": auto-accept preview confirmation
3. "only": only run a preview, don't confirm or update
4. "skip": skip the preview altogether
As part of this change, I redid a bit of how the preview modes
were specified. Rather than booleans, which had some illegal
combinations, this change introduces a new enum type. Furthermore,
because the engine is wholly ignorant of these flags -- and only the
backend understands them -- it was confusing to me that
engine.UpdateOptions stored this flag, especially given that all
interesting engine options _also_ accepted a dryRun boolean. As of
this change, the backend.PreviewBehavior controls the preview options.
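A sketch of how such an enum might look; the names below are illustrative rather than the actual backend definition:
```go
package backend

// PreviewBehavior mirrors the --preview flag: one enum instead of overlapping
// booleans with illegal combinations.
type PreviewBehavior string

const (
	PreviewDefault PreviewBehavior = ""     // preview, then prompt for confirmation
	PreviewAuto    PreviewBehavior = "auto" // preview, auto-accept the confirmation
	PreviewOnly    PreviewBehavior = "only" // only run a preview; no confirmation or update
	PreviewSkip    PreviewBehavior = "skip" // skip the preview altogether
)

// ShouldPreview reports whether a preview step should run at all.
func (p PreviewBehavior) ShouldPreview() bool { return p != PreviewSkip }

// AutoApprove reports whether the confirmation prompt should be bypassed.
func (p PreviewBehavior) AutoApprove() bool { return p == PreviewAuto }
```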
The `go-git` implementation of `git status` is outrageously expensive,
as it performs a hash-based comparison of the working tree against the
committed state with no caching. In some example runs, this takes
upwards of 15 seconds. Because this is on the startup path for updates,
this results in a rather poor user experience.
These changes replace the `go-git` implementation with a call to `git
status --porcelain -z`, which only writes data to stdout if the working
tree is dirty.
Note that these changes also make all git-related update metadata
best-effort.
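A minimal sketch of the replacement check (the helper name is assumed):
```go
package gitutil

import "os/exec"

// isWorktreeDirty shells out to git rather than using go-git's expensive
// hash-based status. `git status --porcelain -z` prints nothing when the
// working tree is clean, so any output at all means it is dirty.
func isWorktreeDirty(repoRoot string) (bool, error) {
	cmd := exec.Command("git", "status", "--porcelain", "-z")
	cmd.Dir = repoRoot
	out, err := cmd.Output()
	if err != nil {
		return false, err
	}
	return len(out) > 0, nil
}
```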
These changes plumb basic support for cancellation through the engine.
Two types of cancellation are supported for all engine operations:
- Cancellation, which waits for the operation to drive itself to a safe
point before the operation returns, and
- Termination, which does not wait for the operation to drive itself
to a safe point before the operation returns.
When updating local or managed stacks, a single ^C triggers cancellation
of any running operation; a second ^C will trigger termination.
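A rough sketch of the two-level ^C handling, with hypothetical `cancel` and `terminate` hooks standing in for whatever the engine actually exposes:
```go
package engine

import (
	"context"
	"os"
	"os/signal"
)

// watchSignals sketches the behavior described above: the first ^C requests
// graceful cancellation (let the operation reach a safe point), the second
// requests immediate termination.
func watchSignals(ctx context.Context, cancel, terminate func()) {
	sigs := make(chan os.Signal, 2)
	signal.Notify(sigs, os.Interrupt)
	go func() {
		defer signal.Stop(sigs)
		select {
		case <-sigs:
			cancel() // first ^C: cancel and wait for a safe point
		case <-ctx.Done():
			return
		}
		select {
		case <-sigs:
			terminate() // second ^C: stop waiting
		case <-ctx.Done():
		}
	}()
}
```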
Fixes #513, #1077.
This change introduces support for using the cloud backend when
`pulumi init` has not been run. When this is the case, we use the new
identity model, where a stack is referenced by an owner and a stack
name only.
There are a few things going on here:
- We add a new `--owner` flag to `pulumi stack init` that lets you
control what account a stack is created in.
- When listing stacks, we show stacks owned by you and any
organizations you are a member of. So, for example, I can do:
* `pulumi stack init my-great-stack`
* `pulumi stack init --owner pulumi my-great-stack`
To create a stack owned by my user and one owned by my
organization. When `pulumi stack ls` is run, you'll see both
stacks (since they are part of the same project).
- When spelling a stack on the CLI, an owner can be optionally
specified by prefixing the stack name with an owner name. For
example, `my-great-stack` means the stack `my-great-stack` owned by
the currently logged-in user, whereas `pulumi/my-great-stack` would
be the stack owned by the `pulumi` organization.
- `--all` can be passed to `pulumi stack ls` to see *all* stacks you
have access to, not just stacks tied to the current project.
Long term, a stack name alone will not be sufficient to address a
stack. Introduce a new `backend.StackReference` interface that allows
each backend to give an opaque stack reference that can be used across
operations.
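A rough sketch of the shape such an interface might take (illustrative, not the actual definition):
```go
package backend

// StackReference is the opaque, backend-specific way to name a stack. A plain
// string is no longer enough once stacks can be qualified by owner, so each
// backend parses and prints its own references.
type StackReference interface {
	// String returns the fully-qualified form, e.g. "pulumi/my-great-stack"
	// for the cloud backend or just "my-great-stack" for the local backend.
	String() string
}

// Backend shows how a backend might hand out references.
type Backend interface {
	ParseStackReference(s string) (StackReference, error)
}
```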
This change removes the need to run `pulumi init` when targeting the
local backend. A fair amount of the change lays the foundation for the
next set of changes, which will stop having `pulumi init` be used for
cloud stacks as well.
Previously, `pulumi init` logically did two things:
1. It created the bookkeeping directory for local stacks. This was
stored in `<repository-root>/.pulumi`, where `<repository-root>` was
the path to what we believed the "root" of your project was. In the
case of git repositories, this was the directory that contained your
`.git` folder.
2. It recorded repository information in
`<repository-root>/.pulumi/repository.json`. This was used by the
cloud backend when computing what project to interact with on
Pulumi.com.
The new identity model will remove the need for (2), since we only
need an owner and stack name to fully qualify a stack on
pulumi.com, so it's easy enough to stop creating a folder just for
that.
However, for the local backend, we need to continue to retain some
information about stacks (e.g. checkpoints, history, etc). In
addition, we need to store our workspace settings (which today just
contain the selected stack) somewhere.
For state stored by the local backend, we change the URL scheme from
`local://` to `local://<optional-root-path>`. When
`<optional-root-path>` is unset, it defaults to `$HOME`. We create our
`.pulumi` folder in that directory. This is important because stack
names now must be unique within the backend, but we have some tests
using local stacks which use fixed stack names, so each integration
test really wants its own "view" of the world.
For the workspace settings, we introduce a new `workspaces` directory
in `~/.pulumi`. In this folder we write the workspace settings file
for each project. The file name is the name of the project, combined
with the SHA1 of the path of the project file on disk, to ensure that
multiple pulumi programs with the same project name have different
workspace settings.
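A minimal sketch of that naming scheme; the exact file name format is an assumption:
```go
package workspace

import (
	"crypto/sha1"
	"fmt"
	"path/filepath"
)

// settingsFileName combines the project name with a SHA1 of the project file's
// path, so two projects that happen to share a name still get distinct
// workspace settings files under ~/.pulumi/workspaces.
func settingsFileName(projectName, projectPath string) string {
	hash := sha1.Sum([]byte(projectPath))
	return fmt.Sprintf("%s-%x.json", projectName, hash)
}

func settingsPath(pulumiHome, projectName, projectPath string) string {
	return filepath.Join(pulumiHome, "workspaces", settingsFileName(projectName, projectPath))
}
```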
This does mean that moving a project's location on disk will cause the
CLI to "forget" what the selected stack was, which is unfortunate, but
not the end of the world. If this ends up being a big pain point, we
can certainly try to play games in the future (for example, if we saw
a .git folder in a parent folder, we could store data in there).
With respect to compatibility, we don't attempt to migrate older files
to their newer locations. For long lived stacks managed using the
local backend, we can provide information on where to move things
to. For all stacks (regardless of backend) we'll require the user to
`pulumi stack select` their stack again, but that seems like the
correct trade-off vs writing complicated upgrade code.
Upcoming work to remove the need for `pulumi init` makes testing the
upgrade code much harder than it was in the past (since workspace data
is moving to a different location on the file system, among other
changes).
Instead of trying to maintain the code and test, let's just remove
it. Folks who haven't migrated (LM and the PPC deployment in the
service) should use the 0.11.3 or earlier CLI to migrate their
projects (simply by logging in and running a pulumi command) or move
things forward by hand.
We already had logic to skip prompting a user to create a stack,
when a stack-specific command was run but none was found, but we
only heeded it in one of two cases. This fixes the other case.
We have some code that deals with upgrading legacy projects (which had
workspace level configuration) to the new format where configuration
information was stored SxS with the application.
This code requires us to get a list of stacks from the backend (which
for hosted stacks means hitting api.pulumi.com) as part of the upgrade
process, so we know all the stacks the user's project has. This is a
somewhat slow operation (which we will make faster regardless) but we
can structure things such that we don't need to do this often.
In the common case, we don't need to do any upgrading at all (new
projects won't need it, and once a project is upgraded it won't need
it again), so update the code to first check whether any work is
needed and, only if so, do the expensive operation of getting stacks
from the backend.
This should help with the slight pauses we've seen on the command line
since the work to default to folks logging in has landed.
This change does three major things:
1. Removes the ability to be logged into multiple clouds at the same
time. Previously, we supported being logged into multiple clouds at
the same time and the CLI would fan out requests and join responses
when needed. In general, this was only useful for Pulumi employees
who wanted to run against multiple copies of the service (say,
production and staging), but overall it was very confusing; for
example, in the old world a stack with the same identity could appear
twice (since it was in two backends), which the CLI didn't handle very
well.
2. Stops treating the "local" backend as a special thing, from the
point of view of the CLI. Previously we'd always connect to the local
backend and merge that data with whatever was in the clouds we were
connected to. We had gestures like `--local` in `pulumi stack init`
that meant "use the local mode". Instead, to use the local mode now
you run `pulumi login --cloud-url local://` and then you are logged
into the local backend. Since you can only ever be logged into a
single backend, we can remove the `--local` and `--remote` flags from
`pulumi stack init`; it now just requires you to be logged in and
creates a stack in whatever backend you are logged into. When logging
into the local backend, you are not prompted for an access key.
3. Prompt for login in places where you have to log in, if you are not
already logged in.
This change includes a handful of stack-related CLI formatting
improvements that I've been noodling on in the background for a while,
based on things that tend to trip up demos and the inner loop workflow.
This includes:
* If `pulumi stack select` is run by itself, use an interactive
CLI menu to let the user select an existing stack, or choose to
create a new one. This looks as follows
$ pulumi stack select
Please choose a stack, or choose to create a new one:
abcdef
babblabblabble
> currentlyselected
defcon
<create a new stack>
and is navigated in the usual way (key up, down, enter).
* If a stack name is passed that does not exist, prompt the user
to ask whether s/he wants to create one on-demand. This hooks
interesting moments in time, like `pulumi stack select foo`,
and cuts down on the need to run additional commands.
* If a current stack is required, but none is currently selected,
then pop the same interactive menu shown above to select one.
Depending on the command being run, we may or may not show the
option to create a new stack (e.g., that doesn't make much sense
when you're running `pulumi destroy`, but might when you're
running `pulumi stack`). This again lets you do with a single
command what would have otherwise entailed an error with multiple
commands to recover from it.
* If you run `pulumi stack init` without any additional arguments,
we interactively prompt for the stack name. Before, we would
error and you'd then need to run `pulumi stack init <name>`.
* Colorize some things nicely; for example, now all prompts will
by default become bright white.
This addresses pulumi/pulumi#446: what we used to call "package" is
now called "project". This has gotten more confusing over time, now
that we're doing real package management.
Also fixes pulumi/pulumi#426, while in here.
This PR adds a new `pulumi history` command, which prints the update history for a stack.
The local backend stores the update history in a JSON file on disk, next to the checkpoint file. The cloud backend simply provides the update metadata, and expects to receive all the data from a (NYI) `/history` REST endpoint.
`pkg/backend/updates.go` defines the data that is being persisted. The way the data is wired through the system is adding a new `backend.UpdateMetadata` parameter to a Stack/Backend's `Update` and `Destroy` methods.
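A rough sketch of what the persisted metadata might look like; the field names are illustrative rather than the exact definition in `pkg/backend/updates.go`:
```go
package backend

// UpdateMetadata is the per-update data that gets persisted: the local backend
// appends records like this to a JSON history file next to the checkpoint,
// while the cloud backend sends the same metadata to the service.
type UpdateMetadata struct {
	// Message is an optional message associated with the update.
	Message string `json:"message"`
	// Environment carries git and CI metadata (committer, branch, dirty, ...).
	Environment map[string]string `json:"environment"`
}
```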
I use `tests/integration/stack_outputs/` as the simple app for the related tests, hence the addition to the `.gitignore` and fixing the name in the `Pulumi.yaml`.
Fixes #636.
This PR surfaces the configuration options available to updates, previews, and destroys to the Pulumi Service. As part of this I refactored the options to unify them into a single `engine.UpdateOptions`, since they were all overlapping to various degrees.
With this PR we are adding several new flags to commands, e.g. `--summary` was not previously available on `pulumi destroy`.
There are also a few minor breaking changes.
- `pulumi destroy --preview` is now `pulumi destroy --dry-run` (to match the actual name of the field).
- The default behavior for "--color" is now `Always`. Previously it was `Always` or `Never` based on the value of a `--debug` flag. (You can specify `--color always` or `--color never` to get the exact behavior.)
Fixes #515, and cleans up the code, making some other features slightly easier to add.
These changes refactor the engine's entrypoints--Deploy, Destroy, and
Preview--to be update-centric rather than stack-centric. Each of these
methods now takes a value of a new type, Update, that abstracts away the
vagaries of fetching and maintaining the update's state. This
refactoring also reinforces Pulumi.yaml as a CLI concept rather than an
engine concept; the CLI is now the only reader/writer of this format.
These changes will smooth the way for a few refactorings on the service
side that will aid in update isolation.
Our recent changes to colorization changed from a boolean to a tri-valued
enum (Always, Never, Raw). The events from the service, however, are still
boolean-valued. This changes the message payload to carry the full values.
The prior behavior with cloud authentication was a bit confusing
when authenticating against anything but https://pulumi.com/. This
change fixes a few aspects of this:
* Improve error messages to differentiate between "authentication
failed" and "you haven't logged into the target cloud URL."
* Default to the cloud you're currently authenticated with, rather
than unconditionally selecting https://pulumi.com/. This ensures
$ pulumi login -c https://api.moolumi.io
$ pulumi stack ls
works, versus what was previously required
$ pulumi login -c https://api.moolumi.io
$ pulumi stack ls -c https://api.moolumi.io
with confusing error messages if you forgot the second -c.
* To do this, our default cloud logic (sketched below) changes to
1) Prefer the explicit -c if supplied;
2) Otherwise, pick the "currently authenticated" cloud; this is
the last cloud to have been targeted with pulumi login, or
otherwise the single cloud in the list if there is only one;
3) https://pulumi.com/ otherwise.
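A minimal sketch of that precedence, with hypothetical inputs standing in for the stored credentials:
```go
package cloud

// defaultCloudURL picks the cloud to talk to. Only the precedence matters
// here; how the current and known clouds are loaded is elided.
func defaultCloudURL(explicit, current string, known []string) string {
	// 1) Prefer an explicit -c/--cloud-url if supplied.
	if explicit != "" {
		return explicit
	}
	// 2) Otherwise use the currently authenticated cloud: the last cloud
	//    targeted with `pulumi login`, or the single known cloud if only one.
	if current != "" {
		return current
	}
	if len(known) == 1 {
		return known[0]
	}
	// 3) Fall back to the hosted service.
	return "https://pulumi.com/"
}
```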
Part of the work to make it easier to write tests of diff output. Specifically, we now allow users to pass --color=option for several pulumi commands. 'option' can be one of 'always', 'never', 'raw', and 'auto' (the default).
The meanings of these values are:
1. auto: colorize normally, unless in --debug
2. always: always colorize no matter what
3. never: never colorize no matter what.
4. raw: colorize, but preserve the original "<{%%}>" style control codes rather than the translated platform-specific codes. This is for testing purposes and ensures we can have tests for this across platforms.
As articulated in #714, the way config defaults to workspace-local
configuration is a bit error prone, especially now with the cloud
workflow being the default. This change implements several improvements:
* First, --save defaults to true, so that configuration changes will
persist into your project file. If you want the old local workspace
behavior, you can specify --save=false.
* Second, the order in which we applied configuration was a little
strange, because workspace settings overwrote project settings.
The order is changed now so that we take most specific over least
specific configuration. Per-stack is considered more specific
than global, and project settings are considered more specific than
workspace settings.
* We now warn anytime workspace local configuration is used. This
is a developer scenario and can have subtle effects. It is simply
not safe to use in a team environment. In fact, I lost an arm
this morning due to workspace config... and that's why you always
issue warnings for unsafe things.
If a cloud you've previously authenticated with goes away -- as ours
sort of did, because the cloud endpoint in the CLI changed (to actually
be correct) -- then you can't logout without manually editing the
credentials file in your workspace. This is a little annoying. So,
rather than that, let's have a `pulumi logout --all` command that just
logs out of all clouds you are presently authenticated with.
This change implements some feedback from @ellismg.
* Make backend.Stack an interface and let backends implement it,
enabling dynamic type testing/casting to access information
specific to that backend. For instance, the cloud.Stack conveys
the cloud URL, org name, and PPC name, for each stack.
* Similarly expose specialized backend.Backend interfaces,
local.Backend and cloud.Backend, to convey specific information.
* Redo a bunch of the commands in terms of these.
* Keeping with this theme, turn the CreateStack options into an
opaque interface{}, and let the specific backends expose their
own structures with their own settings (like PPC name in cloud).
* Show both the org and PPC names in the cloud column printed in
the stack ls command, in addition to the Pulumi Cloud URL.
Unrelated, but useful:
* Special case the 401 HTTP response and make a friendly error,
to tell the developer they must use `pulumi login`. This is
better than tossing raw "401: Unauthorized" errors in their face.
* Change the "Updating stack '..' in the Pulumi Cloud" message to
use the correct action verb ("Previewing", "Destroying", etc).