// Copyright 2016-2018, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package cmd

import (
	"context"

	"github.com/pkg/errors"
	"github.com/spf13/cobra"

	"github.com/pulumi/pulumi/pkg/backend"
	"github.com/pulumi/pulumi/pkg/backend/display"
	"github.com/pulumi/pulumi/pkg/engine"
	"github.com/pulumi/pulumi/pkg/util/cmdutil"
)

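// newDestroyCmd builds the `pulumi destroy` command, which previews and then
// deletes every resource in the selected stack via the stack's backend.
//
// A couple of illustrative invocations (the stack name "dev" is only an
// example); these use only the flags registered below:
//
//	pulumi destroy                       # preview, then prompt for confirmation
//	pulumi destroy --stack dev --yes     # destroy the "dev" stack without prompting
//	pulumi destroy --skip-preview        # destroy immediately, with no preview
//
// The constructor only builds the command; the caller is expected to attach it
// to the root command, roughly `rootCmd.AddCommand(newDestroyCmd())` (the
// rootCmd name here is illustrative).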
func newDestroyCmd() *cobra.Command {
	var debug bool
	var stack string

	var message string

	// Flags for engine.UpdateOptions.
	var analyzers []string
	var diffDisplay bool
	var parallel int
	var refresh bool
	var showConfig bool
	var showReplacementSteps bool
	var showSames bool
	var skipPreview bool
	var suppressOutputs bool
	var yes bool

	var cmd = &cobra.Command{
		Use:        "destroy",
		SuggestFor: []string{"delete", "down", "kill", "remove", "rm", "stop"},
		Short:      "Destroy an existing stack and its resources",
		Long: "Destroy an existing stack and its resources\n" +
			"\n" +
			"This command deletes an entire existing stack by name. The current state is\n" +
			"loaded from the associated snapshot file in the workspace. After running to completion,\n" +
			"all of this stack's resources and associated state will be gone.\n" +
			"\n" +
			"Warning: although old snapshots can be used to recreate a stack, this command\n" +
			"is generally irreversible and should be used with great care.",
		Args: cmdutil.NoArgs,
|
2017-04-12 18:12:25 +00:00
|
|
|
Run: cmdutil.RunFunc(func(cmd *cobra.Command, args []string) error {
|
2018-09-29 17:49:14 +00:00
|
|
|
interactive := cmdutil.Interactive()
|
2018-05-05 18:57:09 +00:00
|
|
|
if !interactive {
|
|
|
|
yes = true // auto-approve changes, since we cannot prompt.
|
|
|
|
}
|
|
|
|
|
|
|
|
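			// Convert the interactive/--skip-preview/--yes flags into the backend's
			// update options (whether a preview runs and whether it is auto-approved).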
			opts, err := updateFlagsToOptions(interactive, skipPreview, yes)
			if err != nil {
				return err
			}

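			// Wire the display-related flags into the options that control how the
			// preview and progress output are rendered.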
			opts.Display = display.Options{
				Color:                cmdutil.GetGlobalColorization(),
				ShowConfig:           showConfig,
				ShowReplacementSteps: showReplacementSteps,
				ShowSameResources:    showSames,
				SuppressOutputs:      suppressOutputs,
				IsInteractive:        interactive,
				DiffDisplay:          diffDisplay,
				Debug:                debug,
			}

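			// Resolve the stack to operate on (--stack, or the current stack) and read
			// the project in the current workspace.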
			s, err := requireStack(stack, false, opts.Display, true /*setCurrent*/)
			if err != nil {
				return err
			}
			proj, root, err := readProject()
			if err != nil {
				return err
			}

			m, err := getUpdateMetadata(message, root)
			if err != nil {
				return errors.Wrap(err, "gathering environment metadata")
			}

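			// Engine-level options: analyzers to run, the degree of parallelism, and
			// whether to refresh resource state before destroying.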
			opts.Engine = engine.UpdateOptions{
				Analyzers: analyzers,
				Parallel:  parallel,
				Debug:     debug,
				Refresh:   refresh,
			}

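			// Ask the stack's backend to destroy its resources. A context.Canceled
			// error indicates the operation was interrupted (for example, by the user)
			// before it completed.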
			_, err = s.Destroy(commandContext(), backend.UpdateOperation{
				Proj:   proj,
				Root:   root,
				M:      m,
				Opts:   opts,
				Scopes: cancellationScopes,
			})
			if err == context.Canceled {
				return errors.New("destroy cancelled")
			}
			return PrintEngineError(err)
		}),
	}

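	// Register the command's flags; the values land in the locals declared above.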
	cmd.PersistentFlags().BoolVarP(
		&debug, "debug", "d", false,
		"Print detailed debugging output during resource operations")
	cmd.PersistentFlags().StringVarP(
		&stack, "stack", "s", "",
		"The name of the stack to operate on. Defaults to the current stack")
	cmd.PersistentFlags().StringVarP(
		&message, "message", "m", "",
		"Optional message to associate with the destroy operation")

	// Flags for engine.UpdateOptions.
	cmd.PersistentFlags().StringSliceVar(
		&analyzers, "analyzer", []string{},
		"Run one or more analyzers as part of this update")
	cmd.PersistentFlags().BoolVar(
		&diffDisplay, "diff", false,
		"Display operation as a rich diff showing the overall change")
	cmd.PersistentFlags().IntVarP(
		&parallel, "parallel", "p", defaultParallel,
		"Allow P resource operations to run in parallel at once (1 for no parallelism, 0 for unbounded parallelism)")
	cmd.PersistentFlags().BoolVarP(
		&refresh, "refresh", "r", false,
		"Refresh the state of the stack's resources before this update")
	cmd.PersistentFlags().BoolVar(
		&showConfig, "show-config", false,
		"Show configuration keys and variables")
	cmd.PersistentFlags().BoolVar(
		&showReplacementSteps, "show-replacement-steps", false,
		"Show detailed resource replacement creates and deletes instead of a single step")
	cmd.PersistentFlags().BoolVar(
		&showSames, "show-sames", false,
		"Show resources that don't need to be updated because they haven't changed, alongside those that do")
	cmd.PersistentFlags().BoolVar(
		&skipPreview, "skip-preview", false,
		"Do not perform a preview before performing the destroy")
	cmd.PersistentFlags().BoolVar(
		&suppressOutputs, "suppress-outputs", false,
		"Suppress display of stack outputs (in case they contain sensitive values)")
	cmd.PersistentFlags().BoolVarP(
		&yes, "yes", "y", false,
		"Automatically approve and perform the destroy after previewing it")

	return cmd
}