This change implements basic plugin management, but we do not yet
actually use the plugins for anything (that comes next).
Plugins are stored in `~/.pulumi/plugins`, and are expected to be
in the format `pulumi-<KIND>-<NAME>-v<VERSION>[.exe]`. The KIND is
one of `analyzer`, `language`, or `resource`, the NAME is a hyphen-
delimited name (e.g., `aws` or `foo-bar`), and VERSION is the
plugin's semantic version (e.g., `0.9.11`, `1.3.7-beta.a736cf`, etc).
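For illustration, here is a minimal Go sketch of how a filename in that format might be decomposed. The function and package names are hypothetical, not the CLI's actual code:

```go
package plugin

import (
	"fmt"
	"strings"
)

// parsePluginName is a hypothetical sketch of splitting a plugin binary name
// like "pulumi-resource-aws-v0.9.11" into its parts; the CLI's actual
// parsing code may differ.
func parsePluginName(file string) (kind, name, version string, err error) {
	file = strings.TrimSuffix(file, ".exe") // Windows binaries carry an .exe suffix.
	parts := strings.Split(file, "-")
	// At minimum: "pulumi", the kind, one name segment, and "v<VERSION>".
	if len(parts) < 4 || parts[0] != "pulumi" {
		return "", "", "", fmt.Errorf("malformed plugin filename: %s", file)
	}
	kind = parts[1]
	switch kind {
	case "analyzer", "language", "resource":
		// recognized plugin kinds
	default:
		return "", "", "", fmt.Errorf("unrecognized plugin kind: %s", kind)
	}
	// Find the segment that begins the version ("v" followed by a digit); both
	// the name (e.g., "foo-bar") and a prerelease version (e.g.,
	// "1.3.7-beta.a736cf") may themselves contain hyphens.
	vidx := -1
	for i := 2; i < len(parts); i++ {
		if len(parts[i]) > 1 && parts[i][0] == 'v' && parts[i][1] >= '0' && parts[i][1] <= '9' {
			vidx = i
			break
		}
	}
	if vidx <= 2 {
		return "", "", "", fmt.Errorf("missing name or version in: %s", file)
	}
	name = strings.Join(parts[2:vidx], "-")
	version = strings.TrimPrefix(strings.Join(parts[vidx:], "-"), "v")
	return kind, name, version, nil
}
```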
This commit includes four new CLI commands:
* `pulumi plugin` is the top-level plugin command. It does nothing
but show the help text for associated child commands.
* `pulumi plugin install` can be used to install plugins manually.
If run with no additional arguments, it will compute the set of
plugins used by the current project, and download them all. It
may be run to explicitly download a single plugin, however, by
invoking it as `pulumi plugin install KIND NAME VERSION`. For
example, `pulumi plugin install resource aws v0.9.11`. By default,
this command uses the cloud backend in the usual way to perform the
download, although a separate URL may be given with --cloud-url,
just like all other commands that interact with our backend service.
* `pulumi plugin ls` lists all plugins currently installed in the
plugin cache. It displays some useful statistics, like the size
of the plugin, when it was installed, when it was last used, and
so on. It sorts the display alphabetically by plugin name, and
for plugins with multiple versions, it shows the newest at the top.
The command also summarizes how much disk space is currently being
consumed by the plugin cache. There are no filtering capabilities yet.
* `pulumi plugin prune` will delete plugins from the cache. By
default, when run with no arguments, it will delete everything.
It may be run with additional arguments, KIND, NAME, and VERSION,
each one getting more specific about what it will delete. For
instance, `pulumi plugin prune resource aws` will delete all AWS
plugin versions, while `pulumi plugin prune resource aws <0.9`
will delete all AWS plugins before version 0.9. Unless --yes is
passed, the command will confirm the deletion with a count of how
many plugins will be affected by the command.
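To make the prune matching concrete, here is an illustrative sketch using the `github.com/blang/semver` package. Whether the CLI uses this exact package or matching logic is an assumption, and note that this package's range syntax wants a full version like `<0.9.0`:

```go
package main

import (
	"fmt"

	"github.com/blang/semver"
)

// matchesPrune evaluates a prune filter against one installed plugin; each
// empty filter field matches everything, mirroring how omitting arguments
// broadens what gets deleted. The helper is illustrative, not the real code.
func matchesPrune(kind, name, version, fKind, fName, fRange string) (bool, error) {
	if fKind != "" && kind != fKind {
		return false, nil
	}
	if fName != "" && name != fName {
		return false, nil
	}
	if fRange == "" {
		return true, nil
	}
	rng, err := semver.ParseRange(fRange)
	if err != nil {
		return false, err
	}
	v, err := semver.Parse(version)
	if err != nil {
		return false, err
	}
	return rng(v), nil
}

func main() {
	ok, _ := matchesPrune("resource", "aws", "0.8.5", "resource", "aws", "<0.9.0")
	fmt.Println(ok) // true: 0.8.5 sorts before 0.9.0
}
```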
We do not yet download plugins on demand; that will come in a
subsequent change.
This change includes a handful of stack-related CLI formatting
improvements that I've been noodling on in the background for a while,
based on things that tend to trip up demos and the inner loop workflow.
This includes:
* If `pulumi stack select` is run by itself, use an interactive
CLI menu to let the user select an existing stack, or choose to
create a new one. This looks as follows:
$ pulumi stack select
Please choose a stack, or choose to create a new one:
abcdef
babblabblabble
> currentlyselected
defcon
<create a new stack>
and is navigated in the usual way (key up, down, enter); a
simplified sketch of this flow appears after this list.
* If a stack name is passed that does not exist, prompt the user
to ask whether they want to create one on demand. This hooks
interesting moments in time, like `pulumi stack select foo`,
and cuts down on the need to run additional commands.
* If a current stack is required, but none is currently selected,
then pop the same interactive menu shown above to select one.
Depending on the command being run, we may or may not show the
option to create a new stack (e.g., that doesn't make much sense
when you're running `pulumi destroy`, but might when you're
running `pulumi stack`). This again lets you do with a single
command what would have otherwise entailed an error with multiple
commands to recover from it.
* If you run `pulumi stack init` without any additional arguments,
we interactively prompt for the stack name. Before, we would
error and you'd then need to run `pulumi stack init <name>`.
* Colorize some things nicely; for example, now all prompts will
by default become bright white.
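For concreteness, here is a simplified sketch of the select-or-create flow described above. The real CLI drives an arrow-key menu; to stay dependency-free this version reads a numbered choice from stdin, and all names here are illustrative:

```go
package stackcmd

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// chooseStack reads a numbered selection instead of an arrow-key menu.
// offerCreate controls whether "<create a new stack>" is shown, since that
// option makes no sense for commands like `pulumi destroy` but does for
// `pulumi stack`.
func chooseStack(stacks []string, offerCreate bool) (string, error) {
	fmt.Println("Please choose a stack, or choose to create a new one:")
	for i, s := range stacks {
		fmt.Printf("  %d) %s\n", i+1, s)
	}
	max := len(stacks)
	if offerCreate {
		max++
		fmt.Printf("  %d) <create a new stack>\n", max)
	}
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return "", err
	}
	n, err := strconv.Atoi(strings.TrimSpace(line))
	if err != nil || n < 1 || n > max {
		return "", fmt.Errorf("invalid selection")
	}
	if offerCreate && n > len(stacks) {
		return "", nil // an empty name signals "create a new stack"
	}
	return stacks[n-1], nil
}
```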
This PR adds a new `pulumi history` command, which prints the update history for a stack.
The local backend stores the update history in a JSON file on disk, next to the checkpoint file. The cloud backend simply provides the update metadata, and expects to receive all the data from a (NYI) `/history` REST endpoint.
`pkg/backend/updates.go` defines the data that is being persisted. The data is wired through the system by adding a new `backend.UpdateMetadata` parameter to a Stack/Backend's `Update` and `Destroy` methods.
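Purely as an illustration of the shape of the data (the real fields live in `pkg/backend/updates.go` and may differ; the field names and helper below are hypothetical), a local-backend history record might look something like this:

```go
package backend

import (
	"encoding/json"
	"os"
)

// UpdateMetadata here is a stand-in for the real type; these fields are
// hypothetical and shown only to convey the flavor of what gets stored.
type UpdateMetadata struct {
	Kind        string            `json:"kind"`        // e.g., "update" or "destroy"
	Message     string            `json:"message"`     // optional message for the update
	Time        int64             `json:"time"`        // Unix timestamp of the update
	Environment map[string]string `json:"environment"` // e.g., git commit, user
}

// appendHistory persists the accumulated history as JSON; for the local
// backend this file would live next to the checkpoint file (the path and
// helper name here are hypothetical).
func appendHistory(path string, history []UpdateMetadata, m UpdateMetadata) error {
	history = append(history, m)
	b, err := json.MarshalIndent(history, "", "    ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, b, 0600)
}
```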
I use `tests/integration/stack_outputs/` as the simple app for the related tests, hence the addition to the `.gitignore` and fixing the name in the `Pulumi.yaml`.
Fixes #636.
This does three things:
* Use nice humanized times for update times, to avoid ridiculously
long timestamps consuming lots of horizontal space (see the sketch
after this list). Instead of
LAST UPDATE
2017-12-12 12:22:59.994163319 -0800 PST
we now see
LAST UPDATE
1 day ago
* Use the longest config key for the horizontal spacing when the key
exceeds the default alignment size. This avoids individual lines
wrapping in awkward ways.
* Do the same for stack names and output properties.
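As an aside, relative times like the above can be produced with a package such as `github.com/dustin/go-humanize`; whether the CLI uses that exact package is an assumption:

```go
package main

import (
	"fmt"
	"time"

	"github.com/dustin/go-humanize"
)

func main() {
	lastUpdate := time.Now().Add(-24 * time.Hour)
	// Prints a relative time such as "1 day ago" instead of a full timestamp.
	fmt.Println(humanize.Time(lastUpdate))
}
```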
This change does two things:
1) Adds progress reporting to our uploads.
2) Eliminates the sleeps that burned 7 seconds at the front of
any cloud update, needlessly. It's actually impressively
fast without these!
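The progress reporting follows a common Go pattern: wrap the request body in a counting `io.Reader`. A generic sketch, not the CLI's actual code:

```go
package client

import (
	"fmt"
	"io"
)

// progressReader wraps the io.Reader handed to the HTTP client and counts
// bytes as the request body is consumed, printing a percentage as it goes.
type progressReader struct {
	r     io.Reader
	total int64 // total size of the upload, in bytes
	read  int64 // bytes consumed so far
}

func (p *progressReader) Read(b []byte) (int, error) {
	n, err := p.r.Read(b)
	p.read += int64(n)
	if p.total > 0 {
		fmt.Printf("\ruploading: %d%%", p.read*100/p.total)
	}
	return n, err
}
```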
When deploying a project via the Pulumi.com service, we have to upload
the entire "context" of your project to Pulumi.com. The context of the
program is all files in the directory tree rooted by the `Pulumi.yaml`
file, which will often contain stuff we don't want to upload, but
previously we had no control over what would be uploaded (and so folks
would do hacky things like delete folders before running `pulumi
update`).
This change adds support for `.pulumiignore` files which should behave
like `.gitignore`. In addition, we were not previously compressing
files when we added them to the zip archive we uploaded; now we do.
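The compression fix boils down to requesting `zip.Deflate` for each entry: with `CreateHeader`, leaving `Method` at its zero value means `zip.Store`, i.e., no compression. A sketch (the helper name is illustrative):

```go
package archive

import (
	"archive/zip"
	"io"
	"os"
)

// addCompressed adds one file to the archive with its contents deflated.
// With CreateHeader, a zero-valued Method means zip.Store (no compression),
// so zip.Deflate must be requested explicitly.
func addCompressed(w *zip.Writer, name, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	entry, err := w.CreateHeader(&zip.FileHeader{
		Name:   name,
		Method: zip.Deflate,
	})
	if err != nil {
		return err
	}
	_, err = io.Copy(entry, f)
	return err
}
```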
By default, every .pulumiignore file is treated as if it had an
exclusion for `.git/` at the top of the file (users can override this
by adding an explicit `!.git/` to their file) since it is very
unlikely for there to ever be a reason to upload the .git folder to
the service.
Fixes pulumi/pulumi-service#122
This is the smallest possible thing that could work for both the local
development case and the case where we are targeting the Pulumi
Service.
Pulling down the pulumiframework and component packages here is a bit
unfortunate, but we plan to rework the model soon enough to a
provider model which will remove the need for us to hold on to this
code (and will bring back the factoring where the CLI does not have
baked in knowledge of the Pulumi Cloud Framework).
Fixes #527
Adds OpenTracing in the Pulumi engine and plugin + langhost subprocesses.
We currently create a single root span for any `Engine.plan` operation - which is a single `preview`, `update`, `destroy`, etc.
The only sub-spans we currently create are at gRPC boundaries, both on the client and server sides and on both the langhost and provider plugin interfaces.
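As a sketch of what a client-side gRPC boundary span looks like, using the `otgrpc` interceptors from `github.com/grpc-ecosystem/grpc-opentracing` (whether the engine wires things up exactly this way is an assumption; `otgrpc` provides a matching server-side interceptor as well):

```go
package rpcutil

import (
	"github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc"
	opentracing "github.com/opentracing/opentracing-go"
	"google.golang.org/grpc"
)

// dialTraced opens a client connection whose unary calls each become a
// child span of whatever span is active in the calling context.
func dialTraced(target string) (*grpc.ClientConn, error) {
	tracer := opentracing.GlobalTracer()
	return grpc.Dial(target,
		grpc.WithInsecure(),
		grpc.WithUnaryInterceptor(otgrpc.OpenTracingClientInterceptor(tracer)),
	)
}
```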
We could extend this to include spans for any other semantically meaningful sections of compute inside the engine, though initial examples show we get pretty good granularity of coverage by focusing on the gRPC boundaries.
In the future, this should be easily extensible to HTTP boundaries and to track other bulky I/O like datastore read/writes once we hook up to the PPC and Pulumi Cloud.
We expose a `--trace <endpoint>` option to enable tracing on the CLI, which we will aim to thread through to subprocesses.
We currently support sending tracing data to a Zipkin-compatible endpoint. This has been validated with both Zipkin and Jaeger UIs.
We do not yet have any tracing inside the TypeScript side of the JS langhost RPC interface. There is not yet automatic gRPC OpenTracing instrumentation (though it looks like it's in progress now) - so we would need to manually create meaningful spans on that side of the interface.
Now, instead of having a .pulumi folder next to each project, we have
a single .pulumi folder in the root of the repository. This is created
by running `pulumi init`.
When run in a git repository, `pulumi init` will place the .pulumi
folder next to the .git folder, so it can be shared across all projects
in a repository. When not in a git repository, it will be created in
the current working directory.
We also start tracking information about the repository itself, in a
new `repo.json` file stored in the root of the .pulumi folder. We
track "owner" and "name", which map to information we use on
pulumi.com.
When run in a git repository with a remote named origin pointing to a
GitHub project, we compute the owner and name by deconstructing
information from the remote's URL. Otherwise, we just use the current
user's username and the name of the current working directory as the
owner and name, respectively.
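For example, the owner/name deconstruction might look roughly like this. This is a sketch handling only the two common GitHub remote URL shapes; the real logic may cover more:

```go
package workspace

import (
	"fmt"
	"strings"
)

// parseGitHubRemote is an illustrative sketch of deconstructing an "origin"
// remote URL into owner and name. It accepts the two common forms:
//
//	git@github.com:owner/name.git
//	https://github.com/owner/name.git
func parseGitHubRemote(remote string) (owner, name string, err error) {
	var path string
	switch {
	case strings.HasPrefix(remote, "git@github.com:"):
		path = strings.TrimPrefix(remote, "git@github.com:")
	case strings.HasPrefix(remote, "https://github.com/"):
		path = strings.TrimPrefix(remote, "https://github.com/")
	default:
		return "", "", fmt.Errorf("not a GitHub remote: %s", remote)
	}
	path = strings.TrimSuffix(path, ".git")
	parts := strings.Split(path, "/")
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unexpected remote path: %s", path)
	}
	return parts[0], parts[1], nil
}
```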
We now encrypt secrets at rest based on a key derived from a user
supplied passphrase.
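The general shape of such a scheme is sketched below, with PBKDF2 and AES-GCM chosen purely for illustration; the actual KDF, cipher, and parameters the CLI uses are not specified here:

```go
package secrets

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/pbkdf2"
)

// encryptValue derives a key from the passphrase with a random salt, then
// seals the plaintext with an AEAD cipher. The salt and nonce must be
// persisted alongside the ciphertext so the value can be decrypted later.
func encryptValue(passphrase, plaintext []byte) (salt, nonce, ciphertext []byte, err error) {
	salt = make([]byte, 16)
	if _, err = io.ReadFull(rand.Reader, salt); err != nil {
		return nil, nil, nil, err
	}
	key := pbkdf2.Key(passphrase, salt, 10000, 32, sha256.New)
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, nil, nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, nil, err
	}
	nonce = make([]byte, aead.NonceSize())
	if _, err = io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, nil, nil, err
	}
	ciphertext = aead.Seal(nil, nonce, plaintext, nil)
	return salt, nonce, ciphertext, nil
}
```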
The system is designed in a way such that we should be able to have a
different decrypter (either using a local key or some remote service
in the Pulumi.com case in the future).
Care is taken to ensure that we do not leak decrypted secrets into the
"info" section of the checkpoint file (since we currently store the
config there).
In addition, secrets are "pay for play": a passphrase is only needed
when dealing with a value that's encrypted. If secure config values
are not used, `pulumi` will never prompt you for a
passphrase. Otherwise, we only prompt if we know we are going to need
to decrypt the value. For example, `pulumi config <key>` only prompts
if `<key>` is encrypted and `pulumi deploy` and friends only prompt if
you are targeting a stack that has secure configuration associated
with it.
Secure values show up as unencrypted config values inside the language
hosts and providers.
Instead of doing the logic to see if a type has YAML tags and then
dispatching based on that to use either the direct go-yaml marshaller
or the one that works in terms of JSON tags, let's just say that we
always add YAML tags as well, and use go-yaml directly.
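For example (a hypothetical type, showing the convention of carrying both tag sets):

```go
package main

import (
	"encoding/json"
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

// Project is a hypothetical type showing the convention: every serialized
// type carries both JSON and YAML tags, so go-yaml can be used directly.
type Project struct {
	Name        string `json:"name" yaml:"name"`
	Description string `json:"description,omitempty" yaml:"description,omitempty"`
}

func main() {
	p := Project{Name: "acmecorp/website"}
	j, _ := json.Marshal(p)
	y, _ := yaml.Marshal(p)
	fmt.Printf("%s\n%s", j, y)
}
```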
This change adds a `make configure` target, which handles preparing
the environment for building the project. This includes existing
steps, like running `dep ensure` and using `yarn` to install the
Node.js SDK's NPM dependencies, and also includes downloading the right Node.js/V8
includes, putting them in the right place, and then generating the
appropriate node-gyp project files that reference those includes.