To help us diagnose various issues -- and also just as a nice status
reporting thing -- we'll add a --verbose flag to the pulumi plugin
install command. This will get used in the package scripts.
We have some code that deals with upgrading legacy projects (which had
workspace-level configuration) to the new format, where configuration
information is stored side by side (SxS) with the application.
This code requires us to get a list of stacks from the backend (which
for hosted stacks means hitting api.pulumi.com) as part of the upgrade
process, so we know all the stacks the user's project has. This is a
somewhat slow operation (which we will make faster regardless), but we
can structure things such that we don't need to do it often.
In the common case, we don't need to do any upgrading at all (new
projects won't need it, and once a project is upgraded, that project
won't need it again), so the code now first checks whether there is
any work to do and only then performs the expensive operation of
getting stacks from a backend.
This should help with the slight pauses we've seen on the command line
since the work to default folks to logging in landed.
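A minimal sketch of the check-first flow, with hypothetical names (the
real helpers in the CLI differ):
```
// Illustrative only -- the names below are not the actual CLI internals.
type stackLister interface {
	ListStacks() ([]string, error) // expensive; may hit api.pulumi.com
}

func upgradeIfNeeded(hasLegacyConfig bool, backend stackLister,
	upgradeStacks func([]string) error) error {
	if !hasLegacyConfig {
		// Common case: a new or already-upgraded project. No backend call.
		return nil
	}
	stacks, err := backend.ListStacks()
	if err != nil {
		return err
	}
	return upgradeStacks(stacks)
}
```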
The code that calculated plugin sizes was incorrect; it showed the
total size consumed by all plugins for each plugin, which is clearly
busted. Each plugin's size is now computed from its own directory.
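A sketch of the fixed computation (assuming one directory per plugin in
the cache; names are illustrative):
```
import (
	"os"
	"path/filepath"
)

// pluginSize sums the sizes of the files under a single plugin's
// directory, rather than the whole plugin cache.
func pluginSize(pluginDir string) (int64, error) {
	var size int64
	err := filepath.Walk(pluginDir, func(_ string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() {
			size += info.Size()
		}
		return nil
	})
	return size, err
}
```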
This clashes with the existing pulumi-wide --emoji/-e flag. I doubt
--exact is going to be very common, so rather than trying to invent a
shorthand for it, we can just support the long form.
This change lets plugin versions float in two ways:
1) If `pulumi plugin install` detects that a newer version is already
available, there's no need to download and install the older version.
2) If the engine attempts to load a plugin at a particular version and
a newer version is available, the newer version is accepted without error.
As part of this, we permit $PATH to have the final say when determining
which version to accept. That is, it can always override the choice.
Note that I highly suspect, in the limit, that we'll want to stop doing
this for major version incompatibilities. For now, since we don't
envision any such version changes imminently, this will suffice.
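The acceptance rule boils down to a semver comparison; here is a hedged
sketch using the github.com/blang/semver package (an assumption on my
part, not necessarily what the CLI uses):
```
import "github.com/blang/semver"

// satisfies reports whether an installed plugin version can stand in
// for a requested one: the same version or anything newer is accepted.
func satisfies(installed, requested semver.Version) bool {
	// In the limit we will likely also want to require
	// installed.Major == requested.Major, per the note above.
	return installed.GTE(requested)
}
```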
This change does three major things:
1. Removes the ability to be logged into multiple clouds at the same
time. Previously, we supported being logged into multiple clouds at
once, and the CLI would fan out requests and join responses when
needed. In general, this was only useful for Pulumi employees who
wanted to run against multiple copies of the service (say, production
and staging), but overall it was very confusing; for example, a stack
with the same identity could appear twice (since it existed in two
backends), which the CLI didn't handle very well.
2. Stops treating the "local" backend as a special thing from the
point of view of the CLI. Previously, we'd always connect to the local
backend and merge that data with whatever was in the clouds we were
connected to. We had gestures like `--local` in `pulumi stack init`
that meant "use the local mode". Instead, to use the local mode you
now run `pulumi login --cloud-url local://`, and then you are logged
into the local backend. Since you can only ever be logged into a
single backend, we can remove the `--local` and `--remote` flags from
`pulumi stack init`; it now simply requires you to be logged in and
creates a stack in whatever backend you are logged into. When logging
into the local backend, you are not prompted for an access key.
3. Prompts for login in places where you need to be logged in, if you
are not already logged in.
Tests now target managed stacks instead of local stacks.
The existing logged-in user and target backend API are used unless PULUMI_ACCESS_TOKEN is defined, in which case tests are run under that access token and against the PULUMI_API backend.
For developer machines, we will now need to be logged in to Pulumi to run tests, and whichever default API backend is logged in (the one listed as current in ~/.pulumi/credentials.json) will be used. If you need to override these, provide PULUMI_ACCESS_TOKEN and possibly PULUMI_API.
For Travis, we currently target the staging service using the Pulumi Bot user.
We have decided to run tests in the pulumi organization. This can be overridden for local testing (or in Travis in the future) by defining PULUMI_API_OWNER_ORGANIZATION and using an access token with access to that organization.
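For local runs, the selection amounts to something like the following
sketch (environment variable names from above; the fallback to the
logged-in user is handled by the CLI's credentials file and is elided):
```
import "os"

// testAccessToken returns the override token and API endpoint, if any;
// otherwise the tests use whichever backend is current in
// ~/.pulumi/credentials.json.
func testAccessToken() (token, api string, overridden bool) {
	token = os.Getenv("PULUMI_ACCESS_TOKEN")
	if token == "" {
		return "", "", false
	}
	return token, os.Getenv("PULUMI_API"), true
}

// testOwnerOrganization returns the organization the tests run in.
func testOwnerOrganization() string {
	if org := os.Getenv("PULUMI_API_OWNER_ORGANIZATION"); org != "" {
		return org
	}
	return "pulumi"
}
```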
Part of pulumi/home#195.
This change includes a bunch of refactorings I made in prep for
doing refresh (first, the command, see pulumi/pulumi#1081):
* The primary change is to the way the engine's core update
functionality works with respect to deploy.Source. This is the
way we can plug in new sources of resource information during
planning (and, soon, diffing). The way I intend to model refresh
is by having a new kind of source, deploy.RefreshSource, which
will let us do virtually everything about an update/diff the same
way with refreshes, avoiding otherwise duplicative effort.
This includes changing the planOptions (nee deployOptions) to
take a new SourceFunc callback, which is responsible for creating
a source specific to the kind of plan being requested (see the
sketch after this list).
Preview, Update, and Destroy are now primarily differentiated by
the kind of deploy.Source that they return, rather than sprinkling
things like `if Destroying` throughout. This tidies up some logic
and, more importantly, gives us precisely the refresh hook we need.
* Originally, we used the deploy.NullSource for Destroy operations.
This simply returns nothing, which is how Destroy works. For some
reason, we were no longer doing this, and instead had some
`if Destroying` cases sprinkled throughout the deploy.EvalSource.
I think this is a vestige of some old way we did configuration, at
least judging by a comment, which is apparently no longer relevant.
* Move diff and diff-printing logic within the engine into its own
pkg/engine/diff.go file, to prepare for upcoming work.
* I keep noticing benign diffs anytime I regenerate protobufs. I
suspect this is because we're all on different versions. I changed
generate.sh to also dump the version into grpc_version.txt. At
least we can understand where the diffs are coming from, decide
whether to take them (i.e., a newer version), and ensure that as
a team we are monotonically increasing, and not going backwards.
* I also tidied up some tiny things I noticed while in there, like
comments, incorrect types, lint suppressions, and so on.
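As a rough sketch of the SourceFunc idea mentioned in the first bullet
(the actual signatures in pkg/engine differ; planOptions and runPlan
are placeholders here):
```
// Each kind of operation supplies the deploy.Source the planner pulls
// resource information from, instead of branching on "if destroying"
// inside shared code.
type SourceFunc func(opts planOptions) (deploy.Source, error)

func plan(opts planOptions, newSource SourceFunc) error {
	// Preview/Update pass a SourceFunc returning an eval source,
	// Destroy one returning the null source, and refresh will pass
	// one returning a deploy.RefreshSource.
	source, err := newSource(opts)
	if err != nil {
		return err
	}
	return runPlan(source, opts)
}
```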
These changes refactor direct interactions with the Pulumi API out of
the cloud backend and into a subpackage, `pkg/backend/cloud/client`.
This package exposes a slightly higher-level API that takes care of
calculating paths, performing HTTP calls, and occasionally wrapping
multiple physical calls into a single logical call (notably the creation
of an update and the upload of its program).
This is primarily intended as preparation for some of the changes
suggested in the feedback for #1067.
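To give a feel for the shape of the new package, here is a hypothetical
higher-level call (the method and type names are illustrative, not the
package's actual API):
```
// CreateUpdate wraps several physical API calls into one logical operation.
func (c *Client) CreateUpdate(stack StackIdentifier, program io.Reader) (UpdateIdentifier, error) {
	// 1. Calculate the REST path for the stack (path logic lives here
	//    now, not in the backend).
	// 2. POST to that path to create the update.
	// 3. Upload the program archive associated with the update.
	// 4. Return the identifier of the newly created update.
	var id UpdateIdentifier
	return id, nil
}
```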
If you used --cloud-url, as in
$ pulumi stack init foo --cloud-url https://x.io
we would silently fall back to logic that creates local stacks.
I realize all of this will get better with the new stack identity
model; in the meantime, let's infer that the user wanted
--remote whenever --cloud-url is non-empty.
When run without a `--plaintext` or `--secret` argument, `pulumi
config` warns that the value is stored unencrypted and that you can
pass `--secret` to encrypt it. Now, we also mention that `--plaintext`
can be passed explicitly on the command line to avoid the warning.
Fixes #752
Note: this is a minor issue that I didn't get to for M11; it isn't
required for M11 and would be fine to merge post-M11.
When you specify a template name explicitly (e.g.
`pulumi new typescript`), we'll try to download the template tarball
without first downloading the JSON list of available templates. The JSON
includes a description used when replacing the `${DESCRIPTION}` string
in template files. Since we didn't download the JSON, we won't have a
description, so we fall back to a default value (`"A Pulumi project."`).
This also happens when specifying `--offline` to use an existing
template under `~/.pulumi/templates`; we won't have a description for
the template, so we fall back to a default description. The fallback
value happens to be the same as the description for each of our current
templates, so no one will currently notice an issue.
For M11, I included initial support for a template manifest file where
the description (and any future metadata) could be stored, but didn't go
as far as actually reading the file.
This change makes it so the CLI actually reads the description from the
manifest file (if it exists), otherwise falling back to the default
value as is done currently. Some minor related cleanup is included in
this change.
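A sketch of that read-with-fallback behavior; the manifest filename and
schema below are placeholders rather than the real ones:
```
import (
	"encoding/json"
	"os"
	"path/filepath"
)

const defaultDescription = "A Pulumi project."

// templateDescription prefers the description from the template's
// manifest and falls back to the default when the manifest is missing
// or unreadable.
func templateDescription(templateDir string) string {
	b, err := os.ReadFile(filepath.Join(templateDir, "manifest.json"))
	if err != nil {
		return defaultDescription
	}
	var m struct {
		Description string `json:"description"`
	}
	if err := json.Unmarshal(b, &m); err != nil || m.Description == "" {
		return defaultDescription
	}
	return m.Description
}
```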
This fixes two different DST bugs in the logs test, one of which
led to the failure in pulumi/pulumi#1041:
1) The assertion was using ANSIC formatting, which does not contain
a time-zone, and because we are now in DST, the resulting time
didn't parse and format in a round-trippable way. Instead, let's
use RFC3339, and let's be explicit about the timezone.
2) The parseSince("1m30s") would in theory fail if the DST changed
at just the right moment, since the reference time could end up
different from the reference time that the test is using. Let's
expose the reference time to the caller, so that they may decide
how to deal with these situations, and let's use UTC here.
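A small standalone illustration of the round-trip difference (not the
actual test code):
```
package main

import (
	"fmt"
	"time"
)

func main() {
	now := time.Now() // local time, possibly in DST

	// RFC3339 carries the zone offset, so the instant round-trips exactly.
	s := now.Format(time.RFC3339Nano)
	back, _ := time.Parse(time.RFC3339Nano, s)
	fmt.Println(back.Equal(now)) // true

	// ANSIC carries no zone (and no sub-second precision), so the parsed
	// value is interpreted as UTC and generally no longer matches.
	s = now.Format(time.ANSIC)
	back, _ = time.Parse(time.ANSIC, s)
	fmt.Println(back.Equal(now)) // false in general
}
```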
This adds a `pulumi new` command which makes it easy to quickly
create the handful of files needed to get started building an empty
Pulumi project.
Usage:
```
$ pulumi new typescript
```
Or you can leave off the template name, and it will ask you to choose
one:
```
$ pulumi new
Please choose a template:
> javascript
python
typescript
```
The engine now emits events with richer metadata during the
ResourceOutputs and ResourcePre callbacks. The CLI can then use this
information to decide if it should display the event or not and how
much of the event to display.
Options dealing with what to display and how to display it have moved
into the CLI and the engine now emits all information for each event.
The engine now unconditionally emits a new type of event, a
PreludeEvent, which contains the configuration for a stack as well as
an indication if the stack is being previewed or updated. The
responsibility for interpreting the --show-config flag on the command
line is now handled by the CLI, which uses this to decide if it should
print the configuration or not, and then writes the "Previewing
changes" or "Deploying chanages" header.
This helper method is only really used for testing, but we should not
allow it to create a Key whose namespace has a colon (as ParseKey
would never build something like this).
config.Key has become a pair of namespace and name. Because the whole
world has not changed yet, there continues to be a way to convert
between a tokens.ModuleMember and a config.Key; however, the
conversion from tokens.ModuleMember can now fail (when the module
member is not of the form `<package>:config:<name>`).
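A sketch of that conversion (the type and function names here are
illustrative, not the actual ones):
```
import (
	"fmt"
	"strings"
)

// Key is a (namespace, name) pair.
type Key struct {
	Namespace string
	Name      string
}

// keyFromModuleMember accepts only module members of the form
// `<package>:config:<name>`; anything else is an error.
func keyFromModuleMember(m string) (Key, error) {
	parts := strings.Split(m, ":")
	if len(parts) != 3 || parts[1] != "config" {
		return Key{}, fmt.Errorf("%q is not of the form <package>:config:<name>", m)
	}
	return Key{Namespace: parts[0], Name: parts[2]}, nil
}
```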
Right now, config.Key is a type alias for tokens.ModuleMember. I did a
pass over the codebase such that we use config.Key everywhere it
looked like the value did not leak to some external process (e.g., a
resource provider or a langhost).
Doing this makes it a little clearer (hopefully) where code is
depending on a module member structure (e.g., `<package>:config:<value>`)
instead of just an opaque type.
As it stands, we only configure those providers for which configuration
is present. This can lead to surprising failure modes if those providers
are then used to create resources. These changes ensure that all
resource providers that are not configured during plan initialization
are configured upon first load.
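An illustrative sketch of the configure-on-first-load idea (not the
engine's actual code; Provider stands in for the resource provider
plugin interface):
```
type Provider interface {
	Configure(vars map[string]string) error
}

type providerHost struct {
	configured map[string]bool              // package -> configured already?
	config     map[string]map[string]string // per-package config (may be empty)
}

// load configures a provider on first use if plan initialization didn't,
// passing whatever configuration is available (possibly none).
func (h *providerHost) load(pkg string, p Provider) error {
	if !h.configured[pkg] {
		if err := p.Configure(h.config[pkg]); err != nil {
			return err
		}
		h.configured[pkg] = true
	}
	return nil
}
```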
Fixes #758.
By using untyped deployment structures via `json.RawMessage`, we can
support round-tripping between old CLI clients and newer servers, without
dropping possibly-important information on the floor. I hadn't realized
this design goal with the original system, and after talking to @pgavlin,
I better realized the intent and that we want to preserve this.
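Roughly, the idea (field names here are illustrative): keep the nested
deployment as raw JSON so fields this CLI doesn't understand survive
the round trip.
```
import "encoding/json"

type UntypedDeployment struct {
	Version    int             `json:"version"`
	Deployment json.RawMessage `json:"deployment"` // preserved byte-for-byte
}

// An older client can unmarshal into this, change the fields it
// understands, and re-marshal without dropping newer fields nested
// inside Deployment.
```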
The filenames we used to store history data locally only had
second-level precision. On my machine, the history test is able to run
multiple `pulumi update` commands in the same second, which causes a
newer history file to overwrite an older one.
This change moves to using a nanosecond-precision timestamp when
writing these files. In addition, the CLI was trying to sort the
updates that came back from the backend (instead of just trusting them
to be in newest-first order, as we documented), so I removed that code
as well.
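The filename change amounts to something like the following (the exact
path layout and suffix are elided/illustrative):
```
import (
	"fmt"
	"time"
)

// historyFileName embeds a nanosecond-precision timestamp, so two
// updates in the same second no longer collide.
func historyFileName(stack string, t time.Time) string {
	return fmt.Sprintf("%s-%d.history.json", stack, t.UnixNano())
}
```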
Migrate configuration from the old model to the new model. The
strategy here is that when we first run `pulumi` we enumerate all of
the stacks from all of the backends we know about and for each stack
get the configuration values from the project and workspace and
promote them into the new file. As we do this, we remove
stack-specific config from the workspace and the Pulumi.yaml file.
If we are able to upgrade all the stacks we know about, we delete all
global configuration data in the workspace and in Pulumi.yaml as well.
We have a test that ensures upgrades continue to work.
This change updates our configuration model to make it simpler to
understand by removing some features and changing how things are
persisted in files.
Notable changes:
- We've removed the notion of "workspace" vs. "project"
config. Now, configuration is always stored in a file next to
`Pulumi.yaml` named `Pulumi.<stack-name>.yaml` (the same file we'd
use for any other stack-specific information we may need to persist
in the future).
- We've removed the notion of project-wide configuration. Every new
stack gets a completely empty set of configuration, and there's no
way to share common values across stacks; instead, the common value
has to be set on each stack.
We retain some of the old code for the configuration system so we can
support upgrading a project in place. That will happen with the next
change.
This change fixes some issues and allows us to close some
others (since they are no longer possible).
Fixes #866. Closes #872. Closes #731.
We are going to be changing the configuration model. To begin, let's
take most of the existing stuff and mark it as "deprecated" so we can
keep the existing behavior (to help transition newer code forward)
while making it clear what APIs should not be called in the
implementation of `pulumi` itself.
Per pulumi/pulumi#984, we will now issue an error if it appears you're
importing a checkpoint from a different stack. This can be overridden
if you know what you're doing (with --force), but in general this is a
sign that you're doing something very wrong that will be hard to undo.
Despite our good progress moving towards having an apitype package,
where our exchange types live and can be shared among the engine and
our services, there were a few major types that were still duplicated.
Resource was the biggest example -- and indeed, the apitype variant
was missing the new Dependencies property -- but there were others,
like Manifest, PluginInfo, etc. These too had semi-random omissions.
This change merges all of these types into the apitype package. This
not only cleans up the redundancy and missing properties, but will
"force the issue" with respect to keeping them in sync and properly
versioning the information in a backwards compatible way.
The resource/stack package still exists as a simple marshaling layer
to and from the engine's core data types.
Finally, I've made the controversial change to share the actual
Deployment data structure at the apitype layer also. This will force
us to confront differences in that data structure similarly, and will
allow us to leverage the strong typing throughout to catch issues.
If currently logged in, `stack init` creates a managed stack. Otherwise, it creates a local
stack. This avoids the need to specify `--local` when not using the service.
As is the case today, `--local` can be passed, which will create a local stack regardless of
whether you are logged in.
A new flag, `--remote`, has been added, which can be passed to indicate a managed stack and
to force an error if you are not logged into the service.
1. Output different-colored edges for parent-child resource
relationships
2. Allow the changing of edge colors via command-line parameters
3. Allow the skipping of the parent-child graph or the
dependency graph when calculating all edges
This modifies the Graph interface slightly to allow an edge to specify
what color should be used when drawing it.
Some file systems do not record birth times, and BirthTime panics in
these cases. We use HasBirthTime to guard against this and print "n/a"
when we do not have a birth time.
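A sketch of the guard, assuming the github.com/djherbis/times package
(which exposes HasBirthTime/BirthTime on its Timespec); the actual
helper may differ:
```
import (
	"os"

	"github.com/djherbis/times"
)

// birthTimeString avoids calling BirthTime on file systems that don't
// record birth times, returning "n/a" instead of panicking.
func birthTimeString(fi os.FileInfo) string {
	ts := times.Get(fi)
	if !ts.HasBirthTime() {
		return "n/a"
	}
	return ts.BirthTime().String()
}
```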
This commit does two things:
1. All dependencies of a resource, both implicit and explicit, are
communicated directly to the engine when registering a resource. The
engine keeps track of these dependencies and ultimately serializes
them out to the checkpoint file upon successful deployment.
2. Once a successful deployment is done, the new `pulumi stack
graph` command reads the checkpoint file and outputs the dependency
information it contains in DOT format.
Keeping track of dependency information within the checkpoint file is
desirable for a number of reasons, most notably delete-before-create,
where we want to delete resources before we have created their
replacement when performing an update.
I was reminded of this yesterday with unprintable characters as I
debugged some things on Windows. Inspired by Yarn, this change adds
a new flag --emoji (-e for short) that can be used to control whether
we show emoji or stick to ASCII-only characters in the console. On
Mac, it defaults to true, and on Windows and Linux, it defaults to
false.
This also brings back the retro ASCII-friendly progress spinner
when --emoji is disabled.
This adds support for two things:
* Installing all plugins that a project requires with a single command:
$ pulumi plugin install
* Listing the plugins that this project requires:
$ pulumi plugin ls --project
$ pulumi plugin ls -p
Prior to this change, we had a flat list of files in the
~/.pulumi/plugins directory. This was simple but unfortunately
too naive, since we in fact have multi-file plugins already.
Dumping them in the same directory increases the risk of a
collision. Instead, let's put them in their own directories.
This means, for example, you'll see things like
~/.pulumi/plugins/
    resource-aws-v0.11.0-dev-8-g57a0d62/
        README.txt
        pulumi-resource-aws
Notice that the binary name stays the same -- e.g., in this
case pulumi-resource-aws -- and does not include the version.
This makes it simple to add it to your $PATH in the usual ways
and have it loaded as a preferred location.
This change renames prune to rm, to match what we use for other
similar commands. Someday perhaps we will add a prune that uses
some smarts to prune old plugins, etc.
Also tidy up some minor things about the command. For example,
we now require --all if you want to truly clear the entire plugin
cache. We also print more detail, like the full list of plugins
to be removed, in the confirmation prompt.