This change adds a Pulumi Cloud Console URL at the end of an update
that went through the Pulumi Cloud API. This is very basic, and is
meant to be just the beginning of adding more cross-linking to the
service. I'm still thinking through the phrasing to use here.
* Send structured errors across RPC boundaries
This brings us closer to gRPC best practices where we send structured
errors with error codes across RPC endpoints. The new "rpcerrors"
package can wrap errors from RPC endpoints, so RPC servers can attach
some additional context as to why a request failed (a sketch of the
underlying gRPC pattern follows below).
* Code review feedback:
1. Rename rpcerrors -> rpcerror, better package name
2. Rename RPCError -> Error, RPCErrorCause -> ErrorCause, names
suggested by gometalinter to improve their package-qualified names
3. Fix import organization in rpcerror.go
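As context, here is a minimal sketch of the underlying gRPC pattern this package builds on: attach a status code and message to an error on the server so the caller can recover structured information across the RPC boundary. The helper name below is hypothetical; the real rpcerror API may differ.

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// wrapInternal is a hypothetical helper in the spirit of rpcerror: it attaches
// a gRPC status code and contextual message to an underlying error before it
// crosses the RPC boundary.
func wrapInternal(err error, context string) error {
	return status.Errorf(codes.Internal, "%s: %v", context, err)
}

func main() {
	// Server side: wrap the failure with context and a code.
	err := wrapInternal(errors.New("disk full"), "plan failed")

	// Client side: recover the structured code and message from the error.
	if s, ok := status.FromError(err); ok {
		fmt.Println(s.Code(), "-", s.Message())
	}
}
```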
The `stacks` endpoint is relatively expensive, as it returns a large amount of
data for each stack (essentially the stack's entire checkpoint). We
currently call this endpoint in a few places that really just need
information about a single stack. In these places we should simply
interact with that stack directly.
After these changes, there is only one call to the `stacks` endpoint on
the startup path: `upgradeConfigurationFiles` loops over all of the
stacks in each backend and attempts to upgrade each stack's
configuration. Once we no longer need to do this we will not be hitting
the `stacks` endpoint on each CLI invocation.
This change includes a bunch of refactorings I made in prep for
doing refresh (first, the command, see pulumi/pulumi#1081):
* The primary change is to the way the engine's core update
functionality works with respect to deploy.Source. This is the
way we can plug in new sources of resource information during
planning (and, soon, diffing). The way I intend to model refresh
is by having a new kind of source, deploy.RefreshSource, which
will let us do virtually everything about an update/diff the same
way with refreshes, avoiding otherwise duplicative effort.
This includes changing the planOptions (née deployOptions) to
take a new SourceFunc callback, which is responsible for creating
a source specific to the kind of plan being requested (a sketch
of this shape follows the list below).
Preview, Update, and Destroy are now primarily differentiated by
the kind of deploy.Source that they return, rather than sprinkling
things like `if Destroying` throughout. This tidies up some logic
and, more importantly, gives us precisely the refresh hook we need.
* Originally, we used the deploy.NullSource for Destroy operations.
This simply returns nothing, which is how Destroy works. For some
reason, we were no longer doing this, and instead had some
`if Destroying` cases sprinkled throughout the deploy.EvalSource.
Judging by a comment, I think this is a vestige of some old way we
did configuration that is apparently no longer relevant.
* Move diff and diff-printing logic within the engine into its own
pkg/engine/diff.go file, to prepare for upcoming work.
* I keep noticing benign diffs anytime I regenerate protobufs. I
suspect this is because we're on different versions of the tooling. I changed
generate.sh to also dump the version into grpc_version.txt. At
least we can understand where the diffs are coming from, decide
whether to take them (i.e., a newer version), and ensure that as
a team we are monotonically increasing, and not going backwards.
* I also tidied up some tiny things I noticed while in there, like
comments, incorrect types, lint suppressions, and so on.
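A minimal sketch of the shape described in the first bullet, using illustrative names rather than the engine's exact types: each operation supplies a SourceFunc, and the kind of Source it returns is what distinguishes Preview/Update from Destroy (and, later, Refresh).

```go
package engine

// Source yields resource information during planning; the engine's real
// interface is richer, but this captures the idea.
type Source interface {
	Next() (resource *string, done bool)
}

// planOptions stands in for the engine's planOptions (née deployOptions).
type planOptions struct {
	// SourceFunc creates a Source specific to the kind of plan requested.
	SourceFunc func(opts planOptions) (Source, error)
}

// evalSource would run the program to produce resources (preview/update).
type evalSource struct{ /* program, config, etc. */ }

func (*evalSource) Next() (*string, bool) { return nil, true }

// nullSource returns nothing, which is exactly how Destroy works: with no
// desired resources, every existing resource is deleted.
type nullSource struct{}

func (nullSource) Next() (*string, bool) { return nil, true }

// Destroy and Update differ only in the Source they plug in.
func destroyOptions() planOptions {
	return planOptions{SourceFunc: func(planOptions) (Source, error) { return nullSource{}, nil }}
}

func updateOptions() planOptions {
	return planOptions{SourceFunc: func(planOptions) (Source, error) { return &evalSource{}, nil }}
}
```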
We previously locked our dependency on google.golang.org/grpc to 1.7.2 due to issues we had seen on 1.8.x, as noted in #701. However, this has prevented us from using some other dependencies that require a newer grpc. Testing in this repo and in AWS showed no problems with the latest 1.10.x versions of the library.
We'll go ahead and remove this constraint and allow grpc to float forward. If we see issues again, we'll use that repro case to investigate an alternative fix in our code.
Resolves #701.
These changes add the API types and cloud backend code necessary to
interact with service-managed stacks (i.e. stacks that do not have
PPC-managed deployments). The bulk of these changes are unremarkable:
the API types are straightforward, as are most of the interactions with
the new APIs. The trickiest bits are token and log management.
During an update to a managed stack, the CLI must continually renew the
token used to authorize the operations on that stack that comprise the
update. Once a token has been renewed, the old token should be
discarded. The CLI supports this by running a goroutine that is
responsible for both periodically renewing the token for an update and
servicing requests for the token itself from the rest of the backend.
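A minimal sketch of that goroutine, with illustrative names rather than the backend's actual code: a single loop renews the token on a timer and answers token requests over a channel, so callers always see the most recently renewed token.

```go
package main

import (
	"fmt"
	"time"
)

// tokenSource owns the current token for an in-flight update.
type tokenSource struct {
	requests chan chan string
	done     chan struct{}
}

func newTokenSource(initial string, renew func(old string) string, every time.Duration) *tokenSource {
	ts := &tokenSource{requests: make(chan chan string), done: make(chan struct{})}
	go func() {
		token := initial
		ticker := time.NewTicker(every)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				token = renew(token) // renew; the old token is simply discarded
			case resp := <-ts.requests:
				resp <- token // serve the current token to the rest of the backend
			case <-ts.done:
				return
			}
		}
	}()
	return ts
}

// Token returns the most recently renewed token.
func (ts *tokenSource) Token() string {
	resp := make(chan string)
	ts.requests <- resp
	return <-resp
}

func main() {
	ts := newTokenSource("token-0", func(old string) string { return old + "+renewed" }, 50*time.Millisecond)
	time.Sleep(120 * time.Millisecond)
	fmt.Println(ts.Token())
}
```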
In addition to token renewal, log output must be captured and uploaded
to the service during an update to a managed stack. Implementing this in
a reasonable fashion required a bit of refactoring in order to reuse
what already exists for the local backend. Each event-specific `Display`
function was replaced with an equivalent `Render` function that returns
a string rather than writing to a stream. This approach was chosen
primarily to avoid dealing with sheared colorization tags, which would
otherwise require clients to fuse log lines before colorizing. We could
take that approach in the future.
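A small illustration of the refactoring (the event type and function names are made up for the example): each event renders to a string, the local backend's display path just writes that string, and a cloud path can instead upload it as a log line.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// stepEvent stands in for one of the engine's display events.
type stepEvent struct {
	op  string
	urn string
}

// renderStepEvent returns the fully rendered line, colorization and all,
// instead of writing to a stream.
func renderStepEvent(e stepEvent) string {
	return fmt.Sprintf("  %s  %s\n", e.op, e.urn)
}

// displayStepEvent preserves the local backend's old behavior by writing the
// rendered string; a cloud backend can append it to a log upload instead.
func displayStepEvent(w io.Writer, e stepEvent) {
	fmt.Fprint(w, renderStepEvent(e))
}

func main() {
	displayStepEvent(os.Stdout, stepEvent{op: "+ create", urn: "urn:pulumi:dev::proj::aws:s3/bucket:Bucket::site"})
}
```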
These changes refactor direct interactions with the Pulumi API out of
the cloud backend and into a subpackage, `pkg/backend/cloud/client`.
This package exposes a slightly higher-level API that takes care of
calculating paths, performing HTTP calls, and occasionally wrapping
multiple physical calls into a single logical call (notably the creation
of an update and the upload of its program).
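A sketch of the shape of that subpackage (the endpoint paths and method name below are illustrative, not the real API): callers get one logical method, and the client hides path construction and the fact that two HTTP requests are involved.

```go
package client

import (
	"bytes"
	"fmt"
	"net/http"
)

// Client wraps access to the Pulumi API for the cloud backend.
type Client struct {
	apiURL string
	http   *http.Client
}

// CreateUpdateWithProgram is one logical operation that performs two physical
// calls: create the update record, then upload the program archive.
func (c *Client) CreateUpdateWithProgram(stack string, archive []byte) error {
	// Physical call 1: create the update. (Illustrative path.)
	createURL := fmt.Sprintf("%s/api/stacks/%s/updates", c.apiURL, stack)
	resp, err := c.http.Post(createURL, "application/json", bytes.NewReader(nil))
	if err != nil {
		return err
	}
	resp.Body.Close()

	// Physical call 2: upload the program archive for that update.
	uploadURL := createURL + "/latest/program"
	resp, err = c.http.Post(uploadURL, "application/octet-stream", bytes.NewReader(archive))
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```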
This is primarily intended as preparation for some of the changes
suggested in the feedback for #1067.
We needed to have two types of `UpdateProgramRequest` to serve both the newer shape of Pulumi configuration values and the older, untyped version. (The original change was supporting the `ConfigValue` type, which had a flag to indicate whether the configuration value was encrypted or not.)
Now that LM has been migrated to the M10 bits (https://github.com/pulumi/home/issues/168), we can remove this type. (The PR to remove existing references in the `pulumi-service` repo is https://github.com/pulumi/pulumi-service/pull/957.)
The semantic versions we were using for pre-release PyPI packages weren't
quite right. We had been using local version identifiers, a la +, rather
than proper pre-release tags, which means that version specifications like
`pulumi==0.11.0` will match `0.11.0+dev.1521506136.g7f043fd.dirty`, in
addition to just `0.11.0`. This is clearly not what we want.
This change moves us over to proper alpha and release candidate versions,
as specified in https://www.python.org/dev/peps/pep-0440/#pre-releases.
Namely, any `x.y.z-dev-123456789-abcdef` version will get translated into
an alpha `x.y.za123456789-abcdef`, and any `x.y.z-rc1` will get translated
into a proper release candidate `x.y.zrc1` version number.
If you use --cloud-url, as in
$ pulumi stack init foo --cloud-url https://x.io
we would silently fall back to logic that creates local stacks.
I realize all of this will get better with the new stack identity
model, however in the meantime, let's infer that the user wanted
--remote when --cloud-url is non-"".
When run without a `--plaintext` or `--secret` argument, `pulumi
config` warns that the value is stored unencrypted and that you can
pass `--secret` to encrypt it. Now, we also mention that `--plaintext`
can be passed explicitly on the command line to avoid the warning.
Fixes #752
Implement Python invoke RPC
This change implements the invoke function for resource provider
RPCs. This is required to support a customer scenario.
There are a few other minor updates:
* Rename pulumi.export to pulumi.output.
* Change register_resource to, like invoke, return the resulting
object/dictionary, instead of the set_outputs function.
* Initialize the monitor/engine RPC connections to None when not
attached to the engine, thus ensuring good error messages.
* Fix Python project/stack metadata
Our previous strategy of just using `git describe --tags --dirty` to
compute a version caused issues. The major one was that since versions
sort lexicographically, git's strategy of having a commit count without
leading zeros led to cases where 0.11.0-dev-9 was "newer than"
0.11.0-dev-10, which is not what you want at all.
With this change, we compute a version by first seeing if the commit
is tagged, and if so, we use that tag. Otherwise, we take the closest
tag and append to it the Unix timestamp of the commit, followed by the
git hash.
Because we use the commit timestamp, things will sort correctly again.
Part of pulumi/home#174
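A sketch of the scheme in Go for illustration (the real logic presumably lives in the build scripts, and the exact separator format here mirrors the examples above rather than the actual output): tagged commits use the tag directly; otherwise the closest tag is extended with the commit's Unix timestamp and hash.

```go
package version

import "fmt"

// computeVersion builds a version string from git metadata that has already
// been collected: the exact tag if the commit is tagged, otherwise the
// closest tag plus commit info.
func computeVersion(exactTag, closestTag string, commitUnixTime int64, commitHash string) string {
	if exactTag != "" {
		return exactTag
	}
	// Unix timestamps are monotonically increasing integers of equal width,
	// so these versions sort chronologically, unlike bare commit counts.
	return fmt.Sprintf("%s-dev-%d-%s", closestTag, commitUnixTime, commitHash)
}
```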
Just a nit: It's possible (though, unlikely) that the repo file is
deleted between the call to `os.Stat` and `ioutil.ReadFile`. Instead,
just try to read the file -- if the file doesn't exist,
`ioutil.ReadFile` will return an error that works the same with
`os.IsNotExist(err)` as the error returned from `os.Stat`.
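A minimal sketch of the resulting pattern (the function and package names are illustrative): read the file directly and treat "does not exist" the same way the `os.Stat`-based check would have.

```go
package workspace

import (
	"io/ioutil"
	"os"
)

// readRepoFile reads the repo file if it exists. Skipping the separate
// os.Stat call removes the window in which the file could be deleted between
// the two operations.
func readRepoFile(path string) ([]byte, error) {
	data, err := ioutil.ReadFile(path)
	if os.IsNotExist(err) {
		return nil, nil // no repo file yet; treat as empty rather than an error
	}
	if err != nil {
		return nil, err
	}
	return data, nil
}
```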
We need to support the current version of the nodejs language host
running programs that use older versions of @pulumi/pulumi, where
the runtime expected config keys to look like
`<package>:config:<name>`. In the language host we actually did the
transformation from the new format to the old one for compatibility
reasons, but we then dropped the transformed value on the floor.
Note: This is a minor issue that I didn't get to for M11 that isn't
required for M11 and would be fine merging for post-M11.
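A sketch of the transformation in question (illustrative Go, not the language host's exact code): rewrite new-style keys into the legacy form and, crucially, actually keep the rewritten value instead of dropping it.

```go
package langhost

import "strings"

// toLegacyConfigKey rewrites "<package>:<name>" into the older
// "<package>:config:<name>" form that older versions of @pulumi/pulumi expect.
func toLegacyConfigKey(key string) string {
	parts := strings.SplitN(key, ":", 2)
	if len(parts) == 2 && !strings.Contains(parts[1], ":") {
		return parts[0] + ":config:" + parts[1]
	}
	return key // already in the legacy form (or not a namespaced key)
}

// legacyConfig applies the rewrite to every key, keeping (not dropping!) the
// transformed result.
func legacyConfig(config map[string]string) map[string]string {
	out := make(map[string]string, len(config))
	for k, v := range config {
		out[toLegacyConfigKey(k)] = v
	}
	return out
}
```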
When you specify a template name explicitly (e.g.
`pulumi new typescript`), we'll try to download the template tarball
without first downloading the JSON list of available templates. The JSON
includes a description used when replacing the `${DESCRIPTION}` string
in template files. Since we didn't download the JSON, we won't have a
description, so we fall back to a default value (`"A Pulumi project."`).
This also happens when specifying `--offline` to use an existing
template under `~/.pulumi/templates`; we won't have a description for
the template, so we fall back to a default description. The fallback
value happens to be the same as the description for each of our current
templates, so no one will currently notice an issue.
For M11, I included initial support for a template manifest file where
the description (and any future metadata) could be stored, but didn't go
as far as actually reading the file.
This change makes it so the CLI actually reads the description from the
manifest file (if it exists), otherwise falling back to the default
value as is done currently. Some minor related cleanup is included in
this change.
The four concerns are:
1. Parsing a V8 function string so we can figure out captured variables.
2. Walking the function/object graph, producing the graph we will serialize.
3. Rewriting constructors and methods so that 'super' works.
4. Serializing the graph to text.
This change uses the prior checkpoint's deployment manifest to pre-
populate all plugins required to complete the destroy operation. This
allows for subsequent attempts to load a resource's plugin to match the
already-loaded version. This approach obviously doesn't work in a
hypothetical future world where plugins for the same resource provider
are loaded side-by-side, but we already know that.
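A sketch of the idea (types and function names are illustrative, not the engine's actual code): walk the plugins recorded in the previous deployment's manifest and load each one up front at exactly that version, so later plugin lookups resolve to the already-loaded version.

```go
package engine

import "fmt"

// pluginInfo mirrors the (name, version) pairs recorded in a deployment manifest.
type pluginInfo struct {
	Name    string
	Version string
}

// preloadPlugins loads every plugin the prior checkpoint's manifest mentions
// before the destroy begins, so subsequent loads match these versions.
func preloadPlugins(manifest []pluginInfo, load func(name, version string) error) error {
	for _, p := range manifest {
		if err := load(p.Name, p.Version); err != nil {
			return fmt.Errorf("preloading plugin %s@%s: %v", p.Name, p.Version, err)
		}
	}
	return nil
}
```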
Also, rename and clean up a bunch of serialization code.
Also, generate better environment names in the serialized closure code. This code should be much easier to make sense of, as the names will better track to the original names in the user code.
Also, dedupe simple non-capturing functions. This helps ensure we don't spit out N copies of __awaiter (one per file it is declared in).
This fixes two different DST bugs in the logs test, one of which
led to the failure in pulumi/pulumi#1041 (a sketch of both fixes
follows the list):
1) The assertion was using ANSIC formatting, which does not contain
a time-zone, and because we are now in DST, the resulting time
didn't parse and format in a round-trippable way. Instead, let's
use RFC3339, and let's be explicit about the timezone.
2) The parseSince("1m30s") would in theory fail if the DST changed
at just the right moment, since the reference time could end up
different from the reference time that the test is using. Let's
expose the reference time to the caller, so that they may decide
how to deal with these situations, and let's use UTC here.
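A small sketch of both fixes (the helper names are illustrative, not the test's actual code): RFC3339 keeps the zone offset so a format/parse round trip preserves the instant even during DST, and the reference time for relative durations is supplied by the caller in UTC rather than taken internally.

```go
package main

import (
	"fmt"
	"time"
)

// roundTrip formats and re-parses a timestamp. With RFC3339 the offset is part
// of the string, so the instant survives even when local time is in DST;
// ANSIC has no zone information and would parse back as UTC.
func roundTrip(t time.Time) (time.Time, error) {
	return time.Parse(time.RFC3339, t.Format(time.RFC3339))
}

// parseSince resolves a relative duration like "1m30s" against a reference
// time chosen by the caller, rather than grabbing time.Now internally.
func parseSince(since string, reference time.Time) (time.Time, error) {
	d, err := time.ParseDuration(since)
	if err != nil {
		return time.Time{}, err
	}
	return reference.Add(-d), nil
}

func main() {
	ref := time.Now().UTC()
	start, _ := parseSince("1m30s", ref)
	rt, _ := roundTrip(ref)
	fmt.Println(start.Format(time.RFC3339), rt.Format(time.RFC3339))
}
```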