This change does three major things:
1. Removes the ability to be logged into multiple clouds at the same
time. Previously, we supported being logged into multiple clouds at
once, and the CLI would fan out requests and join responses when
needed. In general, this was only useful for Pulumi employees who
wanted to run against multiple copies of the service (say, production
and staging), but overall it was very confusing: for example, a stack
with the same identity could appear twice (once per backend), which
the CLI didn't handle very well.
2. Stops treating the "local" backend as special, from the point of
view of the CLI. Previously we'd always connect to the local backend
and merge that data with whatever was in the clouds we were connected
to. We had gestures like `--local` in `pulumi stack init` that meant
"use the local mode". Instead, to use the local mode you now run
`pulumi login --cloud-url local://`, and then you are logged into the
local backend. Since you can only ever be logged into a single
backend, we can remove the `--local` and `--remote` flags from
`pulumi stack init`; it now simply requires you to be logged in and
creates the stack in whatever backend you are logged into. When
logging into the local backend, you are not prompted for an access
token.
3. Prompts for login in places that require you to be logged in, if
you are not already logged in.
Tests now target managed stacks instead of local stacks.
The existing logged-in user and target backend API are used unless PULUMI_ACCESS_TOKEN is defined, in which case tests are run under that access token and against the PULUMI_API backend.
For developer machines, we will now need to be logged in to Pulumi to run tests, and whichever default API backend is logged in (the one listed as current in ~/.pulumi/credentials.json) will be used. If you need to override these, provide PULUMI_ACCESS_TOKEN and possibly PULUMI_API.
For Travis, we currently target the staging service using the Pulumi Bot user.
We have decided to run tests in the pulumi organization. This can be overridden for local testing (or in Travis in the future) by defining PULUMI_API_OWNER_ORGANIZATION and using an access token with access to that organization.
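A sketch of the credential selection the harness performs under these rules (the helper name and the credentials-file parsing are illustrative, not the actual implementation):

```go
package harness

import "os"

// pickCredentials sketches the selection rules described above: an
// explicit PULUMI_ACCESS_TOKEN (plus optional PULUMI_API) overrides
// the current login recorded in ~/.pulumi/credentials.json.
func pickCredentials() (token, api string) {
	if t := os.Getenv("PULUMI_ACCESS_TOKEN"); t != "" {
		// PULUMI_API may be empty, in which case the service
		// default applies.
		return t, os.Getenv("PULUMI_API")
	}
	// Otherwise, fall back to whatever backend is listed as current
	// in ~/.pulumi/credentials.json (file reading elided here).
	return "", ""
}
```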
Part of pulumi/home#195.
This change includes a bunch of refactorings I made in prep for
doing refresh (first, the command, see pulumi/pulumi#1081):
* The primary change is to alter the way the engine's core update
functionality works with respect to deploy.Source. This is the
way we can plug in new sources of resource information during
planning (and, soon, diffing). The way I intend to model refresh
is by having a new kind of source, deploy.RefreshSource, which
will let us do virtually everything about an update/diff the same
way with refreshes, avoiding otherwise duplicative effort.
This includes changing the planOptions (née deployOptions) to
take a new SourceFunc callback, which is responsible for creating
a source specific to the kind of plan being requested (sketched
after this list).
Preview, Update, and Destroy are now primarily differentiated by
the kind of deploy.Source that they return, rather than sprinkling
things like `if Destroying` throughout. This tidies up some logic
and, more importantly, gives us precisely the refresh hook we need.
* Originally, we used the deploy.NullSource for Destroy operations.
This simply returns nothing, which is how Destroy works. For some
reason, we were no longer doing this, and instead had some
`if Destroying` cases sprinkled throughout the deploy.EvalSource.
I think this is a vestige of some old way we did configuration, at
least judging by a comment, which is apparently no longer relevant.
* Move diff and diff-printing logic within the engine into its own
pkg/engine/diff.go file, to prepare for upcoming work.
* I keep noticing benign diffs anytime I regenerate protobufs. I
suspect this is because we're each on different versions of the
tools. I changed generate.sh to also dump the version into
grpc_version.txt. At least we can understand where the diffs are
coming from, decide whether to take them (i.e., a newer version),
and ensure that as a team we are monotonically increasing, and not
going backwards.
* I also tidied up some tiny things I noticed while in there, like
comments, incorrect types, lint suppressions, and so on.
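To make the SourceFunc shape concrete, here is a minimal sketch; everything below is a stand-in for the real deploy package types (only SourceFunc, planOptions, and the NullSource idea come from the change itself):

```go
package engine

// Source supplies resource information to the planner during an
// update (standing in for deploy.Source).
type Source interface {
	Close() error
	// Iterate and friends elided.
}

// SourceFunc creates the Source specific to the kind of plan
// requested, so Preview, Update, and Destroy differ only in the
// source they supply.
type SourceFunc func() (Source, error)

// planOptions (née deployOptions) carries a SourceFunc instead of
// sprinkling `if Destroying` checks through the engine.
type planOptions struct {
	SourceFunc SourceFunc
	// ...remaining options elided...
}

// nullSource yields no resources at all, which is exactly what
// Destroy wants (mirroring deploy.NullSource).
type nullSource struct{}

func (nullSource) Close() error { return nil }

// destroySource is the SourceFunc a Destroy plan would install.
func destroySource() (Source, error) { return nullSource{}, nil }
```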
This takes the existing `apitype.Checkpoint` type and renames it to
`apitype.CheckpointV1`, locking in its shape. In addition, we introduce
an `apitype.VersionedCheckpoint` type, which holds a version number and
a JSON document representing a checkpoint at that version. Now, when
reading a checkpoint, the CLI can determine whether it's in a format it
understands, and fail gracefully if it is not.
While the CLI understands the older checkpoint version, it always
writes the newest version format, meaning that if you manage a
fire-and-forget stack with this version of the CLI, it will be
unreadable by previous versions.
Stacks managed by Pulumi.com are not impacted by this change.
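Roughly, the versioning scheme looks like the following sketch (field and helper names are illustrative, not the exact apitype definitions):

```go
package apitype

import (
	"encoding/json"
	"fmt"
)

// CheckpointV1 stands in for the renamed, shape-locked checkpoint
// type; its fields are elided here.
type CheckpointV1 struct{}

// VersionedCheckpoint pairs a version number with the raw checkpoint
// document, mirroring the scheme described above.
type VersionedCheckpoint struct {
	Version    int             `json:"version"`
	Checkpoint json.RawMessage `json:"checkpoint"`
}

// unmarshalCheckpoint fails gracefully on versions it does not know.
func unmarshalCheckpoint(data []byte) (*CheckpointV1, error) {
	var v VersionedCheckpoint
	if err := json.Unmarshal(data, &v); err != nil {
		return nil, err
	}
	if v.Version != 1 {
		return nil, fmt.Errorf("unsupported checkpoint version %d", v.Version)
	}
	var chk CheckpointV1
	if err := json.Unmarshal(v.Checkpoint, &chk); err != nil {
		return nil, err
	}
	return &chk, nil
}
```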
Fixes: #887
The change to refactor out where we store configuration data broke our
old strategy, which we discovered when we tried to take this payload
into pulumi-aws.
Despite our good progress moving towards having an apitype package,
where our exchange types live and can be shared among the engine and
our services, there were a few major types that were still duplicated.
Resource was the biggest example -- and indeed, the apitype variant
was missing the new Dependencies property -- but there were others,
like Manifest, PluginInfo, etc. These too had semi-random omissions.
This change merges all of these types into the apitype package. This
not only cleans up the redundancy and missing properties, but will
"force the issue" with respect to keeping them in sync and properly
versioning the information in a backwards compatible way.
The resource/stack package still exists as a simple marshaling layer
to and from the engine's core data types.
Finally, I've made the controversial change to share the actual
Deployment data structure at the apitype layer also. This will force
us to confront differences in that data structure similarly, and will
allow us to leverage the strong typing throughout to catch issues.
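As an illustration of the kind of shared exchange type this enables (the fields shown are a sketch, not the full definition):

```go
package apitype

// Resource sketches the kind of shared exchange type described
// above; the real type carries more fields. The point is that the
// engine and the service now (de)serialize one wire shape,
// Dependencies included.
type Resource struct {
	URN          string                 `json:"urn"`
	Type         string                 `json:"type"`
	Dependencies []string               `json:"dependencies,omitempty"`
	Inputs       map[string]interface{} `json:"inputs,omitempty"`
	Outputs      map[string]interface{} `json:"outputs,omitempty"`
}
```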
This change gets enough of the Python SDK up and running that the
empty Python program will work. Mostly just scaffolding, but the
basic structure is now in place. The primary remaining work is to
wire up resource creation to the gRPC interfaces.
In summary:
* The basic structure is as follows:
- Everything goes into sdk/python/.
- sdk/python/cmd/pulumi-langhost-python is a Go language host
that simply knows how to spawn Python processes to run our
entrypoint in response to requests by the engine (sketched
after this summary).
- sdk/python/cmd/pulumi-langhost-python-exec is a little Python
shim that is invoked by the language host to run Python programs,
and is responsible for setting up the minimal goo before we can
do so (RPC connections and the like).
- sdk/python/lib/ contains a Python Pip package suitable for PyPI.
- In there, we have two packages: the root pulumi package that
contains all of the basic Pulumi programming model abstractions,
and pulumi.runtime, which contains the implementation of
resource registration, RPC interfacing with the engine, and so on.
* Add logic in our test framework to conditionalize on the language
type and react accordingly. This will allow us to skip Yarn for
Python projects and eventually run Pip if there's a requirements.txt.
* Created the basic project structure, including all of the usual
Make targets for installing into the proper places.
* Building also runs Pylint and we are clean.
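For a feel of what the Go language host does, here is a minimal sketch of spawning the Python shim (the interpreter name, paths, and argument layout are illustrative, not the real contract):

```go
package langhost

import (
	"os"
	"os/exec"
)

// runPythonProgram sketches how the Go language host might launch
// the Python shim on behalf of the engine, wiring the child's output
// back to our own streams.
func runPythonProgram(shimPath, program string, args []string) error {
	cmd := exec.Command("python", append([]string{shimPath, program}, args...)...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
```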
There are a few other minor things in here:
* Add an "empty" test for both Node.js and Python. These pass.
* Fix an existing bug in plugin shutdown logic. At some point, we
started waiting for stderr/stdout to flush before shutting down
the plugin; but if certain failures happen "early" during the
plugin launch process, these channels will never get initialized,
and so waiting for them deadlocks (see the sketch at the end).
* Recently we seem to have added logic to delete test temp
directories if a failure happened during initialization of said
temp directories. This is unfortunate, because you often need to
look at the temp directory to see what failed. We already clean
them up elsewhere after the full test completes successfully, so
I don't think we need to be doing this, and I've removed it.
Still many loose ends (config, resources, etc), but it's a start!
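For the plugin shutdown fix mentioned above, the essence is guarding the waits, as in this sketch (assuming done-channels that close once each stream is drained; names are illustrative):

```go
package plugin

// pluginProc sketches the shutdown fix: only wait on the
// stdout/stderr done-channels if launch got far enough to create
// them. A receive on a nil channel blocks forever, which was the
// deadlock.
type pluginProc struct {
	stdoutDone chan struct{}
	stderrDone chan struct{}
}

func (p *pluginProc) awaitStreams() {
	if p.stdoutDone != nil {
		<-p.stdoutDone
	}
	if p.stderrDone != nil {
		<-p.stderrDone
	}
}
```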
Backup copies of local stack checkpoints are now saved to the
user's home directory (`~/.pulumi/backups`) by default.
This enables users to recover after accidentally deleting their
local `.pulumi` directory (e.g. via `git clean`).
The behavior can be disabled by setting the
PULUMI_DISABLE_CHECKPOINT_BACKUPS environment variable, which
we use to disable backups when running all tests other than the
test for this functionality.
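A sketch of the backup gating, assuming setting the variable to any value disables backups (helper names are illustrative):

```go
package local

import (
	"os"
	"path/filepath"
)

// shouldBackup reports whether checkpoint backups are enabled; the
// environment variable is an escape hatch, mainly for tests.
func shouldBackup() bool {
	return os.Getenv("PULUMI_DISABLE_CHECKPOINT_BACKUPS") == ""
}

// backupDir is where backup copies land under the user's home.
func backupDir(home string) string {
	return filepath.Join(home, ".pulumi", "backups")
}
```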
This addresses pulumi/pulumi#446: what we used to call "package" is
now called "project". This has gotten more confusing over time, now
that we're doing real package management.
Also fixes pulumi/pulumi#426, while in here.
We've seen yarn fail from time to time during our integration
tests (usually timeouts talking to the npm registry) so let's add a
few retries to all yarn commands.
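Something along these lines (the attempt count and backoff are illustrative):

```go
package integration

import (
	"os/exec"
	"time"
)

// runYarnWithRetries sketches the retry wrapper: re-run a flaky yarn
// invocation a few times before giving up.
func runYarnWithRetries(args ...string) error {
	var err error
	for attempt := 1; attempt <= 3; attempt++ {
		if err = exec.Command("yarn", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(time.Duration(attempt) * time.Second)
	}
	return err
}
```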
We were pretty careful to keep the test directory around if the test ever
exited early due to a panic or error return. But if the test ran to
completion and failed -- for example, if ExtraRuntimeValidation caused the
test to fail -- we would end up deleting the test directory.
Fixes #868
This PR adds a new `pulumi history` command, which prints the update history for a stack.
The local backend stores the update history in a JSON file on disk, next to the checkpoint file. The cloud backend simply provides the update metadata, and expects to receive all the data from a (NYI) `/history` REST endpoint.
`pkg/backend/updates.go` defines the data that is being persisted. The way the data is wired through the system is adding a new `backend.UpdateMetadata` parameter to a Stack/Backend's `Update` and `Destroy` methods.
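A sketch of the metadata shape (field names are illustrative; `pkg/backend/updates.go` has the real definitions):

```go
package backend

// UpdateMetadata sketches the new parameter threaded through a
// Stack/Backend's Update and Destroy methods.
type UpdateMetadata struct {
	// Message is a human-readable note recorded with the update.
	Message string `json:"message"`
	// Environment captures ambient details, e.g. the git commit or
	// CI job that produced the update.
	Environment map[string]string `json:"environment"`
}
```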
I use `tests/integration/stack_outputs/` as the simple app for the related tests, hence the addition to the `.gitignore` and fixing the name in the `Pulumi.yaml`.
Fixes #636.
In Travis, we've seen cases where writes to our standard streams
result in an error like `/dev/stderr: resource temporarily
unavailable`, which causes the tests to panic.
Now, in a perfect world, writes to /dev/stderr would not fail in this
way, but we do not live in a perfect world. Other processes on the
machine may make stderr/stdout non-blocking. We are now seeing this
failure in Travis more often, and it is masking real Pulumi failures
we want to debug.
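One plausible mitigation, sketched here (not necessarily the exact approach this change takes), is to retry writes that fail with EAGAIN instead of letting the test panic:

```go
package testutil

import (
	"io"
	"os"
	"syscall"
	"time"
)

// retryingWrite retries writes that fail with EAGAIN. Writes to an
// *os.File surface EAGAIN wrapped in *os.PathError; partial-write
// handling is elided in this sketch.
func retryingWrite(w io.Writer, p []byte) (int, error) {
	for {
		n, err := w.Write(p)
		if pe, ok := err.(*os.PathError); ok && pe.Err == syscall.EAGAIN {
			time.Sleep(10 * time.Millisecond)
			continue
		}
		return n, err
	}
}
```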
This change restructures the test framework code a bit, to make it
easier to introduce additional languages. Our knowledge of Yarn and
Node.js project structure, for instance, was previously baked into
the test logic in a way that made it hard to make Yarn, for instance,
optional. (In Python, of course, it will not be used.) To better
support this, I've moved some state onto a new programTester struct
that we can use to lazily find binaries required during the testing
(such as Yarn, Pip, and so on). I'm committing this separately so
that I can minimize merge conflicts in the Python work.
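The lazy-lookup idea, sketched (the field set and method names are illustrative):

```go
package integration

import "os/exec"

// programTester holds lazily-resolved tool state: binaries are looked
// up on first use, so a Python test never requires Yarn.
type programTester struct {
	yarnBin string
	pipBin  string
}

// getYarn resolves and caches the yarn binary on first use.
func (pt *programTester) getYarn() (string, error) {
	if pt.yarnBin == "" {
		path, err := exec.LookPath("yarn")
		if err != nil {
			return "", err
		}
		pt.yarnBin = path
	}
	return pt.yarnBin, nil
}
```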
Fix two references to the now-unnamed `dir` that should have been to
other variables.
Check a real condition before the deferred call to RemoveAll instead of
checking the error return.
These changes refactor the engine's entrypoints--Deploy, Destroy, and
Preview--to be update-centric rather than stack-centric. Each of these
methods now takes a value of a new type, Update, that abstracts away the
vagaries of fetching and maintaining the update's state. This
refactoring also reinforces Pulumi.yaml as a CLI concept rather than an
engine concept; the CLI is now the only reader/writer of this format.
These changes will smooth the way for a few refactorings on the service
side that will aid in update isolation.
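A sketch of the Update abstraction (the method set and stand-in types are illustrative):

```go
package engine

// Update sketches the new abstraction: the engine's entrypoints no
// longer read Pulumi.yaml themselves, they ask an Update for the
// state they need.
type Update interface {
	// GetTarget returns the stack state the operation applies to.
	GetTarget() *Target
	// GetProject returns project metadata the CLI read from
	// Pulumi.yaml on the engine's behalf.
	GetProject() *Project
}

// Target and Project stand in for the engine's real types.
type Target struct{ Name string }
type Project struct{ Name string }
```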
This PR updates the `pkg/testing/integration` package to support running integration tests against the Pulumi Service if desired. This is done through adding new options to `ProgramTestOptions`. (Generally adding support for providing values to flags that were previously inaccessible.)
I added an integration test to confirm that it all works if the PULUMI_API environment variable is set. These tests aren't run in Travis, only manually, since we cannot reliably run tests from `master` against the service because of the delay in rolling out updates to the Pulumi SDK, etc.
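Usage ends up looking roughly like this sketch (the option set shown is minimal and illustrative; the harness is assumed to pick up PULUMI_API and PULUMI_ACCESS_TOKEN as described):

```go
package mytests

import (
	"testing"

	"github.com/pulumi/pulumi/pkg/testing/integration"
)

// TestAgainstService sketches pointing a test at the service; any
// service-specific options beyond Dir come from the environment.
func TestAgainstService(t *testing.T) {
	integration.ProgramTest(t, &integration.ProgramTestOptions{
		Dir: "stack_outputs",
	})
}
```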
This fixes a few more edit directory issues, where we didn't
correctly propagate the changes in edit directory required during
subsequent destroy/stack activities. It also fixes a few error
paths so that we preserve the right directory to be removed.
This change adds the ability to do very coarse-grained negative
tests in our integration test framework. Either a test itself,
or one of its edits, may be marked ExpectFailure == true, at which
point either the preview or update MUST fail (and, if one fails
without this being set, we still treat it as an error).
This will allow us to remove a lot of current boilerplate in individual tests, and move it into the test harness.
Note that this will require updating users of the integration test framework. By moving to a property bag of inputs, we should avoid needing future breaking changes to this API though.
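A sketch of how a negative test might read under the new property bag (exact field placement is illustrative):

```go
package mytests

import (
	"testing"

	"github.com/pulumi/pulumi/pkg/testing/integration"
)

// TestNegative sketches the coarse-grained negative-test knob: the
// initial update must fail, while the subsequent edit must succeed.
func TestNegative(t *testing.T) {
	integration.ProgramTest(t, &integration.ProgramTestOptions{
		Dir:           "step1",
		ExpectFailure: true,
		EditDirs: []integration.EditDir{
			{Dir: "step2"}, // ExpectFailure defaults to false here
		},
	})
}
```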
* Take an options pointer so values can change as a test runs.
* Don't pass redundant information.
* Extract initialization routine.
* Fix caller.
* Check return value.
* Extract destruction logic.
* Move preview and update into their own function.
* Inline null check.
* Revert "Make sure we properly update dir so that pulumi-destroy works."
This reverts commit 56bfc57998.
* Revert "Edits needs to continuously pass along the new directory. (#668)"
This reverts commit 8bd1822722.
* Revert "Refactor test code to make it simpler to validate code in the middle. (#662)"
This reverts commit ed65360157.