In Travis, we've seen cases where writes to our standard streams
result in an error like `/dev/stderr: resource temporarily
unavailable`, which causes the tests to panic.
Now, in a perfect world, writes to /dev/stderr would not fail in this
way, but we do not live in a perfect world. Other processes on the
machine may make stderr/stdout non-blocking. We are now seeing this
failure in Travis more often and it is masking real Pulumi failures
we want to debug.
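For illustration only, here is one way a writer could guard against a non-blocking stderr by retrying on EAGAIN; this is a sketch of the failure mode and a possible mitigation, not necessarily the fix taken in this change:

```go
package streamutil

import (
	"errors"
	"io"
	"syscall"
	"time"
)

// retryWriter retries writes that fail with EAGAIN, which is what a
// non-blocking stderr reports when it is temporarily unavailable.
type retryWriter struct{ w io.Writer }

func (rw retryWriter) Write(p []byte) (int, error) {
	written := 0
	for written < len(p) {
		n, err := rw.w.Write(p[written:])
		written += n
		if errors.Is(err, syscall.EAGAIN) {
			time.Sleep(10 * time.Millisecond) // back off, then retry the rest
			continue
		}
		if err != nil {
			return written, err
		}
	}
	return written, nil
}
```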
This change restructures the test framework code a bit to make it
easier to introduce additional languages. Our knowledge of Yarn and
Node.js project structure, for instance, was previously baked into
the test logic in a way that made it hard to treat Yarn as optional.
(In Python, of course, it will not be used.) To better support this,
I've moved some state onto a new programTester struct that we can use
to lazily find the binaries required during testing (such as Yarn,
Pip, and so on). I'm committing this separately so that I can
minimize merge conflicts with the Python work.
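A minimal sketch of that lazy-lookup idea (the field and method names here are illustrative, not the framework's actual API):

```go
package integration

import "os/exec"

// programTester caches the location of external binaries so they are
// resolved only when a test actually needs them. Field and method names
// are illustrative.
type programTester struct {
	yarnBin string // resolved on first use
	pipBin  string // resolved on first use
}

func (pt *programTester) getYarnBin() (string, error) {
	if pt.yarnBin == "" {
		bin, err := exec.LookPath("yarn")
		if err != nil {
			return "", err
		}
		pt.yarnBin = bin
	}
	return pt.yarnBin, nil
}
```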
Fix two references to the now-unnamed `dir` that should have been to
other variables.
Check a real condition before the deferred call to RemoveAll instead of
checking the error return.
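Roughly, the pattern is the one sketched below (illustrative, not the exact code):

```go
package testutil

import (
	"io/ioutil"
	"os"
)

// runInTempDir gates the deferred cleanup on a condition that actually
// matters (is there a directory to remove?) rather than on the error
// value of whatever ran last. Illustrative sketch only.
func runInTempDir(run func(dir string) error) error {
	dir, err := ioutil.TempDir("", "pulumi-test")
	if err != nil {
		return err
	}
	defer func() {
		if _, statErr := os.Stat(dir); statErr == nil {
			_ = os.RemoveAll(dir) // remove it whether or not run() succeeded
		}
	}()
	return run(dir)
}
```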
These changes refactor the engine's entrypoints--Deploy, Destroy, and
Preview--to be update-centric rather than stack-centric. Each of these
methods now takes a value of a new type, Update, that abstracts away the
vagaries of fetching and maintaining the update's state. This
refactoring also reinforces Pulumi.yaml as a CLI concept rather than an
engine concept; the CLI is now the only reader/writer of this format.
These changes will smooth the way for a few refactorings on the service
side that will aid in update isolation.
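An illustrative sketch of the update-centric shape (the names here are hypothetical, not the engine's actual API):

```go
package engine

// Update abstracts away how an update's state is fetched and maintained,
// so the engine no longer reads Pulumi.yaml itself. Hypothetical sketch.
type Update interface {
	GetTarget() (Target, error)  // the stack/program state to operate on
	Complete(success bool) error // record the final outcome
}

// Target is whatever state the caller (the CLI) resolved for this update.
type Target struct {
	StackName string
	Config    map[string]string
}

func Deploy(u Update) error {
	target, err := u.GetTarget()
	if err != nil {
		return err
	}
	// ... plan and apply changes against target ...
	_ = target
	return u.Complete(true)
}
```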
This PR updates the `pkg/testing/integration` package to support running integration tests against the Pulumi Service if desired. This is done by adding new options to `ProgramTestOptions` (generally, adding support for providing values to flags that were previously inaccessible).
I added an integration test to confirm that it all works if the PULUMI_API environment variable is set. These tests aren't run in Travis, only manually, since we cannot reliably run tests from `master` against the service because of the delay in rolling out updates to the Pulumi SDK, etc.
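A hypothetical wiring of such a test (the exact field names in `ProgramTestOptions` may differ from what's shown):

```go
package examples

import (
	"os"
	"testing"

	"github.com/pulumi/pulumi/pkg/testing/integration"
)

func TestMinimalAgainstService(t *testing.T) {
	if os.Getenv("PULUMI_API") == "" {
		t.Skip("PULUMI_API not set; skipping service-backed run")
	}
	integration.ProgramTest(t, &integration.ProgramTestOptions{
		Dir:      "minimal",
		CloudURL: os.Getenv("PULUMI_API"), // hypothetical option name
	})
}
```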
This fixes a few more edit directory issues, where we didn't
correctly propagate the edit directory changes required during
subsequent destroy/stack activities. It also fixes a few error
paths so that we preserve the right directory to be removed.
This change adds the ability to do very coarse-grained negative
tests in our integration test framework. Either a test itself,
or one of its edits, may be marked ExpectFailure == true, at which
point either the preview or update MUST fail (and, if one fails
without this being set, we still treat it as an error).
This will allow us to remove a lot of current boilerplate in individual tests, and move it into the test harness.
Note that this will require updating users of the integration test framework. By moving to a property bag of inputs, we should avoid needing future breaking changes to this API though.
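A sketch of what a negative test looks like with the new flag (field names other than ExpectFailure are illustrative):

```go
package examples

import (
	"testing"

	"github.com/pulumi/pulumi/pkg/testing/integration"
)

func TestExpectedFailure(t *testing.T) {
	integration.ProgramTest(t, &integration.ProgramTestOptions{
		Dir:           "step1",
		ExpectFailure: true, // the preview or update MUST fail
		EditDirs: []integration.EditDir{
			{Dir: "step2", ExpectFailure: true}, // edits may be marked individually
		},
	})
}
```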
* Take an options pointer so values can change as a test runs.
* Don't pass redundant information.
* Extract initialization routine.
* Fix caller.
* Check return value.
* Extract destruction logic.
* Move preview and update into their own function.
* Inline null check.
* Revert "Make sure we properly update dir so that pulumi-destroy works."
This reverts commit 56bfc57998.
* Revert "Edits needs to continuously pass along the new directory. (#668)"
This reverts commit 8bd1822722.
* Revert "Refactor test code to make it simpler to validate code in the middle. (#662)"
This reverts commit ed65360157.
The two-phase output properties change broke the ability to recover
from a failed replacement that yields pending deletes in the checkpoint.
The issue here is simply that we should remember pending registrations
only for logical operations that *also* have a "new" state (create or
update). This commit fixes this, and also adds a new step test with
fault injection to probe many interesting combinations of steps.
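The rule itself boils down to a predicate like this (illustrative types, not the engine's actual ones):

```go
package deploy

// StepOp stands in for the engine's notion of a logical operation.
type StepOp int

const (
	OpCreate StepOp = iota
	OpUpdate
	OpDelete
)

// recordPending reports whether a pending registration should be
// remembered: only operations that also produce a "new" state qualify.
func recordPending(op StepOp) bool {
	return op == OpCreate || op == OpUpdate
}
```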
This improves the overall cloud CLI experience and workflow.
Now whether a stack is local or cloud is inherent to the stack
itself. If you interact with a cloud stack, we transparently talk
to the cloud; if you interact with a local stack, we just do the
right thing, and perform all operations locally. Aside from sometimes
seeing a cloud emoji pop-up ☁️, the experience is quite similar.
For example, to initialize a new cloud stack, simply:
$ pulumi login
Logging into Pulumi Cloud: https://pulumi.com/
Enter Pulumi access token: <enter your token>
$ pulumi stack init my-cloud-stack
Note that you may log into a specific cloud if you'd like. For
now, this is just for our own testing purposes, but someday when we
support custom clouds (e.g., Enterprise), you can just say:
$ pulumi login --cloud-url https://corp.acme.my-ppc.net:9873
The cloud is now the default. If you instead prefer a "fire and
forget" style of stack, you can skip the login and pass `--local`:
$ pulumi stack init my-faf-stack --local
If you are logged in and run `pulumi`, we tell you as much:
$ pulumi
Usage:
pulumi [command]
// as before...
Currently logged into the Pulumi Cloud ☁️ https://pulumi.com/
And if you list your stacks, we tell you which one is local or not:
$ pulumi stack ls
NAME              LAST UPDATE       RESOURCE COUNT    CLOUD URL
my-cloud-stack    2017-12-01 ...    3                 https://pulumi.com/
my-faf-stack      n/a               0                 n/a
And `pulumi stack` by itself prints information like your cloud org,
PPC name, and so on, in addition to the usuals.
I shall write up more details and make sure to document these changes.
This change also fairly significantly refactors the layout of cloud
versus local logic, so that the cmd/ package is responsible for CLI
things, and the new pkg/backend/ package is responsible for the
backends. The following is the overall resulting package architecture:
* The backend.Backend interface can be implemented to substitute
a new backend. This has operations to get and list stacks,
perform updates, and so on.
* The backend.Stack struct is a wrapper around a stack that has
or is being manipulated by a Backend. It resembles our existing
Stack notions in the engine, but carries additional metadata
about its source. Notably, it offers functions that allow
operations like updating and deleting on the Backend from which
it came.
* There is very little else in the pkg/backend/ package.
* A new package, pkg/backend/local/, encapsulates all local state
management for "fire and forget" scenarios. It simply implements
the above logic and contains anything specific to the local
experience.
* A peer package, pkg/backend/cloud/, encapsulates all logic
required for the cloud experience. This includes its subpackage
apitype/ which contains JSON schema descriptions required for
REST calls against the cloud backend. It also contains handy
functions to list which clouds we have authenticated with.
* A subpackage here, pkg/backend/state/, is not a provider at all.
Instead, it contains all of the state management functions that
are currently shared between local and cloud backends. This
includes configuration logic -- including encryption -- as well
as logic pertaining to which stacks are known to the workspace.
This addresses pulumi/pulumi#629 and pulumi/pulumi#494.
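As a rough sketch of the shapes involved (the real interfaces in pkg/backend/ have more methods and different signatures):

```go
package backend

// Backend can be implemented to substitute a new backend (local or cloud).
type Backend interface {
	GetStack(name string) (Stack, error)
	ListStacks() ([]Stack, error)
	CreateStack(name string) (Stack, error)
}

// Stack wraps a stack that has been, or is being, manipulated by a
// Backend, and offers operations carried out by the Backend it came from.
type Stack interface {
	Name() string
	Backend() Backend
	Update() error  // perform an update via this stack's backend
	Destroy() error // likewise for destroy
	Remove() error  // remove the stack from its backend entirely
}
```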
This PR just wires the `Package.Main` field to the Pulumi Service (and in subsequent PRs, the `pulumi-service` and `pulumi-ppc` repos).
@joeduffy, should we just upload the entire `package.Package` type with the `UpdateProgramRequest` type? I'm not sure we want to treat that type as part of our public API surface area. But on the other hand, we'll need to mirror relevant fields in N places if we don't.
By default, debugging events are not displayed by `pulumi`; the `-d`
flag must be provided to enable their display. This is necessary e.g.
to view debugging output from Terraform when TF logging is enabled.
These changes add the option to display debug output from `pulumi` to
the integration test framework.
These changes also contain a small fix for the display of component
children.
A handful of UX improvements for config:
- `pulumi config ls` has been removed. Now, `pulumi config` with no
arguments prints the table of configuration values for a stack and
a new command `pulumi config get <key>` prints the value for a
single configuration key (useful for scripting).
- `pulumi config text` and `pulumi config secret` have been merged
into a single command `pulumi config set`. The flag `--secret` can
be used to encrypt the value we store (like `pulumi config secret`
used to do).
- To make it obvious that setting a value with `pulumi config set`
  stores it in plain text, we now echo a message back to the user
  saying we added the configuration value in plaintext.
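For example (the key names and values here are purely illustrative):
$ pulumi config set instanceType t2.micro
$ pulumi config set dbPassword hunter2 --secret
$ pulumi config get instanceType
$ pulumi config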
Fixes #552
This change fixes getProject to return the project name, as
originally intended. (One line was missing.)
It also adds an integration test for this.
Fixes pulumi/pulumi#580.
Because the Pulumi.yaml file demarcates the boundary used when
uploading a program to the Pulumi.com service at the moment, we
have trouble when a Pulumi program uses "up and over" references.
For instance, our customer wants to build a Dockerfile located
in some relative path, such as `../../elsewhere/`.
To support this, we will allow the Pulumi.yaml file to live
somewhere other than the main Pulumi entrypoint. For example,
it can live at the root of the repo, while the Pulumi program
lives in, say, `infra/`:
Pulumi.yaml:
name: as-before
main: infra/
This fixes pulumi/pulumi#575. Further work can be done here to
provide even more flexibility; see pulumi/pulumi#574.
Add the ability to upload data and timing for test runs to S3. Uploaded data is designed to be queried via a service like AWS Athena, and these queries can then be imported into BI tools (Excel, QuickSight, Power BI, etc.).
Initially hook this up to the `minimal` test as a baseline.
We are not ready to start running integration tests against
Pulumi.com, but `pulumi` uses `PULUMI_API` to decide what backend to
target. Remove it when running `pulumi` in the integration tests so we
always get the local behavior.
pulumi/pulumi#471 tracks the work to actually be able to choose what
backend you want when an integration test runs.
This changes the overall flow to exit early when a failure occurs.
We will try to clean up if we believe we have gotten far enough
to have actually provisioned any resources.
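A sketch of the flow (the step functions are hypothetical placeholders for the harness's phases):

```go
package harness

// runLifecycle exits early when a step fails, but attempts cleanup once we
// believe we got far enough to have provisioned resources.
func runLifecycle(initStack, previewAndUpdate, destroyStack func() error) error {
	if err := initStack(); err != nil {
		return err // nothing provisioned yet; no cleanup needed
	}

	provisioned := false
	defer func() {
		if provisioned {
			_ = destroyStack() // best-effort cleanup of anything we created
		}
	}()

	// Once the update begins, resources may exist even if it later fails.
	provisioned = true
	return previewAndUpdate()
}
```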