* Lift snapshot management out of the engine
This PR is a prerequisite for parallelism: it addresses a major problem
the engine would otherwise face when performing parallel resource
construction, namely parallel mutation of the global snapshot. This PR adds
a `SnapshotManager` type that is responsible for maintaining and
persisting the current resource snapshot. It serializes all reads and
writes to the global snapshot and writes the snapshot out to durable
storage upon every write.
As a side-effect of this, the core engine no longer needs to know about
snapshot management at all; all snapshot operations can be handled as
callbacks on deployment events. This will greatly simplify the
parallelization of the core engine.
Worth noting is that the core engine will still need to be able to read
the current snapshot, since it is interested in the dependency graphs
contained within. The full implications of that are out of scope of this
PR.
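Roughly, the shape this takes (the names, fields, and the mutex used for serialization below are illustrative only; the actual change synchronizes via a channel, as noted in the review feedback below):

```go
package backend

import "sync"

// Resource and Snapshot stand in here for the engine's deployment types.
type Resource struct {
	URN string
}

type Snapshot struct {
	Resources []Resource
}

// StackPersister is anything that can write a snapshot to durable storage:
// a checkpoint file for the local backend, or the service API for the cloud.
type StackPersister interface {
	Save(snap *Snapshot) error
}

// SnapshotManager serializes all access to the current snapshot and persists
// it after every mutation, so the engine itself never touches storage.
type SnapshotManager struct {
	persister StackPersister
	mu        sync.Mutex
	snap      *Snapshot
}

// ResourceCreated is an example of a deployment-event callback: it records
// the new resource under the lock and immediately persists the snapshot.
func (sm *SnapshotManager) ResourceCreated(res Resource) error {
	sm.mu.Lock()
	defer sm.mu.Unlock()
	sm.snap.Resources = append(sm.snap.Resources, res)
	return sm.persister.Save(sm.snap)
}
```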
Remove dead code: Steps no longer need a reference to the plan iterator that created them
Fix various issues that arise when bringing up pulumi-aws
Line length broke the build
Code review: remove dead field, fix yaml name error
Rebase against master, provide implementation of StackPersister for cloud backend
Code review feedback: comments on MutationStatus, style in snapshot.go
Code review feedback: move SnapshotManager to pkg/backend, change engine to use an interface SnapshotManager
Code review feedback: use a channel for synchronization
Add a comment and a new test
* Maintain two checkpoints, an immutable base and a mutable delta, and
periodically merge the two to produce snapshots
* Add a lot of tests - covers all of the non-error paths of BeginMutation and End
* Fix a test resource provider
* Add a few tests, fix a few issues
* Rebase against master, fixed merge
This change does three major things:
1. Removes the ability to be logged into multiple clouds at the same
time. Previously, we supported being logged into multiple clouds at
the same time and the CLI would fan out requests and join responses
when needed. In general, this was only useful for Pulumi employees
who wanted to run against multiple copies of the service (say, production
and staging), but overall it was very confusing: for example, in the old
world a stack with the same identity could appear twice (since it was
in two backends), which the CLI didn't handle very well.
2. Stops treating the "local" backend as a special thing, from the
point of view of the CLI. Previously we'd always connect to the local
backend and merge that data with whatever was in clouds we were
connected to. We had gestures like `--local` in `pulumi stack init`
that meant "use the local mode". Instead, to use the local mode now
you run `pulumi login --cloud-url local://` and then you are logged
into the local backend. Since you can only ever be logged into a single
backend, we can remove the `--local` and `--remote` flags from `pulumi
stack init`; it now just requires you to be logged in and creates a
stack in whatever backend you are logged into. When logging into the
local backend, you are not prompted for an access key.
3. Prompt for login in places where you have to log in, if you are not
already logged in.
Tests now target managed stacks instead of local stacks.
The existing logged-in user and target backend API are used unless PULUMI_ACCESS_TOKEN is defined, in which case tests are run under that access token and against the PULUMI_API backend.
For developer machines, we will now need to be logged in to Pulumi to run tests, and whichever API backend is currently logged in (the one listed as current in ~/.pulumi/credentials.json) will be used. If you need to override these, provide PULUMI_ACCESS_TOKEN and possibly PULUMI_API.
For Travis, we currently target the staging service using the Pulumi Bot user.
We have decided to run tests in the pulumi organization. This can be overridden for local testing (or in Travis in the future) by defining PULUMI_API_OWNER_ORGANIZATION and using an access token with access to that organization.
Part of pulumi/home#195.
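A sketch of that precedence for reference (struct and helper names are hypothetical; only the environment variable names come from this change):

```go
package integration

import "os"

// testTarget describes where the integration tests will run.
type testTarget struct {
	AccessToken string // explicit token, or "" to use the logged-in user
	APIURL      string // explicit backend, or "" to use the current backend
	Owner       string // organization that owns the test stacks
}

// targetFromEnv applies the precedence described above: PULUMI_ACCESS_TOKEN
// (optionally with PULUMI_API) overrides the logged-in user and current
// backend, and PULUMI_API_OWNER_ORGANIZATION overrides the default owner.
func targetFromEnv() testTarget {
	t := testTarget{Owner: "pulumi"}
	if tok := os.Getenv("PULUMI_ACCESS_TOKEN"); tok != "" {
		t.AccessToken = tok
		t.APIURL = os.Getenv("PULUMI_API")
	}
	if org := os.Getenv("PULUMI_API_OWNER_ORGANIZATION"); org != "" {
		t.Owner = org
	}
	return t
}
```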
Also, rename/cleanup a bunch of serialization code.
Also, generate better environment names in the serialized closure code. This code should be much easier to make sense of, as the names will better track to the original names in the user code.
Also, dedupe simple non-capturing functions. This helps ensure we don't spit out N copies of __awaiter (one per file it is declared in).
config.Key has become a pair of namespace and name. Because the whole
world has not changed yet, there continues to be a way to convert
between a tokens.ModuleMember and config.Key; however, the conversion
from tokens.ModuleMember can now fail (when the module member
is not of the form `<package>:config:<name>`).
Right now, config.Key is a type alias for tokens.ModuleMember. I did a
pass over the codebase such that we use config.Key everywhere it
looked like the value did not leak to some external process (e.g., a
resource provider or a langhost).
Doing this makes it a little clearer (hopefully) where code is
depending on a module member structure (e.g., `<package>:config:<name>`)
instead of just an opaque type.
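A sketch of the target shape and the new failure mode (config.Key is still a type alias for tokens.ModuleMember today, so the struct and function below are illustrative):

```go
package config

import (
	"fmt"
	"strings"
)

// Key is conceptually a (namespace, name) pair.
type Key struct {
	Namespace string
	Name      string
}

// ParseModuleMember converts a module member token such as "aws:config:region"
// into a Key. It fails when the token is not of the form
// <package>:config:<name>, which is the new failure mode described above.
func ParseModuleMember(member string) (Key, error) {
	parts := strings.Split(member, ":")
	if len(parts) != 3 || parts[1] != "config" {
		return Key{}, fmt.Errorf("%q is not of the form <package>:config:<name>", member)
	}
	return Key{Namespace: parts[0], Name: parts[2]}, nil
}
```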
Make many fixes to closure serialization
Primary things that I've done as part of this change:
Added support for cyclic objects.
Properly serialize objects that are shared across different functions. Previously you would get multiple copies; now they properly reference the same copy.
Remove the usages of 'hashes' for functions. Because we track the identity of objects, we no longer need them.
Serialize properties of functions (if they have any).
Handle objects/functions with a different __proto__ than normal, i.e. classes/constructors, but also anything the user may have done themselves to the object.
Handle generator functions.
Handle functions with 'computed' names.
Handle functions with 'symbol' names.
Handle serializing Promises as Promises.
Removed the dual Closure/AsyncClosure tree. One existed solely so we could have a tree without promises (for use in testing, maybe?). Because this all exists in a part of our codebase that is entirely async, it's fine to have promises in the tree and to await them when serializing the Closure to a string.
Handle serializing class-constructors and methods. Including properly handling 'super' calls.
Migrate configuration from the old model to the new model. The
strategy here is that when we first run `pulumi` we enumerate all of
the stacks from all of the backends we know about and for each stack
get the configuration values from the project and workspace and
promote them into the new file. As we do this, we remove stack
specific config from the workspace and Pulumi.yaml file.
If we are able to upgrade all the stacks we know about, we delete all
global configuration data in the workspace and in Pulumi.yaml as well.
We have a test that ensures upgrades continue to work.
This change updates our configuration model to make it simpler to
understand by removing some features and changing how things are
persisted in files.
Notable changes:
- We've removed the notion of "workspace" vs "project"
config. Now, configuration is always stored in a file next to
`Pulumi.yaml` named `Pulumi.<stack-name>.yaml` (the same file we'd
use for an other stack specific information we would need to persist
in the future).
- We've removed the notion of project-wide configuration. Every new
stack gets a completely empty set of configuration and there's no
way to share common values across stacks; instead, the common value
has to be set on each stack.
We retain some of the old code for the configuration system so we can
support upgrading a project in place. That will happen with the next
change.
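A tiny sketch of the new layout (the helper name is hypothetical; only the file naming comes from this change):

```go
package workspace

import "path/filepath"

// stackConfigPath returns the per-stack configuration file described above:
// Pulumi.<stack-name>.yaml, living next to the project's Pulumi.yaml.
func stackConfigPath(projectDir, stackName string) string {
	return filepath.Join(projectDir, "Pulumi."+stackName+".yaml")
}
```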
This change fixes some issues and allows us to close some
others (since they are no longer possible).
Fixes #866. Closes #872. Closes #731.
We are going to be changing the configuration model. To begin, let's
take most of the existing stuff and mark it as "deprecated" so we can
keep the existing behavior (to help transition newer code forward)
while making it clear what APIs should not be called in the
implementation of `pulumi` itself.
Despite our good progress moving towards having an apitype package,
where our exchange types live and can be shared among the engine and
our services, there were a few major types that were still duplicated.
Resource was the biggest example -- and indeed, the apitype variant
was missing the new Dependencies property -- but there were others,
like Manifest, PluginInfo, etc. These too had semi-random omissions.
This change merges all of these types into the apitype package. This
not only cleans up the redundancy and missing properties, but will
"force the issue" with respect to keeping them in sync and properly
versioning the information in a backwards compatible way.
The resource/stack package still exists as a simple marshaling layer
to and from the engine's core data types.
Finally, I've made the controversial change to share the actual
Deployment data structure at the apitype layer also. This will force
us to confront differences in that data structure similarly, and will
allow us to leverage the strong typing throughout to catch issues.
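A sketch of what living in apitype looks like (the exact field set below is illustrative; the point is that properties such as Dependencies are now defined in one place shared by the CLI, the engine, and the service):

```go
package apitype

// Resource sketches the kind of exchange type that now lives only in apitype.
type Resource struct {
	URN          string                 `json:"urn"`
	Type         string                 `json:"type"`
	Inputs       map[string]interface{} `json:"inputs,omitempty"`
	Outputs      map[string]interface{} `json:"outputs,omitempty"`
	Dependencies []string               `json:"dependencies,omitempty"`
}
```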
This change gets enough of the Python SDK up and running that the
empty Python program will work. Mostly just scaffolding, but the
basic structure is now in place. The primary remaining work is to
wire up resource creation to the gRPC interfaces.
In summary:
* The basic structure is as follows:
- Everything goes into sdk/python/.
- sdk/python/cmd/pulumi-langhost-python is a Go language host
that simply knows how to spawn Python processes to run our
entrypoint in response to requests by the engine.
- sdk/python/cmd/pulumi-langhost-python-exec is a little Python
shim that is invoked by the language host to run Python programs,
and is responsible for setting up the minimal goo before we can
do so (RPC connections and the like).
- sdk/python/lib/ contains a Python Pip package suitable for PyPI.
- In there, we have two packages: the root pulumi package that
contains all of the basic Pulumi programming model abstractions,
and pulumi.runtime, which contains the implementation of
resource registration, RPC interfacing with the engine, and so on.
* Add logic in our test framework to conditionalize on the language
type and react accordingly. This will allow us to skip Yarn for
Python projects and eventually run Pip if there's a requirements.txt.
* Created the basic project structure, including all of the usual
Make targets for installing into the proper places.
* Building also runs Pylint and we are clean.
There are a few other minor things in here:
* Add an "empty" test for both Node.js and Python. These pass.
* Fix an existing bug in plugin shutdown logic. At some point, we
started waiting for stderr/stdout to flush before shutting down
the plugin; but if certain failures happen "early" during the
plugin launch process, these channels will never get initialized
and so waiting for them deadlocks.
* Recently we seem to have added logic to delete test temp
directories if a failure happened during initialization of said
temp directories. This is unfortunate, because you often need to
look at the temp directory to see what failed. We already clean
them up elsewhere after the full test completes successfully, so
I don't think we need to be doing this, and I've removed it.
Still many loose ends (config, resources, etc), but it's a start!
1. Various idiomatic Go and TypeScript fixes
2. Add an integration test that end-to-end roundtrips dependency
information for a simple Pulumi program
3. Add an additional test assert that tests that dependency information
comes from the language host as expected
This addresses pulumi/pulumi#446: what we used to call "package" is
now called "project". This has gotten more confusing over time, now
that we're doing real package management.
Also fixes pulumi/pulumi#426, while in here.
In order to begin publishing our core SDK package to NPM, we will
need it to be underneath the @pulumi scope so that it may remain
private. Eventually, we can alias pulumi back to it.
This is part of pulumi/pulumi#915.
As it stands, we serialize more than is correct when registering
resources: in addition to serializing the RegisterResource RPC, we also
wait for input properties to resolve in the same context. Unfortunately,
this means that we can create cycles in the promise graph when a
resource A is constructed in an earlier turn than some resource B and
one of B's output properties is an input to resource A. These changes
fix this issue by allowing input properties to resolve *before*
serializing the RegisterResource RPC.
Some integration tests had taken a dependency on the ordering of resources in
either the output of the `pulumi` command or the checkpoint file. The
only test that took a dependency on command output was updated s.t. its
resources have exactly one legal topological sort (and therefore their
ordering is deterministic). The other tests were updated s.t. their
validation did not depend on resource ordering.
This PR adds a new `pulumi history` command, which prints the update history for a stack.
The local backend stores the update history in a JSON file on disk, next to the checkpoint file. The cloud backend simply provides the update metadata, and expects to receive all the data from a (NYI) `/history` REST endpoint.
`pkg/backend/updates.go` defines the data that is being persisted. The way the data is wired through the system is adding a new `backend.UpdateMetadata` parameter to a Stack/Backend's `Update` and `Destroy` methods.
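A sketch of how that wiring looks (field names and the simplified signatures below are illustrative; the real definitions live in `pkg/backend/updates.go`):

```go
package backend

// UpdateMetadata sketches the per-update data that gets recorded into history.
type UpdateMetadata struct {
	Message     string            `json:"message"`
	Environment map[string]string `json:"environment"`
}

// Update and Destroy now take the metadata alongside their usual arguments
// (signatures simplified here) so that both the local JSON history file and
// the service can record it.
type Stack interface {
	Update(m UpdateMetadata) error
	Destroy(m UpdateMetadata) error
}
```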
I use `tests/integration/stack_outputs/` as the simple app for the related tests, hence the addition to the `.gitignore` and fixing the name in the `Pulumi.yaml`.
Fixes #636.
These changes add the ability to export a stack's latest deployment or
import a new deployment to a stack via the Pulumi CLI. These
capabilities are exposed by two new verbs under `stack`:
- export, which writes the current stack's latest deployment to stdout
- import, which reads a new deployment from stdin and applies it to the
current stack.
In the local case, this simply involves reading/writing the stack's
latest checkpoint file. In the cloud case, this involves hitting two new
endpoints on the service to perform the export or import.
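A sketch of the local case (function names and the checkpoint handling below are simplified and illustrative):

```go
package cmd

import (
	"encoding/json"
	"io"
	"os"
)

// exportStack streams the stack's latest checkpoint file to the given writer
// (the real command extracts just the deployment from it).
func exportStack(checkpointPath string, out io.Writer) error {
	f, err := os.Open(checkpointPath)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(out, f)
	return err
}

// importStack is the inverse: read a deployment as JSON from the reader and
// write it out as the stack's latest checkpoint.
func importStack(checkpointPath string, in io.Reader) error {
	var deployment json.RawMessage
	if err := json.NewDecoder(in).Decode(&deployment); err != nil {
		return err
	}
	return os.WriteFile(checkpointPath, deployment, 0o600)
}
```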
This change implements resource protection, as per pulumi/pulumi#689.
The overall idea is that a resource can be marked as "protect: true",
which will prevent deletion of that resource for any reason whatsoever
(straight deletion, replacement, etc). This is expressed in the
program. To "unprotect" a resource, one must perform an update setting
"protect: false", and then afterwards, they can delete the resource.
For example:
let res = new MyResource("precious", { .. }, { protect: true });
Afterwards, the resource will display in the CLI with a lock icon, and
any attempts to remove it will fail in the usual ways (in planning or,
worst case, during an actual update).
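A sketch of what that guard amounts to on the engine side (types and names below are illustrative, not the engine's actual API):

```go
package deploy

import "fmt"

// resourceState stands in for the engine's resource state; only the fields
// needed for this sketch are shown.
type resourceState struct {
	URN     string
	Protect bool
}

// checkDelete sketches the guard this change adds: any operation that would
// delete a protected resource fails, whether at planning time or during the
// update itself.
func checkDelete(res *resourceState) error {
	if res.Protect {
		return fmt.Errorf(
			"resource %s is protected; update it with protect: false before deleting it", res.URN)
	}
	return nil
}
```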
This was done by adding a new ResourceOptions bag parameter to the
base Resource types. This is unfortunately a breaking change, but now
is the right time to take this one. We had been adding new settings
one by one -- like parent and dependsOn -- and this new approach will
set us up to add any number of additional settings down the road,
without needing to worry about breaking anything ever again.
This is related to protected stacks, as described in
pulumi/pulumi-service#399. Most likely this will serve as a foundational
building block that enables the coarser grained policy management.
This will allow us to remove a lot of current boilerplate in individual tests, and move it into the test harness.
Note that this will require updating users of the integration test framework. By moving to a property bag of inputs, we should avoid needing future breaking changes to this API though.
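A sketch of what the property bag looks like (the field names below illustrate the idea, not the framework's exact API):

```go
package integration

// ProgramTestOptions illustrates the property-bag shape: tests declare what
// they need and the harness handles the rest.
type ProgramTestOptions struct {
	Dir          string            // the Pulumi program under test
	Dependencies []string          // packages to link or install before running
	Config       map[string]string // stack configuration to set
}

// Adding a new knob later means adding a field here; existing callers that
// don't set it are unaffected, so the API doesn't break.
```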
As articulated in #714, the way config defaults to workspace-local
configuration is a bit error prone, especially now with the cloud
workflow being the default. This change implements several improvements:
* First, --save defaults to true, so that configuration changes will
persist into your project file. If you want the old local workspace
behavior, you can specify --save=false.
* Second, the order in which we applied configuration was a little
strange, because workspace settings overwrote project settings.
The order is changed now so that we take most specific over least
specific configuration. Per-stack is considered more specific
than global and project settings are considered more specific
than workspace.
* We now warn anytime workspace local configuration is used. This
is a developer scenario and can have subtle effects. It is simply
not safe to use in a team environment. In fact, I lost an arm
this morning due to workspace config... and that's why you always
issue warnings for unsafe things.
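A sketch of that precedence (a simplified, map-based illustration, not the actual config types):

```go
package config

// merged applies the precedence described above: per-stack values win over
// project-wide values, which win over workspace-local values.
func merged(workspace, project, stack map[string]string) map[string]string {
	out := map[string]string{}
	for _, layer := range []map[string]string{workspace, project, stack} {
		for k, v := range layer {
			out[k] = v // later (more specific) layers overwrite earlier ones
		}
	}
	return out
}
```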
* Take an options pointer so values can change as a test runs.
* Don't pass redundant information.
* Extract initialization routine.
* Fix caller.
* Check return value.
* Extract destruction logic.
* Move preview and update into their own function.
* Inline null check.
This change adds a `pulumi stack output` command. When passed no
arguments, it prints all stack output properties, in exactly the
same format as `pulumi stack` does (just without all the other stuff).
More importantly, if you pass a specific output property, a la
`pulumi stack output clusterARN`, just that property will be printed,
in a scriptable-friendly manner. This will help us automate wiring
multiple layers of stacks together during deployments.
This fixes pulumi/pulumi#659.
* Revert "Make sure we properly update dir so that pulumi-destroy works."
This reverts commit 56bfc57998.
* Revert "Edits needs to continuously pass along the new directory. (#668)"
This reverts commit 8bd1822722.
* Revert "Refactor test code to make it simpler to validate code in the middle. (#662)"
This reverts commit ed65360157.
This fixes two closure bugs.
First, we had special-cased `__awaiter` since days of yore, when we had
special-cased its capture. I also think we were confused at some point:
instead of fixing the fact that we captured `this` for non-arrow
functions, which `__awaiter` would trigger, we doubled down on this
incorrect hack. This means we missed a real, bona fide `this` capture.
Second, we had a global cache of captured variable objects. So, if a
free variable resolved to the same JavaScript object, it always resolved
to the first serialization of that object. This is clearly wrong if
the object had been mutated in the meantime. The cache is required to
reach a fixed point during mutually recursive captures, but we should
only be using it for the duration of a single closure serialization
call. That's precisely what this commit does.
Also add a fix for this case.
This fixes pulumi/pulumi#663.
The two-phase output properties change broke the ability to recover
from a failed replacement that yields pending deletes in the checkpoint.
The issue here is simply that we should remember pending registrations
only for logical operations that *also* have a "new" state (create or
update). This commit fixes this, and also adds a new step test with
fault injection to probe many interesting combinations of steps.
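A sketch of the rule (the names below are illustrative, not the engine's actual step types):

```go
package deploy

// StepOp stands in for the engine's step operation kind.
type StepOp string

const (
	OpCreate StepOp = "create"
	OpUpdate StepOp = "update"
	OpDelete StepOp = "delete"
)

// recordsPendingRegistration sketches the rule this fix enforces: only
// logical operations that also produce a "new" state (creates and updates)
// are remembered as pending registrations.
func recordsPendingRegistration(op StepOp) bool {
	return op == OpCreate || op == OpUpdate
}
```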