As I began to write code using the ability to perform resource
lookups, especially in the code-generators, I realized the way it
was surfaced as an argument to the Resource base constructor would
lead to overload explosion. Instead of doing that, let's pass it
in the ResourceOptions bag.
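For illustration, the options bag simply grows another optional setting
(the field name shown for the lookup handle is my own placeholder, not
taken from this change):
```
// A sketch of the shape this implies; `id` is a hypothetical name for the
// new lookup handle, riding alongside the existing options.
interface Resource { urn: Promise<string>; }

interface ResourceOptions {
    parent?: Resource;      // existing settings already live in this bag...
    dependsOn?: Resource[];
    protect?: boolean;
    id?: string;            // ...and the lookup handle joins them, avoiding yet
                            // another positional Resource constructor overload.
}
```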
Prior to this change, if you ended up with multiple Pulumi SDK
packages loaded side-by-side, we would fail in obscure ways. The
reason for this was that we initialize and store important state
in static variables. In the case that you load the same library
twice, however, you end up with separate copies of said statics,
which means we would be missing engine RPC addresses and so on.
This change adds the ability to recover from this situation by
mirroring the initialized state in process-wide environment
variables. By doing this, we can safely recover simply by reading
them back when we detect that they are missing. I think we can
eventually go even further here, and eliminate the entry point
launcher shim altogether by simply having the engine launch the
Node program with the right environment variables. This would
be a nice simplification to the system (fewer moving pieces).
There is still a risk that the separate copy is incompatible.
Presumably the reason for loading multiple copies is that the
NPM/Yarn version solver couldn't resolve to a shared version.
This may yield obscure failure modes should RPC interfaces change.
Figuring out what to do here is part of pulumi/pulumi#957.
This fixes pulumi/pulumi#777 and pulumi/pulumi#1017.
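A sketch of the recovery path (the environment variable name below is
illustrative, not necessarily the one the SDK uses):
```
// Mirror an initialized static into a process-wide environment variable so
// that a second, side-by-side copy of the SDK can recover it.
let monitorAddr: string | undefined;

export function setMonitorAddr(addr: string): void {
    monitorAddr = addr;
    process.env["PULUMI_NODEJS_MONITOR"] = addr; // hypothetical variable name
}

export function getMonitorAddr(): string | undefined {
    if (monitorAddr === undefined) {
        // Our copy of the statics was never initialized; read the mirrored value back.
        monitorAddr = process.env["PULUMI_NODEJS_MONITOR"];
    }
    return monitorAddr;
}
```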
This change skips unknown IDs during read operations. This can happen
when a read is performed using the output property of another resource
during planning. This is intentionally supported via ID being an
Input<ID>; all we need to do for this to work correctly is skip the
actual provider RPC, and the runtime will propagate unknown outputs as
usual.
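Roughly, the read path just needs an early-out (all names below are
illustrative, not the actual runtime's):
```
declare function invokeProviderRead(id: string): Promise<void>; // hypothetical RPC helper

async function readResource(idInput: Promise<string | undefined>): Promise<void> {
    const id = await idInput;
    if (id === undefined) {
        // Unknown during planning: skip the provider RPC entirely; downstream
        // outputs simply remain unknown, as usual.
        return;
    }
    await invokeProviderRead(id);
}
```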
This change wires up the new Read RPC method in such a manner that
Pulumi programs can invoke it. This is technically not required for
refreshing state programmatically (as in pulumi/pulumi#1081); however,
it's a feature we had eons ago and have wanted back ever since (see
pulumi/pulumi#83), and it will allow us to write code like
let vm = aws.ec2.Instance.get("my-vm", "i-07043cd97bd2c9cfc");
// use any property from here on out ...
The way this works is simply by bridging the Pulumi program via its
existing RPC connection to the engine, much like Invoke and
RegisterResource RPC requests already do, and then invoking the proper
resource provider in order to read the state. Note that some resources
cannot be uniquely identified by their ID alone, and so an extra
resource state bag may be provided with just those properties required.
This came almost for free (okay, not exactly) and will come in handy as
we start gaining experience with reading live state from resources.
1. Various idiomatic Go and TypeScript fixes
2. Add an integration test that end-to-end roundtrips dependency
information for a simple Pulumi program
3. Add an additional test assertion verifying that dependency information
comes from the language host as expected
This commit does two things:
1. All dependencies of a resource, both implicit and explicit, are
communicated directly to the engine when registering a resource. The
engine keeps track of these dependencies and ultimately serializes
them out to the checkpoint file upon successful deployment.
2. Once a successful deployment is done, the new `pulumi stack
graph` command reads the checkpoint file and outputs the dependency
information within it in DOT format.
Keeping track of dependency information within the checkpoint file is
desirable for a number of reasons, most notably delete-before-create,
where we want to delete resources before we have created their
replacement when performing an update.
As it stands, we serialize more than is correct when registering
resources: in addition to serializing the RegisterResource RPC, we also
wait for input properties to resolve in the same context. Unfortunately,
this means that we can create cycles in the promise graph when a
resource A is constructed in an earlier turn than some resource B and
one of B's output properties is an input to resource A. These changes
fix this issue by allowing input properties to resolve *before*
serializing the RegisterResource RPC.
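Roughly the shape of program that could trigger the cycle (MyResource and
its property names are stand-ins):
```
declare class MyResource {
    constructor(name: string, props: { value?: Promise<string> });
    readonly out: Promise<string>;
}

let feedA!: (value: string) => void;
const aInput = new Promise<string>((resolve) => { feedA = resolve; });

// Resource A is registered in an early turn, with an input that has not resolved yet.
const a = new MyResource("a", { value: aInput });

setImmediate(() => {
    // Resource B is constructed a turn later, and one of its outputs feeds A's
    // input. If A's registration waited for its inputs before serializing the
    // RegisterResource RPC, it would block on B, whose own registration is
    // queued behind A's -- a cycle in the promise graph.
    const b = new MyResource("b", {});
    b.out.then(feedA);
});
```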
Some integration tests had taken a dependency on the ordering of resources in
either the output of the `pulumi` command or the checkpoint file. The
only test that took a dependency on command output was updated s.t. its
resources have exactly one legal topological sort (and therefore their
ordering is deterministic). The other tests were updated s.t. their
validation did not depend on resource ordering.
This change implements resource protection, as per pulumi/pulumi#689.
The overall idea is that a resource can be marked as "protect: true",
which will prevent deletion of that resource for any reason whatsoever
(straight deletion, replacement, etc). This is expressed in the
program. To "unprotect" a resource, one must first perform an update that
sets "protect: false"; afterwards, the resource can be deleted.
For example:
let res = new MyResource("precious", { .. }, { protect: true });
Afterwards, the resource will display in the CLI with a lock icon, and
any attempts to remove it will fail in the usual ways (in planning or,
worst case, during an actual update).
This was done by adding a new ResourceOptions bag parameter to the
base Resource types. This is unfortunately a breaking change, but now
is the right time to take this one. We had been adding new settings
one by one -- like parent and dependsOn -- and this new approach will
set us up to add any number of additional settings down the road,
without needing to worry about breaking anything ever again.
This is related to protected stacks, as described in
pulumi/pulumi-service#399. Most likely this will serve as a foundational
building block that enables the coarser grained policy management.
At the moment, we swallow and log errors for rejected promises during
resolution of resource input properties. This is clearly wrong; we
should instead let the rejections propagate so that the unhandled
rejection logic triggers and leads to program failure as expected.
This change simplifies the necessary RPC changes for components.
Instead of a Begin/End pair, which complicates the whole system
because it introduces the possibility of a missing End call, we will
simply let RPCs come in that append outputs to existing states.
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete (sketched after this list). The first two are
invoked immediately before and after any step operation, and the latter
is invoked whenever an EndRegisterResource comes in. The reason for the
asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
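The observer interface, roughly (the real definition lives in the Go engine;
this TypeScript rendering is just for illustration):
```
interface ResourceStep { urn: string; op: string; }
interface ResourceState { urn: string; outputs: Record<string, unknown>; }

interface StepEventHandler {
    onResourceStepPre(step: ResourceStep): void;    // immediately before any step operation
    onResourceStepPost(step: ResourceStep): void;   // immediately after any step operation
    onResourceComplete(state: ResourceState): void; // whenever an EndRegisterResource arrives
}
```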
This change adds back component output properties. Doing so
requires splitting the RPC interface for creating resources in
half, with an initial RegisterResource which contains all of the
input properties, and a final CompleteResource which optionally
contains any output properties synthesized by the component.
This change switches from child lists to parent pointers, in the
way resource ancestries are represented. This cleans up a fair bit
of the old parenting logic, including all notion of ambient parent
scopes (and will notably address pulumi/pulumi#435).
This lets us show a more parent/child display in the output when
doing planning and updating. For instance, here is an update of
a lambda's text, which is logically part of a cloud timer:
* cloud:timer:Timer: (same)
[urn=urn:pulumi:malta::lm-cloud:☁️timer:Timer::lm-cts-malta-job-CleanSnapshots]
* cloud:function:Function: (same)
[urn=urn:pulumi:malta::lm-cloud:☁️function:Function::lm-cts-malta-job-CleanSnapshots]
* aws:serverless:Function: (same)
[urn=urn:pulumi:malta::lm-cloud::aws:serverless:Function::lm-cts-malta-job-CleanSnapshots]
~ aws:lambda/function:Function: (modify)
[id=lm-cts-malta-job-CleanSnapshots-fee4f3bf41280741]
[urn=urn:pulumi:malta::lm-cloud::aws:lambda/function:Function::lm-cts-malta-job-CleanSnapshots]
- code : archive(assets:2092f44) {
// etc etc etc
Note that we still get walls of text, but this will be actually
quite nice when combined with pulumi/pulumi#454.
I've also suppressed printing properties that didn't change during
updates when --detailed was not passed, and also suppressed empty
strings and zero-length arrays (since TF uses these as defaults in
many places and it just makes creation and deletion quite verbose).
Note that this is a far cry from everything we can possibly do
here as part of pulumi/pulumi#340 (and even pulumi/pulumi#417).
But it's a good start towards taming some of our output spew.
The `nodejs` language support is implemented as two programs: one that
manages the initial connection to the engine and provides the language
service itself, and another that the language service invokes in order
to run a `nodejs` Pulumi program. The latter is responsible for running
the user's program and communicating its resource requests to the
engine. Currently, `run` effectively assumes that the user's program
will run synchronously from start to finish, and will disconnect from
the engine once the user's program has completed. This assumption breaks
if the user's program requires multiple turns of the event loop to
finish its root resource requests. For example, the following program
would fail to create its second resource because the engine will be
disconnected once it reaches its `await`:
```
(async () => {
    let a = new Resource();
    await somePromise();
    // Before this change, `run` disconnected from the engine after the first
    // turn, so this second resource was never created.
    let b = new Resource();
})();
```
These changes fix this issue by disconnecting from the engine during
process shutdown rather than after the user's program has finished its
first turn through the event loop.
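The shape of the fix is roughly this (the disconnect helper is hypothetical):
```
declare function disconnectFromEngine(): void; // hypothetical: closes the gRPC clients

process.on("beforeExit", () => {
    // By the time the event loop drains, the user's program -- including any
    // later turns such as the `await` above -- has finished its resource requests.
    disconnectFromEngine();
});
```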
This changes a few things about "components":
* Rename what was previously ExternalResource to CustomResource,
and all of the related fields and parameters that this implies.
This just seems like a much nicer and expected name for what
these represent. I realize I am stealing a name we had thought
about using elsewhere, but this seems like an appropriate use.
* Introduce ComponentResource, to make initializing resources
that merely aggregate other resources easier to do correctly.
* Add a withParent and parentScope concept to Resource, to make
allocating children less error-prone. Now there's no need to
explicitly adopt children as they are allocated; instead, any
children allocated as part of the withParent callback will
auto-parent to the resource provided. This is used by
ComponentResource's initialization function to make initialization
easier, including the distinction between inputs and outputs (see the
sketch below).
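Roughly, usage looks like this (the signatures are illustrative; the real
API shape may differ):
```
declare class Resource {
    static withParent(parent: Resource, init: () => void): void;
}
declare class ChildResource extends Resource {
    constructor(name: string);
}

const parent: Resource = new ChildResource("parent");
Resource.withParent(parent, () => {
    // No explicit adoption needed: anything allocated inside this callback
    // auto-parents to `parent`.
    const child = new ChildResource("child");
});
```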
This change implements core support for "components" in the Pulumi
Fabric. This work is described further in pulumi/pulumi#340, where
we are still discussing some of the finer points.
In a nutshell, resources no longer imply external providers. It's
entirely possible to have a resource that logically represents
something but without having a physical manifestation that needs to
be tracked and managed by our typical CRUD operations.
For example, the aws/serverless/Function helper is one such type.
It aggregates Lambda-related resources and exposes a nice interface.
All of the Pulumi Cloud Framework resources are also examples.
To indicate that a resource does participate in the usual CRUD resource
provider lifecycle, it simply derives from ExternalResource instead of Resource.
All resources now have the ability to adopt children. This is purely
a metadata/tagging thing, and will help us roll up displays, provide
attribution to the developer, and even hide aspects of the resource
graph as appropriate (e.g., when they are implementation details).
Our use of this capability is ultra limited right now; in fact, the
only place we display children is in the CLI output. For instance:
+ aws:serverless:Function: (create)
[urn=urn:pulumi:demo::serverless::aws:serverless:Function::mylambda]
=> urn:pulumi:demo::serverless::aws:iam/role:Role::mylambda-iamrole
=> urn:pulumi:demo::serverless::aws:iam/rolePolicyAttachment:RolePolicyAttachment::mylambda-iampolicy-0
=> urn:pulumi:demo::serverless::aws:lambda/function:Function::mylambda
The bit indicating whether a resource is external or not is tracked
in the resulting checkpoint file, along with any of its children.
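Schematically (constructor shapes below are illustrative, not the real
signatures):
```
declare class Resource { constructor(name: string); }
declare class ExternalResource extends Resource {
    constructor(type: string, name: string);
}

// A logical aggregate: no physical manifestation of its own, so no CRUD.
class ServerlessFunction extends Resource {
    constructor(name: string) {
        super(name);
        // The pieces it aggregates are ExternalResources, tracked and managed
        // by the usual providers, and adoptable as this resource's children.
        new ExternalResource("aws:iam/role:Role", `${name}-iamrole`);
        new ExternalResource("aws:lambda/function:Function", name);
    }
}
```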
This resource provider accepts a single configuration parameter,
`testing:provider:module`, that is the path to a JavaScript module that
implements CRUD operations for a set of resource types. This allows e.g.
a test case to provide its own implementation of these operations that
may succeed or fail in interesting ways.
Fixes #338.
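For example, such a module might look roughly like this (the exact export
shape the provider expects is an assumption on my part):
```
// Per-type CRUD hooks that a test controls, succeeding or failing at will.
export const resources = {
    "testing:index:Resource": {
        async create(inputs: Record<string, unknown>) {
            return { id: "test-0", properties: inputs };
        },
        async update(id: string, olds: Record<string, unknown>, news: Record<string, unknown>) {
            return { properties: news };
        },
        async delete(id: string) {
            throw new Error("injected delete failure"); // exercise engine error paths
        },
    },
};
```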
This exposes the existing runtime logging functionality in a way meant
for 3rd-parties to consume. This can be useful if we want to introduce
debug logging, warnings, or other things that fit nicely with the
Pulumi CLI and overall developer workflow.
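For example, assuming the functionality is surfaced as a `log` module on the
package (named just `pulumi` at this point in the repo's history), 3rd-party
code could do:
```
import * as pulumi from "pulumi"; // import path is an assumption

// Messages flow through the engine so they land in the CLI output alongside
// the rest of the deployment's diagnostics.
pulumi.log.debug("resolving input properties...");
pulumi.log.warn("no region configured; falling back to a default");
```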
This change adds the capability for a resource provider to indicate
that, when an action is carried out in response to a diff, a certain set
of properties will be "stable"; that is to say, they are guaranteed
not to change. As a result, properties may be resolved to their final
values during previewing, avoiding erroneous cascading impacts.
This avoids the ever-annoying situation I keep running into when demoing:
when adding or removing an ingress rule on a security group, we ripple
the impact through the instance and claim it must be replaced, because
that instance depends on the security group via its name. The name is a
great example of a stable property, in that it will never change, so this
claim is wrong and needlessly adds uncertainty to the demos, particularly
since the actual update doesn't need to perform any replacements.
This resolves pulumi/pulumi#330.
This wires up the Node.js SDK to the newly added Invoke function
on the resource monitor and provider gRPC interfaces, letting us
expose functions that are implemented by the providers to user code.
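For example (the specific function and result shape here are illustrative,
not tied to this change):
```
declare const aws: {
    getAvailabilityZones(): Promise<{ names: string[] }>;
};

async function pickZone(): Promise<string> {
    // Under the covers this is an Invoke RPC to the AWS provider.
    const zones = await aws.getAvailabilityZones();
    return zones.names[0];
}
```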
This includes a few changes:
* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.
* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.
* The CLI is renamed from lumi to pulumi.
This adds back Computed<T> as a short-hand for Promise<T | undefined>.
Subtly, all resource properties need to permit undefined flowing through
during planning. Rather than forcing the long-hand version, which is easy
to forget, we'll keep the convention of preferring Computed<T>. It's
just a typedef and the runtime type is just a Promise.
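Spelled out, the short-hand is nothing more than:
```
// Just a typedef; the runtime value is an ordinary Promise.
export type Computed<T> = Promise<T | undefined>;
```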
As part of pulumi/pulumi-fabric#331, we've been exploring just using
undefined to indicate that a property value is absent during planning.
We also considered blocking the message loop to simplify the overall
programming model, so that all asynchrony is hidden.
It turns out there be dragons 🐲 anytime you try to block the
message loop. So, we aren't quite sure about that bit.
But the part we are convinced of is that this Computed/Property
model is far too complex. Furthermore, it's very close to promises, and
yet frustratingly so far away. Indeed, the original thinking in
pulumi/pulumi-fabric#271 was simply to use promises, but we wanted to
encourage dataflow styles, rather than control flow. But we muddied up
our thinking by worrying about awaiting a promise that would never resolve.
It turns out we can achieve a middle ground: resolve planning promises to
undefined, so that they don't lead to hangs, but still use promises so
that asynchrony is explicit in the system. This also avoids blocking the
message loop. Who knows, this may actually be a fine final destination.
This change flips the polarity on parallelism: rather than having a
--serialize flag, we will have a --parallel=P flag, and by default
we will shut off parallelism. We aren't benefiting from it at the
moment (until we implement pulumi/pulumi-fabric#106), and there are
more hidden dependencies in places like AWS Lambdas and Permissions
than I had realized. We may revisit the default, but this allows
us to bite off the messiness of dependsOn only when we benefit from
it. And in any case, the --parallel=P capability will be useful.
This change adds an optional dependsOn parameter to Resource constructors,
to "force" a fake dependency between resources. We have an extremely strong
desire to resort to this only in unusual cases -- and instead rely
on the natural dependency DAG based on properties -- but experience in other
resource provisioning frameworks tells us that we're likely to need this in
the general case. Indeed, we've already encountered the need in AWS's
API Gateway resources... and I suspect we'll run into more especially as we
tackle non-serverless resources like EC2 Instances, where "ambient"
dependencies are far more commonplace.
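Usage looks roughly like this (whether dependsOn rides as a trailing
constructor parameter or in an options bag has shifted over time, so treat
the exact shape as illustrative):
```
declare class MyResource {
    constructor(name: string, props: Record<string, unknown>,
                opts?: { dependsOn?: MyResource[] });
}

const api = new MyResource("api", { /* ... */ });
// Force the deployment to wait for the API, a dependency the property DAG
// cannot see on its own.
const deployment = new MyResource("api-deployment", { /* ... */ },
                                  { dependsOn: [ api ] });
```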
This also makes parallelism the default mode of operation, and we have a
new --serialize flag that can be used to suppress this default behavior.
Full disclosure: I expect this to become more Make-like, i.e. -j 8, where
you can specify the precise width of parallelism, when we tackle
pulumi/pulumi-fabric#106. I also think there's a good chance we will flip
the default to serial execution, so that developers who don't benefit
from the parallelism don't need to worry about dependsOn in awkward ways.
This tends to be the way most tools (like Make) operate.
This fixes pulumi/pulumi-fabric#335.
* Initialize the diagnostics logger with opts.Debug when doing
a Deploy, like we do for Plan.
* Don't spew leaked promises if there were Log.errors.
* Serialize logging RPC calls so that they can't appear out of order.
* Print stack traces in more places and, in particular, remember
the original context for any errors that may occur asynchronously,
like resource registration and calls to mapValue.
* Include origin stack traces generally in more error messages.
* Add some more mapValue test cases.
* Only undefined-propagate mapValue values during dry-runs.
This change serializes all resource operations. Please see
pulumi/pulumi#335 for more details. In a nutshell, there are
resources that have implicit hidden dependencies and now that
the runtime is fully asynchronous, we are tripping over problems
left and right (even worse, they are non-deterministic). All
of the problems have been in the AWS API Gateway resources;
until we come up with a holistic solution here, serializing all
calls should make things more stable in the interim.
The change to tear down RPC connections after the program exits --
to fix problems on Linux presumably due to the way libuv is implemented --
unfortunately introduces nondeterminism and overzealous termination that
can happen at inopportune times. Instead, we need to wait for the current
RPC queue to drain. To fix this, we'll maintain a list of currently active
RPC calls and, only once they have completed, will we close the clients.
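A minimal sketch of that approach (the helper names are hypothetical):
```
declare function closeRpcClients(): void; // hypothetical: tears down the gRPC clients

const activeRpcs: Promise<unknown>[] = [];

export function rpcKeepAlive<T>(call: Promise<T>): Promise<T> {
    activeRpcs.push(call);
    return call;
}

export async function disconnect(): Promise<void> {
    // Wait for the current RPC queue to drain (ignoring individual failures)
    // before closing the clients.
    await Promise.all(activeRpcs.map((p) => p.catch(() => undefined)));
    closeRpcClients();
}
```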
We have an issue in the runtime right now where we serialize closures
asynchronously, meaning we make it possible to form cycles in the
resource graph (something that ought to be impossible in our model,
where resources are "immutable" after creation and cannot form cycles).
Let me tell you a tale of debugging this ...
Well, no, let's not do that. But thankfully I've left behind some
little utilities that might make debugging such a thing easier down
the road. Namely:
* By default, most of our core runtime promises leverage a leak handler
that will log an error message should the process exit with certain
critical unresolved promises. This error message will include some
handy context (like whether it was an input promise) as well as a
stack trace for its point of creation (see the sketch after this list).
* Optionally, with a flag in runtime/debuggable.ts, you may wire up
a hang detector, for situations where we want to detect this sooner
than process exit, using the regular message loop.
This uses a defined timeout, prints the same diagnostics as the
leak detector when a hang is detected, and is disabled by default.
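A sketch of the leak-handler idea (the exact signature in
runtime/debuggable.ts may well differ):
```
// Remember where a promise was created so that, if the process exits while it
// is still unresolved, we can report that context and stack trace.
export function debuggablePromise<T>(p: Promise<T>, ctx: string): Promise<T> {
    const createdAt = new Error(`Promise leak detected: ${ctx}`); // captures a stack trace
    let settled = false;
    p.then(() => { settled = true; }, () => { settled = true; });
    process.on("exit", () => {
        if (!settled) {
            console.error(createdAt.stack);
        }
    });
    return p;
}
```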
This fixes a few problems with dependent resolutions and hardens
even more promises-related error paths, so we swallow precisely zero
errors (or at least we hope so). This also digs through multi-level
chains of promises and computed properties as needed for nested mapValues.