These changes add a new resource to the Pulumi SDK,
`pulumi.StackReference`, that represents a reference to another stack.
This resource has an output property, `outputs`, that contains the
complete set of outputs for the referenced stack. The Pulumi account
performing the deployment that creates a `StackReference` must have
access to the referenced stack or the call will fail.
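For example, a minimal usage sketch (the fully qualified stack name and the output key below are illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";

// Reference another, already-deployed stack by its fully qualified name.
const other = new pulumi.StackReference("myOrg/myProject/production");

// `outputs` carries the referenced stack's complete output map.
export const otherVpcId = other.outputs.apply(outs => outs["vpcId"]);
```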
This resource is implemented by a builtin provider managed by the engine.
This provider will be used for any custom resources and invoke calls inside
the `pulumi:pulumi` module. Currently this provider supports only the
`pulumi:pulumi:StackReference` resource.
Fixes #109.
We run the same suite of checks that we did with gometalinter. This
ended up catching a few new issues, some of which were addressed and
some of which were baselined.
* Make v8 primitives async, as there is no way to avoid async in Node 11.
* Simplify API.
* Move processing of well-known globals into the v8 layer.
We'll need this so that we can map from RemoteObjectIds back to these well known values.
* Remove unnecessary check.
* Cleanup comments and extract helper.
* Introduce helper bridge method for the simple case of making an entry for a string.
* Make functions async. They'll need to be async once we move to the Inspector API.
* Move property access behind helpers so they can move to the Inspector API in the future.
* Only call function when we know we have a Function. Remove redundant null check.
* Properly serialize certain special JavaScript number values that JSON serialization cannot handle.
* Only marshal across the 'source' and 'flags' for a RegExp when serializing.
* Add a simple test to validate a regex without flags.
* Extract functionality into helper method.
* Add test with complex output scenarios.
* Output serialization needs to avoid recursively trying to serialize a serialized value.
* Introduce indirection for introspecting properties of an object.
* Use our own introspection API for examining an Array.
* Hide direct property access through API indirection.
* Produce values like the v8 Inspector does.
* Compute the module map asynchronously. We'll need that when mapping mirrors instead.
* Cleanup a little code in closure creation.
* Get serialization working on Node11 (except function locations).
* Run tests in the same order on Node <11 and >=11.
* Make tests run on multiple versions of node.
* Rename file to make PR simpler to review.
* Cleanup.
* Be more careful with global state.
* Remove commented line.
* Only allow getting a session when on Node 11 or above.
* Promisify methods.
Rather than placing these combinators directly on the Output class,
which feels odd because they are special-purpose to iterables and deal
not only with Outputs but also with Inputs, we will place them in a
separate, dedicated iterable module for these utility helpers.
This change adds some new constructors for output properties:
1) We alias `Output.create` to `output`, more like Promise's various
construction methods. This reads better and is more discoverable.
2) A new `Output.createMap` function will accept an array of inputs,
along with a selector function for key/value pairs, and produces
an output map with said keys and values inside of it.
3) A new `Output.createGroupByMap` function will similarly accept an
array of inputs and a key/value selector, however it creates an
output map with said keys, but where values are arrays of values,
and all duplicate keys will lead to appending to said arrays.
Tests to come in a subsequent checkin.
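As a rough sketch of the intended behavior of `Output.createMap` and `Output.createGroupByMap` (standalone functions below; the names, signatures, and placement are assumptions, not the final API):

```typescript
import * as pulumi from "@pulumi/pulumi";

// Build an output map from an array of inputs and a key/value selector.
function createMap<T, V>(
    items: pulumi.Input<T>[],
    selector: (item: pulumi.Unwrap<T>) => [string, V],
): pulumi.Output<Record<string, V>> {
    return pulumi.all(items).apply(resolved => {
        const result: Record<string, V> = {};
        for (const item of resolved) {
            const [key, value] = selector(item);
            result[key] = value;
        }
        return result;
    });
}

// Like createMap, but duplicate keys append to an array of values.
function createGroupByMap<T, V>(
    items: pulumi.Input<T>[],
    selector: (item: pulumi.Unwrap<T>) => [string, V],
): pulumi.Output<Record<string, V[]>> {
    return pulumi.all(items).apply(resolved => {
        const result: Record<string, V[]> = {};
        for (const item of resolved) {
            const [key, value] = selector(item);
            (result[key] = result[key] || []).push(value);
        }
        return result;
    });
}
```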
* Fail closure serialization in Node 11
Node 11 changed many of the intrinsics that we depend upon for closure
serialization, so until we fix the underlying issues this commit lazily
fails if a closure is serialized when running on Node 11.
* CR feedback
Some providers (namely Kubernetes) require unbounded parallelism in
order to function correctly. This commit enables the engine to operate
in a mode with unbounded parallelism and switches to that mode by
default.
Suppose you have `pulumi.output(o).apply(foo)`, with `o` being type `O`
and `foo` taking type `O` as an argument. If `O` is a type with methods,
this will fail to type check.
The reason is that `UnwrappedObject<T>` (as well as the other
`Unwrapped*` types) will recursively wrap the types of field values
whose type was `Function`. Since `UnwrappedObject<Function>` is not the
same as `Function`, we fail to type check. Note that this does not
result in an actual "boxed" object -- this is purely at the type level.
This commit resolves this by considering `Function` a primitive type,
which will cause us to not wrap the types of field values, instead
leaving them as `Function`.
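A minimal type-level sketch of the fix, assuming a simplified `Unwrapped` definition (this is not the SDK's actual code):

```typescript
// Simplified stand-in for the SDK's Unwrapped* types.
type Primitive = string | number | boolean | undefined | null | Function;

type Unwrapped<T> =
    T extends Primitive ? T :               // primitives (now including Function) pass through
    { [K in keyof T]: Unwrapped<T[K]> };    // objects unwrap field-by-field

// Because `Function` is treated as a primitive, method-typed fields survive
// unchanged, so the unwrapped value stays assignable where `O` is expected.
interface O { name: string; toJSON: Function; }
declare const unwrapped: Unwrapped<O>;
const o: O = unwrapped;  // type checks: Unwrapped<Function> is just Function
```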
Due to some recent improvements in engine error handling, it is now
sufficient for a language host to exit cleanly with a zero exit code when
the resource monitor is shutting down, instead of looping forever.
If you forget to implement check on a dynamic provider, all your
inputs mysteriously disappear. It's doubly odd because many providers
don't need to perform any checking or transformation of their inputs.
This change simply propagates the new inputs as-is by default when
a user-supplied check method isn't provided. This would have saved
me 20 minutes just now ... :-)
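For instance, a sketch of a dynamic provider that relies on this default (the resource shape and id below are illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as dynamic from "@pulumi/pulumi/dynamic";

// No `check` is implemented here; with this change the new inputs are
// propagated as-is, so `create` still receives them.
const provider: dynamic.ResourceProvider = {
    async create(inputs: any) {
        return { id: "my-resource-id", outs: inputs };
    },
};

class MyResource extends dynamic.Resource {
    constructor(name: string,
                props: { value: pulumi.Input<string> },
                opts?: pulumi.CustomResourceOptions) {
        super(provider, name, props, opts);
    }
}
```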
This introduces a Dockerfile for the Pulumi CLI. This makes it
easier to develop and test the engine in a self-contained environment,
in addition to being suitable for running the actual CLI itself.
For instance,
$ docker run -e "PULUMI_ACCESS_TOKEN=x" -v "$(pwd)":/app pulumi/pulumi up
will run the Pulumi program mounted under the /app volume. This will
be used in some upcoming CI/CD scenarios.
This uses multi-stage builds, and Debian Stretch as the base, for
relatively fast and lean build times and resulting images. We are
intentional about restoring dep packages independently of the actual
source code so that we don't end up needlessly re-depping, which can
consume quite a bit of time. After fixing
https://github.com/pulumi/pulumi/issues/1986, we should explore an
Alpine base image option.
I made the decision to keep this image scoped to just the Go builds.
Therefore, none of the actual SDK packages themselves are built, just
the engine, CLI, and language plugins for Node.js, Python, and Go.
It's possible to create a mega-container that has all of these full
environments so that we can rebuild them too, but for now I figured
it was better to rely on package management for them.
Another alternative would have been to install released binaries,
rather than building them. To keep the useful flow for development,
however, I decided to go the build route for now. If we build at the
same hashes, the resulting binaries "should" be ~identical anyhow.
I've created a pulumi/pulumi Docker Hub repo that we can publish this
into. For now, there is no CI publishing of the image.
This fixes pulumi/pulumi#1991.
* Revert RunError behavior. Introduce new ResourceError for errors associated with a resource (usage sketch after this list).
* Fix docs.
* Use resource error.
* Use ResourceError in more places.
* Use ResourceError in a few more places.
* Throw a resource error.
* Make required.
* Revert this.
* Lint.
* Only report errors once.
* Better comment.
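A hedged usage sketch of the new ResourceError (the component and the validation below are illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";

class MyComponent extends pulumi.ComponentResource {
    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("example:index:MyComponent", name, {}, opts);

        const valid: boolean = false;  // stand-in for real validation
        if (!valid) {
            // The error is attributed to this resource rather than being
            // reported as a generic RunError.
            throw new pulumi.ResourceError("invalid configuration for MyComponent", this);
        }
    }
}
```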
* Protobuf changes
* Move management of root resource state to engine
This commit fixes a persistent side-by-side issue in the NodeJS SDK by
moving the management of root resource state to the engine. Doing so
adds two new endpoints to the Engine gRPC service: 1) GetRootResource
and 2) SetRootResource, which get and set the root resource
respectively. (A conceptual sketch of these endpoint shapes follows this list.)
* Rebase against master, regenerate proto
* Use nightly protoc gRPC plugin for Node
Newer versions of the Node gRPC plugin accept the 'minimum_node_version'
flag, which we can use to instruct protoc to not support Node versions
earlier than Node 6. This allows the compiler to use 'Buffer.from'
instead of the deprecated 'Buffer' constructor, which fixes a
deprecation warning on Node 10.
* Protobuf changes
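Conceptually, the two new endpoints have a very small surface; an illustrative TypeScript rendering of their request/response shapes (not the generated gRPC stubs):

```typescript
// Illustrative only; the authoritative definitions live in the engine's proto files.
interface GetRootResourceRequest {}
interface GetRootResourceResponse {
    urn: string;  // URN of the stack's root resource
}

interface SetRootResourceRequest {
    urn: string;  // URN to record as the root resource
}
interface SetRootResourceResponse {}

interface EngineService {
    getRootResource(req: GetRootResourceRequest): Promise<GetRootResourceResponse>;
    setRootResource(req: SetRootResourceRequest): Promise<SetRootResourceResponse>;
}
```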
- Do not require replacement of dynamic resources due to provider
changes. This is not necessary, and is almost certainly the wrong
thing to do if the dynamic provider is managing a physical resource.
- Return all inputs by default from a dynamic provider's check method.
Currently a dynamic provider that does not implement check will end up
receiving no inputs. This is confusing, and is not the correct default.