pulumi/sdk/nodejs/tsconfig.json

{
"compilerOptions": {
"strict": true,
"outDir": "bin",
"target": "es2016",
"module": "commonjs",
"moduleResolution": "node",
"declaration": true,
"resolveJsonModule": true,
"sourceMap": true,
"stripInternal": true,
"experimentalDecorators": true,
"pretty": true,
"noFallthroughCasesInSwitch": true,
"noImplicitReturns": true,
"forceConsistentCasingInFileNames": true,
"esModuleInterop": true,
"typeRoots": ["node_modules/@types", "types/"]
    },
    "files": [
        "index.ts",
        "config.ts",
"errors.ts",
"invoke.ts",
"metadata.ts",
"output.ts",
"resource.ts",
"stackReference.ts",
"tsutils.ts",
"typescript-shim.ts",
"utils.ts",
"version.ts",
"asset/index.ts",
"asset/asset.ts",
"asset/archive.ts",
"dynamic/index.ts",
"iterable/index.ts",
"log/index.ts",
"provider/index.ts",
"provider/provider.ts",
"provider/internals.ts",
"provider/server.ts",
"runtime/index.ts",
"runtime/closure/codePaths.ts",
"runtime/closure/createClosure.ts",
"runtime/closure/parseFunction.ts",
"runtime/closure/rewriteSuper.ts",
"runtime/closure/serializeClosure.ts",
"runtime/closure/utils.ts",
"runtime/closure/v8.ts",
"runtime/asyncIterableUtil.ts",
"runtime/config.ts",
"runtime/debuggable.ts",
"runtime/invoke.ts",
"runtime/mocks.ts",
"runtime/resource.ts",
"runtime/rpc.ts",
"runtime/settings.ts",
"runtime/stack.ts",
"cmd/dynamic-provider/index.ts",
"cmd/run/index.ts",
"cmd/run/run.ts",
"cmd/run/tracing.ts",
"cmd/run-policy-pack/index.ts",
"cmd/run-policy-pack/run.ts",
"automation/index.ts",
"automation/cmd.ts",
"automation/config.ts",
"automation/localWorkspace.ts",
"automation/minimumVersion.ts",
"automation/projectSettings.ts",
"automation/remoteStack.ts",
"automation/remoteWorkspace.ts",
"automation/stackSettings.ts",
"automation/server.ts",
"automation/stack.ts",
"automation/workspace.ts",
"tests/config.spec.ts",
"tests/constants.ts",
"tests/init.spec.ts",
"tests/iterable.spec.ts",
"tests/options.spec.ts",
"tests/output.spec.ts",
"tests/resource.spec.ts",
"tests/stackReference.spec.ts",
"tests/provider.spec.ts",
"tests/unwrap.spec.ts",
"tests/util.ts",
"tests/runtime/asyncIterableUtil.spec.ts",
"tests/runtime/findWorkspaceRoot.spec.ts",
"tests/runtime/registrations.spec.ts",
"tests/runtime/pack.ts",
"tests/runtime/rpc.spec.ts",
"tests/runtime/install-package-tests.ts",
"tests/runtime/closure-integration-tests.ts",
"tests/runtime/props.spec.ts",
"tests/runtime/settings.spec.ts",
"tests/runtime/langhost/run.spec.ts",
"tests/runtime/langhost/cases/069.ambiguous_entrypoints/index.ts",
"tests/runtime/testdata/nested/project/index.ts",
"tests/automation/cmd.spec.ts",
"tests/automation/localWorkspace.spec.ts",
"tests/automation/remoteWorkspace.spec.ts",
"tests/automation/util.ts",
"tests_with_mocks/mocks.spec.ts",
"npm/testdata/nested/project/index.ts",
"types/arborist.d.ts"
    ]
}