# Description
This is an alternative to https://github.com/pulumi/pulumi/pull/14244.
Instead of adding type information to the run request, pass the config
through as property values. Property values are properly encoded on the
wire and can be unmarshalled on the other end with their type
information intact, so this should be a more future-proof approach.
Eventually we'll want to parse the config directly into property values,
but that is a bigger change and can be left for the future.
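For illustration, a minimal Go sketch of lifting raw config strings into property values and marshalling them for the wire, using the Pulumi Go SDK's `resource` and `plugin` packages; `configToPropertyMap` is a hypothetical helper that only stands in for the eventual parsing logic:
```go
package main

import (
	"fmt"

	"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
	"github.com/pulumi/pulumi/sdk/v3/go/common/resource/plugin"
)

// configToPropertyMap (hypothetical) lifts raw string config values into
// property values before they are sent on the wire.
func configToPropertyMap(cfg map[string]string) resource.PropertyMap {
	props := resource.PropertyMap{}
	for k, v := range cfg {
		props[resource.PropertyKey(k)] = resource.NewStringProperty(v)
	}
	return props
}

func main() {
	props := configToPropertyMap(map[string]string{"proj:region": "us-west-2"})

	// MarshalProperties encodes the map as a protobuf Struct, preserving
	// value kinds so the receiving side can unmarshal with type information.
	pb, err := plugin.MarshalProperties(props, plugin.MarshalOptions{KeepSecrets: true})
	if err != nil {
		panic(err)
	}
	fmt.Println(pb.String())
}
```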
## Checklist
- [x] I have run `make tidy` to update any new dependencies
- [x] I have run `make lint` to verify my code passes the lint check
- [x] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [ ] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [x] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
We need to be able to "pack" SDKs to refer to them as local dependencies
in matrix testing.
This is for two reasons.
1) We want to test as close as possible to the things we ship.
2) Not every language supports linking to a source tree; some require a
build step to produce a linkable artifact.
These commands are going to end up looking _very_ similar to the publish
workflows, but while Providers work on those and we work on matrix
testing, we'll let the two evolve in parallel.
The sdk-pack command is hidden unless PULUMI_DEV is set. I've verified
this works with matrix testing for NodeJS; we'll fill in the other
languages as we need them for matrix testing.
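As a sketch of the PULUMI_DEV gating, here is a minimal example using cobra (the library the pulumi CLI is built on); the command name, flags, and body are illustrative, not the actual implementation:
```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// newPackSdkCmd builds a hypothetical "pack-sdk" command; in the real CLI the
// command produces a locally consumable SDK artifact for matrix testing.
func newPackSdkCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "pack-sdk <language> <dir>",
		Short: "Pack a language SDK into a local artifact",
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("packing", args)
			return nil
		},
	}
}

func main() {
	root := &cobra.Command{Use: "pulumi"}

	cmd := newPackSdkCmd()
	// Hide the experimental command from help output unless PULUMI_DEV is set.
	cmd.Hidden = os.Getenv("PULUMI_DEV") == ""
	root.AddCommand(cmd)

	_ = root.Execute()
}
```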
This changes codegen to be invoked via gRPC from pkg, rather than
invoking pkg/codegen directly.
Consider it a proof of concept for moving codegen to a gRPC interface
without the worries of forwards/backwards compatibility (because we ship
language plugins at a fixed version, side-by-side, to users).
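As a sketch of the direction, the call from pkg might look something like the interface below; the type and method names here are hypothetical stand-ins, not the actual pulumirpc definitions:
```go
package codegenrpc

import "context"

// GeneratePackageRequest is a hypothetical request shape: the schema for the
// package to generate and the directory to write the SDK into.
type GeneratePackageRequest struct {
	Directory string // where the generated SDK is written
	Schema    string // the Pulumi package schema, as JSON
}

// GeneratePackageResponse carries any diagnostics from the generator.
type GeneratePackageResponse struct {
	Diagnostics []string
}

// LanguageCodegen is what pkg would call instead of importing pkg/codegen
// directly: the request crosses a gRPC process boundary to the language
// plugin, which runs the code generator it shipped with.
type LanguageCodegen interface {
	GeneratePackage(ctx context.Context, req *GeneratePackageRequest) (*GeneratePackageResponse, error)
}
```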
This allows the pulumi-language-go plugin to start up providers directly
from .go source files.
The other language providers will be extended to support this as well in
time.
* Move InstallDependencies to the language plugin
This changes `pulumi new` and `pulumi up <template>` to invoke the language plugin to install dependencies, rather than having the dependency-installation code hardcoded into the CLI itself.
This does not change the way policy packs or plugin dependencies are installed. In theory we can make pretty much the same change there and just invoke the language plugin, but baby steps; we don't need to make that change at the same time as this one.
We used to feed the result of these install commands (dotnet build, npm install, etc.) directly through to the CLI's stdout/stderr. To mostly maintain that behaviour, the InstallDependencies gRPC method streams back bytes to be written to stdout/stderr; those bytes are read either from pipes or from a pty that we run the install commands with (see the sketch below). The use of a pty is controlled by the global colorisation option in the CLI.
An alternative design was to use the Engine interface to log the results of install commands. That renders very differently from writing directly to the standard outputs, and I don't think it would support control codes as well.
The design as is means that `npm install`, for example, can still display a progress bar and colors even though we're running it in a separate process and streaming its output back via gRPC.
The only "oddity" I feel has fallen out of this work is that InstallDependencies for Python used to overwrite the virtualenv runtime option. It looks like this was because our templates don't bother setting that. Because InstallDependencies doesn't have the project file, and at any rate will be used for policy pack projects in the future, I've moved that logic into `pulumi new` where it mutates the other project file settings. I think we should at some point clean up the templates so they correctly indicate that a venv should be used, or maybe change Python to assume a virtualenv of "venv" if none is given?
* Just warn if pty fails to open
* Add tests and return real tty files
* Add to CHANGELOG
* lint
* format
* Test strings
* Log pty opening for trace debugging
* s/Hack/Workaround
* Use termios
* Tweak terminal test
* lint
* Fix windows build
* Make `async:true` the default for `invoke` calls (#3750)
* Switch away from native grpc impl. (#3728)
* Remove usage of the 'deasync' library from @pulumi/pulumi. (#3752)
* Only retry as long as we get unavailable back. Anything else continues. (#3769)
* Handle all errors for now. (#3781)
* Do not assume --yes was present when using pulumi in non-interactive mode (#3793)
* Upgrade all paths for sdk and pkg to v2
* Backport C# invoke classes and other recent gen changes (#4288)
Adjust C# generation
* Replace IDeployment with a sealed class (#4318)
Replace IDeployment with a sealed class
* .NET: default to args subtype rather than Args.Empty (#4320)
* Adding the System namespace for .NET codegen
This is required for using the Obsolete attribute for deprecations:
```
Iam/InstanceProfile.cs(142,10): error CS0246: The type or namespace name 'ObsoleteAttribute' could not be found (are you missing a using directive or an assembly reference?) [/Users/stack72/code/go/src/github.com/pulumi/pulumi-aws/sdk/dotnet/Pulumi.Aws.csproj]
Iam/InstanceProfile.cs(142,10): error CS0246: The type or namespace name 'Obsolete' could not be found (are you missing a using directive or an assembly reference?) [/Users/stack72/code/go/src/github.com/pulumi/pulumi-aws/sdk/dotnet/Pulumi.Aws.csproj]
```
* Fix the nullability of config type properties in C# codegen (#4379)
* Use nightly protoc gRPC plugin for Node
Newer versions of the Node gRPC plugin accept the 'minimum_node_version'
flag, which we can use to instruct protoc to not support Node versions
earlier than Node 6. This allows the compiler to use 'Buffer.from'
instead of the deprecated 'Buffer' constructor, which fixes a
deprecation warning on Node 10.
* Protobuf changes
This change includes a lot more functionality: enough to actually
run the webserver-py example through previews, updates, and destroys!
* Actually wire up the gRPC connections to the engine/monitor.
* Move the Node.js and Python generated Protobuf/gRPC files underneath
the actual SDK directories to simplify things generally. No more
copying during `make`, and in fact this was required to give a smoother
experience with proper packages/modules for Python SDK development.
* Build the Python egg during `make build`.
* Add support for program stacks. Just like with the Node.js runtime,
we will auto-parent any resources without explicit parents to a single
top-level resource component.
* Add support for component resource output properties.
* Add get_project() and get_stack() functions for retrieving the current
project and stack names.
* Properly use UNKNOWN sentinels.
* Add a set_outputs() function on Resource. This is defined by the
code-generator and allows custom logic for output property setting.
This is cleaner than the way we do this in Node.js, and gives us a
way to ensure that output properties are "real" properties, complete
with member documentation. This also gives us a hook to perform
name demangling, which the code-generator typically controls anyway.
* Add package dependencies to setuptools.py and requirements.txt.