The majority of our NodeJS Automation API tests are currently configured
to use the (default) Pulumi Cloud backend. This is undesirable for
several reasons:
* Using Pulumi Cloud introduces a network into the mix, meaning there is
a greater chance of tests flaking.
* Pulumi Cloud is essentially a massive shared global state, so if
multiple tests run concurrently, we need to ensure that they don't
interfere with each other. We thus have code to generate random stack
names, projects, etc. to avoid these sorts of clashes. If we don't, we
again run the risk of having flaky tests.
* It's slower than skipping Pulumi Cloud (and thus the network) entirely
and using e.g. a local file-backed state.
This commit reworks as many tests as possible to use local file-backed
state. In doing so, we optimistically re-enable some previously skipped
tests that were marked as flaky. For hygiene, and in case we ever run
multiple instances of the suite on a single CI machine, we use temporary
directories for state where possible, which are aggressively cleaned up
on test suite exit (see the `tmp` NPM package
which we already use elsewhere). For tests that use `Pulumi.yaml` files
(and thus must have a fixed `backend.url`), we use `file://~`, which is
already used in one test and seems to work just fine.
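For illustration, here's the same idea expressed with the Python Automation
API's project-settings classes (a hedged sketch -- the tests in question are
NodeJS, and the exact exported names may differ slightly):
```python
# A minimal sketch (Python Automation API, for illustration only): a project
# whose state lives in a local file-backed backend rather than Pulumi Cloud.
from pulumi.automation import ProjectBackend, ProjectSettings

settings = ProjectSettings(
    name="automation-test",
    runtime="nodejs",
    # `file://~` keeps state under ~/.pulumi instead of going over the network.
    backend=ProjectBackend(url="file://~"),
)
```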
As a result of these changes, the tests run significantly faster (from
the order of tens of minutes to just a few, if that) and should in
theory be less (never, 🙏) flaky. Moreover, it's likely that we
could later remove some more of the code in our tests that e.g.
generates random names, since there should be no need for this any more.
Fixes #8061
Part of #15940
While looking at https://github.com/pulumi/pulumi/issues/16469, I
noticed that a lot of the missing-plugin error handling was a bit
repetitive for the not-a-missing-error case and the
disable-plugin-acquisition case. So this rejigs the if statements a bit
so that they all follow the same standard pattern: "if this isn't a
missing-plugin error, or if plugin acquisition is disabled, error
immediately; otherwise retry".
This is more consistent with `RegisterResourceTransform`. The
`RegisterStackInvokeTransform` version has not been released yet, so we
still have the option of making this consistent.
Enable the l1-empty test for Go.
This required some changes in reporting versions from the Go language
host (replaced dependencies don't really have a version anymore, because
they've been replaced by a local artefact and that doesn't contain any
version info itself). This in turn meant that the conformance test had
to be less strict about checking versions from GetDependencies, because
Go now returns empty versions for these local deps.
For conformance testing we want "replace" directives to be respected
since they change the version of the dependency. These tests were all
written using replace directives instead of just normal real
dependencies and so fixing the Go host to respect "replace" broke these
tests.
This rewrites the tests to use real dependencies pulled from a separate
repo at https://github.com/pulumi/go-dependency-testdata.
By default, `pulumi destroy` removes all resources within a stack but
leaves the stack and its configuration intact. If one passes the
`--remove` option to `destroy`, however, the stack and its configuration
will also be removed once the resources within the stack have been
deleted. This commit updates the work of @Moon1706 in #11080 to add
`remove` as an option to the Go, NodeJS and Python Automation API SDKs'
`destroy` methods, which then perform an analogous clean-up.
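For example, in the Python Automation API SDK the new behaviour should look
roughly like this (a hedged sketch, assuming the option is surfaced as a
`remove` keyword argument on `destroy`):
```python
# A minimal sketch, assuming `remove` is exposed as a keyword argument on
# `Stack.destroy` (mirroring `pulumi destroy --remove`).
from pulumi.automation import create_stack

stack = create_stack(
    stack_name="dev",
    project_name="demo",
    program=lambda: None,  # trivial inline program
)
# Destroy all resources, then remove the stack and its configuration too.
stack.destroy(remove=True)
```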
Closes #11080
---------
Co-authored-by: Nikita Sharaev <n.p.sharaev@tinkoff.ru>
Currently, when no resources are moved, we just report that no resources
are being moved, but let the command continue. This can be quite
confusing to the user. Error out in that case instead. Additionally, when
an argument doesn't match any resources in the source snapshot, we now
warn the user about that.
This adds the framework for running conformance tests for Go. Currently
_all_ the tests are skipped, but I'll raise small PRs on top of this one
to start fixing some of the issues.
When generating literals for inputs that have TypedDict types, we want
to use the pythonic names (snake_case) for keys.
We also have to take care to track the fact that we're inside a
TypedDict for nested dicts.
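For example (the type and field names here are hypothetical), a generated
literal should look like this:
```python
from typing import TypedDict


# Hypothetical input type, as it might appear in a generated Python SDK.
class BucketWebsiteArgsDict(TypedDict):
    index_document: str
    error_document: str


# Generated literals should use the pythonic snake_case keys...
website: BucketWebsiteArgsDict = {
    "index_document": "index.html",
    "error_document": "error.html",
}
# ...rather than the schema's camelCase names ("indexDocument", "errorDocument").
```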
Fixes https://github.com/pulumi/pulumi/issues/16646
When converting from a requirements.txt to a pyproject.toml, we were
previously indenting each logical level in the file. Disable the
indentation to match the style of pyproject.toml files as generated
directly by Poetry.
Fixes https://github.com/pulumi/pulumi/issues/16657
urlAuthParser.Parse caches auth methods. To avoid caching the auth
method when running into an error, we first checked that the auth method
is not nil. However, we ran into a classic Go issue where a nil value of
a concrete type is assigned to a variable of an interface type, making
the interface value not equal to nil.
https://go.dev/doc/faq#nil_error
https://go.dev/play/p/AOSdCWd3XC1
Fixes https://github.com/pulumi/pulumi/issues/16637
---------
Co-authored-by: Thomas Gummerer <t.gummerer@gmail.com>
Currently when we have stacks with no snapshot, `pulumi state move`
fails because it tries to use a nil pointer. Handle this scenario
correctly by:
- erroring out if there is no snapshot in the source stack. In this case
there are no resources that can be moved, so there's nothing more that
we can do other than showing the error.
- creating a snapshot in the destination stack if necessary. It's valid
to move a resource to a currently empty stack, so we'll make this work.
This commit fixes a small error in one of our integration tests that
happened to be spotted in #14543, but was never addressed separately and
has not been addressed since, as that PR has not yet been merged.
Co-authored-by: Sam Eiderman <sameid@gmail.com>
At some point while moving the code around we lost the output for the
resources to be moved. We would look for them in the source snapshot for
correct ordering, but that code ran after the URN renaming, so the
resources would no longer match the ones in the source snapshot.
Fix this by using the resourcesToMoveOrdered list to print the
resources, and also moving this code a bit earlier, as we want to show
the resource URNs from the source snapshot, instead of the rewritten
ones.
CPython doesn't optimize tail recursion, and because of that has a
fairly low recursion limit set to avoid stack overflows. This also means
that where we rely on recursion we are prone to raising
`RecursionError`s. In this particular case, `_add_dependency` relies on
recursion for finding all the child dependencies.
When deeply nested trees of `ComponentResources` are used,
`_add_dependencies` can end up raising such an error.
Rewrite the function to be iterative instead of recursive to avoid that.
Fixes https://github.com/pulumi/pulumi/issues/16328
(I didn't manage to reproduce the exact stacktrace of the above issue,
so there might be another thing hidden there. But this should help with
it either way)
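For illustration, the general shape of the rewrite (this is a sketch of the
pattern, not the SDK's actual `_add_dependency` code):
```python
# A minimal sketch of the recursive-to-iterative rewrite: walk a tree of
# children with an explicit work list instead of recursion, so the depth of
# the tree no longer consumes Python stack frames.
from typing import Optional


class Node:
    def __init__(self, name: str, children: Optional[list["Node"]] = None) -> None:
        self.name = name
        self.children = children or []


def collect_dependencies(root: Node) -> set[str]:
    deps: set[str] = set()
    work = [root]
    while work:
        node = work.pop()
        if node.name in deps:
            continue  # guard against revisiting shared children
        deps.add(node.name)
        work.extend(node.children)
    return deps
```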
This adds support for replacement parameterised providers to Python and
a small integration test to check it works e2e.
When using parameterised providers we need to use the new (currently
unstable) RegisterPackage system, instead of sending
Version/DownloadURL/etc via RegisterResourceRequest. Once
RegisterPackage is stable the intention is to change _all_ packages to
use it and for normal packages to fall back to the
RegisterResourceRequest options, while parameterised packages will
error.
The actual parameter value is embedded in the Python SDK as a base64
string that we decode to bytes before sending it to the gRPC endpoint.
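A small sketch of that flow (the embedded value and names here are invented
for illustration):
```python
import base64

# Hypothetical value baked into a generated, parameterised Python SDK.
_PARAMETER_B64 = "eyJyZWdpb24iOiAidXMtd2VzdC0yIn0="

# Decoded to raw bytes just before being sent to the engine's (unstable)
# RegisterPackage endpoint over gRPC.
parameter_bytes = base64.b64decode(_PARAMETER_B64)
```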
The Python Automation API SDK serializes project settings (the contents
of `Pulumi.yaml` files) using Python's `pyyaml` package. Project
settings in the Python Automation API SDK are represented as instances
of the `ProjectSettings` class. By default, `pyyaml` will serialize
class instances as YAML objects "tagged" with a string that indicates
their class. This is so that, upon deserialization, it can construct
objects of the appropriate class (as opposed to just dictionaries). As
an example, the following Python program:
```python
import yaml


class Person:
    def __init__(self, name: str, age: int) -> None:
        self.name = name
        self.age = age


will = Person("will", 37)
print(yaml.dump(will))
```
will produce the following YAML:
```yaml
!!python/object:__main__.Person
age: 37
name: will
```
The string `!!python/object:__main__.Person` is the _tag_ indicating the
class that Python will need to instantiate if e.g. this YAML is
deserialized with `yaml.load` or similar.
Outside of the various Automation APIs, `Pulumi.yaml` files are
"plain-old YAML files" -- language- or library-specific concepts such as
class tags are nowhere to be seen. We thus don't really want this
behaviour when we serialize project settings to YAML. Fortunately, there
is a relatively simple workaround -- instead of passing instances of
`ProjectSettings` to `yaml.dump`, we can just pass vanilla dictionaries
containing the same data. These will be rendered as YAML objects with no
tags, which is what we want.
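For example, dumping a plain dictionary yields ordinary, untagged YAML:
```python
import yaml

# Dumping a plain dictionary (rather than a class instance) produces ordinary,
# untagged YAML -- exactly what we want for Pulumi.yaml.
print(yaml.dump({"name": "will", "age": 37}))
# age: 37
# name: will
```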
*Un*fortunately, we must turn _all_ objects in a hierarchy into
plain dictionaries, or tags will appear at some point. Presently, this
is not the case in the Python SDK, which just uses
`ProjectSettings.__dict__` to get the dictionary for the top-level
object. If there are nested objects in this dictionary (such as e.g.
`ProjectBackend` or `ProjectRuntimeInfo` objects), these are _not_
converted into dictionaries and `pyyaml` writes them as tagged objects,
which will later fail to deserialize.
This commit fixes this issue, adding explicit `to_dict` and `from_dict`
methods to settings classes for the purposes of recursively converting
objects to and from dictionaries. It also:
* adds a test that confirms serialization/deserialization of
configurations containing nested objects now works.
* in order to write this test, exports some types (`ProjectTemplate` and
friends) that appear to be part of the public API but have until now not
been exported.
This fixes Go package generation so users don't _have_ to set the import
base path explicitly in the Go info part of the schema. This only really
helps our packages (because we assume the path is "github.com/pulumi"),
but it fixes some of the generated test data and should also work for
local SDKs: they aren't _really_ at "github.com/pulumi", but it doesn't
matter what URL is used as long as it's a valid one.
Fixes https://github.com/pulumi/pulumi/issues/13756
---------
Co-authored-by: Thomas Gummerer <t.gummerer@gmail.com>
Clean up the temporary `PULUMI_HOME` directory we create during a
program test.
This is necessary to reclaim the disk space of the plugins that were
downloaded during the test.
In pulumi-aws we started seeing test failures because the CI runners
started running out of disk space due to plugins in `PULUMI_HOME` not
being cleaned up.
Rather than writing out a go.mod with string interpolation, this uses the
"golang.org/x/mod/modfile" package to construct and render the go.mod
files we generate for programs.
---------
Co-authored-by: Will Jones <will@sacharissa.co.uk>
This commit improves the TypeScript TypeDocs for `runtime` modules in
the NodeJS SDK. Specifically:
* It adds documentation to interfaces, properties, etc. that are missing
it.
* It transforms would-be TypeDoc comments erroneously written using
either `/*` (a single asterisk) or `//` (a normal line comment) to
actual TypeDoc comments.
* It standardises on TypeDoc's `{@link Name}` syntax for linking
identifiers, as opposed to the mixture of backticks and square brackets
we have today.
* It fixes typos and generally cleans up the formatting here and there,
as well as introducing more consistency where the same concepts crop up
in multiple places.
There are a few instances where I *think* we can use the GITHUB_TOKEN we
get from Actions, so we can try to avoid rate limits.
All of these jobs should be running during PRs, so it should be
relatively safe if the CI tests pass.
This commit improves the TypeScript TypeDocs for the remaining modules
in the NodeJS SDK. Specifically:
* It adds documentation to interfaces, properties, etc. that are missing
it.
* It transforms would-be TypeDoc comments erroneously written using
either `/*` (a single asterisk) or `//` (a normal line comment) to
actual TypeDoc comments.
* It standardises on TypeDoc's `{@link Name}` syntax for linking
identifiers, as opposed to the mixture of backticks and square brackets
we have today.
* It fixes typos and generally cleans up the formatting here and there,
as well as introducing more consistency where the same concepts crop up
in multiple places.
---------
Co-authored-by: Thomas Gummerer <t.gummerer@gmail.com>
The test now checks that destroy works, and also that the parameterized
provider state is what we expect.
This required flipping around some of the state management around reading
and writing provider inputs.
Add support to the Go SDK for invoke transforms. This only adds support
for setting transforms globally, not yet via a resource option, which
resource transforms allow.
---------
Co-authored-by: Will Jones <will@sacharissa.co.uk>
We run a short script with nodejs to find the path to the nodejs SDK
entrypoint. In https://github.com/pulumi/pulumi/pull/16160 this script
was changed and now contained newlines.
This broke when using the Volta package manager to manage different
nodejs versions. Volta adds shim programs into the user's path that
route to the correct nodejs executable. Seemingly, Volta does not handle
the newlines in the arguments.
To fix this, we ensure that the script is on a single line.
Fixes https://github.com/pulumi/pulumi/issues/16393
This commit adds a test to ensure that provider `Delete`s are called
with the correct set of parameters. This is a regression test for
#16440, as part of the follow-up to #16441 in #16484. Assuming we are
happy with this level/style of test, we should consider adding tests for
the other provider methods also.
Fixes #16484
From SDKs, we call invokes in one of two ways:
* In a "non-output" context (e.g. `getX`), which has a result dependent
on the language (e.g. a `Promise` in NodeJS) that _does not_ track
dependencies.
* In an "output" context (e.g. `getXOutput`), which has an `Output` type
and does track dependencies.
In the non-output case, `dependsOn` really doesn't make sense, since
this style of invoke is inherently ignoring dependency tracking/outputs.
This commit thus reverts 492c57c7dd so
that we can rethink the design before people's programs are subtly
broken in this case.
When running `pulumi up --continue-on-error`, we can't bring up new
resources that have dependencies that have failed, or have been skipped.
Since the resource would have a failed dependency, that dependency would
not be in the snapshot, resulting in a snapshot integrity failure. Also
it simply does not make sense to `up` a resource that has a failed
dependency.
We took care of that for the regular dependency relationship; however,
at the time we missed doing the same for other types of dependencies,
namely parent-child relationships, deleted-with relationships, and
property dependencies.
This can result in snapshot integrity failures for example when a parent
fails to be created, but we still do the resource creation of the child,
such as what happened in https://github.com/pulumi/pulumi/issues/16638.
Fix this by skipping the step when a resource with any type of
dependency relationship fails or is skipped beforehand.
Fixes https://github.com/pulumi/pulumi/issues/16638
When a policy pack is nested within a project, or vice-versa, install
the appropriate dependencies, based on where the command is run from.
If there are both policy pack and project files in the parent
directories of the current working directory, pick the one that is
closest.
Fixes https://github.com/pulumi/pulumi/issues/16605
Correct the usage text, and mark the command as no longer experimental.
The command should be ready for regular usage, so I think the
"EXPERIMENTAL" comment can be dropped now.
We've recently introduced resource transforms, which allow users to
update any resource options and properties for the duration of a program
using a callback. We want to introduce similar functionality for Invokes
(and eventually also StreamInvokes, Read and Calls). This can help users
e.g. set default providers through transforms consistently for all
components.
While this PR only implements the engine parts of invoke transforms, the
API for this will look very similar to what the API for resource
transforms looks like. For example in TypeScript:
```typescript
pulumi.runtime.registerInvokeTransform(args => {
    // [...]
});
```
---------
Co-authored-by: Will Jones <will@sacharissa.co.uk>
For nodejs we look at the node_modules of all child directories of the
root path to find the required plugins. When a policy pack is nested
within a pulumi project, we were including its dependencies in the list
of required plugins. To avoid this, we stop recursing once we see a
directory that has a PulumiPolicy.yaml.
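A minimal sketch of the pruning idea (the real implementation lives in the
NodeJS language host; this just illustrates stopping the walk at a nested
policy pack):
```python
import os


def plugin_search_dirs(root: str) -> list[str]:
    """Collect node_modules directories under `root`, skipping nested policy packs."""
    dirs = []
    for dirpath, dirnames, filenames in os.walk(root):
        # A nested policy pack's dependencies are not required plugins of the
        # surrounding project, so don't descend into it.
        if dirpath != root and "PulumiPolicy.yaml" in filenames:
            dirnames[:] = []
            continue
        if "node_modules" in dirnames:
            dirs.append(os.path.join(dirpath, "node_modules"))
    return dirs
```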
Fixes https://github.com/pulumi/pulumi/issues/16604
Part of pulumi/pulumi-yaml#599.
Pulumi YAML uses the schema defined here to type check outputs from the
`pulumi:pulumi:StackReference` resource. This changes the type of
`outputs` from `map[string]string` to `map[string]any`, permitting list,
map, and numeric outputs to be used as inputs to other resources.
To release this fix, we will need to release Pulumi twice and YAML once
if we are to use our ordinary release process:
1. Merge this PR.
2. Release sdk and pkg dependencies with this update applied.
3. Merge https://github.com/pulumi/pulumi-yaml/pull/600
4. Create and merge a PR to update Pulumi YAML's dependency on
`github.com/pulumi/pkg/v3`, as YAML links to the schema loader and will
read the updated schema here:
7f48ca370d/pkg/codegen/schema/loader.go (L136-L140)
5. Release Pulumi YAML
6. Create and merge a PR to update the YAML language plugin shipped with
Pulumi
7. Release Pulumi
Change how the deployment settings configure git repos, to support
directories that are not git repositories.
Fix https://github.com/pulumi/pulumi-service/issues/20675
---------
Co-authored-by: Levi Blackstone <levi@pulumi.com>
When moving state it can be useful to also include all the parents. Add
a flag to give users that option.
Using this flag, all parents of all the resources being moved will be
included in the move.
When the user tries to move a resource that would end up being a
duplicate in the destination stack, we need to prevent that, to avoid a
broken state file.
Error out in this case.
Depends on: https://github.com/pulumi/pulumi/pull/16543