// Copyright 2016-2018, Pulumi Corporation. All rights reserved.

package tests

import (
	cryptorand "crypto/rand"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"os"
	"path"
	"path/filepath"
	"strconv"
	"strings"
	"testing"
	"time"

	"github.com/pulumi/pulumi/pkg/apitype"
	"github.com/pulumi/pulumi/pkg/backend/filestate"
	"github.com/pulumi/pulumi/pkg/resource"
	"github.com/pulumi/pulumi/pkg/resource/stack"
	"github.com/pulumi/pulumi/pkg/testing/integration"
	"github.com/pulumi/pulumi/pkg/util/contract"
	"github.com/pulumi/pulumi/pkg/workspace"
	"github.com/stretchr/testify/assert"

	ptesting "github.com/pulumi/pulumi/pkg/testing"
)

func TestStackCommands(t *testing.T) {
	// stack init, stack ls, stack rm, stack ls
	t.Run("SanityTest", func(t *testing.T) {
		e := ptesting.NewEnvironment(t)
		defer func() {
			if !t.Failed() {
				e.DeleteEnvironment()
			}
		}()

		integration.CreateBasicPulumiRepo(e)
		e.RunCommand("pulumi", "login", "--cloud-url", e.LocalURL())
		e.RunCommand("pulumi", "stack", "init", "foo")

		stacks, current := integration.GetStacks(e)
		assert.Equal(t, 1, len(stacks))
		assert.NotNil(t, current)
		if current == nil {
			t.Logf("stacks: %v, current: %v", stacks, current)
			t.Fatalf("No current stack?")
		}

		assert.Equal(t, "foo", *current)
		assert.Contains(t, stacks, "foo")

		e.RunCommand("pulumi", "stack", "rm", "foo", "--yes")

		stacks, _ = integration.GetStacks(e)
		assert.Equal(t, 0, len(stacks))
	})

	t.Run("StackSelect", func(t *testing.T) {
		e := ptesting.NewEnvironment(t)
		defer func() {
			if !t.Failed() {
				e.DeleteEnvironment()
			}
		}()

		integration.CreateBasicPulumiRepo(e)
		e.RunCommand("pulumi", "login", "--cloud-url", e.LocalURL())
		e.RunCommand("pulumi", "stack", "init", "blighttown")
		e.RunCommand("pulumi", "stack", "init", "majula")
		e.RunCommand("pulumi", "stack", "init", "lothric")

		// The last stack created is always selected.
		stacks, current := integration.GetStacks(e)
		if current == nil {
			t.Fatalf("No stack was labeled as current among: %v", stacks)
		}
		assert.Equal(t, "lothric", *current)

		// Selecting an existing stack works.
		e.RunCommand("pulumi", "stack", "select", "blighttown")
		stacks, current = integration.GetStacks(e)
		if current == nil {
			t.Fatalf("No stack was labeled as current among: %v", stacks)
		}
		assert.Equal(t, "blighttown", *current)

		// Selecting a nonexistent stack is an error.
		out, err := e.RunCommandExpectError("pulumi", "stack", "select", "anor-londo")
		assert.Empty(t, out)
		// local: "no stack with name 'anor-londo' found"
		// cloud: "Stack 'integration-test-59f645ba/pulumi-test/anor-londo' not found"
		assert.Contains(t, err, "anor-londo")

		e.RunCommand("pulumi", "stack", "rm", "--yes")
	})

	t.Run("StackRm", func(t *testing.T) {
		e := ptesting.NewEnvironment(t)
		defer func() {
			if !t.Failed() {
				e.DeleteEnvironment()
			}
		}()

		integration.CreateBasicPulumiRepo(e)
		e.RunCommand("pulumi", "login", "--cloud-url", e.LocalURL())
		e.RunCommand("pulumi", "stack", "init", "blighttown")
		e.RunCommand("pulumi", "stack", "init", "majula")
		e.RunCommand("pulumi", "stack", "init", "lothric")
		stacks, _ := integration.GetStacks(e)
		assert.Equal(t, 3, len(stacks))

		e.RunCommand("pulumi", "stack", "rm", "majula", "--yes")
		stacks, _ = integration.GetStacks(e)
		assert.Equal(t, 2, len(stacks))
		assert.Contains(t, stacks, "blighttown")
		assert.Contains(t, stacks, "lothric")

		e.RunCommand("pulumi", "stack", "rm", "lothric", "--yes")
		stacks, _ = integration.GetStacks(e)
		assert.Equal(t, 1, len(stacks))
		assert.Contains(t, stacks, "blighttown")

		e.RunCommand("pulumi", "stack", "rm", "blighttown", "--yes")
		stacks, _ = integration.GetStacks(e)
		assert.Equal(t, 0, len(stacks))

		// Removing a nonexistent stack is an error.
		out, err := e.RunCommandExpectError("pulumi", "stack", "rm", "anor-londo", "--yes")
		assert.Empty(t, out)
		// local: .pulumi/stacks/pulumi-test/anor-londo.json: no such file or directory
		// cloud: Stack 'integration-test-59f645ba/pulumi-test/anor-londo' not found
		assert.Contains(t, err, "anor-londo")
	})

	// Test that stack import fails if the version of the deployment we give it is not
	// one that the CLI supports.
	t.Run("CheckpointVersioning", func(t *testing.T) {
		versions := []int{
			apitype.DeploymentSchemaVersionCurrent + 1,
			stack.DeploymentSchemaVersionOldestSupported - 1,
		}

		for _, deploymentVersion := range versions {
			t.Run(fmt.Sprintf("Version%d", deploymentVersion), func(t *testing.T) {
				e := ptesting.NewEnvironment(t)
				defer func() {
					if !t.Failed() {
						e.DeleteEnvironment()
					}
				}()

				integration.CreateBasicPulumiRepo(e)
				e.RunCommand("pulumi", "login", "--cloud-url", e.LocalURL())
				e.RunCommand("pulumi", "stack", "init", "the-abyss")
				stacks, _ := integration.GetStacks(e)
				assert.Equal(t, 1, len(stacks))

				stackFile := path.Join(e.RootPath, "stack.json")
				e.RunCommand("pulumi", "stack", "export", "--file", "stack.json")
				stackJSON, err := ioutil.ReadFile(stackFile)
				if !assert.NoError(t, err) {
					t.FailNow()
				}

				var deployment apitype.UntypedDeployment
				err = json.Unmarshal(stackJSON, &deployment)
				if !assert.NoError(t, err) {
					t.FailNow()
				}

				// Rewrite the deployment with an unsupported version and try to import it.
				deployment.Version = deploymentVersion
				bytes, err := json.Marshal(deployment)
				assert.NoError(t, err)
				err = ioutil.WriteFile(stackFile, bytes, os.FileMode(0600))
				if !assert.NoError(t, err) {
					t.FailNow()
				}

				stdout, stderr := e.RunCommandExpectError("pulumi", "stack", "import", "--file", "stack.json")
				assert.Empty(t, stdout)
				switch {
				case deploymentVersion > apitype.DeploymentSchemaVersionCurrent:
					assert.Contains(t, stderr, "the stack 'the-abyss' is newer than what this version of the Pulumi CLI understands")
				case deploymentVersion < stack.DeploymentSchemaVersionOldestSupported:
					assert.Contains(t, stderr, "the stack 'the-abyss' is too old")
				}
			})
		}
	})

	t.Run("FixingInvalidResources", func(t *testing.T) {
		e := ptesting.NewEnvironment(t)
		defer func() {
			if !t.Failed() {
				e.DeleteEnvironment()
			}
		}()
		stackName := addRandomSuffix("invalid-resources")
		integration.CreateBasicPulumiRepo(e)
		e.ImportDirectory("integration/stack_dependencies")
		e.RunCommand("pulumi", "login", "--cloud-url", e.LocalURL())
		e.RunCommand("pulumi", "stack", "init", stackName)
		e.RunCommand("yarn", "install")
		e.RunCommand("yarn", "link", "@pulumi/pulumi")
		e.RunCommand("pulumi", "update", "--non-interactive", "--skip-preview")

		// We're going to futz with the stack a little so that one of the resources we just created
		// becomes invalid.
		stackFile := path.Join(e.RootPath, "stack.json")
		e.RunCommand("pulumi", "stack", "export", "--file", "stack.json")
		stackJSON, err := ioutil.ReadFile(stackFile)
		if !assert.NoError(t, err) {
			t.FailNow()
		}
		var deployment apitype.UntypedDeployment
		err = json.Unmarshal(stackJSON, &deployment)
		if !assert.NoError(t, err) {
			t.FailNow()
		}
		snap, err := stack.DeserializeUntypedDeployment(&deployment)
		if !assert.NoError(t, err) {
			t.FailNow()
		}

		// Let's say that the CLI crashed during the deletion of the last resource and we've now got
		// invalid resources in the snapshot.
		res := snap.Resources[len(snap.Resources)-1]
		snap.PendingOperations = append(snap.PendingOperations, resource.Operation{
			Resource: res,
			Type:     resource.OperationTypeDeleting,
		})
		v2deployment := stack.SerializeDeployment(snap)
		data, err := json.Marshal(&v2deployment)
		if !assert.NoError(t, err) {
			t.FailNow()
		}
		deployment.Deployment = data
		bytes, err := json.Marshal(&deployment)
		if !assert.NoError(t, err) {
			t.FailNow()
		}
		err = ioutil.WriteFile(stackFile, bytes, os.FileMode(0600))
		if !assert.NoError(t, err) {
			t.FailNow()
		}

		// Importing should strip the pending operation and warn us about it.
		_, stderr := e.RunCommand("pulumi", "stack", "import", "--file", "stack.json")
		assert.Contains(t, stderr, fmt.Sprintf("removing pending operation 'deleting' on '%s'", res.URN))

		// The engine should be happy now that there are no invalid resources.
		e.RunCommand("pulumi", "update", "--non-interactive", "--skip-preview")
		e.RunCommand("pulumi", "stack", "rm", "--yes", "--force")
	})
}
|
2018-02-21 05:05:57 +00:00
|
|
|
|
|
|
|
func TestStackBackups(t *testing.T) {
|
|
|
|
t.Run("StackBackupCreatedSanityTest", func(t *testing.T) {
|
|
|
|
e := ptesting.NewEnvironment(t)
|
|
|
|
defer func() {
|
|
|
|
if !t.Failed() {
|
|
|
|
e.DeleteEnvironment()
|
|
|
|
}
|
|
|
|
}()
|
|
|
|
|
|
|
|
integration.CreateBasicPulumiRepo(e)
|
2018-11-08 17:44:34 +00:00
|
|
|
e.ImportDirectory("integration/stack_outputs/nodejs")
|
2018-02-21 05:05:57 +00:00
|
|
|
|
|
|
|
// We're testing that backups are created so ensure backups aren't disabled.
|
2018-09-05 14:53:31 +00:00
|
|
|
if env := os.Getenv(filestate.DisableCheckpointBackupsEnvVar); env != "" {
|
|
|
|
os.Unsetenv(filestate.DisableCheckpointBackupsEnvVar)
|
|
|
|
defer os.Setenv(filestate.DisableCheckpointBackupsEnvVar, env)
|
2018-02-21 05:05:57 +00:00
|
|
|
}
|
|
|
|
|
Remove the need to `pulumi init` for the local backend
This change removes the need to `pulumi init` when targeting the local
backend. A fair amount of the change lays the foundation that the next
set of changes to stop having `pulumi init` be used for cloud stacks
as well.
Previously, `pulumi init` logically did two things:
1. It created the bookkeeping directory for local stacks, this was
stored in `<repository-root>/.pulumi`, where `<repository-root>` was
the path to what we belived the "root" of your project was. In the
case of git repositories, this was the directory that contained your
`.git` folder.
2. It recorded repository information in
`<repository-root>/.pulumi/repository.json`. This was used by the
cloud backend when computing what project to interact with on
Pulumi.com
The new identity model will remove the need for (2), since we only
need an owner and stack name to fully qualify a stack on
pulumi.com, so it's easy enough to stop creating a folder just for
that.
However, for the local backend, we need to continue to retain some
information about stacks (e.g. checkpoints, history, etc). In
addition, we need to store our workspace settings (which today just
contains the selected stack) somehere.
For state stored by the local backend, we change the URL scheme from
`local://` to `local://<optional-root-path>`. When
`<optional-root-path>` is unset, it defaults to `$HOME`. We create our
`.pulumi` folder in that directory. This is important because stack
names now must be unique within the backend, but we have some tests
using local stacks which use fixed stack names, so each integration
test really wants its own "view" of the world.
For the workspace settings, we introduce a new `workspaces` directory
in `~/.pulumi`. In this folder we write the workspace settings file
for each project. The file name is the name of the project, combined
with the SHA1 of the path of the project file on disk, to ensure that
multiple pulumi programs with the same project name have different
workspace settings.
This does mean that moving a project's location on disk will cause the
CLI to "forget" what the selected stack was, which is unfortunate, but
not the end of the world. If this ends up being a big pain point, we
can certianly try to play games in the future (for example, if we saw
a .git folder in a parent folder, we could store data in there).
With respect to compatibility, we don't attempt to migrate older files
to their newer locations. For long lived stacks managed using the
local backend, we can provide information on where to move things
to. For all stacks (regardless of backend) we'll require the user to
`pulumi stack select` their stack again, but that seems like the
correct trade-off vs writing complicated upgrade code.
2018-04-16 23:15:10 +00:00
|
|
|
const stackName = "imulup"
|
2018-02-21 05:05:57 +00:00
|
|
|
|
|
|
|
// Get the path to the backup directory for this project.
|
		backupDir, err := getStackProjectBackupDir(e, stackName)
		assert.NoError(t, err, "getting stack project backup path")
		defer func() {
			if !t.Failed() {
				// Cleanup the backup directory.
				os.RemoveAll(backupDir)
			}
		}()
		e.RunCommand("pulumi", "login", "--cloud-url", e.LocalURL())
		e.RunCommand("pulumi", "stack", "init", stackName)

		// Build the project.
		e.RunCommand("yarn", "install")
		e.RunCommand("yarn", "link", "@pulumi/pulumi")

		// Now run pulumi update.
		before := time.Now().UnixNano()
		e.RunCommand("pulumi", "up", "--non-interactive", "--skip-preview")
		after := time.Now().UnixNano()

		// Verify the backup directory contains a single backup.
		files, err := ioutil.ReadDir(backupDir)
		assert.NoError(t, err, "getting the files in backup directory")
		assert.Equal(t, 1, len(files))
		fileName := files[0].Name()

		// Verify the backup file.
		assertBackupStackFile(t, stackName, files[0], before, after)

		// Now run pulumi destroy.
		before = time.Now().UnixNano()
		e.RunCommand("pulumi", "destroy", "--non-interactive", "--skip-preview")
		after = time.Now().UnixNano()

		// Verify the backup directory has been updated with one additional backup.
		files, err = ioutil.ReadDir(backupDir)
		assert.NoError(t, err, "getting the files in backup directory")
		assert.Equal(t, 2, len(files))

		// Verify the new backup file.
		for _, file := range files {
			// Skip the file we previously verified.
			if file.Name() == fileName {
				continue
			}

			assertBackupStackFile(t, stackName, file, before, after)
		}

		e.RunCommand("pulumi", "stack", "rm", "--yes")
	})
}

func assertBackupStackFile(t *testing.T, stackName string, file os.FileInfo, before int64, after int64) {
	assert.False(t, file.IsDir())
	assert.True(t, file.Size() > 0)
	split := strings.Split(file.Name(), ".")
	assert.Equal(t, 3, len(split))
	assert.Equal(t, stackName, split[0])
	parsedTime, err := strconv.ParseInt(split[1], 10, 64)
	assert.NoError(t, err, "parsing the time in the stack backup filename")
	assert.True(t, parsedTime > before)
	assert.True(t, parsedTime < after)
}
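The assertions in assertBackupStackFile imply that a backup file name has three dot-separated parts: the stack name, a nanosecond timestamp taken between the surrounding `before`/`after` readings, and an extension. A minimal sketch of a producer matching that shape follows; `backupFileName` is a hypothetical helper, and the `.json` extension is an assumption for illustration, not taken from the backend code.

```go
package main

import (
	"fmt"
	"time"
)

// backupFileName builds a name of the shape assertBackupStackFile expects:
// "<stack>.<unix-nanos>.<ext>". The ".json" extension is assumed here.
func backupFileName(stackName string) string {
	return fmt.Sprintf("%s.%d.json", stackName, time.Now().UnixNano())
}

func main() {
	// e.g. "imulup.1540590000000000000.json"
	fmt.Println(backupFileName("imulup"))
}
```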

func getStackProjectBackupDir(e *ptesting.Environment, stackName string) (string, error) {
	return filepath.Join(e.RootPath,
		workspace.BookkeepingDir,
		workspace.BackupDir,
		stackName,
	), nil
}

func addRandomSuffix(s string) string {
	b := make([]byte, 4)
	_, err := cryptorand.Read(b)
	contract.AssertNoError(err)
	return s + "-" + hex.EncodeToString(b)
}