// Copyright 2016-2023, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//	http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package display

import (
	"bytes"
	"fmt"
	"io"
	"os"
	"runtime"
	"sort"
	"strings"
	"sync"
	"time"
	"unicode"

	"github.com/dustin/go-humanize/english"

	"github.com/pulumi/pulumi/pkg/v3/backend/display/internal/terminal"
	"github.com/pulumi/pulumi/pkg/v3/display"
	"github.com/pulumi/pulumi/pkg/v3/engine"
	"github.com/pulumi/pulumi/pkg/v3/resource/deploy"
	"github.com/pulumi/pulumi/sdk/v3/go/common/apitype"
	"github.com/pulumi/pulumi/sdk/v3/go/common/diag"
	"github.com/pulumi/pulumi/sdk/v3/go/common/diag/colors"
	"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
	"github.com/pulumi/pulumi/sdk/v3/go/common/tokens"
	"github.com/pulumi/pulumi/sdk/v3/go/common/util/cmdutil"
	"github.com/pulumi/pulumi/sdk/v3/go/common/util/contract"
)

// DiagInfo contains the bundle of diagnostic information for a single resource.
type DiagInfo struct {
	ErrorCount, WarningCount, InfoCount, DebugCount int

	// The very last diagnostic event we got for this resource (regardless of severity). We'll print
	// this out in the non-interactive mode whenever we get new events. Importantly, we don't want
	// to print out the most significant diagnostic, as that means a flurry of events will cause us
	// to keep printing out the most significant diagnostic over and over again.
	LastDiag *engine.DiagEventPayload

	// The last error we received. If we have an error, and we're in tree-view, we'll prefer to
	// show this over the last non-error diag so that users know about something bad early on.
	LastError *engine.DiagEventPayload

	// All the diagnostic events we've heard about this resource. We'll print the last diagnostic
	// in the status region while a resource is in progress. At the end we'll print out all
	// diagnostics for a resource.
	//
	// Diagnostic events are bucketed by their associated stream ID (with 0 being the default
	// stream).
	StreamIDToDiagPayloads map[int32][]engine.DiagEventPayload
}

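// Illustrative sketch (not part of the original file): how a diagnostic event might be
// folded into a DiagInfo. The helper and its name are hypothetical, and the StreamID and
// Severity fields on engine.DiagEventPayload are assumptions based on how the struct is
// used here: the payload is bucketed by its stream ID, the "last seen" pointers are
// updated, and errors also bump ErrorCount.
func recordDiag(info *DiagInfo, payload engine.DiagEventPayload) {
	if info.StreamIDToDiagPayloads == nil {
		info.StreamIDToDiagPayloads = map[int32][]engine.DiagEventPayload{}
	}
	info.StreamIDToDiagPayloads[payload.StreamID] = append(info.StreamIDToDiagPayloads[payload.StreamID], payload)

	info.LastDiag = &payload
	if payload.Severity == diag.Error {
		info.ErrorCount++
		info.LastError = &payload
	}
}
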
type progressRenderer interface {
	io.Closer

	initializeDisplay(display *ProgressDisplay)
	tick()
	rowUpdated(row Row)
	systemMessage(payload engine.StdoutEventPayload)
	done()
	println(line string)
}

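// Illustrative sketch (not part of the original file): a minimal no-op implementation of
// progressRenderer, showing the full method set a renderer must provide. The interactive
// and non-interactive renderers constructed later in this file satisfy the same contract.
type nopRenderer struct{}

var _ progressRenderer = nopRenderer{}

func (nopRenderer) Close() error                                    { return nil }
func (nopRenderer) initializeDisplay(display *ProgressDisplay)      {}
func (nopRenderer) tick()                                           {}
func (nopRenderer) rowUpdated(row Row)                              {}
func (nopRenderer) systemMessage(payload engine.StdoutEventPayload) {}
func (nopRenderer) done()                                           {}
func (nopRenderer) println(line string)                             {}
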
// ProgressDisplay organizes all the information needed for a dynamically updated "progress" view of an update.
type ProgressDisplay struct {
	// eventMutex is used to synchronize access to eventUrnToResourceRow, which is accessed
	// by the treeRenderer
	eventMutex sync.RWMutex

	// stopwatchMutex is used to synchronize access to opStopwatch, which is used to track the times
	// taken to perform actions on resources.
	stopwatchMutex sync.RWMutex

	opts Options

	renderer progressRenderer

	// action is the kind of action (preview, update, refresh, etc) being performed.
	action apitype.UpdateKind
	// stack is the stack this progress pertains to.
	stack tokens.StackName
	// proj is the project this progress pertains to.
	proj tokens.PackageName

	// Whether or not we're previewing. We don't know what we are actually doing until
	// we get the initial 'prelude' event.
	//
	// This flag is only used to adjust how we describe what's going on to the user.
	// i.e. if we're previewing we say things like "Would update" instead of "Updating".
	isPreview bool

	// The urn of the stack.
	stackUrn resource.URN

	// Whether or not we've seen outputs for the stack yet.
	seenStackOutputs bool

	// The summary event from the engine. If we get this, we'll print this after all
	// normal resource events are heard. That way we don't interfere with all the progress
	// messages we're outputting for them.
	summaryEventPayload *engine.SummaryEventPayload

	// Any system events we've received. They will be printed at the bottom of all the status rows.
	systemEventPayloads []engine.StdoutEventPayload

	// Used to record the order that rows are created in. That way, when we present in a tree, we
	// can keep things ordered so they will not jump around.
	displayOrderCounter int

	// What tick we're currently on. Used to determine the number of ellipses to concat to
	// a status message to help indicate that things are still working.
	currentTick int

	headerRow    Row
	resourceRows []ResourceRow

	// A mapping from each resource URN we are told about to its current status.
	eventUrnToResourceRow map[resource.URN]ResourceRow

	// Remember if we're a terminal or not. In a terminal we get a little bit fancier.
	// For example, we'll go back and update previous status messages to make sure things
	// align. We don't need to do that in non-terminal situations.
	isTerminal bool

	// If all progress messages are done and we can print out the final display.
	done bool

	// True if one or more resource operations have failed.
	failed bool

	// The column that the suffix should be added to.
	suffixColumn int

	// The list of suffixes to rotate through.
	suffixesArray []string

	// Structure that tracks the time taken to perform an action on a resource.
	opStopwatch opStopwatch

	// Indicates whether we already printed the loading policy packs message.
	shownPolicyLoadEvent bool
}

type opStopwatch struct {
	start map[resource.URN]time.Time
	end   map[resource.URN]time.Time
}

func newOpStopwatch() opStopwatch {
	return opStopwatch{
		start: map[resource.URN]time.Time{},
		end:   map[resource.URN]time.Time{},
	}
}

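// Illustrative sketch (not part of the original file): computing how long an operation on a
// resource took, assuming the display has recorded both a start and an end time for its URN.
// The helper and its name are hypothetical.
func opDuration(sw opStopwatch, urn resource.URN) (time.Duration, bool) {
	start, hasStart := sw.start[urn]
	end, hasEnd := sw.end[urn]
	if !hasStart || !hasEnd {
		return 0, false
	}
	return end.Sub(start), true
}
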
// policyPayloads is a collection of policy violation events for a single resource.
var policyPayloads []engine.PolicyViolationEventPayload

// getEventUrnAndMetadata returns the resource URN associated with an event, or the empty URN if this is not an
// event that has a URN. If this is also a 'step' event, then this will return the step metadata as
// well.
func getEventUrnAndMetadata(event engine.Event) (resource.URN, *engine.StepEventMetadata) {
	//nolint:exhaustive // Only a subset of events have urns.
	switch event.Type {
	case engine.ResourcePreEvent:
		payload := event.Payload().(engine.ResourcePreEventPayload)
		return payload.Metadata.URN, &payload.Metadata
	case engine.ResourceOutputsEvent:
		payload := event.Payload().(engine.ResourceOutputsEventPayload)
		return payload.Metadata.URN, &payload.Metadata
	case engine.ResourceOperationFailed:
		payload := event.Payload().(engine.ResourceOperationFailedPayload)
		return payload.Metadata.URN, &payload.Metadata
	case engine.DiagEvent:
		return event.Payload().(engine.DiagEventPayload).URN, nil
	case engine.PolicyRemediationEvent:
		return event.Payload().(engine.PolicyRemediationEventPayload).ResourceURN, nil
	case engine.PolicyViolationEvent:
		return event.Payload().(engine.PolicyViolationEventPayload).ResourceURN, nil
	default:
		return "", nil
	}
}

// ShowProgressEvents displays the engine events using a Docker-style progress view.
func ShowProgressEvents(op string, action apitype.UpdateKind, stack tokens.StackName, proj tokens.PackageName,
	permalink string, events <-chan engine.Event, done chan<- bool, opts Options, isPreview bool,
) {
	stdin := opts.Stdin
	if stdin == nil {
		stdin = os.Stdin
	}
	stdout := opts.Stdout
	if stdout == nil {
		stdout = os.Stdout
	}
	stderr := opts.Stderr
	if stderr == nil {
		stderr = os.Stderr
	}

	isInteractive, term := opts.IsInteractive, opts.term
	if isInteractive && term == nil {
		raw := runtime.GOOS != "windows"
		t, err := terminal.Open(stdin, stdout, raw)
		if err != nil {
			_, err = fmt.Fprintf(stderr, "Failed to open terminal; treating display as non-interactive (%v)\n", err)
			contract.IgnoreError(err)
			isInteractive = false
		} else {
			term = t
		}
	}

	var renderer progressRenderer
	if isInteractive {
		printPermalinkInteractive(term, opts, permalink)
		renderer = newInteractiveRenderer(term, permalink, opts)
	} else {
		printPermalinkNonInteractive(stdout, opts, permalink)
		renderer = newNonInteractiveRenderer(stdout, op, opts)
	}

	display := &ProgressDisplay{
		action:                action,
		isPreview:             isPreview,
		isTerminal:            isInteractive,
		opts:                  opts,
		renderer:              renderer,
		stack:                 stack,
		proj:                  proj,
		eventUrnToResourceRow: make(map[resource.URN]ResourceRow),
		suffixColumn:          int(statusColumn),
		suffixesArray:         []string{"", ".", "..", "..."},
		displayOrderCounter:   1,
		opStopwatch:           newOpStopwatch(),
	}

	renderer.initializeDisplay(display)

	ticker := time.NewTicker(1 * time.Second)
	if opts.DeterministicOutput {
		ticker.Stop()
	}
	display.processEvents(ticker, events)
	contract.IgnoreClose(display.renderer)
	ticker.Stop()

	// let our caller know we're done.
	close(done)
}

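// Illustrative sketch (not part of the original file): wiring ShowProgressEvents up to an
// event source. The wrapper function and its name are hypothetical. The caller typically
// runs the display on its own goroutine, feeds engine events into the channel, and then
// waits on done, which ShowProgressEvents closes once it has finished processing events.
func runProgressDisplay(op string, action apitype.UpdateKind, stack tokens.StackName, proj tokens.PackageName,
	events <-chan engine.Event, opts Options, isPreview bool,
) {
	done := make(chan bool)
	go ShowProgressEvents(op, action, stack, proj, "" /* permalink */, events, done, opts, isPreview)
	<-done // unblocked when the display calls close(done)
}
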
func (display *ProgressDisplay) println(line string) {
|
Decouple persist and display events (#15709)
<!---
Thanks so much for your contribution! If this is your first time
contributing, please ensure that you have read the
[CONTRIBUTING](https://github.com/pulumi/pulumi/blob/master/CONTRIBUTING.md)
documentation.
-->
# Description
Retry #15529 with fix for the issue that required the revert in #15705
This removes a scenario where events could not be persisted to the cloud
because they were waiting on the same event being displayed
Instead of rendering the tree every time a row is updated, instead, this
renders when the display actually happens in the the `frame` call. The
renderer instead simply marks itself as dirty in the `rowUpdated`,
`tick`, `systemMessage` and `done` methods and relies on the frame being
redrawn on a 60Hz timer (the `done` method calls `frame` explicitly).
This makes the rowUpdated call exceedingly cheap (it simply marks the
treeRenderer as dirty) which allows the ProgressDisplay instance to
service the display events faster, which prevents it from blocking the
persist events.
This requires a minor refactor to ensure that the display object is
available in the frame method
Because the treeRenderer is calling back into the ProgressDisplay object
in a goroutine, the ProgressDisplay object needs to be thread safe, so a
read-write mutex is added to protect the `eventUrnToResourceRow` map.
The unused `urnToID` map was removed in passing.
## Impact
There are scenarios where the total time taken for an operation was
dominated by servicing the events.
This reduces the time for a complex (~2000 resources) `pulumi preview`
from 1m45s to 45s
For a `pulumi up` with `-v=11` on the same stack, where all the
register resource spans were completing in 1h6m and the
postEngineEventBatch events were taking 3h45m, this PR removes the time
impact of reporting the events (greatly inflated by the high verbosity
setting) and the operation takes the anticipated 1h6m
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. -->
Fixes #15668
This was happening because the renderer was being marked dirty once per
second in a tick event, which caused frame to redraw. There is a check
in the render method that `display.headerRow` is not nil that was
previously used to prevent rendering when no events had been added. This
check is now part of the `markDirty` logic.
Some of the tests needed to be updated to make this work and have also
been refactored.
## Checklist
- [X] I have run `make tidy` to update any new dependencies
- [X] I have run `make lint` to verify my code passes the lint check
- [ ] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [X] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [ ] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrants bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
---------
Co-authored-by: Paul Roberts <proberts@pulumi.com>
2024-03-18 16:53:13 +00:00
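As a rough illustration of the locking discipline this PR describes (a read-write mutex guarding the URN-to-row map, so the renderer goroutine can read it while event processing writes to it), here is a hedged, self-contained sketch. The `eventMutex` and `eventUrnToResourceRow` names echo the fields used in the code below, but the surrounding type and helpers are hypothetical.
```go
package displaysketch

import "sync"

type resourceRow struct {
	status string
}

// display guards its URN-to-row map with a read-write mutex: the event loop
// takes the write lock when it mutates rows, while the renderer goroutine only
// needs the read lock to take a snapshot.
type display struct {
	eventMutex            sync.RWMutex
	eventUrnToResourceRow map[string]*resourceRow
}

// rowUpdated runs on the event-processing goroutine and takes the write lock.
func (d *display) rowUpdated(urn, status string) {
	d.eventMutex.Lock()
	defer d.eventMutex.Unlock()
	if d.eventUrnToResourceRow == nil {
		d.eventUrnToResourceRow = map[string]*resourceRow{}
	}
	d.eventUrnToResourceRow[urn] = &resourceRow{status: status}
}

// snapshotRows runs on the renderer goroutine and only takes the read lock,
// so reads can proceed concurrently with each other.
func (d *display) snapshotRows() []*resourceRow {
	d.eventMutex.RLock()
	defer d.eventMutex.RUnlock()
	rows := make([]*resourceRow, 0, len(d.eventUrnToResourceRow))
	for _, row := range d.eventUrnToResourceRow {
		rows = append(rows, row)
	}
	return rows
}
```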
|
|
|
display.renderer.println(line)
|
2018-04-17 06:41:00 +00:00
|
|
|
}
|
|
|
|
|
2018-04-24 18:13:22 +00:00
|
|
|
type treeNode struct {
|
|
|
|
row Row
|
|
|
|
|
|
|
|
colorizedColumns []string
|
|
|
|
colorizedSuffix string
|
|
|
|
|
|
|
|
childNodes []*treeNode
|
|
|
|
}
|
|
|
|
|
|
|
|
func (display *ProgressDisplay) getOrCreateTreeNode(
|
2023-03-03 16:36:39 +00:00
|
|
|
result *[]*treeNode, urn resource.URN, row ResourceRow, urnToTreeNode map[resource.URN]*treeNode,
|
|
|
|
) *treeNode {
|
2018-04-24 18:13:22 +00:00
|
|
|
node, has := urnToTreeNode[urn]
|
|
|
|
if has {
|
|
|
|
return node
|
|
|
|
}
|
|
|
|
|
|
|
|
node = &treeNode{
|
|
|
|
row: row,
|
|
|
|
colorizedColumns: row.ColorizedColumns(),
|
|
|
|
colorizedSuffix: row.ColorizedSuffix(),
|
|
|
|
}
|
|
|
|
|
|
|
|
urnToTreeNode[urn] = node
|
|
|
|
|
2018-06-18 23:03:26 +00:00
|
|
|
// if it's not the root item, attach it as a child node to an appropriate parent item.
|
2018-04-24 18:13:22 +00:00
|
|
|
if urn != "" && urn != display.stackUrn {
|
|
|
|
var parentURN resource.URN
|
|
|
|
|
|
|
|
res := row.Step().Res
|
|
|
|
if res != nil {
|
|
|
|
parentURN = res.Parent
|
|
|
|
}
|
|
|
|
|
|
|
|
parentRow, hasParentRow := display.eventUrnToResourceRow[parentURN]
|
2018-06-18 23:03:26 +00:00
|
|
|
|
2018-04-24 18:13:22 +00:00
|
|
|
if !hasParentRow {
|
2018-06-18 23:03:26 +00:00
|
|
|
// If we haven't heard about this node's parent, then just parent it to the stack.
|
|
|
|
// Note: getting the parent row for the stack-urn will always succeed as we ensure that
|
|
|
|
// such a row is always there in ensureHeaderAndStackRows
|
2018-04-24 18:13:22 +00:00
|
|
|
parentURN = display.stackUrn
|
2018-06-18 23:03:26 +00:00
|
|
|
parentRow = display.eventUrnToResourceRow[parentURN]
|
2018-04-24 18:13:22 +00:00
|
|
|
}
|
|
|
|
|
2018-06-18 23:03:26 +00:00
|
|
|
parentNode := display.getOrCreateTreeNode(result, parentURN, parentRow, urnToTreeNode)
|
|
|
|
parentNode.childNodes = append(parentNode.childNodes, node)
|
|
|
|
return node
|
2018-04-24 18:13:22 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
*result = append(*result, node)
|
|
|
|
return node
|
|
|
|
}
|
|
|
|
|
|
|
|
func (display *ProgressDisplay) generateTreeNodes() []*treeNode {
|
2024-03-18 16:53:13 +00:00
|
|
|
// We take the reader lock here because this is called from the renderer and reads from
|
|
|
|
// the eventUrnToResourceRow map
|
2024-05-06 16:28:18 +00:00
|
|
|
display.eventMutex.RLock()
|
|
|
|
defer display.eventMutex.RUnlock()
|
2024-03-18 16:53:13 +00:00
|
|
|
|
2018-04-24 18:13:22 +00:00
|
|
|
result := []*treeNode{}
|
|
|
|
|
|
|
|
result = append(result, &treeNode{
|
|
|
|
row: display.headerRow,
|
|
|
|
colorizedColumns: display.headerRow.ColorizedColumns(),
|
|
|
|
})
|
2018-04-10 19:03:11 +00:00
|
|
|
|
2018-04-24 18:13:22 +00:00
|
|
|
urnToTreeNode := make(map[resource.URN]*treeNode)
|
|
|
|
for urn, row := range display.eventUrnToResourceRow {
|
|
|
|
display.getOrCreateTreeNode(&result, urn, row, urnToTreeNode)
|
|
|
|
}
|
|
|
|
|
|
|
|
return result
|
|
|
|
}
|
|
|
|
|
|
|
|
func (display *ProgressDisplay) addIndentations(treeNodes []*treeNode, isRoot bool, indentation string) {
|
|
|
|
childIndentation := indentation + "│ "
|
|
|
|
lastChildIndentation := indentation + " "
|
2018-04-12 17:56:39 +00:00
|
|
|
|
2018-04-24 18:13:22 +00:00
|
|
|
for i, node := range treeNodes {
|
|
|
|
isLast := i == len(treeNodes)-1
|
|
|
|
|
|
|
|
prefix := indentation
|
|
|
|
|
|
|
|
var nestedIndentation string
|
|
|
|
if !isRoot {
|
|
|
|
if isLast {
|
|
|
|
prefix += "└─ "
|
|
|
|
nestedIndentation = lastChildIndentation
|
|
|
|
} else {
|
|
|
|
prefix += "├─ "
|
|
|
|
nestedIndentation = childIndentation
|
2018-04-15 19:47:53 +00:00
|
|
|
}
|
2018-04-24 18:13:22 +00:00
|
|
|
}
|
2018-04-12 17:56:39 +00:00
|
|
|
|
2018-04-24 18:13:22 +00:00
|
|
|
node.colorizedColumns[typeColumn] = prefix + node.colorizedColumns[typeColumn]
|
|
|
|
display.addIndentations(node.childNodes, false /*isRoot*/, nestedIndentation)
|
|
|
|
}
|
|
|
|
}
|
2018-04-17 06:41:00 +00:00
|
|
|
|
2018-04-24 18:13:22 +00:00
|
|
|
func (display *ProgressDisplay) convertNodesToRows(
|
2023-03-03 16:36:39 +00:00
|
|
|
nodes []*treeNode, maxSuffixLength int, rows *[][]string, maxColumnLengths *[]int,
|
|
|
|
) {
|
2018-04-24 18:13:22 +00:00
|
|
|
for _, node := range nodes {
|
|
|
|
if len(*maxColumnLengths) == 0 {
|
|
|
|
*maxColumnLengths = make([]int, len(node.colorizedColumns))
|
|
|
|
}
|
|
|
|
|
|
|
|
colorizedColumns := make([]string, len(node.colorizedColumns))
|
|
|
|
|
|
|
|
for i, colorizedColumn := range node.colorizedColumns {
|
2022-10-31 16:00:20 +00:00
|
|
|
columnWidth := colors.MeasureColorizedString(colorizedColumn)
|
2018-04-24 18:13:22 +00:00
|
|
|
|
|
|
|
if i == display.suffixColumn {
|
|
|
|
columnWidth += maxSuffixLength
|
|
|
|
colorizedColumns[i] = colorizedColumn + node.colorizedSuffix
|
|
|
|
} else {
|
|
|
|
colorizedColumns[i] = colorizedColumn
|
|
|
|
}
|
|
|
|
|
|
|
|
if columnWidth > (*maxColumnLengths)[i] {
|
|
|
|
(*maxColumnLengths)[i] = columnWidth
|
2018-04-15 19:47:53 +00:00
|
|
|
}
|
|
|
|
}
|
2018-04-24 18:13:22 +00:00
|
|
|
|
|
|
|
*rows = append(*rows, colorizedColumns)
|
|
|
|
|
|
|
|
display.convertNodesToRows(node.childNodes, maxSuffixLength, rows, maxColumnLengths)
|
2018-04-12 17:56:39 +00:00
|
|
|
}
|
2018-04-23 01:10:19 +00:00
|
|
|
}
|
|
|
|
|
2018-04-24 18:13:22 +00:00
|
|
|
type sortable []*treeNode
|
|
|
|
|
|
|
|
func (sortable sortable) Len() int {
|
|
|
|
return len(sortable)
|
|
|
|
}
|
|
|
|
|
|
|
|
func (sortable sortable) Less(i, j int) bool {
|
|
|
|
return sortable[i].row.DisplayOrderIndex() < sortable[j].row.DisplayOrderIndex()
|
|
|
|
}
|
|
|
|
|
|
|
|
func (sortable sortable) Swap(i, j int) {
|
|
|
|
sortable[i], sortable[j] = sortable[j], sortable[i]
|
|
|
|
}
|
|
|
|
|
|
|
|
func sortNodes(nodes []*treeNode) {
|
|
|
|
sort.Sort(sortable(nodes))
|
|
|
|
|
|
|
|
for _, node := range nodes {
|
|
|
|
childNodes := node.childNodes
|
|
|
|
sortNodes(childNodes)
|
|
|
|
node.childNodes = childNodes
|
2018-04-23 01:10:19 +00:00
|
|
|
}
|
2018-04-24 18:13:22 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
func (display *ProgressDisplay) filterOutUnnecessaryNodesAndSetDisplayTimes(nodes []*treeNode) []*treeNode {
|
|
|
|
result := []*treeNode{}
|
|
|
|
|
|
|
|
for _, node := range nodes {
|
|
|
|
node.childNodes = display.filterOutUnnecessaryNodesAndSetDisplayTimes(node.childNodes)
|
|
|
|
|
|
|
|
if node.row.HideRowIfUnnecessary() && len(node.childNodes) == 0 {
|
|
|
|
continue
|
|
|
|
}
|
2018-04-12 17:56:39 +00:00
|
|
|
|
2018-04-24 18:13:22 +00:00
|
|
|
display.displayOrderCounter++
|
|
|
|
node.row.SetDisplayOrderIndex(display.displayOrderCounter)
|
|
|
|
result = append(result, node)
|
2018-04-23 01:10:19 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return result
|
2018-04-12 17:56:39 +00:00
|
|
|
}
|
2018-04-10 19:03:11 +00:00
|
|
|
|
2018-10-05 20:03:30 +00:00
|
|
|
func removeInfoColumnIfUnneeded(rows [][]string) {
|
|
|
|
// If there have been no info messages, then don't print out the info column header.
|
|
|
|
for i := 1; i < len(rows); i++ {
|
|
|
|
row := rows[i]
|
|
|
|
if row[len(row)-1] != "" {
|
|
|
|
return
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
firstRow := rows[0]
|
|
|
|
firstRow[len(firstRow)-1] = ""
|
|
|
|
}
|
|
|
|
|
2018-04-12 17:56:39 +00:00
|
|
|
// Performs all the work at the end once we've heard about the last message from the engine.
|
|
|
|
// Specifically, this will update the status messages for any resources, and will also then
|
|
|
|
// print out all final diagnostics, and finally will print out the summary.
|
|
|
|
func (display *ProgressDisplay) processEndSteps() {
|
2024-03-18 16:53:13 +00:00
|
|
|
// Take the read lock here because we are reading from the eventUrnToResourceRow map
|
2024-05-06 16:28:18 +00:00
|
|
|
display.eventMutex.RLock()
|
|
|
|
defer display.eventMutex.RUnlock()
|
2024-03-18 16:53:13 +00:00
|
|
|
|
2018-08-02 04:48:14 +00:00
|
|
|
// Figure out the rows that are currently in progress.
|
2022-10-31 14:59:14 +00:00
|
|
|
var inProgressRows []ResourceRow
|
|
|
|
if !display.isTerminal {
|
|
|
|
for _, v := range display.eventUrnToResourceRow {
|
|
|
|
if !v.IsDone() {
|
|
|
|
inProgressRows = append(inProgressRows, v)
|
|
|
|
}
|
2018-08-02 04:48:14 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
// Transition the display to the 'done' state. This will transitively cause all
|
2018-08-02 04:48:14 +00:00
|
|
|
// rows to become done.
|
|
|
|
display.done = true
|
|
|
|
|
|
|
|
// Now print out all those rows that were in progress. They will now be 'done'
|
|
|
|
// since the display was marked 'done'.
|
|
|
|
if !display.isTerminal {
|
Add display to the engine tests (#16050)
We want to add more test coverage to the display code. The best way to
do that is to add it to the engine tests, that already cover most of the
pulumi functionality.
It's probably not really possible to review all of the output, but at
least it gives us a baseline, which we can work with.
There are a couple of tests that are flaky for reasons I don't quite
understand yet. I marked them as skipped and we can look at them later.
I'd rather get the baseline tests in sooner rather than spend a
bunch of time looking at that. The output differences also seem very
minor, so not super concerning.
The biggest remaining issue is that this doesn't interact well with the
Chdir we're doing in the engine. We could either pass the CWD through,
or just try to get rid of that Chdir. So this should only be merged
after https://github.com/pulumi/pulumi/pull/15607.
I've tried to split this into a few commits, separating out adding the
testdata, so it's hopefully a little easier to review, even though the
PR is still quite large.
One other thing to note is that we're comparing that the output has all
the same lines, and not that it is exactly the same. Because of how the
engine is implemented, there's a bunch of race conditions otherwise,
that would make us have to skip a bunch of tests, just because e.g.
resource A is sometimes deleted before resource B and sometimes it's the
other way around.
The biggest downside of that is that running with `PULUMI_ACCEPT` will
produce a diff even when there are no changes. Hopefully we won't have
to run that way too often though, so it might not be a huge issue?
---------
Co-authored-by: Fraser Waters <fraser@pulumi.com>
2024-05-13 07:18:25 +00:00
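The "same lines, not exact order" comparison described above could be sketched as a small helper like the following. This is an illustrative utility written for this explanation, not the actual helper used by the engine tests.
```go
package displaytest

import (
	"sort"
	"strings"
)

// sameLines reports whether two blocks of display output contain the same
// multiset of lines, ignoring ordering differences caused by nondeterministic
// event interleaving (e.g. resource A sometimes finishing before resource B).
func sameLines(expected, actual string) bool {
	normalize := func(s string) []string {
		lines := strings.Split(strings.TrimRight(s, "\n"), "\n")
		sort.Strings(lines)
		return lines
	}
	e, a := normalize(expected), normalize(actual)
	if len(e) != len(a) {
		return false
	}
	for i := range e {
		if e[i] != a[i] {
			return false
		}
	}
	return true
}
```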
|
|
|
if display.opts.DeterministicOutput {
|
|
|
|
sort.Slice(inProgressRows, func(i, j int) bool {
|
|
|
|
if inProgressRows[i].Step().Op == "same" && inProgressRows[i].Step().URN == "" {
|
|
|
|
// This is the root stack event. Always sort it last
|
|
|
|
return false
|
|
|
|
}
|
|
|
|
if inProgressRows[j].Step().Op == "same" && inProgressRows[j].Step().URN == "" {
|
|
|
|
// This is the root stack event. Always sort it last
|
|
|
|
return true
|
|
|
|
}
|
|
|
|
if inProgressRows[i].Step().Res == nil {
|
|
|
|
return false
|
|
|
|
}
|
|
|
|
if inProgressRows[j].Step().Res == nil {
|
|
|
|
return true
|
|
|
|
}
|
|
|
|
return inProgressRows[i].Step().Res.URN < inProgressRows[j].Step().Res.URN
|
|
|
|
})
|
|
|
|
}
|
2018-08-02 04:48:14 +00:00
|
|
|
for _, v := range inProgressRows {
|
2024-03-18 16:53:13 +00:00
|
|
|
display.renderer.rowUpdated(v)
|
2018-04-10 19:03:11 +00:00
|
|
|
}
|
2018-04-12 17:56:39 +00:00
|
|
|
}
|
2018-04-10 19:03:11 +00:00
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
// Now refresh everything. This ensures that we go back and remove things like the diagnostic
|
2018-04-15 19:47:53 +00:00
|
|
|
// messages from a status message (since we're going to print them all below). Note, this will
|
2020-02-13 23:16:46 +00:00
|
|
|
// only do something in a terminal. This is what we want, because if we're not in a terminal we
|
2018-04-15 19:47:53 +00:00
|
|
|
// don't really want to reprint any finished items we've already printed.
|
2024-03-18 16:53:13 +00:00
|
|
|
display.renderer.done()
|
2018-04-14 05:26:01 +00:00
|
|
|
|
2023-10-09 18:31:17 +00:00
|
|
|
// Render the policies section; this will print all policy packs that ran plus any specific
|
|
|
|
// policies that led to violations or remediations. This comes before diagnostics since policy
|
|
|
|
// violations yield failures and it is better to see those in advance of the failure message.
|
|
|
|
wroteMandatoryPolicyViolations := display.printPolicies()
|
|
|
|
|
|
|
|
// Render the actual diagnostics streams (warnings, errors, etc).
|
2023-06-23 21:21:13 +00:00
|
|
|
hasError := display.printDiagnostics()
|
2023-10-09 18:31:17 +00:00
|
|
|
|
|
|
|
// Print output variables; this comes last, prior to the summary, since these are the final
|
|
|
|
// outputs after having run all of the above.
|
2020-02-13 23:16:46 +00:00
|
|
|
display.printOutputs()
|
2023-10-09 18:31:17 +00:00
|
|
|
|
|
|
|
// Print a summary of resource operations unless there were mandatory policy violations.
|
|
|
|
// In that case, we want to abruptly terminate the display so as not to confuse.
|
2023-06-20 17:20:28 +00:00
|
|
|
if !wroteMandatoryPolicyViolations {
|
2023-06-23 21:21:13 +00:00
|
|
|
display.printSummary(hasError)
|
2020-02-13 23:16:46 +00:00
|
|
|
}
|
|
|
|
}
|
2018-04-12 17:56:39 +00:00
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
// printDiagnostics prints a new "Diagnostics:" section with all of the diagnostics grouped by
|
2023-06-23 21:21:13 +00:00
|
|
|
// resource. If no diagnostics were emitted, prints nothing. Returns whether an error was encountered.
|
2020-02-13 23:16:46 +00:00
|
|
|
func (display *ProgressDisplay) printDiagnostics() bool {
|
2023-06-23 21:21:13 +00:00
|
|
|
hasError := false
|
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
// Since we display diagnostic information eagerly, we need to keep track of the first
|
|
|
|
// time we wrote some output so we don't inadvertently print the header twice.
|
2018-04-14 05:26:01 +00:00
|
|
|
wroteDiagnosticHeader := false
|
2024-05-13 07:18:25 +00:00
|
|
|
|
|
|
|
eventRows := make([]ResourceRow, 0, len(display.eventUrnToResourceRow))
|
2018-04-15 19:47:53 +00:00
|
|
|
for _, row := range display.eventUrnToResourceRow {
|
2024-05-13 07:18:25 +00:00
|
|
|
eventRows = append(eventRows, row)
|
|
|
|
}
|
|
|
|
if display.opts.DeterministicOutput {
|
|
|
|
sort.Slice(eventRows, func(i, j int) bool {
|
|
|
|
if eventRows[i].Step().Op == "same" && eventRows[i].Step().URN == "" {
|
|
|
|
// This is the root stack event. Always sort it last
|
|
|
|
return false
|
|
|
|
}
|
|
|
|
if eventRows[j].Step().Op == "same" && eventRows[j].Step().URN == "" {
|
|
|
|
// This is the root stack event. Always sort it last
|
|
|
|
return true
|
|
|
|
}
|
|
|
|
if eventRows[i].Step().Res == nil {
|
|
|
|
return false
|
|
|
|
}
|
|
|
|
if eventRows[j].Step().Res == nil {
|
|
|
|
return true
|
|
|
|
}
|
|
|
|
return eventRows[i].Step().Res.URN < eventRows[j].Step().Res.URN
|
|
|
|
})
|
|
|
|
}
|
|
|
|
|
|
|
|
for _, row := range eventRows {
|
2020-02-13 23:16:46 +00:00
|
|
|
// The header for the diagnostics grouped by resource, e.g. "aws:apigateway:RestApi (accountsApi):"
|
2018-09-25 17:58:22 +00:00
|
|
|
wroteResourceHeader := false
|
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
// Each row in the display corresponded with a resource, and that resource could have emitted
|
|
|
|
// diagnostics to various streams.
|
2018-05-07 22:11:52 +00:00
|
|
|
for id, payloads := range row.DiagInfo().StreamIDToDiagPayloads {
|
2020-02-13 23:16:46 +00:00
|
|
|
if len(payloads) == 0 {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
|
|
|
if id != 0 {
|
|
|
|
// For the non-default stream, merge all the messages from the stream into a single
|
|
|
|
// message.
|
|
|
|
p := display.mergeStreamPayloadsToSinglePayload(payloads)
|
|
|
|
payloads = []engine.DiagEventPayload{p}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Did we write any diagnostic information for the resource x stream?
|
|
|
|
wrote := false
|
|
|
|
for _, v := range payloads {
|
|
|
|
if v.Ephemeral {
|
|
|
|
continue
|
2018-04-14 05:26:01 +00:00
|
|
|
}
|
2018-04-10 19:03:11 +00:00
|
|
|
|
2023-06-23 21:21:13 +00:00
|
|
|
if v.Severity == diag.Error {
|
|
|
|
// An error occurred and the display should consider this a failure.
|
|
|
|
hasError = true
|
|
|
|
}
|
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
msg := display.renderProgressDiagEvent(v, true /*includePrefix:*/)
|
|
|
|
|
|
|
|
lines := splitIntoDisplayableLines(msg)
|
|
|
|
if len(lines) == 0 {
|
|
|
|
continue
|
Make a smattering of CLI UX improvements
Since I was digging around over the weekend after the change to move
away from light black, and the impact it had on less important
information showing more prominently than it used to, I took a step
back and did a deeper tidying up of things. Another side goal of this
exercise was to be a little more respectful of terminal width; when
we could say things with fewer words, I did so.
* Stylize the preview/update summary differently, so that it stands
out as a section. Also highlight the total changes with bold -- it
turns out this has a similar effect to the bright white colorization,
just without the negative effects on e.g. white terminals.
* Eliminate some verbosity in the phrasing of change summaries.
* Make all heading sections stylized consistently. This includes
the color (bright magenta) and the vertical spacing (always a newline
separating headings). We were previously inconsistent on this (e.g.,
outputs were under "---outputs---"). Now the headings are:
Previewing (etc), Diagnostics, Outputs, Resources, Duration, and Permalink.
* Fix an issue where we'd parent things to "global" until the stack
object later showed up. Now we'll simply mock up a stack resource.
* Don't show messages like "no change" or "unchanged". Prior to the
light black removal, these faded into the background of the terminal.
Now they just clutter up the display. Similar to the elision of "*"
for OpSames in a prior commit, just leave these out. Now anything
that's written is actually a meaningful status for the user to note.
* Don't show the "3 info messages," etc. summaries in the Info column
while an update is ongoing. Instead, just show the latest line. This
is more respectful of width -- I often find that the important
messages scroll off the right of my screen before this change.
For discussion:
- I actually wonder if we should eliminate the summary
altogether and always just show the latest line. Or even
blank it out. The summary feels better suited for the
Diagnostics section, and the Status concisely tells us
how a resource's update ended up (failed, succeeded, etc).
- Similarly, I question the idea of showing only the "worst"
message. I'd vote for always showing the latest, and again
leaving it to the Status column for concisely telling the
user about the final state a resource ended up in.
* Stop prepending "info: " to every stdout/stderr message. It adds
no value, clutters up the display, and worsens horizontal usage.
* Lessen the verbosity of update headline messages, so that instead
of e.g. "Previewing update of stack 'x':", we just say
"Previewing update (x):".
* Eliminate vertical whitespace in the Diagnostics section. Every
independent console.out previously was separated by an entire newline,
which made the section look cluttered to my eyes. These are just
streams of logs, there's no reason for the extra newlines.
* Colorize the resource headers in the Diagnostic section light blue.
Note that this will change various test baselines, which I will
update next. I didn't want those in the same commit.
2018-09-24 15:31:19 +00:00
|
|
|
}
|
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
// If we haven't printed the Diagnostics header, do so now.
|
|
|
|
if !wroteDiagnosticHeader {
|
|
|
|
wroteDiagnosticHeader = true
|
2022-10-31 14:59:14 +00:00
|
|
|
display.println(colors.SpecHeadline + "Diagnostics:" + colors.Reset)
|
2020-02-13 23:16:46 +00:00
|
|
|
}
|
|
|
|
// If we haven't printed the header for the resource, do so now.
|
|
|
|
if !wroteResourceHeader {
|
|
|
|
wroteResourceHeader = true
|
|
|
|
columns := row.ColorizedColumns()
|
2022-10-31 14:59:14 +00:00
|
|
|
display.println(
|
|
|
|
" " + colors.BrightBlue + columns[typeColumn] + " (" + columns[nameColumn] + "):" + colors.Reset)
|
2020-02-13 23:16:46 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
for _, line := range lines {
|
|
|
|
line = strings.TrimRightFunc(line, unicode.IsSpace)
|
2022-10-31 14:59:14 +00:00
|
|
|
display.println(" " + line)
|
2018-04-14 05:26:01 +00:00
|
|
|
}
|
2018-04-10 19:03:11 +00:00
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
wrote = true
|
|
|
|
}
|
2019-09-18 16:49:13 +00:00
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
if wrote {
|
2022-10-31 14:59:14 +00:00
|
|
|
display.println("")
|
2018-05-05 19:54:57 +00:00
|
|
|
}
|
2020-02-13 23:16:46 +00:00
|
|
|
}
|
2018-08-24 22:36:55 +00:00
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
}
|
2023-06-23 21:21:13 +00:00
|
|
|
return hasError
|
2020-02-13 23:16:46 +00:00
|
|
|
}
|
|
|
|
|
2023-10-09 18:31:17 +00:00
|
|
|
type policyPackSummary struct {
|
2024-03-04 14:02:25 +00:00
|
|
|
HasCloudPack bool
|
|
|
|
LocalPaths []string
|
2023-10-09 18:31:17 +00:00
|
|
|
ViolationEvents []engine.PolicyViolationEventPayload
|
|
|
|
RemediationEvents []engine.PolicyRemediationEventPayload
|
|
|
|
}
|
|
|
|
|
|
|
|
func (display *ProgressDisplay) printPolicies() bool {
|
|
|
|
if display.summaryEventPayload == nil || len(display.summaryEventPayload.PolicyPacks) == 0 {
|
2020-02-13 23:16:46 +00:00
|
|
|
return false
|
|
|
|
}
|
2023-10-09 18:31:17 +00:00
|
|
|
|
|
|
|
var hadMandatoryViolations bool
|
|
|
|
display.println(display.opts.Color.Colorize(colors.SpecHeadline + "Policies:" + colors.Reset))
|
|
|
|
|
|
|
|
// Print policy packs that were run and any violations or remediations associated with them.
|
|
|
|
// Gather up all policy packs and their associated violation and remediation events.
|
|
|
|
policyPackInfos := make(map[string]policyPackSummary)
|
|
|
|
|
|
|
|
// First initialize empty lists for all policy packs just to ensure they show if no events are found.
|
|
|
|
for name, version := range display.summaryEventPayload.PolicyPacks {
|
|
|
|
var summary policyPackSummary
|
|
|
|
baseName, path := engine.GetLocalPolicyPackInfoFromEventName(name)
|
2024-03-04 14:02:25 +00:00
|
|
|
var key string
|
2023-10-09 18:31:17 +00:00
|
|
|
if baseName != "" {
|
2024-03-04 14:02:25 +00:00
|
|
|
key = fmt.Sprintf("%s@v%s", baseName, version)
|
|
|
|
if s, has := policyPackInfos[key]; has {
|
|
|
|
summary = s
|
|
|
|
summary.LocalPaths = append(summary.LocalPaths, path)
|
|
|
|
} else {
|
|
|
|
summary.LocalPaths = []string{path}
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
key = fmt.Sprintf("%s@v%s", name, version)
|
|
|
|
if s, has := policyPackInfos[key]; has {
|
|
|
|
summary = s
|
|
|
|
summary.HasCloudPack = true
|
|
|
|
}
|
2020-02-13 23:16:46 +00:00
|
|
|
}
|
2024-03-04 14:02:25 +00:00
|
|
|
policyPackInfos[key] = summary
|
2023-10-09 18:31:17 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
// Next associate all violation events with the corresponding policy pack in the list.
|
|
|
|
for _, row := range display.eventUrnToResourceRow {
|
|
|
|
for _, event := range row.PolicyPayloads() {
|
|
|
|
key := fmt.Sprintf("%s@v%s", event.PolicyPackName, event.PolicyPackVersion)
|
|
|
|
newInfo := policyPackInfos[key]
|
|
|
|
newInfo.ViolationEvents = append(newInfo.ViolationEvents, event)
|
|
|
|
policyPackInfos[key] = newInfo
|
2020-02-13 23:16:46 +00:00
|
|
|
}
|
2023-10-09 18:31:17 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
// Now associate all remediation events with the corresponding policy pack in the list.
|
|
|
|
for _, row := range display.eventUrnToResourceRow {
|
|
|
|
for _, event := range row.PolicyRemediationPayloads() {
|
|
|
|
key := fmt.Sprintf("%s@v%s", event.PolicyPackName, event.PolicyPackVersion)
|
|
|
|
newInfo := policyPackInfos[key]
|
|
|
|
newInfo.RemediationEvents = append(newInfo.RemediationEvents, event)
|
|
|
|
policyPackInfos[key] = newInfo
|
2020-02-13 23:16:46 +00:00
|
|
|
}
|
2023-10-09 18:31:17 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
// Enumerate all policy packs in a deterministic order:
|
|
|
|
policyKeys := make([]string, len(policyPackInfos))
|
|
|
|
policyKeyIndex := 0
|
|
|
|
for key := range policyPackInfos {
|
|
|
|
policyKeys[policyKeyIndex] = key
|
|
|
|
policyKeyIndex++
|
|
|
|
}
|
|
|
|
sort.Strings(policyKeys)
|
|
|
|
|
|
|
|
// Finally, print the policy pack info and any violations and any remediations for each one.
|
|
|
|
for _, key := range policyKeys {
|
|
|
|
info := policyPackInfos[key]
|
2018-05-05 19:54:57 +00:00
|
|
|
|
2023-10-09 18:31:17 +00:00
|
|
|
// Print the policy pack status and name/version as a header:
|
|
|
|
passFailWarn := "✅"
|
|
|
|
for _, violation := range info.ViolationEvents {
|
|
|
|
if violation.EnforcementLevel == apitype.Mandatory {
|
|
|
|
passFailWarn = "❌"
|
|
|
|
hadMandatoryViolations = true
|
|
|
|
break
|
|
|
|
}
|
2020-02-13 23:16:46 +00:00
|
|
|
|
2023-10-09 18:31:17 +00:00
|
|
|
passFailWarn = "⚠️"
|
|
|
|
// do not break; subsequent mandatory violations will override this.
|
2018-04-17 06:41:00 +00:00
|
|
|
}
|
|
|
|
|
2023-10-09 18:31:17 +00:00
|
|
|
var localMark string
|
2024-03-04 14:02:25 +00:00
|
|
|
if len(info.LocalPaths) > 0 {
|
|
|
|
localMark = " (local: "
|
|
|
|
sort.Strings(info.LocalPaths)
|
|
|
|
for i, path := range info.LocalPaths {
|
|
|
|
if i > 0 {
|
|
|
|
localMark += "; "
|
|
|
|
}
|
|
|
|
localMark += path
|
|
|
|
}
|
|
|
|
localMark += ")"
|
|
|
|
|
|
|
|
if info.HasCloudPack {
|
|
|
|
localMark += " + (cloud)"
|
|
|
|
}
|
2023-10-09 18:31:17 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
display.println(fmt.Sprintf(" %s %s%s%s%s", passFailWarn, colors.SpecInfo, key, colors.Reset, localMark))
|
|
|
|
subItemIndent := " "
|
|
|
|
|
|
|
|
// First show any remediations since they happen first.
|
|
|
|
if display.opts.ShowPolicyRemediations {
|
|
|
|
// If the user has requested detailed remediations, print each one. Do not sort them -- show them in the
|
|
|
|
// order in which events arrived, since for remediations, the order matters.
|
|
|
|
for _, remediationEvent := range info.RemediationEvents {
|
|
|
|
// Print the individual policy event.
|
|
|
|
remediationLine := renderDiffPolicyRemediationEvent(
|
2023-12-12 12:19:42 +00:00
|
|
|
remediationEvent, subItemIndent+"- ", false, display.opts)
|
2023-10-09 18:31:17 +00:00
|
|
|
remediationLine = strings.TrimSuffix(remediationLine, "\n")
|
|
|
|
if remediationLine != "" {
|
|
|
|
display.println(remediationLine)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
// Otherwise, simply print a summary of which remediations ran and how many resources were affected.
|
|
|
|
policyNames := make([]string, 0)
|
|
|
|
policyRemediationCounts := make(map[string]int)
|
|
|
|
for _, e := range info.RemediationEvents {
|
|
|
|
name := e.PolicyName
|
|
|
|
if policyRemediationCounts[name] == 0 {
|
|
|
|
policyNames = append(policyNames, name)
|
|
|
|
}
|
|
|
|
policyRemediationCounts[name]++
|
|
|
|
}
|
2024-03-05 07:47:46 +00:00
|
|
|
sort.Strings(policyNames)
|
2023-10-09 18:31:17 +00:00
|
|
|
for _, policyName := range policyNames {
|
|
|
|
count := policyRemediationCounts[policyName]
|
|
|
|
display.println(fmt.Sprintf("%s- %s[remediate] %s%s (%d %s)",
|
|
|
|
subItemIndent, colors.SpecInfo, policyName, colors.Reset,
|
|
|
|
count, english.PluralWord(count, "resource", "")))
|
|
|
|
}
|
|
|
|
}
|
2023-06-20 17:20:28 +00:00
|
|
|
|
2023-10-09 18:31:17 +00:00
|
|
|
// Next up, display all violations. Sort policy events by: policy pack name, policy pack version,
|
|
|
|
// enforcement level, policy name, and finally the URN of the resource.
|
|
|
|
sort.SliceStable(info.ViolationEvents, func(i, j int) bool {
|
|
|
|
eventI, eventJ := info.ViolationEvents[i], info.ViolationEvents[j]
|
|
|
|
if enfLevelCmp := strings.Compare(
|
|
|
|
string(eventI.EnforcementLevel), string(eventJ.EnforcementLevel)); enfLevelCmp != 0 {
|
|
|
|
return enfLevelCmp < 0
|
|
|
|
}
|
|
|
|
if policyNameCmp := strings.Compare(eventI.PolicyName, eventJ.PolicyName); policyNameCmp != 0 {
|
|
|
|
return policyNameCmp < 0
|
|
|
|
}
|
|
|
|
return strings.Compare(string(eventI.ResourceURN), string(eventJ.ResourceURN)) < 0
|
|
|
|
})
|
|
|
|
for _, policyEvent := range info.ViolationEvents {
|
|
|
|
// Print the individual policy event.
|
|
|
|
policyLine := renderDiffPolicyViolationEvent(
|
2023-12-12 12:19:42 +00:00
|
|
|
policyEvent, subItemIndent+"- ", subItemIndent+" ", display.opts)
|
2023-10-09 18:31:17 +00:00
|
|
|
policyLine = strings.TrimSuffix(policyLine, "\n")
|
|
|
|
display.println(policyLine)
|
2023-06-20 17:20:28 +00:00
|
|
|
}
|
|
|
|
}
|
2023-10-09 18:31:17 +00:00
|
|
|
|
|
|
|
display.println("")
|
|
|
|
return hadMandatoryViolations
|
2020-02-13 23:16:46 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
// printOutputs prints the Stack's outputs for the display in a new section, if appropriate.
|
|
|
|
func (display *ProgressDisplay) printOutputs() {
|
|
|
|
// Printing the stack's outputs wasn't desired.
|
|
|
|
if display.opts.SuppressOutputs {
|
|
|
|
return
|
|
|
|
}
|
|
|
|
// Cannot display outputs for the stack if we don't know its URN.
|
|
|
|
if display.stackUrn == "" {
|
|
|
|
return
|
|
|
|
}
|
|
|
|
|
|
|
|
stackStep := display.eventUrnToResourceRow[display.stackUrn].Step()
|
|
|
|
|
2022-04-28 23:16:06 +00:00
|
|
|
props := getResourceOutputsPropertiesString(
|
2020-02-13 23:16:46 +00:00
|
|
|
stackStep, 1, display.isPreview, display.opts.Debug,
|
|
|
|
false /* refresh */, display.opts.ShowSameResources)
|
|
|
|
if props != "" {
|
2022-10-31 14:59:14 +00:00
|
|
|
display.println(colors.SpecHeadline + "Outputs:" + colors.Reset)
|
|
|
|
display.println(props)
|
2018-04-12 17:56:39 +00:00
|
|
|
}
|
|
|
|
}
|
2018-04-10 19:03:11 +00:00
|
|
|
|
2020-02-13 23:16:46 +00:00
|
|
|
// printSummary prints the Stack's SummaryEvent in a new section if applicable.
func (display *ProgressDisplay) printSummary(hasError bool) {
	// If we never saw the SummaryEvent payload, we have nothing to do.
	if display.summaryEventPayload == nil {
		return
	}

	msg := renderSummaryEvent(*display.summaryEventPayload, hasError, false, display.opts)
	display.println(msg)
}

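// mergeStreamPayloadsToSinglePayload concatenates the rendered messages of a batch of diagnostic
// payloads belonging to one stream into a single DiagEventPayload, copying the URN, prefix, color,
// severity, stream ID and ephemerality from the first payload in the batch.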
func (display *ProgressDisplay) mergeStreamPayloadsToSinglePayload(
	payloads []engine.DiagEventPayload,
) engine.DiagEventPayload {
	buf := bytes.Buffer{}

	for _, p := range payloads {
		buf.WriteString(display.renderProgressDiagEvent(p, false /*includePrefix:*/))
	}

	firstPayload := payloads[0]
	msg := buf.String()
	return engine.DiagEventPayload{
		URN:       firstPayload.URN,
		Message:   msg,
		Prefix:    firstPayload.Prefix,
		Color:     firstPayload.Color,
		Severity:  firstPayload.Severity,
		StreamID:  firstPayload.StreamID,
		Ephemeral: firstPayload.Ephemeral,
	}
}

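// splitIntoDisplayableLines splits a message into individual lines, dropping any trailing lines
// that are blank once color codes are stripped so they don't waste space in the display.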
func splitIntoDisplayableLines(msg string) []string {
	lines := strings.Split(msg, "\n")

	// Trim off any trailing blank lines in the message.
	for len(lines) > 0 {
		lastLine := lines[len(lines)-1]
		if strings.TrimSpace(colors.Never.Colorize(lastLine)) == "" {
			lines = lines[0 : len(lines)-1]
		} else {
			break
		}
	}

	return lines
}

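// processTick advances the display's tick counter and forwards the tick to the renderer, which
// decides whether the terminal view needs redrawing (or whether periodic output should be emitted
// when not attached to a terminal).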
func (display *ProgressDisplay) processTick() {
	// Got a tick. Update the progress display if we're in a terminal. If we're not,
	// print a heartbeat message every 10 seconds after our last output so that the user
	// knows something is going on. This is also helpful for hosts like Jenkins that
	// often time out a process if output is not seen in a while.
	display.currentTick++

	display.renderer.tick()
}

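// getRowForURN returns the display row that tracks the given URN, creating a new row (along with
// the header and stack rows, if necessary) the first time the URN is seen. It takes the event
// write lock because it may mutate the eventUrnToResourceRow map.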
func (display *ProgressDisplay) getRowForURN(urn resource.URN, metadata *engine.StepEventMetadata) ResourceRow {
	// Take the write lock here because this can write to the eventUrnToResourceRow map.
	display.eventMutex.Lock()
	defer display.eventMutex.Unlock()

	// If there's already a row for this URN, return it.
	row, has := display.eventUrnToResourceRow[urn]
	if has {
		return row
	}

	// First time we're hearing about this resource. Create an initial nearly-empty status for it.
	step := engine.StepEventMetadata{URN: urn, Op: deploy.OpSame}
	if metadata != nil {
		step = *metadata
	}

	// If this is the first time we're seeing an event for the stack resource, check to see if we've already
	// recorded root events that we want to reassociate with this URN.
	if isRootURN(urn) {
		display.stackUrn = urn

		if row, has = display.eventUrnToResourceRow[""]; has {
			row.SetStep(step)
			display.eventUrnToResourceRow[urn] = row
			delete(display.eventUrnToResourceRow, "")
			return row
		}
	}

	row = &resourceRowData{
		display:              display,
		tick:                 display.currentTick,
		diagInfo:             &DiagInfo{},
		policyPayloads:       policyPayloads,
		step:                 step,
		hideRowIfUnnecessary: true,
	}

	display.eventUrnToResourceRow[urn] = row

	display.ensureHeaderAndStackRows()
	display.resourceRows = append(display.resourceRows, row)
	return row
}

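// processNormalEvent handles a single engine event. Prelude, policy load, summary, stdout and
// bare diagnostic events are handled directly; all other events are attached to the resource row
// for the URN they relate to before the renderer is told that the row was updated.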
func (display *ProgressDisplay) processNormalEvent(event engine.Event) {
	//nolint:exhaustive // we are only interested in a subset of events
	switch event.Type {
	case engine.PreludeEvent:
		// A prelude event can just be printed out directly to the console.
		// Note: we should probably make sure we don't get any prelude events
		// once we start hearing about actual resource events.
		payload := event.Payload().(engine.PreludeEventPayload)
		preludeEventString := renderPreludeEvent(payload, display.opts)
		if display.isTerminal {
			display.processNormalEvent(engine.NewEvent(engine.DiagEventPayload{
				Ephemeral: false,
				Severity:  diag.Info,
				Color:     cmdutil.GetGlobalColorization(),
				Message:   preludeEventString,
			}))
		} else {
			display.println(preludeEventString)
		}
		return
	case engine.PolicyLoadEvent:
		if !display.shownPolicyLoadEvent {
			policyLoadEventString := colors.SpecInfo + "Loading policy packs..." + colors.Reset + "\n"
			display.println(policyLoadEventString)
			display.shownPolicyLoadEvent = true
		}
		return
	case engine.SummaryEvent:
		// keep track of the summary event so that we can display it after all other
		// resource-related events we receive.
		payload := event.Payload().(engine.SummaryEventPayload)
		display.summaryEventPayload = &payload
		return
	case engine.DiagEvent:
		msg := display.renderProgressDiagEvent(event.Payload().(engine.DiagEventPayload), true /*includePrefix:*/)
		if msg == "" {
			return
		}
	case engine.StdoutColorEvent:
		display.handleSystemEvent(event.Payload().(engine.StdoutEventPayload))
		return
	}

	// At this point, all events should relate to resources.
	eventUrn, metadata := getEventUrnAndMetadata(event)

	// If we're suppressing reads from the tree-view, then convert notifications about reads into
	// ephemeral messages that will go into the info column.
	if metadata != nil && !display.opts.ShowReads {
		if metadata.Op == deploy.OpReadDiscard || metadata.Op == deploy.OpReadReplacement {
			// just flat out ignore read discards/replace. They're only relevant in the context of
			// 'reads', and we only present reads as an ephemeral diagnostic anyways.
			return
		}

		if metadata.Op == deploy.OpRead {
			// Don't show reads as operations on a specific resource. It's an underlying detail
			// that we don't want to clutter up the display with. However, to help users know
			// what's going on, we can show them as ephemeral diagnostic messages that are
			// associated at the top level with the stack. That way if things are taking a while,
			// there's insight in the display as to what's going on.
			display.processNormalEvent(engine.NewEvent(engine.DiagEventPayload{
				Ephemeral: true,
				Severity:  diag.Info,
				Color:     cmdutil.GetGlobalColorization(),
				Message:   fmt.Sprintf("read %v %v", eventUrn.Type().DisplayName(), eventUrn.Name()),
			}))
			return
		}
	}

	if eventUrn == "" {
		// If this event has no URN, associate it with the stack. Note that there may not yet be a stack resource, in
		// which case this is a no-op.
		eventUrn = display.stackUrn
	}
	isRootEvent := eventUrn == display.stackUrn

	row := display.getRowForURN(eventUrn, metadata)

	// Don't bother showing certain events (for example, things that are unchanged). However
	// always show the root 'stack' resource so we can indicate that it's still running, and
	// also so we have something to attach unparented diagnostic events to.
	hideRowIfUnnecessary := metadata != nil && !shouldShow(*metadata, display.opts) && !isRootEvent
	// Always show row if there's a policy violation event. Policy violations prevent resource
	// registration, so if we don't show the row, the violation gets attributed to the stack
	// resource rather than the resources whose policy failed.
	hideRowIfUnnecessary = hideRowIfUnnecessary || event.Type == engine.PolicyViolationEvent
	if !hideRowIfUnnecessary {
		row.SetHideRowIfUnnecessary(false)
	}

	if event.Type == engine.ResourcePreEvent {
		step := event.Payload().(engine.ResourcePreEventPayload).Metadata

		// Register the resource update start time to calculate duration
		// and time elapsed.
		start := time.Now()
		display.stopwatchMutex.Lock()
		display.opStopwatch.start[step.URN] = start

		// Clear out potential event end timings for prior operations on the same resource.
		delete(display.opStopwatch.end, step.URN)
		display.stopwatchMutex.Unlock()

		row.SetStep(step)
	} else if event.Type == engine.ResourceOutputsEvent {
		isRefresh := display.getStepOp(row.Step()) == deploy.OpRefresh
		step := event.Payload().(engine.ResourceOutputsEventPayload).Metadata

		// Register the resource update end time to calculate duration
		// to display.
		end := time.Now()
		display.stopwatchMutex.Lock()
		display.opStopwatch.end[step.URN] = end
		display.stopwatchMutex.Unlock()

		// Is this the stack outputs event? If so, we'll need to print it out at the end of the plan.
		if step.URN == display.stackUrn {
			display.seenStackOutputs = true
		}

		row.SetStep(step)
		row.AddOutputStep(step)

		// If we're not in a terminal, we may not want to display this row again: if we're displaying a preview or if
		// this step is a no-op for a custom resource, refreshing this row will simply duplicate its earlier output.
		hasMeaningfulOutput := isRefresh ||
			!display.isPreview && (step.Res == nil || step.Res.Custom && step.Op != deploy.OpSame)
		if !display.isTerminal && !hasMeaningfulOutput {
			return
		}
	} else if event.Type == engine.ResourceOperationFailed {
		display.failed = true
		row.SetFailed()
	} else if event.Type == engine.DiagEvent {
		// also record this diagnostic so we print it at the end.
		row.RecordDiagEvent(event)
	} else if event.Type == engine.PolicyViolationEvent {
		// also record this policy violation so we print it at the end.
		row.RecordPolicyViolationEvent(event)
	} else if event.Type == engine.PolicyRemediationEvent {
		// record this remediation so we print it at the end.
		row.RecordPolicyRemediationEvent(event)
	} else {
		contract.Failf("Unhandled event type '%s'", event.Type)
	}

	display.renderer.rowUpdated(row)
}

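// handleSystemEvent records a system-level stdout event and passes it on to the renderer. It
// takes the event write lock because ensureHeaderAndStackRows must run under it.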
func (display *ProgressDisplay) handleSystemEvent(payload engine.StdoutEventPayload) {
	// We need to take the write lock here because ensureHeaderAndStackRows expects to be
	// called under the write lock.
	display.eventMutex.Lock()
	defer display.eventMutex.Unlock()

	// Make sure we have a header to display
	display.ensureHeaderAndStackRows()

	display.systemEventPayloads = append(display.systemEventPayloads, payload)

	display.renderer.systemMessage(payload)
}

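// ensureHeaderAndStackRows lazily creates the header row and the root stack row so there is always
// a root to parent other rows (and unparented diagnostics) to. It must be called while holding the
// event write lock, as asserted below.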
func (display *ProgressDisplay) ensureHeaderAndStackRows() {
	contract.Assertf(!display.eventMutex.TryLock(), "ProgressDisplay.ensureHeaderAndStackRows MUST be called "+
		"under the write lock")
	if display.headerRow == nil {
		// about to make our first status message. make sure we present the header line first.
		display.headerRow = &headerRowData{display: display}
	}

	// we've added at least one row to the table. make sure we have a row to designate the
	// stack if we haven't already heard about it yet. This also ensures that as we build
	// the tree we can always guarantee there's a 'root' to parent anything to.
	_, hasStackRow := display.eventUrnToResourceRow[display.stackUrn]
	if hasStackRow {
		return
	}

	stackRow := &resourceRowData{
		display:              display,
		tick:                 display.currentTick,
		diagInfo:             &DiagInfo{},
		policyPayloads:       policyPayloads,
		step:                 engine.StepEventMetadata{Op: deploy.OpSame},
		hideRowIfUnnecessary: false,
	}

	display.eventUrnToResourceRow[display.stackUrn] = stackRow
	display.resourceRows = append(display.resourceRows, stackRow)
}

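// processEvents is the main processing loop: it consumes ticker ticks and engine events until the
// engine signals completion (an empty or cancel event), at which point processEndSteps performs
// the final rendering.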
func (display *ProgressDisplay) processEvents(ticker *time.Ticker, events <-chan engine.Event) {
	// Main processing loop. The purpose of this func is to read in events from the engine
	// and translate them into Status objects and progress messages to be presented to the
	// command line.
	for {
		select {
		case <-ticker.C:
			display.processTick()

		case event := <-events:
			if event.Type == "" || event.Type == engine.CancelEvent {
				// Engine finished sending events. Do all the final processing and return
				// from this local func. This will print out things like full diagnostic
				// events, as well as the summary event from the engine.
				display.processEndSteps()
				return
			}

			display.processNormalEvent(event)
		}
	}
}

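// renderProgressDiagEvent renders a diagnostic payload for the progress display, optionally
// including the payload's prefix and trimming trailing whitespace. Debug-severity messages are
// dropped unless debug output was requested.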
func (display *ProgressDisplay) renderProgressDiagEvent(payload engine.DiagEventPayload, includePrefix bool) string {
	if payload.Severity == diag.Debug && !display.opts.Debug {
		return ""
	}

	msg := payload.Message
	if includePrefix {
		msg = payload.Prefix + msg
	}

	return strings.TrimRightFunc(msg, unicode.IsSpace)
}

// getStepStatus handles getting the value to put in the status column.
func (display *ProgressDisplay) getStepStatus(step engine.StepEventMetadata, done bool, failed bool) string {
	var status string
	if done {
		status = display.getStepDoneDescription(step, failed)
	} else {
		status = display.getStepInProgressDescription(step)
	}
	status = addRetainStatusFlag(status, step)
	return status
}

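// getStepDoneDescription returns the status text shown for a step once it has finished, e.g.
// "created" or "creating failed". During previews it instead returns the colorized preview
// summary text for the planned operation.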
func (display *ProgressDisplay) getStepDoneDescription(step engine.StepEventMetadata, failed bool) string {
	makeError := func(v string) string {
		return colors.SpecError + "**" + v + "**" + colors.Reset
	}

	op := display.getStepOp(step)

	if display.isPreview {
		// During a preview, when we transition to done, we'll print out summary text describing the step instead of a
		// past-tense verb describing the step that was performed.
		return deploy.Color(op) + display.getPreviewDoneText(step) + colors.Reset
	}

	getDescription := func() string {
		opText := ""
		if failed {
			switch op {
			case deploy.OpSame:
				opText = "failed"
			case deploy.OpCreate, deploy.OpCreateReplacement:
				opText = "creating failed"
			case deploy.OpUpdate:
				opText = "updating failed"
			case deploy.OpDelete, deploy.OpDeleteReplaced:
				opText = "deleting failed"
			case deploy.OpReplace:
				opText = "replacing failed"
			case deploy.OpRead, deploy.OpReadReplacement:
				opText = "reading failed"
			case deploy.OpRefresh:
				opText = "refreshing failed"
			case deploy.OpReadDiscard, deploy.OpDiscardReplaced:
				opText = "discarding failed"
			case deploy.OpImport, deploy.OpImportReplacement:
				opText = "importing failed"
			case deploy.OpRemovePendingReplace:
				opText = ""
			default:
				contract.Failf("Unrecognized resource step op: %v", op)
				return ""
			}
		} else {
			switch op {
			case deploy.OpSame:
				opText = ""
			case deploy.OpCreate:
				opText = "created"
			case deploy.OpUpdate:
				opText = "updated"
			case deploy.OpDelete:
				opText = "deleted"
			case deploy.OpReplace:
				opText = "replaced"
			case deploy.OpCreateReplacement:
				opText = "created replacement"
			case deploy.OpDeleteReplaced:
				opText = "deleted original"
			case deploy.OpRead:
				opText = "read"
			case deploy.OpReadReplacement:
				opText = "read for replacement"
			case deploy.OpRefresh:
				opText = "refresh"
			case deploy.OpReadDiscard:
				opText = "discarded"
			case deploy.OpDiscardReplaced:
				opText = "discarded original"
			case deploy.OpImport:
				opText = "imported"
			case deploy.OpImportReplacement:
				opText = "imported replacement"
			case deploy.OpRemovePendingReplace:
				opText = ""
			default:
				contract.Failf("Unrecognized resource step op: %v", op)
				return ""
			}
		}
Add display to the engine tests (#16050)
We want to add more test coverage to the display code. The best way to
do that is to add it to the engine tests, that already cover most of the
pulumi functionality.
It's probably not really possible to review all of the output, but at
least it gives us a baseline, which we can work with.
There's a couple of tests that are flaky for reasons I don't quite
understand yet. I marked them as to skip and we can look at them later.
I'd rather get in the baseline tests sooner, rather than spending a
bunch of time looking at that. The output differences also seem very
minor, so not super concerning.
The biggest remaining issue is that this doesn't interact well with the
Chdir we're doing in the engine. We could either pass the CWD through,
or just try to get rid of that Chdir. So this should only be merged
after https://github.com/pulumi/pulumi/pull/15607.
I've tried to split this into a few commits, separating out adding the
testdata, so it's hopefully a little easier to review, even though the
PR is still quite large.
One other thing to note is that we're comparing that the output has all
the same lines, and not that it is exactly the same. Because of how the
engine is implemented, there's a bunch of race conditions otherwise,
that would make us have to skip a bunch of tests, just because e.g.
resource A is sometimes deleted before resource B and sometimes it's the
other way around.
The biggest downside of that is that running with `PULUMI_ACCEPT` will
produce a diff even when there are no changes. Hopefully we won't have
to run that way too often though, so it might not be a huge issue?
---------
Co-authored-by: Fraser Waters <fraser@pulumi.com>
2024-05-13 07:18:25 +00:00
|
|
|
if op == deploy.OpSame || display.opts.DeterministicOutput || display.opts.SuppressTimings {
|
2022-10-25 21:44:42 +00:00
|
|
|
return opText
|
|
|
|
}
|
|
|
|
|
2024-05-06 16:28:18 +00:00
|
|
|
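
		// Look up the recorded start and end times for this operation so its duration can be appended.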
		display.stopwatchMutex.RLock()
		start, ok := display.opStopwatch.start[step.URN]
		display.stopwatchMutex.RUnlock()

		if !ok {
			return opText
		}

		display.stopwatchMutex.RLock()
		end, ok := display.opStopwatch.end[step.URN]
		display.stopwatchMutex.RUnlock()

		if !ok {
			return opText
		}

		opDuration := end.Sub(start).Seconds()
		if opDuration < 1 {
			// Display a more fine-grain duration as the operation
			// has completed.
			return fmt.Sprintf("%s (%.2fs)", opText, opDuration)
		}
		return fmt.Sprintf("%s (%ds)", opText, int(opDuration))
	}

	if failed {
		return makeError(getDescription())
	}

	return deploy.Color(op) + getDescription() + colors.Reset
}
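
// getPreviewText returns a textual representation for this step, suitable for display while a preview is still in
// progress.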
func (display *ProgressDisplay) getPreviewText(step engine.StepEventMetadata) string {
	switch step.Op {
	case deploy.OpSame:
		return ""
	case deploy.OpCreate:
		return "create"
	case deploy.OpUpdate:
		return "update"
	case deploy.OpDelete:
		return "delete"
	case deploy.OpReplace:
		return "replace"
	case deploy.OpCreateReplacement:
		return "create replacement"
	case deploy.OpDeleteReplaced:
		return "delete original"
	case deploy.OpRead:
		return "read"
	case deploy.OpReadReplacement:
		return "read for replacement"
	case deploy.OpRefresh:
		return "refreshing"
	case deploy.OpReadDiscard:
		return "discard"
	case deploy.OpDiscardReplaced:
		return "discard original"
	case deploy.OpImport:
		return "import"
	case deploy.OpImportReplacement:
		return "import replacement"
	}

	contract.Failf("Unrecognized resource step op: %v", step.Op)
	return ""
}

// getPreviewDoneText returns a textual representation for this step, suitable for display during a preview once the
// preview has completed.
func (display *ProgressDisplay) getPreviewDoneText(step engine.StepEventMetadata) string {
	switch step.Op {
	case deploy.OpSame:
		return ""
	case deploy.OpCreate:
		return "create"
	case deploy.OpUpdate:
		return "update"
	case deploy.OpDelete:
		return "delete"
	case deploy.OpReplace, deploy.OpCreateReplacement, deploy.OpDeleteReplaced, deploy.OpReadReplacement,
		deploy.OpDiscardReplaced:
		return "replace"
	case deploy.OpRead:
		return "read"
	case deploy.OpRefresh:
		return "refresh"
	case deploy.OpReadDiscard:
		return "discard"
	case deploy.OpImport, deploy.OpImportReplacement:
		return "import"
	}

	contract.Failf("Unrecognized resource step op: %v", step.Op)
	return ""
}

// getStepOp returns the operation to display for the given step. Generally this
// will be the operation already attached to the step, but there are some cases
// where it makes sense for us to return a different operation. In particular,
// we often think of replacements as a series of steps, e.g.:
//
// * create replacement
// * replace
// * delete original
//
// When these steps are being applied, we want to display the individual steps
// to give the user the best indicator of what is happening currently. However,
// both before we apply all of them, and before they're all done, we want to
// show this sequence as a single conceptual "replace"/"replaced" step. We thus
// rewrite the operation to be "replace" in these cases.
//
// There are two cases where we do not want to rewrite any operations:
//
// - If we are operating in a non-interactive mode (that is, effectively
//   outputting a log), we can afford to show all the individual steps, since we
//   are not limited to a single line per resource, and we want the list of
//   steps to be as clear and true to the actual operations as possible.
//
// - If a resource operation fails (meaning the entire operation will likely
//   fail), we want the user to be left with as true a representation of where
//   we got to when the program terminates (e.g. we don't want to show
//   "replaced" when in fact we have only completed the first step of the
//   series).
func (display *ProgressDisplay) getStepOp(step engine.StepEventMetadata) display.StepOp {
	op := step.Op
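
	// In non-interactive mode we log each step as it happens, so leave the raw operation untouched.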
	if !display.isTerminal {
		return op
	}
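
	// If the operation has failed, keep the raw step so the final display reflects how far we actually got.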
	if display.failed {
		return op
	}

	// Replacing replace series with replace:
	//
	// * During preview -- we'll show a single "replace" plan.
	// * During update -- we'll show the individual steps.
	// * When done -- we'll show a single "replaced" step.
	if display.isPreview || display.done {
		if op == deploy.OpCreateReplacement || op == deploy.OpDeleteReplaced || op == deploy.OpDiscardReplaced {
			return deploy.OpReplace
		}
	}

	return op
}

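// getStepOpLabel returns the colorized prefix label for the given step's (possibly rewritten) operation.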
func (display *ProgressDisplay) getStepOpLabel(step engine.StepEventMetadata, done bool) string {
	return deploy.Prefix(display.getStepOp(step), done) + colors.Reset
}

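// getStepInProgressDescription returns the textual description to display for a step while its operation is still
// in progress.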
func (display *ProgressDisplay) getStepInProgressDescription(step engine.StepEventMetadata) string {
	op := display.getStepOp(step)

	if isRootStack(step) && op == deploy.OpSame {
		// most of the time a stack is unchanged. in that case we just show it as "running->done".
		// otherwise, we show what is actually happening to it.
		return "running"
	}

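	// getDescription computes the wording for this step: preview wording during previews, otherwise the
	// present-tense operation text (with elapsed time appended when timings are shown).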
	getDescription := func() string {
		if display.isPreview {
			return display.getPreviewText(step)
		}

		opText := ""
		switch op {
		case deploy.OpSame:
			opText = ""
		case deploy.OpCreate:
			opText = "creating"
		case deploy.OpUpdate:
			opText = "updating"
		case deploy.OpDelete:
			opText = "deleting"
		case deploy.OpReplace:
			opText = "replacing"
		case deploy.OpCreateReplacement:
			opText = "creating replacement"
		case deploy.OpDeleteReplaced:
			opText = "deleting original"
		case deploy.OpRead:
			opText = "reading"
		case deploy.OpReadReplacement:
			opText = "reading for replacement"
		case deploy.OpRefresh:
			opText = "refreshing"
		case deploy.OpReadDiscard:
			opText = "discarding"
		case deploy.OpDiscardReplaced:
			opText = "discarding original"
		case deploy.OpImport:
			opText = "importing"
		case deploy.OpImportReplacement:
			opText = "importing replacement"
		case deploy.OpRemovePendingReplace:
			opText = ""
		default:
			contract.Failf("Unrecognized resource step op: %v", op)
			return ""
		}

		if op == deploy.OpSame || display.opts.DeterministicOutput || display.opts.SuppressTimings {
			return opText
		}

		// Calculate operation time elapsed.
		display.stopwatchMutex.RLock()
		start, ok := display.opStopwatch.start[step.URN]
		display.stopwatchMutex.RUnlock()

		if !ok {
			return opText
		}

		secondsElapsed := time.Since(start).Seconds()
		return fmt.Sprintf("%s (%ds)", opText, int(secondsElapsed))
	}

	return deploy.ColorProgress(op) + getDescription() + colors.Reset
}