* Improve error messages output by the CLI
This fixes a couple of known issues with the way that we present errors
from the Pulumi CLI:
1. Any errors from RPC endpoints were bubbling up to the top level as-is,
which was unfortunate because they contained RPC-specific noise that we
don't want to present to the user. This commit unwraps errors from
resource providers (a sketch of the idea follows this list).
2. The "catastrophic error" message often got printed twice
3. Fatal errors are often printed twice, because our CLI top-level
prints out the fatal error that it receives before exiting. A lot of
the time this error has already been printed.
4. Errors were prefixed by PU####.
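A minimal sketch of the unwrapping described in item 1, using the gRPC status package (the helper name here is hypothetical, not the actual function in the CLI):

    package main

    import "google.golang.org/grpc/status"

    // rpcErrorMessage pulls the underlying message out of a gRPC error so the
    // RPC-specific wrapping (status codes, "rpc error:" prefixes) never reaches
    // the user. Illustrative only.
    func rpcErrorMessage(err error) string {
        if s, ok := status.FromError(err); ok {
            return s.Message()
        }
        return err.Error()
    }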
* Feedback: Omit the 'catastrophic' error message and use a less verbose error message as the final error
* Code review feedback: interpretRPCError -> resourceStateAndError
* Code review feedback: deleting some commented-out code, error capitalization
* Cleanup after rebase
Most of the errors in this package are holdovers from our previous
system, where we had our own custom compiler and evaluator, and are no
longer needed. The few we still use during plan application (via the
diagnostics system, which is another component from the old system
that we still use) have been promoted into the diag package. Doing so
allows us to avoid importing "github.com/pkg/errors" as "goerr" in
some parts of the engine, a nice cleanup.
Our recent changes to colorization changed from a boolean to a tri-valued
enum (Always, Never, Raw). The events from the service, however, are still
boolean-valued. This changes the message payload to carry the full values.
Part of the work to make it easier to write tests of diff output. Specifically, we now allow users to pass --color=option for several pulumi commands. 'option' can be one of 'always', 'never', 'raw', and 'auto' (the default). A rough sketch of the option type follows the list below.
The meanings of these flags are:
1. auto: colorize normally, unless in --debug
2. always: always colorize no matter what
3. never: never colorize no matter what.
4. raw: colorize, but preserve the original "<{%%}>" style control codes rather than the translated platform-specific codes. This is for testing purposes and ensures we can have tests for this stuff across platforms.
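A rough sketch of what the tri-valued (plus auto) option might look like in Go (the type and function names here are illustrative, not the actual identifiers in the codebase):

    package colors

    import "fmt"

    // Colorization mirrors the --color option described above.
    type Colorization string

    const (
        Always Colorization = "always"
        Never  Colorization = "never"
        Raw    Colorization = "raw"
        Auto   Colorization = "auto"
    )

    // ParseColorization validates a --color flag value.
    func ParseColorization(value string) (Colorization, error) {
        switch value {
        case "always", "never", "raw", "auto":
            return Colorization(value), nil
        default:
            return "", fmt.Errorf("unsupported color option: %q", value)
        }
    }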
This change adds back component output properties. Doing so
requires splitting the RPC interface for creating resources in
half, with an initial RegisterResource which contains all of the
input properties, and a final CompleteResource which optionally
contains any output properties synthesized by the component.
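A rough sketch of the two-phase registration this describes (the signatures are illustrative, not the actual RPC definitions):

    package sketch

    // ResourceMonitor sketches the split interface: inputs at registration time,
    // outputs at completion time.
    type ResourceMonitor interface {
        // RegisterResource records a resource along with all of its input properties.
        RegisterResource(typ, name string, inputs map[string]interface{}) (urn string, err error)

        // CompleteResource finishes the registration, optionally attaching output
        // properties synthesized by the component.
        CompleteResource(urn string, outputs map[string]interface{}) error
    }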
These changes introduce a new field, `Raw`, to `diag.Message`. This
field indicates that the contents of the message are not a format string
and should not be rendered via `Sprintf` during stringification.
The plugin std{out,err} readers have been updated to use raw messages,
and the event reader in `pulumi` has been fixed such that it does not format
event payloads before display.
Fixes #551.
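A minimal sketch of how such a flag changes stringification (field and function names are illustrative, not the exact ones in the diag package):

    package diag

    import "fmt"

    // Message is a simplified stand-in for diag.Message.
    type Message struct {
        Text string // message text, or a format string when Raw is false
        Raw  bool   // when true, Text must not be passed through Sprintf
    }

    // Stringify renders a message, skipping formatting for raw messages so that
    // stray '%' characters pass through untouched.
    func Stringify(m Message, args ...interface{}) string {
        if m.Raw {
            return m.Text
        }
        return fmt.Sprintf(m.Text, args...)
    }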
I noticed that in our Docker builds we often end up seeing %!(MISSING)
style messages, which were an indication that we were trying to format
them. The reason was the presence of %c's in the stream, and the
fact that we passed said messages to Fprintf. We were careful in
all other layers to use the message on the "right hand side" of
any *f calls, but in this instance, we used Fprintf and passed the
message on the "left hand side", triggering formatting. It turns
out we've already formatted everything by the time we get here,
so there's no need -- we can just use Fprint instead.
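A minimal reproduction of the difference (standard library only):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // msg was already fully formatted upstream, but still contains a '%c'.
        msg := "rendered output with a literal %c in it\n"

        // Buggy: the message lands on the format-string side, so fmt tries to
        // interpret %c and emits a %!c(MISSING) marker.
        fmt.Fprintf(os.Stdout, msg)

        // Fixed: Fprint performs no formatting; the message passes through as-is.
        fmt.Fprint(os.Stdout, msg)
    }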
Previously, the engine would write to io.Writer's to display output.
When hosted in `pulumi` these writers were tied to os.Stdout and
os.Stderr, but other applications hosting the engine could send them
other places (e.g. a log to be sent to another application later).
While much better than just using the ambient streams, this was still
not the best. It would be ideal if the engine could just emit strongly
typed events and whatever is hosting the engine could care about
displaying them.
As a first step down that road, we move to a model where operations on
the engine now take a `chan engine.Event` and during the course of the
operation, events are written to this channel. It is the
responsibility of the caller of the method to read from the channel
until it is closed (signifying that the operation is complete).
The events we do emit still intermingle presentation with data,
which is unfortunate, but can be improved over time. Most of the
events today are just colorized in the client and printed to stdout or
stderr without much thought.
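A minimal sketch of the caller side of this contract (the Event shape here is simplified; the real payloads carry more structure):

    package main

    import "fmt"

    // Event stands in for engine.Event.
    type Event struct {
        Message string
    }

    // update simulates an engine operation reporting progress over a channel and
    // closing it once the operation is complete.
    func update(events chan<- Event) {
        defer close(events)
        events <- Event{Message: "previewing changes"}
        events <- Event{Message: "applying changes"}
    }

    func main() {
        events := make(chan Event)
        go update(events)

        // The caller drains the channel until it is closed, signifying completion.
        for e := range events {
            fmt.Println(e.Message)
        }
    }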
This change improves our output formatting by generally adding
fewer prefixes. As shown in pulumi/pulumi#359, we were being
excessively verbose in many places, including prefixing every
console.out with "langhost[nodejs].stdout: ", displaying full
stack traces for simple errors like missing configuration, etc.
Overall, this change includes the following:
* Don't prefix stdout and stderr output from the program, other
than the standard "info:" prefix. I experimented with various
schemes here, but they all felt gratuitous. Simply emitting
the output seems fine, especially as it's closer to what would
happen if you just ran the program under node.
* Do NOT make writes to stderr fail the plan/deploy. Previously
we assumed that any console.errors, for instance, meant that
the overall program should fail. This simply isn't how stderr
is treated generally and meant you couldn't use certain
logging techniques and libraries, among other things.
* Do make sure that stderr writes in the program end up going to
stderr in the Pulumi CLI output, however, so that redirection
works as it should. This required a new Infoerr log level (a rough
sketch follows this list).
* Make a small fix to the planning logic so we don't attempt to
print the summary if an error occurs.
* Finally, add a new error type, RunError, that when thrown and
uncaught does not result in a full stack trace being printed.
Anyone can use this, however, we currently use it for config
errors so that we can terminate with a pretty error message,
rather than the monstrosity shown in pulumi/pulumi#359.
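As a rough illustration of the Infoerr idea (the names and routing below are assumptions, not the actual implementation): an extra severity lets program output destined for stderr flow through the same pipeline as ordinary info output while still landing on the CLI's stderr.

    package sketch

    import (
        "io"
        "os"
    )

    // Severity sketches the log levels; Infoerr marks program output that
    // should be written to the CLI's stderr.
    type Severity int

    const (
        Info Severity = iota
        Infoerr
    )

    // writerFor routes a message so that shell redirection of the program's
    // stderr behaves as expected.
    func writerFor(sev Severity) io.Writer {
        if sev == Infoerr {
            return os.Stderr
        }
        return os.Stdout
    }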
This includes a few changes:
* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.
* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.
* The CLI is renamed from lumi to pulumi.
While these codes can be understood on newer versions of Windows 10 (if
we were to set the console properties correctly), in general they do not
work and cause garbage to be printed to the screen. For now, just don't
colorize output on Windows.
This refactors the engine so all of the APIs on it are instance
methods on the type instead of raw methods that float around and use
data from a global engine.
A mechanical change: we remove the global `E`, make anything in
pkg/engine that interacted with it an instance method, and then deal
with the fallout.
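Roughly, the shape of the change (names are illustrative, not the exact ones in pkg/engine):

    package engine

    // Before: a package-level engine that free functions reached into, e.g.
    //
    //     var E *Engine
    //     func Deploy(stack string) error { return E.doDeploy(stack) }
    //
    // After: callers construct an Engine and call methods on it.
    type Engine struct {
        // configuration, plugin host, event sink, and so on
    }

    func (e *Engine) Deploy(stack string) error {
        // perform the deployment using e's own state rather than a global
        return nil
    }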
This changes a few things in the CLI, mostly just prettying it up:
* Label all steps more clearly with the kind of step. Also
unify the way we present this during planning and deployment.
* Summarize the changes that *did not* get made just as clearly
as those that did. In other words, stuff like this:
    info: 2 resources changed:
        +1 resource created
        -1 resource deleted
        5 resources unchanged

and

    info: no resources required
        5 resources unchanged
* Always print output properties when they are pertinent.
This includes creates, replacements, and updates.
* Show replacement creates and deletes very distinctly. The
create parts show up minty green and the delete parts show up
rosey red. These are the "physical" steps, compared to the
"logical" step of replacement (which remains marigold).
I still don't love where we are here. The asymmetry between
planning and deployment bugs me, and could be surprising.
("Hey, my deploy doesn't look like my plan!") I don't know
what developers will want to see here and I feel like in
general we are spewing far too much into the CLI to make it
even useful for anything but diagnosing failures afterwards.
I propose that we should do a deep dive on this during the
CLI epic, pulumi/pulumi-service#2.
This resolves pulumi/pulumi-fabric#305.
We are renaming Lumi to Pulumi Fabric. This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, without the CLI tools and other
changes that will follow soon afterwards.
This change introduces a --debug option to the plan, deploy, and
destroy commands. Unlike --logtostderr, which merely hooks into the
copious Glogging that we perform (and is therefore meant for developers
of the tools themselves and not end users), --debug hooks into the
user-facing debug stream. This now includes any debug messages coming
from the resource providers as they perform their tasks.
This change adds an engine gRPC interface, and associated implementation,
so that plugins may do interesting things that require "phoning home".
Previously, the engine would fire up plugins and talk to them directly,
but there was no way for a plugin to ask the engine to do anything.
The motivation here is so that plugins can read evaluator state, such
as config information, but this change also allows richer logging
functionality than previously possible. We will still auto-log any
stdout/stderr writes; however, explicit errors, warnings, informational,
and even debug messages may be written over the Log API.
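A rough sketch of what the plugin-facing logging surface might look like (the names and shapes here are illustrative, not the actual proto definitions):

    package sketch

    import "context"

    // LogSeverity enumerates the levels mentioned above.
    type LogSeverity int

    const (
        Debug LogSeverity = iota
        Info
        Warning
        Error
    )

    // Engine is the callback surface a plugin sees once it can phone home.
    type Engine interface {
        Log(ctx context.Context, sev LogSeverity, msg string) error
    }

    // provision shows a plugin reporting progress explicitly rather than relying
    // on auto-logged stdout/stderr writes.
    func provision(ctx context.Context, eng Engine) error {
        if err := eng.Log(ctx, Debug, "starting provisioning"); err != nil {
            return err
        }
        // ... do the actual work ...
        return eng.Log(ctx, Info, "provisioning complete")
    }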
This change implements the `get` function for resources. Per pulumi/lumi#83,
this allows Lumi scripts to actually read from the target environment.
For example, we can now look up a SecurityGroup from its ARN:
    let group = aws.ec2.SecurityGroup.get(
        "arn:aws:ec2:us-west-2:153052954103:security-group:sg-02150d79");
The returned object is a fully functional resource object. So, we can then
link it up with an EC2 instance, for example, in the usual ways:
    let instance = new aws.ec2.Instance(..., {
        securityGroups: [ group ],
    });
This didn't require any changes to the RPC or provider model, since we
already implement the Get function.
There are a few loose ends; two are short term:
1) URNs are not rehydrated.
2) Query is not yet implemented.
One is mid-term:
3) We probably want a URN-based lookup function. But we will likely
wait until we tackle pulumi/lumi#109 before adding this.
And one is long term (and subtle):
4) These amount to I/O and are not repeatable! A change in the target
environment may cause a script to generate a different plan
intermittently. Most likely we want to apply a different kind of
deployment "policy" for such scripts. These are inching towards the
scripting model of pulumi/lumi#121, which is an entirely different
beast than the repeatable immutable infrastructure deployments.
Finally, it is worth noting that with this, we have some of the fundamental
underpinnings required to finally tackle "inference" (pulumi/lumi#142).
This change alters diag.Message to not format strings and, instead,
encourages developers to use the Infof, Errorf, and Warningf varargs
functions. It also tests that arguments are never interpreted as
format strings.
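A minimal sketch of the shape this encourages (the signatures are illustrative; the real package differs in detail): messages carry a template, and arguments are only ever supplied through the *f functions, never pre-baked into the message text.

    package diag

    // Message carries a template that is only combined with arguments by the
    // sink's *f functions below.
    type Message struct {
        Text string
    }

    // Sink is the varargs logging surface.
    type Sink interface {
        Infof(msg *Message, args ...interface{})
        Warningf(msg *Message, args ...interface{})
        Errorf(msg *Message, args ...interface{})
    }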
This change enables parallelism for our tests.
It also introduces a `test_core` Makefile target to just run the
core engine tests, and not the providers, since they take a long time.
This is intended only as part of the inner developer loop.
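For reference, the standard Go convention for opting a test into parallel execution looks like this (a representative test, not one lifted from the suite):

    package engine_test

    import "testing"

    func TestPlanApply(t *testing.T) {
        // t.Parallel signals that this test may run alongside other parallel
        // tests in the package.
        t.Parallel()
        // ... exercise the engine ...
    }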
This change fixes a couple issues that prevented restarting a
deployment after partial failure; this was due to the fact that
unchanged resources didn't propagate IDs from old to new. This
is remedied by making unchanged a map from new to old, and making
ID propagation the first thing plan application does.
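A small sketch of the propagation step (types and names are illustrative, not the engine's actual ones):

    package sketch

    // Resource stands in for the engine's per-resource state.
    type Resource struct {
        ID string
    }

    // propagateIDs copies IDs from old resources onto their unchanged new
    // counterparts before plan application begins, so a restarted deployment
    // sees the IDs recorded by the previous partial run.
    func propagateIDs(unchanged map[*Resource]*Resource) {
        for newRes, oldRes := range unchanged {
            newRes.ID = oldRes.ID
        }
    }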
This change does a few things:
* First and foremost, it tracks configuration variables that are
initialized, and optionally prints them out as part of the
prelude/header (based on --show-config), both in a dry-run (plan)
and in an actual deployment (apply).
* It tidies up some of the colorization and messages, and includes
nice banners like "Deploying changes:", etc.
* Fix an assertion.
* Issue a new error
"One or more errors occurred while applying X's configuration"
just to make it easier to distinguish configuration-specific
failures from ordinary ones.
* Change config keys to tokens.Token, not tokens.ModuleMember,
since it is legal for keys to represent class members (statics).
This change introduces a new informational message category to the
overall diagnostics infrastructure, and then wires up the resource
provider plugins stdout/stderr streams to it. In particular, a
write to stdout implies an informational message, whereas a write to
stderr implies an error. This is just a very simple and convenient
way for plugins to provide progress reporting; eventually we may
need something more complex, due to parallel evaluation of resource
graphs, however I hope we don't have to deviate too much from this.
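A rough sketch of the wiring (the sink callback is a stand-in for the real diagnostics interface):

    package sketch

    import (
        "bufio"
        "fmt"
        "io"
    )

    // forwardStream scans a plugin's output line by line and reports each line
    // through the diagnostics callback: stdout lines as informational messages,
    // stderr lines as errors.
    func forwardStream(r io.Reader, isStderr bool, sink func(severity, msg string)) error {
        scanner := bufio.NewScanner(r)
        for scanner.Scan() {
            if isStderr {
                sink("error", scanner.Text())
            } else {
                sink("info", scanner.Text())
            }
        }
        if err := scanner.Err(); err != nil {
            return fmt.Errorf("reading plugin output: %w", err)
        }
        return nil
    }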
Per Eric's suggestion, this moves the colors we use into a central
location so that it'll be easier someday down the road to reconfigure
and/or disable them, etc. This does not include a --no-colors option
although we should really include this soon before it gets too hairy.
This change introduces a new package, pkg/resource, that will form
the foundation for actually performing deployment plans and applications.
It contains the following key abstractions:
* resource.Provider is a wrapper around the CRUD operations exposed by
underlying resource plugins. It will eventually defer to resource.Plugin,
which itself defers -- over an RPC interface -- to the actual plugin, one
per package exposing resources. The provider will also understand how to
load, cache, and overall manage the lifetime of each plugin.
* resource.Resource is the actual resource object. This is created from
the overall evaluation object graph, but is simplified. It contains only
serializable properties, for example. Inter-resource references are
translated into serializable monikers as part of creating the resource.
* resource.Moniker is a serializable string that uniquely identifies
a resource in the Mu system. This is in contrast to resource IDs, which
are generated by resource providers and generally opaque to the Mu
system. See marapongo/mu#69 for more information about monikers and some
of their challenges (namely, designing a stable algorithm).
* resource.Snapshot is a "snapshot" taken from a graph of resources. This
is a transitive closure of state representing one possible configuration
of a given environment. This is what plans are created from. Eventually,
two snapshots will be diffable, in order to perform incremental updates.
One way of thinking about this is that a snapshot of the old world's state
is advanced, one step at a time, until it reaches a desired snapshot of
the new world's state.
* resource.Plan is a plan for carrying out desired CRUD operations on a target
environment. Each plan consists of zero-to-many Steps, each of which has
a CRUD operation type, a resource target, and a next step. This is an
enumerator because it is possible the plan will evolve -- and introduce new
steps -- as it is carried out (hence, the Next() method). At the moment, this
is linearized; eventually, we want to make this more "graph-like" so that we
can exploit available parallelism within the dependencies.
There are tons of TODOs remaining. However, the `mu plan` command is functioning
with these new changes -- including colorization FTW -- so I'm landing it now.
This is part of marapongo/mu#38 and marapongo/mu#41.
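To make the relationships among these abstractions concrete, here is a rough Go sketch (everything below is illustrative; the real definitions carry much more detail):

    package resource

    // Moniker uniquely identifies a resource, independent of provider-assigned IDs.
    type Moniker string

    // Resource is a simplified, serializable view of an object from the evaluated graph.
    type Resource struct {
        Moniker    Moniker
        Type       string
        Properties map[string]interface{}
    }

    // Snapshot is a transitive closure of resource state for one possible
    // configuration of an environment.
    type Snapshot struct {
        Resources []*Resource
    }

    // Step is a single CRUD operation against a resource target; steps may be
    // produced lazily as the plan is carried out.
    type Step interface {
        Op() string // e.g. "create", "update", "delete"
        Resource() *Resource
        Next() (Step, error)
    }

    // Plan enumerates the steps needed to move a target environment from one
    // snapshot toward another.
    type Plan interface {
        // Steps returns the first step; callers follow Next from there.
        Steps() (Step, error)
    }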
* Add a TODO as a reminder to implement number toString formatting.
* Change the Loreley delimiters to something obscure ("<{%" and "%}>")
to avoid conflicting with actual characters we might use in messages.
Also, make the assertions more descriptive should Loreley fail.
* Rip out a debug.PrintStack() in verbose logging.
* Check the underlying pointer's object type for +=, not the pointer type.
This change implements stack traces. This is primarily so that we
can print out a full stack trace in the face of an unhandled exception,
and is done simply by recording the full trace during evaluation
alongside the existing local variable scopes.
* Pursue the default/optional checking if a property value == nil.
* Use the Interface() function to convert a reflect.Value to its underlying
interface{} value. This is required for typechecking to check out.
* Also, unrelated to the above, change type assertions to use nil rather than
allocating real objects. Although minimal, this incurs less GC pressure.
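For the last point, the usual Go idiom for a compile-time interface check looks like this: asserting with a nil pointer conversion proves the type implements the interface without allocating anything (the types here are made up for illustration).

    package sketch

    type Resource interface {
        URN() string
    }

    type bucket struct{}

    func (b *bucket) URN() string { return "urn:sketch:bucket" }

    // Compile-time assertion with no allocation.
    var _ Resource = (*bucket)(nil)

    // The alternative allocates a real object just to make the same point:
    //     var _ Resource = &bucket{}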
During subtypeOf checking, we need to walk the chain of documents from
which a stack came. This is because, due to template expansion, we'll
end up with a different document for instantiated types than uninstantiated
ones. This change keeps track of the parent and walks it appropriately.