This includes a few changes:
* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.
* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.
* The CLI is renamed from lumi to pulumi.
We are renaming Lumi to Pulumi Fabric. This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, leaving the CLI tools and other
changes to follow soon afterwards.
This change fixes a few things:
* Most importantly, we need to place a leading "." in the paths
passed to Gometalinter; otherwise, some sub-linters just silently
skip the directory altogether. errcheck, a very important linter,
is one such example!
* Use an explicit Gometalinter.json file to configure the various
settings. This flips on a few additional linters that aren't
on by default (like line length checking). Sadly, a few that
I'd like to enable take way too much time, so in the future
we may consider a nightly job (this includes code similarity,
unused parameters, unused functions, and others that generally
require global analysis).
* Now that we're running more linters, however, linting takes a while!
The core Lumi project now takes 26 seconds to lint on my laptop.
That's not terrible, but it's long enough that we don't want to
do the silly "run them twice" thing our Makefiles were previously
doing. Instead, we shall deploy some $$(($${PIPESTATUS[1]}-1))-fu
to rely on the fact that grep returns 1 on "zero lines" (see the
sketch after this list).
* Finally, fix the many issues that this turned up.
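To make the grep trick concrete, here is roughly what the Makefile rule
looks like (a hypothetical rule for illustration, not our exact one;
PIPESTATUS is a bash-ism, so the Makefile must opt into bash):

```make
SHELL := /bin/bash

lint:
	@gometalinter ./... | grep -v 'tolerated-warning-pattern' ; \
	    exit $$(($${PIPESTATUS[1]}-1))
```

If grep prints nothing, it exits with 1, so the arithmetic maps "no lint
output" to a zero (success) exit code, and any surviving output to a
failure, without running the linters twice.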
I think(?) we are done, except, of course, for needing to drive
down some of the cyclomatic complexity issues (which I'm possibly
going to punt on; see pulumi/lumi#259 for more details).
This change implements `mapper.Encode` "for real" (that is, in a way
that isn't a complete embarrassment). It uses the obvious reflection
trickery to encode a tagged struct and its values as a JSON-like
in-memory map and collection of keyed values.
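For the curious, the heart of the reflection trickery can be sketched in a
few lines (a standalone illustration with hypothetical types, not the
actual mapper code):

```go
package main

import (
	"fmt"
	"reflect"
)

// encode walks a struct's fields via reflection and returns a JSON-like
// map, keyed by each field's `lumi` tag (falling back to the field name).
func encode(v interface{}) map[string]interface{} {
	out := make(map[string]interface{})
	rv := reflect.ValueOf(v)
	rt := rv.Type()
	for i := 0; i < rt.NumField(); i++ {
		field := rt.Field(i)
		key := field.Tag.Get("lumi")
		if key == "" {
			key = field.Name
		}
		out[key] = rv.Field(i).Interface()
	}
	return out
}

func main() {
	type instance struct {
		Name string `lumi:"name"`
		Size int    `lumi:"size"`
	}
	fmt.Println(encode(instance{Name: "web", Size: 3}))
	// Output: map[name:web size:3]
}
```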
During this, I took the opportunity to also clean up a few other things
that had been bugging me. Namely, the presence of `mapper.Object` was
always error-prone, since it isn't a true "typedef" in the sense that
it carries extra RTTI. Instead of doing that, let's just use the real
`map[string]interface{}` "JSON-map-like" object type. Even better, we
no longer require resource providers to deal with the mapper
infrastructure. Instead, the `Check` function can simply return an
array of errors. It's still best practice to return field-specific errors
to facilitate better diagnostics, but it's no longer required; and I've
added `resource.NewFieldError` to eliminate the need to import mapper.
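A provider-side Check can therefore be as simple as the following sketch
(hypothetical provider and property names; real code would prefer
resource.NewFieldError for field-specific failures):

```go
package main

import (
	"errors"
	"fmt"
)

type instanceProvider struct{}

// Check validates the inbound property bag, returning zero or more errors;
// an empty result means the resource passed validation.
func (p *instanceProvider) Check(props map[string]interface{}) []error {
	var failures []error
	if _, has := props["name"]; !has {
		// Field-specific errors (e.g., via resource.NewFieldError) give
		// better diagnostics, but any plain error is now acceptable.
		failures = append(failures, errors.New("missing required field 'name'"))
	}
	return failures
}

func main() {
	p := &instanceProvider{}
	fmt.Println(p.Check(map[string]interface{}{"size": 3}))
}
```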
As of this change, we can also consistently emit RPC structs with `lumi`
tags, rather than `lumi` tags on the way in and `json` on the way out.
This completes pulumi/lumi#183.
This change makes progress on a few things with respect to properly
receiving properties on the engine side of the RPC boundary, coming
from the provider side. The issues here are twofold:
1. Properties need to get unmapped using a JSON-tag-sensitive
marshaler, so that they are cased properly, etc. For that, we
have a new mapper.Unmap function (which is ultra lame -- see
pulumi/lumi#138).
2. We have the reverse problem with respect to resource IDs: on
the send side, we must translate from URNs (which the engine
knows about) to provider IDs (which the provider knows about);
similarly, on the receive side, we must translate from
provider IDs back into URNs (a sketch follows below).
As a result of these getting fixed, we can now properly marshal the
resulting properties back into the resource object during the plan
execution, alongside propagating and memoizing its ID.
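A rough sketch of the translation in item 2 (hypothetical types and lookup
tables; the real engine tracks these mappings as part of plan execution):

```go
package main

import "fmt"

// URN is the engine's stable logical name for a resource; ID is the
// physical identifier the provider knows it by.
type (
	URN string
	ID  string
)

// On the send side, URN-valued properties become provider IDs...
func marshalIDs(props map[string]URN, ids map[URN]ID) map[string]ID {
	out := make(map[string]ID)
	for k, urn := range props {
		out[k] = ids[urn]
	}
	return out
}

// ...and on the receive side, provider IDs become URNs again.
func unmarshalIDs(props map[string]ID, urns map[ID]URN) map[string]URN {
	out := make(map[string]URN)
	for k, id := range props {
		out[k] = urns[id]
	}
	return out
}

func main() {
	ids := map[URN]ID{"urn:lumi:instance::web": "i-0abc123"}
	urns := map[ID]URN{"i-0abc123": "urn:lumi:instance::web"}
	sent := marshalIDs(map[string]URN{"instance": "urn:lumi:instance::web"}, ids)
	fmt.Println(sent)                     // map[instance:i-0abc123]
	fmt.Println(unmarshalIDs(sent, urns)) // map[instance:urn:lumi:instance::web]
}
```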
This change modifies the existing resource provider RPC interface slightly.
Instead of the Create API returning the bag of output properties, we will
rely on the Get API to do so. As a result, this change takes an initial
whack at implementing Get on all existing AWS resources. The Get API needs
to return a fully populated structure containing all inputs and outputs.
Believe it or not, this is actually part of pulumi/lumi#90.
This was done because just returning output properties is insufficient.
Any input properties that weren't supplied may have default values, for
example, and it is wholly reasonable to expect Lumi scripts to depend on
those values in addition to output values.
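Sketched as a Go interface, the new division of labor looks roughly like
this (signatures approximated from the description, not copied from the
actual RPC definitions):

```go
package provider

// ID is the physical identifier a provider assigns to a resource.
type ID string

// Provider sketches the revised contract: Create returns just the new
// resource's ID, while Get returns the fully populated property bag --
// inputs (with any defaults applied) plus computed outputs.
type Provider interface {
	Create(props map[string]interface{}) (ID, error)
	Get(id ID) (map[string]interface{}, error)
}
```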
This isn't fully functional in its current form, because making this
change turned up many other related changes required to enable output
properties. For instance, at the moment resource properties are defined
in terms of `resource.URN`s, and yet unfortunately the provider side
knows nothing of URNs (instead preferring to deal in `resource.ID`s).
I am going to handle that in a subsequent isolated change, since it will
have far-reaching implications beyond just modifying create and get.
This changes the object mapper infrastructure to offer more fine-grained
reporting of errors, and control over verification, during the mapping from
an untyped payload to a typed one. As a result, we can eliminate a bit of
the explicit unmarshaling goo in the AWS providers (but not all of it; I'm
sure there is more we can, and should, be doing here...).
This change recognizes TextUnmarshaler during object mapping, and
will defer to it when we have a string but are assigning to a
non-string target that implements the interface.
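Concretely, the deferral amounts to something like the following (a minimal
standalone sketch, not the mapper's actual code):

```go
package main

import (
	"encoding"
	"fmt"
	"net"
	"reflect"
)

// assignString stores s into the value that target points at; if that
// value is not a string but implements encoding.TextUnmarshaler, we defer
// to its UnmarshalText method instead.
func assignString(target interface{}, s string) error {
	rv := reflect.ValueOf(target).Elem()
	if rv.Kind() == reflect.String {
		rv.SetString(s)
		return nil
	}
	if tu, ok := target.(encoding.TextUnmarshaler); ok {
		return tu.UnmarshalText([]byte(s))
	}
	return fmt.Errorf("cannot assign string to %v", rv.Type())
}

func main() {
	var ip net.IP // net.IP implements encoding.TextUnmarshaler
	if err := assignString(&ip, "192.168.0.1"); err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.0.1
}
```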
* Print a pretty message if the plan has nothing to do:
"info: nothing to do -- resources are up to date"
* Add an extra validation step after reading in a snapshot,
so that we detect more errors sooner. For example, I've
fed in the wrong file several times, and it just chugs
along as though it were actually a snapshot.
* Skip printing nulls in most plan outputs. These just
clutter up the output.
This change revives some compiler tests that are still lingering around
from the old architecture, before our latest round of ship burning.
It also fixes up some bugs uncovered during this:
* Don't claim that a symbol's kind is incorrect in the binder error
message when it wasn't found. Instead, say that it was missing.
* Do not attempt to compile if an error was issued during workspace
resolution and/or loading of the Mufile. This leads to trying to
load an empty path, and badness quickly ensues (a crash).
* Issue an error if the Mufile wasn't found (this got lost apparently).
* Rename the ErrorMissingPackageName message to ErrorInvalidPackageName,
since missing names are now caught by our new fancy decoder that
understands required versus optional fields. We still need to guard
against illegal characters in the name, including the empty string "".
* During decoding, reject !src.IsValid elements. This represents the
zero value and should be treated equivalently to a missing field.
* Do not permit empty strings "" as Names or QNames. The old logic
accidentally permitted them because regexp.FindString("") == "", no
matter the regex! (See the snippet after this list.)
* Move the TestDiagSink abstraction to a new pkg/util/testutil package,
allowing us to share this common code across multiple package tests.
* Fix up a few messages that needed tidying or to use Infof vs. Info.
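The regexp gotcha above is subtle enough to warrant a tiny illustration (a
standalone snippet; the pattern shown is made up):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	name := regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)
	// FindString returns "" both for "no match" and for an empty match,
	// so a check like name.FindString(s) == s accidentally accepts "".
	fmt.Printf("%q\n", name.FindString("")) // ""
	// MatchString gives the unambiguous answer.
	fmt.Println(name.MatchString("")) // false
}
```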
The binder tests -- deleted in this change -- are about to come back;
however, I am splitting up the changes, since this represents a passing
fixed point.
This change further merges the new AST and MuPack/MuIL formats and
abstractions into the core of the compiler. A good amount of the old
code is gone now; I decided against ripping it all out in one fell
swoop so that I can methodically check that we are preserving all
relevant decisions and/or functionality we had in the old model.
The changes are too numerous to outline in this commit message,
however, here are the noteworthy ones:
* Split up the notion of symbols and tokens, resulting in:
- pkg/symbols for true compiler symbols (bound nodes)
- pkg/tokens for name-based tokens, identifiers, constants
* Several packages move underneath pkg/compiler:
- pkg/ast becomes pkg/compiler/ast
- pkg/errors becomes pkg/compiler/errors
- pkg/symbols becomes pkg/compiler/symbols
* pkg/ast/... becomes pkg/compiler/legacy/ast/...
* pkg/pack/ast becomes pkg/compiler/ast.
* pkg/options goes away, merged back into pkg/compiler.
* All binding functionality moves underneath a dedicated
package, pkg/compiler/binder. The legacy.go file contains
cruft that will eventually go away, while the other files
represent a halfway point between new and old, but are
expected to stay roughly in the current shape.
* All parsing functionality is moved underneath a new
pkg/compiler/metadata namespace, and we adopt new terminology
"metadata reading" since real parsing happens in the MetaMu
compilers. Hence, Parser has become metadata.Reader.
* In general, phases of the compiler no longer share access to
the actual compiler.Compiler object. Instead, shared state is
moved to the core.Context object underneath pkg/compiler/core.
* Dependency resolution during binding has been rewritten to
the new model, including stashing bound package symbols in the
context object, and detecting import cycles.
* Compiler construction does not take a workspace object. Instead,
creation of a workspace is entirely hidden inside of the compiler's
constructor logic.
* There are three Compile* functions on the Compiler interface, to
support different styles of invoking compilation (sketched below):
Compile() auto-detects a Mu package, based on the workspace;
CompilePath(string) loads the target as a Mu package and compiles it,
regardless of the workspace settings; and CompilePackage(*pack.Package)
compiles a pre-loaded package AST, again regardless of workspace.
* Delete the _fe, _sema, and parsetree phases. They are no longer
relevant and the functionality is largely subsumed by the above.
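For reference, those three entrypoints look roughly like this (signatures
approximated from the description above; the import path is an assumption):

```go
package compiler

import "github.com/pulumi/lumi/pkg/pack" // path assumed for illustration

// Compiler sketches the three styles of invoking compilation; the real
// interface may differ in signatures and return values.
type Compiler interface {
	// Compile auto-detects a Mu package based on the workspace settings.
	Compile()
	// CompilePath loads the target path as a Mu package and compiles it,
	// regardless of the workspace settings.
	CompilePath(path string)
	// CompilePackage compiles a pre-loaded package AST, again regardless
	// of the workspace settings.
	CompilePackage(pkg *pack.Package)
}
```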
...and so very much more. I'm surprised I ever got this to compile again!
This is the first change of many to merge the MuPack/MuIL formats
into the heart of the "compiler".
In fact, the entire meaning of the compiler has changed, from
something that took metadata and produced CloudFormation, into
something that takes MuPack/MuIL as input, and produces a MuGL
graph as output. Although this process is distinctly different,
there are several aspects we can reuse, like workspace management,
dependency resolution, and some amount of name binding and symbol
resolution, just as a few examples.
An overview of the compilation process is available as a comment
inside of the compiler.Compile function, although it is currently
unimplemented.
The relationship between Workspace and Compiler has been semi-
inverted, such that all Compiler instances require a Workspace
object. This is more natural anyway and moves some of the detection
logic "outside" of the Compiler. Similarly, Options has moved to
a top-level package, so that Workspace and Compiler may share
access to it without causing package import cycles.
Finally, all that templating crap is gone. This alone is cause
for mass celebration!
This change overhauls the approach to custom decoding. Instead of decoding
the parts of the struct that are "trivial" in one pass, and then patching up
the structure afterwards with custom decoding, the decoder itself understands
the notion of custom decoder functions.
First, the general purpose logic has moved out of pkg/pack/encoding and into
a new package, pkg/util/mapper. Most functions are now members of a new top-
level type, Mapper, which may be initialized with custom decoders. This
is a map from target type to a function that can decode objects into it.
Second, the AST-specific decoding logic is rewritten to use it. All AST nodes
are now supported, including definitions, statements, and expressions. The
overall approach here is to simply define a custom decoder for any interface
type that will occur in a node field position. The mapper, upon encountering
such a type, will consult the custom decoder map; if a decoder is found, it
will be used; otherwise, an error results. This decoder then needs to switch
on the type-discriminating kind field present in the metadata, creating
a concrete struct of the right type, and then converting it to the desired
interface type. Note that, subtly, interface types used only for "marker"
purposes don't require any custom decoding, because they do not appear in
field positions and therefore won't be encountered during the decoding process.
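To make the shape of this concrete, here is a trimmed-down, self-contained
sketch of the idea (all names hypothetical; the real implementation lives
in pkg/util/mapper):

```go
package main

import (
	"fmt"
	"reflect"
)

// Decoder turns an untyped, JSON-like map into a value of some target type.
type Decoder func(m map[string]interface{}) (interface{}, error)

// Mapper carries a table of custom decoders keyed by target type; it
// consults this table whenever it encounters an interface-typed field.
type Mapper struct {
	decoders map[reflect.Type]Decoder
}

// Expression stands in for an interface type that occurs in AST node
// field positions.
type Expression interface{ isExpr() }

type NumberLiteral struct{ Value float64 }

func (NumberLiteral) isExpr() {}

// decodeExpression switches on the discriminating "kind" field to build
// the right concrete node, then returns it as an Expression.
func decodeExpression(m map[string]interface{}) (interface{}, error) {
	switch kind := m["kind"]; kind {
	case "NumberLiteral":
		return NumberLiteral{Value: m["value"].(float64)}, nil
	default:
		return nil, fmt.Errorf("unrecognized expression kind: %v", kind)
	}
}

func main() {
	exprType := reflect.TypeOf((*Expression)(nil)).Elem()
	m := Mapper{decoders: map[reflect.Type]Decoder{exprType: decodeExpression}}
	node := map[string]interface{}{"kind": "NumberLiteral", "value": 42.0}
	expr, err := m.decoders[exprType](node)
	fmt.Println(expr, err) // {42} <nil>
}
```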