# Description
This enables you to insert arbitrary DOT statements inside the `digraph` element, which can be used to style the generated DOT file. For example, given a file `fragment.dot` with the contents:
```
rankdir=LR
ranksep=0.01
dpi=300
node [shape=rect style=filled fillcolor="#8a3391" fontcolor="white" fontname="Arial"]
edge [fontname="Arial" fontcolor="#8a3391" penwidth=2]
```
You could then call
`pulumi stack graph --dot-fragment "$(<fragment.dot)" stack.dot`
or use it in a command pipeline such as
`pulumi stack graph --dot-fragment "$(<fragment.dot)" stack.dot && dot -Tpng -O stack.dot && open stack.dot.png`
to generate and display a styled view of the graph without having to hand-edit the DOT file in the process.
## Checklist
- [X] I have run `make tidy` to update any new dependencies
- [X] I have run `make lint` to verify my code passes the lint check
- [X] I have formatted my code using `gofumpt`
<!--- Please provide details if the checkbox below is to be left
unchecked. -->
- [ ] I have added tests that prove my fix is effective or that my
feature works
<!---
User-facing changes require a CHANGELOG entry.
-->
- [X] I have run `make changelog` and committed the
`changelog/pending/<file>` documenting my change
<!--
If the change(s) in this PR is a modification of an existing call to the
Pulumi Cloud,
then the service should honor older versions of the CLI where this
change would not exist.
You must then bump the API version in
/pkg/backend/httpstate/client/api.go, as well as add
it to the service.
-->
- [ ] Yes, there are changes in this PR that warrant bumping the Pulumi
Cloud API version
<!-- @Pulumi employees: If yes, you must submit corresponding changes in
the service repo. -->
---------
Co-authored-by: Paul Roberts <proberts@pulumi.com>
Fixes https://github.com/pulumi/pulumi/issues/12738

https://github.com/pulumi/pulumi/pull/11834 turned on the prealloc
linter and changed a load of slice uses from just `var x []T` to `x :=
make([]T, 0, preallocSize)`. This was good for performance, but it turns
out there are a number of places in the codebase that treat a `nil`
slice as semantically different from an empty slice.
Trying to test that, or even reason it through for every call site, is
intractable, so this PR replaces all expressions of the form `make([]T,
0, size)` with a call to `slice.Prealloc[T](size)`. When size is 0, that
returns a nil slice rather than an empty slice.
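A minimal sketch of the helper's contract (the package path and implementation here are assumed from the description above, not copied from the repo):
```go
package slice

// Prealloc returns a slice with capacity for size elements, but returns a
// nil slice when size is 0, so callers that distinguish nil from empty
// keep seeing nil exactly as they did before the prealloc changes.
func Prealloc[T any](size int) []T {
	if size == 0 {
		return nil
	}
	return make([]T, 0, size)
}
```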
Migrates all remaining usages of
`contract.Assert*` and `contract.Require*` to the f variants,
which require adding meaningful error messages.
There were a couple of cases where a `testing.T` or `testing.B`
was already available; for those, this uses `t.FailNow` or `require.NoError`.
Refs #12132
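To illustrate the shape of that migration, here is a small, self-contained sketch; `assertf` is a hypothetical stand-in for the contract package's f variant, since the point is simply that every assertion now carries a formatted explanation:
```go
package main

import "fmt"

// assertf is a hypothetical stand-in for the contract package's f variant:
// the migration replaces bare condition checks with checks that carry a
// formatted, human-readable message.
func assertf(cond bool, format string, args ...any) {
	if !cond {
		panic(fmt.Sprintf("fatal: "+format, args...))
	}
}

func main() {
	urn := "urn:pulumi:dev::proj::aws:s3/bucket:Bucket::mine" // placeholder
	// Before (roughly): contract.Assert(urn != "")
	// After: the same condition, plus an explanation of what went wrong.
	assertf(urn != "", "expected a non-empty resource URN")
	fmt.Println("ok")
}
```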
Replace `buffer.WriteString(fmt.Sprintf(..))` calls,
where buffer is one of `bytes.Buffer`, `strings.Builder`, or `bufio.Writer`,
with equivalent `fmt.Fprintf` calls -- all those types are io.Writers.
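A minimal before/after sketch of that rewrite (placeholder values, not code from this repo):
```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	var b strings.Builder
	name, n := "example", 3 // placeholder values

	// Before: format into a temporary string, then copy it into the buffer.
	b.WriteString(fmt.Sprintf("resource %q has %d children\n", name, n))

	// After: write the formatted output directly; strings.Builder (like
	// bytes.Buffer and bufio.Writer) implements io.Writer.
	fmt.Fprintf(&b, "resource %q has %d children\n", name, n)

	fmt.Print(b.String())
}
```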
* Make `async:true` the default for `invoke` calls (#3750)
* Switch away from native grpc impl. (#3728)
* Remove usage of the 'deasync' library from @pulumi/pulumi. (#3752)
* Only retry as long as we get unavailable back. Anything else continues. (#3769)
* Handle all errors for now. (#3781)
* Do not assume --yes was present when using pulumi in non-interactive mode (#3793)
* Upgrade all paths for sdk and pkg to v2
* Backport C# invoke classes and other recent gen changes (#4288)
Adjust C# generation
* Replace IDeployment with a sealed class (#4318)
Replace IDeployment with a sealed class
* .NET: default to args subtype rather than Args.Empty (#4320)
* Adding the System namespace for .NET code gen
This is required for using Obsolete attributes for deprecations:
```
Iam/InstanceProfile.cs(142,10): error CS0246: The type or namespace name 'ObsoleteAttribute' could not be found (are you missing a using directive or an assembly reference?) [/Users/stack72/code/go/src/github.com/pulumi/pulumi-aws/sdk/dotnet/Pulumi.Aws.csproj]
Iam/InstanceProfile.cs(142,10): error CS0246: The type or namespace name 'Obsolete' could not be found (are you missing a using directive or an assembly reference?) [/Users/stack72/code/go/src/github.com/pulumi/pulumi-aws/sdk/dotnet/Pulumi.Aws.csproj]
```
* Fix the nullability of config type properties in C# codegen (#4379)
1. Output different-colored edges for parent-child resource
relationships
2. Allow the changing of edge colors via command-line parameters
3. Allow the skipping of the parent-child graph or the
dependency graph when calculating all edges
This modifies the Graph interface slightly to allow an edge to specify
what color should be used when drawing it.
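As a hedged sketch of what "an edge that can specify its color" might look like (names here are illustrative, not the actual graph package's interface):
```go
package main

import "fmt"

// Edge is an illustrative shape for the change described above: each edge
// reports the color it should be drawn with in the DOT output.
type Edge interface {
	Target() string // label of the vertex this edge points to
	Color() string  // DOT color for the edge; "" means use the default
}

// parentEdge is a hypothetical parent-child edge drawn in its own color.
type parentEdge struct{ to string }

func (e parentEdge) Target() string { return e.to }
func (e parentEdge) Color() string  { return "#aa6639" }

func main() {
	var e Edge = parentEdge{to: "Resource1"}
	fmt.Printf("Resource0 -> %s [color = %q];\n", e.Target(), e.Color())
}
```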
This includes a few changes:
* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.
* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.
* The CLI is renamed from lumi to pulumi.
We are renaming Lumi to Pulumi Fabric. This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, without the CLI tools and other
changes that will follow soon afterwards.
This change fixes a few things:
* Most importantly, we need to place a leading "." in the paths
to Gometalinter, otherwise some sub-linters just silently skip
the directory altogether. errcheck is one such linter, which
is a very important one!
* Use an explicit Gometalinter.json file to configure the various
settings. This flips on a few additional linters that aren't
on by default (like line length checking). Sadly, a few that
I'd like to enable take waaaay too much time, so in the future
we may consider a nightly job (this includes code similarity,
unused parameters, unused functions, and others that generally
require global analysis).
* Now that we're running more, however, linting takes a while!
The core Lumi project now takes 26 seconds to lint on my laptop.
That's not terrible, but it's long enough that we don't want to
do the silly "run them twice" thing our Makefiles were previously
doing. Instead, we shall deploy some $$($${PIPESTATUS[1]}-1))-fu
to rely on the fact that grep returns 1 on "zero lines".
* Finally, fix the many issues that this turned up.
I think(?) we are done, except, of course, for needing to drive
down some of the cyclomatic complexity issues (which I'm possibly
going to punt on; see pulumi/lumi#259 for more details).
This changes a few naming things:
* Rename "husk" to "environment" (`coco env` for short).
* Rename NutPack/NutIL to CocoPack/CocoIL.
* Rename the primary Nut.yaml/json project file to Coconut.yaml/json.
* Rename the compiled Nutpack.yaml/json file to Cocopack.yaml/json.
* Rename the package asset directory from nutpack/ to .coconut/.
This change is mostly just a rename of Moniker to URN. It does also
prefix resource URNs to have a standard URN namespace; in other words,
"urn🥥<name>", where <name> is the same as the prior Moniker.
This is a minor step that helps to prepare us for pulumi/coconut#109.
This change is a first whack at implementing updates.
Creation and deletion plans are pretty straightforward; we just take
a single graph, topologically sort it, and perform the operations in
the right order. For creation, this is in dependency order (things
that are depended upon must be created before dependents); for deletion,
this is in reverse-dependency order (things that depend on others must
be deleted before dependencies). These are just special cases of the more
general idea of performing DAG operations in dependency order.
Updates must work in terms of this more general notion. For example:
* It is an error to delete a resource while another refers to it; thus,
resources are deleted after deleting dependents, or after updating
dependent properties that reference the resource to new values.
* It is an error to depend on a resource before it is created;
thus, resources must be created before dependents are created, and/or
before updates to existing resource properties that would cause them
to refer to the new resource.
Of course, all of this is tangled up in a graph of dependencies. As a
result, we must create a DAG of the dependencies between creates, updates,
and deletes, and then topologically sort this DAG, in order to determine
the proper order of update operations.
To do this, we slightly generalize the existing graph infrastructure,
while also specializing two kinds of graphs; the existing one becomes a
heapstate.ObjectGraph, while this new one is resource.planGraph (internal).
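The dependency ordering described above is just a topological sort of the DAG; a small, self-contained illustration of the idea (not the planner's actual graph code), using Kahn's algorithm:
```go
package main

import "fmt"

// topoSort returns nodes in dependency order: every node appears after all
// of the nodes it depends on. deps maps a node to the nodes it depends on.
func topoSort(deps map[string][]string) ([]string, error) {
	indegree := map[string]int{}
	for n, ds := range deps {
		if _, ok := indegree[n]; !ok {
			indegree[n] = 0
		}
		for _, d := range ds {
			if _, ok := indegree[d]; !ok {
				indegree[d] = 0
			}
			indegree[n]++ // n must come after d
		}
	}
	var ready []string
	for n, deg := range indegree {
		if deg == 0 {
			ready = append(ready, n)
		}
	}
	var order []string
	for len(ready) > 0 {
		n := ready[0]
		ready = ready[1:]
		order = append(order, n)
		for m, ds := range deps {
			for _, d := range ds {
				if d == n {
					indegree[m]--
					if indegree[m] == 0 {
						ready = append(ready, m)
					}
				}
			}
		}
	}
	if len(order) != len(indegree) {
		return nil, fmt.Errorf("dependency cycle detected")
	}
	return order, nil
}

func main() {
	// instance depends on subnet, subnet depends on vpc: creation order is
	// vpc, subnet, instance; deletion order is the reverse.
	deps := map[string][]string{"instance": {"subnet"}, "subnet": {"vpc"}, "vpc": {}}
	order, err := topoSort(deps)
	if err != nil {
		panic(err)
	}
	fmt.Println("create order:", order)
}
```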
This change introduces object monikers. These are unique, serializable
names that refer to resources created during the execution of a MuIL
program. They are pretty darned ugly at the moment, but at least they
serve their desired purpose. I suspect we will eventually want to use
more information (like edge "labels" (variable names and what not)),
but this should suffice for the time being. The names right now are
particularly sensitive to simple refactorings.
This is enough for marapongo/mu#69 during the current sprint, although
I will keep the work item (in a later sprint) to think more about how
to make these more stable. I'd prefer to do that with a bit of
experience under our belts first.
This change introduces a new package, pkg/resource, that will form
the foundation for actually performing deployment plans and applications.
It contains the following key abstractions:
* resource.Provider is a wrapper around the CRUD operations exposed by
underlying resource plugins. It will eventually defer to resource.Plugin,
which itself defers -- over an RPC interface -- to the actual plugin, one
per package exposing resources. The provider will also understand how to
load, cache, and overall manage the lifetime of each plugin.
* resource.Resource is the actual resource object. This is created from
the overall evaluation object graph, but is simplified. It contains only
serializable properties, for example. Inter-resource references are
translated into serializable monikers as part of creating the resource.
* resource.Moniker is a serializable string that uniquely identifies
a resource in the Mu system. This is in contrast to resource IDs, which
are generated by resource providers and generally opaque to the Mu
system. See marapongo/mu#69 for more information about monikers and some
of their challenges (namely, designing a stable algorithm).
* resource.Snapshot is a "snapshot" taken from a graph of resources. This
is a transitive closure of state representing one possible configuration
of a given environment. This is what plans are created from. Eventually,
two snapshots will be diffable, in order to perform incremental updates.
One way of thinking about this is that a snapshot of the old world's state
is advanced, one step at a time, until it reaches a desired snapshot of
the new world's state.
* resource.Plan is a plan for carrying out desired CRUD operations on a target
environment. Each plan consists of zero-to-many Steps, each of which has
a CRUD operation type, a resource target, and a next step. This is an
enumerator because it is possible the plan will evolve -- and introduce new
steps -- as it is carried out (hence, the Next() method). At the moment, this
is linearized; eventually, we want to make this more "graph-like" so that we
can exploit available parallelism within the dependencies.
There are tons of TODOs remaining. However, the `mu plan` command is functioning
with these new changes -- including colorization FTW -- so I'm landing it now.
This is part of marapongo/mu#38 and marapongo/mu#41.
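As a hedged sketch of the Plan/Step enumerator shape described above (names and signatures are illustrative, not the actual pkg/resource API):
```go
package main

import "fmt"

// StepOp is the kind of CRUD operation a step performs.
type StepOp string

const (
	OpCreate StepOp = "create"
	OpUpdate StepOp = "update"
	OpDelete StepOp = "delete"
)

// Step is a single operation against one resource target. Next lets the
// plan evolve while it is carried out, which is why the plan is an
// enumerator rather than a fixed list of steps.
type Step interface {
	Op() StepOp
	Resource() string // moniker of the target resource
	Next() (Step, bool)
}

// listStep is a trivial linearized implementation, for illustration only.
type listStep struct {
	op   StepOp
	urn  string
	next *listStep
}

func (s *listStep) Op() StepOp       { return s.op }
func (s *listStep) Resource() string { return s.urn }
func (s *listStep) Next() (Step, bool) {
	if s.next == nil {
		return nil, false
	}
	return s.next, true
}

func main() {
	second := &listStep{op: OpCreate, urn: "instance"}
	first := &listStep{op: OpCreate, urn: "securityGroup", next: second}
	for s, ok := Step(first), true; ok; s, ok = s.Next() {
		fmt.Println(s.Op(), s.Resource())
	}
}
```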
In a select few cases (right now, just nulls and boolean true/false),
we use predefined object constants to avoid allocating wastefully.
In the future, it's possible we will do this for other types, like
interned strings. So, the graph generator should tolerate this.
This change starts tracking all objects in our MuGL graph. The reason is
that resources can be buried deep within a nest of objects, and unless we
do "on the fly" reachability analysis, we can't know a priori whether any
given object will be of interest or not. So, we track 'em all. For large
programs, this would obviously create space leak problems, so we'll
eventually, I assume, want to prune the graph at some point.
With this change, the EC2 instance example produces a (gigantic) graph!
This change adds a --dot option to the eval command, which will simply
output the MuGL graph using the DOT language. This allows you to use
tools like Graphviz to inspect the resulting graph, including using the
`dot` command to generate images (like PNGs and whatnot).
For example, the simple MuGL program:
```
class C extends mu.Resource {...}
class B extends mu.Resource {...}
class A extends mu.Resource {
    private b: B;
    private c: C;
    constructor() {
        this.b = new B();
        this.c = new C();
    }
}
let a = new A();
```
Results in the following DOT file, from `mu eval --dot`:
```
strict digraph {
    Resource0 [label="A"];
    Resource0 -> {Resource1 Resource2}
    Resource1 [label="B"];
    Resource2 [label="C"];
}
```
Eventually the auto-generated ResourceN identifiers will go away in
favor of using true object monikers (marapongo/mu#76).
This is pretty worthless, but will help me debug some issues locally.
Eventually we want MuGL to be fully serializable, including the option
to emit DOT files.
This change actually invokes the OnVariableAssign interpreter hook
at the right places. (Renamed from OnAssignProperty, as it will now
handle all variable assignments, and not just properties.) This
requires tracking a bit more information about l-values so that we
can accurately convey the target object and symbol associated with
the assignment (resulting in the new "location" struct type).
This change lowers the information collected about resource allocations
and dependencies into the MuGL graph representation.
As part of this, we've moved pkg/compiler/eval out into its own top-level
package, pkg/eval, and split up its innards into a smaller sub-package,
pkg/eval/rt, that contains the Object and Pointer abstractions. This
permits the graph generation logic to use it without introducing cycles.
This change refactors the interpreter hooks into a first class interface
with many relevant event handlers (including enter/leave functions for
packages, modules, and functions -- something necessary to generate object
monikers). It also includes a rudimentary start for tracking actual object
allocations and their dependencies, a step towards creating a MuGL graph.
This change further merges the new AST and MuPack/MuIL formats and
abstractions into the core of the compiler. A good amount of the old
code is gone now; I decided against ripping it all out in one fell
swoop so that I can methodically check that we are preserving all
relevant decisions and/or functionality we had in the old model.
The changes are too numerous to outline in this commit message,
however, here are the noteworthy ones:
* Split up the notion of symbols and tokens, resulting in:
- pkg/symbols for true compiler symbols (bound nodes)
- pkg/tokens for name-based tokens, identifiers, constants
* Several packages move underneath pkg/compiler:
- pkg/ast becomes pkg/compiler/ast
- pkg/errors becomes pkg/compiler/errors
- pkg/symbols becomes pkg/compiler/symbols
* pkg/ast/... becomes pkg/compiler/legacy/ast/...
* pkg/pack/ast becomes pkg/compiler/ast.
* pkg/options goes away, merged back into pkg/compiler.
* All binding functionality moves underneath a dedicated
package, pkg/compiler/binder. The legacy.go file contains
cruft that will eventually go away, while the other files
represent a halfway point between new and old, but are
expected to stay roughly in the current shape.
* All parsing functionality is moved underneath a new
pkg/compiler/metadata namespace, and we adopt new terminology
"metadata reading" since real parsing happens in the MetaMu
compilers. Hence, Parser has become metadata.Reader.
* In general phases of the compiler no longer share access to
the actual compiler.Compiler object. Instead, shared state is
moved to the core.Context object underneath pkg/compiler/core.
* Dependency resolution during binding has been rewritten to
the new model, including stashing bound package symbols in the
context object, and detecting import cycles.
* Compiler construction does not take a workspace object. Instead,
creation of a workspace is entirely hidden inside of the compiler's
constructor logic.
* There are three Compile* functions on the Compiler interface, to
support different styles of invoking compilation: Compile() auto-detects
a Mu package, based on the workspace; CompilePath(string)
loads the target as a Mu package and compiles it, regardless of
the workspace settings; and, CompilePackage(*pack.Package) will
compile a pre-loaded package AST, again regardless of workspace.
* Delete the _fe, _sema, and parsetree phases. They are no longer
relevant and the functionality is largely subsumed by the above.
...and so very much more. I'm surprised I ever got this to compile again!