// Copyright 2016-2018, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package plugin

import (
	"context"

	"github.com/opentracing/opentracing-go"

	"github.com/pulumi/pulumi/pkg/diag"
	"github.com/pulumi/pulumi/pkg/util/rpcutil"
)

// Context is used to group related operations together so that associated OS resources can be cached, shared, and
// reclaimed as appropriate.
type Context struct {
	Diag       diag.Sink // the diagnostics sink to use for messages.
	StatusDiag diag.Sink // the diagnostics sink to use for status messages.
	Host       Host      // the host that can be used to fetch providers.
	Pwd        string    // the working directory to spawn all plugins in.

	tracingSpan opentracing.Span // the OpenTracing span to parent requests within.
}

// NewContext allocates a new context with a given sink and host. Note that the host is "owned" by this context from
// here forwards, such that when the context's resources are reclaimed, so too are the host's.
func NewContext(d, statusD diag.Sink, host Host, cfg ConfigSource, events Events,
	pwd string, runtimeOptions map[string]interface{}, parentSpan opentracing.Span) (*Context, error) {

	ctx := &Context{
		Diag:        d,
		StatusDiag:  statusD,
		Host:        host,
		Pwd:         pwd,
		tracingSpan: parentSpan,
	}
	if host == nil {
		h, err := NewDefaultHost(ctx, cfg, events, runtimeOptions)
		if err != nil {
			return nil, err
		}
		ctx.Host = h
	}
	return ctx, nil
}
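
// A hypothetical creation sketch, not part of this file: it assumes a
// diag.Sink named sink, a ConfigSource named cfg, and an opentracing.Span
// named span already exist at the call site. Passing a nil host asks
// NewContext to construct a default one from the supplied config source.
//
//	ctx, err := NewContext(sink, sink, nil, cfg, nil, ".", nil, span)
//	if err != nil {
//		return err
//	}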

// Request allocates a request sub-context.
func (ctx *Context) Request() context.Context {
	// TODO[pulumi/pulumi#143]: support cancellation.
	return opentracing.ContextWithSpan(context.Background(), ctx.tracingSpan)
}
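
// A hypothetical call-site sketch; client and req are assumed names for a
// plugin's gRPC client and request. Deriving the call's context from Request
// parents the RPC under this context's tracing span, so per-plugin calls
// appear beneath the overall operation in a trace.
//
//	resp, err := client.Configure(ctx.Request(), req)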

// Close reclaims all resources associated with this context.
func (ctx *Context) Close() error {
	if ctx.tracingSpan != nil {
		ctx.tracingSpan.Finish()
	}
	err := ctx.Host.Close()
	if err != nil && !rpcutil.IsBenignCloseErr(err) {
		return err
	}
	return nil
}
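
// A hypothetical teardown sketch: because the context owns its host, a single
// deferred Close shuts down every plugin the host loaded. The helper shown is
// assumed to be pkg/util/contract's IgnoreClose, imported by the caller.
//
//	defer contract.IgnoreClose(ctx)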