When handling root input variable values, we now consider unset and null
values to be equivalent to each other. This is consistent with how we
handle variables in embedded stacks, and very similar to how we handle
variables in the modules runtime with `nullable = false`.
One difference from the modules runtime case is that we do not prevent
a null default value for stack variables.
When evaluating a stack's root input variables, supplied by the caller,
we must apply any default values specified in the variable
configuration for variables with no specified value. This commit adds
this default fallback case, using NilVal as a marker indicating the lack
of a specified value.
If no default value exists for a variable, it is therefore required to
be supplied by the caller. This commit also reports a diagnostic error
in this case.
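The fallback-and-diagnostic logic described above can be sketched in a stdlib-only Go snippet. The real code uses `cty.NilVal` as the marker for "no value supplied"; this sketch substitutes a simplified `Value` type, and all names here are illustrative rather than the real stackruntime identifiers.

```go
package main

import "fmt"

// Value is a simplified stand-in for cty.Value; the zero value
// (nilVal) marks "no value supplied", as cty.NilVal does upstream.
type Value struct {
	set bool
	v   string
}

var nilVal = Value{}

type VariableConfig struct {
	Name    string
	Default Value // nilVal if the configuration declares no default
}

// resolveVariable applies the configured default when the caller
// supplied no value, and reports an error when neither exists.
func resolveVariable(cfg VariableConfig, supplied Value) (Value, error) {
	if supplied != nilVal {
		return supplied, nil
	}
	if cfg.Default != nilVal {
		return cfg.Default, nil
	}
	return nilVal, fmt.Errorf("no value for required variable %q", cfg.Name)
}

func main() {
	cfg := VariableConfig{Name: "region", Default: Value{set: true, v: "us-east-1"}}
	v, _ := resolveVariable(cfg, nilVal)
	fmt.Println(v.v) // falls back to the default

	_, err := resolveVariable(VariableConfig{Name: "env"}, nilVal)
	fmt.Println(err != nil) // required variable with no value is an error
}
```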
This updates the stackruntime `hooks.ResourceInstanceStatusHookData` struct to
include the provider address, and updates everything that instantiates that
struct to pass along the valid provider address it received from its own caller.
In other words, this commit is a bridge between the terraform.Hook interface
methods (which already have access to the provider address) and the stacks hook
callbacks that result in RPC messages being sent to the agent.
The terraform.Hook interface lets other areas of code perform streaming
reactions to various events, generally in the service of some UI somewhere.
Nearly all of the methods on this interface take an `addrs.AbsResourceInstance`
as their first argument, to identify the resource that's being operated on.
However, that addrs struct doesn't necessarily contain everything you might want
in order to uniquely and usefully identify a resource. It has the module
instance and resource instance addresses, but it lacks the provider source
address, which can affect how the consuming UI should display the resource's
events. (For example, Terraform Cloud wants reliable info about who maintains a
given provider, what cloud provider it operates on, and where to find its
documentation.)
Instead of polluting `addrs.AbsResourceInstance` with extra information that
isn't relevant to other call sites, let's change the first argument of each Hook
method to be a wrapper struct defined in the package that owns the Hook
interface, and add the provider address to that wrapper as a sibling of the
resource address. This causes a big noisy commit today, but should streamline
future updates to the UI-facing "identity" of a resource; existing callers can
ignore any new fields they're uninterested in, or exploit new info as needed.
Other than making new information available for future edits to Hook
implementing types, this commit should have no effect on existing behavior.
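The wrapper-struct idea can be illustrated with a small Go sketch. The struct and field names below are invented stand-ins for the real `addrs` and hook types, not the actual API:

```go
package main

import "fmt"

// Simplified stand-ins for addrs.AbsResourceInstance and the provider
// source address; field names here are illustrative, not the real ones.
type AbsResourceInstance struct{ Module, Resource string }
type Provider struct{ Hostname, Namespace, Type string }

// HookResourceIdentity sketches the wrapper described above: it pairs
// the resource instance address with its provider source address, so
// future identity fields can be added without touching package addrs.
type HookResourceIdentity struct {
	Addr         AbsResourceInstance
	ProviderAddr Provider
}

// A Hook method then takes the wrapper instead of the bare address;
// callers uninterested in the provider address can simply ignore it.
type Hook interface {
	PreApply(id HookResourceIdentity)
}

type printHook struct{}

func (printHook) PreApply(id HookResourceIdentity) {
	fmt.Printf("%s/%s: applying %s.%s\n",
		id.ProviderAddr.Namespace, id.ProviderAddr.Type,
		id.Addr.Module, id.Addr.Resource)
}

func main() {
	var h Hook = printHook{}
	h.PreApply(HookResourceIdentity{
		Addr:         AbsResourceInstance{Module: "module.network", Resource: "aws_vpc.main"},
		ProviderAddr: Provider{"registry.terraform.io", "hashicorp", "aws"},
	})
}
```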
Instead of relying on the module call's source being unique, we now use
the entire path as the key for looking up the parent's absolute source
address.
If multiple submodules of a given component share the same relative
source, they should result in distinct source addresses in the
diagnostics. This commit introduces an example with the following
structure:
- component.remote_invalid_grandchildren
- module.first_child
- module.child (source = "./child")
- module.second_child
- module.child (source = "./child")
Both of these use the same invalid module as the rest of the examples in
this test, so we should (and do) see equivalent diagnostics differing
only in filename.
The source bundle aware module loader requires "absolute" source
addresses, which are fully qualified rather than relative. We generate
these during the module loading process.
The previous implementation assumed that any local module source address
should be parented by the root module source, but this is incorrect when
a descendant module targets a remote or registry source. This commit
addresses this by tracking each module request's generated absolute
source address, and using it as the base for any descendant local module
requests.
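A minimal Go sketch of that tracking, assuming a much-simplified source-address syntax (real Terraform source addresses are richer, e.g. `../` handling and registry parsing are omitted here, and all names are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveSource computes an "absolute" source for a module call.
// Remote and registry sources are already absolute; local ones resolve
// against the parent's absolute source rather than the root's.
func resolveSource(parentAbs, source string) string {
	if !strings.HasPrefix(source, "./") {
		return source
	}
	return parentAbs + "/" + strings.TrimPrefix(source, "./")
}

func main() {
	// Absolute sources are tracked per full module path, so two
	// submodules that share the relative source "./child" stay distinct.
	abs := map[string]string{
		"module.first_child":  "git::https://example.com/a.git//mod",
		"module.second_child": "git::https://example.com/b.git//mod",
	}
	for _, parent := range []string{"module.first_child", "module.second_child"} {
		key := parent + ".module.child"
		abs[key] = resolveSource(abs[parent], "./child")
		fmt.Println(key, "=>", abs[key])
	}
}
```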
Due to an oversight in our handling of resource instance objects that are
neither in configuration nor plan -- which is true for data resources that
have since been removed from the configuration -- we were generating plan
change objects that were lacking a provider configuration address, which
made them syntactically invalid and thus not reloadable using the
raw plan parser.
This is a bit of a strange situation since we don't technically _need_ a
provider configuration address for these; all we're going to do is just
unceremoniously delete them from the state during apply anyway. However,
we always have the provider configuration address available, so
adding this in here is overall simpler than changing the parser, the
models it populates, and all of the downstream users of those models to
treat this field as optional.
This commit is more test case than it is fix, since the fix was relatively
straightforward once I had a test case to reproduce the problem it's
fixing.
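The fix in miniature, as a hedged Go sketch; the type and function names are invented stand-ins for the real plan change models and raw plan parser:

```go
package main

import (
	"errors"
	"fmt"
)

// plannedChange sketches the plan change object discussed above; the
// raw plan parser treats an empty ProviderConfigAddr as syntactically
// invalid, so even a plain delete must carry one.
type plannedChange struct {
	Addr               string
	Action             string
	ProviderConfigAddr string
}

func parseRawChange(c plannedChange) error {
	if c.ProviderConfigAddr == "" {
		return errors.New("change is missing provider configuration address")
	}
	return nil
}

// planOrphanDelete captures the fix: we always know the provider
// configuration address, so attach it even though the apply step will
// just delete the object from state.
func planOrphanDelete(addr, providerConfigAddr string) plannedChange {
	return plannedChange{Addr: addr, Action: "delete", ProviderConfigAddr: providerConfigAddr}
}

func main() {
	c := planOrphanDelete("data.external.gone", `provider["registry.terraform.io/hashicorp/external"]`)
	fmt.Println(parseRawChange(c) == nil) // change now round-trips through the parser
}
```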
The terraform.Hook implementation in stackeval is used to track the
operations performed during stack runtime operations, for later
reporting to the caller. This hook did not correctly support the replace
actions (create-then-destroy, destroy-then-create), resulting in a loss
of data between plan and apply. This exhibited as a plan with a replace
reporting 1 add and 1 remove operation, which when applied would report
only 1 add operation.
Previously, we stored the action performed for each apply operation on a
given resource instance in the PreApply hook, then allowed access to it
via the ResourceInstanceObjectAppliedAction method. Here we extend the
PostApply hook to look for a previous action performed on this instance,
and use that to reconstruct the planned replace action.
This method is called in the stackruntime package, where replace
operations are counted for both add and remove. No changes are needed at
the call site to fix the bug.
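The reconstruction can be sketched as follows; the action names and hook shape are illustrative simplifications, not the real stackeval identifiers:

```go
package main

import "fmt"

type Action int

const (
	Create Action = iota
	Delete
	CreateThenDelete
	DeleteThenCreate
)

// applyHook sketches the fix: each apply step's action is recorded per
// instance, and a second step on the same instance is combined with the
// first to reconstruct the planned replace action.
type applyHook struct {
	applied map[string]Action
}

func (h *applyHook) PostApply(addr string, action Action) {
	if prev, seen := h.applied[addr]; seen {
		// Two apply steps on one instance means a replace.
		switch {
		case prev == Create && action == Delete:
			action = CreateThenDelete
		case prev == Delete && action == Create:
			action = DeleteThenCreate
		}
	}
	h.applied[addr] = action
}

func (h *applyHook) AppliedAction(addr string) Action {
	return h.applied[addr]
}

func main() {
	h := &applyHook{applied: map[string]Action{}}
	h.PostApply("component.main.aws_instance.a", Delete)
	h.PostApply("component.main.aws_instance.a", Create)
	// The caller counting adds and removes now sees the full replace.
	fmt.Println(h.AppliedAction("component.main.aws_instance.a") == DeleteThenCreate)
}
```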
This was originally part of 7dad938fdb but unfortunately seems to have
been lost during a rebase or some similarly annoying mishap.
This now allows the "experiments allowed" flag to propagate into the
stackeval package when we're running the apply phase, for consistency with
all of the other phases. Without this, it's possible to plan a
configuration that's participating in experiments, but then it fails in a
strange way during the apply step due to Terraform suddenly thinking it's
a stable release where experiments are disabled.
This is the bare minimum functionality to ensure that we defer all actions
in any component that depends on a component that already had deferred
actions.
We will also eventually need to propagate out a signal to the caller for
whether the stack plan as a whole is complete or incomplete, but we'll
save that for later commits, since the stack orchestration in Terraform
Cloud will do the right thing regardless, aside from the cosmetic concern
that it won't yet know to show a message to the user saying that there
are deferred changes.
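The bare-minimum rule can be stated in a few lines of Go; the names here are illustrative, not the real stackruntime API:

```go
package main

import "fmt"

// mustDeferAll sketches the rule above: a component defers all of its
// actions when any component it depends on already has deferred actions.
func mustDeferAll(deps []string, hasDeferred map[string]bool) bool {
	for _, dep := range deps {
		if hasDeferred[dep] {
			return true
		}
	}
	return false
}

func main() {
	deferred := map[string]bool{"component.network": true}
	fmt.Println(mustDeferAll([]string{"component.network"}, deferred))
	fmt.Println(mustDeferAll(nil, deferred))
}
```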
We allow experiments only in alpha builds, and so we propagate the flag
for whether that's allowed in from "package main". We previously had that
plumbed in only as far as the rpcapi startup.
This plumbs the flag all the way into package stackeval so that we can
in turn propagate it to Terraform's module config loader, which is
ultimately the one responsible for ensuring that language experiments can
be enabled only when the flag is set.
Therefore it will now be possible to opt in to language experiments in
modules that are used in stack components.
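The plumbing collapses to something like the following sketch, with the intermediate rpcapi/stackeval layers elided and all names invented for illustration:

```go
package main

import "fmt"

// configLoader stands in for the module config loader, which is
// ultimately responsible for gating language experiments.
type configLoader struct{ AllowExperiments bool }

func (l configLoader) loadModule(usesExperiments bool) error {
	if usesExperiments && !l.AllowExperiments {
		return fmt.Errorf("language experiments are allowed only in alpha builds")
	}
	return nil
}

// newLoader sketches the propagation: in the real code the flag passes
// main -> rpcapi -> stackeval -> module loader; here those layers are
// collapsed into one constructor.
func newLoader(experimentsAllowed bool) configLoader {
	return configLoader{AllowExperiments: experimentsAllowed}
}

func main() {
	// A stable build rejects experimental modules...
	fmt.Println(newLoader(false).loadModule(true) != nil)
	// ...while an alpha build allows them.
	fmt.Println(newLoader(true).loadModule(true) == nil)
}
```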
These ideas are both already implied by some logic elsewhere in the system,
but until now we didn't have the decision logic centralized in a single
place that can evolve over time without every caller needing to change
in lockstep.
We'll now have the modules runtime produce its own boolean ruling about
each characteristic, which callers can rely on for the mechanical
decision-making of whether to offer the user an "approve" prompt, and
whether to remind the user after apply that it was an incomplete plan
that will probably therefore need at least one more plan/apply round to
converge.
The "Applyable" flag directly replaces the previous method Plan.CanApply,
with equivalent logic. Making this a field instead of a method means that
we can freeze it as part of a saved plan, rather than recalculating it
when we reload the plan, and we can export the field value in our export
formats like JSON while ensuring it'll always be consistent with what
Terraform is using internally.
Callers can (and should) still use other context in the plan to return
more tailored messages for specific situations they already know about
that might be useful to users, but with these flags as a baseline callers
can now just fall back to a generic presentation when encountering a
situation they don't yet understand, rather than making the wrong decision
and causing something strange to happen. That is: a lack of awareness of
a new rule will now cause just a generic message in the UI, rather than
incorrect behavior.
This commit mostly just deals with populating the flags, and then all of
the direct consequences of that on our various tests. Further changes to
actually make use of these flags elsewhere in the system will follow in
later commits, both in this repository and in other repositories.
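A rough sketch of the two flags as frozen plan fields; the field names and the particular derivation shown are simplified illustrations, not the real rules:

```go
package main

import "fmt"

// Plan sketches the idea above: Applyable and Complete are computed
// once when the plan is created and frozen as fields, so a saved and
// reloaded plan carries the same ruling the runtime originally made.
type Plan struct {
	Errored   bool
	Deferred  int // count of deferred changes
	Applyable bool
	Complete  bool
}

func newPlan(errored bool, deferred int) *Plan {
	return &Plan{
		Errored:   errored,
		Deferred:  deferred,
		Applyable: !errored,                  // stands in for the old Plan.CanApply logic
		Complete:  !errored && deferred == 0, // converged in one round
	}
}

func main() {
	p := newPlan(false, 2)
	// A generic UI can fall back to these flags alone: offer an
	// "approve" prompt if Applyable, and after apply remind the user
	// that an incomplete plan likely needs another plan/apply round.
	fmt.Println(p.Applyable, p.Complete)
}
```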
The terraform.workspace attribute is a rare example of a CLI- and Cloud-
specific concern bleeding into the Terraform language, and it can only
really have meaning when used in the traditional Terraform workflow
because otherwise there's no workspace to return the name of.
In Stacks any variations between instances of a module must be created
through input variables. Within Terraform Cloud in particular it's also
possible to use stack-level input variables that are assigned different
values from different stack deployments, and thus an author can recreate
the effect of terraform.workspace using a stack-level input variable that
has a different value for each deployment.
This is one of the few cases where the Terraform module language differs
in stacks compared to traditional Terraform. Any module that makes use of
terraform.workspace will need to be generalized to use input variables
instead before it can be used within a stack component.
Prior to this change, references to terraform.workspace from a module used
in a stack component would just panic altogether, because the stacks
runtime doesn't provide the object that the workspace name would be taken
from. Now we'll return a user-oriented error instead.
Because we treat dependency edges as reversed when a component instance
is being destroyed, the final result (an object representing output values)
for a component instance being destroyed must not depend on anything else
in the evaluation graph, or else we'd cause a promise self-reference as
the downstream component tries to configure itself based on our outputs.
As a special case then, for a component instance being destroyed we take
the planned output values directly from the plan, relying on the fact that
the plan phase sets them to the prior state output values in that case,
and therefore the result for such a component is available immediately
without blocking on any other expression evaluation during the apply phase.
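The special case reduces to an early return, sketched here with invented types standing in for the real component plan and evaluation machinery:

```go
package main

import "fmt"

type componentPlan struct {
	Destroying     bool
	PlannedOutputs map[string]string // set to prior-state outputs when destroying
}

// applyOutputs sketches the special case above: a component instance
// being destroyed takes its result straight from the plan rather than
// from apply-time expression evaluation, so it can never block on the
// (reversed) dependency edges. evalOutputs stands in for the normal
// evaluation path.
func applyOutputs(p componentPlan, evalOutputs func() map[string]string) map[string]string {
	if p.Destroying {
		return p.PlannedOutputs // available immediately, no evaluation
	}
	return evalOutputs()
}

func main() {
	p := componentPlan{
		Destroying:     true,
		PlannedOutputs: map[string]string{"vpc_id": "vpc-123"},
	}
	out := applyOutputs(p, func() map[string]string {
		panic("must not evaluate expressions while destroying")
	})
	fmt.Println(out["vpc_id"])
}
```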
This combines with several previous commits to create a first pass at
handling ordering correctly when planning and applying a full destroy.
This commit also incorporates some fixes and improvements to stackeval's
apply-time testing helpers, which had some quirks and bugs when first
added in a recent commit. One of those problems also revealed that the
raw state loader was not resilient to a buggy caller setting a state
entry to nil instead of removing it altogether, and that mistake seems
relatively easy to make (as I did here in the test helper) so we'll
tolerate it to make it possible to recover if such a bug does end up
occurring in real code too.
Before we start any apply-time actions for a particular component, we'll
block until the relevant other components have finished applying.
In the normal case a component instance waits until its dependencies have
been applied. However, if component instances are being destroyed then
they instead wait for their _dependents_ to complete being applied, since
a component must outlive all other components that depend on it.
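The direction-flipping rule can be sketched as follows, with `dependentsOf` standing in for a reverse-edge lookup on the real evaluation graph (names are illustrative):

```go
package main

import "fmt"

type component struct {
	Name       string
	Destroying bool
	DependsOn  []string
}

// waitTargets sketches the rule above: a component normally waits for
// its dependencies, but a component being destroyed waits for its
// dependents instead, since it must outlive everything that depends
// on it.
func waitTargets(c component, dependentsOf func(string) []string) []string {
	if c.Destroying {
		return dependentsOf(c.Name)
	}
	return c.DependsOn
}

func main() {
	reverse := map[string][]string{"network": {"app"}}
	net := component{Name: "network", Destroying: true}
	app := component{Name: "app", DependsOn: []string{"network"}}
	fmt.Println(waitTargets(net, func(n string) []string { return reverse[n] }))
	fmt.Println(waitTargets(app, nil))
}
```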
These helpers make it easier to write our tests because the plan and apply
phases both work in a callback-based fashion where they gradually emit
events, and that's not a very convenient API for test code to interact
with.
This exposes some more details about the planning results, and also adds
a new similar helper for the apply phase. We'll make use of both of these
in future commits.
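The helper pattern is roughly this: wrap the callback-based phase so a test gets back plain data to assert on. The event shape and function names below are illustrative, not the real stackruntime API:

```go
package main

import "fmt"

type event struct{ Kind, Addr string }

// runPhase stands in for a plan or apply phase that gradually emits
// events through a callback rather than returning a result directly.
func runPhase(emit func(event)) {
	emit(event{"planned", "component.main"})
	emit(event{"planned", "component.other"})
}

// collectEvents is the test-helper sketch: it drives the phase with a
// callback that gathers every event into a slice, turning the awkward
// streaming API into something assertions can inspect afterwards.
func collectEvents(phase func(func(event))) []event {
	var got []event
	phase(func(e event) { got = append(got, e) })
	return got
}

func main() {
	events := collectEvents(runPhase)
	fmt.Println(len(events), events[0].Kind)
}
```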
* Update proto schema and provider interfaces with support for moved across resource type RPCs
* address comments
* remove unused functions
* remove support for flatmap format
This makes the built-in "remote-exec" and "file" provisioners available
for use in the modules that implement stack components. These are both
relatively easy and low-risk to include because they are builtins and
don't require anything from outside of Terraform itself.
For now this intentionally excludes local-exec because we'll want to think
about what constraints we want to put on it, if any, to help ensure we can
meet the goal of stack configurations being portable between different
execution environments without significant modification, and our current
stacks execution environment doesn't guarantee the availability of any
external software _at all_.
The motivation for adding this now is just to give some better feedback
when someone uses a module using one of these provisioners, since otherwise
they'll see just a confusing generic error message from the modules
runtime about the provisioners not being available. I expect we'll revisit
this later and consider expanding it to at least include local-exec, and
_maybe_ external provisioner plugins, although that's more questionable
because the provisioner plugin mechanism is incredibly legacy and doesn't
have any way to work outside of local Terraform CLI usage today.
There are no tests here yet because these provisioners are not mockable
and would depend on having an SSH or WinRM server to connect to. Later we
should ponder how to make this more testable, which might mean making
another part of the system responsible for actually providing the
provisioner factories and thus our tests here can use fakes. The goal here
is just to get this done in a relatively lightweight way for better
feedback during preview though, so we're not yet ready to make significant
time investments here.
Components can emit sensitive values as outputs, which can be consumed
as inputs to other components. This commit ensures that such values are
correctly processed in order to pass their sensitivity to the modules
runtime.
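A stdlib-only sketch of the round trip; the real implementation uses cty's value marks, which this simplified `val` type merely imitates:

```go
package main

import "fmt"

// val is a simplified stand-in for a cty.Value carrying marks; the
// sensitive flag imitates cty's "sensitive" mark.
type val struct {
	raw       string
	sensitive bool
}

// componentOutputToInput sketches the behavior described above: when a
// component output feeds another component's input, the sensitivity
// mark travels with the value rather than being dropped at the
// component boundary.
func componentOutputToInput(output val) val {
	raw, wasSensitive := output.raw, output.sensitive // "unmark" for transport
	in := val{raw: raw}
	in.sensitive = wasSensitive // reapply the mark for the modules runtime
	return in
}

func main() {
	out := val{raw: "s3cr3t", sensitive: true}
	fmt.Println(componentOutputToInput(out).sensitive)
}
```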
When using stacks the provider configurations belong in the stack
configuration rather than inline in the individual modules.
Inline provider configurations in shared modules have been a deprecated
legacy practice for many years now, but traditional Terraform continued
to support it for backward-compatibility with older modules despite the
significant downsides of doing so. Stacks now finally removes that
capability, since it isn't straightforward to continue supporting it once
we've made the stacks runtime be responsible for instantiating and
configuring providers.
This means both that the validate walk can now describe static problems in
the component's module tree, and that we'll catch such problems earlier
in the planning phase and thus avoid reporting them repeatedly in cases
where a component block uses for_each to declare multiple instances.
This includes a fix to a bug in StackConfig.Components, which was
incorrectly using the input variable declarations as the source for its
result, instead of the component declarations.
Until we've updated the module config loader to be sourcebundle-aware and
thus return proper source addresses, this will at least make the paths
we show in diagnostics a little less verbose, and more consistent across
platforms.