The recent addition of "aws login" to the AWS CLI finally gives a
user-friendly, best-practice way to acquire AWS credentials for use in
interactive workflows. Combined with the pre-existing support for
authenticating using JSON web tokens ("web identity", as AWS calls it),
there's no longer any
good reason for most users of this backend to explicitly configure AWS
credentials.
Now that OpenTofu itself supports using credentials issued by "aws login",
this reorganizes our documentation to begin with opinionated
recommendations for how to provide credentials for the S3 backend in both
interactive and non-interactive settings, and explicitly documents the
inline static configuration settings as an absolute last resort that is
not recommended in any case.
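For illustration only (the bucket, key, and region values below are
placeholders), the recommended setups leave the backend configuration
itself free of any credential-related arguments, relying entirely on the
ambient AWS environment:

    terraform {
      backend "s3" {
        # No static credential arguments here: credentials come from the
        # surrounding environment, e.g. "aws login" for interactive use or
        # web identity tokens in automation.
        bucket = "example-state-bucket"
        key    = "envs/example/terraform.tfstate"
        region = "us-east-1"
      }
    }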
This new documentation also includes links to the relevant parts of the
AWS CLI documentation, since there's a lot of extra detail there which may
be useful for someone trying to debug why their setup isn't working.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Along with all of the usual purely-mechanical updates, these upgrades
also introduce the ability for OpenTofu's
"s3" state storage backend to make use of credentials issued by the
"aws login" command recently added to the official AWS CLI.
Behind the scenes, this command uses OAuth to issue a refresh token, and
then the AWS SDK implementation uses that refresh token to request
time-limited session credentials. Successfully running "aws login" adds a
new "login_session" setting to the AWS config file, recording which
principal ARN was used to log in. Whenever that setting is
present, the SDK expects to find a valid refresh token in a new cache
directory kept in "~/.aws/login", and will fail if no valid token is
present in that cache directory.
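For example, after a successful login the AWS config file might contain
something like the following, where the section name and ARN are
illustrative placeholders rather than exact output:

    [default]
    login_session = arn:aws:iam::111122223333:user/example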
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just a routine upgrade, with the intention that the forthcoming
OpenTofu v1.12 series will be based on this Go release series and so will
be under security support until February 2027 when this Go release will
cease to be supported.
As with the v1.11 series, we cannot predict exactly which day of February
Go 1.28 will be released on, so we'll be conservative and promise
support until the first day of that month, but in practice we're likely to
continue adopting relevant Go 1.26 minor releases for additional weeks of
February until the Go team stops publishing them.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This provides at least a partial implementation of every resource instance
action except the ones involving "forget" actions.
However, we don't quite have all of the building blocks needed to
properly model "delete" yet, because properly handling those actions means
we need to generate "backwards" dependency edges to preserve the guarantee
that destroying happens in reverse order to creating. Therefore the main
outcome of this commit is to add a bunch of FIXME and TODO comments
explaining where the known gaps are, with the intention of then filling
those gaps in later commits once we devise a suitable strategy to handle
them.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These both effectively had the behavior of ResourceInstancePrior embedded
in them, reading something from the state and changing its address as a
single compound operation.
In the case of ManagedDepose we need to split these up for the
CreateThenDestroy variant of "replace", because we want to make sure the
final plans are valid before we depose anything and we need the prior state
to produce the final plan. (Actually using that will follow in a subsequent
commit.)
This isn't actually necessary for ManageChangeAddr, but splitting it keeps
these two operations consistent in how they interact with the rest of the
operations.
Due to how the existing states.SyncState works, we're not actually making
good use of the data flow of these objects right now, but in a future world
where we're no longer using the old state models the state API will
hopefully switch to an approach that's more aligned with how the execgraph
operations are modeled.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just enough to make it possible to destroy previously-created
objects by removing them from the configuration. For now this intentionally
duplicates some of the logic from the desired instance planning function
because that more closely matches how this was dealt with in the
traditional runtime. Once all of the managed-resource-related planning
functions are in a more complete state we'll review them and try to extract
as much functionality as possible into a common location that can be shared
across all three situations. For now though it's more important that we be
able to quickly adjust the details of this code while we're still nailing
down exactly what the needed functionality is.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We're currently being intentionally cautious about adding too many tests
while these new parts of the system are still evolving and changing a lot,
but the execGraphBuilder behavior is hopefully self-contained enough for
a small set of basic tests to be more helpful than hurtful.
We should extend this test, and add other test cases that involve more
complicated interactions between different resource instances of different
modes, once we feel that these new codepaths have reached a more mature
state where we're more focused on localized maintenance than on broad
system design exploration.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
A DeleteThenCreate action decomposes into two plan-then-apply sequences,
representing first the deletion and then the creation. However, we order
these in such a way that both plans need to succeed before we begin
applying the "delete" change, so that a failure to create the final plan
for the "create" change can prevent the object from being destroyed.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Because this operation represents the main externally-visible side effects
in the apply phase, in some cases we'll need to enforce additional
constraints on what must complete before executing it. For example, in
a DestroyThenCreate operation we need to guarantee that the apply of the
"destroy" part is completed before we attempt the apply for the "create"
part.
(The actual use of this for DestroyThenCreate will follow in a later
commit.)
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously the generated execgraph was naive and only really supported
"create" changes. This commit has some initial work on generalizing
that, though it's not yet complete and will continue in later commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Whenever we plan for a resource instance object to switch from one instance
address to another, we'll use this new op instead of ResourceInstancePrior
to perform the state modification needed for the rename before returning
the updated object.
We combine the rename and the state read into a single operation to ensure
that they can appear to happen atomically as far as the rest of the system
is concerned, so that there's no interim state where either both or neither
of the address bindings are present in the state.
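To illustrate why the combination matters, here's a hypothetical sketch
(the stateStore type and its fields are illustrative, not the real state
API) of the rename and the read happening under a single lock:

    // Illustrative only: the rename and the read happen while holding one
    // lock, so no other operation can observe an interim state in which
    // both address bindings exist or neither does.
    type stateStore struct {
        mu      sync.Mutex
        objects map[addrs.UniqueKey]*states.ResourceInstanceObject
    }

    func (s *stateStore) moveAndRead(
        from, to addrs.AbsResourceInstance,
    ) *states.ResourceInstanceObject {
        s.mu.Lock()
        defer s.mu.Unlock()
        obj := s.objects[from.UniqueKey()]
        delete(s.objects, from.UniqueKey())
        s.objects[to.UniqueKey()] = obj
        return obj
    }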
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
In order to describe operations against deposed objects for managed
resource instances we need to be able to store constant states.DeposedKey
values in the execution graph.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The temporary placeholder code was relying only on the
DesiredResourceInstance object, which was good enough for proving that this
could work at all and had the advantage of being a new-style model with
new-style representations of the provider instance address and the other
resource instances that the desired object depends on.
But that isn't enough information to plan anything other than "create"
changes, so now we'll switch to using plans.ResourceInstanceChange as the
main input to the execgraph building logic, even though for now that means
we need to carry a few other values alongside it to compensate for the
shortcomings of that old model designed for the old language runtime.
So far this doesn't actually change what we do in response to the change,
so it still only supports "create" changes. In future commits we'll make
the execGraphBuilder method construct different shapes of graph depending
on which change action was planned.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The previous commit split the handling of provider instance items into
separate dependency-analysis and execgraph construction steps, with the
intention of avoiding the need for the execGraphBuilder to directly
interact with the planning oracle and thus indirectly with the evaluator.
Overall the hope is that execGraphBuilder will be a self-contained object
that doesn't depend on anything else in the planning engine so that it's
easier to write unit tests for it that don't require creating an entire
fake planning context.
However, on reflection that change introduced an unnecessary extra handoff
from the execGraphBuilder to _another part of itself_, adding complexity
without actually separating any concerns.
This is therefore a further simplification that returns to doing all of the
handling of a provider instance's presence in the execution graph only once
we've decided that at least one resource instance will definitely use that
provider instance during the apply phase.
There is still a separation of concerns where the planGlue type is
responsible for calculating the provider dependencies and then the
execGraphBuilder is only responsible for adding items to the execution
graph based on that information. That separation makes sense because
planGlue's job is to bridge between the planning engine and the evaluator,
and it's the evaluator's job to calculate the dependencies for a provider
instance, whereas execGraphBuilder is the component responsible for
deciding exactly which low-level execgraph operations we'll use to describe
the situation to the apply engine.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously we did _all_ of the work related to preparing execgraph items
for provider instances as a side-effect of the ProviderInstance method
of execGraphBuilder.
Now we'll split it into two parts: the first time we encounter each
provider instance during planning we'll proactively gather its dependencies
and stash them in the graph builder. However, we'll still wait until the
first request for the execution subgraph of a provider instance before
we actually insert that into the graph, because that then effectively
excludes from the apply phase any provider instances that aren't needed for
the actual planned side-effects. In particular, if all of the resource
instances belonging to a provider instance turn out to have "no-op" plans
then that provider instance won't appear in the execution graph at all.
An earlier draft of this change did the first dependency-capturing step via
a new method of planGlue called by the evaluator. After writing that,
though, I found it unfortunate to introduce yet another inversion-of-control
situation where readers of the code just need to know and trust that the
evaluator will call things in the correct order -- there's already enough
of that for resource instances. I therefore settled on the compromise of
having the ensureProviderInstanceDependencies calls happen as part of the
linear code for handling each resource instance object, which makes it far
easier for a future maintainer to verify that we're upholding the contract
of calling ensureProviderInstanceDependencies before asking for an
execgraph result, while still allowing us to handle that step generically
instead of duplicating it into each resource-mode-specific handler.
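As an illustration of that compromise (only the planGlue and
ensureProviderInstanceDependencies names come from the change itself;
every other name here is hypothetical), the contract ends up visible in
one obvious place in the per-object handling:

    func (g *planGlue) planResourceInstanceObject(
        ctx context.Context, inst desiredInstance,
    ) {
        // Contract: capture the provider instance's dependencies before
        // anything can ask the graph builder for its execgraph result.
        g.builder.ensureProviderInstanceDependencies(inst.providerInstance())
        g.planByResourceMode(ctx, inst) // managed/data/ephemeral-specific handling
    }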
While working on this I noticed a slight flaw in our initial approach to
handling ephemeral resource instances in the execution graph, which is
described inline as a comment in planDesiredEphemeralResourceInstance.
We'll need to think about that some more and deal with it in a future
commit.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
As we've continued to build out the execution graph behavior during the
plan and apply phases, we've found that the execgraph package isn't really
the best place for higher-level ideas like singleton provider client
operations, because that package doesn't have enough awareness of the
planning process to manage those concerns well.
The mix of both high- and low-level concerns in execgraph.Builder was also
starting to give it a pretty awkward shape, where some of the low-level
operations needed to be split into two parts so that the higher-level parts
could call them while holding the builder's mutex.
In response, here we split the execgraph.Builder functionality so that the
execgraph package handles only the lowest-level concern of adding new items
to the graph, without any attempt to deduplicate them and without any care
for concurrency. The higher-level parts then live in a wrapper in the
planning engine's own package, which absorbs the responsibility for
mutexing and for managing singletons.
For now the new type in the planning package has only lightly-adapted
copies of existing code, to try to illustrate what concerns belong to
it and how the rest of the system ought to interact with it. There are
various FIXME/TODO comments describing how I expect this to evolve in
future commits as we continue to build out more complete planning
functionality.
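A rough sketch of that division of responsibility follows; aside from the
execgraph.Builder and execGraphBuilder names used elsewhere in this series,
all of the field, type, and method names here are illustrative rather than
the real ones:

    // Illustrative only: the planning package's wrapper owns the mutex and
    // the singleton tracking, and delegates the actual graph mutation to
    // the low-level execgraph.Builder.
    type execGraphBuilder struct {
        mu        sync.Mutex
        low       *execgraph.Builder        // low-level: only appends to the graph
        providers map[string]providerOpsRef // singletons already added, by provider instance address
    }

    func (b *execGraphBuilder) providerInstanceOps(addrKey string) providerOpsRef {
        b.mu.Lock()
        defer b.mu.Unlock()
        if ref, ok := b.providers[addrKey]; ok {
            return ref // already present: reuse the existing operations
        }
        ref := addProviderInstanceOps(b.low, addrKey) // hypothetical low-level helper
        b.providers[addrKey] = ref
        return ref
    }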
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Since an addrs.Set is really just a map with some special usage rules,
this is a thin wrapper around maps.Values, but (as with many of the other
methods on this type) it avoids exposing that implementation detail in
calling code.
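A minimal sketch of the shape of this, assuming addrs.Set is a generic map
from each element's unique key to the element itself and assuming the
standard library iter and maps packages; the method name "All" here is a
placeholder rather than necessarily the real one:

    // Returns an iterator over the set's elements without exposing that
    // the set is implemented as a map.
    func (s Set[T]) All() iter.Seq[T] {
        return maps.Values(s)
    }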
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These were previously using "applying:" as the common prefix, but I found
that confusing in practice because only the "ManagedApply" operation is
_actually_ applying changes.
Instead, we'll identify these trace logs as belonging to the apply
phase as a whole, to try to be a little clearer about what's going on.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Here we ask the provider to produce a plan based on the finalized
configuration taken from the "desired state", and return it as a final plan
object only if the provider request succeeds and returns something that is
valid under our usual plan consistency rules.
We then ask the provider to apply that plan, and deal with whatever it
returns. The apply part of this is not so complete yet, but there's enough
here to handle the happy path where the operation succeeds and the provider
returns something valid.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This'll make it clearer which provider we're using to ask each question,
without making a human reader try to work backwards through the execution
graph data flow.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The statements were in the wrong order, so we were adding the
ProviderInstanceConfig operation before checking whether we had already
added the operations for a particular provider instance, and thus the
generated execution graph had redundant, unused ProviderInstanceConfig ops.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These were useful during initial development, but the execgraph behavior
now seems to be doing well enough that these noisy logs have become more
annoying than helpful, since the useful context now comes from the logging
within engine/apply's implementation of the operations.
However, this does introduce some warning logs to draw attention to the
fact that resource instance postconditions are not implemented yet.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously this was calling "Value", which is incorrect because that
function resolves only after we've already planned and applied the resource
instance and so this was causing a promise self-dependency error.
Instead, we need to have the configgraph package expose the configuration
value directly and then use just that as part of this result.
(We do still want to eventually unify the two codepaths that produce these
DesiredResourceInstance objects, but this commit is focused only on fixing
this bug as directly as possible.)
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
There is currently a bug in the apply engine that's causing a
self-dependency error during promise resolution, which was a good reminder
that we'd not previously finished connecting all of the parts to be able
to unwind such problems into a useful error message.
Due to how the work is split between the apply engine, the evaluator, and
the execgraph package, it takes some awkward back-and-forth to get all of
the needed information together into one place. This compromise aims to
do as little work as possible in the happy path and defer more expensive
analysis until we actually know we're going to report an error message.
In this case we can't really avoid proactively collecting the request IDs
because we don't know ahead of time what (if anything) will be involved in
a promise error. Only when actually generating an error message will
we dig into the original source execution graph to find out what each of
the affected requests was actually trying to do and construct a
human-friendly summary of each one.
This was a bit of a side-quest relative to the current goal of just getting
things basically working with a simple configuration, but this is useful
in figuring out what's going on with the current bug (which will be fixed
in a future commit) and will probably be useful when dealing with future
bugs too.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
During execution graph processing we need to carry addressing information
along with objects so that we can model changes of address as data flow
rather than as modifications to global mutable state.
In previous commits we arranged for other relevant types to track this
information, but the representation of a resource instance object was not
included because that was a messier change to wire in. This commit deals
with that mess: it introduces exec.ResourceInstanceObject to associate a
resource instance object with addressing information, and then adopts that
as the canonical representation of a "resource instance object result"
throughout all of the execution graph operations.
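A rough sketch of the kind of association exec.ResourceInstanceObject
provides (the field names and types here are illustrative guesses, not the
real definition):

    // Illustrative only: a resource instance object carried together with
    // the addressing information saying where it currently belongs, so the
    // address flows through the graph as data rather than living in global
    // mutable state.
    type ResourceInstanceObject struct {
        Addr   addrs.AbsResourceInstance      // which instance the object belongs to
        Object *states.ResourceInstanceObject // the object data itself
    }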
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The first pass of this was mainly just to illustrate how the overall model
of execution graphs might work, using just a few "obvious" opcodes as a
placeholder for the full set.
Now that we're making some progress towards getting this to actually work
as part of a real apply phase, this reworks the set of opcodes in a few
different ways:
- Some concepts that were previously handled as special cases in the
execution graph executor are now just regular operations. The general
idea here is that anything that is fallible and/or has externally-visible
side-effects should be modeled as an operation.
- The set of supported operations for managed resources should now be
complete enough to describe all of the different series of steps that
result from different kinds of plan. In particular, this now has the
building blocks needed to implement a "create then destroy" replace
operation, which includes the step of "deposing" the original object
until we've actually destroyed it.
- The new package "exec" contains vocabulary types for data flowing between
operations, and an interface representing the operations themselves.
The types in this package allow us to carry addressing information along
with objects that would not normally carry that directly, which means
we don't need to propagate that address information around the execution
graph through side-channels.
(A notable gap as of this commit is that resource instance objects don't
have address information yet. That'll follow in a subsequent commit just
because it requires some further rejiggering.)
- The interface implemented by the applying engine and provided to the
execution graph now more directly relates to the operations described
in the execution graph, so that it's considerably easier to follow how
the graph built by the planning phase corresponds to calls into that
interface in the applying phase.
- We're starting to treat OpenTofu's own tracking identifiers for resource
instances as separate from how providers refer to their resource types,
so that the opinions about how those two relate can be more centralized
in future. This contrasts with the traditional runtime, where the rules
about how they relate are spread haphazardly throughout the codebase;
in particular, implementing migration between resource types was
traditionally challenging because it creates a situation where the
desired resource type and the current resource type _intentionally_
differ.
This is a large commit because the execution graph concepts are
cross-cutting between the plan and apply engines, but outside of execgraph
the changes here are mainly focused on just making things compile again
with as little change as possible, and then subsequent commits will get
the other packages into a better working shape again.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>