* license: add support for publishing artifacts to IBM PAO (#8366)
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: brian shore <bshore@hashicorp.com>
Co-authored-by: Ethel Evans <ethel.evans@hashicorp.com>
Co-authored-by: Ryan Cragun <me@ryan.ec>
Ubuntu 20.04 has reached EOL and is no longer a supported runner host distro. Historically we've relied on it for our CGO builds because its glibc is old enough to keep compatibility with all of our supported distros while building on a single host distro. Rather than requiring a new RHEL 8 builder (or some equivalent), we now build CGO binaries inside an Ubuntu 20.04 container, which gives us that same glibc along with the various C compilers we need.
I've separated the system package changes, the Go toolchain install, and the external build tools install into different container layers so that the builder container used for each branch is maximally cacheable.
On cache misses these changes result in noticeably longer build times for CGO binaries; that is unavoidable with this strategy. Most of the time our builds will get a cache hit on all layers, unless a change touches any of the following:
- .build/*
- .go-version
- .github/actions/build-vault
- tools/tools.sh
- Dockerfile
I've tried my best to reduce the cache space used by each layer. Currently our build container takes about 220MB of cache space. About half of that ought to be shared cache between main and release branches. I would expect total new cache used to be in the 500-600MB range, or about 5% of our total space.
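As a rough illustration (not the exact workflow), a layered builder image like this can be built and cached with BuildKit's GitHub Actions cache backend; the build target, tag, and cache scope names below are placeholders:

```sh
# Hypothetical sketch: target, tag, and cache scope are placeholders, not the
# values used by the real workflow.
docker buildx build \
  --target cgo-builder \
  --build-arg GO_VERSION="$(cat .go-version)" \
  --cache-from type=gha,scope=vault-builder \
  --cache-to type=gha,scope=vault-builder,mode=max \
  --tag vault-cgo-builder:local \
  --load \
  .
# Each layer (system packages, Go toolchain, external build tools) is only
# rebuilt when its inputs change; otherwise buildx restores it from the cache.
```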
Some follow-up ideas that we might want to consider:
- Build everything inside the build container and remove the GitHub Actions steps that set up external tools
- Instead of building external tools with `go install`, migrate them into build scripts that install pre-built `linux/amd64` binaries
- Migrate external tools to `go tool` and use it in the builder container (sketched below). This requires us to be on Go 1.24 everywhere, so it ought not be considered until that is a reality.
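As a rough sketch of that last idea (assumes Go 1.24+; `stringer` is just a stand-in for whichever tools we actually use):

```sh
# Hypothetical example; the real tool list lives in tools/tools.sh.
go get -tool golang.org/x/tools/cmd/stringer@latest  # records a tool directive in go.mod
go tool stringer -type=SomeEnum ./path/to/pkg        # runs the pinned tool, no separate install step
```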
Signed-off-by: Ryan Cragun <me@ryan.ec>
* VAULT-31402: Add verification for all container images
Add verification for all container images that are generated as part of
the build. Before this change we only ever tested a limited subset of
"default" containers based on Alpine Linux that we publish via the
Docker hub and AWS ECR.
Now we support testing all Alpine- and UBI-based container images. We
also verify the repository and tag information embedded in each image
by deploying it and checking that the repo and tag metadata match our
expectations.
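To give a flavor of the check (the real verification runs inside the Enos scenarios; the archive name and expected repo/tag below are placeholders):

```sh
# Illustrative sketch only: file name and expected value are placeholders.
expected="hashicorp/vault:1.16.0-ubi"
loaded=$(docker load -i vault-ubi.tar | sed -n 's/^Loaded image: //p')
if [ "$loaded" != "$expected" ]; then
  echo "repo/tag mismatch: got '$loaded', want '$expected'" >&2
  exit 1
fi
```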
This does change the k8s scenario interface quite a bit. We now take
in an image archive and set the image/repo/tag information based on
the scenario variants.
To enable this I also needed to add `tar` to the UBI base image. It was
already available in the Alpine image and is used to copy utilities to
the image when deploying and configuring the cluster via Enos.
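For context, copying files into a running pod with `kubectl cp` (which I assume is roughly what Enos does under the hood) is implemented as a tar pipe, so it fails if the target image doesn't ship `tar`; pod and file names below are placeholders:

```sh
# kubectl cp requires tar inside the target container. Pod and file names are
# placeholders.
kubectl cp ./some-utility vault-0:/usr/local/bin/some-utility
# ...which is roughly equivalent to this tar pipe:
tar cf - some-utility | kubectl exec -i vault-0 -- tar xf - -C /usr/local/bin
```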
Since some images contain multiple tags we also add samples for each
image and randomly select which variant to test on a given PR.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Adding explicit MPL license for sub-package.
This directory and its subdirectories (packages) contain files licensed with the MPLv2 `LICENSE` file in this directory and are intentionally licensed separately from the BSL `LICENSE` file at the root of this repository.
* Updating the license from MPL to Business Source License.
Going forward, this project will be licensed under the Business Source License v1.1. Please see our blog post at https://hashi.co/bsl-blog for more details, the FAQ at www.hashicorp.com/licensing-faq, and the license itself at www.hashicorp.com/bsl.
* add missing license headers
* Update copyright file headers to BUSL-1.1
* Fix test that expected exact offset on hcl file
---------
Co-authored-by: hashicorp-copywrite[bot] <110428419+hashicorp-copywrite[bot]@users.noreply.github.com>
Co-authored-by: Sarah Thompson <sthompson@hashicorp.com>
Co-authored-by: Brian Kassouf <bkassouf@hashicorp.com>
* Copy UBI Dockerfile into Vault
This Dockerfile was modeled on the existing Alpine Dockerfile (in
this repo) and the external Dockerfile from the docker-vault repo:
> https://github.com/hashicorp/docker-vault/blob/master/ubi/Dockerfile
We also import the UBI-specific docker-entrypoint.sh, as certain
RHEL/Alpine differences (such as the shell interpreter) require a
separate entrypoint script.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add UBI build to CRT pipeline
Also adds workflow_dispatch to the CRT pipeline so that CRT can be
triggered manually from PRs when desired.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Update Dockerfile
Co-authored-by: Sam Salisbury <samsalisbury@gmail.com>
* Update Dockerfile
Co-authored-by: Sam Salisbury <samsalisbury@gmail.com>
* Update Dockerfile
Co-authored-by: Sam Salisbury <samsalisbury@gmail.com>
* Update Dockerfile
* Update Dockerfile
* Update build.yml
Allow both push to an arbitrary branch and workflow dispatch, per the newsletter article.
Co-authored-by: Sam Salisbury <samsalisbury@gmail.com>
* adding CRT to main branch
* cleanup
* Um, I don't know how that got removed, but here's the fix
* add vault.service
Co-authored-by: Kyle Penfound <kpenfound11@gmail.com>