Add MemoryReservationPolicy (None/HardReservation), which controls memory.min. This allows
memory.min protection to be configured independently, giving operators more
granular control over memory QoS behavior.
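A hypothetical sketch of how such a policy knob could resolve to a memory.min value; the function name and the assumption that HardReservation reserves the pod's full memory request are illustrative, not the actual kubelet implementation:

```go
package main

import "fmt"

// memoryMinBytes is a hypothetical helper: it maps a
// MemoryReservationPolicy value to the memory.min setting for a pod's
// cgroup. "HardReservation" is assumed to reserve the full memory
// request; "None" leaves memory.min unset.
func memoryMinBytes(policy string, memoryRequestBytes int64) int64 {
	switch policy {
	case "HardReservation":
		// Reserve the full request via memory.min.
		return memoryRequestBytes
	default: // "None"
		// No hard reservation; memory.min stays at 0.
		return 0
	}
}

func main() {
	fmt.Println(memoryMinBytes("HardReservation", 512<<20))
	fmt.Println(memoryMinBytes("None", 512<<20))
}
```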
Signed-off-by: Qi Wang <qiwan@redhat.com>
* Add more unit tests for constrained impersonation
test cases for a large number of groups/extras
test cases verifying that constrained impersonation of system:masters is not allowed
Signed-off-by: Jian Qiu <jqiu@redhat.com>
* Validate each authz request in the constrained impersonation unit test
Signed-off-by: Jian Qiu <jqiu@redhat.com>
---------
Signed-off-by: Jian Qiu <jqiu@redhat.com>
This addresses a bug (gh-issue 134460) which reported that the `PodReadyToStartContainers` condition is currently only set to `True` after the container image pull completes. If the image is large and the pull takes significant time, the pod status manager is blocked and the condition remains `False`.
The commit implements the following changes to allow the kubelet to update the `PodReadyToStartContainers` pod condition immediately after all three requirements (pod sandbox, networking, volumes) are ready, but before container images are pulled or containers are created.
* add `OnPodSandboxReady` method to the `RuntimeHelper` interface in `container/helpers.go`
* implement the `OnPodSandboxReady` method in Kubelet
* inside `(containerRuntime).SyncPod`, after sandbox creation and network configuration, invoke `runtimeHelper.OnPodSandboxReady()` directly
(this method retrieves current pod status, generates updated API status, and notifies the status manager to sync to the API server)
This implementation is gated behind the `PodReadyToStartContainersCondition` feature gate and fails gracefully, i.e., it only logs the error and continues the pod creation process, ensuring these new changes don't block pod startup.
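The call-site shape described above can be sketched with stdlib-only stand-ins (the real `RuntimeHelper` lives in `pkg/kubelet/container/helpers.go` and carries many more methods; the types below are simplified assumptions):

```go
package main

import (
	"fmt"
	"log"
)

// RuntimeHelper is a simplified stand-in for the kubelet interface that
// gains the OnPodSandboxReady method in this change.
type RuntimeHelper interface {
	OnPodSandboxReady(podUID string) error
}

// statusNotifier records which pods were synced; in the kubelet this
// would fetch current pod status, regenerate the API status, and ask
// the status manager to sync it to the API server.
type statusNotifier struct{ synced []string }

func (s *statusNotifier) OnPodSandboxReady(podUID string) error {
	s.synced = append(s.synced, podUID)
	return nil
}

// syncPod mimics the call site in (containerRuntime).SyncPod: invoked
// after sandbox creation and network configuration, feature-gated, and
// failing gracefully so it never blocks pod startup.
func syncPod(helper RuntimeHelper, gateEnabled bool, podUID string) {
	if gateEnabled {
		if err := helper.OnPodSandboxReady(podUID); err != nil {
			log.Printf("updating PodReadyToStartContainers for %s: %v", podUID, err)
			// fall through: continue pod creation regardless
		}
	}
	fmt.Println("pulling images for", podUID)
}

func main() {
	n := &statusNotifier{}
	syncPod(n, true, "pod-123")
	fmt.Println(n.synced)
}
```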
TestEvictDuringNamespaceTerminating intentionally exercises the retry path
but only allows 10ms of total time. The production loop sleeps, refreshes
state, and retries under that same deadline, so a single retry plus
scheduler jitter is enough to exhaust the budget under -race or on busy
CI workers.
Keep the retry interval small so the test still covers the retry behavior,
but widen the overall timeout so the assertion measures semantics instead
of machine speed.
Tested:
go test -race ./staging/src/k8s.io/kubectl/pkg/drain -run TestEvictDuringNamespaceTerminating -count=100
kms does not depend on streaming, so the entry is not needed
in the dependencies in the publishing/rules file.
Signed-off-by: Akhil Mohan <akhilerm@gmail.com>
Replace the manual 3-retry loop (with no delay) in VerifyCgroupValue
with framework.Gomega().Eventually() + HandleRetry, matching the
pattern used for oom_score_adj deflake in #137329. This gives proper
polling with backoff when exec fails during container restarts.
Introduce support for specifying allowed TLS key exchange mechanisms
(IANA TLS Supported Groups) via a new --tls-curve-preferences flag,
following the same pattern as --tls-cipher-suites.
Curve preferences are specified as numeric IANA TLS Supported Group IDs
(e.g. 23,29,4588) rather than string names. This avoids maintaining a
hardcoded name-to-ID map that would become stale with each Go release,
and ensures new curves (such as Go 1.26's SecP256r1MLKEM768 and
SecP384r1MLKEM1024) work automatically when rebuilding with a newer Go
version -- no code changes required.
Changes:
- Add curves_flag.go in component-base/cli/flag with a simple
int-to-tls.CurveID cast function
- Add CurvePreferences field ([]int32) to SecureServingOptions, registered
via IntSliceVar, and wire it through to tls.Config
The order of the list is ignored; Go selects from the set using an
internal preference order. If omitted, Go defaults are used. The set of
accepted values depends on the Go version used to build the binary; see
https://pkg.go.dev/crypto/tls#CurveID for reference.