This commit fixes a failure in TestEtcdStoragePath when emulating version 1.34. The failure was caused by removing alpha versions from the test data during emulation, which prevented storageVersionAtEmulationVersion from correctly resolving the storage version for MutatingAdmissionPolicy (which relies on v1alpha1 in this compatibility mode).
Changes:
- Updated GetEtcdStorageDataServedAt in test/integration/etcd/data.go to pass a full copy of etcdStorageData (including alpha versions) to storageVersionAtEmulationVersion.
- Added ExpectedGVK to MutatingAdmissionPolicy and MutatingAdmissionPolicyBinding in test/integration/etcd/data.go to ensure correct version resolution during tests.
- Removed explicit storage version overrides for MutatingAdmissionPolicy in pkg/kubeapiserver/default_storage_factory_builder.go as part of the graduation process.
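A minimal sketch of the pattern, assuming a simplified StorageData, a resolveGVK stand-in for storageVersionAtEmulationVersion, and a hypothetical servedStorageData helper (the real code lives in test/integration/etcd/data.go): resolution runs against the complete data set, and filtering happens afterwards.

    package etcddata

    import (
        "strings"

        "k8s.io/apimachinery/pkg/runtime/schema"
    )

    // StorageData is simplified here; the real struct in
    // test/integration/etcd/data.go carries more fields.
    type StorageData struct {
        ExpectedGVK *schema.GroupVersionKind
    }

    // resolveGVK stands in for storageVersionAtEmulationVersion; its real
    // signature is assumed for illustration.
    func resolveGVK(gvr schema.GroupVersionResource, all map[schema.GroupVersionResource]StorageData, emulated string) *schema.GroupVersionKind {
        // ... actual resolution logic elided ...
        return nil
    }

    // servedStorageData shows the shape of the fix: resolve against the
    // complete map (alpha entries included), then filter what is served.
    func servedStorageData(all map[schema.GroupVersionResource]StorageData, emulated string, removeAlphas bool) map[schema.GroupVersionResource]StorageData {
        complete := make(map[schema.GroupVersionResource]StorageData, len(all))
        for gvr, data := range all {
            complete[gvr] = data // full copy, alpha versions included
        }
        served := make(map[schema.GroupVersionResource]StorageData, len(complete))
        for gvr, data := range complete {
            if removeAlphas && strings.Contains(gvr.Version, "alpha") {
                continue
            }
            // The resolver sees the complete map, not the filtered one, so
            // entries like MutatingAdmissionPolicy's v1alpha1 stay visible.
            data.ExpectedGVK = resolveGVK(gvr, complete, emulated)
            served[gvr] = data
        }
        return served
    }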
This change allows preemption to preempt a pod that is not yet bound
(but is already in the prebind phase) without issuing a delete call to the
apiserver.
Pods are added to a special map of pods currently in the prebind phase, and
preemption can cancel the context that is used for a given pod's prebind
phase, allowing it to gracefully handle the error in the same manner as
errors coming out of prebind plugins. This results in pods being pushed to
the backoff queue, allowing them to be rescheduled in upcoming scheduling
cycles.
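A minimal sketch of the mechanism, with illustrative names rather than the scheduler's actual types:

    package preemption

    import (
        "context"
        "sync"

        "k8s.io/apimachinery/pkg/types"
    )

    // prebindContexts tracks pods that are currently in the prebind phase.
    type prebindContexts struct {
        mu      sync.Mutex
        cancels map[types.UID]context.CancelFunc
    }

    // start registers a pod entering prebind and returns the context that
    // the prebind plugins should run with.
    func (p *prebindContexts) start(ctx context.Context, uid types.UID) context.Context {
        ctx, cancel := context.WithCancel(ctx)
        p.mu.Lock()
        defer p.mu.Unlock()
        if p.cancels == nil {
            p.cancels = map[types.UID]context.CancelFunc{}
        }
        p.cancels[uid] = cancel
        return ctx
    }

    // finish must be called when prebind completes, successfully or not.
    func (p *prebindContexts) finish(uid types.UID) {
        p.mu.Lock()
        defer p.mu.Unlock()
        if cancel, ok := p.cancels[uid]; ok {
            cancel()
            delete(p.cancels, uid)
        }
    }

    // preempt cancels a victim's in-flight prebind instead of deleting the
    // pod via the apiserver; the binding goroutine then observes the
    // cancellation like any other prebind error and the pod moves to the
    // backoff queue for a later scheduling cycle.
    func (p *prebindContexts) preempt(uid types.UID) bool {
        p.mu.Lock()
        defer p.mu.Unlock()
        cancel, ok := p.cancels[uid]
        if ok {
            cancel()
        }
        return ok
    }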
The implicit matching of the ResourceClaim name to ExternalClaim was
convenient (no need to specify the parameter) but went wrong in integration
testing, where there are multiple calls to ExternalClaim.
Gomega matchers cannot be used concurrently because they get mutated during
matching; each user must get its own separate instance. A sketch of the fix
follows the race report below.
WARNING: DATA RACE
Write at 0x00c0195da678 by goroutine 322445:
github.com/onsi/gomega/matchers.(*AndMatcher).Match()
/home/prow/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/matchers/and.go:18 +0x44
github.com/onsi/gomega/internal.(*AsyncAssertion).pollMatcher()
/home/prow/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/async_assertion.go:387 +0xbe
github.com/onsi/gomega/internal.(*AsyncAssertion).match()
/home/prow/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/async_assertion.go:415 +0x47b
github.com/onsi/gomega/internal.(*AsyncAssertion).Should()
/home/prow/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/async_assertion.go:145 +0xc4
k8s.io/kubernetes/test/integration/dra.testShareResourceClaimSequentially.func3()
/home/prow/go/src/k8s.io/kubernetes/test/integration/dra/resourceclaim_test.go:104 +0x361
k8s.io/kubernetes/test/integration/dra.testShareResourceClaimSequentially.func5()
/home/prow/go/src/k8s.io/kubernetes/test/integration/dra/resourceclaim_test.go:139 +0xa1
sync.(*WaitGroup).Go.func1()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/cache/mod/golang.org/toolchain@v0.0.1-go1.25.7.linux-amd64/src/sync/waitgroup.go:239 +0x5d
Previous write at 0x00c0195da678 by goroutine 322438:
github.com/onsi/gomega/matchers.(*AndMatcher).Match()
/home/prow/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/matchers/and.go:18 +0x44
github.com/onsi/gomega/internal.(*AsyncAssertion).pollMatcher()
/home/prow/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/async_assertion.go:387 +0xbe
github.com/onsi/gomega/internal.(*AsyncAssertion).match()
/home/prow/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/async_assertion.go:415 +0x47b
github.com/onsi/gomega/internal.(*AsyncAssertion).Should()
/home/prow/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/async_assertion.go:145 +0xc4
k8s.io/kubernetes/test/integration/dra.testShareResourceClaimSequentially.func3()
/home/prow/go/src/k8s.io/kubernetes/test/integration/dra/resourceclaim_test.go:104 +0x361
k8s.io/kubernetes/test/integration/dra.testShareResourceClaimSequentially.func5()
/home/prow/go/src/k8s.io/kubernetes/test/integration/dra/resourceclaim_test.go:139 +0xa1
sync.(*WaitGroup).Go.func1()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/cache/mod/golang.org/toolchain@v0.0.1-go1.25.7.linux-amd64/src/sync/waitgroup.go:239 +0x5d
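A minimal sketch of the fix, with a hypothetical claim type and simplified matchers; the essential point is that the composite matcher is constructed inside each goroutine instead of being shared (AndMatcher records its failing sub-matcher during Match(), which is the write the race detector reports above):

    package dra_test

    import (
        "sync"
        "testing"

        "github.com/onsi/gomega"
    )

    // claim is a stand-in for the object under test (hypothetical).
    type claim struct{ Name string }

    func TestMatcherPerGoroutine(t *testing.T) {
        var wg sync.WaitGroup
        for i := 0; i < 3; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                g := gomega.NewWithT(t)
                // Build the composite matcher here, inside the goroutine;
                // a single shared instance would be written to concurrently.
                matcher := gomega.SatisfyAll(
                    gomega.Not(gomega.BeNil()),
                    gomega.HaveField("Name", gomega.Not(gomega.BeEmpty())),
                )
                g.Eventually(func() *claim {
                    return &claim{Name: "example"}
                }).Should(matcher)
            }()
        }
        wg.Wait()
    }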
Add integration tests for gang and basic policy workload scheduling
Add more tests for cluster snapshot
Proceed to binding cycle just after pod group cycle
Enforce one scheduler name per pod group, rename workload cycle to pod group cycle
Add unit tests for pod group scheduling cycle
Run ScheduleOne tests treating pod as part of a pod group
Rename NeedsPodGroupCycle to NeedsPodGroupScheduling
Observe correct per-pod and per-podgroup metrics during pod group cycle
Rename pod group algorithm status to waiting_on_preemption
Mention forgotAllAssumedPods is a safety check
An updated claim can only be stored in the assume cache if the cache already
contains a copy of it. That wasn't guaranteed because the update operations
were based on listing existing claims without going through the cache.
To avoid the race, we can ensure that the assume cache is up-to-date before
we start updating claims and the cache. This is simpler than retrying the
assume call.
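One way to express that check, as a sketch with a hypothetical cacheGetter interface and helper name; it relies only on the fact that the assume cache never moves backwards:

    package dra

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // cacheGetter abstracts the read side of the assume cache.
    type cacheGetter interface {
        Get(name string) (interface{}, error)
    }

    // waitForCache blocks until the cache's copy of the object is the one
    // we just listed, so that a subsequent update cannot hit a stale cache.
    func waitForCache(ctx context.Context, cache cacheGetter, name, listedRV string) error {
        return wait.PollUntilContextCancel(ctx, 10*time.Millisecond, true,
            func(ctx context.Context) (bool, error) {
                obj, err := cache.Get(name)
                if err != nil {
                    return false, nil // not stored yet, keep polling
                }
                meta, ok := obj.(metav1.Object)
                if !ok {
                    return false, fmt.Errorf("unexpected object type %T", obj)
                }
                // Resource versions are opaque, so equality with the listed
                // version is used here for simplicity; a real implementation
                // would need a forward-progress check if other writers can
                // bump the claim concurrently.
                return meta.GetResourceVersion() == listedRV, nil
            })
    }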
This change introduces the DeclarativeValidationBeta feature gate in v1.36
as the global safety switch for Beta-stage validation rules and marks
DeclarativeValidationTakeover as deprecated.
Following KEP-5073.
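A sketch of what the registration could look like, following the versioned feature-gate pattern used elsewhere in the repo; the gate names and the v1.36 Beta stage come from this change, while the map variable and the exact spec values are assumptions:

    package features

    import (
        "k8s.io/apimachinery/pkg/util/version"
        "k8s.io/component-base/featuregate"
    )

    var declarativeValidationGates = map[featuregate.Feature]featuregate.VersionedSpecs{
        // Global safety switch for Beta-stage declarative validation rules.
        "DeclarativeValidationBeta": {
            {Version: version.MustParse("1.36"), Default: true, PreRelease: featuregate.Beta},
        },
        // Deprecated in favor of the Beta gate above.
        "DeclarativeValidationTakeover": {
            {Version: version.MustParse("1.36"), Default: false, PreRelease: featuregate.Deprecated},
        },
    }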
Replace all imports of k8s.io/apimachinery/pkg/util/dump with
k8s.io/utils/dump across the repo. The apimachinery dump package
now contains deprecated wrapper functions that delegate to
k8s.io/utils/dump for backwards compatibility.
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
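A sketch of one such deprecated wrapper, assuming the k8s.io/utils/dump signatures match the originals:

    // Package dump remains as a thin compatibility layer.
    package dump

    import (
        utildump "k8s.io/utils/dump"
    )

    // Pretty returns a human-readable representation of the given value.
    //
    // Deprecated: use k8s.io/utils/dump.Pretty instead.
    func Pretty(a interface{}) string {
        return utildump.Pretty(a)
    }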
This used to be an E2E test, but it turned out to be too slow and unreliable
and therefore got removed. As an integration test we have a bit better control
over the environment, so it should be possible to avoid the same flakes.
Some of the slowness comes from pods entering backoff. Maybe this is an
opportunity for future improvements.
To support these tests, the ResourceClaim controller is needed. The framework
can now start it on demand, similar to how the scheduler was handled already.