As usual, consumers of an allocated claim react to the information stored in
the status. In this case, the scheduler did not copy the tolerations into the
status, so a pod with a toleration for NoExecute got scheduled and then was
immediately evicted.
Additional logging makes the handling easier to track in the eviction
controller. Example YAMLs allow reproducing the issue manually.
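For reference, a claim with such a toleration might look roughly like this (a sketch only; `gpu.example.com`, the taint key, and the exact field layout depend on the resource.k8s.io API version and driver in use):

```yaml
apiVersion: resource.k8s.io/v1beta2
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  devices:
    requests:
    - name: gpu
      exactly:
        deviceClassName: gpu.example.com
        tolerations:
        - key: example.com/unhealthy
          operator: Exists
          effect: NoExecute
```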
When the test binary is built without race detection, the JUnit file cannot
contain data race reports, so its post-processing is unnecessary. Build tags
are used to skip it.
Inline the endpointSlicesEqual() method into the test, since, despite
its generic-sounding name, it made assumptions specific to this test.
Also, port to generic sets.
Move the Endpoints API test from endpointslice.go to endpoints.go.
Move the "kubernetes.default Service exists" and "kubernetes.default
endpoints exist" tests to apiserver.go, since (unlike the rest of
service.go/endpointslice.go) they aren't testing the behavior of the
Service/EndpointSlice/Endpoints APIs.
(No code changes, but fixed a typo in a comment.)
Presumably
https://github.com/kubernetes/kubernetes/pull/127260/files#r2405215911
was meant to continue polling after a watch was closed by the apiserver,
something that can happen under load. However, returning the error stops
polling. This shows up as test failures when testing with race detection
enabled:
persistent_volumes_test.go:1101: Failed to wait for all claims to be bound: watch closed
This adds a new integration test to verify that the API server's egress
to admission webhooks correctly respects the standard `HTTPS_PROXY`
and `NO_PROXY` environment variables.
It also adds a new test utility that implements a fake DNS server for
overriding DNS resolution in tests. This is especially useful for
integration tests, where servers can only bind to localhost, an address
that certain functionality ignores.
Promoting real tests turned out to be harder than expected (they would have to
be rewritten to be self-contained, need additional reviews, etc.).
They also would not achieve 100% endpoint+operation coverage, because real
tests only use some of the operations. Therefore each API type has to be
covered with CRUD-style tests which only exercise the apiserver; additional
functional tests can perhaps be added later (depending on time and motivation).
The machinery for testing different API types is meant to be reusable, so it
gets added in the new e2e/framework/conformance helper package.
That was the original intent, but the implementation then ended up checking
ResourceClaims in all namespaces. Depending on timing, this was merely
misleading (showing ResourceClaim changes from a different test running in
parallel), but with upcoming CRUD tests which intentionally set an allocation
result without a finalizer, it breaks the non-CRUD tests when they check
those CRUD ResourceClaims.
It's a nested map which looks a lot nicer as YAML, in particular
when it represents a Kubernetes object.
Unit+integration tests using ktesting+gomega and E2E tests benefit from this
change.
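For illustration, the same nested map as Go's default map dump versus its YAML rendering (object contents made up):

```
map[metadata:map[name:my-pod namespace:default] spec:map[nodeName:worker-1]]
```

versus

```yaml
metadata:
  name: my-pod
  namespace: default
spec:
  nodeName: worker-1
```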