Remove issue reference and trim the comment down to the assertion's
intent, per @roidelapluie review.
Signed-off-by: alliasgher <alliasgher123@gmail.com>
tsdb: Skip clean series during periodic head chunk mmap
The periodic mmapHeadChunks cycle previously acquired a per-series
lock on every series, even though typically >99% have nothing to
mmap. This was identified as a CPU bottleneck in Grafana Mimir.
Add a headChunkCount field (sync/atomic.Uint32) to memSeries that
tracks the number of head chunks. It is incremented in
cutNewHeadChunk and the histogram new-chunk paths, and reset by
mmapChunks and truncateChunksBefore. mmapHeadChunks uses a lock-free
Load to skip series with fewer than 2 head chunks, avoiding the
per-series lock for clean series.
sync/atomic.Uint32 (4 bytes) is used instead of go.uber.org/atomic
(8 bytes) to fit in existing struct padding without growing
memSeries. Chunk counts are bounded by the 3-byte field in
HeadChunkRef, so cannot overflow uint32.
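The lock-free skip can be sketched as follows. This is an illustrative simplification, not the real tsdb code: the field and function names (headChunkCount, cutNewHeadChunk, mmapHeadChunks) follow the commit description, but the series type and the actual mmap work are stubbed out.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Simplified stand-in for tsdb's memSeries; only the pieces relevant to
// the skip are shown.
type memSeries struct {
	mu             sync.Mutex
	headChunkCount atomic.Uint32 // number of open head chunks
}

func (s *memSeries) cutNewHeadChunk() {
	s.headChunkCount.Add(1)
}

func mmapHeadChunks(series []*memSeries) int {
	mmapped := 0
	for _, s := range series {
		// Lock-free fast path: a series with fewer than 2 head chunks
		// has nothing to mmap, so skip it without taking the lock.
		if s.headChunkCount.Load() < 2 {
			continue
		}
		s.mu.Lock()
		// ... mmap all but the open head chunk here ...
		s.headChunkCount.Store(1) // reset: only the open chunk remains
		s.mu.Unlock()
		mmapped++
	}
	return mmapped
}

func main() {
	clean := &memSeries{}
	clean.cutNewHeadChunk() // one open chunk: nothing to mmap
	dirty := &memSeries{}
	dirty.cutNewHeadChunk()
	dirty.cutNewHeadChunk() // two head chunks: needs mmapping
	fmt.Println(mmapHeadChunks([]*memSeries{clean, dirty})) // 1
}
```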
Also fix pre-existing comment inaccuracies in the touched code:
headChunks.next -> headChunks.prev, mmapHeadChunks() -> mmapChunks()
in the doc comment, and a grammar error.
---------
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* util/strutil: add Jaro-Winkler similarity implementation
This is part of the implementation of prometheus/proposals#74
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
* util/strutil: optimise JaroWinkler with string-native ASCII path
Replace the generic jaroWinkler[T byte|rune] with two specialised
functions: jaroWinklerString (ASCII path) operates directly on the
string values and avoids the []byte conversion that previously caused
two heap allocations per call; jaroWinklerRunes (Unicode path) is
unchanged in algorithm but split out from the generic.
Both paths replace the repeated float64 divisions in the Jaro formula
with precomputed reciprocals (invL1, invL2).
Result: short ASCII strings drop from 2 allocs/op to 0 allocs/op;
long ASCII drops from 4 allocs/op to 2 allocs/op (bool match arrays
only).
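A sketch of the ASCII fast path described above, assuming the standard Jaro formula: operate on the string bytes directly (no []byte conversion) and replace the repeated float64 divisions with precomputed reciprocals (invL1, invL2). The Winkler prefix boost and the rune path are elided; the real code lives in util/strutil.

```go
package main

import "fmt"

func jaroASCII(s1, s2 string) float64 {
	if s1 == s2 {
		return 1
	}
	l1, l2 := len(s1), len(s2)
	if l1 == 0 || l2 == 0 {
		return 0
	}
	window := maxInt(l1, l2)/2 - 1
	m1 := make([]bool, l1) // the only allocations: two bool match arrays
	m2 := make([]bool, l2)
	matches := 0
	for i := 0; i < l1; i++ {
		lo, hi := maxInt(0, i-window), minInt(l2-1, i+window)
		for j := lo; j <= hi; j++ {
			if !m2[j] && s1[i] == s2[j] {
				m1[i], m2[j] = true, true
				matches++
				break
			}
		}
	}
	if matches == 0 {
		return 0
	}
	// Count half-transpositions among the matched characters.
	transpositions := 0
	for i, j := 0, 0; i < l1; i++ {
		if !m1[i] {
			continue
		}
		for !m2[j] {
			j++
		}
		if s1[i] != s2[j] {
			transpositions++
		}
		j++
	}
	// Precomputed reciprocals replace per-term divisions by l1 and l2.
	invL1, invL2 := 1/float64(l1), 1/float64(l2)
	fm := float64(matches)
	return (fm*invL1 + fm*invL2 + (fm-float64(transpositions)/2)/fm) / 3
}

func maxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Printf("%.4f\n", jaroASCII("MARTHA", "MARHTA")) // 0.9444
}
```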
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
* util/strutil: replace JaroWinkler with JaroWinklerMatcher
Remove the free JaroWinkler function and replace it with a
JaroWinklerMatcher struct. NewJaroWinklerMatcher pre-computes the
ASCII check and rune conversion for the search term once; Score then
runs the comparison against each candidate without repeating that work.
This is the expected usage pattern in Prometheus: one fixed term scored
against many label names or values.
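The precompute-once shape might look like this. A minimal sketch only: the matcher and constructor names follow the commit, but the field layout is assumed and the actual similarity computation is stubbed out.

```go
package main

import "fmt"

// JaroWinklerMatcher holds per-term state computed once by the
// constructor so Score does not repeat it per candidate.
type JaroWinklerMatcher struct {
	term      string
	termRunes []rune // precomputed once for the Unicode path
	ascii     bool   // precomputed ASCII check
}

func NewJaroWinklerMatcher(term string) *JaroWinklerMatcher {
	ascii := true
	for i := 0; i < len(term); i++ {
		if term[i] >= 0x80 {
			ascii = false
			break
		}
	}
	m := &JaroWinklerMatcher{term: term, ascii: ascii}
	if !ascii {
		m.termRunes = []rune(term) // rune conversion done once, not per call
	}
	return m
}

// Score compares the fixed term against one candidate; the real method
// would dispatch to jaroWinklerString or jaroWinklerRunes here. Stubbed
// to an exact-match check for illustration.
func (m *JaroWinklerMatcher) Score(candidate string) float64 {
	if m.term == candidate {
		return 1
	}
	return 0
}

func main() {
	m := NewJaroWinklerMatcher("http_requests_total")
	// One fixed term scored against many candidates: the expected usage.
	for _, name := range []string{"http_requests_total", "http_request_duration"} {
		fmt.Println(name, m.Score(name))
	}
}
```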
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
* Update util/strutil/jarowinkler.go and util/strutil/jarowinkler_test.go
Co-authored-by: Arve Knudsen <arve.knudsen@gmail.com>
Signed-off-by: Julien <291750+roidelapluie@users.noreply.github.com>
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
---------
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
Signed-off-by: Julien <291750+roidelapluie@users.noreply.github.com>
Co-authored-by: Arve Knudsen <arve.knudsen@gmail.com>
Rather than widening the assertion to accept raw hex codes, skip the
strict _MAGIC format check with t.Skipf when the filesystem is not in
the known map. The test still exercises the error paths and will run
fully on standard Linux/macOS filesystems.
Fixes prometheus/prometheus#18471
Signed-off-by: Ali <ali@kscope.ai>
FsType() returns the known magic-name string when the filesystem is
present in its internal map, and falls back to strconv.FormatInt(..., 16)
otherwise. The test was asserting the *MAGIC regex only, so it failed
whenever it happened to run on a filesystem not yet mapped — the
downstream Arch Linux packager hit this with a btrfs subvolume.
Extend the regex to accept either a magic-name or the numeric
lowercase-hex fallback, keeping the test stable across OS upgrades and
exotic filesystems.
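The widened assertion can be sketched as below. The exact regex in the test may differ; this one simply accepts either a mapped *_MAGIC name or the lowercase-hex fallback that strconv.FormatInt(..., 16) produces (0x9123683E for btrfs becomes "9123683e").

```go
package main

import (
	"fmt"
	"regexp"
)

// Accept either a known magic-name constant (e.g. EXT4_SUPER_MAGIC) or
// the numeric lowercase-hex fallback for unmapped filesystems.
var fsTypeRe = regexp.MustCompile(`^([A-Z0-9_]+_MAGIC|[0-9a-f]+)$`)

func main() {
	fmt.Println(fsTypeRe.MatchString("EXT4_SUPER_MAGIC")) // true: mapped name
	fmt.Println(fsTypeRe.MatchString("9123683e"))         // true: btrfs magic as hex
}
```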
Fixes #18471
Signed-off-by: Ali <alliasgher123@gmail.com>
Gitpod rebranded to Ona a while ago and is now focusing on AI-agentic
coding, so at least the traditional links that opened the repo in a cloud-based
coding environment without login no longer work. Let's remove the files
and badge to get rid of old cruft.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
This adds a /api/v1/status/self_metrics endpoint that allows the frontend to
fetch metrics about the server itself, making it easier to construct frontend
pages that show the current server state. This is needed because fetching
metrics from its own /metrics endpoint would both be hard to parse and
require CORS permissions on that endpoint (at least for cases where the
frontend dashboard is not served from the same origin).
Signed-off-by: Julius Volz <julius.volz@gmail.com>
Metric names, label names, and label values containing HTML/JavaScript were
inserted into `innerHTML` without escaping in several UI code paths, enabling
stored XSS attacks via crafted metrics. This mostly becomes exploitable in
Prometheus 3.x, since it defaults to allowing any UTF-8 characters in metric
and label names.
Apply `escapeHTML()` to all user-controlled values before innerHTML
insertion in:
* Mantine UI chart tooltip
* Old React UI chart tooltip
* Old React UI metrics explorer fuzzy search
* Old React UI heatmap tooltip
See https://github.com/prometheus/prometheus/security/advisories/GHSA-vffh-x6r8-xx99
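The actual fix is in the TypeScript UI code, but the escaping rule it applies can be illustrated in Go with the standard library: any user-controlled metric or label string is entity-escaped before it is interpolated into markup. The tooltipHTML helper below is hypothetical, not part of the codebase.

```go
package main

import (
	"fmt"
	"html"
)

// tooltipHTML mimics building a tooltip fragment: user-controlled values
// are escaped with html.EscapeString before interpolation, so crafted
// metric names render as text instead of executing as markup.
func tooltipHTML(metricName, value string) string {
	return fmt.Sprintf("<b>%s</b>: %s",
		html.EscapeString(metricName), html.EscapeString(value))
}

func main() {
	// A crafted metric name that would otherwise inject an element.
	fmt.Println(tooltipHTML(`up<img src=x onerror=alert(1)>`, "1"))
}
```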
Signed-off-by: Julius Volz <julius.volz@gmail.com>
When health_filter is set without explicit services, the catalog needs
to be watched to enumerate services. Add watchedFilter to the condition
that triggers catalog watching.
Improve the filter test suite:
- Replace defer with t.Cleanup for stub servers.
- Rewrite TestFilterOption to assert that the catalog receives the filter
and the health endpoint does not.
- Rewrite TestHealthFilterOption to assert that health_filter is routed
correctly to the health endpoint only.
- Add TestBothFiltersOption to verify both filters are routed to their
respective endpoints when both are configured.
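The condition change might be sketched like this. Heavily simplified and hypothetical: the type and method names here are illustrative stand-ins for the Consul SD internals, showing only the logic the commit describes (watch the catalog when no explicit services are configured, or when a catalog-side filter is set).

```go
package main

import "fmt"

// Illustrative stand-in for the Consul discovery state.
type consulDiscovery struct {
	watchedServices     []string // explicitly configured services
	watchedFilter       string   // catalog-side filter
	watchedHealthFilter string   // health-side filter
}

// shouldWatchCatalog reports whether the catalog must be watched to
// enumerate services: either no explicit service list was given (the
// health_filter-only case), or a catalog filter must be evaluated.
func (d *consulDiscovery) shouldWatchCatalog() bool {
	return len(d.watchedServices) == 0 || d.watchedFilter != ""
}

func main() {
	// health_filter set without explicit services: catalog is watched.
	d := &consulDiscovery{watchedHealthFilter: `Checks.Status == "passing"`}
	fmt.Println(d.shouldWatchCatalog()) // true
}
```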
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
The filter field was documented as targeting the Catalog API, but since
PR #17349 it has also been passed to the Health API. This broke existing
configs using Catalog-only fields like ServiceTags, which the Health API
rejects (it uses Service.Tags instead).
Introduce a separate health_filter field that is passed exclusively to
the Health API, while filter remains catalog-only. Update the docs to
explain the two-phase discovery (Catalog for service listing, Health for
instances) and the field name differences between the two APIs.
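In a scrape config the split might look like this (server address and filter expressions illustrative):

```yaml
consul_sd_configs:
  - server: "localhost:8500"
    # Catalog phase (service listing): uses Catalog field names
    # such as ServiceTags.
    filter: 'ServiceTags contains "metrics"'
    # Health phase (instance listing): the same concept is named
    # Service.Tags in the Health API.
    health_filter: 'Service.Tags contains "metrics"'
```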
Fixes #18479
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
* promql: add test for info() with non-matching labels and empty-string data matcher
This adds a test case for info() where identifying labels don't match
any info series, but the data label matcher accepts empty strings
({data=~".*"}). In this case, the base series should be returned
unchanged, since the matcher doesn't require info series data to be
present.
This complements the existing test with {non_existent=~".+"}, which
drops the series because .+ doesn't match the empty string.
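Roughly, in the promqltest script format (series names and values illustrative), the new case looks like:

```
load 1m
  metric{instance="a", job="1"} 1

# No info series is loaded, so the identifying labels match nothing,
# but {data=~".*"} accepts empty: the base series comes back unchanged.
eval instant at 0m info(metric, {data=~".*"})
  metric{instance="a", job="1"} 1
```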
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* promql: remove stale XXX comment and add info() empty-matcher test
Remove the outdated XXX comment about vector selectors requiring at
least one non-empty matcher, since the info function's second argument
now bypasses this check via BypassEmptyMatcherCheck.
Add a test case for info(metric, {non_existent=~".*"}) to verify that
a data matcher accepting empty labels on its own returns the metric
unchanged when the requested label doesn't exist on the info series.
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* promql: add test for info() with non-matching identifying labels and non-empty data matcher
Add a test case for info(metric_not_matching_target_info, {data=~".+"})
to verify that the series is dropped when identifying labels don't match
any info series and the data matcher doesn't accept empty strings.
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
---------
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
Export parser.Keywords() and add GetDictForFuzzParseExpr() so that
the corpus generator can produce a stable fuzzParseExpr.dict file
derived directly from the PromQL grammar rather than maintained by hand.
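The generator's core can be sketched as a loop that renders each keyword as a quoted token, one per line, which is the libFuzzer dictionary format. The keywords function below is a stub standing in for the newly exported parser.Keywords(); the real list comes from the PromQL grammar.

```go
package main

import (
	"fmt"
	"strings"
)

// Stub for parser.Keywords(); a few illustrative PromQL keywords only.
func keywords() []string {
	return []string{"sum", "by", "without", "offset", "bool"}
}

// buildDict renders a libFuzzer-style .dict file: one quoted token per
// line, suitable for writing out as fuzzParseExpr.dict.
func buildDict() string {
	var b strings.Builder
	for _, kw := range keywords() {
		fmt.Fprintf(&b, "%q\n", kw)
	}
	return b.String()
}

func main() {
	fmt.Print(buildDict())
}
```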
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
Promote the Prometheus container archs settings to common so they
are used by default on other projects.
* Add `.dockerignore` to sync script to include docker archs.
Signed-off-by: SuperQ <superq@gmail.com>