Vishal Gupta
1 min read · Nov 9, 2024


If Promtail is still scraping logs from all pods despite the relabel_configs setting, here are a few things to double-check:

Verify Annotation Key in Kubernetes: Ensure that the annotation on the pods is exactly promtail/logs-enabled: "true". Kubernetes annotations are case-sensitive, so any variation (e.g., Promtail/Logs-Enabled: "true") will not match.
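For reference, a pod manifest with the annotation set would look like the following (the pod name and image here are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                        # placeholder name
  annotations:
    promtail/logs-enabled: "true"     # must match exactly, including case
spec:
  containers:
    - name: app
      image: nginx                    # placeholder image
```

You can confirm what is actually set on a running pod with `kubectl get pod my-app -o jsonpath='{.metadata.annotations}'`.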

Double-check Relabeling Logic: Update the scrape_configs so that the keep filter is the first (or only) relabeling rule. Note how service discovery sanitizes the annotation key: every non-alphanumeric character becomes an underscore, so promtail/logs-enabled is exposed as __meta_kubernetes_pod_annotation_promtail_logs_enabled. Also quote the regex value, since an unquoted true is parsed as a YAML boolean rather than a string:

```yaml
scrape_configs:
  - job_name: kubernetes-pods-with-annotation
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only logs from pods annotated with `promtail/logs-enabled: "true"`
      - source_labels: [__meta_kubernetes_pod_annotation_promtail_logs_enabled]
        regex: "true"
        action: keep
      # Optionally, drop other unwanted namespaces
      - source_labels: [__meta_kubernetes_namespace]
        regex: unwanted-namespace
        action: drop
```

Helm Chart Overrides: Ensure no default scrape_configs from the Helm chart are still in effect. The grafana/promtail chart ships a default scrape config that collects logs from every pod, so if your job is merely added alongside it, the default will keep scraping everything. Note also that scrape_configs is a top-level Promtail key, not part of the server block. In recent chart versions the defaults live under config.snippets.scrapeConfigs, so replace that key with your own configuration (key names may differ between chart versions):

```yaml
# values.yaml for the grafana/promtail chart
config:
  snippets:
    scrapeConfigs: |
      # only your custom scrape_configs here, replacing the chart defaults
```

Enable Debug Logs: Run Promtail in debug mode to see exactly which pods and annotations it discovers. This can help identify which part of the configuration is causing the filtering to fail. With the grafana/promtail chart this is set in values.yaml (in a raw Promtail config the equivalent is log_level: debug under the server block):

```yaml
# values.yaml for the grafana/promtail chart
config:
  logLevel: debug
```

Apply the Configuration Again: Re-apply the configuration with helm upgrade, then monitor Promtail’s logs to verify that only the expected pods are being scraped.
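The final step might look like the following (the release name, namespace, and values file are assumptions for illustration):

```shell
# Apply the updated values (release and namespace names are examples)
helm upgrade promtail grafana/promtail -n monitoring -f values.yaml

# Tail Promtail's logs to see which targets it keeps or drops
kubectl -n monitoring logs daemonset/promtail -f
```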
