
A way to disable go, statsd_exporter and prometheus metrics #446

Closed
insomnes opened this issue Jun 28, 2022 · 6 comments

Comments

@insomnes
Hello!
Is there any way to disable these metrics?

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.4476e-05
go_gc_duration_seconds{quantile="0.25"} 2.4044e-05
go_gc_duration_seconds{quantile="0.5"} 8.5533e-05
go_gc_duration_seconds{quantile="0.75"} 0.000104522
go_gc_duration_seconds{quantile="1"} 0.000135163
go_gc_duration_seconds_sum 0.000845045
go_gc_duration_seconds_count 12
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 15
...
# HELP statsd_exporter_lines_total The total number of StatsD lines received.
# TYPE statsd_exporter_lines_total counter
statsd_exporter_lines_total 4306
# HELP statsd_exporter_loaded_mappings The current number of configured metric mappings.
# TYPE statsd_exporter_loaded_mappings gauge
statsd_exporter_loaded_mappings 26
# HELP statsd_exporter_metrics_total The total number of metrics.
# TYPE statsd_exporter_metrics_total gauge
statsd_exporter_metrics_total{type="counter"} 8
statsd_exporter_metrics_total{type="gauge"} 10
statsd_exporter_metrics_total{type="histogram"} 2
# HELP statsd_exporter_samples_total The total number of StatsD samples received.
# TYPE statsd_exporter_samples_total counter
statsd_exporter_samples_total 4306
# HELP statsd_exporter_tag_errors_total The number of errors parsing DogStatsD tags.
# TYPE statsd_exporter_tag_errors_total counter
statsd_exporter_tag_errors_total 0
# HELP statsd_exporter_tags_total The total number of DogStatsD tags processed.
# TYPE statsd_exporter_tags_total counter
statsd_exporter_tags_total 0
...
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.33011968e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 10
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
@SuperQ
Member

SuperQ commented Jun 28, 2022

These can be dropped using metric relabel configs.

For questions/help/support please use our community channels. There are more people available to potentially respond to your request and the whole community can benefit from the answers provided.
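
As a sketch of the suggestion above (the job name and target address are placeholders for your setup), dropping the exporter's own Go, process, and internal series with `metric_relabel_configs` on the Prometheus side could look like:

```yaml
scrape_configs:
  - job_name: statsd_exporter          # hypothetical job name
    static_configs:
      - targets: ["statsd-exporter:9102"]
    metric_relabel_configs:
      # Drop the exporter's self-metrics after scraping, before storage.
      - source_labels: [__name__]
        regex: "(go|process|statsd_exporter|promhttp)_.*"
        action: drop
```

Note that the metrics are still scraped and only discarded before ingestion, so this reduces storage, not scrape cost.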

@matthiasr
Contributor

At the moment there isn't. I would be open to it in principle, but only if there is a really good use case that cannot be satisfied otherwise. It is possible to run into conflicts between translated and internal metrics, but so far this has always come down to someone trying to build a push metric system by [looping Prometheus metrics through another protocol](https://github.com/prometheus/graphite_exporter/issues/165#issuecomment-922747805).

What problem are these causing for you?

@insomnes
Author

insomnes commented Jun 29, 2022

What problem are these causing for you?

I'm using statsd_exporter as a sidecar container in my Kubernetes deployment to collect Airflow metrics via a ServiceMonitor.
I thought these metrics would interfere with some others because they are not labeled by default, but now I understand that the ServiceMonitor will attach my Kubernetes labels to them. So there is no problem at all. Sorry for the bother.
Thanks for the quick answers, though.

@erikvanzijst

At the moment there isn't.

Would it at least be possible to add custom labels to them (so they will end up on the appropriate service dashboard in my case)?

I might be using it wrong, but the following mapping file does not seem to affect the built-in metrics:

mappings:
  - match: ".+"
    match_type: regex
    name: "$0"
    labels:
      group: my_service_group
      instance: my_service_name

@matthiasr
Contributor

Not really, we don't have influence over the metrics generated by the Go client. Generally, the Prometheus approach is to not have a service "identify itself" like this; this is a job for service discovery (and if necessary, relabeling).

I would also recommend deploying the exporter as a sidecar and considering it as part of the service that sends metrics to it; this way you don't have to transport service identity through statsd events and mappings. If this isn't an option, use the honor_labels: true option on the Prometheus side so that the target labels for the exporter don't override target labels from mapped metrics.
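
The `honor_labels` option mentioned above is set per scrape job. A minimal sketch (job name and target are placeholders): with `honor_labels: true`, labels attached by the statsd mapping, such as an `instance` or `job` produced by a mapping rule, win over the labels Prometheus would otherwise assign from the scrape target.

```yaml
scrape_configs:
  - job_name: statsd_exporter          # hypothetical job name
    honor_labels: true                 # keep labels from mapped metrics on conflict
    static_configs:
      - targets: ["statsd-exporter:9102"]
```

Without this option, Prometheus renames conflicting scraped labels to `exported_instance`, `exported_job`, and so on, and keeps its own target labels.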

@erikvanzijst

Yeah you make a good point about not self-identifying services. Prometheus knows which service it's scraping and so there's no value in putting that responsibility on the applications.
