How to get etcd metrics? #7214
-
It's not a pod, and I can't scrape metrics from the cluster.
Replies: 4 comments · 19 replies
-
You'd set the and add a
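(The reply above appears truncated. Judging from the later replies in this thread, the suggestion is presumably to set `listen-metrics-urls` in the etcd extraArgs and point a scrape target at it; a sketch, not the author's verified wording:)

```yaml
cluster:
  etcd:
    extraArgs:
      listen-metrics-urls: https://0.0.0.0:2379
```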
-
So what is the recommended way of configuring etcd metrics scraping again? I was not able to get the above recommendation to work. I am running:

After applying the following cluster patch:

```yaml
cluster:
  etcd:
    extraArgs:
      listen-metrics-url: 0.0.0.0
```

```
# netstat -ltnp | grep etcd
tcp6       0      0 :::2380      :::*      LISTEN      2061/etcd
tcp6       0      0 :::2379      :::*      LISTEN      2061/etcd
```

FWIW:

```
# curl -6 "http://[::1]:2380/metrics" -v
*   Trying [::1]:2380...
* Connected to ::1 (::1) port 2380 (#0)
> GET /metrics HTTP/1.1
> Host: [::1]:2380
> User-Agent: curl/7.88.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
```

```
# curl -6 "http://[::1]:2379/metrics" -v
*   Trying [::1]:2379...
* Connected to ::1 (::1) port 2379 (#0)
> GET /metrics HTTP/1.1
> Host: [::1]:2379
> User-Agent: curl/7.88.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
```
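(Editor's note, hedged: an empty reply from curl over plain HTTP is consistent with etcd serving TLS on ports 2379/2380. Note also that the patch above uses the key `listen-metrics-url`, while etcd's flag is `--listen-metrics-urls` (plural) and its value must be a full URL, as shown in the reply below. A sketch of a TLS request using client certificates decoded from the Talos etcd secrets; the file paths here are hypothetical:)

```shell
# Hypothetical paths: decode the certs from `talosctl get etcdsecret` /
# `talosctl get etcdrootsecret` into these files first.
curl --cacert etcd-ca.crt \
     --cert etcd-client.crt \
     --key etcd-client-key.key \
     "https://127.0.0.1:2379/metrics" | head
```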
-
You can retrieve etcd metrics directly from the etcd server over its HTTP API. The metrics endpoint typically resides at `/metrics` on the client port (2379), or on any address configured via `--listen-metrics-urls`.
-
For those coming here using kube-prometheus-stack and looking for a way to make scraping of etcd, the controller manager, and the scheduler work, here is how I did it.

Add a machine config patch on all control planes:

```yaml
- op: add
  path: /cluster/etcd/extraArgs
  value:
    listen-metrics-urls: https://0.0.0.0:2379
- op: add
  path: /cluster/controllerManager/extraArgs
  value:
    bind-address: 0.0.0.0
- op: add
  path: /cluster/scheduler/extraArgs
  value:
    bind-address: 0.0.0.0
```

Apply the patch, and repeat for all control planes.

Get the certificates from etcd by running:

```
talosctl get etcdrootsecret -o yaml
talosctl get etcdsecret -o yaml
```

The output should look like:

```yaml
spec:
  etcdCA:
    crt:
```

```yaml
spec:
  etcd:
    crt:
    key:
```

The strings are already base64-encoded, so they can be copied straight into a new secret. Create a new secret:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: etcd-client-cert
  namespace: monitoring
type: Opaque
data:
  etcd-ca.crt: LS0t....LS0K
  etcd-client.crt: LS0t....LS0K
  etcd-client-key.key: LS0t....=
```
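(Kubernetes Secret `data` values are base64-encoded, which is why the strings from `talosctl` can be pasted in unchanged. To sanity-check a pasted value, decode it; a valid certificate field should decode to a PEM block. The string below is a made-up illustration, not a real cert:)

```shell
# Illustrative value only: a real field decodes to a full PEM certificate.
echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCg==" | base64 -d
# prints: -----BEGIN CERTIFICATE-----
```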
In your custom-values.yaml for the Helm deployment, add or change the following, replacing the IPs with those of your control planes:

```yaml
kubeControllerManager:
  endpoints:
    - 10.0.0.1
    - 10.0.0.2
    - 10.0.0.3
kubeEtcd:
  endpoints:
    - 10.0.0.1
    - 10.0.0.2
    - 10.0.0.3
  service:
    selector:
      component: etcd
  serviceMonitor:
    scheme: https
    insecureSkipVerify: false
    serverName: "localhost"
    caFile: "/etc/prometheus/secrets/etcd-client-cert/etcd-ca.crt"
    certFile: "/etc/prometheus/secrets/etcd-client-cert/etcd-client.crt"
    keyFile: "/etc/prometheus/secrets/etcd-client-cert/etcd-client-key.key"
kubeScheduler:
  endpoints:
    - 10.0.0.1
    - 10.0.0.2
    - 10.0.0.3
prometheus:
  prometheusSpec:
    secrets:
      - etcd-client-cert
```

Note (stating the obvious): the values above are not enough for a complete and successful deployment of kube-prometheus-stack. These are only the additional changes you need to make this particular scraping work.
Yes, you can also set it to the node IP.
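(For example, using a placeholder node IP instead of the wildcard address; a sketch, not verified:)

```yaml
cluster:
  etcd:
    extraArgs:
      listen-metrics-urls: https://10.0.0.1:2379  # placeholder: this node's IP
```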