Hands-On: Installing VictoriaMetrics in K8s


Background

In an earlier post I introduced VictoriaMetrics and some caveats around installing it. Today we will walk through installing it in Kubernetes. This walkthrough installs a cluster-version VictoriaMetrics on a cloud-hosted Kubernetes cluster and uses the cloud provider's load balancer.

Note: VictoriaMetrics is abbreviated as VM below.

Prerequisites

  • A Kubernetes cluster; mine runs v1.20.6
  • A StorageClass prepared in the cluster; I use one backed by NFS here (see the check below)
  • Operator image tag v0.17.2; vmstorage, vmselect, and vminsert image tag v1.63.0. You can pull the images ahead of time and push them to a local registry
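A quick way to confirm the StorageClass is in place before starting; nfs-csi is the name used by the manifests later in this article:

    kubectl get storageclass nfs-csi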

Installation Notes

VM can be installed in several ways: from binaries, from Docker images, or from source; pick whatever suits your scenario. For Kubernetes, the operator is the most direct route. The rest of this section covers the points worth knowing before you install.

A minimal cluster must contain the following nodes (see the flag sketch after this list):

  •  one vmstorage node, started with the -retentionPeriod and -storageDataPath flags
  •  one vminsert node, started with the -storageNode flag pointing at the vmstorage node(s)
  •  one vmselect node, started with the -storageNode flag pointing at the vmstorage node(s)

Note: for high availability, it is recommended to run at least two nodes of each service.
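For reference, a minimal sketch of those flags when the three services are started from binaries; the hostname and data path here are illustrative, not part of this installation:

    # vmstorage keeps the data; -retentionPeriod is in months by default
    ./vmstorage -retentionPeriod=1 -storageDataPath=/var/lib/vmstorage
    # vminsert reaches vmstorage on port 8400, vmselect on port 8401 (the defaults)
    ./vminsert -storageNode=vmstorage-host:8400
    ./vmselect -storageNode=vmstorage-host:8401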

A load balancer, such as vmauth or nginx, is needed in front of vmselect and vminsert; here we use the cloud provider's load balancer. It must satisfy the following:

  •  requests whose path starts with /insert must be routed to port 8480 of the vminsert nodes
  •  requests whose path starts with /select must be routed to port 8481 of the vmselect nodes

Note: each service's listen port can be changed with -httpListenAddr. (A sketch of these routing rules as an in-cluster Ingress follows below.)
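If you route inside the cluster instead of through a cloud LB, the same two rules can be sketched as a Kubernetes Ingress; the Service names below assume the ones the operator creates later in this article:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: vm-routing
      namespace: monitoring-system
    spec:
      rules:
      - http:
          paths:
          - path: /insert
            pathType: Prefix
            backend:
              service:
                name: vminsert-vmcluster-main
                port:
                  number: 8480
          - path: /select
            pathType: Prefix
            backend:
              service:
                name: vmselect-vmcluster-main
                port:
                  number: 8481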

It is also recommended to set up monitoring for the cluster itself.
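Every component serves its own Prometheus metrics on its HTTP port, so any scraper can pick them up; for example (Service names as created later in this article):

    curl http://vminsert-vmcluster-main:8480/metrics
    curl http://vmselect-vmcluster-main:8481/metrics
    curl http://vmstorage-vmcluster-main:8482/metrics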

If you are testing the cluster on a single host, vminsert, vmselect, and vmstorage must each be given a unique -httpListenAddr, and each vmstorage instance must use unique values for -storageDataPath, -vminsertAddr, and -vmselectAddr.

When the free space on the volume behind vmstorage's -storageDataPath drops below the threshold set by -storage.minFreeDiskSpaceBytes, the node switches to read-only mode; vminsert stops sending data to such nodes and routes it to the remaining available vmstorage nodes.
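When deploying through the operator (as below), extra flags like this can be passed via the CR's extraArgs map; a hedged sketch, with the threshold value chosen arbitrarily:

    vmstorage:
      extraArgs:
        # example value, not from this installation
        storage.minFreeDiskSpaceBytes: "10GB"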

Installation Steps

Installing VM

1. Create the CRDs

    # Download the install bundle
    export VM_VERSION=`basename $(curl -fs -o /dev/null -w %{redirect_url} https://github.com/VictoriaMetrics/operator/releases/latest)`
    wget https://github.com/VictoriaMetrics/operator/releases/download/$VM_VERSION/bundle_crd.zip
    unzip bundle_crd.zip
    kubectl apply -f release/crds

    # Check the CRDs
    [root@test opt]# kubectl get crd | grep vm
    vmagents.operator.victoriametrics.com                2022-01-05T07:26:01Z
    vmalertmanagerconfigs.operator.victoriametrics.com   2022-01-05T07:26:01Z
    vmalertmanagers.operator.victoriametrics.com         2022-01-05T07:26:01Z
    vmalerts.operator.victoriametrics.com                2022-01-05T07:26:01Z
    vmauths.operator.victoriametrics.com                 2022-01-05T07:26:01Z
    vmclusters.operator.victoriametrics.com              2022-01-05T07:26:01Z
    vmnodescrapes.operator.victoriametrics.com           2022-01-05T07:26:01Z
    vmpodscrapes.operator.victoriametrics.com            2022-01-05T07:26:01Z
    vmprobes.operator.victoriametrics.com                2022-01-05T07:26:01Z
    vmrules.operator.victoriametrics.com                 2022-01-05T07:26:01Z
    vmservicescrapes.operator.victoriametrics.com        2022-01-05T07:26:01Z
    vmsingles.operator.victoriametrics.com               2022-01-05T07:26:01Z
    vmstaticscrapes.operator.victoriametrics.com         2022-01-05T07:26:01Z
    vmusers.operator.victoriametrics.com                 2022-01-05T07:26:01Z

2. Install the operator

    # Install the operator. Remember to change the operator image address beforehand
    kubectl apply -f release/operator/

    # Then check that the operator is healthy
    [root@test opt]# kubectl get po -n monitoring-system
    vm-operator-76dd8f7b84-gsbfs              1/1     Running   0          25h
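If the pod does not reach Running, the operator logs are usually the quickest place to look (assuming the Deployment is named vm-operator, as the pod name suggests):

    kubectl logs -n monitoring-system deploy/vm-operator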

3. Install the VMCluster

Once the operator is running, build the CRs that match your needs. Here I install a VMCluster; let's look at its manifest first.

    # cat vmcluster-install.yaml
    apiVersion: operator.victoriametrics.com/v1beta1
    kind: VMCluster
    metadata:
      name: vmcluster
      namespace: monitoring-system
    spec:
      replicationFactor: 1
      retentionPeriod: "4"
      vminsert:
        image:
          pullPolicy: IfNotPresent
          repository: images.huazai.com/release/vminsert
          tag: v1.63.0
        podMetadata:
          labels:
            victoriaMetrics: vminsert
        replicaCount: 1
        resources:
          limits:
            cpu: "1"
            memory: 1000Mi
          requests:
            cpu: 500m
            memory: 500Mi
      vmselect:
        cacheMountPath: /select-cache
        image:
          pullPolicy: IfNotPresent
          repository: images.huazai.com/release/vmselect
          tag: v1.63.0
        podMetadata:
          labels:
            victoriaMetrics: vmselect
        replicaCount: 1
        resources:
          limits:
            cpu: "1"
            memory: 1000Mi
          requests:
            cpu: 500m
            memory: 500Mi
        storage:
          volumeClaimTemplate:
            spec:
              accessModes:
              - ReadWriteOnce
              resources:
                requests:
                  storage: 2G
              storageClassName: nfs-csi
              volumeMode: Filesystem
      vmstorage:
        image:
          pullPolicy: IfNotPresent
          repository: images.huazai.com/release/vmstorage
          tag: v1.63.0
        podMetadata:
          labels:
            victoriaMetrics: vmstorage
        replicaCount: 1
        resources:
          limits:
            cpu: "1"
            memory: 1500Mi
          requests:
            cpu: 500m
            memory: 750Mi
        storage:
          volumeClaimTemplate:
            spec:
              accessModes:
              - ReadWriteOnce
              resources:
                requests:
                  storage: 20G
              storageClassName: nfs-csi
              volumeMode: Filesystem
        storageDataPath: /vm-data

    # install vmcluster
    kubectl apply -f vmcluster-install.yaml

    # Check the VMCluster install result
    [root@test opt]# kubectl get po -n monitoring-system
    NAME                                      READY   STATUS    RESTARTS   AGE
    vm-operator-76dd8f7b84-gsbfs              1/1     Running   0          26h
    vminsert-vmcluster-main-69766c8f4-r795w   1/1     Running   0          25h
    vmselect-vmcluster-main-0                 1/1     Running   0          25h
    vmstorage-vmcluster-main-0                1/1     Running   0          25h
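The CR itself also reports an aggregate status, which is handy when a pod is stuck (the printed columns vary by operator version):

    kubectl get vmcluster -n monitoring-system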

4. Create the vminsert and vmselect Services

    # Look at the Services created by the operator
    [root@test opt]# kubectl get svc -n monitoring-system
    NAME                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
    vminsert-vmcluster-main         ClusterIP   10.0.182.73    <none>        8480/TCP                     25h
    vmselect-vmcluster-main         ClusterIP   None           <none>        8481/TCP                     25h
    vmstorage-vmcluster-main        ClusterIP   None           <none>        8482/TCP,8400/TCP,8401/TCP   25h

    # So that other k8s clusters can also store their data in this VM, and to make querying easier later,
    # create two more Services of type NodePort: vminsert-lbsvc and vmselect-lbsvc. Then configure the
    # cloud LB to listen on ports 8480 and 8481, with the backend servers being the node IPs of the
    # cluster VM runs in, and the backend ports being the NodePorts exposed by vminsert-lbsvc and
    # vmselect-lbsvc.
    # Workloads in the same cluster as VM (e.g. opentelemetry) can still write via:
    # vminsert-vmcluster-main.monitoring-system.svc.cluster.local:8480
    # Workloads in other clusters (e.g. opentelemetry) write via lb:8480

    # cat vminsert-lb-svc.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: monitoring
        app.kubernetes.io/instance: vmcluster-main
        app.kubernetes.io/name: vminsert
      name: vminsert-vmcluster-main-lbsvc
      namespace: monitoring-system
    spec:
      externalTrafficPolicy: Cluster
      ports:
      - name: http
        nodePort: 30135
        port: 8480
        protocol: TCP
        targetPort: 8480
      selector:
        app.kubernetes.io/component: monitoring
        app.kubernetes.io/instance: vmcluster-main
        app.kubernetes.io/name: vminsert
      sessionAffinity: None
      type: NodePort

    # cat vmselect-lb-svc.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: monitoring
        app.kubernetes.io/instance: vmcluster-main
        app.kubernetes.io/name: vmselect
      name: vmselect-vmcluster-main-lbsvc
      namespace: monitoring-system
    spec:
      externalTrafficPolicy: Cluster
      ports:
      - name: http
        nodePort: 31140
        port: 8481
        protocol: TCP
        targetPort: 8481
      selector:
        app.kubernetes.io/component: monitoring
        app.kubernetes.io/instance: vmcluster-main
        app.kubernetes.io/name: vmselect
      sessionAffinity: None
      type: NodePort

    # Create the Services
    kubectl apply -f vmselect-lb-svc.yaml
    kubectl apply -f vminsert-lb-svc.yaml

    # !! Configure the cloud LB yourself (provider-specific)

    # Finally, check the VM pods and Services
    [root@test opt]# kubectl get po,svc -n monitoring-system
    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/vm-operator-76dd8f7b84-gsbfs              1/1     Running   0          30h
    pod/vminsert-vmcluster-main-69766c8f4-r795w   1/1     Running   0          29h
    pod/vmselect-vmcluster-main-0                 1/1     Running   0          29h
    pod/vmstorage-vmcluster-main-0                1/1     Running   0          29h

    NAME                                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
    service/vminsert-vmcluster-main         ClusterIP   10.0.182.73    <none>        8480/TCP                     29h
    service/vminsert-vmcluster-main-lbsvc   NodePort    10.0.255.212   <none>        8480:30135/TCP               7h54m
    service/vmselect-vmcluster-main         ClusterIP   None           <none>        8481/TCP                     29h
    service/vmselect-vmcluster-main-lbsvc   NodePort    10.0.45.239    <none>        8481:31140/TCP               7h54m
    service/vmstorage-vmcluster-main        ClusterIP   None           <none>        8482/TCP,8400/TCP,8401/TCP   29h
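Once the cloud LB is configured, a quick end-to-end smoke test (lb_ip is a placeholder): push one sample through vminsert in Prometheus text format, then read it back through vmselect. Freshly written samples can take a few seconds to become visible to queries.

    curl -s -d 'foo{bar="baz"} 123' http://lb_ip:8480/insert/0/prometheus/api/v1/import/prometheus
    curl -s 'http://lb_ip:8481/select/0/prometheus/api/v1/query?query=foo'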

Installing the Prometheus node exporter

As before, we install node-exporter to expose Kubernetes node metrics. OpenTelemetry (installed next) scrapes them and writes them through vminsert into vmstorage; queries then go through vmselect.

    # kubectl apply -f prometheus-node-exporter-install.yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      labels:
        app: prometheus-node-exporter
        release: prometheus-node-exporter
      name: prometheus-node-exporter
      namespace: kube-system
    spec:
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: prometheus-node-exporter
          release: prometheus-node-exporter
      template:
        metadata:
          labels:
            app: prometheus-node-exporter
            release: prometheus-node-exporter
        spec:
          containers:
          - args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
            - --path.rootfs=/host/root
            - --web.listen-address=$(HOST_IP):9100
            env:
            - name: HOST_IP
              value: 0.0.0.0
            image: images.huazai.com/release/node-exporter:v1.1.2
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /
                port: 9100
                scheme: HTTP
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            name: node-exporter
            ports:
            - containerPort: 9100
              hostPort: 9100
              name: metrics
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /
                port: 9100
                scheme: HTTP
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            resources:
              limits:
                cpu: 200m
                memory: 50Mi
              requests:
                cpu: 100m
                memory: 30Mi
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /host/proc
              name: proc
              readOnly: true
            - mountPath: /host/sys
              name: sys
              readOnly: true
            - mountPath: /host/root
              mountPropagation: HostToContainer
              name: root
              readOnly: true
          dnsPolicy: ClusterFirst
          hostNetwork: true
          hostPID: true
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext:
            fsGroup: 65534
            runAsGroup: 65534
            runAsNonRoot: true
            runAsUser: 65534
          serviceAccount: prometheus-node-exporter
          serviceAccountName: prometheus-node-exporter
          terminationGracePeriodSeconds: 30
          tolerations:
          - effect: NoSchedule
            operator: Exists
          volumes:
          - hostPath:
              path: /proc
              type: ""
            name: proc
          - hostPath:
              path: /sys
              type: ""
            name: sys
          - hostPath:
              path: /
              type: ""
            name: root
      updateStrategy:
        rollingUpdate:
          maxUnavailable: 1
        type: RollingUpdate

    # Check node-exporter
    [root@test ~]# kubectl get po -n kube-system | grep prometheus
    prometheus-node-exporter-89wjk                 1/1     Running   0          31h
    prometheus-node-exporter-hj4gh                 1/1     Running   0          31h
    prometheus-node-exporter-hxm8t                 1/1     Running   0          31h
    prometheus-node-exporter-nhqp6                 1/1     Running   0          31h
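Because the DaemonSet runs with hostNetwork, each node now serves metrics directly on port 9100; a quick spot check from any node (substitute a real node IP):

    curl -s http://<node_ip>:9100/metrics | head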

Installing OpenTelemetry

With the Prometheus node exporter in place, install OpenTelemetry (a proper introduction will have to wait for another post).

    # The OpenTelemetry config file. It defines how data is received, processed, and exported:
    # 1. receivers: where data comes from
    # 2. processors: how the received data is processed
    # 3. exporters: where the processed data goes; here it is written through vminsert into vmstorage
    # kubectl apply -f opentelemetry-install-cm.yaml
    apiVersion: v1
    data:
      relay: |
        exporters:
          prometheusremotewrite:
            # I use lb_ip:8480 here, i.e. the vminsert address
            endpoint: http://lb_ip:8480/insert/0/prometheus
            # Add a different label per cluster, e.g. cluster: uat/prd
            external_labels:
              cluster: uat
        extensions:
          health_check: {}
        processors:
          batch: {}
          memory_limiter:
            ballast_size_mib: 819
            check_interval: 5s
            limit_mib: 1638
            spike_limit_mib: 512
        receivers:
          prometheus:
            config:
              scrape_configs:
              - job_name: opentelemetry-collector
                scrape_interval: 10s
                static_configs:
                - targets:
                  - localhost:8888
              ...omitted...
              - job_name: kube-state-metrics
                kubernetes_sd_configs:
                - namespaces:
                    names:
                    - kube-system
                  role: service
                metric_relabel_configs:
                - regex: ReplicaSet;([\w|\-]+)\-[0-9|a-z]+
                  replacement: $$1
                  source_labels:
                  - created_by_kind
                  - created_by_name
                  target_label: created_by_name
                - regex: ReplicaSet
                  replacement: Deployment
                  source_labels:
                  - created_by_kind
                  target_label: created_by_kind
                relabel_configs:
                - action: keep
                  regex: kube-state-metrics
                  source_labels:
                  - __meta_kubernetes_service_name
              - job_name: node-exporter
                kubernetes_sd_configs:
                - namespaces:
                    names:
                    - kube-system
                  role: endpoints
                relabel_configs:
                - action: keep
                  regex: node-exporter
                  source_labels:
                  - __meta_kubernetes_service_name
                - source_labels:
                  - __meta_kubernetes_pod_node_name
                  target_label: node
                - source_labels:
                  - __meta_kubernetes_pod_host_ip
                  target_label: host_ip
        ...omitted...
        service:
        # The receivers, processors, exporters, and extensions defined above must be wired up here,
        # otherwise they take no effect
          extensions:
          - health_check
          pipelines:
            metrics:
              exporters:
              - prometheusremotewrite
              processors:
              - memory_limiter
              - batch
              receivers:
              - prometheus
    kind: ConfigMap
    metadata:
      annotations:
        meta.helm.sh/release-name: opentelemetry-collector-hua
        meta.helm.sh/release-namespace: kube-system
      labels:
        app.kubernetes.io/instance: opentelemetry-collector-hua
        app.kubernetes.io/name: opentelemetry-collector-hua
      name: opentelemetry-collector-hua
      namespace: kube-system
    # Install opentelemetry
    # kubectl apply -f opentelemetry-install.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app.kubernetes.io/instance: opentelemetry-collector-hua
        app.kubernetes.io/name: opentelemetry-collector-hua
      name: opentelemetry-collector-hua
      namespace: kube-system
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app.kubernetes.io/instance: opentelemetry-collector-hua
          app.kubernetes.io/name: opentelemetry-collector-hua
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          labels:
            app.kubernetes.io/instance: opentelemetry-collector-hua
            app.kubernetes.io/name: opentelemetry-collector-hua
        spec:
          containers:
          - command:
            - /otelcol
            - --config=/conf/relay.yaml
            - --metrics-addr=0.0.0.0:8888
            - --mem-ballast-size-mib=819
            env:
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            image: images.huazai.com/release/opentelemetry-collector:0.27.0
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /
                port: 13133
                scheme: HTTP
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            name: opentelemetry-collector-hua
            ports:
            - containerPort: 4317
              name: otlp
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /
                port: 13133
                scheme: HTTP
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            resources:
              limits:
                cpu: "1"
                memory: 2Gi
              requests:
                cpu: 500m
                memory: 1Gi
            volumeMounts:
            - mountPath: /conf
              # the ConfigMap created above for opentelemetry
              name: opentelemetry-collector-configmap-hua
            - mountPath: /etc/otel-collector/secrets/etcd-cert/
              name: etcd-tls
              readOnly: true
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          # create the ServiceAccount yourself
          serviceAccount: opentelemetry-collector-hua
          serviceAccountName: opentelemetry-collector-hua
          terminationGracePeriodSeconds: 30
          volumes:
          - configMap:
              defaultMode: 420
              items:
              - key: relay
                path: relay.yaml
              # the ConfigMap created above for opentelemetry
              name: opentelemetry-collector-hua
            name: opentelemetry-collector-configmap-hua
          - name: etcd-tls
            secret:
              defaultMode: 420
              secretName: etcd-tls

    # Check that opentelemetry is running. If opentelemetry is in the same k8s cluster as VM,
    # use the in-cluster Service address instead of the LB (on this cloud, a backend server of a
    # layer-4 listener cannot act as both client and server of the same listener)
    [root@kube-control-1 ~]# kubectl get po -n kube-system | grep opentelemetry-collector-hua
    opentelemetry-collector-hua-647c6c64c7-j6p4b   1/1     Running   0          8h

Verification

Once every component is installed, open http://lb:8481/select/0/vmui in a browser and set the server URL to http://lb:8481/select/0/prometheus. Enter a metric name and the data appears; you can also enable auto-refresh in the top-left corner.
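The same data is available over the HTTP API, which is convenient for scripting; for example, querying a node-exporter metric through the LB:

    curl -s 'http://lb:8481/select/0/prometheus/api/v1/query?query=node_load1'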

Summary

The whole installation is fairly straightforward, and once it is done, a single VM can store monitoring data from multiple Kubernetes clusters. VM supports MetricsQL, which is based on PromQL, and works as a Grafana data source. Compare that with the old routine of manually installing Prometheus in every cluster, configuring storage for each one, and opening a separate Prometheus UI per cluster whenever you need to query data. If VM looks good to you too, give it a try!

References

  • https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster
  • https://docs.victoriametrics.com/
  • https://opentelemetry.io/docs/
  • https://prometheus.io/docs/prometheus/latest/configuration/configuration/

 
