
Monitoring MySQL Outside a k8s Cluster with a MySQL Exporter Deployed In-Cluster via Prometheus Operator

Preface

Before following the steps below, make sure your servers already have k8s, Prometheus, and Prometheus Operator deployed. How to set up those components can be found in the relevant documentation and is not covered here.

For the Prometheus monitoring part, the rough system architecture is shown in the figure below; interested readers can study it on their own, so it is not described in detail here.

(Figure: Prometheus monitoring architecture.)

1. Problem Statement

Some of our business systems store data in MySQL, and that database is deployed outside the k8s cluster, while Prometheus Operator is deployed inside it; this raises the question of how to monitor a MySQL instance external to the cluster. MySQL can be monitored with Prometheus's mysql-exporter, which exposes metrics on the instance's behalf. When MySQL sits outside the cluster, set the exporter's data-source address to the IP of the host running MySQL when creating the Deployment; this effectively exposes the external MySQL service to the k8s cluster.
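Concretely, mysqld-exporter reads its connection string from the DATA_SOURCE_NAME environment variable in Go MySQL DSN form. A minimal sketch of the env fragment is shown here; the same values reappear in the full Deployment in section 2.2, and the address 10.26.124.16:3306 is this document's external instance:

env:
  - name: DATA_SOURCE_NAME
    # format: <user>:<password>@(<host>:<port>)/<database>
    value: exporter:admin@321@(10.26.124.16:3306)/mysql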

2. Deployment Steps

2.1. Create and authorize a database user for monitoring

This step creates the user that mysql-exporter uses to connect to MySQL and grants it the corresponding privileges. The SQL is as follows:

# Check the password validation policy so the chosen password satisfies it
SHOW VARIABLES LIKE 'validate_password%';

# Create the user and grant privileges; the user 'exporter' is used here,
# with a password that satisfies the policy queried above
CREATE USER 'exporter'@'%' IDENTIFIED WITH mysql_native_password BY 'admin@321';
GRANT ALL PRIVILEGES ON *.* TO 'exporter'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
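Note that GRANT ALL PRIVILEGES ... WITH GRANT OPTION works but gives the monitoring account far more than it needs. The mysqld_exporter documentation only asks for PROCESS, REPLICATION CLIENT, and SELECT, so a least-privilege alternative (a suggested tightening, not part of the original setup) would be:

GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';
FLUSH PRIVILEGES;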

2.2. Create the mysql-exporter Deployment in the k8s cluster

Create the mysql-exporter container and pass in the MySQL connection details, using the account created in the previous step, through the DATA_SOURCE_NAME environment variable. Note that the exporter's port 9104 must be exposed.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysqld-exporter
  namespace: prometheus-exporter
  labels:
    app: mysqld-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysqld-exporter
  template:
    metadata:
      labels:
        app: mysqld-exporter
    spec:
      containers:
        - name: mysqld-exporter
          image: prom/mysqld-exporter
          imagePullPolicy: IfNotPresent
          env:
            # Database address and credentials for mysql-exporter;
            # the monitored instance here is 10.26.124.16:3306
            - name: DATA_SOURCE_NAME
              value: exporter:admin@321@(10.26.124.16:3306)/mysql
          ports:
            - containerPort: 9104

(Screenshot: the Deployment running successfully.)
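As an optional hardening step, the DSN (which contains the password) can be kept out of the Deployment manifest by storing it in a Secret and referencing it from the container. This is a sketch; the Secret name mysqld-exporter-dsn is illustrative and not part of the original setup:

---
apiVersion: v1
kind: Secret
metadata:
  name: mysqld-exporter-dsn   # illustrative name
  namespace: prometheus-exporter
type: Opaque
stringData:
  DATA_SOURCE_NAME: exporter:admin@321@(10.26.124.16:3306)/mysql

The container's env entry then becomes:

env:
  - name: DATA_SOURCE_NAME
    valueFrom:
      secretKeyRef:
        name: mysqld-exporter-dsn
        key: DATA_SOURCE_NAME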

2.3. Create the mysql-exporter Service in the k8s cluster

The ClusterIP Service below exposes the exporter inside the cluster; the ServiceMonitor created in the next step discovers its scrape target through this Service's named metrics port.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysqld-exporter
  name: mysqld-exporter
  namespace: prometheus-exporter
spec:
  type: ClusterIP
  ports:
  - name: metrics
    port: 9104
    protocol: TCP
    targetPort: 9104
  selector:
    app: mysqld-exporter

(Screenshot: the Service created successfully.)

2.4. Create the ServiceMonitor

The ServiceMonitor tells Prometheus Operator to scrape the Service's metrics port once a minute. The target entry under params is attached to every scrape as the __param_target label, and the relabeling rule copies it into the instance label, so metrics carry the external MySQL address (10.26.124.16:3306) instead of the exporter pod's IP.

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: mysqld-exporter
    prometheus: k8s
  name: prometheus-mysqld-exporter
  namespace: prometheus-exporter
spec:
  endpoints:
    - interval: 1m
      port: metrics
      params:
        target:
          - '10.26.124.16:3306'
      relabelings:
        - sourceLabels: [__param_target]
          targetLabel: instance
  namespaceSelector:
    matchNames:
      - prometheus-exporter
  selector:
    matchLabels:
      app: mysqld-exporter

(Screenshot: the ServiceMonitor created successfully.)

2.5. Add PrometheusRule alerting rules

The rules below fire when the MySQL instance is down and when a replica's I/O or SQL thread stops. Note the labels prometheus: k8s and role: alert-rules and the namespace kubesphere-monitoring-system: they must match the ruleSelector and rule namespace selection of the Prometheus custom resource in your cluster, or the rules will not be loaded.

---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: k8s
    role: alert-rules
  name: mysql-exporter-rules
  namespace: kubesphere-monitoring-system
spec:
  groups:
    - name: mysql-exporter
      rules:
        - alert: MysqlDown
          annotations:
            description: |-
              MySQL instance is down on {{ $labels.instance }}
                VALUE = {{ $value }}
                LABELS = {{ $labels }}
            summary: 'MySQL down (instance {{ $labels.instance }})'
          expr: mysql_up == 0
          for: 0m
          labels:
            severity: critical
        - alert: MysqlSlaveIoThreadNotRunning
          annotations:
            description: |-
              MySQL Slave IO thread not running on {{ $labels.instance }}
                VALUE = {{ $value }}
                LABELS = {{ $labels }}
            summary: >-
              MySQL Slave IO thread not running (instance {{ $labels.instance
              }})
          expr: >-
            mysql_slave_status_master_server_id > 0 and ON (instance)
            mysql_slave_status_slave_io_running == 0
          for: 0m
          labels:
            severity: critical
        - alert: MysqlSlaveSqlThreadNotRunning
          annotations:
            description: |-
              MySQL Slave SQL thread not running on {{ $labels.instance }}
                VALUE = {{ $value }}
                LABELS = {{ $labels }}
            summary: >-
              MySQL Slave SQL thread not running (instance {{ $labels.instance
              }})
          expr: >-
            mysql_slave_status_master_server_id > 0 and ON (instance)
            mysql_slave_status_slave_sql_running == 0
          for: 0m
          labels:
            severity: critical

(Screenshot: the PrometheusRule created successfully.)
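To verify the whole chain end to end, run the query that the MysqlDown alert is built on from the Prometheus UI (assuming the default web console is reachable); it should return 1 for the external instance:

mysql_up{instance="10.26.124.16:3306"}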