
Introduction to PrometheusRule

PrometheusRule is a custom resource (CRD) installed by default along with prometheus-operator. It is used to manage the alerting rules loaded into Prometheus; from then on, rules are added, deleted, changed and listed through this resource object.

For example, the rules loaded by default can be listed with the following command.

[root@k8s-master prometheus-operator]# kubectl get PrometheusRule -n monitoring
NAME                                                       AGE
prometheus-operator-alertmanager.rules                     28m
prometheus-operator-etcd                                   28m
prometheus-operator-general.rules                          28m
prometheus-operator-k8s.rules                              28m
prometheus-operator-kube-apiserver.rules                   28m
prometheus-operator-kube-prometheus-node-recording.rules   28m
prometheus-operator-kube-scheduler.rules                   28m
prometheus-operator-kubernetes-absent                      28m
prometheus-operator-kubernetes-apps                        28m
prometheus-operator-kubernetes-resources                   28m
prometheus-operator-kubernetes-storage                     28m
prometheus-operator-kubernetes-system                      28m
prometheus-operator-kubernetes-system-apiserver            28m
prometheus-operator-kubernetes-system-controller-manager   28m
prometheus-operator-kubernetes-system-kubelet              28m
prometheus-operator-kubernetes-system-scheduler            28m
prometheus-operator-node-exporter                          28m
prometheus-operator-node-exporter.rules                    28m
prometheus-operator-node-network                           28m
prometheus-operator-node-time                              28m
prometheus-operator-node.rules                             28m
prometheus-operator-prometheus                             28m
prometheus-operator-prometheus-operator                    28m

They can also be viewed in the Prometheus dashboard under Status -> Rules.

How does Prometheus recognize this resource object?

In short, it works like a label selector: the PrometheusRule objects you define need to carry certain labels. To see which labels exactly, look at the PrometheusRule objects generated by default and add the same labels to the ones you create.


Reference: https://www.qikqiak.com/post/prometheus-operator-custom-alert/
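To see exactly which labels are selected on, you can also inspect the ruleSelector of the Prometheus custom resource itself; a quick check (the resource name prometheus-operator-prometheus comes from this chart release and may differ in yours):

kubectl -n monitoring get prometheus prometheus-operator-prometheus -o yaml

For this chart the relevant part of the spec typically looks like the snippet below, which is why the same labels are put on the custom rule created later in this post:

spec:
  ruleSelector:
    matchLabels:
      app: prometheus-operator
      release: prometheus-operator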

All rules have corresponding files; by default they are generated inside the prometheus container under the directory

/etc/prometheus/rules/prometheus-prometheus-operator-prometheus-rulefiles-0/

Adding a new PrometheusRule resource automatically generates a new YAML file in this directory as well.

This means we no longer have to manage configuration files by hand; we only need to manage PrometheusRule objects. prometheus-operator makes Prometheus monitoring much more Kubernetes-native.
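As a quick sanity check, the rendered rule files can be listed from inside the Prometheus pod; a sketch, assuming the pod name prometheus-prometheus-operator-prometheus-0 generated by this chart release:

# List the rule files rendered from PrometheusRule objects
kubectl -n monitoring exec prometheus-prometheus-operator-prometheus-0 -c prometheus -- \
  ls /etc/prometheus/rules/prometheus-prometheus-operator-prometheus-rulefiles-0/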

Creating a new PrometheusRule resource

myrule.yaml

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: myrule
  namespace: monitoring
  labels:
    app: prometheus-operator
    chart: prometheus-operator-8.2.4
    heritage: Tiller
    release: prometheus-operator
spec:
  groups:
  - name: my-node-time
    rules:
    - alert: myClockSkewDetected
      annotations:
        message: myClock skew detected on node-exporter {{ $labels.namespace }}/{{ $labels.pod }}. Ensure NTP is configured correctly on this host.
      expr: abs(node_timex_offset_seconds{job="node-exporter"}) > 0.03
      for: 2m
      labels:
        severity: warning

Note: the rule in the configuration above simply copies the content of one of the default rules.

Create it either with the command below or by pasting the YAML above in the dashboard.

kubectl create -f myrule.yaml

Once created, it can be listed with

kubectl get PrometheusRule -n monitoring

It also shows up in the Prometheus dashboard. Finally, check the rules directory inside the Prometheus container to confirm that a new configuration file for myrule has been generated.

Introduction

Let's start with the Prometheus-Operator architecture diagram:
(Prometheus-Operator architecture diagram)

The figure above is the official Prometheus-Operator architecture diagram. The Operator is the core component: it is a controller that creates the four CRD resource objects Prometheus, ServiceMonitor, Alertmanager and PrometheusRule, and then keeps watching and reconciling the state of these four resources.

The Prometheus resource object it creates acts as the Prometheus Server, while ServiceMonitor is the abstraction over exporters. As covered earlier, an exporter is a tool whose sole job is to expose a metrics endpoint, and Prometheus pulls data from the metrics endpoints described by ServiceMonitors. Likewise, the Alertmanager resource object is the abstraction of an AlertManager, and PrometheusRule holds the alerting rule files consumed by the Prometheus instances.

With that, deciding what to monitor in the cluster turns into simply operating on Kubernetes resource objects, which is a lot more convenient. In the figure above, both Service and ServiceMonitor are Kubernetes resources: a ServiceMonitor matches a class of Services via a labelSelector, and Prometheus in turn matches multiple ServiceMonitors via a labelSelector.
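To illustrate that label-matching chain, here is a minimal ServiceMonitor sketch. example-app, its namespace and port name are hypothetical; the release: prometheus-operator label is what the Prometheus created by this chart selects ServiceMonitors by:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: monitoring
  labels:
    release: prometheus-operator    # picked up by the Prometheus CR's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: example-app              # matches the labels on the target Service
  namespaceSelector:
    matchNames: ["default"]         # namespace(s) the target Service lives in
  endpoints:
  - port: http                      # name of the Service port that serves /metrics
    path: /metrics
    interval: 30s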

View the default values of stable/prometheus-operator

helm inspect values stable/prometheus-operator

Export the default configuration for later use

helm inspect values stable/prometheus-operator > prometheus-operator.yaml

Pre-pull the images referenced in the configuration

#!/bin/sh
docker pull hub.deri.org.cn/k8s_monitor/alertmanager:v0.19.0
docker tag hub.deri.org.cn/k8s_monitor/alertmanager:v0.19.0 quay.io/prometheus/alertmanager:v0.19.0
docker rmi hub.deri.org.cn/k8s_monitor/alertmanager:v0.19.0
 
docker pull hub.deri.org.cn/k8s_monitor/ghostunnel:v1.4.1 
docker tag hub.deri.org.cn/k8s_monitor/ghostunnel:v1.4.1 squareup/ghostunnel:v1.4.1
docker rmi hub.deri.org.cn/k8s_monitor/ghostunnel:v1.4.1 
 
docker pull hub.deri.org.cn/k8s_monitor/kube-webhook-certgen:v1.0.0 
docker tag hub.deri.org.cn/k8s_monitor/kube-webhook-certgen:v1.0.0  jettech/kube-webhook-certgen:v1.0.0
docker rmi hub.deri.org.cn/k8s_monitor/kube-webhook-certgen:v1.0.0 
 
docker pull hub.deri.org.cn/k8s_monitor/prometheus-operator:v0.34.0
docker tag hub.deri.org.cn/k8s_monitor/prometheus-operator:v0.34.0 quay.io/coreos/prometheus-operator:v0.34.0
docker rmi hub.deri.org.cn/k8s_monitor/prometheus-operator:v0.34.0
 
docker pull hub.deri.org.cn/k8s_monitor/configmap-reload:v0.0.1
docker tag hub.deri.org.cn/k8s_monitor/configmap-reload:v0.0.1 quay.io/coreos/configmap-reload:v0.0.1
docker rmi hub.deri.org.cn/k8s_monitor/configmap-reload:v0.0.1
 
docker pull hub.deri.org.cn/k8s_monitor/prometheus-config-reloader:v0.34.0
docker tag hub.deri.org.cn/k8s_monitor/prometheus-config-reloader:v0.34.0 quay.io/coreos/prometheus-config-reloader:v0.34.0
docker rmi hub.deri.org.cn/k8s_monitor/prometheus-config-reloader:v0.34.0
 
docker pull hub.deri.org.cn/k8s_monitor/hyperkube:v1.12.1
docker tag hub.deri.org.cn/k8s_monitor/hyperkube:v1.12.1 k8s.gcr.io/hyperkube:v1.12.1
docker rmi  hub.deri.org.cn/k8s_monitor/hyperkube:v1.12.1
 
docker pull hub.deri.org.cn/k8s_monitor/prometheus:v2.13.1
docker tag hub.deri.org.cn/k8s_monitor/prometheus:v2.13.1 quay.io/prometheus/prometheus:v2.13.1
docker rmi hub.deri.org.cn/k8s_monitor/prometheus:v2.13.1
 
docker pull hub.deri.org.cn/k8s_monitor/kube-state-metrics:v1.8.0
docker tag hub.deri.org.cn/k8s_monitor/kube-state-metrics:v1.8.0 quay.io/coreos/kube-state-metrics:v1.8.0
docker rmi hub.deri.org.cn/k8s_monitor/kube-state-metrics:v1.8.0
 
docker pull hub.deri.org.cn/k8s_monitor/node-exporter:v0.18.1
docker tag hub.deri.org.cn/k8s_monitor/node-exporter:v0.18.1 quay.io/prometheus/node-exporter:v0.18.1
docker rmi hub.deri.org.cn/k8s_monitor/node-exporter:v0.18.1
 
docker pull hub.deri.org.cn/k8s_monitor/k8s-sidecar:0.1.20
docker tag hub.deri.org.cn/k8s_monitor/k8s-sidecar:0.1.20 kiwigrid/k8s-sidecar:0.1.20
docker rmi hub.deri.org.cn/k8s_monitor/k8s-sidecar:0.1.20
 
docker pull hub.deri.org.cn/k8s_monitor/grafana:6.4.2
docker tag hub.deri.org.cn/k8s_monitor/grafana:6.4.2 grafana/grafana:6.4.2
docker rmi hub.deri.org.cn/k8s_monitor/grafana:6.4.2

Install with the default configuration, specifying the release name, namespace and so on

helm install stable/prometheus-operator  --name prometheus-operator --namespace monitoring -f prometheus-operator.yaml --set grafana.adminPassword=admin

Check that the pods are running

kubectl get pod -n monitoring

Change the Grafana Service type to NodePort so it can be accessed for testing

kubectl edit svc prometheus-operator-grafana -n monitoring

Use the command below to find the host port it is bound to; the login for the access test is admin/admin, as set during installation.

kubectl get svc -n monitoring
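For example, the NodePort can be read directly with jsonpath (assuming the Grafana Service exposes a single port); Grafana is then reachable at http://<node-ip>:<nodePort>:

kubectl -n monitoring get svc prometheus-operator-grafana -o jsonpath='{.spec.ports[0].nodePort}'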

Uninstall

helm delete prometheus-operator
helm delete --purge prometheus-operator

When Prometheus-Operator is installed it creates the Prometheus, ServiceMonitor, PodMonitor, Alertmanager and PrometheusRule CRDs; delete them as well when uninstalling.

kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com

Edit prometheus-operator.yaml to configure ingress

# For example the Alertmanager configuration: change the default enabled: false to true, and put the Alertmanager domain name into hosts: []
alertmanager:
  ingress:
    enabled: true
    hosts: ["alert.deri.com"]
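The Prometheus UI can be exposed the same way; a sketch, with prometheus.deri.com as an example hostname:

prometheus:
  ingress:
    enabled: true
    hosts: ["prometheus.deri.com"]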

Edit prometheus-operator.yaml to configure data persistence

# For example for Alertmanager the storage section is commented out by default, as shown below; uncomment it and fill in your storageClassName
    storage: {}
    # volumeClaimTemplate:
    #   spec:
    #     storageClassName: gluster
    #     accessModes: ["ReadWriteOnce"]
    #     resources:
    #       requests:
    #         storage: 50Gi
    #   selector: {}
 
alertmanager:
  alertmanagerSpec:
    storage: 
      volumeClaimTemplate:
        spec:
          storageClassName: nfs-client
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi
        selector: {}

Other settings can be reviewed and adjusted in the exported default configuration prometheus-operator.yaml.
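After editing prometheus-operator.yaml, the changes can be applied to the running release with helm upgrade (Helm 2 syntax, reusing the same values file and password override as the install above):

helm upgrade prometheus-operator stable/prometheus-operator -f prometheus-operator.yaml --set grafana.adminPassword=admin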

Issues

Issue 1:

[From https://github.com/helm/charts/tree/master/stable/prometheus-operator]

KubeProxy [restart the kubelet and docker services after making the change]

The metrics bind address of kube-proxy defaults to 127.0.0.1:10249, which Prometheus instances cannot access. To collect these metrics, expose them by changing the metricsBindAddress field to 0.0.0.0:10249.

Depending on the cluster, the relevant part of config.conf will be in the ConfigMap kube-system/kube-proxy or kube-system/kube-proxy-config. For example:
kubectl -n kube-system edit cm kube-proxy
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    # ...
    # metricsBindAddress: 127.0.0.1:10249
    metricsBindAddress: 0.0.0.0:10249
    # ...
  kubeconfig.conf: |-
    # ...
kind: ConfigMap
metadata:
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
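Instead of restarting kubelet and docker on every node, the kube-proxy pods can also simply be recreated so they pick up the new ConfigMap; on kubeadm-based clusters they normally carry the k8s-app=kube-proxy label:

kubectl -n kube-system delete pod -l k8s-app=kube-proxy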

Issue 2:

Installing in different environments with exactly the same configuration sometimes produces different results. Possible causes:

Since the installation above is done with helm, first run helm repo list and check whether the repositories are the same;

Run helm search | grep prometheus and check whether CHART VERSION and APP VERSION match;

If they differ, running helm repo update fixes it.


Installing htpasswd

htpasswd is not a built-in command on CentOS 7; install it with yum.

yum install httpd-tools

Options

  • -c: create a new password file
  • -b: supply the username and password on the command line instead of being prompted for the password
  • -D: delete the specified user
  • -n: do not update the password file; print the encrypted entry to the screen instead (see the example after this list)
  • -p: do not encrypt the password; store it in plain text
  • -m: encrypt the password with MD5 (the default)
  • -d: encrypt the password with CRYPT
  • -s: encrypt the password with SHA
  • -B: encrypt the password with bcrypt (very secure)
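For instance, combining -n and -b prints the encrypted entry without touching any file, which is handy for a quick preview:

htpasswd -nb foo 123456
# foo:$apr1$...   (the hash after the colon differs on every run)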

Arguments

  • username: the user to create or update
  • password: the user's new password

Create a user and set a password

Use the following steps to create the basic-auth user foo with password 123456 (entered when htpasswd prompts for it) and submit the user information to Kubernetes.

$ htpasswd -c auth foo
$ kubectl -n demo-echo create secret generic basic-auth --from-file=auth

Note the namespace demo-echo: the secret must be in the same namespace as the target service echo.

Configure ingress for the target service

Note: replace basic-auth with the name of your own secret.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-echo-with-auth-basic
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
  - host: auth-basic.echo.example
    http:
      paths:
      - path: /
        backend:
          serviceName: echo
          servicePort: 80

Access test
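A sketch of the test with curl; replace <ingress-node-ip> (and add the port if the nginx ingress controller is exposed on a NodePort) with whatever address your controller is reachable at:

# Without credentials this should return 401
curl -I -H 'Host: auth-basic.echo.example' http://<ingress-node-ip>/

# With the user created above the request should reach the echo service
curl -u foo:123456 -H 'Host: auth-basic.echo.example' http://<ingress-node-ip>/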



Creating a self-signed certificate

Note: change CN=tls.echo.example in the certificate to your own domain name.

echo "Generate a self-signed CA certificate"
openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=My Cert Authority'
 
echo "Generate a server certificate signed by the CA above"
openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=tls.echo.example'
openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
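Optionally, verify the generated certificate before uploading it:

# Confirm the subject CN and validity period, and that the cert chains to the CA
openssl x509 -in server.crt -noout -subject -dates
openssl verify -CAfile ca.crt server.crt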

Upload the server certificate to Kubernetes

Note: change the namespace demo-echo and the secret name tls-echo-exmaple-secret to your own.

kubectl -n demo-echo create secret generic tls-echo-exmaple-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key
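Equivalently, a secret of type kubernetes.io/tls can be created with the dedicated subcommand; it produces the same tls.crt/tls.key keys the ingress controller expects:

kubectl -n demo-echo create secret tls tls-echo-exmaple-secret --cert=server.crt --key=server.key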

Configure ingress

The host in the ingress must match the certificate's CN; the tls section references the secret created above.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-echo-with-tls
spec:
  rules:
  - host: tls.echo.example
    http:
      paths:
      - path: /
        backend:
          serviceName: echo
          servicePort: 80
  tls:
  - hosts:
    - tls.echo.example
    secretName: tls-echo-exmaple-secret
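A sketch of testing it with curl, trusting the self-signed CA and pinning the domain to the ingress address (replace <ingress-node-ip>, and the port if HTTPS is exposed on a NodePort other than 443):

curl --cacert ca.crt --resolve tls.echo.example:443:<ingress-node-ip> https://tls.echo.example/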

Configuring certificates for multiple domains

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-tls
  namespace: default
spec:
  tls:
  - hosts:
    - foo.bar.com
    # This secret must exist beforehand
    # The cert must also contain the subj-name foo.bar.com
    # https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/PREREQUISITES.md#tls-certificates
    secretName: foobar
  - hosts:
    - bar.baz.com
    # This secret must exist beforehand
    # The cert must also contain the subj-name bar.baz.com
    # https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/PREREQUISITES.md#tls-certificates
    secretName: barbaz
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
  - host: bar.baz.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /


Prerequisite: install nfs-utils on all nodes and start the related services.

Create a new export directory on the NFS server

echo "/home/nfs *(rw,async,no_root_squash)" >> /etc/exports
exportfs -r
showmount -e localhost
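Optionally, confirm the export is mountable from another node that already has nfs-utils installed (192.168.1.210 is the NFS server address used in the install command below):

mount -t nfs 192.168.1.210:/home/nfs /mnt
umount /mnt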

Install nfs-client

helm install stable/nfs-client-provisioner --name test-storageclass --set nfs.server=192.168.1.210 --set nfs.path=/home/nfs

Note

You can first run helm inspect values stable/nfs-client-provisioner to review all configuration options, and pull the required image ahead of time (see the sketch after the default values below).

replicaCount: 1
strategyType: Recreate
 
image:
  repository: quay.io/external_storage/nfs-client-provisioner
  tag: v3.1.0-k8s1.11
  pullPolicy: IfNotPresent
 
nfs:
  server:
  path: /ifs/kubernetes
  mountOptions:
 
# For creating the StorageClass automatically:
storageClass:
  create: true
 
  # Set a provisioner name. If unset, a name will be generated.
  # provisionerName:
 
  # Set StorageClass as the default StorageClass
  # Ignored if storageClass.create is false
  defaultClass: false
 
  # Set a StorageClass name
  # Ignored if storageClass.create is false
  name: nfs-client
 
  # Allow volume to be expanded dynamically
  allowVolumeExpansion: true
 
  # Method used to reclaim an obsoleted volume
  reclaimPolicy: Delete
 
  # When set to false your PVs will not be archived by the provisioner upon deletion of the PVC.
  archiveOnDelete: true
 
## For RBAC support:
rbac:
  # Specifies whether RBAC resources should be created
  create: true
 
# If true, create & use Pod Security Policy resources
# https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
  enabled: false
 
## Set pod priorityClassName
# priorityClassName: ""
 
serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
 
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
 
resources: {}
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi
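The only image the chart needs is the provisioner itself (repository and tag taken from the defaults above); if the node can reach quay.io it can be pulled directly, otherwise mirror it the same way as the images earlier:

docker pull quay.io/external_storage/nfs-client-provisioner:v3.1.0-k8s1.11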

You can also trim the YAML above and install directly from a nfs-client.yaml:

replicaCount: 1
strategyType: Recreate
 
image:
  repository: quay.io/external_storage/nfs-client-provisioner
  tag: v3.1.0-k8s1.11
  pullPolicy: IfNotPresent
 
nfs:
  server: 192.168.1.210
  path: /home/nfs
  mountOptions:
 
storageClass:
  create: true
  defaultClass: false
  name: nfs-client
  allowVolumeExpansion: true
  reclaimPolicy: Delete
  archiveOnDelete: true
 
rbac:
  create: true
 
podSecurityPolicy:
  enabled: false

Run the install command:

helm install stable/nfs-client-provisioner -n test-storageclass -f nfs-client.yaml

Test by creating a PVC

Create test-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testclaim
spec:
  storageClassName: "nfs-client"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi

kubectl apply -f test-pvc.yaml

Check the results

[root@k8s-master home]# kubectl get sc
NAME         PROVISIONER                                              AGE
nfs-client   cluster.local/test-storageclass-nfs-client-provisioner   36m
[root@k8s-master home]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                 STORAGECLASS    REASON   AGE
persistentvolume/pvc-d9bdfa45-6417-4ad9-bbf0-02301f928342   10Mi       RWX            Delete           Bound         default/testclaim     nfs-client               33m
 
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/testclaim   Bound    pvc-d9bdfa45-6417-4ad9-bbf0-02301f928342   10Mi       RWX            nfs-client     34m

Fuzzy query with multiple conditions

select * from t_log where (LOCATE('wu', user_name) > 0 or LOCATE('wu', params) > 0) and (method='POST' or method='GET');

Query all rows whose id is in a given list

select * from t_user where tenant_id in (1,2,3);