
The kube-flannel pod on node-d1 kept restarting. Check it with the kubectl describe command:

Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  13m                   default-scheduler  Successfully assigned kube-system/kube-flannel-ds-amd64-vfhnj to node-d1
  Normal   Pulled     13m                   kubelet, node-d1   Container image "quay.io/coreos/flannel:v0.11.0-amd64" already present on machine
  Normal   Created    13m                   kubelet, node-d1   Created container install-cni
  Normal   Started    13m                   kubelet, node-d1   Started container install-cni
  Normal   Pulled     10m (x5 over 13m)     kubelet, node-d1   Container image "quay.io/coreos/flannel:v0.11.0-amd64" already present on machine
  Normal   Created    10m (x5 over 13m)     kubelet, node-d1   Created container kube-flannel
  Normal   Started    10m (x5 over 13m)     kubelet, node-d1   Started container kube-flannel
  Warning  BackOff    3m39s (x30 over 12m)  kubelet, node-d1   Back-off restarting failed container

Check with the kubectl logs command:

[root@master ~]# kubectl logs  kube-flannel-ds-amd64-9w5nq -n kube-system
I1216 05:59:40.055608       1 main.go:527] Using interface with name eth0 and address 192.168.1.82
I1216 05:59:40.055666       1 main.go:544] Defaulting external address to interface address (192.168.1.82)
E1216 06:00:10.056546       1 main.go:241] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-amd64-9w5nq': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-amd64-9w5nq: dial tcp 10.96.0.1:443: i/o timeout

Troubleshooting

A network problem?
Testing with curl showed the network itself was fine.

Testing kube-proxy

Restart the kube-proxy pod on that node by deleting it (the DaemonSet recreates it), then check the logs of the new pod:

kubectl delete pod kube-proxy-vtd27 -n kube-system
[root@master ~]# kubectl logs kube-proxy-pljct -n kube-system
W1216 06:30:51.741835       1 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1216 06:30:51.742536       1 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1216 06:30:51.748495       1 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1216 06:30:51.749223       1 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E1216 06:30:51.750805       1 server_others.go:259] can't determine whether to use ipvs proxy, error: IPVS proxier will not be used because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh]
I1216 06:30:51.757054       1 server_others.go:143] Using iptables Proxier.
W1216 06:30:51.757122       1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1216 06:30:51.757338       1 server.go:534] Version: v1.15.0

The problem

There is an obvious problem here: some required kernel modules failed to load. Following the installation guide, these kernel modules had already been configured; configuring them again, the problem persisted:

[root@master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
[root@master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  150 
ip_vs                 145497  156 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  6 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          139224  9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  3 ip_vs,nf_nat,nf_conntrack

Resolution

IPVS has been merged into the mainline kernel, so kernel module support is needed; make sure the kernel has the relevant modules loaded. If unsure, run the following script so the kernel loads them, otherwise you will hit the error failed to load kernel modules: [ip_vs_rr ip_vs_sh ip_vs_wrr].

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

After running it, the output should look like the listing below; if lsmod | grep ip_vs does not show modules such as ip_vs_rr, re-check the script and run it again.

[root@node-d1 ~]# lsmod | grep ip_vs
ip_vs_ftp              13079  0 
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
nf_nat                 26583  5 ip_vs_ftp,nf_nat_ipv4,nf_nat_ipv6,xt_nat,nf_nat_masquerade_ipv4
ip_vs_rr               12600  136 
ip_vs                 145497  162 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          139224  9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  3 ip_vs,nf_nat,nf_conntrack

Finally, restart the kube-proxy and flannel pods, and the problem is resolved.

Preventing an <a> tag's href from navigating

When an <a> tag on the page should not navigate anywhere, there are in principle two approaches:

Use the href attribute itself, pointing it at nothing or at an expression that returns nothing. For example:

<a href="javascript:void(0);" >点此无反应javascript:void(0)</a>
<a href="javascript:;" >点此无反应javascript:</a>

Use the tag's onclick event to cancel its default behavior. For example:

<a href="" οnclick="return false;">return false;</a>
<a href="#" οnclick="return false;">return false;</a>

Note: href="#" on its own is not enough; it still jumps to the top of the page.

The user table

user_id user_name
1 zhangsan
2 lisi
3 wangwu
4 zhaoliu

Another table, money, records the lending relationships:

id from to how
1 1 2 100
2 3 4 100

The join query

select m.id, u1.user_name, u2.user_name, m.how
from money as m
left outer join user u1 on u1.user_id = m.from
left outer join user u2 on u2.user_id = m.to

The result

id user_name user_name how
1 zhangsan lisi 100
2 wangwu zhaoliu 100

The problem

There is a small catch here: the two middle columns come back with the same name, user_name, which makes them impossible to tell apart. In MyBatis, if you want to use

@ResultType(EntityClass.class)

to declare the return type, the entity class cannot declare two properties for the same column name, so the mapping produces no usable result. How do we handle this?

Alias the result columns in the join query:

select m.id, u1.user_name as user_name1, u2.user_name as user_name2, m.how
from money as m
left outer join user u1 on u1.user_id = m.from
left outer join user u2 on u2.user_id = m.to
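
With distinct aliases, the rows can be mapped onto a single entity whose properties match the aliases. A minimal sketch (the class and mapper names are illustrative, and it assumes MyBatis's mapUnderscoreToCamelCase setting is enabled so that user_name1 binds to userName1):

@Data
public class MoneyRecord {      // illustrative name
    private Integer id;
    private String userName1;   // lender, from alias user_name1
    private String userName2;   // borrower, from alias user_name2
    private Integer how;        // amount
}

@Mapper
public interface MoneyMapper {  // illustrative name
    @Select("select m.id, u1.user_name as user_name1, u2.user_name as user_name2, m.how " +
            "from money m " +
            "left outer join user u1 on u1.user_id = m.from " +
            "left outer join user u2 on u2.user_id = m.to")
    List<MoneyRecord> selectAll();
}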

What DingTalk group robots are

Group robots are an advanced extension of DingTalk groups. They aggregate information from third-party services into the group chat, automating information sync. Most robots, once added, still need Webhook configuration before they work (see the help links in the setup flow for details).

For example:

  • Aggregate source-code management services such as GitHub and GitLab to sync code updates.
  • Aggregate project coordination services such as Trello and JIRA to sync project information.
  • Group robots also support custom integration over the Webhook protocol, opening up more possibilities: for example, you can funnel ops alerts into a DingTalk group through a custom robot.

Message-rate limits for robots

Sending messages too frequently seriously degrades the group members' experience. For high-volume scenarios (e.g. system monitoring alerts), consolidate the information and send it to the group as a markdown digest.

Each robot can send at most 20 messages per minute; beyond that, it is rate-limited for 10 minutes.

Adding a group robot

See the official docs: https://ding-doc.dingtalk.com/doc#/serverapi3/iydd5h

Download the SDK to simplify development

Download link: https://ding-doc.dingtalk.com/doc#/faquestions/vzbp02

Install the SDK into the local maven repository:

mvn install:install-file -Dfile=D:/taobao-sdk-java-auto-20191203.jar -DgroupId=com.taobao -DartifactId=taobao-sdk-java-auto-20191203 -Dversion=1.0.0 -Dpackaging=jar

Reference it in the project:

<dependency>
    <groupId>com.taobao</groupId>
    <artifactId>taobao-sdk-java-auto-20191203</artifactId>
    <version>1.0.0</version>
</dependency>

Development sample

The snippet below shows the three common message types. Note that it reuses a single request object for brevity; the msgtype set last wins, so in practice build and send a separate request for each message.

DingTalkClient client = new DefaultDingTalkClient("https://oapi.dingtalk.com/robot/send?access_token=566cc69da782ec33e42541b09b08551f09fbe864eb8008112e994b43887");
OapiRobotSendRequest request = new OapiRobotSendRequest();
//text message
request.setMsgtype("text");
OapiRobotSendRequest.Text text = new OapiRobotSendRequest.Text();
text.setContent("测试文本消息");
request.setText(text);
OapiRobotSendRequest.At at = new OapiRobotSendRequest.At();
at.setAtMobiles(Arrays.asList("132xxxxxxxx"));
request.setAt(at);
 
//link message
request.setMsgtype("link");
OapiRobotSendRequest.Link link = new OapiRobotSendRequest.Link();
link.setMessageUrl("https://www.dingtalk.com/");
link.setPicUrl("");
link.setTitle("时代的火车向前开");
link.setText("这个即将发布的新版本,创始人xx称它为“红树林”。\n" +
        "而在此之前,每当面临重大升级,产品经理们都会取一个应景的代号,这一次,为什么是“红树林");
request.setLink(link);
 
//markdown message
request.setMsgtype("markdown");
OapiRobotSendRequest.Markdown markdown = new OapiRobotSendRequest.Markdown();
markdown.setTitle("杭州天气");
markdown.setText("#### 杭州天气 @156xxxx8827\n" +
        "> 9度,西北风1级,空气良89,相对温度73%\n\n" +
        "> ![screenshot](https://gw.alicdn.com/tfs/TB1ut3xxbsrBKNjSZFpXXcXhFXa-846-786.png)\n"  +
        "> ###### 10点20分发布 [天气](http://www.thinkpage.cn/) \n");
request.setMarkdown(markdown);
OapiRobotSendResponse response = client.execute(request);

The three common types are shown above; see the official docs for the full list of message types.

Reply mechanism

Custom robots do not yet support a reply mechanism (i.e. having DingTalk call back a configured service address when a group member @-mentions the robot in chat, the so-called Outgoing robot).

Error codes

Each API call returns a code indicating success or failure; use it to debug the interface and trace errors.

Note: your program should detect errors by checking errcode, not by matching errmsg, since the errmsg wording may change.

See the official docs: error codes
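
For example, after client.execute(request) in the sample above, branch on the errcode of the response rather than its errmsg. A short sketch, assuming the errcode/errmsg getters generated on OapiRobotSendResponse in the SDK version above:

OapiRobotSendResponse response = client.execute(request);
// Branch on errcode; errmsg is only human-readable and its wording may change.
if (response.getErrcode() != null && response.getErrcode() != 0L) {
    System.out.println("send failed, errcode=" + response.getErrcode()
            + ", errmsg=" + response.getErrmsg());
}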

The previous section covered integrating Alertmanager with DingTalk; we can also write our own webhook service to receive notifications from Alertmanager.

Capturing Alertmanager's request content

As covered earlier, Alertmanager sends a POST request to the configured webhook address, so let's first write a simple controller to receive that request:

    @RequestMapping("/test")
    public void watcher(HttpServletRequest request) throws IOException {
        System.out.println(request.getMethod());
        BufferedReader reader = new BufferedReader(new InputStreamReader(request.getInputStream()));
        String str = "";
        String wholeStr = "";
        while ((str = reader.readLine()) != null) {
            wholeStr += str;
        }
        System.out.println("body:" + wholeStr);
    }

With this service in place, change the webhook address in the Alertmanager config to point at it, then analyze the printed body. It looks like this:

{
  "receiver":"webhook_alert",
  "status":"resolved",
  "alerts":[
    {
      "status":"resolved",
      "labels":{
        "alertname":"curl test",
        "severity":"warning"
      },
      "annotations":{
        "description":"this is a test alert from curl",
        "summary":"test alert from curl"
      },
      "startsAt":"2019-12-03T03:22:50.430372292Z",
      "endsAt":"2019-12-03T03:26:50.430372292Z",
      "generatorURL":"",
      "fingerprint":"960077177807fca5"
    }
  ],
  "groupLabels":{
    "alertname":"curl test"
  },
  "commonLabels":{
    "alertname":"curl test",
    "severity":"warning"
  },
  "commonAnnotations":{
    "description":"this is a test alert from curl",
    "summary":"test alert from curl"
  },
  "externalURL":"http://alertmanager-prometheus-operator-alertmanager-test-0:9093",
  "version":"4",
  "groupKey":"{}/{severity="warning"}:{alertname="curl test"}"
}

The format of every received request is essentially the same standard JSON, so let's write entity classes to receive it.

Writing the entity classes

The inner entity class, Alert.java:

/**
 * @ClassName: Alert
 * @Description: A single alert entry in an Alertmanager notification
 * @Author: wuzhiyong
 * @Time: 2019/12/3 11:39
 * @Version: v1.0
 **/
@Data
public class Alert {
    /**
     * Status: firing / resolved
     */
    private String status;
    /**
     * Labels
     */
    private Map<String, String> labels;
    /**
     * Additional information carried with the alert
     */
    private Map<String, String> annotations;
    /**
     * Start time
     */
    private String startsAt;
    private String endsAt;
    /**
     * URL of the source that generated the alert
     */
    private String generatorURL;
    private String fingerprint;
 
    public Alert() {
    }
}

The outer entity class, Alerts.java:

/**
 * @ClassName: Alerts
 * @Description: The alert payload format sent by Alertmanager
 * @Author: wuzhiyong
 * @Time: 2019/12/3 11:49
 * @Version: v1.0
 **/
@Data
public class Alerts {
    private String externalURL;
    private String version;
    private String groupKey;
    private String receiver;
    private String status;
    private List<Alert> alerts;
    private Map<String, String> groupLabels;
    private Map<String, String> commonLabels;
    private Map<String, String> commonAnnotations;
    public Alerts() {
    }
}
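
As a quick sanity check, the payload printed earlier can be deserialized into these classes with Jackson, the JSON library Spring Boot uses for @RequestBody binding. A minimal sketch (alerts-sample.json is a hypothetical local copy of that payload):

import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;
 
public class AlertsParseDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // alerts-sample.json: the JSON body captured above, saved locally (hypothetical file).
        Alerts alerts = mapper.readValue(new File("alerts-sample.json"), Alerts.class);
        System.out.println(alerts.getStatus());                          // resolved
        System.out.println(alerts.getAlerts().get(0).getFingerprint()); // 960077177807fca5
    }
}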

Writing the webhook

    @RequestMapping(value = "/receive", method = RequestMethod.POST)
    void receive(@RequestBody Alerts alerts) {
        log.info("new alert: {}",alerts.getCommonLabels().get("alertname"));
    }

The receiver above only logs the alertname from the notification; with the full alert object in hand, you can do much more, as sketched below.

Adapt it freely to your own business.
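
For example, building on the custom robot from the DingTalk section above, the receiver could aggregate every alert in a notification into a single text message, which also helps stay under the 20-messages-per-minute limit. A minimal sketch; WEBHOOK_URL is a placeholder for your robot's webhook address:

    @RequestMapping(value = "/receive", method = RequestMethod.POST)
    void receive(@RequestBody Alerts alerts) throws ApiException {
        // Summarize all alerts in this notification into one message body.
        StringBuilder content = new StringBuilder("Alertmanager notification: " + alerts.getStatus() + "\n");
        for (Alert alert : alerts.getAlerts()) {
            content.append(alert.getLabels().get("alertname"))
                   .append(" [").append(alert.getStatus()).append("]\n");
        }
        DingTalkClient client = new DefaultDingTalkClient(WEBHOOK_URL); // placeholder address
        OapiRobotSendRequest request = new OapiRobotSendRequest();
        request.setMsgtype("text");
        OapiRobotSendRequest.Text text = new OapiRobotSendRequest.Text();
        text.setContent(content.toString());
        request.setText(text);
        client.execute(request);
    }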

Global configuration

See the official docs: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config

global:
  # How frequently to scrape targets by default.
  [ scrape_interval: <duration> | default = 1m ]
 
  # How long until a scrape request times out.
  [ scrape_timeout: <duration> | default = 10s ]
 
  # How frequently to evaluate rules.
  [ evaluation_interval: <duration> | default = 1m ]
 
  # The labels to add to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    [ <labelname>: <labelvalue> ... ]
 
# Rule files specifies a list of globs. Rules and alerts are read from
# all matching files.
rule_files:
  [ - <filepath_glob> ... ]
 
# A list of scrape configurations.
scrape_configs:
  [ - <scrape_config> ... ]
 
# Alerting specifies settings related to the Alertmanager.
alerting:
  alert_relabel_configs:
    [ - <relabel_config> ... ]
  alertmanagers:
    [ - <alertmanager_config> ... ]
 
# Settings related to the remote write feature.
remote_write:
  [ - <remote_write> ... ]
 
# Settings related to the remote read feature.
remote_read:
  [ - <remote_read> ... ]

Relabeling metrics

Labels can be rewritten at three points: before scraping, after scraping, and before alerting:

  • Before samples are collected, via relabel_configs
  • After collection, before writing to storage, via metric_relabel_configs
  • Before alerts are sent, via alert_relabel_configs

Job configuration

- job_name: prometheus
  honor_labels: false
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - monitoring
  scrape_interval: 30s
  relabel_configs:
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_prometheus
    regex: k8s
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Pod;(.*)
    replacement: ${1}
    target_label: pod
  - source_labels:
    - __meta_kubernetes_namespace
    target_label: namespace
  • kubernetes_sd_configs: with this configuration Prometheus auto-discovers k8s nodes, services, pods, endpoints, and ingresses and monitors them; see the official documentation for more. The meaning of each __meta_kubernetes_xxxxx label can also be found there.
  • endpoints: discovery uses the endpoints role; each service you create gets a corresponding endpoint, and scraping via endpoints reaches all the pods behind the service.
  • The configuration below keeps a target only if its service carries the label prometheus=k8s. So later we must create a service for Prometheus and attach the prometheus: k8s label to it.
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_prometheus
    regex: k8s
  • The configuration below means: if __meta_kubernetes_endpoint_address_target_kind has the value Pod and __meta_kubernetes_endpoint_address_target_name has the value prometheus-0, then joined with ; they become Pod;prometheus-0. Matching that against the regex Pod;(.*), ${1} takes the first capture group, whose value is prometheus-0, and that value is assigned to the pod target label. This block therefore adds a pod=prometheus-0 label to every metric scraped from that target. If __meta_kubernetes_endpoint_address_target_kind is not Pod, no label is added.
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Pod;(.*)
    replacement: ${1}
    target_label: pod
  • If no URL is specified, Prometheus scrapes the default URL /metrics.

Defining alerting rules

groups:
- name: example
  rules:
  - alert: HighRequestLatency
    expr: job:request_latency_seconds:mean5m{job="myjob"} > 0.5
    for: 10m
    labels:
      severity: page
    annotations:
      summary: High request latency

See the official docs: https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/

  • for: Prometheus checks that the alert stays active for 10 minutes before firing it.
  • labels: specifies a set of additional labels to attach to the alert. Any existing conflicting labels are overwritten. Label values can be templated.
  • annotations: specifies a set of informational labels for storing longer additional details, such as an alert description or a runbook link. Annotation values can be templated.

Templating

Label and annotation values can be templated using console templates. The $labels variable holds the label key/value pairs of the alert instance, the configured external labels can be accessed via the $externalLabels variable, and the $value variable holds the evaluated value of the alert instance.

# To insert a firing element's label values:
{{ $labels.<labelname> }}
# To insert the numeric expression value of the firing element:
{{ $value }}

Example:

groups:
- name: example
  rules:
 
  # Alert for any instance that is unreachable for >5 minutes.
  - alert: InstanceDown
    expr: up == 0
    for: 5m
    labels:
      severity: page
    annotations:
      summary: "Instance {{ $labels.instance }} down"
      description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes."
 
  # Alert for any instance that has a median request latency >1s.
  - alert: APIHighRequestLatency
    expr: api_http_request_latencies_second{quantile="0.5"} > 1
    for: 10m
    annotations:
      summary: "High request latency on {{ $labels.instance }}"
      description: "{{ $labels.instance }} has a median request latency above 1s (current value: {{ $value }}s)"