[Study Notes] Cloud Native Basics

1. Deploying Service Discovery

1.1 File-Based Service Discovery

(1) Create the files used for service discovery and configure the required targets in them
cd /usr/local/prometheus
mkdir targets

vim targets/node-exporter.yaml
- targets:
  - 192.168.80.130:9100
  labels:
    from: node-exporter

vim targets/mysqld-exporter.yaml
- targets:
  - 192.168.80.160:9104
  labels:
    from: mysqld-exporter
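
Optionally, confirm that the exporters referenced above are actually serving metrics before wiring them into Prometheus (addresses taken from the target files; adjust to your environment):
curl -s http://192.168.80.130:9100/metrics | head -n 5
curl -s http://192.168.80.160:9104/metrics | head -n 5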

(2) Modify the Prometheus configuration file; the target discovery settings are defined inside a job in the configuration file
vim /usr/local/prometheus/prometheus.yml
......
scrape_configs:
  - job_name: nodes
    file_sd_configs:                  #use file-based service discovery
    - files:                          #list of files to load
      - targets/node*.yaml            #file loading supports wildcards
      refresh_interval: 2m            #reload the targets defined in the files every 2 minutes (default: 5m)
  
  - job_name: mysqld
    file_sd_configs:
    - files:
      - targets/mysqld*.yaml
      refresh_interval: 2m
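
Before reloading, the configuration can be validated with promtool, which ships with Prometheus (the path below assumes the install directory used in this note):
/usr/local/prometheus/promtool check config /usr/local/prometheus/prometheus.yml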


systemctl reload prometheus
Check Status -> Targets in the Prometheus web UI in a browser
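
The discovered targets can also be checked from the command line through Prometheus's HTTP API (this assumes Prometheus is listening on the default port 9090):
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"'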

1.2 Consul-Based Service Discovery

(1) Deploy the Consul service
cd /opt/
unzip consul_1.9.2_linux_amd64.zip
mv consul /usr/local/bin/

#Create the data, log, and configuration directories for the Consul service
mkdir -p /opt/consul
cd /opt/consul/
mkdir data logs conf

#Start the Consul service in server mode
consul agent -server -bootstrap -ui -data-dir=./data -config-dir=./conf  -bind=192.168.136.160 -client=0.0.0.0 -node=consul-server01 &> ./logs/consul.log &

#View the Consul cluster members
consul members
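
If the agent started correctly, the Consul HTTP API (default port 8500) can also be queried directly as a quick sanity check (address taken from the -bind option above):
curl -s http://192.168.136.160:8500/v1/status/leader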

(2) Register services with Consul
#Add the service definition files in the configuration directory
cd conf/
vim nodes.json

{
  "services": [
    {
      "id": "node_exporter-node01",
      "name": "node01",
      "address": "192.168.136.130",
      "port": 9100,
      "tags": ["nodes"],
      "checks": [{
        "http": "http://192.168.136.130:9100/metrics",
        "interval": "5s"
      }]
    }
  ]
}

vim mysqld.json

{
  "services": [
    {
      "id": "mysqld_exporter-node01",
      "name": "node02",
      "address": "192.168.136.160",
      "port": 9104,
      "tags": ["mysqld"],
      "checks": [{
        "http": "http://192.168.136.160:9104/metrics",
        "interval": "5s"
      }]
    }
  ]
}

vim nginx.json

{
  "services": [
    {
      "id": "nginx_exporter-node01",
      "name": "node03",
      "address": "192.168.136.160",
      "port": 9913,
      "tags": ["nginx"],
      "checks": [{
        "http": "http://192.168.136.160:9913/metrics",
        "interval": "5s"
      }]
    }
  ]
}
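
Because the agent was started with -config-dir=./conf, definition files added afterwards are not loaded automatically; reload the agent and confirm the services show up in the catalog (run on the Consul server itself):
consul reload
curl -s http://127.0.0.1:8500/v1/catalog/services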


(3) Modify the Prometheus configuration file
vim /usr/local/prometheus/prometheus.yml
......
  - job_name: nodes
    consul_sd_configs:                  #use Consul service discovery
    - server: 192.168.136.160:8500      #Consul server endpoint
      tags:                             #only services carrying these tags are added as Prometheus targets
      - nodes
      refresh_interval: 1m
........


systemctl reload prometheus
Check Status -> Targets in the Prometheus web UI in a browser

#Deregister a service from Consul
consul services deregister -id="node_exporter-node01"

#Re-register the service
consul services register /opt/consul/conf/nodes.json
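
To confirm the current registration state, list the service catalog on the Consul server:
consul catalog services
curl -s http://127.0.0.1:8500/v1/agent/services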

1.3 Kubernetes API-Based Service Discovery

#Selected configuration parameters of the Kubernetes-based discovery mechanism
# The API server addresses. If left empty, Prometheus is assumed to run inside of the cluster
# and will discover API servers automatically and use the pod's CA certificate and bearer token
# file at /var/run/secrets/kubernetes.io/serviceaccount/.
[ api_server: <host> ]
 
# The Kubernetes role of entities that should be discovered. One of endpoints, service, pod, node, or ingress.
role: <string>
 
# Optional authentication information used to authenticate to the API server.
# Note that the 'basic_auth', 'bearer_token' and 'bearer_token_file' authentication options are mutually exclusive.
[ bearer_token: <secret> ]
[ bearer_token_file: <filename> ]
 
# TLS configuration.
tls_config:
  # CA certificate to validate API server certificate with.
  [ ca_file: <filename> ]

  # Certificate and key files for client cert authentication to the server.
  [ cert_file: <filename> ]
  [ key_file: <filename> ]

  # ServerName extension to indicate the name of the server.
  [ server_name: <string> ]

# Optional namespace discovery. If omitted, all namespaces are used.
namespaces:
  names:
    [ - <string> ]
On the Kubernetes node, create the namespace, ServiceAccount, and RBAC rules that Prometheus will use:
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: outside-prometheus
  namespace: monitoring

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: outside-prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  - nodes/proxy
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "networking.k8s.io"
  resources:
    - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: outside-prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: outside-prometheus
subjects:
- kind: ServiceAccount
  name: outside-prometheus
  namespace: monitoring
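
Assuming the manifest above is saved to a file (the name outside-prometheus-rbac.yaml below is only illustrative), apply it first:
kubectl apply -f outside-prometheus-rbac.yaml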

Retrieve the token stored in the Secret object associated with the ServiceAccount, then save the token to a file on the Prometheus node.

TOKEN=`kubectl get secret/$(kubectl -n monitoring get secret | awk '/outside-prometheus/{print $1}') -n monitoring -o jsonpath={.data.token} | base64 -d`
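
Print the token so it can be copied to the Prometheus node. (On Kubernetes 1.24 and later, a token Secret is no longer created automatically for a ServiceAccount; in that case a token can be issued with: kubectl create token outside-prometheus -n monitoring)
echo $TOKEN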

#Copy the cluster CA certificate to the Prometheus node
scp /etc/kubernetes/pki/ca.crt <prometheus_node>:/usr/local/prometheus/
On the Prometheus node:

#Save the token value obtained above into a file
echo <token-value> > /usr/local/prometheus/kubernetes-api-token

cat /usr/local/prometheus/kubernetes-api-token	#check that it matches the token obtained above


Modify the Prometheus configuration and add the following jobs:

  #job that auto-discovers the cluster API server
  - job_name: kubernetes-apiserver
    kubernetes_sd_configs:
    - role: endpoints
      api_server: https://192.168.80.10:6443	#API Server address
      #the certificate and token configured here are used when connecting to the API Server for service discovery
      tls_config:
        ca_file: /usr/local/prometheus/ca.crt	#Kubernetes CA root certificate, used to verify the api-server certificate
        # insecure_skip_verify: true	#alternatively, use this option to skip certificate verification
      authorization:
        credentials_file: /usr/local/prometheus/kubernetes-api-token	#token file used when accessing the api-server
    scheme: https
    #the certificate and token configured here are used when scraping metrics from the api-server
    tls_config:
      ca_file: /usr/local/prometheus/ca.crt
    authorization:
      credentials_file: /usr/local/prometheus/kubernetes-api-token
    relabel_configs:
    - source_labels: ["__meta_kubernetes_namespace", "__meta_kubernetes_endpoints_name", "__meta_kubernetes_endpoint_port_name"]
      regex: default;kubernetes;https
      action: keep
  #job that auto-discovers the cluster nodes
  - job_name: "kubernetes-nodes"
    kubernetes_sd_configs:
    - role: node        #discovery role: node
      api_server: https://192.168.80.10:6443
      tls_config:
        ca_file: /usr/local/prometheus/ca.crt
      authorization:
        credentials_file: /usr/local/prometheus/kubernetes-api-token
    relabel_configs:
    - source_labels: ["__address__"]    #重写target地址,默认端口是kubelet端口10250,修改为node-exporter端口9100
      regex: (.*):10250
      action: replace
      target_label: __address__
      replacement: $1:9100
    - action: labelmap  #copy the existing __meta_kubernetes_node_label_<name> labels onto the target as <name> labels
      regex: __meta_kubernetes_node_label_(.+)
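
Connectivity from the Prometheus node to the API Server can be sanity-checked with curl, using the same CA file and token configured above:
curl -s --cacert /usr/local/prometheus/ca.crt \
  -H "Authorization: Bearer $(cat /usr/local/prometheus/kubernetes-api-token)" \
  https://192.168.80.10:6443/api/v1/nodes | head -n 20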


Verification
After the configuration changes are complete, reload Prometheus and check the target status in the web UI.

Apart from the extra certificates required to access the API Server, the configuration is essentially the same as service discovery for a Prometheus instance running inside the cluster.
Also, no service-discovery job is configured for Pods, because a Prometheus instance outside the cluster cannot reach in-cluster Pods; routing rules would have to be added to make them reachable.