
Building an elk (filebeat) log collection system on kubernetes (k8s) - k8s series (4)

Environment

  • Completed: k8s series (1) - installing kubernetes (k8s) with kubeadm
  • Completed: k8s series (2) - jenkins + kubernetes (k8s) + docker continuous integration and deployment (CI/CD) - the source of the logs collected here
  • km - 2 CPU - 4G RAM - ip - 192.168.23.39
  • node1 - 2 CPU - 2G RAM - ip - 192.168.23.40
  • node2 - 2 CPU - 2G RAM - ip - 192.168.23.41
  • A working k8s workload from k8s series (2) is already running
  • All steps follow the official documentation

Image preparation

1. Pull the required images

  • elasticsearch, kibana, and filebeat must all be the same version
docker pull elasticsearch:8.1.1
docker pull kibana:8.1.1
docker pull docker.elastic.co/beats/filebeat:8.1.1

2. Apply custom tags

docker tag elasticsearch:8.1.1 192.168.23.39:5000/elasticsearch:8.1.1
docker tag kibana:8.1.1 192.168.23.39:5000/kibana:8.1.1
docker tag docker.elastic.co/beats/filebeat:8.1.1 192.168.23.39:5000/filebeat:8.1.1

3. Push to the private image registry

docker push 192.168.23.39:5000/elasticsearch:8.1.1
docker push 192.168.23.39:5000/kibana:8.1.1
docker push 192.168.23.39:5000/filebeat:8.1.1
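The pushes above assume every docker host already trusts the registry at 192.168.23.39:5000. If that registry is served over plain HTTP (an assumption, based on the simple setup in series (2)), each node's /etc/docker/daemon.json must list it as insecure, otherwise the push and later pulls fail with an HTTPS error. A sketch of that fragment:

```json
{
  "insecure-registries": ["192.168.23.39:5000"]
}
```

Restart docker (systemctl restart docker) on each node after editing the file.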

Setting up elk (filebeat)

1. Add Elastic's official custom resource objects (the ECK operator)

wget https://download.elastic.co/downloads/eck/2.1.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.1.0/operator.yaml

kubectl apply -f crds.yaml
kubectl apply -f operator.yaml

2. Create a persistent volume (PV)

  • Without a PV, the pod will fail with: pod has unbound immediate PersistentVolumeClaims

1). Create the pv.yaml file

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /www/wwwroot/mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - km
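One detail worth checking before applying pv.yaml: a local PV's path must already exist on the node named in nodeAffinity (km here), or pods using the volume will fail to mount it. A minimal sketch, assuming you run it on the km host:

```shell
# Create the backing directory for the local PV (run on the km node).
PV_PATH=/www/wwwroot/mnt
mkdir -p "$PV_PATH"
ls -ld "$PV_PATH"
```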

2). Create the PV

kubectl apply -f pv.yaml

3. Mark the master as schedulable

  • If the master node is unschedulable, scheduling fails with:
  • node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory

# mark the master as schedulable
kubectl taint nodes --all node-role.kubernetes.io/master-

# restore the NoSchedule taint on the master
kubectl taint nodes km node-role.kubernetes.io/master=true:NoSchedule

4. Create elasticsearch

1). Create the elasticsearch.yaml file

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es
spec:
  image: 192.168.23.39:5000/elasticsearch:8.1.1
  version: 8.1.1
  nodeSets:
  - name: master
    count: 1
    config:
      node.store.allow_mmap: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            requests:
              memory: 1Gi
              cpu: 1
            limits:
              memory: 1Gi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: local-storage

2). Create elasticsearch

kubectl apply -f elasticsearch.yaml

3). Check the pod

kubectl describe pods es-es-master-0

5. Create kibana

1). Create the kibana.yaml file

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  image: 192.168.23.39:5000/kibana:8.1.1
  version: 8.1.1
  count: 1
  elasticsearchRef:
    name: es
  config: 
    i18n.locale: "zh-CN"
  http:
    service:
      spec:
        type: LoadBalancer
    tls:
      selfSignedCertificate:
        subjectAltNames:
        - ip: 192.168.23.39

2). Create kibana

kubectl apply -f kibana.yaml

3). Check the service

# check the svc
kubectl get service kibana-kb-http
# output
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kibana-kb-http   LoadBalancer   10.111.40.122   <pending>     5601:30767/TCP   2m27s

#kubectl port-forward service/kibana-kb-http 5601

#PASSWORD=$(kubectl get secret es-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

# get the elastic admin password
kubectl get secret es-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
  • Access locally at https://192.168.23.39:30767/ - https protocol - svc PORT(S) 5601:30767/TCP - user: elastic, password: as above
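The jsonpath command works because Kubernetes stores Secret values base64-encoded; the pipeline just decodes that field. A standalone sketch of the same decode step, using a made-up password rather than the real secret:

```shell
# Simulate decoding a Secret value (Kubernetes base64-encodes Secret data).
encoded=$(printf 'MyElasticPass1' | base64)      # what .data.elastic looks like
printf '%s' "$encoded" | base64 --decode; echo   # prints MyElasticPass1
```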

  • To use the http protocol instead, add the following under the http section of kibana.yaml:

http:
  tls:
    selfSignedCertificate:
      disabled: true
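Note that on bare metal with no LoadBalancer controller installed, the EXTERNAL-IP stays <pending> (as in the svc output above) and access falls back to the auto-assigned NodePort. An alternative, assuming you prefer an explicit NodePort service, is to change the service type in the http section of kibana.yaml:

```yaml
http:
  service:
    spec:
      type: NodePort
```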

6. Create filebeat

1). Create the filebeat.yaml file

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: beat
spec:
  image: 192.168.23.39:5000/filebeat:8.1.1
  type: filebeat
  version: 8.1.1
  elasticsearchRef:
    name: es
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

2). Create filebeat

kubectl apply -f filebeat.yaml
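One caveat, in case the master NoSchedule taint is ever restored with the kubectl taint command shown earlier: the filebeat DaemonSet would then stop scheduling on km and its logs would no longer be collected. A sketch of a toleration you could add under the daemonSet podTemplate in filebeat.yaml to keep it running there:

```yaml
daemonSet:
  podTemplate:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
```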

Viewing elk (filebeat) log records

1. Create a data view

  • Open kibana at https://192.168.23.39:30767/
  • Log in as elastic
  • In the left-hand nav, go to Management -> Data Views -> Create data view
  • Enter filebeat-8.1.1* (matching your current filebeat version) and create the data view

kibana.png

2. Simulate a request to generate ELK (filebeat) log records

# list the running services
kubectl get svc

# simulate a request
curl 10.101.102.226/static/none.jpg > /dev/null

elk1.png

3. View the ELK (filebeat) log records

  • Click Discover in the left-hand nav
  • Search for "static" to see the request logs

elk2.png

End
