High Availability and Upgrade of a k8s Cluster

  Kubernetes (k8s for short) is an open-source product for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, planning, updating and maintenance.
image.png
A quick overview of the k8s components:

Core components:
apiserver: the single entry point for all resource operations; provides authentication, authorization, access control, API registration and discovery;
controller manager: maintains the state of the cluster, e.g. failure detection, automatic scaling, rolling updates;
scheduler: handles resource scheduling, placing Pods onto the appropriate machines according to the configured scheduling policy;
kubelet: maintains the lifecycle of containers and also manages volumes (CVI) and networking (CNI);
Container Runtime: responsible for image management and for actually running Pods and containers (CRI);
kube-proxy: implements the in-cluster service discovery and load balancing behind Services;
etcd: stores the state and data of the entire cluster;

Optional components:
kube-dns: provides DNS for the whole cluster; nowadays CoreDNS is most commonly used;
Ingress Controller: provides external access to services. Ingress is the API object that manages external access to the services in a cluster, typically over HTTP; it can provide load balancing, SSL termination and name-based virtual hosting.
Heapster: resource monitoring; nowadays Prometheus is used to monitor Kubernetes resources;
Dashboard: the web-based Kubernetes user interface;
Federation: clusters that span availability zones;
Fluentd-elasticsearch: collection, storage and querying of cluster logs; later on we will use ELK to collect logs.

1. Deploying a highly available k8s cluster

1.1 Preparing the deployment environment

1.1.1 Deployment method

k8s can be installed with batch-deployment tools (ansible/saltstack), manually from binaries, with kubeadm, or via apt/yum, running as daemons on the host and started with a service script much like nginx. Here I simply use kubeadm, the deployment tool provided by the k8s project, for an automated install: docker and the other components are installed on the master and node machines first, then the cluster is initialized, and both the control-plane services and the services on the nodes run as Pods.

1.1.2 Notes before deployment

Before deploying you need to disable swap on the hosts and tune the kernel parameters and resource limits. I am using Ubuntu here and did not allocate swap when building the VMs, so there is nothing to turn off; on CentOS or Rocky you additionally need to disable SELinux, iptables and a few other things. Below I only demonstrate the kernel-parameter and resource-limit tuning, which is the same everywhere.

root@node1:~# vim /etc/security/limits.conf
# End of file
*     soft   core     unlimited
*     hard   core     unlimited
*     soft   nproc    1000000
*     hard   nproc    1000000
*     soft   nofile   1000000
*     hard   nofile   1000000
*     soft   memlock  32000
*     hard   memlock  32000
*     soft   msgqueue 8192000
*     hard   msgqueue 8192000

root     soft   core     unlimited
root     hard   core     unlimited
root     soft   nproc    1000000
root     hard   nproc    1000000
root     soft   nofile   1000000
root     hard   nofile   1000000
root     soft   memlock  32000
root     hard   memlock  32000
root     soft   msgqueue 8192000
root     hard   msgqueue 8192000
# Mandatory kernel parameters:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1

root@node1:~# grep -Ev "^#|^$" /etc/sysctl.conf
net.ipv4.conf.default.rp_filter = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1 
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_rmem = 4096        87380   4194304
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 20480
net.core.optmem_max = 81920
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_max_tw_buckets = 20000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.ip_local_port_range = 10001    65000
vm.overcommit_memory = 0
vm.swappiness = 10
kernel.pid_max = 1000000
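
The three bridge-nf-call settings only take effect when the br_netfilter kernel module is loaded. As a minimal sketch (the module names are the usual ones, adjust to your distribution), load the modules and apply the sysctl settings:

root@node1:~# modprobe overlay
root@node1:~# modprobe br_netfilter
root@node1:~# cat <<EOF >/etc/modules-load.d/k8s.conf     #load them automatically at boot
overlay
br_netfilter
EOF
root@node1:~# sysctl -p     #or sysctl --system to reload everything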

1.1.3 Host and IP address plan

Hostname IP address Role
k8s-master1.stars.com 10.0.0.100 k8s cluster master node 1, master and etcd
k8s-master2.stars.com 10.0.0.101 k8s cluster master node 2, master and etcd
k8s-master3.stars.com 10.0.0.102 k8s cluster master node 3, master and etcd
ha1.stars.com 10.0.0.103 access entry 1 for the k8s masters, provides high availability and load balancing
ha2.stars.com 10.0.0.104 access entry 2 for the k8s masters, provides high availability and load balancing
harbor.stars.com 10.0.0.105 container image registry
k8s-node1.stars.com 10.0.0.106 k8s cluster worker node 1
k8s-node2.stars.com 10.0.0.107 k8s cluster worker node 2
k8s-node3.stars.com 10.0.0.108 k8s cluster worker node 3
(VIP) 10.0.0.200 virtual IP address, implemented on the ha1 and ha2 hosts
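
If there is no internal DNS for these names, one simple option is identical /etc/hosts entries on every machine. This is just a sketch built from the table above; use DNS instead if your environment provides it:

root@k8s-master1:~# cat >> /etc/hosts <<EOF
10.0.0.100 k8s-master1.stars.com
10.0.0.101 k8s-master2.stars.com
10.0.0.102 k8s-master3.stars.com
10.0.0.103 ha1.stars.com
10.0.0.104 ha2.stars.com
10.0.0.105 harbor.stars.com
10.0.0.106 k8s-node1.stars.com
10.0.0.107 k8s-node2.stars.com
10.0.0.108 k8s-node3.stars.com
EOF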

2. Highly available reverse proxy

  Haproxy and Keepalived are used here to build a highly available reverse-proxy environment that fronts the k8s apiserver.

2.1 Install Haproxy and Keepalived

  I am not going to build from source here, that takes far too long; I just install with apt.

root@ha1:~# apt -y install haproxy keepalived

root@ha2:~# apt -y install haproxy keepalived

image.png
image.png

2.2 Keepalived configuration

2.2.1 Modify the keepalived configuration file

  Before modifying anything, locate the keepalived configuration template: it lives in /usr/share/doc/keepalived/samples/ and is named keepalived.conf.vrrp. Copy it to /etc/keepalived/, rename it to keepalived.conf, and then edit it.

On ha1:
root@ha1:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
root@ha1:~# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     18473514861@163.com
   }
   notification_email_from 1916829748@qq.com    #to actually use this you need to configure the mailbox/SMTP settings
   smtp_server smtp.qq.com
   smtp_connect_timeout 30
   router_id ha1.stars.com                                                                                                                                                                                      
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 234.0.0.100
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 66
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass wm521314
    }
    virtual_ipaddress {
        10.0.0.200 dev eth0 label eth0:1
    }
}
root@ha1:~# systemctl restart keepalived
root@ha1:~# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.103  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:feb7:9c8f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b7:9c:8f  txqueuelen 1000  (Ethernet)
        RX packets 6965  bytes 6821055 (6.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4278  bytes 485110 (485.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.200  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:b7:9c:8f  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 168  bytes 13280 (13.2 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 168  bytes 13280 (13.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

root@ha1:~# scp /etc/keepalived/keepalived.conf 10.0.0.104:/etc/keepalived/keepalived.conf  #copy to the ha2 node; only small changes are needed there

On ha2:
root@ha2:~# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     18473514861@163.com
   }
   notification_email_from 1916829748@qq.com
   smtp_server smtp.qq.com
   smtp_connect_timeout 30
   router_id ha2.stars.com
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 234.0.0.100
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 66
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass wm521314
    }
    virtual_ipaddress {
        10.0.0.200 dev eth0 label eth0:1
    }
}
root@ha2:~# systemctl restart keepalived

2.2.2 Verify that the VIP fails over correctly

root@ha1:~# systemctl stop keepalived
root@ha1:~# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.103  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:feb7:9c8f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b7:9c:8f  txqueuelen 1000  (Ethernet)
        RX packets 7249  bytes 6845839 (6.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4935  bytes 534712 (534.7 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 168  bytes 13280 (13.2 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 168  bytes 13280 (13.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

root@ha2:~# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.104  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fea5:67e7  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a5:67:e7  txqueuelen 1000  (Ethernet)
        RX packets 6252  bytes 6753699 (6.7 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3550  bytes 279714 (279.7 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.200  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:a5:67:e7  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 168  bytes 13280 (13.2 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 168  bytes 13280 (13.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The output above shows that the VIP fails over correctly, so the configuration works.

2.3 Haproxy configuration

  The haproxy configuration file needs to be modified and a kernel parameter (net.ipv4.ip_nonlocal_bind) set so haproxy can bind to the VIP; the kernel parameter was already taken care of during system initialization, so here I only modify the haproxy configuration.

On ha1:
root@ha1:~# vim /etc/haproxy/haproxy.cfg #append the following at the end of the file
listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth admin:123456

listen kubernetes-api-6443
    bind 10.0.0.200:6443
    mode tcp 
    server master1 10.0.0.100:6443 check inter 3s fall 3 rise 5 
    server master2 10.0.0.101:6443 check inter 3s fall 3 rise 5 
    server master3 10.0.0.102:6443 check inter 3s fall 3 rise 5
root@ha1:~# systemctl restart haproxy.service

On ha2:
root@ha2:~# vim /etc/haproxy/haproxy.cfg #append the following at the end of the file
listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth admin:123456

listen kubernetes-api-6443
    bind 10.0.0.200:6443
    mode tcp
    balance roundrobin
    server master1 10.0.0.100:6443 check inter 3s fall 3 rise 3 
    server master2 10.0.0.101:6443 check inter 3s fall 3 rise 3 
    server master3 10.0.0.102:6443 check inter 3s fall 3 rise 3
root@ha2:~# systemctl restart haproxy.service
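
Once haproxy is restarted it is worth confirming that the listeners are actually up; a quick check along these lines (the stats credentials are the admin:123456 configured above):

root@ha1:~# ss -tnlp | grep -E '6443|9999'     #the VIP listener and the stats listener should both show up
root@ha1:~# curl -I -u admin:123456 http://10.0.0.103:9999/haproxy-status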

3. Installing and deploying the Harbor image registry

The installation is only covered briefly here; for the detailed steps see my separate Harbor deployment post.
A quick Harbor installation:

root@harbor:~# docker --version
Docker version 19.03.15, build 99e3ed8
root@harbor:~# docker-compose --version
docker-compose version 1.24.1, build 4667896b
root@harbor:~# cd /usr/local/src/
root@harbor:/usr/local/src# wget https://github.com/goharbor/harbor/releases/download/v2.2.2/harbor-offline-installer-v2.2.2.tgz
root@harbor:/usr/local/src# tar xf harbor-offline-installer-v2.2.2.tgz
root@harbor:/usr/local/src/harbor# ls
common.sh  harbor.v2.2.2.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
root@harbor:/usr/local/src/harbor# mv harbor.yml.tmpl harbor.yml
root@harbor:/usr/local/src/harbor# vim harbor.yml
hostname: harbor.stars.com  #set the Harbor domain name
#https: #I am not using https here, so this block is commented out
  # https port for harbor, default is 443
#  port: 443
  # The path of cert and key files for nginx
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path
harbor_admin_password: wm521314 #set the Harbor admin password
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: harbor2022  #set the database password
data_volume: /data/harbor   #directory where the data is stored
root@harbor:/usr/local/src/harbor# ./install.sh --with-trivy
When output like the screenshot below appears, the installation has succeeded.

image.png
Create a project:
image.png

4. Installing kubeadm and related components

  kubeadm, kubelet, kubectl, docker and related components need to be installed on the master and node machines; the load-balancer servers do not need them.

4.1 Install Docker

#Update the package index (run on all master and node machines)
root@k8s-master1:~# apt update
root@k8s-node1:~# apt update

#Install some helper packages
root@k8s-master1:~# apt -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common

#Install the GPG key
root@k8s-master1:~# curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -

#Add the repository
root@k8s-master1:~# add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

#Refresh the package index again, then install docker
root@k8s-master1:~# apt update
root@k8s-master1:~# apt-cache madison docker-ce docker-ce-cli
root@k8s-master1:~# apt -y install docker-ce=5:19.03.15~3-0~ubuntu-bionic docker-ce-cli=5:19.03.15~3-0~ubuntu-bionic containerd.io

#Test docker and check that the harbor registry is reachable
root@k8s-master1:~# docker --version
Docker version 19.03.15, build 99e3ed8919
root@k8s-master1:~# vim /etc/docker/daemon.json
root@k8s-master1:~# cat /etc/docker/daemon.json
{ 
    "registry-mirrors": ["https://c51gf9he.mirror.aliyuncs.com"],
    "insecure-registries": ["harbor.stars.com"],
    "data-root": "/data/docker-data",
    "exec-opts": ["native.cgroupdriver=systemd"]
}
root@k8s-master1:~# systemctl daemon-reload
root@k8s-master1:~# systemctl restart docker

image.png
image.png
image.png
image.png
image.png

4.2 Install kubeadm, kubelet and kubectl

  Configure a domestic (China) package mirror on all nodes and install the components; installing kubectl on the worker nodes is optional. Aliyun mirror:
https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.70be1b11pwOhXj
image.png

4.2.1 Use the Aliyun Kubernetes package mirror

#Install the helper package (already installed together with docker above, so this can be skipped)
root@k8s-master1:~# apt update && apt install -y apt-transport-https

#Download the GPG key
root@k8s-master1:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

#Add the Aliyun Kubernetes package repository
root@k8s-master1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
root@k8s-master1:~# apt update

image.png
image.png

4.2.2 Install kubectl, kubelet and kubeadm

The latest k8s release on GitHub is currently 1.25.0, published only recently; here I will install version 1.22.10 instead.

#List the available versions
root@k8s-master1:~# apt-cache madison kubeadm

#Install kubectl, kubelet and kubeadm
On the master nodes:
root@k8s-master1:~# apt -y install kubelet=1.22.10-00 kubeadm=1.22.10-00 kubectl=1.22.10-00
On the worker nodes:
root@k8s-node1:~# apt -y install kubelet=1.22.10-00 kubeadm=1.22.10-00

#Verify the kubeadm version
root@k8s-master1:~# kubeadm version

image.png
image.png
image.png

4.2.3 Check the kubelet status

  After installation kubelet is not actually running; the logs show that the file /var/lib/kubelet/config.yaml is missing (see the check below). At this point the k8s cluster is of course not fully deployed yet.
image.png
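
The screenshot corresponds roughly to checks like these (the exact log lines vary by version); the missing config.yaml is generated later by kubeadm init / kubeadm join:

root@k8s-master1:~# systemctl status kubelet
root@k8s-master1:~# journalctl -u kubelet --no-pager | tail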

5. Deploying a single-master k8s cluster

  A single master is fine for your own test environment; in production you start with at least three masters, always an odd number. Either way, single-master or multi-master, the initialization command only needs to be executed on one master node.

5.1 The commands used during initialization

5.1.1 The kubeadm command

View the kubeadm command help:

root@k8s-master1:~# kubeadm --help

image.png
Common kubeadm subcommands:
https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/ #how to use the kubeadm command

certs: #commands related to handling Kubernetes certificates
completion: bash command completion; requires the bash-completion package;
 # mkdir -p /data/scripts
 # kubeadm completion bash > /data/scripts/kubeadm_completion.sh
 # cp /data/scripts/kubeadm_completion.sh /etc/profile.d/kubeadm_completion.sh
config: manage the kubeadm cluster configuration, which is kept in a ConfigMap in the cluster;
init: initialize a kubernetes control plane; this command has many parameters, covered in detail below;
join: join a node to the cluster through a k8s master node;
reset: revert the changes that kubeadm init or kubeadm join made to the system;
token: manage tokens; a token is required when a new master or node is added to the cluster, and it is valid for 24 hours by default;
upgrade: upgrade the k8s version;
version: show version information; since kubeadm is written in Go, the Go-related information is shown as well.

image.png

5.1.2 The kubeadm init command

View the kubeadm init command help:

root@k8s-master1:~# kubeadm init --help

image.png
Common kubeadm init options:
https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/ #how to use kubeadm init for cluster initialization

## --apiserver-advertise-address string     #the IP address the k8s API Server will listen on
## --apiserver-bind-port int32      #the port the API Server binds to, 6443 by default

--apiserver-cert-extra-sans strings     #optional extra Subject Alternative Names for the API Server serving certificate; can be IP addresses or DNS names

--cert-dir string       #where certificates are stored, /etc/kubernetes/pki by default
--certificate-key string        #the key used to encrypt the control-plane certificates in the kubeadm-certs Secret
--config string     #path to a kubeadm configuration file

## --control-plane-endpoint string      #a stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain name; multi-master k8s HA is built on this parameter
--cri-socket string     #path of the CRI (Container Runtime Interface) socket to connect to; if empty, kubeadm tries to auto-detect it; "only use this option if you have more than one CRI installed or a non-standard CRI socket"

--dry-run       #do not apply any changes, only print what would be done, i.e. a test run.
--experimental-kustomize string     #path where the kustomize patches for the static pod manifests are stored.
--feature-gates string  #a set of key=value pairs describing feature gates, e.g.:
IPv6DualStack=true|false (ALPHA - default=false)

## --ignore-preflight-errors strings    #ignore errors in the preflight checks, e.g. swap; 'all' ignores everything

## --image-repository string    #choose the image repository to pull from, k8s.gcr.io by default
## --kubernetes-version string      #the k8s version to install, stable-1 by default
--node-name string      #the node name to use

## --pod-network-cidr       #the Pod IP address range
## --service-cidr   #the Service network address range
## --service-dns-domain string  #the internal k8s domain, cluster.local by default; the cluster DNS service (kube-dns/coredns) resolves the records generated for this domain.

--skip-certificate-key-print        #do not print the key used to encrypt the certificates
--skip-phases strings   #phases to skip
--skip-token-print      #skip printing the token
--token     #specify the token
--token-ttl     #token lifetime, 24 hours by default; 0 means it never expires
--upload-certs      #upload the control-plane certificates to the kubeadm-certs Secret

Global options:
--add-dir-header           #if true, add the file directory to the log message headers
--log-file string          #if non-empty, write logs to this file
--log-file-max-size uint   #maximum size of the log file in MB, 1800 by default; 0 means no limit
--one-output               #if true, only write logs at their native severity level (instead of also writing to every lower severity level)
--rootfs string            #the host root filesystem path, i.e. an absolute path
--skip-headers             #if true, do not show header prefixes in the log
--skip-log-headers         #if true, do not show headers in the log files

5.2 Prepare the images

5.2.1 List the required images

root@k8s-master1:~# kubeadm config images list --kubernetes-version v1.22.10
k8s.gcr.io/kube-apiserver:v1.22.10
k8s.gcr.io/kube-controller-manager:v1.22.10
k8s.gcr.io/kube-scheduler:v1.22.10
k8s.gcr.io/kube-proxy:v1.22.10
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

image.png

5.2.2 Download from a domestic mirror instead

root@k8s-master1:~# vim images-download.sh
root@k8s-master1:~# cat images-download.sh
#!/bin/bash
#
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
root@k8s-master1:~# chmod a+x images-download.sh
root@k8s-master1:~# ./images-download.sh

image.png

5.2.3 Verify the images

image.png

5.3 Initializing the single-master cluster

5.3.1 Run the initialization command

root@k8s-master1:~# kubeadm init --apiserver-advertise-address=10.0.0.100 --apiserver-bind-port=6443 --kubernetes-version=v1.22.10 --pod-network-cidr=172.28.0.0/16 --service-cidr=192.168.0.0/16 --service-dns-domain=stars.org --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap

image.png

5.3.2 Prepare the required files

image.png

root@k8s-master1:~# mkdir -p $HOME/.kube
root@k8s-master1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@k8s-master1:~# kubectl get node    #the node still shows NotReady at this point; we need to deploy a network add-on
NAME                    STATUS     ROLES                  AGE     VERSION
k8s-master1.stars.com   NotReady   control-plane,master   4h47m   v1.22.10

image.png

5.3.3 Deploy the network add-on

https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/addons/ #network add-ons supported by kubernetes
https://quay.io/repository/coreos/flannel?tab=tags #flannel image download
https://github.com/flannel-io/flannel #flannel GitHub project
image.png
image.png
image.png

#Download the kube-flannel.yml file
root@k8s-master1:~# mkdir /data/kubeadm-yaml && cd $_
root@k8s-master1:/data/kubeadm-yaml# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

#Modify the default kube-flannel.yml
root@k8s-master1:/data/kubeadm-yaml# vim kube-flannel.yml
........
  net-conf.json: |
    {
      "Network": "172.28.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }                                                                                                                                                                                                         
    }
root@k8s-master1:/data/kubeadm-yaml# grep "image" kube-flannel.yml  #a few images need to be pulled here; they can be downloaded in advance.
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
       #image: flannelcni/flannel:v0.19.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
       #image: flannelcni/flannel:v0.19.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
root@k8s-master1:/data/kubeadm-yaml# docker pull docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
root@k8s-master1:/data/kubeadm-yaml# docker pull docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1

#Create the pods from kube-flannel.yml
root@k8s-master1:/data/kubeadm-yaml# kubectl apply -f kube-flannel.yml

#Check whether the master node is now healthy
root@k8s-master1:/data/kubeadm-yaml# kubectl get pod -A
root@k8s-master1:/data/kubeadm-yaml# kubectl get nodes

image.png
image.png
image.png

5.4 Add the worker nodes to the k8s cluster

#Pre-pull the network add-on images
root@k8s-node1:~# docker pull docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
root@k8s-node1:~# docker pull docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1

#Join node1
root@k8s-node1:~# kubeadm join 10.0.0.100:6443 --token gank9f.co2bx7r5ow5iwoee --discovery-token-ca-cert-hash sha256:c040e0e13ff9ab0ffe8f089aea9d3f923adb063827f7285b9f13c25a10519212
root@k8s-master1:/data/kubeadm-yaml# kubectl get nodes

image.png
image.png
image.png

5.5 Check that the cluster can create working pods

#First, pull the test image on the worker nodes
root@k8s-node1:~# docker pull alpine

#Create the pods
root@k8s-master1:/data/kubeadm-yaml# kubectl run net-test1 --image=alpine sleep 360000
root@k8s-master1:/data/kubeadm-yaml# kubectl run net-test2 --image=alpine sleep 360000
root@k8s-master1:/data/kubeadm-yaml# kubectl get pod -o wide

#Verify pod connectivity
Use kubectl exec to get a shell inside the container and verify, as sketched below.
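
A minimal sketch of that check (the pod names are the ones created above; the peer IP is whatever kubectl get pod -o wide reported for the other pod, so 172.28.x.x here is only a placeholder):

root@k8s-master1:/data/kubeadm-yaml# kubectl exec -it net-test1 -- sh
/ # ping -c 2 172.28.x.x       #replace with the IP of net-test2: pod-to-pod connectivity
/ # ping -c 2 223.6.6.6        #external connectivity (an IP, so no DNS needed)
/ # exit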

image.png
image.png

6. Making the k8s cluster highly available

The setup above has a single master node; if that master fails, the whole cluster becomes unusable. To avoid a single point of failure taking the cluster down, we make the master layer highly available.

6.1 Reset the environment from the previous experiment

image.png
image.png
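
The screenshots show the previous cluster being torn down; the rollback essentially relies on kubeadm reset on every machine that had joined. A rough sketch (the clean-up paths are the usual ones, adjust as needed):

root@k8s-master1:~# kubeadm reset -f
root@k8s-master1:~# rm -rf $HOME/.kube /etc/cni/net.d     #remove the leftover kubeconfig and CNI configuration
#repeat on every master and node that was part of the single-master cluster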

6.2 Initialization

6.2.1 Run the initialization command

image.png
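
The screenshot shows the initialization. For a highly available control plane the key difference from 5.3.1 is pointing --control-plane-endpoint at the VIP published by haproxy/keepalived; the command was presumably along these lines (the other parameters carried over from 5.3.1):

root@k8s-master1:~# kubeadm init --apiserver-advertise-address=10.0.0.100 --control-plane-endpoint=10.0.0.200:6443 --apiserver-bind-port=6443 --kubernetes-version=v1.22.10 --pod-network-cidr=172.28.0.0/16 --service-cidr=192.168.0.0/16 --service-dns-domain=stars.org --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap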

6.2.2 Copy the kubeconfig files

image.png
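
These should be the same steps as in 5.3.2:

root@k8s-master1:~# mkdir -p $HOME/.kube
root@k8s-master1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config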

6.2.3 Deploy the network add-on

Use the same yml file as for the single-master deployment earlier.
image.png

6.3 Add the remaining nodes to the k8s cluster

image.png

6.3.1 Generate a certificate key on the current master for adding new control-plane nodes

root@k8s-master1:/data/kubeadm-yaml# kubeadm init phase upload-certs --upload-certs
I0825 16:59:50.518452  101206 version.go:255] remote version is much newer: v1.25.0; falling back to: stable-1.22
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
74c61ba69a4c4a37e4f658737fb4dc761ab048b835493198ee0dca92699e41f5

6.3.2 Add the master nodes

If you want the other master nodes to be able to use kubectl as well, the same directory has to be created and the same files copied on them.

On master2:
root@k8s-master2:~# kubeadm join 10.0.0.200:6443 --token w7r2lo.6svajqlnqiipdrvn --discovery-token-ca-cert-hash sha256:e650111eff92e96601695ee243a9a5d1b805fbbe3e14bfdc60cb1ba799b44a23 --control-plane --certificate-key 74c61ba69a4c4a37e4f658737fb4dc761ab048b835493198ee0dca92699e41f5

On master3:
root@k8s-master3:~# kubeadm join 10.0.0.200:6443 --token w7r2lo.6svajqlnqiipdrvn --discovery-token-ca-cert-hash sha256:e650111eff92e96601695ee243a9a5d1b805fbbe3e14bfdc60cb1ba799b44a23 --control-plane --certificate-key 74c61ba69a4c4a37e4f658737fb4dc761ab048b835493198ee0dca92699e41f5

image.png
image.png
image.png

6.3.3 Add the worker nodes

On node1:
root@k8s-node1:~# kubeadm join 10.0.0.200:6443 --token w7r2lo.6svajqlnqiipdrvn --discovery-token-ca-cert-hash sha256:e650111eff92e96601695ee243a9a5d1b805fbbe3e14bfdc60cb1ba799b44a23

On node2:
root@k8s-node2:~# kubeadm join 10.0.0.200:6443 --token w7r2lo.6svajqlnqiipdrvn --discovery-token-ca-cert-hash sha256:e650111eff92e96601695ee243a9a5d1b805fbbe3e14bfdc60cb1ba799b44a23

On node3:
root@k8s-node3:~# kubeadm join 10.0.0.200:6443 --token w7r2lo.6svajqlnqiipdrvn --discovery-token-ca-cert-hash sha256:e650111eff92e96601695ee243a9a5d1b805fbbe3e14bfdc60cb1ba799b44a23

Verify the cluster node status:
root@k8s-master1:~# kubectl get nodes

image.png
image.png
image.png
image.png

6.3.4 Verify pod availability in the k8s cluster

root@k8s-master1:~# kubectl run net-test1 --image=alpine sleep 360000
pod/net-test1 created
root@k8s-master1:~# kubectl run net-test2 --image=alpine sleep 360000
pod/net-test2 created
root@k8s-master1:~# kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE                  NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          31s   172.28.4.2   k8s-node2.stars.com   <none>           <none>
net-test2   1/1     Running   0          26s   172.28.5.2   k8s-node3.stars.com   <none>           <none>

image.png

6.4 Verify the high availability of the k8s cluster

Two pods are now running; simulate a failure of the master1 node and check whether the pods keep running normally.
image.png
image.png
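
The screenshots show the check; conceptually it amounts to something like this (shutting master1 down is just one way to simulate the failure):

root@k8s-master1:~# poweroff     #simulate a master1 failure
root@k8s-master2:~# kubectl get nodes     #master1 goes NotReady, the other nodes stay Ready
root@k8s-master2:~# kubectl get pod -o wide     #net-test1 and net-test2 keep running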

7. Upgrading the k8s version

  To upgrade a k8s cluster you must first upgrade kubeadm to the target k8s version; you could say kubeadm is the upgrade permit for the cluster.

7.1 Preparations for the upgrade

  The components are upgraded on all k8s master nodes: the control-plane services kube-controller-manager, kube-apiserver, kube-scheduler and kube-proxy are upgraded to the new version.

7.1.1 Verify the current k8s master version

root@k8s-master1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.10", GitCommit:"eae22ba6238096f5dec1ceb62766e97783f0ba2f", GitTreeState:"clean", BuildDate:"2022-05-24T12:55:22Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

7.1.2 Verify the current k8s node versions

root@k8s-master1:~# kubectl get nodes
NAME                    STATUS   ROLES                  AGE    VERSION
k8s-master1.stars.com   Ready    control-plane,master   106m   v1.22.10
k8s-master2.stars.com   Ready    control-plane,master   39m    v1.22.10
k8s-master3.stars.com   Ready    control-plane,master   36m    v1.22.10
k8s-node1.stars.com     Ready    <none>                 31m    v1.22.10
k8s-node2.stars.com     Ready    <none>                 31m    v1.22.10
k8s-node3.stars.com     Ready    <none>                 31m    v1.22.10

7.2 Upgrade the k8s master nodes

7.2.1 Upgrade kubeadm

root@k8s-master1:~# apt -y install kubeadm=1.23.10-00
root@k8s-master2:~# apt -y install kubeadm=1.23.10-00
root@k8s-master3:~# apt -y install kubeadm=1.23.10-00

image.png
image.png
image.png

7.2.2 Check the kubeadm upgrade command help and the upgrade plan

image.png
image.png
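
The screenshots correspond to the built-in help and the upgrade plan; the commands themselves are:

root@k8s-master1:~# kubeadm upgrade --help
root@k8s-master1:~# kubeadm upgrade plan     #lists the versions you can upgrade to and the component version changes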

7.2.3 Perform the version upgrade

  It is best to pull the required images in advance, otherwise the upgrade is very slow. When upgrading a production k8s cluster, you generally do it during a low-traffic window, and you comment out the node being upgraded in the HA reverse proxy (as sketched below) so that user access is not affected.
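
A sketch of taking a master out of rotation, assuming the listen block from section 2.3:

root@ha1:~# vim /etc/haproxy/haproxy.cfg
listen kubernetes-api-6443
    bind 10.0.0.200:6443
    mode tcp
    #server master1 10.0.0.100:6443 check inter 3s fall 3 rise 5     #master1 is being upgraded, temporarily out of rotation
    server master2 10.0.0.101:6443 check inter 3s fall 3 rise 5
    server master3 10.0.0.102:6443 check inter 3s fall 3 rise 5
root@ha1:~# systemctl reload haproxy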

root@k8s-master1:~# kubeadm upgrade apply v1.23.10
root@k8s-master2:~# kubeadm upgrade apply v1.23.10
root@k8s-master3:~# kubeadm upgrade apply v1.23.10

image.png

7.2.4 Verify the node versions

  The reported version is still the old one at this point: the upgrade is not finished yet. The matching kubelet and kubectl packages still have to be installed, after which the upgraded version is shown.
image.png

root@k8s-master1:~# apt -y install kubelet=1.23.10-00 kubectl=1.23.10-00
root@k8s-master2:~# apt -y install kubelet=1.23.10-00 kubectl=1.23.10-00
root@k8s-master3:~# apt -y install kubelet=1.23.10-00 kubectl=1.23.10-00

#Check the versions again after installation
root@k8s-master1:~# kubectl get nodes
NAME                    STATUS   ROLES                  AGE     VERSION
k8s-master1.stars.com   Ready    control-plane,master   3h26m   v1.23.10
k8s-master2.stars.com   Ready    control-plane,master   138m    v1.23.10
k8s-master3.stars.com   Ready    control-plane,master   135m    v1.23.10
k8s-node1.stars.com     Ready    <none>                 130m    v1.22.10
k8s-node2.stars.com     Ready    <none>                 130m    v1.22.10
k8s-node3.stars.com     Ready    <none>                 130m    v1.22.10

7.3 Upgrade the worker nodes

7.3.1 Upgrade the packages

root@k8s-node1:~# apt -y install kubelet=1.23.10-00 kubeadm=1.23.10-00
root@k8s-node2:~# apt -y install kubelet=1.23.10-00 kubeadm=1.23.10-00
root@k8s-node3:~# apt -y install kubelet=1.23.10-00 kubeadm=1.23.10-00
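
If the reported node version does not change after the package upgrade, restarting kubelet on the node usually does the trick; a sketch:

root@k8s-node1:~# systemctl daemon-reload
root@k8s-node1:~# systemctl restart kubelet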

image.png
image.png
image.png

7.3.2 Verify the status of the cluster nodes

root@k8s-master1:~# kubectl get nodes
NAME                    STATUS   ROLES                  AGE     VERSION
k8s-master1.stars.com   Ready    control-plane,master   3h31m   v1.23.10
k8s-master2.stars.com   Ready    control-plane,master   144m    v1.23.10
k8s-master3.stars.com   Ready    control-plane,master   141m    v1.23.10
k8s-node1.stars.com     Ready    <none>                 136m    v1.23.10
k8s-node2.stars.com     Ready    <none>                 135m    v1.23.10
k8s-node3.stars.com     Ready    <none>                 135m    v1.23.10