1. Introduction
Basic environment configuration
| Hostname | IP | Components |
| --- | --- | --- |
| k8s-master01 | 10.10.203.160 | controller-manager, kube-scheduler, kube-apiserver, etcd, kubelet, kube-proxy |
| k8s-master02 | 10.10.203.161 | controller-manager, kube-scheduler, kube-apiserver, etcd, kubelet, kube-proxy |
| k8s-master03 | 10.10.203.162 | controller-manager, kube-scheduler, kube-apiserver, etcd, kubelet, kube-proxy |
| k8s-node01 | 10.10.203.163 | kubelet, kube-proxy |
| k8s-node02 | 10.10.203.164 | kubelet, kube-proxy |
| Item | Notes |
| --- | --- |
| OS version | CentOS 7.x |
| Docker version | 20.10.x |
| Pod CIDR | 172.16.0.0/12 |
| Service CIDR | 192.168.0.0/16 |
| K8s version | 1.23.x |
2. System Environment Initialization
Hosts file configuration
Every node needs the following hosts entries:
#vim /etc/hosts
10.10.203.160 k8s-master01
10.10.203.161 k8s-master02
10.10.203.162 k8s-master03
10.10.203.36 k8s-master-lb # For a non-HA cluster, use master01's IP here
10.10.203.163 k8s-node01
10.10.203.164 k8s-node02
Passwordless SSH between nodes
Configure key-based SSH across the cluster nodes:
#ssh-keygen -t rsa
#for i in 10.10.203.160 10.10.203.161 10.10.203.162 10.10.203.163 10.10.203.164; do ssh-copy-id -i ~/.ssh/id_rsa.pub $i; done
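A quick check that key-based login works on every node (hostnames assume the hosts entries above):
#for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $i hostname; done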
YUM repository update
Switch to the Aliyun mirror repositories:
#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
#yum install -y yum-utils device-mapper-persistent-data lvm2
#yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Install dependency packages with yum
#yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
Disable firewall and related services
On all nodes, disable firewalld, dnsmasq, and SELinux (CentOS 7 also requires disabling NetworkManager; CentOS 8 does not):
#systemctl disable --now firewalld
#systemctl disable --now dnsmasq
#systemctl disable --now NetworkManager
SELinux settings
#setenforce 0
#sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
#sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
Disable swap on all nodes and comment out the swap entry in fstab:
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
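After this, the Swap line reported by free should show all zeros:
#free -h | grep -i swap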
Time synchronization
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
#crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
System limits
Raise the soft and hard open-file-descriptor limits to 65535 for the current session, then persist limits in limits.conf:
#ulimit -SHn 65535
#cat >>/etc/security/limits.conf<<EOF
# Appended at the end for Kubernetes
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
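Log in again and verify the new limits took effect:
#ulimit -Sn
#ulimit -Hn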
Kernel upgrade
# Download the 4.19 kernel-ml RPM packages
#wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
#wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
# Copy the packages from master01 to the other nodes
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
#cd /root/ && yum localinstall -y kernel-ml*
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
Check that the default kernel is 4.19:
#grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
Reboot all nodes, then verify the running kernel version:
#uname -a
Linux k8s-master01 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Install the ipvs and conntrack userspace tools on all nodes:
#yum install ipvsadm ipset sysstat conntrack libseccomp -y
Manually load Linux kernel modules
Load the ip_vs module, which implements IP load balancing:
#modprobe -- ip_vs
Load ip_vs_rr, which provides the round-robin scheduling algorithm:
#modprobe -- ip_vs_rr
Load ip_vs_wrr, which provides the weighted round-robin scheduling algorithm:
#modprobe -- ip_vs_wrr
Load ip_vs_sh, which provides the source-hashing scheduling algorithm:
#modprobe -- ip_vs_sh
Load nf_conntrack, which provides connection tracking used for stateful firewalling and NAT:
#modprobe -- nf_conntrack
Add the ipvs.conf module-load configuration
#vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
#systemctl enable --now systemd-modules-load.service
Verify that the modules are loaded:
#lsmod | grep -e ip_vs -e nf_conntrack
Define Kubernetes-related kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
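Spot-check that the parameters are live, e.g.:
#sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1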
3. Base Components
Prepare the component packages
Docker deployment
#yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
Deploy the Kubernetes and etcd binaries
#mkdir ./package
#cd ./package
Pick etcd and Kubernetes versions to suit your environment; this guide uses etcd v3.5.1 and Kubernetes v1.23.0:
#wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
#wget https://dl.k8s.io/v1.23.0/kubernetes-server-linux-amd64.tar.gz
#tar -zxf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
#tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.1-linux-amd64/etcd{,ctl}
Copy the binaries to the other nodes
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
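Verify the binaries landed and report the expected versions (v1.23.0 and 3.5.1):
#kubelet --version
#etcdctl version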
Kubernetes on GitHub: https://github.com/kubernetes/kubernetes/
4. Certificate Generation
In the k8s-ha-install repository cloned from GitHub, be sure to check out the branch matching your target version:
#cd /root/k8s-ha-install && git checkout manual-installation-v1.23.x
Install the certificate tooling
#wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
#wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
#chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
Issue etcd certificates
The CSR JSON files live in the pki directory of the cloned repository, and the output directory must exist first:
#mkdir -p /etc/etcd/ssl && cd /root/k8s-ha-install/pki
#cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
2023/09/08 17:26:33 [INFO] generating a new CA key and certificate from CSR
2023/09/08 17:26:33 [INFO] generate received request
2023/09/08 17:26:33 [INFO] received CSR
2023/09/08 17:26:33 [INFO] generating key: rsa-2048
2023/09/08 17:26:33 [INFO] encoded CSR
2023/09/08 17:26:33 [INFO] signed certificate with serial number 54722918621641987898757006367878445975216520935
#cfssl gencert -ca=/etc/etcd/ssl/etcd-ca.pem -ca-key=/etc/etcd/ssl/etcd-ca-key.pem -config=ca-config.json -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,10.10.203.160,10.10.203.161,10.10.203.162 -profile=kubernetes etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
2023/09/08 17:26:34 [INFO] generate received request
2023/09/08 17:26:34 [INFO] received CSR
2023/09/08 17:26:34 [INFO] generating key: rsa-2048
2023/09/08 17:26:34 [INFO] encoded CSR
2023/09/08 17:26:34 [INFO] signed certificate with serial number 658441448902441725900338597511638773260917653571
Issue Kubernetes certificates
#mkdir -p /etc/kubernetes/pki
#cd /root/k8s-ha-install/pki
Generate ca.pem and ca-key.pem:
#cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
2023/09/08 17:30:24 [INFO] generating a new CA key and certificate from CSR
2023/09/08 17:30:24 [INFO] generate received request
2023/09/08 17:30:24 [INFO] received CSR
2023/09/08 17:30:24 [INFO] generating key: rsa-2048
2023/09/08 17:30:24 [INFO] encoded CSR
2023/09/08 17:30:24 [INFO] signed certificate with serial number 93145036152487417566541540038978281174445094760
#cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=192.168.0.1,10.10.203.36,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,10.10.203.160,10.10.203.161,10.10.203.162 \
-profile=kubernetes \
apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
2023/09/08 17:30:26 [INFO] generate received request
2023/09/08 17:30:26 [INFO] received CSR
2023/09/08 17:30:26 [INFO] generating key: rsa-2048
2023/09/08 17:30:26 [INFO] encoded CSR
2023/09/08 17:30:26 [INFO] signed certificate with serial number 563192997900344038406914326092566780009021086965
#cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
2023/09/08 17:30:28 [INFO] generating a new CA key and certificate from CSR
2023/09/08 17:30:28 [INFO] generate received request
2023/09/08 17:30:28 [INFO] received CSR
2023/09/08 17:30:28 [INFO] generating key: rsa-2048
2023/09/08 17:30:29 [INFO] encoded CSR
2023/09/08 17:30:29 [INFO] signed certificate with serial number 193128108235323243610432953951551739118047042691
Use the front-proxy CA certificate and key, plus the config parameters, to generate the front-proxy client certificate and key:
#cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
front-proxy-client-csr.json |cfssljson -bare /etc/kubernetes/pki/front-proxy-client
2023/09/08 17:30:31 [INFO] generate received request
2023/09/08 17:30:31 [INFO] received CSR
2023/09/08 17:30:31 [INFO] generating key: rsa-2048
2023/09/08 17:30:31 [INFO] encoded CSR
2023/09/08 17:30:31 [INFO] signed certificate with serial number 248893608162597325215159831174865996163068725196
2023/09/08 17:30:31 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Generate the controller-manager certificate
#cfssl gencert -ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
2023/09/08 17:30:33 [INFO] generate received request
2023/09/08 17:30:33 [INFO] received CSR
2023/09/08 17:30:33 [INFO] generating key: rsa-2048
2023/09/08 17:30:33 [INFO] encoded CSR
2023/09/08 17:30:33 [INFO] signed certificate with serial number 686364052363340747032112856304805110235136261207
2023/09/08 17:30:33 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Configure the controller-manager kubeconfig
Set the cluster entry in the kubeconfig file, embedding the CA certificate:
#kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.10.203.36:8443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Cluster "kubernetes" set.
#kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Context "system:kube-controller-manager@kubernetes" created.
#kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/controller-manager.pem --client-key=/etc/kubernetes/pki/controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
User "system:kube-controller-manager" set.
#kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Switched to context "system:kube-controller-manager@kubernetes".
Use cfssl to generate the scheduler certificate and key files:
#cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
Set the cluster entry for the scheduler kubeconfig:
#kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.10.203.36:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Set the kube-scheduler credentials:
#kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
User "system:kube-scheduler" set.
Set the kube-scheduler context:
#kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Make system:kube-scheduler@kubernetes the default context:
#kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Use cfssl to generate the admin user's certificate and key files.
#cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
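The admin kubeconfig needs the same set-cluster / set-credentials / set-context steps used for controller-manager and scheduler; a sketch following that pattern (the kubernetes-admin user and context names are assumed to match the use-context command below):
#kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.10.203.36:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
#kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
#kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig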
Make kubernetes-admin@kubernetes the default context:
#kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
Switched to context "kubernetes-admin@kubernetes".
Generate the ServiceAccount private key with openssl
#openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
Generating RSA private key, 2048 bit long modulus
......................+++
........................................................................................................+++
e is 65537 (0x10001)
Extract the public key from the ServiceAccount private key and write it to /etc/kubernetes/pki/sa.pub:
#openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
Inspect the certificate files
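Everything generated so far can be listed with:
#ls /etc/kubernetes/pki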
Copy the certificates and kubeconfigs to the other master nodes:
for NODE in k8s-master02 k8s-master03; do
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done
5. Kubernetes System Component Configuration
Etcd configuration
k8s-master01 node configuration
#vim /etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.10.203.160:2380'
listen-client-urls: 'https://10.10.203.160:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.10.203.160:2380'
advertise-client-urls: 'https://10.10.203.160:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.10.203.160:2380,k8s-master02=https://10.10.203.161:2380,k8s-master03=https://10.10.203.162:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
k8s-master02 node configuration
#vim /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.10.203.161:2380'
listen-client-urls: 'https://10.10.203.161:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.10.203.161:2380'
advertise-client-urls: 'https://10.10.203.161:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.10.203.160:2380,k8s-master02=https://10.10.203.161:2380,k8s-master03=https://10.10.203.162:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
k8s-master03 node configuration
#vim /etc/etcd/etcd.config.yml
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.10.203.162:2380'
listen-client-urls: 'https://10.10.203.162:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.10.203.162:2380'
advertise-client-urls: 'https://10.10.203.162:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.10.203.160:2380,k8s-master02=https://10.10.203.161:2380,k8s-master03=https://10.10.203.162:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
Create the systemd service
Put etcd under systemd management (identical on all three nodes):
#vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
Create the etcd certificate directory on all master (etcd) nodes, then start etcd:
#mkdir /etc/kubernetes/pki/etcd
#ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
#systemctl daemon-reload
#systemctl enable --now etcd
Define an etcdctl alias
Checking etcd status normally requires a long string of endpoint and certificate flags; an alias keeps it short (note: ETCDCTL_API=3 must prefix the etcdctl command with a space, not a semicolon, so the variable applies to the invocation, and the alias line must not start with # inside .bashrc):
# vim /root/.bashrc
alias etcdctl-list='ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints="10.10.203.160:2379,10.10.203.161:2379,10.10.203.162:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table'
#source /root/.bashrc
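With the alias loaded, a single command prints the endpoint status table:
#etcdctl-list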
6. HA Implementation
Configure HAProxy and Keepalived on the three master nodes as follows.
HAProxy configuration
The HAProxy configuration is identical on all three master nodes:
#vim /etc/haproxy/haproxy.cfg
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend master
bind 0.0.0.0:8443
bind 127.0.0.1:8443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend master
backend master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 10.10.203.160:6443 check
server k8s-master02 10.10.203.161:6443 check
server k8s-master03 10.10.203.162:6443 check
Keepalived configuration
k8s-master01 node configuration
#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens192
mcast_src_ip 10.10.203.160
virtual_router_id 51
priority 101
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
10.10.203.36
}
track_script {
chk_apiserver
} }
k8s-master02 node configuration
#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens192
mcast_src_ip 10.10.203.161
virtual_router_id 51
priority 100
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
10.10.203.36
}
track_script {
chk_apiserver
} }
k8s-master03 node configuration
#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens192
mcast_src_ip 10.10.203.162
virtual_router_id 51
priority 100
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
10.10.203.36
}
track_script {
chk_apiserver
} }
Health check configuration
Define a shell script that checks whether the haproxy process is running. If it is not, the script stops the keepalived service (letting the VIP fail over) and exits with code 1; otherwise it exits with code 0.
#vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
Script walkthrough
The script executes as follows:
1. Initialize the error counter err to 0.
2. Loop up to three times, checking each time whether the haproxy process is running.
3. If haproxy is not running, increment err, sleep 1 second, and continue with the next iteration.
4. If haproxy is running, reset err to 0 and break out of the loop.
5. If err is non-zero after the loop, stop the keepalived service and exit with code 1.
6. If err equals 0, exit with code 0.
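Keepalived can only run the script if it is executable:
#chmod +x /etc/keepalived/check_apiserver.sh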
Start the HA services
Reload systemd and start haproxy and keepalived on all master nodes:
#systemctl daemon-reload
#systemctl enable --now haproxy
#systemctl enable --now keepalived
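The VIP should now be reachable, and HAProxy should accept connections on the frontend port (telnet was installed with the dependencies in section 2):
#ping 10.10.203.36 -c 4
#telnet 10.10.203.36 8443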
7. Kubernetes Master Node Configuration
Create the required directories on all nodes:
for MASTER in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $MASTER mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes;done
Apiserver
Create the kube-apiserver service on all master nodes; each node uses its own --advertise-address.
k8s-master01
#vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=10.10.203.160 \
--service-cluster-ip-range=192.168.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://10.10.203.160:2379,https://10.10.203.161:2379,https://10.10.203.162:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
k8s-master02
#vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=10.10.203.161 \
--service-cluster-ip-range=192.168.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://10.10.203.160:2379,https://10.10.203.161:2379,https://10.10.203.162:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
k8s-master03
#vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=10.10.203.162 \
--service-cluster-ip-range=192.168.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://10.10.203.160:2379,https://10.10.203.161:2379,https://10.10.203.162:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
Start the apiserver service
#systemctl daemon-reload && systemctl enable --now kube-apiserver
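Confirm the service is active on each master node:
#systemctl status kube-apiserver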
ControllerManager
kube-controller-manager.service configuration
The configuration is identical on all three master nodes:
#vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=172.16.0.0/12 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
Start the controller-manager service
#systemctl daemon-reload
#systemctl enable --now kube-controller-manager
Scheduler
kube-scheduler.service configuration
Configure kube-scheduler.service on all master nodes (identical content):
#vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
Start the scheduler service
# systemctl daemon-reload
# systemctl enable --now kube-scheduler
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /usr/lib/systemd/system/kube-scheduler.service.
TLS Bootstrapping Configuration
#cd /root/k8s-ha-install/bootstrap
#kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.10.203.36:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
The token-id and token-secret in bootstrap.secret.yaml must be consistent with each other in content and length, and the token passed to set-credentials above (c8ad9c.2e4d610cf3e7426e) must match them; if you change one, change them all.
Only continue once the cluster status can be queried successfully; otherwise stop and troubleshoot the Kubernetes components.
#kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
etcd-2 Healthy {"health":"true","reason":""}
etcd-0 Healthy {"health":"true","reason":""}
etcd-1 Healthy {"health":"true","reason":""}
scheduler Healthy ok
controller-manager Healthy ok
#kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
8. Node Configuration
Sync certificates
On master01, sync the relevant certificate files to the other nodes:
#cd /etc/kubernetes/
#for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
done
for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
done
done
Kubelet configuration
Create the required directories on all nodes:
#mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
Configure the kubelet systemd unit
#vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
Configure the kubelet service drop-in file
#vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --root-dir=/data/nfs/kubelet"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
Configure the kubelet configuration YAML
#vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 192.168.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
Start the kubelet service
#systemctl daemon-reload
#systemctl enable --now kubelet
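The nodes should now register with the API server; they will report NotReady until the Calico CNI plugin is installed in section 9:
#kubectl get node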
Kube-proxy configuration
#cd /root/k8s-ha-install
#kubectl -n kube-system create serviceaccount kube-proxy
#kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
#SECRET=$(kubectl -n kube-system get sa/kube-proxy \
--output=jsonpath='{.secrets[0].name}')
#JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)
#PKI_DIR=/etc/kubernetes/pki
#K8S_DIR=/etc/kubernetes
#kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.10.203.36:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
#kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
#kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
#kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Send the kubeconfig to the other nodes:
#for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
#for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
Add the kube-proxy configuration and service files on all nodes:
#vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.yaml --v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
Configure the kube-proxy YAML
#vim /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
masqueradeAll: true
minSyncPeriod: 5s
scheduler: "rr"
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
Enable and start the kube-proxy service
#systemctl daemon-reload
#systemctl enable --now kube-proxy
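Because mode is set to ipvs, the proxy's virtual-server rules can be inspected with ipvsadm (installed in section 2):
#ipvsadm -Ln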
9. Install the Calico Network Plugin
#cd /root/k8s-ha-install/calico/
Replace the POD_CIDR placeholder in the manifest with your own Pod CIDR:
#sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
#kubectl apply -f calico.yaml
Check that the network components in the kube-system namespace are running:
#kubectl get po -n kube-system
10. Install the CoreDNS Resolver
#cd /root/k8s-ha-install/
The trailing 0 appends a digit to the kubernetes Service IP (192.168.0.1), producing the cluster DNS address 192.168.0.10 configured in kubelet-conf.yml:
#COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
#sed -i "s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g" CoreDNS/coredns.yaml
#kubectl create -f CoreDNS/coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
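Confirm the coredns Pods come up:
#kubectl get po -n kube-system | grep coredns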
11. Cluster Validation
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- name: busybox
image: busybox:1.28
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
EOF
The cluster is healthy when all of the following hold (commands exercising these checks follow the list):
- Pods must be able to resolve Services
- Pods must be able to resolve Services in other namespaces
- Every node must be able to reach the kubernetes Service on port 443 and the kube-dns Service on port 53
- Pod-to-Pod communication must work:
  - within the same namespace
  - across namespaces
  - across nodes
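A minimal set of commands for these checks, using the busybox Pod created above (the Service IPs assume the 192.168.0.0/16 Service CIDR, i.e. kubernetes at 192.168.0.1 and kube-dns at 192.168.0.10):
#kubectl exec busybox -n default -- nslookup kubernetes
#kubectl exec busybox -n default -- nslookup kube-dns.kube-system
#telnet 192.168.0.1 443
#telnet 192.168.0.10 53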