Deploying a Kubernetes Cluster on CentOS 7 with kubeadm

Python百事通 · 2022-03-20

Preface

Environment: CentOS 7.9, docker-ce 20.10.9, Kubernetes v1.22.6

This article walks through installing and deploying a k8s cluster on CentOS.

Two ways to deploy a k8s cluster in production

kubeadm
kubeadm is a tool providing kubeadm init and kubeadm join, used to deploy a k8s cluster quickly.
Official deployment docs: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ and https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Binary packages
Download the release binaries from the official site and deploy each component by hand to assemble a k8s cluster.
Download: GitHub
The binary route is recommended if you want a deeper understanding of k8s.

Server initialization and environment preparation

Prepare three virtual machines: one master and two node machines.

Host          Description

172.20.10.2   master node; internet access required, CentOS 7.x, at least 2 CPU cores and 2 GB RAM
172.20.10.3   node1 node; internet access required, CentOS 7.x, at least 2 CPU cores and 2 GB RAM
172.20.10.4   node2 node; internet access required, CentOS 7.x, at least 2 CPU cores and 2 GB RAM


Configure all three hosts with the following six steps, adjusting to your environment:

1. Disable the firewall

[root@master ~]# systemctl stop firewalld			#stop the firewall
[root@master ~]# systemctl disable firewalld		#do not start it on boot


2. Disable selinux

[root@master ~]# getenforce 						#check the selinux status
Permissive
[root@master ~]# setenforce 0						#turn selinux off right away (the config change below only takes effect after a reboot)
[root@master ~]# vim /etc/selinux/config			#disable selinux permanently
SELINUX=disabled


3. Disable the swap partition (required; the official k8s docs mandate it)

[root@master ~]# swapoff -a	    #disable all swap partitions for the current boot
[root@master ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.8G        280M        1.2G        9.6M        286M        1.4G
Swap:            0B          0B          0B
[root@master ~]# vim /etc/fstab		#disable swap permanently: delete or comment out the swap entry in /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
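
If you prefer not to edit /etc/fstab by hand, a one-line equivalent (a sketch; double-check the file afterwards) is:

[root@master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab	#comment out every line that mentions swap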


4. Set the hostnames and host entries

cat >> /etc/hosts << EOF
172.20.10.2 master
172.20.10.3 node1
172.20.10.4 node2
EOF
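
The snippet above only provides name resolution; the hostname itself still has to be set on each machine, for example (run the matching line on the corresponding host):

[root@master ~]# hostnamectl set-hostname master	#on 172.20.10.2
[root@node1 ~]# hostnamectl set-hostname node1		#on 172.20.10.3
[root@node2 ~]# hostnamectl set-hostname node2		#on 172.20.10.4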


5. Time synchronization

If ntpd is not installed, install it first. If you would rather not install it, use the date command to check whether the three machines' clocks agree; if they do, you can skip the rest of this step, but for production and test environments use the commands below.

[root@master ~]# systemctl start ntpd	#start the ntpd service, or use a cron job such as: */5 * * * * /usr/sbin/ntpdate -u 192.168.11.100
[root@master ~]# systemctl enable ntpd
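
If the ntpd service is missing, a minimal install from the default CentOS repos (a sketch) looks like:

[root@master ~]# yum -y install ntp		#provides the ntpd service and the ntpdate command
[root@master ~]# systemctl start ntpd && systemctl enable ntpd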


6. Pass bridged IPv4 traffic to the iptables chains. By default, traffic crossing a Linux bridge bypasses iptables, so the rules kube-proxy installs would never see it and packets would effectively be lost. Configure this in a k8s.conf file (the file does not exist by default; create it yourself):

[root@master sysctl.d]# touch /etc/sysctl.d/k8s.conf	#create the k8s.conf file
[root@master sysctl.d]# cat >> /etc/sysctl.d/k8s.conf <<EOF     #append the settings to k8s.conf
> net.bridge.bridge-nf-call-ip6tables=1
> net.bridge.bridge-nf-call-iptables=1
> net.ipv4.ip_forward=1
> vm.swappiness=0
> EOF
[root@master sysctl.d]# sysctl --system		#reload all system parameters; sysctl -p also works
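
Note that the net.bridge.* keys only exist once the br_netfilter kernel module is loaded; if sysctl --system complains about unknown keys, load the module first and make it persistent (an extra step not shown in the original commands):

[root@master ~]# modprobe br_netfilter					#load the bridge netfilter module now
[root@master ~]# echo br_netfilter > /etc/modules-load.d/k8s.conf	#reload it automatically on boot
[root@master ~]# sysctl --system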


Installing k8s with kubeadm (the method this article covers)

Once the six steps above are done on every VM, start installing k8s. kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster; it can stand up a cluster with two commands:
1. Create a master node: kubeadm init.
2. Join the node machines to the Kubernetes cluster: kubeadm join <master_IP:port>.

Step 1: Install Docker (on all nodes; this k8s release uses Docker as its default CRI, i.e. the container runtime)

#remove any old Docker version first; for binary or community-edition installs, look up the matching uninstall steps
[root@master ~]# yum remove docker \
                   docker-client \
                   docker-client-latest \
                   docker-common \
                   docker-latest \
                   docker-latest-logrotate \
                   docker-logrotate \
                   docker-engine
[root@master ~]# yum install -y yum-utils	#install yum-utils, mainly for the yum-config-manager command
[root@master ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo	 #add the Docker repo
[root@master ~]# yum list docker-ce --showduplicates | sort -r	#list the Docker versions available
[root@master ~]# yum -y install docker-ce docker-ce-cli containerd.io	#install the latest Docker version
[root@master ~]# yum -y install docker-ce-20.10.9 docker-ce-cli-20.10.9 containerd.io  #or install a specific version instead
[root@master ~]# systemctl enable docker	#start Docker on boot
[root@master ~]# systemctl start docker		#start Docker now
[root@master ~]# cat /etc/docker/daemon.json 	#configure a registry mirror: the file should contain the content below
{
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
[root@master ~]# systemctl restart docker	#restart Docker
[root@master ~]# docker info |tail -5	#verify the mirror configuration
  127.0.0.0/8
 Registry Mirrors:
  https://b9pmyelo.mirror.aliyuncs.com/ 	#mirror configured; pulls now go through Alibaba Cloud
 Live Restore Enabled: false
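
The cat above only displays the file; if /etc/docker/daemon.json does not exist yet, one way to create it in a single step (a sketch using the same mirror URL) is:

[root@master ~]# mkdir -p /etc/docker
[root@master ~]# tee /etc/docker/daemon.json <<EOF
{
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
[root@master ~]# systemctl restart docker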


Step 2: Configure the Alibaba Cloud yum repo for Kubernetes

[root@master ~]# cat >> /etc/yum.repos.d/kubernetes.repo << EOF 	#configure the k8s yum repo on all three VMs
[kubernetes]
name = Kubernetes
baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled = 1
gpgcheck = 0
repo_gpgcheck = 0
gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


Step 3: Install kubeadm, kubelet, and kubectl with yum

#Install kubeadm, kubelet, and kubectl on all three VMs (kubeadm and kubectl are tools; only kubelet is a system service)
[root@master ~]# yum list --showduplicates | grep kubeadm	#list the kubeadm versions yum offers; we install 1.22.6 here; omitting the version installs the latest
[root@master ~]# yum -y install kubelet-1.22.6 kubeadm-1.22.6 kubectl-1.22.6	#install kubeadm, kubelet, kubectl
[root@master ~]# systemctl enable kubelet  #enable kubelet on boot (don't start it yet; it cannot start anyway until kubeadm init brings it up later)
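
A quick check that the expected versions landed (a sketch; exact output varies):

[root@master ~]# kubeadm version -o short	#should print v1.22.6
[root@master ~]# kubelet --version			#should print Kubernetes v1.22.6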


Step 4: Initialize the control plane on the master node

# kubeadm init --help shows what each flag does
# Run the initialization on the master node only (the node machines do not run it):
#   --apiserver-advertise-address   the address the API server advertises, i.e. the master's IP
#   --image-repository              pull the control-plane images from the Alibaba Cloud mirror
#   --kubernetes-version            the k8s version; keep it in line with the kubeadm version from step 3
#   --service-cidr                  the virtual IP range for Services; this value is fine for now
#   --pod-network-cidr              the Pod network range; it must match the flannel config applied in step 6
[root@master ~]# kubeadm init \
  --apiserver-advertise-address=172.20.10.2 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.22.6 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16


#In another terminal, docker images shows that kubeadm init actually pulled quite a few images:
[root@master ~]# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.22.6   d35b182b4200   2 weeks ago     128MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.22.6   63f3f385dcfe   2 weeks ago     104MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.22.6   3618e4ab750f   2 weeks ago     122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.22.6   9fe44a6192d1   2 weeks ago     52.7MB
registry.aliyuncs.com/google_containers/etcd                      3.5.0-0   004811815584   7 months ago    295MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.4    8d147537fb7d   8 months ago    47.6MB
registry.aliyuncs.com/google_containers/pause                     3.5       ed210e3e4a5b   10 months ago   683kB
[root@master ~]# 

#During kubeadm init, k8s reported the following error:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher



[root@master ~]# tail -22 /var/log/messages				#inspect the log; the key phrase is cgroup driver: k8s and Docker disagree
Feb  3 08:49:18 master kubelet: E0203 08:49:18.373751   14870 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""
Feb  3 08:49:18 master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Feb  3 08:49:18 master systemd: Unit kubelet.service entered failed state.
Feb  3 08:49:18 master systemd: kubelet.service failed.
[root@master ~]# 
#Cause: Kubernetes 1.14 and later recommend the systemd cgroup driver, but Docker's default cgroup driver is cgroupfs, so kubelet fails to start
[root@master ~]# docker info | grep -i "Cgroup Driver"		#check which cgroup driver Docker uses: it is indeed cgroupfs
 Cgroup Driver: cgroupfs
[root@master ~]# 
#Fix: edit /etc/docker/daemon.json so it contains exactly the two lines below (JSON does not allow comments)
[root@master ~]# vim /etc/docker/daemon.json 				#change the other nodes' Docker config too, to keep all nodes consistent
{
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
#the registry-mirrors line was configured earlier; note the comma now added at the end of it
#the exec-opts line is the new addition
[root@master ~]# systemctl restart docker
[root@master ~]# docker info | grep -i "Cgroup Driver"		#verify
 Cgroup Driver: systemd


#Re-run the master initialization; since the previous attempt partially completed, wipe its state first with kubeadm reset -f
[root@master ~]# kubeadm reset -f
[root@master ~]# kubeadm init \
  --apiserver-advertise-address=172.20.10.2 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.22.6 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
#the flags are the same as in the first attempt; see step 4 above for what each one means

#Finally kubeadm init succeeds, printing the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.								#we are told to configure the pod network; we do that in step 6
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.20.10.2:6443 --token llxh3m.t69t2bfwpvd2d3ao \
	--discovery-token-ca-cert-hash sha256:83f39269974cb90e2c6a57082acbd8a3ea8304d7e24484f396cd4fd8d9b8119d 


# If kubeadm init still fails at this point with log output like the following (skip this part if there was no error):

Mar 19 22:33:09 localhost kubelet: E0319 22:33:09.442868   18387 kubelet.go:1991] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 19 22:33:09 localhost kubelet: E0319 22:33:09.465127   18387 kubelet.go:2412] "Error getting node" err="node \"master\" not found"
Mar 19 22:33:09 localhost kubelet: E0319 22:33:09.466690   18387 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.20.10.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.10.2:6443: connect: connection refused
Mar 19 22:33:09 localhost kubelet: E0319 22:33:09.539199   18387 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.20.10.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master?timeout=10s": dial tcp 172.20.10.2:6443: connect: connection refused
Mar 19 22:33:09 localhost kubelet: E0319 22:33:09.546786   18387 kubelet.go:1991] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 19 22:33:09 localhost kubelet: I0319 22:33:09.550066   18387 kubelet_node_status.go:71] "Attempting to register node" node="master"
Mar 19 22:33:09 localhost kubelet: E0319 22:33:09.550899   18387 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://172.20.10.2:6443/api/v1/nodes\": dial tcp 172.20.10.2:6443: connect: connection refused" node="master"
Mar 19 22:33:09 localhost kubelet: E0319 22:33:09.565399   18387 kubelet.go:2412] "Error getting node" err="node \"master\" not found"
Mar 19 22:33:09 localhost kubelet: I0319 22:33:09.571980   18387 cpu_manager.go:209] "Starting CPU manager" policy="none"
Mar 19 22:33:09 localhost kubelet: I0319 22:33:09.572025   18387 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Mar 19 22:33:09 localhost kubelet: I0319 22:33:09.572078   18387 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 22:33:09 localhost kubelet: I0319 22:33:09.572259   18387 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 19 22:33:09 localhost kubelet: I0319 22:33:09.572311   18387 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Mar 19 22:33:09 localhost kubelet: I0319 22:33:09.572338   18387 policy_none.go:49] "None policy: Start"
Mar 19 22:33:09 localhost kubelet: I0319 22:33:09.576799   18387 memory_manager.go:168] "Starting memorymanager" policy="None"
Mar 19 22:33:09 localhost kubelet: I0319 22:33:09.576925   18387 state_mem.go:35] "Initializing new in-memory state store"
Mar 19 22:33:09 localhost kubelet: I0319 22:33:09.577155   18387 state_mem.go:75] "Updated machine memory state"
Mar 19 22:33:09 localhost kubelet: E0319 22:33:09.585780   18387 node_container_manager_linux.go:60] "Failed to create cgroup" err="Cannot set property TasksAccounting, or unknown property." cgroupName=[kubepods]
Mar 19 22:33:09 localhost kubelet: E0319 22:33:09.585930   18387 kubelet.go:1423] "Failed to start ContainerManager" err="Cannot set property TasksAccounting, or unknown property."
Mar 19 22:33:09 localhost systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Mar 19 22:33:09 localhost systemd: Unit kubelet.service entered failed state.
Mar 19 22:33:09 localhost systemd: kubelet.service failed.


# The "Cannot set property TasksAccounting, or unknown property" error above comes from an outdated systemd; upgrade systemd, then redo the reset and init steps above
[root@master ~]# yum -y upgrade systemd


#Now follow the printed instructions verbatim:
[root@master ~]# mkdir -p $HOME/.kube											#copy and run as printed
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config		#copy and run as printed
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config				#copy and run as printed
[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf					#copy and run as printed
[root@master ~]# 
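
kubectl can now talk to the cluster. A quick sanity check (a sketch of the expected output; the master stays NotReady until the CNI plugin is deployed in step 6):

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   3m    v1.22.6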


Step 5: Join the node machines to the k8s cluster

#Step 4's init output ends with the command each node must run to join the k8s cluster, as shown below; copy it to the nodes and run it.
#Note: the token in this kubeadm join command is only valid for 24 hours; once it expires, run kubeadm token create --print-join-command to generate a new one.
kubeadm join 172.20.10.2:6443 --token llxh3m.t69t2bfwpvd2d3ao \
	--discovery-token-ca-cert-hash sha256:83f39269974cb90e2c6a57082acbd8a3ea8304d7e24484f396cd4fd8d9b8119d

#Run it on node1 and node2:
[root@node1 ~]# kubeadm join 172.20.10.2:6443 --token llxh3m.t69t2bfwpvd2d3ao \
	--discovery-token-ca-cert-hash sha256:83f39269974cb90e2c6a57082acbd8a3ea8304d7e24484f396cd4fd8d9b8119d

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 ~]# 
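
If the 24-hour token has already expired when a node tries to join, regenerate the complete join command on the master (the output format is sketched below; your token and hash will differ):

[root@master ~]# kubeadm token create --print-join-command
kubeadm join 172.20.10.2:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>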


Step 6: Deploy the container network (CNI plugin)

#Configure the pod network from the master node.
#After the nodes join the cluster, kubectl get nodes on the master shows them as NotReady, because no CNI plugin has been deployed yet;
#the kubeadm init output in step 4 already told us to configure a pod network. Pod networking in k8s is implemented by third-party
#plugins, of which there are dozens; well-known ones include flannel, calico, canal, and kube-router. A simple, easy choice is the
#flannel project from CoreOS.

#The command below configures the pod network online. Because it fetches from a site hosted abroad it may fail; if so, look up the IP
#of raw.githubusercontent.com at http://ip.tool.chinaz.com/, add that mapping to /etc/hosts, and retry the command a few times until
#it succeeds (an offline alternative is sketched after the apply output below).
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml					
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
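
If the in-line apply keeps timing out, a workable alternative (a sketch; any machine that can reach GitHub can do the download) is to fetch the manifest first and apply the local copy:

[root@master ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# kubectl apply -f kube-flannel.yml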
[root@master ~]# kubectl get pods -n kube-system						#check the pod status
NAME                             READY   STATUS     RESTARTS   AGE
coredns-7f6cbbb7b8-bm2gl         0/1     Pending    0          86m
coredns-7f6cbbb7b8-frq8l         0/1     Pending    0          86m
etcd-master                      1/1     Running    1          87m
kube-apiserver-master            1/1     Running    1          87m
kube-controller-manager-master   1/1     Running    1          87m
kube-flannel-ds-5rwkt            0/1     Init:1/2   0          2m13s
kube-flannel-ds-9fqkl            1/1     Running    0          2m13s
kube-flannel-ds-bvgh4            1/1     Running    0          2m13s
kube-proxy-8vmqg                 1/1     Running    0          59m
kube-proxy-ll9hw                 1/1     Running    0          86m
kube-proxy-zndg7                 1/1     Running    0          59m
kube-scheduler-master            1/1     Running    1          87m
[root@master ~]# kubectl get nodes										#the pod network is configured; all nodes are now Ready
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   97m   v1.22.6
node1    Ready    <none>                 69m   v1.22.6
node2    Ready    <none>                 69m   v1.22.6
[root@master ~]#


Step 7: Test the k8s cluster

[root@master ~]# kubectl create deployment nginx --image=nginx					#create an nginx deployment as a test
deployment.apps/nginx created
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort	#stick with port 80; other ports may be blocked by a firewall
service/nginx exposed
[root@master kube]# kubectl get svc,pod   #check the exposed port
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        168m
service/nginx        NodePort    10.111.14.141   <none>        80:30870/TCP   2m5s

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-trqph   1/1     Running   0          2m19s
[root@master ~]# 
#As a beginner, don't worry about the details of these commands yet; just use port 80, since another port might be blocked by a firewall and the page would not load

Test in a browser: either the master's IP or a node's IP works, on port 30870 as shown above. If the page loads, the k8s deployment is complete and networking is OK.
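
Equivalently, from the command line (a quick sketch; 30870 is the NodePort shown above, and any of the three machines' IPs works):

[root@master ~]# curl -I http://172.20.10.2:30870		#an HTTP/1.1 200 OK response header means nginx is serving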

 
