Ubuntu 20 Kubernetes Cluster Installation and Configuration (verified working as of 2022-02-10)

_LEON_ · 2022-02-10

Kubernetes installation and configuration

Introduction

This is a simple set of steps for installing Kubernetes on Ubuntu servers with kubeadm.
I have used kubespray before, which is easier, but building the cluster with kubeadm
gives you a better feel for the details. Choose whichever fits your needs.

The cluster runs on Ubuntu virtual machines and is meant for testing and learning.
Downloading, installing, and configuring everything takes roughly one to two hours.
	                                         --  2022-02-10 20:37:43, current version Kubernetes 1.23.0

Environment

Ubuntu 20 virtual machines
Kubernetes 1.23.0

Planning

Hostname   Static IP       User   Password
master01   172.16.106.11   root   123456
node01     172.16.106.12   root   123456
node02     172.16.106.13   root   123456
master02   172.16.106.14   root   123456

Steps

01. Enable root login

sudo passwd root

sudo vim /etc/ssh/sshd_config
PermitRootLogin yes # add this line

sudo systemctl restart sshd.service
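The same edit can be done non-interactively with sed. A sketch, demonstrated on a scratch copy so the real file stays untouched (it assumes the stock Ubuntu file, where the directive ships commented out):

```shell
# Demonstrate the edit on a scratch copy; point sed at /etc/ssh/sshd_config
# (with sudo) to apply it for real.
printf '#PermitRootLogin prohibit-password\n' > /tmp/sshd_config.demo
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /tmp/sshd_config.demo
grep PermitRootLogin /tmp/sshd_config.demo   # → PermitRootLogin yes
```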

02. Set a static IP

sudo nano /etc/netplan/00-installer-config.yaml 
network:
  ethernets:
    enp0s3:
      dhcp4: no
      addresses: [172.16.106.11/24]  # static IP
      gateway4: 172.16.106.1     # gateway
      nameservers:
        addresses: [202.106.1.20, 202.106.111.120] # DNS - adjust to your environment (e.g. use your Windows 10 host's DNS servers)
  version: 2
sudo netplan apply
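Every node gets the same file except for its address, so a small helper can render the YAML per host. A sketch using a hypothetical `render_netplan` function with the values from this guide (adjust the interface name, gateway, and DNS to your environment):

```shell
# Render a netplan config for a given static IP (hypothetical helper);
# copy the output to /etc/netplan/00-installer-config.yaml on each node.
render_netplan() {
  cat <<EOF
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: no
      addresses: [$1/24]
      gateway4: 172.16.106.1
      nameservers:
        addresses: [202.106.1.20, 202.106.111.120]
EOF
}
render_netplan 172.16.106.12 > /tmp/netplan-node01.yaml
grep 'addresses:' /tmp/netplan-node01.yaml
```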

03. Set the hostname

sudo hostnamectl set-hostname master01 
sudo nano /etc/hosts

127.0.0.1 localhost
#127.0.1.1 master01

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

172.16.106.38 master01
172.16.106.39 node01
172.16.106.50 node02
172.16.106.51 master02

(Note: these addresses differ from the planning table above; use the IPs you actually assigned to your nodes.)

04. Check and disable SELinux

(Stock Ubuntu ships with AppArmor rather than SELinux, so this step usually only applies if SELinux was installed separately; `sestatus` comes from the policycoreutils package.)

sestatus  # check status

sudo nano /etc/selinux/config # edit the config
SELINUX=disabled

sestatus # check again
reboot
sestatus # verify after reboot

05. Disable swap

The kubelet will not start while swap is enabled (by default), so turn it off and remove it from /etc/fstab:

swapoff -a && sed -i '/swap/d' /etc/fstab
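The sed half of that one-liner deletes every fstab line containing "swap", which is what keeps swap off after a reboot. Demonstrated on a scratch copy with hypothetical fstab content:

```shell
# Demonstrate the fstab edit on a scratch copy; the command above edits
# /etc/fstab directly.
printf '/dev/sda2 / ext4 defaults 0 1\n/swap.img none swap sw 0 0\n' > /tmp/fstab.demo
sed -i '/swap/d' /tmp/fstab.demo
cat /tmp/fstab.demo   # only the root filesystem line remains
```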

06. Open the required ports

Control-plane nodes:

Protocol   Direction   Port Range    Purpose                   Used By
TCP        Inbound     6443          Kubernetes API server     All
TCP        Inbound     2379-2380     etcd server client API    kube-apiserver, etcd
TCP        Inbound     10250         Kubelet API               Self, Control plane
TCP        Inbound     10259         kube-scheduler            Self
TCP        Inbound     10257         kube-controller-manager   Self

Worker nodes:

Protocol   Direction   Port Range    Purpose             Used By
TCP        Inbound     10250         Kubelet API         Self, Control plane
TCP        Inbound     30000-32767   NodePort Services   All

For a test cluster, the simplest option is to disable the firewall instead:

sudo ufw disable # Ubuntu
systemctl stop firewalld && systemctl disable firewalld # CentOS

07. Kernel modules and sysctl settings

# Enable kernel modules
sudo modprobe overlay && \
sudo modprobe br_netfilter

# Add some settings to sysctl
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload sysctl
sudo sysctl --system

08. Install Docker

sudo apt update && \
sudo apt install apt-transport-https ca-certificates curl software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - && \
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
apt-cache policy docker-ce && \
sudo apt install -y containerd.io docker-ce docker-ce-cli && \
sudo systemctl status docker

09. Install Kubernetes

  • Official Google repository (requires unrestricted access to packages.cloud.google.com):
sudo apt update
sudo apt -y install curl apt-transport-https 
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee  /etc/apt/sources.list.d/kubernetes.list

9.1 Aliyun mirror version

sudo apt update && \
sudo apt -y install curl apt-transport-https  
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -  
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee   /etc/apt/sources.list.d/kubernetes.list

9.2 Update and install

sudo apt update && \
sudo apt -y install vim git curl wget kubelet kubeadm kubectl && \
sudo apt-mark hold kubelet kubeadm kubectl

9.3 Check the installation and versions

kubectl version --client && kubeadm version

9.4 Configure Docker

# Create required directories
sudo mkdir -p /etc/systemd/system/docker.service.d

# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Start and enable Services
sudo systemctl daemon-reload && \
sudo systemctl restart docker && \
sudo systemctl enable docker
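A malformed daemon.json prevents dockerd from starting, so it is worth validating the JSON before the restart. A sketch using Python's stdlib JSON tool on a scratch copy (on the real machine, run the same check against /etc/docker/daemon.json):

```shell
# Validate a scratch copy of daemon.json with Python's stdlib JSON parser.
cat > /tmp/daemon.json.demo <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool /tmp/daemon.json.demo >/dev/null && echo valid
```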

9.5 Pull the Kubernetes images (important)

sudo kubeadm config images list

root@master01:~# sudo kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

sudo kubeadm config images pull # pulls from k8s.gcr.io; may fail without access to it
# Pull from the Aliyun mirror instead:
kubeadm config print init-defaults > kubeadm.conf
sed -i 's/k8s.gcr.io/registry.aliyuncs.com\/google_containers/g' kubeadm.conf
sudo kubeadm config images list --config kubeadm.conf

root@master01:~# sudo kubeadm config images list --config kubeadm.conf
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6

sudo kubeadm config images pull --config kubeadm.conf

# Re-tag the Aliyun images with the k8s.gcr.io names kubeadm expects.
# Source names: sudo kubeadm config images list --config kubeadm.conf
# Target names: sudo kubeadm config images list
# Note the version mismatch above (v1.23.0 vs v1.23.3): kubeadm.conf pins
# kubernetesVersion to v1.23.0, while the default list shows the latest patch
# release. Make sure the tags match what kubeadm will actually use.

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0 k8s.gcr.io/kube-apiserver:v1.23.3 && 
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0 k8s.gcr.io/kube-controller-manager:v1.23.3 && 
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0 k8s.gcr.io/kube-scheduler:v1.23.3 && 
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0 k8s.gcr.io/kube-proxy:v1.23.3 && 
docker tag registry.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6 && 
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0 && 
docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
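The seven tag commands above can also be generated mechanically by pairing the two image lists line by line, which avoids copy-paste mistakes. A sketch with just two images and hypothetical file names; on a real node you would fill the files from the two `kubeadm config images list` outputs (same order in both):

```shell
# Write the source (Aliyun) and target (k8s.gcr.io) names, one per line,
# then pair them with paste and emit one `docker tag` command per pair.
printf '%s\n' \
  'registry.aliyuncs.com/google_containers/pause:3.6' \
  'registry.aliyuncs.com/google_containers/etcd:3.5.1-0' > /tmp/ali.txt
printf '%s\n' \
  'k8s.gcr.io/pause:3.6' \
  'k8s.gcr.io/etcd:3.5.1-0' > /tmp/gcr.txt
paste /tmp/ali.txt /tmp/gcr.txt | while read -r src dst; do
  echo docker tag "$src" "$dst"   # drop `echo` to run the tagging for real
done
```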

9.6 Create the cluster with kubeadm

sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --upload-certs \
  --control-plane-endpoint=master01

The output looks like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
	--discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cb623807e131118f6800867e9 \
	--control-plane --certificate-key 8ca22421d961623b8958b89a628f2324da23e107622ab0f008de97af427b698b

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
	--discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cb623807e131118f6800867e9 
On master01, set up kubectl for your user and check the cluster:

mkdir -p $HOME/.kube && \
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config && \
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl cluster-info

On each worker node, run the worker join command:

kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
	--discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cb623807e131118f6800867e9

On master02, run the control-plane join command:

kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
	--discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cb623807e131118f6800867e9 \
	--control-plane --certificate-key 8ca22421d961623b8958b89a628f2324da23e107622ab0f008de97af427b698b

9.7 Install the network plugin

  • Install the Calico network plugin:
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml 
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
  • Check pod and node status:
watch kubectl get pods --all-namespaces
kubectl get nodes -o wide
