Upgrading Kubernetes 1.18 to 1.23 (switching to containerd)

Author: Li Yu

Overview

Since upstream deprecated the Docker runtime starting with 1.20, I had been holding off on any 1.2x upgrade. Over the break after Spring Festival I worked through the steps for going from 1.18 to 1.23.
Assumptions:
OS: Ubuntu 18.04.1
Kubernetes: v1.18.19, installed from binaries
Log in and check the current state:
root@harbor:~# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
192.168.3.61   Ready    master   48d   v1.18.19
root@ubuntu:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic

Download the 1.23 binaries

Open the Kubernetes release download page and download the 1.23 server binaries.

Overwrite the existing binaries in /usr/bin/.

Do not restart the services yet.
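
Replacing the binaries can look like the sketch below, assuming the server tarball was extracted to /root/kubernetes/server/bin (the path and the binary list are illustrative and depend on what your cluster actually runs):

cd /root/kubernetes/server/bin    # hypothetical extraction directory
for bin in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl; do
    # install unlinks the target first, so the copy succeeds even while the old binary is still running
    install -m 0755 "$bin" /usr/bin/"$bin"
done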

Update the configuration files

Edit /etc/systemd/system/kube-apiserver.service

Remove:
--kubelet-https=true \

Add two flags:

--service-account-issuer=kubernetes.default.svc \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
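
A quick sanity check after editing, before anything is reloaded (just confirming the removed flag is gone and the two new ones are present):

grep -nE 'kubelet-https|service-account-issuer|service-account-signing-key-file' /etc/systemd/system/kube-apiserver.service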

Edit /etc/systemd/system/kubelet.service

Add these flags:

--container-runtime=remote \
--runtime-request-timeout=15m \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
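
Before kubelet is restarted with these flags, it is worth confirming that containerd is installed, running, and listening on that socket (installing containerd itself is outside the scope of this post):

systemctl status containerd --no-pager
ls -l /run/containerd/containerd.sock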

Edit /var/lib/kubelet/config.yaml

Change
cgroupDriver: cgroupfs
to
cgroupDriver: systemd
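
kubelet and containerd must agree on the cgroup driver. If your config.toml was generated by containerd config default (containerd 1.5+ writes SystemdCgroup = false under the runc options), the one-liner below flips it; on older configs, add SystemdCgroup = true under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] by hand:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml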

Edit /etc/docker/daemon.json

Change
"exec-opts": ["native.cgroupdriver=cgroupfs"]
to
"exec-opts": ["native.cgroupdriver=systemd"]

Once all the edits are in place, reload systemd and restart the services (including kubelet, so it picks up the new binary and flags).

systemctl daemon-reload && systemctl restart kube-apiserver && systemctl restart kube-controller-manager && systemctl restart kube-scheduler && systemctl restart kubelet

Verify the upgrade

root@ubuntu:~# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
192.168.3.61   Ready    master   29d   v1.23.3

The node now reports v1.23.3, so the upgrade succeeded.

Now check the pods:

root@ubuntu:~# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS              RESTARTS      AGE
kube-system   calico-kube-controllers-648bc85d9-qz6pq   0/1     ContainerCreating   0             78m
kube-system   calico-node-bgcbk                         0/1     Init:0/2            0             78m
kube-system   coredns-848bd88f-crwxp                    0/1     ContainerCreating   7 (21h ago)   29d

None of the pods come up. Without Docker in the picture, containerd cannot pull images from the private registry yet. A public registry would work out of the box, but in production Harbor is far more common, so let's fix it; a quick diagnostic is shown below.
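
To see the actual pull error, describe one of the stuck pods or grep the kubelet log (the pod name below comes from the output above):

kubectl -n kube-system describe pod calico-node-bgcbk | tail -n 20
journalctl -u kubelet --since "15 min ago" | grep -i pull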

Fixing access to the private registry:

Edit containerd's config.toml.
A default configuration file can be generated with: containerd config default > /etc/containerd/config.toml

Two places in the file need to be customized:

sandbox_image = "harbor.XXXX.cn:8443/xxxx/k8s.gcr.io/pause:3.2"

Then insert this block:

    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.xxxx.cn:8443"]
          endpoint = ["https://harbor.xxxx.cn:8443"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.xxxx.cn:8443".tls]
          insecure_skip_verify = true
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.xxxx.cn:8443".auth]
          username = "zhangsan"
          password = "zhangsan@123"

For reference, here is the relevant part of my config.toml:

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      max_conf_num = 1
      conf_template = ""
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.xxxx.cn:8443"]
          endpoint = ["https://harbor.xxxx.cn:8443"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.xxxx.cn:8443".tls]
          insecure_skip_verify = true
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.xxxx.cn:8443".auth]
          username = "zhangsan"
          password = "zhangsan@123"
  [plugins."io.containerd.apulis.v1.opt"]
    path = "/opt/containerd"
  [plugins."io.containerd.apulis.v1.restart"]
    interval = "10s"
  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"
  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false
  [plugins."io.containerd.runtime.v1.linux"]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]
  [plugins."io.containerd.snapshotter.v1.devmapper"]
    root_path = ""
    pool_name = ""
    base_image_size = ""
    async_remove = false
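
Before restarting, you can check that the edited file still parses cleanly; containerd config dump loads the configuration and prints the merged result, so any TOML syntax error surfaces immediately:

containerd config dump > /dev/null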

Restart containerd

systemctl daemon-reload
systemctl restart containerd
systemctl enable containerd
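
With containerd back up, a direct pull is a quick way to confirm the registry credentials and TLS settings (the image reference matches the sandbox_image configured above, and crictl is assumed to be installed):

crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull harbor.XXXX.cn:8443/xxxx/k8s.gcr.io/pause:3.2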

Check the pod status again:

root@ubuntu:~# kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-b4f7f97b-jp6d9   1/1     Running   0          14m
kube-system   calico-node-lm8b2                        1/1     Running   0          14m
kube-system   coredns-75ffb4d4df-7t2s9                 1/1     Running   0          14m

All done!
