Machine Preparation
Hostname | IP | Roles |
instance-2.c.slzcc-178908.internal | 10.140.0.2,35.229.197.59 | etcd,kube-apiserver,kube-controller-manager,kubelet,kube-proxy,kube-scheduler,kube-dns |
instance-3.c.slzcc-178908.internal | 10.140.0.3,35.194.142.199 | etcd,kube-apiserver,kube-controller-manager,kubelet,kube-proxy,kube-scheduler |
instance-4.c.slzcc-178908.internal | 10.140.0.4,35.194.196.149 | etcd,kube-apiserver,kube-controller-manager,kubelet,kube-proxy,kube-scheduler |
instance-5.c.slzcc-178908.internal | 10.140.0.5 | kubelet,kube-proxy |
instance-6.c.slzcc-178908.internal | 10.140.0.6 | kubelet,kube-proxy |
instance-7.c.slzcc-178908.internal | 10.140.0.7 | kubelet,kube-proxy |
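On Google Cloud the internal DNS already resolves these hostnames; if your environment does not, an /etc/hosts mapping on every machine is enough. The entries below are only a sketch derived from the table above, using the short names that the later loops rely on:
# /etc/hosts — illustrative entries derived from the table above
10.140.0.2  instance-2
10.140.0.3  instance-3
10.140.0.4  instance-4
10.140.0.5  instance-5
10.140.0.6  instance-6
10.140.0.7  instance-7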
Environment Preparation
SSH
Configure passwordless SSH login (when using Google Cloud, configure it as follows):
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FgpTgF1qnUPd0ojEg5Hvt7nk/yHZn6BsHg7Z4neiObc root@instance-2
The key's randomart image is:
+---[RSA 2048]----+
|          ooO=o +|
|        . +=+.+ o|
|          =.+... |
|         . o.o . |
|          .. S   |
|        ...o o   |
|        .=o= +   |
|      ++*=+.+ .  |
|       *XE=. o   |
+----[SHA256]-----+
Then place the contents of each of the three nodes' public keys (.ssh/id_rsa.pub) into .ssh/authorized_keys, so that every one of the three machines holds all three public keys:
.ssh/authorized_keys
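The collapsed source block is not reproduced here; as a sketch (assuming keys were generated on instance-2, instance-3 and instance-4 as shown above), the three public keys can be merged and distributed like this:
# Illustrative only: gather each master's id_rsa.pub and append all of them to
# authorized_keys on every machine (the first run will still prompt for a password).
$ for NODE in instance-2 instance-3 instance-4; do ssh ${NODE} "cat ~/.ssh/id_rsa.pub"; done > all-keys.pub
$ for NODE in instance-2 instance-3 instance-4; do cat all-keys.pub | ssh ${NODE} "cat >> ~/.ssh/authorized_keys"; done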
Docker
All nodes: install Docker:
# docker
$ apt-get update && apt-get install -y curl apt-transport-https
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
$ cat <<EOF > /etc/apt/sources.list.d/docker.list
deb https://download.docker.com/linux/$(lsb_release -si | tr '[:upper:]' '[:lower:]') $(lsb_release -cs) stable
EOF
$ apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
Modify the dockerd configuration:
/etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
Restart Docker:
$ systemctl restart docker
Kernel Parameters
All nodes: configure the kernel parameters:
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl -p /etc/sysctl.d/k8s.conf
Disable Swap
All nodes: Kubernetes v1.8+ requires swap to be disabled, otherwise the kubelet will not start:
$ swapoff -a && sysctl -w vm.swappiness=0
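To keep swap disabled after a reboot, the swap entry in /etc/fstab can also be commented out; this extra step is not part of the original procedure and is shown only as a sketch:
# Comment out swap entries so the setting survives a reboot (illustrative)
$ sed -i '/ swap / s/^/#/' /etc/fstab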
Kubernetes Binaries
All MASTER nodes: download and install the Kubernetes server binaries:
$ wget https://dl.k8s.io/v1.10.0/kubernetes-server-linux-amd64.tar.gz
$ tar zxf kubernetes-server-linux-amd64.tar.gz && \
  cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
All NODE nodes: download and install the Kubernetes node binaries:
$ wget https://dl.k8s.io/v1.10.0/kubernetes-node-linux-amd64.tar.gz
$ tar zxf kubernetes-node-linux-amd64.tar.gz && \
  cp kubernetes/node/bin/{kubelet,kube-proxy} /usr/local/bin/
Kubernetes CNI Binaries
All nodes: download and install the Kubernetes CNI plugin binaries:
$ mkdir -p /opt/cni/bin && cd /opt/cni/bin
$ export CNI_URL="https://github.com/containernetworking/plugins/releases/download"
$ wget -qO- --show-progress "${CNI_URL}/v0.6.0/cni-plugins-amd64-v0.6.0.tgz" | tar -zx
Cfssl
On a single MASTER node, install cfssl:
$ curl -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ curl -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ chmod +x /usr/local/bin/cfssl*
Etcd CA
On a single MASTER node, create the cfssl configuration directory:
$ mkdir -p /etc/etcd/ssl && cd /etc/etcd/ssl
$ export PKI_URL="https://mirror.shileizcc.com/Kubernetes/1.10/cfssl/jsonfiles/"
Download ca-config.json and etcd-ca-csr.json, then generate the etcd CA certificate:
$ wget "${PKI_URL}/ca-config.json" "${PKI_URL}/etcd-ca-csr.json"
$ cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
$ ls etcd-ca*
etcd-ca.csr  etcd-ca-csr.json  etcd-ca-key.pem  etcd-ca.pem
Download etcd-csr.json and generate the etcd certificate:
$ wget "${PKI_URL}/etcd-csr.json"
$ cfssl gencert \
  -ca=etcd-ca.pem \
  -ca-key=etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,10.140.0.2,10.140.0.3,10.140.0.4 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare etcd
$ ls etcd.*
etcd.csr  etcd.pem
The -hostname list covers all of the etcd master nodes.
Remove the files that are no longer needed:
$ rm -rf *.json *.csr
$ ls /etc/etcd/ssl
etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem
Copy the files to the other etcd master nodes:
$ for NODE in instance-3 instance-4; do
    echo "--- $NODE ---"
    ssh ${NODE} "mkdir -p /etc/etcd/ssl"
    for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
      scp /etc/etcd/ssl/${FILE} ${NODE}:/etc/etcd/ssl/${FILE}
    done
  done
Kubernetes CA
On a single MASTER node, create the pki configuration directory:
$ mkdir -p /etc/kubernetes/pki && cd /etc/kubernetes/pki
$ export PKI_URL="https://mirror.shileizcc.com/Kubernetes/1.10/cfssl/jsonfiles/"
$ export KUBE_APISERVER="https://10.140.0.2:6443"
Download ca-config.json and ca-csr.json, then generate the Kubernetes CA certificate:
$ wget "${PKI_URL}/ca-config.json" "${PKI_URL}/ca-csr.json"
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*.pem
ca-key.pem  ca.pem
API Server Certificate
Download apiserver-csr.json and generate the API server certificate:
$ wget "${PKI_URL}/apiserver-csr.json"
$ cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.96.0.1,192.168.35.10,127.0.0.1,10.140.0.2,10.140.0.3,10.140.0.4,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local \
  -profile=kubernetes \
  apiserver-csr.json | cfssljson -bare apiserver
$ ls apiserver*.pem
apiserver-key.pem  apiserver.pem
- 10.96.0.1 in the -hostname list is the cluster IP of the kubernetes Service.
- 192.168.35.10 is the virtual IP (VIP). (The VIP defined here may not be usable on cloud instances, which can cause problems; it is best to test on physical machines or a similar environment, or to use a load balancer instead, in which case keepalived is not needed.)
- kubernetes.default is the Kubernetes DNS name.
Front Proxy Certificate
Download front-proxy-ca-csr.json and generate the Front Proxy CA certificate. The Front Proxy is mainly used by the API aggregator:
$ wget "${PKI_URL}/front-proxy-ca-csr.json"
$ cfssl gencert \
  -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca
$ ls front-proxy-ca*.pem
front-proxy-ca-key.pem  front-proxy-ca.pem
Download front-proxy-client-csr.json and generate the front proxy client certificate:
$ wget "${PKI_URL}/front-proxy-client-csr.json"
$ cfssl gencert \
  -ca=front-proxy-ca.pem \
  -ca-key=front-proxy-ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssljson -bare front-proxy-client
$ ls front-proxy-client*.pem
front-proxy-client-key.pem  front-proxy-client.pem
Admin Certificate
Download admin-csr.json and generate the admin certificate:
$ wget "${PKI_URL}/admin-csr.json"
$ cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
$ ls admin*.pem
admin-key.pem  admin.pem
Generate the admin kubeconfig with the following commands:
# admin set cluster
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../admin.conf
# admin set credentials
$ kubectl config set-credentials kubernetes-admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=../admin.conf
# admin set context
$ kubectl config set-context kubernetes-admin@kubernetes \
    --cluster=kubernetes \
    --user=kubernetes-admin \
    --kubeconfig=../admin.conf
# admin set default context
$ kubectl config use-context kubernetes-admin@kubernetes \
    --kubeconfig=../admin.conf
Controller Manager Certificate
Download manager-csr.json and generate the controller-manager certificate:
$ wget "${PKI_URL}/manager-csr.json"
$ cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare controller-manager
$ ls controller-manager*.pem
controller-manager-key.pem  controller-manager.pem
Generate the controller-manager kubeconfig with the following commands:
# controller-manager set cluster
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../controller-manager.conf
# controller-manager set credentials
$ kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=controller-manager.pem \
    --client-key=controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=../controller-manager.conf
# controller-manager set context
$ kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=../controller-manager.conf
# controller-manager set default context
$ kubectl config use-context system:kube-controller-manager@kubernetes \
    --kubeconfig=../controller-manager.conf
Scheduler Certificate
Download scheduler-csr.json and generate the scheduler certificate:
$ wget "${PKI_URL}/scheduler-csr.json"
$ cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare scheduler
$ ls scheduler*.pem
scheduler-key.pem  scheduler.pem
Generate the scheduler kubeconfig with the following commands:
# scheduler set cluster
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../scheduler.conf
# scheduler set credentials
$ kubectl config set-credentials system:kube-scheduler \
    --client-certificate=scheduler.pem \
    --client-key=scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=../scheduler.conf
# scheduler set context
$ kubectl config set-context system:kube-scheduler@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=../scheduler.conf
# scheduler use default context
$ kubectl config use-context system:kube-scheduler@kubernetes \
    --kubeconfig=../scheduler.conf
Master Kubelet Certificate
Download kubelet-csr.json and generate the kubelet certificates for all the master nodes:
$ wget "${PKI_URL}/kubelet-csr.json"
$ for NODE in instance-2 instance-3 instance-4; do
    echo "--- $NODE ---"
    cp kubelet-csr.json kubelet-$NODE-csr.json
    sed -i "s/\$NODE/$NODE/g" kubelet-$NODE-csr.json
    cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -hostname=$NODE \
      -profile=kubernetes \
      kubelet-$NODE-csr.json | cfssljson -bare kubelet-$NODE
  done
$ ls kubelet-*.pem
kubelet-instance-2-key.pem  kubelet-instance-2.pem  kubelet-instance-3-key.pem  kubelet-instance-3.pem  kubelet-instance-4-key.pem  kubelet-instance-4.pem
Note that the NODE variable used for -hostname is the HOSTNAME of each master node.
Copy the generated kubelet certificates to the other master nodes:
$ for NODE in instance-3 instance-4; do
    echo "--- $NODE ---"
    ssh ${NODE} "mkdir -p /etc/kubernetes/pki"
    for FILE in kubelet-$NODE-key.pem kubelet-$NODE.pem ca.pem; do
      scp /etc/kubernetes/pki/${FILE} ${NODE}:/etc/kubernetes/pki/${FILE}
    done
  done
Generate the kubelet kubeconfig on each master with the following commands:
$ for NODE in instance-2 instance-3 instance-4; do
    echo "--- $NODE ---"
    ssh ${NODE} "cd /etc/kubernetes/pki && \
      kubectl config set-cluster kubernetes \
        --certificate-authority=ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=../kubelet.conf && \
      kubectl config set-credentials system:node:${NODE} \
        --client-certificate=kubelet-${NODE}.pem \
        --client-key=kubelet-${NODE}-key.pem \
        --embed-certs=true \
        --kubeconfig=../kubelet.conf && \
      kubectl config set-context system:node:${NODE}@kubernetes \
        --cluster=kubernetes \
        --user=system:node:${NODE} \
        --kubeconfig=../kubelet.conf && \
      kubectl config use-context system:node:${NODE}@kubernetes \
        --kubeconfig=../kubelet.conf && \
      rm kubelet-${NODE}.pem kubelet-${NODE}-key.pem"
  done
Service Account Key
Service accounts are not authenticated through the CA, so the service account key is not validated against it. Instead, create a private/public key pair dedicated to signing and verifying service account tokens:
$ openssl genrsa -out sa.key 2048
$ openssl rsa -in sa.key -pubout -out sa.pub
$ ls sa.*
sa.key  sa.pub
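For reference, sa.pub and sa.key are normally consumed by the control-plane components through flags like the ones below; the downloaded static Pod manifests are assumed to reference them in a similar way:
# kube-apiserver: verify service account tokens (illustrative flag wiring)
--service-account-key-file=/etc/kubernetes/pki/sa.pub
# kube-controller-manager: sign service account tokens
--service-account-private-key-file=/etc/kubernetes/pki/sa.key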
Remove Unneeded Files
Remove the files that are no longer referenced by any configuration:
$ rm -rf *.json *.csr scheduler*.pem controller-manager*.pem admin*.pem kubelet*.pem
$ ls
apiserver-key.pem  apiserver.pem  ca-key.pem  ca.pem  front-proxy-ca-key.pem  front-proxy-ca.pem  front-proxy-client-key.pem  front-proxy-client.pem  sa.key  sa.pub
Copy Files to the Other Nodes
Copy the generated certificates to the other master nodes:
$ for NODE in instance-3 instance-4; do
    echo "--- $NODE ---"
    for FILE in $(ls /etc/kubernetes/pki/); do
      scp /etc/kubernetes/pki/${FILE} ${NODE}:/etc/kubernetes/pki/${FILE}
    done
  done
Copy the generated kubeconfig files to the other master nodes:
$ for NODE in instance-3 instance-4; do
    echo "--- $NODE ---"
    for FILE in admin.conf controller-manager.conf scheduler.conf; do
      scp /etc/kubernetes/${FILE} ${NODE}:/etc/kubernetes/${FILE}
    done
  done
Kubernetes Masters Deployment
- kube-apiserver: provides the REST APIs, including authentication, authorization and state storage.
- kube-controller-manager: maintains cluster state, e.g. auto scaling and rolling updates.
- kube-scheduler: schedules resources, assigning Pods to nodes according to the configured scheduling policy.
- Etcd: the key/value store that holds all cluster state.
- HAProxy: provides load balancing.
- Keepalived: provides the virtual IP (VIP).
Deploying the Services
All MASTER nodes: download the deployment YAML files. These components are not managed as binaries under systemd; they all run as static Pods instead. Download the files into /etc/kubernetes/manifests:
$ export CORE_URL="https://mirror.shileizcc.com/Kubernetes/1.10/master/podfile/"
$ mkdir -p /etc/kubernetes/manifests && cd /etc/kubernetes/manifests
$ for FILE in kube-apiserver kube-controller-manager kube-scheduler haproxy keepalived etcd; do
    wget "${CORE_URL}/${FILE}.yml.conf" -O ${FILE}.yml
    if [ ${FILE} == "etcd" ]; then
      sed -i "s/\${HOSTNAME}/${HOSTNAME}/g" etcd.yml
      sed -i "s/\${PUBLIC_IP}/$(hostname -i)/g" etcd.yml
    fi
  done
$ ls /etc/kubernetes/manifests
etcd.yml  haproxy.yml  keepalived.yml  kube-apiserver.yml  kube-controller-manager.yml  kube-scheduler.yml
- Adjust the corresponding IP addresses in kube-apiserver.yml.
- For the NodeRestriction admission plugin used by kube-apiserver, see Using Node Authorization.
- kube-apiserver.yml binds port 5443, while haproxy, acting as the load balancer, listens on 6443.
Generate the etcd encryption key:
$ head -c 32 /dev/urandom | base64
ZvwKkmxl19pd/K9esTrLIKbWBsoBempWww0viwNyxaw=
All masters must use the same key.
All MASTER nodes: create the encryption.yml file under /etc/kubernetes/:
$ cat <<EOF > /etc/kubernetes/encryption.yml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ZvwKkmxl19pd/K9esTrLIKbWBsoBempWww0viwNyxaw=
      - identity: {}
EOF
For details on etcd encryption, see Encrypting data at rest.
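For reference, in v1.10 the API server picks this file up through a flag along the lines of the following; the downloaded kube-apiserver.yml is assumed to contain it already:
# kube-apiserver flag referencing the encryption config (v1.10 flag name; illustrative)
--experimental-encryption-provider-config=/etc/kubernetes/encryption.yml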
All MASTER nodes: create the audit-policy.yml file under /etc/kubernetes/:
$ cat <<EOF > /etc/kubernetes/audit-policy.yml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  - level: Metadata
EOF
For details on the audit policy, see Auditing.
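Likewise, the audit policy is typically wired into kube-apiserver with flags such as these (illustrative; the log path shown is an assumption):
# kube-apiserver audit flags (illustrative)
--audit-policy-file=/etc/kubernetes/audit-policy.yml
--audit-log-path=/var/log/kubernetes/audit.log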
All MASTER nodes: download the haproxy.cfg configuration file:
$ mkdir -p /etc/haproxy/
$ wget "${CORE_URL}/haproxy.cfg" -O /etc/haproxy/haproxy.cfg
Remember to adjust the IP addresses in the configuration file.
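As a sketch of what needs adjusting (the downloaded haproxy.cfg may be laid out differently), the frontend listens on 6443 and the backend points at each master's kube-apiserver on 5443:
# Illustrative haproxy.cfg excerpt — replace the server IPs with your masters
frontend kube-apiserver-https
    bind :6443
    mode tcp
    default_backend kube-apiserver-backend
backend kube-apiserver-backend
    mode tcp
    server apiserver-1 10.140.0.2:5443 check
    server apiserver-2 10.140.0.3:5443 check
    server apiserver-3 10.140.0.4:5443 check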
All MASTER nodes: download kubelet.service to manage the kubelet:
$ mkdir -p /etc/systemd/system/kubelet.service.d
$ wget "${CORE_URL}/kubelet.service" -O /etc/systemd/system/kubelet.service
$ wget "${CORE_URL}/10-kubelet.conf" -O /etc/systemd/system/kubelet.service.d/10-kubelet.conf
If the cluster DNS address or cluster domain differs from the defaults, adjust 10-kubelet.conf accordingly.
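The DNS-related kubelet flags in 10-kubelet.conf usually look like the following; the values shown match the kube-dns Service deployed later and are assumptions about the downloaded file:
# kubelet flags inside 10-kubelet.conf that may need adjusting (illustrative values)
--cluster-dns=10.96.0.10 --cluster-domain=cluster.local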
Create the storage directories each component needs, then start the service:
$ mkdir -p /var/lib/kubelet /var/log/kubernetes /var/lib/etcd
$ systemctl daemon-reload && systemctl enable kubelet.service
$ systemctl start kubelet.service && systemctl status kubelet.service
Check that the services are up and listening:
$ netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9090            0.0.0.0:*               LISTEN      18894/haproxy
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      18145/kubelet
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      18894/haproxy
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      18788/kube-schedule
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      18727/kube-controll
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1351/sshd
tcp6       0      0 :::5443                 :::*                    LISTEN      19051/kube-apiserve
tcp6       0      0 :::10250                :::*                    LISTEN      18145/kubelet
tcp6       0      0 :::2379                 :::*                    LISTEN      19173/etcd
tcp6       0      0 :::2380                 :::*                    LISTEN      19173/etcd
tcp6       0      0 :::10255                :::*                    LISTEN      18145/kubelet
tcp6       0      0 :::22                   :::*                    LISTEN      1351/sshd
Verify Cluster Status
Copy the admin kubeconfig file to the default kubectl location (setting the KUBECONFIG environment variable works as well):
$ cp /etc/kubernetes/admin.conf ~/.kube/config
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

$ kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
instance-2   NotReady   master   9m    v1.10.0
instance-3   NotReady   master   9m    v1.10.0
instance-4   NotReady   master   9m    v1.10.0

$ kubectl -n kube-system get po -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP           NODE
etcd-instance-2                      1/1     Running   0          9m    10.140.0.2   instance-2
etcd-instance-3                      1/1     Running   3          9m    10.140.0.3   instance-3
etcd-instance-4                      1/1     Running   0          8m    10.140.0.4   instance-4
haproxy-instance-2                   1/1     Running   0          8m    10.140.0.2   instance-2
haproxy-instance-3                   1/1     Running   0          8m    10.140.0.3   instance-3
haproxy-instance-4                   1/1     Running   0          8m    10.140.0.4   instance-4
kube-apiserver-instance-2            1/1     Running   6          9m    10.140.0.2   instance-2
kube-apiserver-instance-3            1/1     Running   5          8m    10.140.0.3   instance-3
kube-apiserver-instance-4            1/1     Running   5          8m    10.140.0.4   instance-4
kube-controller-manager-instance-2   1/1     Running   0          8m    10.140.0.2   instance-2
kube-controller-manager-instance-3   1/1     Running   0          8m    10.140.0.3   instance-3
kube-controller-manager-instance-4   1/1     Running   0          8m    10.140.0.4   instance-4
kube-scheduler-instance-2            1/1     Running   0          9m    10.140.0.2   instance-2
kube-scheduler-instance-3            1/1     Running   0          8m    10.140.0.3   instance-3
kube-scheduler-instance-4            1/1     Running   0          8m    10.140.0.4   instance-4
If the cluster status check fails, inspect the logs of the relevant service via docker to troubleshoot. For example, if an etcd member is unreachable the output looks like this:
$ kubectl get cs
NAME                 STATUS      MESSAGE                                                                ERROR
controller-manager   Healthy     ok
scheduler            Healthy     ok
etcd-0               Healthy     {"health": "true"}
etcd-2               Healthy     {"health": "true"}
etcd-1               Unhealthy   Get https://10.140.0.3:2379/health: net/http: TLS handshake timeout
In that case, clear the /var/lib/etcd/ directory on that node and restart it.
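A possible recovery sequence on the affected node looks like this sketch; the exact container name depends on your environment:
# Wipe the stale etcd data and restart the etcd container started by the static Pod (illustrative)
$ rm -rf /var/lib/etcd/*
$ docker ps | grep etcd            # find the etcd container ID (hypothetical step)
$ docker restart <etcd-container-id>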
Next, confirm that commands such as fetching logs work against the cluster:
$ kubectl -n kube-system logs -f kube-scheduler-instance-2
Error from server (NotFound): pods "kube-scheduler-instance-2" not found
Here a 403 Forbidden error appears because the kube-apiserver user has no permission to access the node resources; this is expected at this point.
Because of this permission issue, an apiserver-to-kubelet-rbac.yml must be created to grant the API server access, so that commands such as logs and exec can be run against containers on the nodes. Run the following on any master node:
$ kubectl apply -f "${CORE_URL}/apiserver-to-kubelet-rbac.yml.conf"
clusterrole.rbac.authorization.k8s.io "system:kube-apiserver-to-kubelet" created
clusterrolebinding.rbac.authorization.k8s.io "system:kube-apiserver" created

# test logs
$ kubectl -n kube-system logs -f kube-scheduler-instance-2
W0411 04:24:45.663651       1 server.go:163] WARNING: all flags other than --config are deprecated. Please begin using a config file ASAP.
I0411 04:24:45.674465       1 server.go:555] Version: v1.10.0
I0411 04:24:45.677968       1 server.go:574] starting healthz server on 127.0.0.1:10251
...
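For reference, such an apiserver-to-kubelet ClusterRole typically grants access to the kubelet sub-resources and binds it to the kube-apiserver user; the downloaded file is assumed to look roughly like this sketch:
# Illustrative contents of apiserver-to-kubelet-rbac.yml.conf (the real file may differ)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources: ["nodes/proxy", "nodes/stats", "nodes/log", "nodes/spec", "nodes/metrics"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver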
Taint the master nodes so that ordinary workloads are not scheduled onto them:
$ kubectl taint nodes node-role.kubernetes.io/master="":NoSchedule --all
node "instance-2" tainted
node "instance-3" tainted
node "instance-4" tainted
For details, see Taints and Tolerations.
Create the TLS Bootstrapping RBAC and Secret
Because TLS authentication is enabled in this installation, each node's kubelet must hold a certificate signed by the kube-apiserver CA before it can talk to the kube-apiserver. Manually signing a certificate for every node is tedious, and it becomes hard to manage as nodes are added. TLS bootstrapping solves this: the kubelet first connects to the kube-apiserver as a predefined low-privilege user and then requests a certificate signature; when the bootstrap token matches, the node's kubelet certificate is signed and issued dynamically by the kube-apiserver. See TLS Bootstrapping and Authenticating with Bootstrap Tokens for details.
On a single MASTER node, create the variables used to produce BOOTSTRAP_TOKEN and build the bootstrap-kubelet.conf kubeconfig:
$ cd /etc/kubernetes/pki
$ export TOKEN_ID=$(openssl rand 3 -hex)
$ export TOKEN_SECRET=$(openssl rand 8 -hex)
$ export BOOTSTRAP_TOKEN=${TOKEN_ID}.${TOKEN_SECRET}
$ export KUBE_APISERVER="https://192.168.35.10:6443"

# bootstrap set cluster
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../bootstrap-kubelet.conf
# bootstrap set credentials
$ kubectl config set-credentials tls-bootstrap-token-user \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=../bootstrap-kubelet.conf
# bootstrap set context
$ kubectl config set-context tls-bootstrap-token-user@kubernetes \
    --cluster=kubernetes \
    --user=tls-bootstrap-token-user \
    --kubeconfig=../bootstrap-kubelet.conf
# bootstrap use default context
$ kubectl config use-context tls-bootstrap-token-user@kubernetes \
    --kubeconfig=../bootstrap-kubelet.conf
If you prefer to sign the certificates manually instead, see Certificate.
On a single MASTER node, create the TLS bootstrap secret used for automatic certificate signing:
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: ${TOKEN_ID}
  token-secret: ${TOKEN_SECRET}
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token
EOF
secret "bootstrap-token-938d23" created
On a single MASTER node, create the TLS bootstrap auto-approve RBAC:
$ kubectl apply -f "${CORE_URL}/kubelet-bootstrap-rbac.yml.conf"
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created
clusterrolebinding.rbac.authorization.k8s.io "node-autoapprove-bootstrap" created
clusterrolebinding.rbac.authorization.k8s.io "node-autoapprove-certificate-rotation" created
Kubernetes Nodes
All NODE nodes: the following describes the node deployment. First install the components the nodes need:
$ wget https://dl.k8s.io/v1.10.0/kubernetes-node-linux-amd64.tar.gz
$ tar zxf kubernetes-node-linux-amd64.tar.gz && \
  cp kubernetes/node/bin/{kubelet,kube-proxy} /usr/local/bin/
If the CNI plugins are not installed on these nodes yet, install them as described earlier.
On a single MASTER node, copy the required files to each node:
$ cd /etc/kubernetes/pki
$ for NODE in instance-5 instance-6 instance-7; do
    echo "--- $NODE ---"
    ssh ${NODE} "mkdir -p /etc/kubernetes/pki/"
    ssh ${NODE} "mkdir -p /etc/etcd/ssl"
    # Etcd
    for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
      scp /etc/etcd/ssl/${FILE} ${NODE}:/etc/etcd/ssl/${FILE}
    done
    # Kubernetes
    for FILE in pki/ca.pem pki/ca-key.pem bootstrap-kubelet.conf; do
      scp /etc/kubernetes/${FILE} ${NODE}:/etc/kubernetes/${FILE}
    done
  done
Download kubelet.service to manage the kubelet:
$ export CORE_URL="https://mirror.shileizcc.com/Kubernetes/1.10/node/"
$ mkdir -p /etc/systemd/system/kubelet.service.d
$ wget "${CORE_URL}/kubelet.service" -O /etc/systemd/system/kubelet.service
$ wget "${CORE_URL}/10-kubelet.conf" -O /etc/systemd/system/kubelet.service.d/10-kubelet.conf
If the cluster DNS address or cluster domain differs from the defaults, adjust 10-kubelet.conf accordingly.
Start the kubelet:
$ mkdir -p /var/lib/kubelet /var/log/kubernetes /var/lib/etcd
$ systemctl daemon-reload && systemctl enable kubelet.service
$ systemctl start kubelet.service && systemctl status kubelet.service
Verify the Cluster
On a single MASTER node, once the nodes have started, check the cluster status:
$ kubectl get csr
...
node-csr-uz8g7WPpxSmBadOtkVsa0ikintMVQkp0RskJWEWDxRg   25s   system:bootstrap:938d23   Approved,Issued

$ kubectl get node
...
instance-5   Ready   node   3m   v1.10.0
Kubernetes Core Addons Deployment
With all of the steps above complete, a few add-ons still need to be deployed; among them, Kubernetes DNS and Kubernetes Proxy are essential.
Kubernetes Proxy
On a single MASTER node. kube-proxy is the key component that implements Services: it runs on every node, watches the API server for Service and Endpoint changes, and programs iptables accordingly to forward traffic. Here it is deployed as a DaemonSet, together with the credentials it needs.
Download kube-proxy.yml to create the Kubernetes Proxy add-on:
$ kubectl apply -f "https://mirror.shileizcc.com/Kubernetes/1.10/addon/kube-proxy.yml.conf"
serviceaccount "kube-proxy" created
clusterrolebinding.rbac.authorization.k8s.io "system:kube-proxy" created
configmap "kube-proxy" created
daemonset.apps "kube-proxy" created

$ kubectl -n kube-system get po -o wide -l k8s-app=kube-proxy
NAME               READY   STATUS    RESTARTS   AGE   IP           NODE
kube-proxy-8sb79   1/1     Running   0          7s    10.140.0.2   instance-2
kube-proxy-n5wkz   1/1     Running   0          7s    10.140.0.4   instance-4
kube-proxy-svdwl   1/1     Running   0          7s    10.140.0.3   instance-3
Kubernetes DNS
On a single MASTER node. Kube-DNS is the key add-on that lets Pods inside the cluster talk to each other by name: it allows Pods to reach Services through domain names. It is composed of kube-dns and SkyDNS; kube-dns watches Services and Endpoints and feeds SkyDNS the information needed to keep name resolution up to date.
$ kubectl apply -f "https://mirror.shileizcc.com/Kubernetes/1.10/addon/kube-dns.yml.conf"
serviceaccount "kube-dns" created
service "kube-dns" created
deployment.extensions "kube-dns" created

$ kubectl -n kube-system get po -l k8s-app=kube-dns
NAME                        READY   STATUS    RESTARTS   AGE
kube-dns-654684d656-vvznf   0/3     Pending   0          15s
The Pod stays in Pending because the Pod network component has not been deployed yet.
Calico Network
Calico is a pure layer-3 data-center networking solution (no overlay network required). Its strength is its integration with a wide range of cloud-native platforms. On each node, Calico uses the Linux kernel to implement an efficient vRouter that handles packet forwarding, and as data-center complexity grows, BGP route reflectors can be used to scale.
This guide does not set up the Calico network manually; if you want to, see the Integration Guide.
$ kubectl apply -f "https://mirror.shileizcc.com/Kubernetes/1.10/network/calico.yml.conf"
configmap "calico-config" created
daemonset.extensions "calico-node" created
deployment.extensions "calico-kube-controllers" created
clusterrolebinding.rbac.authorization.k8s.io "calico-cni-plugin" created
clusterrole.rbac.authorization.k8s.io "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding.rbac.authorization.k8s.io "calico-kube-controllers" created
clusterrole.rbac.authorization.k8s.io "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created

$ kubectl -n kube-system get po -l k8s-app=calico-node -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP           NODE
calico-node-fxmpx   2/2     Running   0          14s   10.140.0.4   instance-4
calico-node-n7mmv   2/2     Running   0          14s   10.140.0.2   instance-2
calico-node-pn66l   2/2     Running   0          14s   10.140.0.3   instance-3
You must edit calico.yml.conf to set the etcd endpoints and the network interface name for your environment.
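As a sketch (the structure of the downloaded manifest may differ), the fields that usually need editing are the etcd endpoints in the calico-config ConfigMap and the interface used for IP autodetection; ens4 below is only a placeholder for the actual NIC name:
# calico-config ConfigMap: point Calico at the cluster's etcd members (illustrative)
etcd_endpoints: "https://10.140.0.2:2379,https://10.140.0.3:2379,https://10.140.0.4:2379"
# calico-node DaemonSet container env: select the host interface (placeholder value)
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens4"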
On a single MASTER node, download the Calico CLI to inspect the Calico nodes:
$ wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.0/calicoctl -O /usr/local/bin/calicoctl
$ chmod u+x /usr/local/bin/calicoctl
$ cat <<EOF > ~/calico-rc
export ETCD_ENDPOINTS="https://10.140.0.2:2379,https://10.140.0.3:2379,https://10.140.0.4:2379"
export ETCD_CA_CERT_FILE="/etc/etcd/ssl/etcd-ca.pem"
export ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
export ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
EOF
$ . ~/calico-rc
$ calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.140.0.3   | node-to-node mesh | up    | 06:00:48 | Established |
| 10.140.0.4   | node-to-node mesh | up    | 06:00:48 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
Check whether the previously pending Pod is now running:
$ kubectl -n kube-system get po -l k8s-app=kube-dns -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE
kube-dns-654684d656-hfpsd   3/3     Running   0          40m   10.244.89.2   instance-3
Kubernetes Extra Addons Deployment
Deploy some of the commonly used official add-ons, such as Dashboard and Heapster.
Dashboard
Dashboard is the official web UI developed by the Kubernetes community. With it, administrators can manage a Kubernetes cluster through the browser; besides easing administration, it also visualizes resources, giving a more intuitive view of the system.
On a single MASTER node, create the Kubernetes Dashboard with kubectl:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
$ kubectl -n kube-system get po,svc -l k8s-app=kubernetes-dashboard
NAME                                    READY   STATUS        RESTARTS   AGE
kubernetes-dashboard-7d5dcdb6d9-9dfs6   0/1     Terminating   0          4m
kubernetes-dashboard-7d5dcdb6d9-g6tzx   1/1     Running       0          20s

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.100.151.252   <none>        443/TCP   4m
Additionally create a ClusterRoleBinding named open-api. This is only for convenience during testing; do not enable it in normal environments, otherwise all APIs become accessible directly:
$ cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: open-api
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:anonymous
EOF
Note: an administrator can grant API access to specific users instead; here, for convenience, the cluster-admin cluster role is bound directly.
Since version 1.7 the Dashboard no longer ships with full privileges, so create a service account and bind it to the cluster-admin role:
$ kubectl -n kube-system create sa dashboard
$ kubectl create clusterrolebinding dashboard --clusterrole cluster-admin --serviceaccount=kube-system:dashboard
$ SECRET=$(kubectl -n kube-system get sa dashboard -o yaml | awk '/dashboard-token/ {print $3}')
$ kubectl -n kube-system describe secrets ${SECRET} | awk '/token:/{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tcDJzcTYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2YxOTVjYTYtM2Q2Zi0xMWU4LThkN2EtNDIwMTBhOGMwMDAzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZCJ9.FuUe6sRcH2Y856JcBDcBf0fHQ8mkDL0kTZCNE8hdHP-UiryejHd75SdhZQgWMjJfPZKRQQaRIbeXF6zJ9GSxjoEbofEga85EhczmfPSPN_HfFxin4KTCeQriGXpzKykRvBv0jW0Oywoxp_vU5DHyLQZWu_PXuP2Jct7EnLAG77PqhASZoR_CTtOcFDNftY9QTjpG2rJH9lPFDrMFFNvh3d3WYdc9D06nz3wLPOtSOlS1Pl8Dx0LnYeJ_RXBHMFvwRNitYfoQpaM6QSa3NLdabi60ZowyQk7zTnxtCO9rNlV_pn9HU_cEs9Z0rmSviY097hh2EkD2n4ti1jZt3h8pIA
Copy the token and paste it into the Kubernetes Dashboard login page. Note that, in general, access should be granted per user with specific permissions.
The Dashboard is available at: https://192.168.35.10:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Heapster
Heapster is the container cluster monitoring and performance analysis tool maintained by the Kubernetes community. Heapster obtains the list of nodes from the Kubernetes apiserver, collects metrics from each node's kubelet, stores everything in its InfluxDB backend, and Grafana then reads the data from InfluxDB for visualization.
On a single MASTER node, create the Kubernetes monitoring stack with kubectl:
$ kubectl apply -f "https://mirror.shileizcc.com/Kubernetes/1.10/addon/kube-monitor.yml.conf"
$ kubectl -n kube-system get po,svc
NAME                                       READY   STATUS     RESTARTS   AGE   IP               NODE
calico-kube-controllers-7779fd5f4c-2zb5r   1/1     Running    0          57m   10.140.0.2       instance-2
calico-node-6r8j8                          2/2     NodeLost   0          21m   10.140.0.5       instance-5
calico-node-8f4kz                          2/2     Running    0          57m   10.140.0.4       instance-4
calico-node-hf7nw                          2/2     Running    0          57m   10.140.0.3       instance-3
calico-node-z9pt6                          2/2     Running    0          57m   10.140.0.2       instance-2
etcd-instance-2                            1/1     Running    1          1h    10.140.0.2       instance-2
etcd-instance-3                            1/1     Running    0          1h    10.140.0.3       instance-3
etcd-instance-4                            1/1     Running    0          1h    10.140.0.4       instance-4
haproxy-instance-2                         1/1     Running    0          1h    10.140.0.2       instance-2
haproxy-instance-3                         1/1     Running    0          1h    10.140.0.3       instance-3
haproxy-instance-4                         1/1     Running    0          1h    10.140.0.4       instance-4
heapster-697bddffc4-tpf8r                  4/4     Running    0          16s   10.244.118.196   instance-4
influxdb-grafana-848cd4dd9c-wccxj          2/2     Running    0          37s   10.244.118.195   instance-4
keepalived-instance-2                      1/1     Running    1          33m   10.140.0.2       instance-2
keepalived-instance-3                      1/1     Running    0          33m   10.140.0.3       instance-3
keepalived-instance-4                      1/1     Running    0          26m   10.140.0.4       instance-4
kube-apiserver-instance-2                  1/1     Running    2          1h    10.140.0.2       instance-2
kube-apiserver-instance-3                  1/1     Running    0          1h    10.140.0.3       instance-3
kube-apiserver-instance-4                  1/1     Running    0          1h    10.140.0.4       instance-4
kube-controller-manager-instance-2         1/1     Running    0          1h    10.140.0.2       instance-2
kube-controller-manager-instance-3         1/1     Running    0          1h    10.140.0.3       instance-3
kube-controller-manager-instance-4         1/1     Running    0          1h    10.140.0.4       instance-4
kube-dns-654684d656-hfpsd                  3/3     Running    0          57m   10.244.89.2      instance-3
kube-proxy-hp9hg                           1/1     NodeLost   0          21m   10.140.0.5       instance-5
kube-proxy-mqv4s                           1/1     Running    0          57m   10.140.0.3       instance-3
kube-proxy-q4kmb                           1/1     Running    0          57m   10.140.0.4       instance-4
kube-proxy-rllw7                           1/1     Running    0          57m   10.140.0.2       instance-2
kube-scheduler-instance-2                  1/1     Running    0          1h    10.140.0.2       instance-2
kube-scheduler-instance-3                  1/1     Running    0          1h    10.140.0.3       instance-3
kube-scheduler-instance-4                  1/1     Running    0          1h    10.140.0.4       instance-4
kubernetes-dashboard-7d5dcdb6d9-9dfs6      0/1     Unknown    0          14m   <none>           instance-5
kubernetes-dashboard-7d5dcdb6d9-g6tzx      1/1     Running    0          10m   10.244.118.193   instance-4

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE   SELECTOR
heapster               ClusterIP   10.104.187.8     <none>        80/TCP              37s   k8s-app=heapster
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP       57m   k8s-app=kube-dns
kubernetes-dashboard   ClusterIP   10.100.151.252   <none>        443/TCP             14m   k8s-app=kubernetes-dashboard
monitoring-grafana     ClusterIP   10.106.34.117    <none>        80/TCP              37s   k8s-app=influxGrafana
monitoring-influxdb    ClusterIP   10.106.234.32    <none>        8083/TCP,8086/TCP   37s   k8s-app=influxGrafana
Once created, it can be accessed:
Grafana is available at: https://192.168.35.10:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/
Ingress Controller
Ingress exposes services inside the cluster through a load balancer such as Nginx or HAProxy. Ingress resources define how domain names map to services inside Kubernetes, which avoids the problem of exposing too many NodePorts.
On a single MASTER node, create the Ingress Controller with kubectl:
$ kubectl create ns ingress-nginx
$ kubectl apply -f "https://mirror.shileizcc.com/Kubernetes/1.10/addon/ingress-controller.yml.conf"
$ kubectl -n ingress-nginx get po
NAME                                       READY   STATUS    RESTARTS   AGE
default-http-backend-5c6d95c48-f6gw2       1/1     Running   0          33s
nginx-ingress-controller-699cdf846-n2sxv   1/1     Running   0          33s
Alternatively, the Traefik Ingress Controller can be used here.
Testing Ingress
First create an Nginx HTTP server Deployment and Service, then an Ingress that maps test.nginx.com to it:
$ kubectl run nginx-dp --image nginx --port 80
$ kubectl expose deploy nginx-dp --port 80
$ kubectl get po,svc
$ cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-nginx-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: test.nginx.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx-dp
              servicePort: 80
EOF
Test with curl:
$ curl 192.168.35.10 -H 'Host: test.nginx.com'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

# check that other domain names return 404
$ curl 192.168.35.10 -H 'Host: test.nginx.com1'
default backend - 404
Helm Tiller Server
Helm is the management tool for Kubernetes Charts; a Chart is a set of pre-configured Kubernetes resources. The Tiller server receives requests from the client and communicates with the kube-apiserver; based on the contents of a Chart, it generates and manages the Kubernetes deployment manifests for the corresponding API objects (known as a Release).
On a single MASTER node, install the Helm client:
$ wget -qO- https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz | tar -zx
$ sudo mv linux-amd64/helm /usr/local/bin/
All NODE nodes: install socat:
$ sudo apt-get install -y socat
Initialize Helm (install the Tiller server):
$ kubectl -n kube-system create sa tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

$ kubectl -n kube-system get po -l app=helm
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-75b7d95f5c-cmrln   1/1     Running   0          12s

$ helm version
Client: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
Testing Helm
Deploy a simple Jenkins instance as a functional test:
$ helm install --name demo --set Persistence.Enabled=false stable/jenkins
NAME:   demo
LAST DEPLOYED: Wed Apr 11 10:23:33 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME          TYPE    DATA  AGE
demo-jenkins  Opaque  2     0s

==> v1/ConfigMap
NAME                DATA  AGE
demo-jenkins        3     0s
demo-jenkins-tests  1     0s

==> v1/Service
NAME                TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)         AGE
demo-jenkins-agent  ClusterIP     10.109.239.81  <none>       50000/TCP       0s
demo-jenkins        LoadBalancer  10.102.51.59   <pending>    8080:31076/TCP  0s

==> v1beta1/Deployment
NAME          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
demo-jenkins  1        1        1           0          0s

==> v1/Pod (related)
NAME                          READY  STATUS    RESTARTS  AGE
demo-jenkins-7bf4bfcff-2rzgt  0/1    Init:0/1  0         0s

NOTES:
1. Get your 'admin' user password by running:
   printf $(kubectl get secret --namespace default demo-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode); echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
   You can watch the status of by running 'kubectl get svc --namespace default -w demo-jenkins'
   export SERVICE_IP=$(kubectl get svc --namespace default demo-jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
   echo http://$SERVICE_IP:8080/login
3. Login with the password from step 1 and the username: admin

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
#################################################################################
######   WARNING: Persistence is disabled!!! You will lose your data when   #####
######            the Jenkins pod is terminated.                            #####
#################################################################################
$ kubectl get po,svc -l app=demo-jenkins
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE
demo-jenkins-7bf4bfcff-2rzgt   1/1     Running   0          2m    10.244.56.6   instance-2

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE   SELECTOR
demo-jenkins         LoadBalancer   10.102.51.59    <pending>     8080:31076/TCP   2m    component=demo-jenkins-master
demo-jenkins-agent   ClusterIP      10.109.239.81   <none>        50000/TCP        2m    component=demo-jenkins-master

# get the admin account password
$ printf $(kubectl get secret --namespace default demo-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode); echo
E6zR4HiCh8
Once the deployment succeeds, it can be accessed:
Access it at http://192.168.35.10:31076 (note that the port must match the NodePort shown above).
Delete the release:
$ helm ls
NAME   REVISION   UPDATED                    STATUS     CHART            NAMESPACE
demo   1          Wed Apr 11 10:23:33 2018   DEPLOYED   jenkins-0.14.4   default
$ helm delete demo --purge
release "demo" deleted
More Helm apps can be found on Kubeapps Hub.
Testing the Cluster
On a single MASTER node: SSH into instance-2 and shut that node down:
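Any ordinary shutdown works here; as a sketch:
# On instance-2 (illustrative): power the node off to simulate a master failure
$ sudo shutdown -h now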
Then, from instance-3, check whether the node is reported as down:
$ kubectl get cs
NAME                 STATUS      MESSAGE                                                                                                                ERROR
scheduler            Healthy     ok
controller-manager   Healthy     ok
etcd-1               Healthy     {"health": "true"}
etcd-2               Healthy     {"health": "true"}
etcd-0               Unhealthy   Get https://10.140.0.2:2379/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)