KIND Network Plugin Experiments
Preface
For an introduction to the KIND environment and how to start it (with the default network plugin disabled), see the earlier post "KIND Network Plugin Experiments: Using bridge as the CNI Plugin".
That post covered using bridge as the KIND network plugin; this post uses flannel instead.
Since KIND starts without any network plugin, coredns cannot be assigned an address and never comes up; this is the pod state you see after starting the cluster with the default network plugin disabled.
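For reference, a cluster in that state can be created with a config along the following lines (a minimal sketch; the cluster name, node layout and podSubnet are assumptions and should match your own setup):

# Hypothetical KIND config: start a cluster without the default CNI (kindnet)
cat <<EOF | kind create cluster --name cluster1 --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true      # skip kindnet so a CNI can be installed by hand
  podSubnet: "10.244.0.0/16"   # range the node podCIDRs are carved from
nodes:
  - role: control-plane
  - role: worker
EOF

Until a CNI is installed, kubectl get pods -A will show the coredns pods stuck without an IP.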
Deploying flannel
kubectl apply -f flannel-v0.14.0.yaml
# flannel-v0.14.0.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.14.0
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.14.0
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq=false
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: true
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
After the deployment completes, flannel's configuration file is generated under /etc/cni/net.d on each node:
/etc/cni/net.d# cat 10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
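To confirm that the DaemonSet actually rolled out to every node, something like the following can be used (the names and labels are taken from the manifest above):

kubectl -n kube-system get daemonset kube-flannel-ds
kubectl -n kube-system get pods -l app=flannel -o wide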
Once deployed, kube-flannel runs as a DaemonSet on both the master and worker nodes, but coredns and the other pods are still broken.
kubectl describe shows:
… failed to find plugin "flannel" in path [/opt/cni/bin]
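That event can be pulled up with a command along these lines (the k8s-app=kube-dns selector is the standard coredns label, an assumption here):

kubectl -n kube-system describe pods -l k8s-app=kube-dns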
We need to download the CNI plugins: CNI Plugins v0.8.7. (Why is flannel missing from the CNI Plugins bundle after v1.0.0? Because starting with v1.0.0 the flannel plugin was moved out of the containernetworking/plugins repository into a separate project maintained by flannel.)
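A sketch of fetching the release, assuming an arm64 host (use the amd64 tarball on x86_64 machines):

# Hypothetical download step; adjust version and architecture as needed
wget https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-arm64-v0.8.7.tgz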
Copy the tarball into /opt/cni/bin on both the master and worker nodes and extract it (a per-node loop is sketched after the listing below):
# copy the plugins into one node; the other nodes are handled the same way
sudo docker cp '/home/cni-plugins-linux-arm64-v0.8.7.tgz' aa7c807c9c4a:/opt/cni/bin/
sudo docker exec -it aa7c807c9c4a /bin/bash
root@cluster1-worker:/# cd /opt/cni/bin/
root@cluster1-worker:/opt/cni/bin# ls
cni-plugins-linux-arm64-v0.8.7.tgz host-local loopback portmap ptp
root@cluster1-worker:/opt/cni/bin# tar -xzvf cni-plugins-linux-arm64-v0.8.7.tgz
./
./macvlan
./flannel
./static
./vlan
./portmap
./host-local
./bridge
./tuning
./firewall
./host-device
./sbr
./loopback
./dhcp
./ptp
./ipvlan
./bandwidth
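Rather than repeating the docker cp/exec steps by hand on each node, the same installation can be scripted over all KIND nodes (a sketch; cluster1 is the cluster name assumed from the container names above):

for node in $(kind get nodes --name cluster1); do
  docker cp /home/cni-plugins-linux-arm64-v0.8.7.tgz "$node":/opt/cni/bin/
  docker exec "$node" tar -xzf /opt/cni/bin/cni-plugins-linux-arm64-v0.8.7.tgz -C /opt/cni/bin/
done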
Once this is done, all pods in the cluster run normally.
Entering the worker node, we can see:
the cni0 bridge and the flannel.1 virtual device; the MTU of 1450 (1500 minus the 50-byte VXLAN encapsulation overhead) indicates that VXLAN is used for traffic between nodes.
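Those two devices can be inspected from inside the node container, for example (cluster1-worker is the node name used in the earlier steps):

docker exec cluster1-worker ip -d link show flannel.1   # vxlan device, mtu 1450
docker exec cluster1-worker ip addr show cni0           # bridge the pod veths attach to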
Each node's subnet information is stored in the node's **/run/flannel/subnet.env** file:
root@cluster1-worker:/run/flannel# cat subnet.env
FLANNEL_NETWORK=10.100.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
Verification
Deploy a test deployment (one possible setup is sketched below); the test shows that IP allocation works as expected.
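One way to spin up such a test workload (a sketch; the deployment name, image and replica count are arbitrary choices, not from the original setup):

kubectl create deployment pingtest --image=busybox --replicas=4 -- sleep 3600
kubectl get pods -o wide   # pods on both nodes should get 10.244.x.x addresses

Then kubectl exec into a pod on one node and reach a pod on the other.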
Enter the pod on master (10.244.0.2) and ping the pod on worker (10.244.1.10):
/ # ping 10.244.1.10
PING 10.244.1.10 (10.244.1.10): 56 data bytes
64 bytes from 10.244.1.10: seq=0 ttl=62 time=0.516 ms
64 bytes from 10.244.1.10: seq=1 ttl=62 time=0.480 ms
^C
--- 10.244.1.10 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.480/0.498/0.516 ms
/ # traceroute 10.244.1.10
traceroute to 10.244.1.10 (10.244.1.10), 30 hops max, 46 byte packets
1 10.244.0.1 (10.244.0.1) 0.057 ms 0.029 ms 0.024 ms
2 10.244.1.0 (10.244.1.0) 0.025 ms 0.021 ms 0.017 ms
3 10.244.1.10 (10.244.1.10) 0.015 ms 0.013 ms 0.008 ms