Kubernetes HPA Controller: Elastic Scaling of Pods
Introduction to Pod Scaling
- Dynamically adjust the number of pod replicas based on current pod load: scale out automatically during business peaks so requests are served promptly.
- Scale pods back in during off-peak periods to cut costs and improve efficiency.
- Public clouds additionally support node-level elastic scaling.
Manually Adjusting the Pod Replica Count
#Currently there is 1 pod replica
[root@K8s-ansible ~]#kubectl get pod -n mooreyxia
NAME READY STATUS RESTARTS AGE
wordpress-app-deployment-598fd848b4-sc8r6 2/2 Running 4 (48m ago) 38h
[root@K8s-ansible ~]#kubectl get deployments.apps -n mooreyxia
NAME READY UP-TO-DATE AVAILABLE AGE
wordpress-app-deployment 1/1 1 1 38h
#Scale up to 2 replicas
[root@K8s-ansible ~]#kubectl scale deployment wordpress-app-deployment --replicas=2 -n mooreyxia
deployment.apps/wordpress-app-deployment scaled
Verify the replica count:
[root@K8s-ansible ~]#kubectl get deployments.apps -n mooreyxia
NAME READY UP-TO-DATE AVAILABLE AGE
wordpress-app-deployment 2/2 2 2 38h
[root@K8s-ansible ~]#kubectl get pod -n mooreyxia
NAME READY STATUS RESTARTS AGE
wordpress-app-deployment-598fd848b4-sc8r6 2/2 Running 4 (52m ago) 38h
wordpress-app-deployment-598fd848b4-xjt6b 2/2 Running 0 102s
Types of Autoscaling Controllers
- Horizontal Pod Autoscaler (HPA)
- Horizontally adjusts the number of pod replicas based on pod resource utilization; replicas can span hosts.
- Vertical Pod Autoscaler (VPA)
- Adjusts the maximum resource limits of a single pod based on its resource utilization; if the node lacks resources, the pod is rescheduled and rebuilt on another node. VPA cannot be used together with HPA.
- Cluster Autoscaler (CA)
- Dynamically scales the number of nodes based on node resource usage across the cluster, ensuring CPU and memory remain available for creating pods.
Introduction to the HPA Controller
The Horizontal Pod Autoscaler (HPA) controller automatically adjusts the number of pods running in a k8s cluster based on predefined thresholds and the pods' current resource utilization (automatic horizontal elastic scaling).
--horizontal-pod-autoscaler-sync-period #interval at which the HPA controller queries metrics and reconciles the replica count; default 15s.
--horizontal-pod-autoscaler-downscale-stabilization #scale-down stabilization window; default 5 minutes.
--horizontal-pod-autoscaler-cpu-initialization-period #initialization delay; CPU metrics from a pod are ignored during this period after it starts; default 5 minutes.
--horizontal-pod-autoscaler-initial-readiness-delay #pod readiness delay; pods inside this window are considered not yet ready and their data is not collected; default 30 seconds.
--horizontal-pod-autoscaler-tolerance #metric deviation the HPA controller tolerates (float, default 0.1): scaling only triggers when the ratio of the current metric to the target falls outside 1±0.1, i.e. above 1.1 or below 0.9. For example, with a CPU target of 50% and current utilization of 80%, 80/50 = 1.6 > 1.1, so a scale-out is triggered; the reverse triggers a scale-in.
Trigger condition: avg(CurrentPodsConsumption) / Target > 1.1 or < 0.9, i.e. sum the metric across the N pods, divide by the pod count to get the average, then divide by the target; above 1.1 scale out, below 0.9 scale in.
Formula: TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target) #ceil rounds up to the next integer pod count.
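The trigger ratio and the replica formula can be worked through in shell. The numbers below are made up for illustration (3 pods at 80%, 60% and 70% CPU against a 50% target):

```shell
# Hypothetical sample: 3 pods at 80%, 60%, 70% CPU utilization; target 50%.
sum=$((80 + 60 + 70))
target=50
pods=3
# Tolerance check: avg/target must leave the 0.9..1.1 band before scaling acts.
ratio=$(awk -v s="$sum" -v p="$pods" -v t="$target" 'BEGIN { printf "%.2f", s / p / t }')
echo "$ratio"       # 1.40 -> above 1.1, so a rescale is triggered
# TargetNumOfPods = ceil(sum / Target), done here with integer arithmetic.
desired=$(( (sum + target - 1) / target ))
echo "$desired"     # ceil(210/50) = 5
```

With a 1.4 ratio the tolerance band is exceeded, and ceil(210/50) yields 5 desired replicas.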
Metric data requires metrics-server to be deployed; HPA uses metrics-server as its data source.
https://github.com/kubernetes-sigs/metrics-server
The HPA controller was introduced in k8s 1.1. Early versions used the Heapster component to collect pod metrics; from k8s 1.11 onward Metrics Server performs the collection, exposing the data through aggregated APIs such as metrics.k8s.io, custom.metrics.k8s.io and external.metrics.k8s.io. The HPA controller then queries these APIs to scale pods up or down based on resource utilization.
Deploying metrics-server
Metrics Server is Kubernetes' built-in source of container resource metrics.
Metrics Server collects resource metrics from the kubelet on each node and exposes them through the Metrics API on the Kubernetes apiserver for use by the Horizontal Pod Autoscaler and Vertical Pod Autoscaler; the data can also be viewed with kubectl top node/pod.
- Deploy metrics-server
#Source repository; note version compatibility with your Kubernetes release
https://github.com/kubernetes-sigs/metrics-server
https://github.com/kubernetes-sigs/metrics-server/releases/tag/v0.6.1
#Download the image archive and load it into Docker
[root@K8s-ansible metrics-server-0.6.1-case]#tree .
.
├── hpa.yaml
├── metrics-server-v0.6.1.tar.gz #image archive (loaded below)
├── metrics-server-v0.6.1.yaml #deployment manifest shipped with the release
└── tomcat-app1.yaml #test workload
0 directories, 4 files
[root@K8s-ansible metrics-server-0.6.1-case]#docker load -i metrics-server-v0.6.1.tar.gz
5b1fa8e3e100: Loading layer [==================================================>] 3.697MB/3.697MB
3dc34f14eb83: Loading layer [==================================================>] 66.43MB/66.43MB
Loaded image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
#Push the image to Harbor
[root@K8s-ansible metrics-server-0.6.1-case]#docker tag k8s.gcr.io/metrics-server/metrics-server:v0.6.1 K8s-harbor01.mooreyxia.com/baseimages/metrics-server:v0.6.1
[root@K8s-ansible metrics-server-0.6.1-case]#docker push K8s-harbor01.mooreyxia.com/baseimages/metrics-server:v0.6.1
The push refers to repository [K8s-harbor01.mooreyxia.com/baseimages/metrics-server]
3dc34f14eb83: Pushed
5b1fa8e3e100: Pushed
v0.6.1: digest: sha256:f9800ed2264fba9b5513ca183f0f7988c8597e21c72bb39888168232d378973d size: 739
#Change the image address in the deployment manifest to your own Harbor
[root@K8s-ansible metrics-server-0.6.1-case]#vim metrics-server-v0.6.1.yaml
[root@K8s-ansible metrics-server-0.6.1-case]#cat metrics-server-v0.6.1.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
#image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
image: K8s-harbor01.mooreyxia.com/baseimages/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
[root@K8s-ansible metrics-server-0.6.1-case]#kubectl apply -f metrics-server-v0.6.1.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
#Confirm the pod is running
[root@K8s-ansible metrics-server-0.6.1-case]#kubectl get pod -n kube-system |grep metrics
metrics-server-c995875bc-bvf8z 1/1 Running 0 76s
#Test
[root@K8s-ansible metrics-server-0.6.1-case]#cat tomcat-app1.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
app: mooreyxia-tomcat-app1-deployment-label
name: mooreyxia-tomcat-app1-deployment
namespace: mooreyxia
spec:
replicas: 2
selector:
matchLabels:
app: mooreyxia-tomcat-app1-selector
template:
metadata:
labels:
app: mooreyxia-tomcat-app1-selector
spec:
containers:
- name: mooreyxia-tomcat-app1-container
image: tomcat:7.0.93-alpine
#image: lorel/docker-stress-ng #stress-test image
#args: ["--vm", "2", "--vm-bytes", "256M"]
##command: ["/apps/tomcat/bin/run_tomcat.sh"]
imagePullPolicy: IfNotPresent
##imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 1
memory: "512Mi"
requests:
cpu: 500m
memory: "512Mi"
---
kind: Service
apiVersion: v1
metadata:
labels:
app: mooreyxia-tomcat-app1-service-label
name: mooreyxia-tomcat-app1-service
namespace: mooreyxia
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
nodePort: 40003
selector:
app: mooreyxia-tomcat-app1-selector
[root@K8s-ansible metrics-server-0.6.1-case]#kubectl apply -f tomcat-app1.yaml
deployment.apps/mooreyxia-tomcat-app1-deployment configured
service/mooreyxia-tomcat-app1-service created
#Verify pod metrics
[root@K8s-ansible metrics-server-0.6.1-case]#kubectl top pod -n mooreyxia
NAME CPU(cores) MEMORY(bytes)
mooreyxia-tomcat-app1-deployment-64795c69dc-sxzfd 2m 145Mi
mysql-0 23m 205Mi
mysql-1 22m 206Mi
mysql-2 23m 204Mi
wordpress-app-deployment-598fd848b4-sc8r6 1m 16Mi
wordpress-app-deployment-598fd848b4-xjt6b 1m 15Mi
#Verify node metrics
[root@K8s-ansible metrics-server-0.6.1-case]#kubectl top node -n mooreyxia
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
192.168.11.211 107m 5% 1022Mi 61%
192.168.11.212 135m 6% 1041Mi 62%
192.168.11.213 120m 6% 994Mi 59%
192.168.11.214 168m 8% 961Mi 57%
192.168.11.215 148m 7% 968Mi 57%
192.168.11.216 142m 7% 1191Mi 71%
Deploying the HPA Controller
#Inspect the definition
[root@K8s-ansible metrics-server-0.6.1-case]#kubectl explain HorizontalPodAutoscaler.spec
apiVersion: autoscaling/v2beta1 #API version
kind: HorizontalPodAutoscaler #object type
metadata: #object metadata
namespace: linux36 #namespace the object belongs to
name: linux36-tomcat-app1-podautoscaler #object name
labels: #labels
app: linux36-tomcat-app1 #custom label name
version: v2beta1 #custom label recording the api version
spec: #object spec
scaleTargetRef: #target object to scale: Deployment, ReplicationController/ReplicaSet
apiVersion: apps/v1
#API version, HorizontalPodAutoscaler.spec.scaleTargetRef.apiVersion
kind: Deployment #target object type is Deployment
name: linux36-tomcat-app1-deployment #name of the deployment
minReplicas: 2 #minimum number of pods
maxReplicas: 5 #maximum number of pods
metrics: #metric definitions
- type: Resource #metric type: resource
resource: #resource definition
name: cpu #resource name: cpu
targetAverageUtilization: 80 #target CPU utilization
- type: Resource #metric type: resource
resource: #resource definition
name: memory #resource name: memory
targetAverageValue: 1024Mi #target average memory value
#Example: create an HPA controller to elastically scale pods based on metrics
[root@K8s-ansible metrics-server-0.6.1-case]#cat hpa.yaml
#apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
namespace: mooreyxia
name: mooreyxia-tomcat-app1-podautoscaler
labels:
app: mooreyxia-tomcat-app1
version: v2beta1
spec:
scaleTargetRef:
apiVersion: apps/v1
#apiVersion: extensions/v1beta1
kind: Deployment
name: mooreyxia-tomcat-app1-deployment
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 60
#metrics: #older (v2beta1) style
#- type: Resource
# resource:
# name: cpu
# targetAverageUtilization: 60
#- type: Resource
# resource:
# name: memory
[root@K8s-ansible metrics-server-0.6.1-case]#kubectl apply -f hpa.yaml
horizontalpodautoscaler.autoscaling/mooreyxia-tomcat-app1-podautoscaler created
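Note that the autoscaling/v2beta1 and v2beta2 APIs were removed in Kubernetes 1.26; on current clusters the same autoscaler is written against autoscaling/v2, where resource targets move under a `target` block. A sketch of the equivalent manifest (verify field names on your cluster with `kubectl explain hpa.spec.metrics`):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  namespace: mooreyxia
  name: mooreyxia-tomcat-app1-podautoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mooreyxia-tomcat-app1-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization     # percentage of the pod's CPU request
        averageUtilization: 60
```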
#Check the collected data
[root@K8s-ansible metrics-server-0.6.1-case]#kubectl get hpa -n mooreyxia
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
mooreyxia-tomcat-app1-podautoscaler Deployment/mooreyxia-tomcat-app1-deployment 0%/60% 3 10 3 69s
[root@K8s-ansible metrics-server-0.6.1-case]#kubectl describe hpa mooreyxia-tomcat-app1-podautoscaler -n mooreyxia
----------------------------------------------------
desired: the number of replicas ultimately expected to be READY
updated: the number of replicas that have completed the update
total: total number of replicas
available: the number of currently available replicas
unavailable: the number of unavailable replicas
----------------------------------------------------
Name: mooreyxia-tomcat-app1-podautoscaler
Namespace: mooreyxia
Labels: app=mooreyxia-tomcat-app1
version=v2beta1
Annotations: <none>
CreationTimestamp: Fri, 14 Apr 2023 06:01:47 +0000
Reference: Deployment/mooreyxia-tomcat-app1-deployment
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (2m) / 60%
Min replicas: 3
Max replicas: 10
Deployment pods: 3 current / 3 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ScaleDownStabilized recent recommendations were higher than current one, applying the highest recent recommendation
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited True TooFewReplicas the desired replica count is less than the minimum replica count
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 6m24s horizontal-pod-autoscaler New size: 3; reason: Current number of replicas below Spec.MinReplicas
#Saturate the workload pods' resource usage to trigger scale-out
[root@K8s-ansible metrics-server-0.6.1-case]#cat tomcat-app1.yaml
...
spec:
containers:
- name: mooreyxia-tomcat-app1-container
#image: tomcat:7.0.93-alpine
image: lorel/docker-stress-ng #stress-test image
args: ["--vm", "2", "--vm-bytes", "256M"]
...
#Watch the scaling progress
[root@K8s-ansible metrics-server-0.6.1-case]#kubectl get hpa -n mooreyxia -w
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
mooreyxia-tomcat-app1-podautoscaler Deployment/mooreyxia-tomcat-app1-deployment 83%/60% 1 10 1 14m
mooreyxia-tomcat-app1-podautoscaler Deployment/mooreyxia-tomcat-app1-deployment 53%/60% 1 10 2 15m
Kubernetes Admission Control
Kubernetes API Authorization Modes
Authorization modes: https://kubernetes.io/zh/docs/reference/access-authn-authz/authorization
- Node (node authorization): authorizes API requests issued by the kubelet.
- Grants each node's kubelet permission to read services, endpoints, secrets, configmaps and related state, and to report pod and node status back to the API server.
- Webhook: an HTTP callback that is invoked when a triggering event occurs.
# Kubernetes API version
apiVersion: v1
# kind of the API object
kind: Config
# clusters refers to the remote service.
clusters:
- name: name-of-remote-authz-service
cluster:
# CA for authenticating the remote service.
certificate-authority: /path/to/ca.pem
# Query URL of the remote service. Must use 'https'.
server: https://authz.example.com/authorize
- ABAC (Attribute-Based Access Control): access control based on attributes, used before 1.6; binds attributes directly to accounts.
[root@K8s-master01 ~]#kube-apiserver --help | grep authorization
--secure-port int The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0. (default 6443)
--requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
--authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
--authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
--authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
--authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
--authorization-webhook-version string The API version of the authorization.k8s.io SubjectAccessReview to send to and expect from the webhook. (default "v1beta1")
#RBAC is enabled by default
[root@K8s-master01 ~]#cat /etc/systemd/system/kube-apiserver.service|grep authorization-mode
--authorization-mode=Node,RBAC \
#Flags to enable ABAC
--authorization-mode=...,RBAC,ABAC --authorization-policy-file=mypolicy.json
#User user1 has full permissions on all resources of all API versions in all namespaces ("readonly": true is not set).
{
"apiVersion": "abac.authorization.kubernetes.io/v1beta1",
"kind": "Policy",
"spec": {
"user": "user1",
"namespace": "*",
"resource": "*",
"apiGroup": "*"
}
}
#User user2 has read-only access to pods in namespace myserver.
{
"apiVersion": "abac.authorization.kubernetes.io/v1beta1",
"kind": "Policy",
"spec": {
"user": "user2",
"namespace": "myserver",
"resource": "pods",
"readonly": true
}
}
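The apiserver reads the ABAC policy file one JSON object per line and allows a request if any line matches. A toy illustration of that line-by-line matching with grep (the file path is made up; real evaluation happens inside kube-apiserver):

```shell
# One policy object per line, exactly as kube-apiserver reads the file.
cat > /tmp/mypolicy.jsonl <<'EOF'
{"apiVersion":"abac.authorization.kubernetes.io/v1beta1","kind":"Policy","spec":{"user":"user2","namespace":"myserver","resource":"pods","readonly":true}}
EOF
# Request: user2 wants read access to pods in myserver -> matched by the line above.
if grep -q '"user":"user2"' /tmp/mypolicy.jsonl \
   && grep -q '"resource":"pods"' /tmp/mypolicy.jsonl; then
  result=allowed
else
  result=denied
fi
echo "$result"   # allowed
```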
- RBAC (Role-Based Access Control): access control based on roles; permissions are first attached to a role, then the role is bound to users (Binding), who inherit the role's permissions.
- RBAC overview
- The RBAC API declares four kinds of Kubernetes objects: Role, ClusterRole, RoleBinding and ClusterRoleBinding.
- Role: defines a set of rules for accessing Kubernetes resources within a namespace.
- RoleBinding: binds a user to a Role.
- ClusterRole: defines a set of rules for accessing Kubernetes resources across the cluster (including all namespaces).
- ClusterRoleBinding: binds a user to a ClusterRole.
#First attach permissions to a role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role #object type is Role
metadata:
namespace: default #namespace the role belongs to
name: pod-reader #role name
rules: #authorization rules
- apiGroups: [""] #API group of the target resources; "" means the core API group
resources: ["pods"] #target resource objects
verbs: ["get", "watch", "list"] #verbs this role allows on the resources above
#Then bind the role to a user (Binding)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding #object type is RoleBinding
metadata:
name: read-pods #name of the role binding
namespace: default #namespace the role binding belongs to
subjects: #subjects, given as a list
- kind: User
name: jane #account the role is bound to
apiGroup: rbac.authorization.k8s.io #API group
roleRef: #role reference; "roleRef" specifies whether the binding targets a Role or a ClusterRole
kind: Role #binding type; must be either Role or ClusterRole
name: pod-reader #must match the name of the target Role or ClusterRole
apiGroup: rbac.authorization.k8s.io #API group
RBAC Multi-Account Example
- Use RBAC to implement role-based access control
#Example 1: create a user and log in to the Dashboard with a token
#1.1: Create the service account moore in the target namespace:
[root@K8s-ansible ~]#kubectl create serviceaccount moore -n mooreyxia
serviceaccount/moore created
#1.2: Create the role rules:
[root@K8s-ansible RBAC-yaml-case]#kubectl explain role
KIND: Role
VERSION: rbac.authorization.k8s.io/v1
....
[root@K8s-ansible RBAC-yaml-case]#cat moore-role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: mooreyxia
name: moore-role
rules:
- apiGroups: ["*"]
resources: ["pods/exec"]
#verbs: ["*"]
##RO-Role
verbs: ["get", "list", "watch", "create"]
- apiGroups: ["*"]
resources: ["pods"]
#verbs: ["*"]
##RO-Role
verbs: ["get", "list", "watch", "delete"]
- apiGroups: ["apps/v1"]
resources: ["deployments"]
#verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
##RO-Role
verbs: ["get", "watch", "list"]
[root@K8s-ansible RBAC-yaml-case]#kubectl apply -f moore-role.yaml
role.rbac.authorization.k8s.io/moore-role created
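Once the binding below is in place, the effective permissions can be checked with kubectl auth can-i while impersonating the service account, e.g. `kubectl auth can-i delete pods -n mooreyxia --as=system:serviceaccount:mooreyxia:moore`. As a self-contained sketch (no cluster needed), the rule matching that moore-role performs boils down to:

```shell
# Toy matcher mirroring the moore-role rules above (illustration only; the real
# check is done by the apiserver and queried with `kubectl auth can-i`).
can_i() { # usage: can_i <verb> <resource>
  case "$2:$1" in
    pods/exec:get|pods/exec:list|pods/exec:watch|pods/exec:create) echo yes ;;
    pods:get|pods:list|pods:watch|pods:delete) echo yes ;;
    deployments:get|deployments:watch|deployments:list) echo yes ;;
    *) echo no ;;
  esac
}
can_i delete pods          # yes
can_i delete deployments   # no (deployments are read-only in this role)
```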
#View the role's permissions
[root@K8s-ansible RBAC-yaml-case]#kubectl get role -n mooreyxia -o yaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"moore-role","namespace":"mooreyxia"},"rules":[{"apiGroups":["*"],"resources":["pods/exec"],"verbs":["get","list","watch","create"]},{"apiGroups":["*"],"resources":["pods"],"verbs":["get","list","watch","delete"]},{"apiGroups":["apps/v1"],"resources":["deployments"],"verbs":["get","watch","list"]}]}
creationTimestamp: "2023-04-14T07:16:45Z"
name: moore-role
namespace: mooreyxia
resourceVersion: "868543"
uid: 92b4cc9f-2a1a-4c17-96d4-f5d70e1bbe29
rules:
- apiGroups:
- '*'
resources:
- pods/exec
verbs:
- get
- list
- watch
- create
- apiGroups:
- '*'
resources:
- pods
verbs:
- get
- list
- watch
- delete
- apiGroups:
- apps/v1
resources:
- deployments
verbs:
- get
- watch
- list
kind: List
metadata:
resourceVersion: ""
#1.3: Bind the role to the account
[root@K8s-ansible RBAC-yaml-case]#cat moore-role-bind.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: role-bind-moore
namespace: mooreyxia
subjects:
- kind: ServiceAccount
name: moore
namespace: mooreyxia
roleRef:
kind: Role
name: moore-role
apiGroup: rbac.authorization.k8s.io
[root@K8s-ansible RBAC-yaml-case]#kubectl apply -f moore-role-bind.yaml
rolebinding.rbac.authorization.k8s.io/role-bind-moore created
#View the service accounts
[root@K8s-ansible RBAC-yaml-case]#kubectl get serviceaccounts -n mooreyxia -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2023-04-09T07:15:50Z"
name: default
namespace: mooreyxia
resourceVersion: "514189"
uid: 7a6f2aaa-27f7-4070-a1f4-69c0e07a22bf
- apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2023-04-14T07:11:00Z"
name: moore
namespace: mooreyxia
resourceVersion: "867576"
uid: b531177d-7711-4f80-9618-b9f29a5fec17
kind: List
metadata:
resourceVersion: ""
#View the role binding information
[root@K8s-ansible RBAC-yaml-case]#kubectl get rolebindings.rbac.authorization.k8s.io -n mooreyxia -o yaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"role-bind-moore","namespace":"mooreyxia"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"moore-role"},"subjects":[{"kind":"ServiceAccount","name":"moore","namespace":"mooreyxia"}]}
creationTimestamp: "2023-04-14T07:21:49Z"
name: role-bind-moore
namespace: mooreyxia
resourceVersion: "869390"
uid: bb889ddb-2985-46b0-9c17-a36be263f1a9
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: moore-role
subjects:
- kind: ServiceAccount
name: moore
namespace: mooreyxia
kind: List
metadata:
resourceVersion: ""
#1.4: Create a token for the account:
[root@K8s-ansible RBAC-yaml-case]#cat moore-secret.yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: moore-admin-user
namespace: mooreyxia
annotations:
kubernetes.io/service-account.name: "moore"
[root@K8s-ansible RBAC-yaml-case]#kubectl apply -f moore-secret.yaml
secret/moore-admin-user created
[root@K8s-ansible RBAC-yaml-case]#kubectl get secrets -n mooreyxia
NAME TYPE DATA AGE
moore-admin-user kubernetes.io/service-account-token 3 36s
[root@K8s-ansible RBAC-yaml-case]#kubectl describe secrets moore-admin-user -n mooreyxia
Name: moore-admin-user
Namespace: mooreyxia
Labels: <none>
Annotations: kubernetes.io/service-account.name: moore
kubernetes.io/service-account.uid: b531177d-7711-4f80-9618-b9f29a5fec17
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1310 bytes
namespace: 9 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImhIMHhCWW1iOFRhbXNjdDAyQUg5YVE3RUVuRjNxTDZReXhnUzJqbnRpTzQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb29yZXl4aWEiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibW9vcmUtYWRtaW4tdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJtb29yZSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI1MzExNzdkLTc3MTEtNGY4MC05NjE4LWI5ZjI5YTVmZWMxNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptb29yZXl4aWE6bW9vcmUifQ.WGIIEyW4jJ_xtsDUiVOyqADZfGvr5z5ftd2FxdBrltXkfZMy0N5T6mZkiVICjNUEoaFehoxDm-iiMe3gdpaj9lxV772fXIqcuFcaZUEOwS_B9CA3rBBCbL0nS16wsCbUcWiwnS_j2FtnCg_S2tLc8E5sPyYZpMlNreTDt1QqdldL760g6cnOKhWJWA4ZffhZTvqB9ZPbHRCLKw8dEgwfZIrXfMCJwQ4sauU4ivbR9an82QX8k_SIlhDkDent3APVXrB-4_rT1ZRDfpQTkDQRxbBQF8DYouoXuiP6PtpAax-23z_9OJzanVFfWJ0fMrz708SJjcMJlpC_nLZgKuE8rw
#Use this token to log in to the Dashboard
The user can then query and manage the cluster within the granted permissions.
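The service-account token is a JWT: three dot-separated, base64url-encoded segments (header.payload.signature), so its claims can be inspected offline. A sketch using a shortened dummy token (real tokens carry iss/sub/namespace claims like the one above):

```shell
# Dummy JWT for illustration; only the payload (second segment) is decoded.
token='eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ0ZXN0In0.sig'
payload=$(printf '%s' "$token" | cut -d. -f2)
# Restore base64 padding to a multiple of 4 characters.
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
# Translate base64url characters to standard base64, then decode.
decoded=$(printf '%s' "$payload" | tr '_-' '/+' | base64 -d)
echo "$decoded"   # {"sub":"test"}
```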
#Example 2: create a user, grant admin permissions, and log in to the Dashboard with a kubeconfig file
#Create a new cluster-admin user and generate a new kubeconfig file. The rough flow is outlined below; for details, see the user-creation example in the post "etcd backup and restore with velero and minio".
2.1: Create the csr file:
# cat mooreyxia-csr.json
{
"CN": "China",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
2.2: Issue the certificate:
# ln -sv /etc/kubeasz/bin/cfssl* /usr/bin/
# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubeasz/clusters/k8s-cluster1/ssl/ca-config.json -profile=kubernetes mooreyxia-csr.json | cfssljson -bare mooreyxia
# ls mooreyxia*
mooreyxia-csr.json mooreyxia-key.pem mooreyxia-role-bind.yaml mooreyxia-role.yaml mooreyxia.csr mooreyxia.pem
2.3: Generate the kubeconfig file for the regular user:
# kubectl config set-cluster cluster1 --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.31.7.188:6443 --kubeconfig=mooreyxia.kubeconfig #--embed-certs=true embeds the certificate data
2.4: Set the client credential parameters:
# cp *.pem /etc/kubernetes/ssl/
# kubectl config set-credentials mooreyxia \
--client-certificate=/etc/kubernetes/ssl/mooreyxia.pem \
--client-key=/etc/kubernetes/ssl/mooreyxia-key.pem \
--embed-certs=true \
--kubeconfig=mooreyxia.kubeconfig
2.5: Set the context parameters (contexts distinguish between multiple clusters)
https://kubernetes.io/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig/
# kubectl config set-context cluster1 \
--cluster=cluster1 \
--user=mooreyxia \
--namespace=mooreyxia \
--kubeconfig=mooreyxia.kubeconfig
2.6: Set the default context
# kubectl config use-context cluster1 --kubeconfig=mooreyxia.kubeconfig
2.7: Get the token:
# kubectl get secrets -n mooreyxia | grep mooreyxia
# kubectl describe secrets mooreyxia-token-8d897 -n mooreyxia
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlYwMDNHdWJwTmtoaTJUMFRPTVlwV3RiVWFWczJYRHJCNkFkMGRtQWFqRTgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtYWdlZHUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWFnZWR1LXRva2VuLThkODk3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1hZ2VkdSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBlZmNiNGI0LWM3YTUtNGJkZS1iZjk4LTFiNTkwNThjOTFjNiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptYWdlZHU6bWFnZWR1In0.SJHLgshKcGtIf-ycivn_4SWVRdWw4SuWymBVaA8YJXHPd5PYnwERVNtfUPX88nv-wXkCuZY7fIjGYkoYj6AJEhSPoG15fcmUPaojYeyjkQYghan3CBsZR8C12buSB6t5zCCt22GdG_ScZymxLU7n3Z0PhOzTLzgpXRs1Poqz4DOYylqZyLmW_BPgoNhtQYKlBH6OFzDe8v3JytnaaJUObVZCRxtI6x4iKLt2Evhs8XKfczqqesgoo61qTqtbU4jzlXuHeW7cUMhWoipUc-BkEdV6OtKWOetecxu5uB-44eTRHa1FBjnRMv9SEGj0hxTJCQ08ZNlP0Kc01JZlKXBGdQ
2.8: Write the token into the user's kubeconfig file:
---------------------------------------------------------------------------
#Get the user's token
[root@K8s-ansible RBAC-yaml-case]#kubectl get secret -A | grep admin
kubernetes-dashboard dashboard-admin-user kubernetes.io/service-account-token 3 15d
kuboard kuboard-admin-token kubernetes.io/service-account-token 3 11d
mooreyxia moore-admin-user kubernetes.io/service-account-token 3 18m
[root@K8s-ansible RBAC-yaml-case]#kubectl describe secrets dashboard-admin-user -n kubernetes-dashboard
Name: dashboard-admin-user
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: e03c53f4-d159-4008-804b-970912fe556e
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1310 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImhIMHhCWW1iOFRhbXNjdDAyQUg5YVE3RUVuRjNxTDZReXhnUzJqbnRpTzQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTAzYzUzZjQtZDE1OS00MDA4LTgwNGItOTcwOTEyZmU1NTZlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.lVHgpVsH0G0Rsq-OLST8zTeH48GlLUZDcPTjYSAh1MnOFDhylKofJUjjv68t0nkQ71xZnsqEs89qekakC1UfkTmpRgbHjRVisYdPPqO7Y-D6RqDJUC_FMArPRZaTONta7ZKCs6j99zp8VrFB4BajBdNvpXJ1YsawCFE6ZNssVkL2Wjdy8mkpb8xYQX1XDrEvFaNHX67IRkcQDiF-k8rZeSOVvHlqzHKgeeg4OBblb2yNwVDc8X6FdmZXfTvA768t9rkmq1VJ4U2dRBmHAgMNZN5iD4YjNphNkCMzAZQJm4glkxvAD7nDpGX6CT_4boskv4jHOITbkXUjDPpf_VZyJg
#Copy the token into a new kubeconfig file (in vim, :x! forces save and exit)
[root@K8s-ansible RBAC-yaml-case]#cp /root/.kube/config /opt/kube-config
#Append the token at the end of the file
[root@K8s-ansible RBAC-yaml-case]#vim /opt/kube-config
[root@K8s-ansible RBAC-yaml-case]#cat /opt/kube-config
...
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImhIMHhCWW1iOFRhbXNjdDAyQUg5YVE3RUVuRjNxTDZReXhnUzJqbnRpTzQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTAzYzUzZjQtZDE1OS00MDA4LTgwNGItOTcwOTEyZmU1NTZlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.lVHgpVsH0G0Rsq-OLST8zTeH48GlLUZDcPTjYSAh1MnOFDhylKofJUjjv68t0nkQ71xZnsqEs89qekakC1UfkTmpRgbHjRVisYdPPqO7Y-D6RqDJUC_FMArPRZaTONta7ZKCs6j99zp8VrFB4BajBdNvpXJ1YsawCFE6ZNssVkL2Wjdy8mkpb8xYQX1XDrEvFaNHX67IRkcQDiF-k8rZeSOVvHlqzHKgeeg4OBblb2yNwVDc8X6FdmZXfTvA768t9rkmq1VJ4U2dRBmHAgMNZN5iD4YjNphNkCMzAZQJm4glkxvAD7nDpGX6CT_4boskv4jHOITbkXUjDPpf_VZyJg
[root@K8s-ansible RBAC-yaml-case]#chmod 644 /opt/kube-config
[root@K8s-ansible RBAC-yaml-case]#ll /opt/kube-config
-rw-r--r-- 1 root root 7158 Apr 14 08:16 /opt/kube-config
#Test logging in to the Dashboard with the kubeconfig file
#Export the kubeconfig file to Windows
I'm moore. Let's keep pushing forward together!