1. NFS Provisioner
Kubernetes has no built-in NFS provisioner, so an external provisioner is needed to back an NFS StorageClass; the nfs-subdir-external-provisioner project can be used.
Git repository: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
1.1 Create the RBAC
Based on the template from the GitHub repository, adjusted as needed.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: db
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: db
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: db
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: db
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: db
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
1.2 Create the NFS provisioner client
Adapted from the official template; do not change the mountPath under volumeMounts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: db
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: harbor.belkuy.top/base/nfs-subdir-external-provisioner:v4.0.2
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.50.150
            - name: NFS_PATH
              value: /data/nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.50.150
            path: /data/nfsdata
1.3 Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
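To confirm that dynamic provisioning works end to end, a throwaway claim can be created against the new class (the name test-pvc is just an example). After applying it, kubectl -n db get pvc should show the claim go to Bound, and a matching subdirectory should appear under /data/nfsdata on the NFS server.

```yaml
# Hypothetical smoke-test claim for the nfs-sc class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: db
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-sc
  resources:
    requests:
      storage: 100Mi
```

Delete the claim afterwards; with the provisioner's default settings the backing directory is archived rather than removed.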
2. MySQL Master-Slave Replication
2.1 Create the configuration files
Create a ConfigMap with separate configuration for the master and the slaves.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  namespace: db
  labels:
    app: mysql
data:
  master.cnf: |
    # Master configuration
    [mysqld]
    log-bin=mysqllog
    skip-name-resolve
  slave.cnf: |
    # Slave configuration
    [mysqld]
    super-read-only
    skip-name-resolve
    log-bin=mysql-bin
    replicate-ignore-db=mysql
2.2 Create the password
Create a Secret holding the MySQL root password; the value must be base64-encoded (note that base64 is an encoding, not encryption).
echo -n "SHNhX33AYmYr9l114aWx" | base64
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: db
  labels:
    app: mysql
type: Opaque
data:
  password: U0hOaFgzM0FZbVlyOWwxMTRhV3g=
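The data.password value can be checked locally before applying the Secret — the encoded string must round-trip back to the original password (this sketch just replays the encoding step above):

```shell
password="SHNhX33AYmYr9l114aWx"
# Encode the way the Secret expects (-n so no trailing newline is encoded)
encoded=$(echo -n "$password" | base64)
echo "$encoded"
# Decoding must return the original password
decoded=$(echo -n "$encoded" | base64 -d)
[ "$decoded" = "$password" ] && echo "round-trip OK"
```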
2.3 Create the service
Create a headless service.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: db
  labels:
    app: mysql
spec:
  ports:
    - name: mysql
      port: 3306
  clusterIP: None
  selector:
    app: mysql
2.4 Create the replicas
The init-mysql initContainer generates each pod's configuration; the clone-mysql initContainer then copies over the data; and the xtrabackup sidecar container handles the replication-setup SQL and serves data transfers. Storage uses the nfs-sc StorageClass from section 1.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: db
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 2
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
        - name: init-mysql
          image: harbor.belkuy.top/base/mysql:5.7.41-debian
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          command:
            - bash
            - "-c"
            - |
              set -ex
              # Derive the server-id from the pod's ordinal
              [[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # server-id must not be 0, so offset the ordinal by 100
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Ordinal 0 is the master; every other pod is a slave
              if [[ ${ordinal} -eq 0 ]]; then
                cp /mnt/config-map/master.cnf /mnt/conf.d
              else
                cp /mnt/config-map/slave.cnf /mnt/conf.d
              fi
          volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
        - name: clone-mysql
          image: harbor.belkuy.top/base/xtrabackup:1.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          command:
            - bash
            - "-c"
            - |
              set -ex
              # The clone only runs on first start; skip if data already exists
              [[ -d /var/lib/mysql/mysql ]] && exit 0
              # The master (ordinal 0) never clones
              [[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              [[ $ordinal == 0 ]] && exit 0
              # Use ncat to stream the data locally from the previous pod
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
              # Run --prepare so the copied data is usable for recovery
              xtrabackup --prepare --target-dir=/var/lib/mysql
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
      containers:
        - name: mysql
          image: harbor.belkuy.top/base/mysql:5.7.41-debian
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          ports:
            - name: mysql
              containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
          livenessProbe:
            exec:
              # exec probes do not run in a shell, so invoke bash explicitly
              # to get ${MYSQL_ROOT_PASSWORD} expanded
              command:
                - bash
                - -c
                - mysqladmin ping -uroot -p"${MYSQL_ROOT_PASSWORD}"
            initialDelaySeconds: 45
            periodSeconds: 10
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - bash
                - -c
                - mysqladmin ping -uroot -p"${MYSQL_ROOT_PASSWORD}"
            initialDelaySeconds: 15
            periodSeconds: 2
            timeoutSeconds: 1
        - name: xtrabackup
          image: harbor.belkuy.top/base/xtrabackup:1.0
          ports:
            - name: xtrabackup
              containerPort: 3307
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
              # Read MASTER_LOG_FILE and MASTER_LOG_POS from the backup metadata
              # to assemble the replication-setup SQL
              if [[ -f xtrabackup_slave_info ]]; then
                # xtrabackup_slave_info exists, so this backup came from another slave.
                # In that case XtraBackup already wrote a complete "CHANGE MASTER TO"
                # statement into the file during the backup, so just rename it to
                # change_master_to.sql.in and use it as-is.
                mv xtrabackup_slave_info change_master_to.sql.in
                # xtrabackup_binlog_info is therefore not needed
                rm -f xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                # Only xtrabackup_binlog_info exists, so the backup came from the
                # master; parse the two required fields out of the file.
                [[ $(cat xtrabackup_binlog_info) =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm xtrabackup_binlog_info
                # Assemble the two fields into SQL and write change_master_to.sql.in
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi
              # If change_master_to.sql.in exists, replication still needs initializing
              if [[ -f change_master_to.sql.in ]]; then
                # Wait until the MySQL container accepts connections before proceeding
                echo "Waiting for mysqld to be ready (accepting connections)"
                until mysql -h 127.0.0.1 -uroot -p${MYSQL_ROOT_PASSWORD} -e "SELECT 1"; do sleep 1; done
                echo "Initializing replication from clone position"
                # Rename change_master_to.sql.in so a container restart does not
                # find it again and repeat the initialization
                mv change_master_to.sql.in change_master_to.sql.orig
                # Combine the assembled SQL in change_master_to.sql.orig into a
                # complete statement that configures and starts the slave
                mysql -h 127.0.0.1 -uroot -p${MYSQL_ROOT_PASSWORD} << EOF
              $(< change_master_to.sql.orig),
              MASTER_HOST='mysql-0.mysql',
              MASTER_USER='root',
              MASTER_PASSWORD='${MYSQL_ROOT_PASSWORD}',
              MASTER_CONNECT_RETRY=10;
              START SLAVE;
              EOF
              fi
              # Listen on port 3307 with ncat: on each incoming transfer request,
              # run xtrabackup --backup and stream the data back to the requester
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root --password=${MYSQL_ROOT_PASSWORD}"
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
      volumes:
        - name: conf
          emptyDir: {}
        - name: config-map
          configMap:
            name: mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        storageClassName: nfs-sc
        resources:
          requests:
            storage: 1Gi
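The server-id logic in init-mysql can be exercised outside the cluster; the snippet below replays it with a hypothetical pod hostname:

```shell
# Hypothetical pod name; inside the cluster this comes from $(hostname)
podname="mysql-2"
# Same regex as init-mysql: extract the StatefulSet ordinal
[[ $podname =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
# server-id must not be 0, so the script offsets the ordinal by 100
echo "server-id=$((100 + ordinal))"
```

For mysql-2 this prints server-id=102, which is why each replica gets a unique, nonzero server-id without any coordination.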
2.5 Verification
# Check the slave's replication status
kubectl -n db exec mysql-1 -c mysql -- bash -c "mysql -uroot -pSHNhX33AYmYr9l114aWx -e 'show slave status \G'"
# Create test data on the master
kubectl -n db exec mysql-0 -c mysql -- bash -c "mysql -uroot -pSHNhX33AYmYr9l114aWx -e 'create database test'"
kubectl -n db exec mysql-0 -c mysql -- bash -c "mysql -uroot -pSHNhX33AYmYr9l114aWx -e 'use test;create table counter(c int);'"
kubectl -n db exec mysql-0 -c mysql -- bash -c "mysql -uroot -pSHNhX33AYmYr9l114aWx -e 'use test;insert into counter values(123)'"
# Check on the slave that the data has been replicated
kubectl -n db exec mysql-1 -c mysql -- bash -c "mysql -uroot -pSHNhX33AYmYr9l114aWx -e 'use test;select * from counter'"
The number of slaves can also be increased:
kubectl -n db scale statefulset mysql --replicas=3
3. WordPress
Deploy WordPress 6.1.1 using nginx, PHP 8.1, and the MySQL master-slave setup from section 2.
3.1 Create persistent storage
Use the StorageClass to create a PVC, then copy the WordPress files into the corresponding path on the NFS share.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pvc
  namespace: web
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
  storageClassName: nfs-sc
3.2 Create the configuration file
Create the nginx configuration.
---
apiVersion: v1
data:
  default.conf: |-
    server {
        listen 80;
        server_name localhost;
        location / {
            root /usr/share/nginx/html/;
            index index.php;
        }
        location ~ \.php$ {
            root /usr/share/nginx/html/;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
kind: ConfigMap
metadata:
  annotations: {}
  name: wp-nginx-config
  namespace: web
3.3 Create the service
Create the nginx Service, using NodePort mode for now.
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wp-nginx
  namespace: web
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: wordpress
  sessionAffinity: None
  type: NodePort
3.4 Create the replicas
Deploy nginx and PHP in the same pod. Using the official Docker Hub image php:8.1-fpm directly fails when connecting to the database; the official Dockerfile has to be modified to add the following options to PHP's configure step:
--with-mysqli=mysqlnd
--with-pdo-mysql=mysqlnd
--enable-mysqlnd-compression-support
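If rebuilding PHP from the patched upstream Dockerfile is inconvenient, the stock image's extension helper can add comparable MySQL support. This is a sketch of an alternative approach, not the build used here:

```dockerfile
# Alternative sketch: extend the official image instead of patching its
# configure flags; docker-php-ext-install compiles the listed extensions
# against the bundled mysqlnd driver.
FROM php:8.1-fpm
RUN docker-php-ext-install mysqli pdo_mysql
```

Either way, the resulting image is what gets pushed to the private registry and referenced in the Deployment below.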
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  labels:
    app: wordpress
  name: wordpress
  namespace: web
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: wordpress
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - image: 'nginx:1.23.3-alpine'
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
            - containerPort: 80
              name: wp-nginx
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: wp-data
            - mountPath: /etc/nginx/conf.d/default.conf
              name: nginx-config
              subPath: default.conf
        - image: 'harbor.belkuy.top/base/php:8.1'
          imagePullPolicy: Always
          name: php
          ports:
            - containerPort: 9000
              name: wp-php
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: wp-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: wp-data
          persistentVolumeClaim:
            claimName: wp-pvc
        - configMap:
            defaultMode: 420
            items:
              - key: default.conf
                path: default.conf
            name: wp-nginx-config
          name: nginx-config
3.5 Initialization
Use the database from section 2: log in on the master node to create the database and user and grant privileges. During the WordPress setup, the database address to use is mysql-0.mysql.db.svc.cluster.local.
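The grants themselves are not shown above; this is a minimal sketch, assuming a database named wordpress and a user wpuser (both names and the password are placeholders), run against the master:

```sql
-- Run on the master (mysql-0); names and password are placeholders
CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8mb4;
CREATE USER 'wpuser'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'%';
FLUSH PRIVILEGES;
```

These statements must go to mysql-0 specifically, since the slaves run with super-read-only.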
4. Redis Cluster
4.1 Create the configuration file
---
apiVersion: v1
data:
  redis.conf: |-
    port 6379
    maxclients 10000
    protected-mode no
    cluster-enabled yes
    cluster-config-file nodes.conf
    cluster-node-timeout 15000
    logfile "/data/redis.log"
kind: ConfigMap
metadata:
  name: conf-rdsc
  namespace: db
4.2 Create the service
Create a headless service.
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-cluster
  name: redis-cluster
  namespace: db
spec:
  ports:
    - name: rds-port
      port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    app: redis-cluster
  type: ClusterIP
  clusterIP: None
4.3 Create the replicas
Use a StatefulSet to create the redis-cluster nodes.
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: redis-cluster
  name: redis-cluster
  namespace: db
spec:
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  serviceName: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - command:
            ["redis-server"]
          args:
            - /etc/redis/redis.conf
            - --cluster-announce-ip
            - "$(podIP)"
          env:
            - name: podIP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: TZ
              value: Asia/Shanghai
          image: harbor.belkuy.top/base/redis:6.0.16
          imagePullPolicy: IfNotPresent
          name: redis
          ports:
            - containerPort: 6379
              name: rds-port
              protocol: TCP
          volumeMounts:
            - mountPath: /etc/redis/redis.conf
              name: conf-rdsc
              subPath: redis.conf
            - mountPath: /data
              name: v-rdsd
              readOnly: false
      dnsPolicy: ClusterFirst
      volumes:
        - configMap:
            items:
              - key: redis.conf
                path: redis.conf
            name: conf-rdsc
          name: conf-rdsc
  volumeClaimTemplates:
    - metadata:
        name: v-rdsd
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: nfs-sc
        resources:
          requests:
            storage: 2Gi
4.4 Initialize the cluster
Initialize the cluster from any one node.
kubectl exec -n db -it redis-cluster-0 -- \
  redis-cli -p 6379 \
  --cluster create $(kubectl get pods -n db -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}') \
  --cluster-replicas 1
4.5 Verify the cluster
# Check the cluster info on each node (pod ordinals run 0 through 5)
for i in $(seq 0 5); do kubectl exec -n db -it redis-cluster-$i -- redis-cli cluster info; done
# Check each node's connection info
for i in $(seq 0 5); do kubectl exec -n db -it redis-cluster-$i -- redis-cli cluster nodes; done