k8s Beginner Hands-on 10 -- Backing Up an etcd Cluster
- 1 Basic Concepts
- 2 Common Usage
- 2.1 Backing up etcd
- 2.2 Restoring etcd
- 3 Notes
- 4 References
1 Basic Concepts
etcd is a distributed key-value store developed by CoreOS on top of the Raft consensus algorithm. It can be used for service discovery, shared configuration, and consistency guarantees (such as database leader election and distributed locks). As a key-value store that is both strongly consistent and highly available, etcd serves as the backing database for all Kubernetes cluster state, which is why backing it up regularly matters.
2 Common Usage
2.1 Backing up etcd
- Copy etcdctl from the etcd container to /usr/bin
# docker cp k8s_etcd_etcd-kmaster_kube-system_8d474956e7bbb5b3129a652bc831f31f_3:/usr/local/bin/etcdctl /usr/bin
# etcdctl version
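The commands that follow repeat the same endpoint and certificate flags every time. A small shell wrapper can keep them short; this is a sketch, and the function name `e` is an invention here, while the endpoint and certificate paths are the ones used throughout this article:

```shell
# Wrapper around etcdctl with the endpoint and kubeadm certificate
# paths of this article's cluster baked in.
# Usage: e snapshot save /etcd_backup/snapshot.db
e() {
  ETCDCTL_API=3 etcdctl \
    --endpoints https://192.168.2.131:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/peer.crt \
    --key=/etc/kubernetes/pki/etcd/peer.key \
    "$@"
}
```

With this in place, a command such as the snapshot below shortens to `e snapshot save /etcd_backup/snapshot.db`; the article keeps the full form so each command stands alone.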
- Create a directory and take a snapshot of etcd
# mkdir /etcd_backup/
# ETCDCTL_API=3 etcdctl --endpoints https://192.168.2.131:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/peer.crt \
--key=/etc/kubernetes/pki/etcd/peer.key \
snapshot save /etcd_backup/snapshot.db
Output:
{"level":"info","ts":1610252289.038325,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/etcd_backup/snapshot.db.part"}
{"level":"info","ts":"2021-01-10T04:18:09.045Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1610252289.045765,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://192.168.2.131:2379"}
{"level":"info","ts":"2021-01-10T04:18:09.107Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1610252289.131032,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://192.168.2.131:2379","size":"4.9 MB","took":0.092062186}
{"level":"info","ts":1610252289.131579,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/etcd_backup/snapshot.db"}
Snapshot saved at /etcd_backup/snapshot.db
Check the snapshot status:
# ETCDCTL_API=3 etcdctl --endpoints https://192.168.2.131:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/peer.crt \
--key=/etc/kubernetes/pki/etcd/peer.key \
--write-out=table snapshot status /etcd_backup/snapshot.db
Output:
+---------+----------+------------+------------+
|  HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+---------+----------+------------+------------+
| 5e57128 |    34397 |       1724 |     4.9 MB |
+---------+----------+------------+------------+
- Scheduled backups
Write the backup script. Note the quoted 'EOF': without the quotes, $IP, $BACKUP, and $(date ...) would be expanded while the here-document is being written, instead of when the script runs.
cat << 'EOF' > etcd_backup.sh
#!/bin/bash
IP=192.168.2.131
BACKUP=/etcd_backup
export ETCDCTL_API=3
mkdir -p $BACKUP
etcdctl --endpoints=https://$IP:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/peer.crt \
--key=/etc/kubernetes/pki/etcd/peer.key \
snapshot save $BACKUP/snap-$(date +%Y%m%d%H%M).db
EOF
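Dated snapshots accumulate quickly, so some retention policy is worth adding. A minimal sketch of a cleanup helper follows; the `prune_snapshots` name and the 7-day window are assumptions, adjust them to your needs:

```shell
# Delete snapshots older than DAYS days from DIR.
# Matches the snap-YYYYmmddHHMM.db names produced by the backup script.
prune_snapshots() {
  dir=$1
  days=$2
  find "$dir" -name 'snap-*.db' -mtime +"$days" -delete
}

# Example: keep only the last 7 days of backups.
# prune_snapshots /etcd_backup 7
```

Calling it as the last line of etcd_backup.sh keeps the backup directory bounded without a separate cleanup job.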
Create a scheduled task with crontab to run the script periodically.
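A sample crontab entry, installed with `crontab -e`, could look like the config fragment below; the 02:00 schedule and the /root/etcd_backup.sh path are assumptions, adjust them to where you saved the script:

```shell
# m h dom mon dow  command  -- take a snapshot every day at 02:00
0 2 * * * /bin/bash /root/etcd_backup.sh
```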
2.2 Restoring etcd
- Check whether etcd is healthy
# ETCDCTL_API=3 etcdctl \
--endpoints https://192.168.2.131:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/peer.crt \
--key=/etc/kubernetes/pki/etcd/peer.key endpoint health
Output:
https://192.168.2.131:2379 is healthy: successfully committed proposal: took =
- List the etcd cluster members
# ETCDCTL_API=3 etcdctl \
--endpoints https://192.168.2.131:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/peer.crt \
--key=/etc/kubernetes/pki/etcd/peer.key member list
Output:
9cb4eb07d38510b5, started, kmaster, https://192.168.2.131:2380, https://192.168.2.131:2379, false
- Inspect the YAML definition of the etcd pod
# kubectl -n kube-system get pod etcd-kmaster -o yaml
- Remove the etcd data directory (move it aside rather than deleting it, so it can be recovered if the restore fails)
# mv /var/lib/etcd/ /var/lib/etcd-bak
Checking the health status again now reports the endpoint as unhealthy:
{"level":"warn","ts":"2021-01-10T05:00:38.988Z","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-a1e6e93c-6091-447b-ae77-2ed622d92ab2/192.168.2.131:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.2.131:2379: connect: connection refused\""}
- Restore etcd from the snapshot
# ETCDCTL_API=3 etcdctl snapshot restore /etcd_backup/snapshot.db \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/peer.crt \
--key=/etc/kubernetes/pki/etcd/peer.key \
--name=kmaster \
--data-dir=/var/lib/etcd \
--skip-hash-check \
--initial-advertise-peer-urls=https://192.168.2.131:2380 \
--initial-cluster=kmaster=https://192.168.2.131:2380
Output:
{"level":"info","ts":1610254983.838928,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"/etcd_backup/snapshot.db","wal-dir":"/var/lib/etcd/member/wal","data-dir":"/var/lib/etcd","snap-dir":"/var/lib/etcd/member/snap"}
{"level":"info","ts":1610254983.8676407,"caller":"mvcc/kvstore.go:380","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":33324}
{"level":"info","ts":1610254983.880514,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"65567032c1db9f01","local-member-id":"0","added-peer-id":"9cb4eb07d38510b5","added-peer-peer-urls":["https://192.168.2.131:2380"]}
{"level":"info","ts":1610254983.9191227,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/etcd_backup/snapshot.db","wal-dir":"/var/lib/etcd/member/wal","data-dir":"/var/lib/etcd","snap-dir":"/var/lib/etcd/member/snap"}
Confirm that the data directory has been restored:
# ls /var/lib/etcd
member
Check the health status again with the same endpoint health command as before:
https://192.168.2.131:2379 is healthy: successfully committed proposal: took =
3 Notes
- In practice, if you forget how to use an etcdctl command, pass the -h option to see the parameters it requires, for example by running it inside the etcd pod:
# kubectl -n kube-system exec -it etcd-MasterNodeName -- etcdctl -h
4 References
- Kubernetes documentation: Tasks -> Administer a Cluster -> Operating etcd clusters for Kubernetes
- feiskyer/kubernetes-handbook/blob/master/components/etcd