Installing the Docker Harbor Image Registry and Implementing High Availability

1. Introduction to Docker Harbor

  Harbor is an enterprise-class Registry server for storing and distributing Docker images, open-sourced by VMware. It extends the open-source Docker Distribution with features that enterprises need, such as security, identity, and management. As an enterprise-grade private Registry server, Harbor offers better performance and security, and makes it more efficient to transfer images between the Registry and build or run environments. Harbor supports replicating image resources across multiple Registry nodes, and all images are kept in the private Registry, so data and intellectual property stay under control inside the company network. In addition, Harbor provides advanced security features such as user management, access control, and activity auditing.
Harbor on GitHub: https://github.com/goharbor/harbor
Harbor official site: https://goharbor.io/
image.png

1.1 Features of Harbor

  1. Role-based access control: users and Docker image repositories are organized and managed through "projects"; a user can be granted different permissions on the image repositories within a namespace (project).
  2. Image replication: images can be replicated (synchronized) between multiple Registry instances, which is especially useful for load balancing, high availability, hybrid-cloud, and multi-cloud scenarios.
  3. Graphical user interface: after logging in with a username and password in a browser, users can search the current Docker image repositories and manage projects and namespaces.
  4. AD/LDAP support: Harbor can integrate with an enterprise's existing AD/LDAP for authentication and authorization.
  5. Audit management: every operation on the image repositories can be recorded and traced back for auditing.
  6. Internationalization: localized versions exist in English, Chinese, German, Japanese, and Russian, with more languages to be added in future releases.
  7. RESTful API: gives administrators more control over Harbor and makes integration with other management software easier.
  8. Easy deployment: both online and offline installers are provided, and Harbor can also be installed as a vSphere virtual appliance (OVA).

2. Installing Harbor

2.1 Installation Methods for Harbor

Online installation: the installer package you download is small; all other required resources are fetched over the network during installation.
Offline installation: the installer package is much larger because it bundles everything needed during installation, so the host does not need internet access. I recommend the offline installer.

2.2 Preparing the Harbor Installation Environment

2.2.1 Prerequisites

Hardware requirements: a Harbor server needs at least 2 CPU cores, 4 GB of RAM, and 40 GB of disk. That is enough for testing; production machines are usually larger (4C/8G and up), and the more disk the better.
Software requirements: Docker and docker-compose must already be installed on the server, with Docker no older than v17.06.0 and docker-compose no older than v1.18.0. If you want to serve HTTPS with certificates, openssl must be installed as well.
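The version floor can be checked before running the installer; a minimal sketch (the `version_ge` helper is mine, the minimums are the ones stated above):

```shell
#!/bin/sh
# Pre-flight check for Harbor's documented minimums:
# Docker >= 17.06.0, docker-compose >= 1.18.0.
# version_ge VERSION MINIMUM -> exit 0 when VERSION >= MINIMUM.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# In a real run the versions would come from:
#   docker version --format '{{.Server.Version}}'
#   docker-compose version --short
version_ge "19.03.15" "17.06.0" && echo "docker version OK"
version_ge "1.24.1" "1.18.0" && echo "docker-compose version OK"
```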

2.2.2 Installing Docker and docker-compose

I install both of these with a one-shot script. docker-compose binary downloads: https://github.com/docker/compose/releases; docker-ce static packages: https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/static/stable/x86_64/

root@node1:~# cd /usr/local/src/
root@node1:/usr/local/src# ll
total 76800
drwxr-xr-x  2 root root     4096 Aug  2 14:52 ./
drwxr-xr-x 10 root root     4096 Sep 16  2021 ../
-rw-r--r--  1 root root      647 Apr 12  2021 containerd.service
-rw-r--r--  1 root root 62436240 Jun  7 18:40 docker-19.03.15.tgz
-rwxr-xr-x  1 root root 16168192 Jun 25  2019 docker-compose-Linux-x86_64_1.24.1*
-rwxr-xr-x  1 root root     2686 Aug  2 14:38 docker-install.sh*
-rw-r--r--  1 root root     1683 Aug  2 14:22 docker.service
-rw-r--r--  1 root root      197 Apr 12  2021 docker.socket
-rw-r--r--  1 root root      458 Jun  2 23:25 limits.conf
-rw-r--r--  1 root root     2331 Aug  2 14:24 sysctl.conf
root@node1:/usr/local/src# cat limits.conf
*             soft    core            unlimited
*             hard    core            unlimited
*             soft    nproc           1000000
*             hard    nproc           1000000
*             soft    nofile          1000000
*             hard    nofile          1000000
*             soft    memlock         32000
*             hard    memlock         32000
*             soft    msgqueue        8192000
*             hard    msgqueue        8192000

root@node1:/usr/local/src# cat sysctl.conf
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# TCP kernel parameters
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_rmem = 4096        87380   4194304
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1

# socket buffer
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 20480
net.core.optmem_max = 81920

# TCP conn
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15

# tcp conn reuse
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 1

net.ipv4.tcp_max_tw_buckets = 20000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syncookies = 1

# keepalive conn
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.ip_local_port_range = 10001    65000

# swap
vm.overcommit_memory = 0
vm.swappiness = 0
vm.max_map_count=262144

#net.ipv4.conf.eth1.rp_filter = 0
#net.ipv4.conf.lo.arp_ignore = 1
#net.ipv4.conf.lo.arp_announce = 2
#net.ipv4.conf.all.arp_ignore = 1
#net.ipv4.conf.all.arp_announce = 2
net.netfilter.nf_conntrack_max=2097152
kernel.pid_max=4194303
fs.file-max=1000000
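Before loading a hand-edited sysctl.conf with `sysctl -p`, a quick syntax pass can catch stray lines; a sketch (`check_sysctl_file` is a hypothetical helper, not a standard tool):

```shell
#!/bin/sh
# Sanity-check that every active line in a sysctl-style file looks like
# `key = value` before loading it with `sysctl -p /etc/sysctl.conf`.
check_sysctl_file() {
  awk '
    /^[ \t]*(#|$)/ { next }                      # skip comments and blank lines
    $0 !~ /^[A-Za-z0-9._-]+[ \t]*=[ \t]*./ {     # must look like key = value
      printf "bad line %d: %s\n", NR, $0; bad = 1
    }
    END { exit bad }
  ' "$1"
}

# check_sysctl_file /etc/sysctl.conf && sysctl -p /etc/sysctl.conf
```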

root@node1:/usr/local/src# cat containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

[Install]
WantedBy=multi-user.target

root@node1:/usr/local/src# cat docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

root@node1:/usr/local/src# cat docker-install.sh
#!/bin/bash
DIR=`pwd`
USER_NAME="wm"
EXISTED_NAME="zg"
PACKAGE_NAME="docker-19.03.15.tgz"
DOCKER_FILE=${DIR}/${PACKAGE_NAME}
centos_install_docker(){
  grep "Kernel" /etc/issue &> /dev/null
  if [ $? -eq 0 ];then
    /bin/echo  "The current system is `cat /etc/redhat-release`; starting system initialization, docker-compose setup and docker installation" && sleep 1
    systemctl stop firewalld && systemctl disable firewalld && echo "firewalld disabled" && sleep 1
    systemctl stop NetworkManager && systemctl disable NetworkManager && echo "NetworkManager disabled" && sleep 1
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux && setenforce  0 && echo "selinux disabled" && sleep 1
    \cp ${DIR}/limits.conf /etc/security/limits.conf 
    \cp ${DIR}/sysctl.conf /etc/sysctl.conf

    /bin/tar xvf ${DOCKER_FILE}
    \cp docker/*  /usr/bin

    \cp containerd.service /lib/systemd/system/containerd.service
    \cp docker.service  /lib/systemd/system/docker.service
    \cp docker.socket /lib/systemd/system/docker.socket

    \cp ${DIR}/docker-compose-Linux-x86_64_1.24.1 /usr/bin/docker-compose

    groupadd docker && useradd docker -g docker
    id -u ${USER_NAME} &> /dev/null
    if [ $? -ne 0 ];then
      useradd ${USER_NAME}
      usermod ${USER_NAME} -G docker
    fi
    systemctl enable containerd.service && systemctl restart containerd.service
    systemctl enable docker.service && systemctl restart docker.service
    systemctl enable docker.socket && systemctl restart docker.socket 
  fi
}

ubuntu_install_docker(){
  grep "linux" /etc/issue &> /dev/null  # I edited /etc/issue during system initialization, replacing the OS version string with just "linux"; this also hides the OS version at the login prompt
  if [ $? -eq 0 ];then
    /bin/echo  "The current system is `cat /etc/issue`; starting system initialization, docker-compose setup and docker installation" && sleep 1
    \cp ${DIR}/limits.conf /etc/security/limits.conf
    \cp ${DIR}/sysctl.conf /etc/sysctl.conf

    /bin/tar xvf ${DOCKER_FILE}
    \cp docker/* /usr/bin 

    \cp containerd.service /lib/systemd/system/containerd.service
    \cp docker.service /lib/systemd/system/docker.service
    \cp docker.socket /lib/systemd/system/docker.socket

    \cp ${DIR}/docker-compose-Linux-x86_64_1.24.1 /usr/bin/docker-compose
    ulimit -n 1000000 
    /bin/su - ${EXISTED_NAME} -c "ulimit -n 1000000"
    /bin/echo "docker installation complete!" && sleep 1
    id -u ${USER_NAME} &> /dev/null
    if [ $? -ne 0 ];then
      groupadd -r ${USER_NAME}
      groupadd -r docker
      useradd -r -m -g ${USER_NAME} ${USER_NAME}
      usermod ${USER_NAME} -G docker
    fi  
    systemctl enable containerd.service && systemctl restart containerd.service
    systemctl enable docker.service && systemctl restart docker.service
    systemctl enable docker.socket && systemctl restart docker.socket 
  fi
}

main(){
  centos_install_docker  
  ubuntu_install_docker
}

main

root@node1:/usr/local/src# bash docker-install.sh
The current system is linux; starting system initialization, docker-compose setup and docker installation
docker/
docker/dockerd
docker/docker-proxy
docker/containerd-shim
docker/docker-init
docker/docker
docker/runc
docker/ctr
docker/containerd
docker installation complete!
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
root@node1:/usr/local/src# docker --version
Docker version 19.03.15, build 99e3ed8
root@node1:/usr/local/src# docker-compose --version
docker-compose version 1.24.1, build 4667896b

2.3 Downloading and Unpacking the Harbor Tarball

Harbor release tarballs: https://github.com/goharbor/harbor/releases
Here I use the offline installer tarball for Harbor v2.2.2.

root@node1:~# cd /usr/local/src/
root@node1:/usr/local/src# wget https://github.com/goharbor/harbor/releases/download/v2.2.2/harbor-offline-installer-v2.2.2.tgz
root@node1:/usr/local/src# ll
total 493028
drwxr-xr-x  2 root root      4096 Aug  2 16:12 ./
drwxr-xr-x 10 root root      4096 Sep 16  2021 ../
-rw-r--r--  1 root root 504847710 Dec  8  2021 harbor-offline-installer-v2.2.2.tgz
root@node1:/usr/local/src# tar xf harbor-offline-installer-v2.2.2.tgz

2.4 Editing the Harbor Configuration and Installing Harbor

root@node1:/usr/local/src# cd harbor/
root@node1:/usr/local/src/harbor# ll
total 494984
drwxr-xr-x 2 root root      4096 Aug  2 16:16 ./
drwxr-xr-x 3 root root      4096 Aug  2 16:16 ../
-rw-r--r-- 1 root root      3361 May 15  2021 common.sh
-rw-r--r-- 1 root root 506818941 May 15  2021 harbor.v2.2.2.tar.gz
-rw-r--r-- 1 root root      7840 May 15  2021 harbor.yml.tmpl
-rwxr-xr-x 1 root root      2500 May 15  2021 install.sh*
-rw-r--r-- 1 root root     11347 May 15  2021 LICENSE
-rwxr-xr-x 1 root root      1881 May 15  2021 prepare*
root@node1:/usr/local/src/harbor# mv harbor.yml.tmpl harbor.yml
root@node1:/usr/local/src/harbor# vim harbor.yml    # edit the lines below
hostname: harbor.stars.com  # a domain name or an IP both work; with a domain name, make sure DNS resolves it, or add a hosts-file entry on the clients
#https:  # comment out the whole https block; I am not using https here, but to enable it, configure the certificate paths
#  port: 443
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path
harbor_admin_password: wm521314 # the default admin password is Harbor12345; in production it should be changed
database:
  password: test1234    # the database password defaults to root123 and should also be changed in production
data_volume: /data/harbor   # directory where Harbor stores its data; in production this usually sits on a dedicated data disk
root@node1:/usr/local/src/harbor# ./install.sh
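Once install.sh finishes, Harbor 2.x exposes a health endpoint that can confirm all components came up. A sketch (the hostname is the one set in harbor.yml above; `harbor_healthy` is a helper I made up):

```shell
#!/bin/sh
# Harbor 2.x serves /api/v2.0/health, returning JSON such as
# {"status":"healthy","components":[...]}.
harbor_healthy() {
  # $1: response body from e.g. `curl -s http://harbor.stars.com/api/v2.0/health`
  printf '%s' "$1" | grep -q '"status":"healthy"'
}

# body=$(curl -s http://harbor.stars.com/api/v2.0/health)
# harbor_healthy "$body" && echo "all Harbor components are up"
```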

image.png

2.5 Accessing Harbor from a Browser

image.png
image.png

3. Pushing and Pulling Images

3.1 Pushing Images

3.1.1 Creating a Project for the Images

  Harbor is an enterprise private registry. If a project's access level is set to public, anyone inside the company can pull its images directly, while pushing still requires logging in; if the project is not public, both pushing and pulling require login. A storage quota can also be set here according to actual business needs; setting it to -1 means unlimited storage.
image.png
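The same project can also be created through Harbor's REST API rather than the web UI. A sketch, assuming the Harbor v2.0 API's POST /api/v2.0/projects endpoint and the admin password configured earlier (`project_payload` is a hypothetical helper):

```shell
#!/bin/sh
# Build the JSON body for Harbor's create-project API.
project_payload() {
  # $1: project name, $2: public ("true"/"false"), $3: storage limit in bytes (-1 = unlimited)
  printf '{"project_name":"%s","metadata":{"public":"%s"},"storage_limit":%s}' "$1" "$2" "$3"
}

# curl -s -u admin:wm521314 -H 'Content-Type: application/json' \
#      -X POST http://harbor.stars.com/api/v2.0/projects \
#      -d "$(project_payload zg-test true -1)"
```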

3.1.2 Editing the Local hosts File and Trusting the Registry

  Do this on your own host and on every other Docker host that will use the registry; the configuration is identical, so I only demonstrate it on one host.

root@node1:~# vim /etc/hosts    # add the following line
10.0.0.100 harbor.stars.com
root@node1:~# tee /etc/docker/daemon.json <<-'EOF'
> {                    
>   "registry-mirrors": ["https://c51gf9he.mirror.aliyuncs.com"],
>   "insecure-registries": ["harbor.stars.com"]
> }
> EOF
{                    
  "registry-mirrors": ["https://c51gf9he.mirror.aliyuncs.com"],
  "insecure-registries": ["harbor.stars.com"]
}
root@node1:~# systemctl daemon-reload 
root@node1:~# systemctl restart docker

3.1.3 Logging in to the Registry

root@node1:~# docker login harbor.stars.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

3.1.4 Pushing Images

  Push an image from a Docker host to the Harbor server; here I push from the Harbor server itself. Before pushing, the local image has to be re-tagged for the Harbor server. Earlier I created a project named "zg-test", so the tag must be rewritten into the form harbor.stars.com/zg-test/xxx; after re-tagging, docker push uploads the local image to the Harbor server.
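The re-tagging convention just described can be captured in a tiny helper; a sketch (`harbor_ref` is a name invented here for illustration):

```shell
#!/bin/sh
# Rewrite a local image reference into the registry/project/name:tag form.
harbor_ref() {
  # $1: registry host, $2: project, $3: local image reference (name:tag)
  printf '%s/%s/%s' "$1" "$2" "$3"
}

# docker tag alpine:latest "$(harbor_ref harbor.stars.com zg-test alpine:latest)"
# docker push "$(harbor_ref harbor.stars.com zg-test alpine:latest)"
```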
image.png
Next, push another image that I previously built with a Dockerfile; that one was already given the right tag at build time.
image.png
With both images pushed, log in to the Harbor registry, open the project, and check that the uploaded images are there.
image.png

3.2 Pulling Images

3.2.1 Configuring hosts Resolution and Trusting the Registry

root@node2:~# vim /etc/hosts    # add the following line
10.0.0.100 harbor.stars.com
root@node2:~# tee /etc/docker/daemon.json <<-'EOF'
> {                    
>   "registry-mirrors": ["https://c51gf9he.mirror.aliyuncs.com"],
>   "insecure-registries": ["harbor.stars.com"]
> }
> EOF
{                    
  "registry-mirrors": ["https://c51gf9he.mirror.aliyuncs.com"],
  "insecure-registries": ["harbor.stars.com"]
}
root@node2:~# systemctl daemon-reload 
root@node2:~# systemctl restart docker

3.2.2 Pulling Images

  If the project is public, pulling does not require login. The exact pull command for an image can be copied from its page in the web UI, and if you already know the tag you can pull it directly.
image.png
image.png
image.png

4. Implementing Harbor High Availability

  The two main approaches to Harbor high availability are shared storage and image replication. Here I demonstrate how to achieve high availability with image replication.

4.1 Shared Storage

  With shared storage, a load balancer sits in front of the nodes. When a Docker host pushes or pulls an image, the load balancer's scheduling algorithm routes the request to one of the Harbor nodes. The Harbor services themselves do not store the images; all images live on the shared storage, so every pull is ultimately served from that shared storage.
image.png

4.2 Image Replication

  With image replication, images are stored locally on each node. Harbor supports policy-based replication of Docker images, a feature similar to MySQL master-slave replication, which makes it possible to synchronize images across data centers and runtime environments. It also comes with a management UI, which greatly simplifies day-to-day image management.
image.png

4.3 High Availability Based on Image Replication

4.3.1 Environment

IP address  Hostname                 Role
10.0.0.101  docker-test.stars.org    Docker host for testing pushes and pulls
10.0.0.102  haproxy.stars.org        load balancer
10.0.0.103  harbor-master.stars.org  Harbor master node
10.0.0.104  harbor-slave.stars.org   Harbor slave node

4.3.2 Testing Harbor Replication

The installation of these two Harbor servers is omitted here; refer to the Harbor installation steps earlier in this article.

4.3.2.1 Creating the Project

The project name must be the same on both Harbor servers; here I create a project named test on each.
Master node:
image.png
Slave node:
image.png

4.3.2.2 Creating a Registry Endpoint

Master node:
image.png
Slave node:
image.png
A few notes on the "new registry endpoint" form:
Provider: defaults to Harbor and can be left alone.
Endpoint name, endpoint URL, access ID, access secret: fill in the details of the target Harbor server.
Verify remote certificate: leave this unchecked when using plain HTTP or a self-signed certificate.
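The fields in this form correspond to Harbor's registries API (POST /api/v2.0/registries in the v2.0 API). A sketch of the request body for an endpoint on the master that points at the slave; the access_secret placeholder stands for the target node's admin password:

```json
{
  "name": "harbor-slave",
  "type": "harbor",
  "url": "http://10.0.0.104",
  "credential": {
    "type": "basic",
    "access_key": "admin",
    "access_secret": "<target-admin-password>"
  },
  "insecure": true
}
```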

4.3.2.3 Creating a Replication Rule

Master node:
image.png
Slave node:
image.png

4.3.2.4 Testing Image Replication

Log in to the harbor-master node from the Docker test host and push an image; the registry must first be trusted locally
root@docker-test:~# vim /etc/docker/daemon.json
{                    
  "registry-mirrors": ["https://c51gf9he.mirror.aliyuncs.com"],
  "insecure-registries": ["harbor.stars.com","10.0.0.103"]
}
root@docker-test:~# systemctl daemon-reload
root@docker-test:~# systemctl restart docker

Log in and push the image
root@docker-test:~# docker login 10.0.0.103
Username: admin   
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
root@docker-test:~# docker images
REPOSITORY                                   TAG                 IMAGE ID            CREATED             SIZE
harbor.stars.com/zg-test/ubuntu-18.04-base   v1                  b38fec40a658        19 hours ago        427MB
alpine                                       latest              c059bfaa849c        8 months ago        5.59MB
harbor.stars.com/zg-test/alpine-test         latest              c059bfaa849c        8 months ago        5.59MB
ubuntu                                       18.04               5a214d77f5d7        10 months ago       63.1MB
centos                                       7.9.2009            eeb6ee3f44bd        10 months ago       204MB
root@docker-test:~# docker tag harbor.stars.com/zg-test/ubuntu-18.04-base:v1 10.0.0.103/test/ubuntu-18.04-base:v1
root@docker-test:~# docker push 10.0.0.103/test/ubuntu-18.04-base:v1
The push refers to repository [10.0.0.103/test/ubuntu-18.04-base]
d3c7884eca2e: Pushed 
58403463aef8: Pushed 
613b0767e6b2: Pushed 
824bf068fd3d: Pushed 
v1: digest: sha256:da75a73fad6eb6bbb1e3d6542fda5c102d99bcc25e173f675698dcc374799b5f size: 1157

Check the images in a browser

On the harbor-master node:
image.png
On the harbor-slave node:
image.png
At this point there is a replication task under replication management, triggered by the push event. If you then look at the image on the slave server, the pull command it shows uses the registry address 10.0.0.104.
image.png
image.png

4.3.3 Implementing High Availability

4.3.3.1 Configuring the Load Balancer

Install the haproxy service
root@haproxy:~# apt -y install haproxy
Edit the haproxy configuration file and restart the service
root@haproxy:~# vim /etc/haproxy/haproxy.cfg 
root@haproxy:~# tail -n 6 /etc/haproxy/haproxy.cfg
listen harbor_test_80
    bind 10.0.0.102:80
    mode tcp
    balance source
    server harbor-master 10.0.0.103:80 check inter 3000 fall 2 rise 5
    server harbor-slave 10.0.0.104:80 check inter 3000 fall 2 rise 5
root@haproxy:~# systemctl restart haproxy.service
root@haproxy:~# lsof -i:80
COMMAND  PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
haproxy 2336 haproxy    7u  IPv4  60619      0t0  TCP 10.0.0.102:http (LISTEN)
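Before restarting haproxy it is worth sanity-checking the edited configuration; `haproxy -c -f /etc/haproxy/haproxy.cfg` does a full syntax check, and a grep sketch like the following (`check_lb_cfg` is a made-up helper) at least confirms both backends are present:

```shell
#!/bin/sh
# Rough pre-restart check on the listen block above: TCP mode set and
# both Harbor backends present in the config file.
check_lb_cfg() {
  grep -q 'mode tcp' "$1" &&
  grep -q 'server harbor-master' "$1" &&
  grep -q 'server harbor-slave' "$1"
}

# check_lb_cfg /etc/haproxy/haproxy.cfg && systemctl restart haproxy.service
```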
4.3.3.2 Testing Harbor High Availability

Trust the registry locally and log in
root@docker-test:~# vim /etc/docker/daemon.json
{                    
  "registry-mirrors": ["https://c51gf9he.mirror.aliyuncs.com"],
  "insecure-registries": ["harbor.stars.com","10.0.0.102"]
}
root@docker-test:~# systemctl restart docker
root@docker-test:~# docker login 10.0.0.102
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Re-tag the image and push it
root@docker-test:~# docker images
REPOSITORY                                   TAG                 IMAGE ID            CREATED             SIZE
10.0.0.103/test/ubuntu-18.04-base            v1                  b38fec40a658        20 hours ago        427MB
harbor.stars.com/zg-test/ubuntu-18.04-base   v1                  b38fec40a658        20 hours ago        427MB
alpine                                       latest              c059bfaa849c        8 months ago        5.59MB
harbor.stars.com/zg-test/alpine-test         latest              c059bfaa849c        8 months ago        5.59MB
ubuntu                                       18.04               5a214d77f5d7        10 months ago       63.1MB
centos                                       7.9.2009            eeb6ee3f44bd        10 months ago       204MB
root@docker-test:~# docker tag alpine:latest 10.0.0.102/test/alpine:v1
root@docker-test:~# docker push 10.0.0.102/test/alpine:v1
The push refers to repository [10.0.0.102/test/alpine]
8d3ac3489996: Pushed 
v1: digest: sha256:e7d88de73db3d3fd9b2d63aa7f447a10fd0220b7cbf39803c803f2af9ba256b3 size: 528

Verify in a browser that the image was pushed

image.png
image.png

Pull the image

root@docker-test:~# docker pull 10.0.0.102/test/alpine:v1
v1: Pulling from test/alpine
Digest: sha256:e7d88de73db3d3fd9b2d63aa7f447a10fd0220b7cbf39803c803f2af9ba256b3
Status: Image is up to date for 10.0.0.102/test/alpine:v1
10.0.0.102/test/alpine:v1

image.png
The screenshot above shows that this pull was served by harbor-slave.
Test failover
The haproxy configuration above uses source-address hashing (balance source), so by default requests from the same client address always go to the same server. The pull just went to harbor-slave, so this client would keep hitting harbor-slave; now I stop the Harbor services on harbor-slave and check whether requests fail over to harbor-master.
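The stickiness and failover behaviour of `balance source` can be illustrated with a toy model: the client address is hashed over the list of servers that are currently up, so the mapping only changes when that list changes (this is a sketch, not HAProxy's actual hash):

```shell
#!/bin/sh
# Toy model of source-hash scheduling: same client + same up-server list
# -> same backend; remove a server and the client re-hashes to a survivor.
pick_server() {
  ip=$1; shift                                 # remaining args: servers currently up
  if [ $# -eq 0 ]; then return 1; fi
  h=$(( $(printf '%s' "$ip" | tr '.' '+') ))   # crude hash: sum of the IP octets
  idx=$(( h % $# + 1 ))
  eval "printf '%s\n' \"\${${idx}}\""
}

# Both nodes up: 10.0.0.101 always lands on the same backend.
pick_server 10.0.0.101 harbor-master harbor-slave
# One node down: the same client is re-hashed onto the remaining backend.
pick_server 10.0.0.101 harbor-master
```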

root@harbor-slave:/opt/harbor# docker-compose stop
Stopping nginx             ... done
Stopping harbor-jobservice ... done
Stopping harbor-core       ... done
Stopping registry          ... done
Stopping harbor-db         ... done
Stopping redis             ... done
Stopping harbor-portal     ... done
Stopping registryctl       ... done
Stopping harbor-log        ... done

root@docker-test:~# docker pull 10.0.0.102/test/ubuntu-18.04-base:v1
v1: Pulling from test/ubuntu-18.04-base
Digest: sha256:da75a73fad6eb6bbb1e3d6542fda5c102d99bcc25e173f675698dcc374799b5f
Status: Downloaded newer image for 10.0.0.102/test/ubuntu-18.04-base:v1
10.0.0.102/test/ubuntu-18.04-base:v1
root@docker-test:~# docker images
REPOSITORY                                   TAG                 IMAGE ID            CREATED             SIZE
10.0.0.102/test/ubuntu-18.04-base            v1                  b38fec40a658        21 hours ago        427MB
10.0.0.103/test/ubuntu-18.04-base            v1                  b38fec40a658        21 hours ago        427MB
harbor.stars.com/zg-test/ubuntu-18.04-base   v1                  b38fec40a658        21 hours ago        427MB
alpine                                       latest              c059bfaa849c        8 months ago        5.59MB
harbor.stars.com/zg-test/alpine-test         latest              c059bfaa849c        8 months ago        5.59MB
10.0.0.102/test/alpine                       v1                  c059bfaa849c        8 months ago        5.59MB
ubuntu                                       18.04               5a214d77f5d7        10 months ago       63.1MB
centos                                       7.9.2009            eeb6ee3f44bd        10 months ago       204MB

image.png
As the output and screenshot above show, high availability works: when one server behind the load balancer goes down, requests are proxied to the other server.
