1. Fabric Overview
1.1 Building a Fabric network
- Generate node certificates
# 1. Write the organization config file. It declares how many nodes and users
#    each organization has, and the access address (domain name) of each node.
#    Conventionally named crypto-config.yaml
$ cryptogen generate --config=xxx.yaml
- Generate the genesis block and channel files
  - Write the config file: configtx.yaml
    - Organization information
      - name
      - ID
      - MSP
      - anchor peer
    - Orderer settings
      - ordering algorithm (consensus mechanism)
      - addresses of the orderer servers
      - how blocks are generated
    - Profiles: an overview of the organization relationships
      - all the information about the current organizations -> used to generate the genesis block file
      - channel information -> used to generate the channel file or an anchor-peer update file
  - Generate the files with the configtxgen command
$ configtxgen -profile [a profile name under configtx.yaml -> Profiles] -outputxxxx
- Genesis block file: used by the orderer nodes
ORDERER_GENERAL_GENESISMETHOD=file
ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- Channel file: used by a client that can operate a peer node to create the channel, which produces a file named
<channel-name>.block
- Write the config file for the orderer node
  - Write the config file
# docker-compose.yaml
  - Start the Docker container
$ docker-compose up -d
  - Check
$ docker-compose ps
- Write the config file for the peer nodes
# docker-compose.yaml - two services: peer and cli
  - Start the containers
$ docker-compose up -d
  - Check
$ docker-compose ps
  - Enter the client container
$ docker exec -it cli bash
    - Create the channel
    - Join the current node to the channel
    - Install the chaincode
    - Instantiate the chaincode -> only needs to be done once
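The four steps above map onto `peer` CLI commands run inside the cli container. The sketch below is a dry run (each command is prefixed with `echo` so it can be read without a live network); the channel name, chaincode name, and paths are placeholders, not values from this document:

```shell
# Dry-run sketch of the channel/chaincode steps (placeholder names throughout)
ORDERER=orderer.example.com:7050
CHANNEL=mychannel
echo peer channel create -o $ORDERER -c $CHANNEL -f ./channel.tx
echo peer channel join -b $CHANNEL.block
echo peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/example
echo peer chaincode instantiate -o $ORDERER -C $CHANNEL -n mycc -v 1.0 -c '{"Args":["init"]}'
```

Removing the `echo` prefixes gives the commands actually issued from the cli container.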
1.2 Reading the network diagram
- Client
  - Connecting to a peer requires a user identity; the client can connect to peer nodes in its own organization
  - The client initiates a transaction
    - The transaction proposal is sent to every endorsing node
    - The endorsing nodes simulate the transaction
    - The endorsing nodes send their results back to the client
    - If none of the proposal responses show a problem, the client submits the transaction to the orderer node
    - The orderer node packages the transactions
    - The leader node syncs the packaged data to its own organization
    - The committing nodes of the organization write the packaged data into the blockchain
- Fabric-ca-server
  - Lets you create users dynamically
  - The network can run without this role
- Organization
  - peer nodes -> store the ledger
  - users
- Orderer node
  - Orders transactions
    - Solves the double-spend problem
  - Packages transactions
  - Configured through configtx.yaml
- peer node
  - Endorsing node
    - Simulates the transaction and returns the result to the client
    - Chosen by the client: whichever node the client asks to simulate the transaction acts as an endorsing node
  - Committing node
    - Appends the data packaged by the orderer node to the blockchain
    - Every peer node has the ability to commit data
  - Leader node
    - Communicates directly with the orderer node
      - Fetches the packaged data from the orderer node
      - Syncs the data to the other nodes in its organization
    - Only one per organization
      - Can be designated manually
      - Or elected by the Fabric framework -> recommended
  - Anchor node
    - Represents its organization when communicating with other organizations
    - Only one
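The client-side check in the flow above ("if none of the proposal responses show a problem, submit to the orderer") boils down to comparing the endorsement results. A toy illustration with made-up result strings, assuming `sha256sum` is available:

```shell
# Toy check: two simulated endorsement results (made-up JSON) are hashed and
# compared; only when they match would the client forward the transaction.
r1='{"key":"a","value":"1"}'   # simulated result from endorser 1
r2='{"key":"a","value":"1"}'   # simulated result from endorser 2
h1=$(printf '%s' "$r1" | sha256sum | cut -d' ' -f1)
h2=$(printf '%s' "$r2" | sha256sum | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "proposals consistent: submit to orderer"
```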
2. Consensus mechanisms in Fabric
2.1 Solo
2.2 Kafka
3. Kafka cluster
3.1 Generate node certificates
- Config file
3.2 Generate the genesis block and channel files
- Write the config file: configtx.yaml
---
################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/example.com/msp
    - &OrgGo
        Name: OrgGoMSP
        ID: OrgGoMSP
        MSPDir: crypto-config/peerOrganizations/orggo.example.com/msp
        AnchorPeers:
            - Host: peer0.orggo.example.com
              Port: 7051
    - &OrgCpp
        Name: OrgCppMSP
        ID: OrgCppMSP
        MSPDir: crypto-config/peerOrganizations/orgcpp.example.com/msp
        AnchorPeers:
            - Host: peer0.orgcpp.example.com
              Port: 7051
################################################################################
#
#   SECTION: Capabilities
#
################################################################################
Capabilities:
    Global: &ChannelCapabilities
        V1_1: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_2: true
################################################################################
#
#   SECTION: Application
#
################################################################################
Application: &ApplicationDefaults
    Organizations:
################################################################################
#
#   SECTION: Orderer
#
################################################################################
Orderer: &OrdererDefaults
    # Available types are "solo" and "kafka"
    OrdererType: kafka
    Addresses: # addresses of the orderer nodes
        - orderer0.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            - 192.168.247.201:9092
            - 192.168.247.202:9092
            - 192.168.247.203:9092
            - 192.168.247.204:9092
    Organizations:
################################################################################
#
#   Profile
#
################################################################################
Profiles:
    TwoOrgsOrdererGenesis:
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
            Capabilities:
                <<: *OrdererCapabilities
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *OrgGo
                    - *OrgCpp
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *OrgGo
                - *OrgCpp
            Capabilities:
                <<: *ApplicationCapabilities
- Generate the genesis block and the channel file from configtx.yaml
# Generate the genesis block
$ configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./genesis.block
# Generate the channel file
$ configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel.tx -channelID testchannel
3.3 Configuring the ZooKeeper servers
- How to configure them and how to write the config files
- Writing the config files
- zookeeper1
# zookeeper1.yaml
version: '2'
services:
  zookeeper1:                    # service name, your choice
    container_name: zookeeper1   # container name, your choice
    hostname: zookeeper1         # hostname, your choice; must correspond to an IP address
    image: hyperledger/fabric-zookeeper:latest
    restart: always              # set to always
    environment:
      # the ID must be unique within the ensemble, with a value between 1 and 255
      - ZOO_MY_ID=1
      # server.x=hostname:port1:port2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
- zookeeper2
# zookeeper2.yaml
version: '2'
services:
  zookeeper2:                    # service name, your choice
    container_name: zookeeper2   # container name, your choice
    hostname: zookeeper2         # hostname, your choice; must correspond to an IP address
    image: hyperledger/fabric-zookeeper:latest
    restart: always              # set to always
    environment:
      # the ID must be unique within the ensemble, with a value between 1 and 255
      - ZOO_MY_ID=2
      # server.x=hostname:port1:port2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
- zookeeper3
# zookeeper3.yaml
version: '2'
services:
  zookeeper3:                    # service name, your choice
    container_name: zookeeper3   # container name, your choice
    hostname: zookeeper3         # hostname, your choice; must correspond to an IP address
    image: hyperledger/fabric-zookeeper:latest
    restart: always              # set to always
    environment:
      # the ID must be unique within the ensemble, with a value between 1 and 255
      - ZOO_MY_ID=3
      # server.x=hostname:port1:port2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
3.4 Kafka cluster
- Config files
- kafka1
# kafka1.yaml
version: '2'
services:
  kafka1:
    container_name: kafka1
    hostname: kafka1
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600
      # 100 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
- kafka2
# kafka2.yaml
version: '2'
services:
  kafka2:
    container_name: kafka2
    hostname: kafka2
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=2
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600
      # 100 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
- kafka3
# kafka3.yaml
version: '2'
services:
  kafka3:
    container_name: kafka3
    hostname: kafka3
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=3
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600
      # 100 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
- kafka4
# kafka4.yaml
version: '2'
services:
  kafka4:
    container_name: kafka4
    hostname: kafka4
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=4
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600
      # 100 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
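The message-size limits in the broker configs above are expressed as raw byte counts. A quick arithmetic check confirms that the `KAFKA_MESSAGE_MAX_BYTES` / `KAFKA_REPLICA_FETCH_MAX_BYTES` value really is 100 MiB:

```shell
# 100 MiB expressed in bytes, matching KAFKA_MESSAGE_MAX_BYTES=104857600
echo $(( 100 * 1024 * 1024 ))
```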
4. Orderer cluster
4.1 Related configuration options
4.2 Orderer cluster configuration
- orderer0
# orderer0.yaml
version: '2'
services:
  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP # must match the MSP ID in configtx.yaml
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - aberic
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
- orderer1
# orderer1.yaml
version: '2'
services:
  orderer1.example.com:
    container_name: orderer1.example.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - aberic
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
- orderer2
# orderer2.yaml
version: '2'
services:
  orderer2.example.com:
    container_name: orderer2.example.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - aberic
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
5. Starting the clusters
5.1 Starting the ZooKeeper cluster
- The first ZooKeeper server
# 1. Go to this node's working directory (created at the beginning), e.g. ~/kafka
$ cd ~/kafka
# 2. Deploy the prepared config file zookeeper1.yaml to ~/kafka on this host
# 3. Start the container with docker-compose
$ docker-compose -f zookeeper1.yaml up -d
- The second ZooKeeper server
# 1. Go to this node's working directory (created at the beginning), e.g. ~/kafka
$ cd ~/kafka
# 2. Deploy the prepared config file zookeeper2.yaml to ~/kafka on this host
# 3. Start the container with docker-compose
$ docker-compose -f zookeeper2.yaml up -d
- The third ZooKeeper server
# 1. Go to this node's working directory (created at the beginning), e.g. ~/kafka
$ cd ~/kafka
# 2. Deploy the prepared config file zookeeper3.yaml to ~/kafka on this host
# 3. Start the container with docker-compose
$ docker-compose -f zookeeper3.yaml up -d
5.2 Starting the Kafka cluster
- kafka1
# 1. Go to this node's working directory (created at the beginning), e.g. ~/kafka
$ cd ~/kafka
# 2. Deploy the prepared config file kafka1.yaml to ~/kafka on this host
# 3. Start the container with docker-compose
$ docker-compose -f kafka1.yaml up -d
$ docker-compose -f kafka1.yaml ps
- Start kafka2, kafka3, and kafka4 the same way
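Since the remaining brokers follow the same procedure, a small loop makes the repetition explicit. This is a dry run: each command is printed via `echo` (drop the `echo` on the target hosts, where the respective yaml file must already be in `~/kafka`):

```shell
# Dry-run: print the start command for each remaining broker
for i in 2 3 4; do
  echo docker-compose -f kafka$i.yaml up -d
done
```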
5.3 Starting the orderer cluster
- orderer0
# 1. Go to this node's working directory (created at the beginning), e.g. ~/kafka
$ cd ~/kafka
# 2. Deploy the prepared config file to ~/kafka on this host
# 3. Deploy the certificates and the genesis block generated earlier to the orderer0 host:
#    - copy the crypto-config directory generated in section 3.1 into the current directory
#    - copy genesis.block into the current directory
#    - adjust the volume-mount paths in the config file to match these locations
# 4. Start the container with docker-compose
$ docker-compose -f orderer0.yaml up -d
$ docker-compose -f orderer0.yaml ps
- orderer1
# 1. Go to this node's working directory (created at the beginning), e.g. ~/kafka
$ cd ~/kafka
# 2. Deploy the prepared config file to ~/kafka on this host
# 3. Deploy the certificates and the genesis block generated earlier to the orderer1 host:
#    - copy the crypto-config directory generated in section 3.1 into the current directory
#    - copy genesis.block into the current directory
#    - adjust the volume-mount paths in the config file to match these locations
# 4. Start the container with docker-compose
$ docker-compose -f orderer1.yaml up -d
$ docker-compose -f orderer1.yaml ps
- orderer2
# 1. Go to this node's working directory (created at the beginning), e.g. ~/kafka
$ cd ~/kafka
# 2. Deploy the prepared config file to ~/kafka on this host
# 3. Deploy the certificates and the genesis block generated earlier to the orderer2 host:
#    - copy the crypto-config directory generated in section 3.1 into the current directory
#    - copy genesis.block into the current directory
#    - adjust the volume-mount paths in the config file to match these locations
# 4. Start the container with docker-compose
$ docker-compose -f orderer2.yaml up -d
$ docker-compose -f orderer2.yaml ps
Kafka Cluster Deployment
1. Preparation
Name | IP address | Hostname | Organization |
---|---|---|---|
zk1 | 192.168.247.101 | zookeeper1 | |
zk2 | 192.168.247.102 | zookeeper2 | |
zk3 | 192.168.247.103 | zookeeper3 | |
kafka1 | 192.168.247.201 | kafka1 | |
kafka2 | 192.168.247.202 | kafka2 | |
kafka3 | 192.168.247.203 | kafka3 | |
kafka4 | 192.168.247.204 | kafka4 | |
orderer0 | 192.168.247.91 | orderer0.test.com | |
orderer1 | 192.168.247.92 | orderer1.test.com | |
orderer2 | 192.168.247.93 | orderer2.test.com | |
peer0 | 192.168.247.81 | peer0.orggo.test.com | OrgGo |
peer0 | 192.168.247.82 | peer0.orgcpp.test.com | OrgCpp |
To keep the whole cluster working correctly, each node needs a working directory, and the working directory path must be the same on every node.
# Create the working directory under the home directory of each of the nodes above:
$ mkdir ~/kafka
2. Generate the certificate files
2.1 Write the config file
# crypto-config.yaml
OrdererOrgs:
    - Name: Orderer
      Domain: test.com
      Specs:
        - Hostname: orderer0 # 1st orderer node: orderer0.test.com
        - Hostname: orderer1 # 2nd orderer node: orderer1.test.com
        - Hostname: orderer2 # 3rd orderer node: orderer2.test.com
PeerOrgs:
    - Name: OrgGo
      Domain: orggo.test.com
      Template:
        Count: 2 # the Go organization has two peer nodes
      Users:
        Count: 1
    - Name: OrgCpp
      Domain: orgcpp.test.com
      Template:
        Count: 2 # the Cpp organization has two peer nodes
      Users:
        Count: 1
2.2 Generate the certificates
$ cryptogen generate --config=crypto-config.yaml
$ tree ./ -L 1
./
├── `crypto-config` -> directory holding the certificates
└── crypto-config.yaml
3. Generate the genesis block and channel files
3.1 Write the config file
---
################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/test.com/msp
    - &go_org
        Name: OrgGoMSP
        ID: OrgGoMSP
        MSPDir: crypto-config/peerOrganizations/orggo.test.com/msp
        AnchorPeers:
            - Host: peer0.orggo.test.com
              Port: 7051
    - &cpp_org
        Name: OrgCppMSP
        ID: OrgCppMSP
        MSPDir: crypto-config/peerOrganizations/orgcpp.test.com/msp
        AnchorPeers:
            - Host: peer0.orgcpp.test.com
              Port: 7051
################################################################################
#
#   SECTION: Capabilities
#
################################################################################
Capabilities:
    Global: &ChannelCapabilities
        V1_1: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_2: true
################################################################################
#
#   SECTION: Application
#
################################################################################
Application: &ApplicationDefaults
    Organizations:
################################################################################
#
#   SECTION: Orderer
#
################################################################################
Orderer: &OrdererDefaults
    # Available types are "solo" and "kafka"
    OrdererType: kafka
    Addresses:
        # addresses of the orderer servers
        - orderer0.test.com:7050
        - orderer1.test.com:7050
        - orderer2.test.com:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            # addresses of the kafka servers
            - 192.168.247.201:9092
            - 192.168.247.202:9092
            - 192.168.247.203:9092
            - 192.168.247.204:9092
    Organizations:
################################################################################
#
#   Profile
#
################################################################################
Profiles:
    OrgsOrdererGenesis:
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
            Capabilities:
                <<: *OrdererCapabilities
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *go_org
                    - *cpp_org
    OrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *go_org
                - *cpp_org
            Capabilities:
                <<: *ApplicationCapabilities
3.2 Generate the genesis block and channel files
- Generate the genesis block file
# First create a directory, channel-artifacts, to hold the generated files,
# so that the paths match the config-file templates used later on
$ mkdir channel-artifacts
# Generate the genesis block file
$ configtxgen -profile OrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
- Generate the channel file
# Generate the channel file
$ configtxgen -profile OrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID testchannel
4. ZooKeeper setup
4.1 Basic concepts
- How ZooKeeper operates
- The number of servers in a ZooKeeper ensemble
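ZooKeeper stays available only while a majority (a quorum) of the ensemble is up: for N servers the quorum is floor(N/2)+1. For the 3-server ensemble used in this deployment, that gives a quorum of 2, so exactly one server may fail. A quick check:

```shell
# Quorum math for a ZooKeeper ensemble of N servers
N=3
QUORUM=$(( N / 2 + 1 ))
echo "quorum=$QUORUM tolerated_failures=$(( N - QUORUM ))"
```

This is also why ensembles use an odd number of servers: 4 servers tolerate no more failures than 3 do.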
4.2 ZooKeeper config file template
- Config file template
version: '2'
services:
  zookeeper1:                    # service name, your choice
    container_name: zookeeper1   # container name, your choice
    hostname: zookeeper1         # hostname, your choice; must correspond to an IP address
    image: hyperledger/fabric-zookeeper:latest
    restart: always              # set to always
    environment:
      # the ID must be unique within the ensemble, with a value between 1 and 255
      - ZOO_MY_ID=1
      # server.x=hostname:port1:port2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
- Explanation of the relevant options:
4.3 Per-node ZooKeeper configuration
zookeeper1 configuration
# zookeeper1.yaml
version: '2'
services:
  zookeeper1:
    container_name: zookeeper1
    hostname: zookeeper1
    image: hyperledger/fabric-zookeeper:latest
    restart: always
    environment:
      # the ID must be unique within the ensemble, with a value between 1 and 255
      - ZOO_MY_ID=1
      # server.x=[hostname]:nnnnn[:nnnnn]
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
zookeeper2 configuration
# zookeeper2.yaml
version: '2'
services:
  zookeeper2:
    container_name: zookeeper2
    hostname: zookeeper2
    image: hyperledger/fabric-zookeeper:latest
    restart: always
    environment:
      # the ID must be unique within the ensemble, with a value between 1 and 255
      - ZOO_MY_ID=2
      # server.x=[hostname]:nnnnn[:nnnnn]
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
zookeeper3 configuration
# zookeeper3.yaml
version: '2'
services:
  zookeeper3:
    container_name: zookeeper3
    hostname: zookeeper3
    image: hyperledger/fabric-zookeeper:latest
    restart: always
    environment:
      # the ID must be unique within the ensemble, with a value between 1 and 255
      - ZOO_MY_ID=3
      # server.x=[hostname]:nnnnn[:nnnnn]
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
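The three ZooKeeper files above differ only in the service/container/host name and `ZOO_MY_ID`, so they could be generated from a single template instead of hand-edited. A dry-run sketch, where `zk-template.yaml` with `zookeeperX` / `ZOO_MY_ID=X` placeholders is a hypothetical file, not one from this document:

```shell
# Dry-run: print the substitution command that would produce each node's file
for i in 1 2 3; do
  echo "sed 's/zookeeperX/zookeeper$i/g; s/ZOO_MY_ID=X/ZOO_MY_ID=$i/' zk-template.yaml > zookeeper$i.yaml"
done
```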
5. Kafka setup
5.1 Basic concepts
5.2 Kafka config file template
- Kafka config file template
version: '2'
services:
  kafka1:
    container_name: kafka1
    hostname: kafka1
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 99 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      # 99 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
- Explanation of the configuration options
5.3 Per-node Kafka configuration
kafka1 configuration
# kafka1.yaml
version: '2'
services:
  kafka1:
    container_name: kafka1
    hostname: kafka1
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
kafka2 configuration
# kafka2.yaml
version: '2'
services:
  kafka2:
    container_name: kafka2
    hostname: kafka2
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=2
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
kafka3 configuration
# kafka3.yaml
version: '2'
services:
  kafka3:
    container_name: kafka3
    hostname: kafka3
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=3
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
kafka4 configuration
# kafka4.yaml
version: '2'
services:
  kafka4:
    container_name: kafka4
    hostname: kafka4
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=4
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
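The fault tolerance of the broker configs above follows from `KAFKA_DEFAULT_REPLICATION_FACTOR=3` and `KAFKA_MIN_INSYNC_REPLICAS=2`: a partition keeps accepting writes while at most replication_factor minus min_insync_replicas of its replicas are down:

```shell
# Write-availability margin per partition for the settings used above
RF=3        # KAFKA_DEFAULT_REPLICATION_FACTOR
MIN_ISR=2   # KAFKA_MIN_INSYNC_REPLICAS
echo "broker failures tolerated per partition: $(( RF - MIN_ISR ))"
```

Disabling unclean leader election (`KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false`) complements this: a replica that fell out of sync can never become leader, so acknowledged writes are not lost.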
6. Orderer node setup
6.1 Orderer config file template
- Orderer config file template
version: '2'
services:
  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - aberic
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
- Explanation of the details
6.2 Per-node orderer configuration
orderer0 configuration
# orderer0.yaml
version: '2'
services:
  orderer0.test.com:
    container_name: orderer0.test.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=kafka_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.247.201:9092,192.168.247.202:9092,192.168.247.203:9092,192.168.247.204:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer0.test.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer0.test.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - kafka
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
orderer1 configuration
# orderer1.yaml
version: '2'
services:
  orderer1.test.com:
    container_name: orderer1.test.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=kafka_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.247.201:9092,192.168.247.202:9092,192.168.247.203:9092,192.168.247.204:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer1.test.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer1.test.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - kafka
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
orderer2配置
```yaml
# orderer2.yaml
version: '2'
services:
  orderer2.test.com:
    container_name: orderer2.test.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=kafka_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # TLS disabled in this example
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.247.201:9092,192.168.247.202:9092,192.168.247.203:9092,192.168.247.204:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer2.test.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer2.test.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - kafka
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
```
## 7. Starting the cluster
### 7.1 Starting the Zookeeper cluster
- zookeeper1: 192.168.247.101

```shell
$ cd ~/kafka
# Put the prepared zookeeper1.yaml in this directory, then start the container with docker-compose.
# You can omit the -d flag to watch the Zookeeper server start in the foreground.
$ docker-compose -f zookeeper1.yaml up
```
- zookeeper2: 192.168.247.102

```shell
$ cd ~/kafka
# Put the prepared zookeeper2.yaml in this directory, then start the container with docker-compose.
# You can omit the -d flag to watch the Zookeeper server start in the foreground.
$ docker-compose -f zookeeper2.yaml up
```
- zookeeper3: 192.168.247.103

```shell
$ cd ~/kafka
# Put the prepared zookeeper3.yaml in this directory, then start the container with docker-compose.
# You can omit the -d flag to watch the Zookeeper server start in the foreground.
$ docker-compose -f zookeeper3.yaml up
```
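Before starting Kafka it can be worth confirming that the ensemble actually formed. A minimal check, assuming the containers are named zookeeper1/zookeeper2/zookeeper3 (via `container_name` in their compose files) and that the image puts `zkServer.sh` on the PATH:

```shell
# Run on each Zookeeper host; exactly one node should report "Mode: leader",
# the others "Mode: follower".
$ docker exec zookeeper1 zkServer.sh status
```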
### 7.2 Starting the Kafka cluster
- kafka1: 192.168.247.201

```shell
$ cd ~/kafka
# Put the prepared kafka1.yaml in this directory, then start the container with docker-compose.
# You can omit the -d flag to watch the Kafka server start in the foreground.
$ docker-compose -f kafka1.yaml up
```
- kafka2: 192.168.247.202

```shell
$ cd ~/kafka
# Put the prepared kafka2.yaml in this directory, then start the container with docker-compose.
$ docker-compose -f kafka2.yaml up -d
```
- kafka3: 192.168.247.203

```shell
$ cd ~/kafka
# Put the prepared kafka3.yaml in this directory, then start the container with docker-compose.
$ docker-compose -f kafka3.yaml up -d
```
- kafka4: 192.168.247.204

```shell
$ cd ~/kafka
# Put the prepared kafka4.yaml in this directory, then start the container with docker-compose.
$ docker-compose -f kafka4.yaml up
```
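Once the brokers are up, you can confirm they all registered with Zookeeper by listing the broker ids from any Zookeeper node. This is a sketch, not a required step: it assumes `zkCli.sh` is available inside the Zookeeper container, and the ids returned depend on the broker id configured for each Kafka node:

```shell
# Expect one entry per broker, e.g. [1, 2, 3, 4] for a four-broker cluster
$ docker exec zookeeper1 zkCli.sh -server 127.0.0.1:2181 ls /brokers/ids
```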
### 7.3 Starting the Orderer cluster
- orderer0: 192.168.247.91

```shell
$ cd ~/kafka
# Assuming the certificates and the channel/genesis artifacts were generated on this
# orderer0 host, the kafka working directory should look like this:
$ tree ./ -L 1
./
├── channel-artifacts
├── configtx.yaml
├── crypto-config
└── crypto-config.yaml
# Put the prepared orderer0.yaml in this directory, then start the container with docker-compose.
$ docker-compose -f orderer0.yaml up -d
```
- orderer1: 192.168.247.92

```shell
# Copy the generated certificate directory and the channel/genesis artifacts
# into ~/kafka on this host.
$ cd ~/kafka
# Create the crypto-config subdirectory
$ mkdir crypto-config
# Remote copy (-r: the source is a directory, copy recursively)
$ scp -r itcast@192.168.247.91:/home/itcast/kafka/crypto-config/ordererOrganizations ./crypto-config
$ scp -r itcast@192.168.247.91:/home/itcast/kafka/channel-artifacts ./
# Put the prepared orderer1.yaml in this directory, then start the container with docker-compose.
$ docker-compose -f orderer1.yaml up -d
```
- orderer2: 192.168.247.93

```shell
# Copy the generated certificate directory and the channel/genesis artifacts
# into ~/kafka on this host.
$ cd ~/kafka
# Create the crypto-config subdirectory
$ mkdir crypto-config
# Remote copy (-r: the source is a directory, copy recursively)
$ scp -r itcast@192.168.247.91:/home/itcast/kafka/crypto-config/ordererOrganizations ./crypto-config
$ scp -r itcast@192.168.247.91:/home/itcast/kafka/channel-artifacts ./
# Put the prepared orderer2.yaml in this directory, then start the container with docker-compose.
$ docker-compose -f orderer2.yaml up -d
```
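If an orderer container starts but the network never becomes usable, the orderer log is the first place to look: it should show the Kafka brokers being reached rather than an endless retry loop. A quick check on any orderer host (container names as set above):

```shell
# Endless retry messages usually mean ORDERER_KAFKA_BROKERS or the
# extra_hosts entries do not match the real broker addresses.
$ docker logs orderer0.test.com 2>&1 | grep -i kafka | tail
```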
### 7.4 Starting the Peer cluster
# Solo multi-host, multi-node deployment
## 1. Preparation
All nodes are deployed on separate machines, one node per host, distributed as follows:
| Name | IP | Hostname | Organization |
|---|---|---|---|
| orderer | 192.168.247.129 | orderer.itcast.com | Orderer |
| peer0 | 192.168.247.141 | peer0.orggo.com | OrgGo |
| peer1 | 192.168.247.142 | peer1.orggo.com | OrgGo |
| peer0 | 192.168.247.131 | peer0.orgcpp.com | OrgCpp |
| peer1 | 192.168.247.145 | peer1.orgcpp.com | OrgCpp |
### 1.1 Preparation - creating the working directory
```shell
# All N hosts need a working directory with the same name; pick any name,
# but it must be identical on every host.
# 192.168.247.129
$ mkdir ~/testwork
# 192.168.247.141
$ mkdir ~/testwork
# 192.168.247.131
$ mkdir ~/testwork
# 192.168.247.142
$ mkdir ~/testwork
# 192.168.247.145
$ mkdir ~/testwork
```
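If the hosts are reachable over SSH, the directory creation above can be scripted instead of logging in to each machine. A minimal sketch (the `itcast` user and the host list come from this walkthrough; the script only prints the commands so they can be reviewed before piping them to `sh`):

```shell
# Hosts from the table above; every one needs an identical ~/testwork directory.
hosts="192.168.247.129 192.168.247.141 192.168.247.142 192.168.247.131 192.168.247.145"

# Build one ssh command per host; print for review, pipe to sh to execute.
cmds=$(for h in $hosts; do printf "ssh itcast@%s 'mkdir -p ~/testwork'\n" "$h"; done)
echo "$cmds"
```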
### 1.2 Generating organization, node, and user certificates
- Write the configuration file:

```yaml
# crypto-config.yaml -> the name can be changed; it is conventionally crypto-config.yaml
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: test.com
    Specs:
      - Hostname: orderer
PeerOrgs:
  # ---------------------------------------------------------------------------
  # Org1
  # ---------------------------------------------------------------------------
  - Name: OrgGo
    Domain: orggo.test.com
    EnableNodeOUs: false
    Template:
      Count: 2
    Users:
      Count: 1
  # ---------------------------------------------------------------------------
  # Org2: See "Org1" for full specification
  # ---------------------------------------------------------------------------
  - Name: OrgCpp
    Domain: orgcpp.test.com
    EnableNodeOUs: false
    Template:
      Count: 2
    Users:
      Count: 1
```
- Generate the certificates with `cryptogen`:

```shell
$ cryptogen generate --config=crypto-config.yaml
```
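If generation succeeds, `cryptogen` writes a `crypto-config` directory containing one subtree per organization (the domain names come from `crypto-config.yaml` above); a quick look at the top of the tree:

```shell
$ tree crypto-config -L 2
crypto-config
├── ordererOrganizations
│   └── test.com
└── peerOrganizations
    ├── orgcpp.test.com
    └── orggo.test.com
```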
### 1.3 Generating the channel file and the genesis block
- Write the configuration file, named `configtx.yaml`; this name is fixed and cannot be changed:

```yaml
# configtx.yaml -> the name must not be changed
---
################################################################################
#
#   Section: Organizations
#
################################################################################
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: ./crypto-config/ordererOrganizations/test.com/msp
    - &OrgGo
        Name: OrgGoMSP
        ID: OrgGoMSP
        MSPDir: ./crypto-config/peerOrganizations/orggo.test.com/msp
        AnchorPeers:
            - Host: peer0.orggo.test.com
              Port: 7051
    - &OrgCpp
        Name: OrgCppMSP
        ID: OrgCppMSP
        MSPDir: ./crypto-config/peerOrganizations/orgcpp.test.com/msp
        AnchorPeers:
            - Host: peer0.orgcpp.test.com
              Port: 7051
################################################################################
#
#   SECTION: Capabilities
#
################################################################################
Capabilities:
    Global: &ChannelCapabilities
        V1_1: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_2: true
################################################################################
#
#   SECTION: Application
#
################################################################################
Application: &ApplicationDefaults
    Organizations:
################################################################################
#
#   SECTION: Orderer
#
################################################################################
Orderer: &OrdererDefaults
    # Available types are "solo" and "kafka"
    OrdererType: solo
    Addresses:
        - orderer.test.com:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            - 127.0.0.1:9092
    Organizations:
################################################################################
#
#   Profile
#
################################################################################
Profiles:
    TwoOrgsOrdererGenesis:
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
            Capabilities:
                <<: *OrdererCapabilities
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *OrgGo
                    - *OrgCpp
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *OrgGo
                - *OrgCpp
            Capabilities:
                <<: *ApplicationCapabilities
```
- Generate the genesis block and the channel file with the `configtxgen` command:

```shell
# First create a channel-artifacts directory for the generated files,
# so the paths match the configuration templates used later.
$ mkdir channel-artifacts
# Generate the genesis block
$ configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
# Generate the channel file
$ configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID testchannel
```
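`configtxgen` can also decode the artifacts it just produced, which is a cheap way to confirm the right profile was used before copying the files to other machines; both commands print the decoded configuration as JSON:

```shell
$ configtxgen -inspectBlock ./channel-artifacts/genesis.block
$ configtxgen -inspectChannelCreateTx ./channel-artifacts/channel.tx
```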
## 2 Deploying the orderer node
### 2.1 Writing the configuration file
```yaml
# docker-compose.yaml
version: '2'
services:
  orderer.test.com:
    container_name: orderer.test.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=testwork_default
      - ORDERER_GENERAL_LOGLEVEL=INFO
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enable TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - testwork # use the name of the directory that contains this configuration file
    ports:
      - 7050:7050
```
### 2.2 Starting the orderer container
```shell
$ docker-compose up -d
Creating network "testwork_default" with the default driver
Creating orderer.test.com ... done
# Check whether the startup succeeded
$ docker-compose ps
      Name         Command   State           Ports
-----------------------------------------------------------
orderer.test.com   orderer   Up      0.0.0.0:7050->7050/tcp
```
## 3 Deploying the peer0.orggo node
### 3.1 Preparation
- Switch to the `peer0.orggo` host - `192.168.247.141`
- Enter the working directory:

```shell
$ cd ~/testwork
```
- Copy the files:

```shell
# Remote copy via scp
# -r : the source is a directory, copy recursively
# itcast@192.168.247.129: copy from 192.168.247.129, logging in as user itcast
# /home/itcast/testwork/channel-artifacts: the directory under user itcast on 192.168.247.129 to copy
# ./ : where the remote directory lands locally
$ scp -r itcast@192.168.247.129:/home/itcast/testwork/channel-artifacts ./
$ scp -r itcast@192.168.247.129:/home/itcast/testwork/crypto-config ./
# Check the result
$ tree ./ -L 1
.
├── channel-artifacts
└── crypto-config
```
### 3.2 Writing the configuration file
```yaml
# docker-compose.yaml
version: '2'
services:
  peer0.orggo.test.com:
    container_name: peer0.orggo.test.com
    image: hyperledger/fabric-peer:latest
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=testwork_default
      - CORE_LOGGING_LEVEL=INFO
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_LOCALMSPID=OrgGoMSP
      - CORE_PEER_ID=peer0.orggo.test.com
      - CORE_PEER_ADDRESS=peer0.orggo.test.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.orggo.test.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.orggo.test.com:7051
      # TLS
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/orggo.test.com/peers/peer0.orggo.test.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/orggo.test.com/peers/peer0.orggo.test.com/tls:/etc/hyperledger/fabric/tls
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    networks:
      default:
        aliases:
          - testwork
    ports:
      - 7051:7051
      - 7053:7053
    extra_hosts: # map domain names to IP addresses
      - "orderer.test.com:192.168.247.129"
      - "peer0.orgcpp.test.com:192.168.247.131"
  cli:
    container_name: cli
    image: hyperledger/fabric-tools:latest
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.orggo.test.com:7051
      - CORE_PEER_LOCALMSPID=OrgGoMSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/orggo.test.com/peers/peer0.orggo.test.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/orggo.test.com/peers/peer0.orggo.test.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/orggo.test.com/peers/peer0.orggo.test.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/orggo.test.com/users/Admin@orggo.test.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on: # start order
      - peer0.orggo.test.com
    networks:
      default:
        aliases:
          - testwork
    extra_hosts:
      - "orderer.test.com:192.168.247.129"
      - "peer0.orggo.test.com:192.168.247.141"
      - "peer0.orgcpp.test.com:192.168.247.131"
```
### 3.3 Starting the containers
- Start the containers:

```shell
$ docker-compose up -d
Creating network "testwork_default" with the default driver
Creating peer0.orggo.test.com ... done
Creating cli                  ... done
# Check the startup status
$ docker-compose ps
        Name                Command        State                        Ports
-----------------------------------------------------------------------------------------------
cli                    /bin/bash         Up
peer0.orggo.test.com   peer node start   Up      0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp
```
### 3.4 Operations on the peer0.orggo node
- Enter the client container:

```shell
$ docker exec -it cli bash
```
- Create the channel:

```shell
$ peer channel create -o orderer.test.com:7050 -c testchannel -f ./channel-artifacts/channel.tx --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/test.com/msp/tlscacerts/tlsca.test.com-cert.pem
$ ls
channel-artifacts  crypto  testchannel.block
# testchannel.block --> the generated channel block file
```
- Join the current node to the channel:

```shell
$ peer channel join -b testchannel.block
```
- Install the chaincode:

```shell
$ peer chaincode install -n testcc -v 1.0 -l golang -p github.com/chaincode
```
- Instantiate the chaincode (this only needs to be done once):

```shell
$ peer chaincode instantiate -o orderer.test.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/test.com/msp/tlscacerts/tlsca.test.com-cert.pem -C testchannel -n testcc -v 1.0 -l golang -c '{"Args":["init","a","100","b","200"]}' -P "AND ('OrgGoMSP.member', 'OrgCppMSP.member')"
```
- Query:

```shell
$ peer chaincode query -C testchannel -n testcc -c '{"Args":["query","a"]}'
$ peer chaincode query -C testchannel -n testcc -c '{"Args":["query","b"]}'
```
- Copy the generated channel block file `testchannel.block` from the cli container to the host:

```shell
# Exit from the client container back to the host
$ exit
# The copy must be done on the host
$ docker cp cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/testchannel.block ./
```
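After these steps the peer's state can be sanity-checked from inside the cli container. `peer channel list` shows the channels this peer has joined, and on newer 1.x releases `peer chaincode list` reports installed and instantiated chaincodes:

```shell
$ peer channel list                          # should include testchannel
$ peer chaincode list --installed            # should show testcc, version 1.0
$ peer chaincode list --instantiated -C testchannel
```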
## 4 Deploying the peer0.orgcpp node
### 4.1 Preparation
- Switch to the `peer0.orgcpp` host - `192.168.247.131`
- Enter the working directory:

```shell
$ cd ~/testwork
```
- Copy the files remotely:

```shell
# Copy the crypto-config directory from user zoro on 192.168.247.141 into the current directory
$ scp -r zoro@192.168.247.141:/home/zoro/testwork/crypto-config ./
# Copy the chaincode
$ scp -r zoro@192.168.247.141:/home/zoro/testwork/chaincode ./
# Copy the testchannel.block file from user zoro on 192.168.247.141 into the current directory
$ scp zoro@192.168.247.141:/home/zoro/testwork/testchannel.block ./
# Check the result
$ tree ./ -L 1
./
├── chaincode
├── crypto-config
└── testchannel.block
```
- For convenience, put the channel block file into the directory mounted by the client container:

```shell
# Create the directory
$ mkdir channel-artifacts
# Move the file
$ mv testchannel.block channel-artifacts/
```
### 4.2 Writing the configuration file
```yaml
# docker-compose.yaml
version: '2'
services:
  peer0.orgcpp.test.com:
    container_name: peer0.orgcpp.test.com
    image: hyperledger/fabric-peer:latest
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=testwork_default
      - CORE_LOGGING_LEVEL=INFO
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_LOCALMSPID=OrgCppMSP
      - CORE_PEER_ID=peer0.orgcpp.test.com
      - CORE_PEER_ADDRESS=peer0.orgcpp.test.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.orgcpp.test.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.orgcpp.test.com:7051
      # TLS
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/orgcpp.test.com/peers/peer0.orgcpp.test.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/orgcpp.test.com/peers/peer0.orgcpp.test.com/tls:/etc/hyperledger/fabric/tls
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    networks:
      default:
        aliases:
          - testwork
    ports:
      - 7051:7051
      - 7053:7053
    extra_hosts: # map domain names to IP addresses
      - "orderer.test.com:192.168.247.129"
      - "peer0.orggo.test.com:192.168.247.141"
  cli:
    container_name: cli
    image: hyperledger/fabric-tools:latest
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.orgcpp.test.com:7051
      - CORE_PEER_LOCALMSPID=OrgCppMSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/orgcpp.test.com/peers/peer0.orgcpp.test.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/orgcpp.test.com/peers/peer0.orgcpp.test.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/orgcpp.test.com/peers/peer0.orgcpp.test.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/orgcpp.test.com/users/Admin@orgcpp.test.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on: # start order
      - peer0.orgcpp.test.com
    networks:
      default:
        aliases:
          - testwork
    extra_hosts:
      - "orderer.test.com:192.168.247.129"
      - "peer0.orggo.test.com:192.168.247.141"
      - "peer0.orgcpp.test.com:192.168.247.131"
```
### 4.3 Starting the node
- Start the containers:

```shell
$ docker-compose up -d
Creating network "testwork_default" with the default driver
Creating peer0.orgcpp.test.com ... done
Creating cli                   ... done
# Check the startup status
$ docker-compose ps
        Name                 Command        State                        Ports
------------------------------------------------------------------------------------------------
cli                     /bin/bash         Up
peer0.orgcpp.test.com   peer node start   Up      0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp
```
### 4.4 Operations on the peer0.orgcpp node
- Enter the client used to operate this node:

```shell
$ docker exec -it cli bash
```
- Join the channel:

```shell
$ peer channel join -b ./channel-artifacts/testchannel.block
```
- Install the chaincode:

```shell
$ peer chaincode install -n testcc -v 1.0 -l golang -p github.com/chaincode
```
- Query:

```shell
$ peer chaincode query -C testchannel -n testcc -c '{"Args":["query","a"]}'
$ peer chaincode query -C testchannel -n testcc -c '{"Args":["query","b"]}'
```
- Transact:

```shell
# Transfer
$ peer chaincode invoke -o orderer.test.com:7050 -C testchannel -n testcc --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/test.com/orderers/orderer.test.com/msp/tlscacerts/tlsca.test.com-cert.pem --peerAddresses peer0.orggo.test.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/orggo.test.com/peers/peer0.orggo.test.com/tls/ca.crt --peerAddresses peer0.orgcpp.test.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/orgcpp.test.com/peers/peer0.orgcpp.test.com/tls/ca.crt -c '{"Args":["invoke","a","b","10"]}'
# Query
$ peer chaincode query -C testchannel -n testcc -c '{"Args":["query","a"]}'
$ peer chaincode query -C testchannel -n testcc -c '{"Args":["query","b"]}'
```
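Note why the invoke above passes two `--peerAddresses` options: the chaincode was instantiated with the policy `AND ('OrgGoMSP.member', 'OrgCppMSP.member')`, so a transaction must carry endorsements from a peer in each organization or validation will fail. To confirm this organization's peer sees the instantiated chaincode (newer 1.x peer CLI):

```shell
$ peer chaincode list --instantiated -C testchannel   # should list testcc, version 1.0
```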
## 5. Deploying the remaining nodes
The remaining peers (peer1.orggo at 192.168.247.142 and peer1.orgcpp at 192.168.247.145) are deployed the same way as peer0.orgcpp in section 4: copy the artifacts, adjust the hostnames and IPs in docker-compose.yaml, start the containers, join the channel, and install the chaincode.
## 6. Packaging the chaincode
- After installing the chaincode on the first peer through its client, package it:

```shell
$ peer chaincode package -n testcc -p github.com/chaincode -v 1.0 mycc.1.0.out
# -n: chaincode name
# -p: chaincode path
# -v: chaincode version
# mycc.1.0.out: the package file that is produced
```
- Copy the packaged chaincode out of the container:

```shell
$ docker cp cli:/xxxx/mycc.1.0.out ./
```
- Copy the resulting package file to the other peer nodes.
- Install the chaincode on the other peers through their clients:

```shell
$ peer chaincode install mycc.1.0.out
```