1. Environment
fabric 1.1.0
go 1.8.3 linux/amd64
Docker version 17.03.2-ce
2. Node deployment

| IP | Hostname |
| --- | --- |
| 172.20.6.145 | zookeeper1 |
| 172.20.6.151 | zookeeper2 |
| 172.20.6.147 | zookeeper3 |
| 172.20.6.146 | kafka1 |
| 172.20.6.144 | kafka2 |
| 172.20.6.198 | kafka4 |
| 172.20.6.207 | orderer0 |
| 172.20.6.181 | orderer1 |
| 172.20.6.203 | orderer2 |
| 172.20.6.122 | peer0Org1 |
| 172.20.6.192 | peer0Org2 |
3. Starting the Kafka cluster
The clusters must be started in dependency order, i.e. the underlying cluster comes up first: start the ZooKeeper cluster, then the Kafka cluster, and finally the orderer cluster. Once all of them are up, you can add business nodes (peers) to the network and begin normal operations.
4. Notes
The biggest difference between a multi-host and a single-host deployment is that every node's configuration needs an extra_hosts section. Nodes running on the same machine do not need extra_hosts entries for one another, but you do need to mind the dependencies between them, i.e. the depends_on property.
In addition, every configuration in the rest of this document abandons the inheritance scheme of the e2e sample and puts all parameters into a single file.
Every ports property can be written in the following format:
ports:
- "2181:2181"
5. Configuration files (my files can be downloaded at the bottom of this article)
A multi-host Kafka deployment mainly needs three kinds of files:
The first is the docker-compose YAML file, which is used to launch the Docker containers. The second and third are configtx.yaml and crypto-config.yaml, which are used to generate the channel configuration, the genesis block, and the certificates.
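The two YAML files are consumed roughly as follows; this is a sketch of what the sample's generateArtifacts.sh does, assuming the fabric 1.1.0 cryptogen/configtxgen binaries are on the PATH and using the profile names from the configtx.yaml shown below:

```shell
# Generate certificates and channel artifacts from the two config files.
CHANNEL_NAME=mychannel

# certificates for every org defined in crypto-config.yaml
cryptogen generate --config=./crypto-config.yaml

# genesis block for the orderers, channel creation tx, and anchor peer updates
mkdir -p channel-artifacts
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME
configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org1MSP
configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org2MSP
```

The genesis.block goes to the orderer machines; channel.tx and the anchor peer updates are used later from the cli container.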
-
crypto-config.yaml configuration
Under OrdererOrgs, add every orderer's Hostname to the Specs property.
Under PeerOrgs, add every organization the cluster needs.
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
# ---------------------------------------------------------------------------
# Orderer
# ---------------------------------------------------------------------------
- Name: Orderer
Domain: example.com
CA:
Country: US
Province: California
Locality: San Francisco
# ---------------------------------------------------------------------------
# "Specs" - See PeerOrgs below for complete description
# ---------------------------------------------------------------------------
Specs:
- Hostname: orderer0
- Hostname: orderer1
- Hostname: orderer2
# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
# ---------------------------------------------------------------------------
# Org1
# ---------------------------------------------------------------------------
- Name: Org1
Domain: org1.example.com
EnableNodeOUs: true
CA:
Country: US
Province: California
Locality: San Francisco
Template:
Count: 2
# Start: 5
# Hostname: {{.Prefix}}{{.Index}} # default
Users:
Count: 1
# ---------------------------------------------------------------------------
# Org2: See "Org1" for full specification
# ---------------------------------------------------------------------------
- Name: Org2
Domain: org2.example.com
EnableNodeOUs: true
CA:
Country: US
Province: California
Locality: San Francisco
Template:
Count: 2
Users:
Count: 1
-
configtx.yaml configuration
Under Profiles, add every organization to the Organizations lists.
Under Organizations, define every organization, including its AnchorPeers.
Under Orderer: &OrdererDefaults, list every orderer with its externally visible port in the Addresses property.
Under Kafka: Brokers, add every Kafka hostname with its port, as in the following example:
Brokers:
- kafka1:9092
- kafka2:9092
- kafka4:9092
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
---
################################################################################
#
# Profile
#
# - Different configuration profiles may be encoded here to be specified
# as parameters to the configtxgen tool
#
################################################################################
Profiles:
TwoOrgsOrdererGenesis:
Capabilities:
<<: *ChannelCapabilities
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Capabilities:
<<: *OrdererCapabilities
Consortiums:
SampleConsortium:
Organizations:
- *Org1
- *Org2
TwoOrgsChannel:
Consortium: SampleConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2
Capabilities:
<<: *ApplicationCapabilities
################################################################################
#
# Section: Organizations
#
# - This section defines the different organizational identities which will
# be referenced later in the configuration.
#
################################################################################
Organizations:
# SampleOrg defines an MSP using the sampleconfig. It should never be used
# in production but may be used as a template for other definitions
- &OrdererOrg
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: OrdererOrg
# ID to load the MSP definition as
ID: OrdererMSP
# MSPDir is the filesystem path which contains the MSP configuration
MSPDir: crypto-config/ordererOrganizations/example.com/msp
- &Org1
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: Org1MSP
# ID to load the MSP definition as
ID: Org1MSP
MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
AnchorPeers:
# AnchorPeers defines the location of peers which can be used
# for cross org gossip communication. Note, this value is only
# encoded in the genesis block in the Application section context
- Host: peer0.org1.example.com
Port: 7051
- &Org2
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: Org2MSP
# ID to load the MSP definition as
ID: Org2MSP
MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
AnchorPeers:
# AnchorPeers defines the location of peers which can be used
# for cross org gossip communication. Note, this value is only
# encoded in the genesis block in the Application section context
- Host: peer0.org2.example.com
Port: 7051
################################################################################
#
# SECTION: Orderer
#
# - This section defines the values to encode into a config transaction or
# genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults
# Orderer Type: The orderer implementation to start
# Available types are "solo" and "kafka"
OrdererType: kafka
Addresses:
- orderer0.example.com:7050
- orderer1.example.com:7050
- orderer2.example.com:7050
# Batch Timeout: The amount of time to wait before creating a batch
BatchTimeout: 2s
# Batch Size: Controls the number of messages batched into a block
BatchSize:
# Max Message Count: The maximum number of messages to permit in a batch
MaxMessageCount: 10
# Absolute Max Bytes: The absolute maximum number of bytes allowed for
# the serialized messages in a batch.
AbsoluteMaxBytes: 98 MB
# Preferred Max Bytes: The preferred maximum number of bytes allowed for
# the serialized messages in a batch. A message larger than the preferred
# max bytes will result in a batch larger than preferred max bytes.
PreferredMaxBytes: 512 KB
Kafka:
# Brokers: A list of Kafka brokers to which the orderer connects. Edit
# this list to identify the brokers of the ordering service.
# NOTE: Use IP:port notation.
Brokers:
- kafka1:9092
- kafka2:9092
- kafka4:9092
# Organizations is the list of orgs which are defined as participants on
# the orderer side of the network
Organizations:
################################################################################
#
# SECTION: Application
#
# - This section defines the values to encode into a config transaction or
# genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults
Organizations:
Capabilities:
Global: &ChannelCapabilities
V1_1: true
Orderer: &OrdererCapabilities
V1_1: true
Application: &ApplicationCapabilities
V1_1: true
-
ZooKeeper configuration
This example uses three files: docker-zookeeper1.yaml, docker-zookeeper2.yaml, and docker-zookeeper3.yaml.
In each file, keep container_name, hostname, and the service name identical.
In environment, ZOO_MY_ID must be a number between 1 and 255, unique within the cluster.
ZOO_SERVERS must contain one server entry per ZooKeeper node.
extra_hosts: add the hostname and IP of every ZooKeeper and Kafka node, as shown below:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# How ZooKeeper operates, in brief:
# 1. Elect a leader.
# 2. Synchronize data.
# 3. Many election algorithms exist, but the criteria for the election are the same.
# 4. The leader must hold the highest transaction ID, similar to root authority.
# 5. A majority of the machines in the cluster must respond to and follow the elected leader.
#
version: '2'
services:
zookeeper1:
container_name: zookeeper1
hostname: zookeeper1
image: hyperledger/fabric-zookeeper
restart: always
environment:
- ZOO_MY_ID=1
- ZOO_SERVERS=server.1=zookeeper1:2988:3888 server.2=zookeeper2:2988:3888 server.3=zookeeper3:2988:3888
ports:
- "2181:2181"
- "2988:2988"
- "3888:3888"
extra_hosts:
- "zookeeper1:172.20.6.145"
- "zookeeper2:172.20.6.151"
- "zookeeper3:172.20.6.147"
- "kafka1:172.20.6.146"
- "kafka2:172.20.6.144"
- "kafka4:172.20.6.198"
-
Kafka configuration
This example uses three files: docker-kafka1.yaml, docker-kafka2.yaml, and docker-kafka4.yaml (four brokers are recommended).
In each file, keep container_name, hostname, and the service name identical.
In environment:
KAFKA_BROKER_ID is a unique non-negative integer that identifies the broker (it can serve as the broker's name).
KAFKA_ZOOKEEPER_CONNECT must list every ZooKeeper node with its port (2181).
extra_hosts: add the hostname and IP of every ZooKeeper and Kafka node.
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
kafka1:
container_name: kafka1
hostname: kafka1
image: hyperledger/fabric-kafka
restart: always
environment:
- KAFKA_BROKER_ID=1
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
- KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_LOG_RETENTION_MS=-1
- KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
ports:
- "9092:9092"
extra_hosts:
- "zookeeper1:172.20.6.145"
- "zookeeper2:172.20.6.151"
- "zookeeper3:172.20.6.147"
- "kafka1:172.20.6.146"
- "kafka2:172.20.6.144"
- "kafka4:172.20.6.198"
-
Orderer configuration
This example uses three files: docker-orderer0.yaml, docker-orderer1.yaml, and docker-orderer2.yaml.
In each file, keep container_name and the service name identical.
ORDERER_KAFKA_BROKERS must list every Kafka broker with its port.
volumes: the host paths for msp and tls must contain the orderer's own hostname, as shown below:
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
extra_hosts: add the hostname and IP of every Kafka node.
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
orderer0.example.com:
container_name: orderer0.example.com
image: hyperledger/fabric-orderer
environment:
- ORDERER_GENERAL_LOGLEVEL=debug
# - ORDERER_GENERAL_LOGLEVEL=error
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
#- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
#- ORDERER_GENERAL_LEDGERTYPE=ram
#- ORDERER_GENERAL_LEDGERTYPE=file
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=false
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
- ORDERER_KAFKA_RETRY_LONGTOTAL=100s
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_KAFKA_BROKERS=[172.20.6.146:9092,172.20.6.144:9092,172.20.6.198:9092]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
ports:
- "7050:7050"
extra_hosts:
- "kafka1:172.20.6.146"
- "kafka2:172.20.6.144"
- "kafka4:172.20.6.198"
-
Peer configuration
This example has two organizations with one peer each, so there are two files: docker-org1.yaml and docker-org2.yaml.
The ca service can be omitted when TLS is not used; when it is used, update its _sk value every time the certificates are regenerated.
peer
In each file, keep container_name, the service name, and CORE_PEER_ID identical.
CORE_PEER_NETWORKID is the name of the overall project, e.g. e2ecli.
CORE_PEER_ADDRESS, CORE_PEER_CHAINCODEADDRESS, CORE_PEER_CHAINCODELISTENADDRESS, and CORE_PEER_GOSSIP_EXTERNALENDPOINT must all use the service name (e.g. peer0.org1.example.com) with the matching port.
CORE_PEER_LOCALMSPID must match the organization's MSP ID in configtx.yaml.
Set CORE_VM_DOCKER_TLS_ENABLED to false.
volumes: the host paths for msp and tls must contain the peer's own hostname, as shown below:
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
extra_hosts: add the hostname and IP of every orderer.
cli
Apply the same changes to the cli service as to the peer service.
The cli volume - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
maps the chaincode: the path before the colon is its location on the host and must be correct.
depends_on in cli: list the peer service defined in the same file.
extra_hosts in cli: add the hostname and IP of every orderer and peer.
version: '2'
services:
ca:
container_name: ca
image: hyperledger/fabric-ca
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca
- FABRIC_CA_SERVER_TLS_ENABLED=false
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/4847076e9d618047e8a8ccf3d333d6411f165d462841a6115395cd20977feab5_sk
ports:
- "7054:7054"
command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/4847076e9d618047e8a8ccf3d333d6411f165d462841a6115395cd20977feab5_sk -b admin:adminpw -d'
volumes:
- ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
peer0.org1.example.com:
container_name: peer0.org1.example.com
image: hyperledger/fabric-peer
environment:
# - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
# - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=172.31.159.129:5984
- CORE_PEER_ID=peer0.org1.example.com
- CORE_PEER_NETWORKID=e2ecli
- CORE_PEER_ADDRESS=peer0.org1.example.com:7051
- CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
- CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=e2ecli_default
- CORE_VM_DOCKER_TLS_ENABLED=false
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=false
- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
ports:
- "7051:7051"
- "7052:7052"
- "7053:7053"
extra_hosts:
- "orderer0.example.com:172.20.6.207"
- "orderer1.example.com:172.20.6.181"
- "orderer2.example.com:172.20.6.203"
cli:
container_name: cli
image: hyperledger/fabric-tools
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.org1.example.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
depends_on:
- peer0.org1.example.com
extra_hosts:
- "orderer0.example.com:172.20.6.207"
- "orderer1.example.com:172.20.6.181"
- "orderer2.example.com:172.20.6.203"
- "peer0.org1.example.com:172.20.6.122"
- "peer0.org2.example.com:172.20.6.192"
6. Startup
Delete the old files in channel-artifacts and the old crypto-config directory, or run ./network.sh down to remove them.
Run ./generateArtifacts.sh mychannel to generate the required certificates and blocks.
Replace the _sk value in the ca service of both peer compose files.
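Replacing the _sk value can be scripted instead of edited by hand. A minimal sketch (replace_sk is a hypothetical helper, not part of the fabric samples), assuming the compose file references the key by its 64-hex-character filename and the CA directory contains exactly one *_sk file:

```shell
# Rewrite the CA private-key filename in a compose file to the one that
# cryptogen just generated.
# Assumptions: exactly one *_sk file exists in the CA directory, and the
# compose file references the old key as "<64 hex chars>_sk".
replace_sk() {
  ca_dir="$1"    # e.g. ./crypto-config/peerOrganizations/org1.example.com/ca
  compose="$2"   # e.g. docker-org1.yaml
  new_sk=$(basename "$ca_dir"/*_sk)
  sed -i -E "s/[0-9a-f]{64}_sk/${new_sk}/g" "$compose"
}
```

After ./generateArtifacts.sh, run it once per org, e.g. `replace_sk ./crypto-config/peerOrganizations/org1.example.com/ca docker-org1.yaml`.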
sudo docker-compose -f docker-zookeeper1.yaml up
sudo docker-compose -f docker-zookeeper2.yaml up
sudo docker-compose -f docker-zookeeper3.yaml up
sudo docker-compose -f docker-kafka1.yaml up
sudo docker-compose -f docker-kafka2.yaml up
sudo docker-compose -f docker-kafka4.yaml up
sudo docker-compose -f docker-orderer0.yaml up
sudo docker-compose -f docker-orderer1.yaml up
sudo docker-compose -f docker-orderer2.yaml up
sudo docker-compose -f docker-org1.yaml up
sudo docker-compose -f docker-org2.yaml up
On any peer node:
sudo docker exec -it cli bash
./scripts/scripts.sh mychannel
7. Troubleshooting
1. After starting the ZooKeeper cluster:
sudo docker exec -it zookeeper? bash
zkServer.sh status shows the ZooKeeper state; combine it with the logs to judge whether ZooKeeper started correctly.
2. Check that each service actually started, e.g. netstat -a | grep 9092 to see whether Kafka is listening on this machine.
3. telnet <ip> <port>
Check whether the port is reachable from outside.
4. Almost all remaining errors are caused by incorrect configuration parameters.
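The telnet check in step 3 can be scripted for the whole cluster. A minimal sketch using bash's /dev/tcp (adjust the endpoint list to your topology; run it from a cluster node, or substitute IPs for hostnames when probing from outside):

```shell
# Probe each cluster endpoint over TCP and report which are reachable.
check_port() {
  # returns 0 if host:port accepts a TCP connection within 2 seconds
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

for ep in zookeeper1:2181 zookeeper2:2181 zookeeper3:2181 \
          kafka1:9092 kafka2:9092 kafka4:9092 \
          orderer0.example.com:7050 orderer1.example.com:7050 orderer2.example.com:7050; do
  if check_port "${ep%%:*}" "${ep##*:}"; then
    echo "OK   $ep"
  else
    echo "FAIL $ep"
  fi
done
```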
Download link:
https://pan.baidu.com/s/1bFBAeWC0PZqjkSQ8JspVKA (password: 2akv)