# Setting up a multi-orderer Fabric environment: a detailed walkthrough

## Introduction

I wrote this article almost two years ago and had only shared it inside my company; I'm publishing it here now, partly just for the record.

The topology used throughout is 3 Kafka nodes, 2 orderers, and 4 peers.

The Kafka nodes need a JDK installed; the peers need Go and Docker.

Because of constraints in the target environment, everything here is installed offline; see the attachments for the installers.

OS: Ubuntu 16.04.3

## hosts

Copy these hosts entries to every machine:

```shell
vim /etc/hosts

192.168.3.181  node1  orderer1.local
192.168.3.183  node2  orderer2.local
192.168.3.185  node3  peer0.org1.local


192.168.3.184  kafka1 
192.168.3.186  kafka2
192.168.3.119  kafka3

# Push the file to the other machines (substitute each host's IP)
scp /etc/hosts root@xxx.xxx.x.IP:/etc/
```
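Once the file is distributed, it is worth a quick resolution check on every machine; a minimal sketch using the short names defined above:

```shell
# Verify that every short name from /etc/hosts resolves locally.
for h in node1 node2 node3 kafka1 kafka2 kafka3; do
  if getent hosts "$h" > /dev/null; then
    echo "$h OK"
  else
    echo "$h UNRESOLVED"
  fi
done
```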

## Kafka and ZooKeeper cluster setup

On kafka1, kafka2, and kafka3:

### Install the JDK

```shell
mkdir /usr/local/software
# Upload the installers into /usr/local/software/. Keeping every installer in
# one place makes distribution easy; the rest of this guide assumes this path.

# Push the software directory to each of the other servers
scp -r /usr/local/software root@XXX.XXX.X.IP:/usr/local/
# Unpack the JDK into /usr/local/
tar zxf /usr/local/software/jdk-8u144-linux-x64.tar.gz -C /usr/local/
# Set the environment variables. As a shortcut, the Kafka and Go variables the
# other servers need are set here too and won't be repeated later; to avoid
# omissions, it is best to run this on every server.
echo "export JAVA_HOME=/usr/local/jdk1.8.0_144" >> ~/.profile
echo 'export JRE_HOME=${JAVA_HOME}/jre' >>  ~/.profile
echo 'export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib' >> ~/.profile
echo 'export PATH=${JAVA_HOME}/bin:$PATH' >> ~/.profile
echo 'export KAFKA_HOME=/usr/local/kafka_2.10-0.10.2.0' >> ~/.profile
echo 'export PATH=$KAFKA_HOME/bin:$PATH' >> ~/.profile
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
echo 'export GOPATH=/root/go' >> ~/.profile

# Apply the environment variables
source ~/.profile
```
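After sourcing the profile, a quick check confirms the variables took effect (the JDK path is the one unpacked above; the fallback assignment is only for illustration):

```shell
# Print the key variables and probe for the java binary.
: "${JAVA_HOME:=/usr/local/jdk1.8.0_144}"  # fallback for illustration only
echo "JAVA_HOME=$JAVA_HOME"
echo "KAFKA_HOME=${KAFKA_HOME:-unset}"
if [ -x "$JAVA_HOME/bin/java" ]; then
  "$JAVA_HOME/bin/java" -version
else
  echo "java not found under $JAVA_HOME"
fi
```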

### Install Kafka and ZooKeeper

```shell
# Unpack ZooKeeper into /usr/local/
tar zxf /usr/local/software/zookeeper-3.4.10.tar.gz -C /usr/local/
# Unpack Kafka into /usr/local/
tar zxf /usr/local/software/kafka_2.10-0.10.2.0.tgz -C /usr/local/
# Edit server.properties: change keys that already exist, append those that don't.
# The commands are collected below; run the matching block on each machine.
```

kafka1 server.properties

```shell
sed -i 's/broker.id=0/broker.id=1/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'listeners=PLAINTEXT://kafka1:9092' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'unclean.leader.election.enable=false' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'min.insync.replicas=1' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'default.replication.factor=2' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties

################ keys changed or added by the commands above ################
## broker.id=1
## log.dirs=/data/kafka-logs
## zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181

## listeners=PLAINTEXT://kafka1:9092
## unclean.leader.election.enable=false
## min.insync.replicas=1
## default.replication.factor=2
```

kafka2 server.properties

```shell
sed -i 's/broker.id=0/broker.id=2/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'listeners=PLAINTEXT://kafka2:9092' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'unclean.leader.election.enable=false' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'min.insync.replicas=1' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'default.replication.factor=2' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties

################ keys changed or added by the commands above ################
## broker.id=2
## log.dirs=/data/kafka-logs
## zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181

## listeners=PLAINTEXT://kafka2:9092
## unclean.leader.election.enable=false
## min.insync.replicas=1
## default.replication.factor=2
```

kafka3 server.properties

```shell
sed -i 's/broker.id=0/broker.id=3/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'listeners=PLAINTEXT://kafka3:9092' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'unclean.leader.election.enable=false' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'min.insync.replicas=1' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'default.replication.factor=2' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties

################ keys changed or added by the commands above ################
## broker.id=3
## log.dirs=/data/kafka-logs
## zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181

## listeners=PLAINTEXT://kafka3:9092
## unclean.leader.election.enable=false
## min.insync.replicas=1
## default.replication.factor=2
```
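The three blocks above differ only in the broker id and advertised hostname. As a sketch, a small helper (the function name is made up here, not part of the original steps) can print the per-broker override lines for any node, which makes them easy to review or append:

```shell
# kafka_overrides <broker-id> <hostname>: print the lines this guide
# changes or appends in server.properties for one broker.
kafka_overrides() {
  local id="$1" host="$2"
  echo "broker.id=${id}"
  echo "listeners=PLAINTEXT://${host}:9092"
  echo "unclean.leader.election.enable=false"
  echo "min.insync.replicas=1"
  echo "default.replication.factor=2"
}

# e.g. the overrides for kafka3:
kafka_overrides 3 kafka3
```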

Edit the ZooKeeper configuration bundled with Kafka, and create the myid file.

The commands are again collected below; run the matching block on each machine.

```shell
# Create the Kafka and ZooKeeper data directories; any path works as long as
# it matches log.dirs (and the ZooKeeper data dir) in the properties files
mkdir -p /data/kafka-logs
mkdir -p /data/zookeeper
```

zookeeper 1

```shell
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'initLimit=5' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'syncLimit=2' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.1=kafka1:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.2=kafka2:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.3=kafka3:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 1 > /data/zookeeper/myid

################ keys changed or added by the commands above ################
## zookeeper.properties

### clientPort=2181
### initLimit=5
### syncLimit=2
### server.1=kafka1:2888:3888
### server.2=kafka2:2888:3888
### server.3=kafka3:2888:3888

## create myid with the value 1
```

zookeeper 2

```shell
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'initLimit=5' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'syncLimit=2' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.1=kafka1:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.2=kafka2:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.3=kafka3:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 2 > /data/zookeeper/myid

################ keys changed or added by the commands above ################
## zookeeper.properties

### clientPort=2181
### initLimit=5
### syncLimit=2
### server.1=kafka1:2888:3888
### server.2=kafka2:2888:3888
### server.3=kafka3:2888:3888

## create myid with the value 2
```

zookeeper 3

```shell
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'initLimit=5' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'syncLimit=2' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.1=kafka1:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.2=kafka2:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.3=kafka3:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 3 > /data/zookeeper/myid

################ keys changed or added by the commands above ################
## zookeeper.properties

### clientPort=2181
### initLimit=5
### syncLimit=2
### server.1=kafka1:2888:3888
### server.2=kafka2:2888:3888
### server.3=kafka3:2888:3888

## create myid with the value 3
```
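The quorum definition (the `server.N` lines) is identical on all three nodes; only the value written to myid differs, and it must match the N of the node's own `server.N` entry. A sketch (helper name made up here) that prints the shared lines:

```shell
# Print the server.N quorum lines shared by every zookeeper.properties.
zk_quorum() {
  n=1
  for host in kafka1 kafka2 kafka3; do
    echo "server.${n}=${host}:2888:3888"
    n=$((n + 1))
  done
}
zk_quorum
# On kafka2 the matching myid would be written with: echo 2 > /data/zookeeper/myid
```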

Supplementary: optional server.properties parameters

```shell
broker.id=0  # unique ID of this broker in the cluster, same idea as ZooKeeper's myid
port=19092  # port Kafka serves on; the default is 9092
host.name=192.168.7.100  # off by default; 0.8.1 had a DNS-resolution bug here that caused failures
num.network.threads=3  # number of threads the broker uses for network processing
num.io.threads=8  # number of threads the broker uses for I/O; should be at least the number of directories in log.dirs
log.dirs=/opt/kafka/kafkalogs/  # where messages are stored; may be a comma-separated list of directories. With multiple directories, a newly created partition is placed in whichever directory currently holds the fewest partitions
socket.send.buffer.bytes=102400  # send buffer size; data is buffered up to this size before being sent, which improves performance
socket.receive.buffer.bytes=102400  # receive buffer size; data is buffered up to this size before being flushed to disk
socket.request.max.bytes=104857600  # maximum size of a request sent to or fetched from Kafka; must not exceed the JVM heap size
num.partitions=1  # default number of partitions; a topic gets 1 partition by default
log.retention.hours=168  # default maximum message retention: 168 hours (7 days)
message.max.bytes=5242880  # maximum message size: 5 MB
default.replication.factor=2  # number of replicas Kafka keeps per message; if one replica fails, another can keep serving
replica.fetch.max.bytes=5242880  # maximum bytes per replica fetch
log.segment.bytes=1073741824  # Kafka appends messages to segment files; past this size a new file is started
log.retention.check.interval.ms=300000  # every 300000 ms, check the retention limit above (log.retention.hours=168) and delete expired messages
log.cleaner.enable=false  # whether to enable log compaction; usually left off, though enabling it can improve performance
zookeeper.connect=192.168.7.100:12181,192.168.7.101:12181,192.168.7.107:12181  # ZooKeeper connection string
```
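Any of these keys can be changed with `sed` the same way as earlier; demonstrated here on a throwaway copy rather than the live file:

```shell
# Work on a temp copy so the real server.properties is untouched.
PROPS=$(mktemp)
printf 'num.partitions=1\nlog.retention.hours=168\n' > "$PROPS"
# Raise the default partition count from 1 to 3.
sed -i 's/^num\.partitions=.*/num.partitions=3/' "$PROPS"
grep '^num.partitions' "$PROPS"
rm -f "$PROPS"
```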

Startup

```shell
# Start ZooKeeper
nohup zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties >> ~/zookeeper.log 2>&1 &
# Start Kafka
nohup kafka-server-start.sh $KAFKA_HOME/config/server.properties >> ~/kafka.log 2>&1 &

# Check that the Kafka and ZooKeeper processes came up
jps
1462 Jps
1193 Kafka
937 QuorumPeerMain

# Kafka smoke test:
# (1) On one machine, create a topic and start a producer:
$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper kafka1:2181 --replication-factor 2 --partitions 1 --topic test

$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list kafka1:9092 --topic test

# (2) Consume the messages on the other two machines
$KAFKA_HOME/bin/kafka-console-consumer.sh --zookeeper kafka1:2181 --topic test --from-beginning
```

Kafka deployment troubleshooting

```shell
# Check that listeners is set in server.properties
# Check that the log.dirs directory exists and is readable/writable
# Check that the myid file has its initial value written
# Check that there is enough memory

# Other useful Kafka and ZooKeeper commands

# List all topics
$KAFKA_HOME/bin/kafka-topics.sh --zookeeper kafka1:2181 --list
# Describe a single topic (test)
$KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper kafka1:2181 --topic test

# Stop Kafka
kafka-server-stop.sh
# Stop ZooKeeper
zookeeper-server-stop.sh
```

On node3, node4, node5, and node6:

## Install Go

```shell
# Unpack the Go archive into /usr/local/
tar zxf /usr/local/software/go1.9.linux-amd64.tar.gz -C /usr/local/
```
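The unpack puts the toolchain in /usr/local/go; the ~/.profile edits from the JDK section already add /usr/local/go/bin to PATH, and a quick check confirms it:

```shell
# Ensure /usr/local/go/bin is on PATH, appending it if missing.
case ":$PATH:" in
  *:/usr/local/go/bin:*) echo "go bin already on PATH" ;;
  *) PATH="$PATH:/usr/local/go/bin"; export PATH; echo "added go bin to PATH" ;;
esac
# go version   # should report go1.9 once installed
```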

## Install Docker

```shell
# Install Docker from the .deb packages (Ubuntu .deb installers here)
dpkg -i /usr/local/software/*.deb

# Start docker
service docker start
# Check the installed version
docker version

Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:06 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:06 2017
 OS/Arch:      linux/amd64
 Experimental: false
 
# Load the Fabric images
docker load -i /usr/local/software/fabric-javaenv.tar.gz
docker load -i /usr/local/software/fabric-ccenv.tar.gz
docker load -i /usr/local/software/fabric-baseimage.tar.gz
docker load -i /usr/local/software/fabric-baseos.tar.gz
```
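The four `docker load` calls can be generated from a list, which helps if more image tarballs are added later; this sketch only echoes the commands so they can be reviewed before running:

```shell
# Print a docker load command for each Fabric image tarball.
for img in fabric-javaenv fabric-ccenv fabric-baseimage fabric-baseos; do
  echo "docker load -i /usr/local/software/${img}.tar.gz"
done
# After loading, `docker images` should list the four fabric-* images.
```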

## Fabric configuration

### Configuration file overview

#### orderer.yaml

    The configuration file the orderer reads at startup

```yaml
---
 ################################################################################
 #
 #   Orderer Configuration
 #
 #   - This controls the type and configuration of the orderer.
 #
 ################################################################################
General:

     # Ledger Type: The ledger type to provide to the orderer.
     # Two non-production ledger types are provided for test purposes only:
     #  - ram: An in-memory ledger whose contents are lost on restart.
     #  - json: A simple file ledger that writes blocks to disk in JSON format.
     # Only one production ledger type is provided:
     #  - file: A production file-based ledger.
    LedgerType: file

     # Listen address: The IP on which to bind to listen.
    ListenAddress: 127.0.0.1

     # Listen port: The port on which to bind to listen.
    ListenPort: 7050

     # TLS: TLS settings for the GRPC server.
    TLS:
        Enabled: false
        PrivateKey: tls/server.key
        Certificate: tls/server.crt
        RootCAs:
          - tls/ca.crt
        ClientAuthEnabled: false
        ClientRootCAs:

     # Log Level: The level at which to log. This accepts logging specifications
     # per: fabric/docs/Setup/logging-control.md
    LogLevel: info

     # Genesis method: The method by which the genesis block for the orderer
     # system channel is specified. Available options are "provisional", "file":
     #  - provisional: Utilizes a genesis profile, specified by GenesisProfile,
     #                 to dynamically generate a new genesis block.
     #  - file: Uses the file provided by GenesisFile as the genesis block.
    GenesisMethod: provisional

     # Genesis profile: The profile to use to dynamically generate the genesis
     # block to use when initializing the orderer system channel and
     # GenesisMethod is set to "provisional". See the configtx.yaml file for the
     # descriptions of the available profiles. Ignored if GenesisMethod is set to
     # "file".
    GenesisProfile: SampleInsecureSolo

     # Genesis file: The file containing the genesis block to use when
     # initializing the orderer system channel and GenesisMethod is set to
     # "file". Ignored if GenesisMethod is set to "provisional".
    GenesisFile: genesisblock

     # LocalMSPDir is where to find the private crypto material needed by the
     # orderer. It is set relative here as a default for dev environments but
     # should be changed to the real location in production.
    LocalMSPDir: msp

     # LocalMSPID is the identity to register the local MSP material with the MSP
     # manager. IMPORTANT: The local MSP ID of an orderer needs to match the MSP
     # ID of one of the organizations defined in the orderer system channel's
     # /Channel/Orderer configuration. The sample organization defined in the
     # sample configuration provided has an MSP ID of "DEFAULT".
    LocalMSPID: DEFAULT

     # Enable an HTTP service for Go "pprof" profiling as documented at:
     # https://golang.org/pkg/net/http/pprof
    Profile:
        Enabled: false
        Address: 0.0.0.0:6060

     # BCCSP configures the blockchain crypto service providers.
    BCCSP:
         # Default specifies the preferred blockchain crypto service provider
         # to use. If the preferred provider is not available, the software
         # based provider ("SW") will be used.
         # Valid providers are:
         #  - SW: a software based crypto provider
         #  - PKCS11: a CA hardware security module crypto provider.
        Default: SW

         # SW configures the software based blockchain crypto provider.
        SW:
             # TODO: The default Hash and Security level needs refactoring to be
             # fully configurable. Changing these defaults requires coordination
             # SHA2 is hardcoded in several places, not only BCCSP
            Hash: SHA2
            Security: 256
             # Location of key store. If this is unset, a location will be
             # chosen using: 'LocalMSPDir'/keystore
            FileKeyStore:
                KeyStore:

 ################################################################################
 #
 #   SECTION: File Ledger
 #
 #   - This section applies to the configuration of the file or json ledgers.
 #
 ################################################################################
FileLedger:

     # Location: The directory to store the blocks in.
     # NOTE: If this is unset, a new temporary location will be chosen every time
     # the orderer is restarted, using the prefix specified by Prefix.
    Location: /var/hyperledger/production/orderer

     # The prefix to use when generating a ledger directory in temporary space.
     # Otherwise, this value is ignored.
    Prefix: hyperledger-fabric-ordererledger

 ################################################################################
 #
 #   SECTION: RAM Ledger
 #
 #   - This section applies to the configuration of the RAM ledger.
 #
 ################################################################################
RAMLedger:

     # History Size: The number of blocks that the RAM ledger is set to retain.
     # WARNING: Appending a block to the ledger might cause the oldest block in
     # the ledger to be dropped in order to limit the number total number blocks
     # to HistorySize. For example, if history size is 10, when appending block
     # 10, block 0 (the genesis block!) will be dropped to make room for block 10.
    HistorySize: 1000

 ################################################################################
 #
 #   SECTION: Kafka
 #
 #   - This section applies to the configuration of the Kafka-based orderer, and
 #     its interaction with the Kafka cluster.
 #
 ################################################################################
Kafka:

     # Retry: What to do if a connection to the Kafka cluster cannot be established,
     # or if a metadata request to the Kafka cluster needs to be repeated.
    Retry:
         # When a new channel is created, or when an existing channel is reloaded
         # (in case of a just-restarted orderer), the orderer interacts with the
         # Kafka cluster in the following ways:
         # 1. It creates a Kafka producer (writer) for the Kafka partition that
         # corresponds to the channel.
         # 2. It uses that producer to post a no-op CONNECT message to that
         # partition
         # 3. It creates a Kafka consumer (reader) for that partition.
         # If any of these steps fail, they will be re-attempted every
         # <ShortInterval> for a total of <ShortTotal>, and then every
         # <LongInterval> for a total of <LongTotal> until they succeed.
         # Note that the orderer will be unable to write to or read from a
         # channel until all of the steps above have been completed successfully.
        ShortInterval: 5s
        ShortTotal: 10m
        LongInterval: 5m
        LongTotal: 12h
         # Affects the socket timeouts when waiting for an initial connection, a
         # response, or a transmission. See Config.Net for more info:
         # https://godoc.org/github.com/Shopify/sarama#Config
        NetworkTimeouts:
            DialTimeout: 10s
            ReadTimeout: 10s
            WriteTimeout: 10s
         # Affects the metadata requests when the Kafka cluster is in the middle
          # of a leader election. See Config.Metadata for more info:
         # https://godoc.org/github.com/Shopify/sarama#Config
        Metadata:
            RetryBackoff: 250ms
            RetryMax: 3
         # What to do if posting a message to the Kafka cluster fails. See
         # Config.Producer for more info:
         # https://godoc.org/github.com/Shopify/sarama#Config
        Producer:
            RetryBackoff: 100ms
            RetryMax: 3
        # What to do if reading from the Kafka cluster fails. See
         # Config.Consumer for more info:
         # https://godoc.org/github.com/Shopify/sarama#Config
        Consumer:
            RetryBackoff: 2s

     # Verbose: Enable logging for interactions with the Kafka cluster.
    Verbose: false

     # TLS: TLS settings for the orderer's connection to the Kafka cluster.
    TLS:

       # Enabled: Use TLS when connecting to the Kafka cluster.
      Enabled: false

       # PrivateKey: PEM-encoded private key the orderer will use for
       # authentication.
      PrivateKey:
         # As an alternative to specifying the PrivateKey here, uncomment the
         # following "File" key and specify the file name from which to load the
         # value of PrivateKey.
         #File: path/to/PrivateKey

       # Certificate: PEM-encoded signed public key certificate the orderer will
       # use for authentication.
      Certificate:
         # As an alternative to specifying the Certificate here, uncomment the
         # following "File" key and specify the file name from which to load the
         # value of Certificate.
         #File: path/to/Certificate

       # RootCAs: PEM-encoded trusted root certificates used to validate
       # certificates from the Kafka cluster.
      RootCAs:
         # As an alternative to specifying the RootCAs here, uncomment the
         # following "File" key and specify the file name from which to load the
         # value of RootCAs.
         #File: path/to/RootCAs

     # Kafka version of the Kafka cluster brokers (defaults to 0.9.0.1)
    Version:
```

#### crypto-config.yaml

    Defines the network topology and drives certificate generation.

    This file generates a certificate tree for each organization and its members. Each organization is assigned a root certificate (ca-cert), which binds a set of peers and orderers to that organization. In Fabric, transactions and communications are signed with a participant's private key (keystore) and verified with the corresponding public key (signcerts). Finally, "Users / Count: 1" means one ordinary user is generated per Template (note that Admin is separate and not included in this count), so with the value 1 we get a single ordinary user such as User1@org2.local. Adjust the file to your needs, adding or removing Orgs, Users, and so on.
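The file below is consumed by Fabric's cryptogen tool (from the fabric-tools binaries); a guarded sketch of the invocation, with the check only so the snippet fails gracefully where the tool is absent:

```shell
# Generate the certificate tree from crypto-config.yaml using cryptogen.
if command -v cryptogen > /dev/null 2>&1; then
  cryptogen generate --config=./crypto-config.yaml
else
  echo "cryptogen not on PATH - install the fabric-tools binaries first"
fi
```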

```yaml
# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: orderer.local
    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer1.local
        CommonName: orderer1.local
      - Hostname: orderer2.local
        CommonName: orderer2.local
# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
  # ---------------------------------------------------------------------------
  # Org1
  # ---------------------------------------------------------------------------
  - Name: Org1
    Domain: org1.local
    # ---------------------------------------------------------------------------
    # "Specs"
    # ---------------------------------------------------------------------------
    # Uncomment this section to enable the explicit definition of hosts in your
    # configuration.  Most users will want to use Template, below
    #
    # Specs is an array of Spec entries.  Each Spec entry consists of two fields:
    #   - Hostname:   (Required) The desired hostname, sans the domain.
    #   - CommonName: (Optional) Specifies the template or explicit override for
    #                 the CN.  By default, this is the template:
    #
    #                              "{{.Hostname}}.{{.Domain}}"
    #
    #                 which obtains its values from the Spec.Hostname and
    #                 Org.Domain, respectively.
    # ---------------------------------------------------------------------------
    # Specs:
    #   - Hostname: foo # implicitly "foo.org1.example.com"
    #     CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
    #   - Hostname: bar
    #   - Hostname: baz
    Specs:
      - Hostname: peer0.org1.local
        CommonName: peer0.org1.local
    # ---------------------------------------------------------------------------
    # "Template"
    # ---------------------------------------------------------------------------
    # Allows for the definition of 1 or more hosts that are created sequentially
    # from a template. By default, this looks like "peer%d" from 0 to Count-1.
    # You may override the number of nodes (Count), the starting index (Start)
    # or the template used to construct the name (Hostname).
    #
    # Note: Template and Specs are not mutually exclusive.  You may define both
    # sections and the aggregate nodes will be created for you.  Take care with
    # name collisions
    # ---------------------------------------------------------------------------
    Template:
      Count: 2
      # Start: 5
      # Hostname: {{.Prefix}}{{.Index}} # default
    # ---------------------------------------------------------------------------
    # "Users"
    # ---------------------------------------------------------------------------
    # Count: The number of user accounts _in addition_ to Admin
    # ---------------------------------------------------------------------------
    Users:
      Count: 1
  # ---------------------------------------------------------------------------
  # Org2: See "Org1" for full specification
  # ---------------------------------------------------------------------------
  - Name: Org2
    Domain: org2.local
    Specs:
      - Hostname: peer0.org2.local
        CommonName: peer0.org2.local
    Template:
      Count: 2
    Users:
      Count: 1
```

#### core.yaml

    The configuration file the peer reads at startup

```yaml
###############################################################################
 #
 #    LOGGING section
 #
 ###############################################################################
logging:

     # Default logging levels are specified here.

     # Valid logging levels are case-insensitive strings chosen from

     #     CRITICAL | ERROR | WARNING | NOTICE | INFO | DEBUG

     # The overall default logging level can be specified in various ways,
     # listed below from strongest to weakest:
     #
     # 1. The --logging-level=<level> command line option overrides all other
     #    default specifications.
     #
     # 2. The environment variable CORE_LOGGING_LEVEL otherwise applies to
     #    all peer commands if defined as a non-empty string.
     #
     # 3. The value of peer that directly follows in this file. It can also
     #    be set via the environment variable CORE_LOGGING_PEER.
     #
     # If no overall default level is provided via any of the above methods,
     # the peer will default to INFO (the value of defaultLevel in
     # common/flogging/logging.go)

     # Default for all modules running within the scope of a peer.
     # Note: this value is only used when --logging-level or CORE_LOGGING_LEVEL
     #       are not set
    peer:       info

     # The overall default values mentioned above can be overridden for the
     # specific components listed in the override section below.

     # Override levels for various peer modules. These levels will be
     # applied once the peer has completely started. They are applied at this
     # time in order to be sure every logger has been registered with the
     # logging package.
     # Note: the modules listed below are the only acceptable modules at this
     #       time.
    cauthdsl:   warning
    gossip:     warning
    ledger:     info
    msp:        warning
    policies:   warning
    grpc:       error

     # Message format for the peer logs
    format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'

 ###############################################################################
 #
 #    Peer section
 #
 ###############################################################################
peer:

     # The Peer id is used for identifying this Peer instance.
    id: jdoe

     # The networkId allows for logical separation of networks
    networkId: dev

     # The Address at local network interface this Peer will listen on.
     # By default, it will listen on all network interfaces
    listenAddress: 0.0.0.0:7051

     # The endpoint this peer uses to listen for inbound chaincode connections.
     #
     # The chaincode connection does not support TLS-mutual auth. Having a
     # separate listener for the chaincode helps isolate the chaincode
     # environment for enhanced security, so it is strongly recommended to
     # uncomment chaincodeListenAddress and specify a protected endpoint.
     #
     # If chaincodeListenAddress is not configured or equals to the listenAddress,
     # listenAddress will be used for chaincode connections. This is not
     # recommended for production.
     #
     # chaincodeListenAddress: 127.0.0.1:7052

     # When used as peer config, this represents the endpoint to other peers
     # in the same organization. For peers in other organizations, see
     # gossip.externalEndpoint for more info.
     # When used as CLI config, this means the peer's endpoint to interact with
    address: 0.0.0.0:7051

     # Whether the Peer should programmatically determine its address
     # This case is useful for docker containers.
    addressAutoDetect: false

     # Setting for runtime.GOMAXPROCS(n). If n < 1, it does not change the
     # current setting
    gomaxprocs: -1

     # Gossip related configuration
    gossip:
         # Bootstrap set to initialize gossip with.
         # This is a list of other peers that this peer reaches out to at startup.
         # Important: The endpoints here have to be endpoints of peers in the same
         # organization, because the peer would refuse connecting to these endpoints
         # unless they are in the same organization as the peer.
        bootstrap: 127.0.0.1:7051

         # NOTE: orgLeader and useLeaderElection parameters are mutually exclusive.
         # Setting both to true would result in the termination of the peer
         # since this is undefined state. If the peers are configured with
         # useLeaderElection=false, make sure there is at least 1 peer in the
         # organization that its orgLeader is set to true.

         # Defines whenever peer will initialize dynamic algorithm for
         # "leader" selection, where leader is the peer to establish
         # connection with ordering service and use delivery protocol
         # to pull ledger blocks from ordering service. It is recommended to
         # use leader election for large networks of peers.
        useLeaderElection: false
         # Statically defines peer to be an organization "leader",
         # where this means that current peer will maintain connection
         # with ordering service and disseminate block across peers in
         # its own organization
        orgLeader: true

         # Overrides the endpoint that the peer publishes to peers
         # in its organization. For peers in foreign organizations
```

#### configtx.yaml

    Generates the network topology and certificates.

    This file contains the definition of the network, which has two member organizations (Org1 and Org2), each managing and maintaining two peers. In the "Profiles" section at the top of the file there are two headers: one for the orderer genesis block, TwoOrgsOrdererGenesis, and one for the channel, TwoOrgsChannel. These two headers matter because we pass them as parameters when creating the artifacts. The file also records the MSP directory location of each member; the crypto-config directory contains the admin certificate, CA certificate, signing certificate, and private key of each entity.

```yaml
---
################################################################################
#
#   Profile
#
#   - Different configuration profiles may be encoded here to be specified
#   as parameters to the configtxgen tool
#
################################################################################
Profiles:

 TwoOrgsOrdererGenesis:
     Orderer:
         <<: *OrdererDefaults
         Organizations:
             - *OrdererOrg
     Consortiums:
         SampleConsortium:
             Organizations:
                 - *Org1
                 - *Org2
 TwoOrgsChannel:
     Consortium: SampleConsortium
     Application:
         <<: *ApplicationDefaults
         Organizations:
             - *Org1
             - *Org2

################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:

 # SampleOrg defines an MSP using the sampleconfig.  It should never be used
 # in production but may be used as a template for other definitions
 - &OrdererOrg
     # DefaultOrg defines the organization which is used in the sampleconfig
     # of the fabric.git development environment
     Name: OrdererOrg

     # ID to load the MSP definition as
     ID: OrdererMSP

     # MSPDir is the filesystem path which contains the MSP configuration
     MSPDir: crypto-config/ordererOrganizations/orderer.local/msp


 - &Org1
     # DefaultOrg defines the organization which is used in the sampleconfig
     # of the fabric.git development environment
     Name: Org1MSP

     # ID to load the MSP definition as
     ID: Org1MSP

     MSPDir: crypto-config/peerOrganizations/org1.local/msp

     AnchorPeers:
         # AnchorPeers defines the location of peers which can be used
         # for cross org gossip communication.  Note, this value is only
         # encoded in the genesis block in the Application section context
#         - Host: peer0.org1.local
#           Port: 7051
#         - Host: peer1.org1.local
#           Port: 7051

 - &Org2
     # DefaultOrg defines the organization which is used in the sampleconfig
     # of the fabric.git development environment
     Name: Org2MSP

     # ID to load the MSP definition as
     ID: Org2MSP

     MSPDir: crypto-config/peerOrganizations/org2.local/msp

     AnchorPeers:
         # AnchorPeers defines the location of peers which can be used
         # for cross org gossip communication.  Note, this value is only
         # encoded in the genesis block in the Application section context
#         - Host: peer0.org2.local
#           Port: 7051
#         - Host: peer1.org2.local
#           Port: 7051
################################################################################
#
#   SECTION: Orderer
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults

 # Orderer Type: The orderer implementation to start
 # Available types are "solo" and "kafka"
 OrdererType: kafka

 Addresses:
     - orderer1.local:7050
     - orderer2.local:7050
     #- orderer3.local:7050
 # Batch Timeout: The amount of time to wait before creating a batch
 BatchTimeout: 2s

 # Batch Size: Controls the number of messages batched into a block
 BatchSize:

     # Max Message Count: The maximum number of messages to permit in a batch
     MaxMessageCount: 10

     # Absolute Max Bytes: The absolute maximum number of bytes allowed for
     # the serialized messages in a batch.
     AbsoluteMaxBytes: 99 MB

     # Preferred Max Bytes: The preferred maximum number of bytes allowed for
     # the serialized messages in a batch. A message larger than the preferred
     # max bytes will result in a batch larger than preferred max bytes.
     PreferredMaxBytes: 512 KB

 Kafka:
     # Brokers: A list of Kafka brokers to which the orderer connects
     # NOTE: Use IP:port notation
     Brokers:
         - kafka1:9092
         - kafka2:9092
         - kafka3:9092

 # Organizations is the list of orgs which are defined as participants on
 # the orderer side of the network
 Organizations:

################################################################################
#
#   SECTION: Application
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults

    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:
```
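
As noted above, the two profile names are passed to `configtxgen` when generating the artifacts. A minimal sketch of that step, assuming `configtx.yaml` sits in the current directory; the output paths and the channel name `mychannel` are illustrative, not taken from this article:

```shell
# Assumption: configtx.yaml is in the current directory; adjust paths as needed.
export FABRIC_CFG_PATH=$PWD
mkdir -p ./channel-artifacts

# Skip gracefully when configtxgen is not on PATH (e.g. on a build host)
if command -v configtxgen >/dev/null 2>&1; then
    # Orderer genesis block, from the TwoOrgsOrdererGenesis profile
    configtxgen -profile TwoOrgsOrdererGenesis \
        -outputBlock ./channel-artifacts/genesis.block
    # Channel creation transaction, from the TwoOrgsChannel profile
    configtxgen -profile TwoOrgsChannel \
        -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
else
    echo "configtxgen not on PATH, skipping"
fi
```
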

#### core.yaml

    The configuration file read by the peer at startup.
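
Any key in this file can also be overridden at peer startup through an environment variable: prefix `CORE_`, upper-case the key path, and join it with underscores. A short sketch; the concrete values below merely match this article's naming and are not mandatory:

```shell
# Environment variables override core.yaml keys (CORE_ + upper-cased key path).
export CORE_PEER_ID=peer0.org1.local   # overrides peer.id
export CORE_PEER_LOCALMSPID=Org1MSP    # overrides peer.localMspId
export CORE_LOGGING_LEVEL=debug        # overrides the default logging level
echo "$CORE_PEER_ID $CORE_PEER_LOCALMSPID $CORE_LOGGING_LEVEL"
```
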

```yaml
###############################################################################
#
#    LOGGING section
#
###############################################################################
logging:

    # Default logging levels are specified here.

    # Valid logging levels are case-insensitive strings chosen from

    #     CRITICAL | ERROR | WARNING | NOTICE | INFO | DEBUG

    # The overall default logging level can be specified in various ways,
    # listed below from strongest to weakest:
    #
    # 1. The --logging-level=<level> command line option overrides all other
    #    default specifications.
    #
    # 2. The environment variable CORE_LOGGING_LEVEL otherwise applies to
    #    all peer commands if defined as a non-empty string.
    #
    # 3. The value of the 'peer' entry that directly follows in this file. It can also
    #    be set via the environment variable CORE_LOGGING_PEER.
    #
    # If no overall default level is provided via any of the above methods,
    # the peer will default to INFO (the value of defaultLevel in
    # common/flogging/logging.go)

    # Default for all modules running within the scope of a peer.
    # Note: this value is only used when --logging-level or CORE_LOGGING_LEVEL
    #       are not set
    peer:       info

    # The overall default values mentioned above can be overridden for the
    # specific components listed in the override section below.

    # Override levels for various peer modules. These levels will be
    # applied once the peer has completely started. They are applied at this
    # time in order to be sure every logger has been registered with the
    # logging package.
    # Note: the modules listed below are the only acceptable modules at this
    #       time.
    cauthdsl:   warning
    gossip:     warning
    ledger:     info
    msp:        warning
    policies:   warning
    grpc:       error

    # Message format for the peer logs
    format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'

###############################################################################
#
#    Peer section
#
###############################################################################
peer:

    # The Peer id is used for identifying this Peer instance.
    id: jdoe

    # The networkId allows for logical separation of networks
    networkId: dev

    # The Address at local network interface this Peer will listen on.
    # By default, it will listen on all network interfaces
    listenAddress: 0.0.0.0:7051

    # The endpoint this peer uses to listen for inbound chaincode connections.
    #
    # The chaincode connection does not support TLS-mutual auth. Having a
    # separate listener for the chaincode helps isolate the chaincode
    # environment for enhanced security, so it is strongly recommended to
    # uncomment chaincodeListenAddress and specify a protected endpoint.
    #
    # If chaincodeListenAddress is not configured or equals to the listenAddress,
    # listenAddress will be used for chaincode connections. This is not
    # recommended for production.
    #
    # chaincodeListenAddress: 127.0.0.1:7052

    # When used as peer config, this represents the endpoint to other peers
    # in the same organization. For peers in other organizations, see
    # gossip.externalEndpoint for more info.
    # When used as CLI config, this means the peer's endpoint to interact with
    address: 0.0.0.0:7051

    # Whether the Peer should programmatically determine its address
    # This case is useful for docker containers.
    addressAutoDetect: false

    # Setting for runtime.GOMAXPROCS(n). If n < 1, it does not change the
    # current setting
    gomaxprocs: -1

    # Gossip related configuration
    gossip:
        # Bootstrap set to initialize gossip with.
        # This is a list of other peers that this peer reaches out to at startup.
        # Important: The endpoints here have to be endpoints of peers in the same
        # organization, because the peer would refuse connecting to these endpoints
        # unless they are in the same organization as the peer.
        bootstrap: 127.0.0.1:7051

        # NOTE: orgLeader and useLeaderElection parameters are mutually exclusive.
        # Setting both to true would result in the termination of the peer
        # since this is an undefined state. If the peers are configured with
        # useLeaderElection=false, make sure there is at least 1 peer in the
        # organization whose orgLeader is set to true.

        # Defines whether the peer will initialize the dynamic algorithm for
        # "leader" selection, where the leader is the peer that establishes
        # a connection with the ordering service and uses the delivery protocol
        # to pull ledger blocks from the ordering service. It is recommended to
        # use leader election for large networks of peers.
        useLeaderElection: false
        # Statically defines the peer as an organization "leader", meaning
        # that the current peer will maintain the connection with the
        # ordering service and disseminate blocks across peers in
        # its own organization
        orgLeader: true

        # Overrides the endpoint that the peer publishes to peers
        # in its organization. For peers in foreign organizations
        # see 'externalEndpoint'
        endpoint:
        # Maximum count of blocks stored in memory
        maxBlockCountToStore: 100
        # Max time between consecutive message pushes(unit: millisecond)
        maxPropagationBurstLatency: 10ms
        # Max number of messages stored until a push is triggered to remote peers
        maxPropagationBurstSize: 10
        # Number of times a message is pushed to remote peers
        propagateIterations: 1
        # Number of peers selected to push messages to
        propagatePeerNum: 3
        # Determines frequency of pull phases(unit: second)
        pullInterval: 4s
        # Number of peers to pull from
        pullPeerNum: 3
        # Determines frequency of pulling state info messages from peers(unit: second)
        requestStateInfoInterval: 4s
        # Determines frequency of pushing state info messages to peers(unit: second)
        publishStateInfoInterval: 4s
        # Maximum time a stateInfo message is kept until expired
        stateInfoRetentionInterval:
        # Time from startup certificates are included in Alive messages(unit: second)
        publishCertPeriod: 10s
        # Should we skip verifying block messages or not (currently not in use)
        skipBlockVerification: false
        # Dial timeout(unit: second)
        dialTimeout: 3s
        # Connection timeout(unit: second)
        connTimeout: 2s
        # Buffer size of received messages
        recvBuffSize: 20
        # Buffer size of sending messages
        sendBuffSize: 200
        # Time to wait before pull engine processes incoming digests (unit: second)
        digestWaitTime: 1s
        # Time to wait before pull engine removes incoming nonce (unit: second)
        requestWaitTime: 1s
        # Time to wait before pull engine ends pull (unit: second)
        responseWaitTime: 2s
        # Alive check interval(unit: second)
        aliveTimeInterval: 5s
        # Alive expiration timeout(unit: second)
        aliveExpirationTimeout: 25s
        # Reconnect interval(unit: second)
        reconnectInterval: 25s
        # This is an endpoint that is published to peers outside of the organization.
        # If this isn't set, the peer will not be known to other organizations.
        externalEndpoint:
        # Leader election service configuration
        election:
            # Longest time peer waits for stable membership during leader election startup (unit: second)
            startupGracePeriod: 15s
            # Interval gossip membership samples to check its stability (unit: second)
            membershipSampleInterval: 1s
            # Time passes since last declaration message before peer decides to perform leader election (unit: second)
            leaderAliveThreshold: 10s
            # Time between peer sends propose message and declares itself as a leader (sends declaration message) (unit: second)
            leaderElectionDuration: 5s

    # EventHub related configuration
    events:
        # The address that the Event service will be enabled on the peer
        address: 0.0.0.0:7053

        # total number of events that could be buffered without blocking send
        buffersize: 100

        # timeout duration for producer to send an event.
        # if < 0, if buffer full, unblocks immediately and not send
        # if 0, if buffer full, will block and guarantee the event will be sent out
        # if > 0, if buffer full, blocks till timeout
        timeout: 10ms

    # TLS Settings
    # Note that peer-chaincode connections through chaincodeListenAddress are
    # not protected by mutual TLS auth. See comments on chaincodeListenAddress
    # for more info
    tls:
        enabled:  false
        cert:
            file: tls/server.crt
        key:
            file: tls/server.key
        rootcert:
            file: tls/ca.crt

        # The server name used to verify the hostname returned by the TLS handshake
        serverhostoverride:

    # Path on the file system where peer will store data (eg ledger). This
    # location must be access control protected to prevent unintended
    # modification that might corrupt the peer operations.
    fileSystemPath: /var/hyperledger/production

    # BCCSP (Blockchain crypto provider): Select which crypto implementation or
    # library to use
    BCCSP:
        Default: SW
        SW:
            # TODO: The default Hash and Security level needs refactoring to be
            # fully configurable. Changing these defaults requires coordination
            # SHA2 is hardcoded in several places, not only BCCSP
            Hash: SHA2
            Security: 256
            # Location of Key Store
            FileKeyStore:
                # If "", defaults to 'mspConfigPath'/keystore
                # TODO: Ensure this is read with fabric/core/config.GetPath() once ready
                KeyStore:

    # Path on the file system where peer will find MSP local configurations
    mspConfigPath: msp

    # Identifier of the local MSP
    # ----!!!!IMPORTANT!!!-!!!IMPORTANT!!!-!!!IMPORTANT!!!!----
    # Deployers need to change the value of the localMspId string.
    # In particular, the name of the local MSP ID of a peer needs
    # to match the name of one of the MSPs in each of the channel
    # that this peer is a member of. Otherwise this peer's messages
    # will not be identified as valid by other nodes.
    localMspId: DEFAULT

    # Used with Go profiling tools only in a non-production environment.
    # In production, it should be disabled (eg enabled: false)
    profile:
        enabled:     false
        listenAddress: 0.0.0.0:6060

###############################################################################
#
#    VM section
#
###############################################################################
vm:

    # Endpoint of the vm management system.  For docker can be one of the following in general
    # unix:///var/run/docker.sock
    # http://localhost:2375
    # https://localhost:2376
    endpoint: unix:///var/run/docker.sock

    # settings for docker vms
    docker:
        tls:
            enabled: false
            ca:
                file: docker/ca.crt
            cert:
                file: docker/tls.crt
            key:
                file: docker/tls.key

        # Enables/disables the standard out/err from chaincode containers for
        # debugging purposes
        attachStdout: false

        # Parameters for creating the docker container.
        # Containers may be created efficiently using ipam & a dns-server for the cluster
        # NetworkMode - sets the networking mode for the container. Supported
        # standard values are: `host`(default),`bridge`,`ipvlan`,`none`.
        # Dns - a list of DNS servers for the container to use.
        # Note:  `Privileged` `Binds` `Links` and `PortBindings` properties of
        # Docker Host Config are not supported and will not be used if set.
        # LogConfig - sets the logging driver (Type) and related options
        # (Config) for Docker. For more info,
        # https://docs.docker.com/engine/admin/logging/overview/
        # Note: Set LogConfig using Environment Variables is not supported.
        hostConfig:
            NetworkMode: host
            Dns:
               # - 192.168.0.1
            LogConfig:
                Type: json-file
                Config:
                    max-size: "50m"
                    max-file: "5"
            Memory: 2147483648

###############################################################################
#
#    Chaincode section
#
###############################################################################
chaincode:
    # This is used if chaincode endpoint resolution fails with the
    # chaincodeListenAddress property
    peerAddress:

    # The id is used by the Chaincode stub to register the executing Chaincode
    # ID with the Peer and is generally supplied through ENV variables
    # the `path` form of ID is provided when installing the chaincode.
    # The `name` is used for all other requests and can be any string.
    id:
        path:
        name:

    # Generic builder environment, suitable for most chaincode types
    builder: $(DOCKER_NS)/fabric-ccenv:$(ARCH)-$(PROJECT_VERSION)

    golang:
        # golang will never need more than baseos
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)

    car:
        # car may need more facilities (JVM, etc) in the future as the catalog
        # of platforms are expanded.  For now, we can just use baseos
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)

    java:
        # This is an image based on java:openjdk-8 with additional compiler
        # tools added for Java shim layer packaging.
        # This image is packed with shim layer libraries that are necessary
        # for Java chaincode runtime.
        Dockerfile:  |
            from $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION)

    # Timeout duration for starting up a container and waiting for Register
    # to come through. 1sec should be plenty for chaincode unit tests
    startuptimeout: 300s

    # Timeout duration for Invoke and Init calls to prevent runaway.
    # This timeout is used by all chaincodes in all the channels, including
    # system chaincodes.
    # Note that during Invoke, if the image is not available (e.g. being
    # cleaned up when in development environment), the peer will automatically
    # build the image, which might take more time. In production environment,
    # the chaincode image is unlikely to be deleted, so the timeout could be
    # reduced accordingly.
    executetimeout: 30s

    # There are 2 modes: "dev" and "net".
    # In dev mode, user runs the chaincode after starting peer from
    # command line on local machine.
    # In net mode, peer will run chaincode in a docker container.
    mode: net

    # keepalive in seconds. In situations where the communication goes through a
    # proxy that does not support keep-alive, this parameter will maintain connection
    # between peer and chaincode.
    # A value <= 0 turns keepalive off
    keepalive: 0

    # system chaincodes whitelist. To add system chaincode "myscc" to the
    # whitelist, add "myscc: enable" to the list below, and register in
    # chaincode/importsysccs.go
    system:
        cscc: enable
        lscc: enable
        escc: enable
        vscc: enable
        qscc: enable

    # Logging section for the chaincode container
    logging:
      # Default level for all loggers within the chaincode container
      level:  info
      # Override default level for the 'shim' module
      shim:   warning
      # Format for the chaincode container logs
      format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'

###############################################################################
#
#    Ledger section - ledger configuration encompases both the blockchain
#    and the state
#
###############################################################################
ledger:

  blockchain:

  state:
    # stateDatabase - options are "goleveldb", "CouchDB"
    # goleveldb - default state database stored in goleveldb.
    # CouchDB - store state database in CouchDB
    stateDatabase: goleveldb
    couchDBConfig:
       # It is recommended to run CouchDB on the same server as the peer, and
       # not map the CouchDB container port to a server port in docker-compose.
       # Otherwise proper security must be provided on the connection between
       # CouchDB client (on the peer) and server.
       couchDBAddress: 127.0.0.1:5984
       # This username must have read and write authority on CouchDB
       username:
       # The password is recommended to pass as an environment variable
       # during start up (eg LEDGER_COUCHDBCONFIG_PASSWORD).
       # If it is stored here, the file must be access control protected
       # to prevent unintended users from discovering the password.
       password:
       # Number of retries for CouchDB errors
       maxRetries: 3
       # Number of retries for CouchDB errors during peer startup
       maxRetriesOnStartup: 10
       # CouchDB request timeout (unit: duration, e.g. 20s)
       requestTimeout: 35s
       # Limit on the number of records to return per query
       queryLimit: 10000


  history:
    # enableHistoryDatabase - options are true or false
    # Indicates if the history of key updates should be stored.
    # All history 'index' will be stored in goleveldb, regardless if using
    # CouchDB or alternate database for the state.
    enableHistoryDatabase: true
```

### Generating the configuration

The above are the four core Fabric configuration files; for this deployment you mainly just adjust the relevant parameters in them.

Next we generate the configuration artifacts with a script:

```shell
# Create the fabric directory
mkdir -p /opt/fabric
# Unpack the release binaries and the prepared fabric files
tar zxf /usr/local/software/hyperledger-fabric-linux-amd64-1.0.3.tar.gz -C /opt/fabric/

tar zxf /usr/local/software/fabric.tar.gz -C /opt/

######### The certificates and keys must be identical everywhere, so run the following steps on ONE machine only, then distribute the results — begin

# Enter the fabric configs directory
cd /opt/fabric/configs/
# Run the generateArtifacts.sh script
../scripts/generateArtifacts.sh
# The script does two things:
# 1. Generates the keys and certificates from crypto-config.yaml and saves them
#    in the crypto-config folder created in the current directory
# 2. Generates the genesis block and channel artifacts and saves them in the
#    channel-artifacts folder created here
# Inspect the generated directory structure yourself if you are curious; it is not listed here

# Distribute the whole fabric directory to every orderer and peer server
scp -r /opt/fabric root@node"X":/opt/

######### end #########

# Files needed for the chaincode
mkdir -p ~/go/src/github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02

cp /usr/local/software/chaincode_example02.go ~/go/src/github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02/
```

The peer start script defaults to peer0.org1.local, so it must be modified on the other three peer nodes.

node4  peer1.org1.local

```shell
sed -i 's/peer0/peer1/g' /opt/fabric/start-peer.sh
```

node5  peer0.org2.local

```shell
sed -i 's/org1/org2/g' /opt/fabric/start-peer.sh
sed -i 's/Org1/Org2/g' /opt/fabric/start-peer.sh
```

node6  peer1.org2.local

```shell
sed -i 's/peer0/peer1/g' /opt/fabric/start-peer.sh
sed -i 's/org1/org2/g' /opt/fabric/start-peer.sh
sed -i 's/Org1/Org2/g' /opt/fabric/start-peer.sh
```
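Before editing the real start-peer.sh, the three substitutions can be dry-run on a scratch file to check that they compose as intended. The two lines written below are only a stand-in for what the script might contain, not its real contents:

```shell
# Dry-run the node6 substitutions on a scratch file; the contents written
# here only mimic start-peer.sh, they are not the real script
tmp=$(mktemp)
printf 'CORE_PEER_ID=peer0.org1.local\nCORE_PEER_LOCALMSPID=Org1MSP\n' > "$tmp"
sed -i 's/peer0/peer1/g; s/org1/org2/g; s/Org1/Org2/g' "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

If the output reads `peer1.org2.local` and `Org2MSP`, the same sed commands are safe to run against the real script.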

At this point the peer environment is complete. We do not start the peers yet; they are started after the orderers are up and running.

## Installing the orderers

node1, node2

```shell
# The file is by default the start script for node1 (orderer1); on node2 (orderer2)
# first run: sed -i 's/orderer1/orderer2/g' /opt/fabric/start-orderer.sh
# Start the orderer
/opt/fabric/start-orderer.sh
# Watch the log
tail -99f ~/orderer.log
```

With kafka and the orderers already running, we can now start the peers:

```shell
# Start all four peers
/opt/fabric/start-peer.sh
# Watch the log
tail -99f ~/peer.log
```

## Installing and running the chaincode

With that, the whole Fabric network is ready; next we create a channel, then install and run the chaincode. The example implements two accounts, a and b, which can transfer funds to each other.

First run the following commands on one node (peer0.org1.local):

```shell
cd ~

# Set the CLI environment variables
export FABRIC_ROOT=/opt/fabric
export FABRIC_CFG_PATH=/opt/fabric/configs

export CORE_LOGGING_LEVEL=DEBUG
export CORE_LOGGING_PEER=debug
export CORE_LOGGING_FORMAT='%{color}%{time:15:04:05.000} [%{module}] [%{longfile}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
export CORE_CHAINCODE_LOGGING_LEVEL=DEBUG

export CORE_PEER_ID=peer0.org1.local
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_MSPCONFIGPATH=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/users/Admin@org1.local/msp
export CORE_PEER_ADDRESS=peer0.org1.local:7051

export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_ROOTCERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer0.org1.local/tls/ca.crt
export CORE_PEER_TLS_KEY_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer0.org1.local/tls/server.key
export CORE_PEER_TLS_CERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer0.org1.local/tls/server.crt

export ordererCa=$FABRIC_CFG_PATH/crypto-config/ordererOrganizations/orderer.local/orderers/orderer1.local/msp/tlscacerts/tlsca.orderer.local-cert.pem


# Create the channel
# This writes a mychannel.block file into the current directory. The file matters:
# every other node that wants to join the channel must use it, so distribute it to
# all the other peers.
$FABRIC_ROOT/bin/peer channel create -o orderer1.local:7050 -f $FABRIC_CFG_PATH/channel-artifacts/channel.tx -c mychannel -t 30 --tls true --cafile $ordererCa

# Join the channel
$FABRIC_ROOT/bin/peer channel join -b mychannel.block

# Update the anchor peer. The network still runs without this step, but the anchor
# peer is what lets peers of different organizations discover each other via gossip.
$FABRIC_ROOT/bin/peer channel update -o orderer1.local:7050 -c mychannel -f $FABRIC_CFG_PATH/channel-artifacts/${CORE_PEER_LOCALMSPID}anchors.tx --tls true --cafile $ordererCa

# Install the chaincode
# Chaincode must be installed on every peer that operates on it; in this network,
# if all 4 peers want to use Example02, it has to be installed 4 times
$FABRIC_ROOT/bin/peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02

# Instantiate the chaincode
# Instantiation packages the installed chaincode on the peer's machine, building the
# Docker image and container for this channel; the endorsement policy is specified here.
# Note the version (-v 1.0) must match the installed version.
$FABRIC_ROOT/bin/peer chaincode instantiate -o orderer1.local:7050 --tls true --cafile $ordererCa -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "OR ('Org1MSP.member','Org2MSP.member')"

# The instance now exists, initialized with a=100 and b=200; verify with a query
$FABRIC_ROOT/bin/peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

# Returns: Query Result: 100

# Transfer 10 from account a to account b
$FABRIC_ROOT/bin/peer chaincode invoke -o orderer1.local:7050  --tls true --cafile $ordererCa -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'
```
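The `-c` payloads above are raw JSON strings, and quoting them by hand inside single quotes is easy to get wrong. One way to reduce mistakes is to build the payload with printf and inspect it before passing it to the CLI; the sketch below reproduces the invoke payload used above (chaincode_example02 expects the amount as a string, not a number):

```shell
# Build the chaincode Args payload with printf instead of hand-quoting;
# this is the same payload as the invoke command above
args=$(printf '{"Args":["invoke","%s","%s","%s"]}' a b 10)
echo "$args"
```

The resulting `$args` can then be passed as the `-c` argument.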

Querying from another node (peer0.org2.local)

All the operations so far were done under org1. Will org2, joined to the same chain (mychannel), see org1's changes? Let's join peer0.org2.local to mychannel and install the chaincode there.

```shell
# Set the CLI environment variables

export FABRIC_ROOT=/opt/fabric
export FABRIC_CFG_PATH=/opt/fabric/configs

export CORE_LOGGING_LEVEL=DEBUG
export CORE_LOGGING_PEER=debug
export CORE_LOGGING_FORMAT='%{color}%{time:15:04:05.000} [%{module}] [%{longfile}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
export CORE_CHAINCODE_LOGGING_LEVEL=DEBUG

export CORE_PEER_ID=peer0.org2.local
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_MSPCONFIGPATH=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/users/Admin@org2.local/msp
export CORE_PEER_ADDRESS=peer0.org2.local:7051

export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_ROOTCERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/peers/peer0.org2.local/tls/ca.crt
export CORE_PEER_TLS_KEY_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/peers/peer0.org2.local/tls/server.key
export CORE_PEER_TLS_CERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/peers/peer0.org2.local/tls/server.crt

export ordererCa=$FABRIC_CFG_PATH/crypto-config/ordererOrganizations/orderer.local/orderers/orderer2.local/msp/tlscacerts/tlsca.orderer.local-cert.pem

# cd into the directory holding the distributed mychannel.block, then join the channel
$FABRIC_ROOT/bin/peer channel join -b mychannel.block

# Install the chaincode
$FABRIC_ROOT/bin/peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02

# The chaincode was already instantiated under org1, i.e. the corresponding block
# already exists, so do not instantiate again under org2; just query
$FABRIC_ROOT/bin/peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

# The command takes quite a while before it returns:
# Query Result: 90
# That is because peer0.org2 also has to build its own Docker image and start a
# container before it can answer through it. Run docker ps -a and you will see
# one more container.
```

The remaining nodes are handled the same way; remember to change the environment variables accordingly.
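Rather than retyping the export block on every node, the peer-identity variables can be derived from the org and peer names. The helper below is only a sketch: it assumes the /opt/fabric/configs layout used in this article and prints just the four identity variables (the TLS paths follow the same pattern and could be added the same way):

```shell
# Print the peer-identity environment for a given org/peer pair.
# Assumes the /opt/fabric/configs layout used in this article.
peer_env() {
    local org=$1 peer=$2
    local cfg=/opt/fabric/configs
    local msp="Org${org#org}MSP"   # org2 -> Org2MSP
    cat <<EOF
export CORE_PEER_ID=${peer}.${org}.local
export CORE_PEER_LOCALMSPID=${msp}
export CORE_PEER_MSPCONFIGPATH=${cfg}/crypto-config/peerOrganizations/${org}.local/users/Admin@${org}.local/msp
export CORE_PEER_ADDRESS=${peer}.${org}.local:7051
EOF
}
peer_env org2 peer1
```

Apply it to the current shell with `eval "$(peer_env org2 peer1)"`.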

## Cleanup

```shell
# Clean up peer data
# Remove the docker containers and images the node created
docker rm <container-id>      # list them with: docker ps -a
docker rmi <image-id>         # list them with: docker images

rm -rf /var/hyperledger/*

# Clean up orderer data
/opt/fabric/stop-orderer.sh
rm -rf /var/hyperledger/*

# Clean up kafka data
# Stop kafka
kafka-server-stop.sh
rm -rf /data/kafka-logs/*
# Clean up zookeeper data (zookeeper must still be running):
/usr/local/zookeeper-3.4.10/bin/zkCli.sh

rmr /brokers
rmr /admin
rmr /config
rmr /isr_change_notification
quit

# Stop zookeeper
zookeeper-server-stop.sh
# kafka often refuses to stop; find the process with jps and kill it if necessary
rm -rf /data/zookeeper/version*
```
