Multi-Host Deployment for First Network (Hyperledger Fabric v2)

Overview

This is a rework of my previous article, published last December (2019), about deploying the raft-based First Network in a multi-host environment.

The previous article was done on v1.4. Here the demonstration uses Fabric v2.2, which was released just a month ago. There are several changes compared to the previous article:

  • The generation of crypto material and channel artifacts is skipped. Instead, they are pre-generated and available in the repository. This simplifies the initial steps when copying files between hosts with scp.
  • First Network is not available in v2.2. As this setup is largely based on First Network, the relevant files were copied and kept in the repository.
  • Chaincode deployment in v2 differs from v1.4.
  • Scripts are created for easier deployment.

As a result, a repository is created here. Once you have the swarm set up across the four hosts, we can clone the repository and start the demonstration. Those who wish to understand the details can always refer to the previous article.

The setup remains the same. The demonstration follows this flow:

  1. Bring up four hosts in AWS
  2. Create an overlay network with Docker Swarm
  3. Clone the repository on all hosts
  4. Bring up each host
  5. Bring up mychannel and join all peers to mychannel
  6. Deploy fabcar chaincode
  7. Test fabcar chaincode
  8. Tear down everything

Setup

First Network is composed of one orderer organization and two peer organizations. The orderer organization runs a raft-based ordering service cluster of five ordering service nodes (orderers). Each peer organization (Org1 and Org2) has two peers, peer0 and peer1. A channel mychannel is created and all peers join it.

The setup is identical to the previous article. Here is how the network components are deployed on each host.

Demonstration

Step 1: Bring up four hosts in AWS

The four hosts run Fabric v2.2 on Ubuntu 18.04 LTS.

Note: Since we do not use any managed blockchain features, feel free to use hosts from a different cloud provider, or even spread them across cloud providers. Just make sure the required ports are open between the hosts.

Here is my deployment.


Step 2: Create an overlay network with Docker Swarm

On Host 1,

docker swarm init --advertise-addr <host-1 ip address>
docker swarm join-token manager

Note the output of docker swarm join-token manager, as it is used immediately in the next step.

On Host 2, 3 and 4,

# the join-token output is a complete `docker swarm join` command; append --advertise-addr to it
<output from join-token manager> --advertise-addr <host-n ip>

On Host 1, create the first-network overlay network, which is used by our network components (see each hostn.yaml in the repository):

docker network create --attachable --driver overlay first-network

Now we can check each host. All hosts share the same overlay network (note the same network ID on all hosts).

Image: Terminals for Host 1, 2, 3 and 4 (from top to bottom)

With this, the Docker Swarm is ready for our fabric network.

Step 3: Clone the repository on all hosts

On each host,

cd fabric-samples
git clone https://github.com/kctam/4host-swarm.git
cd 4host-swarm

Note: All four hosts have the same set of material. This is just for demonstration. In a real deployment, each host should have a customized set of material, so that each host holds only the crypto material and containers (e.g. the CLI for its own organization) that it actually needs.

Step 4: Bring up each host

On each host, bring up the host with the script hostnup.sh. This script is just a docker-compose up with the corresponding configuration file.

./hostnup.sh
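Each hostn.yaml is an ordinary Compose file attached to the shared overlay. The fragment below is a hypothetical, trimmed illustration of what one peer service in such a file could look like; the actual files in the repository are the source of truth.

```yaml
# Hypothetical fragment for illustration only -- see the hostn.yaml files
# in the repository for the real configuration.
version: '2'

networks:
  first-network:
    external: true          # the overlay created in Step 2

services:
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer:2.2
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    networks:
      - first-network
```

The key point is `external: true`: every host's Compose file refers to the same pre-created overlay rather than creating its own network.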

To check that everything works, list the containers running on each host. This matches our setup.

Image: Containers in Host 1, 2, 3 and 4 (from top to bottom)

Step 5: Bring up mychannel and join all peers to mychannel

This is a standard process: generate the channel genesis block (for mychannel), join all the peers using this block file, and update the anchor peer transactions. As all commands are issued from the CLI container (running on Host 1), a script is provided to perform these tasks.

On Host 1,

./mychannelup.sh
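Under the hood, mychannelup.sh performs roughly the standard create/join/update sequence. The sketch below prints a hypothetical outline of those commands (each would be wrapped in `docker exec cli ...` in practice); the script in the repository is the source of truth.

```shell
#!/usr/bin/env sh
# Hypothetical outline of what mychannelup.sh does -- printed, not executed.
# Each command below would run inside the cli container.
mychannel_commands() {
  cat <<'EOF'
peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls --cafile $ORDERER_CA
peer channel join -b mychannel.block
peer channel update -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/Org1MSPanchors.tx --tls --cafile $ORDERER_CA
EOF
}

mychannel_commands
```

The join step is repeated once per peer (with the per-peer environment overrides shown in Step 6), and the anchor update is run once per organization.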

After all peers have joined the channel, they should have the same ledger. We use the following command to check each host: all peers are at the same blockchain height (3) and report the same block hash.

Note that here we use the docker exec command directly against each peer container, rather than the CLI container.

docker exec peerx.orgy.example.com peer channel getinfo -c mychannel
Image: All peers have the same ledger (blockchain) after joining mychannel
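To run the same check against all four peers in one go, a small loop (an assumption, not part of the repository scripts) can print the per-peer docker exec command; pipe the output to sh on a host with access to the peers to actually execute them:

```shell
#!/usr/bin/env sh
# Print one `peer channel getinfo` invocation per peer.
# Pipe the output to `sh` to actually run the commands.
getinfo_commands() {
  for peer in peer0.org1 peer1.org1 peer0.org2 peer1.org2; do
    echo "docker exec ${peer}.example.com peer channel getinfo -c mychannel"
  done
}

getinfo_commands
```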

Now all peers have joined mychannel, and the network is ready for chaincode deployment.

Step 6: Deploy fabcar chaincode

We follow the previous article and deploy the fabcar chaincode for demonstration. Note that in v2.2 the process differs from v1.4, as we use the chaincode lifecycle for deployment. For more information, you can refer to the readthedocs and this article.

Note that all peer commands are issued from the CLI container, which runs on Host 1.

Package chaincode

# If not done before
pushd ../chaincode/fabcar/go
GO111MODULE=on go mod vendor
popd

# packaging
docker exec cli peer lifecycle chaincode package fabcar.tar.gz --path github.com/chaincode/fabcar/go --label fabcar_1

Install the chaincode package on all peers

# peer0.org1
docker exec cli peer lifecycle chaincode install fabcar.tar.gz

# peer1.org1
docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:8051 -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt cli peer lifecycle chaincode install fabcar.tar.gz

# peer0.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer0.org2.example.com:9051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt cli peer lifecycle chaincode install fabcar.tar.gz

# peer1.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer1.org2.example.com:10051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt cli peer lifecycle chaincode install fabcar.tar.gz
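The environment overrides above follow a fixed address scheme (7051/8051 for Org1's peers, 9051/10051 for Org2's). A small helper like this (hypothetical, not part of the repository) maps a peer to its CORE_PEER_ADDRESS value:

```shell
#!/usr/bin/env sh
# Map "<peer> <org>" to the CORE_PEER_ADDRESS used in this article.
peer_address() {
  case "$1.$2" in
    peer0.org1) echo "peer0.org1.example.com:7051" ;;
    peer1.org1) echo "peer1.org1.example.com:8051" ;;
    peer0.org2) echo "peer0.org2.example.com:9051" ;;
    peer1.org2) echo "peer1.org2.example.com:10051" ;;
    *) echo "unknown peer: $1.$2" >&2; return 1 ;;
  esac
}

peer_address peer1 org2   # prints peer1.org2.example.com:10051
```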

Once installation completes on each peer, you can see that a new chaincode container image has been built on each host. It is not instantiated yet (you will not see it in docker ps); it will be instantiated once the chaincode is committed. Also, although the command is issued on the CLI of Host 1, the chaincode container image is built on the host where the peer is running.

Image: Chaincode container images are built after the chaincode package is installed on each peer

Approve the chaincode for both organizations

Note: your package ID may not be the same. Use the one reported in the previous installation step.

# for org1
docker exec cli peer lifecycle chaincode approveformyorg --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --channelID mychannel --name fabcar --version 1 --sequence 1 --waitForEvent --package-id fabcar_1:a976a3f2eb95c19b91322fc939dd37135837e0cfc5d52e4dbc3a2ef881d14179

# for org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer0.org2.example.com:9051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt cli peer lifecycle chaincode approveformyorg --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --channelID mychannel --name fabcar --version 1 --sequence 1 --waitForEvent --package-id fabcar_1:a976a3f2eb95c19b91322fc939dd37135837e0cfc5d52e4dbc3a2ef881d14179
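If you are unsure of your package ID, `peer lifecycle chaincode queryinstalled` (run through the cli container) reports it. A sed one-liner can pull it out of that output; the sketch below runs against a sample line in the v2.x output format (the sample hash is the one from this demonstration):

```shell
#!/usr/bin/env sh
# Extract the package ID for label fabcar_1 from `queryinstalled`-style output.
# `sample` mimics the output format; in practice you would pipe
# `docker exec cli peer lifecycle chaincode queryinstalled` instead.
sample='Installed chaincodes on peer:
Package ID: fabcar_1:a976a3f2eb95c19b91322fc939dd37135837e0cfc5d52e4dbc3a2ef881d14179, Label: fabcar_1'

extract_package_id() {
  sed -n 's/^Package ID: \(.*\), Label: fabcar_1$/\1/p'
}

echo "$sample" | extract_package_id
```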

To check the approval status,

docker exec cli peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1

Commit the chaincode

docker exec cli peer lifecycle chaincode commit -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt --channelID mychannel --name fabcar --version 1 --sequence 1

To check the commit status,

docker exec cli peer lifecycle chaincode querycommitted --channelID mychannel --name fabcar

Step 7: Test fabcar chaincode

We will invoke chaincode functions to see whether the deployment works well.

Invoke initLedger

As we are using a raft-based ordering cluster, we can specify any orderer to handle our transaction. When invoking initLedger here, we specify orderer3.example.com, which runs on Host 3, to handle this process. You can try the others.

docker exec cli peer chaincode invoke -o orderer3.example.com:9050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer3.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n fabcar --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["initLedger"]}'

Query queryCar

After that, we can query a car record on each of the four peer nodes. We get back the same result from all of them, which shows that the fabric network is working well.

# peer0.org1
docker exec cli peer chaincode query -n fabcar -C mychannel -c '{"Args":["queryCar","CAR0"]}'

# peer1.org1
docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:8051 -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt cli peer chaincode query -n fabcar -C mychannel -c '{"Args":["queryCar","CAR0"]}'

# peer0.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer0.org2.example.com:9051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt cli peer chaincode query -n fabcar -C mychannel -c '{"Args":["queryCar","CAR0"]}'

# peer1.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer1.org2.example.com:10051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt cli peer chaincode query -n fabcar -C mychannel -c '{"Args":["queryCar","CAR0"]}'
Image: query on peer0.org1.example.com
Image: query on peer1.org1.example.com
Image: query on peer0.org2.example.com
Image: query on peer1.org2.example.com

Step 8: Tear down everything

On each host, tear down everything with the script hostndown.sh. This script uses docker-compose to shut down and remove the containers. In addition, all chaincode containers and chaincode images are removed.

./hostndown.sh
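hostndown.sh in the repository is the authoritative teardown; the sketch below only illustrates the kind of cleanup it performs. With DRY_RUN=1 (the default here) it prints each command instead of executing it. The dev-peer name/image prefix for chaincode containers is the Fabric convention, and host1.yaml is used as an example file name.

```shell
#!/usr/bin/env sh
# Illustrative teardown sketch -- hostndown.sh in the repository is authoritative.
# DRY_RUN=1 (default) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else sh -c "$*"; fi; }

teardown() {
  # host1.yaml as an example; use the file for the host you are on
  run "docker-compose -f host1.yaml down"
  # chaincode containers and images are prefixed dev-peer by convention
  run "docker rm -f \$(docker ps -aq --filter name=dev-peer)"
  run "docker rmi -f \$(docker images -q 'dev-peer*')"
}

teardown
```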

Hope you enjoy this work.

Translated from: https://medium.com/@kctheservant/multi-host-deployment-for-first-network-hyperledger-fabric-v2-273b794ff3d
