1. Multi-host, multi-node planning
Hostname | IP | Node identity
---|---|---
master | 192.168.1.120 | lucy
slave1 | 192.168.1.121 | slave1
slave2 | 192.168.1.122 | slave2
- All hosts must be able to ping one another; here I assigned each host a static IP.
- For how to configure a static IP, see:
Configuring static and dynamic IPs on Ubuntu 18.04
2. Starting the nodes
- Start node `lucy` on host `master`:

```shell
# Initialize the genesis block
$ geth --datadir /home/hadoop/eth/private_eth1/ init genesis.json
# Start node lucy (note: the IP passed to --rpcaddr did not seem to take effect)
$ geth --networkid 230 --datadir /home/hadoop/eth/private_eth1/ --identity "lucy" --rpc --rpcport "8545" --port "30303" --rpcaddr "192.168.1.120" --nodiscover --rpcapi "eth,net,web3,personal,admin,shh,txpool,debug,miner" console
```
- Start node `slave1` on host `slave1`:

```shell
# Initialize the genesis block
$ geth --datadir /home/hadoop/eth/private_eth1/ init genesis.json
# Start node slave1 (note: the IP passed to --rpcaddr did not seem to take effect)
$ geth --networkid 230 --datadir /home/hadoop/eth/private_eth1/ --identity "slave1" --rpc --rpcport "8545" --port "30303" --rpcaddr "192.168.1.121" --nodiscover --rpcapi "eth,net,web3,personal,admin,shh,txpool,debug,miner" console
```
- Start node `slave2` on host `slave2`:

```shell
# Initialize the genesis block
$ geth --datadir /home/hadoop/eth/private_eth1/ init genesis.json
# Start node slave2 (note: the IP passed to --rpcaddr did not seem to take effect)
$ geth --networkid 230 --datadir /home/hadoop/eth/private_eth1/ --identity "slave2" --rpc --rpcport "8545" --port "30303" --rpcaddr "192.168.1.122" --nodiscover --rpcapi "eth,net,web3,personal,admin,shh,txpool,debug,miner" console
```
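The genesis.json passed to `geth init` above is not shown in these notes. A minimal sketch that would be consistent with this setup might look like the following; setting `chainId` equal to the `--networkid` value 230, and the specific difficulty/gasLimit values, are my assumptions. Every host must be initialized from an identical file, otherwise the nodes will refuse to peer.

```shell
# Illustrative genesis.json (the original file is not shown in these notes;
# chainId matching --networkid 230 is an assumption, not confirmed by the source).
cat > genesis.json <<'EOF'
{
  "config": {
    "chainId": 230,
    "homesteadBlock": 0,
    "eip155Block": 0,
    "eip158Block": 0
  },
  "difficulty": "0x400",
  "gasLimit": "0x8000000",
  "alloc": {}
}
EOF
```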
3. Inspecting node information
- Node `lucy`'s information on host `master` (unimportant fields omitted):

```
> admin.nodeInfo
{
  enode: "enode://3bc36ddac875a8c156eff7cd2e31c4f1a05cf5da38bbbaccdd371b604db79dacabd24bfc0a3a0a12c54a9be91a052b073011809a81b4570eb465fbc28db8c2d6@127.0.0.1:30303?discport=0",
  enr: "0xf88fb840578ee0bfee1d5a110d5f9cfaad3def2de6e29a0f989e50844cd48ea99f0c09a82705fd9944fefa6d0fcd84a2bac42c7c5fe8cf5d04cc04f6134c67502c8f1d540283636170c6c5836574683f826964827634826970847f00000189736563703235366b31a1023bc36ddac875a8c156eff7cd2e31c4f1a05cf5da38bbbaccdd371b604db79dac8374637082765f",
  id: "dd0759f26e9a9c0b5d430a5954d045f9130f5850aacf569208253456f55fd56d",
  ip: "127.0.0.1",
  listenAddr: "[::]:30303",
  name: "Geth/lucy/v1.8.23-stable-c9427004/linux-amd64/go1.10.4",
  ports: {
    discovery: 0,
    listener: 30303
  },
  ...
      network: 230
    }
  }
}
```
- Node `slave1`'s information on host `slave1` (unimportant fields omitted):

```
> admin.nodeInfo
{
  enode: "enode://7d0b5dd5dc499e443266f48615fa238595d802baabc12dedd16b9ce3af40af9961c20a91282a38a1fd578e4744acf3df409ec205a6071f75ff6b6cd7697bf0ae@113.54.155.24:30303?discport=0",
  enr: "0xf88fb840e4d46897d193e32ba8d476546f4ee4df346c27ee339abcc131f57e4c2713dd74463eb75b53e9245f89897c9bab2f40901bb45b02b9d3127ec92f19f3cc6c950d0483636170c6c5836574683f8269648276348269708471369b1889736563703235366b31a1027d0b5dd5dc499e443266f48615fa238595d802baabc12dedd16b9ce3af40af998374637082765f",
  id: "e6541e73b5c630eb1c608e0dcb8a561316398d17976b8dda8efa4deb65911cd6",
  ip: "113.54.155.24",
  listenAddr: "[::]:30303",
  name: "Geth/slave1/v1.8.23-stable-c9427004/linux-amd64/go1.10.4",
  ports: {
    discovery: 0,
    listener: 30303
  },
  ...
      network: 230
    }
  }
}
```
- Node `slave2`'s information on host `slave2` (unimportant fields omitted):

```
> admin.nodeInfo
{
  enode: "enode://76958c4cd7daf0b6bd2c3fc6737fa4614945d3cd8f82f4cc8205dab2af2d3725dbd318f554b0cec048748d13a119572b82b943f944306aa1a2ca37191a58d697@113.54.155.24:30303?discport=0",
  enr: "0xf88fb8404305d5219d1fda712d55e83d7636e5f1a605d61f5da196946c5fffefd3f810fc22a3e1fd9b62933da3789cfb4af8461291f2efb701581c92b277a08316a53ede0583636170c6c5836574683f8269648276348269708471369b1889736563703235366b31a10376958c4cd7daf0b6bd2c3fc6737fa4614945d3cd8f82f4cc8205dab2af2d37258374637082765f",
  id: "a3e6296e09d9882cccd26c9842fb6094d2376a9bfbdf959a3ac93f31a0fa4284",
  ip: "113.54.155.24",
  listenAddr: "[::]:30303",
  name: "Geth/slave2/v1.8.23-stable-c9427004/linux-amd64/go1.10.4",
  ports: {
    discovery: 0,
    listener: 30303
  },
  ...
      network: 230
    }
  }
}
```
4. Linking the nodes to one another
- Use the `admin.addPeer()` method to link the nodes to one another.
- Note: first obtain the value of the `enode` property from `admin.nodeInfo`. Half a day of experimenting showed that, when linking nodes, you must change the IP address after the `@` sign in the `enode` value to the IP of the node you want to link, and remove the trailing `?discport=0`!
- Link node `lucy` with node `slave1`:
```
> admin.addPeer("enode://7d0b5dd5dc499e443266f48615fa238595d802baabc12dedd16b9ce3af40af9961c20a91282a38a1fd578e4744acf3df409ec205a6071f75ff6b6cd7697bf0ae@192.168.1.121:30303")
true
```
Check the linked peers: node `slave1` is now successfully linked with node `lucy`.

```
> admin.peers
[{
  ...
  network: {
    inbound: false,
    localAddress: "192.168.1.120:51504",
    remoteAddress: "192.168.1.121:30303",
    static: true,
    trusted: false
  },
  ...
  }
}]
// or check remoteAddress directly
> admin.peers[0].network.remoteAddress
"192.168.1.121:30303"
```
- Link node `lucy` with node `slave2`:

```
> admin.addPeer("enode://76958c4cd7daf0b6bd2c3fc6737fa4614945d3cd8f82f4cc8205dab2af2d3725dbd318f554b0cec048748d13a119572b82b943f944306aa1a2ca37191a58d697@192.168.1.122:30303")
true
```
Check the linked peers: node `slave2` is now successfully linked with node `lucy` as well. At this point, `lucy` has two peers:

```
> admin.peers
[{
  ...
  network: {
    inbound: false,
    localAddress: "192.168.1.120:39454",
    remoteAddress: "192.168.1.122:30303",
    static: true,
    trusted: false
  },
  ...
  }
}, {
  ...
  network: {
    inbound: false,
    localAddress: "192.168.1.120:51504",
    remoteAddress: "192.168.1.121:30303",
    static: true,
    trusted: false
  },
  ...
  }
}]
```
- Link node `slave2` with node `slave1`:

```
> admin.addPeer("enode://7d0b5dd5dc499e443266f48615fa238595d802baabc12dedd16b9ce3af40af9961c20a91282a38a1fd578e4744acf3df409ec205a6071f75ff6b6cd7697bf0ae@192.168.1.121:30303")
true
```
Check the linked peers: node `slave2` is now successfully linked with both `slave1` and `lucy`:

```
> admin.peers
[{
  ...
  network: {
    inbound: true,
    localAddress: "192.168.1.122:30303",
    remoteAddress: "192.168.1.120:39454",
    static: false,
    trusted: false
  },
  ...
  }
}, {
  ...
  network: {
    inbound: false,
    localAddress: "192.168.1.122:48648",
    remoteAddress: "192.168.1.121:30303",
    static: true,
    trusted: false
  },
  ...
  }
}]
```
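The enode rewriting described in the note above (swap the IP after `@` for the peer's real address, strip the trailing `?discport=0`) can be scripted. A small sketch, using slave1's enode and IP from this setup as the example:

```shell
# Take an enode URL as reported by admin.nodeInfo, point it at the peer's
# real IP, and strip the trailing "?discport=0" before passing it to addPeer.
enode='enode://7d0b5dd5dc499e443266f48615fa238595d802baabc12dedd16b9ce3af40af9961c20a91282a38a1fd578e4744acf3df409ec205a6071f75ff6b6cd7697bf0ae@113.54.155.24:30303?discport=0'
peer_ip='192.168.1.121'
# s|@[^:]*:|...| rewrites everything between "@" and the port separator ":".
echo "$enode" | sed -e "s|@[^:]*:|@${peer_ip}:|" -e 's|?discport=0$||'
# prints the rewritten URL, ending in @192.168.1.121:30303
```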
- At this point the three nodes on the three hosts are all linked to one another, as shown in the figure:
- The following is a remark I found while consulting someone else's notes, which suggested how to avoid adding the peers by hand every time. To test it, I shut down the slave2 node, restarted it, and checked its `admin.peers`.
Note: when a node shuts down, it is automatically removed; when it restarts, it automatically rejoins the node cluster. But if all nodes go down, the peers must be re-added.
Test result: the claim is not quite right — after restarting, the slave2 node only re-linked with the lucy node.
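A standard geth mechanism for making peering persist across restarts (not used in the original notes, so treat the details as an assumption) is a `static-nodes.json` file in the node's data directory — here that would be `/home/hadoop/eth/private_eth1/geth/static-nodes.json` — listing the rewritten enode URLs of the other nodes; geth reads it at startup and dials those peers automatically. For lucy it could contain:

```json
[
  "enode://7d0b5dd5dc499e443266f48615fa238595d802baabc12dedd16b9ce3af40af9961c20a91282a38a1fd578e4744acf3df409ec205a6071f75ff6b6cd7697bf0ae@192.168.1.121:30303",
  "enode://76958c4cd7daf0b6bd2c3fc6737fa4614945d3cd8f82f4cc8205dab2af2d3725dbd318f554b0cec048748d13a119572b82b943f944306aa1a2ca37191a58d697@192.168.1.122:30303"
]
```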
5. Creating accounts and starting to mine
- Create two accounts on each node; the first account defaults to the miner (etherbase) account:
```
//slave2
> personal.newAccount("123456")
"0x4161514855682c94e3cbcb4808eb8766cf889e17"
> personal.newAccount("123456")
"0xa070a468bfb29e807fa09b2876b543dc2e9f9424"
//slave1
> personal.newAccount("123456")
"0x7cd56cfe617ae25037cb6e13a85a3b485cff9d74"
> personal.newAccount("123456")
"0x74f3141e22e415dd9edaae9cd1ef26e4ef704984"
//lucy
> personal.newAccount("123456")
"0xa070a468bfb29e807fa09b2876b543dc2e9f9424"
> personal.newAccount("123456")
"0xd4c057bbda47d12229f71c474be330d0b1a37780"
```
- Start mining on node `slave1`:

```
// "Successfully sealed new block" / "mined potential block": a potential block was mined
INFO [03-23|22:37:30.378] Successfully sealed new block number=21 sealhash=cf79be…db1ab5 hash=86fc49…8d62d7 elapsed=2.746s
// "block reached canonical chain": the block is on the canonical chain
INFO [03-23|22:37:30.379] block reached canonical chain number=14 hash=5eac6b…bdb8fc
INFO [03-23|22:37:30.379] mined potential block number=21 hash=86fc49…8d62d7
// "Commit new mining work": a new mining task was issued; it will not necessarily produce a mined block
INFO [03-23|22:37:30.380] Commit new mining work number=22 sealhash=0bc5d0…9a2f1e uncles=0 txs=0 gas=0 fees=0 elapsed=278.877µs
INFO [03-23|22:37:30.469] Generating DAG in progress epoch=1 percentage=35 elapsed=1m42.916s
```
- After stopping mining, check all three nodes: the block counts agree. Note: although `eth.blockNumber` reports 28, there are actually 29 blocks in total, since block 0 is the genesis block!
```
//lucy
> eth.blockNumber
28
//slave1
> eth.blockNumber
28
//slave2
> eth.blockNumber
28
```
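As an aside, the console prints block numbers in decimal, but the underlying JSON-RPC `eth_blockNumber` call returns quantities as hex strings; 28 here corresponds to `"0x1c"`, which shell arithmetic can confirm:

```shell
# The JSON-RPC API encodes quantities as hex strings; 0x1c decodes to 28.
echo $((0x1c))
# prints 28
```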
6. Transfers between accounts
Transfer 4 ether from `eth.accounts[0]` on node `slave1` to `eth.accounts[1]` on node `lucy`:
```
> personal.unlockAccount(eth.accounts[0],"123456",2000)
true
> eth.sendTransaction({from:eth.accounts[0],to:"0x093089e86d8a276d2aba0457ad1471e322c19f57",value:web3.toWei(4,"ether")})
INFO [03-24|11:14:10.298] Setting new local account address=0x7cD56cFe617AE25037Cb6e13a85a3b485cff9d74
INFO [03-24|11:14:10.298] Submitted transaction fullhash=0xb1e37b3736110149cd5b021639d853ead0353de9ad31f5d5cbcb299b0504fd08 recipient=0x093089E86d8a276D2aBa0457Ad1471E322c19F57
"0xb1e37b3736110149cd5b021639d853ead0353de9ad31f5d5cbcb299b0504fd08"
> miner.start(1)
...
```
After mining finishes, check the balance of `eth.accounts[1]` on node `lucy`:
```
> web3.fromWei(eth.getBalance(eth.accounts[1]),"ether")
4
```
Transfer 1 ether from `eth.accounts[1]` on node `lucy` to `eth.accounts[1]` on node `slave2`:
```
> personal.unlockAccount(eth.accounts[0],"123456",2000)
true
> eth.sendTransaction({from:eth.accounts[1],to:"0xa070a468bfb29e807fa09b2876b543dc2e9f9424",value:web3.toWei(1,"ether")})
INFO [03-24|11:57:29.477] Setting new local account address=0x093089E86d8a276D2aBa0457Ad1471E322c19F57
INFO [03-24|11:57:29.477] Submitted transaction fullhash=0xdb2cc7df872cfbf3920289684fec86e1375c63ee7574eed0ed933b368b713a20 recipient=0xa070A468Bfb29E807fa09B2876B543dc2e9f9424
"0xdb2cc7df872cfbf3920289684fec86e1375c63ee7574eed0ed933b368b713a20"
// start mining to update the balance of eth.accounts[1] on slave2
> miner.start(1)
...
```
Check the balance of `eth.accounts[1]` on node `slave2`:
```
> web3.fromWei(eth.getBalance(eth.accounts[1]),"ether")
1
```
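The `web3.toWei` / `web3.fromWei` conversions used above are simply a factor of 10^18 (1 ether = 10^18 wei). The 4-ether transfer value, for example, works out as:

```shell
# web3.toWei(4, "ether") == 4 * 10^18 wei.
# This particular value still fits in 64-bit shell arithmetic (4e18 < 2^63).
echo $((4 * 1000000000000000000))
# prints 4000000000000000000
```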
Reference links:
Ethereum consortium chain: a multi-node private chain setup manual
Building an Ethereum private chain with multiple nodes (tested on two machines)
Setting up an Ethereum private chain and connecting nodes between two machines
Blockchain development (1): setting up a private chain environment based on Ethereum