In the previous article we set up a Hyperledger Fabric v1.3.0 environment on Alibaba Cloud and successfully ran the e2e example. Next we will build a Kafka cluster: we create an image from that cloud server, then create seven more servers from the image. The servers again use pay-as-you-go billing to keep costs down, and 1 GB of RAM with a single-core CPU is enough. (The previous article used 4 GB because all the node containers ran on one machine; for this Kafka cluster, 1 GB per server suffices.)
The Kafka production deployment consists of three ordering (orderer) services, four Kafka nodes, three ZooKeeper nodes, and four peer nodes, spread across eight servers in total. Each server hosts the following services:
Server IP        Services
47.104.26.119    zookeeper0, kafka0, orderer0.example.com
47.105.127.95    zookeeper1, kafka1, orderer1.example.com
47.105.226.90    zookeeper2, kafka2, orderer2.example.com
47.105.136.5     kafka3
47.105.36.78     peer0.org1.example.com
47.105.40.77     peer1.org1.example.com
47.104.147.94    peer0.org2.example.com
47.105.106.73    peer1.org2.example.com
1. Modify the extra_hosts settings in the configuration files
The configuration files for ZooKeeper, Kafka, the orderers, and each peer are already written and can be downloaded here (extraction code: 5aun).
In each file, update the IP mappings in the extra_hosts variable to match your actual cloud servers' IPs.
First, the layout of the configuration files:
.
├── orderer0
│   ├── clear_docker.sh                 // stops all containers on the orderer0.example.com server
│   ├── configtx.yaml                   // used to generate the genesis block and the channel file
│   ├── crypto-config.yaml              // used to generate the certificate files
│   ├── docker-compose-kafka.yaml       // kafka0 configuration
│   ├── docker-compose-orderer.yaml     // orderer0 ordering-node configuration
│   ├── docker-compose-zookeeper.yaml   // zookeeper0 configuration
│   ├── generate.sh                     // script that generates the certificates, genesis block, and channel file
│   └── scpCryptoAndGenesisToOther.sh   // distributes the certificates, genesis block, and channel file to the other servers (each server really only needs its own files, but we distribute everything for convenience)
├── orderer1
│   ├── clear_docker.sh                 // stops all containers on the orderer1.example.com server
│   ├── docker-compose-kafka.yaml       // kafka1 configuration
│   ├── docker-compose-orderer.yaml     // orderer1 ordering-node configuration
│   └── docker-compose-zookeeper.yaml   // zookeeper1 configuration
├── orderer2
│   ├── clear_docker.sh                 // stops all containers on the orderer2.example.com server
│   ├── docker-compose-kafka.yaml       // kafka2 configuration
│   ├── docker-compose-orderer.yaml     // orderer2 ordering-node configuration
│   └── docker-compose-zookeeper.yaml   // zookeeper2 configuration
├── orderer3
│   ├── clear_docker.sh                 // stops all containers on the orderer3.example.com server
│   └── docker-compose-kafka.yaml       // kafka3 configuration
├── peer0.org1
│   ├── chaincode                       // chaincode: the example02 chaincode from the e2e sample
│   ├── clear_docker.sh                 // stops all containers on the peer0.org1 server
│   ├── docker-compose-peer.yaml        // peer0.org1 node configuration
│   └── scpChannelToOtherPeers.sh       // distributes the channel file mychannel.block to the other peers
├── peer0.org2
│   ├── chaincode                       // chaincode: the example02 chaincode from the e2e sample
│   ├── clear_docker.sh                 // stops all containers on the peer0.org2 server
│   └── docker-compose-peer.yaml        // peer0.org2 node configuration
├── peer1.org1
│   ├── chaincode                       // chaincode: the example02 chaincode from the e2e sample
│   ├── clear_docker.sh                 // stops all containers on the peer1.org1 server
│   └── docker-compose-peer.yaml        // peer1.org1 node configuration
├── peer1.org2
│   ├── chaincode                       // chaincode: the example02 chaincode from the e2e sample
│   ├── clear_docker.sh                 // stops all containers on the peer1.org2 server
│   └── docker-compose-peer.yaml        // peer1.org2 node configuration
└── scpConfigFileToYun.sh               // distributes the configuration files to the cloud servers
The orderer0 folder contains crypto-config.yaml for generating the certificates, and configtx.yaml for generating the genesis block and the channel file.
In the zookeeper, kafka, orderer, peer, and cli configuration files above, update every extra_hosts IP mapping. For example, zookeeper0's configuration file:
version: '2'
services:
  zookeeper0:
    container_name: zookeeper0
    hostname: zookeeper0
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - "zookeeper0:47.104.26.119"
      - "zookeeper1:47.105.127.95"
      - "zookeeper2:47.105.226.90"
      - "kafka0:47.104.26.119"
      - "kafka1:47.105.127.95"
      - "kafka2:47.105.226.90"
      - "kafka3:47.105.136.5"
kafka0's configuration file:
version: '2'
services:
  kafka0:
    container_name: kafka0
    hostname: kafka0
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - "zookeeper0:47.104.26.119"
      - "zookeeper1:47.105.127.95"
      - "zookeeper2:47.105.226.90"
      - "kafka0:47.104.26.119"
      - "kafka1:47.105.127.95"
      - "kafka2:47.105.226.90"
      - "kafka3:47.105.136.5"
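The two message-size limits above are 100 MB expressed in bytes, matching the comment in the file; note they are kept equal so replicas can always fetch a maximum-size message. A quick sanity check of the arithmetic:

```shell
# 100 MB in bytes, the value used for KAFKA_MESSAGE_MAX_BYTES
# and KAFKA_REPLICA_FETCH_MAX_BYTES above.
echo $((100 * 1024 * 1024))   # prints 104857600
```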
The orderer0 configuration file:
ports:
  - 7050:7050
extra_hosts:
  - "kafka0:47.104.26.119"
  - "kafka1:47.105.127.95"
  - "kafka2:47.105.226.90"
  - "kafka3:47.105.136.5"
In the peer0.org1 folder, the peer service's mapping:
extra_hosts:
  - "orderer0.example.com:47.104.26.119"
  - "orderer1.example.com:47.105.127.95"
  - "orderer2.example.com:47.105.226.90"
And the cli service's mapping in the peer0.org1 folder:
- "orderer0.example.com:47.104.26.119"
- "orderer1.example.com:47.105.127.95"
- "orderer2.example.com:47.105.226.90"
- "peer0.org1.example.com:47.105.36.78"
- "peer1.org1.example.com:47.105.40.77"
- "peer0.org2.example.com:47.104.147.94"
- "peer1.org2.example.com:47.105.106.73"
Modify the remaining zookeeper, kafka, orderer, peer, and cli configuration files in the same way.
2. Update the IP addresses in the scripts
Modify scpConfigFileToYun.sh,
modify scpCryptoAndGenesisToOther.sh,
and modify scpChannelToOtherPeers.sh.
These three scripts exist to make distributing the files more convenient.
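The actual script contents are in the download above; as an illustration, scpConfigFileToYun.sh has to copy each node's folder to exactly its own server. The folder-to-IP pairing below is the one used in this article, and the echo guard is added here so the sketch prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch of scpConfigFileToYun.sh: copy each node's config folder to its server.
# RUN=echo prints the commands; set RUN= (empty) to actually execute them.
RUN=${RUN:-echo}
DEST=/root/go/src/github.com/hyperledger/fabric/kafkapeer

for entry in \
  orderer0:47.104.26.119   orderer1:47.105.127.95 \
  orderer2:47.105.226.90   orderer3:47.105.136.5 \
  peer0.org1:47.105.36.78  peer1.org1:47.105.40.77 \
  peer0.org2:47.104.147.94 peer1.org2:47.105.106.73
do
  dir=${entry%%:*}   # folder name, e.g. peer0.org1
  ip=${entry#*:}     # that node's server IP
  $RUN scp -r "./$dir/." "root@$ip:$DEST/"
done
```

Getting this folder-to-IP pairing wrong is exactly the certificate mismatch described under "Problems encountered" at the end of this article.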
3. Distribute the configuration files
Log in to each cloud server and create the kafkapeer directory:
$ cd /root/go/src/github.com/hyperledger/fabric
$ mkdir kafkapeer
Add the IP-to-hostname mappings to /etc/hosts:
$ vi /etc/hosts
47.104.26.119 zookeeper0
47.105.127.95 zookeeper1
47.105.226.90 zookeeper2
47.104.26.119 kafka0
47.105.127.95 kafka1
47.105.226.90 kafka2
47.105.136.5 kafka3
47.104.26.119 orderer0.example.com
47.105.127.95 orderer1.example.com
47.105.226.90 orderer2.example.com
47.105.36.78 peer0.org1.example.com
47.105.40.77 peer1.org1.example.com
47.104.147.94 peer0.org2.example.com
47.105.106.73 peer1.org2.example.com
On the Mac, run the scpConfigFileToYun.sh script to distribute the configuration files:
$ sh scpConfigFileToYun.sh
4. Generate the certificates and the genesis block
Log in to the orderer0.example.com server:
$ ssh root@47.104.26.119
$ cd go/src/github.com/hyperledger/fabric/kafkapeer/
Locate the configtxgen and cryptogen tools:
$ find / -name configtxgen
Copy the configtxgen and cryptogen tools into the current directory:
$ cp -r /root/go/src/github.com/hyperledger/fabric/release/linux-amd64/bin ./
Run the generate.sh script:
$ ./generate.sh
Once it completes, the directory contains the certificate files, the genesis block, and the channel file:
root@iZm5e1vrchk33e0j4ou30sZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# ls channel-artifacts/
genesis.block mychannel.tx
root@iZm5e1vrchk33e0j4ou30sZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# ls crypto-config
ordererOrganizations peerOrganizations
root@iZm5e1vrchk33e0j4ou30sZ:~/go/src/github.com/hyperledger/fabric/kafkapeer#
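generate.sh ships with the config download; conceptually it runs cryptogen and configtxgen along the following lines. The profile names TwoOrgsOrdererGenesis and TwoOrgsChannel are assumptions borrowed from the e2e sample's configtx.yaml; check the configtx.yaml in the download for the actual names. The echo guard makes the sketch print the commands rather than execute them:

```shell
#!/bin/sh
# Conceptual sketch of generate.sh. RUN=echo prints the commands;
# set RUN= (empty) to actually execute them.
RUN=${RUN:-echo}

# 1. Generate MSP certificates for all organizations into ./crypto-config/
$RUN ./bin/cryptogen generate --config=./crypto-config.yaml

# 2. Create the orderer genesis block
$RUN ./bin/configtxgen -profile TwoOrgsOrdererGenesis \
  -outputBlock ./channel-artifacts/genesis.block

# 3. Create the channel creation transaction for mychannel
$RUN ./bin/configtxgen -profile TwoOrgsChannel \
  -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel
```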
Run the scpCryptoAndGenesisToOther.sh script to distribute the certificates and genesis block to the other cloud servers:
$ ./scpCryptoAndGenesisToOther.sh
5. Start ZooKeeper, Kafka, and the orderers
5.1 Start ZooKeeper
Log in to the 47.104.26.119 server:
$ ssh root@47.104.26.119
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer
Start zookeeper0:
$ docker-compose -f docker-compose-zookeeper.yaml up -d
Log in to the 47.105.127.95 server:
$ ssh root@47.105.127.95
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer
Start zookeeper1:
$ docker-compose -f docker-compose-zookeeper.yaml up -d
Log in to the 47.105.226.90 server:
$ ssh root@47.105.226.90
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer
Start zookeeper2:
$ docker-compose -f docker-compose-zookeeper.yaml up -d
5.2 Start Kafka
On the 47.104.26.119 server, under /root/go/src/github.com/hyperledger/fabric/kafkapeer,
start kafka0:
$ docker-compose -f docker-compose-kafka.yaml up -d
On the 47.105.127.95 server, under the same path, start kafka1:
$ docker-compose -f docker-compose-kafka.yaml up -d
On the 47.105.226.90 server, under the same path, start kafka2:
$ docker-compose -f docker-compose-kafka.yaml up -d
On the 47.105.136.5 server, under the same path, start kafka3:
$ docker-compose -f docker-compose-kafka.yaml up -d
After starting, check with docker ps -a; containers in the Up state started successfully:
root@iZm5e1vrchk33e0j4ou30sZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
35dd8ed6574b hyperledger/fabric-kafka "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:9092->9092/tcp, 9093/tcp kafka0
c94a1273f518 hyperledger/fabric-zookeeper "/docker-entrypoint.…" 8 minutes ago Up 8 minutes 0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp zookeeper0
5.3 Start the orderers
On the 47.104.26.119 server, under /root/go/src/github.com/hyperledger/fabric/kafkapeer,
start orderer0:
$ docker-compose -f docker-compose-orderer.yaml up -d
On the 47.105.127.95 server, under the same path, start orderer1:
$ docker-compose -f docker-compose-orderer.yaml up -d
On the 47.105.226.90 server, under the same path, start orderer2:
$ docker-compose -f docker-compose-orderer.yaml up -d
After starting, check with docker ps -a; containers in the Up state started successfully:
root@iZm5e1vrchk33e0j4ou30sZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd0a249c1307 hyperledger/fabric-orderer "orderer" About a minute ago Up About a minute 0.0.0.0:7050->7050/tcp orderer0.example.com
35dd8ed6574b hyperledger/fabric-kafka "/docker-entrypoint.…" 9 minutes ago Up 9 minutes 0.0.0.0:9092->9092/tcp, 9093/tcp kafka0
c94a1273f518 hyperledger/fabric-zookeeper "/docker-entrypoint.…" 15 minutes ago Up 15 minutes 0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp zookeeper0
Check orderer0's logs:
$ docker logs fd0a249c1307
Seeing "It's a connect message" in the output means the Kafka cluster is up and running:
2019-01-12 03:10:50.537 UTC [orderer/consensus/kafka] processMessagesToBlocks -> DEBU 0ff [channel: testchainid] Successfully unmarshalled consumed message, offset is 1. Inspecting type...
2019-01-12 03:10:50.537 UTC [orderer/consensus/kafka] processConnect -> DEBU 100 [channel: testchainid] It's a connect message - ignoring
2019-01-12 03:10:52.351 UTC [orderer/consensus/kafka] processMessagesToBlocks -> DEBU 101 [channel: testchainid] Successfully unmarshalled consumed message, offset is 2. Inspecting type...
2019-01-12 03:10:52.352 UTC [orderer/consensus/kafka] processConnect -> DEBU 102 [channel: testchainid] It's a connect message - ignoring
6. Start the peers, create the channel, and install the chaincode
6.1 Start the peers
Log in to the peer0.org1 node:
$ ssh root@47.105.36.78
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer/
$ docker-compose -f docker-compose-peer.yaml up -d
Check the container status with docker ps -a:
root@iZm5e9pn15ifo7y2it7dgbZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03f56b8e2916 hyperledger/fabric-tools "/bin/bash" 16 seconds ago Up 14 seconds cli
d451b975bd18 hyperledger/fabric-peer "peer node start" 16 seconds ago Up 13 seconds 0.0.0.0:7051-7053->7051-7053/tcp peer0.org1.example.com
6.2 Create the channel
Enter the cli container:
$ docker exec -it cli bash
Create the channel:
$ ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
$ peer channel create -o orderer0.example.com:7050 -c mychannel -f ./channel-artifacts/mychannel.tx --tls --cafile $ORDERER_CA
List the files; mychannel.block is the channel file just created:
root@03f56b8e2916:/opt/gopath/src/github.com/hyperledger/fabric/peer# ls
channel-artifacts crypto mychannel.block
Join the peer0.org1 node to the channel:
$ peer channel join -b mychannel.block
On success it prints: Successfully submitted proposal to join channel
$ peer channel list
This lists the channels the peer has joined; it now includes mychannel:
Channels peers has joined:
mychannel
Copy mychannel.block out of the container into the /root/go/src/github.com/hyperledger/fabric/kafkapeer directory. First exit the cli container:
$ exit
$ docker cp 03f56b8e2916:/opt/gopath/src/github.com/hyperledger/fabric/peer/mychannel.block ./
Here 03f56b8e2916 is the cli container's CONTAINER ID.
Verify that mychannel.block was copied out of the cli container, then run the scpChannelToOtherPeers.sh script to distribute mychannel.block to the other peers so they can join the channel later:
$ sh scpChannelToOtherPeers.sh
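scpChannelToOtherPeers.sh only has to push this one file to the three remaining peer servers. A sketch, using the peer addresses from this article (the echo guard prints the commands rather than running them):

```shell
#!/bin/sh
# Sketch of scpChannelToOtherPeers.sh: send mychannel.block to the other peers.
# RUN=echo prints the commands; set RUN= (empty) to actually execute them.
RUN=${RUN:-echo}
DEST=/root/go/src/github.com/hyperledger/fabric/kafkapeer

# peer1.org1, peer0.org2, peer1.org2
for ip in 47.105.40.77 47.104.147.94 47.105.106.73; do
  $RUN scp ./mychannel.block "root@$ip:$DEST/"
done
```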
6.3 Install and instantiate the chaincode
Enter the cli container:
$ docker exec -it cli bash
Install the chaincode:
$ peer chaincode install -n mycc -p github.com/hyperledger/fabric/kafkapeer/chaincode/go/example02/cmd/ -v 1.0
A response of Installed remotely response:<status:200 payload:"OK" > means the chaincode was installed successfully.
Instantiate the chaincode by invoking init with initial values a=100 and b=100, and specify an endorsement policy under which a member of either Org1MSP or Org2MSP may endorse:
$ peer chaincode instantiate -o orderer0.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","100"]}' -P "OR ('Org1MSP.member','Org2MSP.member')"
Instantiation takes a while. Once it finishes, query a's balance:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 100.
Query b's balance:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 100, so initialization succeeded.
6.4 Making transactions
6.4.1 On the peer0.org1 node, make a transaction transferring 20 from a to b:
$ peer chaincode invoke --tls --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["invoke","a","b","20"]}'
A response of Chaincode invoke successful. result: status:200 means the transaction succeeded. Check a's and b's balances:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 80.
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 120, confirming the transaction succeeded.
6.4.2 Transact on the peer1.org1 node
Log in to peer1.org1:
$ ssh root@47.105.40.77
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer/
The current directory contains the mychannel.block file distributed earlier from peer0.org1.
Start the peer1.org1 containers:
$ docker-compose -f docker-compose-peer.yaml up -d
Check the container status:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ae5098d3d3a3 hyperledger/fabric-tools "/bin/bash" 2 hours ago Up 2 hours cli
c94443b1f6a5 hyperledger/fabric-peer "peer node start" 2 hours ago Up 2 hours 0.0.0.0:7051-7053->7051-7053/tcp peer1.org1.example.com
Copy mychannel.block into the cli container:
$ docker cp ./mychannel.block ae5098d3d3a3:/opt/gopath/src/github.com/hyperledger/fabric/peer
Here ae5098d3d3a3 is the cli container's CONTAINER ID.
Enter the cli container:
$ docker exec -it cli bash
Check that mychannel.block was copied in, then join the peer1.org1 node to the channel:
$ peer channel join -b mychannel.block
A response of Successfully submitted proposal to join channel means the peer joined the channel.
Install the chaincode:
$ peer chaincode install -n mycc -p github.com/hyperledger/fabric/kafkapeer/chaincode/go/example02/cmd/ -v 1.0
A response of Installed remotely response:<status:200 payload:"OK" > means the chaincode was installed successfully.
Query a's and b's balances (the first query takes a while because the ledger data has to sync):
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 80.
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 120.
Make a transaction: transfer 50 from b to a:
$ ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
$ peer chaincode invoke --tls --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["invoke","b","a","50"]}'
A response of Chaincode invoke successful. result: status:200 means the transaction succeeded. Check a's and b's balances:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 130.
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 70, confirming the transaction succeeded.
6.4.3 Transact on the peer0.org2 node
$ ssh root@47.104.147.94
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer/
Again, the current directory contains the mychannel.block file distributed earlier from peer0.org1:
root@iZm5e9pn15ifo7y2it7dgcZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# ls
chaincode channel-artifacts clear_docker.sh crypto-config docker-compose-peer.yaml mychannel.block
Start the peer0.org2 containers:
$ docker-compose -f docker-compose-peer.yaml up -d
Check the container status:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3fa14d2d5a07 hyperledger/fabric-tools "/bin/bash" 17 minutes ago Up 17 minutes cli
aa41cf591a16 hyperledger/fabric-peer "peer node start" 17 minutes ago Up 17 minutes 0.0.0.0:7051-7053->7051-7053/tcp peer0.org2.example.com
Copy mychannel.block into the cli container:
$ docker cp ./mychannel.block 3fa14d2d5a07:/opt/gopath/src/github.com/hyperledger/fabric/peer
Here 3fa14d2d5a07 is the cli container's CONTAINER ID.
Enter the cli container:
$ docker exec -it cli bash
Check that mychannel.block was copied in, then join the peer0.org2 node to the channel:
$ peer channel join -b mychannel.block
A response of Successfully submitted proposal to join channel means the peer joined the channel.
Install the chaincode:
$ peer chaincode install -n mycc -p github.com/hyperledger/fabric/kafkapeer/chaincode/go/example02/cmd/ -v 1.0
A response of Installed remotely response:<status:200 payload:"OK" > means the chaincode was installed successfully.
Query a's and b's balances (the first query takes a while because the ledger data has to sync):
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 130.
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 70.
Make a transaction: transfer 20 from b to a:
$ ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
$ peer chaincode invoke --tls --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["invoke","b","a","20"]}'
A response of Chaincode invoke successful. result: status:200 means the transaction succeeded.
6.4.4 Query on the peer1.org2 node
$ ssh root@47.105.106.73
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer/
Again, the current directory contains the mychannel.block file distributed earlier from peer0.org1.
Start the peer1.org2 containers:
$ docker-compose -f docker-compose-peer.yaml up -d
Check the container status:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a47aaf499de3 hyperledger/fabric-tools "/bin/bash" 31 minutes ago Up 31 minutes cli
d3f0b0db0495 hyperledger/fabric-peer "peer node start" 31 minutes ago Up 31 minutes 0.0.0.0:7051-7053->7051-7053/tcp peer1.org2.example.com
Copy mychannel.block into the cli container:
$ docker cp ./mychannel.block a47aaf499de3:/opt/gopath/src/github.com/hyperledger/fabric/peer
Here a47aaf499de3 is the cli container's CONTAINER ID.
Enter the cli container:
$ docker exec -it cli bash
Check that mychannel.block was copied in, then join the peer1.org2 node to the channel:
$ peer channel join -b mychannel.block
A response of Successfully submitted proposal to join channel means the peer joined the channel.
Install the chaincode:
$ peer chaincode install -n mycc -p github.com/hyperledger/fabric/kafkapeer/chaincode/go/example02/cmd/ -v 1.0
A response of Installed remotely response:<status:200 payload:"OK" > means the chaincode was installed successfully.
Check a's and b's balances:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 150.
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 50.
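The balances reported on each peer are consistent with replaying the three invokes against the initial state, which is easy to verify:

```shell
# Replay this article's transactions: init a=100 b=100,
# then a->b 20, b->a 50, b->a 20.
a=100; b=100
a=$((a - 20)); b=$((b + 20))   # 6.4.1 on peer0.org1 -> a=80,  b=120
a=$((a + 50)); b=$((b - 50))   # 6.4.2 on peer1.org1 -> a=130, b=70
a=$((a + 20)); b=$((b - 20))   # 6.4.3 on peer0.org2 -> a=150, b=50
echo "a=$a b=$b"               # prints a=150 b=50
```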
7. Stopping and restarting the cluster
7.1 Stop the cluster
Log in to each cloud server and run the ./clear_docker.sh script.
On the peer servers, also delete the chaincode dev images:
$ docker images
Find the dev image's IMAGE ID, then remove it:
$ docker rmi 093555752a3c
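clear_docker.sh ships with the config download; functionally it needs to do something like the following sketch. The dev-* image filter targets the chaincode container images that the peer builds; guarding against an empty list avoids calling docker rm with no arguments:

```shell
#!/bin/sh
# Sketch of clear_docker.sh: remove every container on this server, then
# (on peer servers) remove the dev-* chaincode images the peer built.
containers=$(docker ps -aq 2>/dev/null)
if [ -n "$containers" ]; then
  docker rm -f $containers
fi

images=$(docker images -q "dev-*" 2>/dev/null)
if [ -n "$images" ]; then
  docker rmi $images
fi
```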
7.2 Restart the cluster
- On the Mac, run the script that distributes the configuration files:
./scpConfigFileToYun.sh
- Log in to the orderer0 server and run the script that generates the certificates and genesis block:
./generate.sh
- Distribute the files to the other servers:
./scpCryptoAndGenesisToOther.sh
- Start zookeeper, kafka, and the orderers, in that order.
- Start the peers, create the channel, join it, and send the channel file (the .block file) to the other peers:
./scpChannelToOtherPeers.sh
Problems encountered:
1. Joining the second node to the channel failed with:
grpc: addrConn.createTransport failed to connect to {peer1.org1.example.com:7051 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for peer0.org2.example.com, peer0, not peer1.org1.example.com". Reconnecting...
This means the certificate is wrong: the node is using another node's certificate. The root cause was that although we had agreed which server plays which node's role, we distributed the configuration files to the wrong servers: scpConfigFileToYun.sh sent peer1.org1's configuration to the peer0.org2 server. When writing scpConfigFileToYun.sh, make sure each node is paired with exactly its own IP.
References:
HyperLedger Fabric 1.2 Kafka production-environment deployment (11.1