Table of Contents
- 1. Objective:
- 2. Background:
- 3. Proposed Methodology:
- 4. Proposal:
- 4.1. Performance of different blockchains for identical tasks.
- 4.2. Impact of Layer 1 attributes (block size, consensus algorithms, node count) on single blockchain performance.
- 4.3. Creating a Simulated Layer 1 Blockchain:
- 4.4. Developing Workloads for Full Load Simulation:
- 4.5. Integrating and Benchmarking Layer 2 Solutions:
- 5. Creating a Simulated Layer 1 Blockchain
- 6. Development of Client Application Workloads:
- 7. Integrating and Benchmarking Layer 2 Solutions:
- 7.1. State Channels
- 7.2. Side Chains
- 7.3. Plasma
- 7.4. Rollups
- 7.5. Chaincodes in Hyperledger Fabric
- 7.5.1. Runs in a secured Docker container isolated from the endorsing peer process.
- 7.5.2. Initializes and manages the ledger state through transactions submitted by applications.
- 7.5.3. Can be invoked to update or query the ledger in a proposal transaction, given the appropriate permission.
- 7.5.4. May invoke another chaincode, either in the same channel or in different channels, to query its state.
- Ordering / Gateway service nodes should not be a bottleneck (assign enough CPUs)
- Important!! How to simulate sidechain solutions of Layer 2?
- We build a network to simulate a public Layer 1 blockchain like Ethereum (no Merkle trie required, just a world-state DB), and use it as the standard against which the Layer 2 solutions are measured.
- Write benchmarks that put that Layer 1 blockchain under full load (specifically, simulating database updates and queries). ("Full load" means the CPU limits its throughput.)
- Introduce different Layer 2 solutions into that Layer 1 network. (Lightning Network: how to adjust the benchmark?) (sidechain*: the important part; better to introduce multiple designs with trade-offs) …
Full of trade-offs…
Don't compare different Layer 2 solutions against each other; there are two purposes:
- Find the best use case for each of those Layer 2 solutions
- Can those Layer 2 solutions scale well, compared to Layer 1?
1. Objective:
To benchmark different Layer 2 solutions in blockchain technology, addressing the gap in existing research which primarily focuses on Layer 1 benchmarking.
2. Background:
2.1. Existing benchmarks (e.g., Blockbench, Gromit, BBSF) focus on different Layer 1 blockchains, or on some Layer 1 attributes.
2.2. Layer 2 solutions, like the Lightning Network, have varied use cases and efficiencies, presenting a challenge for direct benchmarking.
3. Proposed Methodology:
3.1. Simulated Layer 1 Environment Construction:
Utilize Hyperledger Fabric to create a controlled, simulated Layer 1 blockchain environment. Ensure uniform computational power across all nodes to control variance. Focus on a clean, straightforward design for ease of implementation and reproducibility.
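One common way to enforce uniform computational power across nodes is to cap each container's resources in the Compose file. The fragment below is a minimal, hypothetical sketch: the service name, image tag, and the specific 2-CPU / 2G limits are illustrative placeholders, not values taken from the actual setup.

```yaml
services:
  peer1.org01.chains:            # hypothetical peer service name
    image: hyperledger/fabric-peer:2.5
    deploy:
      resources:
        limits:
          cpus: "2.0"            # identical CPU cap on every peer
          memory: 2G             # identical memory cap on every peer
```

Applying the same `limits` block to every peer (and generous limits to orderer/gateway nodes, so they are not the bottleneck) keeps throughput CPU-bound on the peers by construction.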
3.2. Development of Client Application Workloads:
Design workloads to maximize the capacity of each peer, focusing on CPU resource limitations rather than bandwidth or system design. Consider simulating a database and contrasting it with a Raft-based distributed system.
3.3. Integration of Layer 2 Solutions:
Implement various Layer 2 solutions within the simulated network using chaincode, without altering the core of Fabric. Leverage Hyperledger Fabric's channel mechanisms for solutions like the Lightning Network or sidechains.
4. Proposal:
I am proposing a survey focused on blockchain Layer 2 solutions. My research, including an examination of about 10 papers such as the Bitcoin and Ethereum whitepapers (Layer 1 introductions), has led me to explore various Layer 2 solutions. Although existing surveys provided a foundation, I identified a gap in the benchmarking of different Layer 2 solutions.
Notably, existing benchmarks like Blockbench and BBSF (Blockchain Benchmarking Standardized Framework) from the NUS Singapore research group primarily focus on:
4.1. Performance of different blockchains for identical tasks.
4.2. Impact of Layer 1 attributes (block size, consensus algorithms, node count) on single blockchain performance.
However, there's a lack of benchmarking for different Layer 2 solutions. This gap is acknowledged in the recent BBSF paper, suggesting the need for specific workloads to understand Layer 2 performance better.
My proposal aims to fill this gap. The key challenge is the varying use cases and effectiveness of Layer 2 solutions across different blockchains. For instance, the Lightning Network excels in frequent transactions but is less beneficial for single transactions. Thus, direct comparison across all Layer 2 solutions is impractical.
To address this, I propose a method similar to the benchmarking approach used for Hyperledger Fabric Layer 1:
4.3. Creating a Simulated Layer 1 Blockchain:
We'll construct a network mimicking a public blockchain system (like Ethereum) with uniformly distributed computational power. This will serve as the basis of our benchmarking framework. Hyperledger Fabric's endorsement policy enables an environment that simulates smart contract contexts, ensuring a clean, structured setup for subsequent steps.
4.4. Developing Workloads for Full Load Simulation:
The next step involves creating client application workloads to fully utilize each peer's computational power, ensuring the throughput is CPU-bound. This could involve simulating a database and contrasting it with a distributed system using a relational database.
4.5. Integrating and Benchmarking Layer 2 Solutions:
The final and most challenging step is integrating various Layer 2 solutions into our network. Our focus will be on assessing the benefits and optimal use cases of each solution. For instance, leveraging Fabric's channel mechanisms can facilitate implementing Lightning Network channels and sidechains, allowing for network behavior modification through chaincode.
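To make the channel idea concrete, here is a minimal pure-Go sketch of the net-settlement principle behind a Lightning-style payment channel: funds are locked once, any number of transfers happen off-chain, and only the final balances are written back to Layer 1. This is illustrative only; the `Channel` type and its methods are made up for this sketch, and it omits signatures, dispute handling, and the actual Fabric chaincode API.

```go
package main

import (
	"errors"
	"fmt"
)

// Channel models a Lightning-style payment channel: balances are updated
// off-chain, and only the final state is settled on the Layer 1 ledger.
type Channel struct {
	balances map[string]int64 // latest off-chain state
	updates  int              // number of off-chain updates applied
}

// OpenChannel locks the initial deposits (one on-chain transaction).
func OpenChannel(deposits map[string]int64) *Channel {
	b := make(map[string]int64, len(deposits))
	for party, amount := range deposits {
		b[party] = amount
	}
	return &Channel{balances: b}
}

// Transfer applies one off-chain payment; no Layer 1 transaction is needed.
func (c *Channel) Transfer(from, to string, amount int64) error {
	if c.balances[from] < amount {
		return errors.New("insufficient channel balance")
	}
	c.balances[from] -= amount
	c.balances[to] += amount
	c.updates++
	return nil
}

// Settle returns the final balances to be written to Layer 1 in a single
// closing transaction, regardless of how many off-chain updates occurred.
func (c *Channel) Settle() map[string]int64 {
	return c.balances
}

func main() {
	ch := OpenChannel(map[string]int64{"alice": 100, "bob": 0})
	for i := 0; i < 10; i++ {
		ch.Transfer("alice", "bob", 5) // 10 off-chain payments
	}
	fmt.Println(ch.Settle()) // only open + close hit Layer 1
}
```

In the Fabric setting, the open and settle steps would be chaincode invocations on the main channel, while the per-transfer bookkeeping stays client-side; this is what makes the workload adjustment for Lightning-style benchmarks non-trivial.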
In summary, this survey aims to provide a comprehensive benchmark of Layer 2 solutions, addressing the current gap in research. I believe this will significantly contribute to our understanding of blockchain scalability and efficiency.
5. Creating a Simulated Layer 1 Blockchain
5.1. Design Layer 1 Network
5.1.1. Peers
- CORE_PEER_ADDRESS:
  - Peers (24): 6001 - 6024: 24 organizations
  This is the primary communication endpoint of the peer, which includes handling endorsements and communicating with other peers and applications. This port is critical for the core operations of the peer, such as processing transactions and interacting with the blockchain ledger.
- CORE_PEER_CHAINCODEADDRESS:
- CORE_OPERATIONS_LISTENADDRESS:
5.2. Certificates
5.2.1. Transport Layer Security Certificates
TLS (Transport Layer Security) certificates are used for securing communications between nodes in the network. They enable encryption and authentication of data transmitted over the network, ensuring that the data cannot be intercepted or tampered with during transmission. During a TLS handshake, a peer will validate the certificate of the orderer (or vice versa) against a trusted root CA certificate. If the certificate is signed by a known and trusted CA, the connection is established; otherwise, it is rejected.
5.2.2. Membership Service Provider Certificates
MSP certificates are used for identity management within the network. They identify the organization to which a peer or orderer belongs, and can include various attributes like the organization's name, the node's role, etc.
- Joining a Channel
When a peer joins a channel, it presents its MSP certificate to the channel's orderer. This proves the peer's identity and ensures it belongs to an authorized organization within the network.
- Endorsing Transactions
During the transaction endorsement process, the endorsing peers sign the transaction proposal with their MSP private key.
5.3. Chaincode Container
Two scenarios in which a chaincode container is needed:
5.3.1. When a peer is requested to perform endorsement.
5.3.2. When querying a chaincode function that does not trigger any state change.
5.4. Fight with the issue*
Never take things for granted, and never overtrust your understanding! If an issue is hardly ever met by other people, there is a high chance the misunderstanding is on your side, causing you to miss something that others would not miss. Don't rely on GPT too much: it is easy to mislead the LLM, and GPT won't work in some non-popular cases, such as unknown bugs in less popular projects (lack of data).
Good thing: the approach I took to track down the issue was correct.
5.4.1. Issue source: two core.yaml files, one used by docker compose and another used for creating the channel.
- 2023-12-25 => 2023-12-28:
Two days using a single core.yaml for the channel, which left the peers without the required specification, so I had to explicitly set the environment variables in compose.yaml. (That is why, no matter how I changed the core.yaml inside config/, nothing changed in the peers' logs.)
- Route to tracking down the issue:
- 2023-12-25: could not find docker inside the node. At this step I found the right approach: mount docker in compose.yaml, inside the volumes section of each peer. With that, the network seemed to be set up and all peers joined.
- 2023-12-26 & 27: the tricky error: after installing the chaincode on each peer, invoking the chaincode failed with `transport: error while dialing: dial tcp: lookup peer1.layer1.chains on 192.168.50.1:53: no such host`. I tried many ways to resolve it, since it is an unusual error: the peer should never be asking the router (192.168.50.1:53) to resolve an address that lives inside the docker network. Initially I thought it would be easy, since finding the problem is usually not hard, but I spent two days trying to locate the issue and failed. QAQ
- 2023-12-28: After two days on the issue, I had walked through nearly every website Google listed for the keyword, and it turned out to be a very obscure issue that almost no one had met before. So I decided to follow the official tutorial in the fabric samples, which I had run successfully on my machine before: even if I could not locate the issue, I could reproduce the official setup step by step and reach the same state. Following it, I found I had missed a configuration file when using docker-compose to start the peers and orderer: there are supposed to be two core.yaml files, one for the peer's running docker container and one for the channel. When I first met that part, I took it for granted that there should be only one config file, just like elsewhere. I should have noticed this when I saw that no matter how I changed the core.yaml inside the config folder, the environment variables in the peers' logs stayed the same.
5.5. Chaincode Execution
The `--peerAddresses` flag of `peer chaincode invoke` specifies which peers the transaction proposal is sent to for endorsement. The specified peers execute the chaincode, simulate the transaction, and endorse it, if the result fits the policy, by signing the transaction's read-write set. The responses from these peers (including their endorsements) are sent back to the client application. Once the client application has collected enough endorsements to satisfy the endorsement policy, it packages them with the transaction and submits the package to the ordering service, which then distributes the block to all peers.
# Orderer Logs
2024-01-01 21:08:25.345 UTC 0097 WARN [orderer.common.broadcast] Handle -> Error reading from 172.25.0.1:33196: rpc error: code = Canceled desc = context canceled
2024-01-01 21:08:25.345 UTC 0098 INFO [comm.grpc.server] 1 -> streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=172.25.0.1:33196 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=3.256042ms
2024-01-01 21:08:27.345 UTC 0099 INFO [orderer.consensus.etcdraft] propose -> Created block [20], there are 0 blocks in flight channel=chains node=1
2024-01-01 21:08:27.346 UTC 009a INFO [orderer.consensus.etcdraft] writeBlock -> Writing block [20] (Raft index: 22) to ledger channel=chains node=1

# Endorser Logs
2024-01-01 21:08:25.310 UTC 01ab INFO [lifecycle] QueryChaincodeDefinition -> Successfully queried chaincode name 'basic' with definition {sequence: 1, endorsement info: (version: '1.0', plugin: 'escc', init required: false), validation info: (plugin: 'vscc', policy: '12202f4368616e6e656c2f4170706c69636174696f6e2f456e646f7273656d656e74'), collections: ()}
2024-01-01 21:08:25.310 UTC 01ac INFO [endorser] callChaincode -> finished chaincode: _lifecycle duration: 0ms channel=chains txID=a0b9619e
2024-01-01 21:08:25.310 UTC 01ad INFO [comm.grpc.server] 1 -> unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.25.0.1:44980 grpc.code=OK grpc.call_duration=1.133072ms
2024-01-01 21:08:25.342 UTC 01ae INFO [endorser] callChaincode -> finished chaincode: basic duration: 0ms channel=chains txID=b8f56a10
2024-01-01 21:08:25.343 UTC 01af INFO [comm.grpc.server] 1 -> unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.25.0.1:44992 grpc.code=OK grpc.call_duration=807.404µs
2024-01-01 21:08:27.349 UTC 01b0 INFO [gossip.privdata] StoreBlock -> Received block [20] from buffer channel=chains
2024-01-01 21:08:27.359 UTC 01b1 INFO [committer.txvalidator] Validate -> [chains] Validated block [20] in 9ms
2024-01-01 21:08:27.363 UTC 01b2 INFO [kvledger] commit -> [chains] Committed block [20] with 1 transaction(s) in 3ms (state_validation=0ms block_and_pvtdata_commit=2ms state_commit=0ms) commitHash=[ec88b85c7fc55708e86107f2873b043171eb6ca4f7decbd3093bc50352b132c3]

# Non-endorser Logs
2024-01-01 21:08:27.349 UTC 0105 INFO [gossip.privdata] StoreBlock -> Received block [20] from buffer channel=chains
2024-01-01 21:08:27.359 UTC 0106 INFO [committer.txvalidator] Validate -> [chains] Validated block [20] in 9ms
2024-01-01 21:08:27.362 UTC 0107 INFO [kvledger] commit -> [chains] Committed block [20] with 1 transaction(s) in 3ms (state_validation=0ms block_and_pvtdata_commit=2ms state_commit=0ms) commitHash=[ec88b85c7fc55708e86107f2873b043171eb6ca4f7decbd3093bc50352b132c3]
5.6. Multiple Orderers
Multiple peers submit different transactions, and these transactions are sent to different orderers; eventually, both transactions should be included in the ledger.
6. Development of Client Application Workloads:
Throughput and Latency
6.1. MinIO Single-Drive Multiple-Node Distributed System
6.1.1. The issue from 1.5 to 1.7: very small; the address should be a URL (e.g., node01.minio instead of minionode01)
6.2. Vegeta HTTP load testing
6.2.1. MinIO Cluster
- 8 Nodes, 1 Target, File Size 1MB
Requests      [total, rate, throughput]         300, 5.02, 5.01
Duration      [total, attack, wait]             59.832s, 59.8s, 31.605ms
Latencies     [min, mean, 50, 90, 95, 99, max]  21.712ms, 35.103ms, 33.181ms, 45.276ms, 51.883ms, 72.573ms, 90.342ms
Bytes In      [total, mean]                     0, 0.00
Bytes Out     [total, mean]                     365588550, 1218628.50
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:300
Error Set:

Requests      [total, rate, throughput]         600, 10.02, 10.01
Duration      [total, attack, wait]             59.91s, 59.899s, 11.121ms
Latencies     [min, mean, 50, 90, 95, 99, max]  4.131ms, 21.297ms, 17.485ms, 39.066ms, 50.355ms, 66.105ms, 98.794ms
Bytes In      [total, mean]                     0, 0.00
Bytes Out     [total, mean]                     365588550, 609314.25
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:300 204:300
- 8 Nodes, 8 Target (Random), File Size 1MB
- 1
Requests      [total, rate, throughput]         4800, 80.02, 77.45
Duration      [total, attack, wait]             1m2s, 59.987s, 1.992s
Latencies     [min, mean, 50, 90, 95, 99, max]  3.175ms, 269.927ms, 47.623ms, 871.455ms, 1.405s, 2.779s, 3.531s
Bytes In      [total, mean]                     0, 0.00
Bytes Out     [total, mean]                     2966843997, 618092.50
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:2400 204:2400
- 2
Requests      [total, rate, throughput]         4800, 80.02, 79.99
Duration      [total, attack, wait]             1m0s, 59.987s, 20.905ms
Latencies     [min, mean, 50, 90, 95, 99, max]  3.234ms, 373.089ms, 43.067ms, 1.595s, 2.626s, 3.728s, 4.303s
Bytes In      [total, mean]                     0, 0.00
Bytes Out     [total, mean]                     2966843997, 618092.50
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:2400 204:2400
- 3
Requests      [total, rate, throughput]         6400, 80.01, 79.11
Duration      [total, attack, wait]             1m21s, 1m20s, 912.569ms
Latencies     [min, mean, 50, 90, 95, 99, max]  3.613ms, 112.867ms, 26.921ms, 60.739ms, 1.027s, 1.761s, 2.529s
Bytes In      [total, mean]                     0, 0.00
Bytes Out     [total, mean]                     3955671625, 618073.69
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:3200 204:3200
- 4
Requests      [total, rate, throughput]         6400, 80.01, 78.05
Duration      [total, attack, wait]             1m22s, 1m20s, 2.01s
Latencies     [min, mean, 50, 90, 95, 99, max]  3.536ms, 194.003ms, 39.952ms, 484.887ms, 1.265s, 2.382s, 3.243s
Bytes In      [total, mean]                     0, 0.00
Bytes Out     [total, mean]                     3955671625, 618073.69
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:3200 204:3200
CONTAINER ID   NAME           CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
c4c72b8a63cd   node04.minio   6.32%   1.957GiB / 62.54GiB   3.13%   9.06GB / 5.25GB   6.5MB / 6.73GB   530
91db7525ba86   node07.minio   6.18%   2.387GiB / 62.54GiB   3.82%   8.89GB / 4.87GB   7.24MB / 6.55GB  537
0b34af846fb1   node01.minio   7.15%   3.522GiB / 62.54GiB   5.63%   11.6GB / 10.7GB   5.76MB / 6.69GB  529
39683fd06bd4   node08.minio   6.57%   1.834GiB / 62.54GiB   2.93%   9.26GB / 5.74GB   7.1MB / 6.56GB   531
5650ee5371c9   node05.minio   7.96%   1.885GiB / 62.54GiB   3.01%   9.26GB / 5.69GB   4.32MB / 6.55GB  529
bffd0f1abc91   node06.minio   5.93%   1.89GiB / 62.54GiB    3.02%   8.93GB / 4.94GB   7.17MB / 6.69GB  528
ae5bd5bfc27e   node02.minio   6.59%   1.92GiB / 62.54GiB    3.07%   9.01GB / 5.13GB   6.6MB / 6.55GB   530
768a776579ce   node03.minio   6.96%   2.42GiB / 62.54GiB    3.87%   9.35GB / 5.89GB   6.66MB / 7.5GB   531
How can I know whether a benchmark result exceeds the limit of the current server?
6.2.2. Fabric Cluster
Write the client side of the Fabric application, using Vegeta as the load-generation package.
- 1. Chaincode Design:
Focus on creating simple and efficient chaincode for basic key-value operations: CRUD (create, read, update, and delete).
- 2. Benchmarking with Vegeta:
Integrate the Vegeta load testing tool into the client application to simulate high-volume, concurrent requests. Adapt Vegeta to work with the Fabric network, using it to generate a realistic load and test the system's performance.
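The CRUD chaincode above reduces to key-value semantics against the world state. A minimal pure-Go sketch of those semantics follows; the `WorldState` type and its methods are illustrative stand-ins for what the chaincode would do via the Fabric stub's `GetState`/`PutState`/`DelState`, not the real chaincode API.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// WorldState is an in-memory stand-in for the peer's key-value state DB.
type WorldState struct {
	mu    sync.RWMutex
	state map[string]string
}

func NewWorldState() *WorldState {
	return &WorldState{state: make(map[string]string)}
}

// Create fails if the key already exists, mirroring the usual
// "asset already exists" check in the Fabric samples.
func (w *WorldState) Create(key, value string) error {
	w.mu.Lock()
	defer w.mu.Unlock()
	if _, ok := w.state[key]; ok {
		return fmt.Errorf("asset %q already exists", key)
	}
	w.state[key] = value
	return nil
}

// Read returns the stored value or a not-found error.
func (w *WorldState) Read(key string) (string, error) {
	w.mu.RLock()
	defer w.mu.RUnlock()
	v, ok := w.state[key]
	if !ok {
		return "", errors.New("asset not found")
	}
	return v, nil
}

// Update overwrites an existing key; it fails if the key is absent.
func (w *WorldState) Update(key, value string) error {
	w.mu.Lock()
	defer w.mu.Unlock()
	if _, ok := w.state[key]; !ok {
		return errors.New("asset not found")
	}
	w.state[key] = value
	return nil
}

// Delete removes an existing key; it fails if the key is absent.
func (w *WorldState) Delete(key string) error {
	w.mu.Lock()
	defer w.mu.Unlock()
	if _, ok := w.state[key]; !ok {
		return errors.New("asset not found")
	}
	delete(w.state, key)
	return nil
}

func main() {
	ws := NewWorldState()
	ws.Create("asset1", "blue")
	v, _ := ws.Read("asset1")
	fmt.Println(v)
}
```

Keeping the contract this thin is deliberate: the benchmark should measure the platform (endorsement, ordering, commit), not the contract logic.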
6.2.3. Benchains
- Clean the repository: learn how to delete the old files and make everything clean (paths)
benchains/
  networks/   -> contains different distributed network systems
    fabric/
      bin/
      certs/
      config/
      scripts/
    etcd/
      config/
      scripts/
  chaincode/  -> contains the chaincode of fabric (benchains)
    sample/
      src/
  benchmark/  -> contains all the client side for running the benchmarks (e.g., vegeta)
6.2.4. ETCD Cluster
6.3. Enabling Vegeta HTTP Benchmarking
Fabric Gateway client: able to run the official gateway tutorial client app -> vegeta. Compare the difference between invoking the chaincode (init) via shell script and via gateway services.
6.3.1. shell script init (without gateway services)
- shell -> orderer -> peer -> orderer -> peer
- peer1.org01.chains (peerEndpoint)
2024-02-07 14:06:56.846 UTC 019a INFO [lifecycle] QueryChaincodeDefinition -> Successfully queried chaincode name 'basic' with definition {sequence: 1, endorsement info: (version: '1.0', plugin: 'escc', init required: false), validation info: (plugin: 'vscc', policy: '12202f4368616e6e656c2f4170706c69636174696f6e2f456e646f7273656d656e74'), collections: ()}
2024-02-07 14:06:56.847 UTC 019b INFO [endorser] callChaincode -> finished chaincode: _lifecycle duration: 0ms channel=chains txID=a887a5df
2024-02-07 14:06:56.847 UTC 019c INFO [comm.grpc.server] 1 -> unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.25.0.1:43428 grpc.code=OK grpc.call_duration=914.104µs
2024-02-07 14:06:56.887 UTC 019d INFO [endorser] callChaincode -> finished chaincode: basic duration: 1ms channel=chains txID=90f01866
2024-02-07 14:06:56.887 UTC 019e INFO [comm.grpc.server] 1 -> unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.25.0.1:43458 grpc.code=OK grpc.call_duration=1.401806ms
2024-02-07 14:06:58.896 UTC 019f INFO [gossip.privdata] StoreBlock -> Received block [18] from buffer channel=chains
2024-02-07 14:06:58.898 UTC 01a0 INFO [committer.txvalidator] Validate -> [chains] Validated block [18] in 2ms
2024-02-07 14:06:58.902 UTC 01a1 INFO [kvledger] commit -> [chains] Committed block [18] with 1 transaction(s) in 3ms (state_validation=0ms block_and_pvtdata_commit=2ms state_commit=0ms) commitHash=[10023d662f159854de8808d0e6f2d67d5ff3d9dd7ffeb5aada1dc9fcf642ffeb]
- peer1.org02.chains
2024-02-07 14:06:56.887 UTC 0187 INFO [endorser] callChaincode -> finished chaincode: basic duration: 1ms channel=chains txID=90f01866
2024-02-07 14:06:56.887 UTC 0188 INFO [comm.grpc.server] 1 -> unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.25.0.1:55136 grpc.code=OK grpc.call_duration=1.56059ms
2024-02-07 14:06:58.896 UTC 0189 INFO [gossip.privdata] StoreBlock -> Received block [18] from buffer channel=chains
2024-02-07 14:06:58.898 UTC 018a INFO [committer.txvalidator] Validate -> [chains] Validated block [18] in 2ms
2024-02-07 14:06:58.902 UTC 018b INFO [kvledger] commit -> [chains] Committed block [18] with 1 transaction(s) in 3ms (state_validation=0ms block_and_pvtdata_commit=2ms state_commit=0ms) commitHash=[10023d662f159854de8808d0e6f2d67d5ff3d9dd7ffeb5aada1dc9fcf642ffeb]
- orderer1.ord01.chains
2024-02-07 14:06:56.889 UTC 00a3 WARN [orderer.common.broadcast] Handle -> Error reading from 172.25.0.1:41312: rpc error: code = Canceled desc = context canceled
2024-02-07 14:06:56.889 UTC 00a4 INFO [comm.grpc.server] 1 -> streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=172.25.0.1:41312 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=3.692723ms
2024-02-07 14:06:58.889 UTC 00a5 INFO [orderer.consensus.etcdraft] propose -> Created block [18], there are 0 blocks in flight channel=chains node=1
2024-02-07 14:06:58.893 UTC 00a6 INFO [orderer.consensus.etcdraft] writeBlock -> Writing block [18] (Raft index: 22) to ledger channel=chains node=1
6.3.2. Gateway init
- gateway -> peer -> gateway -> peer -> gateway -> orderer -> peer
- peer1.org01.chains (peerEndpoint)
2024-02-07 14:22:10.597 UTC 019a INFO [gateway] lookupEndorser -> Attempting to connect to endorser endpoint=peer1.org03.chains:6003
2024-02-07 14:22:10.598 UTC 019b INFO [gateway] connectChannelPeers -> Added peer to registry: peer1.org05.chains:6005
2024-02-07 14:22:10.598 UTC 019c INFO [gateway] connectChannelPeers -> Added peer to registry: peer1.org04.chains:6004
2024-02-07 14:22:10.598 UTC 019d INFO [gateway] connectChannelPeers -> Added peer to registry: peer1.org08.chains:6008
2024-02-07 14:22:10.598 UTC 019e INFO [gateway] connectChannelPeers -> Added peer to registry: peer1.org03.chains:6003
2024-02-07 14:22:10.598 UTC 019f INFO [gateway] connectChannelPeers -> Added peer to registry: peer1.org02.chains:6002
2024-02-07 14:22:10.598 UTC 01a0 INFO [gateway] connectChannelPeers -> Added peer to registry: peer1.org06.chains:6006
2024-02-07 14:22:10.598 UTC 01a1 INFO [gateway] connectChannelPeers -> Added peer to registry: peer1.org07.chains:6007
2024-02-07 14:22:10.600 UTC 01a2 INFO [endorser] callChaincode -> finished chaincode: basic duration: 0ms channel=chains txID=8090e54c
2024-02-07 14:22:10.604 UTC 01a3 INFO [comm.grpc.server] 1 -> unary call completed grpc.service=gateway.Gateway grpc.method=Endorse grpc.request_deadline=2024-02-07T14:22:25.595Z grpc.peer_address=172.25.0.1:53974 grpc.code=OK grpc.call_duration=6.529708ms
2024-02-07 14:22:10.605 UTC 01a4 INFO [gateway] orderers -> Added orderer to registry address=orderer1.ord03.chains:7003
2024-02-07 14:22:10.605 UTC 01a5 INFO [gateway] orderers -> Added orderer to registry address=orderer1.ord01.chains:7001
2024-02-07 14:22:10.605 UTC 01a6 INFO [gateway] orderers -> Added orderer to registry address=orderer1.ord02.chains:7002
2024-02-07 14:22:10.605 UTC 01a7 INFO [gateway] Submit -> Sending transaction to orderer txID=8090e54c7426abf89bb497dbf54e84d7e2ece491d59e06b3304f48a920a6fda1 endpoint=orderer1.ord03.chains:7003
2024-02-07 14:22:10.609 UTC 01a8 INFO [comm.grpc.server] 1 -> unary call completed grpc.service=gateway.Gateway grpc.method=Submit grpc.request_deadline=2024-02-07T14:22:15.605Z grpc.peer_address=172.25.0.1:53974 grpc.code=OK grpc.call_duration=4.75349ms
2024-02-07 14:22:12.616 UTC 01a9 INFO [gossip.privdata] StoreBlock -> Received block [18] from buffer channel=chains
2024-02-07 14:22:12.620 UTC 01aa INFO [committer.txvalidator] Validate -> [chains] Validated block [18] in 3ms
2024-02-07 14:22:12.623 UTC 01ab INFO [kvledger] commit -> [chains] Committed block [18] with 1 transaction(s) in 3ms (state_validation=0ms block_and_pvtdata_commit=1ms state_commit=0ms) commitHash=[10023d662f159854de8808d0e6f2d67d5ff3d9dd7ffeb5aada1dc9fcf642ffeb]
2024-02-07 14:22:12.623 UTC 01ac INFO [comm.grpc.server] 1 -> unary call completed grpc.service=gateway.Gateway grpc.method=CommitStatus grpc.request_deadline=2024-02-07T14:23:10.61Z grpc.peer_address=172.25.0.1:53974 grpc.code=OK grpc.call_duration=2.013723966s
- peer1.org02.chains
2024-02-07 14:22:10.602 UTC 018e INFO [endorser] callChaincode -> finished chaincode: basic duration: 0ms channel=chains txID=8090e54c
2024-02-07 14:22:10.603 UTC 018f INFO [comm.grpc.server] 1 -> unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.request_deadline=2024-02-07T14:22:25.596Z grpc.peer_address=172.25.0.12:60608 grpc.peer_subject="CN=peer1.org01.chains,L=San Francisco,ST=California,C=US" grpc.code=OK grpc.call_duration=1.327677ms
2024-02-07 14:22:12.616 UTC 0190 INFO [gossip.privdata] StoreBlock -> Received block [18] from buffer channel=chains
2024-02-07 14:22:12.620 UTC 0191 INFO [committer.txvalidator] Validate -> [chains] Validated block [18] in 3ms
2024-02-07 14:22:12.623 UTC 0192 INFO [kvledger] commit -> [chains] Committed block [18] with 1 transaction(s) in 3ms (state_validation=0ms block_and_pvtdata_commit=2ms state_commit=0ms) commitHash=[10023d662f159854de8808d0e6f2d67d5ff3d9dd7ffeb5aada1dc9fcf642ffeb]
- orderer1.ord01.chains
2024-02-07 14:22:12.610 UTC 00a3 INFO [orderer.consensus.etcdraft] propose -> Created block [18], there are 0 blocks in flight channel=chains node=1
2024-02-07 14:22:12.613 UTC 00a4 INFO [orderer.consensus.etcdraft] writeBlock -> Writing block [18] (Raft index: 22) to ledger channel=chains node=1
- orderer1.ord03.chains (Leader)
2024-02-07 14:22:10.609 UTC 0043 INFO [orderer.consensus.etcdraft] forwardToLeader -> Forwarding transaction to the leader 1 channel=chains node=3
2024-02-07 14:22:10.609 UTC 0044 WARN [orderer.common.broadcast] Handle -> Error reading from 172.25.0.12:52622: rpc error: code = Canceled desc = context canceled
2024-02-07 14:22:10.609 UTC 0045 INFO [comm.grpc.server] 1 -> streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.request_deadline=2024-02-07T14:22:15.605Z grpc.peer_address=172.25.0.12:52622 grpc.peer_subject="CN=peer1.org01.chains,L=San Francisco,ST=California,C=US" error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=1.003141ms
2024-02-07 14:22:12.614 UTC 0046 INFO [orderer.consensus.etcdraft] writeBlock -> Writing block [18] (Raft index: 22) to ledger channel=chains node=3
6.4. Building Benchmark on Fabric Network
6.4.1. IO Heavy
Current blockchain systems rely on key-value storage to persist blockchain transactions and states. Each storage system may perform differently under different workloads. This workload is designed to evaluate the IO performance by invoking a contract that performs a large number of random writes and random reads to the contract’s states. The I/O bandwidth can be estimated via the observed transaction latency. Achieving consistent I/O bandwidth across various network designs, including both Layer 1 and potential Layer 2 (e.g., sidechain) configurations, will indicate accurate and reliable benchmark results.
6.4.2. Do Nothing
This contract accepts a transaction as input and simply returns. In other words, it involves a minimal number of operations at the execution layer and data model layer, so the overall performance is mainly determined by the consensus layer.
6.4.3. CPU Heavy
Although this benchmark, which focuses on computationally intensive tasks, is less critical for our immediate goals, it still provides valuable data on the execution layer's efficiency. Given that Fabric executes code in Docker containers, similar to a native Linux environment, this benchmark helps us understand the computational overhead introduced by the platform.
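As a sketch of the kind of computation such a benchmark might run, repeated hashing is a common CPU-bound kernel. The function name and round count below are made up for illustration; the real workload would live inside the chaincode.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// HashChain hashes the seed once, then re-hashes the digest for the
// remaining rounds: a simple CPU-bound stand-in for heavy contract logic.
// rounds must be at least 1.
func HashChain(seed []byte, rounds int) string {
	digest := sha256.Sum256(seed)
	for i := 1; i < rounds; i++ {
		digest = sha256.Sum256(digest[:])
	}
	return hex.EncodeToString(digest[:])
}

func main() {
	start := time.Now()
	out := HashChain([]byte("benchmark"), 1_000_000)
	fmt.Printf("digest=%s... elapsed=%s\n", out[:16], time.Since(start))
}
```

Comparing the elapsed time of the same kernel inside the chaincode container versus on the bare host gives a rough measure of the platform's computational overhead.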
6.5. Modular Chaincode Development
For each benchmark type, we will develop and deploy separate chaincodes to facilitate specific tests. This modular approach allows us to isolate and measure the impact of different operations on network performance accurately. Looking ahead to exploring Layer 2 enhancements, we plan to introduce additional chaincodes or policies that simulate Layer 2 features without altering the original benchmark chaincodes. These Layer 2 simulations will be integrated and executed through the gateway, enabling a seamless transition in our testing framework from Layer 1 to Layer 2 evaluations.
6.5.1. Asset Transfer Sample Benchmark
Add a web server layer to the current gateway application, and use vegeta to benchmark that system, using the `net/http` package of Go. All PUT requests operate on duplicated assets, meaning records are deleted first and then re-added under the same name.
- rate = 1
Requests      [total, rate, throughput]         80, 1.01, 0.99
Duration      [total, attack, wait]             1m21s, 1m19s, 2.032s
Latencies     [min, mean, 50, 90, 95, 99, max]  26.466ms, 1.147s, 1.036s, 2.037s, 2.042s, 2.056s, 2.061s
Bytes In      [total, mean]                     3434, 42.92
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:80
- rate = 8
Requests      [total, rate, throughput]         640, 8.01, 7.92
Duration      [total, attack, wait]             1m20s, 1m20s, 316.66ms
Latencies     [min, mean, 50, 90, 95, 99, max]  38.122ms, 1.086s, 1.069s, 1.918s, 2.039s, 2.064s, 2.098s
Bytes In      [total, mean]                     27257, 42.59
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           99.22%
Status Codes  [code:count]                      0:5 200:635
- rate = 32
Requests      [total, rate, throughput]         2560, 32.01, 30.44
Duration      [total, attack, wait]             1m20s, 1m20s, 427.164ms
Latencies     [min, mean, 50, 90, 95, 99, max]  70.226ms, 1.145s, 1.145s, 1.965s, 2.068s, 2.154s, 2.282s
Bytes In      [total, mean]                     105029, 41.03
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           95.59%
Status Codes  [code:count]                      0:113 200:2447
- rate = 36
Requests      [total, rate, throughput]         2880, 36.01, 33.51
Duration      [total, attack, wait]             1m22s, 1m20s, 1.949s
Latencies     [min, mean, 50, 90, 95, 99, max]  18.13ms, 1.204s, 1.203s, 2.021s, 2.125s, 2.204s, 2.269s
Bytes In      [total, mean]                     117819, 40.91
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           95.31%
Status Codes  [code:count]                      0:135 200:2745
- rate = 40
Requests      [total, rate, throughput]         3200, 40.01, 36.61
Duration      [total, attack, wait]             1m20s, 1m20s, 231.815ms
Latencies     [min, mean, 50, 90, 95, 99, max]  18.863ms, 1.18s, 1.183s, 2.002s, 2.097s, 2.178s, 2.288s
Bytes In      [total, mean]                     126017, 39.38
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           91.75%
Status Codes  [code:count]                      0:264 200:2936
- Peer CPU%: 32%
NAME                    CPU %    MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O     PIDS
peer1.org01.chains      96.63%   272MiB / 62.54GiB     0.42%   150MB / 175MB     0B / 38.5MB   38
peer1.org05.chains      30.66%   262.8MiB / 62.54GiB   0.41%   50.7MB / 22.7MB   0B / 38.5MB   36
peer1.org07.chains      34.66%   247.3MiB / 62.54GiB   0.39%   50.7MB / 22.6MB   0B / 38.5MB   37
peer1.org08.chains      30.12%   243.4MiB / 62.54GiB   0.38%   50.7MB / 22.7MB   0B / 38.5MB   37
peer1.org03.chains      30.43%   263MiB / 62.54GiB     0.41%   50.9MB / 22.9MB   0B / 38.5MB   36
peer1.org04.chains      31.05%   253.5MiB / 62.54GiB   0.40%   50.7MB / 22.5MB   0B / 38.5MB   36
peer1.org02.chains      30.65%   240.6MiB / 62.54GiB   0.38%   50.8MB / 22.8MB   0B / 38.5MB   37
peer1.org06.chains      32.81%   242.1MiB / 62.54GiB   0.38%   50.9MB / 22.8MB   0B / 38.5MB   37
orderer1.ord03.chains   3.17%    161.4MiB / 62.54GiB   0.25%   41MB / 96.6MB     0B / 59.2MB   37
orderer1.ord01.chains   1.40%    162.7MiB / 62.54GiB   0.25%   34.5MB / 147MB    0B / 59.2MB   37
orderer1.ord02.chains   2.13%    155.6MiB / 62.54GiB   0.24%   41.2MB / 69MB     0B / 59.2MB   35
- rate = 44
Requests      [total, rate, throughput]         3520, 44.01, 36.27
Duration      [total, attack, wait]             1m22s, 1m20s, 1.908s
Latencies     [min, mean, 50, 90, 95, 99, max]  13.139ms, 1.287s, 1.305s, 2.098s, 2.174s, 2.326s, 2.606s
Bytes In      [total, mean]                     127476, 36.21
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           84.38%
Status Codes  [code:count]                      0:550  200:2970
- Peer CPU%: 34%
NAME                   CPU %    MEM USAGE / LIMIT    MEM %   NET I/O          BLOCK I/O       PIDS
peer1.org01.chains     119.74%  370.1MiB / 62.54GiB  0.58%   1.34GB / 1.57GB  0B / 308MB      43
peer1.org05.chains     34.26%   308.9MiB / 62.54GiB  0.48%   382MB / 132MB    0B / 308MB      38
peer1.org07.chains     35.37%   297.5MiB / 62.54GiB  0.46%   382MB / 132MB    0B / 308MB      39
peer1.org08.chains     34.29%   282.8MiB / 62.54GiB  0.44%   381MB / 131MB    0B / 308MB      38
peer1.org03.chains     32.85%   294MiB / 62.54GiB    0.46%   383MB / 133MB    0B / 308MB      37
peer1.org04.chains     35.04%   299.4MiB / 62.54GiB  0.47%   381MB / 131MB    16.4kB / 308MB  37
peer1.org02.chains     34.81%   311.3MiB / 62.54GiB  0.49%   382MB / 132MB    0B / 308MB      37
peer1.org06.chains     34.28%   311.7MiB / 62.54GiB  0.49%   383MB / 133MB    0B / 308MB      38
orderer1.ord03.chains  1.32%    195.8MiB / 62.54GiB  0.31%   386MB / 938MB    0B / 570MB      38
orderer1.ord01.chains  3.53%    148.5MiB / 62.54GiB  0.23%   302MB / 1.41GB   0B / 570MB      38
orderer1.ord02.chains  1.60%    176.1MiB / 62.54GiB  0.27%   386MB / 662MB    0B / 570MB      38
- rate = 48
Requests      [total, rate, throughput]         3840, 48.01, 36.11
Duration      [total, attack, wait]             1m22s, 1m20s, 2.098s
Latencies     [min, mean, 50, 90, 95, 99, max]  15.254ms, 1.477s, 1.534s, 2.286s, 2.41s, 2.565s, 2.771s
Bytes In      [total, mean]                     127218, 33.13
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           77.19%
Status Codes  [code:count]                      0:876  200:2964
- Peer CPU%: 40%
NAME                   CPU %    MEM USAGE / LIMIT    MEM %   NET I/O          BLOCK I/O       PIDS
peer1.org01.chains     182.08%  367MiB / 62.54GiB    0.57%   778MB / 900MB    0B / 183MB      43
peer1.org05.chains     37.68%   294.7MiB / 62.54GiB  0.46%   230MB / 81.1MB   0B / 183MB      37
peer1.org07.chains     39.66%   302.2MiB / 62.54GiB  0.47%   231MB / 81.5MB   0B / 183MB      39
peer1.org08.chains     37.01%   272.5MiB / 62.54GiB  0.43%   230MB / 80MB     0B / 183MB      37
peer1.org03.chains     37.11%   289.6MiB / 62.54GiB  0.45%   231MB / 81.2MB   0B / 183MB      37
peer1.org04.chains     36.08%   292.8MiB / 62.54GiB  0.46%   230MB / 80.6MB   16.4kB / 183MB  37
peer1.org02.chains     37.22%   306.9MiB / 62.54GiB  0.48%   230MB / 81.2MB   0B / 183MB      37
peer1.org06.chains     37.48%   296.8MiB / 62.54GiB  0.46%   231MB / 81.4MB   0B / 183MB      38
orderer1.ord03.chains  2.33%    172.5MiB / 62.54GiB  0.27%   229MB / 556MB    0B / 340MB      38
orderer1.ord01.chains  2.46%    154.6MiB / 62.54GiB  0.24%   180MB / 836MB    0B / 340MB      37
orderer1.ord02.chains  1.91%    151.8MiB / 62.54GiB  0.24%   230MB / 393MB    0B / 340MB      38
- Summary
At a request rate of 1 transaction per second (TPS), the system exhibited exceptional reliability, with a 100% success rate and minimal latency. As we incrementally increased the load, the system maintained a high level of performance up to 36 TPS, where it achieved a throughput of approximately 33.51 TPS with a 95.31% success rate. This rate represents the system's optimal operating threshold under the current testing conditions.
Beyond this rate, notably at 40 TPS and higher, we observed a marked decline in performance. The success rate dropped significantly to 91.75% at 40 TPS and continued to decrease, reaching 77.19% at 48 TPS. Concurrently, latency increased, and peer CPU utilization surged to 40% at 48 TPS, indicating stress on the system's resources.
The unsuccessful responses stem from the gateway handler's current logic, which deletes existing data before adding new data with the same identifier, introducing potential conflicts. These conflicts become more pronounced under higher loads, accounting for the observed increase in unsuccessful requests.
Based on our findings, the system's current maximum throughput is around 36 TPS, at which point it operates with satisfactory reliability and efficiency. Beyond this threshold, the success rate and performance metrics indicate the system's limits under the present configuration and workload handling strategy.
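The delete-then-recreate conflict described above can be sketched with a toy versioned world state, loosely modeled on Fabric's read-set (MVCC) validation. The class and flow here are illustrative assumptions, not Fabric's actual API: two clients read the same key version, so only the first commit validates.

```python
class WorldState:
    """Toy versioned key-value store with MVCC-style commit validation."""

    def __init__(self):
        self.data = {}    # key -> (value, version)
        self.height = 0   # commit counter used as the version stamp

    def read(self, key):
        return self.data.get(key, (None, 0))

    def commit(self, read_set, write_set):
        # Every key in the read set must still be at the version observed
        # at endorsement time; otherwise the transaction is invalidated.
        for key, version in read_set.items():
            if self.read(key)[1] != version:
                return False
        self.height += 1
        for key, value in write_set.items():
            self.data[key] = (value, self.height)
        return True

ws = WorldState()
ws.commit({}, {"asset1": "v0"})

# Two concurrent delete-then-add requests for the same identifier both
# read asset1 at the same version before either has committed.
_, ver = ws.read("asset1")
assert ws.commit({"asset1": ver}, {"asset1": "a"}) is True   # first wins
assert ws.commit({"asset1": ver}, {"asset1": "b"}) is False  # stale read
```

Under higher request rates more transactions endorse against the same stale version, which matches the growing share of failed requests in the runs above.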
- Adjust the Layer 1 Parameters
6.5.2. 2. Do-Nothing Benchmark
6.5.3. 3. IO-Heavy Benchmark (Most Crucial One)
6.5.4. 4. CPU-Heavy Benchmark
7. 3. Integrating and Benchmarking Layer 2 Solutions:
To benchmark various Layer 2 scaling solutions effectively, we propose a novel approach: implementing these solutions on a uniform platform, a simple Layer 1 blockchain network built with Hyperledger Fabric. This strategy allows direct comparison of different Layer 2 approaches, such as state channels, sidechains, and rollups, under identical conditions, providing meaningful insights into which solutions are best suited to specific scenarios.
Our methodology leverages the controllability and flexibility of private blockchains like Fabric, utilizing features such as policies, chaincode, and channels to construct simplified Layer 2 solutions. This approach does not aim to replicate the complexity of real-world implementations, but rather to capture the essence of each Layer 2 type and assess its impact on performance and scalability. Recognizing that Layer 2 solutions often trade some degree of security for scalability, our benchmarks focus on the performance gains without implementing the intricate security mechanisms typical of these solutions. By simplifying the design, we aim to isolate and evaluate the core principles that contribute to scalability.
This practical framework assesses the scalability and efficiency of different Layer 2 approaches in a controlled environment, providing data to inform the selection of scaling solutions for various use cases.
7.1. State Channels
7.1.1. Lightning Network Notes
State channels are a Layer 2 scaling technique that allows parties to conduct transactions off-chain, with only the initial and final states being recorded on the blockchain. This approach significantly reduces the blockchain's workload by bypassing the need to process and store every transaction on-chain. Instead, participants engage in transactions through a private, off-chain channel, using cryptographic proofs to ensure security and trustlessness. The blockchain's role is limited to confirming the opening of the channel and finalizing the settled outcome after the parties close the channel. This ensures that, despite transactions occurring off-chain, the process remains secure and tamper-proof, relying on the blockchain only for dispute resolution and final settlement.
Although state channels offer considerable scaling potential in ideal conditions of high transaction frequency between specific parties, their effectiveness may be reduced in more common, less predictable transaction environments.
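The open/transact/close lifecycle can be sketched as a minimal simulation (names and the flat list standing in for the chain are illustrative; real channels exchange mutually signed states, which the nonce merely stands in for):

```python
chain = []  # on-chain log: one entry per recorded transaction

def open_channel(a, b, deposit_a, deposit_b):
    # Opening the channel is the first of only two on-chain events.
    chain.append(("open", a, b, deposit_a, deposit_b))
    return {"balances": {a: deposit_a, b: deposit_b}, "nonce": 0}

def pay(channel, sender, receiver, amount):
    # Off-chain update: in a real channel both parties sign the new
    # state; the nonce here stands in for "latest agreed state".
    assert channel["balances"][sender] >= amount
    channel["balances"][sender] -= amount
    channel["balances"][receiver] += amount
    channel["nonce"] += 1

def close_channel(channel):
    # Only the final settled balances are submitted on-chain.
    chain.append(("close", dict(channel["balances"]), channel["nonce"]))

ch = open_channel("alice", "bob", 100, 100)
for _ in range(50):            # 50 payments, zero on-chain transactions
    pay(ch, "alice", "bob", 1)
close_channel(ch)

assert len(chain) == 2         # the chain only saw open + close
assert ch["balances"] == {"alice": 50, "bob": 150}
```

The benchmark-relevant property is visible directly: 50 payments produced only 2 on-chain writes, so throughput scales with off-chain activity between the same pair of parties.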
7.2. Side Chains
Side chains are independent blockchains that operate alongside a main chain, designed to enhance scalability and flexibility. By maintaining their own ledgers, they allow for the execution of transactions and smart contracts independently of the main chain, alleviating congestion and reducing transaction costs. This setup enables specialized applications, such as high-speed transactions or privacy-centric operations, that the main chain might not efficiently support due to its focus on security and decentralization.
The interoperability between the main chain and side chains is facilitated through mechanisms like two-way pegs, allowing assets to be securely moved between chains. This connection ensures that side chains benefit from the foundational security of the main chain while offering additional features and efficiencies. Essentially, side chains serve as an extension of the main chain, enabling a broader range of applications without compromising the main chain's integrity.
In this ecosystem, the main chain provides a secure and trustworthy base layer, while side chains introduce innovation and customization. This dual-layer approach balances security with flexibility, enabling the blockchain network to support a wider variety of applications and use cases. Side chains thus play a critical role in scaling blockchain technology, making it more adaptable and accessible for diverse needs.
Unlike other L2 solutions such as state channels, Plasma, and rollups, which aim to directly scale the main chain's operations while inheriting its security properties as much as possible, side chains are more of an "extension" to the L1 main chain: they expand the ecosystem's capabilities without altering the main chain's fundamental operation.
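The two-way peg mechanism can be sketched as a lock-mint / burn-unlock flow. The dictionaries and the `bridge_locked` escrow below are simplifying assumptions, not a real peg protocol (which would involve SPV proofs or a federation):

```python
main_chain = {"alice": 100}   # main-chain balances
side_chain = {}               # side-chain balances
bridge_locked = 0             # escrow held by the peg contract

def peg_in(user, amount):
    """Lock assets on the main chain, mint equivalents on the side chain."""
    global bridge_locked
    assert main_chain.get(user, 0) >= amount
    main_chain[user] -= amount
    bridge_locked += amount
    side_chain[user] = side_chain.get(user, 0) + amount

def peg_out(user, amount):
    """Burn side-chain assets, then unlock the escrowed main-chain assets."""
    global bridge_locked
    assert side_chain.get(user, 0) >= amount
    side_chain[user] -= amount
    bridge_locked -= amount
    main_chain[user] = main_chain.get(user, 0) + amount

peg_in("alice", 60)
peg_out("alice", 10)
assert main_chain["alice"] == 50 and side_chain["alice"] == 50
assert bridge_locked == 50   # escrow always backs the side-chain supply
```

The invariant worth benchmarking around is the last assertion: side-chain supply is always fully backed by main-chain escrow, while all transaction execution between peg events happens off the main chain.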
7.3. Plasma
In Plasma, specific applications lead to the creation of child chains. Child chains may adopt a security model optimized for higher transaction throughput and lower costs, which might be considered "weaker" than the main chain's security. However, the periodic commitment of the child chain's state to the main chain, together with users' ability to challenge fraudulent activity through fraud proofs, ensures that the overall system retains a strong security posture.
The application of a child chain should define:
7.3.1. The scope of transactions and state (ledger) that the child chain will handle
7.3.2. The rules and consensus mechanism for the child chain
When processing moves to the child chain, the corresponding state on the main chain is "locked". The child chain periodically generates a cryptographic proof (such as a Merkle root) representing the current state of its ledger. This proof summarizes the entire state of the child chain and is then recorded in the main chain's ledger. However, this transaction does not alter the main chain's state the way direct transactions between parties on the main chain would; instead, it serves as an anchor, or checkpoint, for the state of the child chain.
For example, when a user wishes to withdraw assets from the child chain to the main chain, the process relies on the latest state commitment. The user proves their ownership of the assets based on the child chain's state, and the assets are then unlocked or transferred on the main chain.
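The checkpoint-and-exit flow can be sketched with a small Merkle tree: the main chain stores only the root, and a withdrawal presents a membership proof against the latest checkpoint. The leaf encoding and helper names are illustrative assumptions:

```python
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def build_levels(leaves):
    # Bottom-up Merkle tree; an odd node is paired with itself.
    levels = [[H(l) for l in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1][:]
        if len(cur) % 2:
            cur.append(cur[-1])
        levels.append([H(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def proof_for(levels, index):
    # Collect the sibling hash at each level, noting which side we are on.
    proof = []
    for level in levels[:-1]:
        lvl = level[:]
        if len(lvl) % 2:
            lvl.append(lvl[-1])
        proof.append((lvl[index ^ 1], index % 2))  # (sibling, am_i_right_child)
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = H(leaf)
    for sibling, node_is_right in proof:
        node = H(sibling + node) if node_is_right else H(node + sibling)
    return node == root

# Child-chain ledger entries; the main chain stores only the root.
state = [b"alice:30", b"bob:45", b"carol:25"]
levels = build_levels(state)
main_chain_checkpoints = [levels[-1][0]]   # periodic state commitment

# Exit: alice proves ownership of leaf 0 against the latest checkpoint.
proof = proof_for(levels, 0)
assert verify(main_chain_checkpoints[-1], b"alice:30", proof)
assert not verify(main_chain_checkpoints[-1], b"alice:99", proof)
```

Note the asymmetry this creates: the main chain can check any claimed leaf against the 32-byte root, but it cannot enumerate the leaves from the root alone, which is exactly the data-availability gap discussed under rollups.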
7.4. Rollups
Plasma submits less information to the main chain than rollups do: Plasma posts only a state commitment, whereas rollups also post the underlying transaction data. As a result, the Layer 2 state generally cannot be reconstructed from main-chain data alone in Plasma, while in rollups it can.
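This data-availability difference can be sketched directly (the batch encoding below is an illustrative assumption, standing in for a rollup posting compressed transaction data as calldata): anyone holding only main-chain data can replay the batches and rebuild the full Layer 2 state.

```python
main_chain_data = []  # what the main chain stores per rollup batch

def post_rollup_batch(txs):
    # Unlike a Plasma checkpoint (root only), the rollup posts the
    # complete ordered transaction batch on-chain.
    main_chain_data.append(list(txs))

def replay(batches):
    # Rebuild the Layer 2 state purely from on-chain data.
    state = {}
    for batch in batches:
        for sender, receiver, amount in batch:
            state[sender] = state.get(sender, 0) - amount
            state[receiver] = state.get(receiver, 0) + amount
    return state

post_rollup_batch([("alice", "bob", 5), ("bob", "carol", 2)])
post_rollup_batch([("carol", "alice", 1)])

rebuilt = replay(main_chain_data)
assert rebuilt == {"alice": -4, "bob": 3, "carol": 1}
```

In a Plasma-style design only a root of each batch would be on-chain, so `replay` would be impossible without the operator cooperating; this is the trade-off the benchmark should keep in mind when comparing the two.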