Interpreting Vitalik's new article: How can future Layer2s get out of their predicament?

2024-04-01 14:00
Original author: Haotian, crypto researcher


Editor's note: On March 28, Ethereum founder Vitalik Buterin published a long article on Warpcast expressing his thoughts on the future expansion of Ethereum after the Cancun upgrade. For details, please see the article: Vitalik's latest long article: Ethereum's evolutionary sequel, four key improvements in L2. Crypto researcher Haotian interpreted and analyzed Vitalik's views on X. BlockBeats reproduced the full text as follows:


How should we understand Vitalik Buterin's new article on Ethereum's scaling? Some people say Vitalik was shilling Blob inscriptions; that is far from the truth.


So how do blob data packets actually work? Why isn't blob space being fully utilized after the Cancun upgrade? And is data availability sampling (DAS) a preparation for sharding?


In my opinion, post-Cancun performance is sufficient; what worries Vitalik is the state of Rollup development. Why? Let me share my understanding:


1) As explained many times before, a blob is a temporary data packet that is decoupled from EVM calldata and can be referenced directly by the consensus layer. The direct benefit: the EVM does not need to access blob data when executing transactions, so blobs do not incur high execution-layer computation costs.


The current parameters balance a series of factors: a blob is 128 KB, and a batch transaction to mainnet today typically carries only one or two blobs (the protocol allows up to six per block after Cancun). Ideally, the ultimate goal is for a mainnet block to carry 16 MB of blob data, about 128 blobs.
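As a sanity check, the sizing above multiplies out as follows (a trivial sketch using the figures quoted in this article, not constants pulled from a client):

```python
# Blob sizing figures quoted in the text above.
BLOB_SIZE_KB = 128            # one blob holds 128 KB of data
TARGET_BLOBS_PER_BLOCK = 128  # long-term full-Danksharding target cited above

block_data_mb = BLOB_SIZE_KB * TARGET_BLOBS_PER_BLOCK / 1024
print(block_data_mb)  # 16.0 MB of blob data per block at the target
```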


Therefore, a Rollup project should balance factors such as the number of blobs used, TPS transaction capacity, and mainnet blob storage costs, with the goal of using blob space as cost-effectively as possible.


Take Optimism as an example. It currently processes about 500,000 transactions per day, posting a batch to mainnet roughly every 2 minutes, each carrying one blob. Why only one? Because at that TPS, more isn't needed. It could carry two, but each blob would sit partly empty while the storage fee doubled, which would be pointless.
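The Optimism figures above can be cross-checked with simple arithmetic (the inputs are the article's round numbers, not measured data):

```python
# Round numbers quoted in the text, not measured chain data.
tx_per_day = 500_000
batch_interval_min = 2

batches_per_day = 24 * 60 // batch_interval_min  # one batch every 2 minutes
tx_per_batch = tx_per_day / batches_per_day      # transactions sharing one blob
print(batches_per_day, round(tx_per_batch))      # 720 694
```

So each blob only has to hold roughly 700 transactions, comfortably within one 128 KB packet.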


What should a Rollup do if its off-chain transaction volume grows sharply, say to 50 million transactions per day?


1. Compress the transactions in each batch so that blob space is filled as fully as possible;

2. Increase the number of blobs carried per batch;

3. Post batches to mainnet more frequently.
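The three levers above can be put into a toy model at the hypothetical 50-million-transactions-per-day load. The ~700-transactions-per-blob baseline is extrapolated from the Optimism figures earlier, and the 4x compression factor is an arbitrary illustration, not a protocol constant:

```python
def blobs_needed_per_batch(tx_per_day: int, batch_interval_min: float,
                           tx_per_blob: int) -> int:
    """How many blobs each batch needs under a given posting cadence."""
    batches_per_day = 24 * 60 / batch_interval_min
    tx_per_batch = tx_per_day / batches_per_day
    # Round up: a partially filled blob still costs a whole blob.
    return int(-(-tx_per_batch // tx_per_blob))

# Baseline: ~700 tx fit in one blob, one batch every 2 minutes.
print(blobs_needed_per_batch(50_000_000, 2, 700))     # change nothing: 100 blobs/batch
print(blobs_needed_per_batch(50_000_000, 2, 2800))    # lever 1: 4x compression -> 25
print(blobs_needed_per_batch(50_000_000, 0.5, 2800))  # levers 1+3 combined -> 7
```

The levers multiply: compression shrinks each batch, while faster posting spreads the load across more batches.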


2) Since the amount of data a mainnet block carries is constrained by the gas limit and storage costs, 128 blobs per block is an ideal state; far fewer are used today. Optimism uses just one every 2 minutes, leaving Layer2 teams plenty of headroom to raise TPS, grow their user base, and build out thriving ecosystems.


Therefore, for a period after the Cancun upgrade, Rollups have no need to compete fiercely over how many blobs they use, how often they post them, or bidding against each other for blob space.


The reason Vitalik mentioned Blobscriptions is that this type of inscription can temporarily inflate transaction volume and drive up demand for blob usage. Using inscriptions as an example helps us understand blobs' working mechanism more deeply; what Vitalik really wanted to express has little to do with inscriptions themselves.


In theory, if a Layer2 project batched transactions to mainnet at high frequency and high capacity, filling the blob space every time, then, as long as it were willing to bear the high cost of such padded spam batches, it could interfere with other Layer2s' normal use of blobs. In practice, though, this is like buying hashpower to mount a 51% attack on BTC: theoretically feasible, but lacking any profit motive.
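The "high cost" deterring such spam can be made concrete with EIP-4844's blob fee rule: the blob base fee rises exponentially for as long as blocks stay above the blob target, so sustained spam gets expensive fast. A minimal simulation (the constants and `fake_exponential` approximation are from EIP-4844; the attacker model is a simplification):

```python
# EIP-4844 blob fee constants.
MIN_BLOB_BASE_FEE = 1                      # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477
TARGET_BLOB_GAS_PER_BLOCK = 393_216        # 3 blobs * 131072 blob gas
MAX_BLOB_GAS_PER_BLOCK = 786_432           # 6 blobs

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer approximation of factor * e^(numerator/denominator), per EIP-4844.
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

# An attacker fills every block with the maximum 6 blobs for 100 blocks,
# so excess blob gas grows by (max - target) each block.
excess = 0
for _ in range(100):
    excess += MAX_BLOB_GAS_PER_BLOCK - TARGET_BLOB_GAS_PER_BLOCK
fee = fake_exponential(MIN_BLOB_BASE_FEE, excess, BLOB_BASE_FEE_UPDATE_FRACTION)
print(fee)  # blob base fee in wei after ~20 minutes of full blocks
```

Sustained full blocks raise the fee by roughly 12% per block, compounding into a six-figure multiple of the 1 wei floor within about 100 blocks, which is exactly the economic wall the paragraph above describes.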


Therefore, second-layer gas fees will stay in a "low" range for a long time, giving the Layer2 market a long golden development window to "build up troops and store grain."


3) So if one day the Layer2 market prospers to the point where the number of transactions batched to mainnet each day becomes enormous, and the current blob capacity is no longer enough, what then? Ethereum has already provided a solution: data availability sampling (DAS).


Put simply, data that originally had to be stored in full by one node can be distributed across multiple nodes: for example, each node stores 1/8 of the total blob data, and a group of 8 nodes together satisfies the DA requirement, which effectively expands blob storage capacity by 8x. This is in fact what the future sharding stage will do.
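To make the 1/8-per-node arithmetic concrete, here is a toy sketch of splitting one blob's bytes across 8 nodes and reassembling them. Real DAS additionally uses erasure coding and random sampling so that light clients can verify availability without downloading everything; none of that is modeled here, only the storage-splitting math:

```python
def split_blob(blob: bytes, n_nodes: int = 8) -> list[bytes]:
    """Give each of n_nodes an equal contiguous slice of the blob."""
    chunk = len(blob) // n_nodes
    return [blob[i * chunk:(i + 1) * chunk] for i in range(n_nodes)]

def reassemble(shares: list[bytes]) -> bytes:
    """Recover the original blob from all shares, in order."""
    return b"".join(shares)

blob = bytes(128 * 1024)                           # one 128 KB blob (zero bytes)
shares = split_blob(blob)
assert all(len(s) == 16 * 1024 for s in shares)    # each node stores only 16 KB
assert reassemble(shares) == blob                  # the group still has full DA
```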


But Vitalik has reiterated this many times, which is interesting; it reads like a warning to the many Layer2 teams: stop complaining that Ethereum's DA capacity is expensive. At your current TPS, you haven't pushed blob data packets anywhere near their limit. Hurry up and put your energy into growing your ecosystems, users, and transaction volume, and stop scheming to escape Ethereum DA with one-click chain launches.


Vitalik later added that among the core Rollups, only Arbitrum has reached Stage 1. DeGate, Fuel, and a few others have reached Stage 2, but they are not yet familiar to the wider community. Stage 2 is the ultimate goal of Rollup security, yet very few Rollups have even reached Stage 1, and most remain at Stage 0. It is clear that the state of the Rollup industry genuinely worries Vitalik.


4) In fact, on the scaling bottleneck alone, the Rollup Layer2 approach still has plenty of room to improve performance:


1. Use blob space more efficiently through data compression. OP-Rollups currently have a dedicated compressor component for this work; ZK-Rollups inherently "compress" by generating SNARK/STARK proofs off-chain and submitting only the proof to mainnet;
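The intuition behind lever 1 is that batched L2 transactions are highly repetitive (shared field names, recurring addresses, small values), so general-purpose compression already shrinks them dramatically before they are packed into blob space. A toy illustration with stdlib zlib and hypothetical look-alike transfers; real rollups use their own binary formats and compressors:

```python
import json
import zlib

# A hypothetical batch of near-identical transfers (illustrative data only).
batch = [{"from": "0xabc", "to": "0xdef", "value": 1, "nonce": i}
         for i in range(1000)]

raw = json.dumps(batch).encode()
packed = zlib.compress(raw, level=9)
print(len(raw), len(packed))  # compressed size is a small fraction of raw
```

The more uniform the batch, the better the ratio, which is why compression is the cheapest of the three levers to pull.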


2. Reduce Layer2's dependence on mainnet as much as possible, using optimistic-proof techniques to fall back on L1 for security only in special circumstances. Plasma, for example, keeps most of its data off-chain; only deposits and withdrawals happen on mainnet, so mainnet can still guarantee their security.


This means Layer2 should strongly couple only critical operations such as deposits and withdrawals to mainnet, which both reduces mainnet's burden and improves L2's own performance. The ideas mentioned earlier, such as parallel Sequencer processing, off-chain screening, classification, and pre-processing of large transaction volumes, and the hybrid Rollup promoted by Metis, which uses OP-Rollup for normal transactions and a ZK route for special withdrawal requests, all reflect similar considerations.


That's all.


It should be said that Vitalik's article on Ethereum's future scaling plan is very inspiring. In particular, his dissatisfaction with the current state of Layer2 development, his optimistic affirmation of blobs' performance headroom, his outlook on future sharding technology, and even the specific directions he pointed out for Layer2 optimization are all worth noting.


In fact, the only remaining uncertainty lies with Layer2 itself: how fast can it develop?




