
Details of the EIP-4844 proposal: Introducing "transactions with Blobs" to reduce Rollup fees

2022-12-22 10:30
How to reduce the Cost of Layer2 by 100 times?
Original article by Chuan Lin, AnT Capital


01 Introduction


Vitalik released an updated Ethereum roadmap on November 5, 2022. Compared to the previous roadmap released on December 2, 2021, the upcoming update to The Surge phase is undoubtedly the most noteworthy.


As you can see below, this phase of the update adds considerably more detail -- we can clearly see that the Ethereum community came up with EIP-4844: Proto-Danksharding in order to implement "basic Rollup scaling". The proposal is expected to go live around May to June 2023 and to reduce Rollup costs by up to 100x, which will greatly improve the user experience of Ethereum L2. Such a large optimization is bound to be a focus of discussion and attention in the Web3 community.



What was the problem with Ethereum? What ideas and solutions does EIP-4844 use to solve it? This article will help you understand EIP-4844 succinctly.


If you want to keep up with the underlying architecture of Ethereum and follow the community discussion in real time, please don't miss this article!


02 Main Text


I. EIP-4844 Origin: L2 cost bottleneck caused by data availability


1.1 Basic information about data interaction between L2 and L1


At present, Ethereum L2 mostly uses Rollup as its basic technology roadmap. Vitalik has described the Ethereum upgrade path as "a rollup-centric roadmap", which shows that Rollup has basically taken over L2.


(See the author's previous study on L2: The Long Article on ETH Merging, Review and Prospect of Layer2)


The basic principle of Rollup operation is to execute a bundle of transactions outside the Ethereum main chain. After execution, the execution result and the compressed transaction data itself are sent back to L1, so that others can verify the correctness of the result. Obviously, if no one else has a way to read the data, verification cannot be done. So it is important that others have access to the raw transaction data -- this is known as "Data Availability".


However, under the current architecture of Ethereum, data transmitted from L2 to L1 is stored in the Calldata of transactions. When Ethereum was originally designed, Calldata was just a parameter of a smart contract function call -- data that all nodes had to download synchronously. If Calldata is inflated, it puts a high load on Ethereum network nodes, so the cost of Calldata is relatively expensive. This is a major factor in current L2 costs.



1.2 Improvement ideas for the problem


Ask yourself, if you were to design an optimization solution for this problem, in which direction would you go?


In fact, we can observe that L2's compressed transaction data is uploaded only so that it can be downloaded and verified by others; it does not need to be executed by L1. However, Calldata is costly precisely because, as a parameter of a function call, it may by default be executed by L1, so nodes across the whole network need to synchronize it.


This creates a mismatch. It is as if I just wanted to upload my data to a file-sharing service so that someone else could download it for a certain period of time. Instead, you broadcast my data to the whole network -- which I didn't need -- forced everyone to download it within a certain time limit, and then charged me exorbitant fees for the service. This is clearly inappropriate and needs to be improved.


So how can we improve it? We can design a separate data type for the data sent by L2 and separate it from L1's Calldata. This type of data only needs to be accessible and downloadable by those who need it within a certain period of time, without requiring whole-network synchronization. In fact, this is what many members of the Ethereum tech community think.


The improvement of EIP-4844 is carried out along exactly these lines.


II. The heart of EIP-4844: Transactions with Blobs


If you can sum up what EIP-4844 does in one sentence, it is to introduce a new transaction type called "transactions with Blobs" -- the Blob being the data type specifically designed for L2 data transfer mentioned above.


Therefore, if you understand the details of Blobs, you can say that you basically understand EIP-4844.


2.1 The Blob itself: a "big data block" that stores L2 compressed data, held by consensus layer nodes


The Blob, short for Binary Large Object, is designed to hold L2's compressed raw transaction data -- the data L2 currently puts in Calldata now goes into the Blob instead. Compared with Calldata, a Blob can be very large, up to about 125 KB.


Blobs are stored by consensus layer nodes rather than directly on the main chain as Calldata is, which gives rise to two core features of Blobs:


They cannot be read by the EVM the way Calldata can

They have a life cycle and will be deleted after 30 days
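The second point -- time-limited retention -- can be pictured with a toy store that simply refuses to serve blobs older than the retention window (the 30 days quoted above; the class and method names here are illustrative, not from any client implementation):

```python
import time

RETENTION_SECONDS = 30 * 24 * 3600   # ~30-day blob retention window


class BlobStore:
    """Toy model of a consensus-layer node's blob storage with expiry."""

    def __init__(self):
        self._blobs = {}             # blob_id -> (stored_at, data)

    def put(self, blob_id, data, now=None):
        self._blobs[blob_id] = (now if now is not None else time.time(), data)

    def get(self, blob_id, now=None):
        now = now if now is not None else time.time()
        entry = self._blobs.get(blob_id)
        if entry is None or now - entry[0] > RETENTION_SECONDS:
            return None              # expired or unknown: no longer served
        return entry[1]


store = BlobStore()
store.put("b1", b"compressed rollup batch", now=0)
print(store.get("b1", now=10))                       # still available
print(store.get("b1", now=RETENTION_SECONDS + 1))    # None: pruned
```

The key point the sketch illustrates is that expiry is a property of the storage layer, not of the chain: nothing on L1 changes when a blob ages out.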


(If you're not familiar with cryptography and abstract algebra, that's enough for blobs themselves.)


In more detail, the Blob itself is a vector of 4096 elements. Each dimension of this vector is a number that can be very large: values range between 0 and 52435875175126190479447740508185965837690552500527637822603658699938581184513. That very big number is a prime, related to elliptic curve cryptography.


The numbers in each dimension of this vector can be viewed as the coefficients of a finite field polynomial of degree less than 4096. For example, the number in dimension i is the coefficient in front of w^i, where w is a constant with w^4096 = 1. This structure is designed to facilitate the generation of KZG polynomial commitments.
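The "vector of field elements as a polynomial" view above can be made concrete with a few lines of Python. This is only an illustration of the math, not the real EIP-4844 codec: we treat a blob as 4096 coefficients and evaluate the polynomial modulo the prime quoted above (the scalar field of the BLS12-381 curve):

```python
# Prime modulus quoted in the article (BLS12-381 scalar field order)
P = 52435875175126190479447740508185965837690552500527637822603658699938581184513

FIELD_ELEMENTS_PER_BLOB = 4096   # dimensions of the blob vector


def eval_blob_poly(blob, x):
    """Evaluate sum(blob[i] * x**i) mod P using Horner's rule."""
    assert len(blob) == FIELD_ELEMENTS_PER_BLOB
    acc = 0
    for coeff in reversed(blob):
        acc = (acc * x + coeff) % P
    return acc


# A mostly-zero blob whose polynomial is 7 + 3*x:
blob = [7, 3] + [0] * (FIELD_ELEMENTS_PER_BLOB - 2)
print(eval_blob_poly(blob, 0))   # 7
print(eval_blob_poly(blob, 10))  # 37
```

A KZG commitment is, roughly, a single elliptic-curve point that binds the committer to this polynomial, so that any evaluation can later be proven without revealing the whole blob -- the actual commitment scheme is beyond this sketch.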


2.2 Architecture design around the Blob: the Sidecar


Before understanding the Blob architecture, one concept needs explaining: the Execution Payload. Following the Ethereum Merge, the Consensus Layer (CL) and Execution Layer (EL) were separated, each responsible for one main function: the former for PoS consensus and the latter for EVM execution. The Execution Payload can simply be thought of as the ordinary L1 transactions in the EL.


(Source: OP in Paris: OP Labs' Protolambda walks us through EIP-4844)


The way the Blob fits into the current Ethereum architecture can be likened to the relationship between a motorcycle and its sidecar, like this: (on the left is the motorcycle's sidecar)



The Sidecar is an official metaphor. It means that the Blob travels in parallel with the main chain, attached to it but not part of it.


As shown in the figure below, let's walk through the blob-related execution process to better understand the metaphor:


(Source: OP in Paris: OP Labs' Protolambda walks us through EIP-4844)


First, the L2 Sequencer batches transactions and sends the execution result, related proofs (yellow), and the data packet (the Blob, blue) to L1's transaction pool


A Beacon Proposer (an L1 node) sees the transaction and broadcasts it in a new Beacon Block. But when broadcasting, it separates out the Blob and leaves it in the consensus layer (CL) instead of putting it into the new block in the execution layer


Other L1 nodes (Beacon peers) receive the new block proposal and transaction results. If they need to act as L2 validators, they can go to the Blobs Sidecar and download the relevant data.
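The separation step in the middle of this flow can be sketched in a few lines. This is a deliberately simplified model (the type and function names are invented for illustration): the real protocol keeps a versioned hash of a KZG commitment in the transaction, while here a plain SHA-256 hash stands in for it.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class BlobTx:
    """A blob-carrying transaction as submitted to the pool."""
    sender: str
    blob: bytes          # the large L2 data payload


@dataclass
class ExecutionTx:
    """What actually enters the execution-layer block."""
    sender: str
    blob_hash: bytes     # stand-in for the real versioned KZG commitment


def split_for_broadcast(tx, sidecar):
    """Detach the blob into the consensus-layer sidecar; the block keeps
    only a short hash that references it."""
    h = hashlib.sha256(tx.blob).digest()
    sidecar[h] = tx.blob                 # blob lives beside the chain
    return ExecutionTx(tx.sender, h)     # block stores only the reference


sidecar = {}
exec_tx = split_for_broadcast(
    BlobTx("l2-sequencer", b"compressed rollup batch"), sidecar)
print(exec_tx.blob_hash.hex()[:16], "...")
print(sidecar[exec_tx.blob_hash])
```

The design choice this models is the whole trick of EIP-4844: the expensive, globally-executed block carries only a constant-size reference, while the bulky data takes the cheap path beside it.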


The following diagram illustrates the Blob lifecycle from another perspective. It is clear that Blob data never goes onto the L1 main chain; it lives only in consensus layer nodes and follows a different lifecycle.


(Source: OP in Paris: OP Labs' Protolambda walks us through EIP-4844)


Therefore, it is not hard to understand why Blobs cannot be read directly by the EVM or by L1 smart contracts: what can be read is what gets passed to the execution layer, and Blobs exist only at the consensus layer. This separation is, in fact, why Rollup costs can be reduced.


2.3 Storage of Blobs: a new fee market


As mentioned earlier, Blob data will reside in consensus layer nodes and has a lifecycle. But this service is obviously not free either, so it leads to a new fee market independent of L1 gas fees -- the Multi-dimensional Fee Market advocated by Vitalik. The fee details are still being iterated on; see the ongoing discussion and updates: https://github.com/ethereum/EIPs/pull/5707
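The draft proposal prices blobs with an EIP-1559-style mechanism: the blob fee rises exponentially as blob usage exceeds a target, and decays when usage falls below it. The integer exponential approximation below follows the helper in the draft spec; the surrounding constants (targets, update fractions) were still in flux at the time of writing, so treat the numbers here as illustrative only:

```python
def fake_exponential(factor, numerator, denominator):
    """Taylor-series integer approximation of factor * e^(numerator/denominator),
    as sketched in the draft EIP-4844 fee mechanism."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator


# With zero "excess" blob usage, the fee stays at the minimum price:
print(fake_exponential(1, 0, 1))      # 1
# Each full "update fraction" of excess multiplies the fee by ~e:
print(fake_exponential(1, 100, 100))  # 2 (e ~= 2.718, truncated)
```

Because the fee depends only on accumulated excess blob usage, blob prices can move independently of ordinary L1 gas prices, which is exactly the multi-dimensional property mentioned above.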


In addition, if this data is only stored at the node level for a short period, how is long-term storage achieved? Vitalik says there are many solutions. The security assumption here is not very demanding -- it is a "1 of N trust model": it only takes one party to store the real data. With bulk storage hardware costing only about $20 per terabyte, storing roughly 2.5 TB of data per year is a small problem for anyone willing. Various decentralized storage solutions could also be an option, though Vitalik does not mention any specific projects here.


III. The impact of EIP-4844


At the architectural level, EIP-4844 introduces a new Blob-carrying transaction type. This is the first time Ethereum has constructed a separate data layer for L2, and it is also the first step toward implementing Full Danksharding later.


At the economic model level, EIP-4844 will introduce a new fee market for Blobs, which will be Ethereum's first step toward a multi-dimensional fee market.


At the user experience level, the most immediate change users will perceive is the significant reduction in L2 costs, and this major improvement at the base layer will provide an important foundation for the explosion of L2 and its application layer.


IV. The outlook after EIP-4844: Full Danksharding


Currently, EIP-4844 has been explicitly included in the Ethereum Shanghai upgrade, which, according to the current timeline given by community members, is expected to be completed between May and early June next year.


And EIP-4844 is just Proto-Danksharding -- that is, a prototype of Danksharding. The concept of the complete version of Danksharding is shown in the figure below: each node can verify the correctness of L2 data in real time directly through Data Availability Sampling, which will further improve L2 security and performance.


(Source: Frequently Asked Questions, written by Vitalik Buterin)




