Vitalik on the Possible Future of Ethereum (Part 3): The Scourge

24-10-23 17:43
Original title: Possible futures of the Ethereum protocol, part 3: The Scourge
Original author: Vitalik Buterin
Original translation: Tia, Techub News


Previous reading: "Vitalik on the Possible Future of Ethereum (Part 1): The Merge", "Vitalik on the Possible Future of Ethereum (Part 2): The Surge"


One of the biggest risks facing Ethereum L1 is the centralization of proof of stake (PoS). If PoS has economies of scale, large stakers will come to dominate, and small stakers will exit and join large staking pools. This raises the likelihood of high-risk events such as 51% attacks and transaction censorship. Beyond centralization risk, there is also the risk of value extraction: a small group capturing value that would otherwise flow to Ethereum's users.


Over the past year, our understanding of these risks has deepened. It has become clear that the risk has two distinct sources: (i) block construction, and (ii) the supply of staking capital. Larger participants can increase block revenue by running more sophisticated algorithms ("MEV extraction") to produce blocks. Large participants can also handle the inconvenience of having their capital locked more efficiently, by releasing it to others as a liquid staking token (LST). So in addition to the problems caused by economies of scale among stakers, Ethereum also needs to consider whether too much ETH is (or will be) staked.


The Scourge, 2023 Roadmap


This year, significant progress has been made on block construction, most notably convergence on committee-based inclusion lists as a solution, and there has been extensive research into proof-of-stake economics, including two-tier staking models and reducing issuance to cap the share of ETH that is staked.


The Scourge: Key Goals


· Minimize the risk of centralization in Ethereum’s staking layer (particularly in terms of block construction and capital supply, aka MEV and staking pools)


· Minimize the risk of over-extracting value from users


Fixing the block construction pipeline


What are we solving?


Today, Ethereum block construction is largely done through extra-protocol proposer-builder separation via MEVBoost. When a validator proposes a block, it auctions off the job of choosing the block's contents to builders. Choosing the contents that maximize revenue requires specialized algorithms that extract as much value from the chain as possible (this is called "MEV extraction"). Validators are left with the relatively simple tasks of listening for bids, accepting the highest bid, and attesting.


Diagram of what MEVBoost does: professional builders do the red tasks, while stakers do the blue tasks.


There are many variants, such as "Proposer-Builder Separation" (PBS) and "Attester-Proposer Separation" (APS), which differ slightly in how they divide responsibilities. In PBS, the validator still proposes the block but receives its payload from a builder, while in APS the entire slot becomes the builder's responsibility. Recently, APS has gained favor over PBS because it further reduces the incentive for proposers to collude with builders. Note that APS covers only execution blocks; consensus blocks, which contain proof-of-stake data such as attestations, are still randomly assigned to validators.


Splitting authority further helps keep validators decentralized, but it comes at an important cost: the actors who perform the "specialized" tasks can easily become centralized. Here is how Ethereum blocks are built today:



As you can see, just two builders determine 88% of Ethereum block content. This could let them censor transactions. However, the situation may be better than it seems: builders would need to censor 100% of blocks to prevent a transaction from being included; 51% is not enough. With 88% censoring, a user waits an average of roughly 9 slots for inclusion (see the arithmetic sketch below). For some cases, waiting two or even five minutes is fine. But for others, such as DeFi liquidations, the ability to delay someone else's transaction by even a few blocks creates a market-manipulation risk.
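A back-of-the-envelope sketch of that claim, under the simplifying assumption that each slot is won by a non-censoring builder independently, with probability equal to the non-censoring market share (real relay selection is messier):

```python
# Expected inclusion delay under builder censorship: the wait for a
# non-censoring builder to win a slot follows a geometric distribution.

SLOT_SECONDS = 12  # Ethereum slot time

def expected_wait_slots(censor_share: float) -> float:
    """Mean number of slots until a non-censoring builder wins a slot."""
    return 1.0 / (1.0 - censor_share)

for share in (0.51, 0.88, 0.99):
    slots = expected_wait_slots(share)
    print(f"{share:.0%} censoring -> ~{slots:.1f} slots (~{slots * SLOT_SECONDS:.0f}s)")

# 51% censoring -> ~2.0 slots (~24s): a bare majority barely delays anything.
# 88% censoring -> ~8.3 slots (~100s): delayed, but still included.
# 99% censoring -> ~100 slots (~20 min): only near-100% censorship truly blocks.
```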


The strategies block builders use to maximize profit can harm users in other ways as well. "Sandwich attacks" can cause traders significant losses to slippage, and the transactions used to mount these attacks congest the chain, driving up gas prices for everyone else.


What is the solution? How does it work?


One solution is to subdivide block production further: the task of choosing transactions is returned to the proposer (i.e. the staker), while the builder may only order those transactions and insert some of its own. This is what inclusion lists do.



At time T, a randomly selected staker creates an inclusion list: a list of transactions that are valid given the current chain state. At time T+1, a block builder (perhaps selected in advance through an in-protocol auction mechanism) creates a block. The block must include every transaction on the inclusion list, but the builder may reorder them and add transactions of its own.
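A minimal sketch (hypothetical transaction IDs, not actual client code) of the validity rule this places on the builder's block:

```python
# The inclusion-list constraint: the builder may reorder and append freely,
# but a block missing any listed transaction is invalid.

def satisfies_inclusion_list(block_txs: list[str], inclusion_list: list[str]) -> bool:
    """A block is valid only if it contains every transaction on the list."""
    return set(inclusion_list).issubset(block_txs)

il = ["tx_a", "tx_b"]                   # chosen by a random staker at time T
block = ["builder_tx", "tx_b", "tx_a"]  # built at time T+1: reordered, plus extras
assert satisfies_inclusion_list(block, il)
assert not satisfies_inclusion_list(["builder_tx", "tx_a"], il)  # tx_b censored
```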


In the Fork-Choice Enforced Inclusion List (FOCIL) proposal, there are multiple inclusion-list creators per block. For a transaction to be delayed by even one block, all k inclusion-list creators (e.g. k = 16) would have to censor it (see the probability sketch below). The combination of FOCIL with a final proposer selected by auction, who must include everything on the lists but may reorder them and add new transactions, is often called "FOCIL + APS".
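A quick sketch of why the k-of-k requirement makes censorship hard, assuming (for simplicity) that list creators are drawn independently from a staker set in which a fraction f censors:

```python
# A transaction is delayed one block only if ALL k inclusion-list creators
# censor it, so the delay probability falls off exponentially in k.

def delay_probability(f: float, k: int = 16) -> float:
    """Probability that all k independently sampled creators censor."""
    return f ** k

for f in (0.5, 0.9, 0.99):
    print(f"f = {f:.2f}: P(delayed one block) = {delay_probability(f):.4f}")

# f = 0.50: 0.0000 (about 1.5e-5)
# f = 0.90: 0.1853
# f = 0.99: 0.8515 -> even 99% censoring stakers fail ~15% of the time
```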


Another approach is a multiple concurrent proposers (MCP) scheme, such as BRAID. BRAID seeks to avoid splitting block production into a low-economies-of-scale part and a high-economies-of-scale part; instead, it distributes block production across many participants, so that each proposer needs only moderate sophistication to maximize their revenue. MCP works by having k parallel proposers each generate a list of transactions, then using a deterministic rule (e.g. ordering by fee, high to low) to determine the final order.
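A toy sketch of that merge rule; the field names and the fee-priority ordering are illustrative, not BRAID's actual specification:

```python
# MCP merging: k parallel proposers each submit a transaction list; the union
# is deduplicated and ordered by a deterministic rule (here: fee, high to low),
# so every honest node derives the same block from the same proposals.

def merge_proposals(proposals: list[list[dict]]) -> list[dict]:
    seen: set[str] = set()
    merged: list[dict] = []
    for txs in proposals:
        for tx in txs:
            if tx["hash"] not in seen:  # deduplicate overlapping proposals
                seen.add(tx["hash"])
                merged.append(tx)
    return sorted(merged, key=lambda tx: tx["fee"], reverse=True)

p1 = [{"hash": "0xaa", "fee": 5}, {"hash": "0xbb", "fee": 9}]
p2 = [{"hash": "0xbb", "fee": 9}, {"hash": "0xcc", "fee": 2}]  # overlap is fine
print([tx["hash"] for tx in merge_proposals([p1, p2])])  # ['0xbb', '0xaa', '0xcc']
```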



Note that BRAID does not achieve the ideal in which block proposers running default software earn optimal revenue. There are two easy-to-understand reasons why it cannot:


1. Last-mover arbitrage attacks: suppose the average time at which proposers submit is T, and the latest a transaction can be submitted and still be included is around T+1. Now suppose that on a centralized exchange, the ETH/USDC price moves from $2500 to $2502 between T and T+1. A proposer can wait the extra second and add a transaction arbitraging on-chain decentralized exchanges, earning up to $2 of profit per ETH. Sophisticated proposers with excellent network connectivity are better positioned to do this.


2. Exclusive order flow: users have an incentive to send transactions directly to a single proposer to minimize their exposure to front-running and other attacks. Sophisticated proposers have an advantage: they can build infrastructure to accept transactions directly from users, and they have stronger reputations, so users who send them transactions can trust that the proposer will not betray them and front-run (this can be mitigated by trusted hardware, but trusted hardware carries its own trust assumptions).


Beyond these two extremes, there is a spectrum of designs in between. For example, you could auction off a role that only has the right to append transactions to the end of a block (but not the power to reorder them).


Encrypted mempools


Encrypted mempools are a key enabling technology for many of these designs (especially BRAID, or versions of APS where the auctioned role is tightly restricted). In an encrypted mempool, users broadcast their transactions in encrypted form, together with some kind of validity proof; transactions are included in blocks while still encrypted, without block builders knowing their contents, which are revealed only later.


The main challenge in implementing an encrypted mempool is designing a mechanism that guarantees transactions are revealed once they are confirmed: a simple "commit and reveal" scheme does not work, because if revealing is voluntary, the choice of whether or not to reveal is itself a form of last-mover influence over the block that can be exploited. The two main techniques for achieving this are (i) threshold decryption and (ii) delay encryption, a primitive closely related to verifiable delay functions (VDFs).
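A toy illustration (deliberately not real cryptography) of why forced decryption fixes the voluntary-reveal problem: once the ciphertext is on-chain, a committee can recover the plaintext without the sender's cooperation, so "choosing not to reveal" is no longer a move. Real designs use t-of-n threshold encryption or VDF-based delay encryption, not this k-of-k XOR toy:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, k: int) -> list[bytes]:
    """Toy k-of-k XOR sharing: XOR-ing all k shares reconstructs the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(k - 1)]
    last = key
    for s in shares:
        last = xor(last, s)
    return shares + [last]

tx = b"swap 10 ETH -> USDC"
key = secrets.token_bytes(len(tx))     # one-time pad, for illustration only
ciphertext = xor(tx, key)              # what gets broadcast and put in a block
committee_shares = split_key(key, 16)  # distributed before inclusion

# After the block is confirmed, the committee reconstructs the plaintext
# with no help from (and no veto by) the transaction's sender:
recovered = ciphertext
for share in committee_shares:
    recovered = xor(recovered, share)  # folding in all shares = XOR with key
print(recovered)  # b'swap 10 ETH -> USDC'
```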


What are the connections to existing research?


· MEV and builder centralization explained: https://vitalik.eth.limo/general/2024/05/17/decentralization.html#mev-and-builder-dependence


· MEVBoost: https://github.com/flashbots/mev-boost


· Enshrined PBS (an early solution to these problems): https://ethresear.ch/t/why-enshrine-proposer-builder-separation-a-viable-path-to-epbs/15710


· Mike Neuder’s inclusion list: https://gist.github.com/michaelneuder/dfe5699cb245bc99fbc718031c773008


· Inclusion list EIP: https://eips.ethereum.org/EIPS/eip-7547


· FOCIL: https://ethresear.ch/t/fork-choice-enforced-inclusion-lists-focil-a-simple-committee-based-inclusion-list-proposal/19870


· Max Resnick’s presentation on BRAID: https://www.youtube.com/watch?v=mJLERWmQ2uw


· Priority is all you need by Dan Robinson: https://www.paradigm.xyz/2024/06/priority-is-all-you-need


· Gadgets and protocols for multiple proposers: https://hackmd.io/xz1UyksETR-pCsazePMAjw


· VDFResearch.org: https://vdfresearch.org/


· Verifiable Delay Functions and Attacks (focused on the RANDAO setting, but also applicable to encrypted mempools): https://ethresear.ch/t/verifiable-delay-functions-and-attacks/2365


What’s left to do? What are the tradeoffs?


We can think of all of the above as different ways of dividing up the authority involved in staking, arranged on a spectrum from lower economies of scale to higher economies of scale ("specialization-friendly"). Before 2021, all of this authority was bundled in a single actor:



The core dilemma is this: any meaningful authority left in stakers' hands is authority that is likely to end up being "MEV-relevant". We want a highly decentralized set of actors to hold as much authority as possible; this means (i) putting a lot of authority in stakers' hands, and (ii) making sure stakers stay decentralized, which means giving them little incentive to consolidate through economies of scale. This tension is difficult to navigate.


We can picture FOCIL + APS like this: stakers keep the authority on the left side of the spectrum, while the right side is auctioned off to the highest bidder.



BRAID is quite different. The "stakers" slice is larger, but it is split into two parts: light stakers and heavy stakers. Meanwhile, because transactions are ordered by priority fee in descending order, the selection of the top of the block is effectively auctioned off through the fee market, which can be seen as a kind of enshrined PBS.



Note that BRAID's security depends heavily on encrypted mempools; otherwise, its top-of-block auction mechanism is vulnerable to strategy-stealing attacks (essentially: copying someone else's transaction, swapping the recipient address, and paying a 0.01% higher fee). This need for pre-inclusion privacy is also why PBS is so difficult to implement.


Finally, there are more "radical" versions of FOCIL + APS, such as the option where APS determines only the end of the block, as shown below:



The main remaining tasks are (i) consolidating the various proposals and analyzing their consequences, and (ii) combining that analysis with an understanding of the Ethereum community's goals, namely which forms of centralization it is willing to tolerate. Each individual proposal also needs further work, for example:


· Continue work on encrypted mempool designs, arriving at one that is both robust and reasonably simple to incorporate.


· Optimize the design of multiple inclusion lists to ensure that (i) it does not waste data, especially in the context of inclusion lists covering blobs, and (ii) it is friendly to stateless validators.


· More research on the optimal auction design for APS.


In addition, it is worth noting that these proposals are not necessarily mutually exclusive forks in the road. For example, implementing FOCIL + APS could easily serve as a stepping stone to implementing BRAID. A sensible conservative strategy is a "wait and see" approach: first implement a solution that limits stakers' authority and auctions off most of it, then gradually expand stakers' authority over time as we learn more about how the MEV market operates on the live network.


How does it interact with the rest of the roadmap?


There is a positive interaction between solving any one staking-centralization bottleneck and solving the others. As an analogy, imagine a world where starting your own company required growing your own food, building your own computers, and maintaining your own army. Only a handful of companies could exist in that world. Solving one of the three problems helps, but not much. Solving two helps more than twice as much as solving one. And solving all three helps far more than three times as much: if you are a solo entrepreneur, either all three problems are solved or you don't stand a chance.


Specifically, the centralization bottlenecks for staking include:


· Centralization of block construction (this section)


· Centralization of staking for economic reasons (next section)


· Centralization of staking due to the 32 ETH minimum (solved by Orbit or other technologies; see the merge post)


· Centralization of staking due to hardware requirements (solved in Verge with stateless clients and later ZK-EVM)


Solving any of these four problems will increase the benefits of solving any of the others.


In addition, block construction interacts with single-slot finality designs, especially when trying to reduce slot times. Many block construction designs end up lengthening slot times, and many involve a role for attesters in their pipeline. For these reasons, block construction and single-slot finality should be designed together.


Fixing the staking economy


What problem are we solving?


Currently, about 30% of the ETH supply is actively staked. This is enough to protect Ethereum against a 51% attack. But if the staked share grows much further, researchers worry about a different scenario: the risks that arise if almost all ETH is staked. These include:


· Staking turns from a profitable task for specialists into an obligation for every ETH holder. Ordinary stakers will therefore pick the simplest option rather than staking themselves, entrusting their tokens to whichever centralized operator offers the most convenience


· If all ETH is staked, the credibility of the slashing mechanism will be weakened


· A single liquid staking token can take over most of the staking, and even take over the "currency" network effect of ETH itself


· Ethereum needlessly issues an extra ~1M ETH per year. If one liquid staking token gains dominant network effects, much of that value could even be captured by the LST.


What is it? How does it work?


Historically, one solution has been: if everyone is inevitably staking, and a liquid staking token is inevitable, then let’s make staking friendly and have a liquid staking token that is effectively trustless, neutral, and maximally decentralized. A simple approach would be to cap staking penalties at, say, 1/8, which would make 7/8 of the staked ETH non-slashable and therefore eligible to be put into the same liquid staking token. Another option would be to explicitly create two tiers of staking: “risk-bearing” (slashable) staking, which would be capped at 1/8 of all ETH, and “risk-free” (non-slashable) staking, which everyone could participate in.


However, one criticism of this approach is that it seems economically equivalent to something much simpler: drastically reduce issuance if stake approaches some predetermined cap. The basic argument is: if we end up living in a world where the risk-taking layer has a 3.4% return, and the risk-free layer (where everyone participates) has a 2.6% return, this is effectively the same as a world where staking ETH has a 0.8% return, and just holding ETH has a 0% return. In both cases, the dynamics of the risk-taking layer (both total stake and centralization) are the same. So we should do the simple thing and reduce issuance.
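A minimal numeric restatement of that argument, using the rates from the paragraph above:

```python
# What drives staking behavior is the spread between the risk-taking option
# and the option available to everyone, not the absolute return levels.

two_tier_spread = 0.034 - 0.026          # risky staking vs. universal risk-free layer
reduced_issuance_spread = 0.008 - 0.000  # staking vs. simply holding ETH

print(f"{two_tier_spread:.3%} vs {reduced_issuance_spread:.3%}")  # 0.800% vs 0.800%
# Identical spreads -> identical incentives to enter the risk-taking layer,
# hence the claim that the simpler issuance reduction is economically equivalent.
```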


The main counterargument is to ask whether the "risk-free layer" can still be given some useful function and some degree of risk (for example, as argued by Dankrad here).


Both families of proposals imply changing the issuance curve so that returns become prohibitively low if too much ETH is staked.



Left: Justin Drake’s proposal to adjust the issuance curve. Right: Another set of proposals by Anders Elowsson.
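A rough sketch of both families of curves, using the commonly cited approximation that current annual issuance is about 166.3 · sqrt(S) ETH when S ETH is staked (which reproduces today's ~1M ETH/year at ~33M ETH staked). The capped curve and the 40M ETH cap below are illustrative assumptions, not any specific proposal's formula:

```python
import math

C = 166.3  # approximation constant for Ethereum's current issuance curve

def current_yield(stake_eth: float) -> float:
    """Current staking yield: issuance C*sqrt(S) divided by stake S."""
    return C / math.sqrt(stake_eth)

def capped_yield(stake_eth: float, cap_eth: float = 40e6) -> float:
    """Hypothetical capped curve: tracks the current curve, then dives
    toward negative infinity as stake approaches the cap."""
    return current_yield(stake_eth) + math.log(1 - stake_eth / cap_eth) / 100

for s in (15e6, 30e6, 38e6, 39.9e6):
    print(f"{s / 1e6:4.1f}M ETH staked: current {current_yield(s):+.2%}, "
          f"capped {capped_yield(s):+.2%}")

# 15.0M ETH staked: current +4.29%, capped +3.82%
# 30.0M ETH staked: current +3.04%, capped +1.65%
# 38.0M ETH staked: current +2.70%, capped -0.30%
# 39.9M ETH staked: current +2.63%, capped -3.36%
```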


Two-tier staking, by contrast, requires setting two return curves: (i) the return for "basic" (risk-free or low-risk) staking, and (ii) the premium for risk-bearing staking. These parameters can be set in several ways: for example, if you hard-code that 1/8 of all stake must be slashable, then market dynamics will determine the premium earned by slashable stake.


Another important topic here is MEV capture. Today, MEV income (e.g. from DEX arbitrage, sandwiching, ...) goes to the proposer, i.e. the staker. This income is completely "opaque" to the protocol: the protocol cannot tell whether it amounts to 0.01% APR, 1% APR, or 20% APR. The existence of this income stream is inconvenient from several perspectives:


1. It is an unstable source of income: an individual staker receives it only when they propose a block, which today happens about once every 4 months (see the quick calculation after this list). This incentivizes people to join staking pools for a steadier income.


2. It skews the distribution of incentives: too much incentive to propose, too little to attest.


3. It makes a stake cap hard to enforce: even if the "official" return is zero, MEV income alone may be enough to motivate every ETH holder to stake.
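A quick check of the "once every 4 months" figure from point 1, assuming roughly 1,000,000 active validators and 12-second slots (figures of that order as of late 2024):

```python
validators = 1_000_000
slot_seconds = 12

# One proposer is chosen per slot, so a given validator proposes roughly
# once every `validators` slots.
seconds_between_proposals = validators * slot_seconds
days = seconds_between_proposals / 86_400
print(f"~{days:.0f} days (~{days / 30:.1f} months) between proposals")
# ~139 days (~4.6 months) between proposals
```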


Therefore, a realistic stake-cap proposal would actually have to make returns approach negative infinity as stake approaches the cap, such as the one proposed here. Needless to say, this imposes more risk on stakers, especially solo stakers.


We can address these issues by finding a way to make MEV income legible to the protocol and capture it. The earliest proposal was Francesco's MEV smoothing; today it is widely accepted that any mechanism that auctions off block proposal rights in advance (or, more generally, enough authority to capture almost all MEV) can achieve the same goal.


What are the connections to existing research?


· Issuance.wtf: https://issuance.wtf/


· Endgame Staking Economics, a Case for Targeting: https://ethresear.ch/t/endgame-staking-economics-a-case-for-targeting/18751


· Properties of Issuance Level, Anders Elowsson: https://ethresear.ch/t/properties-of-issuance-level-consensus-incentives-and-variability-across-potential-reward-curves/18448

· Validator set size cap: https://notes.ethereum.org/@vbuterin/single_slot_finality?type=view#Economic-capping-of-total-deposits


· Thoughts on the idea of multi-layer staking: https://notes.ethereum.org/@vbuterin/staking_2023_10?type=view


· Rainbow staking: https://ethresear.ch/t/unbundling-staking-towards-rainbow-staking/18683


· Dankrad’s liquid staking proposal: https://notes.ethereum.org/Pcq3m8B8TuWnEsuhKwCsFg


· MEV smoothing, by Francesco: https://ethresear.ch/t/committee-driven-mev-smoothing/10408


· MEV burn, by Justin Drake: https://ethresear.ch/t/mev-burn-a-simple-design/15590


What’s left to do? What are the tradeoffs?


The main remaining task is either to agree to do nothing, accepting the risk that nearly all ETH ends up in LSTs, or to finalize and agree on the details and parameters of one of the proposals above. A rough summary of the benefits and risks:


Policy | What needs to be decided | Risks

Do nothing | MEV burn implementation details | Nearly 100% of ETH staked, likely in LSTs (perhaps one dominant LST); macroeconomic risks

Staking cap (by changing the issuance curve) | Reward function and parameters (especially where the cap sits); MEV burn implementation details | Which stakers enter and leave is unresolved; the remaining staker set may be centralized

Two-tier staking | Role of the risk-free layer; parameters (e.g. the economics determining how much stake is in the risk-bearing layer); MEV burn implementation details | Which stakers enter and leave is unresolved; the risk-bearing set may be centralized


How does it interact with the rest of the roadmap?


This interacts with solo staking. Today, the cheapest VPS capable of running an Ethereum node costs about $60 per month, mostly due to hard-drive storage costs. For a 32 ETH staker ($84,000 at the time of writing), that puts the break-even APY at (60 * 12) / 84000 ≈ 0.86%. If total staking returns fall below that, solo staking becomes unviable for many people at that cost level.
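The same break-even arithmetic as a tiny script, so the threshold can be recomputed for other hardware costs and ETH prices (numbers taken from the paragraph above):

```python
monthly_vps_usd = 60      # cheapest VPS able to run a node, per the text
stake_eth = 32
eth_price_usd = 2_625     # implied by the $84,000 figure above

annual_cost = monthly_vps_usd * 12            # $720/year
stake_value = stake_eth * eth_price_usd       # $84,000
print(f"break-even APY ~= {annual_cost / stake_value:.2%}")  # ~0.86%
```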


If we want solo staking to remain viable, node operating costs must come down, which the Verge will deliver: statelessness removes the storage requirement, and L1 EVM validity proofs will then make running a node outright cheap.


On the other hand, MEV burn arguably helps solo stakers. While it lowers returns for everyone, more importantly it reduces variance, making staking less of a lottery.


Finally, any change to issuance interacts with other fundamental changes to staking design (e.g. rainbow staking). One particular concern is that if staking returns become very low, we must choose between (i) lowering penalties, which weakens the disincentives against bad behavior, and (ii) keeping penalties high, in which case even well-intentioned validators can end up with negative returns if they are unlucky enough to hit technical issues or even attacks.


Application Layer Solutions


The sections above focus on changes to Ethereum's Layer 1 that can address centralization risks. However, Ethereum is more than just a Layer 1; it is an ecosystem, and some important application-layer strategies can also help mitigate the risks described above. Here are some examples:


· Specialized staking hardware solutions - Some companies, such as Dappnode, are selling specially designed hardware to make operating a staking node as simple as possible. One way to make this solution more effective is to ask the following question: if a user already expends the effort of keeping a box running 24/7 and connected to the internet, what other services can it provide to the user or others that benefit from decentralization? Examples that come to mind include (i) running locally hosted LLMs for self-sovereignty and privacy reasons, and (ii) running a node for a decentralized VPN.


· Squad Staking - This solution from Obol allows multiple people to stake together in an M-of-N format. It may grow more popular over time, as statelessness and later L1 EVM validity proofs reduce the overhead of running more nodes, and as the benefit of no single participant having to worry about being online all the time starts to dominate. This is another way to reduce the perceived overhead of staking and to ensure that solo and small-group staking thrives in the future.


· Airdrops - Starknet has provided airdrops to solo stakers. Other projects that wish to have a decentralized and values-aligned user base may also consider providing airdrops or discounts to validators identified as likely solo stakers.


· Decentralized Block Builder Market - Using a combination of ZK, MPC, and TEEs, it is possible to create a decentralized block builder that participates in, and wins, the APS auction game while providing pre-confirmation privacy and censorship-resistance guarantees to its users. This is another way to improve user welfare in an APS world.


· Application Layer MEV Minimization - Individual applications can be built so that less MEV "leaks" to L1, reducing block builders' incentive to develop specialized MEV-collection algorithms. A common simple strategy (though inconvenient, and one that breaks composability) is for the contract to put all incoming actions into a queue, execute them in the next block, and auction off the right to jump the queue (see the sketch below). Other, more sophisticated approaches involve doing more work off-chain, as CoW Swap does. Oracles can also be redesigned to minimize oracle-extractable value.
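A toy simulation of that queue-and-delay pattern; the class and method names are illustrative, and the technique reduces (rather than eliminates) extractable value:

```python
# Actions submitted during block N are only executed, in arrival order, in
# block N+1, so a builder who sees a pending trade cannot atomically wrap
# its own transactions around it within the same block.

class QueuedApp:
    def __init__(self) -> None:
        self.pending: list[str] = []    # actions accepted this block
        self.executed: list[str] = []

    def submit(self, action: str) -> None:
        """Called during block N: only enqueues, never executes."""
        self.pending.append(action)

    def on_new_block(self) -> None:
        """Called at block N+1: executes last block's queue in FIFO order."""
        self.executed.extend(self.pending)
        self.pending = []

app = QueuedApp()
app.submit("alice: swap 10 ETH")
app.submit("bob: swap 5 ETH")   # a would-be front-runner still lands behind alice
app.on_new_block()
print(app.executed)             # ['alice: swap 10 ETH', 'bob: swap 5 ETH']
```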



