Vitalik on the possible future of Ethereum (V): The Purge

2024-10-28 12:30
Original title: Possible futures of the Ethereum protocol, part 5: The Purge
Original author: Vitalik Buterin
Original translation: Fu Ruhe, Odaily Planet Daily


Translator's note: Since October 14, Ethereum founder Vitalik Buterin has been publishing a series of articles on the possible future of Ethereum, from "The Merge", "The Surge", "The Scourge", and "The Verge" to the latest, "The Purge". Together they lay out Vitalik's vision for the future development of the Ethereum mainnet and for solving the problems it currently faces.

"The Merge": It discusses the need for Ethereum to improve single-slot finality and lower the staking threshold after turning to PoS in order to increase participation and transaction confirmation speed.

《The Surge》: Discusses different strategies for Ethereum expansion, especially the roadmap centered on Rollup, and its challenges and solutions in scalability, decentralization, and security.

《The Scourge》: Discusses the risks of centralization and value extraction faced by Ethereum in its transition to PoS, and the development of various mechanisms to enhance decentralization and fairness, while carrying out pledge economic reforms to protect user interests.

《The Verge》: Discusses the challenges and solutions of Ethereum's stateless verification, focusing on how technologies such as Verkle trees and STARKs enhance the decentralization and efficiency of blockchains.

On October 26, Vitalik Buterin published his article on "The Purge", which discusses the challenge Ethereum faces in reducing complexity and storage requirements over the long term while maintaining the chain's durability and decentralization. Key measures include reducing the client storage burden through "history expiry" and "state expiry", and simplifying the protocol through "feature cleanup", to ensure the network's sustainability and scalability. The following is the original content, as compiled by Odaily Planet Daily.


Special thanks to Justin Drake, Tim Beiko, Matt Garnett, Piper Merriam, Marius van der Wijden, and Tomasz Stanczak for their feedback and review.


One of the challenges with Ethereum is that, by default, any blockchain protocol will grow in bloat and complexity over time. This happens in two places:


· Historical data: Any transaction made and any account created at any point in history needs to be permanently stored by all clients, and downloaded by any new client to be fully synced to the network. This causes client load and sync times to continue to increase over time, even if the capacity of the chain remains constant.


· Protocol features: It is much easier to add new features than to remove old ones, causing code complexity to increase over time.


For Ethereum to be sustainable in the long term, we need to exert strong counter-pressure against both of these trends, reducing complexity and bloat over time. But at the same time, we need to preserve one of the key properties that make blockchains great: durability. You can put an NFT or a love letter in transaction calldata, or a smart contract holding $1 million, on-chain, go into a cave for ten years, and come out to find it still there waiting for you to read and interact with it. For DApps to feel comfortable fully decentralizing and removing their upgrade keys, they need to be confident that their dependencies will not upgrade in ways that break them - especially L1 itself.



It is absolutely possible to strike a balance between these two needs and to minimize or reverse bloat, complexity, and decay while maintaining continuity, if we are determined to do so. Living organisms can do this: while most age over time, a lucky few do not. Even social systems can have extremely long lifespans. Ethereum has already succeeded in some cases: proof of work is gone, the SELFDESTRUCT opcode is mostly gone, and beacon chain nodes already store data only up to about six months old. Figuring out this path for Ethereum in a more general way, toward a long-term stable end state, is the ultimate challenge for Ethereum's long-term scalability, technical sustainability, and even security.


The Purge: primary goals:


· Reduce client storage requirements by reducing or eliminating the need for each node to permanently store all history or even final state.


· Reduce protocol complexity by eliminating unneeded functionality.


Table of Contents:

· History expiry

· State expiry

· Feature cleanup


History expiry


What problem does it solve?


As of this writing, a fully synced Ethereum node requires about 1.1 TB of disk space for the execution client, plus hundreds of GB more for the consensus client. The vast majority of this is history: data about historical blocks, transactions, and receipts, much of which is many years old. This means that a node's storage footprint will keep growing by hundreds of GB per year, even if the gas limit does not increase at all.


What is it and how does it work?


A key simplifying property of the history storage problem is that, because each block points to the previous block via hash links (and other structures), consensus on the present is sufficient for consensus on the history. As long as the network agrees on the latest block, any historical block, transaction, or piece of state (account balance, nonce, code, storage) can be provided by any single participant along with a Merkle proof, and that proof allows anyone else to verify its correctness. Consensus is an N/2-of-N trust model, while history is a 1-of-N trust model.
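To make the 1-of-N argument concrete, here is a minimal sketch in Python of how a client holding only a trusted root (obtained from consensus) can verify history served by a single untrusted peer. It uses a generic binary Merkle tree rather than Ethereum's actual Merkle-Patricia/SSZ structures; the trust argument is the same:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def verify_merkle_branch(leaf: bytes, branch: list[bytes], index: int, root: bytes) -> bool:
    """Recompute the path from `leaf` (at position `index`) up to the root.
    If it matches the root agreed on by consensus, the data is authentic,
    no matter who served it."""
    h = sha256(leaf)
    for sibling in branch:
        h = sha256(h + sibling) if index % 2 == 0 else sha256(sibling + h)
        index //= 2
    return h == root
```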


This gives us a lot of choices for how to store history. One natural choice is a network where each node stores only a small fraction of the data. This is how the torrent network has operated for decades: while the network stores and distributes millions of files in total, each participant stores and distributes only a few of them. Perhaps counter-intuitively, this approach doesn't even necessarily reduce the robustness of the data. If, by making it more affordable to run nodes, we could build a network of 100,000 nodes, where each node stores a random 10% of history, then every piece of data would be replicated 10,000 times - exactly the same replication factor as a 10,000-node network, where each node stores everything.


Today, Ethereum has begun to move away from the model where all nodes store all history permanently. Consensus blocks (i.e. the part related to proof-of-stake consensus) are only stored for about 6 months. Blobs are only stored for about 18 days. EIP-4444 aims to introduce a one-year storage period for historical blocks and receipts. The long-term goal is to have a uniform period (probably around 18 days) during which each node is responsible for storing everything, and then have a peer-to-peer network of Ethereum nodes storing old data in a distributed way.



Erasure codes can be used to improve robustness while keeping the replication factor the same. In fact, blobs are already erasure coded to support data availability sampling. The simplest solution is probably to reuse such erasure codes and put the execution and consensus block data into blobs as well.
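For intuition, here is a toy "data as polynomial evaluations" erasure code over a small prime field, showing the recovery property that motivates this. The real blob encoding works over the BLS12-381 scalar field with KZG commitments and FFTs; the field, parameters, and naive Lagrange interpolation here are purely illustrative:

```python
import random

P = 2**31 - 1  # small prime modulus, for illustration only

def lagrange_eval(points, x):
    """Evaluate the unique degree-(k-1) polynomial through `points` at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    """Extend k field elements (treated as evaluations at 0..k-1) to n shares."""
    pts = list(enumerate(data))
    return [(x, lagrange_eval(pts, x)) for x in range(n)]

def recover(shares, k):
    """Any k of the n shares reconstruct the original k elements."""
    return [lagrange_eval(shares[:k], x) for x in range(k)]

data = [5, 7, 11, 13]              # k = 4 original chunks
shares = encode(data, 8)           # n = 8: a 2x extension
random.shuffle(shares)             # shares may be lost / arrive in any order
assert recover(shares, 4) == data  # any 4 of the 8 suffice
```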


What is the connection with existing research?


· EIP-4444;

· Torrents and EIP-4444;

· Portal Network;

· Portal Network and EIP-4444;

· Distributed storage and retrieval of SSZ objects in Portal;

· How to Raise the Gas Limit (Paradigm).


What else needs to be done, and what are the trade-offs?


The main remaining work involves building and integrating a concrete distributed solution for storing history - at least execution history, but eventually consensus history and blobs as well. The simplest solutions are (i) simply bringing in existing torrent libraries, and (ii) an Ethereum-native solution called the Portal network. Once either of these is introduced, we can turn on EIP-4444. EIP-4444 itself does not require a hard fork, but it does require a new network protocol version. Therefore, it is valuable to enable it for all clients at the same time; otherwise, there is a risk of clients breaking when they connect to other nodes expecting to download the full history but cannot actually get it.


The main trade-off involves how hard we try to make “ancient” history data available. The simplest solution would be to stop storing ancient history tomorrow, and rely on existing archive nodes and various centralized providers to replicate it. That’s easy, but it undermines Ethereum’s position as a permanent place of record. The more difficult but safer path is to first build and integrate a torrent network to store history in a distributed way. Here, “how hard we try” has two dimensions:


1. How hard do we try to ensure that the largest set of nodes actually stores all the data?


2. How deeply do we integrate history storage into the protocol?


An extremely paranoid approach to (1) would involve proof of custody: effectively requiring each proof-of-stake validator to store a certain percentage of history, and periodically cryptographically checking that they do so. A more moderate approach would be to set a voluntary standard for the percentage of history stored by each client.
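Either way, the assignment of chunks to nodes can be made deterministic and publicly computable, so that anyone can check whether a node stores what it claims to. Here is a sketch of one such scheme; the function names and parameters are hypothetical, not a specified Ethereum protocol:

```python
import hashlib

def is_assigned(node_id: bytes, chunk_index: int, fraction: float) -> bool:
    """A node is responsible for a chunk iff a public hash of (node ID, chunk)
    falls below a threshold; anyone can recompute this and challenge the node."""
    h = hashlib.sha256(node_id + chunk_index.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < int(fraction * 2**256)

def assigned_chunks(node_id: bytes, num_chunks: int, fraction: float) -> list[int]:
    return [i for i in range(num_chunks) if is_assigned(node_id, i, fraction)]

# e.g., a node expected to hold ~10% of 10,000 history chunks:
print(len(assigned_chunks(b"node-pubkey-bytes", 10_000, 0.10)))  # ~1000
```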


For (2), the basic implementation only involves work that’s already done today: Portal already stores an ERA file containing the entire history of Ethereum. A more thorough implementation would involve actually connecting it to the sync process, so that if someone wants to sync a full history storage node or archive node, they can do so by syncing directly from the Portal network, even if no other archive nodes exist online.


How does this interact with the rest of the roadmap?


If we want to make it extremely easy to run or spin up a node, then reducing history storage requirements is arguably more important than statelessness: of the 1.1 TB a node requires, ~300 GB is state, and the remaining ~800 GB is history. Only with statelessness and EIP-4444 can the vision of running an Ethereum node on a smartwatch and setting it up in just a few minutes be realized.

Limiting history storage also makes it more feasible for newer Ethereum node implementations to support only recent versions of the protocol, which makes them much simpler. For example, since the empty storage slots created during the 2016 DoS attacks have now all been removed, many lines of client code that existed only to handle them can be safely deleted. Now that the move to proof-of-stake is history, clients can also safely delete all proof-of-work related code.


State expiry


What problem does it solve?


Even if we eliminate the need for clients to store history, a client's storage requirements will continue to grow, by about 50 GB per year, as the state grows: account balances and nonces, contract code, and contract storage. Users can pay a one-time fee that imposes a storage burden on current and future Ethereum clients forever.


State is harder to "expire" than history because the EVM is fundamentally designed around the assumption that once a state object is created, it exists forever and can be read by any transaction at any time. If we introduce statelessness, some argue that this problem might not be so bad: only a specialized class of block builders would need to actually store state, and all other nodes (even inclusion list generation!) can run statelessly. However, there is an argument that we don't want to rely too heavily on statelessness, and eventually we may want to expire state to keep Ethereum decentralized.


What is it and how does it work?


Today, when you create a new state object (which can happen in one of three ways: (i) sending ETH to a new account, (ii) creating a new account with code, (iii) setting a previously untouched storage slot), that state object remains part of the state forever. What we want instead is for objects to automatically expire over time. The key challenge is to do this in a way that achieves three goals:


1. Efficiency: Don’t require a ton of extra computation to run the expiration process.


2. User-friendliness: If someone goes into a cave for five years and comes back, they should not lose access to their ETH, ERC-20 tokens, NFTs, CDP positions…


3. Developer-friendliness: Developers shouldn’t have to switch to a completely unfamiliar mental model. Also, applications that are currently ossified and not updated should continue to work fine.


It is easy to solve the problem if we ignore these goals. For example, you could have each state object also store an expiry-date counter (which can be extended by burning ETH, and which could be extended automatically any time the object is read or written), and have a process that loops through the state to remove expired state objects. However, this introduces extra computation (and even extra storage requirements), and it certainly does not meet the user-friendliness requirement. It is also hard for developers to reason about edge cases involving stored values sometimes being reset to zero. If you set the expiry timer at the scope of the whole contract, this technically makes life easier for developers, but it makes the economics harder: developers would have to think about how to "pass on" the ongoing storage costs to their users.
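A minimal sketch of that naive design (the types and constants below are hypothetical) makes the failure mode visible: the expiry pass must touch the entire state, exactly the kind of background computation the efficiency goal rules out:

```python
from dataclasses import dataclass

@dataclass
class StateObject:
    value: bytes
    expires_at: int  # block number at which this object expires

RENT_PER_UNIT = 100  # hypothetical: blocks of lifetime bought per unit of ETH burned

state: dict[bytes, StateObject] = {}

def touch(key: bytes, now: int, eth_burned: int) -> None:
    """Reading/writing an object automatically extends its expiry by burning ETH."""
    obj = state[key]
    obj.expires_at = max(obj.expires_at, now) + eth_burned * RENT_PER_UNIT

def expire_pass(now: int) -> None:
    """The problem: this loop iterates over every state object, and users who
    return after a long absence find their objects silently reset."""
    for key in [k for k, o in state.items() if o.expires_at < now]:
        del state[key]
```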


These are problems that the Ethereum core development community has grappled with for years, with proposals such as "blockchain rent" and "regenesis". Ultimately, we combined the best parts of these proposals and converged on two categories of "least-bad known solutions":


· Partial state expiry proposals


· Address-period-based state expiry proposals


Partial state expiry


The partial state expiry proposals all follow the same principle: we split the state into chunks, and every node permanently stores a "top-level mapping" recording which chunks are empty and which are non-empty. The data within a chunk is only stored if that data has been accessed recently. There is a "resurrection" mechanism by which data in a chunk that is no longer stored can be brought back.


The main differences between these proposals are (i) how we define "recently", and (ii) how we define "chunk". One concrete proposal is EIP-7736, which builds on the "stem-and-leaf" design introduced for Verkle trees (though it is compatible with any form of stateless-friendly tree, such as a binary tree). In this design, the header, code, and storage slots that are adjacent to each other are stored under the same "stem". The data stored under one stem can be at most 256 * 31 = 7,936 bytes. In many cases, an account's entire header and code, as well as many key storage slots, will all be stored under the same stem. If the data under a given stem has not been read or written for 6 months, that data is no longer stored; instead, only a 32-byte commitment to it (a "stub") is kept. Future transactions accessing that data will need to "resurrect" it, providing a proof that checks against the stub.
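A toy model of the stub-and-resurrection idea follows; plain hashing stands in for the real Verkle/binary-tree commitment, and the names and time constant are illustrative:

```python
import hashlib

EXPIRY = 6 * 30 * 24 * 60 * 60  # ~6 months, in seconds

class Stem:
    def __init__(self, data: bytes, last_access: int):
        assert len(data) <= 256 * 31  # at most 7,936 bytes under one stem
        self.data, self.last_access, self.stub = data, last_access, None

    def maybe_expire(self, now: int) -> None:
        """If untouched for 6 months, drop the data and keep only a 32-byte stub."""
        if self.data is not None and now - self.last_access > EXPIRY:
            self.stub = hashlib.sha256(self.data).digest()
            self.data = None

    def resurrect(self, provided: bytes, now: int) -> None:
        """A later access must supply the data plus show it matches the stub."""
        if hashlib.sha256(provided).digest() != self.stub:
            raise ValueError("proof does not match stub")
        self.data, self.stub, self.last_access = provided, None, now
```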



There are other ways to implement similar ideas. For example, if account-level granularity is not fine-grained enough, we could make a scheme where each 1/2^32 portion of the tree is governed by a similar stem-and-leaf mechanism.


Incentives make this trickier: an attacker could force clients to permanently store a large amount of state by putting a large amount of data into a single subtree and sending a single transaction every year to "renew the tree". If you make the renewal cost proportional to the tree size (or make the renewal duration inversely proportional to it), then someone could harm other users by putting a large amount of data into the same subtree as them. One could try to bound both problems by making the granularity dynamic based on subtree size: for example, each run of 2^16 = 65,536 consecutive state objects could be treated as one "group". However, these ideas are more complex; the stem-based approach is simple, and its incentives line up, since typically all the data under a stem is related to the same application or user.


Address-period-based state expiry proposals


What if we wanted to avoid any permanent state growth at all, even the 32-byte stubs? This is a hard problem because of resurrection conflicts: what if a state object is deleted, later EVM execution puts another state object in exactly the same place, and then someone who cared about the original object comes back and tries to restore it? With partial state expiry, the "stub" prevents new data from being created in its place. With full state expiry, we cannot store even the stub.


The address-period-based design is the best-known idea for solving this problem. Instead of having one state tree store the entire state, we have an ever-growing list of state trees, and any state that is read or written is saved into the most recent tree. A new empty tree is added once per epoch (e.g., one year). Older trees are frozen. Full nodes store only the two most recent trees. If a state object has not been touched for two epochs and thus falls into an expired tree, it can still be read or written, but the transaction must provide a Merkle proof for it - and once proven, a copy is saved again into the latest tree.



A key idea that makes this user- and developer-friendly is the concept of address periods. An address period is a number that is part of an address. The key rule is that an address with address period N can only be read or written during or after period N (i.e., when the list of state trees reaches length N). If you are saving a new state object (e.g., a new contract, or a new ERC-20 balance), and you make sure to put it into a contract whose address period is N or N-1, then you can save it immediately, without having to provide a proof that there was nothing there before. Any additions or edits to state in older address periods, on the other hand, do require a proof.


This design preserves most of Ethereum's current properties, requires no additional computation, allows applications to be written almost exactly as they are today (ERC-20 contracts would need to be rewritten so that balances of addresses with address period N are stored in a child contract that itself has address period N), and solves the "user goes into a cave for five years" problem. However, it has one big problem: addresses need to be extended beyond 20 bytes to accommodate address periods.
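Here is a toy model of the design just described: one state tree per epoch, writes land in the newest tree, and objects left behind in expired trees need a Merkle proof to be resurrected. The proof check is stubbed out, and all names are illustrative rather than a spec:

```python
class EpochedState:
    def __init__(self):
        self.trees = [{}]  # trees[N] = state tree of epoch N

    def new_epoch(self) -> None:
        self.trees.append({})  # e.g. once a year; full nodes keep only the last two in full

    def write(self, address_period: int, key, value, proof=None) -> None:
        current = len(self.trees) - 1
        assert address_period <= current, "address period N usable only during/after epoch N"
        recent = self.trees[max(0, current - 1):]  # the two trees full nodes store
        if not any(key in t for t in recent):
            # The object, if it ever existed, sits in a frozen tree:
            # resurrection requires a proof against that tree's root.
            assert proof is not None and self._verify(proof), "resurrection needs a Merkle proof"
        self.trees[current][key] = value  # the fresh copy lives in the newest tree

    def _verify(self, proof) -> bool:
        return True  # stand-in for a real Merkle proof check
```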


Address space extension


One proposal is to introduce a new 32-byte address format that includes a version number, address period number, and extended hash.


0x01 (red) 0000 (orange) 000001 (green) 57aE408398dF7E5f4552091A69125d5dFcb7B8C2659029395bdF (blue)


The red part is the version number. The four orange zeros are reserved blank space that could accommodate a shard number in the future. The green part is the address period number. The blue part is the 26-byte hash.
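In code, the layout above splits cleanly at byte boundaries (1 + 2 + 3 + 26 = 32). This is a sketch of the proposal as described, not a finalized format:

```python
def parse_extended_address(addr: bytes) -> dict:
    """Split the proposed 32-byte address into its fields."""
    assert len(addr) == 32
    return {
        "version": addr[0],                                  # red: e.g. 0x01
        "reserved": addr[1:3],                               # orange: room for e.g. a shard number
        "address_period": int.from_bytes(addr[3:6], "big"),  # green
        "hash": addr[6:],                                    # blue: 26-byte hash
    }

# The example address above parses to version 1, address period 1:
addr = bytes.fromhex(
    "010000000001"
    "57aE408398dF7E5f4552091A69125d5dFcb7B8C2659029395bdF")
print(parse_extended_address(addr)["address_period"])  # 1
```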


The key challenge here is backward compatibility. Existing contracts are designed around 20-byte addresses and often use strict byte packing techniques that explicitly assume that addresses are exactly 20 bytes long. One idea to solve this problem involves a conversion mapping where old-style contracts interacting with new-style addresses will see the 20-byte hash value of the new-style address. However, there are significant complexities in ensuring its security.


Address space shrinkage


Another approach goes in the opposite direction: we immediately ban some 2^128-sized subrange of addresses (e.g., all addresses starting with 0xffffffff), and then use that range to introduce addresses with address periods and 14-byte hashes.


0xffffffff000169125d5dFcb7B8C2659029395bdF


The main sacrifice of this approach is the security risk it introduces for counterfactual addresses: addresses that hold assets or permissions but whose code has not yet been published on-chain. The risk is that someone creates an address that claims to hold one piece of (not yet published) code, but also has another valid piece of code that hashes to the same address. Computing such a collision today requires 2^80 hashes; address space shrinkage would reduce that to an easily accessible 2^56 hashes.
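The two numbers follow from the birthday bound: finding two inputs whose addresses collide costs roughly 2^(bits/2) hashes. A quick check:

```python
def birthday_cost(hash_bytes: int) -> int:
    """Approximate hashes needed to find a collision on a hash of this width."""
    return 2 ** (hash_bytes * 8 // 2)

print(birthday_cost(20) == 2**80)  # True - today's 20-byte addresses
print(birthday_cost(14) == 2**56)  # True - 14-byte hashes after shrinkage
```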


The key risk area - counterfactual addresses for wallets not held by a single owner - is relatively rare today, but will likely become more common as we move into a multi-L2 world. The only solution is to simply accept this risk, identify all the common use cases where it could go wrong, and come up with effective workarounds.


What is the connection with existing research?


Early proposals:

· Blockchain rent;

· Regenesis;

· Ethereum state size management theory;

· Some possible paths to statelessness and state expiry;


Partial state expiry proposals:

· EIP-7736;


Address space extension documents:

· Original proposal;

· Ipsilon comment;

· Comments on the blog post;

· What would break if we lose collision resistance.


What else needs to be done, what are the tradeoffs?


I see four possible paths forward:


· We go stateless and never introduce state expiry. The state grows forever (albeit slowly: we may not see it exceed 8 TB for decades), but it only needs to be held by a relatively specialized class of users: not even PoS validators would need state.


One function that needs access to parts of the state is inclusion list generation, but we can accomplish this in a decentralized way: each user is responsible for maintaining the part of the state tree that contains their own accounts. When users broadcast a transaction, they also broadcast proofs of the state objects accessed during the verification step (this works for both EOAs and ERC-4337 accounts). Stateless validators can then combine these proofs into a proof for the entire inclusion list.


· We do partial state expiry and accept a much lower, but still non-zero, rate of permanent state-size growth. This outcome is arguably analogous to the history expiry proposals involving peer-to-peer networks, which accept a much lower, but still non-zero, rate of permanent history-storage growth, with each client storing a low but fixed percentage of the history data.


· We do state expiry via address space extension. This will involve a multi-year process to ensure that the address-format transition approach is effective and safe, including for existing applications.


· We do state expiry via address space contraction. This will involve a multi-year process to ensure that all security risks involving address collisions, including cross-chain cases, are handled.


The important point is that, whether or not state expiry schemes that rely on address format changes are ever implemented, the hard questions about address space extension and contraction must eventually be answered. Today, generating an address collision requires about 2^80 hashes, a computational load that is already feasible for extremely well-resourced actors: a GPU can perform about 2^27 hashes per second, so running one for a year yields about 2^52, and all of the world's roughly 2^30 GPUs could compute a collision in about a quarter of a year; FPGAs and ASICs can accelerate this further. In the future, such attacks will be within reach of more and more actors. Therefore, the actual cost of implementing full state expiry may not be as high as it seems, since we have to solve this very challenging address problem anyway.
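A quick check of that order-of-magnitude arithmetic (all figures are the article's estimates, not measurements):

```python
hashes_per_gpu_per_sec = 2**27
seconds_per_year = 365 * 24 * 3600                            # ~2**25
per_gpu_per_year = hashes_per_gpu_per_sec * seconds_per_year  # ~2**52
world_gpus = 2**30
target = 2**80                                                # collision cost today

years = target / (per_gpu_per_year * world_gpus)
print(f"~{years:.2f} years")  # ~0.27, i.e. about a quarter of a year
```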


How does it interact with other parts of the roadmap?


Doing state expiration may make it easier to transition from one state tree format to another, because no conversion process is required: you can simply start creating a new tree with the new format, and then do a hard fork to convert the old tree. So, while state expiration is complex, it does have the benefit of simplifying other aspects of the roadmap.


Feature cleanup


What problem does it solve?


One of the key prerequisites for security, accessibility, and credible neutrality is simplicity. If a protocol is beautiful and simple, it is less likely to have bugs. It increases the chances that new developers can participate in any part of it. It is more likely to be fair and easier to defend against special interests. Unfortunately, protocols, like any social system, become more complex over time by default. If we do not want Ethereum to fall into a black hole of ever-increasing complexity, we need to do one of two things: (i) stop making changes and ossify the protocol, or (ii) become able to actually remove features and reduce complexity. A middle path is also possible: making fewer changes to the protocol while removing at least a little complexity over time. This section discusses how to reduce or eliminate complexity.


What is it and how does it work?


There is no major single fix that will reduce the complexity of the protocol; the nature of the problem is that there are many small fixes.


One example that has been largely completed, and can serve as a blueprint for how to approach the others, is the removal of the SELFDESTRUCT opcode. SELFDESTRUCT was the only opcode that could modify an unlimited number of storage slots within a single block, requiring clients to implement significantly more complexity to avoid DoS attacks. The opcode's original purpose was to enable voluntary state clearing, allowing the state size to decrease over time. In practice, few people ended up using it. In the Dencun hard fork, the opcode was weakened to only allow self-destructing accounts created within the same transaction. This solves the DoS problem and allows a significant simplification of client code. In the future, it may make sense to eventually remove the opcode entirely.


Some key examples of protocol simplification opportunities that have been identified so far include the following. First, some examples outside of the EVM; these are relatively non-invasive and therefore easier to reach consensus on and implement in a shorter time.


· RLP → SSZ conversion: Originally, Ethereum objects were serialized using an encoding called RLP. RLP is untyped and needlessly complex (see the minimal sketch after this list). Today, the beacon chain uses SSZ, which is significantly better in many ways, including supporting not just serialization but also hashing. Eventually, we hope to get rid of RLP entirely and move all data types into SSZ structures, which in turn would make upgrades much easier. Current EIPs include [1] [2] [3].


· Removal of Legacy Transaction Types: There are too many transaction types today, and many of them may be removed. A more moderate alternative to complete removal is an account abstraction feature, through which smart accounts can include code to handle and validate legacy transactions if they wish.


· LOG reform: Logs create bloom filters and other logic that add complexity to the protocol but are not actually used by clients because they are too slow. We could remove these features and instead work on alternatives, such as out-of-protocol decentralized log-reading tools built with modern technology like SNARKs.


· Eventually remove the Beacon Chain Sync Committee mechanism: The Sync Committee mechanism was originally introduced to enable light client verification for Ethereum. However, it significantly increases the complexity of the protocol. Eventually, we will be able to verify the Ethereum consensus layer directly using SNARKs, which will eliminate the need for a dedicated light client verification protocol. Potentially, changes in consensus could enable us to remove the Sync Committee earlier by creating a more "native" light client protocol that involves verifying signatures from a random subset of Ethereum consensus validators.


· Data format unification: Today, execution state is stored in Merkle Patricia trees, consensus state is stored in SSZ trees, and blobs are committed via KZG commitments. In the future, it would make sense to have a single unified format for block data and a single unified format for state. These formats would satisfy all important needs: (i) simple proofs for stateless clients, (ii) serialization and erasure coding of data, (iii) standardized data structures.


· Removal of Beacon Chain Committees: This mechanism was originally introduced to support a specific version of execution sharding. Instead, we ended up with sharding via L2 and blobs. As a result, committees are unnecessary, and moves are being made to eliminate them.


· Removing mixed endianness: The EVM is big-endian, while the consensus layer is little-endian. It might make sense to re-coordinate and make everything one or the other (probably big-endian, since the EVM is harder to change).
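As a rough illustration of the first item above, here is a minimal RLP encoder following the rules in the Ethereum Yellow Paper. Note how the format knows only byte strings and nested lists, with all type information left to the application - the "untyped" property the item refers to:

```python
def rlp_encode(item) -> bytes:
    """Minimal RLP encoder for byte strings and (nested) lists."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item                      # a single low byte encodes as itself
        return _with_length(item, 0x80)      # string prefix space
    payload = b"".join(rlp_encode(x) for x in item)
    return _with_length(payload, 0xC0)       # list prefix space

def _with_length(payload: bytes, offset: int) -> bytes:
    if len(payload) <= 55:
        return bytes([offset + len(payload)]) + payload
    # long form: prefix encodes the length of the length
    len_bytes = len(payload).to_bytes((len(payload).bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(len_bytes)]) + len_bytes + payload

print(rlp_encode([b"cat", b"dog"]).hex())  # c88363617483646f67
```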


Now, some examples in the EVM:


· Simplification of the gas mechanism: The current gas rules are not well optimized to give clear bounds on the resources required to validate a block. Key examples include (i) storage read/write costs, which are intended to limit the number of reads/writes in a block but are currently quite arbitrary, and (ii) the memory expansion rules, which currently make it difficult to estimate the EVM's maximum memory consumption. Proposed fixes include a stateless gas cost change (which would unify all storage-related costs into one simple formula) and a memory pricing proposal.


· Removing precompiles: Many of the precompiles Ethereum has today are both needlessly complex and rarely used, and precompiles account for a large share of consensus failures despite being barely used by any applications. Two ways to deal with this are (i) simply removing a precompile, and (ii) replacing it with a piece of (inevitably more expensive) EVM code that implements the same logic. This draft EIP proposes doing this for the identity precompile as a first step; later, RIPEMD-160, MODEXP, and BLAKE may become candidates for removal.


· Removing gas observability: Make it so that EVM execution can no longer see how much gas it has left. This would break some applications (most notably sponsored transactions), but would make future upgrades easier (e.g., to more advanced versions of multidimensional gas). The EOF specification already makes gas unobservable, but to yield a protocol simplification, EOF would need to become mandatory.


· Improvements to static analysis: EVM code is difficult to statically analyze today, mainly because jumps can be dynamic. This also makes it harder to build optimized EVM implementations (which pre-compile EVM code into other languages). We could fix this by removing dynamic jumps (or making them more expensive, e.g., a gas cost linear in the total number of JUMPDESTs in a contract); the static/dynamic distinction is sketched below. EOF does exactly this, although gaining the protocol simplification benefit from it would require making EOF mandatory.
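A small sketch of what "dynamic jumps" means in practice. The opcode values are the real EVM ones; the PUSH-then-JUMP heuristic is the standard one analyzers use: a jump whose target was just pushed as a constant is statically known, while anything else could land on any JUMPDEST in the contract:

```python
JUMP, JUMPI = 0x56, 0x57
PUSH1, PUSH32 = 0x60, 0x7F

def classify_jumps(code: bytes) -> tuple[list[int], list[int]]:
    """Return (static, dynamic) program counters of JUMP/JUMPI instructions."""
    static, dynamic = [], []
    prev_push_target = None
    pc = 0
    while pc < len(code):
        op = code[pc]
        if PUSH1 <= op <= PUSH32:
            n = op - PUSH1 + 1  # PUSHn carries n immediate bytes
            prev_push_target = int.from_bytes(code[pc + 1:pc + 1 + n], "big")
            pc += 1 + n
            continue
        if op in (JUMP, JUMPI):
            # target known at analysis time only if it was just PUSHed
            (static if prev_push_target is not None else dynamic).append(pc)
        prev_push_target = None
        pc += 1
    return static, dynamic

print(classify_jumps(bytes.fromhex("600456")))  # PUSH1 0x04; JUMP -> ([2], [])
```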


What is the connection with existing research?


· Next steps for cleanup;

· SELFDESTRUCT;

· SSZ-ification EIPs: [1] [2] [3];

· Stateless gas cost changes;

· Linear memory pricing;

· Precompile removal;

· Bloom filter removal;

· A method for off-chain secure log retrieval using incremental verifiable computation (read: recursive STARKs).


What else needs to be done, and what are the trade-offs?


The main trade-off in this kind of feature simplification is (i) how much and how fast we simplify versus (ii) backwards compatibility. Ethereum's value as a chain comes from it being a platform where you can deploy an application and be confident that it will still work years later. At the same time, that ideal can be taken too far and, to paraphrase William Jennings Bryan, "crucify Ethereum upon a cross of backwards compatibility". If there are only two applications in all of Ethereum that use a given feature, one of which has had zero users for years while the other is almost completely unused and holds a total of $57 in value, then we should remove the feature and, if necessary, pay the victims $57 out of pocket.


The broader social problem is creating a standardized pipeline for making non-urgent, backwards-compatibility-breaking changes. One way to approach this is to examine and extend existing precedents, such as the SELFDESTRUCT process. The pipeline looks something like this:


1. Start talking about removing feature X;


2. Perform an analysis to determine the impact on applications of removing X, depending on the outcome: (i) abandon the idea, (ii) proceed as planned, or (iii) determine a revised "least disruptive" way to remove X and proceed;


3. Create a formal EIP to deprecate X. Make sure popular higher-level infrastructure (e.g., programming languages, wallets) respects this and stops using the feature;


4. Finally, actually remove X.


There should be a multi-year pipeline between steps 1 and 4, with clear instructions on which projects are in which step. At this point, there is a trade-off between the dynamism and speed of the feature removal process and being more conservative and devoting more resources to other areas of protocol development, but we are still far from the Pareto frontier.


EOF


The main set of changes proposed for the EVM is the EVM Object Format (EOF). EOF introduces a large number of changes, such as disabling gas observability and code observability (i.e., no CODECOPY) and allowing only static jumps. The goal is to allow the EVM to be upgraded further, in ways that provide stronger guarantees, while maintaining backwards compatibility (since the pre-EOF EVM would still exist).


The advantage of this is that it creates a natural path for adding new EVM features and encourages migration to a stricter EVM with stronger guarantees. The disadvantage is that it significantly increases the complexity of the protocol unless we can find a way to eventually deprecate and remove the old EVM. A major question is: what role does EOF play in the EVM simplification proposal, especially if the goal is to reduce the complexity of the entire EVM?


How does it interact with the rest of the roadmap?


Many of the “Improvement” suggestions in the rest of the roadmap are also opportunities to simplify older features. To repeat some of the examples above:


· Switching to single-slot finality gives us the opportunity to remove committees, redesign economics, and make other proof-of-stake related simplifications.


· Fully implementing account abstraction allows us to remove a lot of existing transaction processing logic, moving it into “default account EVM code” that all EOAs can replace.


· If we move the Ethereum state to a binary hash tree, this could be harmonized with a new version of SSZ, so that all Ethereum data structures can be hashed in the same way.


More Radical Approach: Move Large Parts of the Protocol into Contract Code


A more radical strategy for simplifying Ethereum is to keep the protocol unchanged, but move large parts of it away from protocol functionality and into contract code.


The most extreme version would be to have Ethereum L1 “technically” just the beacon chain, and introduce a minimal VM (e.g. RISC-V, Cairo, or something even more minimal specifically for proof systems), allowing others to create their own rollups. The EVM would then become the first of these rollups. Ironically, this is exactly the same outcome as the 2019-20 execution environment proposal, although SNARKs make it more feasible to actually implement.



A more moderate approach would be to keep the relationship between the beacon chain and the current Ethereum execution environment unchanged, but do an in-place swap of the EVM. We could choose RISC-V, Cairo, or some other VM as the new “official Ethereum VM”, and then force all EVM contracts to be converted to the new VM code that interprets the logic of the original code (either by compilation or interpretation). In theory, this could even be done with the “target VM” being a version of EOF.


Original link

