Dialogue with the founders of MegaETH and Monad: Reshaping the future of Ethereum

24-08-22 17:40
Original source: Bankless
Original text compilation and translation: TechFlow



Guests: Keone Hon, founder of Monad; Lei Yang, co-founder of Mega ETH


Host: Ryan Sean Adams, co-founder of Bankless; David Hoffman, co-founder of Bankless


Podcast source: Bankless


Original title: Mega ETH vs Monad: The Battle To Shape Ethereum's Future


Broadcast date: August 21, 2024


Background information


In this episode, we will explore the frontier of the EVM through two blockchains with very different architectures: Monad and MegaETH.


Monad is a Layer 1 project that aims to achieve a throughput of over 10,000 transactions per second by redesigning the execution and consensus layers.


Mega ETH is a Layer 2 project focused on performance optimization, with the goal of achieving over 100,000 Ethereum transactions per second.


This podcast will discuss the Ethereum Virtual Machine (EVM), specifically how Mega ETH and Monad plan to make Ethereum even faster.


We will answer the following questions:


1. Which is faster, more decentralized, and more censorship-resistant, Monad or MegaETH?


Technical Architecture:


· Monad improves performance by introducing technologies such as Monad DB, optimistic parallel execution, asynchronous execution, and Monad BFT.


· Mega ETH utilizes Layer 2 architecture to specialize nodes, reduce execution redundancy, and improve performance through real-time compilation and parallel execution of EVM.


Decentralization and Performance:


· Monad emphasizes decentralization through low hardware requirements, allowing anyone to run a full node.


· Mega ETH ensures censorship tolerance and execution correctness through Layer 2 architecture and economic incentives.


2. How did Monad and MegaETH build a large community before launching the testnet and mainnet?


Monad:


Monad attracts members by creating and promoting a unique community culture. For example, they have an event called "Monad Running Club" that encourages community members to run together. In addition, some interesting mascots and stories have emerged in the Monad community, such as "Molandak" and "Salmonad", which are spontaneously created and promoted by community members.


Monad also organizes a series of online and offline activities, such as the language learning event Molingo, to increase the participation and sense of belonging of community members.


Community members are encouraged to participate in different aspects of the project, including organizing events, contributing artwork, etc. This bottom-up participation method enhances the cohesion of the community.


Mega ETH:


Mega ETH's branding revolves around "Mega Mafia", its flagship user and incubation program. By introducing a brand image like "Mega Mafia", they have successfully attracted developers and founders interested in high-performance blockchain applications.


By holding technical discussions and sharing sessions, they attract technicians interested in blockchain performance optimization. In addition, they also publish content through social media platforms to explain technical details and industry anecdotes to attract a wider audience.


By incubating projects and supporting innovative applications, Mega ETH attracts developers who want to implement new ideas on the blockchain.


Both Monad and Mega ETH actively use social media platforms such as Twitter and Discord to interact with the community and share project progress and technical insights. This transparent and open communication method has helped them build a trusting and loyal user base.


3. How do they view the criticism that "developing on EVM is like building a house on quicksand"? Will this affect Solana's value proposition?


Keone's View:


· Technical Maturity and Ecosystem: Although EVM may have some technical shortcomings, it has become a mature standard with strong network effects and broad tool support. This makes it easier for developers to build and deploy applications. Monad aims to enhance the performance and scalability of EVM through technical innovation rather than completely replace it.


· Impact on Solana: Keone believes that despite Solana's advantages in handling high throughput, EVM's widespread adoption and the maturity of the ecosystem make it still very attractive among developers. Therefore, continued improvements to EVM will not significantly weaken Solana's value proposition, but will consolidate Ethereum's position in the developer community.


Lei Yang’s Viewpoint:


· Adaptability and Improvement: Lei pointed out that the flexibility and adaptability of the EVM enable it to evolve to meet new needs. By introducing Layer 2 solutions, Mega ETH is committed to improving the performance of the EVM, enabling it to support more complex and efficient applications.


· Competition with Solana: Lei believes that while Solana is competitive on performance, the widespread adoption and compatibility of the EVM give it an advantage in cross-chain interoperability and developer support. By enhancing the capabilities of the EVM, Mega ETH aims to provide a more attractive development platform while remaining competitive with other blockchains.


Here is the main content of this conversation:


Monad Introduction


David: Keone, please explain Monad to us and what different attempts it makes in the blockchain field.


Keone:


Thank you for having me. I'm Keone Hon, co-founder and CEO of Monad Labs. We're building Monad, a reimagining of Ethereum that redesigns the execution and consensus layers from the ground up to introduce four major improvements and more broadly provide a high-performance version of Ethereum that delivers over 10,000 transactions per second throughput. Monad does this by introducing a whole new database called Monad DB, built from the ground up; by introducing optimistic parallel execution, which enables many transactions to be executed in parallel; by introducing asynchronous execution, which enables consensus and execution to run in different swimlanes, which drastically increases the execution budget; and finally, by introducing Monad BFT, a high-performance consensus mechanism that allows hundreds of fully globally distributed nodes to stay in sync. So overall, this is a very low-level effort applying systems engineering techniques to build a really high-performance reimagining of Ethereum.


David: There's a lot of anticipation for Monad, especially after the Solana VM. There was a lot of excitement about parallelism with Solana and the SVM, but some very smart people made a technical counterargument that the real bottleneck was not parallel execution, but parallel read access to the database. Can you walk us through why parallelization of virtual machines is cool in the context of blockchain architecture? And how it comes down to the need for parallel read access to the database, which you have?


Keone:


That's a great question. I think it's good for people to intuitively understand parallel execution, because our modern computers obviously have many processors and run many things in parallel. For example, I have hundreds of tabs open in my browser, which is actually a bad habit. But you know, I have many different browser tabs, Spotify, Discord, all running in parallel. Right now Ethereum and other EVM blockchains are single-threaded and completely serial, which is a bit crazy. So parallel execution is about taking advantage of multiple threads or multiple processors to run many jobs in parallel on the host, and this is one of the improvements that can finally unlock performance.
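
The optimistic parallel execution Keone mentions can be sketched in a few lines of Python. This is a toy model for illustration only; the function names and the conflict-detection rule are invented here and are not Monad's actual engine:

```python
# Toy sketch of optimistic parallel execution: run all transactions against
# the same starting snapshot, record what each one read, then commit them in
# order, re-executing any transaction whose reads were invalidated by an
# earlier commit.

def transfer(sender, receiver, amount):
    """Build a transaction as a closure over (sender, receiver, amount)."""
    def apply(state, reads):
        reads[sender] = state[sender]          # record every value we read
        reads[receiver] = state[receiver]
        return {sender: state[sender] - amount,
                receiver: state[receiver] + amount}
    return apply

def execute_optimistically(state, txs):
    # Phase 1: execute every tx against the initial snapshot (conceptually
    # in parallel; sequential here for clarity).
    results = []
    for tx in txs:
        reads = {}
        writes = tx(state, reads)
        results.append((tx, reads, writes))

    # Phase 2: commit serially; re-execute any tx whose reads went stale.
    committed = dict(state)
    for tx, reads, writes in results:
        if all(committed[k] == v for k, v in reads.items()):
            committed.update(writes)                 # reads still valid
        else:
            committed.update(tx(committed, {}))      # conflict: re-run
    return committed

state = {"alice": 100, "bob": 50, "carol": 0}
txs = [transfer("alice", "bob", 10),    # touches alice, bob
       transfer("bob", "carol", 20)]    # conflicts on bob -> re-executed
final = execute_optimistically(state, txs)
print(final)  # {'alice': 90, 'bob': 40, 'carol': 20}
```

The second transaction's recorded read of `bob` is invalidated by the first commit, so it is re-executed serially; non-conflicting transactions keep their parallel results.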


But when you benchmark and look at the actual costs, the biggest bottleneck in execution is not CPU time, not executing individual instructions. The biggest bottleneck is actually state access, because every smart contract relies on some persistent state associated with that contract. For example, when you do a Uniswap swap, say in Uni v2, you need the balances of the two assets in the pool, and you need to be able to look up those balances in order to actually perform the swap and calculate the new balances. That requires reading some data from disk. So the biggest bottleneck in execution is actually state access, and parallelizing just the computation without also parallelizing the reads from the database is going to be a relatively small improvement, especially compared to the much larger improvement you can get by allowing the database to be read in parallel.
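
Keone's argument is essentially Amdahl's law, and it can be checked with back-of-the-envelope arithmetic. The 0.9 ms / 0.1 ms split below is an invented illustrative number, not a measured Monad figure:

```python
# If most of a transaction's wall-clock cost is state reads, parallelizing
# only the compute barely helps; parallelizing the reads too is what pays.

def speedup(read_ms, compute_ms, workers, parallel_reads):
    serial_total = read_ms + compute_ms
    if parallel_reads:
        parallel_total = (read_ms + compute_ms) / workers
    else:
        parallel_total = read_ms + compute_ms / workers  # reads stay serial
    return serial_total / parallel_total

# Suppose a tx spends 0.9 ms reading state from disk and 0.1 ms computing.
print(speedup(0.9, 0.1, workers=16, parallel_reads=False))  # ~1.1x
print(speedup(0.9, 0.1, workers=16, parallel_reads=True))   # 16x
```

With the assumed 90/10 split, sixteen workers buy only about a 1.1x speedup unless the database reads are parallelized as well.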


That’s exactly what our team has done with Monad DB, a database built from the ground up specifically optimized for efficiently storing Ethereum Merkle tree data. It’s a great effort, and people always say that common computer science advice is not to write your own database because it’s a huge amount of work. But in this case, it’s really necessary and impactful because we have high-value state, the Ethereum state. We need to access this data as quickly as possible.


How does Monad fit into Ethereum?


Ryan: Keone, you mentioned bringing this technology to Ethereum, but Monad still plans to launch its own alternative Layer 1, which we might call “Alt Layer 1” rather than Layer 2. So how are you bringing this technology to Ethereum? And in what ways are you not bringing it to Ethereum? You’re not extending Ethereum as a Layer 2, and you’re not doing it in the mainnet, but as an alternative chain?


Keone:


Monad is a pioneering environment designed to demonstrate the capabilities of these different architectural improvements that our team believes will ultimately be needed in Ethereum L1. I think the discussion about Layer 1 and Layer 2 is a topic that evolves over time. When we first started in 2022, the general consensus was that Layer 2s all interacted with Ethereum in the same way, leveraging Ethereum for state commitments and data availability. You know, this was a model that we chose to interact with Ethereum. Now in 2024, how Layer 2s interact with Ethereum and to what extent they directly leverage Ethereum's services is also evolving. Our team's belief is that there are many different ways to contribute to Ethereum as a whole. By focusing on completely orthogonal directions of Ethereum scaling research that we believe are really needed and unexplored, we can also make important improvements and contributions to the Ethereum ecosystem as a whole.


What is Mega ETH?


Ryan: Now let’s talk about Mega ETH. Lei Yang, can you tell us what this project is? And what is the significance of Mega ETH?


Lei Yang:


Of course. Hi, I'm Lei Yang, CTO and co-founder of Mega ETH. I just got my PhD from MIT, worked in blockchain consensus for six years, and published papers on blockchain security, performance, and networking.


Now I'm building Mega ETH. Mega ETH is a performance-optimized blockchain that is fully compatible with Ethereum. A key difference here is that we are unhesitatingly performance-oriented and focused on how to achieve this.


First, we chose to be a Layer 2 of Ethereum, because we believe this is the best architecture for performance engineering. We are leveraging the Layer 2 architecture to build what we call the "first real-time blockchain." In other words, we can achieve more than 100,000 real Ethereum transactions per second, not just payments, with block times of 1 to 10 milliseconds. What we hope to achieve is to make DApps as responsive and functional as normal Web 2 applications, while providing users with the benefits they expect from DApps, namely correctness of execution, freedom from censorship, and so on.


In terms of technical optimizations: first, being a Layer 2 allows us to specialize nodes. In Mega, we minimize execution redundancy so that at any time there is only one active sequencer executing each transaction. Other nodes, while subscribing to state updates and working to stay current, do not need to execute all transactions. This allows us to significantly improve performance by raising the hardware configuration of the sequencer that executes all transactions, while keeping the hardware requirements low for full nodes that only care about the current state.
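
The node specialization Lei describes, one sequencer executing while full nodes merely apply state updates, can be sketched as follows. This is a toy model; the class names and diff format are invented for illustration and are not MegaETH's protocol:

```python
# The sequencer does the expensive work (executing transactions) and
# broadcasts compact state diffs; a full node keeps up simply by applying
# the diffs, never re-executing anything.

class Sequencer:
    def __init__(self, state):
        self.state = dict(state)

    def execute(self, tx):
        """Run the (expensive) transaction and return only the diff."""
        sender, receiver, amount = tx
        diff = {sender: self.state[sender] - amount,
                receiver: self.state.get(receiver, 0) + amount}
        self.state.update(diff)
        return diff

class FullNode:
    def __init__(self, state):
        self.state = dict(state)

    def apply_diff(self, diff):
        """Cheap: just overwrite the changed keys."""
        self.state.update(diff)

genesis = {"alice": 100, "bob": 0}
seq = Sequencer(genesis)
node = FullNode(genesis)

for tx in [("alice", "bob", 30), ("bob", "alice", 5)]:
    node.apply_diff(seq.execute(tx))

assert node.state == seq.state == {"alice": 75, "bob": 25}
```

The asymmetry is the point: the sequencer can run on powerful hardware while a full node only pays the cost of dictionary updates to stay in sync.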


On the sequencer side, we have some optimizations, including a new data structure that is functionally equivalent to the Merkle Patricia trie but uses the actual hardware (SSDs and memory) more efficiently. We compile Ethereum smart contracts just in time, converting bytecode to native assembly code, and we also execute the EVM in parallel. So those are some of the highlights, and we call the final product a "real-time blockchain."
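
The idea of translating bytecode once instead of decoding it on every run can be illustrated with a toy stack machine. The opcodes and approach here are invented for the example; real EVM-to-native compilation is far more involved:

```python
# Each opcode is translated once into a Python closure, so running the
# "contract" repeatedly skips the decode/dispatch loop of an interpreter.

def compile_program(bytecode):
    ops = []
    for instr in bytecode:
        if instr[0] == "PUSH":
            value = instr[1]
            ops.append(lambda stack, v=value: stack.append(v))
        elif instr[0] == "ADD":
            ops.append(lambda stack: stack.append(stack.pop() + stack.pop()))
        elif instr[0] == "MUL":
            ops.append(lambda stack: stack.append(stack.pop() * stack.pop()))

    def run():
        stack = []
        for op in ops:          # no per-instruction decoding at run time
            op(stack)
        return stack[-1]
    return run

# (3 + 4) * 10, expressed as stack bytecode.
program = [("PUSH", 3), ("PUSH", 4), ("ADD",), ("PUSH", 10), ("MUL",)]
contract = compile_program(program)   # translate once...
print(contract())                     # ...execute many times -> 70
```

A native-code compiler goes further by emitting machine instructions rather than closures, but the win is the same: the translation cost is paid once, not on every execution.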


Similarities and Differences


Ryan: I was wondering what you think about the similarities and differences between Mega ETH and Monad? For the average person, or someone who is slightly familiar with the crypto space, the similarities are that you are both looking to scale the EVM and enable parallel execution. So, is it correct that Mega ETH uses a Layer 2 architecture, settles to Ethereum, and uses Ethereum as a data availability layer?


Lei Yang:


Actually, that's not entirely true. We use EigenLayer as the data availability layer, and we also settle to Ethereum. But there's a small nuance: we're not targeting parallelization as a flagship feature or flagship optimization. We're equally focused on single-threaded performance.


Ryan: So what do you think are the similarities and differences between the two projects?


Lei Yang:


I think you summarized the similarities very well. We're both focused on performance and trying to bring in the next generation of applications by providing rich resources. But I think the difference here is that, as I mentioned, we think differently about parallelization. I think if you want to build, say, 100 copies of Uniswap, there's enough block space in the EVM and the ETH ecosystem for you to use.


But what we actually need is entirely new applications. For these applications, our belief is that very high single-threaded performance is required, because that's the performance a single application can actually use at any moment. So in other words, we're really pushing the frontier of single-threaded performance. The optimizations I mentioned, like new data structures, just-in-time compilation of smart contracts, and the in-memory computing techniques I didn't mention, are all for single-threaded performance. And while we view parallelization as a second step, after achieving high enough single-threaded performance to support entirely new applications, in engineering and operational terms we're actually doing both at the same time.


Another difference is our explicit focus on performance. We chose to build a Layer 2 because we believe that's the optimal architecture for performance, allowing you to eliminate as much redundancy in execution and consensus as possible. So I want to highlight that as a key difference.


David: Keone, what do you think about this?


Keone:


I think one thing that's worth mentioning is that the goal of Monad and these individual architectural improvements are aimed at getting maximum performance out of minimal hardware requirements. I think it's really important from a decentralization perspective to enable anyone to run a node with commodity hardware. In order to achieve that, we need to make software improvements that allow us to get higher performance out of the hardware, rather than strictly relying on very high hardware requirements. So I think the two projects were probably very different in their original premise. In Monad, we're really focused on getting the most performance out of the hardware so that anyone can run a full node, access all the state, keep up with the network, and not have to trust, but verify directly.


Architectural Impact


David: Both Monad and Mega ETH have the same goal of a very fast EVM. But the ways you get there are almost diametrically opposed. Mega ETH has chosen a Layer 2 with a single sequencer or very few sequencers, while Monad is pursuing a Layer 1 blockchain with a very broad set of validators. Technically, your architectures match these end goals.


Since the end goal is the same, having a very fast EVM, do you think the applications that emerge on each blockchain will be similar? Or will each form a different ecosystem due to the different underlying architectures? How will the underlying architecture affect the application ecosystem built on each blockchain? Or will it not affect this?


Lei Yang:


I think it will be very different. First, as I mentioned, we have a singular focus on performance. This allows us to achieve lower latency and higher throughput. I want to emphasize the latency part in particular because, as you mentioned, we use a single sequencer. So the sequencer can execute transactions in a streaming fashion, which really minimizes transaction-feedback latency and block packaging time. In other words, we expect a feedback time of 1 millisecond from the time a transaction arrives at the sequencer to the time it is executed, packaged, and the state is updated.


Coming from my consensus research background, I don't think this is possible in any system where the consensus algorithm is in the critical path. For a consensus algorithm to work, messages must at minimum be delivered among all the nodes. If the nodes are globally distributed, those messages have to travel around the world. Even at the speed of light that takes a few hundred milliseconds, and it usually takes at least two or three rounds of messaging. This can result in latency of 600 to 700 milliseconds. That is the minimum latency for a Layer 1, or for a Layer 2 using a decentralized, consensus-based sequencer.
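
Lei's latency floor can be reproduced with rough arithmetic; the constants below are approximations chosen for illustration:

```python
# Light in optical fiber travels at roughly 2/3 of c, and a globally
# distributed consensus protocol needs several rounds of messaging between
# far-apart nodes before it can finalize anything.

C_FIBER_KM_S = 200_000          # ~2/3 the vacuum speed of light
HALF_EARTH_KM = 20_000          # antipodal distance along the surface

one_way_ms = HALF_EARTH_KM / C_FIBER_KM_S * 1000   # worst-case one-way hop
rounds = 3                                         # typical BFT round count
consensus_floor_ms = rounds * 2 * one_way_ms       # request + reply per round

print(one_way_ms)           # 100.0
print(consensus_floor_ms)   # 600.0
```

This simplistic model lands right at the 600 to 700 millisecond range Lei cites; real deployments are not fully antipodal, but queuing and routing overhead push in the other direction.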


So this low latency is very useful for certain applications. For example, imagine Minecraft: you don't want your character to wait 600 milliseconds before taking the next step. It will also be very interesting for high-frequency trading, as market makers and traders can choose to co-locate with our sequencer. So these kinds of real-time applications will be a unique part of our ecosystem.


David: Keone, what do you think about the same question?


Keone:


I'm thinking about the decentralized Minecraft scenario, because if you are far away from the sequencer, you won't have low latency.


Lei Yang:


Yes, of course. I want to distinguish two parameters. One is the tick time, or block time; in other words, the precision or resolution of actions, such as one action being able to happen every 1 millisecond. But when the user wants feedback, of course we are not breaking the laws of physics. The user's action must travel from the keyboard to the sequencer and back to the user's monitor. However, this is common in any kind of online MMO or RPG game, so this part of the delay is acceptable. What really matters is that if you introduce consensus into the critical path, you get three rounds of messaging around the world, resulting in latency of over 600 milliseconds, which is much higher than the typical latency between a user and a centralized sequencer.


David: Keone, what is your take on the ecosystem of applications built on Monad? Are you guys focused on any particular ones?


Keone:


I think the beauty of the EVM is that it's a dominant standard with amazing network effects. There are a lot of libraries built for the EVM, a lot of applications developed there, and a lot of applied cryptography research is done in the context of the EVM. So as a project that is completely bytecode EVM equivalent, Monad offers the best combination of performance and portability for developers who are already building on the EVM. Monad continues to do that while remaining decentralized. I think this will be a crossover between Monad and Mega ETH to some extent because our team believes that decentralized block production is very important from the perspective of censorship resistance and other properties that the crypto community cherishes. Even if, as you said, consensus overhead due to the speed of light is inevitable unless we can find ways to speed up the speed of light.


David: Keone, that is to say there is a certain responsibility when building Layer 1 because it comes with the spirit of crypto. What is the point of building Layer 1 if there is no censorship resistance? What is the point of building Layer 1 if you can't effectively decentralize the validator set? In addition to the powerful architectural improvements of Monad EVM and Monad DB, you have to do something in order to be called a legitimate blockchain project, which are things that we in the crypto industry value. Is that what you mean?


Keone:


Yes. This actually also comes from the social layer, reinforcing these values. Decentralization is important in terms of hardware requirements, the number of nodes participating in consensus, the distribution of equity weights, etc. These are all properties that must be evaluated. In addition, a strong social layer is needed to reinforce these values.


Decentralization - Monad


Ryan: The word decentralization is a sacred word in the crypto space, and decentralization and censorship resistance are exactly why many projects choose to deploy as Layer 2s. Keone, can you talk about Monad's position on decentralization? If you are going to achieve as much decentralization as possible, why not choose Layer 2 instead of Layer 1? For the typical Ethereum user, Ethereum has been tested for many years and is strongly decentralized; that is what it does best. In terms of the execution environment, its transaction processing capacity is about 10 to 15 transactions per second, which is not good enough in many respects. But it's almost as decentralized as Bitcoin, maybe even more so. So why give up all that and build a Layer 1?


Keone:


As you said, we optimized for decentralization in our design choices at Monad. First of all, I think it's important to point out that for any choice of network parameters, we should strive to get the maximum performance out of it. So if we have a network with 10,000 nodes that are completely globally distributed, the hardware requirements are a certain level. In the case of Ethereum, this is a bit of a joke, but I still say it, you have to be able to run it on a Raspberry Pi. So certain hardware requirements and number of nodes, we should strive to get the maximum performance out of those requirements. If the requirements are slightly higher, but still very reasonable, like in the case of Monad, 32GB of RAM, 2TB of SSD, 100Mbps bandwidth, and a relatively cheap CPU, we should try to get as much performance as possible from that hardware and network setup.


Ryan: Are those the exact requirements or is this a ballpark?


Keone: That's the exact set of parameters.


David: So this is basically something like a consumer-grade laptop, like a MacBook Pro, or a typical broadband consumer internet connection, that's the hardware requirements that you're describing, somewhere between 200 and 300 nodes.


Keone: Yeah, like a laptop you can buy at Costco.


Ryan: Do you think that's decentralized enough? Or how does it compare to how decentralized Ethereum is? Of course, it's not just about the nodes and the validators, but there are other participants associated with block creation, like the block builders, there's a whole supply chain around that. Would you say that Monad's design parameters are more decentralized than Ethereum? Or how would you have that discussion?


Keone:


I think what I'm trying to say is that for a given set of parameters, we should strive to get the maximum performance out of that setup, and the only way to do that is through software improvements. Some of the improvements our team has pioneered in Monad would also benefit other settings, like an L1 more like Ethereum, which currently has around 10,000 nodes; that's really just a different number of nodes and slightly different hardware requirements. The new database is not designed for a specific SSD size; it works with any SSD size. So at the end of the day, it's all about getting the maximum performance with very reasonable hardware requirements. My view is that if you constrain the network to only one node and don't constrain the hardware requirements, or allow very high RAM, you might get some different performance characteristics, maybe even for free, like not having to use SSDs and keeping all the state in memory.


So at the end of the day, it's the job of the engineering team to get the maximum performance out of any hardware setup anyway. But everyone always picks a certain hardware starting point and then squeezes as much performance out of it as possible. We think that's a very good anchor point because the hardware is still very reasonable and we can get a lot of performance out of it.


Decentralization - Mega


Ryan: Lei, what do you think about the keyword decentralization? What is Mega ETH's position on that? How is that similar or different than what we just heard?


Lei Yang:


Of course. First, I want to talk about performance per hardware unit. Mega ETH is also working hard to maximize the performance of each hardware unit, whether per GB of RAM, per CPU core, or per GHz of CPU frequency. If we could increase the hardware configuration without limit, then all of these optimizations could be skipped. Unfortunately, no hardware has unlimited CPU cores and memory; even the latest generation of top-of-the-line server hardware has its limits, which forces us to do hard engineering work to truly maximize efficiency. The right word is efficiency, that is, performance per hardware unit, achieved through things like compilation and new data structures.


Back to the question of decentralization. Actually, we also have lightweight nodes. Our full nodes need 8 to 16 GB of memory, a 1 TB SSD, and a consumer-grade (not server-grade) 4 to 8 core CPU, so this is roughly the same configuration as an Ethereum execution node. But the key difference is that the sequencer does most of the work and executes all transactions, and being a Layer 2 allows us to keep the hardware requirements for full nodes low, so any node that wants the latest state can get it. In that sense, being a Layer 2 actually improves decentralization. In addition, validation nodes can run directly on a Raspberry Pi, with no storage, dynamically fetching the state and transactions required for verification from the network. Therefore, they can verify the blockchain in small chunks.
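
Stateless verification of this kind is typically built on Merkle proofs. Here is a minimal sketch, assuming a simple binary Merkle tree over a power-of-two number of leaves; MegaETH's actual proof format is not specified in the conversation:

```python
# A stateless validator stores only the state root and checks a Merkle
# branch fetched from the network for each value it needs to verify.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, index):
    """Collect the sibling hashes on the path from leaf to root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sibling = index ^ 1
        proof.append((level[sibling], index % 2 == 1))  # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

accounts = [b"alice:75", b"bob:25", b"carol:0", b"dave:10"]
root = merkle_root(accounts)              # all a light validator stores
proof = prove(accounts, 1)                # fetched on demand from the network
print(verify(root, b"bob:25", proof))     # True
print(verify(root, b"bob:9999", proof))   # False
```

The validator's storage is one 32-byte root; everything else arrives on demand with a proof, which is what makes Raspberry Pi class verification plausible.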


We take decentralization very seriously. Node specialization is our response to Keone's observation. Yeah, if you require everyone to use very powerful hardware, then there is no decentralization at all, because no one can afford a server, and no one wants a noisy server. I have a server next to me, and when I upgrade or restart it, it makes a lot of noise, and no one wants to have such a device at home. So we are very concerned about this, and node specialization is our solution.


Ryan: So, is node specialization just a distinction between L1 and L2? Is this the case?


Lei Yang:


No, basically you have different types of nodes with different hardware configurations that perform different tasks in the Layer 2 system. You have sequencers that execute all transactions, you have full nodes that subscribe to state updates, and you have validators that statelessly verify transactions.


But on the topic of decentralization, while we focus a lot on it, minimizing the hardware configuration of full nodes and validating nodes, we actually think that decentralization is just a means to an end. You want correctness of execution, you want finality and censorship resistance. I think most of the people coming from Web 2 are not looking for decentralization per se when they enter the crypto space. What they want is correctness of execution; they want to not have to trust any single entity to run their important applications. Decentralization, which as we have seen works well for Layer 1, is just a means to an end. And now that a Layer 1, Ethereum, has been established and has achieved as much decentralization as possible, I think it's time for us to move on. Ethereum has done the decentralization part for us, and now it's time to really optimize for performance and build on the decentralization that Ethereum has achieved.


Who is more decentralized?


Ryan: Can we have a discussion about decentralization? Instead of using the word "decentralization" directly, we can use it as a proxy for all the properties you just mentioned, such as censorship resistance, settlement guarantees, etc. So, which super fast EVM is more decentralized? Mega ETH or Monad?


Lei Yang:


That's a good question. We can analyze it from several different angles, such as hardware configuration. I mentioned that the hardware configuration of Mega ETH's full nodes and validators is roughly the same as that of Ethereum and Monad nodes. Mega ETH's full nodes and validators are lightweight and only download information as needed, so the hardware configurations are similar. That is, you don't need more powerful hardware than an existing Ethereum execution node to download and keep up with the latest Mega ETH state.


Keone:


Sorry, you keep calling them full nodes, but I want to ask whether that's the right term. Because I think most people in the blockchain space use "full node" to refer to a node that executes all transactions and verifies their results, and what you call a full node seems to be a little different.


Lei Yang:


I can answer this question. If you go to the Ethereum website, you will see instructions on how to sync a full node, and in fact the default option is snapshot sync or fast sync. I remember that their terminology changes frequently, but basically, the default option is to download the latest state snapshot, so that the node does not actually verify transactions before starting to provide the downloaded state. That is, it just downloads the latest copy of the state and then starts working, and may selectively verify previous transactions, but it does not necessarily do so.


I want to emphasize that our understanding of full nodes is to maintain and track the latest state changes. This is very important. Of course, terminology is up for debate, and it would be great if the industry could agree on the same set of terms.


David: Keone, how do you define a full node, and how does Lei’s understanding differ from yours?


Keone:


I think it's not very good if you have to trust a centralized entity when you run a node. The whole point of blockchain is that we don't need to trust, but we can verify. So whether we call it a full node or not, I think I'm just pointing out the use of the term "full node" because it's compared to other things. But what I really care about is that if I'm a small business owner, when I accept a payment, I need to make sure that payment actually went through. So, in my opinion, the premise of blockchain is that this small business owner can actually run a node and when they see the payment, they can verify that the transaction happened without having to trust any single centralized intermediary.


Lei Yang:


I think what you mentioned about "don't trust, verify" is very important. If I can, I would like to add: don't trust, verify, but better yet, let others verify for you. This is the case with optimistic Rollup and ZK Rollup.


Take the example you mentioned: I'm selling a laptop and someone pays $5,000 on Mega ETH. I am indeed trusting a single sequencer to correctly order and execute transactions when I decide whether to hand over the laptop. But the important thing is that this single sequencer operates under a social and economic contract. So I think the key to the argument is cryptoeconomic security.


If the sequencer cheats, such as by excluding transactions or giving me an incorrect balance update, then every message the sequencer told me at the moment I decided to hand over the laptop is signed by the sequencer. If the sequencer has done anything wrong, those signed messages become evidence, and the sequencer will be punished. So while the sequencer may lie to me, if the cost of doing so is 1,000 Teslas or 1,000 laptops, it's not worth it.
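Lei's cost-benefit point can be sketched numerically. The following is a back-of-the-envelope illustration only; the function name and all figures are hypothetical, not MegaETH parameters.

```python
# Illustrative check of the cryptoeconomic argument: cheating is only rational
# for a sequencer if the gain from fraud exceeds the expected slashing penalty.
# All numbers are hypothetical.

def cheating_is_rational(gain_from_fraud: float, slashable_stake: float,
                         detection_probability: float = 1.0) -> bool:
    """Return True if expected fraud profit exceeds the expected penalty."""
    expected_penalty = slashable_stake * detection_probability
    return gain_from_fraud > expected_penalty

# Stealing one $5,000 laptop against a stake worth "1,000 laptops":
print(cheating_is_rational(5_000, 5_000_000))       # False: not worth it
# Cheating only pays if the fraud gain dwarfs the stake:
print(cheating_is_rational(10_000_000, 5_000_000))  # True
```

Because signed messages make fraud provable, the detection probability is effectively 1, which is what makes the penalty bite.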


I think it comes down to cryptoeconomic security. I started in the Ethereum ecosystem working on this myself; I began with proof of work and thought it was wonderful. But over time I learned about proof of stake and cryptoeconomic security and came to see them as more important.


Ryan: We use "decentralization" as a proxy for features like censorship resistance. So Keone, what do you think about this? Which is more decentralized? What is the path to this goal?


Keone:


To me, decentralization means that anyone can run a full node. By full node I mean a node that has access to all state and executes all transactions. Monad is pioneering these different architectural improvements that allow anyone to run a node and keep up to date with the chain, execute all transactions and access the full state without significantly increasing the hardware requirements.


I think this is very important for decentralization. There are other architectures that may sacrifice decentralization and censorship resistance in some ways in order to move execution off of Layer 1. This is interesting and does have a lot of advantages. But from the perspective of a system with hundreds of nodes participating in consensus, even thousands of full nodes with access to the full state, this is very important. The hardware requirements must be reasonable so that decentralization is ensured. Blockchains with high hardware requirements will be very expensive to run nodes, which will lead to centralization effects. We see this in some high-performance blockchains where the hardware requirements are too high and small holders simply cannot afford to run nodes. Therefore, efficiency is critical, and improvements in software efficiency are critical to the establishment of decentralized networks.


Summary of the first half of the podcast


David: I want to summarize our conversation so far to make sure I've caught up on the main points of the discussion.


I agree with you, Keone. In the Ethereum space, my definition of a full node is this: I act as a listener to the blockchain, keeping a copy of the ledger, and also verifying the validity of incoming transactions and blocks. Sometimes I might receive an invalid block; I will reject it and continue to listen for blocks from other sources instead of automatically accepting the invalid one. I ensure that only valid blocks and transactions are added to the blockchain. It is because everyone performs this role that the blockchain remains decentralized and correct.


As you said, if you only listen to blocks from a trusted single source and rely on a centralized sequencer, this actually reduces the functionality of a full node. If you just automatically trust the validity of those blocks, you are something less than a full node. This is the argument you made.


And Lei's response is that if the sequencer signs the transactions, and there is an economic guarantee that the sequencer will be heavily fined if it cheats, then we may not have the full node I described earlier, but there is a lot of economic security. This is the new trust assumption, and this is what the discussion has been about so far.


Lei Yang:


That's right. I think the node type you describe is actually a combination of a full node and a validator in Ethereum terminology, because full nodes are not usually expected to sign or mine blocks. If a node does not stake, should it still be called a full node? I think so. So the node type you describe actually goes beyond the definition of a full node.


Keone:


Sorry, just to confirm: you mentioned earlier that if the sequencer publishes an invalid state root, how will they be punished?


Lei Yang:


This is basically standard optimistic and zero-knowledge proofs. The sequencer commits to a state update by publishing it, and the underlying transactions and blocks are submitted to Ethereum, so the claim can be checked. For optimistic proofs, validators have 7 days to re-execute these transactions. If any error is found, they submit a fraud proof, which automatically punishes the sequencer. With zero-knowledge proofs, the sequencer must prove the state transition in real time before publishing the state update. As I mentioned at the beginning, this is just standard optimistic and ZK rollup design.
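The optimistic dispute flow Lei describes can be sketched as follows. This is a minimal toy model of the standard mechanism, not MegaETH code; the class and function names are our own, and `execute` stands in for deterministic EVM execution.

```python
# Toy model of the optimistic-rollup dispute flow: the sequencer posts signed
# state roots, and during a challenge window any verifier may re-execute the
# posted transactions and submit a fraud proof on a mismatch.
from dataclasses import dataclass

CHALLENGE_WINDOW_DAYS = 7  # the standard optimistic-rollup window cited above

@dataclass
class StateCommitment:
    txs: tuple            # transactions posted to the L1 (data availability)
    claimed_root: str     # state root the sequencer signed

def execute(txs) -> str:
    """Stand-in for deterministic EVM execution over the posted txs."""
    return "root:" + "|".join(txs)

def challenge(commitment: StateCommitment) -> bool:
    """Re-execute; True means fraud is proven and the sequencer is slashed."""
    return execute(commitment.txs) != commitment.claimed_root

honest = StateCommitment(("tx1", "tx2"), execute(("tx1", "tx2")))
fraudulent = StateCommitment(("tx1", "tx2"), "root:forged")
print(challenge(honest))      # False: no fraud proof possible
print(challenge(fraudulent))  # True: fraud proof succeeds
```

The key property is that because execution is deterministic and the inputs are on L1, any verifier can reproduce the correct root independently.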


Keone: These two mechanisms are different. Have you decided which one to use?


Lei Yang: We are currently using optimistic proofs because we believe zero-knowledge proving is not yet efficient enough to meet our throughput needs.


Keone: I see. But you just mentioned zero-knowledge proofs as a future possibility.


Lei Yang: Yeah, I just wanted to give a high-level introduction to validity proofs.


Keone: But for optimistic proofs, it also requires a lot of memory for others to verify.


Lei Yang: This is actually incorrect, because as I have said repeatedly, the validator only needs hardware comparable to a Raspberry Pi. The key lies in so-called stateless verification. This concept is not our invention; it originated in the discussion of stateless clients in Ethereum a few years ago. We just adapted that technique.
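The stateless-verification idea Lei refers to can be illustrated with a toy Merkle tree: the block carries a witness (the touched state values plus their branches), so a verifier holding only the state root can check them without storing the full state. This is our simplified illustration, not MegaETH's actual witness format, and it assumes the number of leaves is a power of two.

```python
# Toy stateless verification: a verifier with only the 32-byte state root can
# check a witness (leaf + Merkle branch) for any account a block touches.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """All levels of a binary Merkle tree (leaf count assumed a power of 2)."""
    level = [h(l) for l in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Merkle branch for leaf `index`: sibling hashes, bottom-up."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    """Stateless check: needs only the root, not the full state."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# Full state: 4 account balances. The verifier keeps only the root.
state = [b"alice:100", b"bob:50", b"carol:7", b"dave:0"]
levels = build_tree(state)
root = levels[-1][0]

# Witness for a tx touching bob's account: the leaf plus its branch.
print(verify(root, b"bob:50", 1, prove(levels, 1)))  # True
```

Real stateless clients use the same principle over Ethereum's state trie, which is why the verifier's storage requirement stays tiny.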


Software Efficiency


David: Going back to the Monad software efficiency just mentioned, can it be understood as consuming as little computing resources as possible on all nodes around the world for each transaction? This is the source of efficiency you mentioned. Can you talk about what chain reactions will occur if you optimize the total computing cost of running the Monad blockchain? What benefits can Monad gain from it?


Keone:


I want to clarify that the efficiency mentioned by Lei before refers to the minimum computing resources required to complete a task. If the number of nodes participating in the consensus in the Monad network is reduced, it will indeed reduce power consumption to some extent, but this is not the main metric optimized by our team. We want the verification cost of each full node to be as low as possible, so that the full node can be run at the lowest cost and keep up with the changing global state.


David: Maybe I have some misunderstanding of how Mega ETH works. Let's assume that Mega ETH achieves speed by centralizing computing resources, while Monad is a Layer 1 blockchain with a consensus mechanism that allows individual independent stakers to exist. So your point is that Mega ETH is relatively weak in decentralization and censorship resistance because of its centralized sequencer, while Monad is better able to maintain decentralization and censorship resistance because of its architecture. Do you agree with this statement?


Keone: Yes, I agree with this statement.


David: Because you have independent stakers, you can also have independent full nodes, thereby maintaining the complete blockchain architecture and ensuring decentralization and censorship resistance. Lei, what do you think about this?


Lei Yang:


It is important to realize that our finality, correctness, and censorship resistance are based on Ethereum. There are tens of thousands of nodes supporting these censorship tolerance guarantees compared to hundreds of nodes. So by the criteria you defined, I think Mega ETH is more decentralized. I personally don't agree with comparing the degree of decentralization of two products without a clear definition, but by the definition you mentioned of the number of nodes providing the final guarantee, Mega ETH is indeed more decentralized. Because we are built on Ethereum. This is a conscious choice.


Ryan: Keone, what's your response to this? How would you characterize Monad's claim to decentralization? How would you refute that statement?


Keone:


I think we should move in a more efficient direction. Going back to the discussion of Ethereum L1, there is currently discussion on how to increase Ethereum's execution throughput from 15 transactions per second to a higher level. It's frustrating when gas prices spike on L1. I think it's beneficial to improve Ethereum's performance, and it's important to drive software improvements to make Ethereum more powerful. So this is also the basic principle of our team's design of Monad, aiming to make some architectural improvements that may be incorporated into Ethereum L1 in the future.


Lei Yang:


If we go back to the point that Mega ETH relies on certain elements of Ethereum, every Layer 2 can claim to be more decentralized, because we all use Ethereum directly for settlement. Monad attempts to introduce technology that makes Ethereum more efficient without increasing hardware requirements, which helps the entire decentralized system improve.


Keone:


I think we are all contributing to the Ethereum ecosystem. But I would like to add a point about the definition of overall hardware consumption. If we define the overall hardware consumption of both chains as the funds required to purchase the servers and nodes that correctly produce blocks, then Mega ETH has only one sequencer, while Monad has hundreds of full nodes. I am not sure whether operating and purchasing one sequencer is more expensive than hundreds of full nodes. This argument confuses me, because if the Monad network had only two nodes instead of hundreds, the cost would indeed be lower.


Lei Yang:


Yes. But under that assumption, the guarantees of censorship tolerance, finality, and correct execution would rely on those two Monad nodes. Mega ETH relies on tens of thousands of Ethereum nodes, which is an important difference. All of the Layer 2s you mentioned can claim this, we completely agree with that, and we chose to build on Layer 2.


Keone:


I want to ask a question because you keep mentioning censorship tolerance, and you are very careful with your wording, you don't say censorship resistance, but censorship tolerance. Is there any difference between the two?


Lei Yang:


I usually use the two words interchangeably. The key difference is whether you want real-time censorship tolerance or censorship tolerance with bounded latency. The censorship tolerance Mega ETH provides is time-bounded: users can submit censored transactions directly to Ethereum, thereby guaranteeing inclusion and forcing the sequencer to include them in a Layer 2 block. If the sequencer fails to include the transaction, it is also subject to the economic security. So I think of this as combining delayed censorship tolerance with cryptoeconomic security.
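The bounded-latency mechanism here is the standard L1 forced-inclusion path, which can be sketched as below. This is our illustration of the generic OP-Stack-style design, not MegaETH code; the class, method names, and the deadline constant are hypothetical.

```python
# Toy model of forced inclusion: a censored user posts the transaction to an
# L1 inbox; the sequencer must include every inbox tx within a bounded number
# of L2 blocks, or its behavior becomes provable and punishable.
from dataclasses import dataclass, field

FORCE_INCLUSION_DEADLINE = 10  # L2 blocks; hypothetical bound

@dataclass
class Rollup:
    l1_inbox: list = field(default_factory=list)   # (tx, block_when_posted)
    l2_blocks: list = field(default_factory=list)

    def force_include(self, tx, now):
        """User bypasses the sequencer by posting the tx to Ethereum."""
        self.l1_inbox.append((tx, now))

    def produce_block(self, sequencer_txs):
        """Here we model a censoring sequencer that ignores the inbox."""
        self.l2_blocks.append(list(sequencer_txs))

    def sequencer_violates(self, now) -> bool:
        """True if any inbox tx is still excluded past its deadline."""
        included = {tx for block in self.l2_blocks for tx in block}
        return any(tx not in included and now - posted > FORCE_INCLUSION_DEADLINE
                   for tx, posted in self.l1_inbox)

r = Rollup()
r.force_include("censored-tx", now=0)
for n in range(1, 12):
    r.produce_block(["other-tx"])          # sequencer keeps censoring
print(r.sequencer_violates(now=12))  # True: deadline passed, penalty applies
```

Inclusion is thus guaranteed with bounded latency rather than in real time, which is exactly the distinction Lei is drawing.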


Keone:


This is the standard OP Stack.


Lei Yang:


This is not the technology of Mega ETH, but the basics of Layer 2.


Monolithic architecture vs. modular architecture


Ryan: From the discussion so far, from the perspective of observing the Ethereum ecosystem, both projects are, to some extent, innovative responses to Solana, especially the high-throughput SVM. Ethereum has not had much innovation in virtual machines since its inception, and the Ethereum Foundation has not focused on this, while your two projects are innovating the EVM and building new possibilities.


I think Keone's Monad approach is more like a monolithic, integrated approach: putting all state and functionality in one complete stack instead of spreading execution, settlement, and data availability across multiple layers and dependencies. You support the EVM and a monolithic architecture at the same time. Mega ETH also supports the EVM but prefers a modular architecture, retaining decentralization guarantees by focusing on Layer 2.


Both are innovating the EVM, which is good for the progress of Ethereum, especially in an open source way. The Monad approach is more monolithic, while Mega ETH is modular. I want to clarify the difference between the two architectures because in the crypto space, we like to discuss architectural ideas.


Keone:


I think of Monad as a full-stack approach, where all layers are optimized and coordinated. Ethereum itself is a full stack, with a virtual machine, a consensus mechanism, and a role as a data availability layer for other blockchains. When we try to improve Ethereum, we have to think about how these parts fit together. This is exactly what our team is solving. You mentioned Solana, and one of the four innovations of Monad is asynchronous execution. The concept is like the movie trope that "you only use 10% of your brain", and how simple life would be if you could use 100% of it.


This metaphor is helpful because in existing blockchains, consensus and execution are performed alternately, and like Ethereum and almost all other blockchains, the execution budget is actually very small because consensus takes up most of the time. Consensus is expensive because nodes are distributed all over the world. Therefore, execution actually only takes a small part of the block time. In Ethereum, the execution budget is only 100 milliseconds in a 12 second block time, which is a very small ratio. By executing asynchronously, Monad moves execution off the hot path of consensus and runs it on a separate path that lags slightly behind consensus, which means that the entire block time can be allocated to execution. This is actually a huge improvement in execution budget, which comes from a different software architecture.
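The arithmetic behind Keone's point can be made explicit. This is a back-of-the-envelope illustration using the figures cited in the conversation, not a measurement of either chain.

```python
# Execution-budget illustration: with interleaved consensus + execution,
# execution gets only a slice of each block time; with asynchronous
# (pipelined) execution it can use roughly the whole block time, since
# block N executes while consensus works on block N+1.

block_time_ms = 12_000   # Ethereum block time cited above
exec_budget_ms = 100     # execution slice cited above

interleaved_share = exec_budget_ms / block_time_ms
print(f"interleaved: {interleaved_share:.2%} of block time for execution")
# interleaved: 0.83% of block time for execution

pipelined_share = 1.0    # asynchronous execution uses (nearly) the full slot
print(f"execution budget grows {pipelined_share / interleaved_share:.0f}x")
# execution budget grows 120x
```

The tradeoff is that state queries against the executing path lag consensus slightly, which is why this only works when consensus and execution are co-designed.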


Our team has been discussing this concept for a long time, and recently I have seen other blockchains, including Solana, also start to talk about the idea of asynchronous execution and explore how to integrate it into other blockchains. This can only be achieved when the entire system is designed with consensus and execution in mind, and the entire stack is optimized. So Monad is really consistent with the idea of optimizing the entire system to provide the best performance.


Ryan: When you release the Monad product, it may confuse a lot of people, because the narrative may cast you as an Ethereum killer or a Solana killer. We have to have a "killer" or we can't get the narrative moving. So, Lei, what is your reaction to my framing? You are both supporters of the EVM, and your project takes a modular approach, while Monad takes an integrated, monolithic approach.


Lei Yang:


We are supporters of EVM, aiming to support new applications. Our product design focuses on improving performance, including throughput, low latency, and real-time. I don't remember mentioning the distinction between monolithic and modular in our design discussions. So I completely agree with you that we are taking the modular path. But I think our design is not guided by this religious debate, but because we choose to build on Layer 2.


In terms of engineering approach: where Monad takes a full-stack approach, we focus on performance and are willing to make some controversial and interesting design decisions.


About the community


David: What I find very interesting about both of your blockchain projects is that even though neither of you has a testnet, the communities are both very large. The brands you have built have attracted a lot of developers; there are already people developing on Monad, and a lot of people working with Monad. The same is true for Mega ETH: there is an organization called Mega Mafia, and Mega ETH has a hat that everyone wants. I'm wondering, as an EVM chain, what is the process of community management and business development like? There are about 50 EVM chains right now, but both of your ecosystems have done a very good job of building communities and attracting developers.


Keone: I want to show you this Monad hat. The community engagement is incredible, and it’s all because of some of the people who love crypto, are passionate about DeFi, and follow the stories and memes on crypto Twitter every day. A lot of people aren’t famous on crypto Twitter, but they’ve been following and have found a place in the Monad community and opportunities to contribute and lead, like organizing events, contributing artwork, etc. We have a lot of fun events, like the Monad Running Club, which started out with a goal of running 10,000 kilometers together, which was quickly reached. There’s also Molingo, which is an event to learn a new language, similar to how people use Duolingo together. There’s a lot of energy in the community, and it’s all driven by individuals. This is consistent with the spirit of crypto: everyone matters, and everyone can make a difference.


David: How do you keep people engaged when only a few are actually participating in the internal Monad testnet? What are they doing? While it’s cool that they’re running together, are there other more “crypto-native” events?


Keone:


Our events evolve over time. Initially, a lot of people just came together and talked to each other during the bear market. I think a lot of it is path-dependent; it was a really tough time, and many of our older members are people who found refuge in the community while scrolling the timeline. Over time, that evolved into more concrete activity. Some community members are getting job opportunities within the Monad ecosystem, connecting with great community managers, marketers, or people who have good instincts about what it takes to move a project forward. It's overall a very nascent but growing environment, full of opportunity.


David: If you're a reader of Monad stories, you'll find that there's not only Molandak, but there are other animals, like the Monad fish and other creatures that I don't know how they work. Where do these stories about Monad animals come from?


Keone:


That's a good question. When we started the open community, Bill Monday, our community lead, and I spent a lot of time brainstorming, trying to find the right mascot. I still have a Notion page with a list of potential mascots, and all the ideas are terrible; each one is worse than the previous. We over-complicated it, trying to come up with something quickly associated with Monad, like a cheetah or a car. Ultimately, the creatures that became the unofficial mascots of Monad were all community-driven: someone uploaded an image, and through the community's selection it surfaced. There was no formal voting process, just collective memory and repeated use. Now Molandak, Mocodal, Salmonad, and Moyaki all have rich stories.


(DeepChao note: the animals mentioned in the conversation are meme-style cartoon characters related to Monad.)


David: Lei, how did you get started with the Mega ETH clothing brand?


Lei Yang:


Our community growth is really amazing, because we actually launched publicly not long ago. Our brand is called Mega Mafia, which is our flagship builder program and incubator.


We look for 0-to-1 applications that can only be built on a real-time blockchain like Mega. We hope to find founders who were attracted to crypto, who may have read something online promising that they could build decentralized, trustless applications, but who, on arriving in the field, found that this was actually a lie: the resources and infrastructure were not enough to achieve their goals. So we tell them that Mega provides a new infrastructure, and encourage them to pursue the dreams they had when they first entered crypto.


So we have applications like on-chain Minecraft, an infinitely scalable VPN, order-book decentralized exchanges, real-time prediction markets, and live game shows. I am wearing a Mega Mafia T-shirt now; this is the center of our community. As for the rabbit: if Monad's mascot choice was mid-curve, ours is permanently left-curve, because we chose the rabbit really just because I like rabbits. When I first chatted with our co-founder and CBO Shuyao, she said we needed to choose an animal as a mascot, and I said I go to the MIT campus almost every night to feed the rabbits, so the mascot became a rabbit.


I think this is a great choice, and while it wasn’t community chosen, our community really loves rabbits. The only rule we have on Discord moderation is that if you eat rabbit, you’ll be banned forever. Other than that, I think everything is going pretty well. We’re seeing more and more people interested in the technology. Some of you may have seen our short TikTok videos explaining some common misconceptions and lesser-known anecdotes in the crypto industry, presented in a short form to the audience. We’re also doing some serious performance engineering, so there will also be a series of long-form tweets and upcoming blog posts that are more rigorous and suitable for those who are interested in the technology. We want to be able to drive the development of applications, and that’s the core of our community building efforts.


Why Ethereum Virtual Machine (EVM)?


David: Recently on our podcast with Justin Drake and Anatoly, Anatoly talked about the EVM and described building on it as like building in quicksand. While it's possible to rebuild the EVM architecture to be developer-friendly, it's still difficult. I'd like to know what you think. You both chose the EVM; why? How do you see the future role of the EVM in the industry? I remember in 2021 there was a saying in the Ethereum community that the EVM was the center of everything, because even Ethereum killers, chains like Fantom, Avalanche, and so on, were using the EVM. What are your overall views on building on the EVM? Why commit to it? How do you think it will develop in the future?


Keone:


I think the EVM is actually a very suitable bytecode standard. There are some minor issues, such as the size of a single storage slot: 32 bytes is indeed large. But fundamentally it is a very reasonable and expressive standard. We have high-level languages like Solidity, Vyper, and Huff, which are very powerful. So I think a lot of the criticisms of the EVM are misleading, or unfounded, or just for attention. The EVM is an excellent standard that will get better over time through continued improvements and support for new precompiles and opcodes. There is nothing wrong with it being an evolving standard. Combined with the network effect of the many developers already building applications for the EVM, the numerous libraries, and the applied cryptography research, it really is a no-brainer.
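Keone's aside about 32-byte slots is worth unpacking: every EVM storage slot is a 256-bit word, so small values waste space unless adjacent fields are packed into one slot, as Solidity does for contiguous small state variables. The sketch below is a simplified model of that packing rule, not the compiler's actual layout algorithm.

```python
# Simplified model of EVM storage-slot packing: fields are laid into 32-byte
# slots in declaration order; a field that would overflow the current slot
# starts a new one (roughly Solidity's rule for value types).

SLOT_BYTES = 32

def pack(fields):
    """Greedily pack (name, size_in_bytes) fields into 32-byte slots."""
    slots, current, used = [], [], 0
    for name, size in fields:
        if used + size > SLOT_BYTES:   # field would not fit: open a new slot
            slots.append(current)
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        slots.append(current)
    return slots

# A uint128 plus two uint64s share one slot; a uint256 needs its own.
layout = pack([("a_uint128", 16), ("b_uint64", 8), ("c_uint64", 8),
               ("d_uint256", 32)])
print(layout)  # [['a_uint128', 'b_uint64', 'c_uint64'], ['d_uint256']]
print(len(layout), "slots instead of 4 unpacked")
```

Fewer occupied slots means fewer cold storage reads and writes, which is why declaration order matters for gas in EVM contracts.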


David: Lei, what do you think?


Lei Yang:


I basically agree. There is nothing fundamentally wrong with the EVM. Interestingly, we had Vitalik at an event at EthCC, and a student asked him what regrets he had about Ethereum's technology choices. Vitalik did not mention the EVM; he said he would not have started down the slippery slope of precompiles. Fundamentally there is nothing wrong with the EVM, and we couldn't agree more. Also, I realize that as engineers and researchers there is always an urge to throw everything away and start from scratch. But as Keone mentioned, the tools, libraries, research, and, most importantly, the people in the ecosystem are already used to the EVM. Intel doesn't launch a new instruction set architecture every five years, and there is a reason: capital and knowledge accumulate in the ecosystem of a particular technology. So there is nothing fundamentally wrong with the EVM, and I am 100% confident it will continue to thrive.


David: Keone, last question, about the future of what Monad is building and the open source nature of it. Will Monad EVM and Monad DB, these things eventually become open source?


Keone: That's our plan. We'll open things up over time; certainly before mainnet the code will be completely open for people to read and verify. This is very important for auditing and security. So, no problem.


When will the mainnet be launched?


Keone: Our team is working hard and can't give an exact date. My answer is that when Molandak comes out of this locker and is distributed to everyone, that's when we will be live on the mainnet.


Lei Yang: We expect to open it at the end of the year or early next year.

