Solana Founder Anatoly's Latest Interview: How to Build Solana's Moat? | Deep Dive

24-11-19 13:11
Original Article Title: What's Next For Solana | Anatoly Yakovenko
Original Article Translation: Ismay, BlockBeats


Editor's Note: Last week, SOL broke through $248, its highest price since the FTX collapse and the second-highest in its history, just 4% shy of the all-time high of $259 set on November 6, 2021. The latest episode of the Lightspeed podcast invited Solana Labs co-founder Anatoly Yakovenko and Mert, CEO of the Solana RPC provider Helius, to dig into Solana's transaction fees, how it stays competitive in the crypto space, SOL inflation, competing with Apple and Google, whether Solana has a moat, and other topics.



Contents


1. Why does Solana still have so many front-running transactions?

2. Can L2 architecture really solve congestion issues?

3. Can a chain focused on single-point optimization truly challenge Solana's global consensus advantage?

4. Is shared block space a "tragedy of the commons" or a key to DeFi capital efficiency?

5. What is Solana's core competitive advantage?

6. Will Solana's inflation rate decrease?

7. Is FireDancer's development cost too high?

8. What does Solana rely on to compete with Ethereum?


1. Why does Solana still have so many front-running transactions?


Mert: So, let's start, Anatoly. Let's start from the beginning. Part of the reason you founded Solana was that you were tired of being front-run in traditional markets. You wanted Solana to synchronize information globally in real time, maximize competition, and minimize arbitrage, but none of this has been achieved so far. Almost everyone is constantly being front-run. Not only has MEV skyrocketed, but Jito tips now often exceed Solana's priority fees. How do you see this issue? Why is it happening?


Anatoly: You can set up your own validator and submit your own transactions without interference from anyone else, right? In traditional markets you don't have that choice at all, which is exactly what Solana's decentralization provides, and that part does work.


The current challenge is that setting up a validator is not easy. It's not simple to stake your way to a significant position, and it's even harder to find other nodes willing to order transactions the way you expect. All of this is achievable, though; it just takes time and effort. The market is not yet mature enough and hasn't established sufficient competition, such as competition between Jito and its rivals, that would let users easily choose, "I will only submit to order-flow operator Y and never to order-flow operator K."


From a fundamental perspective, as an enthusiast, I can start my own validation node, stake some tokens, run my algorithm, and directly submit my transactions without anyone being able to stop me; all of this is feasible. The question now is whether we have matured to the point where users can always choose the best way to submit transactions. I believe we are far from reaching that stage.


In my view, the way to get there is simple to state but very hard to do: increase bandwidth, reduce latency, optimize the network as much as possible, eliminate the bottlenecks that make the system unfair, and add mechanisms like multiple concurrent block proposers. If there is only one leader per slot and you have 1% of the stake, you get roughly one opportunity every 100 slots. With two leaders per slot and the same 1% stake, you get roughly one opportunity every 50 slots. So the more leaders we can add, the less stake you need to run your algorithm at the quality of service you require.
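To make that arithmetic concrete, here is a minimal sketch of the relationship Anatoly describes. It is illustrative only: real Solana leader schedules are stake-weighted over epochs, not a simple per-slot lottery.

```python
# Illustrative sketch of the leader-opportunity arithmetic described above.
# Assumes leaders are sampled proportionally to stake each slot, which is a
# simplification of Solana's actual epoch-based leader schedule.

def slots_between_opportunities(stake_share: float, leaders_per_slot: int) -> float:
    """Expected slots between leader opportunities for a validator holding
    `stake_share` of total stake when each slot has `leaders_per_slot` leaders."""
    return 1.0 / (stake_share * leaders_per_slot)

print(slots_between_opportunities(0.01, 1))  # 100.0 -> one chance per ~100 slots
print(slots_between_opportunities(0.01, 2))  # 50.0  -> one chance per ~50 slots
```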


(During the conversation it comes up that someone created a website called "Solana Roadmap" that simply displays "increase bandwidth, reduce latency." Anatoly asks who made it.)


Mert: Currently, you need to accumulate a certain amount of stake to get your transactions prioritized, even if that was never the intent. Having more stake in the system doesn't just help you win block space and so on; there's a dynamic here where the wealthier you are, the greater your advantage. Is that acceptable?


Anatoly: Improved performance lowers the barrier for honest participants to change the market dynamics. If we have two leaders per second, the stake required to provide the same service is halved, which lowers the economic barrier to entry and lets more people compete. Then you can say, "Hey, I am the best validator; submit all your Jupiter transactions to me and I will do what you want." I can run that as a business offered to users, and competition will force the market toward the fairest equilibrium. That is the ultimate goal.


However, to get there, I think a significant difference between Solana and Ethereum is that I see this as purely an engineering problem. We just need to optimize the network and increase bandwidth, for example more leaders per second and larger blocks, and everything scales up until competition drives the market to its optimal state.


2. Can L2 Architecture Really Solve Congestion Issues?


Mert: Speaking of engineering issues, the reason Jito tips exceed priority fees is not only MEV; it's also the transaction landing process, or more precisely, that the on-chain fee market does not always behave deterministically and is sometimes outright unstable. What is the reason for this?


Anatoly: This is still because the current transaction processing implementation is far from optimal under very high load; when load is low, everything runs smoothly. During the mini bear market of the last six months, I saw end-to-end confirmation times under a second, and everything ran very smoothly, because the volume of transactions submitted to leaders was small: the queues, connection tables, and other resources never filled up, and there was no backlog from performance bottlenecks.


When those queues back up, there is no prioritization happening ahead of the scheduler, which effectively breaks the on-chain fee market. So in my opinion this too is an engineering problem, and perhaps the area of the current ecosystem that needs the most engineering effort: extreme optimization of these processing pipelines.
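A hypothetical sketch of why a backlogged ingestion stage breaks fee prioritization (this is not Solana's actual pipeline; the names and capacity are made up): if a bounded, fee-blind queue sits in front of the priority-aware scheduler, a high-fee transaction can be dropped before its fee is ever compared.

```python
from collections import deque

# Hypothetical model: a small fee-blind FIFO sits in front of the scheduler.
INGEST_CAPACITY = 3
ingest = deque()

def try_ingest(tx) -> bool:
    # Transactions arrive in network order; fees are not consulted here.
    if len(ingest) >= INGEST_CAPACITY:
        return False  # dropped regardless of its priority fee
    ingest.append(tx)
    return True

for name, fee in [("spam1", 0), ("spam2", 0), ("spam3", 0), ("urgent", 10_000)]:
    print(name, "queued" if try_ingest((fee, name)) else "dropped")

# Only what survived ingestion ever reaches the fee-ordered scheduler.
scheduled = sorted(ingest, reverse=True)  # highest fee first
print("scheduler order:", [name for fee, name in scheduled])
```

The "urgent" transaction with the 10,000-unit fee is dropped at ingestion, so the scheduler never gets the chance to prioritize it.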


Mert: Given all this, your answer seems to be that these issues do exist, but they are engineering issues and therefore solvable; future iterations will address them. Some might say these issues don't exist on L2s because of their architecture, right? Because you can achieve first come, first served through a centralized sequencer.


Anatoly: First come, first served leads to the same issues; even Arbitrum has priority channels. If you implement first come, first served, you incentivize spam transactions, which is the same problem. And if you have a general-purpose L2 supporting multiple applications, it will eventually face the same issues as well.


Some may argue that because L2 does not have a consensus and vertically integrated ecosystem like Solana, they can iterate faster, just like a Web2 company, pushing a new version every 48 hours, quickly fixing issues through centralized sequencers. But they still face the same issues as Solana.


You could say that Jito does have an opportunity to address these issues, because their relayer can be updated every 24 hours or released continuously. What they are not yet doing is enough scheduling and filtering to keep the traffic coming out of those relayers within a range the validator's scheduler can handle, but you could achieve similar effects.


So I don't believe L2 itself solves these issues. An L2 only helps in the narrow case where it launches with a single popular application and no other application shares it. And even then it doesn't hold within the application itself: if your application has multiple markets, congestion in market A will affect all the other markets.


3. Can a chain focused on single-point optimization truly challenge Solana's global consensus advantage?


Mert: So let's look at it from a different angle. Suppose this is not a general-purpose L2 but a DeFi-focused chain like Atlas running the SVM. How does Solana compete with such a chain? Atlas doesn't have to worry about consensus overhead or shared block space, it can focus on DeFi optimization, and through the SVM it even gets local fee markets.


Anatoly: What you're describing is essentially Solana with a smaller validator set. In that case there is only one node, which is easier to optimize because you can use bigger hardware. And this is the key question: does synchronous composability matter at scale? That smaller network only covers the region where its hardware sits, while information still needs to propagate globally. In Solana's end state with many validators, transactions can be synchronized globally and submitted through a permissionless, open system.


If this issue is resolved, the end result is Solana. Regardless of whether data is committed to L2, the key question is how to sync information globally and achieve consensus quickly. Once you start tackling this issue, it's no longer something a single machine in New York or Singapore can handle; you need some form of consensus, consistency, and linearization. Even if you later rely on L2 for stricter settlement assurances, you still face Solana's current issues. So from my perspective, these single-node SVMs are basically no different from Binance.


Competing with Binance is actually a more interesting question. If you choose, you can use SVM, but users will ultimately prefer using Binance because of its better user experience. So, we need to become the best version of a centralized exchange. And to achieve this goal, the only way is to embrace the concept of a decentralized multiproposer architecture.


Mert: Another argument is that Solana has to solve these issues anyway, and an L2 can solve them faster: it's easier to fix things on a single box than across 1,500 boxes. That way it attracts more attention early and accumulates network effects. Whatever Solana does, it still has to solve these problems, and because the L2 uses the same architecture, it can learn from Solana and potentially ship faster.


Anatoly: On a business level, the competition is whether these single boxes can survive when they reach a certain load. Because building a single box does not solve all problems immediately, you still face almost the same engineering challenges. Especially when you consider that what people are discussing is no longer Solana's consensus overhead, but the transaction submission process.


The transaction submission pipeline itself can be centralized on Solana, just as on some L2s. In fact, Solana already has single-box relayers that receive large volumes of transactions and then attempt to submit them to validators. The data rate between the relayer and the validators can be capped at a level the validators can always process smoothly.
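A minimal sketch of that capping idea, using a token bucket (a generic rate-limiting technique, not Jito's or Solana's actual relayer code; the rates are made up):

```python
import time

# Generic token-bucket rate limiter: the relayer forwards a transaction only
# when a token is available, so validators never receive more than
# `rate_per_sec` transactions per second on a sustained basis.

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec   # sustained forwarding rate
        self.capacity = burst      # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # hold or drop the transaction at the relayer

bucket = TokenBucket(rate_per_sec=5_000, burst=500)
forwarded = sum(bucket.allow() for _ in range(2_000))
print(f"forwarded {forwarded} of 2000 transactions immediately")
```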


Furthermore, such a design also allows components like Jito to iterate more quickly. Therefore, I believe that the advantage of this design in L2 is actually smaller than people imagine.


4. Is Shared Block Space a "Tragedy of the Commons" or the Key to DeFi Capital Efficiency?


Mert: If we broaden the discussion: Solana, as an L1, shares block space, which invites a "tragedy of the commons," similar to how common-pool resources get overused. On an L2, especially though not necessarily an app-chain L2, developers can have independent block space without sharing it with others.


Anatoly: That independence may be more attractive to application developers, but the prerequisite is that the environment stays permissioned. Once you adopt permissionless validators or sequencers and multiple apps run simultaneously, that control is lost.


Even in a single-app environment such as Uniswap, if the platform hosts multiple markets, those markets can interfere with each other; an obscure meme token can affect the ordering priority of mainstream markets. Think of it from a product perspective: in a future where all assets are tokenized, as the CEO of a newly minted unicorn deciding where to IPO, if I see SHIB volume on Uniswap so high that congestion prevents mainstream assets from trading normally, that is undoubtedly a failure state for this application-focused L2.


So the challenges these application-focused L2s face are similar to Solana's: they all need to isolate state so that one application doesn't affect the others. Even for a single application like Uniswap, if congestion in one of its markets impacts all the other markets, then for a CEO like me that environment is unacceptable. I don't want my primary market to be the one where everyone else is trading. I want each trading pair to operate independently.


Mert: What if it's permissioned, though? Since there's an exit mechanism, wouldn't that work?


Anatoly: Even in a permissioned environment, the issue of local isolation still needs to be addressed. Solving this isolation problem in a permissioned environment is not fundamentally different from solving it in a relayer or scheduler.


Mert: Do you think this market analogy can be applied to any type of application?


Anatoly: Some applications don't have these characteristics, like simple peer-to-peer payments, where there is basically no congestion and scheduling is straightforward. The challenge in designing isolation mechanisms and all these seemingly complex things is that if you cannot guarantee that a single market or a single application won't cause global congestion, then a company like Visa will just launch its own dedicated payment L2, because its transactions never compete for priority. They don't care about priority; they only care about TPS. Whether my card swipe lands in the first or last block of a batch doesn't matter; what matters is that I can walk away within two and a half seconds of swiping. So in payments, a priority mechanism isn't the key, but payments are a very important real-world use case.


My perspective is that if we cannot properly implement isolation mechanisms, then the idea of a large composable state machine loses its meaning because you would see payment chains and L2s for single markets emerge. If I were a CEO of an IPO company, why would I choose to launch on Uniswap's chain in the next 20 years? Why not launch my own L2 that only supports my own trading pairs to ensure good performance?


This is a possible future, but I think there is no engineering reason for it, only non-engineering ones. If we can solve the engineering issues, then composability in a single environment has a huge advantage, because the friction of moving funds between all that state and liquidity drops dramatically, which is a very important property to me. I believe Solana's DeFi has survived the bear market and endured bigger hits than anyone else precisely because its composability makes its capital efficiency higher.


Mert: Vitalik recently stated, "In my view, synchronous composability is overrated." I think he probably came to this conclusion based on empirical data, believing that there aren't too many on-chain instances of its use. What are your thoughts?


Anatoly: Isn't Jupiter the epitome of synchronous composability? I think he's only looking at Ethereum. Jupiter has a huge market share on Solana and a significant share of the entire crypto space, and it simply cannot function without synchronous composability. Look at 1inch, its competitor on Ethereum: it can't scale, because even transfers between L2s and the L1 are extremely expensive and slow.


I think he's wrong. Financial systems can be asynchronous; that's how most financial systems operate today, and it's not as if they fail or collapse because of it. But if Solana succeeds and the ecosystem keeps addressing all these issues at its current pace, even if we merely sustain the current level of execution each year, you'll see significant improvements. Ultimately, I believe synchronous composability wins.


5. What is Solana's core competitive advantage?


Mert: Let's temporarily set aside engineering concerns and assume engineering is not a moat, and other chains can achieve the same results. For example, a chain like Sui could also achieve synchronous composability and have a smaller set of validators. Assume some L2s would also face similar issues you mentioned, but they could also resolve those issues. I've asked you before, when engineering is no longer a moat, what is the moat? You said it's content and functionality.


Anatoly: Yes, Solana did not set a specific validator target. The testnet has about 3,500 validators, and the mainnet has a large scale as well because I want it to be as large as possible to prepare for the network's future. If you want as many block producers in the world as possible, you need a large validator set to allow anyone to enter and participate in every part of the network permissionlessly.


You should test at the highest rate possible because the cost of solving these problems is currently low. Solana is not dealing with trillions of dollars in user funds right now; that's what Wall Street does. Solana deals with cryptocurrency, which gives us an opportunity to let the smartest people in the world solve these problems, forcing them to face these challenges.


So my point is, rather than Solana reducing the validator set size for performance, Sui and Aptos are more likely to need to increase their validator set. If you find PMF, everyone will want to run their own node because it provides security. As the validator set grows, if you start restricting participants, you limit the network's scalability.


Mert: Okay, you mentioned an issue I want to discuss. Although that's the goal, if you look at the data, the number of validating nodes is decreasing over time. It seems you think this is due to a lack of product-market fit, so they don't have the incentive to run their own node infrastructure, right? Or what is the reason?


Anatoly: Yes, part of the reason is the staking support from the Solana Foundation. But I am genuinely interested in how many validators are self-sustaining; is that number growing?


Mert: Hold on, we have around 420 validating nodes that are self-sustaining.


Anatoly: But what about two years ago?


Mert: We might not have that data. But we do know that the total amount staked by the Solana Foundation has decreased significantly over the past two years...


Anatoly: ...while fees have been increasing. So my guess is that the number of nodes that could self-sustain two years ago was much lower, even though the total node count was higher then. My point is that the network needs to be able to scale to support everyone who wants to run node infrastructure. That is also one of the main purposes of the delegation program: attract more people to run nodes and put them through stress tests on testnet.


But a testnet can never fully simulate the characteristics of mainnet, right? No matter how many validator tests run on testnet, mainnet will still behave very differently. So as long as the number of self-sustaining nodes is growing, I see that as a positive trend. The network must be physically able to scale to that size, or growth will be limited.


Mert: So, basically, you mean the delegation mechanism helps the network stress test different validator node scales, but fundamentally, the only important, or most important thing, is the number of self-sustaining validating nodes.


Anatoly: Absolutely. You can construct theoretical counterarguments, for example that a node that isn't self-sustaining still helps if, in some catastrophic failure, it's the only node left alive. But that belongs to the realm of extreme "nuclear-war decentralization" scenarios.


Fundamentally, what truly matters is whether the network is growing and succeeding, which relies on self-sustaining validators who can cover their own costs, have enough interest in the network, are willing to commit commercial resources to continuous improvement, deep dive into data, and do their job well.


Mert: In a final state where anyone can run a fast, low-cost, permissionless system, why would people still choose Solana?


Anatoly: I think the future winner might be Solana because this ecosystem has demonstrated outstanding performance in execution and has been ahead in addressing all these issues. Or, the winner might be a project entirely similar to Solana, with the only reason it's not Solana being that it executes faster, enough to surpass Solana's existing network effects.


So I believe execution is the only moat. If you execute poorly, you will be overtaken. But the challenger must execute exceptionally well to become a killer product with real PMF (product-market fit), meaning something that actually shifts user behavior.


For example, if transaction fees are ten times cheaper, will users switch away from Solana? If users are only paying half a cent today, maybe not. But if moving elsewhere significantly reduces slippage, that might be enough to get them, or at least traders, to switch.


Yes, observing overall user behavior is crucial: is there a fundamental improvement significant enough to make users choose another product? Such a difference genuinely exists between Solana and Ethereum. When a user signs a transaction and sees they must pay $30 to receive an ERC-20 token, even for a very basic state change, that is an outrageous price that violates user expectations and pushes them toward a cheaper alternative.


Another factor is time; you cannot wait two minutes for a transaction confirmation, that's too long. Solana's current average confirmation time is around two seconds, sometimes up to eight, and we are moving toward 400 milliseconds. That is a significant driver of user behavior change, enough to make people willing to switch to a new product.


But this is still unknown. In Solana's technology, though, there are no barriers preventing the network from continuing to optimize latency and throughput. So when people ask why Solana is growing faster than Ethereum, some assume the next project will surpass Solana the same way. In reality, the marginal difference between Solana and the next competitor is very small, which makes it much harder to create a gap big enough to change user behavior. That is a significant challenge for any challenger.


Mert: If execution capability is the primary factor, then fundamentally this becomes, to some extent, an organizational or coordination issue. The difference between Solana's vision and so-called modularity (not a formal term) is that if you are an application developer like Drip building on Solana, you have to wait for the L1 to make changes, such as addressing congestion issues or fixing bugs.


But if it's on L2 or an application chain, you can address these issues directly. Perhaps you can see it from this perspective: on this other chain, you may be able to execute operations more quickly than relying on shared space. So if this is true, the overall execution speed will be faster.


Anatoly: Over time, this difference narrows. For example, Ethereum used to be very slow. If you were running Drip on Ethereum and transaction fees spiked to $50, you would go ask Vitalik (Ethereum's founder) when it would be fixed. He might answer, "We have a six-year roadmap, buddy, it's going to take some time." But if you ask teams like FireDancer or Agave, they will say, "There is a team working hard on this issue, aiming to resolve it as quickly as possible in the next release."


This is a cultural difference. The core L1 team and the entire infrastructure team, including you, understand across the whole transaction submission pipeline that when the network slows down or experiences global congestion, it is a top-priority (P0) issue everyone needs to address immediately. Of course, unexpected issues sometimes arise, such as adjustments to the fee market design.


These issues become less common as the network's usage scales up. I don't see any challenge ahead that requires an urgent design change taking six months to a year to deploy.


However, you know there will surely be some bugs or other unexpected issues at the time of release, requiring people to work overtime on weekends, which is also part of the job. If you have your own dedicated L2 application chain, do not need shared resources, and have full control over this layer of infrastructure, you may move faster, but at a high cost that not everyone can afford.


So a shared, composable infrastructure layer may be cheaper and faster for the vast majority of use cases, a software-as-a-service infrastructure layer that everyone can share. As bugs are fixed and improvements keep landing, this gap will narrow.


6. Will Solana's Inflation Rate Decrease?


Mert: Another related criticism is SOL's inflation mechanism. Many believe it exists to support more validators through higher rewards, but possibly at the expense of pure investors. When people say Solana's inflation rate is too high, what is your immediate reaction? How do you view this?


Anatoly: This is an endless debate; changing the numbers inside a black box doesn't truly change anything. You could push the adjustments far enough to hurt certain participants and break the black box, but the change itself neither creates nor destroys value; it's a paper operation.


The inflation mechanism is what it is today because it directly copied Cosmos' mechanism, as many of the initial validators came from Cosmos. But does inflation affect the network as a whole? It may affect individuals under a specific tax regime, but for the entire network it is a cost to non-stakers and an equivalent reward to stakers, which mathematically sums to zero. So from an accounting perspective, inflation does not affect the network as a whole black box.
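A toy worked example of that zero-sum accounting claim (the numbers are made up): inflation dilutes non-stakers' ownership share by exactly the share it awards stakers.

```python
# Toy illustration of the claim that inflation is a transfer, not a cost to
# the network as a whole: stakers' share gain equals non-stakers' share loss.

supply = 1_000_000.0
staked = 650_000.0          # stakers' holdings
unstaked = supply - staked  # non-stakers' holdings
inflation = 0.05            # 5% of supply minted, all paid to stakers

new_supply = supply * (1 + inflation)
stakers_after = staked + supply * inflation  # stakers receive all new issuance
unstaked_after = unstaked                    # non-stakers' token count unchanged

delta_stakers = stakers_after / new_supply - staked / supply
delta_unstaked = unstaked_after / new_supply - unstaked / supply
print(f"stakers' ownership change:     {delta_stakers:+.4%}")
print(f"non-stakers' ownership change: {delta_unstaked:+.4%}")
print(f"sum of changes:                {delta_stakers + delta_unstaked:+.4%}")  # ~0
```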


Mert: I have seen people say since it is arbitrarily set, why not just lower it?


Anatoly: Go ahead and propose a change. I personally don't care; I've said it countless times: change it to whatever value you want and persuade the validators to accept it. When these numbers were originally set, the main consideration was not causing a total disaster, and since Cosmos hadn't had any issues with these settings, they were reasonable enough.


7. Is the FireDancer Development Cost Too High?


Mert: So let's go back to the coordination challenge. FireDancer has been heavily promoted recently, and Jerry mentioned that some people are starting to find it a bit overhyped. Jerry also said FireDancer has genuinely slowed progress, because Anza engineers and FireDancer engineers obviously need to align on certain things before moving forward, causing some initial delays. Your view seems to be that once the specification and interfaces are sorted out, iteration speed will increase, right?


Anatoly: Yes, it can basically be divided into three steps: the first step is the design phase, where you need to achieve consensus on what to do; next is the implementation phase, where both teams can work in parallel; and then the testing and validation phase to ensure there are no security or liveness issues. I think the design phase may take more time, but the implementation is done in parallel, so both teams can progress simultaneously, and the testing and audit phase will be faster because the probability of both independent teams releasing the same bug is lower.
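A back-of-envelope way to see the testing claim (the probability is hypothetical): if each independent client ships a given critical bug with probability p, and the implementations really are independent, the chance that both ship the same bug is roughly p squared.

```python
# Back-of-envelope for independent client implementations: assuming each
# team ships a particular critical bug with probability p, independently,
# the chance both ship it (the dangerous overlap) is ~p**2.

p = 0.05  # hypothetical per-client probability of a given critical bug
print(f"one client affected:   {p:.2%}")      # 5.00%
print(f"both clients affected: {p * p:.2%}")  # 0.25%
```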


I think the biggest difference is that Ethereum usually operates like this: we will release a major version that includes all the features intended for that release, focusing on feature sets rather than release dates. Solana's way of working is almost the opposite; it sets a release date, and if your feature is not ready, it gets cut, leading to a much faster release cadence.


In theory, with two teams, the iteration cycle could be accelerated even further. But that requires core engineers to have a sense of urgency, a feeling of "we need to ship this as soon as reasonably possible." In that scenario you can lean on a certain amount of redundant implementation. Culturally, I think both teams have similar backgrounds: they are not academics; they grew up in a technical pressure cooker.


Mert: This leads to the third point I wanted to raise about FireDancer: one could argue you're giving up execution capacity, because you're working on the phone rather than helping develop the L1 or coordinating these client teams. Is that really the optimal use of your time?


Anatoly: The last major change I was involved in with FireDancer was moving the account database index out of memory. At the time I could write a design spec and a small-scale implementation to prove feasibility, but completing the work required a full-time engineer dedicated to the task. I could hand it to Jash to execute, but including the testing and release cycles, the whole process would take a year.


For me, it would be fantastic to join Anza or FireDancer as a pure individual contributor (IC), solely responsible for watching Grafana (a performance monitoring tool) and building things. The reality, though, is that my energy is spread across countless projects. So I find the place I can have the most impact is defining the state of a problem: a scaling problem, a concurrent-leader problem, a censorship problem, an MEV competition problem, and so on. I can propose solutions and discuss them with everyone until everyone agrees my problem analysis is correct and puts forward their own candidate solutions. We iterate on the design together until it takes shape and solidifies.


Then, by the time the urgency I foresaw starts to heat up, people already have a design in place. The hardest part, alignment between the two teams, is already done, and all that's left is implementation and testing. So my role is almost like a principal engineer at a big company. I don't write code; I talk to multiple teams and say, "I notice you're stuck on this, and so are other teams. You need to attack the problem this way so we can all align on it." That is probably where I can have the most significant impact in the core domain.


Mert: Indeed, that is the responsibility of the job, but it's far from easy. So are you saying, "Jack Dorsey can do it, Elon Musk can do it, so I can also build a phone while doing all of this"?


Anatoly: Not exactly. There is an outstanding engineer who is in charge of the mobile side, a close friend of mine for over a decade, who has been involved in building BlackBerry, iPhone, and almost every phone you can think of. And there is a very excellent general manager; these two together manage the entire team, while I am responsible for setting the vision.


I don't think people fully understand this vision, but if you look at Android or iOS, they are actually a cryptographically signed firmware specification that defines the entire platform. Everyone has such a device and ensures its security through trusted boot. When you receive a firmware update, it verifies the correctness of the firmware signature and rewrites the entire phone system.


The most crucial part of this is the cryptographic signature because it could very well be generated by a DAO, which signs the entire firmware and is responsible for its release. Imagine if Apple's cryptographic signature certificate itself were controlled by a DAO; the whole concept of the software platform would be overturned. This is that extremely cool yet somewhat strange "hacker-like" mindset.
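A conceptual sketch of the trusted-boot verification step Anatoly describes, using Ed25519 signatures via the PyNaCl library (illustrative only; real mobile firmware signing uses hardware roots of trust and certificate chains, and the DAO-held key here is hypothetical):

```python
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# The device only installs firmware whose signature verifies against a
# public key baked in at manufacture. Swapping a vendor-held key for a
# DAO-controlled key changes who governs releases, not the verification step.

dao_signing_key = SigningKey.generate()         # held by the DAO, never by devices
device_verify_key = dao_signing_key.verify_key  # baked into the device

firmware_image = b"FIRMWARE v2.0 ..."           # placeholder payload
release = dao_signing_key.sign(firmware_image)  # the DAO signs the release

def install(signed_blob) -> bool:
    try:
        device_verify_key.verify(signed_blob)   # raises BadSignatureError if tampered
        return True                             # proceed to flash the firmware
    except BadSignatureError:
        return False                            # reject the update

print("valid release installed:", install(release))
print("tampered release installed:", install(release.signature + b"malicious payload"))
```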


In addition, my main work is to set such a vision, drive the team to sell more phones, make it a truly meaningful product, and ultimately achieve the milestone of the entire ecosystem being able to control its own firmware. I am not involved in day-to-day execution work.


Regarding Elon Musk, I think his way of working may be like this: he has a grand idea, then finds an engineer who can convincingly tell him, "I can implement the entire project from start to finish." If you can find such a person, then the only thing you need to do is provide funding to accelerate the process. After giving that person funding, they will complete the entire project on their own, and then hire people to expedite their progress.


I try to operate this way. I'm not sure it's the same as Elon's way, but I think it's how you handle multiple projects simultaneously: have a grand vision and a very specific goal, then find someone truly capable of achieving it. If time were unlimited, I could build every part myself; instead, you give them funding and they accelerate to accomplish it all.


Mert: You mentioned that the vision is clear, but the ideal outcome seems to be this: suppose you succeed, sell a lot of these phones, and have a groundbreaking impact on Crypto Twitter and on Apple, and Apple then lowers its fees. In other words, what you did changed the world...


Anatoly: ...indeed, a change occurred: Midwest software companies no longer have to pay Apple a ransom-like 30%, and more efficient software and games can be developed, which is genuinely good.


Mert: But this seems more like a selfless effort rather than a business act, right?


Anatoly: Only when this selfless act can also be successful as a business act does it truly come to fruition. If Apple is to lower their fees, they must feel the competitive pressure from a growing and commercially viable ecosystem. Otherwise, they will just wait until that ecosystem dies out due to lack of commercial viability. Therefore, this ecosystem must find the product-market fit and have the ability to self-sustain.


But this does not mean it won't change the world. If it can make Apple's revenue share smaller, that is the essence of capitalism: when you see a group extracting rent at a 30% rate, and you provide the same service at a 15% rate, you change the market economy, benefiting everyone, including consumers.


Mert: So what you're saying is that you have to believe you can actually compete with two of the world's largest companies, Apple and Google. Why do you think you can?


Anatoly: Clearly, a 30% revenue share is too high, as cases like Tim Sweeney's lawsuits against them show; it has become a pain point for companies that depend on Apple's and Google's distribution channels. Apple and Google collect rent this way, and consumers don't care because the fees are hidden from them: consumers pay a fixed price to the app, and Apple takes 30% of it.


Cracking this is a network problem, and I think the crypto space has an advantage here: cryptography can financialize digital assets and scarcity in a way Web2 cannot. Even so, this can still fail. It wouldn't fail because app developers don't want lower fees; that demand is obviously real. It would fail because we haven't yet figured out how to use the incentives crypto provides to scale networks.


This is the truly tricky problem; it is not a product problem or a business model problem, but a question of how we get users to change their behavior and switch networks.


8. What Does Solana Rely on to Compete with Ethereum?


Mert: Shifting gears, I'd like to discuss some ZK-related issues. One ultimate vision of blockchain seems to be everything being driven by ZK, where you don't need to execute all operations on a full node, just validate proofs. However, Solana doesn't seem to have a similar plan.


Anatoly: If you've read my article on APE (asynchronous program execution), you'll see it significantly changes how validators work. By sharing a common prover, validators can verify state. So you can have multiple validators sharing a trusted execution environment (TEE) or some other trust model, even a ZK-based one. Once APE completes fully asynchronous execution and computes a full snapshot hash, you can actually realize this idea: a rollup entirely based on ZK verification. That doesn't mean you need a rollup, or that rollups are somehow incompatible with Solana.


That perspective, that they're incompatible, is absurd: asynchronous execution lets you compute a snapshot hash under whatever trust model you choose, regardless of the environment, whether you run your own full node, share a TEE environment, or use something else; none of that affects my full node. If I run my full node, you can use any environment to do whatever you want.
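A highly simplified sketch of the snapshot-hash idea (this is not Solana's actual snapshot-hash scheme): any environment that applied the same transactions to the same accounts should derive the same deterministic commitment over the resulting state, regardless of where or how it executed them.

```python
import hashlib

# Simplified state commitment: hash account state in a deterministic order.
# Two environments that executed the same history derive the same hash.

def snapshot_hash(accounts: dict[str, bytes]) -> str:
    h = hashlib.sha256()
    for pubkey in sorted(accounts):  # deterministic ordering is essential
        h.update(pubkey.encode())
        h.update(accounts[pubkey])
    return h.hexdigest()

state_a = {"alice": b"\x01\x00", "bob": b"\x2a"}
state_b = {"bob": b"\x2a", "alice": b"\x01\x00"}  # same state, different order
assert snapshot_hash(state_a) == snapshot_hash(state_b)
print(snapshot_hash(state_a)[:16], "...")
```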


The core question is, what sets Solana apart from Ethereum and ZK? For the network to survive, it must have commercial viability, meaning it needs to be profitable. In my view, the L1's only business model is through priority fees, which is essentially the same as MEV. On the other hand, Rollups create their own MEV, perform independent sequencing, and compete with L1, creating a parasitic competitive environment for L1.


All this competition is fine, but it doesn't belong to the Solana ecosystem. Those Rollups are EVM-based, leveraging the power of open source to accelerate development, while Solana's ecosystem is based on the SVM.


In my view, that is the fundamental difference in how ZK is applied between Solana and Ethereum. Light Protocol is great because, on Solana mainnet, sequencing is done by Solana's validators.


Mert: Let's take a highly theoretical example in completely the opposite direction. Assume bandwidth has been maxed out, latency minimized, and Moore's Law fully exploited; even channel saturation can be solved by adding more hardware. If we really achieve all of that and it's still not enough, what then? Assume crypto really does go mainstream (personally I don't think it will, but let's assume it does). What happens then?


Anatoly: Well, you wouldn't be able to bootstrap another network, because Solana's full nodes would already be saturating the ISPs' bandwidth; no ISP would have spare capacity, since we'd have consumed all the available bandwidth.


Mert: I guess before reaching full saturation, all engineering problems need to be addressed.


Anatoly: It's worth realizing that almost everywhere in the world you can get a 1 Gbps connection; even most phones have that capability, and that is equivalent to processing 250,000 transactions per second (TPS). Even under the current efficiency spec of the Turbine protocol, such a setup would support 250,000 TPS. That is an astronomical number, a ridiculous capacity. Let's saturate that first, and then talk about other limits like Moore's Law.
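For reference, the arithmetic behind "1 Gbps is roughly 250,000 TPS" implies a budget of about 500 bytes per transaction, before accounting for consensus, erasure coding, or retransmission overhead (so treat it as an upper bound):

```python
# 1 Gbps link at 250k TPS -> implied per-transaction byte budget.
link_bps = 1_000_000_000  # 1 Gbps
tps = 250_000

bytes_per_tx = link_bps / 8 / tps
print(f"implied budget: {bytes_per_tx:.0f} bytes per transaction")  # 500
```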


But for now, Solana is still about 250x short of that throughput. We need that 250x improvement before we start worrying about anything else. And 1 Gbps is a 25-year-old standard, very mature technology.


We haven't come close to saturating this capacity yet. Full saturation of 1 Gbps bandwidth, with Turbine fully saturated, is the scenario the FireDancer team has demonstrated in a lab environment. That environment is distributed, but it is fundamentally still a lab; even so, the result is genuinely achievable.


However, to make this technology commercially viable, there are still many problems to solve, and applications need to actually use this capacity. Currently most of Solana's load comes from market activity: markets saturate first, then arbitrage fills the remaining block space. But we have not yet reached what I call "economic saturation."


Mert: In an environment where Ethereum has higher-quality assets and higher transaction volume due to existing liquidity effects, how does Solana compete? Assuming these assets, even stablecoins, have not reached Ethereum's level, what needs to change?


Anatoly: We can start calling Ethereum's assets "Legacy Assets" and then launch all new things on Solana. This meme needs to change, with the new version being that Ethereum is a platform for "legacy assets," while Solana is the birthplace of new things.


"Original Article Link"


Welcome to join the official BlockBeats community:

Telegram subscription group: https://t.me/theblockbeats

Telegram chat group: https://t.me/BlockBeats_App

Official Twitter account: https://twitter.com/BlockBeatsAsia
